entry: http://arxiv.org/abs/2307.00943v1 (published 2023-07-03 11:33:55)
title: A revised graduated cylindrical shell model and its application to a prominence eruption
authors: Qing-Min Zhang, Zhen-Yong Hou, Xian-Yong Bai
categories: astro-ph.SR (primary: astro-ph.SR)
^1Key Laboratory of Dark Matter and Space Astronomy, Purple Mountain Observatory, Nanjing 210023, China; [email protected]
^2Yunnan Key Laboratory of Solar physics and Space Science, Kunming 650216, China
^3School of Earth and Space Sciences, Peking University, Beijing 100871, China
^4National Astronomical Observatories, Chinese Academy of Sciences, Beijing 100101, China
^5University of Chinese Academy of Sciences, Beijing 100049, China
^6Key Laboratory of Solar Activity and Space Weather, National Space Science Center, Chinese Academy of Sciences, Beijing 100190, China
In this paper, the well-known graduated cylindrical shell (GCS) model is slightly revised by introducing longitudinal and latitudinal deflections of prominences originating from active regions (ARs).
Subsequently, it is applied to the three-dimensional (3D) reconstruction of an eruptive prominence in AR 13110,
which produced an M1.7 class flare and a fast coronal mass ejection (CME) on 2022 September 23.
It is revealed that the prominence undergoes acceleration from ∼246 to ∼708 km s^-1.
Meanwhile, the prominence experiences a southward deflection of 15°±1° without longitudinal deflection, suggesting that the prominence erupts non-radially.
Southward deflections of the prominence and associated CME are consistent, validating the results of fitting using the revised GCS model.
Besides, the true speed of the CME is calculated to be 1637±15 km s^-1, which is ∼2.3 times that of the prominence.
This is indicative of continuing acceleration of the prominence, during which the flare magnetic reconnection beneath it reaches its maximum.
Hence, the reconstruction using the revised GCS model could successfully track a prominence in its early phase of evolution, including acceleration and deflection.
Zhang et al.
A revised GCS model
A revised graduated cylindrical shell model and its application to a prominence eruption
Qing-Min Zhang1,2, Zhen-Yong Hou3, Xian-Yong Bai4,5,6
August 1, 2023
========================================================================================
§ INTRODUCTION
Solar flares and coronal mass ejections (CMEs) are the most powerful activities in the solar atmosphere, which have drastic and profound influences on the heliosphere ().
The primary origins of flares and CMEs are believed to be impulsive eruptions of solar prominences or filaments ().
Prominences observed in Hα or extreme-ultraviolet (EUV) wavelengths usually show helical structures (),
and fast rotations or untwisting motions are frequently detected during eruptions.
Before loss of equilibrium, the gravity of a prominence is balanced by the upward tension force of magnetic dips within a sheared arcade or a flux rope ().
A magnetic flux rope comprises a bundle of twisted field lines wrapping around a common axis ().
Flux ropes play a central role in driving flares and CMEs ().
Sometimes, they could be heated up to ∼10 MK before or during eruptions and are termed hot channels (),
which are observed only in the 94 Å and 131 Å passbands of the Atmospheric Imaging Assembly (AIA; ) on board the Solar Dynamics Observatory (SDO) spacecraft.
Flux ropes propagate radially in most cases. However, a fraction of them undergo deflections and propagate non-radially ().
The inclination angle with respect to the normal direction lies in the range of 15°-70°.
In the typical three-part structure of CMEs, the dark cavity and bright core are considered to be a flux rope and the embedded prominence ().
The three-dimensional (3D) shape and direction of a CME are essential for estimating its arrival time and geo-effectiveness.
The well-known cone model, resembling an ice-cream cone, was proposed and applied to investigate the evolution of the morphology and kinematics of halo CMEs ().
This model assumes a constant angular width and a constant linear speed during propagation in radial direction <cit.>.
Considering that a fraction of prominences and the CMEs they drive propagate non-radially, <cit.> put forward a revised cone model and applied it to two prominence eruptions.
The tip of the cone is located at the source region of CME. The model is characterized by four parameters: the length (r) and angular width (ω) of the cone,
and two angles (ϕ_1 and θ_1) denoting the deflections in longitudinal and latitudinal directions.
Using this model, <cit.> satisfactorily tracked the 3D evolution of a halo CME as far as ∼12 R_⊙ on 2011 June 21.
<cit.> proposed the graduated cylindrical shell (GCS) model to perform 3D reconstructions of flux rope-like CMEs ().
The flux rope in their model looks like a croissant, which has two identical legs with a length of h and angular separation of 2α ().
The legs are connected by a circulus with varying cross sections so that the aspect ratio κ remains constant.
Another angle, γ, represents the tilt angle of the polarity inversion line (PIL) of the source region, which is located at longitude ϕ and latitude θ.
Besides, electron number density (N_e) is considered to synthesize white-light (WL) images observed by coronagraphs.
Thanks to multiperspective observations from the Large Angle and Spectrometric Coronagraph (LASCO; ) on board the SOHO spacecraft
and WL coronagraphs (COR1, COR2) on board the twin Solar TErrestrial RElations Observatory (STEREO; ) spacecraft,
the GCS model has been widely used to perform 3D reconstructions of CMEs ().
<cit.> developed an analytic 3D model for flux rope-like CMEs that incorporates all major deformations during their propagation,
such as deflection, rotation, “pancaking", front flattening, and skewing.
The 3D morphologies of eruptive prominences could be obtained using the triangulation technique
when simultaneous observations from two or three perspectives are available ().
Deflection, kinking, and rotation of the prominences are found based on the 3D reconstruction.
Until now, the GCS model has rarely been applied to the reconstruction of eruptive prominences, especially those propagating non-radially.
In this paper, the GCS model is slightly modified and applied to reconstruct the shapes of an eruptive prominence in NOAA active region (AR) 13110 (N16E84),
which produced a GOES M1.7 class flare and a fast CME on 2022 September 23.
The model is described in Section <ref>. The results of 3D reconstruction are presented in Section <ref>.
A brief summary and discussions are given in Section <ref>.
§ REVISED GCS MODEL
Similar to the revised cone model, the GCS model is also modified in two aspects:
Firstly, the tip of the two legs is located at the source region of the eruptive prominence rather than the solar center.
This applies to flux ropes originating from active regions, instead of quiescent prominences with much longer extensions ().
It should be emphasized that the footpoints of a flux rope have separation and are not strictly close to each other <cit.>.
Moreover, the footpoints may experience long-distance migration during eruption <cit.>. In this respect, the assumption that the footpoints of a flux rope are cospatial is relatively strong.
Secondly, the GCS symmetry axis passing through the circulus has inclination angles of ϕ_1 and θ_1 with respect to the local longitude and latitude, respectively.
The parameters h, α, κ, γ, ϕ, and θ have the same meanings ().
γ=0° and γ=90° indicate that the PIL is parallel and perpendicular to the longitude, respectively.
Since the traditional GCS model reduces to the ice-cream cone model when α=0 (), the revised GCS model also reduces to the revised cone model when α=0 <cit.>.
The transform between the heliocentric coordinate system (HCS; X_h, Y_h, Z_h) and local coordinate system (LCS; X_l, Y_l, Z_l) is ():
(x_h, y_h, z_h)^T = M_2 (x_l, y_l, z_l)^T + R_⊙ (cosθcosϕ, cosθsinϕ, sinθ)^T,
where
M_2 = [ sinθcosϕ, -sinϕ, cosϕcosθ;
        sinθsinϕ, cosϕ, sinϕcosθ;
        -cosθ, 0, sinθ ].
The transform between LCS and GCS flux-rope coordinate system (FCS; X_f, Y_f, Z_f) is:
(x_l, y_l, z_l)^T = M_1 (x_f, y_f, z_f)^T,
where
M_1 = [ cosθ_1cosϕ_1, -sinϕ_1, cosϕ_1sinθ_1;
        cosθ_1sinϕ_1, cosϕ_1, sinϕ_1sinθ_1;
        -sinθ_1, 0, cosθ_1 ].
To reconstruct the shape of a flux rope in the revised model, observations from multiple viewpoints are needed as far as possible.
In Figure <ref>(a), the relative positions of Earth and two artificial satellites (SAT-1 and SAT-2) are denoted with green, maroon, and purple circles, respectively.
The separation angles of the artificial satellites with respect to the Sun-Earth connection are denoted by ξ_1 and ξ_2, respectively.
Note that SAT-1 and SAT-2 could be the ahead STEREO (hereafter STA) and behind STEREO (hereafter STB),
or Extreme Ultraviolet Imager (EUI; ) on board Solar Orbiter (SolO; ),
or Wide-Field Imager for Solar Probe Plus (WISPR; ) on board Parker Solar Probe (PSP; ).
Note that both SolO and PSP are much closer to the Sun than STEREO.
Consequently, the transform between the SAT-1 coordinate system (X_s1, Y_s1, Z_s1) and HCS is:
(x_s1, y_s1, z_s1)^T = M_s1 (x_h, y_h, z_h)^T,
where
M_s1 = [ cosξ_1, sinξ_1, 0;
         -sinξ_1, cosξ_1, 0;
         0, 0, 1 ].
Similarly, the transform between the SAT-2 coordinate system (X_s2, Y_s2, Z_s2) and HCS is:
(x_s2, y_s2, z_s2)^T = M_s2 (x_h, y_h, z_h)^T,
where
M_s2 = [ cosξ_2, sinξ_2, 0;
         -sinξ_2, cosξ_2, 0;
         0, 0, 1 ].
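To make these rotations concrete, a minimal Python sketch is given below. The function names (m1, m2, m_sat, fcs_to_sat) and the example point are ours; the matrices simply transcribe the expressions above, with all angles in radians and distances in units of R_⊙.

import numpy as np

def m2(phi, theta):
    """LCS -> HCS rotation."""
    return np.array([
        [np.sin(theta) * np.cos(phi), -np.sin(phi), np.cos(phi) * np.cos(theta)],
        [np.sin(theta) * np.sin(phi),  np.cos(phi), np.sin(phi) * np.cos(theta)],
        [-np.cos(theta),                0.0,         np.sin(theta)],
    ])

def m1(phi1, theta1):
    """FCS -> LCS rotation."""
    return np.array([
        [np.cos(theta1) * np.cos(phi1), -np.sin(phi1), np.cos(phi1) * np.sin(theta1)],
        [np.cos(theta1) * np.sin(phi1),  np.cos(phi1), np.sin(phi1) * np.sin(theta1)],
        [-np.sin(theta1),                 0.0,          np.cos(theta1)],
    ])

def m_sat(xi):
    """HCS -> satellite frame for a separation angle xi from the Sun-Earth line."""
    return np.array([
        [ np.cos(xi), np.sin(xi), 0.0],
        [-np.sin(xi), np.cos(xi), 0.0],
        [ 0.0,        0.0,        1.0],
    ])

def fcs_to_sat(x_f, phi, theta, phi1, theta1, xi):
    """Map a point from the flux-rope frame (FCS) to a satellite frame (units of R_sun)."""
    origin = np.array([np.cos(theta) * np.cos(phi),
                       np.cos(theta) * np.sin(phi),
                       np.sin(theta)])                      # source region on the solar surface
    x_h = m2(phi, theta) @ (m1(phi1, theta1) @ np.asarray(x_f)) + origin
    return m_sat(xi) @ x_h

# Example: a test point 0.5 R_sun along the Z_f axis, viewed from SAT-2 at xi_2 = 90 deg.
print(fcs_to_sat([0.0, 0.0, 0.5], phi=np.radians(75), theta=np.radians(30),
                 phi1=np.radians(0), theta1=np.radians(16), xi=np.radians(90)))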
To illustrate the revised GCS model more clearly, four artificial flux ropes are created, with h=200, α=35°, κ=0.1045, and δ=arcsin(κ)=6°.
The source region is characterized by ϕ=75°, θ=30°, and γ=45°.
The differences between the four flux ropes lie in the inclination angles (ϕ_1 and θ_1) of the symmetry axis, which are listed in Table <ref>.
In Case1, the direction of the flux rope axis is exactly radial and there is no deflection. In Case2 (Case3), the flux rope experiences a longitudinal (latitudinal) deflection.
In Case4, there are deflections in both directions. Taking ξ_1=-15° and ξ_2=90° (see Figure <ref>(a)),
Figure <ref> shows different views of four flux ropes from Earth (first column), SAT-1 (second column), SAT-2 (third column), and solar North Pole (last column).
SAT-1 has a smaller separation angle with Earth, so that the morphologies of flux ropes from these two perspectives have slight differences.
Since SAT-2 is orthogonal to Earth, the morphologies of flux ropes from these two perspectives represent face-on and edge-on views, respectively.
In the next section, the revised GCS model will be applied to an eruptive prominence on 2022 September 23 without considering the electron number density.
§ APPLICATION TO A PROMINENCE ERUPTION
§.§ Flare and CME
The event occurred in AR 13110, accompanied by an M1.7 flare and a fast CME. Figure <ref>(a) shows SXR light curves of the flare in 1-8 Å (red line) and 0.5-4 Å (purple line).
The SXR emissions increase from 17:48:00 UT, peak at 18:10:00 UT, and decrease slowly until ∼18:50:00 UT.
Time evolutions of the prominence eruption and flare are illustrated by six 131 Å images observed by SDO/AIA in Figure <ref> and the associated online movie (anim131.mp4).
Panel (a) shows AR 13110 with weak brightening before eruption. The prominence shows up and stands out after ∼17:46:00 UT (panel (b)).
It continues to rise and expands in height, during which the flare loops brighten significantly (panels (c-d)).
The prominence accelerates and the apex escapes the field of view (FOV) of AIA, leaving behind the hot post-flare loops that cool down gradually (panels (e-f)).
It is noticed that the footpoints of the prominence remain in the AR without considerable separation.
The morphological evolution of the prominence is similar in other EUV and 1600 Å wavelengths of AIA, indicating its multithermal nature ().
In Figure <ref>, the top panels show running-difference WL images of the related CME observed by LASCO/C2.
The CME[www.sidc.be/cactus/] first appears at 18:12:00 UT and propagates eastward with an angular width of ∼50°
and at a speed of ∼1644 km s^-1 (see Table <ref>). It is worth mentioning that the angular width is measured for the CME itself.
Since an interplanetary shock wave was driven by the CME (Figure <ref>(b-c)), the recorded angular width of the CME reaches 189°,
which is much wider than the CME itself [cdaw.gsfc.nasa.gov/CME_list/UNIVERSAL_ver1/2022_09/univ2022_09.html].
In Figure <ref>(b), the green, maroon, and purple circles represent the positions of Earth, STA, and STB on 2022 September 23.
The twin satellites had separation angles of -17.9° and 12.9° with respect to the Sun-Earth connection, although STB has been out of operation since 2016.
The middle and bottom panels of Figure <ref> show running-difference images of STA/COR2 during 18:23-19:38 UT.
The CME enters the FOV of COR2 at 18:23:30 UT and propagates eastward with an angular width of ∼64° (see Table <ref>).
The height evolution of the CME leading edge in the FOV of COR2 is plotted with green diamonds in Figure <ref>(b).
A linear fitting results in an apparent speed of ∼1482 km s^-1.
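To illustrate how such an apparent speed is obtained, a short Python sketch of the linear fit is given below; the height-time samples are placeholders, not the measured COR2 values.

import numpy as np

R_SUN_KM = 6.957e5  # solar radius in km

# Placeholder leading-edge heights (in solar radii) versus time (seconds after 18:23:30 UT);
# substitute the measured COR2 heights to reproduce the fit quoted in the text.
t_sec = np.array([0.0, 900.0, 1800.0, 2700.0, 3600.0])
h_rsun = np.array([4.0, 5.9, 7.8, 9.7, 11.6])

# Linear fit h(t) = v t + h0; the slope v is the plane-of-sky (apparent) speed.
v_km_s, h0_km = np.polyfit(t_sec, h_rsun * R_SUN_KM, 1)
print(f"apparent speed ~ {v_km_s:.0f} km/s")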
§.§ 3D shapes of the prominence
The eruptive prominence was not only observed by SDO/AIA as shown in Figure <ref>,
but also observed by the Sun Watcher using Active Pixel System detector and image processing (SWAP; ) in 174 Å
on board the PROBA 2 spacecraft with a larger FOV but a lower resolution than AIA,
and the Solar Upper Transition Region Imager (SUTRI; ) onboard the Space Advanced Technology demonstration satellite (SATech-01).
SUTRI takes full-disk solar images at the Ne vii 465 Å line with a FOV of ∼41.6'×41.6', a spatial resolution of ∼8″, and a normal cadence of 30 s.
The Ne vii line is formed at ∼0.5 MK in the upper transition region ().
Meanwhile, the Extreme-ultraviolet Imager (EUVI; ) on board STA detected the prominence in 195 and 304 Å from another perspective (Figure <ref>(b)).
In Figure <ref>, the top panels show the prominence simultaneously observed by AIA 304 Å (base-difference image), SWAP 174 Å (base-difference image),
and EUVI 304 Å (original image) passbands around 17:55:40 UT.
Due to the low cadence (10 minutes) of EUVI 304 Å passband, this is the only time when the prominence is entirely visible in all instruments.
Owing to the smaller FOV of AIA than SWAP and EUVI, the whole prominence was captured by SWAP and EUVI,
while the outermost part (i.e., the apex) of the prominence was missed by AIA. It is obvious that the two legs are much brighter than the top of the prominence.
In panel (c1), the prominence presents a clear helical structure, implying that the magnetic field supporting the prominence is most probably a flux rope.
The bottom panels of Figure <ref> show the same images, which are superposed with projections of the reconstructed flux rope (atrovirens, magenta, and blue dots) using the revised GCS model.
The 3D reconstruction is performed by repeatedly adjusting the free parameters described in Section <ref>, while the source region location (ϕ=-84°, θ=15°) is fixed.
The best-fit model is subjectively judged when projections of the flux rope nicely match the prominence in EUV images.
From Figure <ref>(a2-c2), it is revealed that the fitting of the prominence using the revised GCS model is satisfactory.
The derived parameters are: h=150, α=45°, κ=0.087 (δ=5°), ϕ_1=0°, θ_1=16°, and γ=20°.
The height of the leading edge is h_LE=3966, the edge-on width of the flux rope is ω_EO=2δ=10°,
and the face-on angular width is ω_FO=2(α+δ)=100°.
The flux rope axis deviates from the local vertical direction by 16° and the heliocentric distance (h_HC) of the leading edge reaches ∼1.4 R_⊙.
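For clarity, the quoted widths follow directly from the fitted aspect ratio and half-angle through δ = arcsin(κ):
δ = arcsin(0.087) ≈ 5°,  ω_EO = 2δ ≈ 10°,  ω_FO = 2(α+δ) = 2(45°+5°) = 100°.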
Although there is only one instant with simultaneous observations of the prominence from multiple perspectives,
3D reconstruction could still be conducted using observations of telescopes along the Sun-Earth connection ().
In Figure <ref>, the top panels show the prominence observed by AIA 304 Å and SWAP 174 Å around 17:57:27 UT.
The prominence was fully visible in SWAP 174 Å image at 17:57:25 UT, but was partly visible in AIA 304 Å image at 17:57:29 UT.
The bottom panels show the same images overlaid with projections of reconstructed flux ropes (atrovirens and magenta dots).
Consistency between the shapes of prominence and flux ropes indicates that the fittings are still gratifying. The derived parameters are drawn in Figure <ref>(c-d).
Before 17:54:00 UT, the prominence rose gradually and was entirely recorded in AIA 304 Å and SUTRI 465 Å passbands.
Figure <ref> shows 304 Å images (a1-a5) and 465 Å images (b1-b5) overlaid with projections of the reconstructed flux ropes (atrovirens and blue dots) during 17:49-17:53 UT.
The prominence looks like an ear and the two legs are much clearer than the top.
The reconstructed flux ropes coincide with the prominence much better at the legs than the top due to its irregular and asymmetric shape. The derived parameters are drawn in Figure <ref>(c-d).
Linear fittings of h_LE are performed separately during 17:49:17-17:52:17 UT and 17:53:30-17:57:30 UT, giving rise to true speeds of ∼246 and ∼708 km s^-1 for the erupting prominence.
Accordingly, the prominence was undergoing acceleration during its early phase of eruption (17:49-17:57 UT).
In Figure <ref>(b), time variation of h_HC is plotted with blue circles, which has the same trend as h_LE.
The value of γ increases from 0° to 30°, which is probably indicative of counterclockwise rotation of the prominence axis during eruption ().
The edge-on width ω_EO keeps a constant value of ∼10°.
The face-on width ω_FO decreases from ∼162° to a minimum of ∼100° around 17:53:45 UT and then increases to ∼104° around 17:57:25 UT.
The inclination angle θ_1 increases slightly from 14° to 16°, suggesting a southward deflection of the prominence.
The values of ϕ_1 remain 0°, meaning that there is no longitudinal deflection.
In Table <ref>, the central position angle (CPA) of the CME is 85°-88°, indicating a southward deflection of the CME by 11°-14°.
In this regard, deflections of the prominence and related CME are accordant, which justifies the results of fitting using the revised GCS model.
Furthermore, the true speeds (V_3D) of the CME are estimated to be 1653 and 1622 km s^-1 using the apparent speeds in the FOVs of LASCO/C2 and STA/COR2, respectively, which are very close to each other.
It is noted that the CME speed (1637±15 km s^-1) is ∼2.3 times that of the prominence, implying continuing acceleration of the prominence between 17:57 UT and 18:23 UT.
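The quoted CME speed and its uncertainty are consistent with the mean and half-spread of the two independent estimates, and the speed ratio follows directly:
V_3D = (1653+1622)/2 ≈ 1637 km s^-1,  (1653-1622)/2 ≈ 15 km s^-1,  1637/708 ≈ 2.3.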
§ SUMMARY AND DISCUSSION
In this paper, the GCS model is slightly revised by introducing longitudinal and latitudinal deflections of prominences originating from ARs.
Subsequently, it is applied to the 3D reconstruction of an eruptive prominence in AR 13110, which produced an M1.7 class flare and a fast CME on 2022 September 23.
It is found that the prominence undergoes acceleration from ∼246 to ∼708 km s^-1.
Meanwhile, the prominence experiences a southward deflection by 14°-16° without longitudinal deflection, suggesting that the prominence erupts non-radially.
Southward deflections of the prominence and associated CME are consistent, validating the results of fitting using the revised GCS model.
Besides, the true speed of the CME is calculated to be 1637±15 km s^-1, which is ∼2.3 times that of the prominence.
This is indicative of continuing acceleration of the prominence, during which the flare magnetic reconnection beneath it reaches its maximum.
Hence, the reconstruction using the revised GCS model could successfully track a prominence in its early phase of evolution until ∼1.5 R_⊙, including acceleration and deflection.
Morphological reconstructions of prominences/filaments are abundant using stereoscopic observations in UV, EUV, and Hα passbands from two or three viewpoints.
The triangulation method has been widely used to perform reconstructions of both quiescent and AR prominences ().
However, this method requires simultaneous images from two perspectives.
In the current study, there is only one moment (∼17:55:45 UT) of observations from SDO/AIA and STA/EUVI when the triangulation method is usable (Figure <ref>).
On the contrary, the revised GCS model works even if observations are available from only a single perspective (Figures <ref>,<ref>),
although more perspectives impose better constraints and yield lower uncertainties.
This is particularly advantageous for the reconstruction of hot channels, since routine observations in hot emission lines (such as 94 and 131 Å) with STEREO and SolO/EUI are still unavailable.
Calculations of the thermal energies of hot channels using this model will be the topic of our next paper.
Of course, there are limitations of the revised GCS model. Firstly, the model is applicable to AR prominences whose footpoints are close to each other,
instead of quiescent prominences with much larger sizes and extensions. Secondly, the model is applicable to coherent, loop-like prominences, rather than those presenting irregular and ragged shapes.
Lastly, 3D reconstructions of prominences are severely constrained by the FOVs of solar telescopes working at UV, EUV, and Hα wavelengths,
which is in contrast to the reconstructions of CMEs observed by coronagraphs with much larger FOVs.
In Figure <ref>(b), the heliocentric distance of the flux rope leading edge reaches ∼1.5 R_⊙ at 17:57:25 UT, which is still blocked by the occulting disk of LASCO/C2.
With the approach of the peak year of the twenty-fifth solar cycle, large-scale solar eruptions are becoming more frequent, with a sustained impact on the near-Earth space environment.
Precise reconstructions of the shape and direction of eruptive prominences and the related CMEs will undoubtedly improve our ability to forecast space weather.
In the future, more case studies and statistical analysis are worthwhile using stereoscopic observations from spaceborne and ground-based telescopes,
such as SDO/AIA, STEREO/EUVI, SolO/EUI, SWAP, SUTRI, the Chinese Hα Solar Explorer (CHASE; ),
and the New Vacuum Solar Telescope (NVST; ).
The authors thank Profs. Hui Tian and Hongqiang Song for helpful discussions.
SDO is a mission of NASA's Living With a Star Program. AIA data are courtesy of the NASA/SDO science teams.
SUTRI is a collaborative project conducted by the National Astronomical Observatories of CAS, Peking University, Tongji University,
Xi'an Institute of Optics and Precision Mechanics of CAS and the Innovation Academy for Microsatellites of CAS.
This work is supported by the National Key R&D Program of China 2022YFF0503003 (2022YFF0503000), 2021YFA1600500 (2021YFA1600502),
and Yunnan Key Laboratory of Solar Physics and Space Science under the number YNSPCC202206.
99
[Amari et al.(2003)]am03 Amari, T., Luciani, J. F., Aly, J. J., et al. 2003, , 585, 1073. doi:10.1086/345501
[Aulanier et al.(2010)]au10 Aulanier, G., Török, T., Démoulin, P., et al. 2010, , 708, 314. doi:10.1088/0004-637X/708/1/314
[Bai et al.(2023)]bai23 Bai, X., Tian, H., Deng, Y., et al. 2023, Research in Astronomy and Astrophysics, 23, 065014. doi:10.1088/1674-4527/accc74
[Berghmans et al.(2006)]ber06 Berghmans, D., Hochedez, J. F., Defise, J. M., et al. 2006, Advances in Space Research, 38, 1807. doi:10.1016/j.asr.2005.03.070
[Bi et al.(2013)]bi13 Bi, Y., Jiang, Y., Yang, J., et al. 2013, , 773, 162. doi:10.1088/0004-637X/773/2/162
[Brueckner et al.(1995)]bru95 Brueckner, G. E., Howard, R. A., Koomen, M. J., et al. 1995, , 162, 357. doi:10.1007/BF00733434
[Chen(2011)]chen11 Chen, P. F. 2011, Living Reviews in Solar Physics, 8, 1. doi:10.12942/lrsp-2011-1
[Chen et al.(2018)]chen18 Chen, Y., Tian, H., Su, Y., et al. 2018, , 856, 21. doi:10.3847/1538-4357/aaaf68
[Cheng et al.(2013)]cx13 Cheng, X., Zhang, J., Ding, M. D., et al. 2013, , 763, 43
[Cheng et al.(2014)]cx14 Cheng, X., Ding, M. D., Guo, Y., et al. 2014, , 780, 28
[Dai et al.(2021)]dai21 Dai, J., Zhang, Q., Zhang, Y., et al. 2021, , 923, 74. doi:10.3847/1538-4357/ac2d97
[Fan & Gibson(2003)]fan03 Fan, Y. & Gibson, S. E. 2003, , 589, L105. doi:10.1086/375834
[Fox et al.(2016)]fox16 Fox, N. J., Velli, M. C., Bale, S. D., et al. 2016, , 204, 7. doi:10.1007/s11214-015-0211-6
[Gou et al.(2023)]gou23 Gou, T., Liu, R., Veronig, A. M., et al. 2023, Nature Astronomy. doi:10.1038/s41550-023-01966-2
[Guo et al.(2019)]guo19 Guo, Y., Xu, Y., Ding, M. D., et al. 2019, , 884, L1. doi:10.3847/2041-8213/ab4514
[Guo et al.(2022)]guo22 Guo, J. H., Ni, Y. W., Zhou, Y. H., et al. 2022, , 667, A89. doi:10.1051/0004-6361/202244253
[Green et al.(2007)]gre07 Green, L. M., Kliem, B., Török, T., et al. 2007, , 246, 365. doi:10.1007/s11207-007-9061-z
[Hess et al.(2020)]hess20 Hess, P., Rouillard, A. P., Kouloumvakos, A., et al. 2020, , 246, 25. doi:10.3847/1538-4365/ab4ff0
[Illing & Hundhausen(1985)]ih85 Illing, R. M. E. & Hundhausen, A. J. 1985, , 90, 275. doi:10.1029/JA090iA01p00275
[Inoue et al.(2018)]in18 Inoue, S., Kusano, K., Büchner, J., et al. 2018, Nature Communications, 9, 174. doi:10.1038/s41467-017-02616-8
[Isavnin(2016)]is16 Isavnin, A. 2016, , 833, 267. doi:10.3847/1538-4357/833/2/267
[Janvier et al.(2015)]jan15 Janvier, M., Aulanier, G., & Démoulin, P. 2015, , 290, 3425. doi:10.1007/s11207-015-0710-3
[Jiang et al.(2021)]jia21 Jiang, C., Feng, X., Liu, R., et al. 2021, Nature Astronomy, 5, 1126. doi:10.1038/s41550-021-01414-z
[Kaiser et al.(2008)]kai08 Kaiser, M. L., Kucera, T. A., Davila, J. M., et al. 2008, , 136, 5. doi:10.1007/s11214-007-9277-0
[Kumar et al.(2012)]kum12 Kumar, P., Cho, K.-S., Bong, S.-C., et al. 2012, , 746, 67. doi:10.1088/0004-637X/746/1/67
[Lemen et al.(2012)]lem12 Lemen, J. R., Title, A. M., Akin, D. J., et al. 2012, , 275, 17. doi:10.1007/s11207-011-9776-8
[Liewer et al.(2015)]lie15 Liewer, P., Panasenco, O., Vourlidas, A., et al. 2015, , 290, 3343. doi:10.1007/s11207-015-0794-9
[Li et al.(2011)]li11 Li, T., Zhang, J., Zhang, Y., et al. 2011, , 739, 43. doi:10.1088/0004-637X/739/1/43
[Li et al.(2022a)]li22a Li, L. P., Song, H., Peter, H., et al. 2022, , 941, L1
[Li et al.(2022b)]li22b Li, C., Fang, C., Li, Z., et al. 2022, Science China Physics, Mechanics, and Astronomy, 65, 289602. doi:10.1007/s11433-022-1893-3
[Liu et al.(2012)]liu12 Liu, W., Berger, T. E., & Low, B. C. 2012, , 745, L21. doi:10.1088/2041-8205/745/2/L21
[Liu et al.(2014)]liu14 Liu, Z., Xu, J., Gu, B.-Z., et al. 2014, Research in Astronomy and Astrophysics, 14, 705-718. doi:10.1088/1674-4527/14/6/009
[Liu et al.(2022)]liu22 Liu, Y., Su, Y., Liu, R., et al. 2022, , 941, 83. doi:10.3847/1538-4357/aca08c
[Lu et al.(2017)]lu17 Lu, L., Inhester, B., Feng, L., et al. 2017, , 835, 188. doi:10.3847/1538-4357/835/2/188
[Luna & Moreno-Insertis(2021)]luna21 Luna, M. & Moreno-Insertis, F. 2021, , 912, 75. doi:10.3847/1538-4357/abec46
[Mei et al.(2020)]mei20 Mei, Z. X., Keppens, R., Cai, Q. W., et al. 2020, , 493, 4816. doi:10.1093/mnras/staa555
[Michałek et al.(2003)]mich03 Michałek, G., Gopalswamy, N., & Yashiro, S. 2003, , 584, 472. doi:10.1086/345526
[Mierla et al.(2009)]mi09 Mierla, M., Inhester, B., Marqué, C., et al. 2009, , 259, 123. doi:10.1007/s11207-009-9416-8
[Mitra & Joshi(2019)]mj19 Mitra, P. K. & Joshi, B. 2019, , 884, 46. doi:10.3847/1538-4357/ab3a96
[Möstl et al.(2014)]mo14 Möstl, C., Amla, K., Hall, J. R., et al. 2014, , 787, 119. doi:10.1088/0004-637X/787/2/119
[Müller et al.(2020)]mu20 Müller, D., St. Cyr, O. C., Zouganelis, I., et al. 2020, , 642, A1. doi:10.1051/0004-6361/202038467
[Qiu et al.(2004)]qiu04 Qiu, J., Wang, H., Cheng, C. Z., et al. 2004, , 604, 900. doi:10.1086/382122
[Reames(2013)]re13 Reames, D. V. 2013, , 175, 53. doi:10.1007/s11214-013-9958-9
[Rochus et al.(2020)]roch20 Rochus, P., Auchère, F., Berghmans, D., et al. 2020, , 642, A8. doi:10.1051/0004-6361/201936663
[Roussev et al.(2003)]rou03 Roussev, I. I., Gombosi, T. I., Sokolov, I. V., et al. 2003, , 595, L57. doi:10.1086/378878
[Sahade et al.(2023)]sa23 Sahade, A., Vourildas, A., Balmaceda, L., et al. 2023, arXiv:2303.15998. doi:10.48550/arXiv.2303.15998
[Shen et al.(2019)]shen19 Shen, Y., Chen, P. F., Liu, Y. D., et al. 2019, , 873, 22. doi:10.3847/1538-4357/ab01dd
[Shibata & Magara(2011)]sm11 Shibata, K. & Magara, T. 2011, Living Reviews in Solar Physics, 8, 6. doi:10.12942/lrsp-2011-6
[Song et al.(2023)]song23 Song, H., Zhang, J., Li, L., et al. 2023, , 942, 19. doi:10.3847/1538-4357/aca6e0
[Thernisien et al.(2006)]the06 Thernisien, A. F. R., Howard, R. A., & Vourlidas, A. 2006, , 652, 763
[Thernisien et al.(2009)]the09 Thernisien, A., Vourlidas, A., & Howard, R. A. 2009, , 256, 111
[Thernisien(2011)]the11 Thernisien, A. 2011, , 194, 33
[Thompson(2009)]tho09 Thompson, W. T. 2009, icarus, 200, 351. doi:10.1016/j.icarus.2008.12.011
[Tian(2017)]tian17 Tian, H. 2017, Research in Astronomy and Astrophysics, 17, 110. doi:10.1088/1674-4527/17/11/110
[Titov & Démoulin(1999)]td99 Titov, V. S. & Démoulin, P. 1999, , 351, 707
[Vourlidas et al.(2013)]vour13 Vourlidas, A., Lynch, B. J., Howard, R. A., et al. 2013, , 284, 179. doi:10.1007/s11207-012-0084-8
[Vourlidas et al.(2016)]vour16 Vourlidas, A., Howard, R. A., Plunkett, S. P., et al. 2016, , 204, 83. doi:10.1007/s11214-014-0114-y
[Wang et al.(2015)]wang15 Wang, H., Cao, W., Liu, C., et al. 2015, Nature Communications, 6, 7008. doi:10.1038/ncomms8008
[Wuelser et al.(2004)]wu04 Wuelser, J.-P., Lemen, J. R., Tarbell, T. D., et al. 2004, , 5171, 111. doi:10.1117/12.506877
[Xie et al.(2004)]xie04 Xie, H., Ofman, L., & Lawrence, G. 2004, Journal of Geophysical Research (Space Physics), 109, A03109. doi:10.1029/2003JA010226
[Yan et al.(2014)]yan14 Yan, X. L., Xue, Z. K., Liu, J. H., et al. 2014, , 797, 52. doi:10.1088/0004-637X/797/1/52
[Zhang et al.(2012)]zj12 Zhang, J., Cheng, X., & Ding, M.-D. 2012, Nature Communications, 3, 747. doi:10.1038/ncomms1753
[Zhang et al.(2010)]zqm10 Zhang, Q.-M., Guo, Y., Chen, P.-F., et al. 2010, Research in Astronomy and Astrophysics, 10, 461. doi:10.1088/1674-4527/10/5/006
[Zhang(2021)]zqm2021 Zhang, Q. M. 2021, , 653, L2. doi:10.1051/0004-6361/202141982
[Zhang(2022)]zqm2022 Zhang, Q. M. 2022, , 660, A144. doi:10.1051/0004-6361/202142942
[Zhang et al.(2022a)]zqm22a Zhang, Q. M., Chen, J. L., Li, S. T., et al. 2022a, , 297, 18. doi:10.1007/s11207-022-01952-3
[Zhang et al.(2022b)]zqm22b Zhang, Q., Li, C., Li, D., et al. 2022b, , 937, L21. doi:10.3847/2041-8213/ac8e01
[Zhou et al.(2018)]zhou18 Zhou, Y.-H., Xia, C., Keppens, R., et al. 2018, , 856, 179. doi:10.3847/1538-4357/aab614
[Zhou et al.(2020)]zhou20 Zhou, Z., Liu, R., Cheng, X., et al. 2020, , 891, 180. doi:10.3847/1538-4357/ab7666
[Zhou et al.(2023)]zhou23 Zhou, Y., Ji, H., & Zhang, Q. 2023, , 298, 35. doi:10.1007/s11207-023-02126-5
entry: http://arxiv.org/abs/2307.01093v1 (published 2023-07-03 15:13:03)
title: Detecting new fundamental fields with Pulsar Timing Arrays
authors: Chao Zhang, Ning Dai, Qing Gao, Yungui Gong, Tong Jiang, Xuchen Lu
categories: gr-qc, astro-ph.HE (primary: gr-qc)
[email protected]
School of Aeronautics and Astronautics, Shanghai Jiao Tong
University, Shanghai 200240, China
Corresponding author. [email protected]
School of Physics, Huazhong University of Science and Technology, Wuhan, Hubei
430074, China
Corresponding author. [email protected]
School of Physical Science and Technology, Southwest University, Chongqing 400715, China
[email protected]
School of Physics, Huazhong University of Science and Technology, Wuhan, Hubei
430074, China
Department of Physics, School of Physical Science and Technology, Ningbo University, Ningbo, Zhejiang 315211, China
[email protected]
School of Physics, Huazhong University of Science and Technology, Wuhan, Hubei
430074, China
[email protected]
School of Physics, Huazhong University of Science and Technology, Wuhan, Hubei
430074, China
Strong evidence of the existence of the Stochastic Gravitational-Wave Background (SGWB) has been reported by the NANOGrav, PPTA, EPTA and CPTA collaborations.
The Bayesian posteriors of the Gravitational-Wave Background (GWB) amplitude and spectrum are compatible with current astrophysical predictions for the GWB from the population of supermassive black hole binaries (SMBHBs).
In this paper, we discuss the corrections arising from the extra scalar or vector radiation to the characteristic dimensionless strain in PTA experiments and explore the possibility to detect charges surrounding massive black holes, which could give rise to SGWB with vector or scalar polarizations.
The parametrized frequency-dependent characteristic dimensionless strain is used to perform a Bayesian analysis, and the Bayes factor is computed for charged and neutral SMBHBs.
The Bayesian posterior of GWB tensor amplitude is log_10 A_T=-14.85^+0.26_-0.38 and spectral exponent α=-0.60^+0.32_-0.36.
The Bayesian posterior for vector or scalar amplitude A_V, S is nearly flat and there is nearly no constraint from the current observation data.
The Bayes factor is 0.71, far less than 100, so the current observations cannot support the existence of charged SMBHBs.
Detecting new fundamental fields with Pulsar Timing Arrays
Xuchen Lu
July 2023
==========================================================
§ INTRODUCTION
The first direct detection of gravitational wave (GW) event GW150914 provides a new perspective on understanding gravity in nonlinear and strong field regimes, marking the inception of GW astronomy <cit.>.
To date, over 90 GW events resulting from binary star mergers have been detected <cit.>.
In addition to these individual and instantaneous GW sources,
there is a continuous interest in the stochastic gravitational-wave background (SGWB), whose signals are from multiple continuous GW sources.
The recent data released by multiple Pulsar Timing Array (PTA) experiments, including the North American Nanohertz Observatory for Gravitational Waves (NANOGrav) <cit.>, the European PTA (EPTA) <cit.>, the Parkes PTA (PPTA) <cit.>, and the Chinese PTA (CPTA) <cit.>, show evidence for Hellings-Downs angular correlations <cit.>, indicating that the observed stochastic common spectrum can be interpreted as an SGWB.
The observations strongly support the hypothesis that the signal, with its characteristic frequency spectrum, is emitted by supermassive black hole binaries (SMBHBs).
There are supermassive black holes (SMBHs) with masses ranging from 10^5 to 10^10 M_⊙ located at the centers of most galaxies <cit.>.
These SMBHs can form SMBHBs after their host galaxies merge with other galaxies.
The GWs emitted by the SMBHBs contribute to a noise-like broadband signal in the nHz range <cit.>.
Due to gravitational radiation reaction, the GW frequencies evolve slowly, and
the frequency spectrum follows a power-law relationship, h_c(f) ∝ f^-2/3, consistent with the latest data at the 2σ level <cit.>.
However, environmental and statistical effects may lead to different predictions <cit.>.
The expected amplitude of the astrophysical background is subject to an order-of-magnitude uncertainty due to several factors,
such as the mass distribution of the SMBHBs, the eccentricity of the binary orbits, and the redshift.
Moreover, possible contributions from modified gravity theories and from the charges and spins of the SMBHs may add to the uncertainty in the amplitude estimation <cit.>.
There are also numerous interpretations based on cosmological sources, including cosmic strings and domain walls <cit.>, first-order phase transitions <cit.>, and primordial fluctuations <cit.>.
The SGWBs generated by cosmological sources are significant for understanding new physics beyond the Standard Model and providing insights into the primordial universe.
The observation of the SGWB is also important for the study of new fields, such as scalar and vector fields, around SMBHs.
BHs can carry scalar charges in scalar-tensor theories <cit.>, or vector charges in some dark matter models with multi-charged components <cit.>.
BHs may even carry electromagnetic charges, as predicted by the Kerr-Newman BH model <cit.>.
It has been shown that future GW observations with space- and ground-based detectors can impose severe limits on the charges of BHs <cit.>.
The discovery of the nHz SGWB opens a new window for testing astrophysical processes and for probing whether extra polarization emission exists.
In addition to tensor GW emission, binaries may also emit scalar and vector radiation.
These additional forms of emission can occur when the BHs comprising the binaries possess scalar or vector charges.
Such extra polarizations, vector and scalar, would give rise to an SGWB with the corresponding polarization content.
In this study, we focus on the possibility of detecting the extra polarization power spectrum from charged SMBHBs within the new results of NANOGrav and PPTA.
The paper is organized as follows: in Sec. <ref> we discuss the corrections arising from the extra scalar or vector radiation to the characteristic dimensionless strain in PTA experiments.
The parametrized frequency-dependent characteristic dimensionless strain is given based on the approximation, valid within the frequency range of current PTA experiments, that the overlap reduction functions are frequency independent.
In Sec. <ref>, we perform the Bayesian analysis to estimate the signals and give the posterior probability distribution of GWB amplitude and spectra from SMBHs with and without charge.
Conclusions are given in section <ref>.
§ SGWB IN THE PRESENCE OF SCALAR OR VECTOR RADIATION
In theories of massless fields with scalar or vector charge <cit.>, we consider SMBHB components carrying additional charge.
Scalar-tensor theories of gravity allow scalar charge through spontaneous scalarization <cit.>, while some dark matter models with millicharged components provide vector (electromagnetic) charge <cit.>.
The action for gravity and the binary components is given by
𝒮 = ∫ d^4x √(-g)/(16πG) [ R - (1/2) g^μν Φ_,μ Φ_,ν - (1/4) F^μν F_μν
 - (1/√(-g)) ∑_{j=1}^{2} (m_j + 4π m_j q^0_j Φ) ∫ dλ √(-g_μν ż_j^μ ż_j^ν) δ^4(x - z_j)
 - (4π/√(-g)) ∑_{j=1}^{2} m_j q^1_j A_α ∫ dλ ż_j^α δ^4(x - z_j) ],
where Φ is a massless scalar field, A_μ is a massless vector field, F_μν=∇_μA_ν-∇_νA_μ is the field strength, m_1 and m_2 are the masses of two objects of the compact binary.
The parameter q^s_j is the charge per unit mass carried by each binary component, where s=0 corresponds to the scalar field and s=1 to the vector field.
Within the bandwidth of PTA, the background is mainly from the inspiral stage.
The power of tensor, vector, and scalar emission from inspiraling charged binary systems has been studied in <cit.>.
Contrary to the tensor case, the vector and scalar fields contribute monopole and dipole radiation as well as quadrupole radiation.
In the limit of vanishing eccentricity, the scalar and vector energy spectra from the monopole and quadrupole radiation are negligible.
Thus the main contribution of the scalar and vector field to the background is from dipole radiation
⟨dE_V,S/dt⟩ = -[4(s+1)(q_1^s-q_2^s)^2/3] G^2 m_1^2 m_2^2/a^4.
For the tensor field, the main contribution to the background is from quadrupole radiation
⟨dE_T/dt⟩ = -(32/5) G^4 m_1^2 m_2^2 M/a^5,
where M=m_1+m_2 is the total mass and a is the orbital semimajor axis.
For binaries dominated by gravitational interaction, the orbital frequency F satisfies Kepler's law
F = (1/2π)√(GM/a^3).
In the limit of vanishing eccentricity, the orbital frequency F and the GW frequency f are related by f = jF, with j=1, 2.
The rate of change of the orbital frequency due to GW emission is
Ḟ = (48π^{8/3} G^{5/3}/5) (m_1 m_2/M^{1/3}) (2F)^{11/3}.
The energy spectrum is obtained from the emitted power via
dE_I/df = (1/ḟ) ⟨dE_I/dt⟩,
where I = T, V, S labels the tensor, vector, and scalar modes, respectively. For the tensor part from quadrupole radiation (j=2), the energy spectrum is
dE_T/df = [G^{2/3} π^{2/3} m_1 m_2/(3M^{1/3})] f^{-1/3}.
For the vector and scalar part from dipole radiation (j=1), the energy spectrum is
dE_V,S/df = [5(s+1)(q_1^s-q_2^s)^2/72] (m_1 m_2/M) f^{-1}.
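As a consistency check, the tensor spectrum above follows from combining the quadrupole power, Kepler's law, and the chirp rate; a sketch of the algebra (taking magnitudes, with 2πF = √(GM/a^3) and f = 2F for the quadrupole) is:
a = [GM/(2πF)^2]^{1/3}  ⟹  ⟨dE_T/dt⟩ = (32/5) G^{7/3} m_1^2 m_2^2 M^{-2/3} (πf)^{10/3},
ḟ = 2Ḟ = (96π^{8/3} G^{5/3}/5) (m_1 m_2/M^{1/3}) f^{11/3},
dE_T/df = ⟨dE_T/dt⟩/ḟ = [G^{2/3} π^{2/3} m_1 m_2/(3M^{1/3})] f^{-1/3}.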
We expect the presence of tensor, vector, and scalar GWs to give rise to a stochastic background with tensor, vector, and scalar polarizations, described by the dimensionless energy density spectra
Ω_T(f) = (1/ρ_c) dρ_T/dln f,  Ω_V(f) = (1/ρ_c) dρ_V/dln f,  Ω_S(f) = (1/ρ_c) dρ_S/dln f,
where ρ_c = 3H_0^2/(8πG) is the critical density and H_0 is the present-day Hubble parameter. The quantities ρ_T, ρ_V, and ρ_S are the tensor, vector, and scalar GW energy densities, respectively.
The energy density spectrum of the produced SGWB can be obtained from the emission spectrum of a single SMBHB merger event <cit.>
Ω_I(f) = (1/ρ_c) dρ_I/dln f = (f/ρ_c) ∫_0^{z_max} dz [R_m(z)/((1+z)H(z))] (dE_I/df)(f_z),
where f_z=(1+z)f is the frequency at emission, R_m(z) is the SMBHB merger rate per comoving volume at redshift z <cit.> and z_ max=10 is the redshift cutoff.
We adopt the Λ CDM cosmological model with
H(z) = H_0 [Ω_{m0}(1+z)^3 + (1-Ω_{m0})]^{1/2},
where the cosmological parameters are chosen as the Planck 2018 results: H_0 = 67.27 km/s/Mpc, and Ω_m0=0.3166 <cit.>.
It is customary in PTA searches to characterize the GW spectrum by the characteristic dimensionless strain h_c,I(f).
The characteristic dimensionless strain h_c,I(f) is related to Ω_I(f) by
Ω_I(f) = [2π^2/(3H_0^2)] f^2 h_c,I^2(f).
The GW strain spectrum is typically approximated as a power law at a reference frequency f_yr = 1 yr^-1, with amplitude A_c,I and spectral index α_c,I:
h_c,I(f) = A_c,I (f/f_yr)^{α_c,I}.
For the tensor part, α_c,T = -2/3,
while α_c,V = α_c,S = -1 for the vector and scalar parts.
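For reference, substituting this power law into the relation Ω_I(f) = [2π^2/(3H_0^2)] f^2 h_c,I^2(f) above makes the corresponding energy-density slopes explicit:
Ω_I(f) = [2π^2 f_yr^2 A_c,I^2/(3H_0^2)] (f/f_yr)^{2+2α_c,I},
so Ω_T ∝ f^{2/3} for the tensor part and Ω_V, Ω_S ∝ f^0 for the vector and scalar parts.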
The observable in PTA data is the two-point correlation function of the timing signal ẑ(f).
The full two-point function from all polarization contributions takes the schematic form
⟨ẑ(f) ẑ^*(f)⟩ = Ω_T Γ_T + Ω_V Γ_V + Ω_S Γ_S,
where Γ_T, Γ_V, and Γ_S represent the overlap reduction function for tensor, vector, and scalar polarization <cit.>.
It is convenient to define an "effective" energy density to describe the combined effect of the vector and scalar polarizations,
Ω_eff = Ω_T + (Γ_V/Γ_T) Ω_V + (Γ_S/Γ_T) Ω_S.
Since Γ_T, Γ_V, and Γ_S are independent of frequency in the bandwidth of PTA, we can parametrize the frequency-dependent characteristic dimensionless strain h_c in the form,
h_c = A_T (f/f_yr)^α + A_V,S (f/f_yr)^{-1}.
In this situation, a population of GW-driven circular SMBHBs produces a spectrum with α=-2/3 and amplitude A_T ≈ 10^-15 <cit.>.
The amplitude A_V, S is dependent on the overlap reduction function, charges carried by the binary component, and SMBHB population model.
§ DATA ANALYSIS AND RESULTS
Recently, the NANOGrav collaboration, PPTA collaboration, EPTA collaboration, and CPTA collaboration have published their measurements on SGWB.
The results show that the presence of an SGWB with a power-law spectrum is favored over a model with only independent pulsar noises.
Following <cit.>, the first ten and first five excess timing delays measured by the PPTA collaboration <cit.> and the NANOGrav collaboration <cit.>, respectively, are converted to the characteristic strain via <cit.>
residual(f) = [1/(4π^2 f_yr)] (f/f_yr)^{-3/2} h_c(f).
Figure <ref> gives the median values and errors of the observed data from the NANOGrav collaboration and PPTA collaboration.
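A minimal Python sketch of this conversion is given below; the frequency bins and delay values are placeholders rather than the published measurements.

import numpy as np

F_YR = 1.0 / (365.25 * 24 * 3600.0)  # 1 yr^-1 in Hz

def delay_to_strain(f, residual):
    """Invert residual(f) = h_c(f) (f/f_yr)^(-3/2) / (4 pi^2 f_yr) for h_c(f)."""
    return 4.0 * np.pi**2 * F_YR * (f / F_YR) ** 1.5 * residual

# Placeholder excess timing delays (in seconds) at the first five frequency bins
# f_i = i / T_obs of an assumed ~16-yr observing baseline.
f_obs = np.arange(1, 6) / (16.0 * 365.25 * 24 * 3600.0)
delay = np.array([8e-7, 3e-7, 2e-7, 1.5e-7, 1e-7])
h_c_obs = delay_to_strain(f_obs, delay)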
We use the maximum likelihood to explore the implications and constraints of the extra polarization background caused by the existence of charges on SMBHBs.
The likelihood ℒ = p(h_c,i|Θ) over the two data sets is defined as
ln ℒ(Θ) = -(1/2) ∑_{NANOGrav, PPTA} [(h_c,i - h_c(f_i;Θ))/σ_i]^2,
where h_c,i and σ_i are the median values and errors of the observed data, and h_c(f_i;Θ) is the modeled strain at frequency f_i with parameters Θ = (A_T, α, A_V,S).
Because the vector and scalar contributions take the same functional form, we combine them and analyze only one case.
The posterior distribution for the parameters Θ is
p(Θ | h_c,i) = p(h_c,i | Θ) p(Θ) / p(h_c,i),
where p(Θ) is the prior on the parameters and p(h_ c,i) is the evidence.
We use the public code Bilby <cit.>
to perform Bayesian analyses with Eqs. (<ref>) and (<ref>).
We choose the sampler Dynesty <cit.> with 1000 live points for nested sampling and obtain the posteriors of the physical parameters Θ.
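A minimal sketch of this analysis with Bilby is given below. The parameter names, prior ranges, and placeholder data are our assumptions; dropping the A_V,S term from the model gives the corresponding neutral (tensor-only) fit, and the two log-evidences combine into the Bayes factor.

import numpy as np
import bilby

F_YR = 1.0 / (365.25 * 24 * 3600.0)  # reference frequency, 1 yr^-1 in Hz

def strain_model(f, log10_A_T, alpha, log10_A_VS):
    """Parametrized characteristic strain: tensor power law plus the dipole-driven f^-1 term."""
    return 10**log10_A_T * (f / F_YR)**alpha + 10**log10_A_VS * (f / F_YR)**(-1.0)

# Placeholder data; in practice f_obs, h_c_obs and sigma come from the converted
# PPTA and NANOGrav excess-delay measurements described above.
f_obs = np.arange(1, 11) / (16.0 * 365.25 * 24 * 3600.0)
h_c_obs = 2.0e-15 * (f_obs / F_YR) ** (-2.0 / 3.0)
sigma = 0.5 * h_c_obs

likelihood = bilby.core.likelihood.GaussianLikelihood(f_obs, h_c_obs, strain_model, sigma=sigma)
priors = dict(
    log10_A_T=bilby.core.prior.Uniform(-18, -12, name="log10_A_T"),   # log-uniform amplitude prior
    alpha=bilby.core.prior.Uniform(-2, 1, name="alpha"),
    log10_A_VS=bilby.core.prior.Uniform(-20, -12, name="log10_A_VS"),
)

result = bilby.run_sampler(likelihood=likelihood, priors=priors, sampler="dynesty",
                           nlive=1000, label="charged_smbhb", outdir="outdir")
# result.posterior holds the samples; comparing result.log_evidence with that of the
# tensor-only run gives the Bayes factor, BF = exp(logZ_TV - logZ_T).
print(result.posterior[["log10_A_T", "alpha", "log10_A_VS"]].describe())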
We first fit the SMBHB model without charge.
For our fiducial power-law model and a log-uniform amplitude prior, the Bayesian posterior of SGWB amplitude at the customary reference frequency 1yr^-1 is log_10 A_T=-14.81^+0.24_-0.34, which is compatible with current astrophysical estimates for the SGWB from SMBHBs.
The spectral exponent α=-0.61^+0.32_-0.34 is also compatible with the value -2/3 expected for the SGWB from SMBHBs.
The posterior probability distribution of SGWB amplitude A_T and spectral exponent α are shown in Fig. <ref>.
Next, the model for SMBHBs with charges is fitted.
The posteriors of the SGWB amplitude and spectral exponent in the three-parameter model including charges are nearly the same as those from the model without charges.
The posterior probability distribution of SGWB tensor amplitude A_T, vector amplitude A_V, and spectral exponent α are shown in Fig. <ref>.
The Bayesian posterior of SGWB amplitude is log_10 A_T=-14.85^+0.26_-0.38 and spectral exponent α=-0.60^+0.32_-0.36.
As for the amplitude of vector polarization caused by charges, the Bayesian posterior for A_V is nearly flat and there is nearly no constraint from the current observation data.
To quantify whether we can distinguish charged from neutral SMBHs, we take a Bayesian approach by computing the Bayes factor between the charged and neutral SMBH models.
The Bayes factor is the ratio of the evidences for the signal under the two models,
BF = p(h_c,i | TV) / p(h_c,i | T),
where the evidence for a model with parameters Θ is
p(h_c,i) = ∫ dΘ p(h_c,i | Θ) p(Θ),
where p(h_c,i | Θ) is the likelihood and p(Θ) is the prior.
The resulting Bayes factor is BF = 0.71.
A signal for which the Bayes factor exceeds 100 can be understood as decisively favoring charged SMBHs over neutral SMBHs.
So the current observational data cannot support the existence of charge on SMBHs.
§ CONCLUSION
Due to additional charges carried by binaries in the astrophysical environment, their radiated GWs deviate from the predictions of general relativity.
In this paper, we estimate the constraints that PTA observations can place on the charges in the astrophysical environment around SMBHs.
We take the Bayesian analysis for neutral and charged SMBHB models.
For the model without charges, based on the fiducial power-law model and the log-uniform amplitude prior, the Bayesian posteriors of SGWB are log_10A_T=-14.81^+0.24_-0.34 and α=-0.61^+0.32_-0.34, which are compatible with current astrophysical estimations for the SGWB from SMBHBs.
For models with charges, the results of the tensor part are almost consistent with the models without charges by log_10A_T=-14.85^+0.26_-0.38 and α=-0.60^+0.32_-0.36.
For the amplitude of extra polarizations caused by the charges, the Bayesian posterior for A_V is nearly flat and there is nearly no constraint from the current observation data.
The Bayes factor between the charged and neutral SMBH models is only 0.71, far less than 100.
Thus, the current observations cannot support the existence of charged SMBHs.
We thank Dicong Liang for the useful discussions.
This work makes use of the Bilby package.
68
[Abbott et al.(2016a)]Abbott:2016blz B. P. Abbott et al. (LIGO Scientific, Virgo), "Observation of Gravitational Waves from a Binary Black Hole Merger," Phys. Rev. Lett. 116, 061102 (2016). doi:10.1103/PhysRevLett.116.061102
[Abbott et al.(2016b)]TheLIGOScientific:2016agk B. P. Abbott et al. (LIGO Scientific, Virgo), "GW150914: The Advanced LIGO Detectors in the Era of First Discoveries," Phys. Rev. Lett. 116, 131103 (2016). doi:10.1103/PhysRevLett.116.131103
[Abbott et al.(2019)]LIGOScientific:2018mvr B. P. Abbott et al. (LIGO Scientific, Virgo), "GWTC-1: A Gravitational-Wave Transient Catalog of Compact Binary Mergers Observed by LIGO and Virgo during the First and Second Observing Runs," Phys. Rev. X 9, 031040 (2019). doi:10.1103/PhysRevX.9.031040
[Abbott et al.(2021a)]LIGOScientific:2020ibl R. Abbott et al. (LIGO Scientific, Virgo), "GWTC-2: Compact Binary Coalescences Observed by LIGO and Virgo During the First Half of the Third Observing Run," Phys. Rev. X 11, 021053 (2021). doi:10.1103/PhysRevX.11.021053
[Abbott et al.(2021b)]LIGOScientific:2021usb R. Abbott et al. (LIGO Scientific, VIRGO), "GWTC-2.1: Deep Extended Catalog of Compact Binary Coalescences Observed by LIGO and Virgo During the First Half of the Third Observing Run," arXiv:2108.01045 [gr-qc].
[Abbott et al.(2021c)]LIGOScientific:2021djp R. Abbott et al. (LIGO Scientific, VIRGO, KAGRA), "GWTC-3: Compact Binary Coalescences Observed by LIGO and Virgo During the Second Part of the Third Observing Run," arXiv:2111.03606 [gr-qc].
[Agazie et al.(2023a)]NANOGrav:2023gor G. Agazie et al. (NANOGrav), "The NANOGrav 15-year Data Set: Evidence for a Gravitational-Wave Background," Astrophys. J. Lett. 951 (2023). doi:10.3847/2041-8213/acdac6
[Agazie et al.(2023b)]NANOGrav:2023hde G. Agazie et al. (NANOGrav), "The NANOGrav 15-year Data Set: Observations and Timing of 68 Millisecond Pulsars," Astrophys. J. Lett. 951 (2023). doi:10.3847/2041-8213/acda9a
[Antoniadis et al.(2023a)]Antoniadis:2023ott J. Antoniadis et al., "The second data release from the European Pulsar Timing Array III. Search for gravitational wave signals," arXiv:2306.16214 [astro-ph.HE].
[Antoniadis et al.(2023b)]Antoniadis:2023lym J. Antoniadis et al., "The second data release from the European Pulsar Timing Array I. The dataset and timing analysis," arXiv:2306.16224 [astro-ph.HE].
[Antoniadis et al.(2023c)]Antoniadis:2023xlr J. Antoniadis et al., "The second data release from the European Pulsar Timing Array: V. Implications for massive black holes, dark matter and the early Universe," arXiv:2306.16227 [astro-ph.CO].
[Reardon et al.(2023a)]Reardon:2023gzh D. J. Reardon et al., "Search for an isotropic gravitational-wave background with the Parkes Pulsar Timing Array," Astrophys. J. Lett. 951 (2023). doi:10.3847/2041-8213/acdd02
[Zic et al.(2023)]Zic:2023gta A. Zic et al., "The Parkes Pulsar Timing Array Third Data Release," arXiv:2306.16230 [astro-ph.HE].
[Reardon et al.(2023b)]Reardon:2023zen D. J. Reardon et al., "The Gravitational-wave Background Null Hypothesis: Characterizing Noise in Millisecond Pulsar Arrival Times with the Parkes Pulsar Timing Array," Astrophys. J. Lett. 951, L7 (2023). doi:10.3847/2041-8213/acdd03
[Xu et al.(2023)]Xu:2023wog H. Xu et al., "Searching for the Nano-Hertz Stochastic Gravitational Wave Background with the Chinese Pulsar Timing Array Data Release I," Res. Astron. Astrophys. 23, 075024 (2023). doi:10.1088/1674-4527/acdfa5
[Hellings and Downs(1983)]Hellings:1983fr R. W. Hellings and G. S. Downs, "Upper limits on the isotropic gravitational radiation background from pulsar timing analysis," Astrophys. J. Lett. 265, L39 (1983). doi:10.1086/183954
[Kormendy and Richstone(1995)]Kormendy:1995er J. Kormendy and D. Richstone, "Inward bound: The Search for supermassive black holes in galactic nuclei," Ann. Rev. Astron. Astrophys. 33, 581 (1995). doi:10.1146/annurev.aa.33.090195.003053
[DeRocco and Dror(2023)]DeRocco:2023qae W. DeRocco and J. A. Dror, "Searching For Stochastic Gravitational Waves Below a Nanohertz," arXiv:2304.13042 [astro-ph.HE].
[Ghoshal and Strumia(2023)]Ghoshal:2023fhh A. Ghoshal and A. Strumia, "Probing the Dark Matter density with gravitational waves from super-massive binary black holes," arXiv:2306.17158 [astro-ph.CO].
[Agazie et al.(2023c)]NANOGrav:2023hfp G. Agazie et al. (NANOGrav), "The NANOGrav 15-year Data Set: Constraints on Supermassive Black Hole Binaries from the Gravitational Wave Background," arXiv:2306.16220 [astro-ph.HE].
[Afzal et al.(2023)]NANOGrav:2023hvm A. Afzal et al. (NANOGrav), "The NANOGrav 15-year Data Set: Search for Signals from New Physics," Astrophys. J. Lett. 951 (2023). doi:10.3847/2041-8213/acdc91
[Sesana et al.(2008)]Sesana:2008mz A. Sesana, A. Vecchio, and C. N. Colacino, "The stochastic gravitational-wave background from massive black hole binary systems: implications for observations with Pulsar Timing Arrays," Mon. Not. Roy. Astron. Soc. 390, 192 (2008). doi:10.1111/j.1365-2966.2008.13682.x
[Kelley et al.(2017)]Kelley:2016gse L. Z. Kelley, L. Blecha, and L. Hernquist, "Massive Black Hole Binary Mergers in Dynamical Galactic Environments," Mon. Not. Roy. Astron. Soc. 464, 3131 (2017). doi:10.1093/mnras/stw2452
[Ellis et al.(2023)]Ellis:2023owy J. Ellis, M. Fairbairn, G. Hütsi, M. Raidal, J. Urrutia, V. Vaskonen, and H. Veermäe, "Prospects for Future Binary Black Hole GW Studies in Light of PTA Measurements," arXiv:2301.13854 [astro-ph.CO].
[Kocsis and Sesana(2011)]kocsis2011gas B. Kocsis and A. Sesana, "Gas-driven massive black hole binaries: signatures in the nHz gravitational wave background," Mon. Not. Roy. Astron. Soc. 411, 1467 (2011).
[Yunes and Siemens(2013)]Yunes:2013dva N. Yunes and X. Siemens, "Gravitational-Wave Tests of General Relativity with Ground-Based Detectors and Pulsar Timing-Arrays," Living Rev. Rel. 16, 9 (2013). doi:10.12942/lrr-2013-9
[An and Yang(2023)]An:2023idh H. An and C. Yang, "Gravitational Waves Produced by Domain Walls During Inflation," arXiv:2304.02361 [hep-ph].
[Qiu and Yu(2023)]Qiu:2023wbs Z.-Y. Qiu and Z.-H. Yu, "Gravitational waves from cosmic strings associated with pseudo-Nambu-Goldstone dark matter," arXiv:2304.02506 [hep-ph].
[Zeng et al.(2023)]Zeng:2023jut Z.-M. Zeng, J. Liu, and Z.-K. Guo, "Enhanced curvature perturbations from spherical domain walls nucleated during inflation," arXiv:2301.07230 [astro-ph.CO].
[Arzoumanian et al.(2021)]NANOGrav:2021flc Z. Arzoumanian et al. (NANOGrav), "Searching for Gravitational Waves from Cosmological Phase Transitions with the NANOGrav 12.5-Year Dataset," Phys. Rev. Lett. 127, 251302 (2021). doi:10.1103/PhysRevLett.127.251302
[Gouttenoire and Volansky(2023)]Gouttenoire:2023naa Y. Gouttenoire and T. Volansky, "Primordial Black Holes from Supercooled Phase Transitions," arXiv:2305.04942 [hep-ph].
[Dandoy et al.(2023)]Dandoy:2023jot V. Dandoy, V. Domcke, and F. Rompineve, "Search for scalar induced gravitational waves in the International Pulsar Timing Array Data Release 2 and NANOgrav 12.5 years dataset," arXiv:2302.07901 [astro-ph.CO].
[Zhao et al.(2023)]Zhao:2023xnh J.-X. Zhao, X.-H. Liu, and N. Li, "Primordial black holes and scalar-induced gravitational waves from the perturbations on the inflaton potential in peak theory," Phys. Rev. D 107, 043515 (2023). doi:10.1103/PhysRevD.107.043515
[Ferrante et al.(2023)]Ferrante:2023bgz G. Ferrante, G. Franciolini, A. Iovino, Jr., and A. Urbano, "Primordial black holes in the curvaton model: possible connections to pulsar timing arrays and dark matter," arXiv:2305.13382 [astro-ph.CO].
[Cai et al.(2023a)]Cai:2023uhc Y. Cai, M. Zhu, and Y.-S. Piao, "Primordial black holes from null energy condition violation during inflation," arXiv:2305.10933 [gr-qc].
[Inomata et al.(2023)]Inomata:2023zup K. Inomata, K. Kohri, and T. Terada, "The Detected Stochastic Gravitational Waves and Sub-Solar Primordial Black Holes," arXiv:2306.17834 [astro-ph.CO].
[Cai et al.(2023b)]Cai:2023dls Y.-F. Cai, X.-C. He, X. Ma, S.-F. Yan, and G.-W. Yuan, "Limits on scalar-induced gravitational waves from the stochastic background by pulsar timing array observations," arXiv:2306.17822 [gr-qc].
[Campbell et al.(1992)]Campbell:1991kz B. A. Campbell, N. Kaloper, and K. A. Olive, "Classical hair for Kerr-Newman black holes in string gravity," Phys. Lett. B 285, 199 (1992). doi:10.1016/0370-2693(92)91452-F
[Mignemi and Stewart(1993)]Mignemi:1992nt S. Mignemi and N. R. Stewart, "Charged black holes in effective string theory," Phys. Rev. D 47, 5259 (1993). doi:10.1103/PhysRevD.47.5259
[Kanti et al.(1996)]Kanti:1995vq P. Kanti, N. E. Mavromatos, J. Rizos, K. Tamvakis, and E. Winstanley, "Dilatonic black holes in higher curvature string gravity," Phys. Rev. D 54, 5049 (1996). doi:10.1103/PhysRevD.54.5049
[Yunes and Stein(2011)]Yunes:2011we N. Yunes and L. C. Stein, "Non-Spinning Black Holes in Alternative Theories of Gravity," Phys. Rev. D 83, 104002 (2011). doi:10.1103/PhysRevD.83.104002
[Kleihaus et al.(2011)]Kleihaus:2011tg B. Kleihaus, J. Kunz, and E. Radu, "Rotating Black Holes in Dilatonic Einstein-Gauss-Bonnet Theory," Phys. Rev. Lett. 106, 151104 (2011). doi:10.1103/PhysRevLett.106.151104
journal Phys. Rev. Lett. volume 106, pages 151104 (year 2011)NoStop
[Sotiriou and Zhou(2014a)]Sotiriou:2013qea
author author T. P. Sotiriou and author S.-Y. Zhou, title Black hole hair in generalized scalar-tensor
gravity, https://doi.org/10.1103/PhysRevLett.112.251102 journal journal Phys. Rev. Lett. volume 112, pages 251102 (year
2014a)NoStop
[Sotiriou and Zhou(2014b)]Sotiriou:2014pfa
author author T. P. Sotiriou and author S.-Y. Zhou, title Black hole hair in generalized scalar-tensor
gravity: An explicit example, https://doi.org/10.1103/PhysRevD.90.124063 journal journal Phys. Rev. D volume 90, pages 124063 (year 2014b)NoStop
[Antoniou et al.(2018)Antoniou, Bakopoulos, and Kanti]Antoniou:2017acq
author author G. Antoniou, author A. Bakopoulos, and author P. Kanti, title Evasion of No-Hair Theorems and Novel
Black-Hole Solutions in Gauss-Bonnet Theories, https://doi.org/10.1103/PhysRevLett.120.131102 journal
journal Phys. Rev. Lett. volume 120, pages 131102 (year 2018)NoStop
[Doneva and Yazadjiev(2018)]Doneva:2017bvd
author author D. D. Doneva and author S. S. Yazadjiev, title New Gauss-Bonnet Black Holes with
Curvature-Induced Scalarization in Extended Scalar-Tensor Theories, https://doi.org/10.1103/PhysRevLett.120.131103 journal
journal Phys. Rev. Lett. volume 120, pages 131103 (year 2018)NoStop
[Silva et al.(2018)Silva,
Sakstein, Gualtieri, Sotiriou, and Berti]Silva:2017uqg
author author H. O. Silva, author J. Sakstein,
author L. Gualtieri, author T. P. Sotiriou, and author E. Berti, title
Spontaneous scalarization of black holes and compact stars from a
Gauss-Bonnet coupling, https://doi.org/10.1103/PhysRevLett.120.131104 journal
journal Phys. Rev. Lett. volume 120, pages 131104 (year 2018)NoStop
[Cardoso et al.(2021)Cardoso,
Macedo, and Vicente]Cardoso:2020iji
author author V. Cardoso, author C. F. B. Macedo, and author R. Vicente, title Eccentricity evolution of compact binaries
and applications to gravitational-wave physics, https://doi.org/10.1103/PhysRevD.103.023015 journal journal Phys. Rev. D volume 103, pages 023015 (year 2021)NoStop
[Holdom(1986)]Holdom:1985ag
author author B. Holdom, title Two U(1)'s and Epsilon Charge Shifts, https://doi.org/10.1016/0370-2693(86)91377-8 journal
journal Phys. Lett. B volume 166, pages 196 (year 1986)NoStop
[Cardoso et al.(2016)Cardoso,
Macedo, Pani, and Ferrari]Cardoso:2016olt
author author V. Cardoso, author C. F. B. Macedo, author P. Pani, and author V. Ferrari, title Black holes and gravitational waves in models of minicharged dark
matter, https://doi.org/10.1088/1475-7516/2016/05/054 J. Cosmol.
Astropart. Phys. volume 05year year
(2016) pages pages 054, note [Erratum: JCAP 04, E01 (2020)]NoStop
[Newman et al.(1965)Newman,
Couch, Chinnapared, Exton,
Prakash, and Torrence]Newman:1965my
author author E. T. Newman, author R. Couch,
author K. Chinnapared, author A. Exton, author
A. Prakash, and author
R. Torrence, title Metric of
a Rotating, Charged Mass, https://doi.org/10.1063/1.1704351
journal journal J. Math. Phys. volume 6, pages 918 (year
1965)NoStop
[Scharre and Will(2002)]Scharre:2001hn
author author P. D. Scharre and author C. M. Will, title Testing scalar tensor gravity using space
gravitational wave interferometers, https://doi.org/10.1103/PhysRevD.65.042002 journal journal Phys. Rev. D volume 65, pages 042002 (year 2002)NoStop
[Barausse et al.(2016)Barausse, Yunes, and Chamberlain]Barausse:2016eii
author author E. Barausse, author N. Yunes, and author K. Chamberlain, title Theory-Agnostic Constraints on Black-Hole Dipole
Radiation with Multiband Gravitational-Wave Astrophysics, https://doi.org/10.1103/PhysRevLett.116.241104 journal
journal Phys. Rev. Lett. volume 116, pages 241104 (year 2016)NoStop
[Maselli et al.(2022)Maselli,
Franchini, Gualtieri, Sotiriou, Barsanti, and Pani]Maselli:2021men
author author A. Maselli, author N. Franchini,
author L. Gualtieri, author T. P. Sotiriou, author
S. Barsanti, and author
P. Pani, title Detecting
fundamental fields with LISA observations of gravitational waves from extreme
mass-ratio inspirals, https://doi.org/10.1038/s41550-021-01589-5
journal journal Nature Astron. volume 6, pages 464 (year
2022)NoStop
[Maselli et al.(2020)Maselli,
Franchini, Gualtieri, and Sotiriou]Maselli:2020zgv
author author A. Maselli, author N. Franchini,
author L. Gualtieri, and author T. P. Sotiriou, title Detecting scalar fields with Extreme Mass Ratio
Inspirals, https://doi.org/10.1103/PhysRevLett.125.141101
journal journal Phys. Rev. Lett. volume 125, pages 141101 (year
2020)NoStop
[Liang et al.(2023)Liang,
Xu, Mai, and Shao]Liang:2022gdk
author author D. Liang, author R. Xu, author Z.-F. Mai, and author
L. Shao, title Probing vector
hair of black holes with extreme-mass-ratio inspirals, https://doi.org/10.1103/PhysRevD.107.044053 journal journal Phys. Rev. D volume 107, pages 044053 (year 2023)NoStop
[Zhang et al.(2023)Zhang,
Guo, Gong, and Wang]Zhang:2023vok
author author C. Zhang, author H. Guo, author Y. Gong, and author
B. Wang, title Detecting
vector charge with extreme mass ratio inspirals onto Kerr black holes, https://arxiv.org/abs/2301.05915 arXiv:2301.05915 [gr-qc]
NoStop
[Zhang et al.(2022)Zhang,
Gong, Liang, and Wang]Zhang:2022rfr
author author C. Zhang, author Y. Gong,
author D. Liang, and author B. Wang, title
Gravitational waves from eccentric extreme mass-ratio inspirals as probes
of scalar fields, https://arxiv.org/abs/2210.11121
arXiv:2210.11121 [gr-qc] NoStop
[Damour and Esposito-Farese(1993)]Damour:1993hw
author author T. Damour and author G. Esposito-Farese, title Nonperturbative strong field effects
in tensor - scalar theories of gravitation, https://doi.org/10.1103/PhysRevLett.70.2220 journal journal Phys. Rev. Lett. volume 70, pages 2220 (year 1993)NoStop
[Du(2019)]Du:2018txo
author author S. M. Du, title Scalar Stochastic Gravitational-Wave Background
in Brans-Dicke Theory of Gravity, https://doi.org/10.1103/PhysRevD.99.044057 journal journal Phys. Rev. D volume 99, pages 044057 (year 2019)NoStop
[Phinney(2001)]Phinney:2001di
author author E. S. Phinney, title A Practical theorem on gravitational wave
backgrounds, https://arxiv.org/abs/astro-ph/0108028
arXiv:astro-ph/0108028 NoStop
[Klein et al.(2016)Klein et al.]Klein:2015hvg
author author A. Klein et al., title Science with the space-based
interferometer eLISA: Supermassive black hole binaries, https://doi.org/10.1103/PhysRevD.93.024003 journal journal Phys. Rev. D volume 93, pages 024003 (year 2016)NoStop
[Aghanim et al.(2020)Aghanim
et al.]Planck:2018vyg
author author N. Aghanim et al. (collaboration Planck), title Planck 2018 results. VI. Cosmological parameters, https://doi.org/10.1051/0004-6361/201833910 journal journal Astron. Astrophys. volume 641, pages A6 (year 2020), note [Erratum:
Astron.Astrophys. 652, C4 (2021)]NoStop
[Liang and Trodden(2021)]Liang:2021bct
author author Q. Liang and author M. Trodden, title Detecting the stochastic gravitational wave
background from massive gravity with pulsar timing arrays, https://doi.org/10.1103/PhysRevD.104.084052 journal journal Phys. Rev. D volume 104, pages 084052 (year 2021)NoStop
[Shen et al.(2023)Shen,
Yuan, Wang, and Wang]Shen:2023pan
author author Z.-Q. Shen, author G.-W. Yuan,
author Y.-Y. Wang, and author Y.-Z. Wang, title Dark Matter Spike surrounding Supermassive Black Holes Binary and
the nanohertz Stochastic Gravitational Wave Background, https://arxiv.org/abs/2306.17143 arXiv:2306.17143 [astro-ph.HE]
NoStop
[Ratzinger and Schwaller(2021)]Ratzinger:2020koh
author author W. Ratzinger and author P. Schwaller, title Whispers from the dark side: Confronting
light new physics with NANOGrav data, https://doi.org/10.21468/SciPostPhys.10.2.047 journal
journal SciPost Phys. volume 10, pages 047 (year 2021)NoStop
[Ashton et al.(2019)Ashton
et al.]Ashton:2018jfp
author author G. Ashton et al., title BILBY: A user-friendly Bayesian
inference library for gravitational-wave astronomy, https://doi.org/10.3847/1538-4365/ab06fc journal journal Astrophys. J. Suppl. volume 241, pages 27 (year 2019)NoStop
[Skilling(2006)]10.1214/06-BA127
author author J. Skilling, title Nested sampling for general Bayesian
computation, https://doi.org/10.1214/06-BA127 journal journal Bayesian Analysis volume 1, pages 833 (year 2006)NoStop
|
http://arxiv.org/abs/2307.01193v1
|
20230703175440
|
Squeezing Large-Scale Diffusion Models for Mobile
|
[
"Jiwoong Choi",
"Minkyu Kim",
"Daehyun Ahn",
"Taesu Kim",
"Yulhwa Kim",
"Dongwon Jo",
"Hyesung Jeon",
"Jae-Joon Kim",
"Hyungjun Kim"
] |
cs.LG
|
[
"cs.LG",
"cs.AI"
] |
Squeezing Large-Scale Diffusion Models for Mobile
Jiwoong Choi (1), Minkyu Kim (1), Daehyun Ahn (1), Taesu Kim (1), Yulhwa Kim (2), Dongwon Jo (2), Hyesung Jeon (2), Jae-Joon Kim (2), Hyungjun Kim (1)
(1) SqueezeBits Inc., Seoul, South Korea
(2) Seoul National University, Seoul, South Korea
Correspondence: Hyungjun Kim <[email protected]>
Keywords: Machine Learning, ICML
The emergence of diffusion models has greatly broadened the scope of high-fidelity image synthesis, resulting in notable advancements in both practical implementation and academic research.
With the active adoption of the model in various real-world applications, the need for on-device deployment has grown considerably.
However, deploying large diffusion models such as Stable Diffusion with more than one billion parameters to mobile devices poses distinctive challenges due to the limited computational and memory resources, which may vary according to the device.
In this paper, we present the challenges and solutions for deploying Stable Diffusion on mobile devices with the TensorFlow Lite framework, which supports both iOS and Android devices.
The resulting Mobile Stable Diffusion achieves an inference latency of under 7 seconds for 512 × 512 image generation on Android devices with mobile GPUs.
§ INTRODUCTION
Recently, diffusion models have gained significant interest by achieving impressive performance in image synthesis and related tasks.
Since the public release of Stable Diffusion <cit.>, one of the foundation models in diffusion models, there has been a surge of interest in exploring the potential of the diffusion models in various fields including image synthesis <cit.>, super-resolution <cit.>, inpainting <cit.>, and many other applications <cit.>.
Deploying large diffusion models on mobile devices offers significant advantages such as reduced server costs and improved user privacy, but it presents unique challenges.
These challenges arise from the large number of parameters, typically exceeding one billion, which necessitates compressing the model for deployment on mobile devices.
Moreover, ensuring that the computation latency remains within an acceptable range is also a crucial consideration.
In this paper, we introduce the implementation of Mobile Stable Diffusion based on Stable Diffusion v2.1, achieving, to the best of our knowledge, the lowest inference latency on GPU-powered Android devices (∼7 seconds on a Samsung Galaxy S23 to generate a 512 × 512 image).
§ BACKGROUND
Diffusion models utilize the reverse diffusion process to generate images from noise.
These models have been recognized for their ability to address significant challenges in the field of image synthesis.
Specifically, they mitigate problems such as mode-collapse, training instability, and quality degradation that are commonly encountered in previous approaches such as Generative Adversarial Networks (GANs) or Variational Autoencoders (VAEs).
<cit.> initially showcased the capability of diffusion models in generating high-quality images, although they came with high computational costs.
Subsequent works <cit.> have focused on reducing the computational cost of diffusion models.
<cit.> introduced a method to decrease the number of denoising steps based on the non-Markovian diffusion process.
On the other hand, <cit.> proposed to improve the efficiency of diffusion models by applying the denoising steps in latent space.
These advances in efficiency contributed to the development of Stable Diffusion, a latent diffusion model for high-resolution image generation.
Stable Diffusion has demonstrated impressive capabilities in both text-to-image and image-to-image synthesis tasks.
The model combines three modules to implement text-to-image synthesis: a Contrastive Language–Image Pre-training (CLIP) module that generates guidance from a given text prompt (text encoder), a U-Net module that conducts the reverse diffusion process (denoising network), and a Decoder module from a VAE model that generates an image from the output latent tensor (image decoder).
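As a rough illustration of how these three modules interact at inference time, the Python sketch below outlines the text-to-image loop of a latent diffusion model. It is schematic only: the module callables, the scheduler_step update, and the latent shape are placeholders rather than the real Stable Diffusion interfaces.

import numpy as np

def text_to_image(text_encoder, unet, vae_decoder, scheduler_step, prompt,
                  steps=20, latent_shape=(1, 64, 64, 4)):
    cond = text_encoder(prompt)                                  # CLIP guidance from the prompt
    latent = np.random.randn(*latent_shape).astype(np.float32)   # start from Gaussian noise in latent space
    for t in reversed(range(steps)):                             # reverse-diffusion (denoising) loop
        noise_pred = unet(latent, t, cond)                       # U-Net predicts the noise at step t
        latent = scheduler_step(latent, noise_pred, t)           # scheduler removes the predicted noise
    return vae_decoder(latent)                                   # VAE decoder maps the latent to the image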
There is a growing demand for on-device image synthesis using the diffusion models, with a focus on enhancing the models in terms of latency, scalability, and user privacy.
<cit.> introduced the official support for on-device computations of Stable Diffusion on iOS mobile devices.
On Android devices, <cit.> recently announced the first mobile deployment of Stable Diffusion based on the Hexagon processor of the latest Snapdragon 8 Gen 2 platform.
<cit.> has also demonstrated a faster implementation of Stable Diffusion using mobile GPUs based on private OpenCL kernels.
While prior works have demonstrated the feasibility of deploying Stable Diffusion on-device, these works commonly relied on custom-built kernels for acceleration.
Particularly in the case of Android devices, <cit.> relied on the Hexagon processor and the dedicated SDK.
Additionally, <cit.> reported extensive use of private OpenCL-based kernels, pursuing additional performance gain with optimized memory access and faster computation.
§ CHALLENGES AND PROPOSED SOLUTIONS
We have chosen Google's TensorFlow Lite (TFLite) runtime <cit.> as our deployment framework, rather than constructing custom-built kernels.
Opting for TFLite offers two significant benefits over building custom kernels.
First, the public accessibility of TFLite is likely to stimulate further adoption of on-device Stable Diffusion models in real-world applications.
Moreover, the versatility of TFLite facilitates the rapid deployment of various diffusion models on different mobile devices using the same optimization techniques.
In this section, we introduce several technical challenges we encountered while deploying the Stable Diffusion model using TFLite on a mobile GPU and propose solutions for them.
§.§ Complete Mobile GPU Delegation
TFLite enables the use of the mobile GPU via a hardware driver called GPU delegate.
It selectively runs supported operators in a computation graph on the GPU, leaving the unsupported operators to run on the CPU.
However, such selective execution often leads to sub-optimal performance due to the expensive communication between CPU and GPU.
Therefore, complete delegation is necessary for achieving optimal performance.
While the TFLite GPU delegate provides the acceleration for the most operators involved in Stable Diffusion, it fails to delegate even officially supported operators when the input activation size is large.
To address the incomplete GPU delegation, we propose three methods involving modifications in the computation graph of the model.
§.§ Converting FullyConnected to Conv2D
In spatial transformer blocks of the denoising U-Net network, there exist several fully-connected layers with large input activations (e.g., 1 × 4096 × 320).
Since the large fully-connected layers failed to be delegated, we convert them to equivalent convolution layers as shown in Fig. <ref>.
Note that the depicted FullyConnected layer and the Reshape-Conv2D-Reshape layers produce the same output and show almost the same latency when benchmarked on the GPU.
Hence, converting all FullyConnected operators into equivalent Conv2D operators is preferable to prevent GPU delegation failures.
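The equivalence can be checked numerically with a small TensorFlow sketch such as the one below. It assumes the 1 × 4096 × 320 activation quoted above; the 320-to-320 layer width, random weights, and variable names are illustrative rather than taken from the actual model.

import numpy as np
import tensorflow as tf

x = tf.random.normal([1, 4096, 320])            # large activation from the example above
w = tf.random.normal([320, 320])                # fully-connected weights
b = tf.random.normal([320])

fc_out = tf.reshape(tf.matmul(tf.reshape(x, [4096, 320]), w) + b, [1, 4096, 320])

x4d = tf.reshape(x, [1, 64, 64, 320])           # Reshape to NHWC (4096 = 64 x 64)
kernel = tf.reshape(w, [1, 1, 320, 320])        # same weights viewed as a 1x1 convolution kernel
conv_out = tf.nn.conv2d(x4d, kernel, strides=1, padding="SAME") + b
conv_out = tf.reshape(conv_out, [1, 4096, 320])

print(np.allclose(fc_out.numpy(), conv_out.numpy(), rtol=1e-4, atol=1e-3))  # expect True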
§.§ Serializing Conv2D with large activations
Although converting fully-connected layers to equivalent convolution layers enables delegation of layers with large input activations, we observed that one 3×3 convolution layer in the denoising network failed to be delegated with the OpenCL backend due to its large input and output activation sizes: 1 × 32 × 32 × 1920 and 1 × 32 × 32 × 640, respectively.
Serializing the Conv2D operator can solve this problem by reducing the activation sizes, but at the cost of the overhead of multiple kernel calls. Therefore, the minimal serialization factor should be chosen to avoid excessive overhead.
The serialization can be applied along the input or output channel dimension as shown in Fig. <ref>.
By trying possible serialization factors in increasing order along each dimension, we find that the minimal factor enabling complete delegation is 2 for the input dimension (15.5 ms latency) and 8 for the output dimension (40.9 ms latency).
Thus, we chose the input serialization for its lower latency.
As the input serialization is a simple reordering of the computation sequence, the output should be very similar to that of the original graph. We qualitatively examined the generated images before and after applying the serialization.
The difference between the images was subtle, as shown in the first two images in Fig. <ref>.
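Input serialization amounts to splitting the convolution along the input-channel axis and summing the partial results, as in the following TensorFlow sketch. It uses the 1 × 32 × 32 × 1920 input and the factor of 2 mentioned above; the weights are random and the bias is omitted, so this is only a numerical illustration of the reordering, not the deployed graph.

import numpy as np
import tensorflow as tf

x = tf.random.normal([1, 32, 32, 1920])                # large input activation quoted above
kernel = tf.random.normal([3, 3, 1920, 640])           # 3x3 Conv2D weights (HWIO layout)
full = tf.nn.conv2d(x, kernel, strides=1, padding="SAME")

n_splits = 2                                           # minimal factor found for complete delegation
xs = tf.split(x, n_splits, axis=-1)                    # split the input channels
ks = tf.split(kernel, n_splits, axis=2)                # matching slices of the kernel
serial = tf.add_n([tf.nn.conv2d(xi, ki, strides=1, padding="SAME") for xi, ki in zip(xs, ks)])

print(np.allclose(full.numpy(), serial.numpy(), rtol=1e-3, atol=1e-2))  # expect True (bias omitted)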
§.§ Broadcast-free Group Normalization
Group normalization is not represented as a single operator in TFLite but as a computation graph consisting of basic operators such as Mean, Square, Rsqrt, and BroadcastTo.
However, BroadcastTo is not supported by the TFLite GPU delegate, which makes it necessary to modify the implementation of the group normalization layer.
We notice that the TFLite converter does not create an explicit BroadcastTo operator when the activations are 4-dimensional or lower tensors.
Hence, we reformat the group normalization layer so that the dimensions of the activation tensors are at most 4.
Please refer to Fig. <ref> in Appendix for the modified group normalization graph.
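A possible reformulation in this spirit is sketched below: all intermediate tensors stay at rank 4 and only implicit broadcasting is used, so no explicit BroadcastTo operator is created. This is an illustrative re-implementation, not the exact graph shown in the appendix figure, and the tensor shapes are assumptions.

import tensorflow as tf

def group_norm_4d(x, gamma, beta, groups=32, eps=1e-5):
    # x: [N, H, W, C]; every intermediate tensor is at most rank 4.
    n, h, w, c = x.shape
    g = tf.reshape(x, [n, h * w, groups, c // groups])       # rank-4 grouped view of the activations
    mean = tf.reduce_mean(g, axis=[1, 3], keepdims=True)     # per-(sample, group) mean
    var = tf.reduce_mean(tf.square(g - mean), axis=[1, 3], keepdims=True)
    g = (g - mean) * tf.math.rsqrt(var + eps)                # normalize within each group
    x_hat = tf.reshape(g, [n, h, w, c])
    return x_hat * gamma + beta                              # per-channel affine via implicit broadcasting

x = tf.random.normal([1, 32, 32, 320])
y = group_norm_4d(x, tf.ones([320]), tf.zeros([320]))
print(y.shape)   # (1, 32, 32, 320)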
§.§ Numerically Stable Approximation of GELU
The images generated on different hardware are noticeably different even when an identical textual description and initial latent are used as inputs (Fig. <ref>).
Stable Diffusion adopts float16 as the default data type for faster operations, which generally works well on server GPUs without causing any issues. However, it is important to note that on certain mobile devices, the use of float16 can lead to floating-point exceptions.
We identify the cubic polynomial term of the approximated GELU operator as the cause of the numerical instability.
GELU(x) ≈ 0.5x(1 + τ(x)),   where τ(x) ≡ tanh(√(2/π)(x + 0.044715 x^3)).
Instead of this well-known approximation, we use the following more numerically stable approximation:
GELU(x) ≈ 0.5x(1 + τ(γ_M(x))),   where γ_M(x) ≡ x if |x| ≤ M, and M otherwise,
is a clipping function.
We use an empirical value M=10, which suppresses the floating-point exceptions and maintains the image quality as shown in Fig. <ref>.
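The two approximations can be compared with the NumPy sketch below. One assumption to note: the sketch clips the tanh argument symmetrically to [-M, M], whereas the piecewise definition of γ_M above is reproduced exactly as written in the text.

import numpy as np

SQRT_2_OVER_PI = np.sqrt(2.0 / np.pi)

def gelu_tanh(x):
    # Standard tanh approximation of GELU; the cubic term can overflow in float16.
    return 0.5 * x * (1.0 + np.tanh(SQRT_2_OVER_PI * (x + 0.044715 * x ** 3)))

def gelu_tanh_clipped(x, m=10.0):
    # Clipped variant: the tanh argument is evaluated at a bounded surrogate of x
    # (symmetric clipping is an assumption of this sketch).
    g = np.clip(x, -m, m)
    return 0.5 * x * (1.0 + np.tanh(SQRT_2_OVER_PI * (g + 0.044715 * g ** 3)))

x = np.linspace(-60.0, 60.0, 7, dtype=np.float16)
print(gelu_tanh(x))          # the intermediate cubic term overflows float16 for large |x|
print(gelu_tanh_clipped(x))  # intermediates stay bounded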
§.§ Pipelined Execution
Due to the limited memory available on mobile devices, it is often not practical to keep all three components of Stable Diffusion in memory simultaneously.
We propose a pipelined execution strategy for devices with small processor memory.
While the denoising network is retained in memory throughout the entire execution, the text encoder and the image decoder are loaded interchangeably via a child thread running in parallel with the main thread, as described in Fig. <ref>.
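A minimal Python sketch of this strategy is given below; the load_model callable and the .tflite file names are placeholders for whatever model-loading mechanism is used, not the actual TFLite implementation.

import threading

def generate_pipelined(load_model, prompt, latent, num_steps=20):
    text_encoder = load_model("text_encoder.tflite")
    unet = load_model("unet.tflite")                 # denoising network stays resident throughout
    cond = text_encoder(prompt)
    del text_encoder                                 # release the encoder before the decoder is needed

    holder = {}
    loader = threading.Thread(target=lambda: holder.update(decoder=load_model("decoder.tflite")))
    loader.start()                                   # image decoder loads on a child thread ...

    for t in range(num_steps):                       # ... while denoising runs on the main thread
        latent = unet(latent, t, cond)

    loader.join()
    return holder["decoder"](latent)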
§.§ Model Compression
We apply quantization and pruning techniques to the pre-trained model to reduce the overall memory consumption.
Since the mobile GPU does not support integer matrix multiplication, float16 is used for the activations.
However, we quantize the weights to 8-bit precision to reduce the model size; thus, the weights are cast from 8-bit integers to 16-bit floating point before being involved in the computation.
We further apply structured pruning on huge convolution layers to minimize memory requirements.
Since it is not straightforward to measure the performance degradation caused by the quantization and pruning quantitatively, we used block-wise reconstruction error <cit.> as an indirect metric and the quality of generated images as a qualitative measure.
Fig. <ref> shows the output images of the baseline, quantized, and quantized and pruned model, respectively.
Although each image shows differences in details, they are less prominent than in Fig. <ref>.
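For concreteness, the weight-only quantization can be sketched in NumPy as below. Symmetric per-output-channel scaling is an assumption of this sketch; the text only states that weights are stored as 8-bit integers and cast to float16 before computation.

import numpy as np

def quantize_weights_int8(w):
    # w: float32 weights [out, in]; returns int8 weights plus per-row float16 scales.
    scale = np.max(np.abs(w), axis=1, keepdims=True) / 127.0
    q = np.clip(np.round(w / scale), -127, 127).astype(np.int8)
    return q, scale.astype(np.float16)

def dequantize_to_fp16(q, scale):
    # Cast back to float16 just before the float16 matmul/convolution on the GPU.
    return q.astype(np.float16) * scale

w = np.random.randn(640, 320).astype(np.float32)
q, s = quantize_weights_int8(w)
print(np.max(np.abs(w - dequantize_to_fp16(q, s).astype(np.float32))))  # worst-case weight error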
§ EXPERIMENT
In this work, we use Stable Diffusion v2.1 as a baseline model and optimize it for mobile deployment.
We choose the Samsung Galaxy S23 device to measure end-to-end benchmark latency.
The device has the Snapdragon 8 Gen 2 processor, which includes the Adreno 740 GPU.
In addition to the quantization and pruning, we apply knowledge distillation to reduce the number of inference steps following <cit.> and <cit.>.
Table <ref> shows the end-to-end latency of our model and the comparison with previous approaches to deploy Stable Diffusion on mobile.
For a fair comparison with previous works, we measure end-to-end latency for text encoding, 20 effective denoising steps and image decoding.
The proposed approach can successfully generate a 512 × 512 image from a given text prompt within 7 seconds, as shown in Fig. <ref>.
In addition, while previous approaches use dedicated or custom engines to deploy Stable Diffusion on mobile, our approach uses the off-the-shelf TFLite engine without any custom modification.
§ CONCLUSION
In this paper, we have discussed a series of optimization techniques that, in combination, enable the fastest on-device image synthesis using the Stable Diffusion.
These solutions can be extended to the deployment of other diffusion models, thereby facilitating the implementation of these models on various mobile devices, while leveraging the computation capability of TFLite.
We believe that the optimized deployment to a common and accessible inference framework will enrich the ecosystem of real-world mobile applications built upon diffusion models.
§ VISUALIZATION OF COMPUTATIONAL GRAPHS
We provide visualization of computational graphs proposed in the main text.
Fig. <ref> shows the computational graph of original group normalization layer in TFLite format and that of the reimplemented group normalization layer.
All of the BroadcastTo operations and 5-dimensional activations are removed in the reimplemented version.
In Fig. <ref>, the computational graph of the modified version of GELU is depicted.
Note that the additional operations (Minimum and Maximum) are added at the beginning of the graph.
|
http://arxiv.org/abs/2307.00660v1
|
20230702202755
|
Minimum Levels of Interpretability for Artificial Moral Agents
|
[
"Avish Vijayaraghavan",
"Cosmin Badea"
] |
cs.AI
|
[
"cs.AI",
"cs.CY"
] |
Minimum Levels of Interpretability for Artificial Moral Agents
Avish Vijayaraghavan (1,2,3) <[email protected]>, Cosmin Badea (3)
(1) Section of Bioinformatics, Division of Systems Medicine, Department of Metabolism, Digestion and Reproduction, Imperial College London, London, UK
(2) UKRI Centre for Doctoral Training in AI for Healthcare, Imperial College London, London, UK
(3) Department of Computing, Imperial College London, London, UK
As artificial intelligence (AI) models continue to scale up, they are becoming more capable and integrated into various forms of decision-making systems. For models involved in moral decision-making, also known as artificial moral agents (AMA), interpretability provides a way to trust and understand the agent's internal reasoning mechanisms for effective use and error correction. In this paper, we provide an overview of this rapidly-evolving sub-field of AI interpretability, introduce the concept of the Minimum Level of Interpretability (MLI) and recommend an MLI for various types of agents, to aid their safe deployment in real-world settings.
Abbreviations. AI = artificial intelligence, AMA = artificial moral agent, BU = bottom-up, GPT = generative pre-trained transformer, IML = interpretable machine learning (or interpretability), LLM = large language model, MDM = moral decision-making, ML = machine learning, MLI = minimum level of interpretability, TD = top-down.
§ INTRODUCTION
The deployment of consumer-facing generative artificial intelligence (AI) models such as Midjourney and ChatGPT has raised important questions on the ethics <cit.> and consequences of widespread access to AI technologies <cit.>. Tracing the evolution of these models over the past five years <cit.>, it is likely that we will soon see multi-modal general-purpose models <cit.> available to the public. As these models begin operating with higher autonomy and become integrated into existing applications <cit.> (e.g. ChatGPT with plugins, AI vision models within self-driving cars), they will play a greater role in many aspects of human decision-making <cit.>. A fundamental subset of human decision-making is moral decision-making (MDM). MDM comes in many forms - some examples include predicting whether criminals will reoffend <cit.>, deciding appropriate treatment plans for patients <cit.>, and executing military defense strategies in a way that is compliant with original mission orders <cit.>. MDM is difficult because it often involves weighing competing values in complex and ambiguous situations <cit.>. Where other types of decision-making may be based in pragmatic considerations, like efficiency or performance, MDM requires making judgements about what is right and wrong.
AI models that are involved in MDM are called artificial moral agents (AMAs). For these types of decisions, it is imperative that we have high levels of agent understanding so that errors can be corrected swiftly and performance better aligned with human values to prevent unintended and potentially harmful agent effects <cit.>. “Understanding” an agent can take on different levels of complexity <cit.> which require different AMA constructions <cit.> for effective deployment. Here, effective deployment means finding a model of appropriate capacity for its task so it can be deployed and updated as efficiently as possible in the real-world. This understanding of agent behaviour is enabled through the field of interpretable machine learning (IML, a.k.a. interpretability) which helps make AI models more trustworthy and transparent <cit.>.
Taking the necessity of interpretability in AMAs as a spectrum, we frame our discussion less around whether AMAs need interpretability (i.e. a binary decision), and more around the Minimum Level of Interpretability (MLI) for different AMA constructions. As such, this paper revolves around three key concepts: different AMA constructions, different levels of interpretability, and how AMA capability is altered by different levels of interpretability. Much of machine ethics research concerns aligning ethical schemas with machines <cit.>. Our work continues down this line, aiming to bridge technical aspects of interpretability with AMA construction for smoother deployment. An important disclaimer is that this field is nascent and we are proposing general safety rules based on limited evidence - the MLI will evolve as more evidence is gathered for different AMA use cases. Our scope is limited to a computational understanding of moral decision-making in AMAs and we do not consider the problem of where responsibility falls for agent decisions <cit.>.
§ BACKGROUND
§.§ Improving moral decision-making (MDM)
We adopt the definition given by Garrigan and colleagues <cit.> that moral decision-making (MDM) is “any decision, including judgements, evaluations, and response choices, made within the `moral domain”' with the moral domain consisting of decisions concerning issues like harm and fairness. The authors show how theories for the development of morality in humans fall into three main categories: cognitive, affective (or emotional), and social <cit.>. Cognitive theories take on a neuroscience-based approach (i.e. which parts of the brain are activated in response to moral stimuli), affective theories are based primarily in developmental psychology, and social theories reflect how moral psychology and behaviours changes from an individual to a population <cit.>. We recap the authors' original summary of the various theories to illustrate how they have evolved in complexity. Theories from developmental psychology look at how moral reasoning develops, firstly in childhood <cit.>, then beyond into adolescence and adulthood <cit.>, and now incorporate a spectrum based in cognitive theories like perspective taking and scripts <cit.> as well as attention and working memory <cit.>. The most well-known affective theory - Haidt's social intuitionist theory <cit.> - investigates how quicker emotional decisions are rationalised post-hoc into “moral” decisions (akin to System 1 of Kahneman's reflexive System 1 and more calculating System 2 ways of thinking <cit.>). Other affective theories like dual-process theory have incorporated neuroscience aspects by focusing on activity levels of parts of the brain with real-time MDM <cit.>.
MDM necessitates navigating trade-offs between different interests, such as those of individuals, groups, or society as a whole <cit.>. This makes MDM emotionally challenging since it involves choices that have significant consequences for oneself or others with positive and negative effects often getting amplified with scale <cit.>. The inclusion of AI models in emotional decisions may initially seem off-putting <cit.>. But the emotional challenge of making important decisions is the exact reason that we want non-emotional agents involved - so they can minimise human inconsistency <cit.> and provide fairer outcomes <cit.>. We are careful here to avoid referencing “automation” of human decisions <cit.>. Real-world decisions have multiple stages which cannot currently, and likely never will, be fully automated by AI models. Instead, we look to improve moral decision making by integrating AI into standard human decision-making processes in stages which are prone to human error or with data that are beyond our cognitive capacities <cit.>. So, how do we integrate moral psychology theories into our AI models? Moral philosophies such as Deontology allow us to formalise aspects of MDM but the multi-factorial development of morality in humans is hard to represent as just one moral philosophy <cit.>. Context-specific models may be enabled by singular theories such as Virtue Ethics for general clinical settings <cit.> or Bentham's Felicific Calculus for end-of-life situations <cit.> but more flexible constructions are also possible. These flexible constructions help generalise MDM systems to unseen and novel moral situations <cit.>. We outline the overarching study of these models, called artificial moral agents, in the next section.
§.§ Different constructions of artificial moral agents (AMAs)
The study of artificial moral agents (AMAs) is an interdisciplinary field between computer science, ethics, and philosophy. As such, we first clarify terminology. The terms “model” and “agent” both refer to AI systems, and we use agent to emphasise that the model has a degree of autonomy. “Morals” concern actions of virtue and “morality” reflects that these behaviours are practised habitually to become things we accept internally and externally as rules or principles <cit.> - more concisely, morals “regulate selfishness and make social life possible” <cit.>. “Ethics” is a broader term than morality which can be defined as a “rational reflection on moral behaviours” <cit.> and better emphasises contextual differences for moral behaviours <cit.>. The words are closely linked and for our purposes can be used interchangeably, but we refer solely to morality going forward since this is the terminology of AMAs. Thus, an AMA is a program that can act or make decisions in a “moral” way, with a degree of autonomy <cit.>. The autonomy of an AMA is the extent to which a human can interact with the agent to change one of its decisions <cit.>. There are three categories of AMAs distinguished by the level of moral consideration built into them and that they can act on: implicit, explicit, and full <cit.>. Implicit AMAs cannot distinguish good from bad behaviour but are constructed to enable moral behaviour, explicit agents use inbuilt ethical rules (e.g. from logical formalisms or algorithmic constraints), while full ethical agents, like humans, possess aspects of consciousness like desires, intentions, and free will. We only consider AMAs within the first two levels to ignore questions related to fair treatment of potentially sentient artificial agents, limiting the scope of interpretability requirements to human (and not machine) safety. The breadth of these categories makes it more challenging to analyse how their differences manifest in real-world agents so we turn our attention to more granular parameters of AMA construction: the moral paradigm, the scale, and the purpose of the agent.
We group moral philosophies and moral psychologies under the name moral paradigm or framework which tell us how morality is instilled into the agent. There are three broad moral paradigms we consider: top-down (TD), bottom-up (BU), or hybrid. TD approaches start from a set of principles or a moral framework (e.g. Utilitarianism), BU approaches have no moral framework and instead aim to learn morality from the environment, and hybrid approaches combine aspects of the two <cit.>. Agents, like standard AI models, can be constructed at different scales which produce different performance capabilities <cit.> - we consider standard individual agents, high capacity individual agents (vertical scaling), and multi-agent systems (horizontal scaling) <cit.>. For simplicity in horizontally-scaled systems, we assume all agents are cooperative and that there are no unpredictable agent-agent interaction effects <cit.>. The purpose of the AMA is the task that it is designed to do and can be split into uni-purpose, multi-purpose, and general-purpose. The distinction between multi-purpose and general-purpose gets blurred as agents become more capable at multiple tasks and so, to avoid case-by-case analysis of different purposes, we focus on the distinction between uni-purpose and general-purpose.
§.§ Interpretability “levels” and their importance for MDM
There is no agreed-upon definition for interpretability but it can be viewed generally as a domain-specific quality for understanding or trusting our agent <cit.>. Two seminal perspectives from the explainability/interpretability literature pose a dichotomy where “explainability” is using a black box model and then explaining it with a secondary post-hoc model, and “interpretability” is not using a black box, instead using a model that explains itself (a.k.a. a white box or transparent model) <cit.>. <cit.> adds further detail based on three different paradigms of white box modelling: algorithmic transparency, decomposability, and simulatability. Algorithmic transparency amounts to a formal understanding of the agent's learning process <cit.>, for example, better characterisation of the loss surface <cit.> or providing internal convergence properties <cit.>. Decomposability corresponds to transparency at the level of model parameters while simulatability is transparency across the whole model <cit.>. Within the context of MDM, decomposability corresponds to having an intuitive and step-by-step explanation for each major agent decision <cit.>. We believe that complete decomposability (i.e. intuitive explanations for all model parameters and output decisions) subsumes simulatability and so do not consider simulatability further. For clarity, we use “transparency” when referring to forms of white box agents, “post-hoc explainability” for explanations of black box agents, and “interpretability” when referring to both of these concepts together.
Interpretability alone is not necessarily useful for our domain of MDM but becomes so when directed towards a specific goal <cit.>. Watson lays out three challenges that interpretability faces <cit.>: clarity of what the explanation corresponds to (e.g. the model's outputs, the data generating process, different sub-objectives), error rates and consistency rates for explanations, and little consideration of the fact that explanations can change over time. Given these challenges are important for all interpretable systems, we take the aim of interpretability in MDM as providing an understanding of agent decision-making processes for appropriate error correction, error prevention, and agent behaviour optimisation. More simply, interpretability is useful here as a debugging tool for different stakeholders involved in a moral decision <cit.>. While there are clearer situations where interpretability is not needed, such as when an agent is not involved in a high-stakes decision <cit.> or does not have a significant impact on society <cit.>, we assume all decisions requiring moral consideration as potentially high-stakes and enabled by reliable human-agent collaboration <cit.>, and thus in need of some form of interpretability. This lends itself to our characterisation of interpretability as a spectrum more than a binary requirement.
While the different types of interpretability do not fall neatly into a hierarchy of explanation complexity <cit.>, this becomes easier when each type is viewed as a debugging tool. Certain types of interpretability are more challenging to program into an agent and different types are required depending on what the AMA does and the number of stakeholders involved. A loose interpretability hierarchy in terms of increasing agent construction difficulty, which we phrase as “levels”, is as follows: black box models - post-hoc explanations of black box models - algorithmic transparency - decomposability.
Below, we address exactly these different facets of interpretability, starting with asking ourselves about whether black box AMAs allow for trust and continuing with whether transparency is the key to understanding all AMAs.
§ DOES LACK OF TRANSPARENCY IN AMAS PREVENT TRUST?
In this section, we discuss the reliability of moral reasoning in AMAs without transparency, commonly seen as black box AMAs. Thus, we will use “black box" as a generic term for such systems. Since the internal agent reasoning is unavailable to us directly, we require a level of faith in our agent, which corresponds to framing interpretability as trust for MDM <cit.>. Trust can take on a range of meanings: confidence in the agent to make the correct decision, the consistency of the agent's decisions in certain situations, or whether the agent makes decisions that are right or wrong in a human-like manner. We discuss whether AMAs can learn moral principles, and then if so, whether these principles are appropriate. We conclude that we can trust AMAs without transparency if they adopt the benefits of both BU and TD agents.
§.§ Can we tell if black box AMAs have learned any moral principles?
We define the “environment” of an AMA as the potential hypothesis space spanned by the data and the AMA's learning process, which consists of its internal model and training regime. BU AMAs are predicated on the existence of functional morality <cit.> - that agents are able to learn morals from their environment. With a black box BU AMA, how can we be sure that our agent has learned some form of morality? <cit.> defined a Moral Turing Test which states that “if two systems are input-output equivalent, they have the same moral status”, with the subsequent debate following that of the Chinese Room Argument <cit.>. Effectively, any machine capable of memorising a sufficiently diverse and framework-like or human-like set of input-output moral relationships would pass the test, even though we would not be able to determine if it is intrinsically “moral”.
There is no universal definition for morality in humans beyond notions of obligation <cit.>. However, psychological theories for the development of morality within humans <cit.> and the “teachability” of moral values <cit.> point to the development of morality as a process <cit.>, if an imprecise one. It is from the process of learning via experience in the world that we as humans feel the obligation to be a moral agent <cit.>. The same argument has been made for AI models <cit.> through development of value-based agents which learn human values, a process which has been successfully implemented in the multi-valued action reasoning system <cit.>. The lack of a universal definition means that memorisation could suffice as a type of morality, particularly if we consider memorisation a form of learning <cit.>. However, it is insufficient for trusting our agent because there are no guarantees that the agent will generalise outside the training set. This is an issue that can be explained more clearly by the difference between form and meaning as expressed initially by Badea and Artus <cit.> and subsequently formalised by Bender and colleagues <cit.>. They state that if an agent is trained only on form (e.g. pixels, words, etc.) without any input for communicative intent behind the form (i.e. context-dependent meaning, which exists at different scopes of worldly experience <cit.>), it cannot truly intuit any meaning of morality, and certainly no definition that extends to novel contexts <cit.>.
This comes down to the “The Interpretation Problem” <cit.>, which refers to the issue of endless potential interpretations for any symbolic representation given to an AMA. This makes it impossible to guarantee a fully accurate transmission of meaning regardless of the medium chosen.
The question then arises: would sufficient memorisation of triples with a structure of (input, output, communicative intent) suffice for “learning morality”? The ever-changing nature of communicative intent across cultures and over time requires a potentially infinite and unobtainable set of such data <cit.> which prevents learning a cohesive set of moral principles <cit.> unless the context is clearly defined a priori and an acceptable level of error with an “action limit” defined <cit.>. To avoid the difficulty of defining a complete and ethical training dataset for an imprecise objective <cit.>, we can instead use black box TD AMAs to approach the problem from a different angle: ensure the agent has a set of pre-defined moral principles rather than relying on the data and where it has come from. TD AMAs use a specific moral framework (or set of frameworks) which allow us to compare the TD agent's input-output moral relationships with the most likely output from that same moral framework. While the TD construction is more rigid than the BU construction, it enables greater trust in the AMA's learned principles because we have a form of ground truth that is less variable than individual agent comparisons permitted by the Moral Turing Test for BU constructions. Hybrid settings require additional domain knowledge but are even better since they can find the right “compromise between being too flexible and too strict” <cit.>. From this, we deduce that morality can be learned by agents once it has some initial framework for a given domain, and further capacity can be enabled by data with communicative intents.
§.§ Can we guarantee that black box AMAs have learned appropriate moral principles?
Let us assume that our AMA is able to learn moral principles from its environment. We are now presented with a different problem: how can we be sure that this environment reflects our desired human values and that the agent is learning them? Current inequalities in our world have been shaped (and are still influenced) by cultural remnants of historically unequal power dynamics <cit.>. An overt example is historical medical exploitation of underrepresented communities which has led to a lack of diversity in large-scale genomic data, and been a major obstacle to generalisable genomic insights across populations <cit.>. Similarly damaging, but more subtle, is the inadequate treatment of sensitive variables (e.g. age, sex, race, etc.) which can lead to models shortcutting to high predictive accuracy based on harmful stereotypes <cit.> - textbook examples include racist explanations in criminal recidivism prediction <cit.> and proxy variables that consolidate racial disparities in population health models for medical support prioritisation <cit.>. Beyond error correction, models can also serve as ways to improve existing disparities - for example, enabling smoother socioeconomic mobility via smarter intergenerational wealth allocation <cit.>. Given that these inequalities persist in both data collection and the data itself <cit.>, potentially in implicit ways, any AMA reliant on its environment has the potential to propagate or even amplify these inequalities. As computer scientists, we have the opportunity to build algorithmic mechanisms into our AMAs to counteract and help remedy these types of bias <cit.>. If we take this pro-active approach to equality, it becomes important to understand how biases exist in our environment, how these get encoded in our data, and how the AMA can use them inappropriately in its reasoning or explanation mechanisms <cit.>. This is more important in BU systems due to their higher capacity for learning morality from the environment <cit.>.
For systemic inequalities that affect marginalised communities, minimising predictive disparities over different demographics is a proxy for our AMA “learning” appropriate moral principles. This line of work has been explored extensively in the fairness and sequential decision-making literature <cit.> and we briefly review important instances. Ensemble models have proven effective, with different weighting schemes used per classifier <cit.> and for “unfairly classified” samples <cit.>. Coston and colleagues <cit.> found that characterising properties over the Rashomon set - an ensemble of models that all perform highly which, in this case, is the set of most fair models - gave them algorithmic bounds for the range of disparities with applications to recidivism risk prediction and consumer lending. Beyond outcome evaluation, <cit.> laid out metrics for evaluating post-hoc explanations: fidelity, stability, and consistency, aligning with the main conceptual challenges for IML <cit.>. They also evaluated the practical explanation quality of sparsity, which is a proxy for how “understandable” an explanation is, with higher sparsity allowing for fewer features and thus easier understanding. These ideas of human-intuitive explanations are expanded on in Section <ref>. Well-aligned explanations are useful because they can reduce overreliance on AI systems and make human-AI interaction more coordinated <cit.>.
Taking a more engineering-based approach, <cit.> defined meta-qualities for desired moral standards to guide BU AMAs under very constrained applications. Accepting that uncovering the reasoning of individual agents is challenging, the authors analysed multi-agent systems via post-hoc explainability to derive bounds for moral behaviour. These multi-agent behaviours can also be viewed from a TD lens via consideration of TD agents and stakeholders in a complete sociotechnical system <cit.> or collective actions of TD agent systems called moral communities <cit.>. The bounds in these cases are based on average population behaviour and act like probabilistic alternatives to the Moral Turing Test. Stronger fairness tests, such as those based on localised program execution paths <cit.>, would be needed for individual instances of discrimination. A more intuitive way to view these “moral behaviours” is as highly likely probabilistic constraints - similar to the way we would view the chance of a bridge breaking under stress - they should hold under all reasonable perturbations within a given context.
The limitation of TD AMAs is that it is challenging to select the appropriate moral paradigm for a situation, made more difficult by the fact that there are several potentially appropriate moral decisions (i.e. input-output relation cardinality of moral decisions is one-to-many) based on varying sequential decision trajectories. Reinforcement learning (RL) has become the de facto toolkit for sequential decision-making since it can comprehensively explore a given decision space <cit.> - in moral agent terms, this amounts to BU flexibility within wide-ranging but well-defined TD constraints, or the hybrid moral paradigm. For robustness to this decision trajectory variation, <cit.> developed an AMA based on RL that uses all previous decision states to make its final decision. Notably, this AMA also circumvented the issue of imprecise objective functions by decoupling the moral compliance objectives from the task objectives. This decoupling was also recommended as one of the main ways to circumvent the Interpretation Problem in <cit.> and mirrors their distinction between moral mistakes and amoral mistakes. This avoids issues of ambiguous fidelity with respect to explanations <cit.>. In a less constrained decision-space than the setting in <cit.>, trajectories can get more unwieldy. Subsequent work by <cit.> applies additional constraints to the agent (instead of its environment) which can analyse negative side effects of prototype decision sequences with human input and then replan an appropriate sequence which minimises these side effects. RL can also modify the environment to make it more appropriate. Using a reward that includes both the performance and moral compliance objectives, <cit.> refine the convex hull of a Markov decision process (the stochastic process underlying RL) to get moral bounds on the decision space, which also mirrors the second suggestion for circumventing the Interpretation Problem.
Having begun this section with a discussion of fairness requirements, we covered how they can be enabled through metrics on the black box outputs, surveyed moral explanations for black boxes, and finished by reviewing high capacity black boxes for sequential decision-making. Regardless of the chosen interpretable black box paradigm, AMAs can be trusted without transparency if their level of capacity is well-tuned to their purpose. However, steps should be taken to enable debugging where applicable. Fairness and moral compliance objectives should be distinct from performance objectives <cit.>, and ideally, post-hoc explanations should be used to facilitate easier debugging in case of faulty agents. As a general rule, explanations or decision trajectories should be stress-tested in different scenarios and their consistency analysed <cit.>. Thus, for additional safety, we recommend that the MLI for trustworthy black box models be consistent explanations or decision trajectories over important subgroups of the populations in the dataset.
§ DOES TRANSPARENCY HELP US UNDERSTAND AMAS?
In this section, we frame interpretability as transparency for internal model reasoning, looking at two forms of transparency: algorithmic transparency and decomposability <cit.>. With that in mind, we show below that the utility of transparency varies in magnitude and is context-dependent. We propose algorithmic transparency as single-agent rules composed into multi-agent rules. Furthermore, we argue that the importance of transparency rises with the causal power of the agent, and depends on the relevant stakeholders.
In real-world settings, AMAs require higher capacity to interact and respond to their environment <cit.>. For this, we assume that the moral paradigm (BU/TD/hybrid) of the agent is sufficiently flexible to allow adaptation to new environments. With that assumption, the most important AMA construction parameters for analysing deployed AMAs become the scale and purpose of the agent. We note these are both somewhat nebulous terms that incorporate aspects outside AMA construction: `scale' encompasses the number of model parameters, the capacity to act in the world, and the number of other agents which interact with it (for instance its users) and `purpose' can be defined a priori by developers via its task objectives and moral paradigm but is ultimately at the hands of the user. Additionally, the purpose of an AMA is dynamic and can change with regards to performance capabilities achieved at scale. For added clarity when considering both construction parameters and transparency terms in the following sections, we centre the discussion of algorithmic transparency on AMA construction since specifics of the explanation form are less important, and we centre decomposability on AMA users since the explanation form is paramount to its utility.
§.§ The utility of algorithmic transparency depends on AMA construction
The Artus-Badea law states that an increase in scale gives an AMA more causal power and so more exposure to risk <cit.>. Thus, if an AMA does not have a significant effect on the world around it, there is less of a safety requirement for transparency to understand the agent's reasoning <cit.>. Therefore, in such trivial cases, one could get away with not implementing any explicit transparency. But moral “significance” is not always obvious because of collective agent behaviour from horizontally-scaled systems. For example, say you design an agent for your own use to analyse the sentiment of current news (whether the news is positive or negative) so you can prioritise more positive news stories. This only involves you and is thus unlikely to have a direct and significant impact on society, and would be a prime candidate for potential safety without transparency. However, if you decide to scale the model up or make it into a commercial product so that other people use the same agent, then moral questions arise because its effects on society are compounded and individuals might experience different uni-agent effects. For example, we have the development of echo chambers within social media websites, the reinforcement effect this has on sub-populations (perpetuating their existing opinions), and then the combined polarising effect on the entire population (the radicalisation of opposite sides) <cit.>.
Now onto our proposed MLI for this case. As discussed in Section <ref>, multi-agent system behaviour used as post-hoc explainability can help give us guarantees on general agent morality for specific tasks. However, without some level of individual agent transparency, we have no guarantees on agent subgroup behaviour below a certain (unknown) subgroup size, and consequently a deeper understanding of the overall population becomes intractable. Internal ensembling over outputs is a practical way to get probabilistic limits to behaviour <cit.> for black box singular agents and to mitigate this population-subgroup mismatch. We have discussed algorithmic fairness over ensembles in function space <cit.> (i.e. potentially vastly different models) but <cit.> propose a simpler alternative within a reinforcement learning framework that ensembles over two models predicting opposite things. They compare the output of rational and irrational teacher models to quantify the difference between them and thus produce a better “rationality direction” for future decision trajectories. Accordingly, we recommend the MLI for horizontally-scaled AMAs to be a nested combination of black box interpretability. Thus our proposal is thinking of algorithmic transparency as logical rules <cit.> (or as probabilistic limits) <cit.> intended for uni-purpose individual agents and composed into multi-agent rules (or limits). In other words, this means bounding multi-agent systems by bounding uni-agent systems with algorithmic transparency. As the agents themselves scale vertically (i.e. their number of parameters increases), the range of behaviours each agent can perform increases and, when combined in multi-agent systems, results in complex collective behaviour <cit.>. To limit the complexity of our discussion, we do not talk about this combined vertical and horizontal scaling case further, instead moving to focusing on vertically-scaled agents.
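To make the ensembling idea concrete, the following minimal sketch (Python, with toy numbers and hypothetical function names; it is a schematic illustration rather than the cited method's implementation) contrasts the output distributions of a rational and an irrational teacher and uses their log-ratio as a "rationality direction" to re-rank a student agent's actions.

import numpy as np

def rationality_direction(p_rational, p_irrational, eps=1e-9):
    # Log-ratio between the two teachers' action distributions: positive entries
    # mark actions the rational teacher prefers relative to the irrational one.
    p_r = np.asarray(p_rational, dtype=float)
    p_i = np.asarray(p_irrational, dtype=float)
    return np.log(p_r + eps) - np.log(p_i + eps)

def rerank_actions(p_student, direction, weight=0.5):
    # Bias the student's action scores along the rationality direction and renormalise.
    scores = np.log(np.asarray(p_student, dtype=float) + 1e-9) + weight * direction
    exp_scores = np.exp(scores - scores.max())
    return exp_scores / exp_scores.sum()

# Toy example with three candidate actions (all distributions are made up).
p_rational = [0.7, 0.2, 0.1]
p_irrational = [0.1, 0.3, 0.6]
p_student = [0.4, 0.4, 0.2]
print(rerank_actions(p_student, rationality_direction(p_rational, p_irrational)))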
Vertical scaling of models is performed through increasing three parameters: training data size (with the assumption that quality stays the same), compute power, and model capacity <cit.>. Although algorithmic transparency has not been studied for vertically scaled AMAs directly <cit.>, we can use large language models (LLMs) as an approximation given their ongoing integration into MDM settings like medicine <cit.>. LLMs such as GPT-3 have demonstrated linear scaling laws for prediction, that is: as the three parameters increase, the performance of the model increases linearly <cit.>. However, <cit.> showed that although these scaling laws hold at a general level across multiple tasks, performance on specific tasks can change abruptly at arbitrary scaling points of the three parameters, raising questions on what the models are actually learning. By reverse engineering neural networks, we are beginning to mathematically understand these flows of information <cit.> and discontinuous jumps to qualitatively better performance <cit.> during the learning process. However, these results are currently limited to toy models (one layer multi-layer perceptrons and attention modules) of much smaller size than those in deployment and there is thus uncertainty about their generalisation to real-world models with multiple components <cit.>. Regardless, the rapid uptake and potential of these models necessitates guarantees on MDM to lessen harmful effects on end-users <cit.>. Where these guarantees are not currently expressible as a proof or formula, we propose expressing them as probabilistic guarantees or qualitative explanations that reveal opaque agent reasoning. As such, we continue the discussion of vertically-scaled agents via decomposability in large language models in the following section.
§.§ The utility of decomposability depends on the stakeholder
What makes an explanation of an agent's decision-making intuitive? As has been a common theme throughout this paper, there is no single answer since intuitive explanations are dependent on the type of stakeholders and their level of interaction with the agent. Suresh et al. <cit.> ascribe two essential facets to stakeholders in interpretability: their level of expertise (knowledge within a context), and their goals in the long- and short-term. While long-term goals (model understanding and trust) are common across all stakeholders, short-term goals can differ. Given our focus on MDM and user accessibility, we are focused on the (O1), (O7), and (T1-4) short-term goals (as defined in <cit.>) which revolve around model debugging, improvement, and feature importance <cit.>.
As part of pursuing these goals, we segment our stakeholders into two functional categories - developers and end users - for a clear and concrete analysis of stakeholder concerns in interpretable MDM. Intuitive explanations of agent reasoning steps are important for developers and users alike so that both can easily modify the AMA to rectify and prevent unintended behaviour (i.e. error reduction) while also fine-tuning the agent for more desirable behaviour (i.e. agent optimisation). To better align the actions of high-capacity AMAs to human values, we have touched on algorithmic mechanisms of counteracting bias and designing modular objective functions (moral and performance), but these interventions are restricted to developers. To make alignment accessible to the end users, we can instead express these algorithmic changes directly through text or other intuitive modality forms for humans (e.g. audio, video) <cit.>. One advantage is that this can help level the commercial and regulatory playing field between developers and users. <cit.> describes how the black box nature of models allows companies and their developers to get away with some faulty individual predictions if their average behaviour is sufficient, which the above proposal mitigates. We propose integrating decomposability into AMAs, which would mean that explanations would be presented in a way that allows users to understand the AMA's reasoning, giving them more control and customisation over the agent, and decreasing the chance that developers would be able to exploit them through asymmetric information <cit.>.
In the same vein, we believe that the success of publicly-deployed LLMs like ChatGPT is largely due to their use of text as an interface for users. The input text for LLMs is called a “prompt” - the ease of prompt design and its interpretations as both probabilistic and textual inputs have inspired new ways of formalising LLMs at multiple interpretability scales <cit.>. A key paper from this line of research is Chain-of-Thought (CoT) prompting <cit.> which involves setting up the prompt with sequential reasoning steps (a “rationale chain”) to lead the LLM to give its response using a similar rationale chain. Extensions to CoT have focused on automating rationale chain generation via pre-defined prompt phrases to generate multiple rationale chains <cit.> and smart pruning of less likely rationale chains <cit.>. However, CoT reasoning is an emergent phenomenon of large models (>100 billion parameters). For medium-sized LLMs (10-100 billion parameters), Anthropic have found that they can learn moral concepts related to harm like stereotyping and bias when given clear instructions <cit.>.
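As a minimal illustration of what CoT prompting looks like in practice, the sketch below builds a prompt containing one worked exemplar with an explicit rationale chain; the exemplar question, the template wording, and the query_llm callable are illustrative assumptions rather than any particular system's API.

# Chain-of-Thought prompting sketch: the exemplar answer walks through an explicit
# rationale chain, and the model is nudged to answer the new question the same way.
COT_PROMPT = """Q: A clinic has 4 doctors and each doctor sees 6 patients a day. How many patients are seen per day?
A: Each doctor sees 6 patients. There are 4 doctors, so 4 * 6 = 24 patients are seen per day. The answer is 24.

Q: {question}
A: Let's think step by step."""

def chain_of_thought(question: str, query_llm) -> str:
    # `query_llm` is a placeholder for any callable mapping a prompt string to generated text.
    return query_llm(COT_PROMPT.format(question=question))

# Usage: answer = chain_of_thought("A ward has 3 nurses and each covers 2 rooms...", my_llm)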
Going one step further, <cit.> developed a CoT extension which determines when it is appropriate to break moral rules with a provided rationale, giving textual explanations for both developers and users. Currently, this work is limited to three lower-stakes situations within the cultural context of the USA but the experiments are initial proof that medium-sized (and larger) LLMs are capable of learning and reasoning about moral obligations <cit.>. This suggests that larger LLMs can also be fine-tuned for specific moral consideration by all stakeholders with appropriate datasets.
The arguments above show once again the value of our proposal of using decomposability to enhance stakeholder accessibility and enable them to do more. Having explained the importance of decomposability, we also describe its fundamental limitation: oversimplification. Humans are complex adaptive systems that exist within the dynamics of societal interaction <cit.>. This complexity means digitisation (or conversion into “form” <cit.>) of context-dependent human concepts like trust, understanding, or morality is just an approximation of the real thing <cit.>. Explaining these digitised concepts, whether internally during processing or post-hoc, will only also be further approximations because explanations are simplified representations of the original model <cit.>. Inputs to the agent need to contain causal information for outputs, otherwise explanations of morality cannot translate from humans to machines and back without a loss of information, making them brittle and particularly harmful to long-term trust <cit.>. This may seem like a pessimistic view of interpretability, but it draws attention to the fact that our internal representations of moral principles and their subsequent preservation through AMA processing are the keys to ensuring useful explanations. Referring back to our discussion of form and meaning in Section <ref>, useful model decomposition will require adaptive multi-modal agents grounded in experience of the world <cit.>.
§ CONCLUSION
On the one hand, as we argued earlier, artificial moral agents (AMAs) can be created without interpretability and be given sufficient trust for moral reasoning in some narrow and well-defined tasks. The Interpretation Problem <cit.> means that we would still never get perfect guarantees about moral behaviour but we can get around this by building value-based agents that can be tested for trustworthiness <cit.>. However, relying on “input-output” tests, like the Moral Turing Test, limits this trust because they do not evaluate for “intrinsic” morality or the possibility of several acceptable moral outputs to the same inputs. To improve this and make individual agents more reliable, we can use principles based on collective agent behaviour to guide them and address this issue of trust in black box bottom-up agents with opaque internal mechanisms <cit.>. Additionally, top-down black box AMAs allow us to define prior moral constraints and carefully construct objective functions in AMAs so their moral reasoning is more consistent and predictable <cit.>.
On the other hand, for general-purpose AMAs, we need stronger levels of interpretability requirements. While we have seen large-scale studies that show it is possible to reliably obtain good general AMA behaviour, a poor understanding of the inner mechanics of these agents has still resulted in abrupt scaling issues and unintended individual agent behaviour <cit.>. For optimal levels of trust in and between agent-developer-user systems, explanations at different levels of abstraction, while imperfect <cit.>, are imperative to help these AMAs reach safe deployment. Importantly, for future work in this area, better quantification of moral compliance can decisively aid the understanding of interpretability requirements in different contexts, while neurosymbolic methods can help with constructing top-down and hybrid AMAs <cit.> and causality across the three rungs of Pearl's ladder <cit.> with more comprehensive understanding of moral decision-making <cit.>.
In conclusion, while trustworthy AMAs can be created without any level of transparency, such opacity makes rigorous assessment and risk mitigation of AMAs much more challenging. For the moral paradigm of an AMA, we recommend top-down or hybrid agents, advocating against the use of bottom-up agents due to their higher risk of learning improper moral principles when deployed in novel environments. For AMAs with variable purposes, we believe algorithmic behavioural guarantees are the MLI for uni-purpose AMAs, with additional during-processing explanations, or task-specific decomposability, being the MLI for general-purpose AMAs. Additionally, for both of these purposes, moral compliance objectives should be as distinct from performance objectives as possible for easier “quantification” <cit.>. For scale, both horizontally- and vertically-scaled systems require strong algorithmic behavioural guarantees, and for those with multiple stakeholders, intuitive explanations in both algorithmic and textual forms, or stakeholder-specific decomposability. The general rule that we propose builds upon the Artus-Badea law <cit.>, and is one of common sense: the higher the capacity of an AMA, the more potential it has for a wider user base, the more safety and interactivity mechanisms are needed, and so a “higher” Minimum Level of Interpretability is required.
§ ACKNOWLEDGEMENTS
A.V. is supported by a UK Research and Innovation (UKRI) Centre for Doctoral Training in AI for Healthcare PhD studentship (EPSRC, EP/S023283/1).
|
http://arxiv.org/abs/2307.02849v1
|
20230706083214
|
NatLogAttack: A Framework for Attacking Natural Language Inference Models with Natural Logic
|
[
"Zi'ou Zheng",
"Xiaodan Zhu"
] |
cs.CL
|
[
"cs.CL"
] |
NatLogAttack: A Framework for Attacking Natural Language Inference Models with Natural Logic
Zi'ou Zheng, Xiaodan Zhu
August 1, 2023
=============================================================================================
Reasoning has been a central topic in artificial intelligence from the beginning.
The recent progress made on distributed representation and neural networks continues to improve the state-of-the-art performance of natural language inference. However, it remains an open question whether the models perform real reasoning to reach their conclusions or rely on spurious correlations.
Adversarial attacks have proven to be an important tool to help evaluate the Achilles' heel of the victim models. In this study, we explore the fundamental problem of developing attack models based on logic formalism. We propose to perform systematic attacks centring around natural logic, a classical logic formalism that is traceable back to Aristotle's syllogism and has been closely developed for natural language inference. The proposed framework renders both label-preserving and label-flipping attacks.
We show that compared to the existing attack models, NatLogAttack generates better adversarial examples with fewer visits to the victim models.
The victim models are found to be more vulnerable under the label-flipping setting.
NatLogAttack provides a tool to probe the existing and future NLI models' capacity from a key viewpoint and
we hope more logic-based attacks will be further explored for understanding the desired property of reasoning. [The code of NatLogAttack is available at <https://github.com/orianna-zzo/NatLogAttack>.]
§ INTRODUCTION
While deep neural networks have achieved the state-of-the-art performance on a wide range of tasks, the models are often vulnerable and easily deceived by imposing perturbations to the original input <cit.>, which seriously hurts the accountability of the systems. In depth, this pertains to model robustness, capacity, and the development of models with more advanced intelligence.
Natural language inference (NLI), also known as textual entailment <cit.>, is a fundamental problem that models the inferential relationships between a premise and hypothesis sentence. The models built on distributed representation have significantly improved the performance on different benchmarks <cit.>. However, it is still highly desirable to conduct research to probe if the models possess the desired reasoning ability rather than rely on spurious correlation to reach their conclusions <cit.>.
Adversarial attacks have proven to be an important tool to reveal the Achilles' heel of victim models. Specifically for natural language inference, the logic relations are easily broken if an attack model does not properly generate the adversarial examples following the logic relations and related semantics. Therefore, unlike other textual attack tasks such as those relying on semantic similarity and relatedness, it is more challenging to create effective attacks here.
In this study, we explore the basic problem of developing adversarial attacks based on logic formalism, with the aim to probe victim models for the desired reasoning capability.
Specifically, we propose NatLogAttack, in which the adversarial attacks are generated based on natural logic <cit.>, a classical logic formalism with a long history that has been closely developed with natural language inference. From a general perspective, natural language inference provides an appropriate setup for probing the development of distributed representation and the models based on that. A robust solution for the task requires manipulation of discrete operations and adversarial attacks can help understand whether and how the required symbols and inference steps emerge from the data and the learned distributed representation. Our work has also been inspired by recent research on exploring the complementary strengths of neural networks and symbolic models <cit.>.
Our research contributes to the development of logic-based adversarial attacks for natural language understanding. Specifically, we propose a novel attack framework, NatLogAttack, based on natural logic for natural language inference. Our experiments with both human and automatic evaluation show that the proposed model outperforms the state-of-the-art attack methods. Compared to the existing attack models, NatLogAttack generates better adversarial examples with fewer visits to the victim models. In addition to the commonly used attack setting where the labels of generated examples remain the same as the original pairs, we also propose to construct label-flipping attacks. The victim models are found to be more vulnerable in this setup and
NatLogAttack succeeds in deceiving them with much smaller
numbers of queries.
NatLogAttack provides a systematic approach to probing the existing and future NLI models' capacity from a basic viewpoint that has a traceable history, by combining it with the recent development of attacking models. The proposed framework is constrained by the natural logic formalism and we hope more logic-based attacks will be
further explored for understanding the desired property of natural language reasoning.
§ RELATED WORK
Adversarial Attacks in NLP.
White-box attacks leverage the architecture and parameters of victim models to craft adversarial examples <cit.>. Black-box models, however, have no such knowledge. Pioneering blind models <cit.>, for example, create adversarial examples by adding distracting sentences to the input. More recently, score-based (e.g., <cit.>) and decision-based attack models <cit.> also query the prediction scores or the final decisions of victim models.
In terms of perturbation granularities, character-level attacks modify characters <cit.> while word-level models rely on word substitutions that can be performed based on word embeddings <cit.>, language models <cit.>, or even external knowledge bases <cit.>.
Sentence-level attack models add perturbation to an entire sentence by performing paraphrasing <cit.> or attaching distracting sentences <cit.>.
<cit.> generated natural language inference examples based on entailment label composition functions with the help of lexical knowledge. <cit.> utilized a set of first-order-logic constraints to measure the degree of rule violation for natural language inference. The efforts utilized the generated examples for data augmentation. The focus is not on adversarial attack and the adversarial examples' quality, e.g., the attack validity, is not evaluated.
Natural Logic.
Natural logic has a long history and has been closely developed with natural language inference <cit.>.
Recently, some efforts have started to consider monotonicity in attacks, including creating test
sets to understand NLI models' behaviour <cit.>.
The existing work, however, has not performed systematic attacks based on natural logic. The core idea of monotonicity (e.g., downward monotone) and projection has not been systematically considered. The models have not been combined with the state-of-the-art adversarial attack framework and search strategies for the general purpose of adversarial attacks. For example, <cit.> and <cit.> generate adversarial examples from a small vocabulary and pre-designed sentence structures. The effort of <cit.> is limited by only considering one-edit distance between a premise and hypothesis. We aim to explore principled approaches to constructing perturbations based on natural logic, and the control of the quality of attack generation can leverage the continuing advancement of language models. The proposed attack settings, along with the breakdown of attack categories, help reveal the properties of victim models in both label-preserving and label-flipping attacks.
§ NATLOGATTACK: A NATURAL-LOGIC-BASED ATTACK FRAMEWORK
This section introduces NatLogAttack, a systematic adversarial attack framework centring around natural logic. The overview of NatLogAttack's generation and attack process is depicted in Figure <ref>. Below we will introduce the background, attack principles, setups, and each component of the framework.
§.§ Background
The study of natural logic
can be traced back to Aristotle's syllogisms. Rather than performing deduction over an
abstract logical form, natural logic models inference in natural language by operating on the structure or surface form of language <cit.>. It allows for a wide range of intuitive inferences in a conceptually clean way that we use daily and provides a good framework for attacking inference models—we doubt that a victim model vulnerable to such natural attacks indeed performs reliable reasoning. Our work uses the natural logic variant proposed by <cit.> and <cit.>, which extends the prior formalism to model the entailment relations between two spans of texts with seven relations B={ ≡, ⊏, ⊐, ∧, |, ⌣, # }, representing equivalence, forward entailment, reverse entailment, negation, alternation, cover, and independence, respectively. Through projection based on monotonicity in context, local lexical-level entailment relations between a premise and hypothesis can be aggregated to determine the entailment relations at the sentence-pair level.
For completeness of this paper, we highlight the key building blocks in Appendix <ref>.
§.§ Setups and Principles
Formally, given a premise sentence P, its n-word hypothesis H=(h_1,h_2,⋯, h_n), and the ground-truth natural language inference label y_g = 𝕃(P, H), NatLogAttack generates a hypothesis H^* that satisfies a desired target label y^*_g = 𝕃(P, H^*).
The attacking pair ⟨ P, H^*⟩ is generated only if the original pair ⟨ P, H ⟩ is correctly classified by a victim model 𝔽.
Accordingly, we denote y = 𝔽(P, H) as the natural language inference label predicated by the victim model 𝔽 for the original pair and denote y^* = 𝔽(P, H^*) as the predicted label for the attacking pair.
We propose to perform the attacks in two setups: the label-preserving and label-flipping attacks. The attack principles and setups are summarized in Table <ref>. A label-preserving attack generates adversarial examples with y^*_g = y_g, aiming to test the robustness of victim models on different inputs that have the same label—it attacks victim models under perturbations that do not change the inferential labels of the original premise-hypothesis pair.
The label-flipping attacks, on the other hand, aim at attacking victim models with perturbations that are key to differentiating two different logical relations where y^*_g ≠ y_g. Note that natural logic can be naturally used to generate label-flipping attacks, and our work here is among the first to explore this type of attacks for natural language understanding, although label-flipping attacks have been explored in image attacks <cit.>.
The third column of the table (strategy) lists the logic conditions between the generated hypothesis H^* and the original hypothesis H that satisfy the desired properties of preserving or flipping labels to obtain the target label y_g^*. Consider the second row of the label-preserving setup (C → C), in which NatLogAttack generates a hypothesis H^* with y_g^*=y_g= contradiction. This is achieved by ensuring that the natural language inference label between H^* and H obeys entailment, i.e., H^* entails H. [We use the same entailment notation as in <cit.>.] This guarantees that the sentence pair ⟨ P,H^*⟩ has a contradiction relation. In the natural logic formalism <cit.>, this is implemented with H ≡ H^* or H ⊐ H^*. Consider another example. In the last row of the label-flipping setup, NatLogAttack generates a new hypothesis H^* with y_g^* = entailment from a contradiction pair, implemented by following the natural logic relations H ≡ H^* or H ⊐ H^*.
We constrain NatLogAttack from generating neutral attack examples (C → N) using the premise-hypothesis pairs with y_g=contradiction, because two contradictory sentences may refer to irrelevant events from which a neutral pair cannot be reliably generated. [For example, the SNLI <cit.> and MNLI datasets <cit.> were annotated under a guideline with a specific assumption of treating potentially irrelevant events as contradiction.]
NatLogAttack is also constrained from generating contradiction and entailment attacks (N → C or N → E) from neutral pairs, as there are many ways for two sentences to be neutral, including reverse entailment and diverse semantic relations.
Such contradiction and entailment pairs cannot be reliably generated.
§.§ Generation and Quality Control
§.§.§ Preparing Natural Logic Relations
As shown in the bottom-left part of Figure <ref>, given a premise-hypothesis pair ⟨ P, H ⟩, the ground-truth label y_g, and the target label y^*_g, NatLogAttack retrieves natural logic relations from the last column of Table <ref>. Consider label-preserving attacks and take y_g^*=y_g=entailment as an example. From the last column in the first row of the label-preserving setup, NatLogAttack finds and pushes the relations ≡ and ⊏ into the natural-logic relations set, R^*_g = {≡,⊏}, where R^*_g includes the natural-logic relations between H and H^* and will be used to generate the latter. Note that r^*_g ∈R^*_g denotes one of the relations in R^*_g.
We first copy H to H^(1), denoted as H^(1) ← H for the convenience of notation, because the generation-and-attack process may be performed for multiple rounds if one round of attacks fails. Then we use the notation H^(1) and H^(2) to refer to the original and a generated hypothesis sentence in each round. Note that in the above example, as will be discussed below, within each round of generation, NatLogAttack will provide a set of attacks to perform multiple (iterative) attacks.
§.§.§ Candidate Generation
Our candidate attack generation process is described in
Algorithm <ref>. Taking H^(1) and R^*_g as the input, the algorithm aims to generate a set of candidate hypotheses ℋ={H^(2)_1,⋯,H^(2)_m}
with each pair ⟨ H^(1), H^(2)_i ⟩ following a target relation r^*_g ∈R^*_g where H^(2)_i ∈ℋ.
For each token h^(1)_i ∈ H^(1) and r^*_g ∈R^*_g,
the algorithm obtains the monotonicity and relation projection information using the Stanford natlog parser[https://stanfordnlp.github.io/CoreNLP/natlog.html.] (line 2).
Specifically for h_i^(1), suppose the parser outputs an ordered relation list: L_i = ⟨≡, ⊐, ⊏, ∧, |, ⌣, #⟩; this returned list actually encodes the contextualized projection information, which we leverage to substitute h_i^(1) with h_i' to generate H^(2)_i that satisfies relation r^*_g.
In natural logic, when determining the sentence-level logical relation between a premise and hypothesis sentence, projection is used to map local lexicon-level logical relations to sentence-level relations by considering the context and monotonicity. However, in adversarial attacks, NatLogAttack needs to take the following reverse action:
R_local = L_B[idx^L_i(r^*_g)]
where r^*_g is the target sentence-level natural logic relation (in our above example, suppose r^*_g=`⊏'). Then idx^L_i(.) returns the index of that relation in L_i. For `⊏', the index is 3. Then the index is used to find the lexicon-level (local) relation from the predefined ordered list L_B=⟨ ≡, ⊏, ⊐, ∧, |, ⌣, # ⟩. In the above example we will get L_B[3]=`⊐'. Again, Equation <ref> presents a reverse process of the regular projection process in natural logic.
In other words, the ordered relation list provided by the natlog parser for each word token, when used together with the predefined (ordered) relation list L_B, specifies a mapping between global (sentence-level) natural-logic relations and local (lexicon-level) relations.
Note also that the output R_local is a set, because L_i is an ordered list that may contain the same relation multiple times.
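A minimal sketch of this reverse lookup is given below (Python, with the relation names spelled out; the parser interface itself is omitted): the positions at which the target sentence-level relation appears in the parser's contextual list L_i are mapped back to lexical relations through the fixed order L_B.

# Fixed order of lexical relations assumed by the natlog parser's output lists.
L_B = ["equivalence", "forward_entailment", "reverse_entailment",
       "negation", "alternation", "cover", "independence"]

def reverse_projection(L_i, target_relation):
    # L_i[k] is the sentence-level relation obtained when the lexical relation is L_B[k],
    # so collecting every position where the target appears gives the usable lexical relations.
    return {L_B[k] for k, rel in enumerate(L_i) if rel == target_relation}

# Example consistent with the text: for a token in a downward-monotone position the parser
# may return L_i with forward and reverse entailment swapped, so a sentence-level forward
# entailment must be realised by a lexical reverse entailment (e.g., a hyponym substitution).
L_i = ["equivalence", "reverse_entailment", "forward_entailment",
       "negation", "alternation", "cover", "independence"]
print(reverse_projection(L_i, "forward_entailment"))  # {'reverse_entailment'}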
Basic Word Perturbation.
For a word token h_i, we replace it with a word h_i' such that the local relation ⟨ h_i, h_i' ⟩ is r_local ∈ R_local.
NatLogAttack extracts natural-logic relation knowledge from knowledge bases to obtain word candidates for the desired relation types.
The word perturbation of NatLogAttack focuses on five of the relations in Table <ref>.
Since cover (⌣) is very rare and independence (#) is ambiguous, NatLogAttack is constrained to focus only on utilizing the remaining five relations: { ≡, ⊏, ⊐, ∧, | }.
We attack the victim models using the most basic semantic relations explicitly expressed in knowledge bases and knowledge implicitly embedded in large pretrained language models. Specifically, we use WordNet <cit.> to extract the desired lexical relations. For a word token h_i, we search for candidate words h_i' that have one of the following relations with h_i. Synonyms are used as h_i' to substitute h_i for constructing H^(2) with an equivalence relation to H^(1) (line 6), hypernyms are used for forward entailment (line 10), and hyponyms for reverse entailment (line 14).
Due to the transitivity of forward entailment (⊏) and reverse entailment (⊐), we centre around h_i to find its hypernyms and hyponyms but restrict the distances to within a threshold to avoid generating sentences that are
semantically unnatural, contain overgeneralized concepts, or are semantically implausible. Later, we will further use a language model to control the quality.
For alternation, the perturbation candidates h_i' are words that share a common hypernym with h_i (line 18). Following <cit.>, we do not use antonyms of content words for the negation relation but instead use them to construct alternation hypotheses (line 19). For negation (line 23), a list of negation words and phrases is used to construct new hypotheses.
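The WordNet lookup can be sketched as follows (assuming nltk with the WordNet corpus installed); the distance threshold and POS handling are simplified relative to the description above.

from nltk.corpus import wordnet as wn

def substitution_candidates(word, pos=wn.NOUN):
    cands = {"equivalence": set(), "forward_entailment": set(),
             "reverse_entailment": set(), "alternation": set()}
    for syn in wn.synsets(word, pos=pos):
        # Synonyms -> equivalence.
        cands["equivalence"].update(l.name() for l in syn.lemmas())
        # Direct hypernyms -> forward entailment (deeper traversal should be distance-capped).
        for hyper in syn.hypernyms():
            cands["forward_entailment"].update(l.name() for l in hyper.lemmas())
            # Co-hyponyms (siblings under the shared hypernym) -> alternation.
            for sibling in hyper.hyponyms():
                if sibling != syn:
                    cands["alternation"].update(l.name() for l in sibling.lemmas())
        # Direct hyponyms -> reverse entailment.
        for hypo in syn.hyponyms():
            cands["reverse_entailment"].update(l.name() for l in hypo.lemmas())
        # Antonyms are used to build alternation (not negation) hypotheses.
        for lemma in syn.lemmas():
            cands["alternation"].update(a.name() for a in lemma.antonyms())
    for rel in cands:
        cands[rel].discard(word)
    return cands

print(substitution_candidates("dog"))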
Note that while our experiments show that NatLogAttack is very effective and outperforms other attack models, some of the components can be further augmented in future work.
Enhancing Alternation. As discussed above, attacks may run for multiple rounds if the prior round fails. For alternation substitution, NatLogAttack does not replace a word token that has been substituted before, since the alternation of an alternation is not guaranteed to be an alternation relation. In addition to constructing alternation hypotheses using WordNet, we further leverage DistilBert <cit.> to obtain alternation candidates using the function AltLM (line 20). Specifically, we mask the target word (which is a verb, noun, adjective or adverb) and prompt the language model to provide candidates. The provided candidates and the replaced words are required to have the same POS tags.
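A sketch of such masked-language-model querying is shown below (assuming the Hugging Face transformers package); the POS-tag filtering mentioned above is omitted for brevity.

from transformers import pipeline

fill_mask = pipeline("fill-mask", model="distilbert-base-uncased")

def altlm_candidates(tokens, target_index, top_k=10):
    # Mask the target content word and let the language model propose in-context replacements.
    masked = list(tokens)
    masked[target_index] = fill_mask.tokenizer.mask_token
    predictions = fill_mask(" ".join(masked), top_k=top_k)
    return [p["token_str"].strip() for p in predictions
            if p["token_str"].strip().lower() != tokens[target_index].lower()]

print(altlm_candidates("a man is playing a guitar".split(), target_index=5))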
Insertion and Deletion.
In addition to substitution, NatLogAttack also follows natural logic and monotonicity to construct examples using the insertion and deletion operations. As shown in Table <ref>, adjectives, adverbs and prepositional phrases are leveraged in the upward and downward contexts of monotonicity to enhance the attacks for forward entailment (`⊏') and reverse entailment (`⊐').
We include the details in Appendix <ref>, which is built on Stanford CoreNLP parser
and pretrained language models. Note that the syntactic rules do not guarantee to generate sentences with the desired NLI labels (e.g., see <cit.> for the discussion on the semantic composition of adjective + noun) and the process is only for generating candidates. We will use the pretrained language model to further identify good adversarial examples at a later stage.
Both the insertion and deletion operations are used with monotonicity and projection context to generate different relations.
§.§.§ Attack Quality Control
NatLogAttack uses DistilBert <cit.> to calculate the pseudo-perplexity scores <cit.> for all generated hypotheses ℋ = {H^(2)_1,H^(2)_2,⋯, H^(2)_m},
and keeps only a maximum of 100 candidates with the lowest perplexity values. In our development, we found that the quality control stage is important for ensuring the quality of attack examples, particularly for reducing word perturbation mistakes resulting from incorrect interpretation of the words being substituted, which often results in unnatural hypothesis sentences, as well as reducing other sources of low-quality attacks including over-generalization of concepts and implausible semantics caused by insertion and deletion. The output of this stage is an ordered list of candidate attacks
ℋ_sqc = ⟨ H^(2)_r_1,H^(2)_r_2,⋯, H^(2)_r_k⟩.
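A compact sketch of this pseudo-perplexity filter is shown below (assuming the transformers and PyTorch packages); the exact scoring and pruning details may differ from the released implementation.

import torch
from transformers import AutoModelForMaskedLM, AutoTokenizer

tok = AutoTokenizer.from_pretrained("distilbert-base-uncased")
mlm = AutoModelForMaskedLM.from_pretrained("distilbert-base-uncased").eval()

def pseudo_perplexity(sentence: str) -> float:
    # Mask each token in turn and accumulate its negative log-likelihood under the MLM.
    ids = tok(sentence, return_tensors="pt")["input_ids"][0]
    nll, n = 0.0, 0
    for i in range(1, len(ids) - 1):  # skip [CLS] and [SEP]
        masked = ids.clone()
        masked[i] = tok.mask_token_id
        with torch.no_grad():
            logits = mlm(masked.unsqueeze(0)).logits[0, i]
        nll -= torch.log_softmax(logits, dim=-1)[ids[i]].item()
        n += 1
    return float(torch.exp(torch.tensor(nll / max(n, 1))))

def keep_most_fluent(candidates, k=100):
    # Rank generated hypotheses by pseudo-perplexity and keep the k most fluent ones.
    return sorted(candidates, key=pseudo_perplexity)[:k]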
§.§ Iterative and Multi-round Attacking
As discussed above, NatLogAttack performs iterative attacking within each round of generation and then multi-round attacks if the current round fails. Within each round, the original premise P and each hypothesis in the ranked hypotheses list
ℋ_sqc form an attack list ⟨⟨ P,H^(2)_r_1⟩,⋯, ⟨ P,H^(2)_r_k⟩⟩. As shown in Figure <ref>, when an attack succeeds, we output the corresponding hypothesis as H^*, which is sent for evaluation. If an attack fails, the next pair in the ranked attack list will be tried until the list is exhausted. Then NatLogAttack organizes the next round of attacks. In total, NatLogAttack generates a maximum of 500 attacks for each ⟨ P,H ⟩ pair.
When generating the next round attacks, we identify the adversarial pair for which the victim model has the lowest confidence (indexed as j_ lc) over the ground-truth class y_g^*:
j_ lc = argmin_j ∈{r_1, …, r_k} s_r_j
s_r_j = o(y_g^*|(P,H_r_j^(2)))
where o(*) returns the corresponding softmax probabilities of the output layer.
We then copy H^(2)_j_ lc to H^(1), denoted as H^(1)← H^(2)_j_ lc. The attack continues until the victim model is deceived into making a wrong prediction y^* that is different from the ground truth y_g^*, or until the maximum number of attacks is reached.
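The overall loop can be sketched as follows; victim_proba and generate_candidates are hypothetical placeholders for the victim model's probability interface and the quality-controlled candidate generator described above.

import numpy as np

def attack(premise, hypothesis, target_label, victim_proba, generate_candidates,
           max_queries=500):
    # victim_proba(p, h) -> class probabilities; generate_candidates(h) -> ranked hypotheses.
    current, queries = hypothesis, 0
    while queries < max_queries:
        candidates = generate_candidates(current)
        confidences = []
        for cand in candidates:
            proba = victim_proba(premise, cand)
            queries += 1
            if np.argmax(proba) != target_label:      # victim fooled: successful attack
                return cand, queries
            confidences.append(proba[target_label])
            if queries >= max_queries:
                break
        if not confidences:
            break
        # Seed the next round with the candidate on which the victim is least confident.
        current = candidates[int(np.argmin(confidences))]
    return None, queries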
§ EXPERIMENTS AND RESULTS
§.§ Experimental Setup
Dataset
Our study uses SNLI <cit.>, MNLI <cit.>, MED <cit.>, HELP <cit.>, and SICK <cit.> datasets. The MED upward and downward subsets are denoted as MED_up and MED_down, respectively. Details of the datasets and the setup for training can be found in Appendix <ref>.
Attack and Victim Models
We compared the proposed model to five representative attack models, including recent state-of-the-art models: PWWS <cit.>, TextFooler <cit.>, PSO <cit.>, BertAttack <cit.>, and Clare <cit.>.
Specifically, we used the implementations made publicly available in TextAttack.[https://github.com/QData/TextAttack] For victim models, we used uncased BERT-base <cit.> and RoBERTa-base <cit.> models. The accuracy of the victim models is included in Table <ref>, which is comparable to the state-of-the-art performance.
Evaluation Metrics Three metrics are used to evaluate the models from different perspectives. The sign ↑ (↓) indicates that the higher (lower) the values are, the better the performance is.
* Human Validated Attack Success Rate (HVASR ↑).
Most existing attacking methods are evaluated with attack success rates that are not validated by human subjects, assuming that the attacking methods could generate adversarial examples of the desired labels. This assumption works for many NLP tasks such as sentiment analysis and text classification. However, this is not the case in NLI, since the logical relationships can be easily broken during the generation process. As observed in our experiments, although state-of-the-art attacking models attain high attack success rates on various NLP tasks, human-validated evaluation demonstrates that they are much less effective in attacking natural language reasoning.
To reliably evaluate the attack performance, we use Human Validated Attack Success Rate (HVASR).
Specifically, we used Amazon Mechanical Turk[https://www.mturk.com/] to validate if the generated attack examples belong to the desired relations. Each example was annotated by at least three workers and the label is determined by the majority voting.
HVASR is the percentage of successful-and-valid adversarial examples that successfully deceived the victim models to make the wrong prediction and at the same time the majority of the annotators think their NLI labels are the desired target labels y^*_g.
While HVASR is our major evaluation metric, we also use query numbers and perplexity to provide additional perspectives for observations.
* Query number (QN ↓) refers to the average number of times that a successful attack needs to query the victim model <cit.>. QN can reflect the efficiency (but not effectiveness) of an attack model.
* Perplexity (PPL ↓) reflects the fluency and quality of generated examples. Same as in <cit.>, it is computed with GPT-2 <cit.> during evaluation.
§.§ Results and Analysis
Results on Label Preserving Attacks
Table <ref> shows the performance of different models on label-preserving attacks. We can see that NatLogAttack consistently achieves the best performance on HVASR.
The detailed results on MED also show that NatLogAttack has a better ability to construct adversarial examples in both the upward and downward monotone. NatLogAttack also shows superior performance on average QN and PPL in nearly all setups.
We can see that NatLogAttack has a large HVASR and a small QN value in MED_up, suggesting that NatLogAttack can easily generate attacks in the upward monotone. However, in MED_down, NatLogAttack needs more effort (a larger QN). Our further analysis reveals that this is because in the downward monotone, the attack model relies more on the insertion operation than deletion, and the former is more likely to result in unsuccessful attempts.
Figure <ref> further compares the query numbers (QNs) of different attack models on the two victim models in terms of the medians (instead of means) and the density of QN.
We can see that the majority of query numbers of NatLogAttack are rather small, with medians of less than 12 on both SNLI and MED, showing that NatLogAttack could attack successfully with very limited attempts in most cases.
For each attack model, the densities of QN on the two victim models are close to each other, and the medians are indiscernible and are represented by the same red dot in the figure.
Results on Label Flipping Attacks
Table <ref> shows the performance of NatLogAttack on the label-flipping attacks. Note that there has been little prior work providing systematic label-flipping attacks for NLP tasks. This new angle of evaluation is more easily implemented with logic-based attacks and provides additional insights.
Specifically, the table shows that the numbers of queries that NatLogAttack sent to the victim models are much smaller than those in the label-preserving setting presented in Table <ref>, suggesting that the victim models are more vulnerable in the label-flipping setting. For example, we can see that most of the query numbers are within 1-5 in Table <ref>. The pretrained victim models are capable of memorizing the superficial features related to the original label and have difficulty in capturing the logical relationships when we alter them between sentences while keeping the majority of words untouched.
In both the label-preserving and label-flipping setups, the HVASR may still be further improved; although the proposed model has substantially outperformed the off-the-shelf state-of-the-art attack models and caution has been exercised in all attack generation steps, there remains room for more research on improving logic-based attacks as future work.
Examples and Analysis.
Table <ref> provides the generated attack examples in the label-preserving setup (E → E), in which we can see that the quality of the attacks generated by NatLogAttack is clearly higher.
The baseline attacking models generate adversarial examples by replacing words based on word embeddings or language models, which can easily break the logic relationships.
Some examples in Table <ref> show that the baselines often rely on semantic relatedness to construct adversarial examples, which is not detailed enough for NLI and hence breaks the logic relations (e.g., the last example). Also, the last baseline example shows that the model deletes words without considering the context (downward) monotonicity, resulting in an invalid attack. Note that the baseline models modify both premises and hypotheses, while NatLogAttack focuses only on modifying hypotheses; it is straightforward to copy or adapt the operations of NatLogAttack to modify premises, but in many applications it is more natural to modify the hypotheses and keep the premises (the evidence) untouched.
Table <ref> shows more adversarial examples generated by NatLogAttack in the label-flipping setup. For all six examples, the prediction of the victim model remains unchanged (entailment, entailment and contradiction for the first, middle, and last two examples, respectively), while the ground-truth labels are now contradiction, neutral, and entailment, respectively. The victim model had difficulty in telling the difference, which provides an angle for challenging the models' ability to understand and perform reasoning.
§ CONCLUSION
Towards developing logic-based attack models, we introduce a framework, NatLogAttack, which centres around the classical natural logic formalism. The experiments with human and automatic evaluation show that the proposed framework outperforms the existing attack methods. Compared to these models, NatLogAttack generates better adversarial examples with fewer visits to the victim models. In addition to the widely used label-preserving attacks, NatLogAttack also provides label-flipping attacks. The victim models are found to be more vulnerable in this setup and
NatLogAttack succeeds in deceiving them with much smaller
numbers of queries.
NatLogAttack provides an approach to probing the existing and future NLI models' capacity from a key viewpoint and we hope more logic-based attacks will be
further explored for understanding the desired property of reasoning.
§ LIMITATIONS
Our research focuses on the adversarial attack itself and provides a framework that can potentially be used in different adversarial training strategies. We limit ourselves to attacks in this work, but it would be interesting to investigate logic-based attacks in adversarial training. We will leave that as future work. The proposed attack approach is also limited by the limitations of natural logic, even though the latter is a classical logic mechanism. For example, our proposed framework has less deductive power than first-order logic. It cannot construct attacks building on inference rules like modus ponens, modus tollens, and disjunction elimination. As discussed in the paper, some components of the generation and quality control process can be further enhanced.
§ ACKNOWLEDGEMENTS
The research is supported by the NSERC Discovery Grants and the Discovery Accelerator Supplements. We thank Bairu Hou for his contributions to an early version of the proposed model.
§ BACKGROUND
Our work is based on the specific natural logic formalism proposed by <cit.> and <cit.>.
To model the entailment relations between two spans of texts, <cit.> introduced seven relations inspired by the set theory: B={ ≡, ⊏, ⊐, ∧, |, ⌣, # } (see Table <ref> for some examples).
The inference of natural logic is built on monotonicity, which is a pervasive feature of natural language that explains the impact of semantic composition on entailment relations <cit.>. Suppose dog ⊏ animal. An upward monotone context keeps the entailment relation when the argument “increases” (e.g., replacing dog with animal), while a downward monotone context keeps the entailment relation when the argument “decreases” (e.g., as in all animals ⊏ all dogs). The system performs monotonicity inference through a projection ρ: 𝔅→𝔅, which is determined by the context and the projection rules. As will be detailed, a monotonicity-based parser can provide monotonicity information for each word token in a sentence together with the projection information. For example, consider the sentence All↑ the↓ kids↓ run↑, where ↑ denotes upward polarity and ↓ downward polarity. If we mutate the word kids with boys, where kids ⊐ boys, the system projects the reverse entailment (`⊐') into forward entailment (`⊏') due to its downward polarity, ρ(`⊐') = `⊏', and thus All the kids run ⊏ All the boys run.
With these components ready, the system aggregates the projected local relations to obtain the inferential relation between a premise and hypothesis sentence. Specifically, Table <ref> <cit.> shows the composition function when a relation in the first column is joined with a relation listed in the first row, yielding the relations in the corresponding table cell.
<cit.> shows that different orders of compositions yield consistent results except in some rare artificial cases. Therefore, many works, including ours, perform a sequential (left-to-right) composition.
Consider two edits from the premise sentence, All the kids run, to the hypothesis, All the boys sleep. The first edit that replaces kids in the premise with boys yields All the kids run ⊏ All the boys run. The second edit of replacing run with sleep yields All the boys run | All the boys sleep. Based on Table <ref>, the join of the relations resulting from these two edits (i.e., `⊏' ⋈ `|') is `|', where ⋈ denotes the join (composition) operator. As a result, we obtain All the kids run | All the boys sleep.
The seven natural logic relations at the sentence-pair level can then be mapped to the typical three-way NLI labels (entailment, contradiction, and neutral), where the `≡' or `⊏' relation can be mapped to entailment; the `∧' or ` | ' relation to contradiction; the `⊐', `⌣', and `#' relations to neutral.
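The label mapping and the left-to-right composition can be made concrete with the toy sketch below; only a handful of join-table entries are spelled out (equivalence acts as the identity), and defaulting unlisted joins to independence is a simplification of the full table in the cited work.

TO_NLI = {"≡": "entailment", "⊏": "entailment",
          "∧": "contradiction", "|": "contradiction",
          "⊐": "neutral", "⌣": "neutral", "#": "neutral"}

# A few join-table entries; the complete 7x7 table is given in the cited work.
JOIN = {("⊏", "⊏"): "⊏", ("⊐", "⊐"): "⊐", ("⊏", "|"): "|", ("∧", "∧"): "≡"}

def join(r1, r2):
    if r1 == "≡":
        return r2
    if r2 == "≡":
        return r1
    return JOIN.get((r1, r2), "#")  # simplification: unknown joins default to independence

def sentence_relation(edit_relations):
    rel = "≡"
    for r in edit_relations:  # sequential (left-to-right) composition over the edits
        rel = join(rel, r)
    return rel, TO_NLI[rel]

# "All the kids run" -> "All the boys sleep": the two projected edit relations are ⊏ and |.
print(sentence_relation(["⊏", "|"]))  # ('|', 'contradiction')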
§ INSERTION AND DELETION
For both insertion and deletion, the part-of-speech (POS) tags and constituency parse tree for H^(1) are first obtained using Stanford CoreNLP parser[https://stanfordnlp.github.io], which are then used with a state-of-the-art pretrained model to perform insertion. To insert an adjective before a noun or an adverb after a verb, leverages DistilBert <cit.> to obtain the candidates in the corresponding locations. The syntactic rules do not guarantee to generate sentences with the desired NLI labels (e.g., see <cit.> for discussion on the semantic composition of adjective + noun). The above process is only for generating candidates, and we will use pretrained language models to find good adversarial examples.
In order to insert a prepositional phrase (PP), we first collected from the SNLI training dataset all the PPs that are constituents of other noun phrases (NPs) more than 100 times.
We also collected PPs that appear in other verb phrases (VPs) at least 100 times.
During insertion, these PPs will be added as modifiers to a noun or a verb, respectively. We also insert assertion phrases such as "It is not true that" to deceive the victim models.
For the deletion operation, we delete the corresponding constituents based on the parse tree and POS tags.
§ DETAILS OF DATASETS AND BASELINES
As discussed in Section <ref>, our study uses SNLI <cit.>, MNLI <cit.>, MED <cit.>, HELP <cit.>, and SICK <cit.> to evaluate the models.
SNLI and MNLI are widely-used general-purpose NLI datasets. Following <cit.>, for MNLI, we evaluate the performance on the matched set. MED and HELP are designed for monotonicity-based reasoning and hence suit for probing models' capacity in natural logic-related behaviour. SICK is rich in lexical, syntactic and semantic phenomena designed for distributional semantic models including those recognizing textual entailment. For SICK, we use the corrected labels proposed by <cit.>.
The pretrained victim models tested on the SNLI, MNLI, and SICK test set were finetuned on their own training set and the performances are comparable to the state-of-the-art performances as well as those used in the previous attack models. Following <cit.>, the models tested on MED are finetuned on both the SNLI training set and the entire HELP dataset. Since HELP is not manually annotated, we do not use it as the test set.
The MED upward subset is denoted as MED_up and downward subset as MED_down.
Following <cit.>, each test set has 1,000 sentence pairs. Also following <cit.>, we set the maximum query number to be 500.
For all the attack models in comparison, we used the implementation made available by <cit.>.
Details of these attack models are as follows.
* PWWS <cit.> makes use of the synonyms in WordNet <cit.> for word substitutions and designs a greedy search algorithm based on the probability-weighted word saliency to generate adversarial samples.
* TextFooler <cit.> utilizes counter-fitting word embeddings to obtain synonyms and then performs substitution based on that.
* PSO <cit.> utilizes the knowledge base HowNet <cit.> to generate word substitutions. It adopts particle swarm optimization, another popular meta-heuristic population-based search algorithm, as its search strategy.
* BertAttack <cit.> leverages the superior performance of pretrained language model and greedily replaces tokens with the predictions from BERT.
* Clare <cit.> adds two more types of perturbations, insert and merge, building on BertAttack. Since Clare has a very high query number to the victim models, we reduce the number of each type of perturbation to 10 in order to make sure that Clare can attack the victim model successfully within the maximum query number in most cases.
|
http://arxiv.org/abs/2307.02252v1
|
20230705125006
|
Unlocking optical coupling tunability in epsilon-near-zero metamaterials through liquid crystal nanocavities
|
[
"Giuseppe Emanuele Lio",
"Antonio Ferraro",
"Bruno Zappone",
"Janusz Parka",
"Ewa Schab-Balcerzak",
"Cesare Paolo Umeton",
"Francesco Riboli",
"Rafał Kowerdziej",
"Roberto Caputo"
] |
physics.optics
|
[
"physics.optics",
"physics.app-ph"
] |
^1 Physics Department, University of Florence, 50019, Sesto Fiorentino, Florence, Italy
^2 European Laboratory for non Linear Spectroscopy (LENS), 50019, Sesto Fiorentino, Florence, Italy
^3 Consiglio Nazionale delle Ricerche - Istituto di Nanotecnologia (CNR-Nanotec), Rende (CS), 87036 Italy
^4 Institute of Applied Physics, Military University of Technology, 2 Kaliskiego Str., 00-908, Warsaw, Poland
^5 Centre of Polymer and Carbon Materials Polish Academy of Sciences, 34 M. Curie-Sklodowska Str., 41-819 Zabrze, Poland
^6Physics Department, University of Calabria,
87036 Arcavacata di Rende (CS), Italy
^7 National Institute of Optics, CNR-INO, 50019, Sesto Fiorentino (FI), Italy
^8 Institute of Fundamental and Frontier Sciences, University of Electronic Science and Technology of China, Chengdu 610054, China
⊛ Authors contribute equally to this work.
Epsilon-near-zero (ENZ) metamaterials represent a powerful toolkit for selectively transmitting and localizing light through cavity resonances, enabling the study of mesoscopic phenomena and facilitating the design of photonic devices. In this experimental study, we demonstrate the feasibility of engineering and actively controlling cavity modes, as well as tuning their mutual coupling, in an ENZ multilayer structure. Specifically, by employing a high-birefringence liquid crystal film as a tunable nanocavity, the polarization-dependent coupling of resonant modes with narrow spectral width and spatial extent was achieved. Surface forces apparatus (SFA) allowed us to continuously and precisely control the thickness of the liquid crystal film contained between the nanocavities and thus vary the detuning between the cavity modes. Hence, we were able to manipulate nanocavities anti-crossing behaviors. The suggested methodology unlocks the full potential of tunable optical coupling in epsilon-near-zero metamaterials and provides a versatile approach to the creation of tunable photonic devices, including bio-photonic sensors, and/or tunable planar metamaterials for on-chip spectrometers.
Unlocking optical coupling tunability in epsilon-near-zero metamaterials through liquid crystal nanocavities
Roberto Caputo^3,6,8
August 1, 2023
============================================================================================================
§ INTRODUCTION
In the eighteenth century, the voltaic pile invented by Alessandro Volta demonstrated that stacking materials with different properties can lead to groundbreaking devices with significantly novel functionalities. Nowadays, this approach is recognized as a cornerstone of fabrication technology, particularly in the development of high-performance nano-devices. The Fabry-Perot resonator is one of the most convenient and broadly used devices in photonics, particularly for engineering light-matter coupling <cit.> and is commonly used in color filters <cit.>, two-photon direct laser writing with hyper-resolution <cit.>, optical metasurfaces <cit.>, high-heat release <cit.>, sensing devices <cit.>, and anti-counterfeiting tags <cit.>, just to name a few. The resonant cavity is usually fabricated by sandwiching a transparent dielectric layer between two partially reflecting mirrors. These metal-dielectric resonators possess the intriguing properties of epsilon-near-zero (ENZ) effective permittivity <cit.> at specific resonance wavelengths that can be finely tuned by carefully selecting the thickness and refractive index of the metal and dielectric layers, and the angle and polarization of the incoming light <cit.>. Multilayer resonators also make it possible to efficiently manipulate electromagnetic waves in specific spectral ranges and enable optimal solutions for device miniaturization <cit.>, fabrication of perfect absorbers for structural coloring in the VIS-NIR range <cit.>, and high photovoltaic conversion <cit.>. Furthermore, photon confinement in optical nanocavities enables an efficient control of light-matter coupling in fundamental physics studies of single quantum objects <cit.> and correlated polaritons <cit.>, as well as applications in quantum optical devices and sensors <cit.>.
In this context, there is high demand for devices that can be reconfigured and adapted to various emerging technologies, especially in the automotive and telecommunication sectors <cit.>. Current ENZ metamaterial technologies, however, lack the ability to dynamically adjust their functionalities. Liquid crystals (LCs) show a large and fast response to external stimuli and are ideal candidates to overcome this limitation. For instance, elastomeric LCs have been used to tune photonic crystals <cit.> and Fabry-Perot cavities <cit.>. LC-based metasurfaces have also been recently implemented, confirming the extraordinary capabilities of LCs in the active control of visible light <cit.>, while extensions to the microwave <cit.> and terahertz regimes <cit.> are under way. The primary challenge in developing an active, LC-based ENZ metamaterial is to reduce the LC thickness to a few hundred nanometers. This thickness is considerably smaller than the limit of a few micrometers currently achieved in display technology.
In this article, we present experimental and numerical evidence of optical coupling in ENZ multilayer metamaterials comprising a nanoscale high-birefringence LC film with tunable thickness achieved by means of a Surface Force Apparatus (SFA). Originally designed to measure surface forces across fluid films <cit.>, the SFA has been recently introduced in photonics as a tool to control mode coupling in optical cavities <cit.>. Specifically, we have investigated a system comprising a nanoscale LC film (T-layer) with variable thickness d_T sandwiched between two identical metal-insulator-metal (MIM) cavities, thereby creating a symmetric three-cavity resonator denoted as MIMTMIM. The MIM cavities were fabricated by sputtering deposition on two cylindrical surfaces having a radius R = 2 cm. The surfaces were mounted with crossed axes in the SFA ensuring a single contact point (i.e., point of closest surface approach, r = 0 in Fig. <ref>b) where the surface distance was d_T. Around this point, the distance h_T varied approximately as in a sphere-plane geometry: h_T ≈ d_T + r^2/2R. The three-cavity resonator was illuminated with white light under normal incidence. The SFA allowed controlling the LC thickness dynamically, accurately, and continuously from several tens of microns down to the direct mechanical contact between the MIM surfaces (Fig. <ref>a). Details about the SFA technique are provided in the Materials and method section and a scheme is shown as supporting information (Fig. <ref>).
§ MODE COUPLING IN A MULTI-CAVITY RESONATOR
Let us begin the theoretical considerations with the analysis of multi-beam interference under normal incidence in a single (Fabry-Perot) MIM cavity constituted by an isotropic material (I-layer). A plane wave resonates with a cavity if the following condition of constructive interference occurs <cit.>:
n_T K_qd_T=qπ-ϕ
where n_T is the refractive index of the cavity medium, d_T is the metal-metal surface separation distance, q is the resonance order, K_q = 2π/λ_q is the resonance wavevector at the resonance wavelength λ_q, and ϕ is the phase shift due to reflection at the dielectric-metal interface. Both n_T and ϕ vary slowly with the wavelength and can be considered approximately constant across the ∼ 100 nm spectral range of an SFA experiment. The resonance condition Eq. <ref> can thus be rewritten as:
λ_q=2n_Td_T/(q-ϕ/π)
showing that the resonance wavelength λ_q increases linearly as the surface distance d_T or the refractive index n_T increases, whereas it decreases when the order number q increases. The transmittance of a MIM cavity under normal incidence can be accurately calculated as a function of wavelength λ and the cavity thickness using the transfer matrix method (TMM) (green lines in Fig. <ref>b). For the MIM cavities considered in our experiments, only one resonance wavelength λ_1 = 560 nm appeared in the SFA spectral range (vertical dashed black line in Fig. <ref>b). The TMM calculation showed that λ_1 corresponded to the first resonant mode obtained for the MIM cavity thickness of d_1 = 95 nm (horizontal dashed black line in Fig. <ref>b).
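As a rough illustration of how such a TMM calculation can be set up, the following Python sketch computes the normal-incidence transmittance of a single MIM stack with the standard characteristic-matrix formalism; the optical constants are fixed, non-dispersive placeholders rather than the ellipsometry data used for the actual calculation, so only the qualitative behaviour is meaningful.

import numpy as np

def layer_matrix(n, d, lam):
    # Characteristic matrix of a homogeneous layer at normal incidence
    delta = 2 * np.pi * n * d / lam
    return np.array([[np.cos(delta), 1j * np.sin(delta) / n],
                     [1j * n * np.sin(delta), np.cos(delta)]])

def transmittance(layers, lam, n_in=1.5, n_out=1.5):
    # Normal-incidence transmittance of a stack placed between two glass half-spaces
    M = np.eye(2, dtype=complex)
    for n, d in layers:
        M = M @ layer_matrix(n, d, lam)
    t = 2 * n_in / (n_in * M[0, 0] + n_in * n_out * M[0, 1] + M[1, 0] + n_out * M[1, 1])
    return (np.real(n_out) / np.real(n_in)) * abs(t) ** 2

# Placeholder, non-dispersive optical constants (NOT the measured ellipsometry data)
n_Ag, n_ZnO = 0.05 + 3.5j, 2.0
mim = [(n_Ag, 30e-9), (n_ZnO, 95e-9), (n_Ag, 30e-9)]      # Ag / ZnO / Ag, thicknesses in m
wavelengths = np.linspace(450e-9, 700e-9, 500)
T = [transmittance(mim, lam) for lam in wavelengths]
print(wavelengths[int(np.argmax(T))])                      # approximate resonance wavelength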
In a three-cavity MIMTMIM resonator, a cavity mode can overlap and interfere with the resonances of neighboring cavities across the metal (M) layers. Consequently, the resonances couple and the cavities interact optically, giving rise to hybrid resonance modes <cit.>. In the MIMTMIM resonator, hybridization is strongest between the outer MIM cavities and the central MTM cavity when the resonance wavelength λ_1 of the former crosses the resonance wavelength λ_q of the latter. If the central liquid-crystal cavity (the T-layer of thickness d_T) is optically isotropic, hybridization creates a threefold splitting of the resonance, i.e., a triplet of wavelengths λ_L, λ_1, and λ_U with increasing photon energy <cit.>. On the other hand, when the T-layer is an optically anisotropic LC film, these wavelengths depend on light polarization (purple and cyan curves in Fig. <ref>b). In contrast with Eq. <ref>, the resonance wavelengths λ_L and λ_U do not vary linearly with the thickness d_T in proximity of the crossing point between λ_1 and λ_q. However, as the wavelength λ_q departs from λ_1, hybridization weakens and λ_L and λ_U converge towards λ_q, thus acquiring an almost linear dependence on d_T. These preliminary considerations highlight the potential of birefringent materials for tuning the resonances of metal-dielectric metamaterials.
§ EXPERIMENTAL RESULTS
In the realized system, the thicknesses of the silver (Ag, M-layers) and zinc oxide (ZnO, I-layers) are 30 nm and 95 nm, respectively. The LC material considered for the T-layer is a high-birefringence nematic liquid crystal mixture named LC1825, synthesized by the Military University of Warsaw <cit.>, with a birefringence of Δ n = n_e - n_o = 0.42, where n_e=1.96 and n_o=1.54 are the extraordinary and ordinary refractive indices at room temperature, respectively. The photoalignment compound JK158 <cit.> was spin-coated on the Ag surfaces facing the LC to induce planar orientation along the cylinder axis on one surface and perpendicular to the axis on the other surface. Crossing the axes in the SFA ensured a planar alignment uniform across the LC thickness. Therefore, light polarized parallel or perpendicular to the LC orientation travelled in the LC film as purely extraordinary or ordinary waves, respectively. Further details on the fabrication and materials used in our experiments are provided in the Materials and methods section.
During the experiment, a collimated white-light beam coming from a halogen lamp illuminated the MIMTMIM resonator under normal incidence and the transmitted intensity was analyzed using an imaging spectrograph coupled to a high-resolution CCD camera. In the spectrogram of Fig. <ref>, the transmitted intensity I was measured as a 2D function of the wavelength λ and lateral distance r from the contact point (r=0 in Fig. <ref>). A resonance produces a local maximum in the intensity function I(r,λ). Because h_T ≈ d_T + r^2/2R around the contact point, resonance wavelengths that depend linearly on h_T vary quadratically with r and create curved fringes in the SFA spectrograms with a parabolic tip corresponding to the contact point.
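For illustration, the sphere-plane relation that shapes these fringes can be evaluated directly; the gap at the contact point used below is an assumed value.

import numpy as np

R = 2e-2                                 # cylinder radius (m), as in the text
d_T = 150e-9                             # assumed gap at the contact point (m)
r = np.linspace(0, 0.15e-3, 6)           # probed lateral range, |r| << R
h_T = d_T + r**2 / (2 * R)               # sphere-plane approximation of the local gap
# A resonance whose wavelength shifts linearly with h_T therefore traces a parabola in (r, lambda)
print(np.round((h_T - d_T) * 1e9, 1))    # extra gap (nm) accumulated across the probed region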
In the spectrogram of Fig. <ref>b, the intensity was measured at the contact position (r=0) while increasing the surface distance d_T at a constant speed u of a few nm/s using a motorized actuator (Fig. <ref>). In this case, the intensity I_0 was resolved as a 2D function of λ and time. Because d_T(t) = d_0 + ut, where d_0 is the initial thickness, each vertical line in the spectrogram corresponds to a specific time t and surface distance d_T(t). The advantage of this approach is that the SFA can vary d_T dynamically and continuously over a wide range of surface distances, from several μm to direct surface contact (d_T < 1 nm for molecularly smooth surfaces), with an accuracy better than 1 nm and an execution time of the order of minutes. To vary d_T, the surfaces were brought towards or separated from each other at a constant speed. By recording I_0(λ,t), the SFA allowed studying the resonance dispersion as a function of the cavity thickness d_T in a single sweep of thickness, instead of fabricating multiple cavities with different thicknesses.
In the spectrograms of Fig. <ref>a,b, the first-order resonance of the fixed-thickness MIM cavities produces a specific resonance wavelength λ_1 that does not depend on the thickness (d_T or h_T) of the MTM cavity, i.e., the LC film. On the other hand, the other fringes in the spectrogram are due to resonances of the MTM cavity and, therefore, depend both on the film thickness and polarization of the incident light. For a fixed surface distance (d_T or h_T) and far from λ_1, these fringes show an approximately parabolic shape (Fig. <ref>a) reflecting the surface curvature, as expected. Due to the LC birefringence, these fringes form two distinct sets that can be separately extinguished using a linear polarizer parallel or perpendicular to the planar anchoring direction, as shown in Fig. <ref>a (see also an example of unpolarized spectrogram in Fig. <ref> of SI). This finding demonstrates that the resonance modes of the MTM cavity are linearly polarized along the ordinary and extraordinary axis of the uniformly aligned LC. The extraordinary modes appear slightly brighter than the ordinary ones (Fig. <ref> in SI), because light was directed into the spectrograph using a right-angle mirror with polarization-dependent reflectivity (Fig. <ref> in SI).
This allows us to identify the fringe polarization even though we cannot use a polarizer to dynamically resolve the polarization while varying d_T. Figure <ref>b shows that extraordinary fringes enter and exit the spectral range of the SFA (at wavelengths distant from λ_1) more rapidly than ordinary fringes. In the entrance and exit regions, resonances approximately follow Eq. <ref>. Therefore, if the mode with order q resonates at a given wavelength λ_1, then the mode with order q ± r (with integer r) resonates at the same wavelength after displacing the surfaces by a distance Δd = ± rλ_1/(2n). As a result of the inequality n_e > n_o, extraordinary fringes with index n_e cross the wavelength λ_1 more often than ordinary fringes as the distance d_T is increased. If the surfaces are separated at a constant speed u, the fringe with order q exits the spectral range after a time τ =Δ d/u compared to the fringe with order q - 1. Figure <ref>b shows ordinary fringes exiting the spectral range at a wavelength of 593.2 nm at periodic time intervals τ_o =78.3 s, whereas the period is τ_e =60.8 s for extraordinary fringes. The ratio of these two periods, τ_o/τ_e=1.29, is in good agreement with the value τ_o/τ_e = n_e/n_o = 1.27 predicted by Eq. <ref> using the nominal refractive indices (at room temperature) n_e = 1.96 and n_o = 1.54 of the LC.
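A quick numerical check of this fringe-period argument (with an assumed separation speed u, since only "a few nm/s" is specified) reproduces the measured ratio:

n_e, n_o = 1.96, 1.54            # nominal LC refractive indices at room temperature
lam = 593.2e-9                   # wavelength at which the fringes exit the spectral range (m)
u = 2.5e-9                       # assumed separation speed (m/s)
# Consecutive orders resonate at the same wavelength when the gap grows by lambda/(2n),
# so the time between successive fringes is tau = lambda / (2 n u).
tau_o = lam / (2 * n_o * u)
tau_e = lam / (2 * n_e * u)
print(round(tau_o, 1), round(tau_e, 1), round(tau_o / tau_e, 2))   # ratio -> n_e/n_o ~ 1.27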
Figures <ref>a and <ref>b show intensity spectrograms I_0(λ,t) obtained for the ordinary and extraordinary polarization, respectively, by using a polarizer in transmission. Resonance wavelengths are highlighted by black dashed lines and are referred to as λ_1, λ_U and λ_L.
As the thickness d_T of the LC film increases, λ_U approaches λ_1 while λ_L departs from λ_1. Eventually, λ_U and λ_L become equally spaced from λ_1 by a distance Ω. This behaviour agrees with the numerical prediction and demonstrates the possibility of dynamically tuning the modes at different wavelengths ranging from 530 to 590 nm by acting on the LC thickness or the incoming light polarization.
§ TUNING MODE COUPLING VIA LC CONFINEMENT AND REORIENTATION
In order to understand why mode coupling produces a wavelength triplet, we calculated the transmitted intensity as a function of wavelength λ and LC film thickness d_T using the TMM method (Fig. <ref>a), and selected three different values of d_T to compute, by a finite element method (COMSOL), the electric field map along the direction perpendicular to the MIMTMIM resonator as a function of λ and z position <cit.> (Fig. <ref>(b-e)).
For d_T=100 nm and ordinary polarization (Fig. <ref>b), the high-energy mode has wavelength λ_U^o ∼ 450 nm and is farther away from λ_1 than the low-energy mode with wavelength λ_L^o ∼ 585 nm. This unequal wavelength spacing is reflected in mode hybridization. Namely, the high-energy mode is mainly located in the central MTM cavity while the low-energy mode is more delocalized among the central and outer (MIM) cavities. When the LC film thickness is increased to d_T=134 nm (Fig. <ref>c), the high-energy wavelength λ_U^o ∼ 510 nm and low-energy wavelength λ_L^o ∼ 620 nm are almost equally spaced from λ_1 and show a comparable degree of delocalization. When the LC film thickness is further increased to d_T=160 nm (Fig. <ref>d), the situation shown in Fig. <ref>b is reversed and the high-energy mode at λ_U^o ∼ 530 nm is closer to λ_1 and more delocalized than the low-energy mode at λ_L^o ∼ 670 nm.
Mode hybridization in the MIMTMIM resonator can be understood based on its mirror symmetry under reflection on the middle plane of the central T-cavity. Symmetry requires that resonances be either even (+) or odd (-) under reflection (Fig. <ref>(b-e)). Using first-order perturbation theory or the variational method <cit.>, these modes can be approximated as symmetry-adapted linear combinations of single-cavity modes. In particular, the field E_c of the first-order mode in the central cavity is even and, therefore, hybridizes with the field E_+ of the even combination of outer-cavity modes. By contrast, the odd combination E_- cannot hybridize with an even mode such as E_c. While the modes E_c and E_+ overlap and interfere with each other, particularly within the metal layers of the central MTM cavity, direct overlap and interference between the outer cavities is negligible and, as a result, the wavelength λ_- of the E_- mode is very close to the wavelength λ_1 of an isolated MIM cavity. Indeed, the difference between λ_- and λ_1 was too small to be detected in our experiments.
Hybridization between same-parity modes produces the wavelengths λ_L and λ_U observed both in the SFA experiments and in our calculation <cit.>. For first-order modes, these wavelengths correspond to the modes E_L=E_c+α E_+ and E_U=E_c - β E_+, respectively, where the positive linear coefficients α and β depend on the thickness d_T of the central MTM cavity. The wavelengths λ_L and λ_U are due to the anti-crossing interaction between the even modes E_c and E_+ occurring as d_T varies (Fig. <ref>(b-d)). Namely, the E_U-mode repels the E_L-mode as it moves towards lower energies, while the E_- mode is unaffected. The avoided-crossing point is reached when the wavelength λ_q of the E_c-mode (Eq. <ref>) overlaps with the first-order wavelength λ_1 of the outer MIM cavities. In other words, the difference or "detuning" between the photon energies of the two modes becomes zero. At this point, the modes E_L and E_U become uniformly delocalized across the resonator, with equal intensity maxima in each cavity (α,β ≈ 1, Fig. <ref>c). At the avoided-crossing point, the wavelengths λ_L and λ_U are found at an equal distance Ω from the wavelength λ_1 of the E_- mode.
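The anti-crossing behaviour can be caricatured by a two-level model in which the even modes E_c and E_+ interact with a constant coupling energy g; this is only a toy sketch (g is an assumed parameter, not a value extracted from the TMM or FEM calculations), but it reproduces the qualitative repulsion of λ_L and λ_U around the crossing point:

import numpy as np

h_c = 1239.84                    # eV*nm, converts wavelength to photon energy
lam_1 = 560.0                    # fixed MIM resonance wavelength (nm)
g = 0.08                         # assumed coupling energy (eV) between the same-parity modes

def hybrid_wavelengths(lam_q):
    # Eigenvalues of a 2x2 avoided-crossing model between E_c (central) and E_+ (outer, even)
    E_1, E_q = h_c / lam_1, h_c / lam_q
    mean, half_det = (E_1 + E_q) / 2, (E_1 - E_q) / 2
    E_U = mean + np.sqrt(half_det**2 + g**2)     # upper branch -> lambda_U (shorter wavelength)
    E_L = mean - np.sqrt(half_det**2 + g**2)     # lower branch -> lambda_L (longer wavelength)
    return h_c / E_U, h_c / E_L

for lam_q in (520.0, 560.0, 600.0):              # central-cavity resonance swept by d_T (or n_T)
    print(lam_q, [round(w, 1) for w in hybrid_wavelengths(lam_q)])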
A decisive advantage of using an anisotropic LC film is that the detuning can be actively controlled not only by varying the MTM cavity thickness, but also by selecting the refractive index of the LC. This fact is highlighted in Eq. <ref> showing that the resonance wavelength λ_q depends on the product n_Td_T and, therefore, the thickness d_T and index n_T play equivalent roles. For example, the transmittance variation obtained by increasing the film thickness from d_T= 100 nm (Fig. <ref>b) to d_T= 134 nm (Fig. <ref>c) can also be obtained by switching the refractive index from ordinary to extraordinary (Fig. <ref>d) while keeping the film thickness fixed to d_T= 100 nm. The switching can be obtained by changing the polarization from ordinary to extraordinary, or acting on the LC orientation (e.g., by applying a voltage to the silver surfaces of the MTM cavity) so as to vary the refractive index seen by extraordinary waves.
As a further demonstration of mode hybridization, the electric field was calculated using the finite element method for the three LC thicknesses under normal incidence in Fig. <ref> of the SI. In Fig. <ref> of the SI, the transmittance plots, the electric field confinement in the LC cavity (n_e), and the electric field distributions along the propagation direction are also shown.
In Fig. <ref>a, the wavelength splitting 2Ω related to the differences λ_L-λ_U, λ_1-λ_U, and λ_1-λ_L is shown as a function of the LC thickness. The top and bottom panels show the resonance wavelengths retrieved via a multiple Gaussian fit on the transmittance curves, for the anti-crossing behaviour represented by λ_U approaching λ_L, λ_U approaching λ_1, and λ_1 approaching λ_L, for the ordinary and extraordinary LC refractive index, respectively. The minimum value of 2Ω corresponds to the avoided-crossing point and maximum coupling between same-parity modes. The same behaviour is observed in experiments (Fig. <ref>b). The slight difference in the amplitude 2Ω between the numerical and experimental plots for extraordinary polarization is a consequence of the non-negligible effect of changing the medium surrounding the two MIM resonators. In the SI, we also simulated the angular dependence by varying the incident angle θ_i from 0^∘ to 80^∘ in steps of 2^∘ for both ordinary and extraordinary polarization (Fig. <ref>). The results show that the three-cavity resonator is not significantly perturbed by the variation of the incident angle, especially for the extraordinary polarization.
§ CONCLUSIONS
In conclusion, we presented a detailed study on how to design and actively tune strongly confined hybrid modes in one-dimensional layered structures working at visible wavelengths. The active control is enabled by a high-birefringence LC in combination with an SFA that can vary the cavity thickness rapidly, continuously, and accurately from several μm down to direct contact between its metal mirror surfaces. Importantly, we studied numerically and experimentally how the system performs in terms of weak and strong light coupling conditions when an LC film is confined between two MIM cavities. This result has significant practical implications for the development of innovative devices as it enables the excitation of multiple resonant modes across the LC cavity. This is of fundamental importance for developing active and reconfigurable devices that can serve as a platform for optical beam steering. Thanks to the tunability of these photonic modes, the proposed system can be highly relevant for bio-sensing applications that require high-energy modes (short wavelengths, from 450 to 530 nm) excitable in free space.
Although the cavity resonances were obtained under normal incidence, it is expected that plasmonic modes can also be excited in multi-cavity metamaterials under oblique incidence, notably without using any coupler (e.g., a grating) to generate evanescent waves.
To this end, the SFA could be used to study the generation, coupling, and transmission of plasmonic modes in multi-cavity metamaterials as a function of the thickness and refractive index of the LC-loaded cavity.
§ MATERIALS AND METHOD
Samples Fabrication:
The MIM cavities were fabricated by DC/RF sputtering (model Kenosistec KC300C) and consisted of Ag and transparent zinc oxide (ZnO) layers with target thicknesses of 30 and 95 nm, respectively, deposited on cylindrical glass lenses. The lenses had a radius of R = 2 cm, a thickness of 4 mm, 60/40 scratch/dig surface quality, a centration wedge angle <5 arcmin, and an irregularity (interferometer fringes) of λ/2 at a wavelength of 630 nm. Ag was chosen for its large extinction coefficient k > 1 ≫ n, ensuring a high reflectivity and an approximately real negative permittivity in the metal layers, while ZnO was chosen for its transparency (n > 1 ≫ k).
For the deposition, the following parameters were used: a base vacuum of 7·10^-6 and a DC power of 100 applied for 62 for the Ag layers, while the ZnO layers were deposited using the RF cathode at a power of 80 for 31 min 36 s.
In order to align the LC layer, a solution of the photo-active poly(amide imide) denominated JK158 in N-methylpyrrolidone (1 wt.%) was spin-coated on top of the exposed Ag layers. The poly(amide imide) was described in <cit.>. JK158 contains randomly aligned azo-dye molecules that reorient perpendicularly to the polarization direction of UV light to minimise the absorption cross-section. The LCs in contact with the aligned JK158 molecules acquire the same alignment.
SFA Experiments:
A surface forces apparatus (SFA) Mark III by Surforce LLC, USA was used in the experiments <cit.>. One of the MIM-coated cylindrical lenses was fixed on a rigid support, whereas the other one was attached to the free end of a double cantilever spring. The surfaces of the lenses were sufficiently far apart from each other to avoid any mechanical interaction and moved freely in contact with a 50 μL droplet of LC. The droplet was infiltrated between the MIM-coated lenses by capillarity.
Transmission spectra were obtained by illuminating the MIMTMIM cavity (consisting of the metal-insulator-metal layers on the lens surfaces and the LC solution) under normal incidence with white light from a halogen lamp. The transmitted light was collected through the entrance slit of an imaging spectrograph (PI Acton Spectra Pro 2300i) aligned with one of the cylindrical lenses, and recorded with a high-sensitivity CCD camera (Andor Newton DU940P-FI). Only a small region of the surface surrounding the contact position was probed, such that r ≤ 0.15 mm ≪ R, equivalent to a sphere-plane geometry <cit.>. A CCD camera image recorded the transmitted intensity I as a function of the wavelength λ and position r. Multi-beam interference created resonance peaks in a spectrogram, i.e., local maxima of the 2D intensity function I(λ, r), corresponding to constructive interference. Spectrograms were obtained by combining multiple CCD images taken in different but overlapping spectral intervals. Each image was recorded in less than one second, whereas a spectrogram was completed in less than 20 s.
Numerical simulations:
The transfer matrix method (TMM) analysis was performed using a script implemented in the commercial software Matlab. It takes as input the refractive index data of the materials used, retrieved by ellipsometry, together with the layer thicknesses, and it calculates the transmission spectrum as the T-cavity thickness is varied.
Finite element method (FEM) simulations were performed using COMSOL Multiphysics with the same scheme reported in <cit.>. In order to analyse the normalized electric field |E|/E_0 in the MIMTMIM system, where E_0 was calculated as E_0 = √((P/w)Z_0), with P the input power (1 W/m^2), w the area illuminated by the light beam, and Z_0 the impedance, a 1D cut line was used to collect the normalized electric field as a function of position along the structure and wavelength. The cut line was chosen to cover the entire length of the MIMTMIM system plus an extra 20 nm in the glass before and after the structure.
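For reference, the normalization constant defined above can be evaluated as follows; the illuminated area w is an assumed placeholder and Z_0 is taken to be the free-space impedance, which is our reading of "the impedance".

import numpy as np

Z_0 = 376.73                 # assumed free-space impedance (ohm)
P = 1.0                      # input power used in the simulations (W/m^2)
w = 1.0                      # illuminated area; an assumed placeholder value
E_0 = np.sqrt((P / w) * Z_0)
print(E_0)                   # reference amplitude used to normalize |E|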
This research is performed in the framework of the bilateral (Italy-Poland) project: "Active metamaterials based on new generation liquid crystals (LCMETA)" funded by the Italian Ministry of Foreign Affairs and International Cooperation and the Polish National Agency for Academic Exchange NAWA. G.E.L and F.R. thank the FASPEC (Fiber-Based Planar Antennas for Biosensing and Diagnostics) - supported by Tuscany region in the Horizon 2020 framework - and the project "Complex Photonic Systems (DFM.AD005. 317). G.E.L. also thanks the research project "FSE-REACT EU" financed by National Social Fund - National Operative Research Program and Innovation 2014-2020 (DM 1062/2021). A.F and R.C thank the project “DEMETRA – Sviluppo di tecnologie di materiali e di tracciabilità per la sicurezza e la qualità dei cibi” PON ARS01 00401. R.K. and J.P. acknowledge the financial support from the MUT University Grant UGB 22 804 from funds for year 2023.
§ AUTHORS CONTRIBUTION
G.E.L. and A.F. equally contributed to this work and wrote the article. G.E.L and R.C. conceived the main idea in the framework of the research project "LCMETA". G.E.L and A.F. performed numerical simulations and samples fabrication. B.Z. performed the SFA measurements and provided theoretical explanations. E.S-B. synthesized and delivered LC photoaligning materials and supported the work with fundamental technical advice. R.K. tested photo-aligning materials. J.P., R.K., C.P.U., F.R., and R.C. provided fundamental support thanks to their expertise in liquid crystals. F.R. provided his expertise on light coupling behavior in complex media to explain the physics behind this work. All authors revised the paper and accepted its contents.
§ REFERENCES

[1] F. Réveret, P. Disseix, J. Leymarie, A. Vasson, F. Semond, M. Leroux, and J. Massies, "Influence of the mirrors on the strong coupling regime in planar GaN microcavities," Physical Review B 77, 195303 (2008).
[2] G. E. Lio, G. Palermo, R. Caputo, and A. De Luca, "A comprehensive optical analysis of nanoscale structures: from thin films to asymmetric nanocavities," RSC Adv. 9, 21429 (2019).
[3] J. Cao, S. De Liberato, and A. V. Kavokin, "Strong light–matter coupling in microcavities characterised by Rabi splittings comparable to the Bragg stop-band widths," New Journal of Physics 23, 113015 (2021).
[4] S. M. Choudhury, A. Shaltout, V. M. Shalaev, A. V. Kildishev, and A. Boltasseva, "Experimental realization of color hologram using Pancharatnam–Berry phase manipulating metasurface," in CLEO: QELS_Fundamental Science (Optical Society of America, 2016), pp. FF1D-8.
[5] G. E. Lio, A. Ferraro, M. Giocondo, R. Caputo, and A. De Luca, "Color gamut behavior in epsilon near-zero nanocavities during propagation of gap surface plasmons," Advanced Optical Materials 8, 2000487 (2020).
[6] Z. Guo, H. Jiang, K. Zhu, Y. Sun, Y. Li, and H. Chen, "Focusing and super-resolution with partial cloaking based on linear-crossing metamaterials," Physical Review Applied 10, 064048 (2018).
[7] G. E. Lio, A. Ferraro, T. Ritacco, D. M. Aceti, A. De Luca, M. Giocondo, and R. Caputo, "Leveraging on ENZ metamaterials to achieve 2D and 3D hyper-resolution in two-photon direct laser writing," Advanced Materials 33, 2008644 (2021).
[8] A. M. Shaltout, N. Kinsey, J. Kim, R. Chandrasekar, J. C. Ndukaife, A. Boltasseva, and V. M. Shalaev, "Development of optical metasurfaces: emerging concepts and new materials," Proceedings of the IEEE 104, 2270 (2016).
[9] R. Kowerdziej, A. Ferraro, D. C. Zografopoulos, and R. Caputo, "Soft-matter-based hybrid and active metamaterials," Advanced Optical Materials 10, 2200750 (2022).
[10] P. N. Dyachenko, S. Molesky, A. Y. Petrov, M. Störmer, T. Krekeler, S. Lang, M. Ritter, Z. Jacob, and M. Eich, "Controlling thermal emission with refractory epsilon-near-zero metamaterials via topological transitions," Nature Communications 7, 1 (2016).
[11] A. Ferraro, G. E. Lio, A. Hmina, G. Palermo, J. M. Djouda, T. Maurer, and R. Caputo, "Tailoring of plasmonic functionalized metastructures to enhance local heating release," Nanophotonics (2021).
[12] K. V. Sreekanth, Y. Alapan, M. ElKabbash, E. Ilker, M. Hinczewski, U. A. Gurkan, A. De Luca, and G. Strangi, "Extreme sensitivity biosensing platform based on hyperbolic metamaterials," Nat. Mater. 15, 621 (2016).
[13] K. V. Sreekanth, S. Zeng, K.-T. Yong, and T. Yu, "Sensitivity enhanced biosensor using graphene-based one-dimensional photonic crystal," Sens. Actuators B 182, 424 (2013).
[14] G. E. Lio, A. Ferraro, R. Kowerdziej, A. O. Govorov, Z. Wang, and R. Caputo, "Engineering Fano-resonant hybrid metastructures with ultra-high sensing performances," Advanced Optical Materials, 2203123 (2023).
[15] A. Ferraro, G. E. Lio, M. D. L. Bruno, S. Nocentini, M. P. De Santo, D. S. Wiersma, F. Riboli, R. Caputo, and R. C. Barberi, "Hybrid camouflaged anticounterfeiting token in a paper substrate," Advanced Materials Technologies, 2201010 (2022).
[16] S. Vassant, J.-P. Hugonin, F. Marquier, and J.-J. Greffet, "Berreman mode and epsilon near zero mode," Optics Express 20, 23971 (2012).
[17] O. Reshef, I. De Leon, M. Z. Alam, and R. W. Boyd, "Nonlinear optical effects in epsilon-near-zero media," Nature Reviews Materials 4, 535 (2019).
[18] J. Wu, Z. T. Xie, Y. Sha, H. Fu, and Q. Li, "Epsilon-near-zero photonics: infinite potentials," Photonics Research 9, 1616 (2021).
[19] P. Jin and R. Ziolkowski, "Multiband extensions of the electrically small, near-field resonant parasitic Z antenna," IET Microwaves, Antennas & Propagation 4, 1016 (2010).
[20] Z. Li, S. Butun, and K. Aydin, "Large-area, lithography-free super absorbers and color filters at visible frequencies using ultrathin metallic films," ACS Photonics 2, 183 (2015).
[21] M. Heydari and M. Sabaeian, "Plasmonic nanogratings on MIM and SOI thin-film solar cells: comparison and optimization of optical and electric enhancements," Applied Optics 56, 1917 (2017).
[22] A. Imamoğlu, D. D. Awschalom, G. Burkard, D. P. DiVincenzo, D. Loss, M. Sherwin, A. Small, et al., "Quantum information processing using quantum dot spins and cavity QED," Physical Review Letters 83, 4204 (1999).
[23] A. D. Greentree, C. Tahan, J. H. Cole, and L. C. Hollenberg, "Quantum phase transitions of light," Nature Physics 2, 856 (2006).
[24] A. Patra, V. Caligiuri, B. Zappone, R. Krahne, and A. De Luca, "In-plane and out-of-plane investigation of resonant tunneling polaritons in metal–dielectric–metal cavities," Nano Letters 23, 1489 (2023).
[25] K. J. Vahala, "Optical microcavities," Nature 424, 839 (2003).
[26] F. Liang, N. Clarke, P. Patel, M. Loncar, and Q. Quan, "Scalable photonic crystal chips for high sensitivity protein detection," Optics Express 21, 32306 (2013).
[27] A. Frisk Kockum, A. Miranowicz, S. De Liberato, S. Savasta, and F. Nori, "Ultrastrong coupling between light and matter," Nature Reviews Physics 1, 19 (2019).
[28] T. Herzog, S. Böhrkircher, S. Both, M. Fischer, R. Sittig, M. Jetter, S. Portalupi, T. Weiss, and P. Michler, "Realization of a tunable fiber-based double cavity system," Physical Review B 102, 235306 (2020).
[29] K. C. Smith, Y. Chen, A. Majumdar, and D. J. Masiello, "Active tuning of hybridized modes in a heterogeneous photonic molecule," Physical Review Applied 13, 044041 (2020).
[30] T. Cui, B. Bai, and H.-B. Sun, "Tunable metasurfaces based on active materials," Advanced Functional Materials 29, 1806692 (2019).
[31] I. De Bellis, D. Martella, C. Parmeggiani, D. S. Wiersma, and S. Nocentini, "Temperature tunable 4D polymeric photonic crystals," Advanced Functional Materials, 2213162 (2023).
[32] I. Zubritskaya, R. Cichelero, I. Faniayeu, D. Martella, S. Nocentini, P. Rudquist, D. S. Wiersma, and M. L. Brongersma, "Dynamically tunable optical cavities with embedded nematic liquid crystalline networks," Advanced Materials, 2209152 (2023).
[33] M. Sharma and T. Ellenbogen, "An all-optically controlled liquid-crystal plasmonic metasurface platform," Laser & Photonics Reviews 14, 2000253 (2020).
[34] G. E. Lio and A. Ferraro, "LIDAR and beam steering tailored by neuromorphic metasurfaces dipped in a tunable surrounding medium," Photonics 8, 65 (2021).
[35] J. Wang, K. Li, H. He, W. Cai, J. Liu, Z. Yin, Q. Mu, V. K. Hisao, D. Gérard, D. Luo, et al., "Metasurface-enabled high-resolution liquid-crystal alignment for display and modulator applications," Laser & Photonics Reviews, 2100396 (2022).
[36] G. Palermo, A. Lininger, A. Guglielmelli, L. Ricciardi, G. Nicoletta, A. De Luca, J.-S. Park, S. W. D. Lim, M. L. Meretska, F. Capasso, et al., "All-optical tunability of metalenses permeated with liquid crystals," ACS Nano 16, 16539 (2022).
[37] R. Kowerdziej, J. Krupka, E. Nowinowski-Kruszelnicki, M. Olifierczuk, and J. Parka, "Microwave complex permittivity of voltage-tunable nematic liquid crystals measured in high resistivity silicon transducers," Applied Physics Letters 102, 102904 (2013).
[38] D. C. Zografopoulos, A. Ferraro, and R. Beccherelli, "Liquid-crystal high-frequency microwave technology: materials and characterization," Advanced Materials Technologies 4, 1800447 (2019).
[39] R. Kowerdziej, T. Stańczyk, and J. Parka, "Electromagnetic simulations of tunable terahertz metamaterial infiltrated with highly birefringent nematic liquid crystal," Liquid Crystals 42, 430 (2015).
[40] G. Isić, G. Sinatkas, D. C. Zografopoulos, B. Vasić, A. Ferraro, R. Beccherelli, E. E. Kriezis, and M. Belić, "Electrically tunable metal–semiconductor–metal terahertz metasurface modulators," IEEE Journal of Selected Topics in Quantum Electronics 25, 1 (2019).
[41] J. N. Israelachvili and P. M. McGuiggan, "Adhesion and short-range forces between surfaces. Part I: New apparatus for surface force measurements," Journal of Materials Research 5, 2223–2231 (1990).
[42] B. Zappone, V. Caligiuri, A. Patra, R. Krahne, and A. De Luca, "Understanding and controlling mode hybridization in multicavity optical resonators using quantum theory and the surface forces apparatus," ACS Photonics (2021), doi:10.1021/acsphotonics.1c01055.
[43] G. Fowles, Modern Optics (Holt, New York, 1989).
[44] M. Born and E. Wolf, Principles of Optics: Electromagnetic Theory of Propagation, Interference and Diffraction of Light (Elsevier, 2013).
[45] R. Dąbrowski, P. Kula, and J. Herman, "High birefringence liquid crystals," Crystals 3, 443 (2013).
[46] R. Wegłowski, W. Piecek, A. Kozanecka-Szmigiel, J. Konieczkowska, and E. Schab-Balcerzak, "Poly(esterimide) bearing azobenzene units as photoaligning layer for liquid crystals," Optical Materials 49, 224 (2015).
[47] V. Caligiuri, M. Palei, G. Biffi, and R. Krahne, "Hybridization of epsilon-near-zero modes via resonant tunneling in layered metal-insulator double nanocavities," Nanophotonics 8, 1505 (2019).
[48] J. Konieczkowska, E. Schab-Balcerzak, M. Siwy, K. Switkowski, and A. Kozanecka-Szmigiel, "Large and highly stable photoinduced birefringence in poly(amideimide)s with two azochromophores per structural unit," Optical Materials 39, 199 (2015).
[49] J. N. Israelachvili and P. M. McGuiggan, "Adhesion and short-range forces between surfaces. Part I: New apparatus for surface force measurements," Journal of Materials Research 5, 2223 (1990).
[50] J. N. Israelachvili, Intermolecular and Surface Forces (Academic Press, 2011).
§ SUPPLEMENTARY INFORMATION
§ UNLOCKING OPTICAL COUPLING TUNABILITY IN EPSILON-NEAR-ZERO METAMATERIALS THROUGH LIQUID CRYSTAL NANOCAVITIES
§ SURFACE FORCE APPARATUS (SFA)
Figure <ref> shows a schematic of the SFA setup.
§ EXPERIMENTAL MEASUREMENTS
Figure <ref> shows the transmitted intensity of a MIMTMIM resonator as a function of the wavelength λ and lateral distance r from the surface contact position (r=0) without polarizers.
§ NUMERICAL SIMULATIONS
Here we report further numerical simulations performed to confirm the trends shown in the main paper.
|
http://arxiv.org/abs/2307.02719v3
|
20230706015737
|
Understanding Uncertainty Sampling
|
[
"Shang Liu",
"Xiaocheng Li"
] |
cs.LG
|
[
"cs.LG",
"stat.ML"
] |
Massive MIMO with Cauchy Noise: Channel Estimation, Achievable Rate and Data Decoding
Ziya Gülgün,
and Erik G. Larsson, Fellow, IEEE
Z. Gülgün was with the Department
of Electrical Engineering (ISY), 58183 Linköping, Sweden. He is now with Ericsson AB, 16440 Stockholm, Sweden ([email protected]).
E. G. Larsson is with the Department
of Electrical Engineering (ISY), 58183 Linköping, Sweden e-mail: ([email protected]).
This work was supported by Security Link and the SURPRISE project funded by the Swedish Foundation for Strategic Research (SSF). A preliminary version of this paper was presented at the International Conference on Communications (ICC), 2022 <cit.>.
====================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================
Uncertainty sampling is a prevalent active learning algorithm that queries sequentially the annotations of data samples which the current prediction model is uncertain about. However, the usage of uncertainty sampling has been largely heuristic: (i) There is no consensus on the proper definition of “uncertainty” for a specific task (classification or regression) under a specific loss (binary loss, cross-entropy loss, squared loss, etc.); (ii) There is no theoretical guarantee that prescribes a standard protocol to implement the algorithm, for example, how to handle the sequentially arrived annotated data under the framework of empirical risk minimization or optimization algorithms such as stochastic gradient descent. In this work, we systematically examine uncertainty sampling algorithms under both stream-based and pool-based active learning. We propose a notion of equivalent loss which depends on the used uncertainty measure and the original loss function through a partial differential equation and establish that an uncertainty sampling algorithm essentially optimizes against such an equivalent loss. The perspective verifies the properness of existing uncertainty measures (including entropy uncertainty, least confidence uncertainty, margin-based uncertainty, etc.) from two aspects: surrogate property and loss convexity. It can also be used to develop new uncertainty measures. Furthermore, we propose a new notion for designing uncertainty measures called loss as uncertainty. The idea is to use the conditional expected loss given the features as the uncertainty measure. Such an uncertainty measure has nice analytical properties and, more importantly, a generality to cover both classification and regression problems (in contrast to the existing case-by-case design of uncertainty measures). These developments enable us to provide the first generalization bound for uncertainty sampling algorithms under both stream-based and pool-based settings, in the full generality of the underlying model and problem. Lastly, we establish some connection between certain variants of the uncertainty sampling algorithms with risk-sensitive objectives and distributional robustness, which can partly explain the advantage of uncertainty sampling algorithms when the sample size is small.
§ INTRODUCTION
Active learning is a machine learning paradigm where the learning algorithm interactively queries humans (or some other information source) to annotate new data points. Different from supervised learning, an active learning algorithm begins with all the data samples unlabeled and adaptively decides which samples to query for labels. The study of active learning is motivated by the great availability of unlabeled data and the prohibitive cost of getting all the data labeled. Its goal is to improve data efficiency and reduce the labeling cost by querying only a small proportion of the data but still getting a satisfying performance.
The study of active learning algorithms can be categorized according to two standards: scenarios and querying strategies <cit.>. The scenarios of active learning are determined by how the data is generated and observed. The query synthesis scenario allows the learner to generate de novo examples rather than samples from a distribution <cit.>. While query synthesis is practical for many problems, labeling arbitrarily generated instances could be awkward for human experts <cit.>. Comparatively, if the data is generated from a fixed unknown distribution, then we call it either stream-based sampling or pool-based sampling, depending on the way that unlabeled samples arrive. If the samples arrive in a sequence, the learner queries the labels from a stream <cit.>. Otherwise, the learner can observe the pool of unlabeled samples <cit.>. In this paper, we focus on stream-based and pool-based scenarios.
The second criterion to categorize the active learning algorithms is the querying strategy, among which uncertainty sampling is “perhaps the simplest and most commonly used query framework” <cit.>. Roughly speaking, the uncertainty sampling strategy is to query the samples that the model is uncertain about <cit.>. Other strategies include query-by-committee <cit.>, expected model change <cit.>, expected error reduction <cit.>, and expected variance reduction <cit.>. Although rigorous theoretical results have been obtained for some of the other querying strategies <cit.>, theoretical understanding of the uncertainty sampling strategy is still lacking. Some initial yet intriguing results have been established for various kinds of uncertainty measurements. <cit.> show that the threshold-based uncertainty sampling (i.e., to query only the samples whose uncertainty is above a threshold) can be interpreted as performing a preconditioned stochastic gradient step on a smoothed version of the population zero-one loss that converges to the population zero-one loss. The non-convexity of the zero-one loss implies that the threshold-based uncertainty sampling could be trapped in local minima, suggesting the necessity of a warm start. <cit.> consider a similar threshold-based uncertainty, where the threshold is chosen implicitly via querying the several least confident samples under the Bayes optimal hypothesis. For a handcrafted linearly separable distribution, <cit.> prove a finite-sample lower bound for logistic regression in the high-dimensional case under the empirical risk minimization algorithm as in <cit.>, and argue that uncertainty sampling is less efficient than passive learning both theoretically and empirically. Apart from the pool-based setting and the threshold-based uncertainty, <cit.> design their algorithm in the stream-based setting with a margin-based uncertainty. They prove that the stream-based algorithm will converge with an O(1/T) error rate under a strictly linearly separable data distribution.
Despite all those efforts, there has been no systematic theoretical understanding of data efficiency or even the convergence of uncertainty sampling. Besides, existing theoretical works are restricted to particular forms of uncertainty sampling algorithms. In addition, all existing theoretical results are made for linear classifiers. And there is little theoretical understanding of the probabilistic-based uncertainty measurements <cit.> or the regression problem. In this paper, we propose a general framework to analyze uncertainty sampling algorithms and introduce a notion of equivalent loss. We establish that the uncertainty sampling algorithms essentially optimize against such an equivalent loss objective. By inspecting the surrogate and the optimization properties of the equivalent loss, we not only recover existing theoretical results but also generalize to uncertainty sampling algorithms under other contexts such as multi-class classification and regression. Our contribution can be summarized as follows:
* We introduce the equivalent loss as a loss function specified through a partial differential equation in terms of the used uncertainty and the original loss function. Then we establish that uncertainty sampling algorithms essentially optimize against this equivalent loss.
* For binary classification, we examine the existing uncertainty measures and theoretical results through the lens of the equivalent loss. Specifically, we show that the error rate of the margin-based uncertainty in <cit.> will converge to zero regardless of the underlying data distribution, in contrast to their assumption that the data be strictly separable. We recover the non-convexity observations of the threshold-based models <cit.>. We also analyze the probabilistic uncertainty models, showing their Fisher consistency.
* We generalize this notion to the multi-class classification and regression problems with our loss-as-uncertainty principle. Equipped with such an uncertainty measure, the convergence can be proved for any convex and non-negative loss functions for binary classification, multi-class classification, and regression.
* We also study several other variants of uncertainty sampling algorithms and draw connections with risk-sensitive loss and distributional robustness. Specifically, we show the exponential-loss-as-uncertainty will be minimizing the softmax of the loss, the top-k-max uncertainty sampling essentially minimizes the conditional value at risk (CVaR), and the mixture of uniform and uncertainty sampling recovers a distributionally robust optimization formulation.
§ PROBLEM SETUP
Consider the problem of predicting the label Y from the feature X, where (X, Y) is independently drawn from an unknown distribution 𝒫. We denote the marginal distribution of X by 𝒫_X and the conditional distribution of Y given X by 𝒫_Y|X. Let 𝒳 and 𝒴 denote the supports of X and Y, respectively. Suppose 𝒳⊂ℝ^d is a bounded set with an upper bound of M_X with respect to the Euclidean norm. For a binary classification problem, 𝒴 = {-1, +1}. For a K-ary classification problem, 𝒴 = [K] = {1, …, K}. For a regression problem, we assume 𝒴=[-M_Y, M_Y] is a bounded set with an upper bound of M_Y.
For the canonical setting of supervised learning, a full dataset of both features and labels is completely revealed to the learner at the beginning. For active learning, the learner starts with only observations of the features X's and needs to decide which of the labels Y's to query or whether to query the labels Y's. In this paper, we consider two mainstream settings for active learning.
* Stream-based setting. The dataset 𝒟_T^X consists of T i.i.d. features {X_t}_t=1^T from 𝒫_X. The samples arrive sequentially. At each time t, upon the arrival of X_t, the learner decides whether to query the sample: if so, Y_t is revealed to the learner; otherwise, it moves on to the next time period. The feature and the label (if queried) of the t-th time period will be discarded (but not cached) after the time period. Without loss of generality, we still assume the presence of the label Y_t sampled from 𝒫_Y|X=X_t; it may just not be revealed to the learner depending on the querying decision.
* Pool-based setting. The dataset 𝒟_n^X consists of n i.i.d. features {X_i}_i=1^n from 𝒫_X. The whole dataset 𝒟_n^X is revealed all at once to the learner at the beginning. The learner queries samples from the dataset sequentially. Unlike the stream-based setting, the information from past queries will be retained and can be repeatedly utilized by the learner.
Throughout the paper, we consider a parameterized family of hypotheses denoted by ℱ = {f_θ(·): θ∈Θ, f_θ(·):𝒳→𝒴}. We assume the parameter set Θ has an upper bound of M_Θ under the Euclidean norm. We denote the loss function l:𝒴×𝒴→ℝ, i.e., l(Ŷ, Y) measures the loss of predicting Y with Ŷ. With a slight overload of the notation, we denote l(θ; (X, Y)) = l(f_θ(X), Y) as the prediction loss of the model f_θ(·) on the sample (X, Y).
For uncertainty sampling algorithms, a key component is an uncertainty function/measure U(θ; X):𝒳→ [0, ∞). The uncertainty function quantifies the uncertainty about a sample X given the model parameter θ. The specification of the uncertainty function usually depends on both the underlying hypothesis class ℱ and the loss function l
<cit.>. The general idea is to spend more querying efforts on those samples that the current model is uncertain about, in the hope to maximize the improvement of the model learning.
§ UNCERTAINTY SAMPLING FOR BINARY CLASSIFICATION
We start with probably the simplest yet still intuitive example: binary classification. For binary classification, we are interested in the 0-1 loss, where the 0-1 loss for any hypothesis f can be defined as l_01(f(X), Y) ≜ 1{sign(f(X)) ≠ Y}. The expectation of the 0-1 loss is called the 0-1 risk, which is denoted as L_01(f) ≜ ℙ(sign(f(X))≠ Y). The ultimate goal is to give a theoretical bound on the excess 0-1 risk L_01(f) - inf_g ∈𝒢 L_01(g), where 𝒢 is the class of all measurable functions, and the latter term inf_g ∈𝒢 L_01(g) is called the Bayes risk. Any hypothesis that reaches the Bayes risk is called Bayes-optimal or Bayes.
§.§ Generic algorithm under stream-based setting
We begin our discussion with the binary classification problem. In the following, we present a generic algorithm of uncertainty sampling under the stream-based setting. Specifically, Algorithm <ref> queries the data samples based on the model uncertainty and updates the model parameter according to a gradient descent procedure. It takes the uncertainty function U(θ; X) as an input. At each time t, the algorithm observes only the feature X_t and calculates the uncertainty U(θ_t; X_t). Here, without loss of generality, we assume the uncertainty takes values in [0,1]. Then, with probability U(θ_t; X_t), the algorithm queries the label of the sample and performs a gradient descent update; with probability 1-U(θ_t; X_t), the algorithm does not make a query and hence does not update the parameters. In this way, a larger value of uncertainty will encourage the querying of a sample.
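For concreteness, a minimal Python sketch of this stream-based procedure is given below; the interface (a stream of (x, query_label) pairs and a constant step size) is our own simplification and not part of the formal algorithm.

import numpy as np

def stream_uncertainty_sampling(stream, grad_loss, uncertainty, theta0, eta=0.1, seed=0):
    # stream      : iterable of (x, query_label) pairs; query_label() reveals y only if called
    # grad_loss   : (theta, x, y) -> gradient of l(theta; (x, y))
    # uncertainty : (theta, x) -> query probability U(theta; x) in [0, 1]
    rng = np.random.default_rng(seed)
    theta = np.array(theta0, dtype=float)
    for x, query_label in stream:
        if rng.random() < uncertainty(theta, x):           # query with probability U(theta_t; X_t)
            y = query_label()                               # the label is revealed only when queried
            theta = theta - eta * grad_loss(theta, x, y)    # gradient step on the queried sample
        # otherwise: skip the sample and keep theta unchanged
    return theta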
The core idea of the algorithm is to query only the samples that the model is uncertain about, and the uncertainty function quantifies such uncertainty. In the following, we review three examples of the uncertainty function used in the literature as special cases of the generic algorithm.
[Probabilistic model <cit.>]
A probabilistic model outputs q(θ; X): 𝒳→ [0,1] to estimate the true conditional probability ℙ(Y=+1|X). The entropy uncertainty <cit.> considers the entropy of q(θ; X):
U(θ; X) ≜ -[q(θ; X) log(q(θ; X)) + (1-q(θ; X))log (1-q(θ; X))],
where q = q(θ; X) ∈ (0, 1). The least confidence uncertainty <cit.> considers
U(θ; X) ≜ 1 - max{q(θ; X), 1-q(θ; X)} = min{q(θ; X), 1-q(θ; X)}.
These two uncertainties are often accompanied by the following cross-entropy loss that trains the probabilistic model
l(θ;(X, Y)) = -[1{Y=+1}log q(θ;X) + 1{Y=-1}log(1-q(θ;X))]
where 1{·} is the indicator function. Equivalently, we can also represent the loss function by
l(Ŷ, Y) = -log((1+Ŷ· Y)/2)
where Ŷ = 2 q(θ; X) - 1 ∈ [-1, 1] is the predicted expectation.
For a probabilistic model, q(θ; X) reflects the confidence of the prediction. When q(θ; X) is close to 1, the model is confident that Y=+1, while q(θ; X) is close to 0, it is confident that Y=-1. For both ends, the uncertainty is small for both the entropy uncertainty and the least confidence uncertainty. When the model is less confident about the prediction and outputs q(θ; X) close to 1/2, the uncertainty becomes larger.
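A possible implementation of these two uncertainty measures and of the cross-entropy gradient, assuming for illustration that the probabilistic model q(θ; X) is a logistic model (the example itself does not fix a particular parameterization), is:

import numpy as np

def q_prob(theta, x):
    # Illustrative probabilistic model: logistic q(theta; x) = P(Y = +1 | X = x)
    return 1.0 / (1.0 + np.exp(-np.dot(theta, x)))

def entropy_uncertainty(theta, x):
    q = np.clip(q_prob(theta, x), 1e-12, 1 - 1e-12)
    return -(q * np.log(q) + (1 - q) * np.log(1 - q))

def least_confidence_uncertainty(theta, x):
    q = q_prob(theta, x)
    return min(q, 1 - q)

def cross_entropy_grad(theta, x, y):
    # Gradient of -[1{y=+1} log q + 1{y=-1} log(1-q)] for the logistic parameterization
    return (q_prob(theta, x) - (y == 1)) * x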
[Margin-based model <cit.>] Another class of classification model is margin-based, such as support vector machines (SVMs). Consider a linear SVM model that predicts Y with the sign of θ^⊤ X. The margin-based uncertainty function is defined by
U_μ(θ; X) ≜ 1/1+μ |θ^⊤ X|
where μ>0 is a hyper-parameter. The associated loss function for learning such margin-based models is squared margin loss
l(θ;(X, Y)) = (max{0, 1-Y·θ^⊤ X})^2.
Equivalently, the loss function can be written in the form of
l(Ŷ, Y) = (max{0, 1-Y·Ŷ})^2,
where Ŷ = θ^⊤ X.
For linear classifiers, |θ^⊤ x| is proportional to the distance from a sample to the classification hyperplane. The margin-based uncertainty captures the intuition that the closer a sample is to the classification hyperplane, the more uncertain the learner is about the sample.
[Threshold-based uncertainty <cit.>] The pool-based version of Example 2 works with a fixed set of samples and results in a threshold-based uncertainty function. At each time step, the algorithm will query the most uncertain sample in the given dataset with index i_t = argmin_i∈𝒰_t |θ^⊤ X_i|, where the set 𝒰_t contains the indices of unqueried samples at time t. Such a procedure can be captured by the following uncertainty function
U(θ; X) ≜ 1{|θ^⊤ X| ≤γ}.
where γ>0 is a hyper-parameter that may change over time. <cit.> analyze this uncertainty function and derive some negative theoretical results on its performance. Specifically, they consider the following loss for a logistic regression model
l(θ;(X, Y)) = log(1+exp(-Y ·θ^⊤ X)).
Equivalently, the loss can be written as
l(Ŷ, Y) = log(1+exp(-Y ·Ŷ)),
where Ŷ = θ^⊤ X.
As in the margin-based model, the quantity |θ^⊤ x| reflects the confidence of the prediction, and thus it is inversely proportional to the uncertainty. The threshold-based uncertainty queries only those samples where the confidence is smaller than the threshold γ.
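For completeness, a minimal sketch of the margin-based and threshold-based uncertainties of Examples 2 and 3, together with the gradients of the squared margin loss and the logistic loss, is given below; the default hyper-parameter values are ours and purely illustrative.

```python
import numpy as np

def margin_uncertainty(theta, x, mu=1.0):
    # Example 2: U_mu(theta; x) = 1 / (1 + mu * |theta^T x|)
    return 1.0 / (1.0 + mu * abs(theta @ x))

def threshold_uncertainty(theta, x, gamma=0.5):
    # Example 3: U(theta; x) = 1{|theta^T x| <= gamma}
    return 1.0 if abs(theta @ x) <= gamma else 0.0

def squared_margin_grad(theta, x, y):
    # gradient of (max{0, 1 - y * theta^T x})^2 with respect to theta
    s = y * (theta @ x)
    return -2.0 * max(0.0, 1.0 - s) * y * x

def logistic_grad(theta, x, y):
    # gradient of log(1 + exp(-y * theta^T x)) with respect to theta
    s = y * (theta @ x)
    return -y * x / (1.0 + np.exp(s))
```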
§.§ Equivalent loss
Now we show a general property of Algorithm <ref>: with this selective querying procedure, the algorithm essentially optimizes against an alternative loss function which we call the equivalent loss; this alternative loss is jointly determined by the uncertainty function U and the original loss function l. Specifically, if we combine the two cases of query and not query for the update step in Algorithm <ref>, we obtain the following
_ξ_t[θ_t+1|θ_t,X_t,Y_t] = θ_t - η_t · U(θ_t; X_t)·∂ l(θ;(X_t,Y_t))/∂θ|_θ=θ_t
where the expectation is taken with respect to ξ_t which is the sampling random variable that determines whether to query the sample.
Suppose (for the moment) there exists a loss function l̃ such that
∂l̃(θ;(x,y))/∂θ=U(θ; X)·∂ l(θ;(x,y))/∂θ
holds for all θ∈Θ and (x,y)∈𝒳×𝒴 (we will discuss the existence of l̃ in the following subsection). Then the parameter update can be written as
_ξ_t[θ_t+1|θ_t,X_t,Y_t] = θ_t - η_t ·∂l̃(θ;(X_t,Y_t))/∂θ|_θ=θ_t.
Suppose there exists l̃ satisfying (<ref>). Then Algorithm <ref> essentially performs stochastic gradient descent (SGD) with respect to the loss function l̃.
We say l̃ is the equivalent loss for the uncertainty function U and the original loss function l, if it satisfies (<ref>).
The equivalent loss l̃ can be viewed as a surrogate loss of the original loss l twisted by the uncertainty function U. If l̃ exists, it provides a convenient handle to understand and analyze the algorithm. In the following, we derive the equivalent loss l̃ for the previous examples.
[Continued]
Example 1 considers a probabilistic model q(θ; X) that estimates the true conditional probability ℙ(Y=+1|X), and the loss function is the cross-entropy loss.
* For the entropy uncertainty, the equivalent loss
l̃(θ; (X,Y)) =
qlog(q) + (1-q)log(1-q) - 1{Y=+1}·Li_2(q) - 1{Y=-1}·Li_2(1-q) + Li_2(1),
where q=q(θ;X) stands for the prediction model and the function Li_2(z) = -∫_0^z log(1-u)/udu is the Spence's function.
* For the least confidence uncertainty, the equivalent loss
l̃(θ; (X,Y)) =
-1{Y=-1}·log(2(1-q)) - q + log(2), if q < 1/2;
-1{Y=+1}·log(2q) - (1-q) + log(2), if q ≥1/2.
where q=q(θ;X) stands for the prediction model.
[Continued] For the margin-based model, the equivalent loss for the margin-based uncertainty function (defined in Example 2) and the squared margin loss is
l̃_μ(θ;(X,Y)) =
-2/μ(1/μ - 1)log(1-μ Y ·Ŷ) - 2/μ Y ·Ŷ + C, if Y ·Ŷ≤ 0;
-2/μ(1/μ + 1)log(1+μ Y ·Ŷ) + 2/μ Y ·Ŷ + C, if Y ·Ŷ∈ (0, 1);
0, if Y ·Ŷ≥ 1,
where the prediction Ŷ = θ^⊤ X, the constant C = 2/μ(1/μ + 1)log(1+μ) - 2/μ, and the hyper-parameter μ>0 is the same one that defines the margin-based uncertainty function.
[Continued]
The equivalent loss for the threshold-based uncertainty and the logistic loss function is given by the following:
l̃_γ(θ;(X,Y)) = log(1+exp(γ)), if Y ·Ŷ≤ -γ,
log(1+exp(-Y ·Ŷ)), if Y ·Ŷ∈ (-γ, γ),
log(1+exp(-γ)), if Y ·Ŷ≥γ
where the prediction Ŷ = θ^⊤ X and the hyper-parameter γ is the same one that specifies the threshold-based uncertainty function.
For these three examples, the derivation of the equivalent loss is standard: it amounts to solving the partial differential equation (PDE) (<ref>), and we defer the details to Appendix <ref>. We remark that these equivalent loss functions specify the objective function that Algorithm <ref> optimizes, and they are jointly determined by the pair of the uncertainty function and the original loss function.
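The defining relation (<ref>) can also be checked numerically. The sketch below compares, for the pair in Example 2 (squared margin loss with margin-based uncertainty), a central finite difference of the equivalent loss against U(s)·dl/ds in terms of the margin s = Y·θ^⊤ X; the value of μ and the test points are arbitrary choices of ours.

```python
import numpy as np

mu = 0.7

def sq_margin_loss(s):        # original loss l(s) = max(0, 1 - s)^2, with s = y * theta^T x
    return max(0.0, 1.0 - s) ** 2

def margin_uncertainty(s):    # U_mu(s) = 1 / (1 + mu * |s|)
    return 1.0 / (1.0 + mu * abs(s))

def equivalent_loss(s):       # piecewise equivalent loss from Example 2 (continued)
    C = 2 / mu * (1 / mu + 1) * np.log(1 + mu) - 2 / mu
    if s >= 1:
        return 0.0
    if s > 0:
        return -2 / mu * (1 / mu + 1) * np.log(1 + mu * s) + 2 / mu * s + C
    return -2 / mu * (1 / mu - 1) * np.log(1 - mu * s) - 2 / mu * s + C

# check that d l~/ds equals U(s) * dl/ds at a few margins via central differences
for s in [-1.3, -0.2, 0.4, 0.9]:
    h = 1e-6
    num_grad = (equivalent_loss(s + h) - equivalent_loss(s - h)) / (2 * h)
    ana_grad = margin_uncertainty(s) * (-2.0 * max(0.0, 1.0 - s))
    print(f"s={s:+.2f}  finite difference={num_grad:+.6f}  U*dl/ds={ana_grad:+.6f}")
```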
§.§ Surrogate property of the equivalent loss
The derivation of the equivalent loss makes explicit the objective function that the uncertainty sampling procedure optimizes. Then a natural question is whether the equivalent loss is a “suitable” loss for the binary classification problem. Recall that the practical goal of training a binary classifier is commonly to achieve a high classification accuracy, i.e., to optimize the binary loss l_01(Ŷ, Y) ≜ 1{sign(Ŷ) ≠ Y}. While the binary loss is in general computationally intractable <cit.>, the margin loss, the logistic loss, and the cross-entropy loss can all be viewed as a surrogate loss of the binary loss that enjoys better computational structure such as convexity. In this light, the equivalent loss derived from uncertainty sampling can also be viewed as a surrogate of the binary loss. Following the principles of <cit.>, we can examine the suitability of an equivalent loss and hence certify the properness of the uncertainty function.
A loss function l(·, ·) is said to be a surrogate of the binary loss if there exists a continuous, non-negative, and non-decreasing function ψ such that for any measurable function f:𝒳→𝒴 and any probability distribution 𝒫 on 𝒳×𝒴 = 𝒳×{-1, +1},
ψ(L_01(f) - inf_g ∈𝒢 L_01(g) ) ≤𝔼[l(f(X), Y) ] -inf_g ∈𝒢𝔼[l(g(X), Y)],
where 𝒢 is the set of all measurable functions, and L_01(f) 𝔼[l_01(f(X), Y) ] denotes the expected binary loss. All the expectations are taken with respect to the distribution 𝒫.
The definition establishes a connection between the oracle generalization bound under the loss l and that under the binary loss. One can thus verify the properness of a loss l by checking whether training a model with l also leads to a performance guarantee under the binary loss. An important property of the link function ψ is that if z→ 0 as ψ(z) → 0, then the loss function is classification-calibrated <cit.>. This ensures that the minimizer of the loss l among all the measurable functions will be the Bayes optimal classifier; the property is also known as the Fisher consistency.
For any loss function l that can be expressed as l(Ŷ, Y), one can construct a link function ψ. Furthermore, the constructed link function is mini-max optimal in the sense that for any non-negative loss l, any |𝒳| ≥ 2, any risk level ζ∈ [0, 1], and any precision ϵ > 0, there exists a probability distribution on 𝒳×{-1, +1} such that L_01(f) - inf_g ∈𝒢 L_01(g) = ζ and
ψ(ζ) ≤𝔼[l(f(X), Y) ] -inf_g ∈𝒢𝔼[l(g(X), Y)] ≤ψ(ζ)+ϵ.
The loss l is classification-calibrated (Fisher consistent) if and only if for any z ∈ (0, 1], ψ(z) > 0.
<cit.> provide a way to derive the link function ψ (See our Appendix <ref> for more details). They further prove that this surrogate property's link function is mini-max optimal by the existence of a probability distribution to make the surrogate upper bound arbitrarily tight. They also establish some equivalence between the link function and the Fisher consistency. While such a conclusion is only stated for margin-based models where l(Ŷ, Y) = l(Ŷ· Y) in <cit.>, their analysis in Theorem <ref> indeed applies to more general loss functions such as the cross entropy written as loss l(Ŷ, Y) = 1{Y=+1}· l(Ŷ, +1) + 1{Y=-1}· l(Ŷ, -1).
In the following proposition, we re-examine the previous examples and calculate the corresponding link functions against binary loss.
All the equivalent losses in Example 1, Example 2, and Example 3 are surrogate losses for binary loss. Specifically,
* Example 1 – entropy uncertainty (see Figure <ref>). The link function
ψ(z) = 1+z/2·Li_2(1+z/2) + 1-z/2·Li_2(1-z/2) - Li_2(1/2) - 1+z/2·log(1+z) - 1-z/2·log(1-z)
= log(2) · z^2 + o(z^2) as z→ 0,
where Li_2(z) is the Spence's function as defined earlier.
* Example 1 – least confidence uncertainty (see Figure <ref>)
ψ(z) = 1+z/2log(1+z) - z/2 = z^2/2 + o(z^2) as z→ 0.
* Example 2 – margin-based uncertainty (see Figure <ref>)
ψ_μ(z) = 2/μ^2 (1+μ z) log(1+μ z) - 2/μ z = 2 z^2+ o(z^2) as z→ 0.
* Example 3 – threshold-based uncertainty (see Figure <ref>)
ψ_γ(z) =
1/2[(1+z) log(1+z) + (1-z)log(1-z)], if z ≤ z_0;
1/2[(1+z) log(1+z_0) + (1-z)log(1-z_0)], if z ≥ z_0,
where z_0 is a constant determined by the threshold γ
z_0 ≜ 2(exp(γ/1+exp(γ)) + exp(-γ/1+exp(-γ)))^-1 - 1.
As z→ 0, ψ_γ(z) = z^2 + o(z^2).
As noted earlier, the link function helps to transfer the excessive risk bound under the equivalent loss to that under the binary loss. In the next subsection, we pursue such a roadmap by first establishing the convergence rate under the equivalent loss and then transferring it to a performance guarantee under the binary loss.
§.§ Convergence analysis for convex loss
From the perspective of equivalent loss, the stream-based uncertainty sampling of Algorithm <ref> can be viewed as a stochastic gradient descent algorithm to minimize the objective function [l̃(θ;(X,Y))]. Now we establish the convergence rate against such an objective.
A loss function l(θ; (X, Y)) is said to be a convex loss if it is convex with respect to θ for any X ∈𝒳 and Y ∈𝒴.
When the equivalent loss is convex, we let
θ̃^* ≜ argmin_θ∈Θ𝔼[l̃(θ; (X, Y))]
and have the following convergence bound.
Suppose that (i) for the original loss, ‖∂ l(θ; (X, Y))/∂θ‖_2≤ G for all θ∈Θ almost surely for (X,Y)∼𝒫; (ii) for the initial point, ‖θ_1 - θ̃^*‖_2 ≤ D; (iii) the equivalent loss is a convex loss. Then with the step size η_t = D/G√(T+1), Algorithm <ref> yields the following bound
𝔼[l̃(θ̅_T+1, (X, Y))]
≤1/T∑_t=1^T𝔼[l̃(θ_t, (X, Y) )]
≤𝔼[l̃(θ̃^*, (X, Y))] + GD/√(T+1).
We would like to draw a comparison between the bound in Proposition <ref> and the bound obtained by a standard SGD algorithm against the equivalent loss objective. Note that Algorithm <ref> queries only part of the samples, but it achieves the same order of 1/√(T) as the standard SGD which naively queries all the samples. The sacrifice here is the larger variance which is reflected by the constant G in the bound; comparatively, the corresponding gradient variance will be smaller for the standard SGD against the equivalent loss objective.
The analysis of Proposition <ref> follows the standard analysis of stochastic gradient descent, and the bound is stated in expectation. For high probability bounds, a typical concentration argument will yield a similar bound with an additional log(T) factor. We note that the rate of O(1/√(T)) can be further improved to O(1/T) for strongly convex functions. Furthermore, if we only consider the last iteration θ_T+1 rather than the average θ̅_T+1, <cit.> give an expectation bound of O(log(T)/√(T)) (or O(log(T)/T)) for non-smooth convex (or strongly convex) functions.
Suppose the equivalent loss l̃ induced by Algorithm <ref> is a surrogate loss for the binary loss with link function ψ. Also, the parameter space Θ satisfies the conditions in Proposition <ref>, and the step size η_t = D/G√(T+1).
Then we have
𝔼[L_01(f_θ̅_T+1)] - inf_g ∈𝒢 L_01(g) ≤ψ^-1(GD/√(T+1) + (𝔼[l̃(f_θ̃^*(X), Y)] - inf_g ∈𝒢𝔼[l̃(g(X), Y)] )),
where L_01(f) = 𝔼[l_01(f(X), Y) ] denotes the expected binary loss as earlier, and the expectation is with respect to the training data and the algorithm's randomness. Here 𝒢 is the set of all measurable functions.
Theorem <ref> exemplifies how the performance guarantee under the equivalent loss l̃ (Proposition <ref>) can induce an excessive risk bound under the binary loss through the link function ψ. There are two terms on the right-hand side which correspond to estimation error and approximation error, respectively. The first term comes from the SGD learning procedure, and it captures the estimation suboptimality of θ̅_T+1 against the best parameter θ̃^*. While such an error bound on the estimation suboptimality will generally involve the complexity of the hypothesis class, the online nature of the stream-based setting enables a neat analysis akin to other online convex optimization algorithms. The second term captures the approximation suboptimality between the best parameter θ̃^* in the prescribed hypothesis class and the best one in the class of all measurable functions. The term will shrink as we enlarge the hypothesis class. We note that this approximation term does not pertain to the uncertainty sampling algorithm or the equivalent loss, but it also appears in the standard supervised learning setting when transforming the excessive risk bound under margin/cross-entropy loss to that under binary loss.
We make the following two remarks based on Theorem <ref>:
* Convergence rate: We note that the link function plays a key role in transforming the excessive risk bound: it determines the convergence rate under the binary loss. For all the examples calculated so far (See Proposition <ref>), the link function ψ(z) = Θ(z^2) as z→ 0, which implies that ψ^-1(z) ∼Θ(z^-1/2). Thus it will lead to a convergence rate of O(T^-1/4) under the binary loss. This does not mean a performance deterioration of uncertainty sampling. For comparison, under the supervised learning regime, the margin loss corresponds to a link function ψ(z) = Θ(z), while the cross-entropy loss and the logistic loss, among others, all correspond to a link function ψ(z) = Θ(z^2). More importantly, we emphasize that T in the bound represents the number of arrived samples in Algorithm <ref> but not the number of queried samples. That is, the uncertainty sampling algorithm achieves the same rate of theoretical convergence for the cross-entropy loss but uses potentially much fewer queried samples. For the margin loss, we provide a short discussion in the next section arguing why it is not compatible with the existing uncertainty sampling algorithms.
* Convexity: An important condition in obtaining the bound is the convexity of the loss function with respect to the underlying parameter. While the non-convexity induced by the neural networks is commonly acknowledged as a benign non-convexity, the non-convexity induced by the loss function such as the binary loss or the truncated loss which may cause bad local minima is the type of non-convexity we try to avoid. This gives a new perspective to understanding the existing uncertainty functions:
* The equivalent loss for either the entropy uncertainty or the least confidence uncertainty is convex with respect to the predicted probability q = q(θ; X) for Example 1.
* The equivalent loss is convex for the squared margin loss in Example 2. The convexity can thus explain why <cit.> develop the algorithm based on the squared margin loss rather than the vanilla margin loss: any margin-based uncertainty U(θ; X) = h(|θ^⊤ X|) for some non-constant, non-increasing function h(·) will induce a non-convex equivalent loss when the original loss is the margin loss (see Proposition <ref>).
* The equivalent loss is non-convex for the truncated loss in Example 3 (see Figure <ref>), which provides an explanation for the bad performance of uncertainty sampling <cit.>.
This discussion underlines that in addition to the surrogate property, we desire the equivalent loss induced by the uncertainty function also has a convexity structure.
To induce a concrete error guarantee, we take a closer look at the excessive risk. Suppose the best population risk is achieved by some hypothesis g^* in the function class of all measurable functions 𝒢. Recall that the function class where we choose our hypothesis is ℱ, which is a more restricted one compared to 𝒢. We denote the best hypothesis we can get by f^*∈ℱ, parametrized by θ^*. Following the basic guideline of theoretical machine learning, we decompose the excessive risk into two terms:
𝔼[l(f̂(X), Y)] - 𝔼[l(g^*(X), Y)]
= 𝔼[l(f̂(X), Y)] - 𝔼[l(f^*(X), Y)] (estimation)
= + 𝔼[l(f^*(X), Y)] - 𝔼[l(g^*(X), Y)] (approximation).
Let's deal with the above two terms separately. We start with the second term. The approximation term describes the distance between the best model f^* we can get in the hypothesis class ℱ and the best model g^* in the class of all measurable functions 𝒢 from the perspective of expected loss. Such a reference hypothesis g^* can be considered an oracle. The approximation term can be small when the model misspecification issue is not severe for a simple hypothesis class such as linear functions, or when the hypothesis class is rich enough to approximate almost every continuous function such as modern neural network models. The detailed discussion on the approximation term goes beyond the scope of this paper.
Analyzing the estimation term plays a central role not only in the classical statistical learning theory but also in nowadays theoretical machine learning research. A typical way is to further decompose the estimation term into a generalization term, an optimization term, a non-positive term, and a concentration term <cit.>, which is more common in the pool-based setting. Unlike the pool-based setting, in the stream-based version of our SGD Algorithm <ref>, the newly arriving sample (X_t, Y_t) is derived from the distribution 𝒫 rather than a fixed set in the pool-based setting, which enables the direct analysis on the estimation term. We only provide probably the easiest way to deal with the estimation term just to give an example. Following the classical results of convex optimization, the estimation error can be handled neatly with the convexity condition.
§.§ Two more examples
We conclude our discussion of the binary classification problem with two more examples.
[Margin loss with margin-based uncertainty induces non-convexity]
As noted earlier, all the link functions calculated so far
for the equivalent losses have that ψ(z) is of order z^2 as z → 0. For the standard supervised learning problem, the margin loss (also known as the Hinge loss) has ψ(z) = z. In fact, we can calculate the link function for the equivalent loss associated with the margin loss and the margin-based uncertainty as follows. The margin loss is
l(θ; (X, Y)) = max{0, 1 - Y ·θ^⊤ X}
and the margin-based uncertainty is
U_μ(θ; X) = 1/1+μ |θ^⊤ X|.
Then the equivalent loss is
l̃(θ; (X, Y)) = 1/μlog(1 - μ· Y ·θ^⊤ X) + 1/μlog(1+μ), if Y ·θ^⊤ X ≤ 0;
-1/μlog(1 + μ· Y ·θ^⊤ X) + 1/μlog(1+μ), if Y ·θ^⊤ X ∈ (0, 1);
0, if Y ·θ^⊤ X ≥ 1.
And its link function is
ψ_μ(z) = log(1+μ)/μ· z,
which is of the desirable linear order. However, as plotted in Figure <ref>, the equivalent loss is non-convex with respect to the margin θ^⊤ X. Thus Proposition <ref> no longer applies, and practically, the loss may induce bad local minima. This also justifies the choice of the squared margin loss in <cit.>. The following proposition establishes that for the margin loss, if the induced equivalent loss is convex, then any differentiable margin-based uncertainty function must be constant.
Consider the margin loss and an uncertainty function that can be expressed by U(θ; X) = h(|θ^⊤ X|) where h(·) is a non-increasing, non-negative, and piece-wise differentiable function. Then h(·) must be a constant function,
h(·) ≡ C
for some C>0 if the equivalent loss is continuous and convex.
The non-increasing requirement is natural in that we want to assign a larger uncertainty value to a sample with a smaller margin. The proposition gives a negative result on designing uncertainty functions for the margin loss in that there does not exist a non-trivial uncertainty function that retains the convexity structure for the equivalent loss. While Proposition <ref> and Theorem <ref> provide positive results on establishing the convergence rate of the uncertainty sampling algorithm, Proposition <ref> and Example <ref> give negative results on the non-convexity issue associated with some uncertainty functions.
Going beyond analyzing the existing uncertainty functions, we can apply the machinery to derive new uncertainty functions such as the following example.
[Exponential loss with exponential uncertainty]
The loss function and the uncertainty function are defined by
l(θ; (X, Y)) = exp(-Y·θ^⊤ X).
U_μ(θ; X) = exp(-μ |θ^⊤ X|).
The equivalent loss takes a similar shape as the exponential loss:
l̃_μ(θ; (X, Y)) = 1/1+μ·exp(-(1+μ) Y ·θ^⊤ X) + μ/1+μ, if Y ·θ^⊤ X ≥ 0;
1/1-μ·exp(-(1-μ) Y ·θ^⊤ X) - μ/1-μ, if Y ·θ^⊤ X < 0.
The link function for the surrogate property is
ψ_μ(z) = 1/1-μ^2(1-μ z - (1-z)^1+μ/2 (1+z)^1-μ/2) = z^2 + o(z^2) as z→ 0.
See Figure <ref> for a visualization of these functions.
We note that this equivalent loss not only maintains the convexity of the exponential loss but also exhibits strong convexity when both Θ and 𝒳 are bounded. This is a property that does not hold for equivalent losses derived from margin-based losses but can be helpful in accelerating the convergence rate of gradient-based algorithms.
§.§ Numerical illustration
After the previous theoretical discussions, we use a numerical example to demonstrate the equivalence between uncertainty sampling and the equivalent loss, as well as the role of the convexity conditions. We adopt the synthetic data generation from <cit.>, where the feature points follow a mixture of two-dimensional Gaussian distributions. All Gaussians have standard deviation (0.5, 0.5), and their centers are located at 4 distinct positions: (-2, 0), (2, 0), (0, -2), (0, 2). The mixture proportions of the four Gaussians are 20%, 30%, 40%, 10%, where the former two carry positive labels and the latter two negative. For each example, we start from a random initialization, apply both the original loss minimization and the equivalent loss minimization algorithms to the synthetic data, and plot their final decision boundaries. For the uncertainty sampling, we also choose random initial points, set the step size to be small enough (10^-4), and run sufficiently many iterations (10^7). The final decision boundaries obtained by uncertainty sampling are compared with the two empirical risk minimization boundaries.
Figures <ref>, <ref>, <ref>, <ref>, and <ref> show the final decision boundaries obtained by the different algorithms. We observe that the uncertainty sampling algorithm achieves almost the same decision boundary as the equivalent loss minimization, rather than the original loss minimization. Besides, Figures <ref> and <ref> indicate that the corresponding equivalent losses are non-convex and have local minima, which coincides with our theoretical computation. A noteworthy fact is that although we show the non-convexity of the logistic regression model under the cross-entropy loss and the probabilistic uncertainties, Figures <ref> and <ref> show that it may have no bad local minima or be able to avoid being trapped in them.
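For readers who wish to reproduce a setup of this kind, the following sketch generates the two-dimensional Gaussian mixture described above; it is our own re-implementation of the described data-generating process and not the code released with <cit.>.

```python
import numpy as np

def make_mixture(n, seed=0):
    """Synthetic data as described above: four 2-D Gaussians with standard deviation (0.5, 0.5)."""
    rng = np.random.default_rng(seed)
    centers = np.array([[-2.0, 0.0], [2.0, 0.0], [0.0, -2.0], [0.0, 2.0]])
    weights = np.array([0.2, 0.3, 0.4, 0.1])    # mixture proportions of the four components
    labels = np.array([+1, +1, -1, -1])         # first two components positive, last two negative
    comp = rng.choice(4, size=n, p=weights)
    X = centers[comp] + 0.5 * rng.standard_normal((n, 2))
    y = labels[comp]
    return X, y

X, y = make_mixture(2000)
```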
§ LOSS AS UNCERTAINTY: MULTI-CLASS CLASSIFICATION AND REGRESSION
In the previous section, we discuss the problem of binary classification and propose the notion of equivalent loss to verify the properness of an uncertainty function. However, the discussion, along with the uncertainty functions, has been quite specialized to the problem of binary classification and therefore can hardly be applied to the more general multi-class classification and regression problems. In particular, for binary classification, the uncertainty function and the loss function can be expressed by a single-variable function of either the predicted probability q or the margin θ^⊤ X. While this usually ensures the existence of the equivalent loss l̃, the structure no longer holds for multi-class classification and regression problems. In this section, we develop a general principle for designing uncertainty functions – “loss as uncertainty” – which covers binary classification, multi-class classification, and regression problems as special cases. The idea is, rather than handcrafting uncertainty functions case-by-case, we propose using the conditional expected loss as the uncertainty function. Such an uncertainty function endows the learning problem with nice analytical properties, and it provides a guideline for the uncertainty quantification/calibration of a prediction model.
§.§ Loss as uncertainty
We first define the conditional loss which marginalizes Y given the feature X.
Define the conditional (expected) loss as
L(θ; X) ≜ 𝔼[l(θ; (X, Y))| X]
where the expectation is taken with respect to the conditional distribution of Y|X with (X,Y)∼𝒫.
Note that the conditional loss is a function of the parameter θ and the feature X. Suppose we let the uncertainty function simply be the conditional loss. Then we have the equivalent loss being exactly the square of the original loss.
Suppose the uncertainty function U(θ;X)=L(θ;X). Then Algorithm <ref> essentially performs stochastic gradient descent with respect to the loss function [L̃(θ;X)] where the expectation is with respect to X∼𝒫_X and the equivalent loss
L̃(θ;X) ≜ 1/2(L(θ;X))^2.
Compared to Proposition <ref>, the loss-as-uncertainty design performs SGD against a loss that marginalizes out the label Y. It results in a small twist in the proof, but it is not essential. Importantly, the result holds for all differentiable conditional loss L(θ;X), and saves us from finding the solution to PDE (<ref>) case-by-case. In other words, the result applies generally to the problem of binary classification, multi-class classification, and regression. It reduces the design of the uncertainty function to a calibration problem of estimating the conditional loss L(θ;X). In terms of uncertainty sampling for regression problems, a similar uncertainty that measures conditional variance has already been proposed <cit.>. <cit.> justifies such a variance uncertainty by showing the equivalence between variance and entropy under the Gaussian distribution assumption, while for more general distributions, the equivalence does not hold. We provide a different but more general explanation that the conditional variance is the conditional loss (when the estimation is the true conditional mean) regardless of the underlying distribution.
We provide the following two motivations for “loss as uncertainty”:
Convexity: The design retains the convexity of the original loss. Suppose that the original loss l is non-negative and convex. Then it leads to the non-negativity and convexity of the conditional loss L. Consequently,
∂^2 L̃/∂θ^2 = ∂ L/∂θ·(∂ L/∂θ)^⊤ + L ·∂^2 L/∂θ^2≽ 0.
More generally, it is easy to verify that the convexity is still retained for L̃ if the uncertainty function
U(θ;X) = h(L(θ;X)) for some non-decreasing and non-negative scalar function h(·).
Existence of solution to (<ref>): The PDE (<ref>) becomes a multi-variate one for multi-class classification for that there will be one predicted probability for each class. And multi-variable functions generally do not have an indefinite integral, whereas the single-variable case is guaranteed by the fundamental theorem of calculus. If we aim to find a well-defined equivalent loss that always produces the same gradient as the uncertainty sampling in expectation, a necessary condition is that the path integral of its derivatives ∑_j=1^d U ·∂ l/∂θ_jdθ_j should depend not on the chosen path but only on the starting and the ending points. Assume that both U and l are smooth functions of θ. From the basics of differential forms and algebraic topology <cit.>, such a requirement is equivalent to finding some U such that the exchangeability holds,
∂ U/∂θ_i·∂ l/∂θ_j = ∂ U/∂θ_j·∂ l/∂θ_i, ∀ i ≠ j,
where a natural choice is U = h(l) such that h(·) has an anti-derivative. Consequently, this leads to the choice of h(·) as a non-decreasing and non-negative function with the special case of the identity function. We defer more discussions to Appendix <ref>.
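The “loss as uncertainty” principle is easy to instantiate. The sketch below applies it to a stream-based linear regression with the squared loss, using any estimate of the conditional expected loss as the uncertainty; the clipping constant u_max used to bring the uncertainty into [0, 1] is our own illustrative device.

```python
import numpy as np

def squared_loss_grad(theta, x, y):
    # gradient of l(theta; (x, y)) = 0.5 * (theta^T x - y)^2 for a linear model
    return (theta @ x - y) * x

def stream_loss_as_uncertainty(stream, est_cond_loss, theta0, eta, u_max=1.0, seed=0):
    """Stream-based sampling with U(theta; x) = estimated conditional loss, clipped into [0, 1].

    est_cond_loss(theta, x) is any estimate of E[l(theta; (X, Y)) | X = x], for instance the
    predictive variance plus squared bias of a regression model.
    """
    rng = np.random.default_rng(seed)
    theta = theta0.copy()
    for x, query_label in stream:
        u = min(est_cond_loss(theta, x) / u_max, 1.0)   # loss as uncertainty
        if rng.random() < u:
            y = query_label()
            theta = theta - eta * squared_loss_grad(theta, x, y)
    return theta
```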
§.§ Oracle case
Now we analyze Algorithm <ref> with the choice of U(θ;X) = L(θ;X). Here we assume the algorithm has an oracle access to L(θ;X). Note that this entails the knowledge of the conditional distribution 𝒫_Y|X. In the next subsection, we analyze the case where such oracle is not available and one needs to calibrate the conditional loss to obtain an estimate of L(θ;X).
With slight overload of notation, we write
L(f) = 𝔼[l(f(X),Y)|X], L̃(f) = 1/2(𝔼[l(f(X),Y)|X])^2 = 1/2 L^2(f)
for some hypothesis f.
Also, without loss of generality, we assume the loss is non-negative. Then for any two hypotheses f and g, we have
L̃(f) - L̃(g) = 1/2 L^2(f) - 1/2 L^2(g) = 1/2(L(f)+L(g)) (L(f) - L(g)) ≥1/2(L(f) - L(g))^2
where the last inequality comes from the non-negativeness of L.
For any measurable hypothesis f, we have the following bound for U(θ;X)= L(θ;X),
𝔼[L(f)] - inf_g ∈𝒢𝔼[L(g)] ≤√(2(𝔼[L̃(f)] - inf_g ∈𝒢𝔼[L̃(g)]))
where 𝒢 denotes the class of all measurable functions as before, and the expectation is taken with respect to X∼𝒫_X.
Proposition <ref> presents the link function between L̃ and L, and this gives a handle of transforming a performance guarantee with respect to the squared conditional loss L̃ to that with respect to an original loss L. Then one can derive similar results as Proposition <ref> and Theorem <ref>.
Furthermore, a careful examination of the derivation in (<ref>) leads to an improved link function, and consequently a faster convergence rate. Let
g^* ≜ argmin_g ∈𝒢𝔼[L(g)]
denotes the best measurable hypothesis, and
ϵ^* ≜ inf_x∈𝒳𝔼[l(g^*(X),Y)|X=x]
be the pointwise minimum conditional risk. Then the following proposition expresses the link function with ϵ^*.
Under the same setup as Proposition <ref>, we have
𝔼[L(f)] - inf_g ∈𝒢𝔼[L(g)] ≤2/ϵ^*·(𝔼[L̃(f)] - inf_g ∈𝒢𝔼[L̃(g)]).
The error bound in Proposition <ref> becomes smaller when ϵ^* grows larger, i.e., when the data become more noisy and inseparable. This seems to contradict the results of <cit.> that the data efficiency of uncertainty sampling algorithms is in strong negative correlation with the error rate of the final classifier. However, we should note that Proposition <ref> is stated with respect to the relationship between the excessive risks of any hypothesis f rather than the excessive risk itself, while the latter is dealt with by the SGD convergence analysis as in Proposition <ref>. The convergence analysis is made with respect to the number of periods/observed features T rather than the number of queried samples. If the data become more separable from the decision boundary, the expected loss as the querying probability will decrease, leading to a smaller number of queries and higher data efficiency; this reconciles our result with <cit.>'s observation.
Results such as Proposition <ref> and Proposition <ref> are not restricted to the binary classification problem but are generally applicable to the multi-class classification problem and the regression problem. While the existing development of uncertainty sampling algorithms has mainly focused on the classification problem, few uncertainty measurements have been proposed for the regression problem. Our result here gives a pointer for such development; for example, one can use the estimated mean-squared error itself as the uncertainty measure for the regression problem.
To examine the power of our uncertainty, we revisit the margin-based model with the Hinge loss.
[Continued]
If we utilize the Hinge loss rather than the squared Hinge loss, the surrogate property's link function of the Hinge loss can be computed as (see Example 3 of <cit.>)
ψ(z) = z,
which indicates that
L_01(f) - inf_g ∈𝒢 L_01(g) ≤𝔼[l_Hinge(f(X), Y) ] -inf_g ∈𝒢𝔼[l_Hinge(g(X), Y)].
Assume that the gradient bound G and diameter bound D in Proposition <ref> are satisfied. Further, assume that the approximation term is zero. By setting U = ϵ + L_Hinge, we have for η_t = D/G√(T+1),
𝔼[L_01(f_θ̅_T+1)] - inf_g ∈𝒢 L_01(g) ≤1/ϵ·GD/√(T+1).
Note that this O(T^-1/2) rate is faster than the O(T^-1/4) rates of the earlier examples.
§.§ Estimated loss and loss calibration
The analysis of the oracle case in the previous subsection can also be adapted to a setting where one uses the estimated conditional loss as the uncertainty. Specifically, consider
U(θ;X) = L̂(θ; X)
where L̂(θ; X) is an estimate of L(θ; X).
Then although the equivalent loss relation does not hold exactly, one can still analyze the estimation error of Algorithm <ref>.
Suppose the estimates satisfy
𝔼[|L̂(θ_t; X) - L(θ_t; X)|] ≤δ_t
where the expectation is taken with respect to both θ_t and X∼𝒫_X that is independent of θ_t.
Let U(θ;X) = L̂(θ;X)∈ [0, 1] such that (<ref>) holds. Under the same condition as Proposition <ref>, Algorithm <ref> yields the following bound
𝔼[L̃(θ̅_T+1;X) ] ≤min_θ∈Θ𝔼[L̃(θ;X)] + GD/√(T) + D/T∑_t=1^Tδ_t,
where the expectation is taken with respect to both X and θ̅_T+1.
The estimate L̂(θ_t; X) can be obtained from a separate validation dataset by adapting uncertainty quantification methods <cit.>. We note that compared to the model calibration literature, the condition (<ref>) aims for an individual calibration objective in that it measures the calibration/estimation error for each X, and then takes expectation, rather than a population/average calibration or group calibration objective.
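As one concrete possibility for obtaining such an estimate, the sketch below averages the realized loss over the k nearest neighbors of x in a labeled validation set; this particular estimator and its parameters are our own illustration, not the calibration method of any specific reference.

```python
import numpy as np

def knn_conditional_loss(theta, x, X_val, Y_val, loss, k=10):
    """Estimate L(theta; x) = E[l(theta; (X, Y)) | X = x] from a validation set.

    The estimate averages the loss over the k validation points closest to x in Euclidean
    distance; any other calibrated estimator of the conditional loss could be substituted.
    """
    d = np.linalg.norm(X_val - x, axis=1)
    idx = np.argsort(d)[:k]
    return float(np.mean([loss(theta, X_val[i], Y_val[i]) for i in idx]))
```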
§ POOL-BASED SETTING
In this section, we analyze the uncertainty sampling algorithm under the pool-based setting and continue to adopt the conditional loss as the uncertainty function. Different from the stream-based setting, the features for all the samples are given at the beginning. To distinguish between the number of samples and the number of steps for the gradient descent algorithm, we use i=1,…,n to index the samples and t = 1,… T to index the gradient descent time steps.
Algorithm <ref> presents the pool-based uncertainty sampling algorithm. At each time step, the algorithm calculates the uncertainty for each sample in the data pool 𝒟_n given the current model parameter θ_t. Then the algorithm samples an index according to the probability distribution proportional to the uncertainty and queries the label of the sampled index. Based on this new label, the algorithm updates the model parameter via gradient descent.
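A minimal sketch of this pool-based procedure is given below; the label-querying callback and the constant base step size are our own simplifications, while the S_t/n scaling follows the step-size adjustment discussed in the next subsection.

```python
import numpy as np

def pool_uncertainty_sampling(X, query_label, uncertainty, grad_loss, theta0, eta, T, seed=0):
    """Pool-based uncertainty sampling: sample an index proportionally to its uncertainty."""
    rng = np.random.default_rng(seed)
    n = len(X)
    theta = theta0.copy()
    for t in range(T):
        u = np.array([uncertainty(theta, X[i]) for i in range(n)])
        S_t = u.sum()
        p = u / S_t                                   # querying distribution over the pool
        i = rng.choice(n, p=p)
        y = query_label(i)                            # repeated queries of the same index are allowed
        theta = theta - eta * (S_t / n) * grad_loss(theta, X[i], y)
    return theta
```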
§.§ Repeated-query v.s. single-query
For the pool-based setting in our paper, we consider a repeated-query setting where the learner may query the same sample X_i multiple times. Practically, this captures the situation where different human experts may provide different labels for the same sample feature X.
With the uncertainty function U(θ;X)=L(θ;X) and a proper choice of the step size η_t, Algorithm <ref> essentially performs stochastic gradient descent to minimize
𝔼_𝒫̂_X^n[L̃(θ; X)] = 1/n∑_i=1^n L̃(θ; X_i) = 1/2n∑_i=1^n(𝔼[l(θ; (X, Y))| X=X_i])^2
where the subscript 𝒫̂_X^n denotes the empirical distribution of 𝒟^X_n.
We remark that the choice of the step size involves an adjustment based on the normalizer of the probability distribution,
S_t=∑_i=1^n U(θ_t; X_i).
This ensures the length of the step size does not scale with the uncertainty level. We defer more details to the proof in Appendix <ref>.
The repeated-query setting is entailed by the objective (<ref>), which optimizes the empirical conditional loss that marginalizes out Y. In theory, the analysis still goes through for the single-query setting, and accordingly, Algorithm <ref> performs SGD to minimize
1/n∑_i=1^n(l(θ; (X_i, Y_i)))^2.
But this will require U(θ;X_i)=l(θ;(X_i,Y_i)) for i∈[n]. This uncertainty function is not as practical for that it depends on the realized label Y_i, and thus it will be generally hard to estimate this quantity without observing Y_i.
The major difference between those two settings is the corresponding loss that we are trying to minimize. For the repeated-querying setting, we are optimizing θ to minimize the objective in (<ref>), where we denote the empirical distribution of X w.r.t. 𝒟_n^X by 𝒫̂_X^n and L(θ; X) = 𝔼[l(θ; (X, Y))| X] as defined in Section <ref>.
For the single-querying setting, we are optimizing θ to minimize
1/n∑_i=1^n l(θ; (X_i, Y_i)) = 𝔼_𝒫̂^n[l(θ; (X, Y))],
where we denote the empirical distribution of (X, Y) w.r.t. 𝒟_n by 𝒫̂^n.
In this section, we are interested in the repeated-querying setting, not only because it is more natural in practice, where the same human expert may give different responses to a vague instance when queried multiple times, but also because of its consistency with our chosen uncertainty U(θ; X)= L(θ; X). Note that parallel arguments hold for the single-querying case if we replace U(θ; X)= L(θ; X) with U(θ; X) = 𝔼_𝒫̂^n [l(θ; (X, Y))|X]. We assume that every observation of X_i is unique, which holds almost surely if the distribution 𝒫_X is continuous.
After analyzing the details of the stream-based algorithm (Algorithm <ref>), we adopt the same principle in the pool-based setting, leading to a pool-based algorithm (Algorithm <ref>).
Unlike the stream-based setting where the arriving sample does not return to the learner and is discarded no matter whether it is queried, the pool-based setting requires the learner to select some samples to query in a fixed pool of data. Although the querying probability of a sample is still an increasing function of its uncertainty, such a probability is now dependent not only on the uncertainty of itself but also on those of other samples, which results in a more sophisticated dynamic. To compensate for the effects of such a dependence, we multiply the step size by an additional S_t/n factor.
§.§ Theoretical analysis
We describe a general challenge in analyzing pool-based uncertainty sampling. The algorithm dynamic works as follows:
θ_t→ (X_t,Y_t) →θ_t+1.
At each time t, we observe a new sample and use the sample to update the model parameter. If (X_t, Y_t) is sampled uniformly from the data pool or from the distribution 𝒫, it can be viewed as an exogenous randomness. Such an exogeneity provides great convenience in analyzing the convergence behavior of θ_t under online algorithms. However, for the uncertainty sampling algorithm, the parameter θ_t determines the uncertainty value and consequently the sampling distribution of (X_t, Y_t); and this makes the update dynamics more complicated. In this light, our perspective of equivalent loss and the notion of loss as uncertainty becomes helpful. Specifically, while the sampling distribution of (X_t, Y_t) bears dependence on the parameter θ_t, one can absorb the sampling distribution into the gradient and make the sample (X_t, Y_t) exogenous again, but against an alternative objective of l̃ or L̃.
Therefore, the error bound of θ̅_T+1 in Algorithm <ref> can be derived in a few standard steps:
* Establish a convergence result like Proposition <ref> for Algorithm <ref>. Note that the objective here is the empirical conditional loss but no longer the expected loss, but this will not change the nature of the analysis.
* Develop a generalization argument to connect the empirical condition loss with the expected loss.
* Use the link function argument to transform the excessive risk bound under L̃ to the original loss L or binary loss L_01.
§.§ Numerical experiments
To show that our loss-as-uncertainty principle can be a practical option for the multi-class classification and the regression problems, we test our pool-based algorithm, Algorithm <ref> (denoted by active), on 5 UCI datasets <cit.> in comparison with the uniform sampling algorithm (marked as passive). Our implementation of Algorithm <ref> drops the adjustment term S_t so that the step sizes are simply constant. The source code and data can be found on <https://github.com/liushangnoname/Uncertainty-Sampling>.
Estimation of loss: To estimate the conditional expected loss, we adopt the non-parametric estimator of <cit.> with a slight adaptation to the active learning setting. <cit.> focus on supervised learning and split out an independent validation set to calibrate the error; the independence of the validation set is crucial there to avoid underestimating the error. In our active learning setting, the “loss as uncertainty” principle still applies even if the error estimate is not perfectly calibrated, as long as it reflects the relative ordering of the losses. Moreover, since labels are precious, we prefer to use every label for gradient descent training and therefore do not split the labeled data into training and validation sets in our implementation.
Multi-class classification: We test two types of classifiers: logistic regression with cross-entropy loss and support vector machine with margin loss. We choose 3 datasets where the linear classifiers achieve acceptable prediction accuracy, named Dry Bean, Waveform Version 1, and Covertype. For the Covertype dataset, we randomly pick 10000 samples from the whole set. We run 30 trials, where the dataset is randomly split according to an 80-20 proportion for training and testing each time. For each trial, the uncertainty sampling and the uniform sampling share the same Gaussian initialization and the same constant step sizes. The average accuracy versus the number of steps is shown in Figures <ref>, <ref>, and <ref>. For the Dry Bean and the Covertype datasets, uncertainty sampling with the “loss as uncertainty” principle outperforms uniform sampling, while for the Waveform dataset, the performances are similar.
Regression: For the regression problem, we test the kernelized linear regression model, where the kernel is chosen among linear, polynomial, and radial basis functions. Two datasets, Forest Fires and QSAR Aquatic Toxicity, are examined; they are chosen so that kernelized linear regression achieves acceptable performance at acceptable computational cost. The results are shown in Figure <ref>. Although our uncertainty sampling does not dominate on the QSAR dataset, it still reaches the same level as uniform sampling, and on Forest Fires our algorithm shows a clear advantage over passive learning.
§ OTHER VARIANTS OF UNCERTAINTY SAMPLING
§.§ Exponential loss as uncertainty
Now we explore an alternative choice for the uncertainty function for Algorithm <ref> where
U(θ; X) = exp(L(θ; X)).
To generate some intuitions, we first make some derivations under the oracle case where we have direct access to 𝔼_𝒫̂_n[l(θ; (X, Y))| X = X_i] = l(θ; (X_i, Y_i)). To simplify the notation, we abbreviate the gradient we take at time step t to g_t:
g_t ≜ ∂ l(θ;(X_i_t, Y_i_t))/∂θ|_θ=θ_t.
If we define the uncertainty as the exponential of the conditional expected loss and utilize the structure of the softmax distribution,
then the conditional expectation of the gradient g_t is
𝔼[g_t | ℱ_t] = ∑_i=1^n exp(l(θ_t; (X_i, Y_i))) ·∇_θ l(θ_t; (X_i, Y_i))/∑_j=1^n exp(l(θ_t; (X_j, Y_j))) = ∇_θ(log(∑_i=1^n exp(l(θ_t; (X_i, Y_i)))) ).
By viewing the overall equivalent loss as the log-sum-exp (softmax) function
L̃(θ) ≜ log(∑_i=1^n exp(l(θ; (X_i, Y_i)))),
we have
𝔼[g_t | ℱ_t] = ∇_θL̃(θ_t).
With the uncertainty function U(θ;X)=exp(L(θ;X)), Algorithm <ref> essentially performs stochastic gradient descent to minimize
log(∑_i=1^n exp(L(θ;X_i))).
We note that the objective in Proposition <ref> is risk-sensitive rather than risk-neutral such as expectation. In the following section, we continue to study two more variants of uncertainty sampling that relate to the risk profile and robustness of the underlying loss.
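The log-sum-exp identity above can be verified numerically. The sketch below draws a small random pool, uses the logistic loss as an example, and compares the softmax-weighted expected gradient with a finite-difference gradient of the log-sum-exp objective; the data, the loss choice, and the tolerances are ours.

```python
import numpy as np

rng = np.random.default_rng(1)
n, d = 50, 3
X = rng.standard_normal((n, d))
y = rng.choice([-1.0, 1.0], size=n)
theta = rng.standard_normal(d)

def loss(th, i):       # logistic loss of sample i, used here only for illustration
    return np.log1p(np.exp(-y[i] * (X[i] @ th)))

def grad(th, i):       # its gradient with respect to theta
    s = y[i] * (X[i] @ th)
    return -y[i] * X[i] / (1.0 + np.exp(s))

# expected gradient when the query probability is the softmax of the losses
w = np.exp([loss(theta, i) for i in range(n)])
softmax_grad = sum(w[i] * grad(theta, i) for i in range(n)) / w.sum()

# finite-difference gradient of the log-sum-exp objective
lse = lambda th: np.log(sum(np.exp(loss(th, i)) for i in range(n)))
eps = 1e-6
numeric = np.array([(lse(theta + eps * e) - lse(theta - eps * e)) / (2 * eps) for e in np.eye(d)])
print(np.allclose(softmax_grad, numeric, atol=1e-4))   # expected: True
```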
§.§ Top-m-max uncertainty sampling
Some variants of the uncertainty sampling algorithm query the most uncertain samples, replacing the sampling step in Algorithm <ref>. Algorithm <ref> describes such a variant: at each time step, the algorithm randomly picks one of the m most uncertain samples and queries the sample. Then the algorithm performs a gradient descent step based on the queried sample.
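A minimal sketch of the query step of this top-m variant, with our own function signature, is the following.

```python
import numpy as np

def top_m_query(theta, X, uncertainty, m, rng):
    """Pick one of the m most uncertain pool samples uniformly at random."""
    u = np.array([uncertainty(theta, x) for x in X])
    top_m = np.argsort(u)[-m:]    # indices of the m largest uncertainty values
    return rng.choice(top_m)      # the returned index is then queried and used for a gradient step
```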
With the uncertainty function U(θ;X)=L(θ;X), Algorithm <ref> essentially performs stochastic gradient descent to minimize
CVaR_𝒫̂_X^n^α(𝔼[l(θ; (X, Y))| X])
where the subscript 𝒫̂_X^n denotes the empirical distribution of 𝒟^X_n and the risk level α=m/n.
Proposition <ref> gives the objective of Algorithm <ref> when using the conditional loss as the uncertainty function. Here the conditional value-at-risk is defined by
CVaR_𝒬^α(ξ) ≜ 𝔼[ξ|ξ≥𝒬^-1_α(ξ)]
where the underlying random variable ξ follows the distribution 𝒬, and 𝒬^-1_α(ξ) denotes the α-quantile of ξ. Note that Algorithm <ref> uses loss as uncertainty, and by querying the most uncertain samples, it focuses on the samples with the largest conditional loss, which naturally leads to the CVaR objective. We remark that the CVaR is a risk-sensitive objective rather than a risk-neutral one such as expectation/average. While the result is presented for the oracle case of U(θ; X)=L(θ; X), we may expect similar risk-sensitive behavior for the uncertainty sampling algorithm when the used U(θ; X) is strongly correlated with L(θ; X). Also, the risk level α=m/n partly explains why the arg-max strategy (where m=1) may have volatile behavior: it may focus on the very tail part of the loss.
§.§ Distributionally robust optimization as a variant of uncertainty sampling
In this section, we establish some equivalence between distributionally robust optimization under χ^2-divergence and a variant of uncertainty sampling. Algorithm <ref> implements a mixture of uniform sampling and uncertainty sampling (Algorithm <ref>). At each time step, the algorithm queries a sample uniformly randomly with probability 1 - γ, and follows the top-m-max uncertainty sampling with probability γ. It is a natural algorithm in that it softly combines uncertainty sampling with the standard learning procedure of uniform sampling.
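A sketch of the query step of this mixed variant, again with our own interface, is the following.

```python
import numpy as np

def mixed_query(theta, X, uncertainty, m, gamma, rng):
    """Uniform sampling with probability 1 - gamma, top-m uncertainty sampling otherwise."""
    n = len(X)
    if rng.random() < 1.0 - gamma:
        return int(rng.integers(n))                  # uniform sampling branch
    u = np.array([uncertainty(theta, x) for x in X])
    return int(rng.choice(np.argsort(u)[-m:]))       # top-m uncertainty sampling branch
```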
With the uncertainty function U(θ;X)=L(θ;X), Algorithm <ref> essentially optimize the following distributionally robust objective
max_p∈𝒰(𝒫) ∑_i=1^n p_i ·𝔼[l(θ; (X, Y))|X = X_i],
where 𝒰(𝒫) is the ambiguity set for the probability vector p=(p_1,...,p_n). It is defined by
𝒰(𝒫) ≜ {p| ∑_i=1^n p_i = 1, 0 ≤ p_i ≤m+(n-m)γ/mn, D_ϕ(𝐩‖ (1/n, …, 1/n)^⊤) ≤γ^2 n(n-m)/2nm},
where D_ϕ(p‖ q) = ∑_i=1^n q_i ϕ(p_i/q_i) is the ϕ-divergence with ϕ(z) = 1/2 (z - 1)^2.
Proposition <ref> gives the objective of the mixture of uncertainty sampling and uniform sampling. Note that when we employ loss as uncertainty, the samples with larger losses will be more frequently sampled and optimized over. This intuition is aligned with the design of the above distributionally robust optimization formulation which assigns larger weights to samples with larger losses. There is a small difference between these two in that uncertainty sampling uses the conditional expected loss whereas the robust objective uses the empirical loss, yet the difference is not essential. The distributionally robust objective (<ref>) bears certain equivalence to the variance regularized objective <cit.>
1/n∑_i=1^n L(θ; X_i) + √(γ^2 (n-m)/m·Var_𝒫̂_X^n(L(θ; X))),
where the latter objective (called variance regularized empirical risk) can act as a high probability upper bound for the population risk <cit.>. To avoid a vacuous upper bound, the theory of distributionally robust optimization suggests a choice of γ = √(C m/n(n - m)) so that γ^2 (n-m)/m = C/n, and this gives Algorithm <ref> a strong tendency toward uniform sampling.
§ DERIVATION OF EQUIVALENT LOSSES AND SURROGATE LINK FUNCTION
This section will present the detailed calculations of the equivalent losses and the surrogate link functions of all the listed examples in previous sections. The subscript of μ or γ will sometimes be omitted for simplicity when the text is clear.
§.§ Equivalent loss in Section <ref>
[Equivalent loss of <cit.>]
Both the loss and the uncertainty function can be expressed as a function of the predicted probability q(θ; X). By the chain rule,
∂l̃/∂θ = ∂l̃/∂ q·∂ q/∂θ,
U·∂ l/∂θ = U ·∂ l/∂ q·∂ q/∂θ.
Hence if we can find some l̃ such that
∂l̃/∂ q = U ·∂ l/∂ q,
then we have accomplished the task.
The indicator function 1{Y=+1} with Y∈{-1, +1} can be written as (Y+1)/2, which we denote by p with a slight abuse of notation. Then the derivative of the original cross-entropy loss can be written as
∂ l/∂ q = -p/q + 1-p/1-q = q-p/q(1-q).
We start with the entropy uncertainty U = -q log(q) - (1-q) log(1-q) in <cit.>.
U(q) ·∂ l/∂ q = p log(q) -(1-p) log(1-q) - (1-p)·q log(q)/1-q + p ·(1-q) log(1-q)/q
= p log(q) -(1-p) log(1-q) - (1-p)·(q - 1) log(q) + log(q)/1-q + p ·-q log(1-q) + log(1-q)/q
= p log(q) -(1-p) log(1-q) + (1-p) log(q) - (1-p)·log(q)/1-q - p log(1-q) + p ·log(1-q)/q
= log(q) - log(1-q) - (1-p)·log(q)/1-q + p ·log(1-q)/q.
Then by calculating its indefinite integral, we have
∫ U(q) ·∂ l/∂ qdq = qlog(q) + (1-q)log(1-q) - p ·Li_2(q) - (1-p) ·Li_2(1-q) + C,
where Li_2(z) is the Spence's function,
Li_2(z) = -∫_0^zlog(1-z)/zdz.
Since we are interested in the excessive risk (which is the expected difference between those hypotheses and the optimal measurable function), the selection of C does not matter. We simply select C = Li_2(1) = π^2/6 to make the equivalent loss vanish at p = q = 0 and p = q = 1, which yields the equivalent loss l̃ presented in Section <ref>.
We continue with the least confidence uncertainty U = min{q, 1-q} in <cit.>. For q ∈ [0, 1/2], we have
U(q) ·∂ l/∂ q = q-p/1-q
= q - 1 + 1 - p/1-q
= -1 + 1-p/1-q.
Its indefinite integral is simple:
∫ U(q)·∂ l/∂ qdq = -q - (1-p) log(1-q) + C, ∀ q ∈ [0, 0.5].
Similarly, we can compute the indefinite integral for q ∈ [0.5, 1]:
∫ U(q)·∂ l/∂ qdq = q - p log(q) + C, ∀ q ∈ [0.5, 1].
The equivalent loss function is defined piece-wise. To avoid a jump discontinuity at q = 1/2, we select the constants so that the two pieces agree at q = 1/2, which yields
l̃ =
-(1-p)·log(2(1-q)) - q + log(2), if q < 0.5;
-p·log(2q) - (1-q) +log(2), if q ≥ 0.5.
Again, the exact overall constant is not essential; the log(2) term simply makes the equivalent loss vanish at p = q = 0 and p = q = 1.
[Equivalent loss of <cit.>]
For the SVM-based methods, both the loss and the uncertainty function can be expressed as a function of Y ·Ŷ, where Ŷ = θ^⊤ X. By the similar chain rule arguments in Example <ref>, we can find the equivalent loss with respect to θ as long as we can find that with respect to Y ·Ŷ.
To simplify the notations, we denote Y·Ŷ = Y θ^⊤ X by s. As a reminder, we again state the squared Hinge loss
l(s) =
(1-s)^2, if s ≤ 1;
0, if s ≥ 1,
and the uncertainty function
U_μ(s) =
(1-μ s)^-1, if s ≤ 0;
(1+μ s)^-1, if s ≥ 0.
We compute the amount U ·∂ l/∂ s and its indefinite integral in three parts.
For s ≥ 1, the result is straightforward: the equivalent loss must be a constant. We select the constant to be zero for notational convenience.
For s ∈ [0, 1],
U_μ(s) ·∂ l/∂ s = -2(1-s) ·1/1+μ s
= -2 -1/μ(μ s + 1) + 1/μ+ 1/1+μ s
= 2/μ- 2(1/μ+ 1) ·1/1+μ s.
Its indefinite integral is
∫ U_μ(s) ·∂ l/∂ sd s = 2/μ· s - 2/μ(1/μ+ 1) ·log(1+μ s) + C,
where we select C = 2/μ- 2/μ(1/μ+ 1) ·log(1+μ) so that the values at s = 1 coincide.
For s ≤ 0, we can complete the calculation similarly:
U_μ(s) ·∂ l/∂ s = -2(1-s) ·1/1-μ s
= -2 -1/μ(1 - μ s) - 1/μ+ 1/1-μ s
= -2/μ+ 2(1/μ- 1) ·1/1-μ s.
The indefinite integral is
∫ U_μ(s) ·∂ l/∂ sd s = -2/μ· s - 2/μ(1/μ- 1) ·log(1-μ s) + C,
where the constant is selected to be the same as s ∈ [0, 1] to match at s = 0.
[Equivalent loss of <cit.>]
The uncertainty function is probably the simplest case: an indicator function of whether |s| = |Y·Ŷ| = |Y θ^⊤ X| is no greater than a certain threshold γ. Then for those s's that satisfy the threshold requirement, the equivalent loss is identical to the original loss (which is the logistic loss, as a reminder), while for those s's outside the threshold area, the equivalent loss must be constant. We select those constants to avoid abrupt changes at the threshold, resulting in the expressions in Section <ref>.
[Equivalent loss of margin loss and margin-based uncertainty]
We recall that the original loss and the uncertainty function w.r.t. s = Ŷ· Y are
l(s) = max{0, 1-s},
U_μ(s) = 1/1+μ s, if s ≥ 0;
1/1-μ s, if s ≤ 0.
For the s ≥ 1 part, the indefinite integral must be constant. We select the constant to be zero.
For the s ∈ (0, 1) part,
U_μ(s) ·∂ l/∂ s = -1/1+μ s,
which indicates that
∫ U_μ(s) ·∂ l/∂ sds = -1/μlog(1+μ s) + C, ∀ s ≥ 0.
We select the constant to be 1/μlog(1+μ) so that there is no discontinuity at s = 1.
For the s ≤ 0 part,
U_μ(s) ·∂ l/∂ s = -1/1-μ s,
resulting in
∫ U_μ(s) ·∂ l/∂ sds = 1/μlog(1-μ s) + C, ∀ s ≥ 0.
We set the constant to be 1/μlog(1+μ) to keep the continuity at s = 0.
[Equivalent loss of exponential loss and exponential uncertainty]
Similarly, we state the original loss and the uncertainty function concerning s:
l(s) = exp(-s),
U_μ(s) = exp(-μ s), if s ≥ 0;
exp(μ s), if s ≤ 0.
Then, for s ≥ 0,
U_μ(s) ·∂ l/∂ s = -exp(-(1+μ) s),
of which the indefinite integral is
∫ U_μ(s) ·∂ l/∂ sd s = exp(-(1+μ)s)/1+μ + C, ∀ s ≥ 0.
We select C = μ/1+μ so that the value at s = 0 is 1.
On the contrary, for s ≤ 0,
U_μ(s) ·∂ l/∂ s = -exp(-(1-μ) s),
leading to
∫ U_μ(s) ·∂ l/∂ sd s = exp(-(1-μ)s)/1-μ + C, ∀ s ≤ 0.
The constant is chosen to be C = -μ/1-μ to meet the value at s=0.
§.§ Surrogate property and proof of Proposition <ref>
In this subsection, we summarize the arguments in <cit.> and present their method for computing the surrogate link function for margin-based models such as the SVM. Such a surrogate property induces a mini-max optimal bound on the excessive 0-1 risk (see Theorem 3 in <cit.>). For simplicity, in this subsection, we omit the dependence on X and θ, since all the excessive risk analyses hold for any fixed hypothesis f_θ and sample point X=x.
We start with the standard definitions of <cit.>. Assume that the loss l(Ŷ, Y) is of the form l(Ŷ· Y) (which is the case in all of our examples). By denoting the probability of a positive Y by p, the expected loss induced by predicting Ŷ is
C_p(Ŷ) ≜ p l(Ŷ) + (1-p) l(-Ŷ).
For any fixed probability value p, the inferior of the expected loss is denoted by
H(p) ≜ inf_Ŷ C_p(Ŷ).
If we restrict the prediction Ŷ to be not Bayes-optimal (that is, to be of the different sign as 2p-1) and take the inferior, we get
H^-(p) ≜ inf_Ŷ· (2p - 1)≤ 0 C_p(Ŷ).
Note that a binary classification loss l is said to be classification-calibrated <cit.> (or Fisher consistent <cit.>) if H^-(p) > H(p) for any p ≠1/2.
<cit.> provide a way of computing the surrogate link function ψ: [0, 1] →ℝ via
ψ̃(z) = H^-(1+z/2) - H(1+z/2),
ψ(z) = ψ̃^**(z),
where g^** is the Fenchel-Legendre biconjugate of the function g, characterized by
epi g^** = cl co(epi g), i.e., the epigraph of g^** is the closed convex hull of the epigraph of g.
Note that those functions are convex (and closed) if and only if they coincide with their Fenchel-Legendre biconjugates <cit.>.
Equipped with such a surrogate link function ψ, <cit.>'s Theorem 3 shows that it can be an upper bound for the excessive 0-1 risk: for any measurable function f and any probability distribution on 𝒳×𝒴 = 𝒳×{-1, +1},
ψ(L_01(f) - inf_g ∈𝒢 L_01(g) ) ≤𝔼[l(f(X), Y) ] -inf_g ∈𝒢𝔼[l(g(X), Y)],
where 𝒢 is the set of all measurable functions.
Such an upper bound is mini-max optimal in the sense that for any non-negative loss l, any |𝒳| ≥ 2, any 0-1 risk level ζ∈ [0, 1], and any precision ϵ > 0, there exists a probability distribution on 𝒳×{-1, +1} such that L_01(f) - inf_g ∈𝒢 L_01(g) = ζ, and
ψ(ζ) ≤𝔼[l(f(X), Y) ] -inf_g ∈𝒢𝔼[l(g(X), Y)] ≤ψ(ζ)+ϵ.
Equipped with such powerful tools, all we need to do is to find the surrogate link functions of those active learning models. But before we proceed to the particular calculation, we notice that the analysis in <cit.> is designed for the margin-based models, while our Example <ref> is not based on the margin but on the probability. To generalize the arguments to the probabilistic models, we transform the probability into the expectation to enable the margin-based analysis. We denote the predicted expectation of Y in a probabilistic model by
Ŷ ≜ 𝔼̂[Y] = 2 q - 1.
[Surrogate link function of <cit.>]
Remind that the original loss can be expressed as
l(Ŷ· Y) = -log(1+Ŷ· Y/2).
The entropy uncertainty is
U = -[qlog(q) + (1-q) log(1-q)]
= -[1+Ŷ/2log(1+Ŷ/2) + 1-Ŷ/2log(1-Ŷ/2)]
= -[1+Ŷ· Y/2log(1+Ŷ· Y/2) + 1-Ŷ· Y/2log(1-Ŷ· Y/2)].
Then the equivalent loss is
l̃ = Li_2(1) - Li_2(1+ Ŷ· Y/2) + 1/2[(1+Ŷ· Y) log(1+Ŷ· Y/2) + (1-Ŷ· Y) log(1-Ŷ· Y/2)],
where Li_2(·) is Spence's function (the dilogarithm).
One can easily check that this loss is identical to the equivalent loss provided in Section <ref> when Ŷ = 2q-1.
Notice that U is a non-negative even function that vanishes only at the two endpoints, which implies that minimizing the expected l̃ is equivalent to minimizing the expected l. The minimizer Ŷ^* is obtained from the first-order stationarity condition
p ·(-1/1+Ŷ^*) + (1-p) ·(1/1-Ŷ^*) = 0,
which is Ŷ^* = 2p - 1. Then
H(p) = Li_2(1) -[p Li_2(p) + (1-p) Li_2(1-p)] + [plog(p) + (1-p)log(1-p)],
where Li_2(·) is again Spence's function.
The computation of H^-(p) is simple: the equivalent loss is convex, so the infimum over non-Bayes predictions is attained at Ŷ = 0. Therefore,
H^-(p) = Li_2(1) -Li_2(1/2) - log(2).
By definition,
ψ̃(z) = -Li_2(1/2) + 1+z/2·Li_2(1+z/2) + 1-z/2·Li_2(1-z/2) - [1+z/2·log(1+z) + 1-z/2·log(1-z)],
whose second-order derivative is
d^2 ψ̃/d z^2 = -1/2·[(1-z)log((1-z)/2) + (1+z) log((1+z)/2)] / (1-z^2) ≥ 0.
The convexity implies that
ψ(z) = ψ̃(z).
We need to note that the first-order derivative of ψ is
dψ/dz = 1/2[Li_2(1+z/2) - Li_2(1-z/2)] ≥ 0,
which is zero if and only if z=0. So the equivalent loss is classification-calibrated, and the surrogate link function around z=0 is approximately
ψ(z) ∼d^2 ψ/d z^2|_z=0· z^2 = log(2) · z^2.
Since ψ(z) is bounded at z∈[0,1], we can conclude that ψ(z) = Θ(z^2),
where Θ is the big theta notation referring to “of the same order as” rather than our denoted set of parameters.
The other example of the least confidence uncertainty U = min{q, 1-q} can also be analyzed via Ŷ = 2q -1. By definition,
U = min{1+Ŷ/2, 1-Ŷ/2} = 1-|Ŷ|/2.
The equivalent loss with respect to Ŷ· Y is
l̃(Ŷ· Y) = 1/2(Ŷ· Y -2log(1+Ŷ· Y)) + log(2) - 1/2, if Ŷ· Y ≥ 0;
-1/2·Ŷ· Y + log(2) - 1/2, if Ŷ· Y ≤ 0.
Again, one can quickly check that this equivalent loss is identical to the form we present in Section <ref> with Ŷ = 2q - 1. We do not adjust the constants explicitly to meet non-negativity or other requirements, since the equivalent losses are all bounded and we are only interested in the excessive risk (one expected loss minus another).
W.l.o.g. assume that p ≥1/2. Then the first-order stationary point of C_p(Ŷ) should be
-1/2· p ·1-Ŷ^*/1+Ŷ^* + 1/2 (1-p) = 0,
which is Ŷ^* = 2p - 1. Then
H(p) = p ·1/2[ (2p-1) - 2log(2 p) ] - (1-p) ·1/2(1-2p) + log(2) - 1/2 = p - 1/2 - p log(2p) + log(2) - 1/2, ∀ p ≥1/2.
For p ≤1/2, the optimal Ŷ^* remains the same 2p - 1, while ∀ p ≤1/2,
H(p) = -p ·1/2(2p - 1) + (1-p) ·1/2[ (1-2p) - 2 log(2(1-p))] + log(2) - 1/2
= 1/2 - p - (1-p) log(2(1-p)) + log(2) - 1/2.
By the convexity of l̃,
H^-(p) = C_p(0) = log(2) - 1/2.
The derivation of ψ̃ only requires the p ≥1/2 part, hence
ψ̃(z) = H^-(1+z/2) - H(1+z/2) = -1/2 z + 1+z/2log(1+z),
of which the second-order derivative is
d^2 ψ̃/d z^2 = 1/2(1+z)≥ 0.
By the convexity of ψ̃, we have
ψ= ψ̃.
From the first-order derivative of ψ
dψ/dz = 1/2log(1+z),
we see that ψ tends to zero if and only if z itself tends to zero. Thus, the equivalent loss is classification-calibrated. From the facts that
dψ/dz|_z=0 = 0
and
d^2 ψ/dz^2|_z=0 = 1/2,
we know that
ψ(z) ∼1/2 z^2
around the zero point. From the boundedness of ψ, we can also conclude similarly to the entropy uncertainty case that
ψ(z) = Θ(z^2),
where the big theta notation means “of the same order as”.
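As a quick numerical check (ours, not part of the original derivation), the closed form ψ(z) = -z/2 + ((1+z)/2)·log(1+z) obtained above can be evaluated for small z; the printed ratio ψ(z)/z^2 settles at a positive constant, which is exactly the Θ(z^2) behavior claimed here:

import numpy as np

psi = lambda z: -0.5 * z + 0.5 * (1.0 + z) * np.log1p(z)   # closed form from above
for z in (0.2, 0.1, 0.05, 0.01):
    print(z, psi(z) / z**2)   # ratio approaches a constant, so psi(z) = Theta(z^2)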
[Surrogate link function of <cit.>]
We start with finding the Ŷ that minimizes the expected equivalent loss. Remind that the equivalent loss can be written in the form of Ŷ· Y:
l̃_μ =
-2/μ(1/μ - 1)log(1-μŶ· Y) - 2/μŶ· Y + C, if Ŷ· Y ≤ 0;
-2/μ(1/μ + 1)log(1+μŶ· Y) + 2/μŶ· Y + C, if Ŷ· Y ∈ (0, 1);
0, if Ŷ· Y ≥ 1,
where C = 2/μ(1/μ + 1)log(1+μ) - 2/μ.
By the definition,
U_μ·∂ l/∂Ŷ = ∂l̃_μ/∂Ŷ.
Since U_μ = (1+μ |Ŷ|)^-1 is a positive and even function, minimizing the expected equivalent loss is identical to minimizing the expected original loss (the squared hinge loss). By direct calculation (or by referring to Example 2 in <cit.>), the minimizer is
Ŷ^* = 2p - 1.
Without loss of generality, we assume p ≥1/2, which implies that 2p-1 ≥ 0. Subject to that minimizer,
H(p) = C
+ p ·(-2/μ(1/μ+ 1) log(1+μ (2p - 1)) + 2/μ(2p-1))
+ (1-p)·(-2/μ(1/μ-1) log(1+μ(2p-1))+2/μ(2p-1)).
Since the equivalent loss is convex, the minimized risk of the non-Bayes classifier must be
H^-(p) = C_p(0) = C.
Hence we have
ψ̃(z) = 1+z/2·(2/μ(1/μ+ 1) log(1+μ z) - 2/μz) + 1-z/2·(2/μ(1/μ-1) log(1+μ z) - 2/μz)
= 2/μ^2· (1+μ z) log(1+μ z) - 2/μz.
The second-order derivative of ψ̃ is
d^2 ψ̃/d z^2 = 2/1+μ z > 0,
which guarantees the convexity of ψ̃. Hence
ψ = ψ̃.
The first-order derivative of ψ is
dψ/d z = 2/μlog(1+μ z) ≥ 0,
where the equality holds if and only if z = 0 for any μ > 0, indicating the classification-calibration of the equivalent loss l̃. By a similar Taylor expansion argument, we can conclude that
ψ(z) ∼d^2 ψ/d z^2|_z=0· z^2 = 2z^2.
Due to the boundedness of the surrogate link function, we have
ψ(z) = Θ(z^2),
where the big theta notation stands for “of the same order as”.
[Surrogate link function of <cit.>]
We briefly recall the equivalent loss with respect to Ŷ
l̃_γ = log(1+exp(γ)), if Y ·Ŷ≤ -γ;
log(1+exp(-Y ·Ŷ)), if Y ·Ŷ∈ (-γ, γ);
log(1+exp(-γ)), if Y ·Ŷ≥γ,
where the non-constant part is identical to that of a logistic loss. For a sufficiently large threshold γ, so that the minimizer lies in the non-constant part, the first-order condition of the minimizer (which is just that of the logistic loss) reads
-p ·exp(-Ŷ^*)/1+exp(-Ŷ^*) + (1-p) ·exp(Ŷ^*)/1+exp(Ŷ^*) = 0,
which implies that
Ŷ^* = log(p/1-p).
For a small γ, the derivative of the expected equivalent loss suggests that the minimizer should be
Ŷ^* = γ·sign(2p - 1).
Without loss of generality, we assume that p ≥1/2. Then
Ŷ^* = γ, if p ≥exp(γ)/1+exp(γ);
log(p/1-p), if 1/2≤ p ≤exp(γ)/1+exp(γ).
Substituting above results into C_p(Ŷ), we have
H(p) =
-[plog(p) + (1-p)log(1-p)], if 1/2≤ p ≤exp(γ)/1+exp(γ);
log(1+exp(γ)) - γexp(γ)/1+exp(γ), if p ≥exp(γ)/1+exp(γ).
One can check that C_p(0) ≤ C_p(Ŷ) for any p ≥1/2 and Ŷ≤ 0, implying that
H^-(p) = C_p(0) = log(2).
Then
ψ̃(z) = 1/2[(1+z) log(1+z) + (1-z)log(1-z)], if z ≤exp(γ) - 1/exp(γ) + 1;
log(2/1+exp(γ)) + γexp(γ)/1+exp(γ), if z ≥exp(γ) - 1/exp(γ) + 1.
Evidently, ψ̃(z) is non-convex as a whole: in the first part, where z is small, the function is convex and strictly increasing, while in the second part it is constant. We extend the values of ψ̃(z) from small z's to large z's by defining another function
h(z) ≜1/2[(1+z) log(1+z) + (1-z)log(1-z)].
To compute ψ(z), observe that the convex hull of the epigraph of ψ̃ can be determined by some specific point z_0 ≤exp(γ) - 1/exp(γ) + 1: at the left side of z_0, the epigraph is identical to that of h(z), while at the right side of z_0, the epigraph is identical to that of the tangent at (z_0, h(z_0)). Such a tangent should contain the right-most point (1, ψ̃(1)), which means
h(z_0) + h^'(z_0)· (1-z_0) = ψ̃(1).
Replacing the equation with concrete expressions, we have
log(1+z_0) = h(z_0) + h^'(z_0)· (1-z_0) = ψ̃(1) = log(2) - log(exp(γ/(1+exp(γ))) + exp(-γ/(1+exp(-γ)))).
Simplifying notations, we have
z_0 = 2·(exp(γ/(1+exp(γ))) + exp(-γ/(1+exp(-γ))))^-1 - 1.
Therefore,
ψ(z) =
1/2[(1+z) log(1+z) + (1-z)log(1-z)], if z ≤ z_0;
1/2[(1+z) log(1+z_0) + (1-z)log(1-z_0)], if z ≥ z_0,
where z_0 is some positive constant stated above.
By examining the first-order derivative of ψ(z), we can easily find out that the equivalent loss is classification-calibrated:
dψ/dz = 1/2[log(1+z) - log(1-z)], if z ≤ z_0;
1/2[log(1+z_0) - log(1-z_0)], if z ≥ z_0.
By computing its Taylor expansions at z=0, we have
ψ(z) ∼d^2 ψ/d z^2|_z=0· z^2 = z^2.
Finally, we note that
ψ(z) = Θ(z^2),
where the big theta notation suggests “at the same order as”.
[Surrogate link function of margin loss and margin-based uncertainty]
Similarly, the even and positive uncertainty function U leads to the same minimizer of the expected equivalent loss as the expected original margin loss, while the latter by the arguments in <cit.> is
Ŷ^* = sign(p - 1/2),
for p ≠1/2. For p = 1/2, any Ŷ∈ [-1, 1] will lead to the same expected equivalent loss.
We compute the p ≥1/2 part, gaining
H(p) = p · 0 + (1-p) ·2/μlog(1+μ) = (1-p) ·2/μlog(1+μ), ∀ p ≥1/2.
The other part p < 1/2 is
H(p) = p ·2/μlog(1+μ) + (1-p) · 0 = p ·2/μlog(1+μ), ∀ p < 1/2.
For computing the H^-(p), assume that p ≥1/2. Then any Ŷ∈ [-1, 0] will be optimal among the non-Bayes classifiers, leading to
H^-(p) = 2/μlog(1+μ).
Hence,
ψ̃(z) = H^-(1+z/2) - H(1+z/2) = log(1+μ)/μ z.
The linear function is of course convex, so
ψ = ψ̃.
[Surrogate link function of exponential loss and exponential uncertainty]
The equivalent loss concerning Ŷ· Y is
l̃ =
1/1+μ·exp(-(1+μ) Ŷ· Y) + μ/1+μ, if Ŷ· Y ≥ 0;
1/1-μ·exp(-(1-μ) Ŷ· Y) - μ/1-μ, if Ŷ· Y ≤ 0.
Since the uncertainty function U = exp(-|Ŷ|) is even and positive, the minimizer of the expected equivalent loss C_p(Ŷ) is identical to that of the expected original loss. That is,
-p exp(-Ŷ^*) + (1-p) exp(Ŷ^*) = 0,
which implies that
Ŷ^* = 1/2log(p/1-p).
Without loss of generality, assume that p ≥1/2. Then
H(p) = p [1/1+μ·exp(-1+μ/2log(p/1-p))+ μ/1+μ]
+ (1-p) [1/1-μ·exp( 1-μ/2log(p/1-p)) - μ/1-μ]
= 2/1-μ^2· p^1-μ/2 (1-p)^1+μ/2 + μ/1-μ^2· (2p-1-μ).
Since the equivalent loss is convex with respect to Ŷ· Y, the minimum of expected equivalent loss when the prediction is non-Bayes is
H^-(p) = C_p(0) = 1.
Then by definition,
ψ̃(z) = H^-(1+z/2) - H(1+z/2)
= 1/1-μ^2(1-μ z - (1-z)^1+μ/2 (1+z)^1-μ/2).
The first-order derivative is
dψ̃/d z = -μ/1-μ^2 - 1/2(1+μ)(1-z/1+z)^1+μ/2 + 1/2(1-μ)(1+z/1-z)^1-μ/2,
which is zero at z=0.
The second-order derivative is
d^2 ψ̃/d z^2 = 1/(1-z^2)· (1-z)^(μ-1)/2 (1+z)^-(1+μ)/2≥ 0,
which implies two facts: ψ̃(z) tends to zero if and only if z tends to zero, and ψ̃(z) is convex (hence ψ = ψ̃). Thus, the equivalent loss is classification-calibrated.
From the facts that
dψ/d z|_z=0 = 0
and
d^2 ψ/d z^2|_z=0 = 1,
we can say that
ψ(z) ∼ z^2
around z=0. Due to the boundedness of ψ, we have
ψ(z) = Θ(z^2),
where the big theta notation is “of the same order as”.
§.§ Convexity and Proof of Proposition <ref>
In this subsection, we examine how the convexity requirements are fulfilled in the listed examples.
[Convexity of <cit.>]
W.l.o.g. we still assume Y=1 to ease the notation. For the entropy uncertainty, recall that we have already derived its partial derivative with respect to Ŷ:
∂l̃/∂Ŷ = 1/2log(1+Ŷ/2) + 1/2·1-Ŷ/1+Ŷlog(1-Ŷ/2).
Differentiating once more, we have
∂^2 l̃/∂Ŷ^2 = -1/(1+Ŷ)^2·log(1-Ŷ/2) ≥ 0,
which ensures its convexity with respect to Ŷ.
For the least confidence uncertainty, the partial derivative is
∂l̃/∂Ŷ =
-1/2·1-Ŷ/1+Ŷ, if Ŷ≥ 0;
-1/2, if Ŷ≤ 0,
which implies that l̃ is at least C^1 continuous with respect to Ŷ. Furthermore,
∂^2 l̃/∂Ŷ^2 = 1/(1+Ŷ)^2, if Ŷ> 0;
0, if Ŷ< 0.
Therefore, the equivalent loss is convex with respect to Ŷ.
We have shown the convexity with respect to Ŷ for both cases. If Ŷ is linear in θ, then the convexity with respect to θ follows as well. But unlike the margin-based classifiers, the probabilistic models restrict Ŷ to (-1, 1); a popular example is the logistic regression model, which predicts Ŷ = (exp(θ^⊤ X) - 1)/(exp(θ^⊤ X) + 1). Unlike the original cross-entropy loss, the equivalent loss under the logistic regression model is no longer convex with respect to the parameter θ.
[Convexity of <cit.>]
Since the model is linear in the sense that Ŷ = θ^⊤ X, we only need to check the convexity with respect to Ŷ. First, assuming Y=1, the derivative of l̃ with respect to Ŷ is
∂l̃_μ/∂Ŷ = 2(Ŷ - 1)/1+μŶ, if Ŷ≤ 0;
2(Ŷ - 1)/1-μŶ, if Ŷ∈ (0, 1);
0, if Ŷ≥ 1.
We can see that l̃ is C^1 continuous with respect to Ŷ. We further compute that
∂^2 l̃_μ/∂Ŷ^2 = 2(1+μ)/(1+μŶ)^2, if Ŷ < 0;
2(1-μ)/(1-μŶ)^2, if Ŷ∈ (0, 1);
0, if Ŷ > 1.
Hence the model is convex but not strongly convex.
[Convexity of <cit.>]
The equivalent loss is non-convex for Ŷ since it is a truncated logistic loss outside a region, where the truncation is to set the loss to be a constant. By the linearity of Ŷ on θ, the model is also non-convex for θ.
[Nonconvexity of margin loss and margin-based uncertainty]
Since the model is linear, we only need to examine the case where l̃ is convex w.r.t. Ŷ. At the differentiable parts, the second-order derivative of the equivalent loss w.r.t. Ŷ is
∂^2 l̃/∂Ŷ^2 = ∂/∂Ŷ(∂l̃/∂Ŷ) = ∂ U/∂Ŷ·∂ l/∂Ŷ + U ·∂^2 l/∂Ŷ^2 = ∂ U/∂Ŷ·∂ l/∂Ŷ,
since the Hinge loss l is piece-wise linear w.r.t. Ŷ. For any fixed Ŷ, the actual outcome Y could possibly be either +1 or -1, indicating that
∂ l/∂Ŷ =
+1, if Ŷ > -1, Y = -1;
-1, if Ŷ < +1, Y = +1;
0, otherwise.
At the positive part Ŷ > 0, the uncertainty function is non-increasing, which restricts the term ∂ U/∂Ŷ to be non-positive. But for the case Y=-1, convexity requires the term ∂ U/∂Ŷ to be non-negative. Hence
∂ U/∂Ŷ = 0,
which implies that the uncertainty function must be piece-wise constants. To further ensure that U must be only one constant, we observe that the equivalent loss is now piece-wise linear with non-increasing slopes for Ŷ > 0 if Y = -1. In order to keep the loss continuous and convex, the slope must be constant everywhere.
[Convexity of exponential loss and exponential uncertainty]
Similar to the arguments in Example <ref>, we only need to compute the second-order derivatives (w.l.o.g. assume Y=1):
∂^2l̃_μ/∂Ŷ^2 =
(1+μ) ·exp(-(1+μ) Ŷ), if Ŷ < 0;
(1-μ) ·exp(-(1-μ) Ŷ), if Ŷ > 0.
The convexity thus holds.
§.§ Lipschitzness in Section <ref>
What is different from the stream-based case is the excessive equivalent risk decomposition, due to the distributions from which the SGD's samples are drawn. For the stream-based setting, the algorithm receives a newly drawn sample X from the underlying distribution 𝒫_X, while for the pool-based setting, the sample set 𝒟_n^X is determined and the sampling distribution is the empirical distribution 𝒫̂_X^n. As a consequence, the excessive risk for any loss function l(f; (X, Y)) (which can be transformed into the excessive risk for the conditional expectation L(f; X) = 𝔼_Y[l(f; (X, Y))]) should be decomposed into five terms rather than two:
𝔼[l(f̂; (X, Y))] - 𝔼[l(g^*; (X, Y))] = 𝔼[L(f̂; X)] - 𝔼[L(g^*; X)]
= 𝔼[L(f̂; X)] - 1/n∑_i=1^n L(f̂; X_i) (generalization)
+ 1/n∑_i=1^n L(f̂; X_i) - inf_f ∈ℱ1/n∑_i=1^n L(f; X_i) (optimization)
+ inf_f ∈ℱ1/n∑_i=1^n L(f; X_i) - 1/n∑_i=1^n L(f^*; X_i) (non-positive)
+ 1/n∑_i=1^n L(f^*; X_i) - 𝔼[L(f^*; X)] (concentration)
+ 𝔼[L(f^*; X)] - 𝔼[L(g^*; X)] (approximation).
Among the above five terms, the non-positive term and the concentration term are easy to handle: the non-positive term can be discarded immediately, and the concentration term can be treated either by standard concentration arguments, yielding a high-probability bound, or in the same way as the generalization term. In this paper, we treat the concentration term in the same way as the generalization term.
Three terms remain: generalization, optimization, and approximation. As in the stream-based setting, we do not discuss the approximation term in this paper, since it is beyond the scope of choosing the uncertainty function; we simply assume that there is no model misspecification, so the approximation term is zero. The optimization term can be handled easily under the convexity condition, as we do in Proposition <ref>. For the remaining generalization term, we summarize an easy-to-check criterion.
To begin with, we briefly review the classical statistical learning theory. The estimator f̂ we get in any algorithm is dependent on the data points 𝒟_n^X, so we cannot directly get the generalization bound via the concentration inequalities that rely on the i.i.d. condition. To deal with such a dependence, classical statistical learning theory usually proves the uniform convergence to establish an upper bound on the generalization term. A popular way to uniform convergence is to compute the Rademacher complexity. The Rademacher complexity of a hypothesis class ℱ on 𝒳 can be defined as
ℛ_n(ℱ) 𝔼[sup_f ∈ℱ1/n∑_i=1^n σ_i f(X_i)],
where σ_i's are n i.i.d. samples from the uniform distribution on {-1, +1} and X_i's are n i.i.d. samples from the distribution 𝒫 on 𝒳. If we further define the loss class as
ℒ_L ∘ℱ{x ↦ L(f; x) | f ∈ℱ},
then a well-known high-probability upper bound for the generalization term (for example, see Theorem 5 in <cit.>) is that for any δ > 0, the following holds with probability at least 1-δ:
∀ f ∈ℱ, |𝔼[L(f; X)] - 1/n∑_i=1^n L(f; X_i)| ≤ 2 ℛ_n(ℒ_L ∘ℱ) + √(log(2/δ)/2 n).
The 1-δ high-probability bound <ref> can handle the generalization term and the concentration term easily with an upper bound of 4 ℛ_n (ℒ_L ∘ℱ) + O(log(1/δ)/n).
The next question is: how can we quickly upper-bound the Rademacher complexity of a loss class? We hope that ℛ_n (ℒ_L ∘ℱ) can be converted to ℛ_n (ℱ), since the Rademacher complexity of a function class ℱ is generally easier to compute. For example, for a linear function class with parameter L_2-norm bound M_Θ and feature-space L_2 bound M_X, the Rademacher complexity is upper bounded by M_Θ· M_X/√(n).
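For intuition, the following sketch (our illustration; the synthetic data and the norm bound are arbitrary assumptions) estimates the Rademacher complexity of a norm-bounded linear class by Monte Carlo, using the fact that the supremum over ‖θ‖_2 ≤ M_Θ is attained in the direction of (1/n)∑_i σ_i X_i:

import numpy as np

def rademacher_linear(X, M_theta, n_draws=2000, seed=0):
    # sup over {theta : ||theta||_2 <= M_theta} of (1/n) sum_i sigma_i theta^T x_i
    # equals M_theta * || (1/n) sum_i sigma_i x_i ||_2; average over random signs
    rng = np.random.default_rng(seed)
    sigma = rng.choice([-1.0, 1.0], size=(n_draws, X.shape[0]))
    return (M_theta * np.linalg.norm(sigma @ X / X.shape[0], axis=1)).mean()

X = np.random.default_rng(1).normal(size=(500, 10))   # toy feature matrix (assumed)
print(rademacher_linear(X, M_theta=1.0))              # roughly M_Theta * M_X / sqrt(n)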
Luckily, if we can ensure the β_L-Lipschitzness of the conditionally expected loss L w.r.t. f, then by Ledoux-Talagrand's contraction inequality (see Corollary 3.17 in <cit.>), we have
ℛ_n(ℒ_L ∘ℱ) ≤β_L ℛ_n(ℱ).
The Lipschitz conditions of the examples in Section <ref> are verified in Appendix <ref>. As for the "loss as uncertainty" principle in Section <ref>, we note that the equivalent loss L̃ = 1/2 L^2 + C is M_L ·β_L-Lipschitz if the original loss L is β_L-Lipschitz and bounded by M_L.
We start to check the Lipschitz condition for the equivalent loss in the examples. We recall that
∂l̃/∂Ŷ = U ·∂ l/∂Ŷ.
Therefore, due to the fact that the uncertainty U ∈ [0, 1], we have
|∂l̃/∂Ŷ| ≤|∂ l/∂Ŷ|,
which implies the following:
If the uncertainty function U ∈ [0, M_U] and the original loss l(Ŷ, Y) is differentiable and β-Lipschitz with respect to Ŷ, then the equivalent loss l̃(Ŷ, Y) is M_U ·β-Lipschitz with respect to Ŷ.
Moreover, the uncertainty function U is usually decreasing to be near zero when |Ŷ| is large enough, which counteracts the effects of the rapid growth of many popular loss functions when Ŷ· Y is negative and far enough from zero. To see this, we have a closer look at the probabilistic model in Example <ref>.
[Lipschitzness of <cit.>]
The original cross-entropy loss is not Lipschitz on the range Ŷ∈ (-1, 1) (or equivalently, q ∈ (0, 1)), since the derivative of the negative logarithm will explode near the zero point. But from direct computation, for the entropy uncertainty <cit.>, we have (w.l.o.g. assume Y=1)
∂l̃/∂Ŷ = U ·∂ l/∂Ŷ
= -[1+Ŷ/2log(1+Ŷ/2)+ 1-Ŷ/2log(1-Ŷ/2)] ·(-1/1+Ŷ)
= 1/2log(1+Ŷ/2) + 1/2·1-Ŷ/1+Ŷlog(1-Ŷ/2).
Since the first-order partial derivative ∂l̃/∂Ŷ is non-positive and monotonically increasing, we only need to check the limit case Ŷ→ -1^+ to examine the Lipschitzness. We have
|∂l̃/∂Ŷ| ∼1/2|log(1+Ŷ)|,
which is much smaller than the original loss
|∂ l/∂Ŷ| ∼|-1/1+Ŷ|,
since by l'Hôpital's rule,
lim_Ŷ→ -1^+∂l̃/∂Ŷ/ ∂ l/∂Ŷ = lim_Ŷ→ -1^+(-1/1+Ŷ)/(-1/(1+Ŷ)^2) = lim_Ŷ→ -1^+ (1+Ŷ) = 0.
Although we cannot say that the equivalent loss is Lipschitz with respect to the whole (-1, 1), for any compact subset of (-1, 1), the equivalent loss is Lipschitz. We shall see that the Lipschitz constant is reduced compared to the original loss.
As for the least confidence uncertainty, the situation is even better: the equivalent loss is Lipschitz over the entire set Ŷ∈ (-1, 1). To see this, we w.l.o.g. assume Y = 1, and the equivalent loss is
l̃(Ŷ· Y) = 1/2(Ŷ-2log(1+Ŷ)), if Ŷ≥ 0;
-1/2·Ŷ, if Ŷ≤ 0.
Its partial derivative is
∂l̃/∂Ŷ =
-1/2·1-Ŷ/1+Ŷ, if Ŷ≥ 0;
-1/2, if Ŷ≤ 0,
which implies that the equivalent loss is 1/2-Lipschitz.
[Lipschitzness of <cit.>]
From direct computation, the partial derivative w.r.t. Ŷ can be upper-bounded by
∂l̃/∂Ŷ≤2/μ.
By assuming an almost sure upper bound M_X on the feature space 𝒳, the equivalent loss is 2 M_X/μ-Lipschitz w.r.t. θ.
[Lipschitzness of <cit.>]
By the property of the logistic loss, the equivalent loss must be 1-Lipschitz w.r.t. Ŷ. Hence the equivalent loss is M_X-Lipschitz w.r.t. θ.
[Lipschitzness of margin loss and margin-based uncertainty]
The equivalent loss is 1-Lipschitz w.r.t. Ŷ, which indicates its M_X-Lipschitzness w.r.t. θ.
[Lipschitzness of exponential loss and exponential uncertainty]
The prediction Ŷ = θ^⊤ X has an upper bound of
|Ŷ| ≤ M_X · M_Θ,
where M_X is the almost sure upper bound for X and M_Θ is the upper bound for Θ. Then the partial derivative of the equivalent loss w.r.t. Ŷ is bounded by exp((1-μ) M_X · M_Θ), and the final Lipschitz constant w.r.t. θ is M_X ·exp((1-μ) M_X · M_Θ).
§ PROOFS AND DISCUSSIONS
§.§ Proof of Proposition <ref>
Denote the σ-field generated by θ_t by ℱ_t. The general requirement for the SGD update to hold is that
𝔼[θ_t+1 - θ_t |ℱ_t] = -η_t ·∂l̃/∂θ|_θ = θ_t,
where η_t is the step size. To prove such a requirement, we first see that the only randomness that will affect θ_t+1 conditioned on ℱ_t is
1{ξ_t ≤ U(θ_t; X_t)}
that has a conditional expectation of
𝔼[1{ξ_t ≤ U(θ_t; X_t)}|ℱ_t] = U(θ_t; X_t).
From the definition that
θ_t+1 = θ_t - η_t ·1{ξ_t ≤ U(θ_t; X_t)}·∂l̃/∂θ|_θ = θ_t,
we can conclude the proof.
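A minimal Python sketch of this randomized-query update (our illustration; grad_l, U, and the step-size schedule eta are user-supplied callables and are assumptions, not part of the proposition):

import numpy as np

def stream_uncertainty_sgd(stream, grad_l, U, theta0, eta, seed=0):
    # Query the label with probability U(theta_t; x_t); when queried, take a
    # plain gradient step on the original loss.  In conditional expectation the
    # update is -eta_t * U * grad l, i.e. SGD on the equivalent loss.
    rng = np.random.default_rng(seed)
    theta = np.array(theta0, dtype=float)
    for t, (x, y) in enumerate(stream, start=1):
        if rng.uniform() <= U(theta, x):    # the indicator 1{xi_t <= U(theta_t; X_t)}
            theta -= eta(t) * grad_l(theta, x, y)
    return theta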
§.§ Proof of Proposition <ref>
To ease the notation, we denote 1{ξ_t ≤ U(θ_t; X_t)}·∂ l(θ; (X_t, Y_t))/∂θ|_θ=θ_t by g_t. By Proposition <ref>, we have
𝔼_ξ_t[g_t | θ_t] = ∇_θl̃(θ_t; (X_t, Y_t)).
Take the expectation with respect to (X_t, Y_t), we see that g_t is further applying SGD directly on the expected equivalent loss
𝔼_ξ_t, (X_t, Y_t)[g_t |θ_t] = 𝔼_(X_t, Y_t)[∇_θl̃(θ_t; (X_t, Y_t))|θ_t] = ∇_θ𝔼_(X, Y)[l̃(θ_t; (X, Y))|θ_t].
Denote 𝔼_(X, Y)[l̃(θ; (X, Y))] by R(θ). Then from the definition, we have
‖θ_t+1 - θ^*‖^2 = ‖θ_t - η_t · g_t -θ^*‖^2
= ‖θ_t - θ^*‖^2 - 2η_t · g_t^⊤ (θ_t - θ^*) + η_t^2 ‖g_t‖^2
= ‖θ_t - θ^*‖^2 - 2η_t ·(g_t - ∇ R(θ_t) + ∇ R(θ_t))^⊤ (θ_t - θ^*) + η_t^2 ‖g_t‖^2
≤‖θ_t - θ^*‖^2 - 2η_t ·(g_t - ∇ R(θ_t))^⊤ (θ_t - θ^*) + 2η_t ·(R(θ^*) - R(θ_t))+ η_t^2 ‖g_t‖^2,
where the last inequality follows from the convexity of R(·) such that R(θ^*) ≥ R(θ_t) + ∇ R(θ_t)^⊤ (θ^* - θ_t).
Assume that the parameters sequence {θ_t}_t≥ 1 is adapted to an increasing sequence of σ-fields {ℱ_t}_t ≥ 1. Since θ_t+1 is completely determined by θ_t, ξ_t, and (X_t, Y_t), taking the expectation conditioned on ℱ_t is equivalent to taking the expectation w.r.t. ξ_t and (X_t, Y_t) conditioned on knowing θ_t. By taking the expectation w.r.t. ℱ_t, we have
𝔼[‖θ_t+1 - θ^*‖^2|ℱ_t]
≤‖θ_t - θ^*‖^2 - 2η_t ·𝔼[(g_t - ∇ R(θ_t))|ℱ_t]^⊤ (θ_t - θ^*) + 2η_t ·(R(θ^*) - R(θ_t))+ η_t^2 𝔼[‖g_t‖^2|ℱ_t]
= ‖θ_t - θ^*‖^2 + 2η_t ·(R(θ^*) - R(θ_t))+ η_t^2 𝔼[‖g_t‖^2|ℱ_t].
By rearranging the terms, we have
R(θ_t) ≤ R(θ^*) + (‖θ_t - θ^*‖^2 - 𝔼[‖θ_t+1 - θ^*‖^2|ℱ_t])/2η_t + η_t/2·𝔼[‖g_t‖^2|ℱ_t],
where
𝔼[‖g_t‖^2|ℱ_t]
= 𝔼[1{ξ_t ≤ U(θ_t; X_t)}·‖∂ l(θ; (X_t, Y_t))/∂θ|_θ=θ_t‖^2 + 1{ξ_t > U(θ_t; X_t)}· 0 |ℱ_t]
≤ G^2.
Summing up inequality <ref> from t=1 to T+1 and taking the unconditional expectation on both sides, by the tower property of the conditional expectation we have
1/T+1∑_t=1^T+1𝔼[R(θ_t)] ≤ R(θ^*) + ‖θ_1 - θ^*‖^2/(2 η_t (T+1)) + η_t/2· G^2,
where the step size is kept constant over t. Assume ‖θ_1 - θ^*‖≤ D. Substituting η_t = D/G√(T+1) into the above inequality, we have
1/T+1∑_t=1^T+1𝔼[R(θ_t)] ≤ R(θ^*) + GD/√(T+1).
By the convexity of R(·), we have
R(θ̅_T+1) = R(1/T+1∑_t=1^T+1θ_t) ≤1/T+1∑_t=1^T+1 R(θ_t),
which finally verifies the proof.
§.§ Proof of Theorem <ref>
Since ψ is convex by the definition in <cit.>, we have
ψ(𝔼[L_01(f_θ̅_T+1) - inf_g ∈𝒢 L_01(g)] )≤𝔼[ψ(L_01(f_θ̅_T+1) - inf_g ∈𝒢 L_01(g) )],
where the expectation is taken with respect to all the randomness in the algorithm. By the surrogate property (<ref>), we have
𝔼[L_01(f_θ̅_T+1) - inf_g ∈𝒢 L_01(g)] ≤ψ^-1(𝔼[ψ(L_01(f_θ̅_T+1) - inf_g ∈𝒢 L_01(g))] )
≤ψ^-1(𝔼[𝔼_X, Y [l̃(f_θ̅_T+1(X), Y)] - inf_g ∈𝒢𝔼_X, Y [l̃(g(X), Y)]])
= ψ^-1(𝔼[l̃(f_θ̅_T+1(X), Y)] - inf_g ∈𝒢𝔼 [l̃(g(X), Y)])
= ψ^-1(𝔼[l̃(f_θ̅_T+1(X), Y)] - 𝔼[l̃(f_θ̃^*(X), Y)] + 𝔼[l̃(f_θ̃^*(X), Y)] - inf_g ∈𝒢𝔼 [l̃(g(X), Y)]).
From the result in Proposition <ref>, we can see that
𝔼[l̃(f_θ̅_T+1(X), Y)] - 𝔼[l̃(f_θ̃^*(X), Y)] ≤GD/√(T+1).
§.§ Proof of Proposition <ref>
The algorithm we are considering is under the stream-based setting (Algorithm <ref>), where the newly observed sample is directly taken from the unknown distribution 𝒫. As discussed in Section <ref>, we are directly applying SGD on the expected loss L(θ; X). Following the equivalent loss analyses, one can find the equivalent expected loss by
∂L̃/∂θ = U ·∂ L/∂θ = L ·∂ L/∂θ.
Then
L̃ = 1/2 L^2 + C,
where C is some constant. Since the constant does not affect the gradient, we choose C=0 for simplicity. In other words, we are actually implementing SGD on the squared expected loss when we are applying the gradient-descent-update version of the uncertainty sampling algorithm. For those U ≥ 1, we permanently query the label and compensate the ratio by increasing the original descent step size η to η· U, keeping the SGD rule the same.
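A minimal sketch of one step of this rule (our illustration; L_hat, grad_l, eta, and rng are hypothetical stand-ins for the estimated conditional expected loss, the loss gradient, the step size, and a random generator):

def loss_as_uncertainty_step(theta, x, y, L_hat, grad_l, eta, rng):
    # U = L_hat(theta, x) plays the role of the uncertainty.  If U >= 1, always
    # query and scale the step by U; otherwise query with probability U and take
    # an unscaled step.  Either way the expected update is -eta * U * grad L,
    # i.e. SGD on the squared expected loss L^2 / 2.
    U = L_hat(theta, x)
    if U >= 1.0:
        return theta - eta * U * grad_l(theta, x, y)
    if rng.uniform() <= U:
        return theta - eta * grad_l(theta, x, y)
    return theta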
§.§ Discussions on existence of solution to Equation (<ref>)
In Section <ref>, we mentioned that the necessary and sufficient condition for the path integral of ∑_j=1^d U ·∂ l/∂θ_jdθ_j to be independent of the chosen path is that the uncertainty U and the loss l fulfill the exchangeability condition:
∂ U/∂θ_i·∂ l/∂θ_j = ∂ U/∂θ_j·∂ l/∂θ_i, ∀ i ≠ j.
To see why this happens, we give a very brief argument here without bothering to concretely introduce another system of concepts in differential forms and algebraic topology. For those interested readers, please refer to the textbook of differential forms in algebraic topology <cit.>.
By de Rham's theorem, the condition that the path integral of ∑_j=1^d U ·∂ L/∂θ_jdθ_j does not depend on the chosen path is equivalent to saying that this 1-form is exact, where exact means that the form is the (exterior) derivative of another function. In other words, saying that the path integral of some differential form depends only on the starting and ending points equals saying that the form is itself a gradient.
The next question is: how to find all the exact forms on some Euclidean parameter space
Θ? By Poincaré's lemma, on any open ball of ℝ^d, a 1-form is exact (i.e., it is the differential of a function) if and only if it is a closed 1-form, where closed means that the form's exterior derivative is zero.
Before diving into finding closed forms, we try to intuitively tell what an exterior derivative is. We take ℝ^3 as an example. We shall see that the exterior derivative is a mimic of gradients, curls, and divergences. In the 3-dimensional Euclidean space, we can find the gradient of a smooth function F by
∇ F = (∂ F/∂ x, ∂ F/∂ y, ∂ F/∂ z)^⊤.
If we represent the gradient by independent vectors (dx, dy, dz), then we have
∇ F ≃∂ F/∂ xdx + ∂ F/∂ ydy + ∂ F/∂ zdz,
which is exactly the definition of the exterior derivative dF.
If 𝐅 is now a vector field (F_x, F_y, F_z)^⊤, then its curl is
∇×𝐅 = (∂ F_z/∂ y - ∂ F_y/∂ z, ∂ F_x/∂ z - ∂ F_z/∂ x, ∂ F_y/∂ x - ∂ F_x/∂ y)^⊤.
By representing it with independent vectors (dy ∧dz, dz ∧dx, dx ∧dy), we have
∇×𝐅≃(∂ F_z/∂ y - ∂ F_y/∂ z) dy ∧dz + (∂ F_x/∂ z - ∂ F_z/∂ x)dz ∧dx + (∂ F_y/∂ x - ∂ F_x/∂ y)dx ∧dy.
By writing 𝐅≃ F_xdx + F_ydy + F_zdz, we have the same as the definition of exterior derivatives:
d(F_xdx + F_ydy + F_zdz) = (∂ F_z/∂ y - ∂ F_y/∂ z) dy ∧dz + (∂ F_x/∂ z - ∂ F_z/∂ x)dz ∧dx + (∂ F_y/∂ x - ∂ F_x/∂ y)dx ∧dy.
Finally, the divergence of a vector field 𝐅 = (F_x, F_y, F_z)^⊤ is
∇·𝐅 = ∂ F_x/∂ x + ∂ F_y/∂ y + ∂ F_z/∂ z.
Up to a vector dx ∧dy ∧dz, we have
∇·𝐅≃d(F_x dy ∧dz + F_y dz ∧dx + F_z dx ∧dy).
In a word, the exterior derivative is to extend the concept of “differential” from functions to vector fields.
All we have to do now is to find all the closed 1-forms. The closed forms are those of zero exterior derivatives. By the definition of exterior derivatives, we can compute the exterior derivative of ∑_j=1^d U ·∂ L/∂θ_jdθ_j as
d(∑_j=1^d U·∂ L/∂θ_j dθ_j ) = ∑_1≤ i < j ≤ d(∂/∂θ_i(U·∂ L/∂θ_j) - ∂/∂θ_j(U·∂ L/∂θ_i)) dθ_i ∧dθ_j
= ∑_1≤ i < j ≤ d(∂ U/∂θ_i·∂ L/∂θ_j - ∂ U/∂θ_j·∂ L/∂θ_i) dθ_i ∧dθ_j.
This expression must vanish by the definition of a closed form, which is exactly the exchangeability requirement above.
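As a small numerical illustration of the exchangeability condition (ours, not from the original text), the sketch below checks that a margin-based pair (U, l), both depending on θ only through θ^⊤ x, has parallel gradients, so that ∂U/∂θ_i · ∂l/∂θ_j = ∂U/∂θ_j · ∂l/∂θ_i holds; the particular sample, parameter vector, and exponential-type uncertainty are arbitrary choices for the demonstration:

import numpy as np

def num_grad(f, theta, eps=1e-6):
    # central finite-difference gradient
    g = np.zeros_like(theta)
    for i in range(theta.size):
        e = np.zeros_like(theta); e[i] = eps
        g[i] = (f(theta + e) - f(theta - e)) / (2 * eps)
    return g

x, y = np.array([0.5, -1.2, 2.0]), 1.0               # toy sample (assumed)
loss = lambda th: np.log1p(np.exp(-y * (th @ x)))    # logistic (margin-based) loss
unc  = lambda th: np.exp(-abs(th @ x))               # an exponential-type uncertainty
theta = np.array([0.3, -0.1, 0.7])
gU, gl = num_grad(unc, theta), num_grad(loss, theta)
# exchangeability: the matrix [dU/dtheta_i * dl/dtheta_j] must be symmetric
print(np.allclose(np.outer(gU, gl), np.outer(gU, gl).T, atol=1e-8))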
§.§ Proof of Proposition <ref>
∀ϵ > 0, we can find some g_ϵ∈𝒢 such that
𝔼[L(g_ϵ)] ≤inf_g ∈𝒢𝔼[L(g)] + ϵ.
For every trajectory of X, the inequality (<ref>) holds for any hypotheses f and g. Set g = g_ϵ. Taking expectation w.r.t. X ∼𝒫_X on both sides, we have
𝔼[L̃(f)] - 𝔼[L̃(g_ϵ)] ≥1/2𝔼[(L(f) - L(g_ϵ))^2].
By Jensen's inequality,
1/2(𝔼[L(f)] - 𝔼[L(g_ϵ)])^2 ≤1/2𝔼[(L(f) - L(g_ϵ))^2]
≤𝔼[L̃(f)] - 𝔼[L̃(g_ϵ)]
≤𝔼[L̃(f)] - inf_g∈𝒢𝔼[L̃(g)].
Hence, we have ∀ϵ > 0,
𝔼[L(f)] - inf_g ∈𝒢𝔼[L(g)] - ϵ≤𝔼[L(f)] - 𝔼[L(g_ϵ)] ≤ 2√(𝔼[L̃(f)] - inf_g∈𝒢𝔼[L̃(g)]).
Taking ϵ to be arbitrarily small, we complete the proof.
§.§ Proof of Proposition <ref>
The proof is straightforward from the first two equal signs of (<ref>). We have assumed the pointwise minimum conditional risk of g^* is at least ϵ^*. Then for any X ∈𝒳,
L̃(f) - L̃(g^*) = 1/2 (L(f) + L(g^*))(L(f) - L(g^*)) ≥ϵ^*/2 (L(f) - L(g^*)).
Taking expectation on X ∼𝒫_X concludes the proof.
§.§ Proof of Theorem <ref>
We develop our proof based on that of Proposition <ref>. In the proof of Proposition <ref>, we used the shorthand g_t to ease the notation. We keep the notation here but replace U with L:
g_t ≜1{ξ_t ≤ L(θ_t; X_t)}·∂ l(θ; (X_t, Y_t))/∂θ|_θ=θ_t.
Note that if we were able to carry on the parameter update based on g_t, then we would be doing SGD on the oracle equivalent expected loss L̃, and the analysis of Proposition <ref> can be directly applied. But we are actually implementing the uncertainty based on an estimation of L, say L̂. Note that the θ term in L̂(θ; X) does not mean that the estimation model L̂ is also parametrized by θ but that the model estimates the conditional expected loss when the current hypothesis is θ. Therefore, the update is made w.r.t.
ĝ_t ≜1{ξ_t ≤L̂(θ_t; X_t)}·∂ l(θ; (X_t, Y_t))/∂θ|_θ=θ_t.
Denote 𝔼_Y[L̃] by R. By the definition of θ_t+1, we have
‖θ_t+1 - θ^*‖^2 = ‖θ_t - η_t ·ĝ_t - θ^*‖^2
= ‖θ_t - θ^*‖^2 - 2η_t ·(ĝ_t - g_t + g_t - ∇ R(θ_t) + ∇ R(θ_t))^⊤ (θ_t - θ^*) + η_t^2 ‖ĝ_t‖^2
≤‖θ_t - θ^*‖^2 - 2η_t ·(g_t - ∇ R(θ_t))^⊤ (θ_t - θ^*) - 2η_t ·(ĝ_t - g_t)^⊤ (θ_t - θ^*)
+ 2η_t ·(R(θ^*) - R(θ_t)) + η_t^2 ‖ĝ_t‖^2,
where the last inequality is derived from the convexity of R. Similar to the proof of Proposition <ref>, we take the expectation conditioned on ℱ_t on both sides, where ℱ_t is the σ-field generated by θ_t:
𝔼[‖θ_t+1 - θ^*‖^2|ℱ_t] ≤‖θ_t - θ^*‖^2 + 2η_t ·𝔼[‖ĝ_t - g_t‖|ℱ_t] ·‖θ_t - θ^*‖
+ 2η_t ·(R(θ^*) - R(θ_t))+ η_t^2 𝔼[‖ĝ_t‖^2|ℱ_t].
Rearranging the terms, we have
R(θ_t) ≤ R(θ^*) + (‖θ_t - θ^*‖^2 - 𝔼[‖θ_t+1 - θ^*‖^2|ℱ_t])/2η_t + η_t/2·𝔼[‖ĝ_t‖^2|ℱ_t] + 𝔼[‖ĝ_t - g_t‖|ℱ_t] ·‖θ_t - θ^*‖.
Similar to the proof of Proposition <ref>, we can easily see from the definition that
‖ĝ_t‖^2 ≤‖∂ l(θ; (X_t, Y_t))/∂θ|_θ=θ_t‖^2 ≤ G^2.
By assuming that ‖θ_t - θ^*‖≤ D for all t ∈ [T+1], we take the unconditional expectation on both sides of inequality <ref> and sum up from t=1 to t=T+1, then
1/T+1∑_t=1^T+1𝔼[R(θ_t)] ≤ R(θ^*) + D^2/(2η_t (T+1)) + η_t/2· G^2 + D·1/T+1∑_t=1^T+1δ_t,
where the step size is kept constant over t. By taking η_t = D/G√(T+1) and using the convexity of R, we have
𝔼[R(θ̅_T+1) ] ≤ R(θ^*) + GD/√(T+1) + D/T+1∑_t=1^T+1δ_t.
§.§ Proof of Proposition <ref>
We give the analysis here to show that Algorithm <ref> is indeed an SGD update. To simplify the notation, we abbreviate the gradient we take at time step t as g_t:
g_t ≜∂ l(θ;(X_a_t, Y_t))/∂θ|_θ=θ_t.
We define the σ-field generated by θ_t to be ℱ_t. If we define the equivalent expected loss L̃ to be 1/2 L^2 as we do in Section <ref>, we have
𝔼[S_t/n· g_t | ℱ_t]
= S_t/n∑_i=1^n U(θ_t; X_i)/S_t∇_θ𝔼[l(θ_t; (X_i, Y))|X=X_i]
= 𝔼_𝒫̂_X^n[U(θ_t; X) ·∇_θ L(θ_t; X)]
= 𝔼_𝒫̂_X^n[L(θ_t; X) ·∇_θ L(θ_t; X)]
= 𝔼_𝒫̂_X^n[∇_θL̃(θ_t; X)]
= ∇_θ𝔼_𝒫̂_X^n[L̃(θ_t; X)],
which indicates that Algorithm <ref> is indeed an SGD update w.r.t. the expected L̃ under the empirical distribution 𝒫̂_X^n with step sizes {η_t}_t=1^T.
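A minimal sketch of one pool-based step implementing this sampling-and-rescaling scheme (our illustration; U, grad_l, the labeling oracle query_label, eta, and rng are assumed callables and objects, not part of the proof):

import numpy as np

def pool_uncertainty_step(theta, X_pool, query_label, U, grad_l, eta, rng):
    # Draw index a_t with probability U_i / S_t, query its label, and rescale the
    # gradient by S_t / n, so that the expected update is a gradient step on the
    # empirical expectation of the equivalent loss (see the computation above).
    u = np.array([U(theta, x) for x in X_pool])
    S, n = u.sum(), len(X_pool)
    a = rng.choice(n, p=u / S)
    y = query_label(a)                                 # labeling oracle (assumed)
    return theta - eta * (S / n) * grad_l(theta, X_pool[a], y)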
§.§ Proof of Proposition <ref>
Denote the indices of the m largest losses l(θ_t; (X_i, Y_i)) by {i_t1, …, i_tm}, and denote the conditional expected loss 𝔼[l(θ; (X, Y))|X] by L(θ; X). Then, the gradient of the objective (<ref>) at θ = θ_t is
1/m∑_k=1^m ∇_θ L(θ_t; X_i_tk).
On the other hand, the conditional expectation of the update is
𝔼[θ_t+1 - θ_t |θ_t] = 𝔼[-η_t ·∂ l(θ; (X_i_t, Y_t))/∂θ|_θ = θ_t|θ_t]
= -η_t ·1/m∑_k=1^m 𝔼_Y_t[∂ l(θ; (X_i_t, Y_t))/∂θ|_θ = θ_t|θ_t, i_t = i_tk]
= -η_t ·1/m∑_k=1^m ∇_θ𝔼_Y_t[l(θ_t; (X_i_t, Y_t))|θ_t, i_t = i_tk]
= -η_t ·1/m∑_k=1^m ∇_θ L(θ_t; X_i_tk),
which completes the proof.
§.§ Proof of Proposition <ref>
The sampling probability at step t is
p_ti = (1-γ) ·1/n + γ/m·1{i ∈{i_t1, …, i_tm}}.
Due to the envelope theorem, we only need to prove that 𝐩_t is indeed the solution to the maximization problem at time step t:
max_𝐩∈𝒫_γ^2 n(n-m)/2m, n, m+(n-m)γ/mn∑_i=1^n p_i · L(θ_t; X_i).
Such an optimality check can easily be done via the KKT conditions, once one notices that the uncertainty set 𝒫_γ^2 n(n-m)/2m, n, m is convex and the objective is a linear function of 𝐩. In fact, if we remove the divergence constraint and only keep the linear constraints, one can easily see that the maximizer is 𝐩_t, since it puts as much weight as possible on the m largest objectives. Since
D_1/2(· -1)^2(𝐩_t ‖ (1/n, …, 1/n)^⊤) = m ·1/2n·(γ(n/m - 1))^2 + (n-m) ·1/2n·γ^2 = γ^2 ·n-m/2 m,
which means that the divergence constraint is also fulfilled by the relaxed maximization point 𝐩_t. Hence the relaxed solution is also the solution to the original maximization problem.
Learning ECG signal features without backpropagation
Péter Pósfay^1 ([email protected]), Marcell T. Kurbucz^1,2 ([email protected], corresponding author), Péter Kovács^3 ([email protected]), Antal Jakovác^1 ([email protected])
^1 Department of Computational Sciences, Wigner Research Centre for Physics, 29-33 Konkoly-Thege Miklós Street, H-1121 Budapest, Hungary
^2 Institute of Data Analytics and Information Systems, Corvinus University of Budapest, 8 Fővám Square, H-1093 Budapest, Hungary
^3 Department of Numerical Analysis, Eötvös Loránd University, 1/c. Pázmány Péter sétány, H-1117 Budapest, Hungary
arXiv:2307.01930v1 [cs.LG] (http://arxiv.org/abs/2307.01930v1), 4 July 2023
Representation learning has become a crucial area of research in machine learning, as it aims to discover efficient ways of representing raw data with useful features to increase the effectiveness, scope and applicability of downstream tasks such as classification and prediction. In this paper, we propose a novel method to generate representations for time series-type data. This method relies on ideas from theoretical physics to construct a compact representation in a data-driven way, and it can capture both the underlying structure of the data and task-specific information while still remaining intuitive, interpretable and verifiable. This novel methodology aims to identify linear laws that can effectively capture a shared characteristic among samples belonging to a specific class. By subsequently utilizing these laws to generate a classifier-agnostic representation in a forward manner, they become applicable in a generalized setting. We demonstrate the effectiveness of our approach on the task of ECG signal classification, achieving state-of-the-art performance.
Keywords: ECG classification, linear law, representation learning, outlier detection, machine learning
§ INTRODUCTION
The efficiency of any machine learning system is heavily dependent on how the data are input into the system <cit.>. The best representation of any given information depends on the task at hand and can change dynamically over time.
For this reason, much effort has been dedicated to transforming data into an appropriate form during the development of machine learning applications. One way to optimize data representations is to employ feature engineering methods that transform the data in advance, making them more suitable for machine learning algorithms. Although the resulting systems can experience large performance boosts from such optimization approaches, there are a few drawbacks to this procedure. Since these data preprocessing methods are designed by humans, they often reflect naive intuitions about the structure of the information in the data and about the task to be automatized. This results in less generalizable systems because they cannot fine-tune their inner data representations, which are manually determined. Another problem is that this process becomes labor intensive and less effective as the volume and variation of the data increase <cit.>. These shortcomings are among the main reasons why finding new ways to automatize data preprocessing and feature generation is the main focus in many areas of machine learning.
To extend the scope and increase the data efficiency of intelligent algorithms <cit.>, it is important to increase their ability to extract and organize useful information from the data presented. As machine learning starts to address increasingly complex tasks, it will become necessary to increase a system's ability to internally organize data and dynamically learn the correct representations. This line of thought has led to breakthrough results in many areas of machine learning, such as speech recognition <cit.> and image recognition <cit.>. The idea of allowing an algorithm to represent information in a hierarchical way has also led to outstanding results in natural language processing <cit.>.
In this paper, we present a new technique for generating useful features from time series data that we apply to the task of electrocardiogram (ECG) signal classification. The main advantage of this technique is that it learns representations in a data-driven way; however, it is distinct and separate from a classification algorithm. The idea behind this new method originates from the connection between representation learning and physics. The aim of representation learning is to disentangle the multitude of underlying factors within the data to generate more abstract representations <cit.>. This idea is very similar in nature to the renormalization idea in physics <cit.>, where the underlying physical processes have to be disentangled in observed quantities. This similarity can be exploited to aid in representation learning using techniques from theoretical physics <cit.>.
This technique approaches the question of data representation from the renormalization point of view and determines relevant and irrelevant underlying factors to aid in the disentangling process <cit.>. When relevant factors are identified, a new representation of the data can be generated, providing a more concise and abstract description of the information. Linear law-based feature space transformation (LLT) has been shown to be a successful technique for extracting relevant factors from time series data <cit.>.
The idea of linear laws originates from the conservation laws of physics, which are quantities that remain constant in an observed system and can be used to greatly simplify calculations. In the data science context, laws mean relations that are true for all elements of a learning set. Analogously to physics, these relations are "conserved" within the learning set, and they are true for every element. The challenge is to generate these laws automatically from the dataset while introducing as few assumptions about the nature of the dataset as possible.
The nature of linear laws dictates that these relations must be determined for classes in the learning set, and they characterize the samples in a general way, as they represent a property that is common in all elements of a given class. Because data representation is task-oriented, this step requires the tuning of hyperparameters by the user to obtain a suitable representation for a given goal. Hence, fitting linear laws is more similar to applying machine learning methods than data preprocessing and feature engineering.
A linear law-based approach has many positive properties that make it very versatile.
First, feature generation is data-driven. This means that without specifying any method for classification, it is possible to check what kind of separation is supported by the data. The parameters of linear laws provide information about the level of support a given classification receives from the data <cit.>.
It can be, for example, that signals that have the same label are not as similar to each other as previously thought. This is indicated by the accuracy of the linear law derived from them.
Another advantage of linear laws is that they can automatically find representations while not requiring any supervised learning method. The laws are 'derived' from the data in a forward way.
With this technique, the feature generation process becomes more distinct from the classification algorithm, and many classifiers can be used for the same feature set. This improves the interpretability because it is possible to follow how a feature was generated and how it influenced the categorization in a given sample and classifier. This property can be especially important in medical applications where verifiability is key <cit.>.
The separation of the classifier and the feature generator also makes it possible to study the behavior of different classifier algorithms for the same transformed space. In this context, the difference between these cases will provide information about the generalizability of each method compared to that of others. From this information, the best method can be chosen. Alternatively, as these algorithms all work in the same feature space, it is possible to combine them in various ways to provide more generality.
The new technique presented in this paper can be summarized as follows. First, linear laws are learned (fitted) for each class in the learning set. These laws describe the underlying factors that define each set in the context of the dataset. Then, the laws are used to transform the data into a new representation that has better features for classification. This transformation builds on previous results obtained by linear laws <cit.> but introduces a new feature generation method to accommodate the spike-like nature of ECG signals. The classification algorithms are then trained in the transformed feature space.
This paper is organized as follows. First, in Section <ref>, an outlook is given on ECG signal classification. Then, in Section <ref>, the mathematical description of linear laws for time series is given, and a method for finding linear laws is explained. In Section <ref>, the ECG dataset used for testing the method is presented. The classification algorithm used to derive the results is described in Section <ref>. The results are presented in Section <ref>.
§ ECG CLASSIFICATION
There is a long history of computer-assisted analysis of electrocardiograms <cit.>, and it is still a very active research field due to the ever-increasing amount of data. In fact, new data acquisition technologies <cit.>, such as IoT, wearable devices, and smart devices, have paved the way for commercial applications of ECG monitoring. A massive amount of data is generated by these devices and must be stored, processed, and interpreted. A common aspect of these challenges is the extraction of features that encode medical information and discard nonmeaningful parts of the data, such as noise. This can be achieved manually via feature engineering and/or automatically via representation learning <cit.>. The former approach produces static handcrafted features based on human intuition, whereas the latter learns explanatory factors directly from the training data.
A clinical workflow must rely on transparent and interpretable decisions. This favors the use of handcrafted features that fuse medical knowledge with signal processing expertise. Temporal and statistical features, such as RR distances, ECG intervals, and moment-based indices, are typical examples that are reviewed in detail by <cit.>. Furthermore, the shape features of the basic ECG waveforms (QRS, T, P) belong to the morphological category, and their global changes are of great importance in the field of arrhythmia detection <cit.>, which is the main objective in this section.
There are various ways to extract morphological features from ECGs. For instance, time-domain approaches usually operate with direct signal samples from ECGs <cit.> and computed measurements such as power, derivative, and extreme values <cit.>. These time-domain features are especially prone to noise; thus, the feature extraction step is often performed in the frequency domain instead. Spectral approaches (e.g., linear filtering) assume at least weak stationarity of the analyzed signal, which is not true in the case of ECG, as it is influenced by many physiological factors, such as respiration, body movement, and arrhythmia. To address this nonstationary behavior, a number of joint time-frequency representations have been introduced, such as the short-time Fourier transform, Choi–Williams distribution, and multiwindow spectrogram <cit.>. In contrast to the previously mentioned methods, the so-called wavelet transformation employs time windows with variable widths that allow better temporal and spectral localization of the nonstationary signal features. This property makes wavelets very popular in the field of ECG signal processing <cit.>.
Although there is a wide range of joint time-frequency representations, the building atoms (e.g., the window function and the mother wavelets) are fixed a priori to a given application. This can be restrictive in the case of ECG signal processing, where morphological features can vary over time and between subjects. To overcome this limitation, adaptive data-driven transformations have been introduced. These methods represent an ECG as a series of basis functions that are optimal in some sense. Variable projections by means of parameterized basis functions, such as Hermite, spline, and rational functions, provide optimal representations of the data in a least-square sense <cit.>. In contrast, PCA is a statistical approach that transforms the data along variance-maximizing orthogonal directions <cit.>. Independent component analysis (ICA) is another technique originating from blind source separation <cit.>, finding a projection that maximizes higher-order statistics, such as kurtosis. In general, the proposed LLT method is similar in the sense that a set of independent vectors is determined from the input data. The optimality criterion (<ref>) of the LLT method is, however, essentially different since it favors the direction along which the time-embedded samples change the least. This results in a feature space with increased separability, which will be demonstrated in a case study of ECG classification.
§ MATHEMATICAL BACKGROUND
Here, we briefly summarize the mathematical results needed for understanding linear laws and for implementing the LLT-type transformation used for ECG signals <cit.>.
Let us first consider a time series y: ℝ→𝕍, where 𝕍 is a finite-dimensional Hilbert space. In practice, we always work with finite-dimensional representatives, and here, we assume that a faithful finite-dimensional representation of a time series is already given. In this case, we can prepare a finite set of n-length samplings from this time series:
𝒴 = {
Y^(k)∈𝕍^n+1 |
Y_i^(k)=y(t_k-i Δ t), i ∈{0,…, n}, k ∈{n,…, K}},
where K,n ∈ℕ, Δ t is the sampling interval of the time series and y(t_k) is the value of the time series at time t_k. This is referred to as the time delay embedding of the time series, and it is sufficient to capture the dynamic state of the system <cit.>.
The set of t_k values that form the base points corresponding to the n-length samplings can be chosen in any suitable way depending on the given application. Here, we always choose the maximum number of possible base points, which results in maximally overlapping n-length samplings; hence, n ≤min(k). For a time series of length L, this time embedding creates L-n+1 subsamples of length n, where two consecutive subsamples maximally overlap, meaning that the second subsample shares n-1 elements with the first one, i.e., it contains every entry of the first subsample except its first entry.
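As a small illustration (ours, not part of the original text), the maximally overlapping embedding can be generated with a few lines of numpy; the function name is our own, and the windows are kept in natural time order, which only relabels the columns and does not affect the correlation matrix used below:

import numpy as np

def time_delay_embed(y, n):
    # rows are the maximally overlapping length-n windows: an (L - n + 1) x n matrix
    y = np.asarray(y, dtype=float)
    L = y.shape[0]
    return np.stack([y[k:k + n] for k in range(L - n + 1)])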
§.§ Linear laws
Let us first consider mappings of the following type:
ℱ: 𝕍^n+1→ℝ, ℱ( Y^(k)) = 0, ∀ k.
In this study, we only consider linear mappings. This assumption constrains the form of ℱ in Eq. (<ref>). For convenience, we introduce the matrix notation for the embedded time series Y_i^(k) from Eq. (<ref>):
Y_ki = Y^k_i = y(t_k -i Δ t).
Using this notation, the linear mapping ℱ can be written as:
ℱ(Y^k)= ∑_i=0^n Y_kiw_i≡ (Yw)_k = 0, ∀ k,
where w is an (n+1)-length coefficient vector with components w_i. We denote this construction as "linear law ℱ", which is represented by the vector w. As mentioned in the introduction above, the intuition behind this nomenclature originates from physics: ℱ can be considered a "law" on the set Y because it represents a relation that is true for all k, i.e., for all elements of Y.
§.§ Determine the linear law from the data
To determine the coefficients of linear law w_i, recognize that the definition in Eq. (<ref>) can be written as Yw =0. In the usual quadratic norm, this takes the following form:
‖Yw‖^2=1/K( Yw )^T ( Yw ) = w^T C w =0,
where
C = 1/KY^TY,
is the correlation matrix corresponding to the original dataset.
To avoid the trivial solution w=0 in Eq. (<ref>), we further require that ‖w‖=1. To implement this new constraint, the problem can be treated as a minimization task. By using the Lagrange multiplier method, we can write:
χ^2(λ) = w^T C w - λ w^T w = min.
The solution of Eq. (<ref>) is the
Cw^(λ)=λ w^(λ)
eigenvalue equation. This yields a number of n+1-length eigenvectors, which are all potential linear laws. We need an additional constraint to select the eigenvector that best represents the data. An exact law would satisfy Yw =0, which means that the relation in Eq. (<ref>) is completely fulfilled. Usually, this is not possible to satisfy in practice, and ℱ maps the elements of 𝒴 to small numbers around zero:
ℱ(Y^k) = ∑_i=0^n Y_kiw_i ≡ξ_k .
Substituting this back into Eq. (<ref>) yields:
‖Yw‖^2 =1/K∑_k=0^K ξ_k^2 = ⟨ξ^2 ⟩.
On the other hand, substituting the solutions of Eq. (<ref>) into Eq. (<ref>) yields
‖Yw‖^2 = λ w^Tw = λ.
By comparing Eqs. (<ref>) and (<ref>), one can immediately see that the expectation value of ξ^2, which is the variance of the value of (Yw)_k on the whole sample set, is simply λ. The linear law that performs the best is the one corresponding to the smallest variance because it is the closest to the ideal zero defined in Eq. (<ref>). This linear law can now be easily selected from the eigensystem defined by Eq. (<ref>). The linear law that describes the dataset with the smallest average error is the eigenvector in Eq. (<ref>) corresponding to the smallest eigenvalue. This always exists because C is symmetric and positive semi-definite; hence, all eigenvalues are real and non-negative.
Notably, the linear law has an intuitive meaning. The process described above has a formal similarity to the principal component analysis (PCA) method. A large difference, however, is that in PCA, we are interested in the eigenvector corresponding to the largest eigenvalue; here, we are searching for the smallest one. This comparison helps give intuitive meaning to the w_i linear law. It is a vector in the embedding space that is orthogonal to the dataset. This is the direction in which the time-embedded samples change the least or in this direction, they remain almost constant for the whole dataset. This is a "common property" that is found by linear laws, and it can be imagined as finding the normal of a hyperplane that contains the data. In practice, this plane has a thickness corresponding to small variations in data, but the thinner it is, the better the linear law.
Thus far, our formulation has been based on the idea of having one very long time series sample, which is then time embedded into the Y_ki matrix. In some tasks, this might be the case, but in many problems, the learning set is given as a labeled set of time series samples y_m(t), where m ∈ℕ. This situation can be reduced to the case described above. The process of generating Y_ki can be applied to individual samples, which results in a set of matrices Y_k'i'^(m). These matrices can be concatenated along their first (row, k') axis to form a compound Y_ki. This can be done because the k'-indexed rows of Y contain time-embedded samples, and their order does not matter for the process of calculating the linear law. The rows can be thought of as the new learning samples for the linear law. Adding new rows to Y_ki simply means enlarging the learning set. In standard notation, this can be described as augmenting the first sample matrix with the others to form a new, larger block matrix with much longer columns. If all the samples have a uniform length L and the linear law has length I (the same as the embedding depth), then Y_k'i'^(m) is an (L-I+1) × I dimensional matrix. If we have M samples, then after concatenating the matrices corresponding to the samples, the resulting Y_ki has dimensions ((L-I+1)· M) × I:
Y_ki =
[ Y_k'i'^(1); Y_k'i'^(2); ⋮; Y_k'i'^(M) ].
This larger Y matrix can be treated the same as the Y_ki described above. In this way, the linear law determination process is independent of the form of the given dataset. Everything can be reduced to collecting time-embedded samples into a large Y_ki matrix, which can be treated in a standard way.
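A compact numerical sketch of this procedure (our illustration, reusing the hypothetical time_delay_embed helper above): the embeddings of all samples of one class are stacked into the block matrix Y, the correlation matrix C = Y^T Y / K is formed, and the unit eigenvector belonging to the smallest eigenvalue is returned as the linear law:

def linear_law(samples, n):
    # stack the embeddings of all samples of the class into the block matrix Y
    Y = np.vstack([time_delay_embed(y, n) for y in samples])
    C = Y.T @ Y / Y.shape[0]                # correlation matrix C = Y^T Y / K
    eigvals, eigvecs = np.linalg.eigh(C)    # symmetric PSD, ascending eigenvalues
    return eigvecs[:, 0]                    # eigenvector of the smallest eigenvalue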
§.§ Generating features
In the previous sections, it was discussed how to determine the linear laws if a set of samples were given. This section demonstrates how to use linear laws to transform datasets and generate features. Since linear laws represent properties that are common in the elements of a defining set, in a classification task, we need to determine a linear law for every class. This defines a mapping from the classes to linear laws:
ℋ: C_j → w_i^(j),
where C_j are the sets corresponding to the classes in the dataset (j ∈ℕ). The elements of every class can be transformed by linear laws in the following way. Let us denote one sample from a given class by y_m(t), which is a time series. First, this time series is embedded in the standard way, as described in Eqs. (<ref>) and (<ref>). This can be thought of as mapping the L-length time series sample into an (L-n+1) × n dimensional matrix Y_ki^(m). Then, the w_i^(j) linear law can be applied to the sample, similar to that in Eq. (<ref>):
∑_i=0^n Y_ki^(m) w_i^(j) = ξ_k^(m, j),
where the ξ_k^(m, j) vector contains the transformed features of sample m according to linear law w_i^j, which corresponds to class C_j. The meaning of these features can be understood by looking at Eq. (<ref>). They measure how effectively a linear law can transform the subsamples corresponding to the given sample into zero. The closer to zero the subsamples are, the better the linear law describes the sample. The intuition behind the features is that the linear law w_i^(j) transforms elements of class C_j closer to zero than samples from other classes. In this way, the features resulting from the transformation with linear law w_i^(j) behave like similarity detectors for elements of class C_j. Of course, in real applications, the situation can be more complicated, and different elements of a sample are mapped to zero to different degrees, which is why it is necessary to retain all the features.
These features can be used for classification in the following way. We need to know how a given sample was transformed by different linear laws to detect how similar the sample is to a given class. Because of this, every sample must be transformed by the linear laws of all other classes. This can be intuitively considered if one thinks about how to transform an unknown sample. In this situation, we need to transform a sample with all of the linear laws corresponding to all possible classes to create a comparison. Based on this information, a classifier can be trained that learns what the elements of each class look like after they are transformed by the linear laws corresponding to other classes. If the classes transform very differently from each other, the resulting vectors will differ greatly from one another, provided that we applied the transformation of the correct class's linear law. This greatly aids any classifier algorithm because samples from different classes move further from each other in the abstract feature space. If the possible classes (and their respective linear laws according to Eq. (<ref>)) are indexed with j, then the transformed features for a sample y_m(t) can be collected in a vector in the following way:
ξ^m = [ξ_k^(m, 1), ξ_k^(m, 2), …, ξ_k^(m, J)].
This is a collection of feature vectors, each corresponding to a given class. This can be treated in different ways according to the given application. One way is to simply flatten ξ^m and concatenate the different ξ_k^(m, j) after each other. In general, this process generates an (L-n+1) · J-length feature vector for each sample, where J is the number of classes and n is the length of the linear law. In applications, this feature vector can be downsampled if a less detailed representation is sufficient for classification.
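A minimal sketch of this feature generation step is given below. It assumes that the per-class linear laws w^(j) have already been determined; the law length, the number of classes, and the toy data are illustrative only.

```python
import numpy as np

def time_embed(y, depth):
    """Map an L-length sample into an (L - depth + 1) x depth matrix of subsamples."""
    return np.stack([y[k:k + depth] for k in range(len(y) - depth + 1)])

def llt_features(y, laws, depth):
    """Transform one sample with the linear law of every class and concatenate the
    resulting xi vectors into a single (L - depth + 1) * J length feature vector."""
    Y = time_embed(y, depth)
    return np.concatenate([Y @ w for w in laws])

# Toy usage: a 30-point sample, J = 2 hypothetical laws of length 11.
rng = np.random.default_rng(1)
sample = rng.standard_normal(30)
laws = [rng.standard_normal(11) for _ in range(2)]
print(llt_features(sample, laws, depth=11).shape)  # (40,)
```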
If there are only two classes (J = 2), the feature vector ξ^m can be greatly simplified in certain cases. It is sufficient to use the linear law corresponding to one of the classes:
ξ^m = ξ_k^(m, 1).
This is especially useful in cases such as anomaly detection, where a well-defined reference class is given and the task is to separate a sample from other samples that might not even belong to the same class. This approach works because when there are only two possible classes, it is sufficient for the classifier to detect whether a given sample is similar to the reference class. This method can also be thought of as outlier detection in a sense because we do not need the linear law of the outliers, and it might not even exist. However, some outlier samples are needed to train the classifier. This is exactly the situation in the task of classifying ECG signals as normal or ectopic. There are many different ways an ECG signal can be ectopic, but we are not interested in separately classifying these types of signals. We only want to know whether a healthy ECG signal was obtained. Normal heartbeats are selected as the reference class, and ectopic signals are defined as "not normal". Every sample is transformed by the linear law derived from normal heartbeats, and the resulting features measure how similar a sample is to samples from normal ECG signals.
In summary, the features described above have many advantageous properties from a representation learning point of view <cit.>.
* They offer multiple interpretable explanatory factors. The elements of the feature vector describe how similar a given interval of a signal is to the corresponding intervals in the reference set.
* They also show properties of semisupervised learning. When generating features according to Eq. (<ref>), only the reference class is used to determine the linear law, which is then applied to samples from both classes.
* In the transformed feature space, the samples naturally cluster by class. This is the reason why a simple classifier works so well, as will be shown below.
* The transformed features form manifolds that reduce the dimensionality requirements to represent the data. This is supported by the fact that simple classifiers such as random forests and support vector machines work very well with these new features. It is sufficient to find manifold boundaries in the feature space that can be represented with less data.
* This representation is also sparse. The feature transformation basically behaves like a similarity detector and signals the presence of a given class similarly to one-hot encoding. For example, the components of the large feature vector given in Eq. (<ref>) will be close to zero in the segment that corresponds to a given class, and they will be larger in other segments, signaling the presence of another class.
* The structure of the feature vector is determined by a relatively simple, linear, and interpretable mathematical object, a linear law, as defined in Eq. (<ref>). This vector, which usually has a length of 10-20 elements, is able to represent the relations and connections between the elements of a given class, which makes them similar. It compresses this information into a few real numbers, and with this, it characterizes thousands of samples. In addition, this information can be easily extracted. The feature generation according to Eq. (<ref>) is a linear process and can also be easily interpreted.
§ DATASET
For training and testing purposes, we used a database of QRS complexes from <cit.>. This is a balanced subset of the MIT-BIH database <cit.> that includes healthy and ectopic beats only. The training–testing split follows the protocol defined by de Chazal et al. <cit.>; thus, it is guaranteed that samples from the same patients are not mixed between the train and testing sets. In this way, it can be assured that the results presented here generalize well, and similar performance can be expected for new samples. The training set contains 8520 samples, while the test set contains 6440 signals. Some examples from the dataset can be seen in Fig. <ref>. For the purposes of this work, this database is divided into three parts in the following way: 40 % of the training set is used for the training, and the remainder is retained as a validation set, which is mainly used to fix the hyperparameters of the various classifiers used. The original test set remained unchanged and was used only to determine the classification accuracy.
This division of the training and testing sets is partly chosen to demonstrate the data efficiency of this modified LLT method. Placing more elements in the train set does not significantly increase the accuracy. The explanation behind this behavior is that the requirement for a successful transformation is to have a sufficient number of samples to obtain an accurate linear law according to Eqs. (<ref>) to (<ref>). Since the point of a linear law is to represent a common property of a learning set in a compact form, adding more samples of the same type does not influence the result once good statistics are obtained. In this case, a linear law is determined from samples that originate from a given pool of patients. Adding signals from the same patients would not influence the law significantly, but including more patients probably will, as ECG signals show variations from person to person. If the train set contains samples from a sufficient number of different patients who represent a faithful sampling of the general population, then the derived linear law will no longer change significantly by adding new patient data.
The samples in the dataset are processed in a standard way <cit.>. First, a low-pass filter with a 20 Hz cutoff frequency and a high-pass filter with a 0.5 Hz cutoff frequency are used to remove the noise and baseline shift from the samples. Then, the signals are standardized so they have a zero mean, and they are normalized with their maximum value. The QRS peaks are identified in these samples and cut out so that, in the end, each sample is 30 data points long and centered on the QRS complex peak. This means that when determining the dimensions of Y_ki in Eq. (<ref>), L=30. This process can always create a standard input from any signal and makes the algorithm less sensitive to the format in which the samples are given.
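The preprocessing pipeline might be sketched roughly as follows with SciPy. The cutoff frequencies, the standardization, and the 30-point QRS-centered window follow the text; the Butterworth filter order, the peak-search thresholds, and the 360 Hz sampling rate (the nominal MIT-BIH rate) are assumptions made only for the sake of the example.

```python
import numpy as np
from scipy.signal import butter, filtfilt, find_peaks

FS = 360  # assumed sampling rate in Hz

def preprocess(signal, fs=FS, window=30):
    """Band-limit, standardize and cut QRS-centered windows from a raw ECG trace."""
    # 0.5 Hz high-pass removes baseline wander, 20 Hz low-pass removes noise.
    b_hi, a_hi = butter(2, 0.5 / (fs / 2), btype="highpass")
    b_lo, a_lo = butter(2, 20.0 / (fs / 2), btype="lowpass")
    x = filtfilt(b_lo, a_lo, filtfilt(b_hi, a_hi, signal))
    # Zero mean, normalized with the maximum absolute value.
    x = x - x.mean()
    x = x / np.max(np.abs(x))
    # Crude QRS peak search; threshold and minimum spacing are illustrative only.
    peaks, _ = find_peaks(x, height=0.4, distance=int(0.25 * fs))
    half = window // 2
    return [x[p - half:p + window - half] for p in peaks
            if p - half >= 0 and p + window - half <= len(x)]
```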
§ CLASSIFICATION ALGORITHM FOR ECG SIGNALS
The classification task is realized as a two-step process. First, the LLT is applied to the sample, which generates features from the given signal, as described in Eq. (<ref>). Then, a classifier trained on these features processes the sample and outputs the final result. A zeroth step, which is, strictly speaking, not part of the classification, is also taken to standardize the input signals, as described in Section <ref>. In this step, the input passes through a noise filter, and then it is searched for peaks, which are cut out with a window. Note that since the existence of exactly one QRS peak per sample cannot be guaranteed for general ectopic signals, these cases must be handled by the algorithm. In this study, the following logic is adopted: if the peak search fails to return a proper peak or finds more than one peak per labeled sample, then this signal is automatically classified as an ectopic signal. The reason behind this is that the peak search always finds a peak in normal heartbeat samples; hence, if there is any problem with the search, the signal must be an ectopic signal. These artifacts are present in 5 % of the samples in the train and validation sets and in 1 % of the test set.
§.§ Training the classifier
Similar to classification, training is also a two-step process. The first step is to find the linear laws corresponding to the classes in the learning set according to Eq. (<ref>). Then, the linear law of the normal beats is used to perform an LLT for the whole learning set according to Eqs. (<ref>) and (<ref>). Now, the whole learning set, including both the normal and ectopic beats, is transformed into the new representation. The hyperparameters of LLT must be determined at this step. In this case, there is only one hyperparameter: the length of the linear law, denoted by n in Eq. (<ref>).
Determining the value of n is a multifaceted problem, and the correct value often depends on the application and the dataset. However, there are multiple considerations that, when taken into account, can significantly reduce the possible values of n.
First, this parameter must be sufficiently large so that the linear law has enough free variables to accurately represent the data with minimal error. This can be checked by looking at the value of the smallest eigenvalue in the solution of Eq. (<ref>), i.e., the eigenvalue equation. According to Eq. (<ref>), this is the variance of the transformed values. Smaller values of this variance show that the linear law is better at mapping the data to, ideally, zero. Increasing n generally decreases the error of the fit as it adds more parameters, but the noise and statistics place an upper limit on decreasing the error in this way. Most of the time, this limit manifests itself in the form of complex eigenvalues in Eq. (<ref>), which appear after increasing n too far and attempting to transform the data with a linear law.
Second, n determines the degrees of freedom of the fit, and it also plays a crucial role in generating features. As shown by Eq. (<ref>), the number of features generated per sample is L-n+1, where L is the length of the sample. This means that the larger n is, the smaller the number of features generated for classification, which can decrease the accuracy. n must be sufficiently large to properly represent the signal through linear laws, but it cannot increase arbitrarily because it limits the number of features that are generated.
Another aspect to consider regarding the parameter n is that we want to keep it as low as possible to avoid overfitting. This can be checked by looking at how well the law performs with new data. One possible method is given as follows. First, the linear law is determined with the train set. Then, it is evaluated with the validation set. This means calculating ξ_k in a way that, in Eq. (<ref>), Y_ki originates from the validation set and w_i from the training set. This method checks how good the linear law of the training set is for the validation set. In this way, we have two ξ_k vectors, one corresponding to the train set (ξ_k) and the other (ξ'_k) corresponding to the validation set. Comparing the variances of these two vectors reveals how well the linear law fits both datasets. The value of n that makes the most sense is the one where the variances of ξ'_k and ξ_k are close to each other because this means that the linear law generalizes well to the new data. These observations define an interval for the possible values of n. The best value from this interval can be chosen by hyperparameter optimization considering the whole process. This basically means that n can be optimized together with the parameters of different classifiers. However, it must be remembered that n is not part of the classifier; rather, it is a part of data representation and feature generation. It determines the number and quality of the features the classifiers use and influences performance in this way.
Following the previously mentioned observations, it was found that n=11 provides a good balance. It leads to a small variance for both the validation and training sets, and the two values are close to each other. Eq. (<ref>), i.e., the eigenvalue equation, is numerically stable with different datasets, and all of the studied classifiers obtain good performance with this setting.
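To make the variance check concrete, the sketch below assumes one possible realization of the eigenvalue equation: the law w is taken as the unit eigenvector of the symmetric matrix Y^T Y / K belonging to its smallest eigenvalue, so that this eigenvalue equals the mean squared value of ξ = Y w. The variable names and the commented usage are illustrative only.

```python
import numpy as np

def fit_law(Y):
    """Unit eigenvector of Y^T Y / K for the smallest eigenvalue (one realization
    of the eigenvalue equation); the eigenvalue is the mean squared value of xi."""
    S = Y.T @ Y / Y.shape[0]
    eigvals, eigvecs = np.linalg.eigh(S)  # real symmetric -> ascending eigenvalues
    return eigvecs[:, 0], float(eigvals[0])

def xi_spread(Y, w):
    """Mean squared value of the transformed vector xi = Y w."""
    xi = Y @ w
    return float(np.mean(xi ** 2))

# Hypothetical usage for selecting n: Y_train and Y_val are compound embedded
# matrices built with the same embedding depth n.
# w, spread_train = fit_law(Y_train)
# spread_val = xi_spread(Y_val, w)
# A suitable n keeps spread_train small while spread_val stays close to it;
# n = 11 satisfied both criteria in this study.
```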
The second step of the LLT-based classification is to train a classifier with the features generated by the transformation <cit.>. In this way, the classifier operates with the transformed samples only without using the original data points themselves. Since these two steps are independent of each other, the linear law does not constrain the type of classifier. To study the effectiveness of the LLT, four different classifiers were trained with the same transformed learning set: random forest (RF) <cit.>, k-nearest neighbors (KNN) <cit.>, support vector machine (SVM) <cit.>, and a simple neural network (NN) <cit.>.
The hyperparameters of the abovementioned classifiers (depth of decision trees, k-parameters, etc.) are fixed by maximizing the performance for the validation set. The details corresponding to the different classification methods can be found below along with the results.
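For reference, the second step might be set up with scikit-learn roughly as below, using the hyperparameter values quoted in the following subsections (10 trees of depth 6 for RF, k = 4 with the Chebyshev metric for KNN, RBF and linear kernels for SVM, one hidden layer of 8 neurons for the NN). The feature matrices and labels (X_train, y_train, X_val, y_val) are placeholders for the LLT-transformed data.

```python
from sklearn.ensemble import RandomForestClassifier
from sklearn.neighbors import KNeighborsClassifier
from sklearn.neural_network import MLPClassifier
from sklearn.svm import SVC

# Hyperparameters follow the values reported below; X_* hold LLT-generated features.
classifiers = {
    "RF": RandomForestClassifier(n_estimators=10, max_depth=6, random_state=0),
    "KNN": KNeighborsClassifier(n_neighbors=4, metric="chebyshev"),
    "SVM": SVC(kernel="rbf"),
    "SVM (linear)": SVC(kernel="linear"),
    "NN": MLPClassifier(hidden_layer_sizes=(8,), max_iter=2000, random_state=0),
}

# for name, clf in classifiers.items():
#     clf.fit(X_train, y_train)
#     print(name, clf.score(X_val, y_val))
```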
§ RESULTS
The results of LLT-based classification of ECG signals are summarized in Table <ref>. Common performance metrics were used to evaluate the different classifiers. From these metrics, the total accuracy (ACC) is defined as:
ACC = (TP + TN)/(TP + FN + TN + FP)
where TP, TN, FP, and FN are the true positive, true negative, false-positive, and false-negative matches, respectively. The sensitivity/recall (Se) can be defined as:
Se = TP/(TP + FN).
The positive predictability/precision (+P) is defined in the following way:
+P = TP/(TP + FP).
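These three quantities can be computed directly from the confusion counts; a small helper for the binary case is sketched below for illustration.

```python
def binary_metrics(y_true, y_pred, positive=1):
    """Total accuracy (ACC), sensitivity/recall (Se) and positive predictivity (+P)."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == positive and p == positive)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t != positive and p != positive)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t != positive and p == positive)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == positive and p != positive)
    acc = (tp + tn) / (tp + tn + fp + fn)
    se = tp / (tp + fn) if (tp + fn) else 0.0
    ppv = tp / (tp + fp) if (tp + fp) else 0.0
    return acc, se, ppv

print(binary_metrics([1, 1, 0, 0], [1, 0, 0, 1]))  # (0.5, 0.5, 0.5)
```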
These performance metrics are determined for both the testing and validation sets. The reason for this is to examine how well the LLT-generated features generalize to a new group of patients. The validation set contains ECG signals from the same patients as the train set, while the test set contains signals from different patients than those in the other two datasets. These results characterize the real-world performance of the proposed method with new, previously unseen data. As can be observed in Table <ref>, the metrics are close to each other for both sets, which means that the method generalizes well to new patient data. This supports the conclusion that the LLT method is able to grasp the general common properties of healthy QRS complexes. The highest accuracy for the test set, 94.3 %, is reached by the SVM. It is also worth noting that a simple linear SVM is sufficient to reach 90% accuracy for both the testing and validation sets. This means that the LLT method with the new feature generation procedure, described in Eq. (<ref>), successfully generates features that make the classes almost linearly separable in the transformed feature space. This is the reason why simple classifiers achieve high performance, falling within the range of current state-of-the-art methods <cit.>. An even more precise comparison can be made using the results from <cit.>, where variable projection networks (VPNets) obtain the best results. This neural network was trained with the same database, which makes the comparison more realistic. The SVM total accuracy is only 2.4% lower than that of the aforementioned VPNet, although the discrepancy is somewhat higher in terms of the other metrics.
The efficiency of the linear law transformed feature space is demonstrated by two factors.
* The testing and validation sets are considerably larger than the train set, which shows that the method can successfully grasp the multitude of underlying factors present within the data.
* The method effectively finds the factors that differentiate the healthy and ectopic QRS complexes, which is reflected in the small number of parameters the classifiers need to fit the data: 10 estimators with trees of depth 6 in the case of RF and one hidden layer with 8 neurons for the NN.
The fit of each simple classifier is detailed below.
§.§ Random forest
The main two parameters of the random forest classifier are the number of estimators and the tree depth. Increasing these parameters leads to a better fit for the train set; however, the accuracy for the testing set will inevitably be worse because the classifier overfits the signals of patients who are in the validation and train sets. To avoid this, the parameters are optimized in the following way: the accuracy of the classification is measured with both the training and validation sets for different combinations of parameter values. These values are considered better when the accuracies for the two sets of data are closer to each other while still maintaining good performance. This method prevents overfitting by not only rewarding high accuracy for the train set but also expecting similar performance for the validation set. Note that the testing set is not used in any manner to calculate the parameter values. This procedure leads to a random forest of only 10 estimators and 6-level deep trees. It can also be seen in Table <ref> that the performance for the test set and validation set is similar, which shows that the method successfully generalized to new patient data.
§.§ k-nearest neighbors
The parameters of the classifier are optimized with the validation set. This results in k=4 and the Chebyshev-metric-based distance. The performance for the test and validation sets is shown in Table <ref> under KNN (k=4). It is known, however, that larger k values can offer more stability simply because more points are considered while determining the class of a new point. A way to address this problem is to use the well-known heuristic rule of thumb for determining k:
k=√(N),
where N is the number of data points. This results in k=57. The performance of the model parametrized this way can also be found in Table <ref> under KNN (k=57). This setting yields a somewhat lower total accuracy than k=4, although the differences between the two are comparatively small. Additionally, this approach for determining k has the advantage of dispensing with the need for a validation set, which can prove beneficial in scenarios characterized by limited data availability.
The k=4 KNN model has the highest overall accuracy metrics for the validation set, although the numbers are close to those of the NN and SVM results. However, the NN and SVM models are able to transfer more of this accuracy to the test set than the KNN model. This behavior can be intuitively understood.
The low k determined from the validation set means that transformed points in the validation set are close to points in the training set, and KNN requires very few neighbors to obtain a correct result, as there are fewer ambiguous points. This outcome is the manifestation of the LLT, which is designed to transform similar signals in close proximity to each other in the feature space. The train and validation sets contain samples from the same pool of patients; hence, the transformed samples are close to each other. However, the test set contains completely new samples, and some of them may be further away from the points of the train set in the feature space. The KNN method is limited in its capacity to accurately generalize because it lacks the requisite number of internal parameters, unlike the NN and SVM methods. Hence, the predictive accuracy of KNN for the test set is relatively lower than that of NN and SVM because of the new type of signals present in the test set.
§.§ Support vector machine
This classifier with a nonlinear kernel is the most natural choice for the LLT method. Since the feature space transformation attempts to map similar samples to close points in the feature space, it creates clusters corresponding to classes. The SVM method finds hypersurfaces that separate these clusters. Thus, the two methods strengthen each other, and SVM provides a meaningful generalization for the feature transformation: it finds classification borders in the feature space that are suggested by the transformed samples. This method also does not require the optimization of any meta parameters (such as the tree depth), which makes its application much more straightforward. It can be seen in Table <ref> that the SVM provides similarly good performance on both the test and validation sets, suggesting that LLT successfully obtains meaningful features for the separation of healthy and ectopic signals, and the results generalize to new data (and patients) well.
In addition to the more general nonlinear SVM method, a simpler model with a linear kernel was also fitted in the feature space. The linear model performs surprisingly well despite its limited capacity to generalize and reaches approximately 90% accuracy for both the test and validation data. The fact that a linear model manages to reach this accuracy offers some insight into the structure of the feature space. The samples are consistently mapped to distinct locations based on their respective classes to such an extent that a linear SVM can find a hyperplane in the feature space that effectively separates up to 90% of the samples.
§.§ Neural network
Neural networks are the most widely used and versatile machine learning systems. While the classification task on the LLT feature space does not demand the intricacy of a neural network, analyzing the problem through a neural network perspective can provide an alternative viewpoint on the characteristics of feature generation based on the LLT.
The exact form of the NN is a meta-parameter similar to the number of decision trees and tree depth in the RF method. Similarly, the validation set is used to determine the value of these parameters to prevent overfitting of the train set. The following neural network structure is found to be optimal: a fully connected simple neural network with 3 layers (one hidden layer). The input layer has 24 neurons, the hidden layer has 8 neurons, and the output layer contains two nodes corresponding to the two classes.
As shown in Table <ref>, the classification performance is similar to that of other methods, and the results generalize well from the test set to the validation set. The simplicity of the neural network can be explained by the fact that other simpler methods already work well for classification. Increasing the network will lead to higher accuracy on the train set, but it will generalize poorly to the test set. This points to the conclusion that the LLT method extracts the factors needed for the classification and further manipulation of the features (with a larger neural network) does not provide additional benefits.
§ CONCLUSIONS
In this paper, we presented a new technique to generate features from time series-type data and successfully applied it to the task of binary ECG signal classification. The new method extracts the features in a data-driven forward manner, which results in a classifier-agnostic feature space. These properties are achieved by using the principle of linear laws and the LLT method <cit.>. Here, linear laws are identified as common linear relationships between points in samples that belong to the same class. In this way, linear laws are able to represent classes in a concise and effective manner.
The features generated by the linear laws provide many advantages for classification, which was demonstrated by the fact that even linear classifiers reached very high total accuracy. Several different classifiers were trained with the LLT features extracted from a balanced dataset. RF-, SVM-, KNN- and NN-based classifiers reached above 90% total accuracy for the test set. The SVM method performed the best, achieving 94.3% total accuracy and thus placing it into the upper echelon of currently used methods. In the specialized task of classifying ECG signals corresponding to the same pool of patients as that in the training set, the KNN method reached the highest accuracy of 96.4%.
This new method generates high-quality representations with easily and intuitively interpretable features while ensuring that the whole process is verifiable, which can be important for certain applications such as those in health care. The LLT-based technique was also shown to be data-efficient by comparing it to the state-of-the-art VPNet method <cit.>. The LLT-based method is capable of achieving a similar level of performance for the test set, with a difference of only 2.4% while utilizing less than half of the training samples from the same dataset.
§ DATA AVAILABILITY
All the raw data generated in this study are available from the corresponding author upon reasonable request.
§ ACKNOWLEDGMENTS
The authors would like to thank András Telcs for his valuable comments and suggestions. The research was supported by the Ministry of Innovation and Technology NRDI Office within the framework of the MILAB Artificial Intelligence National Laboratory Program. A.J. received support from the Hungarian Scientific Research Fund (OTKA/NRDI Office) under contract number K123815. P.K. was supported by the ÚNKP-22-5 New National Excellence Program of the Ministry for Culture and Innovation from the source of the National Research, Development and Innovation Fund. Project no. PD 142593 was implemented with the support provided by the Ministry of Culture and Innovation of Hungary from the National Research, Development, and Innovation Fund, financed under the PD_22 “OTKA” funding scheme.
§ AUTHOR CONTRIBUTIONS STATEMENT
A. J., P.K., M.T.K., and P.P. conceptualized the work and contributed to the writing and editing of the manuscript. P.K. and P.P. acquired the data, and P.P. conducted the analysis. A.J. supervised the research.
§ COMPETING INTERESTS
The authors declare no competing interests.
|
http://arxiv.org/abs/2307.00856v1
|
20230703085432
|
OpenSiteRec: An Open Dataset for Site Recommendation
|
[
"Xinhang Li",
"Xiangyu Zhao",
"Yejing Wang",
"Yu Liu",
"Yong Li",
"Cheng Long",
"Yong Zhang",
"Chunxiao Xing"
] |
cs.IR
|
[
"cs.IR",
"cs.AI"
] |
This work was done during a visit to Nanyang Technological University while the author was studying at Tsinghua University.
Tsinghua University
Beijing
China
[email protected]
City University of Hong Kong
Hong Kong
[email protected]
City University of Hong Kong
Hong Kong
[email protected]
Tsinghua University
Beijing
China
[email protected]
Tsinghua University
Beijing
China
[email protected]
Cheng Long and Yong Zhang are the corresponding authors.
Nanyang Technological University
Singapore
[email protected]
Tsinghua University
Beijing
China
[email protected]
Tsinghua University
Beijing
China
[email protected]
As a representative information retrieval task, site recommendation, which aims at predicting the optimal sites for a brand or an institution to open new branches in an automatic data-driven way, is beneficial and crucial for brand development in modern business.
However, there is no publicly available dataset so far and most existing approaches are limited to an extremely small scope of brands, which seriously hinders the research on site recommendation.
Therefore, we collect, construct and release an open comprehensive dataset, namely OpenSiteRec, to facilitate and promote the research on site recommendation.
Specifically, OpenSiteRec leverages a heterogeneous graph schema to represent various types of real-world entities and relations in four international metropolises.
To evaluate the performance of the existing general methods on the site recommendation task, we conduct benchmarking experiments of several representative recommendation models on OpenSiteRec.
Furthermore, we also highlight the potential application directions to demonstrate the wide applicability of OpenSiteRec.
We believe that our dataset is significant and is anticipated to encourage the development of advanced methods for site recommendation.
OpenSiteRec is available online at https://OpenSiteRec.github.io/.
<ccs2012>
<concept>
<concept_id>10002951.10003227.10003351</concept_id>
<concept_desc>Information systems Data mining</concept_desc>
<concept_significance>500</concept_significance>
</concept>
<concept>
<concept_id>10002951.10003317.10003347.10011712</concept_id>
<concept_desc>Information systems Business intelligence</concept_desc>
<concept_significance>500</concept_significance>
</concept>
<concept>
<concept_id>10002951.10003317.10003338.10003343</concept_id>
<concept_desc>Information systems Learning to rank</concept_desc>
<concept_significance>500</concept_significance>
</concept>
<concept>
<concept_id>10002951.10003317.10003347.10003350</concept_id>
<concept_desc>Information systems Recommender systems</concept_desc>
<concept_significance>500</concept_significance>
</concept>
<concept>
<concept_id>10002951.10003227.10003236.10003101</concept_id>
<concept_desc>Information systems Location based services</concept_desc>
<concept_significance>500</concept_significance>
</concept>
</ccs2012>
[500]Information systems Data mining
[500]Information systems Business intelligence
[500]Information systems Learning to rank
[500]Information systems Recommender systems
[500]Information systems Location based services
OpenSiteRec: An Open Dataset for Site Recommendation
Chunxiao Xing
August 1, 2023
====================================================
§ INTRODUCTION
In modern business, selecting an optimal site to open a new branch is definitely crucial for the development of a brand or an institution <cit.>.
An appropriate site will bring substantial profits while an inappropriate site may lead to business failure <cit.>.
Thus, properly determining the best choice from so many candidate sites is quite important yet complex, since it needs to take many factors into accounts <cit.>, such as the brand types and the population surrounding the site.
Typically, this task is mainly accomplished by the professional consulting or marketing departments of companies <cit.>, which is usually labor-intensive and time-consuming.
Meanwhile, human error and bias can also lead to suboptimal solutions.
Therefore, it is difficult for such an artificial approach to fulfill the high demand of rapid development in modern business.
Thanks to the booming development in information retrieval, automatic data-driven approaches have been introduced to assist the decision-making and reduce the cost, i.e. site recommendation <cit.>.
These approaches come with a wide variety of definitions of site recommendation, including the association analysis <cit.> for feature selection, the rating prediction <cit.>, the consumption prediction <cit.> and the top-N recommendation <cit.>.
While they share the idea of treating the site recommendation problem as a ranking task, their significantly different definitions make it difficult for them to be compared directly.
Thus, all these works are independent of each other and they have to undesirably start from scratch instead of making continuous improvements, which is detrimental to the subsequent research on site recommendation.
Meanwhile, their datasets only cover very small yet different scopes in site recommendation, such as bike sharing station <cit.>, chain hotel <cit.> and online stores with courier capacity <cit.>.
This leads to failure in utilizing comprehensive information across scopes and severe data sparsity problems in site recommendation.
Even more unfortunately, none of them has released their datasets so far.
In most cases, collecting data and creating a dataset are necessary in research but low-yielding, since the dataset is typically only a foundational part of a scientific study.
Therefore, the lack of publicly available dataset brings inconvenience to the researchers and forces them to spend long time dealing with the dataset construction, which even hinders the development of site recommendation solutions.
According to the above problems, we believe a unified definition and a comprehensive open dataset of site recommendation are necessary and crucial for the benign development of the following research on site recommendation.
To this end, we propose a formal problem definition of site recommendation by jointly considering and summarizing the definitions of existing studies.
Based on this problem definition, we collect, construct and release an Open benchmarking dataset for Site Recommendation, namely OpenSiteRec.
Specifically, OpenSiteRec consists of four international metropolises, including Chicago, New York City, Singapore and Tokyo.
Different from the datasets used by the existing approaches, our proposed OpenSiteRec contains all the brands and regions from all the scopes and types in the whole cities and thus yields a wide-range, much larger and more comprehensive dataset.
Meanwhile, OpenSiteRec provides sufficient trustworthy commercial relationships and organizes the different types of real-world concepts into a heterogeneous graph to offer more comprehensive information.
Furthermore, we also conduct benchmarking experiments of several representative baselines on OpenSiteRec to facilitate future research.
Some discussions of the potential application directions of OpenSiteRec in other research areas, including brand entry forecasting and business area planning, and also its limitations are presented to give a broader view of the dataset.
The contributions of this paper are summarized as follows:
* We introduce a formal definition of site recommendation by summarizing the task definitions of existing works, which unifies them to provide open benchmarks for the following research.
* We collect, construct and release an open comprehensive dataset of four international metropolises, namely OpenSiteRec, to facilitate the subsequent research on site recommendation. To the best of our knowledge, OpenSiteRec is the first publicly available dataset for site recommendation.
* We conduct benchmarking experiments of 16 widely-used baseline models in recommendation on OpenSiteRec, to verify their effectiveness in site recommendation and to facilitate future research for comparison.
* Besides site recommendation, various other research areas such as brand expansion, urban planning and facility location can benefit from OpenSiteRec given that it embeds rich information on both commercial and geographical aspects of urban spaces.
§ RELATED WORKS
Site recommendation for store brands and public facilities is a widely-studied problem with strong practical significance in modern business <cit.> and urban planning <cit.>.
The most seminal attempt <cit.> proposes to investigate the potential effects of different features for retail store site identification.
Following it, some data mining approaches are proposed in evaluating the correlations between the street centrality and the geographical distribution of activities in Bologna <cit.> and Barcelona <cit.>.
With the booming development of machine learning algorithms, applying them for site recommendation becomes an effective and efficient solution, which has attracted increasing interest and thus yields many approaches.
Unfortunately, not only their datasets are limited to small scopes but also none of them has released their datasets.
The detailed comparison between the datasets of these approaches and our OpenSiteRec is shown in Table <ref>.
Geo-Spotting <cit.> first extends the early data mining approaches with machine learning algorithms to better analyze the effectiveness of geographical and mobility features in site recommendation.
This approach focuses on 3 fast food brands in New York City and analyzes the key factors of site recommendation separately for each brand with sufficient samples.
ANNRR <cit.> proposes a semi-supervised feature selection method on heterogeneous urban data to predict bike trip demand for the bike sharing station recommendation in two cities.
In this work, the site recommendation is limited to bike sharing station.
PAM <cit.> utilizes the traffic information with partitioning around medoids algorithm to determine the optimal location of new ambulance station.
Although there are a great number of candidate regions, there are only 34 existing stations available for training, which may harm the credibility of the results.
BL-G-CoSVD <cit.> introduces bias learning and integrates both location and commercial features into SVD to recommend the suitable shop-type for each site.
Contrary to the other approaches that predict the optimal region for a given brand, this approach predicts the shop-type for a given region with only 5 candidate shop-types so that the difficulty of this task is much lower.
DD3S <cit.> learns to rank the candidate demand centers for two coffee shop brands and two chain hotel brands by predicting the number of customers at the given location with multiple spatial-temporal data sources.
Since the candidate demand centers are predetermined by clustering the demand and supply gaps, there are only about 10 regions being determined as candidates, which is a small number.
DeepStore <cit.> leverages a deep neural network on both dense and sparse features for predicting the consumption level of 49 stores in their surrounding areas of 13 cities to recommend the optimal site.
However, the limited amount of data significantly increases the dependence of this task to the quality of features.
While these approaches consider the site recommendation of different cities individually, there are also some works focusing on knowledge transfer across different cities.
CityTransfer <cit.> transfers knowledge from a source city to a target city via both inter- and intra-city views for site recommendation of 3 chain hotel brands in a new city.
Specifically, it chooses Beijing or Shanghai as the source city and Xi'an or Nanjing as the target city to improve the performance on the cities with less stores.
WANT <cit.> employs adversarial learning to diminish the distribution discrepancy between the source city and target city for predicting the consumption of stores in given areas.
Then, it ranks the candidate areas in the target city according to the consumption for site recommendation.
Similar to DeepStore, the dataset used by WANT is also small in size.
Despite their success, all these traditional site recommendation approaches rely heavily on feature engineering with simple model structures, especially the fine-grained manually-crafted features <cit.>, which are hard to design and may introduce human biases.
Different from them, the recent approaches pay efforts in utilizing complex models to automatically capture the latent features from multi-source data for site recommendation, which is a new research trend of site recommendation.
UrbanKG <cit.> constructs a knowledge graph from urban data, built upon which a relational graph neural network model is designed for efficient and effective site recommendation.
UrbanKG utilizes a comprehensive dataset and the well-defined data structure is suitable for the research of site recommendation.
However, this dataset focuses on the giant brands and has not been released for publicly available.
O^2-SiteRec <cit.> conducts site recommendation by ranking the candidate regions with order number and delivery time from the courier capacity perspective in Online-to-Offline (O2O) stores of delivery platforms.
This approach only uses the data of a small scope of brands from a single city, which is not sufficient.
UUKG <cit.> proposes an urban knowledge graph as the foundation for downstream spatial-temporal prediction tasks.
Although UUKG is not designed for site recommendation, the link prediction task on it shares many similarities with site recommendation.
Differently, it only has 15 broad categories to distinguish the POIs rather than fine-grained brands, and more than 80% of the POIs are from merely 2 categories.
Therefore, the vanilla UUKG is not sufficient to well support site recommendation.
Compared with their datasets, our proposed OpenSiteRec has clear advantages from four perspectives:
* Wide-range and Large-scale: Existing site recommendation approaches mainly consider a small range of brands or focus on a specific scenario, such as chain hotels and courier capacity. Therefore, they usually collect data only for the specific demand, which yields a small size of data in the final task. In contrast, OpenSiteRec takes all the brands and regions in the city into account and thus yields a much larger data size.
* Rich Commercial Features: Existing approaches focus more on leveraging the features of regions for prediction while paying less attention to the features of brands. In contrast, OpenSiteRec provides more commercial features of brands and institutions via various types of relations among them.
* Comprehensive Information: Most of the existing approaches manually define the fine-grained features using original geographical and demographical information, which may lead to human bias and information loss. In contrast, OpenSiteRec models the data as a heterogeneous graph with different types of nodes and edges, which provides more comprehensive information, such as the competitive relations between brands, along with the original features.
* Publicly Available: None of the existing studies have released their datasets. To the best of our knowledge, OpenSiteRec is the first publicly available dataset for site recommendation. The released dataset will encourage continuous research in site recommendation.
§ DATA DESCRIPTION
In this section, we will first introduce the problem and schema definitions to better illustrate the overview of OpenSiteRec. Then, we will describe the whole process of data construction in detail. Finally, we will provide the statistics along with the usage of OpenSiteRec.
§.§ Problem Definition
While most of the existing works formulate the site recommendation as a ranking task ultimately, their data is usually used for a specific purpose and thus it yields different definitions of site recommendation, such as store-type recommendation <cit.>, consumption level prediction <cit.> and knowledge graph link prediction <cit.>.
From the higher level of perspective in abstraction, all these existing definitions of site recommendation share the same prototype.
No matter what kinds of aspects of information they use or forms of task they apply, the ultimate objective of them is the same to obtain the ranking list of sites.
Since there is no precise consentient definition of site recommendation, we propose a formal definition for site recommendation task by considering the common parts among all the definitions of existing works as follows.
Let ℬ = {b_1, b_2, ..., b_M} denote the brand set with M brands and ℛ = {r_1, r_2, ..., r_N} denote the region set with N regions.
Each POI that belongs to brand b_i and is located in region r_j contributes a value P_ij = 1 to the matrix 𝐏 ∈{0, 1}^M × N.
Therefore, the site recommendation task aims at predicting a ranking list of candidate regions for each given brand.
Different from the definitions of other studies, our definition is more general that unifies the different concepts, such as brand, store and shop type, and requires no additional definition, e.g. the relation in knowledge graph link prediction.
Moreover, such a definition is more straightforward that directly considers the ultimate ranking objective rather than converting the other objectives to ranking.
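Under this definition, the interaction data reduces to a binary brand-region matrix, and recommendation amounts to ranking candidate regions per brand. The short sketch below is purely illustrative; the input format and the scoring model are placeholders.

```python
import numpy as np

def build_interaction_matrix(pois, M, N):
    """pois: iterable of (brand_index, region_index) pairs, one per existing POI.
    Returns P with P[i, j] = 1 if brand b_i has at least one POI in region r_j."""
    P = np.zeros((M, N), dtype=np.int8)
    for i, j in pois:
        P[i, j] = 1
    return P

def recommend_regions(scores, known, top_k=20):
    """Rank candidate regions for one brand by predicted score, masking the regions
    where the brand already has a POI (i.e., the training interactions)."""
    masked = np.where(known.astype(bool), -np.inf, scores)
    return np.argsort(-masked)[:top_k]
```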
§.§ Schema Definition
In order to clarify the goal of data to collect and the final structure of dataset to construct, we deliver a schema as shown in Figure <ref> for illustration.
Such an overall schema has a graph structure, which consists of different types of entities to represent the real-world concepts and different types of edges between entities to indicate the commercial or geographical relations.
Since there is much different domain-specific information for each kind of brands or scenarios that provides additional complexity as mentioned above, such as the house price in site recommendation for chain hotel brands <cit.>, we only consider the general and fundamental information in according to our proposed problem definition.
Based on this schema, we elaborate the definition of each kind of entities and relations in details in the following paragraphs.
To be in line with the aforementioned problem and schema definition, we first define five types of entities to denote the real-world concepts following UrbanKG <cit.>:
* Brand: Brands denote either commercial brands in business that own multiple branches, e.g., Starbucks and Apple, or institutions that refer to special functions, e.g., Columbia University and United States Postal Service.
* Category: Categories represent the functions of the venues. Due to the significant functional differences between venues, we define three levels of categories (broad, medium and narrow) for classification. For example, a Starbucks store has the categories of `Food and Beverage', `Beverage Shop' and `Coffee and Tea Shop' for broad, medium and narrow respectively.
* POI: POIs are the basic functional venues in a city, such as shops, restaurants and schools. Each POI has sufficient commercial and geographical information.
* Business Area: Business areas denote the special planned areas for business to form the scale effect, where the venues are usually very dense.
* Region: Regions refer to the geographical divisions planned by the city governments. The principle and granularity of region division varies depending on the city governments. For example, the regions in Chicago are relatively large with only numbers for reference while the regions in Tokyo are much smaller with specific names like `Ginza 1 Chōme'.
Specifically, these five types of entities can be further categorized into three aspects by the types of their data source, which are commercial data (Brand and Category), site data (POI) and geographical data (Business Area and Region).
From this perspective, we define the relation types between these five types of entities as Table <ref>.
Each POI can be mapped to a Brand and a Region from the commercial and geographical aspects of it.
Therefore, the POI serves as a bridge to connect Brand and Region to construct the dataset for site recommendation.
Meanwhile, the Category and Business Area of a POI are uniquely determined.
However, according to the real-world situation, a Brand may have multiple relations of Category and a Business Area may consist of several parts from multiple instances of Region.
So the relations between Brand and Category, and between Region and Business Area, cannot be exactly defined.
Moreover, there are also plenty of relations within Brand (Competitive and Related) and relations within Region (NearBy and Similar) that may be useful for site recommendation as follows:
* Competitive: This commercial relation means that two different brands are competitive, e.g., KFC and McDonald's.
* Related: This commercial relation means that two different brands belong to the same company or group, e.g., KFC and Pizza Hut both belong to Yum! Brands.
* NearBy: This geographical relation means that two different regions are close in physical space, e.g., `Ginza 1 Chōme' and `Ginza 2 Chōme'.
* Similar: This commercial relation means that two different regions have similar distributions of POI category, e.g., `Ginza 1 Chōme' and `Toranomon 4 Chōme'.
§.§ Data Construction
Due to the requirements of data quality to produce reliable dataset, we choose four international metropolises for at present considering their information comprehensiveness and data integrity, which are Chicago, New York City, Singapore and Tokyo.
In order to better support and promote the research for site recommendation, we plan to add more cities into OpenSiteRec on the premise of ensuring data quality in the future.
Note that all of the data is collected from open-source data sources and our OpenSiteRec is distributed under the same licence as them to fulfill the ethical regulations.
On the basis of the schema, we collect data separately from different sources for the aforementioned three aspects.
First, the site data, which is the core to connect commercial data and geographical data, is obtained by extracting the POIs from OpenStreetMap[https://www.openstreetmap.org/], licensed under the Open Data Commons Open Database License (ODbL) [https://opendatacommons.org/licenses/odbl/].
OpenStreetMap <cit.> is an open-source community-built map service that consists of three types of geographic unit, including nodes, ways and relations.
Typically, each node denotes a geographical point in a map while each way and each relation represents a line and a polygon that consist of series of points.
Specifically, we extract the data from the data distribution service[https://download.bbbike.org/osm/bbbike/] on December 1st, 2022.
For each geographic unit, OpenStreetMap provides a series of tags to describe its characteristics, such as name, brand and amenity.
Here we filter the POIs by extracting all the geographic units with at least one type of name tag since the nameless objects are mostly meaningless either.
To represent the geographical location of each POI to identify its region, we convert the original geographical information of different objects into their centroids' coordinates, which are pairs of longitude and latitude.
Then, we obtain the commercial data by assigning the POIs to the brands from Wikidata[https://www.wikidata.org/].
For the POIs that already have the brand tags, it is as simple as directly taking the information from the tags like the wikidata code to extract their brands.
Unfortunately, the majority of POIs don't have any brand tags and the brand information is mainly contained in their names.
Meanwhile, there may be multiple brand names that correspond to the same brand.
Therefore, it is essential to design an effective method for brand matching.
In order to achieve more accurate and reliable brand matching, we apply a combination of phonetic matching and text matching algorithms.
Specifically, we utilize the soundex <cit.> algorithm for phonetic matching and the Jaro distance <cit.> as edit distance for text matching.
For each brand name, we first translate it to Unicode by its pronunciation, which means the Japanese words are translated using Hepburn romanization and the Chinese words are translated using Hanyu Pinyin.
Two brand names are defined as matching only when they are matched by both two algorithms.
Here, we implement the matching algorithms using the open-source library jellyfish [https://pypi.org/project/jellyfish/] and apply Jaro distance with 0.8 as threshold of matching.
After the brand matching, the brands are grouped together into several disjoint sets according to their matches and each set will choose one specific brand name which has record in Wikidata as the corresponding name for the whole set.
Through this process, we can successfully obtain the precise brand of POIs without ambiguity.
For the sake of ensuring the reliability of such an automatic brand matching approach, we manually evaluate 5% of the matched brands in each city and the evaluation results are quite convincing.
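A simplified version of the matching rule (omitting the romanization step for Japanese and Chinese names) might look as follows; the 0.8 threshold is the one quoted above, and the name of the Jaro function differs between jellyfish versions (jaro_similarity in recent releases).

```python
import jellyfish

JARO_THRESHOLD = 0.8

def names_match(a, b):
    """Two brand names are considered matching only when both the phonetic
    (Soundex) and the textual (Jaro) comparisons agree."""
    phonetic = jellyfish.soundex(a) == jellyfish.soundex(b)
    textual = jellyfish.jaro_similarity(a, b) >= JARO_THRESHOLD
    return phonetic and textual

print(names_match("Starbucks Coffee", "Starbucks"))  # True
```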
For each POI, the hierarchical categories are determined based on its functional tags and the brand information from Wikidata.
Specifically, the broad category and medium category types are manually defined and the narrow category types are directly extracted from Wikidata.
Take a branch store of Starbucks as example, the broad category `Food and Beverage' is determined by its functional tags `amenity=restaurant' and the narrow category `Cafe and Tea Shop' is obtained from the brand information of Starbucks in Wikidata while the medium category is then defined to be `Beverage Shop'.
Next, the geographical data is collected from the free public data on the data portal of Chicago[https://data.cityofchicago.org/], New York City[https://opendata.cityofnewyork.us/], Singapore[https://data.gov.sg/] and Tokyo[https://www.data.go.jp/] governments.
Specifically, the Tokyo government has not released the business area planning data so we omit the business area data of Tokyo.
Typically, the data is officially released by the government agencies of each city and thus the region partitions and the business area plannings are credible.
For the officially defined regions and business areas, we formulate them into polygon boundaries which consist of a series of coordinates.
Subsequently, we determine the geographical assignment of POIs by computing the inclusion relationships between the boundaries of the regions or business areas and the centroids of the POIs.
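One straightforward way to compute this point-in-polygon assignment is with the shapely package; the region boundaries and the brute-force lookup below are simplified for illustration, and a spatial index would typically be used in practice.

```python
from shapely.geometry import Point, Polygon

def assign_region(poi_lon, poi_lat, region_polygons):
    """Return the name of the first region whose boundary polygon contains the
    POI centroid, or None if the point falls outside every region."""
    point = Point(poi_lon, poi_lat)
    for name, boundary in region_polygons.items():
        if Polygon(boundary).contains(point):
            return name
    return None

# Toy usage with a hypothetical square region.
regions = {"demo-region": [(0.0, 0.0), (0.0, 1.0), (1.0, 1.0), (1.0, 0.0)]}
print(assign_region(0.5, 0.5, regions))  # demo-region
```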
After obtaining all the entities, we further identify the intra-relations within Brand and Region to provide more comprehensive commercial and geographical information.
For the relations within Brand, i.e. Competitive and Related, we directly extract them by the provided statements from Wikidata.
Specifically, we define all the other brands in the same lowest category as Competitive and define the brands in `See Also' section as Related.
Since these two relations are obtained from Wikidata, it is not abnormal for a brand to have no provided Competitive or Related brand.
For the relations within Region, we calculate the shortest distances between pairs of regions to identify NearBy relations with a maximum threshold 0.5km and we calculate the cosine similarity of narrow category distributions of POIs to identify Similar relations with a minimum threshold 0.9.
Let the POI category distribution of region i be dist_i = [n_1, n_2, ..., n_K], where n_k is the POI number of category k in region i and K is the number of narrow category types; the cosine similarity between regions i and j is then defined as sim = (dist_i · dist_j)/(|dist_i| |dist_j|).
Also, it is common for a region to have no Similar region.
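The Similar relation can be reproduced with a few lines of NumPy using the cosine similarity of the narrow-category count vectors and the 0.9 threshold given above; the data layout is hypothetical, and the NearBy computation is omitted since it only requires pairwise region distances compared against the 0.5 km threshold.

```python
import numpy as np

SIM_THRESHOLD = 0.9

def similar_regions(category_counts):
    """category_counts: dict mapping region name -> np.array of POI counts per
    narrow category. Returns region pairs with cosine similarity >= 0.9."""
    pairs = set()
    names = list(category_counts)
    for a_idx, a in enumerate(names):
        for b in names[a_idx + 1:]:
            va, vb = category_counts[a], category_counts[b]
            denom = np.linalg.norm(va) * np.linalg.norm(vb)
            if denom > 0 and (va @ vb) / denom >= SIM_THRESHOLD:
                pairs.add((a, b))
    return pairs
```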
§.§ Statistics and Usage
Through the whole process of data collection and processing, we finally obtain the well-formulated and comprehensive OpenSiteRec dataset.
The detailed statistics are shown in Table <ref>.
In each city, we have thousands of brands and regions that serve the role of users and items in personalized recommendation systems, respectively.
Thus, the existing POIs that belong to specific brands and locate in specific regions serve the role of user-item interactions, whose amounts are about tens of thousand in each city.
Meanwhile, each POI has a series of hierarchical categories from about a hundred of types and locates in a specific business area (except for Tokyo).
Here we provide an example of a branch store of Starbucks in Tokyo as illustrated in Table <ref>.
In order to better facilitate the usage of OpenSiteRec, we further explore its characteristics by analyzing the dataset distributions.
Due to the different urban planning and lifestyles of people, the distributions of site counts of categories are also different in each city.
From the comparison of distributions in Figure <ref>, we can explore the highlights in these four metropolises.
For example, the people in Chicago and New York City are more likely to pay attention to the sports facilities, the choices of eating are more diverse in Singapore and the public transportation may be more convenient in Tokyo.
However, on the perspectives of brands and regions, the distributions are much more imbalanced.
As shown in Figure <ref>, the site counts of different brands are extremely imbalanced that the top 10% of brands occupy over 50% of sites.
This phenomenon indicates that the giant brands are actually dominating the commerce in every city.
Similarly, the uneven color distributions in Figure <ref> show that the vast majority of POIs (up to 80%) locate in several centres or sub-centres composed of a few regions while most regions have only a few POIs.
Such an inequality among different regions shows the different roles of the regions in urban planning, e.g., business areas are dense of POIs while residential areas are sparse.
All these distributions indicate the significant imbalance of OpenSiteRec, which is an important problem in utilizing the dataset for site recommendation.
On the basis of these statistics of , we conclude five unique characteristics of the site recommendation task compared with other top-N recommendation tasks:
* The data is quite sparse. Different from user-oriented recommendation tasks, such as e-commerce or POI recommendation, the site recommendation task is brand-oriented. Since brands, regions and POIs correspond to real-world entities, their numbers are limited. Thus, each data point is very valuable and important.
* The city-specific characteristics are significant. It is necessary for site recommendation to effectively capture the latent essence of a city shaped by its urban planning and lifestyles. Meanwhile, the inter-city commonalities and differences will also play important roles in research on site recommendation, especially research on knowledge transfer across cities.
* The data distribution is extremely imbalanced. The significant imbalance may lead to undesirable predictions or other problems like popularity bias. Therefore, handling the imbalance problem is an important criterion in site recommendation.
* The domain-specific features are crucial. Although there are rich domain-specific features in every domain, such features often have limited effects on performance, e.g., the social <cit.> and geographical <cit.> features in POI recommendation. However, due to data scarcity, the commercial and geographical features are crucial in site recommendation.
* The correlations are highly complex. While social relationships can be treated as positive correlations <cit.> in social recommendation, the relationships between brands and between regions are much more complex. For example, two competing brands may open stores in the same region and a brand may open stores in nearby regions, which is counter-intuitive but not unusual.
§ BENCHMARK EXPERIMENTS
To better illustrate the importance of OpenSiteRec and to facilitate future research on site recommendation, we conduct benchmark experiments.
In this section, we report the experimental results of some widely-used baselines on OpenSiteRec under different scenarios as the benchmark.
§.§ Experimental Settings
§.§.§ Dataset Split & Evaluation Metrics
Since the original OpenSiteRec is extremely imbalanced over both brands and regions because of urban planning, we filter the dataset with a 5-core setting on the brands (every retained brand possesses at least 5 POIs) in the benchmarking experiments.
Specifically, we randomly split the POIs of each brand with 70%, 10% and 20% as training, validation and test sets, respectively.
To assess the model performance, we choose the widely-used standard evaluation metrics Recall@20 and nDCG@20 and regard all regions as candidates, i.e., the all-ranking protocol.
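As a concrete illustration of this protocol, the sketch below shows one possible per-brand 70/10/20 split and straightforward implementations of Recall@20 and nDCG@20 with binary relevance; all function names are illustrative assumptions rather than the evaluation code actually used for the benchmark.

```python
import numpy as np

def split_brand_pois(poi_region_ids, seed=0):
    """Randomly split one brand's POIs into 70% train / 10% validation / 20% test."""
    rng = np.random.default_rng(seed)
    idx = rng.permutation(len(poi_region_ids))
    n_train, n_valid = int(0.7 * len(idx)), int(0.1 * len(idx))
    train = [poi_region_ids[i] for i in idx[:n_train]]
    valid = [poi_region_ids[i] for i in idx[n_train:n_train + n_valid]]
    test = [poi_region_ids[i] for i in idx[n_train + n_valid:]]
    return train, valid, test

def recall_at_k(ranked_regions, relevant, k=20):
    """Fraction of the brand's held-out regions found in the top-k ranking."""
    hits = len(set(ranked_regions[:k]) & set(relevant))
    return hits / len(relevant) if relevant else 0.0

def ndcg_at_k(ranked_regions, relevant, k=20):
    """nDCG@k with binary relevance over all candidate regions."""
    dcg = sum(1.0 / np.log2(pos + 2) for pos, r in enumerate(ranked_regions[:k]) if r in relevant)
    idcg = sum(1.0 / np.log2(pos + 2) for pos in range(min(len(relevant), k)))
    return dcg / idcg if idcg > 0 else 0.0
```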
§.§.§ Baselines
In order to explore the effectiveness of our proposed OpenSiteRec in practical applications, we conduct site recommendation experiments with several representative recommendation models as the benchmarks.
Since the existing site recommendation approaches rely on their own experimental settings and do not release their code, they cannot be directly employed in our setting, and we exclude them from the comparison.
Specifically, the chosen models include machine learning models, collaborative filtering <cit.> models, click-through rate (CTR) prediction <cit.> models and graph-based models.
For machine learning models, we choose:
* LR <cit.>, namely logistic regression, is a simple yet effective model in classification.
* GBDT <cit.>, namely gradient boosting decision tree, is an ensemble model with decision tree model as the backbone.
* SVC <cit.> employs support vector machine (SVM) to tackle the classification problem.
* RankNet <cit.> is a famous learning to rank architecture in recommendation. Specifically, we utilize a two-layer neural network as the backbone here.
For collaborative filtering models, we choose:
* MF-BPR <cit.> is a variant of Matrix Factorization (MF) <cit.> optimized by the Bayesian personalized ranking (BPR) loss.
* NeuMF <cit.> combines neural network and Matrix Factorization (MF) for collaborative filtering with point-wise loss.
* FISM <cit.> extends MF by aggregating the item embeddings of interacted items to represent the user via item similarity.
* NAIS <cit.> introduces attention mechanism onto FISM to conduct weighted aggregation of items.
For CTR prediction models, we choose:
* DNN <cit.> applies deep neural network to capture the complex interaction between features for CTR prediction.
* Wide&Deep <cit.> jointly utilizes linear transformation and DNN for CTR prediction.
* DeepFM <cit.> combines the factorization machine (FM) and DNN to model the first-, second- and high-order feature interactions.
* xDeepFM <cit.> leverages the compressed interaction network (CIN) to achieve vector-wise feature interactions.
For graph-based models, we choose:
* GC-MC <cit.> is a general graph neural network architecture for recommendation.
* GraphRec <cit.> introduces graph neural network into social recommendation by aggregating the embeddings with social relationships. Here we remove the social aggregation component.
* NGCF <cit.> namely Neural Graph Collaborative Filtering, conducts graph message passing on user-item interaction graph for recommendation.
* LightGCN <cit.> simplifies the graph convolutional network with only neighborhood aggregation for collaborative filtering.
§.§.§ Implementation Details
The experiments are implemented on the server with an Intel Xeon E5-2640 CPU, a 188GB RAM and two NVIDIA GeForce RTX 2080Ti GPUs.
According to the proposed problem definition, the brands and the regions serve the roles of the users and the items, respectively; the categories and the business areas are treated as descriptive features; and site recommendation can be seen as a top-N recommendation task.
Specifically, for the traditional machine learning models, including LR, GBDT and SVC, we implement them with scikit-learn <cit.> 1.0.2.
For other models that involve low-dimensional embeddings, we implement them with PyTorch <cit.> 1.12.1 and set the embedding dimension as 100.
The model parameters are initialized with Xavier initialization and optimized by Adam <cit.>.
For all models, we tune hyper-parameters with the performance on the validation set via grid search.
The detailed implementation code can be found in the Git repository[https://github.com/HestiaSky/OpenSiteRec].
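For reference, the following is a minimal PyTorch sketch of one of the baselines (MF-BPR) under the stated settings, i.e., 100-dimensional embeddings, Xavier initialization and the Adam optimizer; the class name, the example brand/region counts and the learning rate are illustrative assumptions and do not reproduce the released benchmark code.

```python
import torch
import torch.nn as nn

class MFBPR(nn.Module):
    """Matrix factorization scored by a dot product and trained with the BPR loss."""
    def __init__(self, num_brands, num_regions, dim=100):
        super().__init__()
        self.brand_emb = nn.Embedding(num_brands, dim)
        self.region_emb = nn.Embedding(num_regions, dim)
        nn.init.xavier_uniform_(self.brand_emb.weight)
        nn.init.xavier_uniform_(self.region_emb.weight)

    def score(self, brand, region):
        return (self.brand_emb(brand) * self.region_emb(region)).sum(-1)

    def bpr_loss(self, brand, pos_region, neg_region):
        diff = self.score(brand, pos_region) - self.score(brand, neg_region)
        return -torch.log(torch.sigmoid(diff)).mean()

# Toy training step (brand/region counts and ids are placeholders).
model = MFBPR(num_brands=1000, num_regions=300)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
brand = torch.tensor([0, 1]); pos = torch.tensor([5, 7]); neg = torch.tensor([9, 2])
loss = model.bpr_loss(brand, pos, neg)
loss.backward(); optimizer.step()
```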
§.§ Benchmark Results
The benchmark results of the baselines on OpenSiteRec are shown in Table <ref>.
In order to deliver our insights for future research, we analyze the experimental results and summarize the following points:
* Traditional machine learning methods are not capable of handling the complex scenario of site recommendation. Although LR, GBDT and SVC are essentially different types of methods, they all converge to the same local optimum and thus achieve the same performance. Given highly limited data but highly rich features, it is extremely difficult for traditional machine learning methods not to over-fit the training set. Thus, they fail to generalize to new POIs and are not suitable for site recommendation without additional mechanisms.
* The pair-wise loss is significantly better than the point-wise loss. As shown in the results, the models with a pair-wise loss (i.e., the BPR loss), including RankNet, MF-BPR, FISM, NAIS, NGCF and LightGCN, all outperform the models with a point-wise loss (i.e., the BCE loss), including NeuMF, DNN, Wide&Deep, DeepFM, xDeepFM, GC-MC and GraphRec. This phenomenon indicates that the pair-wise loss is more suitable than the point-wise loss in site recommendation.
* Feature interaction has marginal effects on performance. From DNN to xDeepFM, the degree of feature interaction increases but the performance improvement is not significant. This may be because the correlations between features are either fully dependent (broad category and narrow category) or fully independent (brand category and geographical coordinate).
* The high-order interactions between brands and regions are crucial. With other factors held equal, such as the loss and the feature interaction components, the graph-based models are substantially better than the others. Since the data are highly sparse in site recommendation, high-order interactions are important for making correct predictions. Therefore, exploiting graph representation learning techniques to better model the high-order interactions between brands and regions, especially the explicitly defined relations like Competitive, is beneficial for obtaining high performance in site recommendation.
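To illustrate the last point, the sketch below shows LightGCN-style neighborhood aggregation over the brand-region bipartite graph, with brands and regions in the roles of users and items; the function name, the normalized adjacency input and the toy data are assumptions for illustration, not the benchmark implementation.

```python
import torch

def lightgcn_propagate(adj_norm, brand_emb, region_emb, num_layers=3):
    """LightGCN-style propagation: pure neighborhood aggregation on the
    brand-region interaction graph, followed by layer averaging."""
    x = torch.cat([brand_emb, region_emb], dim=0)
    layers = [x]
    for _ in range(num_layers):
        x = torch.sparse.mm(adj_norm, x) if adj_norm.is_sparse else adj_norm @ x
        layers.append(x)
    final = torch.stack(layers, dim=0).mean(dim=0)
    return final[:brand_emb.size(0)], final[brand_emb.size(0):]

# Toy usage with random embeddings and a tiny dense adjacency (degree normalization omitted).
num_brands, num_regions, dim = 4, 3, 8
emb_b, emb_r = torch.randn(num_brands, dim), torch.randn(num_regions, dim)
adj = torch.zeros(num_brands + num_regions, num_brands + num_regions)
adj[0, num_brands + 1] = adj[num_brands + 1, 0] = 1.0   # brand 0 has a POI in region 1
out_b, out_r = lightgcn_propagate(adj, emb_b, emb_r)
```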
§.§ Long-tail Scenario
Since the imbalance problem is severe in site recommendation, we conduct an additional experiment to evaluate some representative baseline models, including LR, RankNet, MF-BPR, NeuMF, DNN, DeepFM, NGCF and LightGCN, under the long-tail scenario.
Specifically, we regard the bottom 90% of regions with the fewest POIs as the long-tail regions, which together contain less than 50% of the POIs and are used only for testing.
As shown in Figure <ref>, the performances of the baseline models drop dramatically on these long-tail regions, which implies a big challenge in site recommendation.
Meanwhile, we can also find that the advantage of graph-based models is not as significant as under the vanilla scenario, which is an important consideration when designing model architectures.
From the perspective of the cities, a greater drop in performance from the vanilla scenario to the long-tail scenario indicates a higher degree of centrality in urban planning.
According to the results, Chicago and Singapore have a higher degree of centrality than New York City and Tokyo, which means the giant brands have more dominant positions in Chicago and Singapore.
However, in order to achieve fairness, which is especially crucial for providing forward-looking recommendations that promote urban development, addressing this extreme data imbalance is also an important and valuable research topic in site recommendation.
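For reproducibility of this scenario, a minimal sketch of the long-tail selection described above is given below, assuming only a per-region POI count array; the function name and the toy counts are illustrative, while the 90% quantile mirrors the setting in the text.

```python
import numpy as np

def long_tail_regions(poi_counts_per_region, quantile=0.9):
    """Return the indices of the bottom `quantile` share of regions by POI count,
    i.e. the long-tail regions used only for testing in this scenario."""
    counts = np.asarray(poi_counts_per_region, dtype=float)
    order = np.argsort(counts)                  # ascending: regions with fewest POIs first
    cutoff = int(np.floor(quantile * len(counts)))
    return order[:cutoff]

# Toy usage with illustrative per-region POI counts.
print(long_tail_regions([500, 3, 42, 7, 1200, 18, 5, 9, 64, 2]))
```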
§ POTENTIAL APPLICATIONS
Besides site recommendation, our OpenSiteRec also supports many other potential applications, including spatial object recommendation <cit.>, transportation demand prediction <cit.>, electric vehicle charging recommendation <cit.> and high-potential startup detection <cit.>, which demonstrates its broad applicability.
In this section, we discuss the significance of two potential tasks, brand entry forecasting <cit.> and business area planning <cit.>, and show their feasibility via case studies and visualization.
§.§ Brand Entry Forecasting
People in different cities have different preferences for brands due to culture, history, lifestyle and other reasons <cit.>.
These factors along with the commercial strategies of brands result in the different brand distributions in different cities <cit.>.
Typically, some brands are international and have spread all over the world, while others are local and only open stores in a few cities.
Obviously, these international brands all started as local brands and took decades of expansion to reach their current standing.
Therefore, many brands that are currently very popular locally have great potential to expand to other cities around the world <cit.>.
A feasible way is to exploit the existing brands in different cities to mine the brands with a high probability of succeeding in other cities <cit.>, i.e., brand entry forecasting.
Since our proposed OpenSiteRec provides plenty of brands in four world-class metropolises, it is credible to carry out such research with it.
In order to demonstrate the applicability of OpenSiteRec to supporting brand entry forecasting, we also present a case study by ranking the brands of Fast Food and Cafe & Dessert in different cities.
As illustrated in Table <ref>, many popular international brands like Starbucks have multiple similar brands that are popular in local, such as Coffee Bean in Singapore and Doutor in Tokyo.
The fact that these local brands are able to hold their position in fierce commercial competition with the international giants not only demonstrates their commercial success so far, but also indicates that they are competitive enough to succeed in other places in the future.
Therefore, it is fair to believe that these brands have a high potential to enter other cities, and our OpenSiteRec is capable of providing valuable information for this research topic.
§.§ Business Area Planning
To form a strong scale effect, planning business area <cit.> (i.e. central business district) is crucial in the development of the city.
Generally, business area planning takes many factors into account, such as population, traffic convenience and surrounding environments.
Since the POI distributions and the geographical information already indicate these factors implicitly <cit.>, we deem that OpenSiteRec is quite beneficial for supporting business area planning.
To verify this idea, we visualize the planned business area (except for Tokyo, whose government has not released the official business area planning) and POI distributions for comparison.
From Figure <ref>, we can see that the business areas are distributed relatively uniformly within the cities and coincide closely with the traffic hubs.
Such a regular pattern is consistent with the POI distributions, and it also holds that the more concentrated the POIs are, the more business districts are planned.
Our OpenSiteRec provides tens of thousands of POIs along with their categories and other characteristics.
By analyzing the characteristics of all the POIs in an area, it is possible to effectively grasp the essential factors of that area.
For example, areas with a high density of shopping sites or areas serving as transportation hubs are more likely, and more reasonable, to be planned as business areas.
Therefore, applying OpenSiteRec to research on business area planning is feasible and reliable.
§ DISCUSSION & CONCLUSION
In this paper, we collect, construct and release OpenSiteRec, the first open dataset for site recommendation, which consists of multi-source data from four international metropolises and is comprehensive enough to support follow-up research.
Specifically, OpenSiteRec leverages a heterogeneous graph schema in which entities represent different types of real-world concepts, including category, brand, POI, business area and region, and relations denote the corresponding commercial and geographical relationships.
Overall, site recommendation is an important and beneficial information retrieval task for the development of brands or institutions in real-world applications, and it has been established for decades.
However, it was not until recent years that research on site recommendation made much progress, and it is still slow compared with other rapidly developing areas, such as POI recommendation.
Despite the success of recent approaches to site recommendation, none of them have released their datasets, which is inconvenient for researchers and hinders research in this area.
Meanwhile, most of the existing approaches only focus on a small scope of site recommendation, yielding limited significance and impact.
Our OpenSiteRec provides comprehensive and varied information across multiple cities to better support more practical and valuable research.
To verify the applicability of OpenSiteRec and the suitability of existing recommendation models for the site recommendation task, we conduct benchmarking experiments with 16 representative baseline models on OpenSiteRec.
The experimental results fully demonstrate its support for site recommendation, and the detailed analysis also provides some insights for developing more advanced models.
Furthermore, we explore potential research directions and deliver toy experiments in urban computing, including brand entry forecasting and business area planning, which indicates the applicability of OpenSiteRec to various real-world applications.
Unfortunately, there are still some potential limitations of OpenSiteRec for now.
The greatest limitation is the lack of a temporal dimension, which means that OpenSiteRec collects the data at a specific point in time, without the variation brought by urban development.
Therefore, our dataset contains real-world data but not necessarily the best choices of regions for the brands.
A better solution that considers the temporal dimension would be to collect data and construct the dataset at multiple time points, annually, quarterly, or even monthly.
However, due to the requirements on the original data and the high workload, we cannot afford to achieve this at present, and it is left for future work.
In addition, OpenSiteRec currently contains four metropolises, which is still limited.
We plan to expand it by adding more cities to better support research on site recommendation.
In conclusion, site recommendation is still an underestimated topic considering its significance in modern business and its expected impact.
Therefore, we believe that the emergence of our dataset will strongly promote the research on it and help the development of business intelligence in the next few years.
entry_id: http://arxiv.org/abs/2307.00898v1
published: 20230703095237
title: Periodicity of general multidimensional continued fractions using repetend matrix form
authors: Hanka Řada, Štěpán Starosta, Vítězslav Kala
primary_category: math.NT
categories: math.NT (MSC 11A55, 11J70)
Czech Technical University in Prague
Faculty of Nuclear Sciences and Physical Engineering
Břehová 78/7, 115 19 Staré Město
Czech Republic
Czech Technical University in Prague
Faculty of Information Technology
Department of Applied Mathematics
Thákurova 9, 160 00 Prague 6
Czech Republic
Charles University
Faculty of Mathematics and Physics
Department of Algebra
Sokolovská 83, 18600 Praha 8
Czech Republic
Periodicity of general multidimensional continued fractions using repetend matrix form
Hanka Řada, Štěpán Starosta, Vítězslav Kala
======================================================================================
We consider expansions of vectors by a general class of multidimensional continued fraction algorithms. If the expansion is eventually periodic, then we describe the possible structure of a matrix corresponding to the repetend, and use it to prove that a number of vectors has an eventually periodic expansion in the Algebraic Jacobi–Perron Algorithm. Further, we give criteria for vectors to have purely periodic expansions; in particular, the vector cannot be totally positive.
§ INTRODUCTION
Lagrange showed that any quadratic irrational number has an eventually periodic classic continued fraction, thus providing a characterization for quadratic irrational numbers in terms of their continued fraction expansions.
In 1839, Hermite opened the question of similar characterization for higher order irrationalities.
More specifically, he asked for a representation by (eventually) periodic sequences that would capture the algebraicity of the represented number, especially focusing on cubic numbers.
His question was published later in 1850, see <cit.>.
A multitude of algorithms generalizing the classic continued fraction algorithm exist; in general, they are called multidimensional continued fraction (MCF) algorithms.
They have various properties related to the algebraicity of the represented number and their approximation capabilities.
However, to this day, there is no satisfactory answer to Hermite's question in full generality.
Allowing some constraints, partial answers are known, for instance Murru <cit.> exhibits an algorithm which provides an eventually periodic representation for any cubic irrational number provided its minimal polynomial is explicitly known.
The history of MCF algorithms starts with Jacobi in 1868, <cit.>.
Jacobi's algorithm was further generalized by Perron, <cit.>, forming a method known today as Jacobi–Perron algorithm.
Other well-known algorithms are Poincaré algorithm (1884, <cit.>), Brun algorithm (1920, <cit.>), Selmer algorithm (1961, <cit.>), and Fully subtractive algorithm (1995, <cit.>).
There exists a variety of MCF algorithms that further generalized the common concept, which is discussed for instance in <cit.>.
Let us mention some of these algorithms: the Modified Jacobi–Perron algorithm (Bernstein, 1965, <cit.>), the Heuristic APD-algorithm (Karpenkov, 2022, <cit.>), the sin^2-algorithm (Karpenkov 2021, <cit.>), the algorithm of Abrate, Barbero, Cerruti and Murru (2013, <cit.>), and the Algebraic Jacobi–Perron algorithm (Tamura and Yasutomi,<cit.>).
In <cit.>, the author of the sin^2-algorithm proves that his algorithm is periodic for every totally real cubic vector.
In <cit.>, the authors of the Algebraic Jacobi–Perron algorithm provide some evidence that their algorithm may answer Hermite's question for cubic numbers; however, no proof is provided.
Besides the very general Hermite's questions, another recent source of interest in multidimensional continued fractions came from the study of universal quadratic forms over number fields. In degree two, there were numerous recent results bounding ranks of universal forms in terms of coefficients of the continued fraction for √(D), e.g., <cit.>. While there were some extensions of such results also to higher degrees <cit.>, they remain quite limited. Among the motivations and hopes for the present article are that the general methods developed here may be useful also for the application to universal forms.
In addition to general results on various MCF algorithms, there are many results for specific classes of algebraic numbers, for example, by Bernstein <cit.>, Raju <cit.>, Levesque <cit.>, Dubois and Paysant-Le Roux <cit.>, Greiter <cit.>, Bouhamza <cit.>.
All the above mentioned algorithms are so-called vectorial MCF algorithms; they provide a representation of a vector v⃗ in the form of a sequence of matrices from a fixed set.
A different approach to the study of MCFs provide the so-called geometric MCF algorithms described for example in the book of Karpenkov <cit.>.
In this article, we focus solely on vectorial MCF algorithms.
In the case when this representation is eventually periodic, we consider the repetend also in the form as a single matrix M which equals to the product of the matrices of the repetend.
Under the assumption that v⃗ forms a basis of (v⃗), we show (<Ref>) that M is equal to the transposed matrix in the basis v⃗ of multiplication by an algebraic unit.
In <Ref>, we show that such a matrix can in fact be determined by its single column by a mapping depending on v⃗ only, i.e., independent on the algebraic unit.
We give explicit form of this relation in <Ref> based on the minimal polynomial of y in the case that v⃗ is the polynomial basis induced by y.
As an example, we refine the obtained formulas for dimension 3 in <Ref>.
We also study purely periodic expansions.
We give two necessary conditions on v⃗ to have a purely periodic expansion in a given MCF algorithm.
The first condition, <Ref>, states that the vector with a purely periodic expansion is not totally positive.
The second condition, <Ref>, is for v⃗ in the form ( y^n-1, …, y, 1 )^T, stating that the norm of y needs to be (-1)^n-1 in order for v⃗ to have a purely periodic expansion.
Finally, in <Ref>, we give a procedure how to find candidates on the product of the matrices of the repetend.
In fact, this procedure can be used to find the expansion in the given MCF algorithm in some cases, hence we refer to in as the repetend matrix form of the algorithm.
We use this form in <Ref> to calculate the MCF expansions in order to show that a large class of vectors has an eventually periodic expansion in the case of Algebraic Jacobi–Perron algorithm in dimension 3 (<Ref>).
It is notable that this result generalizes a previous result of Tamura and Yasutomi <cit.>.
The article is organized as follows.
In <Ref>, we give some necessary notations and introduce the MCF algorithms.
<Ref> is dedicated to matrices of multiplication in (α).
<Ref> studies eventually periodic MCF expansions and the matrices that represent the product of their repetend.
<Ref> elaborates a procedure on how to find a candidate on the product of repetend.
<Ref> applies the previous results to prove that a large class of vectors has eventually periodic expansion in a given setting.
§ PRELIMINARIES
A number α∈ is algebraic over if it is a root of some polynomial f over .
The set of algebraic numbers (over ) is denoted by 𝔸.
Let α and α' be roots of the same irreducible polynomial f.
We say that α' is a conjugate of α.
The degree of α is the least number n such that α is a root of a polynomial over of degree n.
Algebraic numbers of degree two are called quadratic and algebraic numbers of degree three are called cubic (they are roots of quadratic respectively cubic polynomial with rational coefficients).
Let α_1,…,α_n ∈. The number field K=(α_1,…,α_n) is defined by
K=(α_1,…,α_n) := ⋂{T|T is a subfield of ℂ, α_1,…, α_n ∈ T}.
The degree of the number field K is the dimension of K as a vector space over .
The well-known primitive element theorem says
that for every α_1,…, α_n ∈, there exists α∈ such that (α_1,…,α_n) = (α). If α is an algebraic number of degree n, then
(α) = { a_0+a_1 α + … + a_n-1α^n-1 |a_i ∈}.
A number β∈ is called an algebraic integer if there is a monic polynomial f over such that f(β)=0. The set of all algebraic integers is denoted by 𝔹. The ring of integers of the number field (α) is the set _(α) := (α) ∩𝔹.
Let s: (α) →(α) be a linear transformation. Moreover, let S^B ∈^n,n be the matrix of the transformation s in a basis B.
If S^B_1 and S^B_2 are two matrices of the same transformations but in different bases, then S^B_1 is similar to S^B_2 (i.e., there exists an invertible matrix U such that S^B_1 = US^B_2U^-1). Especially they have the same determinant. This means that we can define the determinant of the transformation s as (s) = (S), where S is an arbitrary matrix of the transformation s.
We associate to each element δ∈(α) a linear transformation t_δ: (α) →(α) which is defined by
t_δ(x) = δ x
for every x ∈(α).
The matrix of this transformation is denoted T_δ.
Let β∈𝔸 and γ∈(β). Then the norm N_(β)|(γ) (or simply N(γ) if it is clear that γ∈(β)) of γ is the determinant of a matrix representation of the linear transformation t_γ. In other words
N_(β)|(γ) = (T_γ) ∈.
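As a small illustration of this definition, the following sympy sketch builds the matrix of t_γ in the polynomial basis (1, α, …, α^(n-1)) and evaluates its determinant, i.e., the norm N(γ). The chosen minimal polynomial x³ - x - 1 and the element γ = 1 + α are arbitrary examples, and the basis ordering and column convention here are our own; the determinant does not depend on these choices.

```python
import sympy as sp

x = sp.symbols('x')

def multiplication_matrix(gamma_coeffs, min_poly):
    """Matrix of t_gamma(z) = gamma * z in the polynomial basis (1, alpha, ..., alpha^(n-1)),
    where alpha is a root of min_poly and gamma_coeffs are the coordinates of gamma
    in that basis; column j holds the coordinates of gamma * alpha^j."""
    n = min_poly.degree()
    gamma = sp.Poly(list(reversed(gamma_coeffs)), x)
    cols = []
    for j in range(n):
        prod_j = (gamma * sp.Poly(x ** j, x)) % min_poly     # reduce alpha^n via the minimal polynomial
        coeffs = (prod_j.all_coeffs()[::-1] + [0] * n)[:n]   # lowest-degree coefficient first
        cols.append(coeffs)
    return sp.Matrix(n, n, lambda i, j: cols[j][i])

# Example (illustrative): alpha a root of x^3 - x - 1 and gamma = 1 + alpha.
f = sp.Poly(x**3 - x - 1, x)
T = multiplication_matrix([1, 1, 0], f)
print(T)          # Matrix([[1, 0, 1], [1, 1, 1], [0, 1, 1]])
print(T.det())    # 1 == N(1 + alpha), so 1 + alpha is a unit
```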
A unit in a ring R with identity 1_R is an invertible element u of R, i.e., there exists an element v ∈ R such that uv = vu = 1_R. The units of a ring R form a group with respect to multiplication, we call it the group of units U(R) of R. In the ring of integers _(α) of a number field (α), we can characterize the group of units in the following way. Let β∈_(α). Then β∈ U(_(α)) if and only if N(β)=± 1.
Due to the Dirichlet's unit theorem, we can also determine the rank (the number of multiplicatively independent generators) of the group of units U(_(α)).
Let K = (α) be a number field. The group of units of _K is finitely generated, and its rank is equal to
r = r_1+r_2-1,
where r_1 is the number of real conjugates of α and 2r_2 is the number of nonreal complex conjugates of α.
For example, if α is a cubic number, then the group of units U(_K) has rank either 2 or 1.
Let r be the rank of U(_K). The set of units u_1, … ,u_r is called the set of fundamental units if it is multiplicatively independent and it generates (modulo roots of unity) the group U(_K), i.e. if every unit u can be written uniquely in the form
u = ζ u_1^m_1… u_r^m_r,
where m_i ∈ for all i ∈{1,…,r} and ζ is some root of unity (i.e. there exists p ∈_+ such that ζ^p =1).
If K = (α) is an algebraic number field of odd degree, then the roots of unity have the following simple form.
Let K = (α) be an algebraic number field of odd degree. The roots of unity in _K are ± 1.
The following subsections are dedicated to vectorial MCF algorithms and their elementary properties.
§.§ Vectorial MCFs
Let n be a positive integer.
A vectorial MCF acts on _+^n and it is specified by two sets, ℐ and 𝒜.
The first set is an at most countable set of pairwise disjoint subsets of _+^n:
ℐ = {I_1,I_2,…}
where ∀α >0, ∀ I ∈ℐ, α I ⊆ I,
while the second set is a set of invertible matrices from ^n,n:
= {A_1,A_2,…}
having the same cardinality as ℐ.
Moreover, we assume that does not contain the identity matrix.
Given these two sets, a representation of a vector v⃗∈_+^n is obtained by the following algorithm.
[Multidimensional continued fraction algorithm with sets (ℐ,𝒜)]
Let v⃗∈_+^n.
Set v⃗^(0) ← v⃗ and i ← 0.
Repeat:
Let j be some index such that v⃗^(i)∈ I_j.
If there is no such j, the algorithm stops.
Otherwise set
v⃗^(i+1) ← A_j^-1 v⃗^(i)
and A^(i) ← A_j.
Set i ← i+1.
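A minimal sketch of this algorithm is given below; to keep it runnable it is instantiated with the classic one-dimensional continued fraction viewed as an MCF with n = 2, I_a = {v : ⌊v_1/v_2⌋ = a} and A_a = [[a, 1], [1, 0]], so that A_a^{-1}(v_1, v_2)^T = (v_2, v_1 - a v_2)^T. The function names and the exact-arithmetic Gaussian elimination are illustrative choices, not part of any particular MCF implementation.

```python
from fractions import Fraction

def apply_inverse(A, v):
    """Solve A w = v for w (i.e. w = A^{-1} v) by Gauss-Jordan elimination over exact numbers."""
    n = len(v)
    M = [list(row) + [v[i]] for i, row in enumerate(A)]
    for col in range(n):
        piv = next(r for r in range(col, n) if M[r][col] != 0)
        M[col], M[piv] = M[piv], M[col]
        M[col] = [entry / M[col][col] for entry in M[col]]
        for r in range(n):
            if r != col:
                M[r] = [a - M[r][col] * b for a, b in zip(M[r], M[col])]
    return [M[i][n] for i in range(n)]

def mcf_expansion(v, find_index, matrix_of, max_steps=50):
    """Generic vectorial MCF: repeatedly find j with v in I_j and replace v by A_j^{-1} v."""
    expansion = []
    for _ in range(max_steps):
        j = find_index(v)
        if j is None:                      # no interval contains v: the algorithm stops
            break
        expansion.append(j)
        v = apply_inverse(matrix_of(j), v)
    return expansion, v

# Classic continued fraction of 355/113 as a 1-dimensional MCF.
cf_index = lambda v: int(v[0] // v[1]) if v[1] != 0 else None
cf_matrix = lambda a: [[Fraction(a), Fraction(1)], [Fraction(1), Fraction(0)]]
print(mcf_expansion([Fraction(355), Fraction(113)], cf_index, cf_matrix))
# ([3, 7, 16], [Fraction(1, 1), Fraction(0, 1)])
```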
The sequence (A^(i))_i=0^∞ from <Ref> is called an (ℐ,) (n-1)-dimensional continued fraction expansion of the vector v⃗.
If not ambiguous, we will often say only expansion of v⃗.
Moreover, we identify the expansion of v⃗ with v⃗, i.e., we write v⃗ = ( A^(0) , A^(1), … ).
Let α∈_+.
Since the elements of ℐ satisfy ∀ I ∈ℐ, α I ⊆ I,
we conclude that MCF expansions of v⃗ and of αv⃗ are identical.
In what follows, we will often use the last remark and we will work with vectors z⃗^(i) = λv⃗^(i), where λ is such that (z⃗^(i))_n = 1, instead of v⃗^(i).
In other words, the <Ref> works in a projective space, which is the reason for calling the algorithm (n-1)-dimensional and not n-dimensional as the value n-1 corresponds to the dimension of the underlying projective space.
Nevertheless, we prefer to work exclusively with the homogeneous coordinates.
A MCF algorithm is unimodular if the matrices from are unimodular, that is, they have determinant equal to ± 1.
An expansion of a vector v⃗ = ( A^(0) , A^(1), … ) is eventually periodic if there exists N and positive p such that A^(i) = A^(i+p) for all i ≥ N.
We write also
v⃗ = ( A^(0) , A^(1), …, A^(N-1), A^(N), A^(N+1), …, A^(N+p-1)).
If N = 0, then the expansion is purely periodic.
The sequence of matrices ( A^(0) , A^(1), …, A^(N-1)) is called a preperiodic part and the sequence of matrices ( A^(N), A^(N+1), …, A^(N+p-1)) is called a repetend.
The number N is called a preperiod and the number p is called a period.
It follows from <Ref> that
A^(0)⋯ A^(i-2)A^(i-1)v⃗^(i) = v⃗^(0)
and therefore, we shall consider the preperiodic part and the repetend as matrices, i.e., R = A^(0)A^(1)⋯, A^(N-1) and M = A^(N) A^(N+1)⋯ A^(N+p-1).
As a shorthand, we shall use the following notation v⃗ = RM.
Below, when we mention a MCF algorithm, we mean a MCF algorithm for some given (ℐ,) and n.
§.§ Transvections
Let (n,) be the special linear group of matrices over of dimension n × n with determinant 1.
Let (n,) be subset of (n,) containing all the matrices with non-negative elements.
In what follows, we focus mainly on ⊆(n,).
However, the monoid (n,) is not finitely generated for n≥ 3 (for a proof, see Chapter 12.5 of <cit.>).
On the other hand, the group (n,) is finitely generated by transvections which are matrices T_ij that have 1's on the diagonal and on the i,j-th position and 0's elsewhere.
The following result due to Conder, Robertson and Williams (<cit.>) gives us even the presentation of (n,).
Let [A,B] be the commutator of A and B, i. e., [A,B] = ABA^-1B^-1.
The group (n,), where n≥ 3, has a presentation with the n(n-1) generators T_ij subject only to the Steinberg relations
[ T_ij,T_jk] = T_ik for i ≠ k,
[ T_ij,T_kℓ] = 1 for i ≠ℓ, j ≠ k,
where i,j,k ∈{1,…,n}
and to the relation (T_12T_21^-1T_12)^4 = 1.
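As a quick sanity check of these generators and relations, the sketch below constructs transvections with numpy and verifies the first Steinberg relation for n = 3; the helper names are ours.

```python
import numpy as np

def transvection(n, i, j, c=1):
    """Transvection T_ij (1-indexed): the identity with an extra entry c at position (i, j)."""
    T = np.eye(n, dtype=int)
    T[i - 1, j - 1] = c
    return T

T12, T23, T13 = transvection(3, 1, 2), transvection(3, 2, 3), transvection(3, 1, 3)
inv = lambda i, j: transvection(3, i, j, -1)        # the inverse of T_ij just flips the sign of the off-diagonal entry
commutator = T12 @ T23 @ inv(1, 2) @ inv(2, 3)      # [T_12, T_23]
print(np.array_equal(commutator, T13))              # True: the Steinberg relation [T_12, T_23] = T_13
```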
§ MATRICES OF MULTIPLICATION IN (Α)
We focus on matrices of multiplication T_λ in a number field (α) (as a vector space over ) of degree n.
In this section, we show that the transpose of such a matrix is fully determined by any single one of its columns.
Moreover, we show that the mappings which determine the matrix from a single selected column are linear.
The reason for the transposition is the following relation to the situation when λ is an eigenvalue.
Let v⃗ = [ v_1; ⋮; v_n ] be a basis of some number field, λ∈(v_1,…,v_n) and T_λ^v⃗ be the matrix of the linear transformation t_λ in the basis v⃗. We have
M v⃗ = λv⃗ ⟺ M = (T_λ^v⃗)^T.
Let e⃗_i be the i-th vector of the standard basis, i.e., (e_i)_j = 1 if i = j and 0 otherwise. It follows from the definition of the matrix T_λ^v⃗ that
v⃗^T T_λ^v⃗e⃗_⃗i⃗ = λ v_i
and therefore
e⃗_⃗i⃗^T (T_λ^v⃗)^T v⃗ = λ v_i.
This holds for every i ∈{1,…,n} and therefore
(T_λ^v⃗)^T v⃗ = λv⃗.
Since we are later (in <Ref>) interested in this exact situation of λ being an eigenvalue, we state the next theorem with the transposition.
Let v⃗ = [ v_1; ⋮; v_n ] be a basis (of a finite field extension of degree n as a vector space over ), ℓ∈{ 1,…,n } and λ∈(v_1,…,v_n).
There exists a mapping 𝒬_ℓ, v⃗ : ^n ↦^n,n such that
for every λ∈(v_1,…,v_n)
we have
M = 𝒬_ℓ,v⃗( M_∙,ℓ)
where M = T_λ^v⃗^T.
Moreover, there exists an n-tuple Q_ℓ,v⃗ of matrices from ^n,n such that
their i-th component satisfies
( Q_ℓ,v⃗)_i M_∙,ℓ =
(𝒬_ℓ,v⃗( M_∙,ℓ))_∙,i.
By <Ref>, we have that λ is an eigenvalue of M corresponding to the eigenvector v⃗.
For all i ∈{1,…,n}, we have
M_i,ℓ = ((λ v_i)_v⃗)_ℓ
where (λ v_i)_v⃗ denotes the vector of coordinates of λ v_i in the basis v⃗.
We show by contradiction that the eigenvalue λ is uniquely determined by these equations.
Let λ_1 and λ_2 be distinct numbers for which (<ref>) holds.
Set λ_3 = λ_1 - λ_2 ≠ 0.
We have
0 = ((λ_3 v_i)_v⃗)_ℓ
for all i ∈{1,…,n}.
Since λ_3 ≠ 0, the linear transformation t_λ_3 is an automorphism of (v_1,…,v_n), hence its matrix is regular.
On the other hand, equality (<ref>) implies that its matrix in the basis v⃗ has zeros on the ℓ-th row, which is a contradiction.
Hence λ is uniquely determined by (<ref>), i.e., it can be determined from v⃗ and M_∙,ℓ.
Therefore, we can also find the whole matrix M = T_λ^v⃗^T.
The moreover part follows from the fact that the elements of M are linear combinations of the coordinates of λ in the basis v⃗.
Let α∈∖{ 0 }. If M is a matrix of a linear transformation t_λ in the basis v⃗ (as a vector space over ), then it is also a matrix of the same linear transformation in the basis αv⃗.
Therefore, we have
𝒬_ℓ,v⃗ = 𝒬_ℓ,αv⃗
for all α∈∖{ 0 }.
In what follows, we keep the same notation as in <Ref>, i.e., we associate with the mapping 𝒬_ℓ,v⃗ the n-tuple of matrices Q_ℓ,v⃗.
We demonstrate this and the claim of <Ref> in the following example.
Let v⃗ = (∛4, ∛2, 1)^T.
Then
Q_1,v⃗ =( [ 1 0 0; 0 1 0; 0 0 1; ],
[ 0 0 2; 1 0 0; 0 1 0; ],
[ 0 2 0; 0 0 2; 1 0 0 ]).
This means that
𝒬_1,v⃗( (x, y, z)^T ) = [ x 2z 2y; y x 2z; z y x ].
We take three matrices: M = [ 1 2 2; 1 1 2; 1 1 1 ], M^2 and M^3. The matrix M is the transpose of the matrix of the linear transformation t_ε in the basis v⃗, where ε = ∛4 + ∛2 + 1 is a unit in 𝒪_(∛2). It follows that M^2 and M^3 are the transposes of the matrices of the transformations t_ε^2 and t_ε^3, respectively.
We have
M = [ 1 2 2; 1 1 2; 1 1 1 ]
=
𝒬_1,v⃗( M_∙,1 ) = 𝒬_1,v⃗( (1, 1, 1)^T ) =
( (Q_1,v⃗)_1 M_∙,1 (Q_1,v⃗)_2 M_∙,1 (Q_1,v⃗)_3 M_∙,1)
= [ 1+ 0 + 0 0+0+2 0+2+0; 0+1+ 0 1+0+0 0+0+2; 0+0+1 0+1+0 1+0+0 ],
M^2 = [ 5 6 8; 4 5 6; 3 4 5 ]
= 𝒬_1,v⃗( (5, 4, 3)^T ) = ( (Q_1,v⃗)_1 M^2_∙,1 (Q_1,v⃗)_2 M^2_∙,1 (Q_1,v⃗)_3 M^2_∙,1)
= [ 5+ 0 + 0 0+0+2 · 3 0+2 · 4+0; 0+4+ 0 1 · 5+0+0 0+0+2 · 3; 0+0+3 0+1 · 4+0 1 · 5+0+0 ],
M^3 = [ 19 24 30; 15 19 24; 12 15 19 ]
= 𝒬_1,v⃗( (19, 15, 12)^T )
= ( (Q_1,v⃗)_1 M^3_∙,1 (Q_1,v⃗)_2 M^3_∙,1 (Q_1,v⃗)_3 M^3_∙,1)
= [ 19+ 0 + 0 0+0+2 · 12 0+2 · 15+0; 0+15+ 0 1 · 19+0+0 0+0+2 · 12; 0+0+12 0+1 · 15+0 1 · 19+0+0 ].
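The example can also be checked numerically: with y = ∛2 the sketch below verifies in floating point that M v⃗ = ε v⃗ and M² v⃗ = ε² v⃗ (the variable names are ours).

```python
import numpy as np

y = 2 ** (1 / 3)
v = np.array([y ** 2, y, 1.0])                 # the basis vector (cbrt(4), cbrt(2), 1)^T
M = np.array([[1, 2, 2], [1, 1, 2], [1, 1, 1]])
eps = y ** 2 + y + 1                           # the unit cbrt(4) + cbrt(2) + 1

print(np.allclose(M @ v, eps * v))             # True: M v = eps v, i.e. M = (T_eps^v)^T
print(np.allclose(M @ M @ v, eps ** 2 * v))    # True: M^2 corresponds to eps^2
```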
The next lemma shows that the mapping 𝒬_ℓ, v⃗ of <Ref> can be determined from the first n powers of a matrix of multiplication.
Let v⃗ = [ v_1; ⋮; v_n; ] be a basis of (v_1) as a vector space over and λ∈(v_1) be an algebraic number of degree n.
We can determine the n-tuples Q_ℓ,v⃗ (for ℓ∈{1,…,n}) as linear combinations of elements of the first n powers of the matrix M = T_λ^v⃗^T.
Firstly, we realise that λ is an eigenvalue of M and v⃗ is the corresponding eigenvector. The algebraic degree of λ is n and therefore M has n distinct eigenvalues.
The elements of n-tuples Q_ℓ,v⃗ are linear combinations of the minimal polynomial of v_1, the coordinates of v_2,…, v_n in the basis 1,v_1,…,v_1^n-1 and the coordinates of λ in the basis v⃗. This means that the n-tuples Q_ℓ,v⃗ are uniquely determined by the matrix M.
Now we show that we can get them as a linear combinations of the ℓ-th columns of the first n powers of M.
We need to find the n^3 elements of the n-tuple Q_ℓ,v⃗. These elements are given by the n^3 equations that we obtain by expressing the elements of the matrices M, M^2,…, M^n as linear combinations of their ℓ-th columns. This follows directly from the definition of the n-tuple Q_ℓ,v⃗.
We show by contradiction that this system of linear equations is nonsingular. Suppose otherwise. The existence of the n-tuple Q_ℓ,v⃗ implies that there exist at least two solutions of this system of linear equations. Using the definition of the n-tuples Q_ℓ,v⃗, we obtain that there is a vector x⃗∈^n, x⃗≠ 0, such that x⃗^T (M^m)_∙,ℓ = 0 for all m ∈{1,…,n}. This means that we have an equation with n variables and n solutions. That implies that the solutions (M)_∙,ℓ,…, (M^n)_∙,ℓ are linearly dependent.
Now, let w⃗ = (w_1,…,w_n) be a left eigenvector of M corresponding to an eigenvalue β. We realise that also β has algebraic degree n and that w⃗ is a basis of (v_1). The elements of (M^m)_∙,ℓ are in fact coordinates of β^m w_ℓ in the basis w⃗. This means that β w_ℓ,…,β^n w_ℓ are linearly dependent. At the same time, β has degree n and therefore w_ℓ = 0. This is a contradiction.
The next theorem refines <Ref> to the case when the components of v⃗ = [ y^n-1; ⋮; y; 1 ] form a polynomial basis.
In this case, we can explicitly determine the matrices of 𝒬_ℓ,v⃗ by the coefficients of the monic minimal polynomial of y.
For simplicity, we state this claim for specific value of ℓ, namely for ℓ = 1.
For other values of ℓ, analogous formulas can be obtained.
Let y be an algebraic number of degree n such that
∑_r=0^n-1α_r y^r + y^n = 0,
where α_r ∈, and
v⃗ = [ y^n-1; ⋮; y; 1 ].
Let i,j,k ∈{1,… n}. We have
(( Q_1,v⃗)_i )_j,k =
1 for i ≤ j , k = j-i+1
α_n-i+1+j-k for 2 ≤ i ≤ j, k ∈{j-i+2,…, j}
- α_n-i+1+j-k for j<i , j+1≤ k ≤ n+j-i+1
0 otherwise .
Suppose that we have a matrix T_λ^v⃗^T.
Because v⃗ is a basis of a finite field extension and λ∈(y), we can find numbers β_1,…,β_n ∈ such that
λ = ∑_j = 0^n-1β_jy^j. We put β_i = 0 for all i <0.
From the definition of the matrix T_λ^v⃗ we obtain
( T_λ^v⃗)_i,j =
β_-i+j + ∑_k = j^n-1β_k ∑_r =0^min{ k-j, n-i}α_n-j-r∑_p_1… p_m
m ≤ k-j - r
p_s ≥ 1
∑_s=1^m p_s = k-j - r (-1)^m+1α_n-p_1…α_n-p_m.
Moreover, if we put x_j = ( T_λ^v⃗)_1,j and α_j = 0 for all j <0 we obtain
( T_λ^v⃗)_1,j =
x_j, ( T_λ^v⃗)_2,j = x_j-1 + x_j α_n-1
for all j ∈{1,…, n},
( T_λ^v⃗)_i,j = x_j-i + 1 + x_j-i+2α_n-1 + … + x_j α_n-i+1
for all i ∈{3,…,n} ,j ∈{1,…,n}, i ≤ j and
( T_λ^v⃗)_i,j = ∑_m = 1^n-j - x_j+mα_n-i-m+1
for all i,j ∈{1,…,n}, i > j.
Now it remains to realize that (( Q_1,v⃗)_i )_j,k is equal to the coefficient of x_k in the expression equal to
( T_λ^v⃗)_i,j.
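Rather than coding the case-by-case formulas of the theorem, one can recover M directly from the defining relations: each entry of the first column is the first coordinate of λ v_i in the basis v⃗, which yields a linear system for the coordinates of λ, after which the whole matrix can be rebuilt row by row. The sympy sketch below does exactly that for the basis v⃗ = (y^(n-1), …, y, 1)^T; the function name is ours, and the final line reproduces the matrix M of the example with y = ∛2.

```python
import sympy as sp

x = sp.symbols('x')

def matrix_from_first_column(col, alphas):
    """Recover M = (T_lambda^v)^T for v = (y^(n-1), ..., y, 1)^T from its first column.

    alphas = [a_0, ..., a_(n-1)] are the coefficients of the monic minimal polynomial
    y^n + a_(n-1) y^(n-1) + ... + a_0 of y.  Row i of M holds the coordinates of
    lambda * v_i in the basis v, and the first coordinates determine lambda."""
    n = len(col)
    f = sp.Poly([1] + list(reversed(alphas)), x)
    betas = sp.symbols(f'b0:{n}')                 # lambda = b0 + b1*y + ... + b_(n-1)*y^(n-1)
    lam = sp.Poly(list(reversed(betas)), x)

    def coords(poly):
        """Coordinates in the basis (y^(n-1), ..., y, 1): highest power first, zero-padded."""
        return ([0] * n + poly.all_coeffs())[-n:]

    eqs = [sp.Eq(coords((lam * sp.Poly(x ** (n - i), x)) % f)[0], col[i - 1])
           for i in range(1, n + 1)]
    sol = sp.solve(eqs, betas, dict=True)[0]
    lam = sp.Poly(list(reversed([sol.get(b, 0) for b in betas])), x)

    rows = [coords((lam * sp.Poly(x ** (n - i), x)) % f) for i in range(1, n + 1)]
    return sp.Matrix(rows)

# With y = cbrt(2) (minimal polynomial y^3 - 2) and first column (1, 1, 1)^T this
# reproduces the matrix M of the example above.
print(matrix_from_first_column([1, 1, 1], alphas=[-2, 0, 0]))
# Matrix([[1, 2, 2], [1, 1, 2], [1, 1, 1]])
```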
§.§ Refinements on Q_ℓ,v⃗ for n=3
The explicit formulas for the elements of ( Q_ℓ,v⃗)_i can be derived without relying on the specific form of v⃗ in the previous theorem.
A similar approach to the one used in the previous proof can be employed to obtain these formulas.
However, this alternative method would result in more technical and intricate expressions.
For that reason, we focus in this subsection on the case n=3 for which we state more general claims.
Let y be a cubic number such that α_0 + α_1 y + α_2 y^2 + y^3 = 0, where α_0, α_1,α_2 ∈, x = γ_0 + γ_1 y + γ_2 y^2, where γ_0, γ_1, γ_2 ∈ and v⃗ = (x, y, 1)^T.
We have Q_1,v⃗ = ([ 1 0 0; 0 1 0; 0 0 1; ],
[ 0 b_1 c_1; 1 b_2 c_2; 0 b_3 c_3; ],
[ 0 c_1 c_4; 0 c_2 c_5; 1 c_3 c_6 ]), where
b_3 = γ_2, c_3-b_2 = γ_1, c_2 = -γ_0, α_2 = 2c_3 - b_2/b_3,
α_1 = c_3^2-b_1-b_3c_2-b_2c_3/b_3^2, α_0 = -c_1-c_3c_2/b_3^2.
and
c_4 = c_1c_3-c_1b_2+c_2b_1/b_3,
c_5 = c_3c_2+c_1/b_3 and c_6 = c_3^2+b_3c_2-b_1-b_2c_3/b_3.
Or equivalently
b_1 = γ_2γ_0+γ_1α_2γ_2-γ_1^2-α_1γ_2^2
b_2 = α_2γ_2-2γ_1
b_3 = γ_2
c_1 = γ_0α_2γ_2-γ_0γ_1-α_0γ_2^2
c_2 =-γ_0
c_3 = α_2γ_2-γ_1
c_4 =γ_0α_1γ_2-γ_0^2-α_0γ_2γ_1
c_5 =-α_0γ_2
c_6 =α_1γ_2-2γ_0.
Now we show that we can obtain some non-trivial information about the vector v⃗ directly from the 3-tuples Q_ℓ,v⃗.
Let y be a cubic number for which we have α_0 + α_1 y + α_2 y^2 + y^3 = 0, where α_0, α_1, α_2 ∈, and x = γ_0 + γ_1 y + γ_2 y^2, where γ_0, γ_1, γ_2 ∈, v⃗_2 = (1, x, y)^T, resp. v⃗_3 = (y, 1, x)^T.
We obtain that
Q_2,v⃗_⃗2⃗ = ( [ c_6 1 c_3; c_4 0 c_1; c_5 0 c_2; ],
[ 1 0 0; 0 1 0; 0 0 1; ],
[ c_3 0 b_3; c_1 0 b_1; c_2 1 b_2; ]),
resp.
Q_3,v⃗_⃗3⃗ = ([ b_2 c_2 1; b_3 c_3 0; b_1 c_1 0; ],
[ c_2 c_5 0; c_3 c_6 1; c_1 c_4 0; ],
[ 1 0 0; 0 1 0; 0 0 1; ]),
where (<ref>), (<ref>) and (<ref>) hold.
Let P = [ 0 0 1; 1 0 0; 0 1 0 ] be the permutation matrix determined by v⃗_2 = P v⃗ where v⃗ = (x, y, 1)^T as in <Ref>.
Let π be the permutation given by the permutation matrix P; we have π(1) = 2.
It follows from the definition of 𝒬_ℓ,v⃗ and <Ref> that
Q_π(1),v⃗_2 = ( P ( Q_1,v⃗)_1 P^-1, P ( Q_1,v⃗)_2 P^-1, P ( Q_1,v⃗)_3 P^-1) P^T.
Applying P yields the first part of the desired result.
The second part is obtained analogously: we have v⃗_⃗3⃗ = Pv⃗ for P = [ 0 1 0; 0 0 1; 1 0 0 ].
Let v⃗ = (x, y, 1)^T ∈_+^3 be a basis of some complex cubic number field (as a vector space over ).
Moreover, let
Q_1,v⃗ = ([ 1 0 0; 0 1 0; 0 0 1; ],
[ 0 b_1 c_1; 1 b_2 c_2; 0 b_3 c_3; ],
[ 0 c_1 c_4; 0 c_2 c_5; 1 c_3 c_6 ]),
Q_2,v⃗ = ( [ c_6 1 c_3; c_4 0 c_1; c_5 0 c_2; ],
[ 1 0 0; 0 1 0; 0 0 1; ],
[ c_3 0 b_3; c_1 0 b_1; c_2 1 b_2; ] )
and
Q_3,v⃗ = ( [ b_2 c_2 1; b_3 c_3 0; b_1 c_1 0; ],
[ c_2 c_5 0; c_3 c_6 1; c_1 c_4 0; ],
[ 1 0 0; 0 1 0; 0 0 1; ]).
We have
b_3(b_3+2c_3-b_2-c_5+c_6-2c_2)> 0 y<1,
b_3(b_3+2c_3-b_2-c_5+c_6-2c_2)> 0 x>1
y;
and
b_3(b_3+2c_3-b_2-c_5+c_6-2c_2)> 0 x<y.
Let f be the monic minimal polynomial of y, i.e.,
f(y) = y^3 + α_2y^2+α_1y+α_0 = 0
for α_2,α_1,α_0 ∈.
As y is the only real root of the polynomial f, the function f is strictly increasing.
Hence,
y < 1 ⟺ 0 = f(y) < f(1) = 1 + α_2 + α_1 + α_0.
Using (<ref>), this is equivalent to
0 < 1 + 2c_3 - b_2/b_3 + c_6 - 2 c_2/b_3 - c_5/b_3,
which proves the first part of the statement.
Proofs of the other two parts of the statement are analogous; they rely on <Ref> and <Ref>.
The last claim requires the cubic number field to be complex.
We indicate here that this is indeed necessary.
Let v⃗ = (x, y, 1)^T ∈_+^3 be a basis of some cubic number field (as a vector space over ).
We set α_2,α_1,α_0,γ_2,γ_1,γ_0 ∈, γ_2 ≠ 0 to be such that
y^3+α_2y^2+α_1y +α_0 = 0
and
x = γ_2y^2 +γ_1y+γ_0.
It can happen that there is more than one positive real vector v⃗ for which these two equalities hold.
This means that the mapping 𝒬_1,v⃗ is not an injection.
In the case that y_1,y_2 are real roots of y^3 + α_2y^2+α_1y+α_0 = 0 for which there exist x_1,x_2 ∈^+ such that x_1 = γ_2y_1^2 + γ_1 y_1 + γ_0, x_2 = γ_2y_2^2 + γ_1y_2 + γ_0
and 0<y_1<1, 1<y_2, we cannot decide whether y>1 only from the knowledge of the triplet Q_1,v⃗ (without the knowledge of v⃗).
The situation for other inequalities is analogous.
Let v⃗ = (x, y, 1)^T ∈^3_+ be a basis of some cubic number field (as a vector space over ).
Moreover, let
Q_1,v⃗ = ([ 1 0 0; 0 1 0; 0 0 1; ],
[ 0 b_1 c_1; 1 b_2 c_2; 0 b_3 c_3; ],
[ 0 c_1 c_4; 0 c_2 c_5; 1 c_3 c_6 ]),
Q_2,v⃗ = ( [ c_6 1 c_3; c_4 0 c_1; c_5 0 c_2; ],
[ 1 0 0; 0 1 0; 0 0 1; ],
[ c_3 0 b_3; c_1 0 b_1; c_2 1 b_2; ] )
and
Q_3,v⃗ = ( [ b_2 c_2 1; b_3 c_3 0; b_1 c_1 0; ],
[ c_2 c_5 0; c_3 c_6 1; c_1 c_4 0; ],
[ 1 0 0; 0 1 0; 0 0 1; ]).
We have
| N(y)/N(1) | = | c_5/b_3| , | N(1)/N(x) | = | c_5/b_3|, and | N(x)/N(y) | = | c_5/b_3|.
The numbers x,y,1 constitute a basis of a cubic number field, and therefore, y is a cubic number. We put
y^3 + α_2y^2+α_1y+α_0 = 0.
We know that (<ref>) holds. Therefore,
| N(y)/N(1) | = |α_0| = | c_5/b_3|.
The rest of the proof is analogous. We only use <Ref>, <Ref>, the triplet Q_2,v⃗ (resp. Q_3,v⃗) and the minimal polynomial of y = 1/x (resp. y =x/y).
§ PERIODIC MCF EXPANSIONS
In this section, we consider eventually periodic expansions of v⃗, i.e., we assume v⃗ = RN (where both the preperiodic part and the repetend are represented as product).
We shall not distinguish between purely and eventually periodic expansions by considering the matrix RNR^-1, called the matrix of repetend, and the equality RN = RNR^-1.
If v⃗ = M, i.e., the matrix M is a matrix of a repetend of an expansion of v⃗, then v⃗ is an eigenvector of M.
We use the following reformulation of this fact from <cit.>:
Let v⃗ =[ v_1; ⋮; v_n ]∈_+^n, v⃗ = M in a given unimodular MCF algorithm. We have
M v⃗ = λv⃗,
where λ∈. Moreover,
* λ is an algebraic unit of degree at most n;
* If the degree of λ equals n, then the numbers v_1/v_n,…,v_n-1/v_n,v_n/v_n constitute a basis (as a vector space over ) of the number field (λ).
We cannot omit the condition on the degree of λ since deg( λ) ≤ n-1 would allow v_j/v_n∉(λ). For an example of such a vector and algorithm, see Remark (1) in <cit.>.
The following theorem states that the matrix of repetend always equals to a matrix of multiplication by some unit in basis v⃗.
Let v⃗ = [ y_1; ⋮; y_n; ] be a basis of (y_1) (as a vector space over ), where v⃗ has an eventually periodic expansion in a unimodular MCF algorithm.
Moreover, let M be a matrix of repetend of this MCF expansion of v⃗. We have
M = T^v⃗_ε^T,
where ε∈ U(𝒪_(y_1)) and T^v⃗_ε is a matrix of linear transformation t_ε (defined by (<ref>)) in the basis v⃗.
It follows from <Ref> that
M v⃗ = εv⃗,
where ε is an algebraic unit.
Moreover, the matrix M is an integer matrix, and therefore, for all i ∈{1,…,n} we have (Mv⃗)_i ∈(y_1), hence ε∈(y_1).
Equality M = T^v⃗_ε^T follows from <Ref>.
Let v⃗ = [ v_1; ⋮; v_n; ] be a basis of a number field of degree n (as a vector space over ), λ, λ̃∈(v_1,…,v_n) and m ∈. We have
T_λ^v⃗ T_λ̃^v⃗ = T_λ̃^v⃗ T_λ^v⃗ and T_λ^m^v⃗ = (T_λ^v⃗)^m,
which implies that {T_λ^v⃗^T| λ∈(v_1,…,v_n), λ≠ 0} is an Abelian group.
§.§ Purely periodic MCF expansions
In this subsection, we focus on purely periodic MCF expansions and state two necessary conditions for a MCF expansion to be purely periodic.
First of all, we introduce the weak convergence of the MCF algorithms.
Let ( M^(s))_s=0^+∞
be a sequence of matrices from ^n,n.
Moreover, let j ∈{1,…,n}.
We say it weakly converges to v⃗∈^n with respect to the j-th column if the following two conditions are fulfilled:
* there exists P_0 such that M^(P) is positive for all P > P_0;
* the sequence
( M^(s)_i,j/M^(s)_k,j)_s=P_0^+∞
converges to v⃗_i/v⃗_k for all i ∈{1,…,n} and some k ∈{1,…,n}.
All elements of all matrices M^(s) for s ≥ P_0 are positive, and therefore, we can choose the integer k arbitrarily.
The (ℐ,) (n-1)-dimensional MCF algorithm is weakly convergent if for every vector v⃗∈_+^n
whose expansion is ( A^(0), A^(1), …) with M^(s) = A^(0)A^(1)⋯ A^(s) we have that the sequence M^(s) weakly converges to v⃗ with respect to the j-th column for every j.
Note that the definition of weak convergence varies in the literature.
This is mostly due to variances in the definitions of MCF algorithms themselves.
We base our definition of weak convergence on the definition in the book <cit.> of Brentjes although the definition present there is based on geometric definition of MCF algorithm.
The reader may also refer to Schweiger <cit.> on the matter of various concepts of convergence.
Let us now recall that a matrix M is primitive if there exists a positive integer k such that every element of M^k is positive.
In this article, we use the Perron–Frobenius theorem, which gives us a key information about the eigenvectors of a primitive matrix.
We state the theorem in a form suitable for the rest of the article:
Let M be a primitive matrix.
* The matrix M has a positive real eigenvalue λ_max such that every other eigenvalue λ satisfies
|λ| < λ_max.
* The eigenvalue
λ_max has algebraic and geometric multiplicity equal to one and has an eigenvector v⃗ such that every component of v⃗ is positive.
* Any eigenvector with non-negative components is a multiple of v⃗.
In the following, we will discuss vectors composed of conjugates of components of v⃗ and expansions of such vectors.
More precisely, consider a vector v⃗^T=(v_1,…, v_n) and let α be a primitive element for the number field generated by v⃗, i.e., (v_1,…, v_n)=(α). Fixing a conjugate ᾱ of α determines an embedding (α)↪ given by β↦β̄, where β̄ = g(ᾱ) with β = g(α) for some polynomial g (over ).
We set v̄^T = (v̄_1,…, v̄_n).
It follows that
T_ε̄^v̄ = T_ε^v⃗.
Let v⃗∈^n_+ be a basis of a number field of degree n (as a vector space over ) and v̄≠v⃗ be a conjugate vector of v⃗.
Moreover, suppose that v⃗ has a purely periodic expansion in some unimodular weakly-convergent (n-1)-dimensional continued fraction algorithm. Then
v̄∉^n_+.
Let v⃗ = M.
Moreover, as the algorithm is weakly convergent, the matrix M is primitive.
By <Ref> there exists a unit ε∈ U(𝒪_(v⃗)) such that M = T_ε^v⃗^T = T_ε̄^v̄^T.
It follows that v⃗ and v̄ are distinct eigenvectors of M.
Moreover, v̄ is not a multiple of v⃗, since that would imply ε = ε̄, which is impossible since v̄≠v⃗.
As M is primitive and v⃗∈^n_+, by the Perron–Frobenius theorem any eigenvector with non-negative components is a multiple of v⃗.
Therefore, v̄∉^n_+.
Suppose that v⃗ = [ y^n-1; ⋮; y; 1 ], where y is an algebraic number of degree n.
If v⃗ has a purely periodic expansion in some unimodular (n-1)-dimensional continued fraction algorithm with the sets (ℐ,𝒜) for which 𝒜⊂(n,), then the norm N(y) has sign (-1)^n-1.
Let 𝕀 be the identity matrix of degree n and α_0,…,α_n-1∈ be such that
∑_j=0^n-1α_j y^j + y^n = 0.
We suppose for contradiction that the sign of N(y) is (-1)^n. This is equivalent to saying that α_0=(-1)^nN(y) >0.
We show that { T_ε^v⃗|ε∈ U(_(y))}∩SL(n,) = {𝕀} (and therefore also {T_ε^v⃗^T|ε∈ U(_(y))}∩SL(n,) = {𝕀}).
Let ε∈ U(_(y))} be arbitrary. By <Ref> we obtain
(T_ε^v⃗)_n,j-1 = - (T_ε^v⃗)_1,jα_0,
for all j ∈{2,…, n}.
Now suppose that T_ε^v⃗∈SL(n,).
Since α_0 > 0, it follows that (T_ε^v⃗)_1,j = 0 for all j ∈{2,…,n}. Using <Ref> and <Ref>, we obtain that T_ε^v⃗ = 𝕀.
On the other hand, a purely periodic expansion in a MCF algorithm has a matrix of repetend that is equal to a product of matrices from the set 𝒜 of this algorithm and by <Ref>, the matrix of repetend is equal to the matrix T_ε^v⃗ for some ε∈ U(_(y)).
Since 𝒜⊂(n,), we have T_ε^v⃗∈SL(n,).
By <Ref> we obtain
(T_ε^v⃗)_n,j-1 = - (T_ε^v⃗)_1,jα_0,
for all j ∈{2,…, n}.
Since α_0 > 0 and T_ε^v⃗∈SL(n,), it follows that (T_ε^v⃗)_1,j = 0 for all j ∈{2,…,n}.
Using <Ref> and <Ref>, we obtain that T_ε^v⃗ = 𝕀.
Since 𝒜⊂(n,) and 𝕀∉𝒜, no matrix of repetend is equal to 𝕀, thus we have a contradiction.
§ CANDIDATES ON THE MATRIX OF REPETEND
In this section,
we first show how to generate all the matrices T_ε^v⃗ for every v⃗ = [ v_1; ⋮; v_n-1; 1 ] and every ε∈ U(𝒪_(v_n-1)).
The procedure we describe relies on <Ref> and requires knowledge of the minimal polynomial of v_n-1, the coordinates of the components of v⃗ in the polynomial basis (1,v_n-1,…,v_n-1^n-1) (of (v_n-1)), and the fundamental units of 𝒪_(v_n-1).
<Ref> shows that every matrix of repetend of v⃗ in some unimodular MCF algorithm can be expressed as T_ε^v⃗^T for some ε∈ U(𝒪_(v_n-1)).
Therefore, we refer to the matrices T_ε^v⃗^T as the candidates on the matrix of repetend.
Again, we limit our exposition to the case of n=3 for simplicity, but note that the procedure generalizes to larger values of n.
As this procedure relies on the candidates on the matrix of repetend and, in some cases, may be used to find the expansion (see <Ref>), we refer to it as the repetend matrix form of the algorithm.
§.§ Finding candidates on the matrix of repetend
Let y be a cubic number for which α_0 + α_1 y + α_2 y^2 + y^3 = 0, where α_0, α_1,α_2 ∈, x = γ_0 + γ_1 y + γ_2 y^2, where γ_0, γ_1, γ_2 ∈ and v⃗ = (x, y, 1)^T be a basis of some cubic number field (as a vector space over ).
We continue by the description of a procedure finding all the candidates on the matrix of repetend of the MCF expansion of the vector v⃗.
Firstly, we have to realise that the number y is a cubic number, and therefore, by Dirichlet's <Ref>, there are either one or two fundamental units in 𝒪_( y).
Let ε_1 = β_1+β_2y + β_3x, resp. ε_1 = β_1+β_2y + β_3x, ε_2 = β_1+β_2y + β_3x, be the fundamental unit, resp. units, of 𝒪_( y).
It follows from <Ref> and <Ref> that every
candidate M on the matrix of repetend of the MCF expansion of (x, y, 1)^T can be written as
M = ± (T_ε_1^v⃗^T)^m_1
for m_1 ∈, respectively
M = ± ( (T_ε_1^v⃗^T )^m_1(T_ε_2^v⃗^T)^m_2 )
for m_1,m_2 ∈.
We can easily verify by direct computation that (T_ε_1^v⃗^T)_∙,1 =
(x_1, y_1, z_1)^T, where we have x_1 = β_1+ β_2(γ_1/γ_2 -α_2) + β_3(γ_1^2/γ_2-2γ_1α_2 + α_2^2γ_2+2γ_0-α_1γ_2), y_1 = β_2/γ_2 + β_3(γ_1/γ_2-α_2), z_1 = β_3, and, applicable in the case of two fundamental units, (T_ε_2^v⃗^T)_∙,1 = (x_2, y_2, z_2)^T where x_2 = β_1+ β_2(γ_1/γ_2 -α_2) + β_3(γ_1^2/γ_2-2γ_1α_2 + α_2^2γ_2+2γ_0-α_1γ_2), y_2 = β_2/γ_2 + β_3(γ_1/γ_2-α_2), z_2 = β_3.
Now, we can use <Ref> and <Ref> to compute the matrices T_ε_1^v⃗^T, resp. T_ε_1^v⃗^T and T_ε_2^v⃗^T.
We use the notation from <Ref>. We obtain that
T_ε_1^v⃗^T = ( (Q_1,v⃗)_1 (x_1, y_1, z_1)^T (Q_1,v⃗)_2 (x_1, y_1, z_1)^T (Q_1,v⃗)_3 (x_1, y_1, z_1)^T)
and similarly for the matrix T_ε_2^v⃗^T.
For simplicity, we do the explicit calculation only for the case x = y^2. In this case, we obtain a simpler form, and that is x_1 = β_1-β_3α_1-β_2α_2+β_3α_2^2, y_1 = β_2-β_3α_2, z_1 = β_3 and eventually x_2 = β_1-β_3α_1-β_2α_2+β_3α_2^2, y_2 = β_2-β_3α_2, z_2 = β_3.
For i ∈{1,2}, we obtain that
T_ε_i^v⃗^T = [ x_i -α_1y_i-α_0z_i -α_0y_i; y_i x_i+α_2y_i -α_0z_i; z_i y_i+α_2z_i x_i+α_2y_i+α_1z_i ].
Note that the last equality holds also if ε_i is not a unit.
In the most common case, when 𝒜⊆(n,), some of the candidate matrices M (given by (<ref>), resp. (<ref>)) can be excluded.
First, the determinant of the matrix of repetend has to be 1, hence we can exclude M if it has determinant equal to -1.
Second, the matrix of repetend has integer entries.
Thus, we can exclude M if it has non-integer entries.
If y is an algebraic integer, then M has always integer entries.
It remains to comment on the fact that the knowledge of fundamental units is required to find all the candidates on the matrix of repetend.
The procedure of obtaining all fundamental units of a real quadratic number field is known (for example, see <cit.>).
If α = ∛d for some d ∈, d ≠ e^3 where e ∈, the problem of finding fundamental units in _(α) is closely connected with the cubic analogue of Pell's equation, and therefore, we can use the process described in <cit.>.
For some other algebraic number fields, there are algorithms for computing a set of fundamental units. Most of these algorithms are based on the geometric interpretation of MCFs. One of these algorithms is the Voronoi's algorithm for computing a set of fundamental units of a cubic number field (1896, <cit.> and later restated in a different form in <cit.>). In 1985, Buchmann (<cit.> and <cit.>) generalized Voronoi's algorithm to an arbitrary number field with the group of units of rank 1 and 2.
For an example of sets of fundamental units in some cubic number fields see <cit.> or <cit.>.
We illustrate the described procedure of finding all candidates on a matrix of repetend on the following example:
Let y be the only positive root of the polynomial y^3 + y^2-2 y - 1. We investigate the MCF expansion of the vector v⃗ = (y^2, y, 1)^T. The number y has three real conjugates, and therefore, there are two fundamental units in U(𝒪_(y, y^2)). In <cit.> we can find that the two fundamental units are ε_1 = -1 + y + y^2 and ε_2 = 2-y^2.
Using the computation in (<ref>), we obtain that every candidate M on the matrix of repetend is defined by
M = ± (M_1^m_1 M_2^m_2)
where m_1,m_2 ∈,
M_1 = [ 1 1 0; 0 1 1; 1 1 -1 ] and
M_2 = [ -1 1 1; 1 0 -1; -1 0 2 ].
We compare it with the expansion of v⃗ in the Brun and in the Selmer algorithm.
In the Brun algorithm, the vector v⃗ has a purely periodic expansion equal to v⃗ = M_B where
M_B = [ 20 45 16; 16 36 13; 13 29 10 ] = M_1^3 M_2^-3.
In the Selmer algorithm, the vector v⃗ has a purely periodic expansion too. In this case, the expansion is equal to v⃗ = M_S where
M_S = [ 2 3 1; 1 3 1; 1 2 1 ] = M_2^-2.
This means that for the Brun algorithm, we have m_1 = 3 and m_2 = -3 and for the Selmer algorithm we have m_1 = 0 and m_2 = -2.
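The decompositions in this example are easy to verify numerically; the sketch below recomputes M_1³ M_2^{-3} and M_2^{-2} and checks that M_B v⃗ is a scalar multiple of v⃗ = (y², y, 1)^T for the positive root y of y³ + y² - 2y - 1 (the helper names are ours).

```python
import numpy as np

M1 = np.array([[1, 1, 0], [0, 1, 1], [1, 1, -1]])
M2 = np.array([[-1, 1, 1], [1, 0, -1], [-1, 0, 2]])
M2_inv = np.round(np.linalg.inv(M2)).astype(int)         # integer matrix, since det(M2) = -1

MB = np.linalg.matrix_power(M1, 3) @ np.linalg.matrix_power(M2_inv, 3)
MS = np.linalg.matrix_power(M2_inv, 2)
print(MB)   # [[20 45 16] [16 36 13] [13 29 10]] = M_1^3 M_2^{-3}
print(MS)   # [[ 2  3  1] [ 1  3  1] [ 1  2  1]] = M_2^{-2}

y = max(np.roots([1, 1, -2, -1]).real)                   # the positive root of y^3 + y^2 - 2y - 1
v = np.array([y ** 2, y, 1.0])
print(np.allclose(np.cross(MB @ v, v), 0))               # True: MB v is a multiple of v
```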
The next step is to see if a candidate matrix is in fact a matrix of repetend.
§.§ Decomposition of the candidates on the matrix of repetend
After we have all the candidates on the matrix of repetend of v⃗, we need to find whether there exists a candidate M on the matrix of repetend for which we can find matrices R and N such that M = RNR^-1 and such that v⃗ = RN.
In other words, both R and N need to have a decomposition into the matrices from 𝒜, and RN needs to be formed from an expansion produced by <Ref> in the given MCF algorithm.
We sum it up in the following proposition.
Let v⃗ = [ y_1; ⋮; y_n ]∈^n_+ be a basis of some number field as a vector space over .
The vector v⃗ has an eventually periodic expansion in a unimodular MCF algorithm if and only if there exists ε∈ U(_( y_1,…, y_n)) such that
T^v⃗_ε^T
(matrix of the linear transformation t_ε (defined by (<ref>)) in the basis v⃗)
has a decomposition which is equal to an expansion produced by <Ref> in the given MCF algorithm. (I. e. there exists R,N ∈^n,n such that T^v⃗_ε^T = RNR^-1 and the matrices R and N have a decomposition such that v⃗ = RN… in the given MCF algorithm.)
If the vector v⃗ has an eventually periodic expansion in a unimodular MCF algorithm then the rest follows by <Ref> and by the definition of <Ref>.
Now, we continue with the other direction. Let ε∈ U(_( y_1,…, y_n)) be such that T^v⃗_ε^T has a decomposition which is equal to an expansion produced by <Ref>. Let R,N ∈^n,n be such that T^v⃗_ε^T = RNR^-1 and v⃗ = RN… is the expansion of the vector v⃗ in the given MCF algorithm. Using <Ref>, we obtain that λ R^-1v⃗ = N (R^-1v⃗) for some λ∈. The matrices R and N are nonnegative and therefore λ >0.
Using <Ref>, we obtain that v⃗ = RN.
All of the well-known algorithms can be defined in a way in which the set is equal to the set of transvections or multiples of transvections.
Therefore, we can decompose every integer matrix into a product of transvections using <Ref>.
There are numerous decompositions available, and the challenge lies in identifying the specific one that matches the expansion generated by <Ref> within the given MCF algorithm, if such an expansion exists.
In general, this seems to be a difficult question, and it remains to be an open problem for now.
§ USING REPETEND MATRIX FORM TO CONSTRUCT EXPANSIONS
As an alternative to investigating the possibility of decomposing the candidates on the matrix of repetend, we show by example how knowledge of a matrix T_λ^v⃗, for some λ∈(v_1,…,v_n) of degree n, where v⃗ = (v_1, …, v_n)^T, can be used to construct expansions of a parametric class of vectors.
In our example, we construct expansions in the Algebraic Jacobi-Perron algorithm.
The Algebraic Jacobi-Perron algorithm (AJPA) was introduced by Tamura and Yasutomi in 2009 (<cit.>).
For our purposes, we use the AJPA in its homogeneous form and study only the dimension n − 1 = 2.
Let K be a cubic number field; N(v) below denotes the norm N_K|(v).
Let v⃗ = (v_1, v_2, v_0)^T ∈ (K∩ℝ_+)^3. We have
ℐ_AJPA = {I_1,j,k, I_2,j,k, I_3,j,k : j,k ∈ ℤ_≥0} with
I_1,j,k = { (v_1, v_2, v_0)^T : ⌊v_2/v_1⌋ = j, ⌊v_0/v_1⌋ = k ∧ v_p > v_1 ∧ v_p > v_q ∧ v_1/√(|N(v_1)|) > v_q/√(|N(v_q)|) },
where p = 0, q = 2 or p = 2, q = 0;
I_2,j,k = { (v_1, v_2, v_0)^T : ⌊v_1/v_2⌋ = j, ⌊v_0/v_2⌋ = k ∧ v_p > v_2 ∧ v_p > v_q ∧ v_2/√(|N(v_2)|) > v_q/√(|N(v_q)|) },
where p = 0, q = 1 or p = 1, q = 0;
I_3,j,k = { (v_1, v_2, v_0)^T : ⌊v_1/v_0⌋ = j, ⌊v_2/v_0⌋ = k ∧ v_p > v_0 ∧ v_p > v_q ∧ v_0/√(|N(v_0)|) > v_q/√(|N(v_q)|) },
where p = 2, q = 1 or p = 1, q = 2.
The elements of ℐ_AJPA are pairwise disjoint since all the inequalities in the definition of the sets I_i,j,k are strict.
Note that the inequalities defining the intervals above are homogeneous in the sense that their validity does not change when we replace the vector (v_1, v_2, v_0)^T by its multiple (α v_1, α v_2, α v_0)^T for any α∈ K∩ℝ_+.
Moreover, let
_AJPA = {A_1,j,k = T_21^jT_31^k, A_2,j,k = T_12^jT_32^k, A_3,j,k = T_13^jT_23^k : j,k ∈ ℤ_≥0}, where
T_12 = ([ 1 1 0; 0 1 0; 0 0 1 ]),
T_13 = ([ 1 0 1; 0 1 0; 0 0 1 ]),
T_21 = ([ 1 0 0; 1 1 0; 0 0 1 ]),
T_23 = ([ 1 0 0; 0 1 1; 0 0 1 ]),
T_31 = ([ 1 0 0; 0 1 0; 1 0 1 ]),
T_32 = ([ 1 0 0; 0 1 0; 0 1 1 ]).
Therefore, the i-th step of the algorithm works as follows. If v⃗^(i)∈ I_1,j,k, then
(v_1^(i), v_2^(i), v_0^(i))^T ↦ (v_1^(i), v_2^(i) - j v_1^(i), v_0^(i) - k v_1^(i))^T = (v_1^(i+1), v_2^(i+1), v_0^(i+1))^T,
and analogously for v⃗^(i) in the other intervals.
The homogeneous AJPA algorithm is the (ℐ_AJPA,_AJPA) MCF algorithm.
The homogeneous AJPA expansion of a vector is the (ℐ_AJPA,_AJPA) MCF expansion.
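To make the step rule concrete, the following Python sketch (our own illustration, not code accompanying this text) performs one step of the homogeneous AJPA numerically. A field element c_0 + c_1 y + c_2 y^2 is stored as the integer coefficient triple (c_0, c_1, c_2); its conjugates are its values at the roots of the minimal polynomial of y, |N(·)| is the absolute value of their product, and the normalisation v/√(|N(v)|) follows the definition as printed above. All helper names are ours, and we assume a complex cubic field so that the real embedding is unique.

import numpy as np

def conjugates(coeffs, minpoly):
    """Values of c0 + c1*y + c2*y^2 at all conjugate roots of minpoly
    (minpoly is given with the leading coefficient first)."""
    return np.array([np.polyval(coeffs[::-1], r) for r in np.roots(minpoly)])

def real_value(coeffs, minpoly):
    """Value at the (assumed unique) real embedding."""
    vals = conjugates(coeffs, minpoly)
    return float(vals[np.argmin(np.abs(vals.imag))].real)

def abs_norm(coeffs, minpoly):
    """|N_{K|Q}(v)| as the absolute value of the product of the conjugates."""
    return float(abs(np.prod(conjugates(coeffs, minpoly))))

def ajpa_step(v, minpoly):
    """One step of the homogeneous AJPA.  v maps the indices 1, 2, 0 of the
    components v_1, v_2, v_0 to integer coefficient triples (c0, c1, c2)."""
    x = {i: real_value(v[i], minpoly) for i in v}
    w = {i: x[i] / np.sqrt(abs_norm(v[i], minpoly)) for i in v}
    p = max(v, key=lambda i: x[i])                 # v_p: the strictly largest entry
    pivot = max((i for i in v if i != p), key=lambda i: w[i])
    quotients, new_v = {}, dict(v)
    for i in v:
        if i != pivot:
            q = int(np.floor(x[i] / x[pivot]))     # the partial quotients j, k
            quotients[i] = q
            new_v[i] = tuple(np.array(v[i]) - q * np.array(v[pivot]))
    return pivot, quotients, new_v

# Example: s = 1, t = 2, i.e. y^3 + y^2 + 2y - 1 = 0, starting from (y^2, y, 1).
minpoly = [1, 1, 2, -1]
v = {1: (0, 0, 1), 2: (0, 1, 0), 0: (1, 0, 0)}
print(ajpa_step(v, minpoly))                       # first step lands in I_2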
Tamura and Yasutomi <cit.> give a class of vectors and show they have eventually periodic AJPA expansion.
In the following theorem, we give a larger class, and use <Ref> to prove that the elements of this class have eventually periodic AJPA expansion.
Let y be a cubic number such that -1+ty+sy^2+y^3 = 0 for some s,t ∈_+ for which t>s ∧ t>s^2/4. Moreover, let v⃗_s,t,f,r = (y^2+fy+r, y, 1)^T for some f,r ∈_+ ∪{0}, where f is such that y^2+fy < y√(|f^3-sf^2+ft+1|) ∧ y^2+fy < 1 (in particular, f = 0 fulfils this condition for every s,t). Then the AJPA expansion of v⃗_s,t,f,r is
v⃗_s,t,f,r = A_3,r,0 A_2,f,t A_1,t,s A_3,t,s A_2,s,t ⋯ (pre-period A_3,r,0 A_2,f,t, repetend A_1,t,s A_3,t,s A_2,s,t)
for r>0 and
v⃗_s,t,f,0 = A_2,f,t A_1,t,s A_3,t,s A_2,s,t ⋯ (pre-period A_2,f,t, repetend A_1,t,s A_3,t,s A_2,s,t).
First of all, we show that (y) is a cubic complex number field. Since y is a cubic number, it is enough to show that the monic minimal polynomial -1+tx+sx^2+x^3 of y has a non-real complex root. This happens if the discriminant Δ of this polynomial is negative. Therefore, we compute the discriminant:
Δ = -18st + 4s^3 + s^2t^2 - 4t^3 - 27 = (s^2-4t)(t^2+4s) - 2st - 27 < 0,
since s^2-4t < 0 (because t > s^2/4) while t^2+4s > 0 and 2st+27 > 0.
Therefore, this equation has one purely real and two complex solutions.
Now we compute the expansion of v⃗_s,t,f,r.
The coefficients s,t are positive and therefore y<1. Moreover, the constant coefficient of the monic minimal polynomial of y is -1 and therefore |N(y)| = 1.
We start with the case r>0. In this case, we have
y^2+fy+r > 1 > y, 1 = 1/|N(1)| > y/|N(y)| = y, ⌊y^2+fy+r⌋ = r, and ⌊y⌋ = 0.
Together, we obtain that v⃗_s,t,f,r∈ I_3,r,0 and v⃗_s,t,f,r^(1) = (y^2+fy, y, 1)^T. At this point, we notice that v⃗_s,t,f,r^(1) = v⃗_s,t,f,0.
Now, we find a matrix M_0 = T_λ^v⃗_s,t,f,0^T for some cubic number λ∈(y). We choose λ for which we have (M_0)_∙,1 = (1, 1, 1)^T.
We can make this choice using the explicit connection between this column of the matrix M_0 and the coordinates of λ in the basis (y^2,y,1) given by (<ref>).
We have 1>y>y^2.
Using <Ref> and <Ref>, we obtain that
M_0 = [ 1 -f^2 + f s - t + 1 f + 1; 1 -2 f + s + 1 1; 1 -f + s + 1 -f + s + t + 1 ].
Now, we use <Ref> and find the triplets Q_1,v⃗_s,t,f,0,Q_2,v⃗_s,t,f,0 and Q_3,v⃗_s,t,f,0.
We obtain
Q_1,v⃗_s,t,f,0=
(([ 1 0 0; 0 1 0; 0 0 1 ]), ([ 0 -f^2 + f s - t 1; 1 -2 f + s 0; 0 1 -f + s ]), ([ 0 1 f; 0 0 1; 1 -f + s t ]))
Q_2,v⃗_s,t,f,0 =
(([ -2 f^2 - 3 f s + s^2/k 1 -2 f - s/k; -f - s/k 0 -1/k; -1/k 0 -f^2 - f s + t/k ]), ([ 1 0 0; 0 1 0; 0 0 1 ]), ([ -2 f - s/k 0 -f^3 - f^2 s + f t + 1/k; -1/k 0 -f^2 - f s + t/k; -f^2 - f s + t/k 1 -(f^2 - f s) t + t^2 + f/k ]))
Q_3,v⃗_s,t,f,0 =
(([ f - s -f^2 + f s - t 1; 1 -f 0; 0 1 0 ]), ([ -f^2 + f s - t f^3 - f^2 s + f t + 1 0; -f f^2 - t 1; 1 -2 f + s 0 ]), ([ 1 0 0; 0 1 0; 0 0 1 ])),
where k = f^3 - 2 f^2 s + f s^2 + (f - s) t - 1.
We use <Ref> to compute that |N(y^2+fy)| = f^3-sf^2+ft+1 and therefore y = y/√(|N(y)|) > (y^2+fy)/√(|N(y^2+fy)|) by the assumption on f. We know that ⌊(y^2+fy)/y⌋ = f, so it remains to compute ⌊1/y⌋ in this step of the algorithm. For this reason, we put M_0,c,f = T_32^-cT_12^-fM_0T_12^f T_32^c and v⃗_s,t,f,0^(0,c) = T_32^-cT_12^-fv⃗_s,t,f,0, compute Q_1,v⃗_s,t,f,0^(0,c) and use <Ref> to determine the maximum c̄∈_+ such that (v⃗_s,t,f,0^(0,c))_2< (v⃗_s,t,f,0^(0,c))_3 for all c ∈_+, c ≤ c̄. We obtain that
(v⃗_s,t,f,0^(0,c))_2< (v⃗_s,t,f,0^(0,c))_3
(c^3 - c^2 t + 3 c^2 - c s - 2 c t + 3 c - s - t)(c^3 - c^2 t - c s - 1) >0
((c-t+1)(c^2 +2c) + (c-t)-cs-s)(c^2(c-t) - c s - 1) >0
and this holds for every c ≤ t-1. For c = t, we obtain
(v⃗_s,t,f,0^(0,t))_2 < (v⃗_s,t,f,0^(0,t))_3 ⟺ (t^2+2t-ts-s)(-ts-1) > 0
which does not hold since t>s. Therefore ⌊1/y⌋ = t.
This means that v⃗_s,t,f,0∈ I_2,f,t, v⃗_s,t,f,0^(1) = T_32^-tT_12^-fv⃗_s,t,f,0 and we put M_1 = T_32^-tT_12^-fM_0T_12^fT_32^t. From the knowledge of the matrix M_1 and by <Ref> we obtain the triplets Q_1,v⃗_s,t,f,0^(1),Q_2,v⃗_s,t,f,0^(1) and Q_3,v⃗_s,t,f,0^(1):
Q_1,v⃗_s,t,f,0^(1)= (([ 1 0 0; 0 1 0; 0 0 1 ]), ([ 0 t 1; 1 t^2 + s t; 0 s t + 1 s ]), ([ 0 1 0; 0 t 1; 1 s 0 ])),
Q_2,v⃗_s,t,f,0^(1)= (([ s^2 - t 1 -s; -s 0 1; s t + 1 0 -t ]), ([ 1 0 0; 0 1 0; 0 0 1 ]), ([ -s 0 1; 1 0 0; -t 1 0 ])),
Q_3,v⃗_s,t,f,0^(1)= (([ -s 0 1; 1 0 0; -t 1 0 ]), ([ 0 1 0; 0 t 1; 1 s 0 ]), ([ 1 0 0; 0 1 0; 0 0 1 ])).
Using these triplets and the <Ref>, we obtain that √(|N((v⃗_s,t,f,0^(1))_3)|/|N((v⃗_s,t,f,0^(1))_1)|) = √(st+1).
Now, we put v⃗_s,t,f,0^(1,0,b) = T_31^-bv⃗_s,t,f,0^(1) and v⃗_s,t,f,0^(1,c,0) = T_21^-cv⃗_s,t,f,0^(1) for b,c ∈_+.
Again, we put M_(1,0,b) = T_31^-bM_1T_31^b, we find Q_2,v⃗_s,t,f,0^(1,0,b) and use <Ref>. We obtain that
(v⃗_s,t,f,0^(1,0,b))_1< (v⃗_s,t,f,0^(1,0,b))_3
b^3 + (b + 1) s^2 + 3 b^2 - 2 (b^2 + 2 b + 1) s + (b - s + 1) t + 3 b<0
(b-s+1)(b^2+2b+1-s(b+1)+t)-1<0
which holds if (but not only if) b≤ s-1 and t>(s-b-1)(b+1). The second inequality holds if b≤ s-1 ∧ t >s^2/4. On the other hand, for b =s we obtain that
(v⃗_s,t,f,0^(1,0,s))_1 < (v⃗_s,t,f,0^(1,0,s))_3 ⟺ s^2+2s+1-s^2-s+t-1 < 0 ⟺ s+t < 0
which does not hold.
It follows that ⌊(v⃗_s,t,f,0^(1))_3/(v⃗_s,t,f,0^(1))_1⌋ = s.
Similarly, we get that
(v⃗_s,t,f,0^(1,c,0))_1< (v⃗_s,t,f,0^(1,c,0))_2
(-c^3 + c^2 t + c s + 1) (-c^3 + c^2 t - 3 c^2 + c s + 2 c t - 3 c + s + t)>0
((t-c)c^2 + c s + 1) ((c+1)^2(t-c-1)+cs+s+1)>0
which holds for all c ≤ t-1 but does not hold for c = t. This means that ⌊(v⃗_s,t,f,0^(1))_2/(v⃗_s,t,f,0^(1))_1⌋ = t. Therefore (v⃗_s,t,f,0^(1))_2>(v⃗_s,t,f,0^(1))_3>(v⃗_s,t,f,0^(1))_1, (v⃗_s,t,f,0^(1))_3/(v⃗_s,t,f,0^(1))_1 = s < √(st+1) = √(|N((v⃗_s,t,f,0^(1))_3)|/|N((v⃗_s,t,f,0^(1))_1)|). This means that v⃗_s,t,f,0^(1)∈ I_1,t,s.
Now, we put v⃗_s,t,f,0^(2) = T_21^-tT_31^-sv⃗_s,t,f,0^(1) and M_2 = T_21^-tT_31^-s M_1 T_31^sT_21^t.
Again, we use <Ref> to find the triplets Q_1,v⃗_s,t,f,0^(2),Q_2,v⃗_s,t,f,0^(2) and Q_3,v⃗_s,t,f,0^(2). We get
Q_1,v⃗_s,t,f,0^(2)= (([ 1 0 0; 0 1 0; 0 0 1 ]), ([ 0 0 1; 1 0 -t; 0 1 -s ]), ([ 0 1 -s; 0 -t s t + 1; 1 -s s^2 - t ])),
Q_2,v⃗_s,t,f,0^(2)= (([ t 1 0; s 0 1; 1 0 0 ]), ([ 1 0 0; 0 1 0; 0 0 1 ]), ([ 0 0 1; 1 0 -t; 0 1 -s ])),
Q_3,v⃗_s,t,f,0^(2)= (([ t^2 + s t 1; s t + 1 s 0; t 1 0 ]), ([ t 1 0; s 0 1; 1 0 0 ]), ([ 1 0 0; 0 1 0; 0 0 1 ])).
Now, we notice that the connection between the triplets Q_1,v⃗_s,t,f,0^(2) and Q_2,v⃗_s,t,f,0^(1) is the same as in (<ref>) with matrix of permutation P= [ 0 1 0; 0 0 1; 1 0 0 ]. Moreover, an analogous connection (with the same matrix of permutation) holds also between the two pairs of triplets Q_2,v⃗_s,t,f,0^(2) and Q_3,v⃗_s,t,f,0^(1), Q_3,v⃗_s,t,f,0^(2) and Q_1,v⃗_s,t,f,0^(1).
Moreover, Pv⃗_s,t,f,0^(1) = v⃗_s,t,f,0^(2).
Using this fact (analogously as in <Ref>), we get that
v⃗_s,t,f,0 = A_2,f,t A_1,t,s A_3,t,s A_2,s,t ⋯
and
v⃗_s,t,f,r = A_3,r,0 A_2,f,t A_1,t,s A_3,t,s A_2,s,t ⋯
for r>0.
This proves the claim.
We now state Theorem 2.4 of <cit.> and give a proof using the last theorem to demonstrate that it indeed covers the class studied in <cit.>.
Notice that the expansions (and the lengths of the periods) of v⃗(m) above and the expansions given in Theorem 2.4 in <cit.> differ slightly.
This is due to the fact that we use the homogeneous form of the AJPA whereas Tamura and Yasutomi use the non-homogeneous form of the AJPA.
The two forms are equivalent and one may transform the expansions from one form to another.
Let m ∈_+ and v⃗(m) = (∛((m^3+1)^2) - m^2, ∛(m^3+1) - m, 1)^T. All the vectors v⃗(m) have an eventually periodic homogeneous AJPA expansion. The length of the period of v⃗(m) is 3 for every m > 1 and the length of the period of v⃗(1) is 6.
First, we compute explicitly the expansion of v⃗(1) and verify its periodicity. We obtain
v⃗(1) = A_1,0,1 A_2,2,1A_3,0,1A_1,1,2A_2,1,0A_3,1,2.
Now suppose that m≥ 2.
We have v⃗(m) = (y^2+2my, y, 1)^T, where y is the only real root of the polynomial g(x) = x^3+3mx^2+3m^2x-1 (indeed, y = ∛(m^3+1) - m and y^2+2my = (y+m)^2 - m^2 = ∛((m^3+1)^2) - m^2). We put s = 3m, t = 3m^2, f = 2m, r = 0.
The function g(x) is increasing and g(y) = 0. Therefore, y^2+2my<1 and y^2<y. Now, we show that y^2 + fy < √(|f^3-sf^2+ft+1|)y. This is equivalent to y^2 + 2my <√(|2m^3+1|)y. For m≥ 3 it follows from the fact that y^2<y.
For m = 2, this condition is equivalent to y<√(17)-4. We compute that g(√(17)-4)>0 which implies (by the monotony of g) that y<√(17)-4. This means that v⃗(m) fulfils the assumptions of <Ref> for all m≥ 2 and therefore v⃗(m) has eventually periodic expansion for all m ∈_+.
§ CONCLUSION
Let us conclude with several remarks and further research directions.
The case of vectors from a totally real number field is of special interest when considering MCF expansions.
While our <Ref> showed that a vector with all coordinates being totally positive cannot have a purely periodic expansion, quite a few examples of eventually periodic expansions are known <cit.>.
In particular, <cit.> explicitly considered the connection of Jacobi–Perron expansions with universal quadratic forms (and suitable small, “indecomposable” elements in the number field) already mentioned in the Introduction. Although their results are promising, they remain only partial, and suggest that the more general approach outlined in our present paper may be needed in order to obtain a tight connection between MCFs and indecomposables. Specifically, can one find a suitable decomposition of the candidate for the matrix of the repetend that would yield indecomposables in the form of certain “(semi-)convergents” to the expansion?
As we demonstrated in <Ref>, the repetend matrix form of algorithms can be useful computationally, as it avoids working with small real numbers and the associated precision issues, while at the same time the norms of the components of the represented vector can be found easily. This should be very convenient in practical computer implementations.
Note that the repetend matrix form is not limited to n = 3 and the Algebraic Jacobi–Perron algorithm; however, its formal generalization requires further study.
Finally, the holy grail in the area of MCFs is establishing that some vectors do not have eventually periodic expansions. For example, computational evidence <cit.> suggests that this is the case for the Jacobi–Perron expansion of (1,∛4,∛(4^2)). In fact, Voutier [personal communication] conjectured that the positive integers m with eventually periodic JPA expansion of (1,∛m,∛(m^2)) have density 0.
These problems are notoriously hard, but the approach outlined in our paper could present a starting point.
§ ACKNOWLEDGEMENTS
H. Řada was supported by the Grant Agency of the Czech Technical University in Prague,
grant No. SGS20/183/OHK4/3T/14.
Š. Starosta acknowledges support of the OP VVV MEYS funded project CZ.02.1.01/0.0/0.0/16_019/0000765 “Research Center for Informatics”.
V. Kala was supported by Czech Science Foundation (GAČR) grant 21-00420M and Charles University Research Centre program UNCE/SCI/022.
|
http://arxiv.org/abs/2307.03327v1
|
20230706225952
|
Encoder-Decoder Networks for Self-Supervised Pretraining and Downstream Signal Bandwidth Regression on Digital Antenna Arrays
|
[
"Rajib Bhattacharjea",
"Nathan West"
] |
cs.LG
|
[
"cs.LG",
"eess.SP"
] |
Encoder-Decoder Networks for Self-Supervised Pretraining and Downstream Signal Bandwidth Regression on Digital Antenna Arrays
This work was funded under US Defense Advanced Research Projects Agency agreement HR00112190100.
Rajib Bhattacharjea
DeepSig, Inc.
Atlanta, Georgia, USA
[email protected]
Nathan West
DeepSig, Inc.
Rosslyn, Virginia, USA
[email protected]
================================================================================================================================================================================================================================
This work presents the first applications of self-supervised learning applied to data from digital antenna arrays. Encoder-decoder networks are pretrained on digital array data to perform a
self-supervised noisy-reconstruction task called channel in-painting, in which the network infers the contents of array data that has been masked with zeros. The self-supervised step requires
no human-labeled data. The encoder architecture and weights from pretraining are then transferred to a new network with a task-specific decoder, and the new network is trained on a small
volume of labeled data. We show that pretraining on the unlabeled data allows the new network to perform the task of bandwidth regression on the digital array data better than an equivalent
network that is trained on the same labeled data from random initialization.
Machine learning, convolutional neural networks, antenna arrays, self-supervised learning
§ INTRODUCTION
Digital antenna arrays produce volumes of radio frequency (RF) data that are often too large to transfer over a single standard high-speed interface such as 100GbE or PCIe. This is because a typical
digital array samples waveforms at each antenna element at tens or hundreds of megasamples per second (e.g., using an integrated processor like the Xilinx RFSoC as in <cit.>),
with each sample typically having 24-28 bits of precision and requiring 32 bits to transfer.
Furthermore, the total data rate out of an array scales linearly with the desired instantaneous bandwidth and with the number of antenna
elements, meaning that a wideband array rapidly saturates a high-speed interface and computational resources as the bandwidth and number of elements increases. The main method currently used to
address this issue is to form a weighted sum of several antenna element signals into a single signal, which is known as beamforming and effectively reduces the data rate and increases the system
sensitivity to signals impinging from certain directions while attenuating signal sensitivity from other directions. Note, however, that this operation loses information about signals from some
directions. There are, however, many degrees of sparsity/redundancy/structure in the signals that could be exploited to better reduce the volume of data coming out of a digital array. For example,
the received signal spectrum at each element of an array is very similar; each element simply has a position-dependent spectral amplitude and phase offset relative
to other elements in the array. These offsets are themselves a relatively low-information / highly structured function of space and frequency, and can be captured by a few coefficients in a
spatial Fourier representation, with each coefficient corresponding to a multipath direction-of-arrival. Similarly, in a typical terrestrial environment, only some fraction of the spectrum is
occupied, which means that, under the current approach, ADC samples are being used to represent a lot of noise that exists between actual signals. Finally, the types of signals that
need to be represented themselves have structure that distinguishes them from random noise, which means that encodings of these data exist that can store them in fewer bits than their original
format that comes out of the array. Current approaches to array signal processing do not leverage this spatial redundancy, spectral sparsity, and temporal structure, meaning that the amount of data
coming out of the array is vastly larger than what is strictly necessary to represent the signals.
Data-driven machine learning has the potential to address this issue, particularly through the application of self-supervised learning (SSL). In its current incarnation, SSL uses neural
network models to learn compressed representations of data distributions. These compressed representations are known as embeddings, and have far fewer degrees of freedom and far less
dimensionality than the original data sources. Because digital array data contains redundancies and sparsity, it is highly compressible without loss of information, and it is expected that
learned compression methods using SSL will discover how to leverage this redundancy and sparsity for compression, making digital array data an especially good candidate for the SSL approach.
Embeddings are learned from input data that requires no human labeling, which contrasts with the current wave of supervised methods in deep learning.
Requiring large volumes of labeled data has been identified as one of the factors that makes machine learning less attractive in real RF applications <cit.>, which is another factor
that makes SSL an ideal candidate for further explorations with RF data.
A network that generates embeddings is derived through a pretraining process on unlabeled data. One approach to pretraining is to solve a related problem that does not require
human generated labels, known as a pretext task. For example, an encoder-decoder network can be trained to undo an input transformation such as the addition of noise or zeroing of some
input data. At the completion of such a pretraining process, the encoder network is taken as the embedding generator for further downstream tasks. The pretrained network (encoder) outputs
(the embeddings) are then used as inputs for further downstream machine learning based algorithms. In the case of digital antenna arrays, downstream algorithms may include beamforming weight
estimation, signal detection in noise, or joint signal detection and direction-of-arrival estimation.
To summarize, the potential benefits of an SSL-based approach to machine learning on digital antenna arrays are twofold: 1) embeddings of the array data can be transferred in place of
the raw data over a high-speed interface out of the array, and 2) far less labelled data can be used to train downstream machine learning algorithms performing functions of array signal
processing interest. To further try to realize these benefits, we present in this paper the first investigations into self-supervised learning on antenna array data. Our specific contributions include:
* A new proposal for a pretext task for the RF array domain, and
* The first known demonstration that pretraining on a pretext task and then transferring to a downstream RF task on an array using a small amount of labeled data produces superior results to
training the same downstream model from random initialization.
The rest of the paper is organized as follows. Section <ref> provides an overview of the developments in machine learning that are related to the present work. Section
<ref> describes the data collection, hardware, and machine learning methodologies used in our applications of both self-supervised pretraining and downstream task training.
Section <ref> demonstrates some sample experimental results for both pretraining and downstream task training. Section <ref> concludes with a discussion of the
results, limitations of the present study, and future directions for research.
§ RELATED WORK
This work builds on convolutional neural networks and on self-supervised representation learning from the computer vision and natural language processing domains, and so we briefly provide an
overview in this section of those areas of machine learning.
§.§ Supervised Convolutional Neural Networks
Data-driven machine learning using neural-network models has undergone a renaissance since AlexNet<cit.> was submitted to the 2012 ImageNet Large Scale Visual
Recognition Challenge (ILSVRC)<cit.>.
AlexNet leaped past the 2010 and 2011 winners by 12 and 10 percentage points of accuracy, respectively.
Neural networks dominated the ILSVRC competition through its conclusion in 2017, and were also rapidly adapted to other tasks
in other problem domains such as generative image modeling<cit.>, audio classification<cit.>, speech synthesis<cit.>, speech recognition<cit.>,
and generative text modeling<cit.>.
These methods were first applied to the radio frequency (RF) signal domain in 2016 for modulation format recognition <cit.> using data from a single simulated antenna, which
led to other works on the modulation recognition problem using similar datasets<cit.>. The current work is in this lineage of applications of convolutional networks to problems in
the RF domain.
§.§ Self-Supervised Learning
The area of self-supervised pretraining of neural networks was similarly revived in the post-AlexNet era, with major advances first in natural language modeling <cit.>,
followed by demonstrations of self-supervised pretraining in the image domain<cit.> and speech recognition domains<cit.>, among others. The application of
self-supervised pretraining on RF data is rare in the literature, with typical examples involving radar return data that has been processed into an image format <cit.>.
In the case of <cit.>, the self-supervision takes the form of making embeddings of radar return data consistent with corresponding pretrained image embeddings derived from a
conventional camera input. Reference <cit.> trains a network to approximate a conventional radar motion tracking algorithm, which, while not requiring human labels, can only learn to approximate a human expert algorithm.
Notably, there is a gap in the literature in the area of self-supervision on RF data directly from antenna arrays.
§ METHODS
§.§ Data Collection Hardware and Dataset Creation
An Epiq Solutions Sidekiq X4 software-defined radio (SDR) with 4 channels has been used for data collections. This radio downconverts and digitizes 4 channels
at up to 250 MS/s. For this effort, it has been operated at 50 MS/s with an analog channel bandwidth of 41 MHz after accounting for filter roll-off effects.
The physical aperture configuration was driven by simplicity and cost: simple monopole antennas (Taoglas Limited TG.55.8113) have been put into support structures of convenience, leading to irregular
array geometries without a specific spacing pattern. This reduces costs by not requiring the construction of any additional physical structure to support the antennas. Being able to use arbitrary
antenna geometries is enabled by the fact that the machine learning algorithms learn the array response and characteristics implicitly without any explicit calibration. The irregular array
used for data collection is displayed in Figure <ref> and was used to collect a four-channel dataset from the Epiq Sidekiq X4.
The Epiq Sidekiq X4 was tuned to 115 different frequencies starting at 375 MHz and ending at 2573 MHz.
Each center frequency for recording was chosen by a human operator looking at a spectrogram visualization and trying to minimize the presence of signals crossing the upper and lower band edges.
Recordings were collected at each frequency for 100 ms, which corresponds to 5 million samples at the 50 MS/s rate. The dataset is read into the machine learning framework PyTorch. The entire dataset
can be thought of as a large tensor with shape [115, 4, 5000000, 2]: 115 frequencies, 4 antenna channels, 5000000 time steps, and 2 quadrature channels.
§.§ Pretraining Methods
We have focused on convolutional encoder-decoder networks trained on reconstruction pretext tasks on spectrogram data from an array. The pretraining
step is depicted in Figure <ref>.
Pretraining proceeds by minimizing an error measure between the encoder-decoder network output and the original data. This is a reconstruction pretext
task, and the particular transformation we will present results for in this document is a channel-masking operation called channel in-painting that uniformly-randomly selects the data for one
antenna channel per training
example and sets that data to all zeros. If the network succeeds in reconstructing the original data from this transformed data, then the network has performed “channel in-painting”, which
also refers to similar pretext tasks from natural language processing and computer vision. The remainder of this section describes the implementation details of how we pretrained a
network to perform channel in-painting on digital array data.
The data tensors fed into this network are derived from frames of RF data from the dataset. Examples are drawn from the dataset with a shape of [4,65536,2], corresponding to 4 antenna channels, 65536
time samples, and
2 quadrature channels. These are frames of raw time-domain input from the array over 1.311 ms of continuous time. These data are pre-processed into a joint time-frequency representation
(short-time Fourier transform) suitable
for detection of signals in noise. First, the trailing dimension of 2 on the tensors is absorbed into the time dimension by converting the tensor datatype to be complex.
This results in a tensor with shape [4, 65536]. The time dimension is reshaped into two dimensions with 32 time chunks and 2048 continuous time steps, resulting in a tensor of shape
[4, 32, 2048]. After applying a Hann window function along the time dimension of length 2048, a discrete Fourier transform is calculated along that dimension, resulting in 4-channel
Hann-windowed
short-time Fourier transforms (STFTs) with 32 time chunks and 2048 frequency bins (a shape [4, 32, 2048] complex tensor). The real and imaginary parts of the tensor are now separated back
into a new trailing dimension with size 2, giving a tensor of shape [4, 32, 2048, 2] with all real entries. The trailing quadrature dimension is merged with the antenna array channels,
resulting in 8-channel training examples, with each channel containing the real or imaginary component of the STFT of the signals at the antennas (shape [8, 32, 2048]).
Finally, each training example is normalized (standardized) by subtracting its mean value and dividing by its standard deviation, giving each training example zero mean and unit
variance across the channels, time, and frequency dimensions.
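A compact PyTorch sketch of this preprocessing chain is given below, together with the channel-masking transform used for the channel in-painting pretext task described above (zeroing the two STFT channels that belong to one randomly chosen antenna). This is our own reconstruction from the description; function names and the exact call sequence are assumptions rather than the authors' code.

import torch

def preprocess(frame: torch.Tensor) -> torch.Tensor:
    """Turn one raw frame of shape [4, 65536, 2] (antenna, time, I/Q) into the
    standardized [8, 32, 2048] real STFT tensor described in the text."""
    x = torch.view_as_complex(frame.float().contiguous())   # [4, 65536] complex
    x = x.reshape(4, 32, 2048)                               # 32 chunks of 2048 samples
    x = x * torch.hann_window(2048, dtype=torch.float32)     # Hann window per chunk
    x = torch.fft.fft(x, dim=-1)                             # DFT: [4, 32, 2048] complex
    x = torch.view_as_real(x)                                # [4, 32, 2048, 2]
    x = x.permute(0, 3, 1, 2).reshape(8, 32, 2048)           # real/imag pairs per antenna
    return (x - x.mean()) / x.std()                          # zero mean, unit variance

def mask_one_antenna(x: torch.Tensor) -> torch.Tensor:
    """Channel in-painting input: zero the two adjacent channels (real and
    imaginary STFT) of one randomly chosen antenna."""
    y = x.clone()
    a = torch.randint(0, 4, (1,)).item()
    y[2 * a:2 * a + 2] = 0.0
    return y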
The encoder-decoder architecture used is based on basic principles of reducing the dimensionality of the data and then increasing it again. The core of the network is based on a common
convolutional residual block structure with squeeze-and-excitation with a squeeze reduction ratio of eight <cit.>. There are two variants of this
common structure: one that reduces dimensionality and one that increases dimensionality. The two variants of this basic building block are depicted in Figure <ref>.
Many network architectures based on stacking this block have been explored. One representative architecture is presented in Figure <ref>.
The architecture initially increases the channel count of the data from 8 to 32 using a convolutional stem (a simple block consisting of a single convolutional layer, batch normalization,
ReLU activation, and squeeze-excite), which increases the data tensor sizes by 4x. It then proceeds to decimate by two in time while keeping the channel count and number of frequency bins constant
a total of three times, representing an 8x decrease in the data tensor size. In total, the latent representation is half the size of the input tensor.
The network presented in Figure <ref> will be referred to as the “2-resblock” network, indicating that it uses two serial downsampling resblocks in the encoder and two
serial upsampling resblocks in the decoder. Other designs with more resblocks were also explored.
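A PyTorch sketch of such a building block is shown below. Because the figures with the original layout are not reproduced here, the kernel sizes, the placement of normalisation layers, and the use of a strided 1x1 convolution on the skip path are our assumptions; only the dimensionality-reducing variant is sketched (an increasing variant would replace the strided convolution with, e.g., a transposed convolution).

import torch
import torch.nn as nn

class SqueezeExcite(nn.Module):
    """Channel squeeze-and-excitation with the reduction ratio of eight quoted in the text."""
    def __init__(self, channels: int, reduction: int = 8):
        super().__init__()
        self.fc = nn.Sequential(
            nn.Linear(channels, channels // reduction), nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels), nn.Sigmoid())

    def forward(self, x):
        w = self.fc(x.mean(dim=(2, 3)))          # global average pool -> channel gates
        return x * w[:, :, None, None]

class ResBlockSE(nn.Module):
    """Residual block with squeeze-and-excitation; time_stride=2 gives the
    variant that decimates by two along the time-chunk dimension."""
    def __init__(self, in_ch: int, out_ch: int, time_stride: int = 1):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(in_ch, out_ch, kernel_size=3, stride=(time_stride, 1), padding=1),
            nn.BatchNorm2d(out_ch), nn.ReLU(inplace=True),
            nn.Conv2d(out_ch, out_ch, kernel_size=3, padding=1),
            nn.BatchNorm2d(out_ch), SqueezeExcite(out_ch))
        self.skip = nn.Conv2d(in_ch, out_ch, kernel_size=1, stride=(time_stride, 1))
        self.act = nn.ReLU(inplace=True)

    def forward(self, x):                        # x: [N, C, time_chunks, freq_bins]
        return self.act(self.body(x) + self.skip(x))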
All networks were pretrained by a common procedure based on the Adam variant of stochastic gradient descent using mini-batches containing 16 examples each (mini-batch shape [16, 8, 32, 2048])
and an initial learning rate of 0.001. Each mini-batch was drawn from the transformed data in which 2 adjacent channels out of the 8 were randomly set to zero for a channel in-painting task.
The loss metric was the mean-squared-error over the mini-batch, and one gradient-based update was performed per training mini-batch using the Adam optimizer. Twenty percent of the training data was
withheld for validation (7084 training examples and 1771 withheld validation examples). A total of 443 mini-batches are processed through the network each epoch, with one gradient-based weight update
per mini-batch. The average loss over the mini-batches was recorded for each epoch. The average loss over the validation data is calculated and recorded at the end of each training epoch. If the
validation loss is lower than any previous validation loss, the network is serialized and saved to disk as one of the outputs of the training procedure. The number of epochs the network is trained
for is determined by validation-based early stopping with a patience-parameter of 30, meaning that if the validation loss does not decrease once in any 30-epoch window, training stops. Finally, the learning
rate is also adapted throughout training using the “reduce-on-plateau” method with a patience-parameter of 10, meaning that if the validation loss does not decrease once in any 10 epoch window, then
the learning rate is reduced by a factor of 10.
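The pretraining loop can be sketched as follows; the hyperparameters (batch size 16, Adam with an initial learning rate of 0.001, 30-epoch early stopping, 10-epoch reduce-on-plateau) follow the text, while the model, the dataset objects, the masking transform mask_one_antenna from the earlier sketch, and the checkpoint file name are assumptions of this illustration.

import torch
from torch.utils.data import DataLoader

def pretrain(model, train_set, val_set, max_bad_epochs=30):
    opt = torch.optim.Adam(model.parameters(), lr=1e-3)
    sched = torch.optim.lr_scheduler.ReduceLROnPlateau(opt, factor=0.1, patience=10)
    loss_fn = torch.nn.MSELoss()
    train_loader = DataLoader(train_set, batch_size=16, shuffle=True)
    val_loader = DataLoader(val_set, batch_size=16)

    best, bad = float("inf"), 0
    while bad < max_bad_epochs:                        # validation-based early stopping
        model.train()
        for clean in train_loader:                     # clean batch: [16, 8, 32, 2048]
            masked = torch.stack([mask_one_antenna(x) for x in clean])
            opt.zero_grad()
            loss_fn(model(masked), clean).backward()   # reconstruct the unmasked data
            opt.step()

        model.eval()
        with torch.no_grad():
            val = sum(loss_fn(model(torch.stack([mask_one_antenna(x) for x in b])), b).item()
                      for b in val_loader) / len(val_loader)
        sched.step(val)                                # reduce-on-plateau on validation loss
        if val < best:                                 # serialize the best model so far
            best, bad = val, 0
            torch.save(model.state_dict(), "pretrained_encoder_decoder.pt")
        else:
            bad += 1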
§.§ Downstream Task Definition and Methods
After pretraining, a new encoder-decoder network is initialized with the same encoder architecture and weights as the pretrained network, but with a randomly initialized task-specific decoder
that outputs a tensor shape for another task, not the channel in-painting pretext task. In this work, we consider the downstream task of signal bandwidth regression, which is the act of mapping
between the STFT of a signal and a function on the frequency bins that takes on a value proportional to the signal bandwidth in each signal center bin and takes a value of zero everywhere else.
Figure <ref> depicts an example-pair of a spectrogram derived from STFT data, and the corresponding bandwidth regression target.
The regression target values themselves are simply a normalized version of the signal bandwidth such that a signal that occupies all frequency bins gets a target value of 1 (1/2048 times the number
of bins the signal occupies).
To map from the pretrained embeddings to the bandwidth regression targets, the new decoder continues to downsample the embedding in the time dimension until
that dimension has size 1, at which point the singleton dimension is removed, leaving a tensor of shape [32, 2048], corresponding to 32 different features across 2048 frequency bins.
To map from this shape to the desired target bandwidth regression shape of [1, 2048], 1D convolutional residual blocks are used with no upsampling or downsampling. These have a similar structure
to those depicted in Figure <ref>, except all convolutions are replaced with 1D convolutions, all strides are set to one, and the 1D form of squeeze-and-excite is used.
Each block halves the channel count, and so after a stack of 5 such blocks, the decoder output shape matches that of the bandwidth regression target. The kernel size is set to 5 in these 1D
convolutional blocks. The residual path still has size 1 convolutions to match the channel counts between input and output.
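A sketch of this task-specific head in PyTorch is given below (our own reading of the description; the exact layer ordering and the omission of squeeze-and-excite inside the 1D block are assumptions). Starting from 32 channels, five channel-halving blocks end at a single channel, matching the [1, 2048] regression target.

import torch.nn as nn

class ResBlock1D(nn.Module):
    """1D residual block: kernel size 5, stride 1, channel count halved,
    1x1 convolution on the skip path to match channel counts."""
    def __init__(self, in_ch: int):
        super().__init__()
        out_ch = in_ch // 2
        self.body = nn.Sequential(
            nn.Conv1d(in_ch, out_ch, kernel_size=5, padding=2),
            nn.BatchNorm1d(out_ch), nn.ReLU(inplace=True),
            nn.Conv1d(out_ch, out_ch, kernel_size=5, padding=2),
            nn.BatchNorm1d(out_ch))
        self.skip = nn.Conv1d(in_ch, out_ch, kernel_size=1)
        self.act = nn.ReLU(inplace=True)

    def forward(self, x):
        return self.act(self.body(x) + self.skip(x))

# Stack of five channel-halving blocks: [N, 32, 2048] -> [N, 1, 2048].
bandwidth_head = nn.Sequential(*[ResBlock1D(32 // 2**k) for k in range(5)])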
The training procedure and hyperparameter choices are identical to those of pretraining (described previously in Section <ref>), except the initial
learning rate of Adam was 0.01. The loss function being optimized is a mean-squared error loss defined on the logarithm of the bandwidth targets, with a small
epsilon hyperparameter added to the network output for numerical stability. Let B_i for i∈{0,1,… 2047} be the sequence of true bandwidth regression target
outputs on the 2048 analysis bins as depicted in the bottom panel of Figure <ref>, and let B̂_i be a sequence that will estimate the true
bandwidth regression target, i.e., the output of the downstream neural network. If we let B and B̂ without subscripts represent their entire respective sequences,
then the bandwidth loss between a true and estimated bandwidth regression target is mathematically defined by
ℓ(B,B̂)=∑_{i|B_i≠ 0}(log(B_i)-log(ε+B̂_i))^2
The optimizer tries to minimize the average of that quantity over the training dataset by training over minibatches. Critically, this loss is only calculated on the bins with non-zero target values. This is
a pure bandwidth regression problem, and effectively assumes that the signal centers are known by some other method to the network. In the most useful general case, the network
should itself regress both the bandwidth regression values, and their frequency locations. That this work does not present those results is a current limitation that will be addressed in Section
<ref>.
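A direct PyTorch transcription of this masked loss is sketched below (the value of the epsilon hyperparameter and the assumption that the network output is non-negative are ours):

import torch

def bandwidth_loss(b_true: torch.Tensor, b_pred: torch.Tensor, eps: float = 1e-6) -> torch.Tensor:
    """Masked log-domain squared error of the equation above: only the frequency
    bins with a non-zero regression target contribute to the loss."""
    mask = b_true != 0
    diff = torch.log(b_true[mask]) - torch.log(eps + b_pred[mask])
    return (diff ** 2).sum()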
This downstream bandwidth regression problem requires a labeled dataset, and a new single file was recorded that contains 77 training examples. These examples were labeled by upper and lower
signal band edges, which are then processed into the ground-truth bandwidth regression targets. The labeled dataset is itself split into 62 training examples and 15 validation examples. This
is a very low data regime for learning with neural networks, but if the network can generalize to the 15 validation examples by only training on the 62 training examples, it can be concluded
that pretraining was effective in helping the network generalize even with small amounts of data.
§ SAMPLE EXPERIMENTAL RESULTS
§.§ Pretraining Results
The 2-resblock variant of the convolutional encoder-decoder network was pretrained on the channel in-painting task using the dataset and training procedure previously outlined in
Section <ref>. The training results are presented in Figure <ref>.
The 2-resblock model trained for a total of 164 epochs, with the model from epoch 135 achieving the lowest validation loss of 0.3452. The mean-squared error was used as the loss measure for
optimization, and the data inputs are zero-mean, unit-variance when those quantities are calculated across all dimensions for a training example. As a reference for the loss scale, a network that
simply outputs the data mean (zero) for every entry would achieve an expected mean-squared error of 1.0, the variance of the standardized data. A loss value near 1.0 therefore indicates that the
network matches the mean of the data, but nothing else. On the other hand, a network that correctly reconstructs the real and imaginary parts of every time step and frequency bin of all four
antenna channels for every training example would achieve a loss of zero. Note that the training procedure does drive the loss down from a value near 1.0, meaning that the network initially
only matches the mean of the data distribution but learns to represent the distribution better than that as training proceeds.
§.§ Downstream Transfer Results
For transfer learning from the pretrained model to the downstream task of bandwidth regression, we present two sets of results for the same encoder-decoder architecture: 1) the baseline has a
randomly initialized encoder and decoder, and 2) the pretrained model has the encoder copied from an SSL pretrained network, with the same random initial decoder weights as the baseline.
We present results here for the case in which both the encoder and decoder are trainable. The
baseline model trained for 194 epochs before early stopping and achieved a best mean validation loss of 7.7, with a best-case loss value of 1.6. The pretrained model trained for 269 epochs and
achieved a best mean validation loss of 3.0, with a best-case loss value of 0.6. The training loss curves are depicted in Figure <ref>.
§ DISCUSSION, LIMITATIONS, AND FUTURE DIRECTIONS
That the pre-trained network converged to a lower loss value on the validation data is the key result. To our knowledge, this is the first set of investigations to
demonstrate 1) a pre-text task for digital array data and 2) the utility of SSL pretraining for digital array problems. The method leads to a reduction in volume of required labeled training data,
which could make data-driven machine learning for RF applications more attractive to practitioners. Furthermore, the 2-resblock architecture was shown to have an embedding size that requires
half the number of bits to store as the original STFT data, hinting towards a future in which array data can be massively compressed by learned algorithms.
A notable limitation of the current effort is that the pretrained encoder was trainable (rather than having its weights frozen) in the results shown above. In our experiments
with a frozen pretrained encoder, we found that the loss values on the downstream task were worse than a randomly initialized, fully-trainable baseline. This indicates that
pretraining helps to put the encoder in a better initial state for downstream training and serves as a kind of weight regularization, but that, at least for the scenario we have presented,
the encoder must be trainable on the downstream tasks in order for the final network performance to be improved over the baseline. If this fact holds as a generality for other pre-text tasks,
downstream tasks, datasets, and network architectures, that would severely limit the utility of the embeddings because they would have to be viewed as a starting point for further refinement
rather than a fixed representation that is useful for a myriad of downstream tasks.
Another limitation of this work is the assumption that the signal center frequencies are known a priori to the bandwidth regression network. Ideally, the network would itself be able to
regress not just the bandwidth values, but also their locations. Doing this requires careful engineering of a suitable loss function, and in simple experiments conducted as part of this work,
we found that simple loss functions resulted in networks that were unable to predict any signal center bins.
To see if the above limitations hold, future research will be directed towards 1) expanding the domain of pre-text tasks (and pretraining methodologies in general), 2) exploring further downstream
tasks (such as the full signal detection in noise problem, beamforming weight estimation, direction-of-arrival estimation, etc.), 3) expanding the datasets used for pretraining, 4) exploring
more network architectures, and 5) careful engineering of loss functions suitable for joint signal detection and bandwidth regression. With robust enough pretraining datasets and the appropriate
methodology, SSL has proven to be an effective way to generate fixed embeddings that are useful for a variety of downstream tasks in natural language processing and computer vision. We have little
reason to suspect that there is fundamentally something different about RF data from digital antenna arrays that makes SSL methods fail to provide the same gains to the RF array domain. We
expect it is simply a matter of discovering the correct combination of the factors above that will lead to rapid gains in the field.
|
http://arxiv.org/abs/2307.01022v1
|
20230703135323
|
Strain, Young's modulus, and structural transition of EuTiO3 thin films probed by micro-mechanical methods
|
[
"Nicola Manca",
"Gaia Tarsi",
"Alexei Kalaboukhov",
"Francesco Bisio",
"Federico Caglieris",
"Floriana Lombardi",
"Daniele Marré",
"Luca Pellegrino"
] |
cond-mat.mtrl-sci
|
[
"cond-mat.mtrl-sci",
"cond-mat.str-el"
] |
Strain, Young's modulus, and structural transition of EuTiO_3 thin films probed by micro-mechanical methods
[email protected]
CNR-SPIN, C.so F. M. Perrone, 24, 16152 Genova, Italy
Dipartimento di Fisica, Università degli Studi di Genova, 16146 Genova, Italy
Department of Microtechnology and Nanoscience – MC2, Chalmers University of Technology, SE 412 96, Gothenburg, Sweden
CNR-SPIN, C.so F. M. Perrone, 24, 16152 Genova, Italy
CNR-SPIN, C.so F. M. Perrone, 24, 16152 Genova, Italy
Department of Microtechnology and Nanoscience – MC2, Chalmers University of Technology, SE 412 96, Gothenburg, Sweden
Dipartimento di Fisica, Università degli Studi di Genova, 16146 Genova, Italy
CNR-SPIN, C.so F. M. Perrone, 24, 16152 Genova, Italy
CNR-SPIN, C.so F. M. Perrone, 24, 16152 Genova, Italy
EuTiO3 (ETO) is a well-known complex oxide mainly investigated for its magnetic properties and its incipient ferro-electricity.
In this work, we demonstrate the realization of suspended micro-mechanical structures, such as cantilevers and micro-bridges, from 100 nm-thick single-crystal epitaxial ETO films deposited on top of SrTiO3(100) substrates. By combining profile analysis and resonance frequency measurements of these devices, we obtain the Young's modulus, strain, and strain gradients of the ETO thin films. Moreover, we investigate the ETO anti-ferro-distortive transition by temperature-dependent characterizations, which show a non-monotonic and hysteretic mechanical response. Comparison between experimental and literature data allows us to weigh the contributions from thermal expansion and softening to the tuning slope, while a full understanding of the origin of the hysteresis is still missing. We also discuss the influence of oxygen vacancies on the reported mechanical properties by comparing stoichiometric and oxygen-deficient samples.
Luca Pellegrino 0000-0003-2051-4837
Received July 3, 2023, Accepted xx
=======================================
§ INTRODUCTION
EuTiO3 (ETO) is a complex oxide belonging to the titanate family. It is the closest compound to SrTiO3 (STO),<cit.> which was extensively studied over the last decades and is among the standard substrate materials employed for the deposition of oxide thin films.
An interesting aspect of ETO is that it is iso-structural to STO, with almost identical lattice constants.<cit.> This allows the growth of very high quality thin films with bulk-like characteristics on top of STO substrates.<cit.>
EuTiO3 has a perovskite crystal structure, cubic at room temperature, and undergoes a cubic-to-tetragonal transition upon cooling at about 282 K driven by oxygen octahedra rotation,<cit.> which is similar to that observed in STO at 105 K.<cit.>
Specific heat and thermal expansion measurements point toward a first-order phase transition, which also affects the temperature dependence of the Young's modulus.<cit.>
Thanks to its magnetic cation, ETO is also characterized by an anti-ferromagnetic transition at T_N=5.5 K.<cit.>
Moreover, it is an incipient ferroelectric: although its negative critical temperature of about −175 K forbids the transition to a ferroelectric state, evidence of magneto-dielectric coupling has made this compound an interesting candidate for realizing multiferroicity.<cit.>
In recent years, the possibility of realizing prototypical micro-electro-mechanical systems from complex-oxide thin films has been demonstrated by taking advantage of the selective chemical etching of different oxide compounds.<cit.> In order to develop this new scientific and technological direction, it is of great interest to increase the number of viable oxide materials by discussing their fabrication protocols and characterizing their mechanical properties.
Contrary to STO, ETO is resistant to HF, allowing for the selective chemical etching of the two materials to realize suspended structures of the desired shape.<cit.>
In this work, we report about the fabrication of EuTiO3 micro-mechanical structures from single crystal thin films deposited on top of STO(100) with the goal to investigate their mechanical properties.
ETO samples were initially characterized in terms of structural, magnetic, and optical properties and then micro-fabricated into suspended double-clamped bridges and cantilevers. The analysis of their mechanical properties allowed us to quantify the thin films' built-in strain, strain gradient along the in/out-of-plane directions, and Young's modulus.
Strain was found to be slightly compressive, with a counter-intuitive behavior with respect to the out-of-plane gradient, while the Young's modulus of 130 GPa, although in contrast with previous reports, is in line with what is observed for similar compounds in this thickness range.
Temperature-dependent measurements of the cantilevers' resonance frequency show how the anti-ferro-distortive transition affects not only the thermal expansion but also the rigidity of the material, with anomalies at its critical temperature.
At last, we discuss how all the reported characteristics are affected by the presence of oxygen vacancies, one of the most common doping defect in complex oxides.
§ EXPERIMENTAL
EuTiO_3 thin films were grown by pulsed laser deposition on top of SrTiO_3 single-crystal substrates kept at 650 ^∘C. Laser repetition rate was 4 Hz and the energy density on the target was 1.7 J/cm^2. The distance between the target and the substrate was 50 mm. The growth chamber base pressure was 1·10^-6 mbar and, where not stated differently, the background oxygen pressure during the deposition was 1.5·10^-4 mbar.
Mechanical characterization was performed in a custom setup providing PID-controlled temperature and 2·10^-5 mbar base pressure. When not stated otherwise, samples were kept at the constant temperature of 25 ^∘C.
All the mechanical spectra were recorded by measuring the thermal noise of the ETO cantilevers in the optical-lever detection scheme. We employed a 670 nm laser focused on top of the structures with an optical power of 60 µW. The reflected light was converted into an electrical signal by a custom four quadrant photo-diode and sent to a spectrum analyzer. The reported spectra were typically the result of 8 averages, while the bandwidth changed depending on the central frequency value, with a minimum of 1 Hz when measuring around 13 kHz.
§ RESULTS
The ETO crystal structure was investigated by X-ray diffraction, and a Θ–2Θ scan of a film having a thickness of 100 nm is reported in Fig. <ref>a.
It shows a superposition of the peaks belonging to the STO substrate and the ETO, in agreement with previous reports of stoichiometric bulk-like ETO films grown in similar conditions.<cit.>
Surface morphology was investigated by atomic force microscopy (AFM), showing a very smooth ETO film surface with a roughness of about 0.1 nm over a scan area of 2.5×2.5 µm^2, as shown in Fig. <ref>b, which is representative of all the samples analyzed in this work.
The main steps of the micro-fabrication protocol that we employed to realize suspended structures are schematically reported in Fig. <ref>c. Initially, we perform UV lithography on a thick positive resist (SPR220), which is required for the subsequent physical etching of the ETO layer by Ar ion milling. After cleaning in sonicating acetone and ethanol baths to remove the photo-resist, the samples were soaked in a 5% HF aqueous solution kept at 35 ^∘C and gently stirred at 200 RPM.
The selective etching of STO starts out-of-plane, from the exposed regions, and then proceeds by removing the substrate below the edges of the ETO film, making the narrower geometries suspended.<cit.>
In about 30 minutes, all the structures having width below 5 µm were completely released and ready for the mechanical characterizations.
Examples of the two kinds of geometries employed in this work, cantilevers and double-clamped bridges, are shown in Fig. <ref>d and e, respectively, all having a nominal width of 5 µm. In these pictures, clamped ETO is dark/blueish, while all the suspended regions are light/yellowish.
The typical pyramids that form on the STO(100) substrate surface after HF etching are visible in the background: these are the regions where ETO was removed by ion milling.
The length of the fabricated cantilevers was between 15 and 100 µm, but very few of those longer than 60 µm were measurable. This is because, as also discussed later, cantilevers are slightly bent downwards and the tip of the longer ones touches the substrate located about 6 µm below the ETO level. Once this happens, van der Waals forces prevent the cantilever from detaching.
Double-clamped micro-bridges, instead, do not easily collapse and were fabricated with length between 100 and 250 µm. As visible in Fig. <ref>e, their center is out-of-focus, signaling the relaxation of built-in compressive strain by mechanical buckling.
We can quantitatively evaluate the strain of the ETO film from the shape analysis of buckled double-clamped micro-bridges.<cit.>
To do so we measure the profile of each bridge by using an optical profilometer, which is an interferometric microscope providing a height map of its field of view.
An example of the profiles extracted from these maps is reported in Fig. <ref>a, showing an array of ETO micro-bridges having length between 100 and 195 µm.
Notably, they were all bent downwards, which is the most common case in our samples. This could be related to the fabrication process or to strain relaxation at the clamping points. For each individual profile we calculate the best fit of a sum of trigonometric functions. The resulting analytical expression is then employed to calculate the profile length L^P, which is compared to its nominal value L to calculate the strain ε
ε = (L^P - L) / L.
The basic assumption of this analysis is that all the in-plane compressive stress accumulated in the ETO thin film is relaxed and converted into strain (elongation) upon the structure release. The stress of the final bridge should thus be zero.
The Python script implementing the strain analysis is included in the dataset associated to this work, as indicated in the “Open Data” section.
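A minimal, self-contained sketch of such an analysis is given below. It is an illustrative re-implementation with synthetic placeholder data, not the script distributed with the dataset; the choice of fitting basis and all numerical values are ours.

import numpy as np
from scipy.optimize import curve_fit
from scipy.integrate import quad

L = 150e-6                                        # nominal bridge length (example value)

def profile(x, *c):
    """Sum of a few buckling-like trigonometric modes over the bridge length L."""
    return sum(c[k] * (1 - np.cos(2 * np.pi * (k + 1) * x / L)) for k in range(len(c)))

# Placeholder "measured" profile standing in for the optical-profilometer data.
x_meas = np.linspace(0, L, 400)
z_meas = 2e-6 * (1 - np.cos(2 * np.pi * x_meas / L))

popt, _ = curve_fit(profile, x_meas, z_meas, p0=[1e-6, 0, 0])
dz = lambda x: sum(popt[k] * 2 * np.pi * (k + 1) / L * np.sin(2 * np.pi * (k + 1) * x / L)
                   for k in range(len(popt)))
arc_length, _ = quad(lambda x: np.sqrt(1.0 + dz(x) ** 2), 0, L)
strain = (arc_length - L) / L                     # positive strain = compressive film stress
print(f"strain = {100 * strain:.3f} %")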
In this study we measured many (∼140) micro-bridges fabricated on two different samples made of 100 nm-thick ETO films.
The calculated strain distribution is reported in the histogram of Fig. <ref>b, showing an average strain of ε = +0.14 %±0.02 %, where the positive sign corresponds to a compressive state. Such a value implies an in-plane lattice compression of the ETO films of a_STO·ϵ/(1+ϵ) = 0.55 pm that, under elastic deformation, would correspond to an expansion of the c-axis in the out-of-plane direction of 1 pm (assuming a Poisson's ratio of 0.3).
Literature reports of ETO pseudo-cubic lattice constants at 300 K indicate values between 3.860 Å and 3.908 Å.<cit.>
Such dispersion is wider than the calculated lattice deformation due to epitaxial growth, making it difficult to uniquely correlate the in-plane strain to the out-of-plane lattice expansion from the XRD data reported in Fig. <ref>.
ETO films grown on top of STO are thus likely at the crossover between tensile and compressive strain, depending on the specific growth condition and crystal defects.
Our conclusion is that the measured compressive strain is the result of defect formation during the growth, such as oxygen vacancies or dislocations, which are thermodynamically stable.
The width of the strain distribution was found to be related to long-range film inhomogeneities and not to random bridge-to-bridge variations.
This is shown in the strain map reported in Fig. <ref>c, where each point represents a micro-bridge whose position corresponds to the bridge location in the real 5×5 mm^2 sample. Strain magnitude increases from the bottom-left to the top-right corner, while bridges close to each other show small strain variation. Similar characteristics were already observed in manganite thin films, and related to temperature gradient or different stoichiometry due to substrate-plume misalignment during the growth.<cit.>
While double-clamped bridges provide information about the in-plane strain, cantilevers do not, because the free end allows the structure to relax the stress by elongation instead of buckling. However, they can be employed to evaluate the strain gradient in the out-of-plane direction (∂_zε_x,y), since it determines the vertical bending of the structure due to the different in-plane strain between the top and bottom surfaces. Fig. <ref>d shows a set of profiles of cantilevers having length between 15 and 60 µm. Here, the horizontal coordinate was shifted to align the beginning of the suspended regions at x=0, while all the profiles were vertically shifted by 0.5 µm for better visibility. The irregularities in the profiles are artifacts of the measurement technique, which fails to reconstruct the shape of these semi-transparent structures where the substrate is close below.
All the profiles are bent downwards, signaling a positive strain gradient in the vertical direction. Such a gradient can be quantitatively evaluated by comparing the measured profiles with what is expected from a simple finite element model based on a constant vertical gradient, as reported in Fig. <ref>e. Here, the black lines are all the experimental profiles of <ref>d collapsed on top of each other, while the colored lines are the profiles calculated for different values of ∂_z ε.
Best agreement is found for ∂_z ε=2300 m^-1, while the confidence interval of this analysis is about 5 %, being one-half of the 200 m^-1 line spacing. Since the ETO thickness is t=100 nm, the resulting in-plane strain difference between the bottom and top surfaces is Δε=∂_zε·t=0.023 %, about 6 times smaller than the average in-plane strain. To further understand the characteristics of strain and strain relaxation in ETO thin films, we grew films having different thicknesses and measured their average in-plane strain from the profile analysis of buckled micro-bridges. It appears that thinner films have a slightly higher compressive strain, as can be seen in Fig. <ref>f. From this analysis we conclude that strain relaxation does not evolve layer-by-layer during the growth, otherwise we would observe a strain increase for thicker structures due to the positive gradient. It is instead a global characteristic of the crystal which likely evolves across its whole thickness as long as the growth process continues, at least for the explored film thickness range.
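As a quick analytic cross-check of these numbers (our own small-deflection estimate, not the finite element model used above, and assuming the standard result that a constant through-thickness strain gradient bends a released film into an arc of curvature equal to the gradient):

import numpy as np

grad = 2300.0            # fitted strain gradient, 1/m
t = 100e-9               # film thickness, m

for L_um in (15, 30, 45, 60):
    L = L_um * 1e-6
    tip = grad * L ** 2 / 2                 # tip deflection of a uniformly curved cantilever
    print(f"L = {L_um:3d} um -> tip deflection ~ {tip * 1e6:.1f} um")

print(f"top-to-bottom strain difference ~ {100 * grad * t:.3f} %")   # ~0.023 %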
Micro-cantilevers can also be employed as mechanical resonators to obtain the Young's modulus of the ETO film, even if they are slightly bent out-of-plane.<cit.>
The resonance frequency of the flexural modes f_n of a thin rectangular homogeneous cantilever are<cit.>
f_n = (λ_n^2/2π)(t/L^2)√(E/(12ρ)),
where λ_n=1.8751, 4.6941, 7.8548, (2n-1)π/2 is a numerical parameter related to the mode shape, t is the thickness, L the length, ρ the density, and E the Young's modulus.
To confirm that the simple analytical model of Eq. <ref> can be applied to our ETO resonators, we first measured a wide spectrum of a 75 µm-long cantilever, which is reported in Fig. <ref>a. The first flexural mode is located at 12.7 kHz, the second at 79.6 kHz, and the third at 224.2 kHz. Since the mode spacing should be given only by the numerical factors λ_n in Eq. <ref>, we can take the first mode as a reference and calculate the expected values of the higher modes as f_n = f_1λ_n^2/λ_1^2. The resulting frequencies are indicated by the green bands in Fig. <ref>a and well match the experimental values, while the band width provides their uncertainty.
Fig. <ref>b shows a detailed spectrum of the first flexural mode; the black line is a fit using the analytical expression of the thermal noise spectrum.<cit.> The resulting Q factor of 960 is in line with the other ETO cantilever resonators fabricated on these samples, which all show values between 600 and 1300.
Such a Q value is much lower than what was recently reported for complex-oxide micro-bridges, exceeding 10k,<cit.> since double-clamped geometries under tension can take advantage of the dissipation-dilution mechanism to enhance their Q value.<cit.>
We measured several ETO cantilevers having length spanning from 15 to 75 µm and reported their resonance frequencies (f_1) in the scatter plot of Fig. <ref>c.
In order to calculate the ETO Young's modulus (E_ETO) from this dataset, we fit Eq. <ref> considering a density of 6916.5 kg/m^3, which was calculated from the Eu, Ti, and O atomic masses and a (3.9 Å)^3 cubic unit cell.
The resulting E_ETO=130 GPa is almost two orders of magnitude larger than what was previously reported for polycrystalline samples.<cit.> However, this value is in line with recent measurements on STO single-crystal membranes, where 100 nm-thick samples had E_STO=150 GPa.<cit.>
To evaluate the dispersion of the measured resonance frequencies around their best fit (black solid line), in Fig. <ref>c we mark the E_ETO±15 % window (dashed black lines) comprising all the data points.
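For completeness, the sketch below shows how such a one-parameter fit can be set up with standard tools: the formula above predicts f_1 ∝ 1/L², so E follows from least-squares fitting the (L, f_1) pairs. The data points here are synthetic placeholders, not the measured values:

```python
import numpy as np
from scipy.optimize import curve_fit

t, rho = 100e-9, 6916.5   # film thickness (m) and ETO density (kg/m^3)
lam1 = 1.8751             # first-mode parameter

def f1_model(L, E):
    """First flexural frequency of a cantilever of length L (formula above)."""
    return lam1**2 / (2 * np.pi) * t / L**2 * np.sqrt(E / (12 * rho))

# Synthetic placeholder data (lengths in m, frequencies in Hz), NOT the measured set
rng = np.random.default_rng(0)
L_data = np.array([15, 25, 35, 45, 55, 65, 75]) * 1e-6
f_data = f1_model(L_data, 130e9) * (1 + 0.05 * rng.standard_normal(L_data.size))

E_fit, E_cov = curve_fit(f1_model, L_data, f_data, p0=[100e9])
print(f"fitted E ~ {E_fit[0] / 1e9:.0f} GPa "
      f"(+/- {np.sqrt(E_cov[0, 0]) / 1e9:.0f} GPa)")
```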
It is possible to investigate the EuTiO3 antiferrodistortive transition by measuring the temperature dependence of the resonance frequency of ETO cantilever resonators. We here consider a 20 µm-long cantilever which was initially cooled down from room temperature to 255 K. After that, its thermal noise spectrum was recorded every 0.5 K during both heating and cooling stepped ramps. The frequency vs temperature characteristic reported in Figure <ref>a shows two distinct features: (i) a slope change during heating between 275 K and 295 K, and (ii) a thermal hysteresis.
We can relate these tuning slopes to the mechanical characteristics of ETO by considering the relative temperature derivative of the squared resonance frequency (∂_T f^2_1 / f^2_1). Starting from Eq. <ref>, this leads to the separation of thermal stress and Young's modulus contributions
∂_T f^2_1/f^2_1 = 2∂_T t/t - 4∂_T L/L + ∂_T ρ/ρ + ∂_T E/E
= 2α-4α-3α + ∂_T E/E = -5α + ∂_T E/E,
where α is the linear expansion coefficient.
In Fig. <ref>b we compare the experimental ∂_T f^2_1 / f^2_1 (solid lines) with what is expected from thermal expansion only (TE), obtained by imposing ∂_T E=0 in Eq. <ref> (dashed line); TE data were extracted from Ref. Reuvekamp2014.
Such a comparison shows that TE is not the dominant contribution to the observed dynamics: above 300 K it provides about 1/3 of the total slope, while the temperature derivative of the Young's modulus is constant and about 120 ppm/K.
The oscillatory behavior of the ∂_T f^2_1 / f^2_1 data at the transition is not compatible with the single minimum of the TE plot. However, this kind of anomaly, associated with the antiferrodistortive transition, is in qualitative agreement with what was previously reported for bulk ETO ceramics, despite the huge difference in the absolute value of E_ETO.<cit.>
Our f(T) measurements are also characterized by a thermal hysteresis of about 35 K, as estimated from Fig. <ref>b.
We did not find reports of this effect in previous literature and, because of the limited temperature span of our measurement setup, we could not explore its whole width. We observed such hysteresis while measuring different devices fabricated on different samples, with minor changes in width and amplitude related to small temperature differences.
It could be related to both intrinsic (domain walls pinning, metastability) and extrinsic (thermal gradients, strain relaxation from the clamping points, transversal buckling) mechanisms, but we cannot draw a definitive conclusion about its origin.
Finally, we investigated how the EuTiO3 characteristics reported so far are affected by oxygen vacancies, which are a typical defect as well as a doping mechanism of complex oxides. To do so, we grew 100 nm-thick films at a lower oxygen pressure of 10^-6 mbar instead of 10^-4 mbar.
After the preliminary characterizations reported in Fig <ref>a–c, the ETO_3-δ samples underwent the same fabrication protocol as the stoichiometric ones.
The formation of oxygen vacancies in ETO deposited in low oxygen pressure is indicated by the XPS data reported in Figure <ref>a, where the valence change of Eu atoms, with added weight on Eu^2+, is a consequence of charge balance after oxygen removal. Another feature of ETO_3-δ films is the onset of a ferro-magnetic transition at low temperature evidenced by an inflection point in the χ(T) characteristic.<cit.> This feature is visible in our SQUID data as evidenced by the black arrow in Fig. <ref>b.
XRD measurements reported in Fig. <ref>c show that oxygen-deficient samples have a larger out-of-plane inter-planar distance. Such lattice expansion is a common feature of oxygen-vacancy-doped complex oxides,<cit.> and is associated with an increase in the compressive strain of the material. Moreover, the broadening of the diffraction peak is quite large, also showing peak splitting.
A peculiar characteristic of ETO_3-δ samples is strain inhomogeneity, as exemplified by the strain map of Fig. <ref>e, which shows strong variations between the upper and lower edge. Similar strain gradients were observed across four samples grown in different deposition runs and were always associated with broad XRD peaks, similar to what is reported in Fig. <ref>c.
Because of that, we hypothesize that oxygen deficiency during growth makes ETO much more sensitive to small temperature gradients or substrate inhomogeneities which, in higher-pressure conditions, do not affect the long-range strain characteristics.
Another dramatic difference between the stoichiometric and vacancy-doped cases is the strain gradient in the out-of-plane direction.
Fig. <ref>f compares the profiles of two 40 µm-long cantilevers: the ETO_3-δ one is bent upwards, with a much larger displacement than in the stoichiometric case. The strain gradient is thus negative and about three times larger in magnitude.
This feature is quite surprising, because in a simple layer-by-layer growth picture we could expect better film oxidation in the first layers, thanks to oxygen exchange with the STO substrate. However, in such a case the bottom surface of the cantilever would be more compressed, resulting in a profile shape similar to the ETO3 one. This result confirms our previous conclusion that strain behaves as a global film characteristic, and not as a sum of independent layers.
We also investigated whether the Young's modulus of ETO_3-δ was affected by oxygen vacancies by analyzing the frequency vs length relationship of an array of cantilevers as in Fig. <ref>c.
As reported in Fig. <ref>g, this is not the case: the small difference in the resulting Young's modulus, of about 2 %, is within the experimental error, and the difference in resonance frequencies between the two sets is due to their different film thicknesses (t_ETO_3=100 nm and t_ETO_3-δ=85 nm).
At last, we discuss the effect of oxygen vacancies on optical properties of ETO thin films. It is well known that the formation of oxygen vacancies affects the light absorption of complex oxides.<cit.> To investigate how this happens in ETO_3-δ, we assessed the optical properties of stoichiometric and oxygen-deficient films, prior to microfabrication, by means of spectroscopic ellipsometry. The complex dielectric functions ε of the substrate and the ETO film were modelled as a superposition of PSEMI oscillators (parametrized functions used to describe the optical response of crystalline semiconductors).<cit.> The values of ε extracted from the ellipsometry data, reported in Fig. <ref>g, indicate that stoichiometric samples show no absorption below the band-gap, located at about 4 eV, in agreement with previous reports.<cit.>
Oxygen vacancies cause a small redshift of the band gap, from 4.0 eV to 3.6 eV, but critically increase ε_2 (green dashed line) in the low-energy region, signaling the formation of in-gap states leading to broadband light absorption.
An increased optical absorption directly affects the response of mechanical resonators to incident light. We measured the resonance frequency of two 65 µm-long cantilevers as a function of the intensity of the laser used to probe their motion. The laser wavelength was 670 nm, as indicated by the arrow in Fig. <ref>g. Fig. <ref>h displays the resulting tuning slopes, where, for better comparison, we present the relative frequency shift with respect to the lowest power employed in this experiment (25 µW). The stoichiometric sample shows almost no softening, and thus no heating, in agreement with its good transparency. The resonance frequency of the ETO_3-δ resonator, instead, decreases by about 2.5 % for an incident light power (P) of 250 µW. This can be compared to the frequency shift measured in temperature by simply considering the finite differences
Δ f^2/(Δ P f_0^2) = [(1.025 f_0)^2 - f_0^2]/(f_0^2 Δ P) = 0.05/Δ P ≈ 200 ppm/µW.
Quite nicely, we found a 1 µW ↔ 1 K equivalence, which is valid for this specific cantilever length and above 300 K.
§ CONCLUSIONS
In summary, we investigated the mechanical properties of stoichiometric and oxygen vacancy-doped EuTiO3 by fabricating suspended micro-structures from single-crystal thin films.
Mechanical characterization measurements provided a quantitative evaluation of the average in-plane and out-of-plane strain and strain gradients, as well as of the Young's modulus of the material.
Stoichiometric films are found to grow slightly compressed, with an average strain of +0.14 %, while oxygen-deficient ones show higher strain inhomogeneity.
The measured Young's modulus of about 130 GPa is almost two orders of magnitude larger than in previous reports, but in line with other similar oxides in the same thickness range.
Above 300 K, the relative temperature derivative of the Young's modulus is constant, with a value of about 120 ppm/K.
Temperature-dependent mechanical measurements also found a non-monotonic and hysteretic behavior, associated with the EuTiO3 antiferrodistortive transition, in the frequency response of cantilever resonators.
The origin of such hysteresis may be intrinsic to the material or related to the presence of temperature or strain gradient across the suspended structure.
More experiments, including low-temperature ones and possibly external magnetic fields, are needed to shed light on this phenomenon.
§ ACKNOWLEDGMENTS
This work was carried out under the OXiNEMS project (www.oxinems.eu). This project has received funding from the European Union’s Horizon 2020 research and innovation programme under Grant Agreement No. 828784. We acknowledge financial support from the Università di Genova through the “Fondi di Ricerca di Ateneo” (FRA). We also acknowledge support from the Swedish infrastructure for micro- and nano-fabrication - MyFab.
§ OPEN DATA
The numerical data shown in the figures of the manuscript and the supplemental material can be downloaded from the Zenodo online repository: http://dx.doi.org/10.5281/zenodo.8109185
|
http://arxiv.org/abs/2307.01696v1
|
20230704130529
|
Preparation of matrix product states with log-depth quantum circuits
|
[
"Daniel Malz",
"Georgios Styliaris",
"Zhi-Yuan Wei",
"J. Ignacio Cirac"
] |
quant-ph
|
[
"quant-ph",
"cond-mat.stat-mech",
"cond-mat.str-el"
] |
Authors with equal contribution (listed alphabetically)
Department of Mathematical Sciences, University of Copenhagen, Universitetsparken 5, 2200 Copenhagen, Denmark
Authors with equal contribution (listed alphabetically)
Max-Planck-Institut für Quantenoptik, Hans-Kopfermann-Str. 1, 85748 Garching, Germany
Munich Center for Quantum Science and Technology (MCQST), Schellingstr. 4, 80799 München, Germany
Authors with equal contribution (listed alphabetically)
Max-Planck-Institut für Quantenoptik, Hans-Kopfermann-Str. 1, 85748 Garching, Germany
Munich Center for Quantum Science and Technology (MCQST), Schellingstr. 4, 80799 München, Germany
Max-Planck-Institut für Quantenoptik, Hans-Kopfermann-Str. 1, 85748 Garching, Germany
Munich Center for Quantum Science and Technology (MCQST), Schellingstr. 4, 80799 München, Germany
We consider preparation of matrix product states (MPS) via quantum circuits of local gates.
We first prove that faithfully preparing translation-invariant normal MPS of N sites requires a circuit depth T=Ω(log N).
We then introduce an algorithm based on the renormalization-group transformation to prepare normal MPS with an error ϵ in depth T=O(log (N/ϵ)), which is optimal.
We also show that measurement and feedback lead to an exponential speed-up of the algorithm, to T=O(loglog (N/ϵ)).
Measurements also allow one to prepare arbitrary translation-invariant MPS, including long-range non-normal ones, in the same depth.
Finally, the algorithm naturally extends to inhomogeneous MPS.
Preparation of matrix product states with log-depth quantum circuits
J. Ignacio Cirac
August 1, 2023
====================================================================
One of the most important tasks in many-body physics and quantum information science is the preparation of useful or relevant states.
This has spurred a large effort to find ways to prepare states for example adiabatically <cit.>, dissipatively <cit.>, or using quantum circuits.
A natural class of states to consider are matrix product states (MPS), because they efficiently approximate ground states of gapped local Hamiltonians <cit.>,
and are easy to describe and compute expectation values of.
Moreover, many paradigmatic states can neatly be expressed as MPS, such as the cluster <cit.>, GHZ <cit.>, W <cit.> and AKLT states <cit.>.
Several ways are known to prepare MPS.
Using unitary quantum circuits with strictly local gates,
all MPS can be prepared using a sequential quantum circuit
of depth T∝ N <cit.>.
This is provably optimal for long-range correlated states such as the GHZ state <cit.>. However, for so-called normal MPS <cit.> which have short-range correlations, shorter depths are possible. Indeed, when allowing for a small error , they can be obtained by acting on a product state with a constant-depth circuit of quasilocal gates—gates whose support grows (poly-)logarithmically with system size <cit.>.
However, such quasilocal gates have to be compiled into gates with strictly local support, and in the worst case such a compilation leads to circuits with a depth scaling exponentially in the support, and thus as poly(N).
However, since normal MPS all lie in the topologically trivial phase, one can construct adiabatic paths with a guaranteed gap <cit.>, which means normal MPS can provably be prepared adiabatically in T=O(polylog (N/ϵ)) <cit.> (also see <cit.>).
Despite these results, it remains unclear if the scaling of the state-of-the-art algorithm <cit.> is optimal, or if there exist even faster algorithms to prepare normal MPS.
Proving optimality requires finding a tight lower bound on the depth, or, equivalently, its complexity, which is believed to be difficult in general <cit.>.
Here we first resolve the question of asymptotically optimal preparation of normal translation-invariant (TI) MPS. We prove that any circuit faithfully preparing them requires a depth T=Ω(log N ), i.e., it has to scale at least logarithmically with N. We then introduce an algorithm that saturates this bound and prepares all normal TI-MPS in a circuit depth
T=O(log (N/ϵ))
using strictly local gates.
This is asymptotically faster than the previously fastest known algorithm (adiabatic preparation <cit.>) and also asymptotically optimal.
Moreover, the algorithm naturally extends to inhomogeneous MPS that are suitably short-range correlated.
If one has additionally access to measurements and feedback, it is known that MPS can be prepared exactly in a depth T=O(log N) by expressing them in terms of
the multiscale entanglement renormalization ansatz (MERA) <cit.>.
Including measurements also yields a speedup for our algorithm, and allows us to extend it to non-normal MPS, such that all TI-MPS can provably be prepared in depth
T=O(loglog (N/ϵ)).
This is exponentially faster than the best known measurement assisted protocol <cit.>.
It also shows that our lower bound can be violated with access to measurements.
As a by-product, our work also proves that the finite-range MERA <cit.> can approximate normal TI-MPS in O(loglog (N/ϵ)) layers.
Our algorithm fundamentally builds on the renormalization group (RG) transformation.
One RG step consists of blocking several neighboring sites and subsequently discarding short-range correlations.
In this process, a state asymptotically approaches its fixed point <cit.>, which for short-range correlated (including normal TI) MPS consists of nearest-neighbor entangled pairs <cit.>. This happens rapidly since it suffices to block only O(log (N/ϵ)) sites to approximate the fixed-point state <cit.>.
Our algorithm first prepares this fixed point, and subsequently re-introduces the short-range correlations by applying an isometry of support log (N/ϵ) [cf. <ref>(a)].
Our key contribution is that we can prove through an explicit construction (inspired by earlier works <cit.>) that this isometry can be implemented with a strictly local circuit of depth T=O(log (N/ϵ)) [cf. <ref>(b)]. When assisted by measurements [cf. <ref>(c)], the depth of the isometry can be further reduced, while the GHZ-like fixed point of long-range correlated MPS can be prepared in constant depth <cit.>. Together, this leads to the circuit depth T=O(loglog (N/ϵ)) to prepare almost arbitrary (including all TI) MPS.
Preliminaries.—
For simplicity, we first consider (normalized) TI-MPS,
|ϕ_N⟩∝∑_i_1, …, i_N=1^d tr( A^i_1⋯ A^i_N)|i_1⋯ i_N⟩,
and later extend to the inhomogeneous case. Above A^i are D× D matrices (D is the bond dimension) with i=1,…,d (physical dimension).
We will extensively use graphical notation and identify
(A^i)_jk = [diagram: a box A with physical leg i and virtual legs j (left) and k (right)].
To each tensor A we associate its transfer matrix
E_A = ∑_i=1^d (A^i)^* ⊗ A^i
(depicted graphically as A and A^* stacked and contracted over the physical index).
A tensor is called normal, if (i) it is irreducible (A^i have no nontrivial common invariant subspace), and (ii) E_A has a unique largest eigenvalue λ_1 = 1 and no other of the same magnitude <cit.>.
Its correlation length is defined via the subleading eigenvalue ξ = -1/ln(|λ_2|).
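As a concrete illustration of these definitions, the short sketch below builds E_A for a random tensor (the physical and bond dimensions are illustrative assumptions, not tied to any state discussed in the text), rescales it so that λ_1 = 1, and extracts the correlation length from the subleading eigenvalue:

```python
import numpy as np

rng = np.random.default_rng(1)
d, D = 2, 3   # physical and bond dimension (illustrative choices)
A = rng.standard_normal((d, D, D)) + 1j * rng.standard_normal((d, D, D))

# Transfer matrix E_A = sum_i (A^i)^* (x) A^i on the doubled virtual space
E = sum(np.kron(A[i].conj(), A[i]) for i in range(d))

evals = np.sort(np.abs(np.linalg.eigvals(E)))[::-1]
E /= evals[0]               # rescale so that the leading eigenvalue is lambda_1 = 1
evals /= evals[0]

lam2 = evals[1]
xi = -1.0 / np.log(lam2)    # correlation length from the subleading eigenvalue
print(f"|lambda_2| = {lam2:.3f}  ->  xi = {xi:.2f} sites")
```

A generic random tensor is normal with probability one, so this construction suffices for illustration.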
After a gauge transformation <cit.>, E_A of a normal tensor can be brought into the form
E_A = |ρ⟩⟨𝟙| + R ,
where the leading right eigenvector ρ > 0 (Hermitian and positive definite) <cit.>, ⟨𝟙|ρ⟩ = tr(ρ) = 1, and R has spectral radius less than one.
Blocking q sites together yields a new tensor B
B^i_1 ⋯ i_q = A^i_1 ⋯ A^i_q (q sites blocked),
with physical dimension d^q, the same bond dimension D, and transfer matrix E_B = E_A^q.
E_B approaches its fixed point in the limit q →∞ <cit.> which, for normal tensors, is E_∞ = |ρ⟩⟨𝟙|.
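The approach to the fixed point under blocking can be checked directly: E_B = E_A^q converges to the rank-one matrix |ρ⟩⟨𝟙| at a rate set by |λ_2|^q = e^{-q/ξ}. A self-contained numerical sketch, again with a generic random tensor standing in for A:

```python
import numpy as np

rng = np.random.default_rng(1)
d, D = 2, 3
A = rng.standard_normal((d, D, D)) + 1j * rng.standard_normal((d, D, D))
E = sum(np.kron(A[i].conj(), A[i]) for i in range(d))
E /= np.abs(np.linalg.eigvals(E)).max()        # leading eigenvalue -> 1

# Rank-one fixed point |rho><1| from the leading right/left eigenvectors of E
w, vr = np.linalg.eig(E)
wl, vl = np.linalg.eig(E.T)
right = vr[:, np.argmax(np.abs(w))]
left = vl[:, np.argmax(np.abs(wl))]
E_inf = np.outer(right, left) / (left @ right)

for q in (2, 4, 8, 16, 32):
    dist = np.linalg.norm(np.linalg.matrix_power(E, q) - E_inf, 2)
    print(f"q = {q:2d}:  ||E_A^q - E_inf|| = {dist:.2e}")
```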
Our goal is to devise an algorithm that approximates the target N-site MPS |ϕ_N⟩ by |ϕ̃_N⟩
with error ϵ = ϵ(ϕ_N, ϕ̃_N), where
ϵ(ϕ,ψ) = 1-|⟨ϕ|ψ⟩|
and |ϕ̃_N⟩ is prepared using a local quantum circuit.
Our first result is that it is impossible to approximate normal TI-MPS well in depth o(log N). Subsequently, we provide an explicit algorithm with the asymptotically optimal depth O(log (N/ϵ)).
Lower bound.—
Given (i) {|ϕ_N⟩},
a sequence of normalized TI-MPS on N sites, generated by a normal tensor A, with finite correlation length ξ>0 [cf. <ref>],
and (ii) {|ψ_N⟩},
a sequence obtained from depth-T local quantum circuits applied to product states,
we are interested in determining how fast T has to grow in order to approximate the MPS well, as measured by the error ϵ=ϵ(ϕ_N,ψ_N).
We prove here that no quantum circuit with depth T=o(log N) can faithfully approximate this class.
If T=o(log N) there is some N_0 such that for all N>N_0 we have ϵ>1/2.
The proof can be found in <cit.>.
To establish this result, we use the fact that |ψ_N⟩ have a strictly finite light cone, whereas in a normal TI-MPS |ϕ_N⟩ correlation functions decay only exponentially.
This leads to a mismatch in the expectation value of correlators outside the light cone, which gives a lower bound on the error between the two states.
We additionally use the fact that sufficiently distant parts of the system are statistically independent, such that the error accumulates with increasing system size N, unless the circuit depth grows sufficiently quickly.
The algorithm.—
We now present the key steps for our algorithm. We will (i) approximate |ϕ_N⟩ by |ϕ̃_N⟩, then (ii) show that |ϕ̃_N⟩ can be efficiently prepared, and (iii) prove that the approximation error decays sufficiently fast with N. We begin with the case of normal TI-MPS and return to the general case later.
Approximation through the fixed-point state.—To make the approximation, we follow the steps of the RG transformation <cit.>.
After blocking q sites, we perform a polar decomposition on the blocked tensor B, interpreting it as a map from the D^2-dimensional virtual space to the d^q-dimensional physical space.
This way we can write B = V P where V is an isometry with V^† V = _D^2 and P > 0 is positive definite[Later, we will have q scaling with the system size, so here we assume that B is injective <cit.>.
This is always true for normal tensors after blocking finite (independent of N) sites <cit.>.].
Thus E_B, built from B and B^*, coincides with the transfer matrix built from P and P^* alone, since the isometry cancels (V^† V = 𝟙); for the fixed-point part P_∞ this transfer matrix reduces to |ρ⟩⟨𝟙|.
The approximation consists of replacing P by its fixed-point version P_∞ in the tensor B, while keeping the isometry V intact.
That is,
B = V P ≈ V P_∞ ≡ B̃ .
Later we will assign meaning to the approximation sign in <ref> by bounding the global error between the MPS |ϕ_N⟩ and |ϕ̃_N⟩ resulting from the two tensors, B and B̃. To obtain a vanishing error in the thermodynamic limit we will need q ∝log N, which we assume for now and justify subsequently.
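Numerically, this step amounts to reshaping the blocked tensor into a d^q × D² matrix and performing a polar decomposition. A minimal sketch using scipy's polar routine on a generic random tensor (dimensions and blocking range are illustrative assumptions):

```python
import numpy as np
from scipy.linalg import polar

rng = np.random.default_rng(2)
d, D, q = 2, 2, 4          # illustrative dimensions and blocking range

A = rng.standard_normal((d, D, D)) + 1j * rng.standard_normal((d, D, D))

# Block q sites: B^{i1...iq} = A^{i1} ... A^{iq} (matrix product over the bond)
B = A
for _ in range(q - 1):
    B = np.einsum('ajk,bkl->abjl', B, A).reshape(-1, D, D)

# View B as a map from the D^2-dimensional virtual space to the d^q physical space
B_map = B.reshape(d**q, D * D)
V, P = polar(B_map)        # B = V P with V an isometry and P >= 0

print("||V^dag V - 1|| =", np.linalg.norm(V.conj().T @ V - np.eye(D * D)))
print("P positive semidefinite:", np.all(np.linalg.eigvalsh(P) > -1e-12))
```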
Preparing the approximate state.—The approximate state |ϕ̃_N⟩ can be prepared by acting on the fixed-point state with a product of unitaries of support q (for simplicity D = d in the illustration)
|ϕ̃_N⟩ = (⊗_i=1^N/q U_i) ⊗_i=1^N/q ( |ω⟩_R_i L_i+1 |0 … 0⟩_C_i ),
where each unitary U_i acts on block i, whose left part L_i and right part R_i host halves of the neighboring entangled pairs and whose central register C_i is initialized in |0 … 0⟩.
The unitary is constructed such that it implements the required isometry when acting on a product state[From dimension counting D^2 d^ℓ≥ d^q thus ℓ∼ q.] |0⟩^⊗ℓ over the “central” region (ℓ = 2 in the illustration)
U (𝟙 ⊗ |0 … 0⟩_C) = V .
Note that for normal TI-MPS the fixed-point state |Ω⟩ = ⊗_i=1^N/q|ω⟩_R_i L_i+1 is a tensor product of entangled pairs,
|ω⟩_R_i L_i+1 = (𝟙 ⊗ √ρ) ∑_i=1^D |ii⟩_R_i L_i+1 ,
each with support over the “right” and “left” Hilbert spaces of neighboring sites (R_i = L_i+1 = D). It can thus be prepared from a product state with a constant depth circuit.
So far, it is not obvious that the resulting circuit can be expressed efficiently in terms of strictly local gates, because the unitaries in <ref> are only quasilocal, i.e., having support q ∝log N.
While a naive bound on the circuit depth would be poly(N), here we use the fact that U comes from an MPS to show that in reality it can be implemented in T=O(q). We do this by providing two explicit and exact decompositions of U in terms of gates with constant support, the “sequential-RG” and the “tree-RG”.
The sequential-RG circuit.—We can express the unitary in <ref> in terms of the original MPS by applying the inverse of P to its virtual legs[The subsequent derivation remains valid also for non-injective tensors B. In that case P^-1 is understood as pseudo-inverse.],
V = (A ⋯ A) P^-1 = A' ⋯ A' C (tensor-network diagrams omitted),
where in the last step we set A'^i = A^i ⊗ 𝟙_D and contracted P^-1 with the rightmost A' to obtain C.
As in sequential preparation of MPS <cit.> and in the left-canonical form <cit.>, we can now iteratively apply singular value decompositions, starting from the tensor on the left and moving right, but stopping before the last tensor[The method can easily be generalized to absorb P^-1 into any of the tensors.
With C in the bulk, the sequential circuit is obtained by repeated SVD starting both left and right and stopping at C, as in the mixed canonical form <cit.>.].
This defines a new set of tensors that describe the same isometry V but now each tensor is a local isometry (arrows indicate isometry direction, q=4 in illustration)
V = V_q ⋯ V_1 , V_i : ℂ^D'_i → ℂ^d D'_i+1 ,
with every V_i an isometry, V_i^† V_i = 𝟙_D'_i, satisfying D'_i ≤ D^2 (D'_q+1 = 1).
Importantly, C = V_1 is automatically also an isometry, as
V^† V = C^† C = 𝟙 ,
since the isometries V_q, …, V_2 cancel pairwise in V^† V, leaving only C^† C, which must therefore equal the identity.
Since this sequential circuit comprises q sites, its depth is O(q). This scaling is unchanged if we additionally take into account that the inputs of the unitary in <ref> are separated by O(q) sites, which requires one to implement SWAP gates.
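The sweep behind this decomposition is essentially the one used to bring an MPS into left-canonical form: the chain of tensors representing V is swept from left to right, each tensor is replaced by a local isometry via an SVD, and the remainder is pushed into the next tensor. The sketch below implements this on placeholder tensors (random stand-ins, not the specific A' and C of the construction above):

```python
import numpy as np

def sweep_to_local_isometries(tensors):
    """Left-to-right SVD sweep: each (chi_l, d, chi_r) tensor becomes a local
    isometry; the non-isometric remainder is pushed into the next tensor."""
    out, carry = [], np.eye(tensors[0].shape[0])
    for k, T in enumerate(tensors):
        T = np.einsum('ab,bic->aic', carry, T)      # absorb remainder from the left
        if k == len(tensors) - 1:
            out.append(T)                           # the last tensor absorbs everything
            break
        chi_l, d, chi_r = T.shape
        U, S, Vh = np.linalg.svd(T.reshape(chi_l * d, chi_r), full_matrices=False)
        out.append(U.reshape(chi_l, d, -1))         # local isometry
        carry = S[:, None] * Vh                     # remainder, pushed to the right
    return out

# Placeholder chain of 5 tensors with physical dimension 2 and bond dimension 4
rng = np.random.default_rng(3)
d, chi = 2, 4
chain = [rng.standard_normal((1, d, chi))] + \
        [rng.standard_normal((chi, d, chi)) for _ in range(4)]
local = sweep_to_local_isometries(chain)

# every tensor except the last is now an isometry (orthonormal columns)
for T in local[:-1]:
    M = T.reshape(-1, T.shape[-1])
    assert np.allclose(M.conj().T @ M, np.eye(T.shape[-1]))
print("all bulk tensors are local isometries")
```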
The tree-RG circuit.—Blocking two neighboring sites followed by a polar decomposition is the basis for the real-space RG transformation and halves the correlation length <cit.>.
Instead of directly blocking q sites, we can repeatedly apply this transformation k ∼log_2 q times to the same effect (illustration below, and in <ref>(b)).
This generates a tree-like circuit with k layers, in which each layer but the lowest consists of isometries from dimension D^2 to D^4 (below A_k is obtained from A by blocking k sites, and q=8)
[diagrams omitted: two blocked tensors A_4 are each written as an isometry V^(1) acting on a positive part P^(1); blocking the pair and polar-decomposing again yields a second isometry layer V^(2) acting on P^(2), and so on.]
In <ref>, the lowest layer is again the part that is replaced by the fixed-point state in our algorithm, i.e., a product of |ω⟩ [cf. <ref>].
In this scheme, the lowest isometry V^(k) acts across a distance q.
Though not strictly local, this can be done in a depth O(q) utilizing SWAP gates.
Subsequent isometries act over distances q/2, q/4 and so forth, leading to an overall circuit depth T=O(q).
Approximation error.—So far, we constructed efficient circuits for preparing |ϕ̃_N⟩, having assumed that we block q sites. The scaling of q is a consequence of the following Lemma, which is adapted from Ref. <cit.>.
Given a sequence of TI-MPS generated from a normal tensor, and for all γ < 1/2,
ϵ(ϕ_N, ϕ̃_N) = O( N/q e^-γ q/ξ).
The proof can be found in <cit.>.
Using <ref>, it follows that q = O (log (N / ϵ)). In particular, blocking q = ⌈ 2 ξ (1 + η) ln N ⌉ ∝ log N sites gives ϵ = O(N^- η) for any η > 0.
We also numerically illustrate the exponential decay of <ref> in <cit.> for preparing the 1D AKLT state <cit.> and an MPS family with tunable correlation length <cit.>, which demonstrates that the circuit is also efficient in practice.
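For a feel of the numbers, the blocking range q = ⌈2ξ(1+η) ln N⌉ suggested above can be tabulated directly; a minimal sketch with an assumed correlation length ξ and margin η:

```python
import numpy as np

xi, eta = 4.0, 0.5   # assumed correlation length and margin (illustrative)
for N in (10**3, 10**4, 10**5, 10**6):
    q = int(np.ceil(2 * xi * (1 + eta) * np.log(N)))
    print(f"N = {N:>7d}:  blocking range q = {q} sites")
```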
Inhomogeneous short-range correlated MPS.—Our results can be straightforwardly extended to MPS that have a finite correlation length, but are not TI.
The setting here is that we are given a sequence of MPS {|ϕ_N⟩} with bond dimension at most D.
We will consider such a sequence to have finite correlation length if, after blocking q = O (log N) times, the resulting states can be approximated up to quasi-local isometries by a state consisting of nearest-neighbor entangled pairs | Ω⟩ = ⊗_i=1^N/q|ω^i⟩_R_i L_i+1,
with an error
ϵ(Ω,ϕ_pos) → 0 as N →∞.
Here
|ϕ_pos⟩, the MPS built from the positive parts … P_i-1 P_i P_i+1 … of the blocked tensors (diagram omitted),
arises after blocking q sites and keeping the positive part of the decomposition of |ϕ_N⟩.
Then the preparation scheme consists of preparing | Ω⟩ and implementing the isometry, decomposed with either of the two methods. The resulting total depth is again O(log (N/ϵ)) with error ϵ(Ω,ϕ_pos).
As an illustration, in <cit.> we numerically show that this protocol can prepare inhomogeneous random MPS <cit.> efficiently.
Preparations using measurements.—Measurements and subsequent conditional unitaries can make state preparation much faster <cit.>.
Here we elaborate on how such measurements can be used in our algorithm.
Tree-RG circuit with measurements.—Local measurements and conditional local unitaries are the standard framework to perform quantum teleportation <cit.>, which can be used to reduce the depth of the tree-RG circuit.
Isometries appearing in <ref> act on a constant number of sites which, although spatially separated, can be teleported at neighboring registers with a constant overhead.
This can be achieved by creating nearest-neighbor entangled pairs, then performing simultaneous measurements, and correcting (without post-selection) based on the measurement outcomes <cit.> (this process is also detailed in Ref. <cit.>).
Therefore every isometry in <ref> takes constant time using measurement. Crucially, however, the tree-RG circuit requires only O(loglog (N/ϵ)) layers (in contrast to Ref. <cit.>).
Since the fixed-point state can be prepared in constant time as before, this gives a preparation algorithm for short-range correlated MPS with depth O(loglog (N/ϵ)).
Long-range MPS using measurements.—Another consequence of including measurements is that the creation of GHZ-like states |χ_M⟩ = ∑_i=1^b α_i |i⟩^⊗ M becomes possible in only constant depth <cit.>.
These states are closely related to the fixed points of TI-MPS[Here we consider TI-MPS with constant D. Note that this does not include all states that are TI and have area law entanglement (e.g., W-state <cit.>).] which, up to an isometry, take the form <cit.>
|Ω'⟩ = ∑_j = 1^b α_j^(N)⊗_i=1^N/q|ω_j⟩_R_i L_i+1.
The normal case corresponds to b=1 for which |Ω'⟩ = |Ω⟩ while, in general, b is upper bounded by the number of blocks in the canonical form and α_j^(N) may depend on N <cit.>.
Importantly, the different |ω_j⟩ are orthogonal <cit.>, which suggests a preparation procedure for |Ω'⟩. First create |χ_N/q⟩, which can be done in constant depth with measurements (following, e.g., Ref <cit.>). Subsequently, apply in parallel the isometries W: |j⟩↦|ω_j⟩_R_i L_i+1 such that |Ω'⟩ = W^⊗ N/q|χ_N/q⟩, which also takes constant depth.
In <cit.> we show how to explicitly obtain a state of the form <ref> that approximates the target |ϕ_N⟩ well, up to local isometries, by blocking q ∝log (N/ϵ) sites. As a result, following the same steps as in the tree-RG circuit with measurements, we have a scheme that approximates all TI-MPS (short- or long-range correlated) with depth T=O(loglog (N/ϵ)) [cf. <ref>]. If instead measurements are only used for the preparation of |χ_N/q⟩, the depth is O(log (N/ϵ)).
Our construction generalizes to inhomogeneous long-range correlated MPS exactly as in the short-range case.
Connection to MERA.—The circuit in the tree-RG scheme can be interpreted as a finite-range MERA with O(loglog N) layers, namely a shallow tensor tree acting on the fixed-point state.
Specifically, the isometries V^(i) [cf. <ref>] are identified with the isometries in finite-range MERA, and all disentanglers are the identity, save for the first layer, which is identified with the single-layer of unitaries that prepare the fixed-point state.
Hence, within the approximation error ϵ,
normal TI-MPS ⊂ finite-range MERA with O(loglog N) layers.
Discussion and outlook.— Our results also imply that MPS in the same phase can be transformed into each other using a log-depth circuit, in contrast to the well-known quasilocal evolution corresponding to a poly-logarithmic-depth circuit <cit.>.
It would be interesting to explore whether our results could be exploited for applications other than state preparation.
Specifically, a number of protocols <cit.> implicitly or explicitly depend on the ability to prepare (or disentangle) MPS using a sequential circuit. It may be possible to replace the sequential circuit with ours to reduce the circuit depth in these protocols. Another direction would be to extend our lower-bound proof and the preparation algorithm to prepare certain higher-dimensional tensor network states <cit.>.
Acknowledgments.—
We thank Yujie Liu and Rahul Trivedi for insightful discussions.
DM acknowledges support from the Novo Nordisk Fonden under grant number NNF22OC0071934. GS is supported by the Alexander von Humboldt Foundation.
The research is part of the Munich Quantum Valley, which is supported by the Bavarian state government with funds from the Hightech Agenda Bayern Plus. We acknowledge funding from the German Federal Ministry of Education and Research (BMBF) through EQUAHUMO (Grant No. 13N16066) within the funding program quantum technologies—from basic research to market.
The numerical calculations were performed using the ITensor Library <cit.>.
§ PROOF OF <REF> AND EXTENSION TO NON-NORMAL TENSORS
Here we first show how to explicitly obtain the approximate state | ϕ_N ⟩ for the non-normal case. Then we prove <ref>', which bounds the approximation error both for normal and non-normal TI-MPS and thus immediately implies <ref>.
We consider general TI-MPS defined by
|ϕ_N⟩ = 1/c_N∑_i_1, …, i_N tr( A^i_1⋯ A^i_N)|i_1⋯ i_N⟩
where c_N>0 is the normalization constant.
After a gauge transformation, every tensor A can be expressed in terms of a basis of normal tensors <cit.>
A^i = ⊕_j=1^b diag (μ_j,1 ,… , μ_j,m_j) ⊗ A_j^i
where the normal A_j are in canonical form II and produce orthogonal vectors in the thermodynamic limit <cit.>.
Without loss of generality we assume | μ_j,k | ≤ 1 with at least one of them having magnitude exactly one.
The normal case thus corresponds to b=1 and a single |μ_1,1| = 1.
From <ref>, it follows that the general form of a TI-MPS is
|ϕ_N⟩ = 1/c_N∑_j=1^b β_j |v_j⟩,
where
β_j = ∑_k=1^m_jμ_j,k^N
and |v_j⟩ is the (unnormalized) MPS generated by the normal tensor A_j (i.e., <ref> without c_N).
Let us now define the approximate state | ϕ_N ⟩, which generalizes <ref>.
For that, as in the normal case, we block q sites and perform a polar decomposition of the tensor B^i_1 … i_q = A^i_1… A^i_q. This results in B = V P, where P: ℂ^D^2→ℂ^D^2 is positive-semidefinite and the isometry satisfies V^† V = Π for Π the projector onto the image of P.
Since V is an isometry, P inherits the block structure of A
P^i_1 i_2 = ⊕_j=1^b diag (μ_j,1^q ,… , μ_j,m_j^q) ⊗ P_j^i_1i_2
where i_1,i_2 = 1 , …, D and all P_j are normal tensors. We can therefore express
|ϕ_N⟩ = 1/c_N( ⊗_i=1^N/q V_i ) ∑_j=1^b β_j |v_ pos,j⟩.
where |v_ pos,j⟩ is the unnormalized MPS generated by P_j.
The approximate state |ϕ_N ⟩ is defined by replacing each normal tensor P_j with its fixed-point counterpart P_j,∞ (analogous to <ref>).
Equivalently, we replace each |v_ pos,j⟩ by the corresponding fixed-point state |Ω_j⟩. That is,
|ϕ_N ⟩ = 1/c_N( ⊗_i=1^N/q V_i ) ∑_j=1^b β_j |Ω_j⟩
where |Ω_j⟩ = ⊗_i=1^N/q|ω_j⟩_R_i L_i+1 are (normalized) nearest-neighbor entangled pairs over ℂ^D^2.
Importantly, since a basis of normal tensors was used for the decomposition of <ref>, they satisfy local orthogonality ⟨ω_j | ω _j'⟩ = δ_j j' <cit.>.
<Ref> also justifies the form of <ref>, for which
α_j^(N) = β_j/√(∑_l |β_l|^2),
where we explicitly added a superscript (N) to remember that β_j may have a decaying contribution from | μ_j,k | < 1 [<ref>].
This contribution vanishes in the limit of blocking q →∞, but here is taken into account, because neglecting it leads to an unwanted additional error contribution.
Because of this, ∑_jβ_j|Ω_j⟩ in <ref> may strictly speaking not be a fixed point of the RG transformation in the non-normal case.
We now turn to the error estimate. For that, the key lemma comes from Ref. <cit.>, where it was shown that for the normal tensor A_j
| 1 - |⟨Ω_j |v_ pos,j⟩| | = O ( N/qexp(-γ q/ξ_jj ) )
for all 0< γ< 1/2 (see Eq. (S29) in the Supplemental Material of Ref. <cit.>). Here
ξ_jj = - 1/ ln | λ_2^(j) |
denotes the associated correlation length, i.e., λ_2^(j) the subleading eigenvalue of the transfer matrix of the normal tensor A_j.
We are now ready to state and prove our result.
[Approximation error]
Consider a sequence of TI-MPS | ϕ_N ⟩ generated by the tensor A. Then for all 0 < γ < 1/2:
* If A is normal with correlation length ξ,
ϵ = O( N/q e^-γ q/ξ).
* For a general non-normal A,
and q=o(N)
ϵ = O( N/q e^-γ q/ξ_diag)
where ξ_diag = max_jξ_jj.
(i) Let us start with the case of a normal tensor. As detailed in the main text,
| ϕ_N ⟩ = ⊗_i=1^N/q V_i | Ω⟩ .
Then, by the triangle inequality
ϵ = 1 - | ⟨ϕ_N | ϕ_N ⟩ |
≤| 1 - c_N |⟨ϕ_N | ϕ_N ⟩ | | + | c_N - 1 | |⟨ϕ_N | ϕ_N ⟩ | .
The first term is exactly equal to the LHS of <ref>. Using Cauchy-Schwarz and that c_N = √(tr(E_A^N)), the second is O(exp(-N/ξ)).
Since q =o(N), the first term dominates.
(ii) We now move on to the non-normal case.
Here, another set of length scales ξ_jj' plays a role, which is defined as follows.
Consider the inner product of two MPS over N sites, ⟨ v_j | v_j'⟩,
where |v_j⟩, |v_j'⟩ are generated by normal tensors A_j, A_j' that belong to different basis elements [cf. <ref>].
Then
| ⟨ v_j | v_j'⟩ | = O ( e^-N / ξ_jj')
where
ξ_jj' = - 1 / lnτ_max
where τ_max is the spectral radius of the mixed transfer matrix E_jj'=∑_iA_j^i*⊗ A_j'^i and τ_max<1 (see Lemma A.2 in <cit.>).
Using triangle and Cauchy-Schwarz inequalities,
ϵ = 1 - | ⟨ϕ_N | ϕ_N ⟩ |
≤ | 1 - c_N/c_N | ⟨ϕ_N | ϕ_N ⟩ | | + | c_N/c_N - 1 ||⟨ϕ_N | ϕ_N ⟩ |.
For the first term, we have
| 1 - c_N c_N/c_N^2 | ⟨ϕ_N | ϕ_N ⟩ | | ≤
∑_j |β_j|^2 | 1 - ⟨Ω_j | v_ pos,j⟩ |/∑_l |β_l|^2
+ | ∑_jj'β_j^* β_j'⟨Ω_j | v_ pos,j'⟩/∑_l |β_l|^2|,
where we used that c_N^2=∑_j|β_j|^2.
To bound the first fraction, we use <ref> and get O ( N/qexp(-γ q/ξ_diag ) ) where ξ_diag = max_jξ_jj.
By <ref> the second fraction is O(exp(-N/ ξ_off-diag)) where ξ_off-diag = max_j j'ξ_jj'.
The remaining term is
| c_N/c_N - 1 | ≤ | c_N^2 / c_N^2 -1 |
≤ | ∑_j |β_j|^2 (1 - ⟨ v_j | v_j ⟩)/∑_l|β_l|^2|
+ |∑_j j'β_j^* β_j'⟨ v_j' | v_j⟩/∑_l|β_l|^2| .
The terms of the first sum are O(exp(-N/ ξ_jj)), while those of the second sum O(exp(-N/ ξ_jj')).
Putting everything together, we get
ϵ = O( N/q e^-γ q/ξ_diag)
+ O ( e^-N/ξ_off-diag),
where ξ_diag = max_jξ_jj and ξ_off-diag = max_j j'ξ_jj'.
If we further assume q=o(N), the second contribution disappears, giving <ref>.
§ PROOF OF <REF>
Before we present the proof, let us introduce the following lemma, which we will use to distinguish the states based on the mismatch between states with strictly finite correlation length and states with exponentially decaying correlations.
Let {|ϕ_N⟩} be a sequence of TI-MPS generated by an injective tensor A with finite correlation length ξ>0 [cf. <ref>].
Then, we can always find two local operators Ø_1,Ø'_s acting on spins 1 and s with ||Ø||=||Ø'||=1 such that for any integer s>1 and sufficiently large N,
⟨ϕ_N|Ø_1|ϕ_N⟩ = ⟨ϕ_N|Ø_s'|ϕ_N⟩ = 0,
⟨ϕ_N|Ø_1Ø_s'|ϕ_N⟩ ≥ c e^-(s-1)/ξ
where c>0 is independent of N,s.
Consider the connected correlation function
Δ= ⟨ϕ_N|Ø_1Ø_s'|ϕ_N⟩ - ⟨ϕ_N|Ø_1|ϕ_N⟩⟨ϕ_N|Ø_s'|ϕ_N⟩,
where Ø_1 and Ø_s are two (potentially different) operators placed at sites 1 and s.
We have
Δ = 1/c_N^2 [ tr(E_𝟙^N-s-1 E_Ø E_𝟙^s-1 E_Ø') - 1/c_N^2 tr(E_𝟙^N-1 E_Ø) tr(E_𝟙^N-1 E_Ø') ],
where the normalization c_N = √(tr(E_𝟙^N)), and
E_X = ∑_i,j=1^d ⟨ i|X|j⟩ (A^i)^*⊗ A^j, X ∈{𝟙,Ø,Ø'}
where d is the physical dimension of the MPS.
Given the spectrum of E_ we can always take N sufficiently large so that we can approximate with an arbitrarily small error,
E_𝟙^N-1 = E_𝟙^N-s-1=|R_1⟩⟨ L_1|+O(e^-N/ξ),
where ξ=-1/ln(|λ_2|) and 1 = λ_1 > | λ_2| > … are the eigenvalues of E_𝟙.
Note that in the main text we use the gauge in which |R_1⟩=|ρ⟩ and |L_1⟩=|𝟙⟩.
Here,
⟨ L_1|R_1⟩=1, so that c_N≈ 1 and
Δ≈∑_i=2^D^2λ_i^s-1⟨ L_1|E_Ø|R_i⟩ ⟨ L_i|E_Ø'|R_1⟩,
where D is the bond dimension and
we have written
E_𝟙^s-1=∑_i λ_i^s-1|R_i⟩⟨L_i|.
Since the tensor A is injective, we can always choose Ø (and Ø'), such that the corresponding transfer matrix E_Ø=|A⟩⟨ B| for arbitrary A,B (up to a normalization constant).
In particular, we can use this to impose that
⟨ L_i|E_Ø |R_i⟩= ⟨ L_i|E_Ø' |R_i⟩ = 0, ∀ i
⟨ L_1|E_Ø |R_i⟩= ⟨ L_i|E_Ø' |R_1⟩ = 0, ∀ i> 2,
⟨ L_1|E_Ø |R_2⟩⟨ L_2|E_Ø' |R_1⟩ = c'>0.
The first line ensures (<ref>), while the second and third ensure (<ref>) for sufficiently large N, with c=c'/2, where 1/2 is an arbitrary constant chosen for concreteness.
Now, we can prove <ref>.
Let {|ϕ_N⟩} be a sequence of TI normalized MPS on N sites generated by a normal tensor A, and {|ψ_N⟩} a sequence of states obtained by applying a depth-T local quantum circuit to a product state, and define the error ϵ=1-|⟨ϕ_N|ψ_N⟩|.
[restated]
If T=o(log N) there is some N_0 such that for all N>N_0 we have ϵ>1/2.
Let us assume that T=o[log(N)] and T>2ξ,
since we can always add layers of identity operators to increase the depth of the circuit.
We approximate {|ϕ_N⟩} through {|ϕ̃_N⟩} [cf. <ref>], obtained by blocking q_N=⌈ 2(1+η)ξ ln N ⌉ sites with η>0, and use <ref> to bound the error as
ϵ=1-|⟨ϕ_N|ϕ̃_N⟩|
<c_0N^-η
for some constant c_0 independent of system size.
We take N such that we have a large number of blocks, all of the same size, q_N, except for the last one, which may be larger.
This is always possible, as q_N=O(log N).
We also take N large enough to ensure q_N>T.
We thus have
d(ϕ_N,ψ_N) ≥ d(ψ_N,ϕ̃_N)-√(2c_0)N^-η/2,
where d(ρ,σ)=||ρ-σ||_1/2 is the trace distance <cit.>,
as well as an upper bound on trace distance from fidelity combined with <ref>
d(ϕ_N,ϕ̃_N)≤√(1- |⟨ϕ_N |ϕ̃_N ⟩|^2)≤√(2c_0)N^-η/2.
In the following, we will find a lower bound to the first term in <ref> to make the difference larger than 1/2.
We will also drop the subscript N to simplify notation.
To obtain a bound on the distance of |ψ_N⟩ and |ϕ_N⟩, let us consider instead a suitable subsystem.
To that end, we divide the chain into ⌊ N/(2q)⌋
blocks of size 2q each, with the last block potentially smaller than 2q.
We then trace over all 2q spins at the sites contained in the intervals [4mq+1,2(2m+1)q], with m=0,1,… in both states |ϕ⟩ and |ψ⟩.
In case the last block we constructed is smaller than 2q, we trace it as well.
If we perform such an operation on |ϕ⟩⟨ϕ|, we obtain a product state
ρ=ρ_0^⊗ k,
which follows from the definition of |ϕ_N⟩ and the fact that it is invariant under translation by q sites.
We have
k = ⌊ N/4q⌋.
Analogously, applying the same trace to |ψ⟩⟨ψ|
we also obtain a product state, because q>T,
σ=σ_1⊗…⊗σ_k.
Using the fact that the trace distance is contractive under tracing, and bounding it in terms of the Uhlmann fidelity, we have <cit.>
d(ϕ,ψ) ≥ d(ρ,σ) ≥ 1- F(ρ,σ),
with the Uhlmann fidelity between two density matrices ρ and σ defined as F(ρ, σ)=Tr√(√(ρ)σ√(ρ)).
Given that ρ and σ are product states, we have
F(ρ,σ)=∏_i=1^k F(ρ_0,σ_i) ≤ (1-δ)^k/2
where δ= min_i d(ρ_0,σ_i)^2, and where we have used another bound between the fidelity and the trace distance <cit.>.
Next we will lower bound δ using <ref>.
However, we have to be a bit careful since this lemma applies to ϕ instead of ϕ.
Fortunately, we can use <ref> to replace one with the other. For the sake of concreteness, we will bound d(ρ_0,σ_1) but the same analysis applies to every σ_i.
Let us take s=2T+1, and the operators Ø,Ø' from <ref> to define
a_ = ⟨ϕ||ϕ⟩,
a_ = (ρ_0 )=⟨ϕ||ϕ⟩,
b_ = (σ_1 )=⟨ψ||ψ⟩,
where ∈{Ø_1, Ø_s',Ø_1Ø_s' }.
Given that ||Ø||=||Ø'||=1, we can bound
d(ρ_0,σ_1) ≥max_=Ø_1,Ø'_s,Ø_1Ø_s'||a_|-|b_||.
According to <ref>, a_Ø_1=a_Ø'_s=0, and
a_Ø_1Ø'_s> μ= c_1 e^-T/ξ.
In order to use <ref>, we need to connect a_ to a_.
Using <ref>, we choose sufficiently large N to obtain d(ϕ,ϕ) ≤μ/3.
This immediately implies that |a_Ø|,|a_Ø'| < μ/3 and a_Ø_1Ø'_s > 2 μ/3.
Moreover, since ψ is created from a product state by a depth-T circuit, every connected correlation for operators at a distance larger than 2T vanishes.
Since s=2T+1 we therefore have b_Ø_1Ø'_s=b_Ø_1 b_Ø_s'.
Thus, <ref> can be written as
d(ρ_0,σ_1) ≥max_x(|x-μ/3|,2μ/3-x^2)>μ/3,
since μ<1 for sufficiently large N,
from which it immediately follows that δ> μ^2/9.
Putting <ref> and <ref> into <ref>, we arrive at
d(ϕ,ψ) > 1-(1-μ^2/9)^k/2 - √(2c_0)N^-η .
The last term is negligible for large N.
If then k>2/δ, one has that (1-δ)^k/2< 1/e and thus d(ϕ,ψ)>√(3/4), which implies ϵ>1/2.
Thus, for N for which k>2/δ, we have ϵ>1/2.
This is fulfilled for N obeying
k>N/5q>Nγ/10ξlog N>2/δ>18/c_1e^2T/ξ.
Since T=o[log(N)],
we can always find an N_0 such that this is fulfilled for all N>N_0.
§ NUMERICAL RESULTS
Here we provide numerical evidence that our proposed strategies are capable of preparing relevant short-range correlated (inhomogeneous) MPS under open boundary conditions, which are of the form
|ϕ_N⟩_ obc = ∑_i_1,i_2, …, i_N A_[1]^i_1 A_[2]^i_2⋯ A_[N]^i_N|i_1 i_2⋯ i_N⟩,
where A_[1]^i_1 and A_[N]^i_N are tensors on the boundary, while the tensors in the bulk are defined similarly to those in <ref>. To prepare such states, we construct the isometries analytically, as delineated in the main text, and implement a local optimization scheme <cit.> to variationally find the fixed-point state of the form in <ref> that minimize the error between the approximate state [cf. <ref>] and the target state. Since the variational space comprises only (superpositions of) product states of entangled pairs [cf. <ref>], this optimization scheme is found to be highly efficient. Moreover, this local optimization strategy can effectively encapsulate the inhomogeneity present in the target state, which makes it especially suitable for preparing non-TI MPS and states with open boundary conditions, and can be directly extended to the case of long-range MPS.
In the following, we present numerical results for three types of short-range correlated MPS: (1) the 1D AKLT state <cit.>, (2) an MPS family with tunable correlation length <cit.>,
and (3) inhomogeneous random MPS <cit.>.
§.§ Preparation of AKLT state and the MPS family <cit.>
The 1D AKLT state is a paradigmatic state in condensed matter physics, with important applications in measurement-based quantum computation <cit.>. The 1D AKLT state can be formed by first preparing a product state of singlets of virtual qubits connecting neighboring sites of the 1D chain, and then projecting the two virtual qubits at each site, belonging to two neighboring pairs, onto their symmetric subspace (with spin S=1). In our calculation, we consider that the spin S=1 at each site is formed by the symmetric subspace of two actual qubits in the quantum device, and the resulting AKLT state is an MPS of bond dimension D=2 and physical dimension d=4.
Figure <ref>(a) illustrates the scaling of the error per block ϵ/M (where the number of blocks M = N/q)[cf. <ref>] as a function of blocking range q (measured in units of the correlation length ξ, with ξ_ AKLT = 1/ ln 3) for the AKLT state. In our calculations, we chose the number of blocks M=200. As expected, both our circuit constructions have comparable performance, with ϵ/M exhibiting an exponential decay with q/ξ, in accordance with the bound of <ref>. This verifies the predicted scaling of T=O(log (N/ϵ)).
To further assess the efficacy of our algorithm for MPS with varying correlation lengths ξ, we also investigate the MPS class of bond (physical) dimension D=2 (d=2), with matrices in <ref> of the form <cit.>
A_[j]^0 =([ 0 0; 1 1 ]), A_[j]^1 =([ 1 g; 0 0 ]), ∀ j ∈ (2,...,N-1),
and the boundary tensors are chosen as the 2× 2 identity matrix. The correlation length of this MPS class can be tuned by the parameter g as ξ=|(ln1-g/1+g) |^-1. The results on the scaling of the error per block /M is shown in <ref>(a). We see that, for the AKLT state and the MPS class of various correlation lengths ξ≈ 4, 16, the scaling of /M show almost the same behavior as /M ∼exp(-γ_ num q/ξ) with γ_ num≈ 2, where the number of blocks M=N/q. Notably, γ_ num is much larger than the analytically derived value 0<γ<1/2 [cf. <ref>]. Therefore, for these two classes of states, in practice, one can prepare them faster than predicted in the worst-case bound <ref>.
§.§ Scaling of CNOT depth for various schemes
To make the scaling T=O(log (N/ϵ)) of our protocol more relevant to current devices, where multi-qubit unitaries are decomposed into CNOT gates and single-qubit rotations, we present a simple comparative study of the scaling of the CNOT depth T_CNOT required to prepare the MPS class [cf. <ref>] with correlation length ξ≈ 4 at fidelity F=|⟨ϕ_N |ϕ̃_N ⟩|^2=0.9.
We compare four different schemes: (1) sequential-RG scheme, (2) tree-RG scheme, (3) tree-RG scheme assisted by measurements, and (4) the sequential scheme. We present the result directly here and provide an explanation of our estimation of T_ CNOT later.
Figure <ref>(b) shows T_CNOT for these four different schemes. As anticipated, for both the sequential-RG and the tree-RG schemes, we observe the overall scaling T=O(log N), and these methods result in a significantly smaller T_CNOT compared to that of the sequential method, particularly for large system sizes N. Additionally, due to the fact that the blocking range q can only increase discretely, we observe plateau-like features in the scaling of T_CNOT in <ref>(b). Moreover, when the tree-RG scheme is further assisted by measurements, T_CNOT is reduced even further compared to the stand-alone tree-RG scheme, yielding the smallest T_CNOT among all schemes, with only T_CNOT≈ 100 when creating this state of N=10^6 qubits.
§.§.§ Estimation of the two-qubit gate count
This technical subsection elaborates on how we estimate T_ CNOT for different schemes studied in <ref>(b). For the sake of simplicity, our focus lies on the MPS with the bond (physical) dimension D=2 (d=2), which aligns with the results presented in <ref>(b). Estimation for MPS with varying physical or bond dimensions can be accomplished in a similar manner.
In general, the schemes considered in <ref>(b) involve two types of gates: (1) Isometries mapping from m qubits to n≥ m qubits and (2) SWAP gates utilized to change the qubit location of the fixed-point state in the sequential-RG scheme and to implement long-range isometries in the tree-RG scheme.
It is well known that a SWAP gate can be decomposed as three CNOT gate. For the isometries (1), we estimate the CNOT depth of each isometry, denoted by T_ iso(m,n), using its theoretical lower bound <cit.>
T_iso(m, n) = ⌈ (2^(n+m+1) - 2^(2m) - 2n - m - 1)/4 ⌉ .
This theoretical limit could potentially be reached by existing gate decomposition approaches <cit.>. Specifically, for preparing MPS with d=D=2, we have T_ iso(1,2) = 2 for the sequential scheme, T_ iso(2,3) = 10 for the sequential-RG scheme, and T_ iso(2,4) = 26 for the tree-RG scheme.
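A small helper reproducing these counts from the closed-form bound above (a sketch; nothing beyond the formula itself is evaluated):

```python
from math import ceil

def t_iso(m: int, n: int) -> int:
    """CNOT-depth lower bound for an m-to-n qubit isometry (formula above)."""
    return ceil((2 ** (n + m + 1) - 2 ** (2 * m) - 2 * n - m - 1) / 4)

print(t_iso(1, 2))  # 1-to-2 isometry (sequential scheme)    -> 2
print(t_iso(2, 3))  # 2-to-3 isometry (sequential-RG scheme) -> 10
print(t_iso(2, 4))  # 2-to-4 isometry (tree-RG scheme)       -> 26
```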
In the following we count the gates used in each scheme for preparing MPS of system size N and bond (physical) dimension D=2 (d=2):
* The sequential scheme uses one layer of 0-to-2 qubit isometry on the boundary and N-2 layers of 1-to-2 qubit isometries in the bulk <cit.>.
* The sequential-RG scheme with blocking range q uses q-2 layers of SWAP gates and q-2 layers of 2-to-3 qubit isometries.
* The tree-RG scheme iteratively blocks the chain ∼log(q) times. The first blocking (blocking to injectivity) produces a layer of two-qubit gates. After that, the m-th blocking (m ≥ 2) produces a layer of 2-to-4 qubit isometries, where the largest distance between qubits within the same isometry is 2^m. We implement such long-distance isometries using local gates by first swapping the qubits to the center region of the isometry, then locally implementing the 2-to-4 isometry, and finally swapping the qubits back. This leads to an additional (2^m - 4) layers of SWAP gates for each m.
* The measurement-assisted tree-RG scheme simply eliminates the SWAP cost in the aforementioned tree-RG scheme, since now the long-distance isometries can be executed by gate teleportation <cit.>. It's noteworthy that there are also Bell-state ancilla preparation, measurement, and post-processing costs involved in this scheme, but our focus here is solely on the circuit depth of the scheme.
Based on the above components, we can directly estimate T_ CNOT as a function of the system size N and the required blocking range q. Here, q can be obtained from the scaling of the error per block (analogous to that in <ref>(a)) and the required state preparation fidelity F, thereby resulting in the T_ CNOT shown in <ref>(b).
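For concreteness, a rough estimate of T_CNOT following the counting above can be written as the Python sketch below; the per-layer bookkeeping (parallel blocks contributing one layer each, SWAPs counted as three CNOTs) is only our reading of the description, not the exact procedure used to produce <ref>(b), and t_iso denotes the lower bound defined earlier.

from math import ceil, log2

def t_iso(m, n):  # lower-bound CNOT depth of an m-to-n isometry (see above)
    return ceil((2 ** (n + m + 1) - 2 ** (2 * m) - 2 * n - m - 1) / 4)

SWAP_CNOT = 3  # one SWAP gate = three CNOT gates

def depth_sequential(N):
    # one 0-to-2 isometry at the boundary + (N - 2) layers of 1-to-2 isometries
    return t_iso(0, 2) + (N - 2) * t_iso(1, 2)

def depth_sequential_rg(q):
    # (q - 2) layers of SWAP gates + (q - 2) layers of 2-to-3 isometries,
    # with blocks acting in parallel
    return (q - 2) * SWAP_CNOT + (q - 2) * t_iso(2, 3)

def depth_tree_rg(q, with_measurements=False):
    depth = 3  # first blocking: one layer of generic two-qubit gates
    for m in range(2, ceil(log2(q)) + 1):
        depth += t_iso(2, 4)                       # layer of 2-to-4 isometries
        if not with_measurements:                  # gate teleportation removes SWAPs
            depth += (2 ** m - 4) * SWAP_CNOT      # route qubits to/from the isometry
    return depth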
§.§ Preparation of inhomogeneous random MPS
To illustrate the effectiveness of our protocol for preparing inhomogeneous MPS, we employ it to instances of inhomogeneous random MPS. Since any MPS can be brought to a canonical form with isometric tensors, a natural way to define the corresponding ensemble is by choosing each tensor randomly according to the Haar measure of the unitary group U(dD) <cit.>.
As these states correspond to the ground states of disordered local Hamiltonians <cit.>,
they can be considered as representative states of the trivial topological phase <cit.>, and it is of interest to explore various properties of this class <cit.>.
Due to the inherent inhomogeneity, we refrain from defining a correlation length for such states. Nevertheless, since they are generally short-range correlated <cit.>, we anticipate that our protocol can efficiently prepare this class of states. In <ref>, we display the scaling of the error per block ϵ/M with the blocking range q for 1000 randomly sampled states of d=D=2 using the tree-RG protocol, and observe the asymptotic scaling
ϵ/M ∼ exp(−c q)
in all instances (note that the number of blocks M=N/q), with c varying only slightly between different individual cases. This scaling is reminiscent of the behavior predicted in <ref>, and it directly implies that our protocols can prepare such inhomogeneous random MPS efficiently.
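In practice, the decay constant c can be extracted from such data by a linear fit in log space, as in the short sketch below; the numbers used here are placeholders rather than values measured from the sampled states.

import numpy as np

# Sketch: fit (error per block) ~ exp(-c q) on placeholder data (not the paper's).
q_vals = np.array([2, 4, 6, 8, 10])
err_per_block = np.array([1e-1, 1.2e-2, 1.5e-3, 1.9e-4, 2.3e-5])

slope, _ = np.polyfit(q_vals, np.log(err_per_block), 1)
c_fit = -slope
print(f"fitted decay constant c ~ {c_fit:.2f}")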
|
http://arxiv.org/abs/2307.03323v1
|
20230706223206
|
Machine Learning to detect cyber-attacks and discriminating the types of power system disturbances
|
[
"Diane Tuyizere",
"Remy Ihabwikuzo"
] |
cs.LG
|
[
"cs.LG",
"cs.SY",
"eess.SY"
] |
[email protected]
Carnegie Mellon University Africa
Kigali
Rwanda
[email protected]
Carnegie Mellon University Africa
Kigali
Rwanda
This research proposes a machine learning-based attack detection model for power systems, specifically targeting smart grids. By utilizing data and logs collected from Phasor Measuring Devices (PMUs), the model aims to learn system behaviors and effectively identify potential security boundaries. The proposed approach involves crucial stages including dataset pre-processing, feature selection, model creation, and evaluation. To validate our approach, we used a dataset consisting of 15 separate datasets obtained from different PMUs, relay snort alarms, and logs. Three machine learning models, Random Forest, Logistic Regression, and K-Nearest Neighbour, were built and evaluated using various performance metrics. The findings indicate that the Random Forest model achieves the highest performance, with an accuracy of 90.56% in detecting power system disturbances, and has the potential to assist operators in decision-making processes.
Machine Learning to detect cyber-attacks and discriminating the types of power system disturbances
Remy Ihabwikuzo
==================================================================================================
§ INTRODUCTION
Although cyber-physical systems have many advantages in areas such as power distribution grids and wastewater treatment plants, they also carry disadvantages and threats. A smart grid is an electrical grid equipped with automation, communication, and information technology systems that can monitor power flows from points of generation to points of consumption <cit.>. If these systems fail, the result can be massive damage or harm to people as well as the shutdown of all infrastructure.
Nowadays, most businesses have regulations and policies in place to ensure their security. Phasor Measurement Units (PMUs) have been used to increase system performance as power systems become increasingly complex in their architecture <cit.>. They provide information that can help to make quick decisions. Hackers, on the other hand, can create a trigger that will cause the system to fail and inflict significant damage on smart grids. Machine learning techniques offer pattern recognition, learning abilities, and rapid identification of potential security boundaries <cit.>. This paper proposes a machine learning approach for detecting system behaviors by learning from historical data and relevant information. Specifically, we present a machine learning-based attack detection model for power systems that can be trained using data and logs collected by PMUs.
To accomplish this, the dataset was first preprocessed. For model selection, 10-fold cross-validation was used to build Random Forest, Logistic Regression, and K-Nearest Neighbor models, and the results were compared using four performance metrics: macro F1, recall, accuracy, and precision scores. Furthermore, feature selection was performed and the results were compared to those of models without feature selection; the best model found was Random Forest, and finally, this best model was optimized.
The structure of this paper is as follows: Section <ref> provides an overview of related research in the field. In Section <ref>, we detail our proposed approach by highlighting the conducted data processing, model building, testing various machine learning methods, and experimental results as well as discussing the findings. Lastly, Section <ref> offers concluding remarks.
§ LITERATURE REVIEW
Smart grids, are vulnerable to cyber-attacks due to their reliance on automation, communication, and information technology systems <cit.>. Hackers target these systems to disrupt the power supply, cause damage, or gain unauthorized access to critical infrastructure. As highlighted <cit.>, the consequences of successful attacks on power systems can be severe, leading to widespread power outages, financial losses, and even endangering public safety. Therefore, there is an urgent need for effective detection and mitigation strategies to protect power systems from cyber threats.
Machine learning techniques have emerged as promising approaches for enhancing the security of power systems. These techniques offer the ability to analyze large volumes of data, detect patterns, and identify anomalies indicative of potential attacks <cit.>. Phasor Measurement Units (PMUs) play a crucial role in this context, as they provide real-time data on power system dynamics, enabling the development of accurate machine learning models <cit.>. By leveraging historical data and logs collected by PMUs, these models can learn system behaviors and detect deviations that may indicate cyber-attacks.
Several intrusion detection system (IDS) approaches have been proposed for smart grid security, including anomaly-based detection techniques, communication traffic analysis, and leveraging power system theories <cit.> <cit.> <cit.> <cit.> <cit.>. However, these approaches have limitations in terms of detecting different types of attacks, scalability, and capturing invalid changes in the physical system.
In this study, our goal is to utilize machine learning to detect cyber-attacks and accurately classify different types of power system disturbances. We hypothesize that machine learning algorithms can effectively detect disturbances and classify potential security threats in power systems. By addressing the limitations of existing approaches and harnessing the power of machine learning, we aim to enhance the security and resilience of power systems against cyber-attacks.
§ PROPOSED APPROACH
§.§ Dataset
The dataset describes power system disturbances. It comprises 15 separate datasets collected and recorded by PMUs 1–4, relay snort alarms, and logs. Each has 129 columns, and the target attribute has three classes, No event, Natural, and Attack, as shown in Figure <ref>.
All columns were numerical except the target attribute, which was categorical. The total number of observations across all datasets was 73,037; after combining them, a 2% sample was drawn for this experiment. No duplicates or missing values were found. However, infinity values were present, outliers were detected using Isolation Forest, and Principal Component Analysis was used to visualize the detected outliers (Figure <ref>).
§.§ Data preprocessing and preparation
To prepare the dataset for analysis, several preprocessing steps were performed. Firstly, any infinity values present in the dataset were eliminated. Additionally, outliers were identified using the Isolation Forest algorithm and subsequently removed. To handle non-numerical values, a label encoder was applied to convert them into numerical representations. Moreover, as observed in Figure <ref>, the dataset exhibited class imbalance. To address this issue, the Synthetic Minority Oversampling Technique (SMOTE) was employed to augment the samples in the minority class. Lastly, to ensure uniformity in the dataset, standardization was carried out by scaling all the features using standard scalers.
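For illustration, the preprocessing steps described above could be assembled as in the following sketch using scikit-learn and imbalanced-learn; the stand-in data and the target column name ("marker") are assumptions for the example, not the actual dataset schema.

import numpy as np
import pandas as pd
from sklearn.ensemble import IsolationForest
from sklearn.preprocessing import LabelEncoder, StandardScaler
from imblearn.over_sampling import SMOTE

# Stand-in for the combined PMU dataset ("marker" = assumed name of the target column).
rng = np.random.default_rng(0)
df = pd.DataFrame(rng.normal(size=(600, 8)), columns=[f"f{i}" for i in range(8)])
df["marker"] = rng.choice(["NoEvent", "Natural", "Attack"], size=600, p=[0.2, 0.3, 0.5])

df = df.replace([np.inf, -np.inf], np.nan).dropna()          # eliminate infinity values
X = df.drop(columns=["marker"]).to_numpy()
y = LabelEncoder().fit_transform(df["marker"])               # encode the categorical target

keep = IsolationForest(random_state=0).fit_predict(X) == 1   # remove detected outliers
X, y = X[keep], y[keep]

X_bal, y_bal = SMOTE(random_state=0).fit_resample(X, y)      # oversample minority classes
X_scaled = StandardScaler().fit_transform(X_bal)             # standardize all features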
§.§ Exploratory data analysis and data visualization
To explore the data and understand patterns among features, the distribution of each feature was examined; an example is presented in Figure <ref> using a histogram. The distribution of the R1-PA1:VH feature closely resembled that of the original dataset, indicating that this particular sample serves as a representative example of the overall dataset.
Furthermore, correlation analysis was performed to assess the relationships between the features and the target variable. The results were presented in Figure <ref>, showcasing the most correlated variables. It was found that the top 14 features exhibited strong correlations with the target variable. This suggests that these features hold valuable information and have a significant impact on predicting the target variable. The correlation analysis aids in selecting the most relevant features for subsequent modeling and analysis, ensuring that the chosen variables capture important patterns and relationships within the dataset.
§.§ Model creation and evaluation
Three machine learning models, namely Random Forest, Logistic Regression, and K-Nearest Neighbor, were constructed for analysis. To evaluate the performance of each model, 10-fold cross-validation was applied, ensuring robustness and reliable results. Various metrics were used to assess the models, including F1 macro, Precision macro, Recall macro, and Accuracy. Since the dataset underwent resampling to address the class imbalance, these metrics were particularly relevant in evaluating the models' performance on the balanced dataset.
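A minimal sketch of this model comparison is given below; the hyperparameter settings are illustrative defaults rather than the exact configurations used in the study, and synthetic data stand in for the preprocessed, balanced PMU features.

from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.neighbors import KNeighborsClassifier
from sklearn.model_selection import cross_validate

# Stand-in for the preprocessed, balanced PMU feature matrix and labels.
X, y = make_classification(n_samples=1500, n_features=128, n_classes=3,
                           n_informative=20, random_state=0)

models = {
    "Random Forest": RandomForestClassifier(random_state=0),
    "Logistic Regression": LogisticRegression(max_iter=1000),
    "KNN": KNeighborsClassifier(),
}
scoring = ["f1_macro", "precision_macro", "recall_macro", "accuracy"]

for name, model in models.items():
    scores = cross_validate(model, X, y, cv=10, scoring=scoring)
    print(name, {m: round(scores[f"test_{m}"].mean(), 3) for m in scoring})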
To determine the impact of feature selection on model performance, the comparison among models was conducted both on the full set of features and after feature selection. The feature selection method employed was mutual information, which measures the dependency of features on the target value. This approach assists in identifying the most informative and relevant features for accurate predictions. Figure <ref> presents the results of this analysis.
Based on the comparison, the Random Forest model emerged as the best-performing model. Subsequently, hyperparameter tuning was carried out to optimize the selected features. The parameters adjusted during hyperparameter tuning included the number of trees, maximum depth, and criterion selection. By fine-tuning these parameters, the Random Forest model can be optimized to achieve the best possible performance and accuracy for the specific task at hand.
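A compact sketch of these two steps, mutual-information-based selection of the top features followed by a grid search over the Random Forest hyperparameters, is shown below; the value k=40 follows the text, while the parameter grid itself is illustrative.

from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_selection import SelectKBest, mutual_info_classif
from sklearn.model_selection import GridSearchCV

# Stand-in data; in the study these are the preprocessed PMU features.
X, y = make_classification(n_samples=1500, n_features=128, n_classes=3,
                           n_informative=20, random_state=0)

X_top = SelectKBest(mutual_info_classif, k=40).fit_transform(X, y)   # top-40 features

param_grid = {"n_estimators": [100, 300, 500],
              "max_depth": [None, 10, 20],
              "criterion": ["gini", "entropy"]}
search = GridSearchCV(RandomForestClassifier(random_state=0),
                      param_grid, cv=10, scoring="f1_macro")
search.fit(X_top, y)
print(search.best_params_, round(search.best_score_, 3))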
§.§ Experiments results
In general, certain features within the dataset were found to exhibit a high correlation with each other, as illustrated in Figure <ref>. Notably, features such as 'R3-PM9:V', 'R2-PM9:V', 'R4-PM1:V', and 'R3-PM8:V' displayed a strong correlation. However, when considering the correlation between these features and the target variable, the relationship was relatively weaker.
Additionally, a comparison was conducted among the K-Nearest Neighbor (KNN), Random Forest, and Logistic Regression models. The results demonstrated that the Random Forest model performed the best, achieving an F1 macro score of 90.46%, an accuracy of 90.56%, a precision macro score of 90.97%, and a recall macro score of 90.57%. Figure <ref> provides a visual representation of these findings. The second-best performing model was the KNN model, although the specific metrics associated with its performance were not mentioned in the provided context.
Furthermore, the Mutual Information technique was utilized to select the best features from the dataset. Figure <ref> illustrates the scores assigned to each feature based on their relevance. From this analysis, the top 40 features with the highest scores were selected for further modeling.
Using these selected features, the same machine learning algorithms were constructed and compared once again. The performance of each model was evaluated using metrics such as F1 score, precision, recall, and accuracy, Figure<ref>. Notably, the Random Forest (RF) model demonstrated strong performance, achieving a Macro F1 score of 86.16%.
Surprisingly, when comparing the model built with feature selection to the one without, it was found that the model utilizing all features performed better. This unexpected result could be attributed to the potential overfitting of the data since we only used a subset of features.
Additionally, it was observed that the Logistic Regression model did not perform well in this analysis, indicating that it may not be suitable for capturing the complexities present in the dataset or may require further refinement in terms of hyperparameter tuning or feature engineering.
Moreover, the benchmark model was Random Forest (Figure <ref>), which was then used for hyperparameter tuning; the accuracy score improved from 89.54% to 90.08%. It can therefore be concluded that model parameters should be optimized for the usage scenario: after optimization, the model is more sensitive to the data collected from the power system and can better distinguish the situations those data correspond to.
§.§ Discussion
Previous studies have recommended the application of preprocessing techniques to improve the performance of classifiers, such as balancing the dataset <cit.>. These findings align with the results obtained in the current study, which also demonstrate that Random Forests exhibit strong precision performance <cit.>. Furthermore, when comparing different algorithms, the tree-based algorithm Random Forest outperforms KNN and Logistic Regression.
According to Junejo and Goh <cit.>, the success of the Random Forest algorithm can be attributed to the fact that the Programmable Logic Controller (PLC) used in power systems is programmed using relational ladder logic. Ladder logic is a rule-based language that executes rules in sequential order, resembling a control logic system. The tree-based algorithms, including Random Forest, attempt to relearn this control logic or understand the normal behavior of the system. This compatibility between the underlying logic of the power system and the tree-based algorithms could explain the superior performance of Random Forest in this context <cit.>.
§ CONCLUSION
This report utilizes the Random Forest, KNN, and Logistic Regression machine learning algorithms to detect power system disturbances. All evaluation approaches showed that Random Forest remained the best algorithm; it is therefore recommended for classifying scenarios related to detecting cyber-attacks and controlling system operations. However, an increased amount of data may increase both accuracy and time complexity. Moreover, as a recommendation, deep learning and big data can be integrated in future work.
ACM-Reference-Format
|
http://arxiv.org/abs/2307.02497v1
|
20230704082752
|
Multi-gauge Hydrological Variational Data Assimilation: Regionalization Learning with Spatial Gradients using Multilayer Perceptron and Bayesian-Guided Multivariate Regression
|
[
"Ngo Nghi Truyen Huynh",
"Pierre-André Garambois",
"François Colleoni",
"Benjamin Renard",
"Hélène Roux"
] |
cs.LG
|
[
"cs.LG"
] |
Multi-gauge Hydrological Variational Data Assimilation: Regionalization Learning with Spatial Gradients using Multilayer Perceptron and Bayesian-Guided Multivariate Regression
Ngo Nghi Truyen Huynh^1,
Pierre-André Garambois^1,*,
François Colleoni^1,
Benjamin Renard^1,
Hélène Roux^2
^1INRAE, Aix-Marseille Université, RECOVER, 3275 Route Cézanne, 13182 Aix-en-Provence, France
^2Institut de Mécanique des Fluides de Toulouse (IMFT), Université de Toulouse, CNRS, 31400 Toulouse, France
^*corresponding author:
August 1, 2023
===========================================================================================================================================================================================================================================================================================================================================================
Tackling the difficult problem of estimating spatially distributed hydrological parameters, especially for floods on ungauged watercourses, this contribution presents a novel seamless regionalization technique for learning complex regional transfer functions designed for high-resolution hydrological models. The transfer functions rely on: (i) a multilayer perceptron enabling a seamless flow of gradient computation to employ machine learning optimization algorithms, or (ii) a multivariate regression mapping optimized by variational data assimilation algorithms and guided by Bayesian estimation, addressing the equifinality issue of feasible solutions. The approach involves incorporating the inferable regionalization mappings into a differentiable hydrological model and optimizing a cost function computed on multi-gauge data with accurate adjoint-based spatially distributed gradients.
Variational Data Assimilation, Distributed Hydrological Modeling, Artificial Neural Networks, Bayesian Estimation, Hydrological Regionalization
§ INTRODUCTION
Regardless of the improvements made in hydrological forward models and available data, hydrological calibration remains a challenging ill-posed inverse problem faced with the equifinality <cit.> of feasible solutions.
Most calibration approaches aim to estimate spatially uniform model parameters for a single gauged catchment, resulting in piecewise constant, discontinuous parameter fields for adjacent catchments. Moreover, these calibrated parameters are not transferable to ungauged locations, which represent the majority of the global land surface <cit.>. Therefore, prediction in ungauged basins remains a key challenge in hydrology <cit.>.
Regionalization approaches are employed to estimate hydrological model parameters in ungauged locations by transferring hydrological information from gauged locations. In early studies, the predominant method for regionalization involved individually calibrating catchments and then using multiple regression or interpolation techniques to transfer the calibrated parameter sets from gauged to ungauged locations <cit.>. This process can be referred to as post-regionalization <cit.>. However, post-regionalization approaches are limited to lumped parameters by catchment, thus ignoring within-catchment variabilities <cit.>. Furthermore, they are generally faced with the issue of equifinal parameter sets and hence equifinal estimated transfer laws, while spatial proximity is more adapted to densely gauged river networks and regions <cit.>.
A simultaneous regionalization approach, which involves optimizing a mapping between physical descriptors and model parameters (cf. parajka2005comparison,gotzinger2007comparison), is able to overcome most of the aforementioned problems and can be referred as "pre-regionalization". Typically, a Multiscale Parameter Regionalization (MPR) method, combining descriptors upscaling and pre-regionalization function in form of multi-linear regressions, implemented within a spatially distributed multiscale hydrological model (mHm), has been proposed by samaniego2010multiscale, and later applied to other gridded hydrological models in several applicative studies (e.g., mizukami2017towards, beck2020global). In all the above studies, state of the art optimization algorithms are used, especially Shuffle Complex Evolution algorithm (SCE) <cit.> in mizukami2017towards or Distributed Evolutionary
Algorithms (DEAP) <cit.> in beck2020global. Nevertheless, those optimization algorithms are limited to low-dimensional controls, which imposes the use of a limited number of descriptors in lumped multivariate pre-regionalization mappings, and thus restricts the capability to fully exploit the large amount of information available from multiple data sources with flexible formulations and adequate spatial rigidity.
In huynh2023learning, efficient pre-regionalization algorithms have been proposed for spatially distributed hydrological modeling based on descriptors-to-parameters mappings with neural networks or multivariate regressions in a variational data assimilation framework.
Despite the strong spatial constraint and regularizing effect introduced via pre-regionalization mappings, some sensitivity to the prior remains in the context of equifinality (model structural equifinality plus spatial equifinality), and its inference is explored here using the Bayesian weighting approach proposed in chelil2022variational,gejadze2022new.
In this work, we present a novel seamless regionalization method for learning the pre-regionalization mapping between physical data and conceptual parameters of spatially distributed hydrological models using information from multi-gauge river flow observations and high-resolution physical descriptors. We explore two approaches to infer the pre-regionalization mapping:
* Bayesian-Guided Multivariate Regional Regression (BGM2R): a multivariate polynomial regression approach, which combines high-dimensional optimization algorithms guided by a Bayesian estimation on the first guess;
* Artificial Neural Network Regionalization (ANNR) enabling a "seamless flow of gradient computation" and employing machine learning optimizers.
The proposed algorithms are implemented in the SMASH platform (see online documentation and tutorials at <https://smash.recover.inrae.fr>) available on public GitHub (<https://github.com/DassHydro-dev/smash>).
§ METHODOLOGY
The full forward model and the optimization process are schematized in Figure <ref>.
§.§ Forward Model and Cost Function
Let us consider observed discharge time series Q^*_g(t) at N_G observation cells of coordinates x_g∈Ω, g∈1,..,N_G with N_G≥ 1.
For each observation cell, the corresponding gauged upstream sub-catchment is denoted Ω_g so that Ω_ung = Ω∖( ∪_g=1^N_GΩ_g) is the remaining ungauged part of the whole spatial domain Ω.
Then, the rainfall and potential evapotranspiration fields are respectively denoted as P(x,t) and E(x,t), ∀ x∈Ω.
The classical forward model ℳ_rr is a dynamic operator projecting the input fields P(x,t) and E(x,t), given an input drainage plan 𝒟_Ω(x),
onto the discharge field Q(x,t) and states fields h(x,t) written as a multivariate function:
(h,Q)(x,t) = ℳ_rr[𝒟_Ω(x), P(x,t'), E(x,t'), h(x,0), θ(x), t], ∀ (x, t') ∈ Ω×[0,t]
where θ is the N_θ-dimensional vector of model parameters 2D fields that we aim to estimate regionally with the new algorithms proposed below, and h is the N_S-dimensional vector of internal model states.
In this study, the distributed hydrological model ℳ_rr is a parsimonious GR-like conceptual structure with the parameters vector
θ(x)= ( c_p (x), c_ft (x), k_exc (x), l_r (x) )^T, ∀ x∈Ω, which is the "gr-b" structure presented in smash2023.
Now, the full forward model ℳ is composed of the distributed hydrological model ℳ_rr on top of which is applied a pre-regionalization operator ℱ_R to estimate hydrological parameters θ such that:
ℳ=ℳ_rr[ . , θ(x)=ℱ_R(D(x),ρ)], ∀ x ∈Ω
This allows to constrain spatially and explain these spatial fields of conceptual model parameters θ(x) from physical descriptors D(x). The pre-regionalization operator ℱ_R being a descriptor-to-parameters mapping, with D the N_D-dimensional vector of physical descriptor maps covering Ω, and ρ the vector of tunable regionalization parameters that will be defined later.
A calibration cost function is defined in order to measure the misfit between simulated and observed discharge time series, respectively denoted Q_g(t) and Q_g^*(t), for g∈ 1.. N_G gauged cells. A convex differentiable objective function is classically defined as follows:
J=J_obs+γ J_reg,
with J_obs the observation term that measures the difference between observed and simulated quantities and J_reg a regularization term weighted by γ>0.
The observation term is J_obs=∑_g=1^N_Gw_gJ_g^*
with w_g a physical weighting function, J_g^* a local quadratic metric "at the station" (e.g., 1-NSE) involving the response of the direct model. Thus, J_obs depends on the control vector ρ through the direct model ℳ.
The multi-site calibration corresponds to N_G>1, while N_G=1 is a classical calibration on a single station where w_1=1. For N_G>1, several physical weighting expressions w_g are possible with the single constraint that ∑_g=1^N_G w_g=1. In this work, we simply use w_g=1/N_G for multi-gauge calibration.
The regional optimization problem writes as follows:
ρ̂ = arg min_ρ J(ρ)
§.§ Regional Calibration with BGMR
In this case, the pre-regionalization mapping ℱ_ℛ≡𝒫 with tunable parameters ρ consists in a multivariate polynomial regression between input physical descriptors D(x) and hydrological model parameters, θ(x,D,ρ) = 𝒫(D(x), ρ), such that:
θ_k(x,D,ρ_k) = s_k( α_{k,0} + ∑_{d=1}^{N_D} α_{k,d} D_d^{β_{k,d}}(x) ),
∀ k ∈ [1..N_θ], ∀ x ∈ Ω
with s_k(.) a Sigmoid-based transformation imposing bound constraints in the direct hydrological model. The lower and upper bounds are assumed to be spatially uniform for each parameter field θ_k of the hydrological model.
The optimization of the control vector ρ≡[(ρ_k)_k=1^N_θ]^T≡[(α_k,0,(α_k,d)_d=1^N_D)_k=1^N_θ]^T, i.e., solving problem <ref>, is performed using the L-BFGS-B algorithm <cit.>, adapted to high-dimensional controls, without bound constraints on the α_k,., whereas the exponents β_k,d are simply fixed to 1 (multi-linear pre-regionalization). This algorithm requires the gradient of the cost function with respect to the sought parameters ∇_ρ J. This gradient is computed by solving the adjoint model, which is obtained by automatic differentiation using the Tapenade engine <cit.>. The entire process is implemented in the SMASH Fortran source code, where the full forward model ℳ≡ℳ_rr(.,𝒫(.)) is a composition of both the hydrological model and the polynomial descriptors-to-parameters mapping. The convergence criterion involves reaching a maximum iteration limit or meeting conditions related to cost function change or gradient magnitude.
It is worth noting that determining a background value ρ^* is important for the convergence of this algorithm. It is used as a starting point for the optimization, and is defined from a spatially uniform prior θ̅^* as ρ^* ≡ [ α_k,0 = s^-1_k(θ̅_k^*), (α_k,d = 0, β_k,d = 1) ]^T, ∀(k,d) ∈ [1..N_θ]×[1..N_D], where s^-1_k(z) = ln((z - l_k)/(u_k - z)) is the inverse Sigmoid.
The spatially uniform low-dimensional (LD) prior θ̅^* is determined considering the cost function without pre-regionalization, i.e., ℳ≡ℳ_rr and ρθ, and classically using a global optimization algorithm (SBS in Michel1989).
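To make the mapping and the first-guess construction concrete, a small NumPy sketch is given below; the bounds, prior value, grid size and number of descriptors are illustrative, and the code is not part of the SMASH implementation.

import numpy as np

def s(y, l, u):                 # scaled sigmoid mapping R onto the interval (l, u)
    return l + (u - l) / (1.0 + np.exp(-y))

def s_inv(z, l, u):             # inverse sigmoid, used to set the first guess
    return np.log((z - l) / (u - z))

def theta_k(D, alpha0, alpha, l, u):
    # D: (N_D, nx, ny) descriptor maps -> one bounded parameter field (beta fixed to 1)
    return s(alpha0 + np.tensordot(alpha, D, axes=1), l, u)

# Illustrative first guess for one parameter: alpha_{k,0} = s^-1(prior), alpha_{k,d} = 0,
# so that the initial field is spatially uniform and equal to the prior value.
l_k, u_k, prior_k = 1.0, 1000.0, 200.0
alpha0_fg, alpha_fg = s_inv(prior_k, l_k, u_k), np.zeros(7)      # 7 physical descriptors

D = np.random.rand(7, 50, 60)                                    # descriptor maps on a 50 x 60 grid
field_fg = theta_k(D, alpha0_fg, alpha_fg, l_k, u_k)             # uniform field equal to prior_k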
A Bayesian-like estimator (cf. gejadze2022new, chelil2022variational) is used to look at the mean of the posterior distribution f(θ|Q^*) that is more stable in context of equifinality than searching its mode (inverse problem <ref>, maximum a posteriori probability (MAP) search is the essence of variational data assimilation).
A prior probability distribution f_θ̅ is used to generate a sample of spatially uniform parameter sets θ_i, ∀ i ∈ 1..N within the hypercube defined by parameters bounds [l_k,u_k], ∀ k ∈ 1..N_θ. The likelihood function is defined as:
ℒ_i^α = exp( -2^α (J_i/J_min - 1)^2 ),
where α is a parameter controlling the decay rate of this function that compares the value of J_i=J(θ_i) to J_min the minimum value of J_i over the sample of N parameter sets.
The posterior ensemble mean and variance are computed as follows:
θ̅^*,α = (1/K) ∑_i=1^N ( ℒ_α^i · θ_i ⊙ f_θ̅(θ_i) ),
Var(θ̅^*,α) = (1/K) ∑_i=1^N ( ℒ_α^i · (θ_i - θ̅^*,α) ⊙ (θ_i - θ̅^*,α) ⊙ f_θ̅(θ_i) ),
where K=∑_i=1^Nℒ^i_α· f_θ̅(θ_i) and "⊙" denotes the Hadamard product - simple scalar product between vectors here but usable with higher dimensional controls.
The parameter α is determined using the L-curve approach, considering a parametric curve { J( θ̅^*,α) , D^α}, α = -1,...,10, where
D^α=(Var(θ̅^*,α))^-1⊙(θ̅^*,α-θ̅^0)⊙(θ̅^*,α-θ̅^0)
is the probabilistic (Mahalanobis) distance between the estimate θ̅^*,α and the average prior θ̅^0=1/N∑_i=1^Nθ_i. The value of α is sought in a L-curve "corner" such that it minimizes both J( θ̅^*,α) and D^α.
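The estimator can be summarised in a few lines of NumPy, as in the sketch below; the cost values J_i are assumed to come from prior runs of the hydrological model, and the uniform prior density cancels out of the weights.

import numpy as np

def bayes_mean_var(thetas, costs, alpha):
    # thetas: (N, N_theta) sampled uniform parameter sets; costs: (N,) values J_i
    J_min = costs.min()
    L = np.exp(-(2.0 ** alpha) * (costs / J_min - 1.0) ** 2)   # likelihood L_i^alpha
    w = L / L.sum()                                            # K absorbs the uniform prior
    mean = (w[:, None] * thetas).sum(axis=0)
    var = (w[:, None] * (thetas - mean) ** 2).sum(axis=0)
    return mean, var

def d_alpha(mean, var, theta_prior_mean):
    # probabilistic (Mahalanobis-like) distance used for the L-curve corner search
    return (mean - theta_prior_mean) ** 2 / var

# L-curve scan: evaluate J(mean_alpha) and d_alpha for alpha = -1, ..., 10 and pick
# the corner that keeps both small (the cost evaluations themselves come from the model).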
§.§ Regional Calibration with ANNR
In this case, an ANN-based regional mapping ℱ_ℛ≡𝒩, consisting of a multilayer perceptron, aims to learn the descriptors-to-parameters mapping such that:
θ(x,D,ρ) = 𝒩(D(x), W, b), ∀ x ∈ Ω
where W and b are respectively weights and biases of the neural network, whose output layer consists in a scaling transformation based on the Sigmoid function in order to impose bound constraints on each hydrological parameters. The regional control vector
ρ≡[W, b]^T
is optimized by Algorithm <ref>, that uses spatial gradients computed by the adjoint model to minimize the cost function J(ρ)=J(Q^*,ℳ_rr(. , θ=𝒩(D,ρ))) in the present case.
The cost function depends on the forward model ℳ≡ℳ_rr(.,𝒩(.)), which is composed of two components in its numerical implementation: (i) an ANN implemented in Python, which produces the output θ that serves as input for (ii) the hydrological model ℳ_rr implemented in Fortran. To optimize J, we need its gradients with respect to ρ. The main technical difficulty here is achieving a "seamless flow of gradients" through back-propagation. To overcome this, we divide the gradients into two parts. First, ∇_θ J can be computed via automatic differentiation applied to the Fortran code corresponding to ℳ_rr. Then, ∇_ρθ is simply obtained by analytical calculus, given the explicit architecture of the ANN, consisting of a multilayer perceptron. The convergence criterion is simply reaching the maximum number of training iterations.
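The gradient splitting can be illustrated with plain NumPy as below: the adjoint code supplies dJ/dθ on the grid (a placeholder array here), and backpropagation through a one-hidden-layer perceptron yields dJ/dW and dJ/db. Layer sizes, activations and shapes are illustrative, not the actual SMASH implementation.

import numpy as np

rng = np.random.default_rng(0)
n_cells, n_desc, n_hidden, n_param = 1000, 7, 16, 4

D = rng.normal(size=(n_cells, n_desc))            # physical descriptors per cell
W1, b1 = rng.normal(size=(n_desc, n_hidden)), np.zeros(n_hidden)
W2, b2 = rng.normal(size=(n_hidden, n_param)), np.zeros(n_param)

# forward pass of the multilayer perceptron (sigmoid output to impose bounds)
h = np.tanh(D @ W1 + b1)
theta = 1.0 / (1.0 + np.exp(-(h @ W2 + b2)))      # bounded in (0, 1), rescaled elsewhere

dJ_dtheta = rng.normal(size=theta.shape)          # placeholder for the adjoint model output

# backprop: chain dJ/dtheta through the network to obtain dJ/d(W, b)
d_pre2 = dJ_dtheta * theta * (1.0 - theta)        # sigmoid derivative
grad_W2, grad_b2 = h.T @ d_pre2, d_pre2.sum(axis=0)
d_h = d_pre2 @ W2.T
d_pre1 = d_h * (1.0 - h ** 2)                     # tanh derivative
grad_W1, grad_b1 = D.T @ d_pre1, d_pre1.sum(axis=0)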
§ RESULTS
§.§ Numerical experiment
The proposed algorithms are tested on a highly challenging regionalization case from huynh2023learning: a high-resolution regional modeling of a flash flood prone area located in the South-East of France, with heterogeneous physical properties including karstic areas. Multiple gauges downstream of nested and independent catchments are simultaneously considered, enabling multi-gauge optimization. A total of 11 gauged catchments are employed as "donor" catchments for calibration, while 9 other catchments are treated as pseudo-ungauged for spatial validation to assess the regionalization capabilities of the proposed algorithms. In this study, a set of 7 physical descriptors (see Table <ref>) available over the whole French territory is used to learn the regional transfer functions.
In the following, we compare and analyze: (i) local uniform ρ≡θ̅ and fully spatially distributed ρ≡θ(x) calibrations for each gauge, which are respectively under- and over-parameterized hydrological optimization problems but serve as reference performances ("Uniform (local)" and "Distributed (local)"); multi-gauge regional calibrations with (ii) lumped model parameters ρ≡θ̅, which represents a "level 0" regionalization ("UR"); (iii) a multivariate linear mapping (i.e., ρ≡[α_k, 0, (α_k, d, 1)]^T) using a first guess obtained by a global optimization algorithm ("M2R"), or guided by a Bayesian estimation ("BGM2R"); and (iv) a multilayer perceptron (i.e., ρ≡[W, b]^T) ("ANNR").
Two study periods, namely P1 (August 2011 – August 2015) and P2 (August 2015 – August 2019), are considered for split sample testing. A two-fold cross-temporal calibration approach is employed, where the models are calibrated on one period and validated on the other period. In each case, we consider three types of validation: spatial validation (performance in pseudo-ungauged catchments during the calibration period), temporal validation (performance in gauged catchments during the validation period), and spatio-temporal validation (performance in pseudo-ungauged catchments during the validation period).
§.§ Regionalization performances and analysis
The performance of all calibration and regionalization methods is presented in Figure <ref>. Unsurprisingly, spatially uniform calibration (UR) leads to limited performance in calibration and poor performance in regionalization, especially when compared to the reference local spatially distributed calibration that is overparameterized. The pre-regionalization methods, which incorporate information from multi-gauge discharge as well as physical descriptor maps, all yield relatively satisfying performances in calibration, temporal validation, and spatio-temporal validation at pseudo-ungauged sites (median NSE scores higher than 0.4 when calibrated on P1). The regionalization approach based on ANN (ANNR) achieves the best results for both gauged and pseudo-ungauged catchments.
Regarding the determination of prior parameter sets for the multi-linear pre-regionalization mapping, the Bayesian estimation approach (LDB-FG) demonstrates fairly good performance, comparable to that obtained with the global heuristic algorithm (SBS-FG), in calibration and spatial validation, with only minor differences in temporal validation. We believe that this is reflective of the importance of exploring a Bayesian approach for the definition of the cost function, which would enable intrinsic weighting of model misfits to different gauged hydrological behaviors. Moreover, when considering the performances of M2R and BGM2R on P1 (upper sub-figure of Figure <ref>), the Bayesian approach exhibits markedly higher performance in calibration and validations, while relatively similar performances are observed on P2 (lower sub-figure of Figure <ref>). This difference may be attributed to variations in hydrological information on P1 and potentially higher data errors, which have a lesser impact on the Bayesian approach.
Table <ref> represents several statistical quantities of the distributed parameter maps obtained through different regionalization approaches. All methods result in distinct parameter maps and varying levels of temporal stability (see Figure <ref>). The ANNR leads to the most robust inference over P1 and P2, with remarkably stable average parameter values as well as spatial standard deviation over time. The priors inferred with SBS-FG or LDB-FG exhibit slight differences and also lead to a different optimum during pre-regionalization for P1. Interestingly, the opposite trend is observed for P2, where data uncertainty and model adequacy might be better, resulting in similar functioning points after regionalization despite substantially different priors determined with SBS-FG or LDB-FG.
Last but not least, during calibration on P1, M2R and BGM2R lead to a negative exchange coefficient (k_exc<0), despite starting from priors with positive exchange values (3.04 for SBS-FG (P1) and 2.9 for LDB-FG (P1)); ANNR also leads to a negative exchange. This result, a systematically significant negative exchange, is particularly noteworthy because the exchange coefficient directly impacts mass conservation. These findings relate to those on flash-flood water balance sensitivity and regionalization based on geological descriptors presented by garambois2015parameter for catchments in the same and nearby areas. Their event process-oriented and conservative model required an increase in modeled soil volume, while pedological and geological descriptors provided valuable constraining information, especially in the context of flash floods.
§ CONCLUSION
A Bayesian calibration algorithm has been tested in this study, on top of our Hybrid Variational Data Assimilation Parameter Regionalization (HVDA-PR) approach enabling seamless regionalization in hydrology.
The methods were tested in a challenging flash flood-prone area in the South-East of France, characterized by diverse physical properties and hydrological responses. Overall, the methods demonstrated satisfactory performance in several aspects: (i) accurate modeling of discharge at both gauged and pseudo-ungauged sites, and (ii) effective identification of conceptual parameters and extraction of information from physical descriptors.
Notably, the ANN-based regionalization method outperformed other approaches in terms of discharge accuracy and parameter stability. Bayesian prior estimation exhibited good performance and relative robustness, even in challenging cases like calibration on the period P1, where data uncertainty and model inadequacy were assumed to be higher. While the Bayesian method is computationally more demanding than traditional low-dimensional calibration algorithms, it can be efficiently parallelized. Moreover, the Bayesian approach can be extended to higher-dimensional contexts, such as determining semi-distributed priors and exploring spatial equifinality using our variational data assimilation algorithms.
Interestingly, in contrast to the ANN, the regression methods provided insights into more complex modeling situations and potential data-model discrepancies. This highlights the importance of maintaining both "classical" approaches and AI-based solutions in research and applications, particularly in the continuous development of physically and mathematically interpretable methodologies.
§ ACKNOWLEDGMENTS
The authors greatly acknowledge SCHAPI-DGPR and Météo-France for providing data used in this work;
Igor Gejadze for scientific discussion; SCHAPI-DGPR, ANR grant ANR-21-CE04-0021-01 (MUFFINS project, "MUltiscale Flood Forecasting with INnovating Solutions"), and NEPTUNE European project DG-ECO for funding support.
apacite
|
http://arxiv.org/abs/2307.00893v1
|
20230703094413
|
Generating Reliable Pixel-Level Labels for Source Free Domain Adaptation
|
[
"Gabriel Tjio",
"Ping Liu",
"Yawei Luo",
"Chee Keong Kwoh",
"Joey Zhou Tianyi"
] |
cs.CV
|
[
"cs.CV"
] |
Generating Reliable Pixel-Level Labels for Source Free Domain Adaptation
Gabriel Tjio
Centre for Frontier
AI Research (CFAR)
[email protected]
Ping Liu*
Centre for Frontier
AI Research (CFAR)
[email protected]
Yawei Luo
Zhejiang University
[email protected]
Chee Keong Kwoh
Nanyang Technological
University
[email protected]
Joey Zhou Tianyi
Centre for Frontier
AI Research (CFAR)
[email protected]
===========================================================================================================================================================================================================================================================================================================================================================================================================
[ * Ping Liu is the corresponding author. ]
This work addresses the challenging domain adaptation setting in which knowledge from the labelled source domain dataset is available only from the pretrained black-box segmentation model.
The pretrained model's predictions for the target domain images are noisy because of the distributional differences between the source domain data and the target domain data.
Since the model's predictions serve as pseudo labels during self-training, the noise in the predictions impose an upper bound on model performance.
Therefore, we propose a simple yet novel image translation workflow, ReGEN, to address this problem.
ReGEN comprises an image-to-image translation network and a segmentation network.
Our workflow generates target-like images using the noisy predictions from the original target domain images.
These target-like images are semantically consistent with the noisy model predictions and therefore can be used to train the segmentation network.
In addition to being semantically consistent with the predictions from the original target domain images, the generated target-like images are also stylistically similar to the target domain images.
This allows us to leverage the stylistic differences between the target-like images and the target domain image as an additional source of supervision while training the segmentation model.
We evaluate our model with two benchmark domain adaptation settings and demonstrate that our approach performs favourably relative to recent state-of-the-art work.
The source code will be made available.
§ INTRODUCTION
Deep learning has brought about revolutionary changes across several fields since its introduction.
In particular, performance on computer vision tasks such as object detection <cit.>, image classification <cit.> and semantic segmentation <cit.> has improved tremendously through the application of deep learning.
However, these advances in performance require vast amounts of labelled training data.
While synthetic data generated with photo-realistic rendering techniques offer a potential solution for generating labelled data more easily, it has been observed that training deep learning models solely with synthetic data significantly reduces performance when tested on real-world data from the target domain.
Unsupervised Domain Adaptation (UDA) methods, such as those proposed in <cit.>, have emerged as effective approaches for improving the performance of models on unlabelled target domain data.
These methods rely on the availability of labelled source domain data during the adaptation.
However, there are situations where access to the labelled source domain data is restricted due to privacy and security concerns.
For instance, the labelled data may originate from sensitive consumer information, making it infeasible to release the data to third parties.
In such cases, only the models pretrained on the source domain data are accessible for adaptation, while the source domain data itself remains inaccessible.
Additionally, since private enterprises possess the considerable resources required to acquire and label sufficient data for training large models, it is extremely likely that access to these datasets would be highly restricted.
The challenges presented by the restricted access to labelled source domain data have motivated us to propose a source-free domain adaptation approach specifically tailored for semantic segmentation tasks.
Our work draws inspiration from previous research <cit.> that explores the generation of additional data for domain adaptation.
Similar to earlier studies <cit.>, we incorporate semantic information as a prior for generating realistic and diverse data.
In this paper, we introduce a novel source-free domain adaptation approach that specifically addresses the challenges associated with semantic segmentation tasks under this setting.
Since no ground-truth labels are available, the noisy predictions from the pretrained model reduce performance when used as labels during self-supervision.
Prior methods mitigate the detrimental effect of noisy labels by filtering <cit.> or loss rectification<cit.>.
However, loss rectification methods increase computational overhead and potentially hinder training efficiency.
Label filtering reduces the number of training examples available, and for imbalanced training datasets, also disproportionately affects the performance of minority classes compared to majority classes.
We address the limitation arising from the lack of ground-truth labels by deploying a framework that generates target-like images from the model predictions.
Instead of discarding uncertain predictions, we generate target-like images that are semantically consistent with the model predictions.
Additionally, the target-like images are also stylistically consistent with the corresponding class in the model predictions.
This allows the model predictions to serve as the `ground truth' labels for the generated target-like images.
Figure <ref> illustrates the reasoning behind our approach.
The predictions from the pretrained segmentation model are semantically inconsistent with the ground truth and are not suitable to be used as labels for the original target domain images.
However, for our approach, the semantic consistency between the generated target-like images and the model predictions enables the use of model predictions as labels for the target-like images.
At the same time, the `sidewalk' pixels that are incorrectly classified as `road' pixels have styles resembling that of `road' pixels.
This stylistic similarity between the target-like images and the original target domain images allows us to improve segmentation model performance by minimizing the stylistic differences between the target-like images and the original target domain images while training the segmentation model.
The experimental results for the two experimental settings GTA5<cit.>→Cityscapes <cit.> and Synthia <cit.>→Cityscapes <cit.> demonstrate the efficacy of our proposed solution.
The styles in the regions corresponding to the incorrectly classified `road' pixels appear visually similar to the `sidewalk' class in the target-like image.
We introduce a novel framework that reconstructs target-like images from predictions of the target domain images.
We apply it for the segmentation task.
We summarize our main contributions in this paper:
* We introduce a simple yet novel image translation approach for the source-free domain adaptation
setting.
To our knowledge, our work is the first to generate target-like images from pixel-level pseudo labels under the challenging source-free domain adaptation setting.
* The target-like images are stylistically similar to the original target domain images while being semantically consistent with the noisy model predictions.
We then leverage the generated target-like images to improve adaptation performance.
* We demonstrate the effectiveness of our approach, achieving performance comparable with state-of-the-art work on two benchmark datasets. For example, our approach outperforms recent state-of-the-art work (Guo <cit.> and Kundu <cit.>) for GTA5<cit.>→Cityscapes <cit.> by 0.6% and 1.8% respectively. Our approach also demonstrates comparable results with Kundu <cit.> for Synthia <cit.>→Cityscapes <cit.>.
§ RELATED WORK
§.§ Source-free domain adaptation
The absence of labels for the target domain makes self-training an essential component of source-free domain adaptation.
However, since the model is updated with its predictions on the target domain data during self training,
incorrect predictions will cause errors to accumulate, resulting in a poorly adapted model.
Additionally, the model predictions are also biased towards the majority classes, resulting in poor performance on the tail classes in the target domain data.
Previous work in unsupervised domain adaptation seeks to improve pseudo-label quality by filtering the noisy labels via confidence- or entropy-based approaches, rectifying the loss term, or generating target-like/source-like data.
Li <cit.> generate target-like images from image-level labels.
§.§ Self training
Since source domain data are absent during model fine-tuning, it is essential to leverage the relevant knowledge from the pretrained source model to improve performance on the target domain.
Domain shift between the source and target data degrades performance on the target domain data for a model trained on the source domain data.
Previous work <cit.> aims to mitigate the decrease in performance via unsupervised domain adaptation by jointly training a model with labelled source domain data and unlabelled target domain data.
While updating selected weights of the pretrained model during adaptation <cit.> improves performance to some extent, the main cause of the reduced performance still remains unresolved.
Source-free domain adaptation is a technique that aims to adapt a pretrained source model to an unlabelled target domain without using the source domain data during the adaptation process.
Under this setting, resolving the challenges posed by noisy pseudo labels during self-training is essential.
A possible solution is to generate source-like/target-like data during adaptation.
While generating source-like data <cit.> simplifies the source-free domain adaptation problem to an unsupervised domain adaptation problem, generating sufficiently diverse and representative source-like data still remains an open problem.
Liu <cit.> train a generator to output source-like images using input randomly drawn from a Gaussian distribution.
Hou <cit.> first train a modified CycleGAN <cit.> on the source domain images to generate source domain images before adapting the model to generate source-like images from the target domain images.
However, their method initially requires access to the source domain images to train the modified CycleGAN, which may not always be feasible for real-life applications.
Yang <cit.> leveraged the labelled source domain images to generate source-like images via an image translation network.
For the generation of target-like images, Li <cit.> explored the possibility of conditioning image generation with image-level labels for adapting image classification tasks.
Our approach differs from the above-mentioned work by generating target-like data without using any labelled data.
§.§ Data generation via generative models
Generative approaches to this problem broadly fall into two categories: style transfer methods <cit.>, which replace the style of the content image while preserving its content in the generated images, and conditional GANs, which allow the user to control the content and style of the generated images. Style transfer methods assume that colour information, illumination levels and saturation levels are domain-variant features that can be transferred to the source domain data without affecting the semantic meaning of the stylized source domain images, thereby mitigating the perceptual shift between the source domain and the target domain images. However, this approach requires the labelled source data during adaptation, making it unsuitable for source-free domain adaptation.
We adapt the two-stream image translation network <cit.> for use in our work.
Generative methods, particularly Generative Adversarial Networks (GANs) <cit.>, have demonstrated their effectiveness in a diverse range of computer vision applications, including super-resolution <cit.>, image-to-image translation <cit.>, and image denoising <cit.>.
The success of GANs in these tasks has served as an inspiration for their use in addressing domain adaptation challenges.
Leveraging semantic information for conditional GANs
Recently, conditional generative methods, as exemplified in <cit.>, have shown the capability to synthesize target-like data based on a given prior, such as image-level or pixel-level labels.
Li <cit.> focused on generating target-like data for image classification using predefined image-level labels.
However, applying this approach to semantic segmentation tasks becomes challenging due to the large number of pixels involved, making it infeasible to predefine pixel-level labels.
In the work of Yang <cit.>, they explored the generation of images based on pixel-level semantic information.
Their approach constrained the translation network to generate images that are semantically consistent with the input by training the image translation network to generate source-like images from the predictions of the target domain images.
However, their method <cit.> relies on labelled source domain data during training, which is unavailable in the source-free setting.
§ METHODS
§.§ Workflow
Pseudo code for ReGEN
Input: pretrained teacher segmentation network G_fixed, image translation model T, number of iterations Iter_tr to train the translation model, number of iterations Iter_joint to jointly train the translation model and the segmentation network, target domain images X_tgt
Output: adapted segmentation network G
for iteration = 0, ..., Iter_tr do
    Generate one-hot predictions Y' ← G_fixed(X_tgt)
    Generate target-like images X'_tgt ← T(Y')
    Update θ_T by minimising ℒ_translation ← ℒ_p(X'_tgt, X_tgt) + ℒ_c(X'_tgt, Y') + ℒ_f(X'_tgt, X_tgt) + ℒ_KLD
end for
for iteration = 0, ..., Iter_joint do
    Generate target-like images from the pretrained teacher segmentation network: X'_tgt ← T(G_fixed(X_tgt))
    Update θ_T by minimising ℒ_translation ← ℒ_p(X'_tgt, X_tgt) + ℒ_c(X'_tgt, Y') + ℒ_f(X'_tgt, X_tgt) + ℒ_KLD(X'_tgt, X_tgt)
    Filter the one-hot predictions Y' from G_fixed(X_tgt) via class-wise confidence thresholding to obtain Y''
    Generate target-like images from the student segmentation network: X''_tgt ← T(G(X_tgt))
    Update θ_G by minimising ℒ_seg ← ℒ_p(X''_tgt, X_tgt) + ℒ_c(X_tgt, Y'') + ℒ_c(X'_tgt, Y') + ℒ_f(X''_tgt, X_tgt) + ℒ_KLD(X''_tgt, X_tgt)
end for
We propose a novel framework by alternating optimization of pseudo labels and the image generator.
The first stage comprises a warmup phase which refines the task model via self-supervised training.
Pseudo labels from the task model are used to train the task model.
The image generation model is trained to generate synthetic images from the semantic information present in the one-hot encoded ground truth labels and the style information from the images in the intermediate domain dataset.
We then update the image generation model by training it with the target domain images and the semantic information in the prediction outputs of the target domain images.
We then jointly train the image generation model and the task model.
One of the main difficulties is updating the teacher model.
Using the exponential moving average approach to update the teacher model with the student weights causes performance to drop considerably.
We use perceptual loss to encourage the generated images to be stylistically similar to the target domain image.
However, some classes exhibit larger stylistic variations than others (e.g., 'building' compared to 'road').
Secondly, perceptual differences are not always correlated with semantic differences.
This makes balancing the perceptual loss with the semantic content loss important.
Unlike image classification, we cannot simply pass the desired classes to the generation model because the label for each pixel is dependent on other pixels.
Our workflow consists of two modules: an image translation network, denoted as T, and a segmentation network, denoted as G.
The image translation network includes a generator, T_g, which generates target-like images from the segmentation network predictions, and a discriminator, T_D, which distinguishes between the original target domain images and the generated target-like images.
The overall workflow is visually depicted in Figure <ref>.
We first focus on the problem of generating suitable data for training.
As observed earlier by Kim <cit.>, simply translating the images by matching the colour distributions <cit.> introduces blurring artifacts.
These artifacts degrade the semantic information present in the original images, making the translated images sub-optimal for training.
While it has been shown that pixel-level label information can be used to generate realistic images<cit.> and the feasibility of using pixel-wise label-driven image generation to address UDA problems <cit.> has already been demonstrated, we are the first, to the best of our knowledge, to generate target-like images from pseudo labels under the source free setting.
In order to generate target-like images, we enforce the constraint that the predictions from the target-like images are consistent with the predictions from the original target domain images (Figure <ref>).
This is done by minimizing the semantic consistency loss (Equation <ref>), which is simply the cross-entropy loss, while training the generator T_g.
Additionally, we also minimize the perceptual difference (Equation <ref>) and the GAN feature matching loss (Equation <ref>) between the target-like image and the target domain images.
We then adapt the segmentation network G, which is pretrained on the labelled source domain data (X_src,Y_src) to the target domain using the unlabelled target domain data X_tgt.
This is achieved by training the segmentation network G, with generated target-like images and the original target domain images.
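To make the alternating optimisation concrete, the following is a minimal PyTorch-style sketch of a single joint-training step. The module definitions and variable names (TinyNet, T_g, G, G_fixed, x_tgt) are illustrative stand-ins rather than the actual implementation, the adversarial, KL and feature matching terms are collapsed into a simple L1 term for brevity, and the loss weights follow the values reported in the objective function subsection.

import torch
import torch.nn as nn
import torch.nn.functional as F

NUM_CLASSES = 19

class TinyNet(nn.Module):
    """Stand-in for the real networks (T_g is the two-stream generator,
    G and G_fixed are DeepLab-v2 in the actual workflow)."""
    def __init__(self, in_ch, out_ch):
        super().__init__()
        self.net = nn.Sequential(nn.Conv2d(in_ch, 32, 3, padding=1), nn.ReLU(),
                                 nn.Conv2d(32, out_ch, 3, padding=1))
    def forward(self, x):
        return self.net(x)

T_g     = TinyNet(NUM_CLASSES, 3)         # pseudo labels -> target-like image
G       = TinyNet(3, NUM_CLASSES)         # student segmentation network
G_fixed = TinyNet(3, NUM_CLASSES).eval()  # frozen teacher

opt_T = torch.optim.Adam(T_g.parameters(), lr=1e-4, betas=(0.0, 0.9))
opt_G = torch.optim.SGD(G.parameters(), lr=2.5e-4, momentum=0.9, weight_decay=5e-4)

def one_hot(logits):
    return F.one_hot(logits.argmax(1), NUM_CLASSES).permute(0, 3, 1, 2).float()

x_tgt = torch.rand(2, 3, 64, 64)          # a batch of unlabelled target images

# ---- update the translation generator T_g (adversarial terms omitted) ----
with torch.no_grad():
    y_teacher = one_hot(G_fixed(x_tgt))              # pseudo labels Y'
x_fake = T_g(y_teacher)                              # target-like images X'_tgt
loss_T = F.l1_loss(x_fake, x_tgt)                    # stand-in for L_p / L_f / L_KLD
loss_T = loss_T + 3.0 * F.cross_entropy(G(x_fake), y_teacher.argmax(1))  # L_c
opt_T.zero_grad(); loss_T.backward(); opt_T.step()   # only T_g's weights change

# ---- update the student segmentation network G ----
# in the full method, y_teacher would be replaced by the confidence-filtered Y''
loss_G = F.cross_entropy(G(x_tgt), y_teacher.argmax(1))                           # L_c(X_tgt, Y'')
loss_G = loss_G + 3.0 * F.cross_entropy(G(x_fake.detach()), y_teacher.argmax(1))  # L_c(X'_tgt, Y')
x_fake_student = T_g(F.softmax(G(x_tgt), dim=1))     # soft predictions keep this path differentiable
loss_G = loss_G + 10.0 * F.l1_loss(x_fake_student, x_tgt)                          # stand-in for L_pseg
opt_G.zero_grad(); loss_G.backward(); opt_G.step()

In a full run, the two update blocks above would be repeated for Iter_joint iterations, with pseudo-label filtering applied as described below.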
§.§ Objective functions
Image translation We first train the generator T_g in the image translation network to generate target-like images from the one-hot predictions of the target domain images.
Following Jiang <cit.>, we use the hinge-based adversarial loss <cit.>, KL divergence loss <cit.>, perceptual loss <cit.> (Equation <ref>), semantic consistency loss (Equation <ref>) and GAN feature matching loss<cit.> (Equation <ref>) to train the generator.
The discriminator is trained with hinge-based adversarial loss.
Perceptual loss
We apply perceptual loss <cit.> ℒ_p to minimize the visual gap between the generated target-like images and the target domain images during the training of the image translation network in the first stage, followed by joint training of the segmentation network and the image translation model in the second stage (Algorithm <ref>).
Similar to Jiang <cit.>, we minimize the L1 loss between the feature representations from the original target domain images and the target-like images.
We extract the features from five layers ϕ_i of the pretrained VGG19 network ϕ, with the loss weights w_i set at 1/32, 1/16, 1/8, 1/4 and 1.
The perceptual loss ℒ_p is given by the following equation:
ℒ_p(T_g,G,X_tgt)=
∑_i=1^5w_i‖ϕ_i(T_g(G(X_tgt)))-ϕ_i(X_tgt)‖_1,
In the first phase, we use the fixed segmentation network to generate predictions from the original target domain.
We then generate the target-like images from those predictions.
Additionally, we use the perceptual loss to train the segmentation network in the joint training phase (Equation <ref>).
In the second phase, we use the predictions from the student segmentation network instead of the fixed teacher segmentation network to generate the target-like images X^"_tgt.
Assuming a well-trained image translation network, any visual discrepancies between the target-like images and the original target images would be due to prediction errors from the student segmentation network.
This allows the perceptual loss to improve the performance of the segmentation network by leveraging the unlabelled target domain images.
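A sketch of the perceptual loss is given below. It assumes torchvision's VGG-19; the exact feature layers are not reproduced in the text, so the slice points below (one slice per convolutional block, in the style of pix2pixHD) are an assumption, while the weights 1/32, 1/16, 1/8, 1/4 and 1 follow the values above.

import torch
import torch.nn as nn
import torch.nn.functional as F
from torchvision import models

class PerceptualLoss(nn.Module):
    def __init__(self):
        super().__init__()
        # use models.vgg19(pretrained=True) on older torchvision versions
        vgg = models.vgg19(weights="IMAGENET1K_V1").features.eval()
        # assumed slice points, ending each slice at the first ReLU of a block
        self.slices = nn.ModuleList([vgg[:2], vgg[2:7], vgg[7:12], vgg[12:21], vgg[21:30]])
        self.weights = [1/32, 1/16, 1/8, 1/4, 1.0]
        for p in self.parameters():
            p.requires_grad_(False)

    def forward(self, x_fake, x_real):
        loss = 0.0
        for w, slice_i in zip(self.weights, self.slices):
            x_fake, x_real = slice_i(x_fake), slice_i(x_real)
            loss = loss + w * F.l1_loss(x_fake, x_real)
        return loss

# usage: l_p = PerceptualLoss()(generated_target_like, target_image)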
Semantic Consistency loss
We compute the semantic consistency loss ℒ_c between the predictions on the input images X and the pseudo labels Y^' using the cross-entropy loss, as shown in the following:
ℒ_c(G,X,Y^') = -∑_i=1^H × W∑_c=1^C Y^'_ic log(G(X)_ic),
where G(X)_ic refers to the predicted probability of class c for the ith pixel of the input image X. Y^'_ic is the label predicted by the fixed teacher segmentation network for class c on the ith pixel, where Y^'_ic=1 if the pixel belongs to class c and Y^'_ic=0 otherwise.
Minimizing the cross-entropy loss ℒ_c(G_fixed,X,Y^') while freezing the weights of the segmentation network will steer the image translation network to generate target-like images that are semantically consistent with the predictions Y^'.
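A minimal sketch of the semantic consistency loss, assuming logits of shape (B, C, H, W) and one-hot pseudo labels; the function and variable names are illustrative.

import torch
import torch.nn.functional as F

def semantic_consistency_loss(seg_net, image, y_prime_onehot):
    """image: (B,3,H,W); y_prime_onehot: (B,C,H,W) one-hot pseudo labels."""
    logits = seg_net(image)                    # (B,C,H,W)
    target = y_prime_onehot.argmax(dim=1)      # (B,H,W) class indices
    # freezing the segmentation network is handled outside, e.g. by only
    # passing the translation generator's parameters to the optimiser
    return F.cross_entropy(logits, target)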
GAN Feature Matching loss The GAN feature matching loss<cit.> is similar to the perceptual loss<cit.>, though it compares the feature representations obtained from several discriminator T_D layers.
It is calculated as:
ℒ_f(X^'_tgt,X_tgt)=
𝔼(X_tgt)∑_i^N‖ T_D^(i)(X_tgt)-T_D^(i)(X^'_tgt) ‖_1 ,
where N refers to the number of layers in the discriminator T_D and X^'_tgt is the target-like image.
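A sketch of the feature matching loss, under the assumption that the discriminator returns a list of intermediate feature maps (as multi-scale patch discriminators commonly do); names are illustrative.

import torch
import torch.nn.functional as F

def feature_matching_loss(discriminator, x_tgt, x_tgt_fake):
    feats_real = discriminator(x_tgt)          # list of N feature tensors
    feats_fake = discriminator(x_tgt_fake)
    loss = 0.0
    for f_real, f_fake in zip(feats_real, feats_fake):
        # real features are detached so the loss only trains the generator
        loss = loss + F.l1_loss(f_fake, f_real.detach())
    return loss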
The overall loss function for training the image translation network is
ℒ_translation = λ_pℒ_p(X^'_tgt,X_tgt) + λ_cℒ_c(X_tgt,Y^') + λ_KLDℒ_KLD(X^'_tgt,X_tgt) + λ_fℒ_f(X^'_tgt,X_tgt),
where ℒ_KLD refers to the KL divergence loss <cit.> commonly used for generative tasks.
We filter the segmentation model predictions Y^' by selecting the top 33% confident pixels per class to obtain the filtered pseudo labels Y^".
The hyperparameter weights used in our implementation are λ_c=3.0, λ_KLD=0.05, λ_f=1.0 and λ_p=2.0 for training the image translation network.
Semantic Segmentation The overall loss function for training the semantic segmentation network is
ℒ_seg = λ_tgtℒ_c(X_tgt,Y^") + λ_genℒ_c(X^'_tgt,Y^')
+λ_psegℒ_p(X^"_tgt,X_tgt)
+λ_fℒ_f(X^"_tgt,X_tgt)
+λ_KLDℒ_KLD(X^"_tgt,X_tgt),
and the hyperparameter weights used in our implementation are λ_KLD=0.05,λ_tgt=1.0, λ_gen=3.0, λ_f=1.0 and λ_pseg=10 for training the segmentation network.
X^"_tgt refers to the target-like images generated from the student model predictions.
§.§ Training the segmentation network
We train the segmentation network with the generated images and the target domain images.
We filter the pseudo labels generated by the teacher model by class.
We do not filter the pseudo labels for the target-like images.
Since we do not have the ground truth labels for the target domain images, we use the student model predictions on the original target domain images as input for the image-translation model.
During this step, we fix the weights of the image translation model and reconstruct the target domain image.
We compare the target-like image with the original target domain image using the perceptual loss formulation.
This allows us to leverage the original target domain image and the knowledge present in the image reconstruction network to improve segmentation model performance.
However, since a large range of perceptual variations may correspond to any given class, pixels in the generated images which are perceptually different from the pixels in the original target domain image may correspond to the same class.
This could be a potential source of noise that can reduce segmentation model performance.
§ EXPERIMENTS
In this section, we introduce the datasets used for training and evaluation of adaptation performance (Section <ref>), followed by the network architectures used for image translation and semantic segmentation (Section <ref>).
§.§ Datasets
Following prior source-free domain adaptation work <cit.>, we evaluate our proposed method with the following datasets.
* Synscapes<cit.> is a synthetic semantic segmentation dataset with 25,000 densely annotated RGB images with a resolution of 1440 × 720 pixels and a scaled up version with a resolution of 2048×1024 pixels. It has 19 categories that are compatible with the Cityscapes<cit.> dataset.
* GTA5<cit.> is a synthetic semantic segmentation dataset with 24,966 densely annotated images with resolution 1914 × 1052 pixels, and has 19 categories that are compatible with the Cityscapes<cit.> dataset.
* Synthia<cit.> refers to the SYNTHIA-RAND-CITYSCAPES subset from the publicly available database for semantic segmentation.
It has 9,400 densely annotated images with resolution 1280 × 760 pixels and has 16 categories that are compatible with the Cityscapes<cit.> dataset.
* Cityscapes<cit.> is a real-world driving dataset with densely annotated images of resolution 2048 × 1024 pixels.
We use the Cityscapes dataset as the target domain, following the default split of 2,975 unlabelled images for training and 500 images for evaluation of model performance.
§.§ Network Architecture
Here, we introduce the network architecture involved in the image reconstruction and semantic segmentation tasks.
We implement our workflow with the Pytorch library <cit.>.
Image translation
For image translation, we use the simplified version of the two-stream image translation network T <cit.>.
Unlike Jiang <cit.>'s approach where the generator contains a content-stream and style-stream module that allows for content and style inputs, we use a generator containing only the content-stream module to reduce the number of model parameters required.
We found no significant difference in performance by including the additional style input.
In our approach, the generator takes the one-hot encoded segmentation model predictions as input.
We use the multi-scale patch discriminator <cit.>, based on the approach by Jiang <cit.>.
We use the Adam optimizer<cit.> with β_1=0, β_2=0.9.
The learning rates for the generator and the discriminator are 10^-4 and 4×10^-4, respectively.
We first train the image translation network for up to 80 epochs with batch size=1.
GTA5 → Cityscapes
Method Year Arch. road side. buil. wall fence pole light sign vege. terr. sky pers. rider car truck bus train motor bike mIoU
AUGCO <cit.> 2022 R 90.3 41.2 81.8 26.5 21.4 34.5 40.4 33.3 83.6 34.6 79.7 61.4 19.3 84.7 30.3 39.5 7.3 27.6 34.6 45.9
SFDA <cit.> 2021 R 84.2 39.2 82.7 27.5 22.1 25.9 31.1 21.9 82.4 30.5 85.3 58.7 22.1 80.0 33.1 31.5 3.6 27.8 30.6 43.2
SOMAN <cit.> 2021 R 91.3 52.8 85.7 38.5 31.3 35.2 37.3 35.3 85.8 46.1 88.6 60.4 32.4 86.1 54.9 51.1 5.8 41.8 50.7 53.2
SimT <cit.> 2022 R 92.3 55.8 86.3 34.4 31.7 37.8 39.9 41.4 87.1 47.8 88.5 64.7 36.3 87.3 41.7 55.2 0.0 47.4 57.6 54.4
SF <cit.> 2022 R 89.2 37.3 82.4 29.0 23.5 31.8 34.6 28.7 84.8 45.5 80.2 62.6 32.6 86.1 45.6 43.8 0.0 34.6 54.4 48.8
ReGEN (Our approach) 2023 R 92.6 56.2 86.5 36.0 33.2 39.1 38.2 46.1 87.5 45.9 87.6 65.8 37.1 87.9 43.8 57.7 0.0 44.8 58.5 55.0
Segmentation performance of Deeplab-v2 with ResNet-101 backbone trained on GTA5, adapted to unlabelled Cityscapes data.
Synthia → Cityscapes
Method Year Arch. road side. buil. wall# fence# pole# light sign vege. sky pers. rider car bus motor bike mIoU13 mIoU16
AUGCO <cit.> 2022 R 74.8 32.1 79.2 5.0 0.1 29.4 3.0 11.1 78.7 83.1 57.5 26.4 74.3 20.5 12.1 39.3 45.5 39.2
SFDA <cit.> 2021 R 81.9 44.9 81.7 4.0 0.5 26.2 3.3 10.7 86.3 89.4 37.9 13.4 80.6 25.6 9.6 31.3 45.9 39.2
SOMAN* <cit.> 2021 R 89.7 50.2 81.8 14.0 2.9 35.9 27.9 30.9 84.0 88.8 66.6 34.6 84.0 52.7 46.1 47.9 60.4 52.5
SF <cit.> 2022 R 74.3 33.7 78.9 14.6 0.7 31.5 21.3 28.8 80.2 81.6 50.7 24.5 78.3 11.6 34.4 53.7 50.2 43.7
SimT <cit.> 2022 R 87.5 37.0 79.7 7.8 1.0 30.2 9.5 17.3 79.4 80.3 53.4 20.8 82.0 34.2 18.5 38.5 49.1 42.3
ReGEN (Our approach) 2023 R 88.3 42.96 80.81 9.22 0.69 37.93 23.96 28.56 82.69 83.15 68.01 35.3 83.04 39.57 42.5 54.89 58.0 50.1
Segmentation performance of Deeplab-v2 with ResNet-101 backbone trained on Synthia, adapted to unlabelled Cityscapes data. Note*: The reported score here is derived from the model checkpoint available on the project page maintained by Kundu <cit.>. mIoU13 and mIoU16 are computed over 13 classes (excluding the classes marked with #) and 16 classes respectively.
Semantic Segmentation
We use the DeepLab-v2 <cit.> segmentation network with ResNet-101<cit.> backbone for the segmentation model G.
We use the pretrained weights for the segmentation models from Kundu <cit.> and Guo <cit.> for the Synthia and GTA5 datasets respectively.
Similar to Kundu <cit.>, we freeze all the layers except for the layer preceding the classifiers in the segmentation model.
We use the SGD optimizer with momentum 0.9, an initial learning rate 2.5×10^-4, a polynomial learning rate decay of power 0.8 and weight decay 5×10^-4.
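A sketch of this optimiser configuration with the polynomial decay implemented via a LambdaLR scheduler; the stand-in model and the maximum iteration count are assumptions for illustration.

import torch

max_iters = 50_000
model = torch.nn.Conv2d(3, 19, 1)          # stand-in for DeepLab-v2
optimizer = torch.optim.SGD(model.parameters(), lr=2.5e-4,
                            momentum=0.9, weight_decay=5e-4)
poly = lambda it: (1 - it / max_iters) ** 0.8        # polynomial decay, power 0.8
scheduler = torch.optim.lr_scheduler.LambdaLR(optimizer, lr_lambda=poly)
# call scheduler.step() once per training iteration after optimizer.step()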
To ensure a fair comparison for the GTA5→ Cityscapes setting, we used the same model checkpoint that Guo <cit.> obtained after the initial warm-up stage.
Similarly, to ensure a fair comparison for the Synthia→Cityscapes setting, we use the same model checkpoint that Kundu <cit.> obtained before the self-training step in their implementation.
We first perform 3 rounds of self-training on the target domain data following the approach by Kundu <cit.> to warm up the pretrained segmentation model.
We then jointly train the segmentation model and the image translation network for a maximum of 50,000 iterations on a single NVIDIA A100 GPU card, with batch size=2.
During this phase, we filter the pseudo labels for the original target domain images using the class-wise confidence thresholding approach<cit.>.
Similar to Kundu <cit.>, we set the class-wise thresholds at 33% of the most confident predictions at each iteration.
Pixels with prediction probabilities lower than the threshold are assigned to an `unlabelled' class and ignored during loss computation.
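The class-wise confidence thresholding can be sketched as follows; the keep ratio of 33% follows the text, while the ignore index and variable names are illustrative.

import torch

def filter_pseudo_labels(probs, keep_ratio=0.33, ignore_index=255):
    """probs: (B,C,H,W) softmax outputs of the teacher network."""
    conf, label = probs.max(dim=1)                # per-pixel confidence and class
    filtered = torch.full_like(label, ignore_index)
    for c in label.unique():
        mask = label == c
        # per-class threshold = (1 - keep_ratio) quantile of this class's confidences
        thresh = torch.quantile(conf[mask].float(), 1.0 - keep_ratio)
        filtered[mask & (conf >= thresh)] = c
    return filtered   # use with F.cross_entropy(logits, filtered, ignore_index=255)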
λ_c 3.0 3.0 6.0 9.0
λ_p 2.0 4.0 2.0 2.0
λ_f 1.0 2.0 1.0 1.0
Avg. mIoU 55.0 54.8 53.8 53.4
Hyperparameter evaluation for the GTA5 → Cityscapes setting for training the image translation network T. The hyperparameters λ_c, λ_p and λ_f refer to the weights for the semantic consistency loss, perceptual loss, and GAN feature matching loss as shown in Equation <ref>.
% of labels filtered 0 20 66 80
Avg. mIoU 54.1
Evaluation of the effect on performance from percentage of pseudo labels filtered during image reconstruction training for GTA5→Cityscapes.
λ_tgt 1.0 1.0 3.0 3.0
λ_pseg 10.0 2.0 10.0 10.0
λ_gen 3.0 3.0 3.0 0
Avg. mIoU 55.0 53.0 52.6 51.9
Hyperparameter evaluation for the GTA5→Cityscapes setting for training the image segmentation network G. The hyperparameters λ_tgt, λ_pseg and λ_gen refer to the loss weights for the semantic consistency loss for the target images ℒ_c(X_tgt,Y^"), perceptual loss ℒ_p and the semantic consistency loss for the target-like images ℒ_c(X_tgt^',Y^') as shown in Equation <ref>.
Qualitative results for the GTA5→Cityscapes setting. Our approach, ReGEN, demonstrates qualitatively better performance, being able to resolve small objects (top row: “pole"), manage confusion cases (middle row: “road"-“sidewalk") and avoid classification errors (bottom row: “bus"-“fence").
§ DISCUSSION
Training the image reconstruction model
It is essential that the image reconstruction model generates target domain-like images with a high level of correspondence with the task model predictions from the original target domain images.
In fact, blind copying of the original target domain images provides no additional information for training.
Therefore, during the fine-tuning step, we found that weighting the semantic content loss more heavily than perceptual loss gives better performance.
Lack of diversity of the generated target-like images
Mode collapse, which is characterised by a lack of diversity in the generated data, may result in our image reconstruction model generating data with limited target styles.
Training our task model on generated data with reduced diversity compared to the original target domain images may reduce performance.
Maximising correspondence between the ground truth labels and the corresponding target styles.
Since the ground truth labels for the target domain labels are unavailable during domain adaptation, the image reconstruction network may learn incorrect relationships between the content information (pseudo labels obtained from the pretrained model predictions on the target domain data) and the style information (target domain images).
We minimize this by maximising task model performance on the target domain data.
For the Synthia dataset, we perform three consecutive rounds of self-training on the task model.
At the end of each round, the final checkpoint for each model is used to generate pseudo labels from the target domain data.
In this section, we compare our work with prior art and also evaluate the hyperparameter weights used to train the image translation network and the semantic segmentation network.
§.§ Comparison with prior work
In Table <ref> and <ref>, we compare our proposed approach, ReGEN, with the state-of-the-art work<cit.> and also with representative prior work <cit.>.
Guo <cit.> addresses the challenge of open-set semantic segmentation by learning a noise transition matrix that mitigates the effect of noise in the pseudo labels.
Kundu <cit.> trains the segmentation network backbone and multiple classifier heads with differently augmented source domain data for each of the classifier heads to maximise model generalizability, followed by self-training with the unlabelled target domain data.
Liu <cit.> generate source-like data by leveraging the learned parameters of the pretrained segmentation network.
Paul <cit.> enforce consistency between the model output from several input pixel-level transformations of unlabelled target domain data.
Prabhu <cit.> train the segmentation model to maximise consistency between the augmented target domain images, while also identifying reliable pseudo labels via class-conditioned confidence thresholding.
Our proposed approach demonstrates comparable performance with state-of-the-art work for both experimental settings (Table <ref>,<ref>).
In particular, our approach surpasses all other methods for the GTA5→Cityscapes setting and demonstrates comparable performance with state-of-the-art work for the Synthia→Cityscapes setting.
We also present a qualitative comparison of our work in Figure <ref>.
Compared to the prior state-of-the-art work <cit.>, our approach demonstrates better performance resolving small objects (pole, traffic sign) and distinguishing between the confusion classes (“road"-“sidewalk" and “person"-“rider").
§.§ Hyperparameter evaluation
Image Translation Table <ref> shows the effect of the loss weights λ_c, λ_p
and λ_f used during image translation on segmentation performance.
Here, maximizing the ability of the translation model to generate target-like images with high semantic consistency with the segmentation model predictions is required for effective adaptation of the segmentation model.
The results suggest that balancing the weights for semantic consistency and stylistic similarity is essential for generating high-quality data for training.
Additionally, raising the weights for semantic consistency reduced adaptation performance (as seen in the rightmost columns of Table <ref>).
This was initially surprising because a higher semantic consistency between the pseudo labels and the generated target-like images would mean more reliable supervision.
However, we suggest that this increased semantic consistency could have been achieved at the cost of reduced stylistic similarity with the original target domain images.
This might explain why the adaptation performance was reduced in both cases.
Table <ref> shows the effect of filtering the pseudo labels per class with the confidence thresholding approach during training of the image translation model.
Assuming that the more confident predictions are likely to be correct, we sought to determine whether selecting the most confident predictions during training would allow the translation model to learn the correct relationship between the observed target style images and the pseudo labels.
However, we see that segmentation performance worsens when the model predictions are filtered during training for all cases.
This is likely due to the loss of useful information present in the high entropy, low confidence predictions.
Additionally, applying a hard filter to the pseudo labels disproportionately affects minority classes compared to majority classes because of the considerably fewer examples available during training.
Semantic Segmentation
Table <ref> shows the effect of the loss weights λ_gen, λ_tgt, λ_pseg
and λ_f on the segmentation model performance.
Comparison of the performance between the two leftmost columns in Table <ref> suggests that perceptual loss can be effective as an additional means of supervision.
However, as expected, semantic consistency loss for target-like images is also essential for achieving good performance (rightmost column in Table <ref>).
The results also show that increasing the weights for the semantic consistency loss (from 1.0 to 3.0) of the target domain images reduces performance (Table <ref>) and we suggest that this might be caused by the noise in the pseudo labels.
ℒ_c ℒ_p ℒ_f mIoU
55.0
53.0
51.9
51.9
Evaluation of the effect on performance by eliminating perceptual loss ℒ_p, semantic consistency loss for the generated target-like images ℒ_c(X_tgt^',Y^') and GAN feature loss ℒ_f during training of the segmentation network for the GTA5→Cityscapes setting.
§.§ Ablation study
We explore the effect of perceptual loss, semantic consistency loss for the target-like images and GAN feature matching loss during training of the segmentation network (Table <ref>).
The results show that both perceptual loss and semantic consistency loss have more influence on model performance compared to GAN feature matching loss.
Additionally, we wanted to determine whether filtering the pseudo labels of the target-like images would affect model performance, since filtering is commonly used in most self-supervised methods.
Therefore, we retain the top k% most confident pixels per class (10%, 25%, 50%, 75%, 100%) when generating the pseudo labels for the target-like images (Table <ref>).
We observed a general upward trend in performance as the percentage of retained pixels increases, although performance dips when 50% of the target-like pixels are retained.
This dip might be caused by a far greater number of incorrect than correct labels for the target-like images occurring in the 25%-50% confidence range across the classes.
While performance was best when all pixels were retained (100%), the lack of any considerable difference across filtering rates suggests that model performance remains high even at extremely aggressive filtering rates (retaining only 10% of pixels).
% of pixels retained 10 25 50 75 100
mIoU16 49.0 49.1 48.9 49.2 50.1
Evaluation of the effect of retaining the top k% confidence predictions per class made by the pretrained teacher model to generate pseudo labels for training the segmentation model under the Synthia→Cityscapes setting.
§.§ Qualitative evaluation of target-like images
We observed that the generated target-like images show good semantic consistency with the input one-hot predictions (Figure <ref>).
As shown in the figure, the predictions from the generated images show good agreement with those of the original target images, despite some stylistic differences between the generated images and the original target domain images.
However, we also noticed some errors in the generated images.
Aliasing artifacts (characterized by unwanted repetitive patterns in the generated images) reduce intra-class diversity in the target-like images.
These artifacts may affect performance on the original target domain images as the segmentation model may overfit to the generated instances.
Illustration of common error cases during image translation. Aliasing artifacts (top and middle rows) reduce the intra-class style diversity, which could reduce segmentation performance. Prediction errors (bottom row:`train'→`building') can cause the segmentation network to learn incorrect relationships between the pixels.
§ CONCLUSION
We introduce a source-free domain adaptation workflow that generates target-like data with reliable pixel-level labels.
Our approach generates target-like data that has high semantic consistency while also possessing high stylistic similarity to the target domain images.
For future work, we intend to further extend our workflow to address additional domain adaptation settings.
|
http://arxiv.org/abs/2307.01026v1
|
20230703135820
|
Temporal Graph Benchmark for Machine Learning on Temporal Graphs
|
[
"Shenyang Huang",
"Farimah Poursafaei",
"Jacob Danovitch",
"Matthias Fey",
"Weihua Hu",
"Emanuele Rossi",
"Jure Leskovec",
"Michael Bronstein",
"Guillaume Rabusseau",
"Reihaneh Rabbany"
] |
cs.LG
|
[
"cs.LG",
"cs.AI"
] |
We present the Temporal Graph Benchmark (TGB), a collection of challenging and diverse benchmark datasets for realistic, reproducible, and robust evaluation of machine learning models on temporal graphs. TGB datasets are of large scale, span years in duration, incorporate both node and edge-level prediction tasks and cover a diverse set of domains including social, trade, transaction, and transportation networks. For both tasks, we design evaluation protocols based on realistic use-cases. We extensively benchmark each dataset and find that the performance of common models can vary drastically across datasets. In addition, on dynamic node property prediction tasks, we show that simple methods often achieve superior performance compared to existing temporal graph models. We believe that these findings open up opportunities for future research on temporal graphs. Finally, TGB provides an automated machine learning pipeline for reproducible and accessible temporal graph research, including data loading, experiment setup and performance evaluation. TGB will be maintained and updated on a regular basis and welcomes community feedback. TGB datasets, data loaders, example code, evaluation setup, and leaderboards are publicly available at <https://tgb.complexdatalab.com/>.
§ INTRODUCTION
Many real-world systems such as social networks, transaction networks, and molecular structures can be effectively modeled as graphs, where nodes correspond to entities and edges are relations between entities. Recently, significant advances have been made for machine learning on static graphs, led by the use of Graph Neural Networks (GNNs) <cit.> and Graph Transformers <cit.>, and accelerated by the availability of public datasets and standardized evaluations protocols, such as the widely adopted Open Graph Benchmark (OGB) <cit.>.
However, most available graph datasets are designed only for static graphs and lack the fine-grained timestamp information often seen in many real-world networks that evolve over time. Examples include social networks <cit.>, transportation networks <cit.>, transaction networks <cit.> and trade networks <cit.>.
Such networks are formalized as Temporal Graphs (TGs) where the nodes, edges, and their features change dynamically.
A variety of machine learning approaches tailored for learning on TGs have been proposed in recent years, often demonstrating promising performance <cit.>. However, Poursafaei et al. <cit.> recently revealed an important issue: these TG methods often portray an over-optimistic performance — meaning they appear to perform better than they would in real-world applications — due to the inherent limitations of commonly used evaluation protocols.
This over-optimism creates serious challenges for researchers. It becomes increasingly difficult to distinguish between the strengths and weaknesses of various methods when their test results suggest similarly high performance. Furthermore, there is a discrepancy between real-world applications of TG methods and the existing evaluation protocols used to assess them. Therefore, there is a pressing need for an open and standardized benchmark that enhance the evaluation process for temporal graph learning, while being aligned with real-world applications.
In this work, we present the Temporal Graph Benchmark (TGB), a collection of challenging and diverse benchmark datasets for realistic, reproducible, and robust evaluation for machine learning on temporal graphs. Figure <ref> shows TGB's ML pipeline. Inspired by the success of OGB, TGB automates the process of dataset downloading and processing as well as evaluation protocols, and allows the user to easily compare their model performance with other models on the public leaderboard. TGB improves the evaluation of temporal graph learning in both dataset selection and evaluation protocol and covers both edge and node level tasks.
TGB consists of a diverse set of datasets that are one order of magnitude larger than existing datasets in terms of number of nodes, edges, and timestamps.
Dataset Selection.
Contrary to real-world networks that typically contain millions of nodes and tens of millions of edges, existing TG benchmark datasets are notably smaller, falling short by several orders of magnitude <cit.>. Furthermore, these datasets often have limitations in terms of their domain diversity, with a substantial focus on social and interaction networks <cit.>. This lack of diversity can be problematic as network properties, such as network motifs <cit.>, the scale-free property <cit.>, and the modular structure <cit.> vary significantly across different domains. Consequently, it is important to benchmark existing methods across a wide variety of domains for a more comprehensive evaluation.
To address these limitations, datasets provide diversity in terms of the number of nodes, edges, timestamps, and network domains. As shown in Figure <ref>, datasets are larger in scale and present statistics that were under-explored in prior literature. For instance, the dataset has over 67 million edges while the dataset has more than 30 million timestamps. Additionally, introduces three datasets for the dynamic node property prediction task to address the scarcity of datasets with dynamic node labels in current literature.
Evaluation Protocol. In TGB, we aim to design the evaluation for both edge and node level tasks on temporal graphs based on real applications. Historically, the standard approach for dynamic link prediction evaluation is to treat it as a binary classification task using one negative edge per positive edge in the test set <cit.>.
This strategy tends to generate negatives that are easy to predict, given the structure and sparsity of real-world networks <cit.>, leading to inflated model performance estimations <cit.>. To address this issue,
we propose to treat the task as a ranking problem, contrasting each positive sample against multiple negatives and using Mean Reciprocal Rank (MRR) as the metric. Moreover, historical negatives – past edges absent in the current step – are more difficult to predict correctly than randomly sampled negatives <cit.>. Thus, we sample both historical and random negatives in link prediction evaluations. Secondly, for the dynamic node property prediction task, we select the normalized discounted cumulative gain (NDCG) metric to evaluate if methods output node labels that respect the same ordering of classes as the groundtruth. Finally, we include simple baselines for both tasks, revealing that heuristics can sometimes match or even exceed the performance of more complex models.
Overall, our proposed Temporal Graph Benchmark has the following contributions:
* Large and diverse datasets. TGB includes datasets coming from a diverse range of domains and spanning both node and edge-level tasks. TGB datasets are orders of magnitude larger than existing ones in terms of number of edges, nodes and timestamps.
* Improved evaluation. We propose improved and standardized evaluation protocols motivated by real-world applications. For dynamic link property prediction, we sample multiple negative samples per positive edge and ensure a mix of both historical and random negative samples, while using the MRR metric. For dynamic node property prediction, we use the NDCG metric to evaluate the relative ordering of node labels within the top ranked classes.
* Empirical findings. We show that for the dynamic link property prediction task, model performances can vary drastically across datasets; for example, the best performing model on one dataset encounters a 59% MRR drop on another. Moreover, datasets with a higher ratio of test edges which are unseen during training are more challenging for all methods. On the dynamic node property prediction task, we find that simple heuristics can often outperform state-of-the-art TG methods, thus leaving ample room for development of future methods targeting this task.
* Public leaderboard and reproducible results. Following the good practice of OGB, TGB also provides an automated and reproducible pipeline for both link and node property prediction tasks. Researchers can submit and compare method performance on the leaderboard.
Reproducibility: code, datasets, leaderboards and details are on the project website (https://tgb.complexdatalab.com/). The code is also publicly available on GitHub (https://github.com/shenyangHuang/TGB) with documentation at https://docs.tgb.complexdatalab.com/.
§ RELATED WORK
Temporal Graph Datasets and Libraries.
Recently, Poursafaei et al. <cit.> collected six novel datasets for link prediction on continuous-time dynamic graphs while proposing more difficult negative samples for evaluation. Yu et al. <cit.> presented DyGLib, a platform for reproducible training and evaluation of existing TG models on common benchmark datasets. DyGLib demonstrates the discrepancy of the model performance across different datasets and argues that diverse evaluation protocols of previous works caused an inconsistency in performance reports. Similarly, Skarding et al. <cit.> provided a comprehensive comparative analysis of heuristics, static GNNs, discrete dynamic GNNs, and continuous dynamic GNNs on the dynamic link prediction task. They showed that dynamic models outperform their static counterparts consistently and heuristic approaches can achieve strong performance. In all of the above benchmarks, the included datasets only contain a few million edges. In comparison, TGB datasets are orders of magnitude larger in scale in terms of number of nodes, edges and timestamps. TGB also includes both node and edge-level tasks.
Temporal Graph Methods.
With the growing interest in temporal graph learning, several recent models achieved outstanding performance on existing benchmark datasets. However, due to the limitations of the current evaluation, many methods achieve over-optimistic and similar performance for the dynamic link prediction task <cit.>. In this work, TGB datasets and evaluation show a clear distinction between SOTA model performances, which helps facilitate future advancement of TG learning methods. Temporal graphs are categorized into discrete-time and continuous-time temporal graphs <cit.>. In this work, we focus on continuous-time temporal graphs as they are more general. Continuous-time TG methods can be divided into node or edge representation learning methods. Node-based models such as TGN <cit.>, DyRep <cit.> and TCL <cit.> first leverage the node information such as temporal neighborhood or previous node history to generate node embeddings and then aggregate node embeddings from both source and destination node of an edge to predict its existence. In comparison, edge-based methods such as CAWN <cit.> and GraphMixer <cit.> aim to directly generate embeddings for the edge of interest and then predict its existence. Lastly, the simple memory-based heuristic EdgeBank <cit.> without any learning component has shown surprising performance based on existing evaluation. We compare these methods on TGB datasets in Section <ref>.
§ TASK EVALUATION ON TEMPORAL GRAPHS
Temporal graphs are often used to model networks that evolve over time where nodes are entities and temporal edges are relations between entities through time. In this work, we focus on continuous time temporal graphs and denote them as timestamped edge streams consisting of triplets of source, destination, and timestamp; i.e., 𝒢={ (s_0,d_0,t_0), (s_1,d_1,t_1), … , (s_T,d_T,t_T) } where the timestamps are ordered (0 ≤ t_1 ≤ t_2 ≤ ... ≤ t_T) <cit.>.
Note that temporal graph edges can have different properties namely being weighted, directed, or attributed.
We consider 𝒢_t as the augmented graph of all edges observed in the stream up to the time t with nodes as V_t and edges as E_t.
Optionally, 𝒢_t can contain node features X_t ∈ℝ^|V_t| × k_n where k_n is the size of a node feature vector, and edge features M_t ∈ℝ^|E_t| × k_m where k_m is the size of an edge feature vector.
We consider a fixed chronological split to form the training, validation, and test set.
Evaluation Settings. There are several possible evaluation settings in the temporal graph based on the available information of the test set.
We categorize and discuss these settings in detail in Appendix <ref>.
In this work, we consider the streaming setting where the deployed models need to adapt to new information at inference time. More specifically, we follow the setting in <cit.> where previously observed test edges can be accessed by the model but back-propagation and weight updates with the test information are not permitted.
§.§ Dynamic Link Property Prediction
The goal of dynamic link property prediction is to predict the property (oftentimes the existence) of a link between a node pair at a future timestamp.
The timeline is chronologically split at two fixed points resulting in three sets of edges E_train, E_val., and E_test. In TGB, we improve the evaluation setting in the following ways.
Negative edge sampling. In current evaluation <cit.>, only one negative edge is sampled uniformly randomly from all possible node pairs to evaluate against each positive edge.
In contrast, in real applications where the true edges are not known in advance, the edges with the highest probabilities predicted by a given model are used to decide which connections should be prioritized.
With that in mind, we treat the link prediction task as a ranking problem and sample multiple negative edges per each positive edge.
We select twenty negatives as a trade-off between evaluation completeness and the test set inference time.
In particular, for a given positive edge e^p:(s,d,t), we fix the source node s and timestamp t, and sample twenty different destination nodes.
We sample the negative edges from both the historical and random negative edges.
Historical negatives are sampled from the set of edges that were observed in the training set but are not present at the current timestamp t (i.e. E_t∖ E_train) and they are shown to be difficult for models to predict <cit.>.
We sample equally from historical and random negative edges. Note that depending on the dataset and the timestamp t, there might not be enough historical negatives to sample from. In this case, we simply increase the ratio of the random negatives to have the desired number of negative edges per positive ones.
For reproducibility, we include a fixed set of negatives sampled for each dataset to ensure consistent comparison amongst models.
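A sketch of this per-edge negative sampling is shown below. The data structures (a mapping from each source node to its training-set destinations) and names are illustrative, not the released TGB code.

import numpy as np

rng = np.random.default_rng(0)

def sample_negatives(s, d, train_dst_of, num_nodes, num_neg=20):
    """Sample num_neg negative destinations for positive edge (s, d, t):
    half historical (seen with s during training, excluding the true d),
    half random, topping up with random ones if too few historical exist."""
    hist_pool = np.array([v for v in train_dst_of.get(s, set()) if v != d])
    n_hist = min(num_neg // 2, len(hist_pool))
    neg_hist = rng.choice(hist_pool, n_hist, replace=False) if n_hist else np.empty(0, int)
    # random negatives, with a collision check against the positive destination
    rand_pool = np.setdiff1d(np.arange(num_nodes), np.append(neg_hist, d))
    neg_rand = rng.choice(rand_pool, num_neg - n_hist, replace=False)
    return np.concatenate([neg_hist, neg_rand]).astype(int)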
Performance metric. The commonly used metric for reporting models' performance for the dynamic link prediction task is either Area Under the Receiver Operating Characteristic curve (AUROC) or Average Precision (AP).
An appropriate metric should be able to capture the ranking of a positive edge amongst the negative ones, which neither AUROC nor AP fulfills.
Thus, we opt to use the filtered Mean Reciprocal Rank (MRR) as the evaluation metric for dynamic link property prediction.
The MRR computes the reciprocal rank of the true destination node among the negative or fake destinations.
The MRR varies in the range of (0,1] and it is a commonly used metric in recommendation systems <cit.> and knowledge graphs <cit.>.
It should be noted that when reporting the MRR, we perform collision checks to ensure that no positive edge is sampled as a negative edge.
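A minimal sketch of the MRR computation over pre-computed scores; the inputs are assumed to be already collision-checked, and ties are counted pessimistically here.

import numpy as np

def mrr(pos_scores, neg_scores):
    """pos_scores: (E,) model scores of the true edges;
       neg_scores: (E, K) scores of the K negatives sampled for each edge."""
    # rank = 1 + number of negatives scored at least as high as the positive
    ranks = 1 + (neg_scores >= pos_scores[:, None]).sum(axis=1)
    return float(np.mean(1.0 / ranks))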
§.§ Dynamic Node Property Prediction
The goal of dynamic node property prediction is to predict the property of a node at a future timestamp.
For instance, in recommendation systems, it is important to provide personalized recommendations for a user, which requires a node-centric view to model the property of users.
Figure <ref> shows the node property prediction task in the context of music recommendation systems as seen in the dataset.
In this task, we are given the interaction history of a user with different music genres, and the goal is to output the probability distribution over a set of different music genres that the user will interact with over the next week.
More formally, given the observed evolution history of a temporal graph 𝒢_t until current timestamp t, the dynamic node property prediction task (for example, on the music recommendation dataset above) predicts the interaction frequency vector y_t[u,:] for a node u over a set of candidate nodes 𝐍 within a fixed future period [t,t+k] where k is the window size defined by the application.
Each entry in y_t[u,:] corresponds to a candidate node v ∈𝐍 and the groundtruth value is generated as follows:
y_t[u,v] = ∑_ t < t_i ≤ t+k w_(u,v,t_i)/∑_z ∈𝐍∑_ t < t_i ≤ t+k w_(u,z,t_i)
where w_(u,v,t_i) is the weight of the edge (u,v,t_i) (which we assume to be 0 if the edge between u and v is not present at time t_i).
Observe that, by definition, y_t[u,:]_1 = 1.
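A sketch of constructing this ground-truth vector from an edge stream; the edge-list representation and names are illustrative.

import numpy as np

def node_label_vector(edges, u, t, k, candidates):
    """edges: list of (src, dst, time, weight); candidates: list of node ids N."""
    idx = {v: i for i, v in enumerate(candidates)}
    y = np.zeros(len(candidates))
    for src, dst, ti, w in edges:
        if src == u and t < ti <= t + k and dst in idx:
            y[idx[dst]] += w                    # accumulate interaction weights
    total = y.sum()
    return y / total if total > 0 else y        # normalise so the row sums to 1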
We use the Normalized Discounted Cumulative Gain (NDCG) metric that takes into account the relative order of elements.
NDCG is commonly used in information retrieval and recommendation systems as a measure of ranking quality <cit.>.
In this work, we use NDCG@10 where the relative order of the top 10 ranked items (i.e. destination nodes) are examined.
Specifically in the dataset, the NDCG@10 compares the ground truth to the relative order of the top-10 music genres that a model predicts.
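A sketch of NDCG@10 with linear gains (a common formulation for fractional relevance values such as the label vectors above); names are illustrative.

import numpy as np

def ndcg_at_k(y_true, y_pred, k=10):
    """y_true: (N,) ground-truth label vector; y_pred: (N,) predicted scores."""
    order = np.argsort(-y_pred)[:k]                   # predicted top-k items
    discounts = 1.0 / np.log2(np.arange(2, len(order) + 2))
    dcg = float((y_true[order] * discounts).sum())
    ideal = np.sort(y_true)[::-1][:k]                 # ideal (ground-truth) ordering
    idcg = float((ideal * discounts[:len(ideal)]).sum())
    return dcg / idcg if idcg > 0 else 0.0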
§ DATASETS
TGB offers eight temporal graph datasets, six of which are collected and curated for this work.
All datasets are split chronologically into the training, validation, and test sets, respectively containing 70%, 15%, and 15% of all edges, in line with similar studies such as <cit.>.
The dataset licenses and download links are presented in Appendix <ref>, and the datasets will be permanently maintained via Digital Research Alliance of Canada funded by the Government of Canada.
We consider datasets with more than 5 million edges as medium-size and those with more than 25 million edges as large-size datasets.
Table <ref> shows the statistics and properties of the temporal graph datasets provided by TGB.
Several TGB datasets are orders of magnitude larger than existing TG benchmark datasets <cit.>, while their numbers of nodes and edges span a wide spectrum, ranging from thousands to millions.
In addition, dataset domains are highly diverse, coming from five distinct domains including social networks, interaction networks, rating networks, traffic networks, and trade networks.
Moreover, the duration of the datasets varies from months to years, and the number of timestamps in datasets ranges from 32 to more than 30 million with diverse ranges of time granularity from UNIX timestamps to annually.
The datasets can be weighted, directed, or having edge attributes.
We also report the surprise index (i.e., | E_test∖ E_train|/|E_test|) as defined in <cit.> which computes the ratio of test edges that are not seen during training.
Low surprise index implies that memorization-based methods (such as EdgeBank <cit.>) can potentially achieve good performance on dynamic link property prediction task.
We can observe that the surprise index also varies notably across datasets, further contributing to the diversity of TGB datasets.
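A sketch of computing the surprise index, here treating E_train and E_test as sets of distinct (source, destination) pairs aggregated over time; counting per edge occurrence instead is a minor variation of this reading.

def surprise_index(train_edges, test_edges):
    """train_edges, test_edges: iterables of (src, dst, t) triplets."""
    train_pairs = {(s, d) for s, d, _ in train_edges}
    test_pairs = {(s, d) for s, d, _ in test_edges}
    unseen = len(test_pairs - train_pairs)       # test edges never seen in training
    return unseen / len(test_pairs)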
We discuss the details of datasets next.
. This dataset stores the co-editing network on Wikipedia pages over one month. The network is a bipartite interaction network where editors and wiki pages are nodes, while one edge represents a given user edits a page at a specific timestamp. Each edge has text features from the page edits.
The task for this dataset is to predict with which wiki page a user will interact at a given time.
. This dataset is an Amazon product review network from 1997 to 2018 where users rate different products in the electronics category from a scale from one to five. Therefore, the network is a bipartite weighted network where both users and products are nodes and each edge represents a particular review from a user to a product at a given time. Only users with a minimum of 10 reviews within the aforementioned time interval are kept in the network. The considered task for this dataset is to predict which product a user will review at a given time.
. This is a cryptocurrency transaction dataset based on the Stablecoin ERC20 transactions dataset <cit.>. Each node is an address and each edge represents the transfer of funds from one address to another at a time. The network starts from April 1st, 2022, and ends on November 1st, 2022, and contains transaction data of 5 stablecoins and 1 wrapped token. This duration includes the Terra Luna crash where the token lost its fixed price of 1 USD. The considered task for this dataset is to predict with which destination a given address will interact at a given time.
. This dataset is a directed reply network of Reddit where users reply to each other's threads. Each node is a user and each interaction is a reply from one user to another.
The network starts from 2005 and ends at 2010. The considered task for this dataset is to predict if a given user will reply to another one at a given time.
. This dataset is a crowd sourced international flight network from 2019 to 2022. The airports are modeled as nodes, while the edges are flights between airports at a given day. The node features include the type of the airport, the continent where the airport is located, the ISO region code of the airport as well as its longitude and latitude.
The edge feature is the associated flight number.
The considered task for this dataset is to predict if a given flight will exist between a source and destination airport at a specified day.
. This is the international agriculture trading network between nations of the United Nations (UN) from 1986 to 2016. Each node is a nation and an edge represents the sum trade value of all agriculture products from one nation to another one. As the data is reported annually, the time granularity of the dataset is yearly.
The considered task for this dataset is to predict the proportion of agriculture trade values from one nation to other nations during the next year.
. This is a bipartite and weighted interaction network between users and the music genres of songs they listen to. Both users and music genres are represented as nodes while an interaction specifies a user listens to a music genre at a given time. The edge weights denote the percentage of which a song belongs to a certain genre.
The dataset is constructed by cross-referencing the songs in the LastFM-song-listens dataset (http://snap.stanford.edu/jodie/#datasets) <cit.> with the music genres in the million-song dataset (http://millionsongdataset.com/) <cit.>.
The LastFM-song-listens dataset has one month of who-listens-to-which-song information for 1000 users and the million-song dataset provides genre weights for all songs in the LastFM-song-listens dataset.
We only retain genres with at least 10% weights for each song that are repeated at least a thousand times in the dataset.
In addition, the genre names are further cross-referenced to remove genre names with typos.
The considered task for this dataset is to rank with which set of music genres a user will interact the most over the course of the next week.
. This is a users and subreddits interaction network. Both users and subreddits are nodes and each edge indicates that a user posted on a subreddit at a given time. The dataset spans from 2005 to 2019. The task considered for this dataset is to rank with which subreddits a user will interact the most over the next week.
§ EXPERIMENTS
For dynamic link property prediction, we include DyRep <cit.>, TGN <cit.>, CAWN <cit.>, TCL <cit.>, GraphMixer <cit.>, and two deterministic heuristics namely EdgeBank_tw and EdgeBank_∞ <cit.>. For dynamic node property prediction, we include DyRep, TGN, and deterministic heuristics such as persistent forecast <cit.> and moving average <cit.>. Details about the above methods are presented in Appendix <ref>. We also provide the computing resources in Appendix <ref>.
For the experimental results, we report the average and standard deviation across 5 different runs.
§.§ Dynamic Link Property Prediction
Table <ref> shows the performance of TG methods for dynamic link property prediction on the first dataset, an existing benchmark where TGN, CAWN, and GraphMixer all achieve above a 0.97 average precision score in the literature <cit.>. With TGB's evaluation protocol,
there is now a clear distinction between the three models, and CAWN achieves the best result on this dataset. On the second dataset (results reported in Table <ref>), the performance of all methods is significantly worse. Most notably, the method rankings also change significantly, with CAWN ranking amongst the lowest performing models. These observations emphasize the importance of benchmarking methods on the variety of datasets offered by TGB. One explanation for the generally lower performance on the second dataset is its higher surprise index compared to that of the first (reported in Table <ref>).
As a heuristic that memorizes past edges, EdgeBank performance is inversely correlated with the surprise index and it achieves higher performance when the surprise index of the dataset is low.
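For reference, the EdgeBank_∞ variant can be sketched in a few lines; EdgeBank_tw additionally restricts the memory to a recent time window.

class EdgeBank:
    """Memorise every (source, destination) pair seen so far and predict 1
    for previously observed pairs, 0 otherwise."""
    def __init__(self):
        self.memory = set()

    def update(self, edges):                # edges: iterable of (src, dst)
        self.memory.update(edges)

    def predict(self, src, dst):
        return 1.0 if (src, dst) in self.memory else 0.0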
Figure <ref> and <ref> show the inference time of different methods for the test set of and , respectively. Notice that as a heuristic baseline, EdgeBank inference is generally at least one order of magnitude faster than neural network based methods.
Comparing these two datasets, we observe one order of magnitude difference in computational time between models such as TGN and DyRep versus others such as GraphMixer and TCL. We believe one important future direction is to improve the inference time of these models to be closer to baselines such as EdgeBank, which facilitates scaling the learning tasks to large real-world temporal graphs.
Table <ref> shows the performance of TG methods on medium and large datasets. Note that some methods, including CAWN, TCL, and GraphMixer, run out of memory on GPU for these datasets, thus their performance is not reported. Overall, TGN has the best performance on all of these three datasets. Surprisingly, the EdgeBank heuristic is highly competitive on the dataset where it even significantly outperforms DyRep. Therefore, it is important to include EdgeBank as a baseline for all datasets.
Another observation is that for medium and large datasets, there can be a significant performance change for a single model between the validation and test set. This is because these datasets span a long time (some lasting 5 years) and one can expect that models need to deal with potential distribution shifts between the validation set and the test set.
§.§ Dynamic Node Property Prediction
Table <ref> shows the performance of various methods on node property prediction tasks.
As node level tasks have received less attention compared to edge level tasks in the literature, extending edge embedding based methods such as CAWN and GraphMixer to the dynamic node property prediction task is non-trivial, thus they are not included in these experiments.
One key observation is that despite being simple heuristics, both persistent forecast and moving average are strong contenders to TG methods such as DyRep and TGN. Notably, persistent forecast is SOTA on one of the three datasets while moving average is SOTA on another. On the third dataset, TGN outperforms other methods, while moving average is a close second. This observation calls for the development of future TG methods for node-centric tasks such as node property prediction.
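As these two heuristics are commonly defined, they can be sketched as follows; the list-of-label-vectors input format is an assumption for illustration.

import numpy as np

def persistent_forecast(past_labels):
    """past_labels: list of label vectors observed for a node, oldest first;
    the forecast simply repeats the most recent observation."""
    return past_labels[-1]

def moving_average(past_labels, window=None):
    """Return the mean of the (optionally windowed) past label vectors."""
    labels = past_labels if window is None else past_labels[-window:]
    return np.mean(labels, axis=0)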
§ CONCLUSION
To enable realistic, reproducible, and robust evaluation for machine learning on temporal graphs, we present the Temporal Graph Benchmark, a collection of challenging and diverse datasets. TGB datasets are diverse in their dataset properties as well as being orders of magnitude larger than existing ones. TGB includes both dynamic link property prediction and dynamic node property prediction tasks, while providing an automated pipeline for researchers to evaluate novel methods and compare them on the leaderboards.
In dynamic link property prediction, we find that model rankings can vary significantly across datasets, thus demonstrating the necessity to evaluate on a diverse range of datasets. Surprisingly for dynamic node property prediction, simple heuristics such as persistent forecast and moving average outperform SOTA methods such as TGN on two out of three datasets. This motivates the development of more TG methods for node-centric tasks.
Impact on Temporal Graph Learning.
Significant advancements in machine learning are often accelerated by the availability of public and well-curated datasets such as ImageNet <cit.> and OGB <cit.>. We expect TGB to be a common and standard benchmark for temporal graph learning, helping to facilitate novel methodological changes.
Potential Negative Impact.
If TGB becomes a widely-used benchmark for temporal graph learning, it is possible that future papers might focus on TGB datasets and tasks, which may limit the use of other TG tasks and datasets for benchmarking. To avoid this issue, we plan to update TGB regularly with community feedback as well as adding additional datasets and tasks.
Limitations.
Firstly, TGB only considers the most common TG evaluation setting as discussed in Section <ref>, namely the streaming setting. We also discuss other possible settings in Appendix <ref>.
Depending on a specific application, a different setting might be more suitable (such as forbidding test time node updates). Secondly, TGB currently only contains datasets from five domains, while many other domains such as biological networks are not included. We plan to continue adding datasets to further increase the dataset diversity in TGB.
We thank the https://ogb.stanford.edu/OGB team for sharing their website theme in the construction of this project's website. We also thank Elahe Kooshafar for giving advice on the collection of the dataset. This research was supported by the Canadian Institute for Advanced Research (CIFAR AI chair
program), Natural Sciences and Engineering Research Council of Canada (NSERC) Postgraduate Scholarship-Doctoral (PGS D) Award and Fonds de recherche du Québec - Nature et Technologies (FRQNT) Doctoral Award. Michael Bronstein and Emanuele Rossi are supported in part by ERC Consolidator Grant No. 274228 (LEMAN).
§ CHECKLIST
* For all authors...
* Do the main claims made in the abstract and introduction accurately reflect the paper's contributions and scope?
* Did you describe the limitations of your work?
see the paragraph on limitations in Section <ref>.
* Did you discuss any potential negative societal impacts of your work?
see the paragraph on broader impact in Section <ref>.
* Have you read the ethics review guidelines and ensured that your paper conforms to them?
* If you are including theoretical results...
* Did you state the full set of assumptions of all theoretical results?
* Did you include complete proofs of all theoretical results?
* If you ran experiments (e.g. for benchmarks)...
* Did you include the code, data, and instructions needed to reproduce the main experimental results (either in the supplemental material or as a URL)?
all code, data and how to reproduce the results can be found on https://github.com/shenyangHuang/TGBgithub.
* Did you specify all the training details (e.g., data splits, hyperparameters, how they were chosen)?
all data is split chronologically with details in Section <ref> and all training details can be found on https://github.com/shenyangHuang/TGBgithub.
* Did you report error bars (e.g., with respect to the random seed after running experiments multiple times)?
We report error bars across multiple trials in Section <ref>.
* Did you include the total amount of compute and the type of resources used (e.g., type of GPUs, internal cluster, or cloud provider)?
We report the compute and resource details in Appendix <ref>.
* If you are using existing assets (e.g., code, data, models) or curating/releasing new assets...
* If your work uses existing assets, did you cite the creators?
citations for datasets can be found in Section <ref>.
* Did you mention the license of the assets?
The licenses for all datasets are described in Appendix <ref>.
* Did you include any new assets either in the supplemental material or as a URL?
We include the download link for all processed datasets in Section <ref>.
* Did you discuss whether and how consent was obtained from people whose data you're using/curating?
We clean and extract datasets from publicly available datasets from existing publications where consent and license was provided.
* Did you discuss whether the data you are using/curating contains personally identifiable information or offensive content?
There is no personally identifiable information or offensive content in the datasets provided in this work.
* If you used crowdsourcing or conducted research with human subjects...
* Did you include the full text of instructions given to participants and screenshots, if applicable?
* Did you describe any potential participant risks, with links to Institutional Review Board (IRB) approvals, if applicable?
* Did you include the estimated hourly wage paid to participants and the total amount spent on participant compensation?
§ DATASET DOCUMENTATION AND INTENDED USE
All datasets presented by are intended for academic use and their corresponding licenses are listed in Appendix <ref>.
For the ease of access, we provide the following links to the benchmark suits and datasets.
∙ Dataset and project documentations can be found at: <https://tgb.complexdatalab.com/>.
∙ package is available via at: <https://pypi.org/project/py-tgb/>.
∙ Tutorials and API references can be found at: <https://docs.tgb.complexdatalab.com/>.
Maintenance Plan. To provide a robust, realistic, and reproducible benchmark for temporal graphs, we plan to continue developing and maintaining based on community feedback and involvement.
We will maintain and improve the https://shenyanghuang.github.io/TGB/github repository, while the datasets will be maintained via the https://alliancecan.ca/enDigital Research Alliance of Canada (funded by the Government of Canada).
§ DATASET LICENSES AND DOWNLOAD LINKS
In this section, we present datasets' license and the download link (embedded in dataset name).
The datasets will be permanently maintained via the Digital Research Alliance of Canada, funded by the Government of Canada.
As authors, we confirm the data licenses as indicated below and that we bear all responsibility in case of violation of rights.
* https://object-arbutus.cloud.computecanada.ca/tgb/wikipedia.zip: MIT license. The original dataset can be found http://snap.stanford.edu/jodie/#datasetshere <cit.>.
* https://object-arbutus.cloud.computecanada.ca/tgb/amazonreview.zip: Amazon license. By accessing the Amazon Customer Reviews Library (a.k.a. Reviews Library), one agrees that the Reviews Library is an Amazon Service subject to the Amazon.com https://www.amazon.com/gp/help/customer/display.html/ref=footer_cou?ie=UTF8 nodeId=508088Conditions of Use and one agrees to be bound by them, with the following additional conditions. In addition to the license rights granted under the Conditions of Use, Amazon or its content providers grant the user a limited, non-exclusive, non-transferable, non-sublicensable, revocable license to access and use the Reviews Library for purposes of academic research. One may not resell, republish, or make any commercial use of the Reviews Library or its contents, including use of the Reviews Library for commercial research, such as research related to a funding or consultancy contract, internship, or other relationship in which the results are provided for a fee or delivered to a for-profit organization. One may not (a) link or associate content in the Reviews Library with any personal information (including Amazon customer accounts) or (b) attempt to determine the identity of the author of any content in the Reviews Library. If one violates any of the foregoing conditions, their license to access and use the Reviews Library will automatically terminate without prejudice to any of the other rights or remedies Amazon may have. The original dataset can be found https://nijianmo.github.io/amazon/index.htmlhere <cit.>.
* https://object-arbutus.cloud.computecanada.ca/tgb/stablecoin.zip: CC BY-NC license (Attribution-NonCommercial). The original dataset can be found https://www.chartalist.org/eth/StablecoinAnalysis.htmlhere.
* https://object-arbutus.cloud.computecanada.ca/tgb/redditcomments.zip: CC BY-NC license (Attribution-NonCommercial). The original dataset can be found https://surfdrive.surf.nl/files/index.php/s/M09RDerAMZrQy8qhere <cit.>.
* https://object-arbutus.cloud.computecanada.ca/tgb/opensky.zip: a non-commercial, limited, non-exclusive, non-transferable, non-assignable, and terminable license to copy, modify, and use the data in accordance with this agreement solely for the purpose of non-profit research, non-profit education, commercial internal testing and evaluation of the data, or for government purposes. No license is granted for any other purpose and there are no implied licenses in this agreement. For more details, please consult the https://zenodo.org/record/7323875original dataset license. The original dataset can be found https://zenodo.org/record/7323875#.ZEmhTnZKguUhere <cit.>.
* https://object-arbutus.cloud.computecanada.ca/tgb/un_trade.zip: MIT license. The original dataset can be found https://www.fao.org/faostat/en/#data/TMhere <cit.>.
* https://object-arbutus.cloud.computecanada.ca/tgb/lastfmgenre.zip: MIT license. The original LastFM-song-listens dataset <cit.> and the million-song dataset <cit.> is available http://snap.stanford.edu/jodie/#datasetshere and http://millionsongdataset.com/here, respectively.
* https://object-arbutus.cloud.computecanada.ca/tgb/subreddits.zip: CC BY-NC license (Attribution-NonCommercial). The original dataset can be found https://surfdrive.surf.nl/files/index.php/s/M09RDerAMZrQy8qhere <cit.>.
§ EVALUATION SETTINGS IN TG
Based on how the test set information is used during the evaluation of temporal graph learning models, the existing approaches can be grouped into the following three categories.
We emphasize that methods from different categories should be evaluated distinctly to avoid unfair comparison.
Following the practice of previous works and for the sake of clarity, we comply with the streaming setting.
Particularly, we consider a fixed split point in time, i.e. t_split, where the information before and after this point constitute the training and test data, respectively.
Streaming Setting. In this setting, information from the test set is only employed for updating the memory module in temporal graph learning methods (e.g., TGN <cit.>).
However, no back-propagation or model update is possible based on the test set information.
We consider this setting as streaming, since it helps in fast inference by incorporating the recent information (even from the test set), while acknowledging the fact that retraining the model with the test data is too expensive.
Deployed Setting. In this setting, the test set information is not available for any modification to the model.
This setting closely follows the standard machine learning setting with distinct training and test datasets. We refer to this setting as deployed to denote that after deploying a model, it is only used for inference and no updates to any part of the model are allowed.
Live-Update Setting. In this setting, information from any point in the past (including the test set information) can be used to re-train, fine-tune, or update the model.
The goal of this setting is to achieve the best prediction for each timestamp t+1 given the historical information at all previous timestamps in [0,...,t].
We consider this setting as live-update because the model weights can be updated lively through the incoming data.
Note that this setting is similar to the rolling setting exercised in ROLAND framework <cit.>.
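To make the distinction between these settings concrete, the following is a minimal sketch of the streaming evaluation loop (our own illustration, not the package API; `predict` and `update_memory` are hypothetical method names standing in for a TGN-style model with a memory module):

```python
import torch

@torch.no_grad()  # streaming: no back-propagation or weight updates on test data
def evaluate_streaming(model, test_batches, metric):
    """Sketch of the streaming setting: test edges may update the memory
    module after being predicted, but the model weights stay frozen."""
    model.eval()
    scores = []
    for batch in test_batches:  # batches arrive in chronological order
        # 1) predict the current batch using only information from the past
        y_pred = model.predict(batch.src, batch.dst, batch.t)
        scores.append(metric(y_pred, batch.label))
        # 2) only then insert the observed test edges into the memory module
        model.update_memory(batch.src, batch.dst, batch.t, batch.msg)
    return sum(scores) / len(scores)

# Deployed setting: drop step (2). Live-update setting: remove the no_grad
# decorator and perform an optimizer step after each batch.
```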
§ TEMPORAL GRAPH LEARNING MODELS
Recently, there has been increasing interest in developing graph representation learning models for networks that evolve over time.
Evolving graphs can be investigated at different time granularity.
In particular, we have discrete-time dynamic graphs (DTDGs) that consist of an ordered set of static graph snapshots, and continuous-time dynamic graphs (CTDGs) which include the temporal information associated with any node- or edge-wise events.
In this work, we focus on CTDGs mainly due to the following reasons.
First, CTDGs provide exact temporal information, and CTDG models can be applied to DTDGs as well, while the modification and employment of DTDG models for making inference on CTDGs is a non-trivial task.
Second, it has been shown that DTDGs are convertible to CTDGs <cit.>, while the conversion in the opposite direction results in loss of information.
Here, we introduce the methods used in our evaluation for the dynamic link property prediction and dynamic node property prediction task.
Models Used for Evaluation of Dynamic Link Property Prediction Task.
* DyRep <cit.> is a temporal point process-based model that propagates interaction messages via Recurrent Neural Networks (RNNs) to update the node representations. It employs a temporal attention mechanism to model the weights of a given node's neighbors.
* TGN <cit.> is a general framework for learning on continuous time dynamic graphs. It has the following components: memory module, message function, message aggregator, memory updater, and embedding module. TGN updates the node memories at test time with newly observed edges.
* GraphMixer <cit.> is a simple model for dynamic link prediction consisting of three main modules: a node-encoder to summarize the node information, a link-encoder to summarize the temporal link information, and a link predictor module. These modules only employ multi-layer perceptrons (MLPs), making GraphMixer a simple model without the use of a GNN architecture.
* TCL <cit.> employs a transformer module to generate temporal neighborhood representations for nodes involved in an interaction.
It then models the inter-dependencies with a co-attentional transformer at a semantic level. Specifically, TCL utilizes two separate encoders to extract representations from temporal neighborhoods surrounding the two nodes of an edge.
* CAWN <cit.> predicts dynamic links based on extracting temporal random walks and retrieves temporal network motifs to represent network dynamics. It utilizes a neural network model to encode Causal Anonymous Walks (CAWs) to support online training and inference.
Particularly, it starts by relabeling nodes with encoded temporal anonymous walks starting at the nodes involved in an interaction.
Then, the temporal walks themselves are further encoded using the generated node labels and encodings of the elapsed times.
Finally, the existence of a link is predicted based on aggregating the walks encoding through a pooling module.
* EdgeBank_∞ <cit.> is a simple heuristic storing all observed edges in a memory (implemented as a hashtable).
At inference time, if the queried node pair is in the memory then EdgeBank_∞ predicts true, otherwise EdgeBank_∞ predicts false.
* EdgeBank_tw <cit.> is another variation of EdgeBank which only memorizes edges from a fixed duration in the recent past. Therefore, it has a strong recency bias.
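Because EdgeBank is a pure memorization heuristic, it can be sketched in a few lines. The snippet below is our own illustration, not the reference implementation; `time_window=None` corresponds to EdgeBank_∞ and a finite window to EdgeBank_tw:

```python
class EdgeBank:
    """Predict 1.0 if the queried (src, dst) pair has been observed before
    (and, for the time-window variant, recently enough), else 0.0."""

    def __init__(self, time_window=None):
        self.memory = {}              # (src, dst) -> last timestamp observed
        self.time_window = time_window

    def update(self, src, dst, t):
        self.memory[(src, dst)] = t   # called for every observed edge

    def predict(self, src, dst, t):
        last_seen = self.memory.get((src, dst))
        if last_seen is None:
            return 0.0
        if self.time_window is not None and t - last_seen > self.time_window:
            return 0.0                # edge fell outside the recency window
        return 1.0
```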
Models Used for Evaluation of Dynamic Node Property Prediction Task.
* TGN <cit.> is discussed above. We utilize the TGN embedding of a node from the memory at the query time t to predict the node labels.
* Persistent Forecast <cit.> is a simple yet powerful baseline for time series forecasting and complex systems. Here, we extend the core idea by simply outputting the most recently observed node label for the current time t.
* Moving Average <cit.> considers the average of the node labels observed in the previous k steps (we set k=7).
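For completeness, a minimal sketch of these two heuristics (illustrative code with hypothetical interfaces; node labels are assumed to be numeric vectors):

```python
from collections import defaultdict, deque

import numpy as np


class PersistentForecast:
    """Predict the most recently observed label vector of each node."""

    def __init__(self, num_classes):
        self.last = defaultdict(lambda: np.zeros(num_classes))

    def update(self, node, label):
        self.last[node] = label

    def predict(self, node):
        return self.last[node]


class MovingAverage:
    """Predict the mean of the last k observed label vectors (we use k=7)."""

    def __init__(self, num_classes, k=7):
        self.num_classes = num_classes
        self.history = defaultdict(lambda: deque(maxlen=k))

    def update(self, node, label):
        self.history[node].append(label)

    def predict(self, node):
        hist = self.history[node]
        return np.mean(list(hist), axis=0) if hist else np.zeros(self.num_classes)
```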
§ COMPUTING RESOURCES
Dynamic link property prediction. For this task, we ran all experiments on either https://docs.alliancecan.ca/wiki/Narval/enNarval or https://docs.alliancecan.ca/wiki/Béluga/enBéluga cluster of https://www.alliancecan.ca/enDigital Research Alliance of Canada.
For the experiments on the Narval cluster, we ran each experiment on a Nvidia A100 (40G memory) GPU with 4 CPU nodes (from either of the AMD Rome 7532 @ 2.40 GHz 256M cache L3, AMD Rome 7502 @ 2.50 GHz 128M cache L3, or AMD Milan 7413 @ 2.65 GHz 128M cache L3 types available), each with 100G memory.
For the experiments on Béluga, we ran each experiment on a NVidia V100SXM2 (16G memory) GPU with 4 CPU nodes (Intel Gold 6148 Skylake @ 2.4 GHz), each with 100G memory.
A five-day time limit is considered for each experiment.
We repeated each experiment five times and report the average and standard deviation across the different runs.
It is noteworthy that except for the TGN <cit.> and DyRep <cit.> models, which we ported into the PyTorch Geometric environment, the other models (evaluated with their original source code or with the https://github.com/yule-BUAA/DyGLibDyGLib repository) throw an out-of-memory error for the medium and large datasets on both the Narval and Béluga clusters.
Dynamic Node Property Prediction. For this task, we ran experiments with 4 standard CPUs and either RTX8000, V100, A100 or A6000 GPUs. The longest experiment takes around 2 days for the dataset.
§ ADDITIONAL DATASET STATISTICS
In addition to the main dataset statistics presented in Table <ref>, it is insightful to examine some other dataset characteristics as indicated in Table <ref>.
The reoccurrence index (i.e., |E_train∩E_test|/|E_train|) denotes the ratio of training edges that also reoccur during the test phase.
If edge appearance follows a consistent pattern, a high reoccurrence index can be correlated with the high performance of a memorization-based approach such as EdgeBank <cit.>.
The average number of edges per timestamp provides information about the evolution of the datasets, and the ratio of new nodes in the validation or test set provides insight into the proportion of unseen nodes introduced during inference.
It should be noted that although we mainly focus on the transductive dynamic link property prediction task in our evaluation, the ratios of new nodes in the validation and test sets (fourth and fifth columns in Table <ref>) show that there are indeed new nodes during the validation and test phases.
Correctly predicting the properties of edges for the new nodes might be more challenging for the models, since no historical information about these nodes is available. Lastly, we observe that datasets are also diverse in all of the above properties.
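These statistics can be recomputed directly from the raw edge lists; for instance, a short sketch of the reoccurrence index (variable names are ours, not part of the released code):

```python
def reoccurrence_index(train_edges, test_edges):
    """|E_train ∩ E_test| / |E_train| over distinct (src, dst) pairs."""
    e_train = {(src, dst) for src, dst, *_ in train_edges}
    e_test = {(src, dst) for src, dst, *_ in test_edges}
    return len(e_train & e_test) / len(e_train)
```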
|
http://arxiv.org/abs/2307.02853v1
|
20230706083703
|
Formation and evolution of transient jets and their cavities in black-hole X-ray binaries
|
[
"Marek Sikora",
"Andrzej Zdziarski"
] |
astro-ph.HE
|
[
"astro-ph.HE"
] |
Transient jets
Marek Sikora (ORCID: 0000-0002-0333-2452)
Nicolaus Copernicus Astronomical Center, Polish Academy of Sciences, Bartycka 18, PL-00-716 Warszawa, Poland; [email protected]
Andrzej A. Zdziarski (ORCID: 0000-0002-0333-2452)
Nicolaus Copernicus Astronomical Center, Polish Academy of Sciences, Bartycka 18, PL-00-716 Warszawa, Poland; [email protected]
Sikora & Zdziarski
We propose a model explaining the origin of transient/episodic jets in black-hole X-ray binaries, in which they are caused by transitions from a collimated, strongly magnetized, jet to a wide, un-collimated, outflow. The change occurs when the accretion flow leaves the magnetically-choked state due to an increase of the accretion rate at a constant magnetic flux. The formed powerful jet then detaches from its base, and propagates as a discrete ejection. The uncollimated outflow then produces a relativistic plasma that fills the surroundings of the black hole, contributing to the formation of a low-density cavity. While the pressure in the cavity is in equilibrium with the surrounding interstellar medium (ISM), its inertia is orders of magnitude lower than that of the ISM. This implies that the plasma cannot efficiently decelerate the ejecta, explaining most of the observations. The modest deceleration within the cavities observed in some cases can then be due to the presence of clouds and/or filaments, forming a wide transition zone between the cavity and the ISM.
§ INTRODUCTION
The concept of launching of powerful jets by rotating black holes (BHs) involving Magnetically Arrested Disks (MAD; ) provides new tools to explain a large diversity of accretion flow properties and their time evolution, as well as the high jet production efficiencies. It connects the idea of generating initially Poynting-flux dominated jets by rotating black holes (, hereafter ) with the magnetic field of the MAD ( and references therein). That magnetic field is so strong that its pressure counterbalances the ram pressure of the accretion flow. In contrast to standard accretion disc theories, where the structure of accretion flows down to the innermost stable orbits is not affected by magnetic fields and their role is limited to providing viscosity via the magneto-rotational instability, accretion onto a BH in the MAD proceeds via the magnetic Rayleigh-Taylor instability across the unipolar flux of the poloidal magnetic field threading the BH and the innermost parts of the accretion flow (see for a review).
Independently of whether the system reaches the MAD state or not, rotating BHs threaded by poloidal magnetic fields produce Poynting-flux dominated outflows (). The advantages of MAD models over those with weaker fields, named SANE (standard-and-normal-evolution; ), are not limited to the possibility of producing the most powerful jets observed in AGNs and XRBs, but also include getting them well collimated. Since relativistic outflows cannot self-collimate <cit.>, external collimation is needed <cit.>. That collimation in the MAD model can be provided by MHD winds produced by accretion flows enclosed within the MAD zone. In the case of SANE models, the outflows can be confined by radiative or thermal accretion disc winds, but those are not expected to be good collimators on larger scales.
We focus in this paper on production and evolution of transient/episodic jets in BH X-ray binaries (XRBs; e.g., ) and on the nature of cavities surrounding them <cit.>. Two main types of jets are observed in BH XRBs, compact and transient. The former are relatively steady jets associated with the hard spectral state (see, e.g., for reviews of spectral states) of BH XRBs. Their sizes observed in radio are limited to ∼ 10^15 cm (e.g., ) and are often unresolved by radio observations, and thus also referred to as the core jets. The transient jets are associated with transitions from the hard intermediate state (defined as the softest part of the hard state, with the X-ray photon index >2) to the soft one, and are observed as discrete moving ejecta <cit.>. Often both the approaching and the receding components are seen (e.g., ), and they are sometimes detected up to a parsec scale (e.g., ).
Our paper is organized as follows. Our proposed scenario of the production of episodic jets is presented in Section <ref>. Their propagation through the cavities and the cavity formation and evolution are investigated in Section <ref>. Then in Section <ref>, we discuss our results and compare our results for XRB cavities with those on galactic scales, and in Section <ref>, we present our main conclusions.
§ PRODUCTION OF TRANSIENT JETS
As we described in Section <ref>, jets produced in the MAD state are effectively collimated, while no such collimation is expected when the magnetic flux drops below the MAD limit. Since the maximum magnetic flux that can thread the BH scales with √(Ṁ_ acc) (where Ṁ_ acc is the accretion rate), we may expect transitions between strongly collimated (jetted) states and weakly collimated (windy) -outflow states provided variations of Ṁ_ acc lead to changes between Φ_ tot>Φ_ BH,max (MAD state) and Φ_ tot<Φ_ BH,max (SANE), where Φ_ tot is the net poloidal magnetic flux accumulated on both one hemisphere of the BH and the accretion flow. The maximum magnetic flux that can be confined on the BH by the ram pressure of the accreting plasma is given by <cit.>
Φ_ BH,max = ϕ (Ṁ_ acc c r_ g^2)^1/2, ϕ≈ 70 (1-0.38 a r_ g/r_ h) h_0.3^1/2,
where ϕ is a dimensionless magnetic flux, h≡ r× 0.3 h_0.3 is the half-thickness of the disc at radius r, and a is the dimensionless BH spin. The above formula for ϕ approximates well results of numerical simulations ( and references therein).
Such transitions are illustrated in Fig. <ref>. Here, XRB states producing jets (both those compact and episodic) are located above the Φ_ BH,max line, while those producing wide outflows (in particular those in the soft spectral state) are located below the Φ_ BH,max line. We can infer from this diagram and Equation (<ref>) that transitions between jetted and un-jetted states can be driven by changes of the accretion rate, of the total magnetic flux, and of the geometrical thickness of the accretion flow. Our interest here is in the production of episodic jets/ejecta observed during the transition from the hard-intermediate state to the soft one. A jet can become episodic when the (total) magnetic flux threading inner parts of the accretion flow is squeezed onto the BH due to an increase of Ṁ_ acc. Initially the source is located above the Φ_ BH,max line in Fig. <ref>, i.e., in the hard intermediate state. Then an increase of Ṁ_ acc causes it to move to the right, as shown by the red dashed line. In this case, the jet power (∝Ṁ_ acc) increases first, but then the source crosses the MAD boundary, at which point the outflow is no longer collimated and changes into a wide conical Poynting-flux dominated wind. The formed jet then detaches from the BH and continues to move as a discrete ejection. This explains both the powerful episodic ejections and the subsequent absence of core radio emission seen in most transient XRBs in the soft state <cit.>. Alternatively, a change of the accumulated magnetic flux will also cause the outflow to move between the collimated jet and wide outflow states, as illustrated by the blue dashed line.
Since the transition from the hard state to the hard intermediate state appears to be associated with an increase of Ṁ_ acc, the former case is more likely. In that case we will have a powerful jet whose formation suddenly stops.
That powerful jet can interact with the remnant of the previous compact jet. Such an interaction will be analogous to that in the internal shock model (originally proposed for γ-ray bursts; ), in which shocks are predicted to form as a result of the interaction between faster and slower shells. This model has also been proposed for microquasars <cit.>. The evolution of the ejecta then proceeds in two initial steps.
(1) The less powerful, slower-moving, compact jet forms a classical double-shock structure while interacting with the transient jet (see, e.g., fig. 1 in but replacing shells by jets). Due to the balance of the momentum fluxes at the contact surface, the shocked fluids propagate in this phase with a constant Lorentz factor, Γ_ sh, between Γ_ f of the faster transient jet and Γ_ s of the slower compact jet.
(2) The transient jet is entirely shocked and hence its momentum flux is no longer balanced. From this moment on, the shocked plasmas start to decelerate, but at most down to Γ_ s.
§ THE NATURE OF THE CAVITIES
Tracking the motion of transient jets is critical for determining their energetics. In many cases, radio and X-ray observations have tracked the motion of the approaching and receding ejecta up to large distances <cit.>.
The origin of the cavities remains uncertain. They cannot be related to the supernova outbursts that created the BHs, given the issues of the motion of the XRBs through the ISM and the long ages in the case of low-mass XRBs, see <cit.>. Then, disc winds are not likely to produce the cavities <cit.>. The cavities could have been carved by the previous jet activity of the systems, but then they would produce narrow tunnels, which are unlikely to remain sustained over long time, as argued by <cit.>. On the other hand, a very promising mechanism appears to be inflation of a quasi-spherical cavity by relativistic plasmas from both collimated jets <cit.> and uncollimated outflows powered by rotating BHs. The latter occur after transition from the hard intermediate state to the soft state, due to Φ_ tot < Φ_ BH,max, see Figure <ref>. The relativistic plasma is then expected to form a quasi-spherical cavity and have the pressure approximately equal to the pressure of the surrounding ISM <cit.>.
In that case, the pressure of the relativistic gas deposited inside the cavity, p_ cav, is equal to n_ ISM k T_ ISM, where n_ ISM and T_ ISM are the ISM number density and temperature, respectively. In the case of a warm ionized ISM, with n_ ISM≈ 1 cm^-3 and T_ ISM≈ 10^4 K, the energy needed to be deposited within a cavity of a given volume equals E_ cav=(u_ cav+p_ cav) V_ cav, where u_ cav is the energy density. In a spherical case,
E_ cav≈ 4 n_ ISM k T_ ISM (4π/3) R_ cav^3 ≈ 8.5× 10^43 (R_ cav/0.5 pc)^3 erg,
where R_ cav is the cavity radius. The time needed to inflate such a cavity is
t ≈ 0.27 (R_ cav/0.5 pc)^3 / (P_ j/10^37 erg s^-1) yr,
where P_ j is the jet power. This implies that the cavities with the detected boundaries could, in principle, be inflated during single outbursts of XRBs.
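As a quick back-of-the-envelope check of the two estimates above, a short numerical sketch (ours; cgs units, warm-ISM values as in the text):

```python
import numpy as np

k_B = 1.380649e-16   # Boltzmann constant [erg/K]
pc = 3.0857e18       # parsec [cm]
yr = 3.156e7         # year [s]

n_ISM, T_ISM = 1.0, 1.0e4    # warm ionized ISM: number density [cm^-3], temperature [K]
R_cav = 0.5 * pc             # cavity radius
P_jet = 1.0e37               # jet power [erg/s]

p_cav = n_ISM * k_B * T_ISM                              # pressure balance with the ISM
E_cav = 4.0 * p_cav * (4.0 * np.pi / 3.0) * R_cav**3     # (u_cav + p_cav) V = 4 p_cav V
t_inflate = E_cav / P_jet

print(f"E_cav ~ {E_cav:.2e} erg")          # ~ 8.5e43 erg
print(f"t     ~ {t_inflate / yr:.2f} yr")  # ~ 0.27 yr
```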
In the idealized case of steady-state injection of relativistic plasma into the cavity, we expect the formation of a four-zone structure: (i) the innermost part filled up by quasi-spherical relativistic outflow; (ii) the shocked central outflow; (iii) the shocked external medium; (iv) the un-shocked ISM. In this classical double-shock structure, the cavity edges are represented by discontinuity/contact surface, and, provided there is no motion of the central source relative to ISM, one may expect continuous increase of the cavity.
While we do not know the cavity sizes for most of the observed cases, it is possible that the two cases with the measured sized (XTE J1550–564 and XTE J1348–630) are representative. Then a question arises how to avoid inflation of a cavity up to ≫ 1 pc over multiple outbursts during the lifetime of an XRB. This could be avoided if the plasmas within cavities cool fast enough during quiescent epochs. However, both synchrotron and inverse Compton mechanisms are extremely inefficient. Assuming B^2/8π=p_ cav gives B≈ 6 μG, which is somewhat higher than the typical interstellar magnetic field strength. This gives the cooling time of ∼ 10^12(10^6/γ) yr, where γ is the electron Lorentz factor. The diluted central accretion emission at the Eddington limit dominates over the starlight, but it still gives inverse-Compton cooling time scale of ∼ 10^9(10^6/γ) yr.
Hence, the only way to avoid too large a growth of the cavities is to assume that XRBs are moving through the ISM. That motion can be related to the kicks which binaries receive following supernova explosions. As observations show, the velocities of such kicks are v_ kick∼ 10^2 km/s (e.g., ). Then, cavities will form structures extended along the XRB trajectories <cit.>, like those along pulsar trajectories (see, e.g., fig. 9 in ). BH spins not perpendicular to such trajectories can be responsible for the asymmetries between the motion of jets and counterjets observed in some XRBs, e.g., <cit.>. Such cavities will keep their parsec-scale cross-sectional radius provided they were powered with the average rates (including quiescent epochs) equal to
⟨ P_ j⟩≈ 4 p_ cavπ R_ cav^2 v_ kick≈ 4× 10^32(R_ cav/0.5 pc)^2 v_ kick/10^2 km/s erg/ s.
Next, a question emerges as to whether interactions with the plasma would stop the ejection at much smaller distances than those observed. In a cold medium, the deceleration is due to the rest energy of ions swept up by the moving ejection (e.g., ). In a hot medium, the deceleration will be due to the total, rest + internal, energy density of the plasma, see Appendix <ref> for details. If the plasma is highly relativistic, the internal energy dominates. Then, we can find the density of cold ions with the same total energy density, u_ p, as the total energy density, u_ cav, of the relativistic plasma, and compare it to estimates of the density of ions in observed cavities, which are of the order of n_ p,obs∼ 10^-3 cm^-3 (e.g., ). Noting that the relativistic plasma in the cavity is in pressure balance with the external thermal plasma, we obtain u_ cav≈ 3 p_ cav≈ 3 n_ p, ISM kT_ ISM (where, for the sake of simplicity, we consider only protons). Hence, the number density of protons for which u_ cav = u_ p is
n_ p = 3 n_ p,ISM k T_ ISM/(m_ p c^2) ≈ 2.8 × 10^-9 n_ p,ISM (T_ ISM/10^4 K),
which we normalized using values characteristic of the warm ISM. The same value is obtained for the hot ISM with n_ ISM≈ 10^-4 cm^-3 and T_ ISM∼ 10^8 K. Thus, the relativistic plasma filling the cavity does not noticeably slow down the ejecta.
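The estimate above can be checked in the same way (a sketch in cgs units):

```python
k_B = 1.380649e-16   # Boltzmann constant [erg/K]
m_p = 1.6726e-24     # proton mass [g]
c = 2.9979e10        # speed of light [cm/s]

n_p_ISM, T_ISM = 1.0, 1.0e4   # warm ISM values

# cold-proton number density carrying the same energy density as the
# relativistic plasma in pressure balance with the ISM
n_p = 3.0 * n_p_ISM * k_B * T_ISM / (m_p * c**2)
print(f"n_p ~ {n_p:.1e} cm^-3")   # ~ 2.8e-9, far below the observed ~1e-3 cm^-3
```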
At the end of Section <ref>, we have shown that the interaction with the remnants of the previous compact jet can only moderately slow down the ejecta, and only close to the BH. Then, in order to explain the observed wide deceleration profiles of the ejecta, we postulate the presence of an extended transition region between the core of the cavity and the ISM, similar to the transition layer found in <cit.>, but wider and with the cold protons associated with clumps/filaments of thermal plasma in pressure balance with the relativistic plasma. Also, the outflows can be loaded with some baryons, though their contribution to the drag is probably negligible, see Appendix <ref>. In particular, in the case of relativistic jets dominated by cold ions in the comoving frame, the ions will be injected into the cavities with relativistic energies, γ∼Γ_ j, upon the complete deceleration of the jet.
§ DISCUSSION
In all previous studies of the evolution of transient jets, the presence of relativistic plasma inside the cavities was not included, even though the inflation of cavities by relativistic plasma from jets has been investigated in the literature <cit.>. As we have argued, the presence of such plasma in pressure balance with the external medium is unavoidable. Furthermore, we have demonstrated that the inertia of the relativistic plasma is by several orders of magnitude too low to affect the dynamics of the ejecta within the cavities. That may suggest the existence of extended two-phase transition layers, with thermal clumps and/or filaments in pressure balance with the surrounding relativistic plasma and filling the space near the edges with gradients of the volume filling factor, cf. <cit.>.
We note that the idea of having empty cavities was also considered for X-ray–mapped hot galactic and galaxy-cluster atmospheres. However, radio detections at very low frequencies indicate that the X-ray cavities are not empty, but are filled by relativistic plasmas <cit.>. Yet another example of remnants of past jet activity represented by cavities filled up by relativistic plasma is provided by two giant bubbles above and below the Milky Way center, the so-called Fermi bubbles <cit.>. They were also detected in radio <cit.> and X-rays <cit.>, and argued to represent remnants of the past jet activity in Sgr A^* (e.g., ).
§ CONCLUSIONS
Our main results are as follows.
We have proposed a novel scenario explaining the origin of transient/episodic jets in BH XRBs. It involves a transition from a collimated outflow (jet) to a wide, un-collimated one due to an increase of the accretion rate. That increase raises the maximum magnetic flux that can thread the BH, which in turn results in a transition of the outflow from the MAD phase to the SANE one. Consequently, the jet production stops, resulting in a discrete ejection, detached from the central BH.
The outflow continues after the jet ejection, but it is no longer collimated. This produces most of the relativistic plasma contributing to the formation of the cavities surrounding XRBs. The cavities are not vacuous, but filled by that plasma.
The relativistic plasma filling a cavity is in pressure equilibrium with the surrounding ISM. We show that this implies that the plasma cannot decelerate the ejecta. The modest deceleration within the cavities observed in a number of cases requires the presence of clouds and/or filaments filling the cavity, and forming a wide transition layer between the cavity and the surrounding ISM.
§ ACKNOWLEDGEMENTS
We acknowledge support from the Polish National Science Center under the grant 2019/35/B/ST9/03944.
§ DECELERATION BY RELATIVISTIC PLASMA
The drag force acting on an ejection moving in an external medium is equivalent to the momentum flux of the medium measured in the ejection comoving frame (e.g., ). We find it is equal to
Π' = (w_ cavβ_ ej^2 Γ_ ej^2 + p_ cav) A_ ej,
where w_ cav=ρ_ cav c^2 + u_ cav + p_ cav, β_ ej = v_ ej/c, Γ_ ej= (1-β_ ej^2)^-1/2, and A_ ej is the jet cross-sectional area. For a cavity filled with a relativistic plasma, we have ρ_ cavc^2 ≪ u_ cav + p_ cav and w_ rel≈ 4 p_ cav, implying
Π'_ rel/A_ ej≈ 4 p_ cavβ_ ej^2 Γ_ ej^2 + p_ cav≈ 4 p_ cavΓ_ ej^2.
For a cavity filled with a cold plasma, we have
w_ cav≈ρ_ cav c^2 = n_ p m_ p c^2, and
Π'_ cold/A_ ej≈ n_ p m_ p c^2 β_ ej^2 Γ_ ej^2 ≈ n_ p m_ p c^2 Γ_ ej^2.
(In this case at β_ ej≪ 1, we get the well-known formula
for the ram pressure p_ ram≡Π'_ cold/ A_ ej = ρ_ cav v_ ej^2.)
Consequently, the pressure balance between the relativistic plasma in the cavity and the thermal plasma in the ISM outside it implies Π'_ rel≪Π'_ cold. Then, as discussed in Section <ref>, such relativistic plasma cannot significantly decelerate the ejecta within the observed cavities, see Equation (<ref>), and we need other cavity components for that.
|
http://arxiv.org/abs/2307.02109v1
|
20230705083609
|
Aeroacoustic investigation of airfoil at near stall conditions
|
[
"Prateek Jaiswal",
"Jose Rendón",
"Stéphane Moreau"
] |
physics.flu-dyn
|
[
"physics.flu-dyn"
] |
Aeroacoustic investigation of airfoil at near stall conditions
[email protected]
Department of Mechanical Engineering, University of Sherbrooke, Sherbrooke, QC, CA.
Department of Mechanical Engineering, University of Sherbrooke, Sherbrooke, QC, CA.
Department of Mechanical Engineering, University of Sherbrooke, Sherbrooke, QC, CA.
This paper presents a detailed aeroacoustic investigation of a Controlled-Diffusion airfoil at near stall condition. The study aims at answering two research questions: identify the flow mechanism responsible for separation noise for an airfoil near stall conditions and whether the noise is generated by a dipole for airfoil close to stall and can be quantified by Amiet's diffraction theory. The study uses synchronized PIV, RMP and far-field microphone measurements to perform experiments at two chord based Reynolds numbers of
about 150,000 and 250,000. The results show that when the airfoil is placed at a higher angle of attack, such as 15^∘, strong amplification of flow disturbance is seen, resulting in the rolling up of the shear layer in the aft-region of the airfoil, forming
large coherent structures. While these rollers play a central role in the increase in noise due to flow separation, the flapping of the shear layer does not contribute to the separation noise. The present study conclusively shows that separation noise is dipolar in nature, and that the quadrupolar contribution for low-speed airfoils at near-stall conditions can be neglected. However, the increase in flow disturbances measured close to the trailing edge of the airfoil implies that the assumption of small-amplitude disturbances is no longer valid, which is the central premise of the linearized thin-airfoil theory. Outside the frequency range at which flow separation operates, Amiet's theory
is able to predict the far-field noise even at high angles of attack.
Stéphane Moreau
August 1, 2023
===================
§ NOMENCLATURE
C Airfoil chord
c_0 Speed of sound
C_p Mean pressure coefficient
C_prms Root-mean-square of the wall-pressure coefficient
E_11 Pre-multiplied turbulent energy spectra
H Boundary layer shape factor
M_∞ Inlet Mach number
p_∞ Inlet static pressure
p_rms root-mean-square of the wall pressure
p^'_a Far-field acoustic pressure
p^'_w Fluctuating wall-pressure
R_ij Second order two-point zero time delay correlation
Re_c Reynolds number based on the chord
S_pp Far-field acoustic power spectral density
u_i Fluctuating velocity component
U_c Convective speed of wall-pressure fluctuations
U_∞ Inlet velocity
U_e Boundary layer edge velocity
U_1,U_2,U_3 Mean velocity in trailing edge reference frame
u_1u_1,u_2u_2,-u_1 u_2 Root-mean-square of velocity fluctuations in trailing edge reference frame
-ρ u_1 u_2_max maximum Reynolds shear stress
V_x,V_y Mean velocity in wind tunnel reference frame
x,y,z Wind tunnel coordinate system
x_1,x_2,x_3 Coordinate system aligned with the airfoil trailing edge
x'_1,x'_2,x'_3 Coordinate system aligned with the airfoil leading edge
δ_95 Boundary layer thickness based on 95% of U_e
δ^* Boundary layer displacement thickness
Λ Dimensionless radiation ratio
θ Boundary layer momentum thickness
ρ Constant air density
§ INTRODUCTION
Airfoil trailing-edge noise is dominant in a host of engineering applications. Several of the distinct mechanisms,
which are referred to as airfoil self-noise, are related to the scattering of pressure gusts past the airfoil trailing edge. Among them, noise due to flow separation is found to be dominant at high angles of attack, where large-scale flow separation may occur. This is particularly the case for some wind turbine architectures, such as the H-Darrieus type wind turbine <cit.>. Therefore, accurate models are needed during the pre-design phase to estimate the acoustic noise generated by such machines. To achieve this, a better understanding of the noise generation mechanism is needed. Nevertheless, only a few comprehensive aeroacoustic studies have been performed for airfoils placed at high angles of attack <cit.>. As such, the overall objectives of the present manuscript are to identify the dominant flow mechanism(s) responsible for separation noise, and to test the applicability of diffraction theory <cit.> to predict noise at high angles of attack.
Numerically, Moreau and co-workers performed several high-fidelity incompressible simulations for the airfoil at high incidence almost a decade ago <cit.>. In these simulations, an isolated airfoil installed in an open-jet anechoic wind tunnel (the test configuration) was simulated as opposed to the full-scale wind turbine <cit.>. As such, only the noise due to the boundary layer and its separation was studied. The far-field noise was quantified using both acoustic analogies <cit.> and Amiet's diffraction theory. <cit.> reported an over-prediction of the wall pressure by the LES, and the far-field acoustic spectra estimated by Amiet's model to be 10 dB higher than the measurements. In particular, this disagreement was present only at low frequency, where the noise due to separation is expected to be the dominant mechanism. Similarly, the semi-empirical models for far-field noise based on Amiet's theory, referred to as MODA <cit.>, have been shown to yield poor results. However, the reason for this disagreement when predicting separation noise with Amiet's model is unknown and requires further investigation.
More recently, compressible simulations have been performed by <cit.> to quantify the individual contributions of each equivalent source type (dipole and quadrupole) from low-speed airfoils in near-stall conditions. They achieve this by subtracting the noise estimated by the solid formulation of the Ffowcs Williams-Hawkings acoustic analogy from the noise estimated by the permeable formulation. <cit.> show that the noise contribution by quadrupole sources is
significant, when an airfoil is placed at high incidence. However, while the porous formulation is complete, the solid formulation ignores the correlation between the dipole and quadrupole noise sources. This can lead to spurious directivity patterns as already demonstrated by <cit.>. Nevertheless, it is important to quantify the individual contributions of various equivalent source types that may contribute to far-field noise.
While equivalent noise sources are an important metric in aeroacoustics research, they are by no means unique. This is because the multipole expansion <cit.> dictates that one equivalent image source can be replaced by another. For instance, a quadrupole can be expressed as two dipoles that are of equal strength but in phase opposition. As such correct identification of equivalent noise source cannot by itself describe or confirm the precise flow mechanism behind separation noise. As such it is imperative to perform a detailed flow quantification and analysis to understand the noise mechanism. Previously, <cit.> hypothesized that airfoil separation noise results from the interaction between turbulent structures in the shear layer and the airfoil trailing-edge, as separated structures are convected past the airfoil, resulting in significant pressure fluctuations. However, previous experiments were unable to accurately identify the noise mechanism, as flow-field measurements were unavailable.
More recently, using PIV and synchronized wall-pressure and hot-wire measurements, <cit.> identified three possible distinct noise generation mechanisms to explain noise generation by an airfoil close to stall. Importantly, all of these mechanisms were linked to instabilities in the shear layer and were localized in a region within the separated shear layer away from the wall. The separated shear layer may not only result in a substantial increase in the contribution of quadrupole noise <cit.>, but may also invalidate the unsteady Kutta condition. This is because the latter relies on the flow leaving the airfoil trailing edge smoothly. Furthermore, separation noise is dominant for an airfoil placed at high angles of attack. Therefore, the central premise of the thin-airfoil linearized theory may not hold for such cases because the amplitude of the disturbance induced by the flow separation may not be small. Evidently, changes may occur in the resulting radiation ratio, and thus Amiet's radiation factor may not be able to correctly quantify the hydrodynamic-to-acoustic conversion <cit.>. Therefore, in the present manuscript, we ask the question: can the separation noise be fully quantified using a dipole source, such as those outlined in Amiet's diffraction theory? If so, are there other possible mechanisms of noise generation that may explain noise generation due to an airfoil close to stall? Furthermore, is the mechanism behind the separation noise universal?
To this end, aeroacoustic measurements have been performed in
the anechoic flow facility at Université de Sherbrooke. In particular, planar PIV measurement, wall-pressure and far-field acoustic measurements have been achieved. For the present study, a Controlled-Diffusion (CD) airfoil is used. These measurements have been performed at a fixed geometric angle of attack of 15^∘. For the CD airfoil at this angle of attack, flow separation near the leading-edge region was reported by <cit.>. As such, the present aeroacoustic investigation is performed to understand noise due to flow separation, for an airfoil that is close to stall conditions <cit.>. Comparing the flow and pressure characteristics between the present case and that reported earlier, where the boundary-layer is fully attached near the trailing-edge of the airfoil <cit.>, is expected to elucidate the true contribution of separation noise.
§ EXPERIMENTAL SET-UP AND INSTRUMENTATION:
The aero–acoustic measurements were performed in the anechoic wind tunnel at Université de Sherbrooke (UdeS). The anechoic room, is about 7 × 5.5 × 4 m^3 in dimension. The open jet has a dimension of 50 × 30 cm^2, and can achieve a maximum velocity of 40 m/s. As the temperature of the open jet can be controlled, all the measurements are performed at a constant free-stream density ρ.
The CD airfoil is placed at a 15^∘ geometric angle of attack with the help of plexiglass plates of thickness 4.25 mm laser cut to reduce uncertainty in angle of attack while placing the airfoil and at the same time giving good optical access. All the measurements are performed at a free-stream velocity U_∞ of 16 m/s and 28 m/s, which respectively corresponds to Mach numbers M_∞≡ U_∞/c_0 ≃ 0.05 and M_∞≃ 0.08 (c_0 speed of sound) and Reynolds numbers based on the airfoil chord length C and the free-stream velocity of Re_c ≃ 1.5× 10^5 and Re_c ≃ 2.5× 10^5.
§.§ Planar PIV measurements setup
Two-dimensional PIV measurements were performed on the suction side of the airfoil, as shown in figure <ref>. Three sCMOS cameras, with a 5.5 megapixel sensors each, were used to acquire images in a dual frame mode. A ND:YAG dual pulsed laser from Lavision was used for illumination. The light sheet for Planar PIV was generated with a set of spherical lenses and a diverging cylindrical lens with a focal length of -20 mm. Tracer particles of about 1 μm
were generated to seed the flow. The images were recorded for each case at an acquisition frequency of 2 Hz. The inter-frame time was increased until the cross-correlation coefficient remained between 0.6 and 0.9. The resulting inter-frame time meant that a particle image displacement of more than 20 pixels was achieved in the free stream. This ensures a low relative error (∼ 0.5 %) in the estimation of the particle image displacement. The data collected at a free-stream velocity of U_∞=16 m/s were processed using Lavision's Davis 8 software while for the U_∞=28 m/s case they were processed with the newer Davis 10 software. The final vector calculations were performed on
the computer clusters from Digital Research Alliance of Canada. For
U_∞=16 m/s case, a total of 11 passes were used for the multi-grid scheme, starting with an initial window of 128 × 128 pixels to the final window size of 4× 4 pixels. In the iterative multi-grid scheme, an overlap of 75 % and an elliptical weighting (elongated in the mean-flow direction) is used. The final window size was about 0.0923 × 0.0923 mm^2. In contrast, for the U_∞=28 m/s case, a reduced window overlap of 50 % was used while keeping all the other parameters the same as for the U_∞=16 m/s case. This was done to accelerate the vector calculations and to reduce the size of the final vector field.
§.§ Steady wall-pressure measurements
Figure <ref> shows the pinholes located along the chord of the airfoil, which are used to measure the mean wall-pressure coefficient.
There are in total 21 probes on both suction and pressure sides of the airfoil, 18 of them are placed in the streamwise direction and the last 3 in the spanwise direction.
Pinholes on the pressure side of the airfoil are labelled as 4, 8, 10, 12 and 29, whereas those on the suction side are 1 to 6 at the leading edge, 7, 9 and 11 at mid-chord and 21 to 28 at the trailing edge (see figure <ref>). The differential pressure is measured using an array of miniature amplified low pressure sensors in order to get a full reading along the airfoil chord <cit.>. These miniature amplified low pressure sensors have an accuracy of 0.25 % Full Scale (FSS), with full scales between 248.84 and 1244.2 Pa.
Details on the setup and acquisition can be found in <cit.>. In the current paper, the wall differential pressure are normalized by inlet free stream dynamic pressure, which yields the mean pressure coefficient
C_p≡ (p-p_∞)/(0.5 ρ U_∞^2) with p_∞ the inlet pressure.
§.§ Unsteady wall-pressure measurements
The pinholes on both sides of the airfoil are also connected to Remote Microphone Probes (RMP) to record unsteady static wall-pressure measurements <cit.>. For the present set of experiments, Knowles FG 23329-P07 miniature microphones were used.
These microphones have a flat response over a large range of frequency (0.1-10 kHz), and have a nominal sensitivity of 22.4 mV/Pa. The pinhole diameter of 0.5 mm ensures spectral averaging is avoided well beyond 10 kHz <cit.>. As these microphones are connected remotely to the pinhole, a correction in phase and magnitude is needed. This is achieved following the methodology outlined by <cit.>.
§.§ Hot wire measurements
Hot wire anemometry (HWA) is used to investigate the spectral content of the velocity disturbances over the airfoil. The HWA probe is placed directly above RMP 26 (x/C=0.98), on the suction side of the airfoil. The hot wire measurements were performed using a TSI 1210-T 1.5 single wire probe. The probe consists of a platinum-coated tungsten wire with a 0.0038 mm diameter and a 1.27 mm length, which satisfies the recommended wire length-to-diameter ratio of 200 <cit.>. The hot wire probe was connected to a TSI IFA 300 anemometer operating in Constant Temperature Anemometry (CTA) mode. The output signals of this anemometer were recorded at a 25,600 Hz acquisition frequency using a NI 9234 24 bit module. In order to attenuate any unwanted parasitic noise, a low pass filter at 1000 Hz was applied. Based on previous wall-pressure measurements <cit.>, no substantial contribution of velocity disturbances beyond this frequency is expected for the 15^∘ angle-of-attack case. Furthermore, the HWA measurements were performed only at U_∞=16 m/s.
The total recording time for each of these point-wise measurements was about 60 seconds. For more details on the setup the reader is referred to <cit.>.
§.§ Acoustic measurements
Far-field acoustic pressure was measured using Integrated Circuit Piezoelectric (ICP) microphones with a 1/2 inch diaphragm. The microphones are placed in the airfoil mid-chord plane. In total, 8 microphones were placed on a circular arc around the airfoil at a distance of 1.21 m (or about 10 times the chord length) to ensure they are in an acoustic far-field location. The microphones were calibrated using a B&K piston-phone, which ensures the calibration uncertainty is within 0.2 dB.
§.§ Synchronized measurements
In order to relate the near-field velocity disturbance field to the resultant far-field acoustic noise, synchronized velocity-pressure measurements have been performed as previously done at a lower 5^∘ angle-of-attack <cit.>. Furthermore, the wall-pressure measurements were also synchronized to study the footprint of velocity disturbances on the wall. To obtain the acoustic directivity pattern caused by the diffraction of unsteady gusts, the far-field microphones were synchronized with the RMPs. The near-field and far-field pressure measurements are time resolved compared to the PIV measurements, which have a limited time resolution. As such, the acquisition frequencies for all the measurements performed are set to powers of two.
is achieved using the procedure outlined by <cit.>, where further details on the implementation can be found.
§ RESULTS
To ensure that the flow facility and installation do not dictate overall flow dynamics <cit.>, the mean wall-pressure coefficient has first been compared. The results in
two different facilities, in which the CD airfoil has been tested within a 50 cm wide jet, show an overall good agreement over most of the airfoil chord, C, as shown in figure <ref>. (x,y) represents the fixed laboratory reference frame at the airfoil midspan, x being parallel to the jet axis and oriented with the flow. The origin of the reference frame is taken at the airfoil trailing edge. Repeatability tests at UdeS have also been achieved <cit.>.
Previous experimental and numerical studies on this airfoil <cit.> have reported an increase in low-frequency noise, when it is placed at a high angle of attack. This observation is confirmed by the far-field microphone measurements shown in figure <ref>, which shows an overall increase in low frequency sound pressure levels when comparing the 8^∘ and 15^∘ cases for the two Reynolds numbers. As shown in figure <ref>, this is also consistent with the previous measurements by <cit.> (open symbols).
This noise increase is most likely
linked to an overall increase in r.m.s levels of wall pressure close to the trailing-edge region, as shown in figure <ref> (b). As the overall goal of the present manuscript is to identify flow mechanisms responsible for separation noise and to test the applicability of diffraction theory <cit.>, the cause (flow disturbances) to the effect (far-field noise) will be established with the help of the latter.
Amiet's model and its extension <cit.> rely on Curle's analogy combined with a compressible linearized Euler model for the wall-pressure fluctuations on an infinitely thin flat plate seen as equivalent dipoles. The PSD of the far-field acoustic pressure at any observer located at 𝐗=(X_1,X_2,X_3), for any angular frequency ω, generated by a flat plate of chord length C and span L then reads:
S_pp(𝐗,ω) ≈ (k C X_2/4π S_0^2)^2L/2| I(ω/U_c,kX_3/S_0)|^2 Φ_pp(ω) l_z(ω,kX_3/S_0),
where k is the acoustic wave number, S_0 the corrected distance to the observer, I the analytical radiation integral (or acoustic transfer function) given in <cit.>, U_c the streamwise convection velocity, Φ_pp the wall-pressure spectrum and l_z the spanwise coherence length.
In summary, the wall-pressure field can be characterized by the PSD of wall-pressure fluctuations, the convection velocity and the spanwise correlation length. In order to explain the increase in the low frequency far-field acoustic spectra, the statistical description of the incident wall-pressure field will be explored in the next section.
§.§ Unsteady wall-pressure field
Figure <ref> shows
PSD measurements using RMPs on the suction side of the airfoil along its chord. The first two probes, located at the leading edge, show a rapid decay in spectral energy, most likely because of the laminar nature of the boundary layer. The humps and peaks observed at the RMP 3 probe (x/C ≃ 0.09) can be linked to boundary-layer instabilities <cit.>, which are present due to the existence of a Laminar Separation Bubble (LSB). From RMP 5 (x/C ≃ 0.15) onward, the wall-pressure spectra decay much more slowly than for the first three probes, suggesting a possible turbulent re-attachment. Near the mid-chord region (RMP 9), the wall-pressure statistics almost attain a -5 slope at high frequencies, suggesting a mean attached turbulent boundary layer.
On the aft part of the airfoil,
an almost constant gradient in the wall-pressure spectra, of f^-2.2, emerges in the mid-frequency range. Similar observations were made by <cit.> who used the NACA-65-410 airfoil for their study at high angles of attack, and by <cit.> on the oscillatory NACA-0012 airfoil in the similar light-stall flow regime. In contrast, at low frequencies, the wall-pressure spectrum becomes flat to an extent that its slope is near zero for probes beyond RMP 9. As such, the classical f^2 <cit.> scaling at low frequency is not observed. It is hypothesized that this change in slope is more linked to the presence of
the jet, which predominantly contributes to the low frequencies and interacts more with the airfoil at high angles of attack. At higher frequencies, a constant spectral slope of f^-5 emerges, which is consistent with previous studies made on the CD airfoil <cit.>. Overall, the spectra become statistically similar beyond
0.85 C. To quantify the effects of the mean pressure gradient on wall-pressure fluctuations, differences in
PSD between the airfoil at 8^∘ and 15^∘ angles of attack at Re_c≃ 150000
are plotted in figure <ref> for the trailing-edge sensors 21-25 only between
10 and 1000 Hz.
An increase in spectral content is clearly shown for the
15^∘ case compared to the 8^∘
case <cit.>.
The second quantity of interest, the convection velocity U_c, was estimated by correlation analysis between two RMPs, 23 and 24, which are separated by a finite streamwise distance (
about 0.02 C). This was performed at several band-passed frequencies to obtain the convection velocity at frequencies where the separation noise dominates. The results obtained are shown in figure <ref>. Open circles represent the 15^∘ angle-of-attack case, while cross symbols stand for the 8^∘ angle-of-attack case. The dashed line refers to the mean convection velocity (0.75 U_∞) estimated by <cit.> from the phase slope of two sensors at the trailing edge at 15^∘ and 16 m/s. The present results are therefore consistent with the previous ECL measurements and also with the estimate provided by <cit.> based on the direct numerical simulation <cit.> for the 8^∘ case (0.72 U_∞).
The lower value at 400 Hz is also consistent with that reported by <cit.> for this frequency range (0.52 U_∞).
Note that the observed variations can be caused by the uncertainty in the estimation of U_c, which should be a function of total recording time of the signal as shown in the appendix.
In the low frequency ranges
below 400 Hz, an increase in convection velocity is observed for the 15^∘ angle-of-attack case compared to the 8^∘ angle-of-attack case. This result is quite surprising, as an increase in adverse pressure gradient leads to a decrease in convection velocity <cit.>. Therefore, this observation will be addressed in the subsequent sections. Nevertheless, at higher frequencies
beyond 400 Hz, the convection velocity for the 15^∘ angle-of-attack case does become lower than that for the 8^∘ angle-of-attack case. Finally, the convection velocity decreases with an increase in frequencies because at low frequency, only the large eddies contribute to the pressure gusts <cit.>. At higher frequencies, the contribution from smaller eddies, which are close to the wall, becomes significant, resulting in lower convection velocities. Therefore, <cit.>'s observation regarding the frequency dependence of convection velocity is valid for both angles of attack.
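A sketch of the band-passed cross-correlation procedure used to estimate U_c between two streamwise RMPs is given below. The sensor spacing, sampling rate and synthetic signal are illustrative assumptions; in practice the measured RMP 23 and RMP 24 time series would be used.

```python
import numpy as np
from scipy.signal import butter, filtfilt, correlate

def convection_velocity(p_up, p_down, dx, fs, f_lo, f_hi):
    """Estimate U_c from two band-passed wall-pressure signals.

    p_up, p_down : pressure time series at the upstream/downstream RMPs
    dx           : streamwise sensor separation [m]
    fs           : sampling frequency [Hz]
    f_lo, f_hi   : band-pass edges [Hz]
    """
    b, a = butter(4, [f_lo, f_hi], btype="bandpass", fs=fs)
    x = filtfilt(b, a, p_up)             # zero-phase filtering keeps the time delay
    y = filtfilt(b, a, p_down)
    xc = correlate(y, x, mode="full")
    lags = np.arange(-len(x) + 1, len(x))
    tau = lags[np.argmax(xc)] / fs       # delay of the downstream sensor
    return dx / tau if tau > 0 else np.nan

# Illustrative usage on a synthetic convecting gust at U_c = 12 m/s
fs, T, Uc_true, dx = 65536, 4.0, 12.0, 0.0027
t = np.arange(int(T * fs)) / fs
rng = np.random.default_rng(0)
s = filtfilt(*butter(4, [80, 300], btype="bandpass", fs=fs),
             rng.standard_normal(t.size))
delay = int(round(dx / Uc_true * fs))
p_up, p_down = s[delay:], s[:-delay]     # downstream signal lags the upstream one
print(convection_velocity(p_up, p_down, dx, fs, 80, 300))
```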
Lastly, quantifying the spanwise correlation length is anything but straightforward. <cit.>, under the assumption that the normalized cross-power spectral density can be represented by two separate dimensionless variables ωΔx_1 / U_c and ωΔx_3 / U_c, showed that the two-dimensional coherence function can be written as follows:
γ(Δx_1,Δx_3,ω) = Φ_pp(ω, Δx_1, Δx_3)/Φ_pp(ω, 0, 0) = A(ωΔx_1 / U_c) B(ωΔx_3 / U_c) e^-i ωΔx_1 / U_c
The magnitude-squared coherence in the spanwise direction, is obtained by multiplying γ(0,Δx_3,ω) by its complex conjugate, and is plotted in figure <ref> (a). As can be seen in the latter, the coherence goes to zero beyond 1000 Hz, which makes the estimation of the associated length scales impossible.
More importantly, a hump centred around ∼ 100 Hz is observed in the values of γ^2. These results are consistent with the previous measurements by <cit.> shown as symbols in figure <ref> (a). Even though similar results were also reported by <cit.> beyond 100 Hz, the oscillatory behavior observed in their figure 4 below 100 Hz is caused by a too short signal length as shown in the appendix.
<cit.> was also first to recognize the exponential decay nature of the wall-pressure correlation separated by a finite distance, as also evidenced in figure <ref> (a). Invoking observation of exponential decay of correlation made by <cit.>, the function A and B can be written as:
A(ωΔx_1 / U_c) = e^- ω b_2 Δx_1 / U_c and B(ωΔx_3 / U_c) = e^- ω b_1 Δx_3 / U_c
where, b_1 and b_2 are
fitting parameters.
Under the assumption of zero streamwise separation, the normalized cross-power spectral density can then be written as follows:
γ(0,Δx_3,ω) = e^- ω b_1 Δx_3 / U_c
The spanwise correlation length l_z(f) can be estimated by:
l_z(ω) = ∫_0^∞γ(0,Δx_3,ω) dΔx_3
Plugging equation (<ref>) in equation (<ref>) yields:
l_z(ω) = b_1 U_c/ ω
The reader should be cautioned that Corcos's model (equation (<ref>)) can lead to nonphysical values of spanwise correlation length, as it relies on the assumption that the convection velocity is independent of frequency. Nevertheless, it provides a reasonable estimation of the correlation length and has been used in the past by several authors <cit.>. Therefore, Corcos's model (equation (<ref>)) was used to estimate the spanwise correlation length. However, as the frequency at which the model should be used is unclear, the frequency was arbitrarily chosen. The resulting lengths are shown as the solid black and broken grey lines for the 15^∘ angle-of-attack and the 8^∘ angle-of-attack cases, respectively. The resulting values for the constant b_1 are 1.37 and 1.34 for the 15^∘ and the 8^∘ angle-of-attack cases, respectively. Although the estimate of Corcos's model predicts high-frequency attenuation in spanwise correlation length, it over-predicts it at low frequency for the 8^∘ angle-of-attack case. This can be corrected by using <cit.> model (solid red line), which takes the boundary-layer thickness (δ) and friction velocity (u_τ) into account to re-scale correlation length in the low-frequency range. The three empirical constants were set to 1.34, 19.5, and 13.5 in order to estimate the spanwise correlation length with <cit.> model.
In order to experimentally estimate the spanwise correlation length, the spanwise coherence between several spanwise sensors (RMPs 25-28) near the trailing edge (x/C=0.98) was calculated. The estimated values of the real part of the coherence were fitted with an exponential decay function for a given frequency.
The exponential decay function was chosen based on observations by <cit.>. Finally, the correlation length l_z
was obtained by combining equations (<ref>) and (<ref>). The resulting values of the correlation lengths (l_z(f)) are represented by symbols (crosses and circles) in figure <ref> (b). As in the case of convection velocity, the spanwise correlation length also increases for the 15^∘ angle of attack compared to the 8^∘ angle of attack case. As wall pressure is an imprint of turbulent flows convecting over the surface, what flow structures can explain such an increase in convection velocity? To answer this question, velocity field measurements were carried out using PIV and will be discussed in the following section.
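A minimal sketch of this estimation of l_z(f) is given below: for each frequency, an exponential decay is fitted to the coherence measured between a reference spanwise RMP and its neighbours, and the decay constant directly gives the correlation length (equations above). The sensor spacings and Welch parameters are illustrative.

```python
import numpy as np
from scipy.signal import coherence
from scipy.optimize import curve_fit

def spanwise_correlation_length(signals, dz, fs, nperseg=8192):
    """Fit gamma(0, dz, f) = exp(-dz / l_z(f)) across spanwise sensor pairs.

    signals : list of pressure time series from spanwise RMPs (same x/C),
              signals[0] being the reference sensor
    dz      : spanwise separation of each sensor from the reference [m]
    """
    ref = signals[0]
    f, _ = coherence(ref, ref, fs=fs, nperseg=nperseg)   # frequency vector
    gamma = np.zeros((len(signals) - 1, f.size))
    for i, s in enumerate(signals[1:]):
        _, g2 = coherence(ref, s, fs=fs, nperseg=nperseg)
        gamma[i] = np.sqrt(g2)                            # coherence magnitude
    l_z = np.full(f.size, np.nan)
    for j in range(f.size):
        try:
            popt, _ = curve_fit(lambda z, lz: np.exp(-z / lz),
                                np.asarray(dz[1:]), gamma[:, j],
                                p0=[0.01], maxfev=2000)
            l_z[j] = popt[0]
        except RuntimeError:
            pass                                          # keep NaN if the fit fails
    return f, l_z
```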
§.§ Velocity measurements
Two snapshots of the wall-parallel velocity are plotted in figure <ref>. As can be seen, large rollers, similar to those reported by <cit.> at 5^∘ incidence, are observed, which evidences the presence of large coherent structures. This is also consistent with the flow topology seen by <cit.> in their LES at 15^∘. These structures are typically induced by an instability within the separated shear layer. At some instants, even a fully separated boundary layer is observed, as shown in figure <ref> (b). These instantaneous flow fields confirm a large-scale separation and the passage of coherent rollers at the trailing edge, which are reminiscent of a Kelvin-Helmholtz type flow.
Figures <ref> and <ref> show the mean boundary-layer statistics recorded by the first and the second camera, respectively. Figure <ref> is plotted with respect to an observer sitting on the leading-edge of the airfoil while in figure <ref>, the coordinate system of the velocity field is aligned with the wind-tunnel axis. As evidenced in table <ref>, the spatial resolution achieved by the first camera is about three times higher than that of the second camera. Thus, further spatial filtering is expected in the results shown in figure <ref>. Figure <ref> shows that, in a time-averaged sense, the mean flow becomes separated from RMP 3 (
x/C=0.09) onward. This is consistent with figure <ref>, which shows a plateau in C_p between RMP 3
and RMP 6. More importantly, the separated region shown by
the black dashed lines in figures <ref> and <ref> has a negligible r.m.s value of velocity disturbances (all Reynolds stresses close to zero). This confirms the laminar nature of the time-averaged separated flow region, and in the literature, it is commonly referred to as the
LSB. The presence of such an LSB is characteristic of the flow past the CD airfoil at Re_c=150000 and is consistent with the finding of <cit.>, who also reported the presence of a LSB when the CD airfoil is placed at a 15^∘ incidence. The LSB near the leading-edge of the airfoil seems to deflect the mean flow away from the airfoil, which provides a possible explanation for a drop in mean-loading reported in figure <ref>. The deflected mean flow and resultant flow acceleration near the leading-edge, at the point of inception of the LSB, can be evidenced from an increase in the length of arrows in figure <ref>. In a time-averaged sense, the LSB seems to cover at least 30% of the airfoil chord. However, due to the limited field-of-view, the exact extent of the LSB could not be quantified.
Overall, the mean flow topology presented in figures <ref> and <ref> show that the flow at the leading-edge region of the CD airfoil at 15^∘ angle of attack and Re_c≃ 150000 is laminar in nature. The LSB ensures the flow transition that occurs only after x/C > 0.4 as found in the previous LES <cit.>.
The mean boundary-layer statistics recorded by the third camera are shown in figure <ref> for the case when the airfoil is placed at a 15^∘ angle of attack with an inlet velocity of 16 m/s. Figure <ref> (a) shows the mean wall-parallel velocity. Despite the large-scale flow separations observed in figure <ref>, the boundary-layer near the trailing edge is fully attached in a time-averaged sense. As such, in the present pre-stall noise study, the time-averaged flow near the trailing-edge of the CD airfoil is different from the one reported by <cit.>, who reported a separated time-averaged flow near the trailing edge. The black dotted line is the iso-contour of the inlet free-stream velocity U_∞, which roughly corresponds to the overall extent of the boundary layer. The Reynolds stress tensor terms, u_1 u_1/U^2_∞, u_2 u_2/U^2_∞, and -u_1 u_2/U^2_∞, are shown in figures <ref> (b), (c), and (d), respectively. Compared to the leading-edge region, the disturbances (quantified by r.m.s of velocities) close to the trailing-edge are substantially higher, which implies that the flow transitions to a fully turbulent boundary-layer somewhere between 40 and 65% of the chord. Higher levels of r.m.s velocities are the sources of far-field noise <cit.>. In particular, elevated regions of r.m.s velocity do not have a clear peak but a broad region of elevated intensity. This is typical of flows that experience the presence of shear-layer instabilities <cit.>.
In order to understand the impact of the Reynolds number, the results of the measurements performed at 28 m/s are plotted in figure <ref>. Upon comparison with figure <ref>, it shows similar overall behavior in the measured velocity field in the trailing-edge region. The overall length of the boundary layer is similar to the 16 m/s case at x_2 ≃ 0.32 C and close to the trailing-edge region (x_2 ≃ 0.02 C), as shown in tables <ref> and <ref>. Furthermore, for the 28 m/s case, the turbulence intensity appears to be much lower than in the 16 m/s case, resulting in more localized levels of iso-contours in figure <ref> compared to those in figure <ref>. This is especially true for the cross-term -u_1 u_2/U^2_∞.
A more quantitative comparison can be obtained by looking at the velocity profiles near the airfoil trailing edge, as shown in figure <ref>. The velocity profile at RMP 26 (x/C=0.98) shows that, when the CD airfoil is placed at 15^∘ incidence with an inlet velocity of 16 m/s, the near-wall mean velocity is reduced compared to the inlet velocity. Similar observations have been made by <cit.> (see figure 4), who reported a decrease in the near-wall mean velocity as the mean pressure gradient increases. As such, we expect the boundary layer to grow faster in the streamwise direction near the trailing-edge region for the 28 m/s case compared with the 16 m/s case. This faster growth of the boundary layer for the 28 m/s case is captured in the shape factor, which remains smaller at both the RMP 21 and RMP 26 locations compared to 16 m/s
(see the values in Tables <ref> and <ref>, for instance). Yet,
both velocity cases have higher values of the shape factor compared to the case when the airfoil is fixed at 8^∘ angle of attack and
16 m/s.
Notably, a higher value of the shape factor indicates flow close to separation <cit.>. Therefore, as the flow speed increases, the probability of flow separation decreases. Nevertheless, the overall boundary layer extent is similar for the 28 m/s and 16 m/s cases, as evidenced
in Tables <ref> and <ref>. As such, the Reynolds number based on the momentum thickness (Re_θ) for the airfoil placed at 15^∘ angle-of-attack is substantially higher than that of the 8^∘ case near the trailing-edge region.
The profiles of velocity statistics, namely u_1 u_1/U^2_∞, u_2 u_2/U^2_∞, and -u_1 u_2/U^2_∞, for the two velocity cases at 15^∘ angle of incidence and the case when the airfoil is placed at 8^∘ angle of attack and U_∞=16 m/s are compared in figures <ref> (b-d). Generally, the velocity statistics are normalized with the friction velocity to remove any Reynolds number (Re_τ) based effects. However, the overall goal of plot <ref> (b-d) is to demonstrate the levels of velocity disturbances with respect to the inlet velocity U_∞. Such a scaling inherently shows the applicability of thin-airfoil linearized theory, which assumes that the velocity disturbances are small compared to the inlet velocity U_∞. While the peak levels of velocity statistics scale with the boundary-layer thickness δ_95, in absolute units (for instance in meters) they are much further away from the wall compared to the 8^∘ angle-of-attack and 16 m/s case. More importantly, the profiles confirm that the r.m.s levels of velocity disturbances are elevated for the CD airfoil placed at 15^∘ angle-of-attack at 16 m/s case compared to the rest. With the exception of the wall-parallel disturbances (u_1 u_1/U^2_∞) for the 15^∘ angle-of-attack at 16 m/s case, the disturbances are at least an order of magnitude smaller in the rest of the cases tested. Previous studies <cit.> have reported an increase in r.m.s levels of velocity disturbances for wall bounded flows subjected to mean adverse pressure gradients.
Recently, <cit.> showed that normalizing the wall-pressure spectra by the square of the maximum value of the Reynolds stress, denoted by |-u_1 u_2|^2_max, leads to a collapse in low-frequency spectra over a broad range of cases for boundary-layer flows subjected to arbitrary mean pressure gradient. This normalization holds true because, as first shown by <cit.>,
the ratio p_rms/(ρ |-u_1 u_2|_max) falls between 2 and 3 for boundary-layer flows.
This was later confirmed by <cit.>
for canonical boundary-layer flows, and more recently by <cit.> for flows past an airfoil. These observations are confirmed
in tables <ref> and <ref> for the present case. Small deviations from the aforementioned values can be ascribed to measurement uncertainty, and the presence of open jet, which predominantly contributes to low frequency wall-pressure spectra and which is absent in the aforementioned data <cit.>. More importantly, when the scaling proposed by <cit.> is used to scale the wall-pressure spectra in Figure <ref>, a collapse in the low-frequency range is achieved. This collapse is remarkable because the wall-pressure spectra exhibit a difference of 20 dB, as shown in Figure <ref>, corresponding to an order of magnitude difference in wall-pressure fluctuations.
As the PIV velocity measurements were not time-resolved, additional single wire measurements were performed. As mentioned, single HWA measurements were performed close to the trailing-edge (RMP 26) of the airfoil. These measurements were done for airfoil placed at 15^∘ angle of attack and fixed inlet velocity of U_∞=16 m/s. Figure <ref> shows the pre-multiplied spectrogram f × E_11/U_e^2. The plot shows that the pre-multiplied energy spectrum peaks at about 100 Hz, away from the wall (0.4-0.6 ×δ_95), approximately the location where the peak in r.m.s. of velocity fluctuations was reported in figure <ref>.
In summary, figures <ref> to <ref> show that large scale flow disturbances may be present and confirm the instantaneous snapshots in figure <ref>. These large structures are in turn responsible for elevated
levels of r.m.s.
velocity fluctuations and the peak in the pre-multiplied spectrogram. As such, modal decomposition could be useful to understand the hierarchy and the organization of velocity disturbances close to the trailing edge.
§ MODAL ANALYSIS
The Proper Orthogonal Decomposition (POD) <cit.> was employed to uncover the modes present in the velocity disturbance field. One benefit of using POD is that, unlike linear stability analysis, it does not require velocity disturbances to be small. In the present paper, POD was carried out using the snapshot approach of the algorithm developed by <cit.>. For more information, please refer to the monograph by <cit.>. The modal energy distribution of the measured velocity field is shown in figure <ref>. The spatial POD modes are used to identify the spatial organization of the velocity disturbance field and their associated energy levels (E_r), and are plotted in figure <ref>. In the present manuscript, only the spatial modes associated with the vertical velocity disturbances (E_22) are used because they are the principal drivers of wall-pressure fluctuations and far-field acoustics <cit.>. Figure <ref> (b) clearly shows that the first 12 modes contribute to approximately 40% of the total energy, although the cumulative energy for the 16 m/s case appears to be slightly lower compared to the 28 m/s case. The relative contributions for the 16 m/s and 28 m/s are shown for the first 12 modes. As can be seen, the relative energies of modal pairs 3-4, 5-6, and 11-12 appear to be similar and may form a modal pair. However, upon inspection, it was found not to be the case
(see figure <ref> for example).
Yet, the spatial organization appears to be similar between the 16 m/s and 28 m/s cases. Moreover, in these figures, the dashed black lines that represent the time-averaged location where the wall-parallel velocity is equal to the free-stream velocity U_∞ show that the spatial modes are distributed across the boundary layer. In contrast, modal decomposition performed by <cit.> had shown that the spatial modes are uniquely present outside the time-averaged extent of the shear layer. In fact, the spatial distribution of the velocity disturbance field looks similar to the instantaneous
field in figure <ref> (a); it could be due to the passage of coherent structures and may correspond to disturbances in the 80-300 Hz frequency range. More importantly, the spatial extent or wavelength of this modal pattern (mode 3) closely corresponds to the peak in the pre-multiplied spectrogram (figure <ref>). As such, it may be responsible for the hump in the low-frequency wall-pressure (figure <ref>) and far-field acoustic spectra (figure <ref>). To verify this, the correlation between mode 3 and the band-passed pressure will be performed next.
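A minimal sketch of the snapshot POD used above is given below, implemented as an SVD of the mean-subtracted snapshot matrix; the snapshot storage layout and field sizes are assumptions for illustration only.

```python
import numpy as np

def snapshot_pod(u2_snapshots):
    """Snapshot POD of wall-normal velocity fields.

    u2_snapshots : array of shape (n_snapshots, n_points), one flattened
                   PIV field per row (assumed storage layout).
    Returns relative modal energies, spatial modes (rows) and temporal coefficients.
    """
    U = u2_snapshots - u2_snapshots.mean(axis=0)     # remove the mean field
    # Economy SVD: U = A diag(s) Phi, rows of Phi are the spatial modes
    A, s, Phi = np.linalg.svd(U, full_matrices=False)
    energy = s**2 / np.sum(s**2)                     # relative modal energy E_r
    temporal = A * s                                 # temporal coefficients a_r(t)
    return energy, Phi, temporal

# Example with random snapshots of a 50x80 field (illustrative sizes only)
rng = np.random.default_rng(1)
snaps = rng.standard_normal((1000, 50 * 80))
E_r, modes, a = snapshot_pod(snaps)
print(E_r[:12].sum())   # cumulative energy of the first 12 modes
```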
§ CORRELATION ANALYSIS
Having characterized the velocity and pressure field, the manuscript will now attempt to delineate the correlation between these two quantities of interest. Correlation between flow quantities are quantified using Pearson's correlation coefficient. Pearson correlation coefficient at two different locations (x_1,x_2,x_3) and (x_1^',x_2^',x_3^') is denoted by:
R_ζχ(x_1,x_1^',x_2,x_2^',x_3,x_3^') = ⟨ζ(x_1,x_2,x_3) χ(x_1^',x_2^',x_3^')⟩ /(√(⟨ζ(x_1,x_2,x_3)^2⟩)×√(⟨χ(x_1^',x_2^',x_3^')^2⟩))
where ζ(x_1,x_2,x_3) and χ(x_1^',x_2^',x_3^') are the fluctuating components of the variables of interest, and ⟨·⟩ denotes time averaging.
The Pearson correlation method is used in pattern recognition
to quantify the similarity between patterns or features in data. For example, in time series data,
the correlation between the values of two time series at different time points can be used to quantify the similarity between the patterns of the time series. However, correlation alone does not establish causation, as correlation cannot yield causal asymmetry and hence cannot separate the cause from the effect <cit.>. As such, in the present section the overall goal is to recognize patterns in the velocity disturbance field that are similar to the ones measured in the time series of pressure signals recorded at the wall or at far-field locations. This helps to identify the velocity disturbance patterns associated with separation noise. The causality is inferred through Amiet's equation (<ref>) and Poisson's equation <cit.>, which relate velocity disturbances to wall-pressure fluctuations.
§.§ Wall and far-field pressure correlation analysis
To identify patterns in measured time series of wall-pressure and far-field acoustic pressure the correlation, R_p'_w,p'_a,
has been calculated. To segregate the separation noise,
both signals
have been band-pass filtered between 80-300 Hz, where separation noise dominates, and 600-2100 Hz, where contributions from separation noise can be ignored (see figure <ref>). The results are shown in figures <ref> (a-b). A negative correlation between the wall-pressure fluctuations p'_w and the far-field acoustic ones p'_a is measured when these signals
have been band-pass filtered between 80-300 Hz. This can be caused by the passage of eddies at these frequencies near the trailing-edge and their diffraction in the form of acoustic pressure at a far-field location (see also figure <ref>). The phase opposition between the near field and the far field is due to the dipole nature of the source term. In contrast, for the band-passed frequencies between 600-2100 Hz, no meaningful correlation is obtained. This already suggests a significant low-frequency contribution of the surface noise sources caused by the largest turbulent coherent structures.
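A sketch of the band-passed, Pearson-normalized cross-correlation used for R_p'_w,p'_a is given below; the filter order and the placeholder signal names are assumptions, while the band edges and zero-phase filtering follow the procedure described in the text.

```python
import numpy as np
from scipy.signal import butter, filtfilt, correlate

def bandpassed_correlation(p_wall, p_far, fs, f_lo, f_hi):
    """Pearson-normalized cross-correlation of two band-passed pressure signals."""
    b, a = butter(4, [f_lo, f_hi], btype="bandpass", fs=fs)
    x = filtfilt(b, a, p_wall)   # zero-phase filtering preserves the relative phase
    y = filtfilt(b, a, p_far)
    x = (x - x.mean()) / x.std()
    y = (y - y.mean()) / y.std()
    r = correlate(y, x, mode="full") / len(x)
    lags = np.arange(-len(x) + 1, len(x)) / fs
    return lags, r

# e.g. correlation in the separation-noise band and in a higher-frequency band
# lags, r_low  = bandpassed_correlation(p_w, p_a, 65536, 80, 300)
# lags, r_high = bandpassed_correlation(p_w, p_a, 65536, 600, 2100)
```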
§.§ Correlation between POD modes and pressure
The temporal signal associated with mode 3 has been correlated with the band-pass filtered wall-pressure and far-field pressure signals. The same frequency bands as in figure <ref>, 80-300 Hz and 600-2100 Hz, have been used. Once again the band-pass filtering
has been achieved using a zero-phase digital filtering, which conserves the phase. Figures <ref> (a) and (b) show the correlation between the third mode (R_E_22,p'_w) and the wall-pressure fluctuations measured by RMP 26 (x/C=0.98). In these plots, T_f corresponds to the time of flight of an acoustic signal emitted at the trailing-edge of the airfoil to reach the far-field location where the noise is measured. The third mode of wall-normal velocity fluctuations (E_22) and the recorded wall-pressure signals p'_w show a meaningful correlation only at the band-pass frequency range of 80-300 Hz (figure <ref> (a)), while the correlation drops to background noise levels for the higher frequency band 600-2100 Hz (figure <ref> (b)). Similarly for the correlation between mode 3 and the far-field acoustic pressure p'_a meaningful results are only obtained for the lower frequency band 80-300 Hz. The only difference is that it takes the near field hydrodynamic event a finite time to reach the far-field location, where the acoustic measurements are achieved. As such, before the cross-correlation was performed, the time series of the acoustic signal was shifted by the time of flight T_f. More importantly a phase opposition is seen in figure <ref> (c)) between mode 3 and the far-field acoustic pressure. This is not surprising as the third mode of the wall-normal velocity disturbances (E_22) and the recorded wall-pressure signals are in phase (see figure <ref> (c)) while the acoustic pressure and wall-pressure field are in phase opposition (figure <ref>).
To conclude, the near-field source terms are amplified in the case when the airfoil is placed at high angles of attack through induced flow separation. The separated shear layer can induce Kelvin-Helmholtz like roller structures,
the imprint of which is registered by the surface pressure probes. This results in an increased noise content, at a frequency that is associated with the wavelength of these roller structures. Having characterized the near-field source terms and their correlation with the far-field acoustics, the diffracted acoustic pressure field around the airfoil is then quantified. Finally, attempts are made to identify the equivalent source images responsible for the separation noise.
§ FAR-FIELD ACOUSTIC PRESSURE ANALYSIS
The far-field acoustic pressure
has been measured around the airfoil mid-chord to compare the influence of angles of attack on the acoustic directivity patterns. This
has been done at several frequencies, and hence at several Helmholtz numbers kc, where k
is the acoustic wavenumber.
The results are shown in figure <ref>. While there
is an overall increase in absolute levels of the measured sound pressure levels, the overall sound directivity pattern is similar between the 8^∘ and 15^∘ angles of attack cases, where the former is known to emit noise through an equivalent dipole at the trailing edge <cit.>. As such, classical dipole noise at the airfoil trailing-edge seems to be the driver of separation noise. In contrast, at higher Mach numbers (M_∞=0.3-0.4) than the ones reported in present study, <cit.> had reported a significant contribution from the quadrupole noise sources.
To further investigate the overall contribution of quadrupole noise generated, due to separated shear layers, the cross-correlation between two far-field microphones located on either side of the airfoil mid-chord was performed. To isolate the influence of separation noise, the far-field noise signals were band-passed filtered between 80 and 1000 Hz. The
comparison at 16 m/s between the two angles of attack, 8^∘ and 15^∘, is shown in figure <ref> (a).
The clear phase opposition decisively demonstrates that the dominant noise source is dipolar in nature. To further reinforce these findings, the OverAll Sound Pressure Level (OASPL) as a function of free-stream velocity U_∞ is shown in figure <ref> (b). Once again, to isolate overall influence of separation noise, the sound pressure levels have been integrated between 80-1000 Hz, where the separation noise dominates. The results clearly show that the OASPL due to noise separation follows the classical compact dipole scaling U^6_∞, which was first proposed by <cit.>.
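A minimal sketch of this OASPL evaluation and of the check of the U_∞^6 compact-dipole scaling is given below; the band integration limits follow the text, while the input spectra are placeholders for the measured far-field PSDs.

```python
import numpy as np

def oaspl(f, S_pp, f_lo=80.0, f_hi=1000.0, p_ref=2e-5):
    """Integrate a far-field pressure PSD [Pa^2/Hz] over a band and return dB."""
    band = (f >= f_lo) & (f <= f_hi)
    p2 = np.trapz(S_pp[band], f[band])
    return 10.0 * np.log10(p2 / p_ref**2)

def velocity_exponent(U_list, oaspl_list):
    """Fit OASPL = 10*n*log10(U) + const and return the exponent n."""
    slope, _ = np.polyfit(np.log10(np.asarray(U_list)), np.asarray(oaspl_list), 1)
    return slope / 10.0

# A compact dipole should give an exponent n close to 6.
```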
Having shown that the separation noise can be represented by an equivalent compact dipole source,
we now attempt to
check whether it can be quantified using the diffraction theory outlined above in equation (<ref>) <cit.>. The success of
Amiet's model and its extension relies on the fact that the response of the airfoil to an incident gust can be predicted using the linearized thin-airfoil theory. Following <cit.>,
the dimensionless radiation ratio Λ is plotted in figure <ref>.
Red dashed lines correspond to the frequency range for which the estimation of the spanwise correlation length was performed. The high-frequency region is limited to 1 kHz because the spanwise coherence for the 15^∘ angle-of-attack case drops drastically below the measurement uncertainty beyond this frequency range. The radiation ratio quantifies the diffraction efficacy of an airfoil trailing edge that is subjected to an unsteady pressure gust. To recall, the radiation ratio is defined as the ratio of the far-field and near-field spectra, normalised by the spanwise correlation length, the far-field observer distance and the airfoil span length, in the following manner:
Λ = S_pp/(Φ_pp l_z)×x^2/L
As argued by <cit.>, a good collapse between various cases should be expected for the same airfoil.
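The evaluation of this ratio from the measured spectra is straightforward; a short sketch is given below, assuming all inputs have been interpolated onto a common frequency grid.

```python
import numpy as np

def radiation_ratio(S_pp, Phi_pp, l_z, x, L):
    """Dimensionless radiation ratio Lambda (see the equation above).

    S_pp   : far-field acoustic PSD
    Phi_pp : trailing-edge wall-pressure PSD
    l_z    : spanwise correlation length
    x      : observer distance, L : airfoil span
    """
    return (S_pp / (Phi_pp * l_z)) * (x**2 / L)

def to_db(Lambda, ref=1.0):
    """Express the radiation ratio in decibels relative to a reference value."""
    return 10.0 * np.log10(Lambda / ref)
```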
Figure <ref> compares the present measurements with the theoretical prediction of Amiet's model (solid lines) and the previous measurements at ECL (diamond and square symbols) as reported in <cit.>. The two theoretical curves stress the effect of directivity on Λ, and the actual microphone position in the present experiment, 258^∘ (on the airfoil pressure side to provide laser access on the suction side), reproduces the experimental trend better than the ECL measurements at about 270^∘ (or equivalently 90^∘ on the airfoil suction side). The 5 dB spread in the experimental data is consistent with the data of figure 16 (a) in <cit.>, and can be attributed to both the saturation of the Electret microphones at low frequencies shown in figure <ref> and a less accurate calibration method in 2005. Yet, in both data sets,
while a good collapse between the two angles of attack 15^∘ and 8^∘
is
achieved between 500 and 2000 Hz, the collapse is relatively poor at lower frequencies (80-500 Hz range), for the high incidence where the separation noise dominates. On the other hand, the newer 8^∘ case shows a good match with Amiet's prediction.
§ DISCUSSION
The low-frequency content of the airfoil self-noise increases as the angle of attack is increased from 8^∘ to 15^∘. This noise increase is also accompanied by an increase in the amplitude of the wall-pressure and velocity disturbances.
At frequencies where the separation noise is dominant, the wall-pressure field shows an increase in amplitude, in spanwise extent, and in the velocity at which pressure gusts convect past the trailing edge.
The genesis of the increased disturbances can be linked to the late transition of boundary layer. In particular, as the angle of attack is increased, it was found that the LSB covers at least 40% of the airfoil chord consistently with previous LES results <cit.>, which leads to a delayed flow transition and re-attachment, somewhere between 40 and 65% of the chord. As such, the magnitude of flow disturbances represented by the dimensionless Reynolds stress components u_1 u_1/U^2_∞, u_2 u_2/U^2_∞, and -u_1 u_2/U^2_∞, increases substantially compared to the airfoil at 8^∘ attack, especially close to the airfoil trailing edge.
While the time-averaged flow is found to be attached, large-scale flow distortions in form of rollers, that are reminiscent of KH-type instability, are present. These roller structures are similar to the ones that were previously reported on the CD airfoil numerically by <cit.> at the same incidence and experimentally by <cit.> at a lower angle-of-attack. As the wall-normal spatial extent of these structures can be substantially larger than the mean boundary layer thickness, they have access to higher momentum flow. This explains why an increase in low-frequency convection velocity was observed despite a strong adverse pressure gradient
in the trailing-edge region.
The velocity fluctuations, -u_1 u_2, increase steadily before eventually saturating close to the trailing-edge region of the airfoil. The peak values of -u_1 u_2 are shown to scale the wall-pressure spectra for the two angles of attack. As such, this clearly demonstrates that the increase in the magnitude of the wall-pressure statistics can be linked to an increase in -u_1 u_2. The
amplification of flow disturbances, such as -u_1 u_2, is known to
yield KH-type instability and vortex pairing in the shear layer, producing
the observed roller structures <cit.>. In summary, these rollers are present due to the late amplification (x>0.4C) of the LSB instability, and its subsequent roll-up, which ensures
that large eddies reach the trailing edge of the airfoil.
<cit.> showed that these rollers have large coherence in the spanwise direction. Furthermore, the mode associated with roller structures correlates with the wall-pressure fluctuations at frequencies that correspond to the maximum levels of spanwise coherence. In addition, the spanwise coherence of wall-pressure and HWA spectrogram peak at the same frequency. In the absence of any alternative frequency-centred activities in the flow, it may be concluded that these rollers are responsible for an increase in the spanwise correlation length.
Finally, <cit.> had shown that flapping of the shear layer yields an increase in low-frequency noise. While the flapping of the LSB may result in flapping of the shear layer, no evidence for its contribution to separation noise is found in the absence of rollers. This is because no modal structures associated exclusively with shear-layer flapping were identified. The noise mechanism due to shear-layer flapping is thus not universally present for an airfoil at near-stall conditions. In the present case, the increase in flow disturbances and the associated rolling-up of the shear layer are the only dominant flow mechanisms that contribute to separation noise.
Therefore, the question naturally arises: is the increase in the magnitude of -u_1 u_2 sufficient to nullify
Amiet's diffraction theory that depends on the thin-airfoil linearized theory? To answer this question, the radiation ratio is calculated for cases with variable flow incidence at the same Reynolds number based on chord. The results confirm that the diffraction efficacy of an airfoil subjected to higher angles of attack is substantially attenuated at frequencies associated with separation noise. This is because the overall increase in sound pressure level is comparatively small compared to the rise in spanwise correlation l_z. In particular, the energy conversion from near-field pressure to far-field pressure should be more effective as l_z increases; however, this is not achieved. Furthermore, the roller structures imply that the unsteady Kutta condition may not be valid, as its validity hinges on the flow leaving the airfoil trailing edge smoothly. As such, the diffraction efficacy for an airfoil trailing-edge that is subjected to an unsteady pressure gust due to flow separation is substantially attenuated. Nevertheless, the separation noise can be fully quantified using a compact dipole. The far-field microphones located on the either side of the airfoil confirm this dipolar behaviour along with U_∞^6 scaling of the OASPL measured in the far-field.
Furthermore, the far-field acoustic pressure field directivity pattern is similar for both 8^∘ and 15^∘ angles of attack, which reinforces the dipolar directivity pattern. These observations partly explain why <cit.> obtained a more favourable estimate of acoustic noise while using the <cit.> analogy compared to diffraction theory <cit.> at frequencies where the separation noise dominates.
§ CONCLUSIONS
The present paper is a detailed aeroacoustic investigation of a CD airfoil at near stall condition. This is achieved by placing the CD airfoil at high angles of attack in an open jet anechoic wind tunnel. Two sets of experiments are performed at Re_c ≃ 140,000 and Re_c ≃ 245,000 based on airfoil chord for an airfoil placed at 15^∘ angle of attack. For the airfoil at Re_c ≃ 140,000, synchronized PIV, RMP and far-field microphone measurements were performed.
The present study is driven by two fundamental research questions.
1) What mechanism is responsible for separation noise for an airfoil at near-stall conditions, and is it universal?
2) Is the noise due to flow separation generated by a dipole for an airfoil close to stall? If so, can it be quantified by Amiet's diffraction theory?
The present study shows that when the CD airfoil is placed at a higher angle of attack than 8^∘, such as 15^∘ in the present study, a strong amplification of the flow disturbances, up to an order of magnitude, is seen in the trailing-edge region. In fact, the noise due to flow separation can be linked to the increase in flow disturbances, like -u_1u_2, which scale up the wall-pressure fluctuations.
This increased Reynolds stress triggers the roll-up of the separated shear layer.
These rollers are linked to the flow transition triggered by the Kelvin-Helmholtz instability.
They are also linked to an increase in spanwise coherence of the wall-pressure fluctuations, as they convect past the trailing-edge. The modal decomposition obtained by POD shows that the modes associated with these roller structures correlate with near and far-field pressure. This correlation is observed only at frequencies where the separation noise dominates, i.e. frequency at which the E_11 peaks. As such, rollers and associated Kelvin-Helmholtz type flow instability play a central role in the increase in noise due to flow separation. Lastly, in the present study, no contributions coming exclusively from the flapping of the shear layer were observed.
The present study conclusively shows that separation noise is dipolar in nature, therefore, quadrupole contribution for low-speed airfoils at near-stall conditions can be neglected, at least for flows up to a Mach number of about 0.1.
Yet the increase in flow disturbances measured close to the trailing edge of the airfoil implies that the assumption of small-amplitude disturbances is no longer valid, which is the central premise of the thin-airfoil linearized theory used to estimate the response of the airfoil to an incoming pressure gust. Furthermore, the passage of large roller structures past the trailing edge may invalidate the unsteady Kutta condition. Yet outside the frequency range at which flow separation operates, Amiet's theory should be able to predict the far-field noise even at high angles of attack, as previously shown by <cit.>.
§ ACKNOWLEDGMENTS
The authors would like to acknowledge the help of Sidharth Krishnan Kalyani and Yann Pasco during the PIV measurements. The authors are also thankful for the computational time on the supercomputer Graham, managed by the Digital Research Alliance of Canada.
§ FUNDING
This work was supported by the Canadian NSERC Discovery grant (no.RGPIN-2014-04111).
§ DECLARATION OF INTERESTS
The authors report no conflict of interest.
§ DATA AVAILABILITY STATEMENT
Raw PIV data were processed on the Digital Research Alliance of Canada's HPC center. Derived data supporting the findings of this study are available from the first author upon reasonable request.
§ APPENDIX
In this appendix, the influence of the total length of RMP signals on wall-pressure statistics is studied, particularly at low frequencies where this parameter is known to define the lowest achievable frequency in power spectral densities.
Figure <ref> first shows that this total length
is an important metric when it is used to estimate the convection velocity. This partly explains why previous studies <cit.> have reported slightly different values of U_c and l_z.
However, the uncertainty in the estimation of U_c/U_∞ is less than 10%, which yields a marginal uncertainty in
the radiation ratio, Λ, when plotted on a logarithmic scale. In turn, this has no significant impact on the efficacy of diffraction theory for separation noise.
In order to understand the impact of the signal length on the spanwise correlation length (l_z), the spanwise coherence (γ^2) is plotted in figure <ref> between two spanwise probes 25 and 27. The results are also compared with the data reported by <cit.>.
Figure <ref> shows that, at low frequencies, the oscillations, which represent the uncertainty in the estimate of γ^2, are higher for cases where the signal length is truncated below 30 seconds. Consequently, <cit.>, who estimated the spanwise coherence (γ^2) with a signal length of 15 seconds, have a higher uncertainty in the estimate of γ^2. Furthermore, <cit.> took one-tenth of the number of points to estimate the PSD compared to the present case. As such, the low-frequency part of γ^2 shows an erroneous double peak in their results (see figure 4(a) of <cit.>), which is absent in the present case as well as in the one reported earlier by <cit.>.
| http://arxiv.org/abs/2307.00683v1 | 20230702233255 | Rapid mixing of global Markov chains via spectral independence: the unbounded degree case | ["Antonio Blanca", "Xusheng Zhang"] | math.PR | ["math.PR", "cs.DM", "cs.DS", "math-ph", "math.MP"] |
Rapid mixing of global Markov chains via spectral independence: the unbounded degree case
Antonio Blanca
Pennsylvania State University.
Email: [email protected].
Research supported in part by NSF grant CCF-2143762.
Xusheng Zhang
Pennsylvania State University.
Email: [email protected].
Research supported in part by NSF grant CCF-2143762.
August 1, 2023
We consider spin systems on general n-vertex graphs of unbounded degree and explore the effects of spectral independence on the rate of convergence to equilibrium of global Markov chains. Spectral independence is a novel way of quantifying the decay of correlations in spin system models, which has significantly advanced the study of Markov chains for spin systems.
We prove that whenever spectral independence holds, the popular Swendsen–Wang dynamics for the q-state ferromagnetic Potts model
on graphs of maximum degree Δ, where Δ is allowed to grow with n,
converges in O((Δlog n)^c) steps where c > 0 is a constant independent of Δ and n.
We also show a similar mixing time bound for the block dynamics of general spin systems, again assuming that spectral independence holds.
Finally, for monotone spin systems such as the Ising model and the hardcore model on bipartite graphs, we show that spectral independence implies that the mixing time of the systematic scan dynamics is O(Δ^c log n) for a constant c>0 independent of Δ and n.
Systematic scan dynamics are widely popular but are notoriously difficult to analyze.
Our result implies optimal O(log n) mixing time bounds for any systematic scan dynamics of the ferromagnetic Ising model on general graphs up to the tree uniqueness threshold.
Our main technical contribution is an improved factorization of the entropy functional: this is the common starting point for all our proofs.
Specifically, we establish the so-called k-partite factorization of entropy
with a constant that depends polynomially on the maximum degree of the graph.
§ INTRODUCTION
Spectral independence is a powerful new approach for quantifying the decay of correlations in spin system models. Initially introduced in <cit.>, this condition has revolutionized the study of Markov chains
for spin systems. In a series of important and recent contributions, spectral independence has been shown to be instrumental in determining the convergence rate of the Glauber dynamics, the simple single-site update Markov chain that updates the spin at a randomly chosen vertex in each step.
The first efforts in this series (see <cit.>) showed that spectral independence implies optimal O(n log n) mixing of the Glauber dynamics on n-vertex graphs of bounded degree for general spin systems. The unbounded degree case was studied in <cit.>, while <cit.> explored the effects of this condition on the speed of convergence of global Markov chains (i.e., Markov chains that update the spins of a large number of vertices in each step) in the bounded degree setting.
Research exploring the applications of spectral independence is ongoing.
We contribute to this line of work by investigating how spectral independence affects the speed of convergence of global Markov chains for general spin systems on graphs of unbounded degree.
A spin system is defined on a graph G=(V,E).
There is a set 𝒮 = {1,…,q} of spins or colors, and configurations are assignments of spin values from 𝒮
to each vertex of G.
The probability of a configuration σ∈𝒮^V is given by the Gibbs
distribution:
μ(σ) =e^-H(σ)/Z,
where the normalizing factor Z is known as the partition function, and the Hamiltonian H: 𝒮^V →
contains terms that depend on the spin values at each vertex
(a “vertex potential” or “external field”) and
at each pair of adjacent vertices (an “edge potential”);
see Definition <ref>.
A widely studied spin system, and one that we will pay close attention to in this paper, is the ferromagnetic Potts model, where for a real parameter β > 0, associated with inverse temperature in physical applications, the Hamiltonian is given by:
H(σ) = -β∑_{u,v}∈ E 1(σ_u = σ_v).
The classical ferromagnetic Ising model corresponds to the q=2 case.
(In this variant of the Potts model,
the Hamiltonian only includes edge potentials, and
there is no external field.)
We shall use μ_Ising and μ_Potts for the Gibbs distributions corresponding to the Ising and Potts models. Other well-known, well-studied spin systems include uniform proper colorings and the hardcore model.
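A tiny sketch of the Potts Hamiltonian and of the corresponding Gibbs distribution, computed by brute force, is given below; this is purely illustrative and only feasible for very small graphs, where q^n configurations can be enumerated.

```python
import math
from itertools import product

def potts_hamiltonian(edges, sigma, beta):
    """H(sigma) = -beta * (number of monochromatic edges)."""
    return -beta * sum(1 for u, v in edges if sigma[u] == sigma[v])

def gibbs_distribution(n, edges, q, beta):
    """Exact Gibbs distribution by brute force (only feasible for tiny n)."""
    weights = {}
    for sigma in product(range(q), repeat=n):
        weights[sigma] = math.exp(-potts_hamiltonian(edges, sigma, beta))
    Z = sum(weights.values())                      # partition function
    return {s: w / Z for s, w in weights.items()}

# Ferromagnetic Potts on a 4-cycle with q = 3 and beta = 0.5 (illustrative)
mu = gibbs_distribution(4, [(0, 1), (1, 2), (2, 3), (3, 0)], q=3, beta=0.5)
```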
Spin systems provide a robust framework for studying interacting systems of simple elements and have a wide range of applications in computer science, statistical physics, and other fields. In such applications, generating samples from the Gibbs distribution (<ref>) is a fundamental computational task and one in which Markov chain-based algorithms have been quite successful. A long line of work dating back to the 1980s relates the speed of convergence of Markov chains to various forms of decay of correlations in the model. Spectral independence, defined next, captures the decay of correlations in a novel way.
Roughly speaking, spectral independence holds when the spectral norm of a “pairwise” influence matrix is bounded.
To formally define it, let us begin by introducing some notations.
Let Ω⊆𝒮^V be the support of μ: the set of configurations σ such that μ(σ) > 0.
A pinning τ on a subset of vertices Λ⊆ V is a fixed partial configuration on Λ;
i.e., a spin assignment from 𝒮^Λ to the vertices of Λ.
For a pinning τ on Λ⊆ V and U ⊆ V∖Λ, we let
Ω_U^τ = {σ_U ∈𝒮^U: μ(σ_U |σ_Λ = τ ) > 0}
be the set of partial configurations on U that are consistent with the pinning τ.
By convention, we write Ω^τ_u = Ω^τ_{u} if u is a single vertex.
Let
𝒫^τ := {(u, s): u∉Λ, s∈Ω^τ_u }
denote the set of consistent vertex-spin pairs in Ω^τ_V∖Λ under μ.
For each Λ⊆ V and pinning τ on Λ,
we define the signed pairwise influence matrix Ψ^τ_μ∈ℝ^𝒫^τ×𝒫^τ
to be the matrix with entries:
Ψ^τ_μ((u,a), (v,b)) = μ(σ_v = b |σ_u = a, σ_Λ = τ)
- μ(σ_v = b |σ_Λ = τ)
for u ≠ v, and Ψ^τ_μ((u,a), (u,b)) = 0 otherwise.
A distribution μ satisfies η-spectral independence if
for every subset of vertices Λ⊆ V and every pinning τ∈Ω_Λ,
the largest eigenvalue of
the signed pairwise influence matrix Ψ^τ_μ, denoted λ_1(Ψ^τ_μ), satisfies
λ_1(Ψ^τ_μ) ≤η.
There are several definitions of spectral independence in the literature;
we use the one from <cit.>.
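To make the definition concrete, the following brute-force Python sketch (our own illustrative code; all names are ours, and exhaustive enumeration is only feasible for a handful of vertices) builds the signed pairwise influence matrix of a small ferromagnetic Ising model under the empty pinning and reports its largest eigenvalue.

```python
import itertools
import numpy as np

def ising_gibbs(edges, n, beta):
    """Gibbs distribution of the ferromagnetic Ising model (q = 2) on n vertices,
    with this paper's edge potential beta * 1(sigma_u = sigma_v); returns a dict
    mapping each +/-1 configuration (a tuple) to its probability."""
    weights = {}
    for sigma in itertools.product([-1, +1], repeat=n):
        agree = sum(1 for (u, v) in edges if sigma[u] == sigma[v])
        weights[sigma] = np.exp(beta * agree)
    Z = sum(weights.values())
    return {s: w / Z for s, w in weights.items()}

def influence_matrix(mu, n, spins=(-1, +1)):
    """Signed pairwise influence matrix for the empty pinning: entry
    ((u,a),(v,b)) equals mu(sigma_v = b | sigma_u = a) - mu(sigma_v = b),
    and entries with u = v are set to zero."""
    pairs = [(u, a) for u in range(n) for a in spins]
    Psi = np.zeros((len(pairs), len(pairs)))
    for i, (u, a) in enumerate(pairs):
        cond = {s: p for s, p in mu.items() if s[u] == a}
        Zc = sum(cond.values())
        for j, (v, b) in enumerate(pairs):
            if u == v:
                continue
            p_cond = sum(p for s, p in cond.items() if s[v] == b) / Zc
            p_marg = sum(p for s, p in mu.items() if s[v] == b)
            Psi[i, j] = p_cond - p_marg
    return Psi

# Example: a 4-cycle at moderate temperature; the largest eigenvalue is the
# smallest eta for which the empty-pinning condition holds.
mu = ising_gibbs(edges=[(0, 1), (1, 2), (2, 3), (3, 0)], n=4, beta=0.3)
Psi = influence_matrix(mu, n=4)
print(float(np.max(np.linalg.eigvals(Psi).real)))
```

Certifying η-spectral independence in the sense of the definition requires repeating the computation under every pinning τ, not only the empty one.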
We show that spectral independence implies new upper bounds on the mixing time of several well-studied global Markov chains in the case where the maximum degree Δ of the underlying graph G=(V,E) is unbounded; i.e., Δ→∞ with n.
The mixing time is defined as the number of steps required for a Markov chain to reach a distribution close in total variation distance to
its stationary distribution, assuming a worst possible starting state; a formal definition is given in Section <ref>.
The global Markov chains we consider include the Swendsen–Wang dynamics for the ferromagnetic q-state Potts, the systematic scan dynamics for monotone spin systems, and the block dynamics for general spin systems. These three dynamics are among the most popular and well-studied global Markov chains and present certain advantages (e.g., faster convergence and amenability to parallelization) to the Glauber dynamics.
§.§ The Swendsen–Wang dynamics
A canonical example of a global Markov chain is the Swendsen–Wang (SW) dynamics for the ferromagnetic q-state Potts model.
The SW dynamics transitions from a configuration σ_t to σ_t+1 by:
* For each edge e = {u,v}∈ E, if σ_t(u) = σ_t(v), independently include e in the set A_t with probability p = 1 - e^-β;
* Then, independently for each connected component 𝒞 in (V,A_t), draw a spin s ∈{1, … ,q} uniformly at random and set σ_t+1(v)= s for all v∈𝒞.
The SW dynamics is ergodic and reversible with respect to μ_Potts, and thus converges to it.
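As a concrete illustration of the two-step update just described, here is a minimal Python sketch (our own naming conventions; not an optimized implementation).

```python
import math
import random

def sw_step(spins, edges, q, beta):
    """One Swendsen-Wang update for the q-state ferromagnetic Potts model:
    (i) keep each currently monochromatic edge independently with probability
    p = 1 - exp(-beta); (ii) assign a uniform random spin from {0,...,q-1} to
    every connected component of the kept edges.  `spins` maps vertex -> spin."""
    p = 1.0 - math.exp(-beta)
    kept = [(u, v) for (u, v) in edges
            if spins[u] == spins[v] and random.random() < p]
    parent = {v: v for v in spins}          # union-find over the kept edges
    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x
    for u, v in kept:
        parent[find(u)] = find(v)
    component_spin = {}
    for v in spins:
        root = find(v)
        if root not in component_spin:
            component_spin[root] = random.randrange(q)
        spins[v] = component_spin[root]
    return spins
```

Iterating sw_step from an arbitrary initial assignment simulates the chain studied throughout this section.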
This Markov chain originated in the late 1980s <cit.> as an alternative to the Glauber dynamics, which mixes exponentially slowly at low temperatures (large β).
The SW dynamics bypasses some of the key barriers that cause the slowdown of the Glauber dynamics at low temperatures.
For the Ising model (q=2), for instance, it was recently shown to converge in poly(n) steps on any n-vertex graph for any value of β > 0 <cit.>. (The conjectured mixing time is Θ(n^1/4), but we seem to be far from proving such a conjecture.)
For q ≥ 3, on the other hand, the SW dynamics can converge exponentially slowly at certain “intermediate” temperatures regimes corresponding to first-order phase transitions; see <cit.>.
Recently, η-spectral independence (with η = O(1)) was shown to imply that the mixing time of the SW dynamics is O(log n) on graphs of maximum degree Δ = O(1), i.e., bounded degree graphs <cit.>. This mixing time bound is optimal since the SW dynamics requires Ω(log n) steps to mix in some cases where η and Δ are both O(1) <cit.>. However, it does not extend to the unbounded degree setting since the constant factor hidden by the big-O notation depends exponentially on the maximum degree Δ; this is the case even when η = O(1) and βΔ = O(1).
Our first result provides a mixing time bound that depends only polynomially on Δ.
Let q≥ 2, β > 0, η > 0 and Δ≥ 3.
Suppose G= (V,E) is an n-vertex graph of maximum degree Δ.
Let μ_Potts be the Gibbs distribution of the q-state ferromagnetic Potts model on G with parameter β.
If μ_Potts is η-spectrally independent with η =O(1) and βΔ = O(1), then
there exists a constant c > 0 such that
the mixing time of the SW dynamics satisfies
T_mix(P_SW) = O((Δlog n)^c).
The constant c has a near linear dependency on η and βΔ; a more precise statement of Theorem <ref> with a precise expression for c is given in Theorem <ref>.
Despite the expectation that the SW dynamics mixes in O(log n) steps in weakly correlated systems (i.e., when βΔ is small), proving sub-linear upper bounds on its mixing time has been difficult.
Recently,
various forms of decay of correlation (e.g., strong spatial mixing, entropy mixing, and spectral independence)
have been used to obtain O(log n) bounds
for the mixing time of the SW dynamics
on cubes of the integer lattice graph ℤ^d, regular trees, and general graphs of bounded degree (see <cit.>).
However, for graphs of large degree, i.e., with Δ→∞ with n, the only sub-linear mixing time bounds known either hold for the very distinctive mean-field model, where G is the complete graph <cit.>, or hold for very small values of β; i.e., β≲ 1/(3Δ) <cit.>.
Our results provide new sub-linear mixing time bounds for graph families of sub-linear maximum degree, provided η =O(1) and βΔ = O(1). These last two conditions go hand-in-hand: in all known cases where η = O(1), we also have βΔ = O(1).
We also note that our proof can be slightly modified to obtain
the same upper bound on the mixing time under the weaker assumption η=O(log n).
On graphs of degree at most Δ, η-spectral independence is expected to hold with η = O(1) whenever β < β_u(q,Δ), where β_u(q,Δ) is the threshold for the uniqueness/non-uniqueness phase transition on Δ-regular trees.
This has been confirmed for the Ising model (q=2) but not for the Potts model. Specifically, for the ferromagnetic Ising model, we have β_u(2,Δ) = ln(Δ/(Δ-2)), and when β≤ (1-δ)β_u(2,Δ) for some δ∈ (0,1), μ_Ising is η-spectrally independent with η = O(1/δ); see <cit.>.
In contrast, for the ferromagnetic Potts model with q ≥ 3,
there is no closed-form expression for β_u(q,Δ) (it is defined as the threshold value where an equation starts to have a double root), and for graphs of unbounded degree η-spectral independence is only known to hold when β≤2(1-δ)/Δ.
As a result, we obtain the following corollary of Theorem <ref>.
Let δ∈(0,1), Δ≥ 3. Suppose that either q = 2 and β < (1-δ)β_u(2,Δ), or
q ≥ 3 and β≤2(1-δ)/Δ.
Then,
there exists a constant c = c(δ) > 0 such that
the mixing time of the SW dynamics
for the q-state ferromagnetic Potts model
on any n-vertex graph of maximum degree Δ satisfies
T_mix(P_SW) = O((Δlog n)^c).
We mention that other conditions known to imply spectral independence (e.g., those in <cit.>) are not well-suited for the unbounded degree setting since under those conditions, the best known bound for η depends polynomially on Δ.
For another application of Theorem <ref>, see Section <ref> for a bound on the mixing of the SW dynamics on random graphs.
We comment briefly on our proof approach for Theorem <ref>.
A mixing time bound for the SW dynamics can be deduced from the so-called edge-spin factorization of the entropy functional introduced in <cit.>.
It was noted there that this factorization, in turn, follows
from a different factorization of entropy known as
k-partite factorization, or KPF. Spectral independence is known to imply KPF but with a loss of a multiplicative constant that depends exponentially on the maximum degree of the graph.
Our proof of Theorem <ref> follows this existing framework,
but pays closer attention to establishing KPF with a constant that has a better dependence on the model parameters.
This is done through a multi-scale analysis of the entropy functional; in each scale, we apply spectral independence to achieve a tighter KPF condition.
Our new results for KPF hold not only for the Potts model
but also for a general class of spin systems, and we use them to establish new mixing time bounds for the systematic scan and block dynamics.
§.§ The systematic scan dynamics
Our next contribution pertains to the systematic scan dynamics,
a family of Markov chains closely related to the Glauber dynamics in the sense that updates occur at single vertices sequentially.
The key difference is that the vertex updates happen according to a predetermined ordering ϕ of the vertices rather than at randomly chosen vertices.
These dynamics offer practical advantages since there is no need to randomly select vertices at each step, thereby reducing computation time.
There is a folklore belief that the mixing time of the systematic scan dynamics (properly scaled) is closely related to that of the Glauber dynamics. However, analyzing this type of dynamics has proven very challenging (see, e.g., <cit.>),
and the best general condition under which the systematic scan dynamics is known to be optimally mixing is a Dobrushin-type condition due to Dyer, Goldberg, and Jerrum <cit.>. The new developments on Markov chain mixing stemming from spectral independence have not yet provided new results for this dynamics, even for the bounded degree case where much progress has already been made. We show that spectral independence implies optimal mixing of the systematic scan dynamics for monotone spin systems with bounded marginals; we define both of these notions next.
In a monotone system, there is a linear ordering of the spins at each vertex, which
induces a partial order ≼_q over the state space.
A spin system is monotone with respect to the
partial order ≼_q if for every Λ⊆ V and every pair of pinnings
τ_1 ≽_q τ_2 on V ∖Λ, the conditional distribution μ(·|σ_Λ = τ_1) stochastically dominates μ(·|σ_Λ = τ_2).
Canonical examples of monotone spin systems include the ferromagnetic Ising model and the hardcore model on bipartite graphs. As in earlier work (see <cit.>), our bounds on the mixing time will depend on a lower bound on the marginal probability of any vertex-spin pair. This is formalized as follows.
The distribution μ is said to be b-marginally bounded
if for every Λ⊆ V and pinning τ∈Ω_Λ,
and each (v,s)∈𝒫^τ,
we have
μ(σ_v = s |σ_Λ = τ) ≥ b.
Before stating our result for the systematic scan dynamics of b-marginally bounded monotone spin systems, we note that
this Markov chain updates in a single step each vertex once in the order prescribed by ϕ.
Under a minimal assumption on the spin system (the same one required to ensure the ergodicity of the Glauber dynamics),
the systematic scan dynamics is ergodic. Specifically, when the spin system is totally-connected (see Definition <ref>), the systematic scan dynamics is ergodic.
Moreover,
the systematic scan dynamics is not necessarily reversible with respect to μ, so, as in earlier works, we work
with the symmetrized version of the dynamics in which, in each step, the vertices are updated according to ϕ first, and subsequently in the reverse order of ϕ. The resulting dynamics, which we denote by P_ϕ, is reversible with respect to μ. Our main result for the systematic scan dynamics is the following.
Let b > 0, η > 0, and Δ≥ 3.
Suppose G= (V,E) is an n-vertex graph of maximum degree Δ.
Let μ be the distribution of a totally-connected monotone spin system on G.
If μ is η-spectrally independent and b-marginally bounded,
then for any ordering ϕ,
T_mix(P_ϕ) = (e^2 Δ/b)^9+ 4⌈2η/b⌉· O(log n).
The bound in this theorem is tight: for a particular ordering ϕ, we prove a Ω(log n) mixing time lower bound that applies to settings where Δ, b and η are all Θ(1); see Lemma <ref>.
We present next several interesting consequences of Theorem <ref>. First,
we obtain the following corollary using the known results about spectral independence for the ferromagnetic Ising model.
Let δ∈ (0,1) ,Δ≥ 3 and 0 < β < (1-δ)β_u(2,Δ).
Suppose G= (V,E) is an n-vertex graph of maximum degree Δ.
For any ordering ϕ of the vertices of G,
the mixing time of P_ϕ for the Ising model on G with parameter β
satisfies T_mix(P_ϕ) = O(log n).
The constant hidden by the big-O notation is an absolute constant that depends only on the constant δ, even when Δ depends on n.
This result, compared to the earlier conditions in <cit.>, extends the parameter regime where the O(log n) mixing time bound applies; in fact, the parameter regime in Corollary <ref> is tight, as the systematic scan dynamics undergoes an exponential slowdown when β > β_u(2,Δ) <cit.>.
We derive analogous results for the hardcore model on bipartite graphs; see Section <ref>.
Our next application concerns the specific but relevant case
where the underlying graph is an n-vertex cube of the integer lattice graph ℤ^d.
In this context,
it was proved in <cit.> that all systematic scan dynamics converge in O(log n (loglog n)^2) steps
whenever a well-known condition known as
strong spatial mixing (SSM) holds.
A pertinent open question is whether SSM implies spectral independence. In fact,
spectral independence is often proved
by adapting earlier arguments for establishing SSM (see, e.g., <cit.>).
Recently, it was proved in <cit.> that SSM on trees implies spectral independence on large-girth graphs.
We show that for general spin systems on ℤ^d, SSM
implies η-spectral independence with η = O(1).
For a spin system on a d-dimensional cube V ⊆ℤ^d, SSM implies η-spectral independence, where η=O(1).
The formal definition of SSM is given later in Section <ref>. This result does not assume monotonicity for the spin system and could be of independent interest.
An interesting consequence of this lemma, when combined with Theorem <ref> is the following.
Let d≥ 2. For a b-marginally bounded monotone spin system on a d-dimensional cube V⊆ℤ^d, SSM implies that
the mixing time of any systematic scan P_ϕ is O(log n).
For the ferromagnetic Ising model on ℤ^2,
SSM is known to hold for all β < β_c(2) = ln(1+√(2)) (see <cit.>), so we deduce that when β < β_c(2), the mixing time of any systematic scan P_ϕ on an n-vertex square box of ℤ^2 is O(log n); note that β_c(2) > β_u(2,2d),
the corresponding tree uniqueness threshold.
We comment briefly on the techniques used to establish our results for the systematic scan dynamics.
Our starting point is again the k-partite factorization of entropy (KPF). Our improved bounds for KPF imply that a global Markov chain that updates a random independent set of vertices in each step is rapidly mixing.
We then use the censoring technique from <cit.> to relate the mixing time of this Markov chain to that of the systematic scan dynamics.
To establish Lemma <ref>,
we use SSM to construct a contractive coupling for a particular
Markov chain.
Our Markov chain is similar to the one from <cit.>, but modified to update rectangles instead of balls, and thus match the variant of SSM that holds up to the critical threshold for the Ising model on ℤ^2.
This contractive coupling is then used to establish spectral independence using the machinery from <cit.>.
§.§ The block dynamics
Our final result concerns a family of Markov chains known as the block dynamics. They are a natural generalization of the Glauber dynamics where a random subset of vertices (instead of a random vertex) is updated in each step. More precisely,
let ℬ := {B_1, …, B_K} be a collection of subsets of vertices (called blocks) such that V=∪_i=1^K B_i.
Let α be a distribution over ℬ.
The (heat-bath) block dynamics with respect to (ℬ, α)
is the Markov chain that, in each step,
given a spin configuration σ_t,
selects B_i ∈ℬ
according to the distribution α and updates the configuration on B_i
with a sample from μ(·|σ_t(V∖ B_i)); that is, from the conditional distribution
on B_i given the spins of σ_t in V∖ B_i.
We denote this Markov chain (and its transition matrix) by P_ℬ,α.
When the B_i's are each single vertices, and α is a uniform distribution over the blocks in ℬ, we obtain the Glauber dynamics.
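To make the update rule concrete, the following Python sketch (our own illustrative code) performs one heat-bath block update for the ferromagnetic Potts model, sampling the conditional distribution on the chosen block by brute-force enumeration, which is only sensible for small blocks.

```python
import itertools
import math
import random

def block_dynamics_step(spins, blocks, alpha, edges, q, beta):
    """One heat-bath block-dynamics step for the ferromagnetic Potts model:
    pick a block B_i with probability alpha[i], then resample the spins on B_i
    from the exact conditional distribution given the spins outside B_i.
    The conditional is computed by brute force over q^|B_i| assignments,
    so this is only sensible for small blocks."""
    i = random.choices(range(len(blocks)), weights=alpha)[0]
    B = list(blocks[i])
    in_B = set(B)
    assignments, weights = [], []
    for tau in itertools.product(range(q), repeat=len(B)):
        trial = dict(spins)
        trial.update(zip(B, tau))
        # only edges with an endpoint in B affect the conditional weight
        agree = sum(1 for (u, v) in edges
                    if (u in in_B or v in in_B) and trial[u] == trial[v])
        assignments.append(tau)
        weights.append(math.exp(beta * agree))
    tau = random.choices(assignments, weights=weights)[0]
    spins.update(zip(B, tau))
    return spins
```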
Our result for the mixing time of the block dynamics is the following.
Let b>0, η>0 and Δ≥ 3.
Suppose G= (V,E) is an n-vertex graph of maximum degree Δ.
Let μ be a Gibbs distribution of a totally-connected spin system on G.
Let ℬ := {B_1, …, B_K} be any collection of blocks such that V=∪_i=1^K B_i, and let α be a distribution over ℬ.
If μ is η-spectrally independent and b-marginally bounded,
then there exists a constant C > 0 such that
the mixing time of block dynamics P_ℬ,α
satisfies:
T_mix(P_ℬ,α) = O( α_min^-1·(C Δlog n loglog n/b^7)^2+⌈2η/b⌉),
where α_min := min_v∈ V∑_B∈ℬ: v∈ Bα_B is the minimum probability that the selected block contains a given vertex.
See Theorem <ref> for a more precise statement.
Previous results for the block dynamics only apply to the bounded degree case <cit.>, so Theorem <ref> provides the first bounds
for its mixing time in the unbounded degree setting.
§ MIXING TIMES AND MODIFIED LOG-SOBOLEV INEQUALITIES
Let P be an irreducible and aperiodic (i.e., ergodic) Markov chain
with state space Ω and stationary distribution μ. Let us assume that
P is reversible with respect to μ,
and let
d(t):=max_x∈Ω‖P^t(x,·) - μ‖_TV = max_x∈Ωmax_A⊆Ω |P^t(x,A)-μ(A)|,
where P^t(x,·) denotes the distribution of the chain at time t assuming x ∈Ω as the starting state; ‖·‖_TV denotes the total variation distance.
Note that with a slight abuse of notation, we use P for both the Markov chain and its transition matrix.
For ϵ>0,
let
T_mix(P, ϵ) := min{t>0: d(t)≤ϵ},
and
the mixing time of P is defined as T_mix(P) = T_mix(P, 1/4).
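For explicitly given small chains these quantities can be evaluated directly; the Python sketch below (our own illustrative code, with the transition matrix and stationary distribution assumed to be available as dense arrays) powers the transition matrix until the worst-case total variation distance drops below ϵ.

```python
import numpy as np

def tv_mixing_time(P, mu, eps=0.25, t_max=100_000):
    """Smallest t with d(t) = max_x ||P^t(x,.) - mu||_TV <= eps, computed by
    explicitly powering the transition matrix (feasible only for small state
    spaces).  P is row-stochastic; mu is its stationary distribution."""
    Pt = np.eye(len(mu))
    for t in range(1, t_max + 1):
        Pt = Pt @ P
        d_t = 0.5 * np.max(np.abs(Pt - mu).sum(axis=1))   # worst-start TV distance
        if d_t <= eps:
            return t
    raise RuntimeError("chain did not reach the target distance within t_max steps")
```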
For functions f,g:Ω→ℝ,
the Dirichlet form of a reversible Markov chain P with stationary distribution μ is defined as
ℰ_P(f,g) = ⟨ f, (I-P) g⟩_μ = 1/2∑_x,y∈Ωμ(x) P(x,y) (f(x)- f(y)) (g(x) - g(y)),
where ⟨ f, g⟩_μ:=∑_x∈Ω f(x)g(x) μ(x).
The spectrum of the ergodic and reversible Markov chain P is real, and we let 1 = λ_1 > λ_2 ≥…≥λ_|Ω|≥ -1 denote its eigenvalues. The (absolute) spectral gap of P is defined by (P) = 1 - max{|λ_2|,|λ_|Ω||}.
When P is positive semidefinite, we have
(P) =1 - λ_2 = inf{ℰ_P(f,f)/⟨ f, f⟩_μ| f:Ω→ℝ, ⟨ f, f⟩_μ≠ 0 }.
For P reversible and ergodic, we have
the following standard comparison between the spectral gap and the mixing time
T_mix(P, ϵ) ≤1/(P)·log(1/(ϵμ_min)),
where μ_min := min_x∈Ωμ(x).
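A matching sketch for the absolute spectral gap, under the same small-state-space assumption, is the following.

```python
import numpy as np

def absolute_spectral_gap(P):
    """Absolute spectral gap 1 - max{|lambda_2|, |lambda_|Omega||} of a
    reversible transition matrix P (its spectrum is real), computed directly
    from the eigenvalues; only meant for explicitly given small matrices."""
    eig = np.sort(np.abs(np.linalg.eigvals(P).real))
    return float(1.0 - eig[-2])
```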
The expected value of a function f: Ω→ℝ_≥ 0
with respect to μ is defined as _μ[f] = ∑_x∈Ω f(x) μ(x).
Similarly,
the entropy of the function
with respect to μ is given by
_μ(f) := _μ[f logf/_μ [f]]
= _μ[f log f] - _μ[f log(_μ[f])].
We say that the Markov chain P satisfies a modified log-Sobolev inequality (MLSI) with constant ρ if
for every function f:Ω→ℝ_≥ 0,
ρ·_μ(f) ≤ℰ_P(f,log f).
The largest ρ satisfying the inequality above is
called the modified log-Sobolev constant of P and is denoted by ρ(P).
A well-known general relationship (see <cit.>) shows that
4(1-2μ_min )/log(1/μ_min - 1)(P)
≤ρ(P) ≤ 2(P).
For distributions μ and ν over Ω,
the relative entropy of ν with respect to μ, denoted as ℋ(ν|μ), is defined as
ℋ(ν|μ) := ∑_x ∈Ων(x) logν(x)/μ(x).
A Markov chain P with stationary distribution μ satisfies discrete relative entropy decay with rate r > 0 if for all distributions ν:
ℋ(ν P |μ) ≤ (1-r) ℋ(ν|μ).
It is a standard fact (see, e.g., Lemma 2.4 in <cit.>) that when (<ref>) holds, then ρ(P) ≥ r, and
T_mix(P, ϵ) ≤1/r·(loglog(1/μ_min) + log(1/(2ϵ)) ).
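On small instances the decay condition above can also be probed numerically; the sketch below (our own illustrative Python) computes the relative entropy and the one-step contraction ratio, which is at most 1-r for every starting distribution ν whenever the decay holds with rate r.

```python
import numpy as np

def relative_entropy(nu, mu):
    """H(nu | mu) = sum_x nu(x) log(nu(x)/mu(x)), with the convention 0 log 0 = 0."""
    supp = nu > 0
    return float(np.sum(nu[supp] * np.log(nu[supp] / mu[supp])))

def one_step_contraction(P, mu, nu):
    """Ratio H(nu P | mu) / H(nu | mu) after one step of the row-stochastic
    chain P; if the chain satisfies relative entropy decay with rate r, this
    ratio is at most 1 - r for every starting distribution nu."""
    return relative_entropy(nu @ P, mu) / relative_entropy(nu, mu)
```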
§ SWENDSEN-WANG DYNAMICS ON GENERAL GRAPHS
In this section, we consider the SW dynamics
for the q-state ferromagnetic Potts models on general graphs. In particular, we establish Theorem <ref> from the introduction, which is a direct corollary of the following more general result.
Let q≥ 2, β > 0, η > 0, b > 0, Δ≥ 3, and χ≥ 2.
Suppose G= (V,E) is an n-vertex graph of maximum degree Δ and chromatic number χ.
Let μ_Potts be the Gibbs distribution of the q-state ferromagnetic Potts model on G with parameter β.
If μ_Potts is η-spectrally independent and
b-marginally bounded,
then there exists a universal constant C > 1 such that
the modified log-Sobolev constant of the SW dynamics satisfies:
ρ(P_SW) = Ω(b^7+6κ/χ· (C Δlog n )^κ· (loglog n)^κ+1),
where κ = 1 +⌈2η/b⌉, and
T_mix(P_SW) = O( b^-(7+6κ)·χ· (C Δlog n)^κ (loglog n)^κ+1·log n ).
Theorem <ref> follows from this theorem by noting that
χ≤Δ and that
under the assumptions η = O(1) and βΔ = O(1), we have b = O(1) and κ = O(1).
When Δ is small, i.e., Δ = o(log n), we can obtain slightly better bounds on
ρ(P_SW) and T_mix(P_SW) and replace the
(C Δlog n ·loglog n)^κ factor
by a factor of (CΔ)^6 + 4⌈2η/b⌉.
Before proving Theorem <ref>, we provide a number of definitions and required background results in Section <ref>.
We then give the proof of Theorem <ref> in Sections <ref>, <ref>, and <ref>, and include some applications of this result in Section <ref>.
§.§ Factorization of entropy
We present next several factorizations of the entropy functional _μ(f), which are instrumental in establishing the decay of the relative entropy for the SW dynamics.
We introduce some useful notations first.
Given a pinning τ in V∖Λ (i.e., τ∈Ω_V∖Λ), we let μ_Λ^τ(·) := μ(·|σ_V∖Λ = τ).
Given a function f:Ω→ℝ_≥0,
subsets of vertices B ⊆Λ⊂ V,
and τ∈Ω_V ∖Λ,
the function f^τ_B : Ω_B^τ→ℝ_≥0 is defined by:
f^τ_B (σ) = _ξ∼μ^τ_Λ∖ B [f(τ∪ξ∪σ)].
If B= Λ, we often write f^τ for f^τ_B, and if τ = ∅,
then we use f_B for f^τ_B.
We use ^τ_B(f^τ) to denote _μ^τ_B(f^τ), and
if the pinning τ on V∖ B is drawn from a distribution π over Ω_V∖ B,
we use _τ∼π[^τ_B(f^τ)] to denote the expectation
of the entropy ^τ_B(f^τ) over the random pinning τ.
Various forms of entropy factorization arise from bounding _μ (f) by different (weighted) sums of restricted
entropies of the function f.
The first one we introduce is the so-called ℓ-uniform block factorization of entropy, or ℓ-UBF.
For an integer ℓ≤ n, ℓ-UBF holds for
μ with constant C_UBF if for all functions f : Ω→ℝ_≥0,
ℓ/n·_μ (f) ≤ C_UBF·1/\binom{n}{ℓ}∑_S ∈\binom{V}{ℓ}_τ∼μ_V ∖ S[^τ_S (f^τ) ],
where \binom{V}{ℓ} denotes the collection of all subsets of V of size ℓ.
An important special case is when ℓ=1, in which case (<ref>) is called approximate tensorization of entropy (AT); this special case has been quite useful for establishing optimal mixing time bounds for the Glauber dynamics in various settings (see, e.g., <cit.>).
In recent works, a key step for obtaining AT has been to first establish ℓ-UBF for some large ℓ. The following result will be useful for us.
Let b and η be fixed.
For θ∈(0,1) and n ≥ (2/θ)(4η/b^2 + 1),
the following holds.
If the Gibbs distribution μ of a spin system
on an n-vertex graph
is η-spectrally independent and b-marginally bounded, then
⌈θ n ⌉-UBF holds with
C_UBF=(e/θ)^⌈2η/b⌉.
In addition, if θ < b^2/(12Δ), then:
_μ (f) ≤ C_UBF·18 /b^5θ∑_i=1^n _τ∼μ_V∖{i}[_i^τ (f^τ) ].
Note that the second inequality in the theorem corresponds to AT with constant
C_AT = C_UBF·18 /b^5θ.
Another useful notion is the k-partite factorization of entropy or KPF.
Let U_1, …, U_k be k disjoint independent sets of V such that
⋃_i=1^k U_i = V. We say μ satisfies KPF with constant C_KPF if for all functions
f : Ω→ℝ_≥0,
_μ (f) ≤ C_KPF∑_i=1^k
_τ∼μ_V ∖ U_i[^τ_U_i (f^τ) ].
KPF was introduced in <cit.>, where it was used to analyze global Markov chains.
The interplay between KPF and UBF is intriguing and is further explored in this paper.
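Since KPF is the central inequality in our analysis, the following brute-force sketch (our own illustrative Python; exhaustive enumeration, so toy instances only) spells out its right-hand side for a given collection of disjoint independent sets.

```python
import itertools
import math

def ent(mu, f):
    """Entropy functional Ent_mu(f) = E_mu[f log f] - E_mu[f] log E_mu[f];
    mu and f are dicts keyed by the same configurations, with f >= 0."""
    Ef = sum(mu[s] * f[s] for s in mu)
    Eflogf = sum(mu[s] * f[s] * math.log(f[s]) for s in mu if f[s] > 0)
    return Eflogf - Ef * math.log(Ef)

def kpf_right_hand_side(mu, f, parts, n, spins):
    """Sum over the parts U_j of the expected conditional entropy of f, where
    the expectation is over a pinning tau of the vertices outside U_j drawn
    from its marginal; `parts` is a list of disjoint independent sets
    covering {0,...,n-1}."""
    total = 0.0
    for U in parts:
        outside = [v for v in range(n) if v not in U]
        for tau in itertools.product(spins, repeat=len(outside)):
            pin = dict(zip(outside, tau))
            consistent = {s: p for s, p in mu.items()
                          if all(s[v] == pin[v] for v in outside)}
            w = sum(consistent.values())        # probability of this pinning
            if w == 0:
                continue
            cond = {s: p / w for s, p in consistent.items()}
            # f restricted to the consistent configurations is exactly f^tau
            total += w * ent(cond, f)
    return total
```

Maximizing the ratio ent(mu, f) / kpf_right_hand_side(mu, f, parts, n, spins) over test functions f then gives an empirical lower bound on the best possible C_KPF for that instance.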
§.§ Proof of main result for the SW dynamics: Theorem <ref>
The main technical contribution in the proof of
Theorem <ref> is establishing KPF with a better (i.e., smaller) constant C_KPF.
As in <cit.>, KPF is then used to derive an improved “edge-spin” factorization
of entropy which is known to imply the desired bounds on the modified log-Sobolev constant and on
the mixing time of the SW dynamics.
For a b-marginally bounded Gibbs distribution μ that satisfies η-spectral independence
on an n-vertex graph G=(V,E) of maximum degree Δ,
if b and η are constants independent of Δ and n,
and Δ∈ [3, b^4 n/10e(4η + b^2)],
then there exists an absolute constant c>0 such that
k-partite factorization of entropy holds for μ with constant
C_KPF =(Δlog n)^c.
Specifically,
for a set of k disjoint independent sets V_1, …, V_k such that ⋃_j=1^k V_j = V,
_μ(f) ≤ 54 ·e^13κ/b^5+6κ·
(Δlog n)^κ· (loglog n)^1+κ∑_j=1^k_τ∼μ_V ∖ V_j[^τ_V_j (f^τ)],
where κ = 1+⌈2η/b⌉.
Moreover, if Δ^2 ≤b^4 n/10e(4η + b^2), then the following also holds
_μ(f) ≤ 72 ·e^8κ/b^5+4κ·Δ^2+4κ∑_j=1^k_τ∼μ_V ∖ V_j[^τ_V_j (f^τ)].
Let ℬ = {B_1,…, B_k} be a collection of disjoint independent sets such that V = ⋃_i=1^k B_i.
The independent set dynamics P_ℬ
is a heat-bath block dynamics w.r.t. ℬ and a uniform distribution over ℬ. If μ satisfies k-partite factorization of entropy with C_KPF,
then P_ℬ satisfies a
relative entropy decay with rate r≥ 1/(k· C_KPF).
See Lemma <ref> for the more general statement.
As mentioned, KPF was first studied in <cit.>;
the constant proved there was
C_KPF = b^O(Δ)· (b Δ)^O(η/b),
so our new bound improves the dependence on Δ from exponential to polynomial.
The proof of Theorem <ref> is given in two parts. In Section <ref>, we prove (<ref>), whereas (<ref>) is proved in Appendix <ref>.
With KPF on hand, the next step in the proof of Theorem <ref> relies on the
so-called edge-spin factorization of entropy.
Let Ω_J := Ω×{0,1}^E be the set of joint configurations (σ, A) corresponding to pairs of a spin configuration σ∈Ω and an edge configuration (a subset of edges in a graph) A⊆ E.
For a q-state Potts model with parameter p=1-e^-β,
we use ν to denote the Edwards-Sokal measure on Ω_J given by
ν(σ,A) := 1/Z_J (1-p)^|E|-|A| p^|A|1(σ∼ A),
where σ∼ A is the event that every edge in A has its two endpoints with the same spin in σ,
and Z_J:=∑_(A,σ)∈Ω_J(1-p)^|E|-|A| p^|A|1(σ∼ A) is a normalizing constant.
Let ν(·|σ) and ν(·| A) denote
the conditional measures obtained from ν by fixing the spin configuration to be σ or fixing the edge configuration to be A respectively.
For a function f : Ω_J →ℝ_≥0,
let f^σ : {0,1}^|E|→ℝ_≥0 be the function given by f^σ(A)= f(σ∪ A), and
let f^A : Ω→ℝ_≥0 be the function given by f^A(σ)= f(σ∪ A).
We say that edge-spin factorization of entropy holds with constant C_ES if for all functions f:Ω_J →ℝ_≥0,
_ν(f) ≤ C_ES( _(σ,A) ∼ν[ _A∼ν(·|σ) (f^σ) ] + _(σ,A) ∼ν[ _σ∼ν(·| A) (f^A) ] ).
The following result from <cit.> will be useful for us.
Suppose the q-state ferromagnetic Potts model with parameter β on a graph G of maximum degree Δ≥ 3 satisfies KPF with constant C_KPF.
Then, the edge-spin factorization of entropy holds with constant C_ES = O(βΔ k e^βΔ) · C_KPF.
The original bound for C_ES stated in <cit.> is actually O(βΔ^2 e^βΔ) · C_KPF,
but in the proof there, a factor of k is upper bounded by Δ.
Since we do not assume Δ to be a constant, we avoid such an upper bound.
We also remark that the exponential dependence of C_ES on βΔ can probably be improved,
but in our applications βΔ = O(1), so this would not represent a tangible improvement.
The final ingredient in the proof of Theorem <ref> is the following.
Suppose edge-spin factorization of entropy holds with constant C_ES.
Then, the SW dynamics P_SW satisfies the relative entropy decay with rate
Ω(1/C_ES).
We are now ready to prove Theorem <ref>.
Note that Theorem <ref> requires an upper bound on the maximum degree Δ of the graph,
so in the case when Δ is at least linear in n,
we employ a cruder argument via the spectral gap to obtain a polynomial bound for the MLSI and the mixing time.
First, we assume Δ∈ [3, b^4 n/10e(4η + b^2)].
By Theorem <ref>,
μ_Potts satisfies
χ-partite factorization of entropy with constant
C_KPF= (Δlog n)^κ (loglog n)^1+κ· O(e^13κ/b^5+6κ).
It follows from Lemma <ref> and Lemma <ref> that
the SW dynamics satisfies (<ref>) with
r= Ω(b^5+6κ/χβΔ e^βΔ· (Δlog n)^κ (loglog n)^1+κ· e^13κ).
Note that b≤ q^-1e^-βΔ,
and so βΔ e^βΔ≤ e^2βΔ≤ b^-2.
Therefore, the mixing time bound follows from (<ref>).
Next, let us consider the case when Δ = Ω(n).
In this case, it suffices to provide a 1/poly(n) lower bound on the modified log-Sobolev constant of the SW dynamics, which can be obtained in a straightforward manner using the known bounds for the Potts Glauber dynamics and the comparison technology from <cit.>.
Recall that P_ℬ is the independent set dynamics; that is, the block dynamics with respect to a collection of disjoint independent sets {B_1, …, B_k}; see in Remark <ref>.
From Theorem 3.2 in <cit.>, we know that (P_GD) ≥ n^-(2η +1), where P_GD denotes the transition matrix for the Potts Glauber dynamics.
Since ℰ_P_GD(f,f) ≤ℰ_P_ℬ(f,f) for any function f, it follows that (P_ℬ) ≥ n^-(2η +1).
The comparison inequalities from <cit.> imply that
(P_SW) ≥(P_ℬ)·min_i=1,…, kmin_τ∈Ω_V∖ B_imin_v∈ B_i( P_v^τ),
where P_v^τ is the transition matrix
for the update at vertex v, with τ as the fixed boundary condition,
that adds each monochromatic edge
between v and its neighbors independently with probability p:=1 - e^-β, and assigns a new random spin to v only if no edge is added.
From a coupling argument, we know that for any v∈ B_i, (P_v^τ) ≥ (1-p)^Δ = e^-βΔ≥ qb.
Thus, (P_SW) ≥ n^-(2η+1) qb,
and ρ(P_SW) = Ω(n^-(2η+2) b) by (<ref>).
The mixing time bound then follows from (<ref>).
§.§ Proof of the main technical theorem: Theorem <ref>
Recall that given a function f:Ω→ℝ_≥0,
subsets of vertices B ⊆Λ⊂ V,
and τ∈Ω_V ∖Λ,
the function f^τ_B : Ω^τ_B→ℝ_≥0 is defined by
f^τ_B (σ) = _ξ∼μ^τ_Λ∖ B [f(τ∪ξ∪σ)].
In the proof of Theorem <ref> we use several facts, which we compile next.
Let Λ_0 = ∅.
For any Λ_1 ⊂…⊂Λ_m ⊂Λ⊆ V, any τ∈Ω_V∖Λ and any f : Ω_Λ^τ→ℝ_≥0,
∑_i=1^m_γ∼μ_Λ∖Λ_i^τ[ ^τ∪γ_Λ_i∖Λ_i-1(f^γ_Λ_i ∖Λ_i-1) ]
= _γ∼μ^τ_Λ∖Λ_m[ ^τ∪γ_Λ_m (f^γ) ].
The following corollary directly follows from this fact,
by taking Λ_1 = A, Λ_2 = B and V=Λ.
Let A,B and Λ be subsets of vertices such that A⊂ B ⊂Λ⊆ V. For any τ∈Ω_V∖Λ and any f : Ω_Λ^τ→ℝ_≥0,
_γ∼μ^τ_Λ∖ A[ ^γ∪τ_A (f^γ) ]
≤_γ∼μ^τ_Λ∖ B[ ^γ∪τ_B (f^γ) ].
Let S⊆ V be a subset of vertices.
Let S_1, …, S_m ⊆ V denote the connected components of S.
For a vertex v∈ V,
let C_S(v) the unique connected component S_i that contains v, if such component exists, otherwise set C_S(v) to be the empty set. When S is chosen uniformly at random
among all subsets of size ⌈θ n⌉, the following exponential tail bound
for |C_S(v)| was established in <cit.>.
Let G=(V,E) be an n-vertex graph of maximum degree at most Δ. Then for any v∈ V and every integer k≥ 0 we have
_S[|C_S(v)| = k]≤ℓ/n· (2eΔθ)^k-1,
where the probability _S[·] is taken over a uniformly random subset S⊆ V of size ℓ = ⌈θ n ⌉.
We proceed to prove (<ref>) from Theorem <ref>.
With a slightly different argument, we will establish (<ref>) in Appendix <ref>,
which is a better bound only when Δ =o( log n).
Let V_1, …, V_k ⊆ V be disjoint independent sets such that ⋃_j V_j = V.
We take θ = b^2/5eΔ
so that 2/n·(4η/b^2 + 1) < θ < min{1/(4eΔ), b^2/(12Δ)}.
Let S be a subset of vertices of size ⌈θ n⌉ chosen uniformly
at random from all the subsets of size ⌈θ n⌉.
Let S_1, …, S_m ⊆ V be the connected components of S.
Theorem <ref> implies that ⌈θ n⌉-UBF holds with constant
C_UBF=(e/θ)^⌈2η/b⌉ =
(5e^2Δ/b^2)^⌈2η/b⌉,
and so
for any function f : Ω→ℝ_≥0 we have
_μ(f) ≤(5e^2Δ/b^2)^1+⌈2η/b⌉_S[ _τ∼μ_V∖ S[_S^τ (f^τ) ]],
where _S denotes the expectation over the random subset S.
To bound the right-hand side of (<ref>), we use the following fact, which we prove later in Section <ref>.
Let V_1, …, V_k be disjoint independent sets such that ⋃_j=1^k V_j = V.
Let S⊆ V be a subset of vertices.
Let S_1, …, S_m ⊆ S be all the connected components of S.
Suppose that for S_i ⊆ S,
Γ(S_i) takes the minimum value such that the following inequality holds
for an arbitrary pinning τ∈Ω_V∖ S_i and any function g:Ω_S_i^τ→ℝ_≥0:
^τ_S_i(g)≤Γ(S_i) ∑_j=1^k _ξ∼μ_S_i ∖ V_j^τ[_V_j∩ S_i^ξ∪τ (g^ξ_S_i∩ V_j)].
Then for any function f : Ω→ℝ_≥0,
_τ∼μ_V∖ S[_S^τ (f^τ) ]
≤∑_j=1^k _τ∼μ_V∖ V_j[_V_j^τ (f^τ) ] ·max_S_i ⊆ SΓ(S_i).
Letting κ := 1+⌈2η/b⌉,
from (<ref>) and Lemma <ref>,
we have
_μ(f) ≤(5e^2Δ/b^2)^κ∑_j=1^k _τ∼μ_V∖ V_j[_V_j^τ (f^τ) ] ·_S[max_S_i ⊆ SΓ(S_i)].
To show (<ref>), it remains to provide an upper bound for
_S[max_S_i ⊆ SΓ(S_i)].
Note that if |S_i| is a constant, then Γ(S_i) can be trivially bounded by a constant.
For S_i with |S_i| = ω(1), we first recursively upper bound each Γ(S_i) in terms of Γ(S_i,l), where the sets S_i,l form a decomposition of S_i at a smaller scale and
Γ(S_i,l) is defined analogously to Γ(S_i) in (<ref>).
Formally,
for a fixed S_i,
let S_i'⊆ S_i be a uniformly generated set of ⌈θ_i |S_i| ⌉ vertices, where θ_i:=b^2/5eΔ(S_i) and Δ(S_i) denotes the maximum degree in the subgraph induced by S_i.
Let S_i,1,…, S_i,m_i⊆ S_i' be all the connected components of S_i'.
For each l=1,…, m_i, we define Γ(S_i,l) to be
the minimum number such that
for an arbitrary pinning τ∈Ω_V∖ S_i,l and any function g:Ω_S_i,l^τ→ℝ_≥0 the following inequality holds:
^τ_S_i,l(g)≤Γ(S_i,l) ∑_j=1^k _ξ∼μ_S_i,l∖ V_j^τ[_V_j∩ S_i,l^ξ∪τ (g^ξ_S_i,l∩ V_j)].
By assumption,
μ is η-spectrally independent and b-marginally bounded.
These properties, by definition, are preserved under any pinning.
In particular,
for any S_i ⊆ S and an arbitrary pinning τ∈Ω_V∖ S_i,
μ_S_i^τ is still η-spectrally independent and b-marginally bounded. Hence, by the same argument used to obtain (<ref>), Theorem <ref> and Lemma <ref> imply that
for any g:Ω_S_i^τ→ℝ_≥0,
_S_i^τ (g) ≤(5e^2Δ(S_i)/b^2)^κ∑_j=1^k _γ∼μ^τ_S_i∖ V_j[_S_i ∩ V_j^τ∪γ (g^γ) ] ·_S_i'[max_S_i,l⊆ S_i'Γ(S_i,l)].
We proceed to upper bound Γ(S_i,l) in terms of the size of S_i,l.
Note that for any S_i,l and an arbitrary pinning τ∈Ω_V∖ S_i,l,
μ_S_i,l^τ is again η-spectrally independent and b-marginally bounded.
Thus, the second part of Theorem <ref> implies that
for any function h:Ω_S_i,l^τ→ℝ_≥0,
_S_i,l^τ (h) ≤18/b^5(5e^2Δ(S_i,l)/b^2)^κ∑_v∈ S_i,l_γ∼μ^τ_S_i,l∖{v}[_v^τ∪γ (h^γ) ].
Observe that by Corollary <ref>, if v∈ B⊆ S_i,l, then
_γ∼μ^τ_ S_i,l∖{v}[_v^γ∪τ (h^γ) ] ≤_γ∼μ^τ_ S_i,l∖ B[_B^γ∪τ (h^γ) ].
Hence,
∑_v∈ S_i,l_γ∼μ^τ_S_i,l∖{v}[_v^τ∪γ (h^γ) ]
= ∑_j=1^k ∑_v∈ S_i,l∩ V_j_γ∼μ^τ_ S_i,l∖{v}[_v^τ∪γ (h^γ) ]
≤max_j | S_i,l∩ V_j| ∑_j=1^k _γ∼μ^τ_ S_i,l∖ V_j[_V_j∩ S_i,l^τ∪γ(h^γ) ],
Let b_1:= 18 · (5e^2)^κ· b^-5 - 2κ.
We obtain from (<ref>) that
_S_i,l^τ (h) ≤
b_1 max_j|V_j ∩ S_i,l| Δ(S_i,l)^κ
∑_j=1^k_γ∼μ^τ_S_i,l∖ V_j[_V_j ∩ S_i,l^γ∪τ (h^γ) ].
From the definition of Γ(S_i,l), we then have
Γ(S_i,l) ≤
b_1 max_j|V_j ∩ S_i,l| Δ(S_i,l)^κ
≤ b_1 |S_i,l| Δ(S_i,l)^κ,
and
_S_i'[max_S_i,l⊆ S_i'Γ(S_i,l)]
≤ b_1 _S_i'[max_S_i,l⊆ S_i' |S_i,l| Δ(S_i,l)^κ]
≤ b_1_S_i'[ max_v∈ S_i |C_S_i'(v)|^κ+1].
To estimate the expectation on the right-hand side of (<ref>),
we first expand the expectation and apply a union bound as follows:
_S_i'[ max_v∈ S_i |C_S_i'(v)|^κ+1] =
∑_x=0^|S_i| x^(κ+1)·_S_i'[ max_v∈ S_i |C_S_i'(v)| = x]
≤ (2log_2|S_i|)^1+κ + ∑_x=2log_2|S_i|^|S_i| x^(κ+1)·_S_i'[ max_v∈ S_i |C_S_i'(v)| = x]
≤ (2log_2|S_i|)^1+κ + ∑_x=2log_2|S_i|^|S_i| x^(κ+1)·∑_v∈ S_i_S_i'[ |C_S_i'(v)| = x].
Then, applying
Lemma <ref> with θ_i ≤ 1/(5eΔ(S_i)) to the subgraph G[S_i] induced by S_i we obtain
∑_x=2log_2|S_i|^|S_i| x^(κ+1) ·∑_v∈ S_i_S_i'[ |C_S_i'(v)| = x]
≤⌈θ_i |S_i| ⌉∑_x=2log_2|S_i|^|S_i| x^(κ+1) (2eΔ(S_i) θ_i)^x-1
= ⌈θ_i |S_i| ⌉/2eΔ(S_i)θ_i· (2eΔ(S_i) θ_i)^2log_2|S_i|∑_x=2log_2|S_i|^|S_i| x^(κ+1) (2eΔ(S_i) θ_i)^x - 2log_2|S_i|
≤1/2|S_i|eΔ(S_i)∑_x=0^|S_i|-2log_2|S_i| (x+2log_2|S_i|)^κ+1 2^-x
≤1/2|S_i|eΔ(S_i)[
∑_x=0^log_2|S_i| - 1 (x + 2log_2 |S_i|)^κ + 1
+ ∑_x=log_2 |S_i|^|S_i|-2log_2|S_i|(x+2log_2|S_i|)^(κ+1)/|S_i| · 2^x - log_2 |S_i|]
≤1/2|S_i|eΔ(S_i)[
(3log_2|S_i|)^2+κ +
∑_x=0^|S_i|-3log_2|S_i|(x+3log_2|S_i|)^(κ+1)/|S_i| · 2^x].
Notice that for |S_i|=ω(1), (3log_2|S_i|)^κ+2/|S_i| < 1.
Also, for any integer x≥ 0, (x+3log_2|S_i|)^(κ+1)/|S_i| · 2^x < 1,
so
the last sum in (<ref>) is less than |S_i|.
Therefore, by (<ref>), (<ref>) and (<ref>) we have
_S_i'[max_S_i,l⊆ S_i'Γ(S_i,l)]
≤ b_1 · [ (2log_2|S_i|)^1+κ + 1 ]
≤ b_1 (3log_2|S_i|)^1+κ.
Now we plug (<ref>) into (<ref>) and obtain that
Γ(S_i) ≤(5e^2Δ(S_i)/b^2)^κ b_1 (3log_2|S_i|)^1+κ = b_2 Δ(S_i)^κ (log_2 |S_i|)^1+κ≤ b_2 |S_i|^κ (log_2 |S_i|)^1+κ,
where b_2 :=b_1 · 3^1+κ· (5e^2/b^2)^κ.
Then
_S[max_S_i⊆ SΓ(S_i)]
≤ b_2 _S[max_S_i⊆ S |S_i|^κ (log_2 |S_i|)^1+κ]
≤ b_2_S[ max_v∈ S |C_S(v)|^κ· (log_2 |C_S(v)|)^1+κ].
Analogous to the computation in (<ref>) and (<ref>), we have
_S[ max_v∈ S |C_S(v)|^κ· (log_2 |C_S(v)|)^1+κ]≤ (3log_2 n)^κ (log_2 log_2 n)^1+κ.
These bounds together with (<ref>) imply that
C_KPF≤ b_2 (3log_2 n)^κ (log_2 log_2 n)^1+κ·(5e^2Δ/b^2)^κ≤ 54 ·(5e^2)^3κ· 3^2κ/b^6κ· (Δlog_2 n)^κ· (log_2 log_2 n)^1+κ
,
from which we obtain the bound in (<ref>) by the inequality (5e^2)^3· 3^2 ≤ e^13.
§.§ Entropy factorization: Proof of Lemma <ref>
We proceed with the proof of Lemma <ref>.
We first present the following fact from <cit.>, which will be useful.
Let Λ = A∪ B ⊆ V, τ∈Ω_V∖Λ, and assume μ_Λ^τ is a product measure μ_Λ^τ = μ_A^τ⊗μ_B^τ.
For all U⊂ B and any f: Ω→ℝ_≥0,
* ^τ_A(f^τ_A) = _γ∼μ^τ_B[^γ∪τ_A(f^τ_A)].
* _γ∼μ^τ_B[^γ∪τ_A(f_A^τ)]
≤_γ∼μ^τ_U[^γ∪τ_A(f^γ∪τ_A)].
We are now ready to prove Lemma <ref>.
Note that μ_S^τ = ⊗_i=1^m μ_S_i^τ is a product measure.
For i≥ 1, let S_≤ i := S_1∪…∪ S_i.
For i > 1, we let S_<i:=S_1 ∪…∪ S_i-1,
and we set S_<1 := ∅ for convenience.
As a direct consequence of applying Lemma <ref> and applying Lemma <ref>(1), we have the following identity for any f : Ω→ℝ_≥0:
_τ∼μ_V∖ S[_S^τ (f^τ) ]
= ∑_i=1^m_τ∼μ_V∖ (S_≤ i)[ ^τ_S_i(f^τ_S_i)
]
=∑_i=1^m_τ∼μ_V∖ (S_≤ i)[
_γ∼μ^τ_S_<i[ ^τ∪γ_S_i(f^τ_S_i) ]
].
On the other hand, setting g=f_S_i^τ in (<ref>), then for any γ∈Ω^τ_S_<i we obtain that
^τ∪γ_S_i(f_S_i^τ)≤Γ(S_i) ∑_j=1^k _ξ∼μ_S_i ∖ V_j^γ∪τ[_V_j∩ S_i^ξ∪τ∪γ (f_S_i ∩ V_j^τ∪ξ)].
Combining (<ref>) and (<ref>) yields
_τ∼μ_V∖ S[_S^τ (f^τ) ] ≤∑_i=1^m_τ∼μ_V∖ S_≤ i[
_γ∼μ^τ_S_<i[
Γ(S_i) ∑_j=1^k _ξ∼μ_S_i ∖ V_j^τ∪γ[_V_j∩ S_i^ξ∪γ∪τ (f_S_i ∩ V_j^ξ∪τ)]
]
]
= ∑_j=1^k
∑_i=1^mΓ(S_i)_τ∼μ_V∖ S_≤ i_ξ∼μ^τ_S_i ∖ V_j_γ∼μ_S_<i^τ∪ξ[_V_j∩ S_i^ξ∪γ∪τ (f^ξ∪τ_S_i ∩ V_j)]
≤∑_j=1^k max_i Γ(S_i)
∑_i=1^m_τ∼μ_(V∖ S_≤ i) ∪ (S_i ∖ V_j)_γ∼μ^τ_S_<i[_V_j∩ S_i^γ∪τ (f^τ_S_i ∩ V_j)].
We show next that for any j=1,…, k, the following inequality holds:
∑_i=1^m_τ∼μ_(V∖ S_≤ i) ∪ (S_i ∖ V_j)_γ∼μ^τ_S_<i[_V_j∩ S_i^γ∪τ (f^τ_S_i ∩ V_j)]
≤_τ∼μ_V∖ V_j[_V_j^τ (f^τ) ].
Given a pinning τ∼μ_V∖ ((S_i ∩ V_j) ∪ S_<i),
μ_S_i∩ V_j and μ_S_<i are independent. By applying Lemma <ref>(2) to _γ∼μ^τ_S_<i
[_V_j∩ S_i^γ∪τ (f^τ_S_i ∩ V_j)],
we have
∑_i=1^m_τ∼μ_(V∖ S_≤ i) ∪ (S_i ∖ V_j)_γ∼μ^τ_S_<i[_V_j∩ S_i^γ∪τ (f^τ_S_i ∩ V_j)]
≤∑_i=1^m
_τ∼μ_(V∖ S_≤ i) ∪ (S_i ∖ V_j)_ξ∼μ_(S_<i) ∖ V_j^τ[_S_i∩ V_j^τ∪ξ (f_S_i∩ V_j^τ∪ξ) ].
Letting ϕ = τ∪ξ, and by applying Lemma <ref>(1) to _S_i∩ V_j^ϕ (f_S_i∩ V_j^ϕ) we also have
∑_i=1^m
_ϕ∼μ_V∖((S_≤ i) ∩ V_j)[_S_i∩ V_j^ϕ (f_S_i∩ V_j^ϕ) ]
=∑_i=1^m
_ϕ∼μ_V∖(S_≤ i) ∩ V_j_ψ∼μ^ϕ_(S_<i) ∩ V_j[_S_i∩ V_j^ϕ∪ψ (f_S_i∩ V_j^ϕ) ].
Also, the following identity follows from Lemma <ref>(1) and Lemma <ref> as in the way of obtaining (<ref>):
_τ∼μ_V ∖ (S∩ V_j)[_S ∩ V_j^τ (f^τ)] = ∑_i=1^m_ϕ∼μ_V∖(S_≤ i∩ V_j)_ψ∼μ^ϕ_(S_<i) ∩ V_j[_S_i∩ V_j^ϕ∪ψ (f_S_i∩ V_j^ϕ) ].
Finally, it follows from Corollary <ref> that
_τ∼μ_V ∖ (S∩ V_j)[_S ∩ V_j^τ (f^τ)] ≤_τ∼μ_V ∖ V_j[_V_j^τ (f^τ)],
so (<ref>) follows from (<ref>),
(<ref>), (<ref>) and (<ref>).
Therefore, we obtain (<ref>) by
(<ref>) and (<ref>).
§.§ Applications of Theorem <ref>
In this section, we prove Corollary <ref> from the introduction
and present another application of Theorem <ref> concerning the SW dynamics on a random graph generated from the classical Erdős-Rényi G(n,p) model. For this, we first define a condition related to the Dobrushin's influence matrix.
The Dobrushin influence matrix A∈ℝ^n × n is defined by A(u,u)=0 and for u≠ v,
A(u,v) = max_ (σ, τ)∈ S_u,vd_TV(μ_v(·|σ), μ_v(·|τ)),
where S_u,v contains the set of all pairs of partial configurations (σ, τ) in Ω_V∖{v} that can only disagree at u, namely,
σ_w = τ_w if w≠ u.
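As an illustration of the definition, the next sketch (brute-force Python with our own helper names, feasible only for very small n) computes a single entry A(u,v) for the ferromagnetic Ising model in this paper's convention; the entry vanishes unless u and v are adjacent.

```python
import itertools
import math

def dobrushin_entry_ising(edges, n, beta, u, v):
    """Entry A(u, v) for the ferromagnetic Ising model by brute force: the
    maximum, over pairs of full configurations outside v that differ only at
    u, of the total variation distance between the two conditional laws at v.
    Since the conditional at v depends only on v's neighbours, the entry is
    zero whenever u and v are not adjacent."""
    nbrs_v = [w for (a, b) in edges
              for w in ((b,) if a == v else (a,) if b == v else ())]
    def p_plus(sigma):                      # P(sigma_v = +1 | boundary sigma)
        s = sum(sigma[w] for w in nbrs_v)
        return 1.0 / (1.0 + math.exp(-beta * s))
    others = [w for w in range(n) if w not in (u, v)]
    best = 0.0
    for rest in itertools.product([-1, +1], repeat=len(others)):
        base = dict(zip(others, rest))
        sig_plus = dict(base); sig_plus[u] = +1
        sig_minus = dict(base); sig_minus[u] = -1
        best = max(best, abs(p_plus(sig_plus) - p_plus(sig_minus)))
    return best
```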
It is known that an upper bound on the spectral norm of A implies spectral independence. In particular, we have the following result from <cit.>.
If the Dobrushin influence matrix A of a distribution μ satisfies ‖A‖≤ 1 - ϵ for some ϵ > 0,
then μ is spectrally independent with constant η= 2/ϵ.
For the ferromagnetic Ising model, β_u(Δ):=ln(Δ/(Δ-2)) corresponds to the threshold value of the parameter β for the uniqueness/non-uniqueness phase transition on the Δ-regular tree.
For the anti-ferromagnetic Ising model, the phase transition occurs at β̅_u(Δ):=-ln(Δ/(Δ-2)).
If β̅_u(Δ)(1 - δ)<β <β_u(Δ)(1 - δ), we say the Ising model satisfies the δ-uniqueness condition.
On a bounded degree graph,
‖A‖≤ 1 - δ for the Ising model is a strictly stronger condition than
δ-uniqueness condition. However, due to the observation made in <cit.>,
if Δ→∞, the two conditions are roughly equivalent.
The Ising model with parameter β̅_u(Δ)(1 - δ)<β <β_u(Δ)(1 - δ)
and Δ→∞
satisfies
‖A‖≤ 1- δ/2.
We verify that the Ising model
has bounded spectral norm of A:
note that each entry of A can be upper bounded by |β|/2 <cit.>, so a row sum of A is at most
|β|Δ/2
< (1-δ)Δ/2·ln( 1 + 2/(Δ - 2)) ≤(1-δ)Δ/2·( 2/(Δ - 2))
= (1-δ)(1+ 2/(Δ-2)) < 1-δ/2,
where the last inequality holds for Δ large enough.
We show next that Corollary <ref> indeed follows from Theorem <ref>.
For this, we first restate the corollary in a more precise manner.
Let δ∈(0,1) and Δ≥ 3.
For the ferromagnetic Ising model with β≤ (1-δ)β_u(Δ)
on any graph G of maximum degree Δ and chromatic number χ,
or for the ferromagnetic q-state Potts model with q≥ 3 and
β≤2(1-δ)/Δ on the same graph,
the mixing time of the SW dynamics satisfies
T_mix(P_SW) =O( χ·Δ^κ· (log n loglog n)^1+κ),
where κ=1 + ⌈4qe^2/δ⌉.
If Δ=O(1),
then the corollary was proved in a stronger form in <cit.>.
Thus, we assume Δ→∞.
We first show spectral independence.
Let q=2.
Under the δ-uniqueness condition 0<β <(1-δ)β_u(Δ),
by Proposition <ref> and Proposition <ref>,
the Ising model satisfies (4/δ)-spectral independence.
For the q-state Potts model with q≥ 3,
the Dobrushin influence matrix corresponding to μ_Potts satisfies
‖A‖≤βΔ/2; see the proof of Theorem 2.13 in <cit.>.
Thus, if β≤2(1-δ)/Δ, then
‖A‖≤ 1-δ, and by Proposition <ref>,
μ_Potts satisfies (2/δ)-spectral independence.
Letting N(v) denote the neighborhood of v, and noting that
for any configuration η on N(v) we have
μ(σ_v = c |σ_N(v) = η) ≥ 1/(qe^2),
we deduce that μ_Ising and μ_Potts are both (1/(qe^2))-marginally bounded.
Therefore, by noting that κ = 1 + ⌈4qe^2/δ⌉ is a constant that only depends on δ,
the mixing time bound follows from Theorem <ref>
T_mix(P_SW) = O( b^-(7+6κ)·χ· (C Δlog n)^κ (loglog n)^κ+1·log n ) = O( χ·Δ^κ· (log n loglog n)^1+κ),
as desired.
§.§.§ The SW dynamics on random graphs
As another application of Theorem <ref>,
we consider the SW dynamics on a random graph
generated from the classical G(n, d/n) model in which each edge is included independently with probability p = d/n;
we consider the case where d is a constant independent of n.
In this setting, while
a typical graph has Õ(n) edges,
its maximum degree is of order Θ(log n/loglog n) with high probability.
Our results imply that the SW dynamics has polylogarithmic mixing on this type of graph provided β is small enough.
Let δ∈(0,1) and d∈ℝ_≥0 be constants independent of n.
Suppose that G∼ G(n,d/n) and G has maximum degree Δ.
For the ferromagnetic Ising model with parameter β < (1-δ)β_u(Δ) on G
or the ferromagnetic q-Potts model with q ≥ 2 and
β≤2(1-δ)/Δ
on the same graph,
the SW dynamics has (log n)^3+ 2⌈4qe^2/δ⌉· O(loglog n) mixing time,
with high probability over the choice of the random graph G.
Corollary <ref> is established using Corollary <ref> and the following fact about random graphs.
Let G∼ G(n, d/n) for a fixed d∈ℝ_≥0,
and let χ be the chromatic number of G.
With high probability over the choice of G, χ = k_d or χ = k_d +1, where
k_d is the smallest integer k such that d < 2k log k.
By Proposition <ref>, with high probability G∼ G(n,d/n) has
chromatic number χ = O(d).
Also, it is known that with high probability Δ = Θ(log n/loglog n).
Suppose both properties hold.
The result follows from Corollary <ref>.
§ SYSTEMATIC SCAN DYNAMICS
In this section, we study the systematic scan dynamics for general spin systems, which we define next.
Let G=(V,E) be a graph and 𝒮= {1,…, q} a set of spins.
Let Ω⊆𝒮^V be the set of possible spin configurations on G.
We write σ_v for the spin assigned to v by σ.
Given a configuration σ∈Ω and a subset Λ of V,
we write σ_Λ∈𝒮^Λ
for the configuration of σ restricted to Λ.
For a subset of vertices Λ⊆ V,
a boundary condition τ
is an assignment of spins to (some) vertices in the outer vertex boundary ∂Λ⊆ V ∖Λ of Λ; namely,
τ: (∂Λ)_τ→𝒮, with (∂Λ)_τ⊆∂Λ.
Note that a boundary condition is simply a pinning of a subset of vertices identified as being in the boundary of G.
Given a boundary condition τ:(∂ V)_τ→𝒮, the Hamiltonian H:Ω→ℝ of a spin system is defined as
H(σ) = -∑_{v,u}∈ E K(σ_v, σ_u)
- ∑_{v,u}∈ E: u∈ V, v∈ (∂ V)_τ K(σ_u, τ_v)
-∑_v∈ V U(σ_v),
where K:𝒮×𝒮→ℝ and U:𝒮→ℝ are
respectively the symmetric edge interaction potential function and the
spin potential function of the system.
The Gibbs distribution of a spin system with Hamiltonian H is defined as
μ(σ) = 1/Z_H e^-H(σ),
where Z_H := ∑_σ∈Ω e^-H(σ).
We use Ω for the set of configurations σ satisfying
μ(σ) > 0.
The Potts model, as defined in the introduction, corresponds to the spin system with q≥ 2, K(x,y)= β·1(x = y), and U(σ_v)= 0 for all v∈ V.
In this section, we focus on the ferromagnetic Ising model where β >0 and 𝒮 = {-1, +1}.
Another important spin system is the hardcore model that can be defined by setting 𝒮 = {1, 0},
K(x,y)=-∞ if x=y=1 and K(x,y)=0 otherwise,
and U(x)= 1(x=1) ·lnλ, where λ >0 is referred to as the fugacity parameter of the model.
We restrict attention to totally-connected spin systems, as this ensures that the Glauber dynamics, the systematic scan dynamics, and the block dynamics are all irreducible Markov chains (and thus ergodic).
For a subset 𝒞_U of partial configurations on U ⊆ V,
let H[𝒞_U] = (𝒞_U, E[𝒞_U]) be the induced subgraph where E[𝒞_U] consists of all pairs of configurations
on 𝒞_U that differ at exactly one vertex.
We say that 𝒞_U is connected when H[𝒞_U] is connected.
For a pinning τ on Λ⊆ V,
we say Ω^τ_V ∖Λ is connected if H[Ω^τ_V ∖Λ] is connected.
A distribution μ over 𝒮^V is totally-connected if
for every Λ⊆ V and every pinning τ on Λ, Ω^τ_V ∖Λ is connected.
Given an ordering ϕ = [v_1, …, v_n] of the vertices, a systematic scan dynamics performs heat-bath updates on v_1, …, v_n
sequentially in this order.
Recall that a heat-bath update on v_i simply means replacing the spin at v_i by a new spin generated according to
the conditional distribution at v_i given the configuration on V∖{v_i}.
Let P_i ∈ℝ^|Ω| × |Ω| be the transition matrix corresponding to a heat-bath update on the vertex v_i.
The transition matrix of the systematic scan dynamics for the ordering ϕ can be written as
𝒮_ϕ := P_n … P_1.
In general, 𝒮_ϕ is not reversible, so as in earlier works we work with the symmetrized version of the scan dynamics
that updates the spins in the order ϕ and in addition updates the spins in the reverse order of ϕ <cit.>.
The transition matrix of the symmetrized systematic scan dynamics can then be written as
P_ϕ := ∏_i=1^n P_i ∏_i=0^n-1 P_n-i.
Henceforth, we only consider the symmetrized version of the dynamics.
Since P_ϕ is a symmetrized product of reversible transition matrices,
one can straightforwardly verify its reversibility with respect to μ; its ergodicity follows from the assumption that the spin system is totally-connected (see Definition <ref>).
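A minimal sketch of one step of P_ϕ for the ferromagnetic Ising model (our own illustrative Python; the conditional probability below uses this paper's β·1(σ_u = σ_v) edge potential rather than the ±1 product convention) is the following.

```python
import math
import random

def heat_bath_update(spins, v, nbrs, beta):
    """Heat-bath update at v for the ferromagnetic Ising model, using this
    paper's beta * 1(sigma_u = sigma_v) edge potential: the conditional
    probability of +1 is 1 / (1 + exp(-beta * s)), where s is the sum of the
    +/-1 spins of v's neighbours."""
    s = sum(spins[u] for u in nbrs[v])
    p_plus = 1.0 / (1.0 + math.exp(-beta * s))
    spins[v] = +1 if random.random() < p_plus else -1

def symmetrized_scan_step(spins, order, nbrs, beta):
    """One step of the symmetrized scan P_phi: sweep the vertices in the order
    phi, then sweep them again in the reverse order."""
    for v in order:
        heat_bath_update(spins, v, nbrs, beta)
    for v in reversed(order):
        heat_bath_update(spins, v, nbrs, beta)
    return spins
```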
We show tight mixing time bounds for P_ϕ for monotone spin systems (see Definition <ref>). Our main result for the systematic scan dynamics is Theorem <ref>, stated in the introduction; its proof is provided in Section <ref>.
We complement Theorem <ref> with a lower bound for the mixing time of systematic scan dynamics for a particular ordering ϕ. Specifically, on a bipartite graph G=(V_E ∪ V_O, E), an even-odd scan dynamics P_EOE is a systematic scan dynamics with respect to an ordering ϕ such that v_e appears before v_o in ϕ for all v_e∈ V_E and v_o ∈ V_O. In other words,
P_ϕ = ∏_i:v_i∈ V_E P_i ∏_i:v_i∈ V_O P_i ∏_i:v_i∈ V_O P_i ∏_i:v_i∈ V_E P_i.
The above expression is well-defined without specifying the ordering in which the vertices in V_E and V_O are updated since the updates commute.
Let Δ be a constant and let G be an n-vertex connected bipartite graph with maximum degree Δ.
The even-odd scan dynamics P_EOE for the ferromagnetic Ising model on G has mixing time
T_mix(P_EOE) = Ω(log n).
The lower bound in Lemma <ref>
is proved in Section <ref> using the machinery from <cit.> and the fact
that even-odd scan dynamics does not propagate disagreements quickly (under a standard coupling).
Our proof can thus be extended to other scan orderings that propagate disagreements slowly; however, there are orderings that do propagate disagreements quickly (think of a box in ℤ^2 with the vertices sorted in a “spiral” from the boundary of the box to its center). For this type of ordering, the technique does not provide the Ω(log n) lower bound.
In addition, while we focus on the ferromagnetic Ising model to ensure clarity in the proof, the established lower bound is expected to encompass a broader class of spin systems.
§.§ Proof of main result for systematic scan dynamics: Theorem <ref>
The main technique in the proof of Theorem <ref> is to compare the systematic scan dynamics with a fast mixing block dynamics
via a censoring inequality developed in <cit.>.
For this, we first introduce some notations and definitions.
We start by reviewing standard facts about the coupling method that will be used in our proofs; see <cit.> for a more detailed background.
A coupling of a Markov chain M specifies, for every pair of states (X_t,Y_t)∈Ω×Ω at every step t,
a probability distribution over (X_t+1, Y_t+1) such that when viewed in isolation,
{X_t} and {Y_t} are valid instances of the chain M.
The optimal coupling lemma says that for any two distributions
μ and ν, we have
‖μ -ν‖_TV = inf_X∼μ, Y∼ν[X≠ Y],
where the infimum is taken over all couplings of μ and ν.
We focus on couplings of Markov chains such that if X_s = Y_s then X_t = Y_t for all t≥ s.
Given a coupling of M, the coupling time is defined as
T_coup(M) := min{T>0: max_X_0 ∈Ω, Y_0∈Ω[X_T ≠ Y_T]≤1/4}.
It is a standard fact that for any coupling (X_t, Y_t), the coupling time bounds the mixing time as follows:
d(T) ≤max_X_0 ∈Ω, Y_0∈Ω[X_T ≠ Y_T], and thus T_mix(M) ≤ T_coup(M).
A coupling of two instances {X_t}, {Y_t} of a Markov chain M is a monotone coupling if
X_t+1≥_q Y_t+1 whenever X_t ≥_q Y_t, where ≥_q is the partial ordering of Ω.
Let {X_t, σ} denote the instance of M starting at configuration σ∈Ω.
If there exists a simultaneous monotone coupling of {X_t, σ} for all σ∈Ω
(i.e., a grand coupling),
then we say M is a monotone Markov chain.
It can be checked that P_ϕ is a monotone Markov chain for any ϕ (see e.g. <cit.>).
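This monotonicity is what drives the coupling argument used later: running the all-plus and all-minus chains with shared randomness keeps them ordered until they coalesce, and the coalescence time controls the coupling time from every pair of starting states. A self-contained sketch (our own illustrative Python for the Ising model) is the following.

```python
import math
import random

def coupled_scan_coalescence_time(nbrs, order, beta, t_max=10_000):
    """Grand-coupling sketch for the symmetrized Ising scan: the chains started
    from the all-plus and all-minus configurations share the same uniform
    random number at every vertex update, so by monotonicity the minus chain
    stays below the plus chain until the two coalesce; the coalescence time
    upper bounds the coupling time from every pair of starting states."""
    top = {v: +1 for v in nbrs}
    bot = {v: -1 for v in nbrs}
    def update(spins, v, u01):
        s = sum(spins[w] for w in nbrs[v])
        p_plus = 1.0 / (1.0 + math.exp(-beta * s))   # paper's 1(.=.) convention
        spins[v] = +1 if u01 < p_plus else -1
    for t in range(1, t_max + 1):
        for v in list(order) + list(reversed(order)):
            u01 = random.random()
            update(top, v, u01)
            update(bot, v, u01)
        if top == bot:
            return t            # number of symmetrized scan steps to coalesce
    return None
```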
We may also define a partial ordering ≼_π on the space of transition matrices.
A function f∈ℝ^|Ω| is said to be non-decreasing if f(σ) ≥ f(τ) whenever σ≥_qτ,
or non-increasing if f(σ) ≤ f(τ) whenever σ≥_qτ.
We endow ℝ^|Ω| with the inner product
⟨ f, g⟩_π:=∑_x∈Ω f(x)g(x) π(x),
which induces a Hilbert space
(ℝ^|Ω|, ⟨·, ·⟩_π) denoted as L_2(π).
For transition matrices K and L whose stationary distributions are both π,
we say K≼_π L if ⟨ Kf, g⟩_π≤⟨ Lf, g⟩_π
for all non-negative, non-decreasing functions f,g∈ L_2(π).
To show K≼_π L in our applications, we use the following facts.
Suppose π is the Gibbs distribution of a monotone spin system.
* If A_1 ≼_π B_1 and A_2 ≼_π B_2, then for 0≤λ≤ 1,
(1-λ)A_1 + λ A_2 ≼_π (1-λ) B_1 +λ B_2.
* If A_s ≼_π B_s for s=1,…,l, then A_1 … A_l ≼_π B_1 … B_l.
* For any fixed v, let K_v be the heat-bath update at site v.
Then, K_v ≼_π I.
Establishing such partial order between two transition matrices is significant as it would imply stochastic domination of the corresponding two chains (recall that for two distributions π and ν on Ω, we say π stochastically dominates ν, and denote as π≽ν, if
for any non-decreasing function f∈ℝ^|Ω|, we have _π[f] ≥_ν[f]).
The following lemma captures such implication.
Suppose {X_t} and {Y_t} are monotone ergodic Markov chains
reversible with respect to π, the Gibbs distribution of a monotone spin system.
Let K_X and K_Y be the corresponding transition matrices of {X_t} and {Y_t}.
Suppose K_X≼_π K_Y.
Then X_t ≼ Y_t for all t≥ 0 if
the initial states X_0 and Y_0 are sampled from a common distribution ν such that ν/π is non-decreasing;
if ν/π is instead non-increasing, then
Y_t ≼ X_t for all t≥ 0, where ≼ as a relation for X_t and Y_t denotes stochastic domination of their corresponding distributions at time t.
We now provide our proof of Theorem <ref>.
We first assume that Δ≥ 3 and Δ^2 ≤ b^4 n/10e(4η + b^2).
We partition V into k disjoint independent sets I_1, I_2, …, I_k,
where k = O(Δ).
Set ℬ = {I_1, …, I_k} and define P_ℬ to be the heat-bath block dynamics w.r.t.
these independent sets.
Fix an ordering ϕ = [v_1, …, v_n], and fix j ∈{1,…, k}.
Let K_j be the transition matrix corresponding to heat-bath update in the independent set I_j, which can also be seen as a systematic scan on I_j
according to the ordering defined by ϕ.
We define P̂_i to be P_i if v_i ∈ I_j and the identity matrix I otherwise, so that
K_j = K_j^2
= (∏_i: v_i ∈ I_j P_i)^2
= (∏_i: v_i ∈ I_j P_i ∏_i: v_i ∉ I_j I)^2
= ∏_i=1^nP̂_i ∏_i=0^n-1P̂_n-i.
Note that in the computation above, P_i and P_i' commute for v_i,v_i'∈ I_j,
and I commutes with arbitrary matrices.
By Proposition <ref>(3), we obtain
P_i ≼_μP̂_i for all i, and hence by Proposition <ref>(2),
we obtain P_ϕ≼_μ K_j for any j, and consequently, by Proposition <ref>(1),
P_ϕ≼_μ1/k∑_j=1^k K_j = P_ℬ.
Let + and - denote the top and the bottom elements in [q] respectively.
Let {X^+_t} (resp., {X^-_t}) be an instance of a Markov chain with transition matrix P_ϕ starting from the all + (resp., all -) configuration.
Similarly, let {Y^+_t} (resp., {Y^-_t}) be an instance of P_ℬ
starting from the all + (resp., all -) configuration.
P_ϕ is monotone,
so we can define a grand monotone coupling of {X^+_t} and {X^-_t}
such that X^-_t ≤_q X^+_t for all t≥ 0,
which with (<ref>) further implies that the mixing time of a systematic scan can be upper bounded by the coupling time of the all + and all - configurations.
Letting ν^+ (resp., ν^-) denote the trivial distribution concentrated on the all + (resp., all -) configuration, we note that ν^+/μ is non-decreasing and ν^-/μ is non-increasing.
Then Lemma <ref> and (<ref>) imply that
for all t≥ 0,
Y_t^- ≼ X_t^- ≼ X_t^+ ≼ Y_t^+.
For any v ∈ V and all t≥ 0,
X^-_t ≤_q X^+_t implies that
[X_t^+(v) ≠ X_t^-(v)]
≤∑_c∈ [q][X_t^+(v) ≥ c, X_t^-(v) < c]
= ∑_c∈ [q][X_t^+(v) ≥ c] - [X_t^-(v) ≥ c].
Then, since Y_t^- ≼ X_t^- and X_t^+ ≼ Y_t^+,
we obtain that
∑_c∈ [q][X_t^+(v) ≥ c] - [X_t^-(v) ≥ c]
≤∑_c∈ [q][Y_t^+(v) ≥ c] - [Y_t^-(v) ≥ c]
≤∑_c∈ [q]| [Y_t^+(v) ≥ c] - [Y_t^-(v) ≥ c] |
≤ q‖P_ℬ^t(+,·) -P_ℬ^t(-,·)‖_TV
≤ q(‖P_ℬ^t(+,·) -μ(·)‖_TV +
‖P_ℬ^t(-,·) -μ(·)‖_TV).
Since μ is η-spectrally independent and
b-marginally bounded,
it follows from Theorem <ref> and Remark <ref> that
P_ℬ satisfies the relative entropy decay with rate
r ≥b^5+4κ/k Δ^2+4κ· e^8κ,
where κ = 1+⌈2η/b⌉.
Let b' := e^8κ b^-5-4κ, and let
T:= k Δ^2+4κ b' (loglog(μ_min^-1) + log (4qn))
= O(Δ^3+4κb' log (qn)).
By (<ref>) and (<ref>),
T_mix(P_ℬ, 1/(8qn)) ≤ T.
Then for any σ∈Ω,
‖P_ℬ^T(σ,·) -μ(·)‖_TV≤ 1/(8qn),
so we have [X_T^+(v) ≠ X_T^-(v)] ≤ 1/(4n).
By a union bound, [X_T^+ ≠ X_T^-] ≤ 1/4,
and we obtain
T_mix(P_ϕ) ≤ T = O(Δ^3+4κb' log (qn)).
For the case when Δ = Ω(√(b^4 n)), we modify the proof slightly by upper bounding (<ref>) with the corresponding spectral gap bound instead.
As discussed in the proof of Theorem <ref>,
(P_ℬ) ≥ n^-1-2η.
By (<ref>), T_mix(P_ℬ, 1/(8qn)) = O(n^2+2η) = O(bn^κ),
which implies that T_mix(P_ϕ) = O(bn^κ) = O(Δ^2κ· b^-4κ-1) using the argument above.
§.§ Proof of the lower bound: Lemma <ref>
We provide next the proof of Lemma <ref>.
Our proof extends the argument from <cit.> for the Glauber dynamics and also uses ideas from <cit.>. The following fact will be used in our proof.
Let {X_t} denote a discrete-time Markov chain with finite state space Ω, reversible with respect to π and with a positive semidefinite transition matrix. Let B⊆Ω denote an event.
If X_0 is sampled proportional to π on B, then [X_t ∈ B]≥π(B) for all t≥ 0, and for all t≥ 1,
[X_t∈ B]≥π(B)+(1-π(B))^-t+1[(X_1∈ B) - π(B) ]^t.
We can now prove Lemma <ref>.
Suppose n is sufficiently large.
Let R=⌈ln n/(8 lnΔ)⌉ and let T=αln n < R/3 for some α>0 that we will specify later.
We will show that for some (random) starting configuration X_0∈Ω,
‖μ_Ising(·) - P^T_EOE(X_0, ·)‖_TV > 1/4,
and hence by definition T_mix(P_EOE) ≥ T.
As G has maximum degree Δ, we can always find a subset V_C⊆ V of size at least n^1/4 whose pairwise graph distances all exceed 2R.
Let G_C := ∪_u∈ V_C B(u, R).
We consider a restriction of the even-odd scan dynamics on G_C.
Let {X_t} be an instance of the even-odd scan dynamics,
and let {Y_t} be an even-odd scan dynamics that only updates spins for vertices in G_C, starting from the same configuration as {X_t} which will be specified next.
Let N:=n^1/4,
and let f:Ω→ℝ be the function given by f(σ) = 1/N∑_v∈ V_C1(σ(v) = +1).
To show (<ref>), it suffices to find a distribution for X_0 ∈Ω and a threshold A∈ℝ such that
| Pr[f(X_T) ≥ A] - Pr_σ∼μ[f(σ) ≥ A] | > 1/4.
We define X_0 by setting
the configuration on V_C ∪ (V ∖ G_C) to be the all +1 configuration and for each v_C ∈ V_C
sampling the configuration in B(v_C, R) ∖{v_C} conditional on the all +1 configuration on V_C ∪ (V ∖ G_C).
Let π denote the conditional distribution on G_C with a fixed all +1 configuration on V ∖ G_C.
Define A := 𝔼_σ∼π[f(σ)] + N^{-1/3}.
We will show next that
* Pr_σ∼μ[f(σ) ≥ A] ≤ 1/2;
* Under the identity coupling, f(X_t) = f(Y_t) for t≤ T. The identity coupling is the standard coupling that updates the same vertex in both chains at the same time and maximizes the probability that the spin value at the vertex agrees after the update;
* Pr[f(Y_T) ≥ A] > 3/4,
and thus (<ref>) follows.
We first give the upper bound for Pr_σ∼μ[f(σ) ≥ A].
Since the ferromagnetic Ising model is monotone, and f is a non-decreasing function, for any boundary condition τ on Ω_V∖ G_C,
𝔼_σ∼π[f(σ)] ≥ 𝔼_σ∼μ_G_C^τ[f(σ)].
For any τ∈Ω_V∖ G_C, if σ is generated from μ_G_C^τ, then f(σ) is the average of N independent indicator random variables.
By Hoeffding's inequality,
Pr_σ∼μ_G_C^τ[f(σ) ≥ A] ≤ Pr_σ∼μ_G_C^τ[ f(σ) ≥ 𝔼_σ∼μ_G_C^τ[f(σ)] + N^{-1/3} ]
≤ exp(-2· N^{4/3}/N) < 1/2,
and thus
Pr_σ∼μ[f(σ) ≥ A]
= ∑_τ∈Ω_V∖ G_C Pr_σ∼μ_G_C^τ[f(σ) ≥ A] · μ(τ)
< 1/2.
To see that f(X_t) = f(Y_t), we consider the speed of “disagreement propagation”.
Note that f(X_0) = f(Y_0) since X_0=Y_0.
The key observation is that under the identity coupling, in one step of the coupled even-odd scan dynamics,
the disagreement at any vertex v can be propagated only to vertices at distance at most 3 from v.
Since R>3T, we can guarantee that X_t(v) = Y_t(v) for all v∈ V_C and all t≤ T.
Finally, we provide a bound for [f(Y_T) ≥ A].
Fix v ∈ V_C. Let π_v denote the Ising model distribution restricted to B(v, R)
under the all +1 boundary condition outside of B(v, R).
Note that ⊗_v ∈ V_Cπ_v = π.
Let {Y_t^v} denote the Markov chain obtained by projecting {Y_t} to B(v, R).
Since the boundary of B(v, R) is fixed, {Y_t^v} is simply an even-odd scan dynamics on B(v, R) under the all +1 boundary condition.
It can be checked that {Y_t^v} is reversible with respect to π_v and that it has a positive semidefinite transition matrix.
We define ℬ_v to be the event (or subset of configurations) that v is assigned spin +1. It can also be verified that π_v is b-marginally bounded for some constant b = b(β,Δ),
so b ≤π_v(ℬ_v) ≤ 1-b.
Moreover, we have the following fact, which we prove later.
There exists a constant c:=c(β, Δ) > 0 such that Pr(Y^v_1∈ℬ_v) > π_v(ℬ_v) + c.
By Lemma <ref> and Claim <ref>, for all t ≥ 1,
Pr[Y_t^v∈ℬ_v] ≥ π_v(ℬ_v) + b^{-t+1} ( Pr(Y_1^v∈ℬ_v) - π_v(ℬ_v) )^t ≥ π_v(ℬ_v) + c^t/b^{t-1}.
Using this and the definition of f, we have
𝔼[f(Y_T)] = (1/N) ∑_u ∈ V_C Pr[Y_T^u∈ℬ_u]
≥ (1/N) ∑_u∈ V_C ( π_u(ℬ_u) + c^T/b^{T-1} )
= 𝔼_σ∼π[f(σ)] + c^T/b^{T-1}.
Set T := min( R/3, ((1/12)ln n - ln(2/b)) / ln(b/c) ), so that c^T/b^{T-1} ≥ 2N^{-1/3}.
Thus,
𝔼[f(Y_T)] ≥ A + N^{-1/3}.
By Hoeffding's inequality, we obtain
Pr[f(Y_T) < A] ≤ Pr[ f(Y_T) < 𝔼[f(Y_T)] - N^{-1/3} ]
≤ exp(-2 N^{4/3}/N) < 1/4.
Therefore, the mixing time of P_EOE is at least T=Ω(log n).
It remains to prove Claim <ref>.
Let P be the even-odd dynamics defined on V' = B(v,R), and
suppose V'=V_E ∪ V_O is a connected bipartite graph.
Suppose v∈ V_O without loss of generality.
Recall that the transition matrix of P is
∏_i:v_i∈ V_E P_i ∏_i:v_i∈ V_O P_i ∏_i:v_i∈ V_E P_i.
We use Y_E, Y_OE and Y_EOE = Y_1^v to denote the configuration obtained from Y^v_0 after the first round of updates ∏_i:v_i∈ V_E P_i on the even vertices, after the updates ∏_i:v_i∈ V_O P_i on the odd vertices, and after the final updates ∏_i:v_i∈ V_E P_i, respectively.
Since the last set of updates on the even vertices does not affect the spin at v, we have
Pr(Y^v_1∈ℬ_v)
= 𝔼[ 1(Y_EOE∈ℬ_v) ] = 𝔼[ 1(Y_OE∈ℬ_v) ]
= 𝔼[ 𝔼[ 1(Y_OE∈ℬ_v) | Y_E ] ].
Let N(w) denote the set of vertices in V' adjacent to w.
For a configuration σ∈Ω and w∈ V, we define S(σ;w) := ∑_x∈ N(w)1 (σ_x = +1) and g_w : ℤ→ [0,1] given by
g_w(y) := Pr(σ_w = +1 | S(σ; w) = y).
Let π^+_v (resp. π^-_v) be distribution on V' given by π^+_v(σ) = π_v(σ|σ∈ℬ_v)
(resp. π^-_v(σ) = π_v(σ|σ∉ℬ_v)).
Recall that Y^v_0 is a configuration drawn from π^+_v.
Noting that
π^+_v · ( ∏_i:v_i∈ V_E P_i ) = π^+_v,
the configuration Y_E can also be viewed as a configuration drawn from π^+_v.
Hence, by the definition of the Gibbs update, we have
𝔼[ 𝔼[ 1(Y_OE∈ℬ_v) | Y_E ] ]
= 𝔼_τ∼π^+_v[ g_v(S(τ, v)) ].
Similarly,
π_v(ℬ_v) = 𝔼_σ∼π_v[ g_v(S(σ, v)) ].
By Strassen's theorem, there exists a
coupling of (σ, τ) such that
σ∼π_v, τ∼π^+_v and
σ≤_q τ.
Then σ_N(v)≠τ_N(v) implies
S(τ,v) ≥ S(σ, v) + 1.
Therefore,
Pr(Y^v_1∈ℬ_v) - π_v(ℬ_v)
= 𝔼_τ∼π^+_v[ g_v(S(τ, v)) ] - 𝔼_σ∼π_v[ g_v(S(σ, v)) ]
= 𝔼_(σ, τ)∼ (π_v, π_v^+)[ g_v(S(τ, v)) - g_v(S(σ, v)) ]
≥ min_{i≤ deg(v)} ( g_v(i) - g_v(i-1) ) · 𝔼_(σ, τ)∼ (π_v, π_v^+)[ S(τ, v) - S(σ, v) ]
≥ min_{i≤ deg(v)} ( g_v(i) - g_v(i-1) ) · 𝔼_(σ, τ)∼ (π_v, π_v^+)[ 1(σ_N(v)≠τ_N(v)) ].
It can be checked that min_{i≤ deg(v)} ( g_v(i) - g_v(i-1) ) ≥ c_2 for some constant c_2:=c_2(β,Δ) > 0.
Moreover, for any u∈ N(v) we have
𝔼_(σ, τ)∼ (π_v, π_v^+)[ 1(σ_N(v)≠τ_N(v)) ] ≥ 𝔼_(σ, τ)∼ (π_v, π_v^+)[ 1(σ_u≠τ_u) ].
Fix u and let Λ := V' ∖{u,v}.
Since σ_u ≤τ_u,
σ_u≠τ_u implies that σ_u = -1 and τ_u = +1.
Thus we obtain
𝔼_(σ, τ)∼ (π_v, π_v^+)[ 1(σ_u≠τ_u) ]
= 𝔼_(σ, τ)∼ (π_v, π_v^+)[ Pr(τ_u =+1 | τ_Λ) - Pr(σ_u =+1 | σ_Λ) ]
= 𝔼_(σ, τ)∼ (π_v, π_v^+)[ g_u(S(τ, u)) - g_u(S(σ, u)) ]
≥ b · 𝔼_(σ, τ)∼ (π_v^-, π_v^+)[ g_u(S(τ, u)) - g_u(S(σ, u)) ],
where the inequality is due to the b-bounded marginal condition of π_v, which guarantees σ_v = -1 with probability at least b.
Note that if σ∼π_v^-, τ∼π^+_v and
σ≤_q τ, then S(τ, u) ≥ S(σ, u) + 1.
Hence,
𝔼_(σ, τ)∼ (π_v^-, π_v^+)[ g_u(S(τ, u)) - g_u(S(σ, u)) ]
≥ min_{i≤ deg(u)} ( g_u(i) - g_u(i-1) ) > c_3,
for some c_3=c_3(β,Δ)>0.
Therefore, we have established that
Pr(Y^v_1∈ℬ_v) - π_v(ℬ_v) ≥ c_2 c_3 b,
where c_2 c_3 b depends only on β and Δ.
§.§ Applications of Theorem <ref>
We discuss next some applications of Theorem <ref>.
As a first application, we can establish optimal mixing for
the systematic scan dynamics on the ferromagnetic Ising model under the δ-uniqueness condition, improving the best known results that hold under the Dobrushin-type conditions <cit.>.
This result was stated in Corollary <ref> in the introduction and is proved next.
For this, we recall that under δ-uniqueness condition, the Ising distribution
satisfies spectral independence and the bounded marginals condition.
The ferromagnetic Ising model with parameter β such that β̅_u(Δ)(1 - δ)<β <β_u(Δ)(1- δ)
is O(1/δ)-spectrally independent and b-marginally bounded with b=O(1).
We fix δ∈(0,1) and first assume that Δ is a constant.
By Proposition <ref>, the ferromagnetic Ising model with parameter β<(1-δ)β_u(Δ)
satisfies η-spectral independence and b-bounded marginals,
where η = O(1/δ) and b is a constant.
Since the ferromagnetic Ising model is a monotone system, it follows from Theorem <ref> that T_mix = O(log n) for any ordering ϕ.
Now, when Δ→∞ as n →∞, by Proposition <ref>, the Dobrushin influence matrix A of the ferromagnetic Ising model satisfies ‖A‖ ≤ 1 - δ/2.
Under this assumption, it is known that T_mix = O(log n) for any ordering ϕ; see <cit.>.
We can similarly show mixing time bound for the systematic scan dynamics of the hardcore model on bipartite graphs under δ-uniqueness condition.
Let δ∈ (0,1) be a constant.
Suppose G is an n-vertex bipartite graph of maximum degree Δ≥ 3.
For the hardcore model on G with fugacity λ such that 0 < λ < (1-δ)λ_u(Δ),
where λ_u(Δ) = (Δ-1)^Δ-1/(Δ-2)^Δ is the tree uniqueness threshold on the Δ-regular tree,
the systematic scan with respect to any ordering ϕ satisfies
T_mix(P_ϕ) = Δ^O(1/δ)· O(log n).
The hardcore model on a bipartite graph (V_1∪ V_2, E) with fugacity 0 < λ < (1-δ)λ_u(Δ) is monotone, and <cit.> show that it satisfies O(1/δ)-spectral independence and the O(λ)-bounded marginals condition.
Theorem <ref> then implies Δ^O(1/δ)· O(log n) mixing of systematic scan for any ordering.
We consider next
the application of Theorem <ref> to the special case where
the underlying graph is a cube of the d-dimensional lattice graph ℤ^d. We show that strong spatial mixing implies optimal O(log n) mixing of any systematic scan dynamics.
Previously, under the same type of condition, <cit.> gave
an O(log n (loglog n)^2) mixing time bound for arbitrary orderings,
and an O(log n) mixing time bound for a special class of scans that (deterministically) propagate disagreements slowly under the standard identity coupling.
We first provide the definition of our SSM condition.
We say a spin system μ on ℤ^d satisfies the strong spatial mixing (SSM) condition
if there exist constants α, γ, L > 0 such that
for every d-dimensional rectangle Λ⊂ℤ^d
of side length between L and 2L
and every subset B⊂Λ,
with any pair (τ, τ') of boundary configurations on ∂Λ
that only differ at a vertex u, we have
‖ μ_B^τ(·) - μ_B^τ'(·) ‖_TV ≤ γ·exp(-α· dist(u,B)),
where dist(·, ·) denotes graph distance.
The definition above differs from other variants of SSM in the literature (e.g., <cit.>)
in that Λ has been restricted to “regular enough” rectangles.
In particular, our variant of SSM is easier to satisfy than those in <cit.>
but more restrictive than the one in <cit.> (which only considers squares).
Nevertheless, it follows from <cit.> that for the ferromagnetic Ising model, this form of SSM holds up to a critical threshold temperature β < β_c(2) = ln(1+√(2)) on ℤ^2.
Corollary <ref> from the introduction states that for b-marginally bounded monotone spin system on d-dimensional cubes V⊆ℤ^d, SSM implies that the mixing time of any systematic scan P_ϕ is O(log n).
As mentioned there, this result in turn implies that any systematic scan dynamics for the ferromagnetic Ising model is mixing in O(log n) steps on boxes of ℤ^2
when β < β_c(2).
Another interesting consequence of Corollary <ref>
is that we obtain O(log n) mixing time for any systematic scan dynamics P_ϕ for the hardcore model on ℤ^2 when λ < 2.538, which is the best known condition for ensuring SSM <cit.>.
Our proof of Corollary <ref> relies on Lemma <ref> that is restated below.
Remarkably, Lemma <ref> generalizes beyond monotone systems and may be of independent interest.
Suppose the Gibbs distribution μ of a spin system on a d-dimensional cube V ⊆ ℤ^d satisfies the SSM condition. Then μ is η-spectrally independent with η = O(1).
Assume a monotone spin system satisfies the SSM condition.
Then the spin system satisfies η-spectral independence, where η = O(1) by Lemma <ref>.
Noting that the maximum degree of a cube of ℤ^d is Δ = 2d,
the corollary follows from Theorem <ref>.
Lastly, we give a proof of Lemma <ref>.
For this, we recall the notion of a κ-contractive coupling which is known to imply spectral independence.
We say a distribution μ is κ-contractive with respect to a Markov chain P if
for all X_0, Y_0 ∈Ω,
there exists a coupling of one step of P such that
𝔼[d(X_1, Y_1) | X_0, Y_0] ≤κ d(X_0, Y_0),
where d(·, ·) denotes the Hamming distance of two configurations.
The following lemma from <cit.> shows that spectral independence follows from the existence of a contractive coupling with respect to a heat-bath block dynamics.
If μ is κ-contractive with respect to a block dynamics,
then μ is (2DM/(1-κ))-spectrally independent, where M is the maximum block size and
D is the maximum probability of a vertex being selected as part of a block in any step of the block dynamics.
With this lemma on hand, we can now prove Lemma <ref>.
Let L be a sufficiently large constant so that the SSM condition is satisfied; we will choose L later.
Let V be a d-dimensional cube of ℤ^d.
We define a heat-bath block dynamics P_ℬ with respect to
a collection ℬ of d-dimensional rectangles in V.
Precisely, let S_v:={w ∈ℤ^d:d_∞(w,v)< L},
and let ℬ be the set of blocks {S_v ∩ V}_v∈ V.
Given a configuration X_t, the heat-bath block dynamics P_ℬ obtains a configuration X_t+1 in 3 steps as follows:
* Choose v ∈ V uniformly at random. Let S_v' := S_v ∩ V.
* Generate a configuration σ∈Ω_S_v' from μ_S_v'^τ(·), where τ∈Ω_V ∖ S_v' is given by τ(u) = X_t(u);
* Let X_t+1(u) = σ(u) if u∈ S_v' and X_t+1(u) = X_t(u) otherwise.
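To make this update rule concrete, the following is a minimal sketch (not taken from the original text) of one step of such a heat-bath block dynamics for the ferromagnetic Ising model on an n×n grid with free boundary. It differs from the dynamics above in inessential ways we choose for simplicity: the block is an L×L square anchored at the chosen vertex rather than a box centered on it, and the exact conditional resampling is done by brute-force enumeration, which is only feasible for very small blocks.

```python
import itertools
import random
import math

def ising_block_heatbath_step(spins, beta, L, n):
    """One step of a heat-bath block dynamics on an n x n grid (free boundary):
    pick a uniformly random anchor vertex, take the L x L block starting there,
    and resample the block exactly from the Ising conditional distribution given
    the spins outside the block.  Illustrative sketch only."""
    cx, cy = random.randrange(n), random.randrange(n)
    block = [(x, y) for x in range(cx, min(cx + L, n)) for y in range(cy, min(cy + L, n))]
    block_set = set(block)

    def neighbors(x, y):
        for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            if 0 <= x + dx < n and 0 <= y + dy < n:
                yield (x + dx, y + dy)

    # Enumerate all block configurations; weight = exp(beta * sum of agreeing edges
    # touching the block), conditioned on the current outside spins.
    configs, weights = [], []
    for assignment in itertools.product((-1, +1), repeat=len(block)):
        sigma = dict(zip(block, assignment))
        energy = 0.0
        for (x, y), s in sigma.items():
            for (nx, ny) in neighbors(x, y):
                if (nx, ny) in block_set:
                    if (nx, ny) > (x, y):            # count internal edges once
                        energy += s * sigma[(nx, ny)]
                else:
                    energy += s * spins[(nx, ny)]     # boundary edges
        configs.append(sigma)
        weights.append(math.exp(beta * energy))

    chosen = random.choices(configs, weights=weights, k=1)[0]
    spins.update(chosen)
    return spins

# Usage: a 6x6 grid, all-plus start, a few updates with 2x2 blocks.
n = 6
spins = {(x, y): +1 for x in range(n) for y in range(n)}
for _ in range(10):
    ising_block_heatbath_step(spins, beta=0.4, L=2, n=n)
```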
We will show that μ is κ-contractive with respect to P_ℬ whenever SSM holds.
Our argument builds upon <cit.> but works for P_ℬ under our weaker form of SSM condition,
in which the geometry is restricted to d-dimensional rectangles of large side lengths.
One can verify that if Λ = S_v ∩ V∈ℬ,
then Λ is a d-dimensional rectangle of side lengths between L and 2L.
The argument in <cit.> requires a stronger form of SSM to deal with the set of blocks ℬ' = {Λ = S_v ∩ V: Λ≠∅, v ∈ℤ^d } which contains arbitrarily thin rectangles,
and this stronger form of SSM condition
does not hold up to β_c for the ferromagnetic Ising.
Fix (X_0, Y_0) such that there exists exactly one vertex u∈ V such that X_0(u)≠ Y_0(u) and X_0(v)=Y_0(v) for all v≠ u.
We select the same v ∈ V in the first step of P_ℬ in both chains; let Λ = S_v'.
There are three cases with regard to the position of the disagreeing vertex u:
u is contained in Λ, u is on the boundary of Λ,
or u is far from Λ.
Let ∂Λ denote the external boundary of Λ.
If u ∈Λ or u ∉ (Λ∪∂Λ), since the boundary conditions are identical,
we generate the same configuration σ∼μ_Λ^τ to update Λ in both chains such that
X_1(Λ) = Y_1(Λ), where τ := X_0(∂Λ) = Y_0(∂Λ).
Hence,
𝔼[d(X_1, Y_1) | X_0, Y_0, u∈Λ] = 0 and
𝔼[d(X_1, Y_1) | X_0, Y_0, u ∉ (Λ∪∂Λ)] = 1.
It remains to define the coupling in the case when u∈∂Λ, and we would need an upper bound for 𝔼[d(X_1, Y_1) | X_0, Y_0, u ∈∂Λ].
For this, we use the SSM condition.
Let B:={w∈Λ: d(w,u) ≥ r }, where r := 1/2(L/d)^1/2d,
and let τ and τ' be the boundary conditions of Λ in X_0 and Y_0 respectively.
By assumption, τ and τ' are only different at u.
We can view the coupling of the update on Λ as consisting of three steps:
* Generate two configurations σ_1, σ_2 ∈Ω_B from μ_B^τ and μ_B^τ' using the optimal coupling of the two distributions;
* Independently generate two configurations σ_3, σ_4 ∈Ω_Λ∖ B from μ_Λ∖ B^τ∪σ_1 and μ_Λ∖ B^τ' ∪σ_2;
* Let X_1(u) = σ_1(u) and Y_1(u) = σ_2(u) if u∈ B,
and X_1(u) = σ_3(u) and Y_1(u) = σ_4(u) if u∈Λ∖ B.
Clearly, X_1(Λ) ∼μ^τ_Λ and Y_1(Λ) ∼μ^τ'_Λ,
so the coupling is valid.
By (<ref>),
there exists a coupling used for the first step such that
Pr[σ_1 ≠ σ_2] = ‖ μ^τ_B - μ^τ'_B ‖_TV.
Moreover, SSM implies that there exist constants γ, α >0 such that
‖ μ_B^τ - μ_B^τ' ‖_TV ≤ γ·exp(-α· dist(u,B)) ≤ γ· e^{-α r}.
Also, |Λ| ≤ (2L)^d and |Λ∖ B|≤ (2r)^d.
Put together, we have
𝔼[d(X_1, Y_1) | X_0, Y_0, u∈∂Λ]
≤ 1+|Λ∖ B| + |Λ|·[σ_1 ≠σ_2]
≤ 1 + (2r)^d + (2L)^d·γ· e^-α r.
Let N:=|ℬ|.
Therefore, by noting that Pr[u∈Λ] ≥ L^d/N we obtain
𝔼[d(X_1, Y_1) | X_0, Y_0] =
𝔼[d(X_1, Y_1) | X_0, Y_0, u∈∂Λ] · Pr[u∈∂Λ]
+ 𝔼[d(X_1, Y_1) | X_0, Y_0, u∈Λ] · Pr[u∈Λ]
+ 𝔼[d(X_1, Y_1) | X_0, Y_0, u∉ (Λ∪∂Λ)] · Pr[u∉ (Λ∪∂Λ)]
≤ 1 + Pr[u∈∂Λ] · [ (2r)^d + (2L)^d·γ· e^{-α r} ]
- Pr[u∈Λ]
≤ 1 + (2d·(2L)^{d-1}/N) · [ (2r)^d + (2L)^d·γ· e^{-α r} ] - L^d/N
= 1 + (L^{d-1}/N)·[ 2^d d·( √(L/d) + (2L)^d·γ/exp(α·√(L/d)) ) - 2L ].
Recall that N = O(n).
By choosing L=L(d,α, γ) sufficiently large, we obtain
𝔼[d(X_1, Y_1) | X_0, Y_0]
≤ 1 - L^d-1/N
= 1 - Ω(1/N) = 1 - Ω(1/n).
Since the blocks have size at most (2L)^d and each vertex is contained in at most (2L)^d blocks at any step,
D = Θ(n^{-1}) and M = O(1).
Thus, Lemma <ref> implies that μ is η-spectrally independent, where
η = Θ(n^{-1}) / ( 1-(1-Ω(n^{-1})) ) = O(1),
as desired.
§ GENERAL BLOCK DYNAMICS
In this section, we give an upper bound for the mixing time of the block dynamics of a totally-connected spin system on general graphs. In particular, we prove Theorem <ref> from the introduction.
We present next a more general form of entropy factorization.
In particular, KPF and UBF are special cases of it.
A Gibbs distribution μ is said to satisfy the general block factorization of entropy (GBF)
with constant C_GBF if for all functions
f : Ω→ℝ_≥0,
and
for all probability distributions α over the set of all subsets of V,
α_min · Ent_μ(f) ≤ C_GBF ∑_U ⊆ V α(U) 𝔼_τ∼μ_V ∖ U[ Ent^τ_U (f^τ) ],
where α_min := min_v∈ V ∑_U: v∈ U α(U).
The notion of GBF is closely related to the general block dynamics <cit.>.
Indeed, the following proposition shows that a bound for C_GBF yields a bound for the modified log-Sobolev constant of general block dynamics.
If the Gibbs distribution μ of a spin system is totally-connected and satisfies GBF with constant C_GBF,
then the general block dynamics P_ℬ,α w.r.t. (ℬ, α) satisfies relative entropy decay with rate at least α_min/C_GBF and satisfies a modified log-Sobolev inequality with constant ρ(P_ℬ,α) ≥ α_min/C_GBF.
The main theorem of this section is the following; Theorem <ref> from the introduction follows as a corollary of this result.
Let η>0, b>0, Δ≥ 3 and χ≥ 2.
Suppose G=(V,E) is an n-vertex graph of maximum degree Δ and chromatic number χ. Let μ be a Gibbs distribution of a totally-connected spin system on G.
Let ℬ := {B_1, …, B_K} be any collection of blocks such that V=∪_i B_i, and let α be a distribution over ℬ.
If μ is η-spectrally independent and b-marginally bounded,
then there exists a universal constant C>1 such that a general heat-bath block dynamics P_ℬ, α w.r.t. (ℬ, α) has
modified log-Sobolev constant:
ρ(P_ℬ, α) = Ω( α_min · b^{5+6κ} / ( χ· (C Δ log n )^κ· (loglog n)^{κ+1} ) ),
where κ = 1 +⌈2η/b⌉, and
T_mix(P_ℬ, α) = O( (χ/α_min) · b^{-(5+6κ)}· (C Δ log n)^κ (loglog n)^{κ+1}·log n ).
Theorem <ref> follows from the bounds for C_KPF in Theorem <ref> and the following lemma from <cit.> that relates k-partite factorization with the general block factorization.
Suppose the Gibbs distribution μ of a spin system on a graph G satisfies k-partite factorization of entropy with constant C_KPF.
Then μ satisfies GBF with constant k · C_KPF.
When Δ∈ [3, b^4 n/10e(4η + b^2)],
the lower bounds for the entropy decay rate and MLSI constant follow from Theorem <ref>, Lemma <ref> and Proposition <ref>, and by (<ref>) we obtain the desired upper bound for mixing time.
Now, suppose that Δ = Ω(b^4n).
As discussed in the proof of Theorem <ref>,
gap(P_ℬ) ≥ n^{-(2η +1)}, where P_ℬ is any block dynamics with respect to χ disjoint independent sets.
This implies that
gap(P_ℬ)·⟨ f, f⟩_μ ≤ ℰ_P_ℬ(f,f)
for every function f:Ω→ℝ.
By Lemma <ref> and Proposition <ref> (more precisely, the analogous inequalities for the variance functional), we obtain that
gap(P_ℬ, α) ≥
α_min / ( χ n^{2η + 1} )
= Ω( α_min / ( χ (b^{-4}Δ)^{2η + 1} ) ),
and the lower bound for the modified log-Sobolev constant and the mixing time follow from (<ref>) and (<ref>).
We also obtain the following corollary for the ferromagnetic Ising and Potts model.
Let δ∈(0,1) and Δ≥ 3.
For the Ising model with β∈[ (1-δ)β̅_u(Δ), (1-δ)β_u(Δ)]
on any graph G of maximum degree Δ and chromatic number χ,
or the ferromagnetic q-state Potts model with q≥ 2 and
0 < β≤2(1-δ)/Δ on the same graph,
T_mix(P_ℬ,α) = O( χ/α_min ) · Δ^{1+O(1/δ)}·( log n · loglog n )^{2+O(1/δ)}.
We have shown in the proof of Corollary <ref> that, for the ferromagnetic q-state Potts model
when β is such that
0 < β≤2(1-δ)/Δ, then b=O(1) and η = O(1/δ).
For the Ising model, we achieve the same bound by Proposition <ref>.
Now
κ = 1 +⌈2η/b⌉ = 1+O(1/δ),
and the mixing time bound follows from Theorem <ref>.
§ ADDITIONAL PROOFS
In this appendix, we prove (<ref>) in Theorem <ref>. The proof begins by extending the proof of Lemma 3.3 in <cit.>, stated here as Lemma <ref>.
Let θ∈ (0,1] and n ≥2/θ (4η/b^2 + 1).
Let G, μ, V_1, …, V_k be as in the assumption of Theorem <ref>.
Let S be a uniformly generated block of vertices of size ⌈θ n⌉,
and let S_1, …, S_m be the connected components of S.
Recall that
C_S(v) denotes the unique connected component S_i in S that contains v if such a component exists, otherwise set it to be the empty set.
Suppose further that for S_i ⊆ S,
Γ(S_i) takes the minimum value such that
the following inequality holds
for an arbitrary pinning τ∈Ω_S_i^c and any function g:Ω_S_i^τ→ℝ_≥0:
Ent^τ_S_i(g) ≤ Γ(S_i) ∑_j=1^k 𝔼_ξ∼μ_S_i ∖ V_j^τ[ Ent_V_j∩ S_i^ξ∪τ (g^ξ_S_i∩ V_j) ].
Then,
Ent_μ(f) ≤ (C_UBF/θ) ∑_j=1^k 𝔼_τ∼μ[ Ent_V_j^τ (f) ] · G_j,
where
G_j := max_W⊂ V_j max_v∈ W 𝔼_S[ Γ(C_S(v)) | V_j ∩ S = W ]
and
the expectation 𝔼_S is taken over the uniform generation of S.
Take θ = b^2/(5eΔ^2).
Theorem <ref> implies that
C_UBF = (e/θ)^{⌈2η/b⌉} =
(5e^2Δ^2/b^2)^{⌈2η/b⌉}.
Given Lemma <ref>,
to show (<ref>) it remains to provide an upper bound
G_j for each j.
There are two main steps for proving this bound.
First, we upper bound G_j in terms of the size of connected components in S.
Under the assumptions of Theorem <ref>,
μ is η-spectrally independent and b-marginally bounded.
These properties by definition preserve under any pinning.
In particular,
for any S_i ⊆ S and an arbitrary pinning τ∈Ω_V∖ S_i,
μ_S_i^τ is still η-spectrally independent and b-marginally bounded.
Thus, the second part of Theorem <ref> implies that
for any g:Ω_S_i^τ→ℝ_≥0
and θ' < b^2/(12Δ(S_i)),
Ent_μ_S_i^τ(g) ≤ (18/b^5) (e/θ')^{1+⌈2η/b⌉} ∑_v∈ S_i 𝔼_γ∼μ^τ_S_i∖{v}[ Ent_v^τ∪γ (g^γ) ],
where Δ(S_i) denotes the maximum degree in the component S_i.
Note that
∑_v∈ S_i 𝔼_γ∼μ^τ_S_i∖{v}[ Ent_v^τ∪γ (g^γ) ] =
∑_j=1^k ∑_v∈ S_i ∩ V_j 𝔼_γ∼μ^τ_S_i∖{v}[ Ent_v^τ∪γ (g^γ) ]
≤ max_j |S_i ∩ V_j| ∑_j=1^k 𝔼_γ∼μ^τ_S_i∖ V_j[ Ent_V_j∩ S_i^τ∪γ(g^γ) ],
where the inequality holds by Corollary <ref>.
Take θ' = b^2/(5eΔ(S_i)) and
let κ = 1+⌈2η/b⌉ and b̃ := 18 · (5e^2)^κ b^{-5-2κ}.
We obtain
Ent_S_i^τ (g) ≤ b̃ max_j|V_j ∩ S_i| Δ(S_i)^κ
∑_j=1^k 𝔼_γ∼μ^τ_S_i∖ V_j[ Ent_V_j ∩ S_i^γ (g^γ) ].
By their definitions,
Γ(S_i) ≤b̃max_j|V_j ∩ S_i| Δ(S_i)^κ
≤b̃ |S_i| Δ(S_i)^κ,
and
G_j ≤ b̃ max_W⊂ V_j max_v∈ W 𝔼_S[ |C_S(v)| Δ(C_S(v))^κ | V_j ∩ S = W ]
≤ b̃ max_W⊂ V_j max_v∈ W 𝔼_S[ |C_S(v)|^{κ+1} | V_j ∩ S = W ].
The second part of this proof analyzes the conditional expectation term above on the right-hand side of (<ref>).
We fix v∈ V (and hence fix V_j) and fix a feasible W such that v∈ W ⊆ V_j and |W| ≤⌈θ n⌉.
We say a set T⊆ V∖ V_j is W-connected if T∪ W is connected in G, and we denote by
S'(v) the unique W-connected vertex-set in S that is adjacent to v, if such set exists, otherwise an empty set.
Clearly if S'(v) = ∅, then C_S(v) = {v}.
Suppose S'(v) ≠∅.
Observe that C_S(v) = S'(v) ∪ (C_S(v) ∩ W).
Since (C_S(v) ∩ W) must be adjacent to
S'(v) if S'(v) ≠∅,
|C_S(v) ∩ W| ≤Δ· |S'(v)|.
Hence, |C_S(v)|≤ (Δ + 1)|S'(v)|.
Furthermore, let G_2 := (V, E∪ E_2), where E_2 is the set of pairs of vertices that are of distance at most 2 in G.
Note that the degree of any vertex in G_2 is at most Δ^2.
Let C_S_2(v) be the unique connected component in G_2[S] that contains v.
Notice that the set S'(v) is always a subset of C_S_2(v), regardless of the specific set W we choose to fix.
Hence, for any x,
Pr_S[ |C_S(v)| ≥ x | V_j ∩ S = W ] ≤ Pr_S[ |S'(v)| ≥ x/(Δ+1) | V_j ∩ S = W ] ≤ Pr_S[ |C_S_2(v)| ≥ x/(Δ+1) ].
Now we apply Lemma <ref> to estimate the last probability.
For θ < 1/4eΔ^2,
Pr_S[ |C_S_2(v)| ≥ x/(Δ+1) ] ≤ (⌈θ n ⌉/n) ∑_k=0^∞ (2eΔ^2 θ)^{⌊x/(Δ+1)⌋ + k - 1}
≤ (1/(2eΔ^2)) (1/2)^{⌊x/(Δ+1)⌋}·∑_k=0^∞ (1/2)^k
≤ (1/Δ^2)· 2^{-x/(Δ+1)}.
Hence, we obtain
𝔼_S[ |C_S(v)|^{κ+1} | V_j ∩ S = W ]
≤ ∑_x=1^n x^{κ+1} Pr_S[ |C_S(v)| ≥ x | V_j ∩ S = W ]
≤ ∑_x=1^n x^{κ+1} Pr_S[ |C_S_2(v)| ≥ x/(Δ+1) ]
≤ ∑_x=1^n x^{κ+1}· (1/Δ^2)· 2^{-x/(Δ+1)}
≤ 4 Δ^{2(κ+1)}.
Therefore,
G_j ≤ 4b̃Δ^2(κ+1).
This bound on G_j together with (<ref>) and (<ref>) implies
C_KPF ≤ 4 b̃ Δ^{2(κ+1)}·(5e^2Δ^2/b^2)^κ
= 72· (5e^2)^{2κ} Δ^{4κ+2} / b^{5+4κ},
concluding the proof.
entry_id: http://arxiv.org/abs/2307.01778v1
published: 20230704153103
title: Physically Realizable Natural-Looking Clothing Textures Evade Person Detectors via 3D Modeling
authors: ["Zhanhao Hu", "Wenda Chu", "Xiaopei Zhu", "Hui Zhang", "Bo Zhang", "Xiaolin Hu"]
primary_category: cs.CV
categories: ["cs.CV", "cs.AI", "cs.CR"]
Physically Realizable Natural-Looking Clothing Textures Evade Person Detectors via 3D Modeling
Zhanhao Hu^1Equal contribution. Wenda Chu^2, 1[1] Xiaopei Zhu^3, 1 Hui Zhang^4 Bo Zhang^1 Xiaolin Hu^1,5,6Corresponding author.
^1Department of Computer Science and Technology, Tsinghua University, Beijing, China
^2Institute for Interdisciplinary Information Sciences, Tsinghua University, Beijing, China
^3School of Integrated Circuits, Tsinghua University, Beijing, China
^4Beijing Institute of Fashion Technology, Beijing, China
^5IDG/McGovern Institute for Brain Research, THBI, Tsinghua University, Beijing, China
^6Chinese Institute for Brain Research (CIBR), Beijing, China
{huzhanha17, chuwd19, zxp18}@mails.tsinghua.edu.cn
[email protected], {dcszb, xlhu}@mail.tsinghua.edu.cn
August 1, 2023
Recent works have proposed to craft adversarial clothes for evading person detectors, while they are either only effective at limited viewing angles or very conspicuous to humans. We aim to craft adversarial texture for clothes based on 3D modeling, an idea that has been used to craft rigid adversarial objects such as a 3D-printed turtle. Unlike rigid objects, humans and clothes are non-rigid, leading to difficulties in physical realization. In order to craft natural-looking adversarial clothes that can evade person detectors at multiple viewing angles, we propose adversarial camouflage textures (AdvCaT) that resemble one kind of the typical textures of daily clothes, camouflage textures. We leverage the Voronoi diagram and Gumbel-softmax trick to parameterize the camouflage textures and optimize the parameters via 3D modeling. Moreover, we propose an efficient augmentation pipeline on 3D meshes combining topologically plausible projection (TopoProj) and Thin Plate Spline (TPS) to narrow the gap between digital and real-world objects. We printed the developed 3D texture pieces on fabric materials and tailored them into T-shirts and trousers. Experiments show high attack success rates of these clothes against multiple detectors.
§ INTRODUCTION
Deep Neural Networks(DNNs) have been widely used in many real-world systems such as face recognition and object detection <cit.>. However, it is well known that DNNs are vulnerable to adversarial examples <cit.>. Adversarial examples can be crafted by adding small perturbations to the clean inputs, rendering the DNNs' outputs incorrect. Such vulnerabilities could result in severe safety problems when deploying DNN-based systems. This has become a hot research topic recently <cit.>.
Adversarial examples were first identified in the digital world. However, adversarial examples also exist in the physical world, posing more risks in real-world scenarios. Recently, many works <cit.> have designed physical adversarial examples to deceive DNNs in the real world. Among them, hiding persons<cit.> from DNN-based object detectors is especially challenging due to the difficulties of modeling non-rigid object surfaces (i.e., clothes). Most works <cit.> print adversarial patches on the front side of clothes to hide people from being detected. We call them patch-based adversarial examples. These patches are usually conspicuous to humans, making the clothes look strange and easily noticeable by human observers. Efforts have been put on making the adversarial patches more natural-looking <cit.>. However, these patch-based adversarial clothes can only attack object detectors at a narrow range of viewing angles (i.e., when the camera faces the front of the person). To attack the detector at a wider range of viewing angles, one may print the adversarial patches everywhere on the clothes, which would make the clothes unnatural-looking again. For example, a dog-head-like patch on the front of a T-shirt is natural, but putting this patch everywhere on the T-shirt would make the T-shirt look weird.
Another way to craft physical adversarial examples is to design the textures on the surface of the target objects <cit.>, i.e., crafting texture-base adversarial examples. Unlike patch-based adversarial examples, texture-based ones are usually adversarially effective at multiple viewing angles. They are mostly optimized via 3D modeling or using clone networks, and printed on the surface of rigid objects such as turtles<cit.> and cars<cit.>.
However, it is much harder to realize the 3D textures of non-rigid objects like humans and clothes in the physical world while maintaining their adversarial effectiveness, since there is a huge gap between a 3D human model and a real-world person.
To circumvent this difficulty, Hu et al. <cit.> propose to craft texture-based adversarial clothes by extending patches into textures with repetitive patterns, which does not require 3D modeling. However, their textures are very conspicuous to humans, and obtaining natural-looking textures can be difficult under the constraint of repetitive patterns.
In this paper, we propose a 3D modeling pipeline to produce natural-looking adversarial clothes that are physically realizable and can hide people at multiple viewing angles. Specifically, we craft adversarial camouflage texture (AdvCaT) patterns and apply them to clothes. We choose camouflage texture patterns mainly because they are typical texture patterns widely used in daily clothes, thereby making the clothes look more natural.
In order to make the texture patterns more generalizable when applied to deformed and unseen 3D models, we propose a novel 3D augmentation method combining topologically plausible projection (TopoProj) and thin plate spline (TPS) <cit.> for non-rigid objects such as clothes.
We optimized several AdvCaT patterns to attack widely used detection models, including YOLOv3 <cit.>, Faster RCNN <cit.>, and deformable DETR <cit.>, and applied the texture patterns on clothes in the physical world. See <ref> for the visualization of our adversarial clothes compared with others. Experiments showed that our adversarial clothes could evade different detectors at multiple viewing angles. A subjective test experiment indicated that the naturalness score of our adversarial clothes covered with AdvCaT is significantly higher than other adversarial clothes and close to daily clothes.
§ RELATED WORK
Early works <cit.> found that adversarial examples crafted by adding small digital adversarial perturbations on the clean inputs can mislead the DNNs. Some adversarial examples can also be crafted in the physical world to attack different DNN models, including image classification models <cit.> and detection models <cit.>. Among these works, patch-based and texture-based attacks are typical ways to craft physical adversarial examples.
Patch-based attacks <cit.> usually optimize patches and put them on the target objects, and therefore can only work at a narrow range of viewing angles. These works produce different adversarial objects, including glasses frames <cit.>, road signs <cit.>, cars <cit.> and clothes <cit.>. Among them, hiding persons from object detectors is especially challenging since the adversarial patches on the clothes can be heavily deformed due to their non-rigidity <cit.>. On the other hand, these adversarial patches are usually conspicuous to humans. To this end, Duan et al. <cit.> and Wang et al. <cit.> introduce additional losses to make the adversarial patches less conspicuous. Hu et al. <cit.> propose to produce more natural-looking patches with GANs <cit.>.
Texture-based attacks <cit.>, on the other hand, optimize the textures on the surface of the target objects to craft physical adversarial examples. Covered with adversarial textures, the object usually can deceive DNNs at multiple viewing angles. These works mainly use 3D modeling or clone networks to optimize textures for rigid objects. Athalye et al. <cit.> introduce the Expectation over Transformation (EoT) method and produce an adversarial 3D-printed turtle. Zhang et al. <cit.> and Wang et al. <cit.> design vehicle camouflage for multi-view adversarial attacks. Hu et al. <cit.> propose adversarial textures with repetitive structures for non-rigid clothes.
§ ADVERSARIAL CAMOUFLAGE TEXTURE PATTERNS GENERATION
In this section, we present the pipeline of generating adversarial camouflage texture (AdvCaT) that can be applied to clothes. As shown in <ref>, we adopt 3D meshes to model humans and clothes and define the surface of the clothes according to their 2D texture maps with UV coordinates. We propose two critical techniques to optimize adversarial camouflage texture clothes. The first is to parameterize the camouflage textures on the 2D texture maps with Voronoi diagram <cit.> and Gumbel softmax trick <cit.>. The second is to apply a realistic deformation on the 3D meshes with the topologically plausible projection (TopoProj).
We render the foreground photos of a 3D person wearing a T-shirt and a trouser using a differential renderer <cit.>. The foreground photos are synthesized with background images sampled from a scene dataset. Finally, we feed the synthesized photos into the victim detector and minimize an adversarial loss to optimize the parameters of camouflage patterns.
In what follows, we first introduce the differential generation of AdvCaT. Next, we present the novel 3D deformation which can be used to augment the meshes during training to boost the generalizability of the optimized textures. Finally, we elaborate the loss functions for optimization.
§.§ Differentiable Generation of Camouflage Textures
Camouflage patterns are originally designed for concealing people in the wild and have now become typical textures of ordinary clothes. There are a few kinds of common camouflage patterns. Among them, we choose to imitate a specific type called digital camouflage patterns that consists of small rectangular pixels. As shown in <ref>a, digital camouflage patterns are typically irregular in shape and consist of a limited number of colors.
We noticed that the pixels of the camouflage patterns are locally
aggregated as clusters, each of which approximately covers
a polygon region. See <ref>b for illustration. Inspired by the polygon generation ability of Voronoi diagram <cit.>, we use a soft version of Voronoi diagram to generate the cluster regions of the
camouflage pixels.
Polygon generation with Voronoi diagram.
A Voronoi diagram is a partition of a plane into multiple regions <cit.>. Each region is controlled by a point, consisting of all the pixels closer to the corresponding control point than to any other point (see <ref>c). In this way, the locations and shapes of the polygons can be parameterized by the coordinates of the control points.
However, the locations and shapes of the polygons are not differentiable with respect to the coordinates of the control points if we directly apply this rule. Therefore, we define a soft version of Voronoi diagram by introducing probabilities for each pixel. Suppose the texture map only consists of several discrete colors in a color set 𝒞= {c_i=(R_i,G_i,B_i)| i=1,…, N_C}. Then, N_P independent control points are assigned to each color, with coordinates {b_ij∈ℝ^2, i=1,2,…,N_C, j= 1,2,…, N_P}. For each pixel with coordinates x on the texture map, we assign a discrete distribution 𝒫^(x) to describe its probability of coloring with {c_i}:
p^(x)_k = w^(x)_k/∑_i=1^N_C w^(x)_i, k = 1,…, N_C,
w^(x)_i = ∑_j=1^N_Pexp(-x - b_i j_2/α),
where p^(x)_k is the sampling probability of color c_k. According to <ref>, the probability of a pixel x colored by c_i increases as it gets closer to a control point b_ij. The parameter α is the smoothing radius of the Voronoi diagram. When α approaches zero, the summation in <ref> will be dominated by the closest control point to x, therefore the color of x will be deterministic, which resembles the original hard version of Voronoi diagram. In practice, we define a probability map 𝒫 with size N_C× H× W for all the pixels on the texture map. We further smooth the probability map 𝒫 by a uniform smoothing kernel 𝒮 = 1/m^2 1_m× m of size m× m. The smoothed probability map is then computed by a convolutional operation: 𝒫^' = 𝒫 * 𝒮.
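As a concrete illustration of the soft Voronoi probability map defined by the two equations above and the smoothing step, here is a minimal NumPy sketch. It is not the authors' implementation: the array shapes, the default parameters, the use of SciPy's uniform filter for the m×m kernel 𝒮, and the renormalization after smoothing are our own assumptions.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def soft_voronoi_prob_map(control_points, H, W, alpha=10.0, m=3):
    """control_points: (N_C, N_P, 2) array of (x, y) control points per color.
    Returns a (N_C, H, W) probability map: per-pixel color probabilities from the
    soft Voronoi rule, smoothed by an m x m uniform kernel.  Illustrative sketch."""
    ys, xs = np.meshgrid(np.arange(H), np.arange(W), indexing="ij")
    pixels = np.stack([xs, ys], axis=-1).astype(np.float64)          # (H, W, 2)

    # w_i(x) = sum_j exp(-||x - b_ij|| / alpha) for every pixel x and color i.
    diffs = pixels[None, None] - control_points[:, :, None, None]    # (N_C, N_P, H, W, 2)
    dists = np.linalg.norm(diffs, axis=-1)                           # (N_C, N_P, H, W)
    w = np.exp(-dists / alpha).sum(axis=1)                           # (N_C, H, W)

    probs = w / w.sum(axis=0, keepdims=True)                         # normalize over colors
    probs = np.stack([uniform_filter(p, size=m, mode="nearest") for p in probs])
    return probs / probs.sum(axis=0, keepdims=True)

# Example: 3 colors, 5 control points each, on a 64 x 64 texture map.
rng = np.random.default_rng(0)
b = rng.uniform(0, 64, size=(3, 5, 2))
P = soft_voronoi_prob_map(b, H=64, W=64)
```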
Sampling discrete colors by Gumbel softmax. Following the procedure stated above, we assign each pixel x on the texture map a discrete distribution 𝒫^(x) guided by a Voronoi diagram, while each pixel must ultimately be assigned a specific color c^(x). However, directly sampling according to 𝒫^(x) is not differentiable with respect to p_i^(x). Alternatively, directly using the softmax function to blend all the colors cannot produce discrete colors. Therefore, we leverage the Gumbel-softmax <cit.> reparameterization trick to approximate the discrete sampling process.
Suppose g_i∼Gumbel (0,1) are i.i.d. random variables drawn from the standard Gumbel distribution. Given the discrete distribution 𝒫^(x), we can equivalently draw the color c^(x) by c_k, where
k = argmax_i(g_i + log p_i^(x)).
The equivalency is guaranteed<cit.> by
Pr(k=i) = p_i^(x).
Since the argmax operation is still non-differentiable, we instead use a softmax estimator <cit.> to approximate it, such that the color c^(x) is calculated by
c^(x) = ∑_i=1^N_C c_i · Softmax_i( (g_i + log p_i^(x)) / τ ),
where τ is the temperature coefficient. We have lim_τ→ 0c^(x) = c_k.
Finally, each pixel x on the texture map T will be colored with c^(x).
In order to enlarge the optimization space, we replace the purely random seed g_i with a partially trainable variable.
g_i = -log(-log(λ· u_i^(fix) + (1-λ) · u_i^(train))),
where u_i^(fix)∼Uniform (0,1) is fixed during the whole training process, and u_i^(train) is a trainable variable clipped in range [0, 1]. The hyper-parameter λ∈[0, 1] controls the ratio of the trainable variables. Putting together, we update the coordinates {b_ij} and the trainable variable {u_i^(train)} jointly during optimization.
§.§ Non-rigid Mesh Augmentations
According to Expectation over Transformation (EoT) <cit.>, one can improve the robustness and the generalizability of the physical adversarial examples by applying multiple digital transformations that simulate physical transformations as augmentations during optimization. In order to efficiently simulate the physical warps and movements of the clothes, we apply two augmentations on 3D meshes before applying regular 2D augmentations on the final images. The first augmentation aims to warp the texture map of the clothes meshes based on topologically plausible projections (TopoProj). The second augmentation is applied on the mesh vertices' coordinates by 3D Thin Plate Spline (TPS) <cit.>.
Texture warping based on topologically plausible projection.
We first obtain the texture maps and UV coordinates of the clothes by Clo3D software[https://www.clo3d.com/]. The clothes on the 3D person models are created by pieces of flattened cloth identical to the texture maps. See (<ref>a) for an example of a T-shirt mesh's texture map. The local distances (within triangular elements) of the mesh vertices according to 3D coordinates are thus consistent with their local distance on the 2D texture map. Therefore, we call the texture map a geometrically plausible projection (GeoProj). According to this local-distance-preserving property, we can produce the final clothes similar to the 3D simulated ones (<ref>c) by printing the texture maps on fabric materials in the real world.
It is challenging to simulate the physical transformations merely by warping on the texture map, since the warping may result in transformations that are far away from the physical ones. As an example, <ref>d shows a rendered image after applying a mild shear strain along the vertical axis in <ref>a. Many pixels in the image appear in the wrong place, e.g., the black background pixels appear on the front side of the clothes, and orange backside pixels appear on the sleeves. The reason is that the coordinates of the points on the texture map cannot reflect their topological relations in the 3D mesh. For example, the bottom-left corner of the T-shirt's front side should be connected to the bottom-right corner of the T-shirt's backside, while the corresponding pixels are far away on the texture map.
To address this problem, we propose a novel warping technique based on the TopoProj (<ref>b), which resembles the physical transformation of the clothes (<ref>e). A TopoProj is a projection of the mesh vertices that preserves the topological relations between vertices, which allows the pixels to appear at reasonable places after the warping. See Supplementary Material for the generation process of the TopoProj. However, we cannot simply replace the GeoProj with the TopoProj, since it brings difficulties in physical realization: the local distances will no longer be consistent with those of the 3D meshes, i.e., we cannot print such patterns and tailor them to produce the final clothes. Moreover, the inconsistency of the local distances will result in extremely uneven resolution of the textures.
Therefore, we leverage both GeoProj and TopoProj when applying the warping.
During the original rendering <cit.>, each pixel of the final image corresponds to a certain light path that passes through the camera. The light path may have single or multiple intersections with some triangle elements of the 3D mesh, yet we only consider the closest intersection to the camera. The barycentric coordinate of the intersection in the triangle elements thus can be calculated. Since each vertex of the triangle elements has its correspondent on the texture map, one can calculate the correspondent of the intersection point on the texture map according to its barycentric coordinate. The rendered color of the pixel thus can be calculated according to the position of the intersection point on the texture map.
In order to assign new colors to the pixels of the warped image, we apply additional projections on the coordinates of the intersection points during the rendering. <ref> illustrates the warping pipeline. As mentioned, GeoProj and TopoProj are two projections of all the vertices in the 2D plane for a 3D mesh.
For a point in a triangle element, we define a correspondent in each projection, whose position is determined by its barycentric coordinate. The barycentric coordinate is calculated via the original rendering, which is the same in GeoProj and TopoProj.
Specifically, we describe the warping process in five steps: (1) given an intersection point A (the red cross on the left piece in <ref>) on the GeoProj with its barycentric coordinates; (2) find its correspondent B on TopoProj based on the barycentric coordinates; (3) warp the corresponding point by 2D Thin Plate Spline (TPS) <cit.> method and get point C; (4) compute the new barycentric coordinates for the warped point C on TopoProj (may be in a new triangle element); (5) find its correspondent D on GeoProj according to the new barycentric coordinates, and compute the color of point D by interpolating the texture map. The process is applied on all the pixels of the image.
The TPS warping in step (3) depends on a set of control points (See Supplementary Material for the details). We uniformly sample the polar coordinates of each control point with a range of [-ϵ_r, ϵ_r] and [-ϵ_t, ϵ_t] for the radius and angle respectively.
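Below is a minimal sketch of this random TPS perturbation of control points. It is not the authors' implementation: SciPy's RBFInterpolator with a thin-plate-spline kernel stands in for the TPS solver, and the interpretation of ϵ_r as a relative radius perturbation and ϵ_t as an angle perturbation in degrees is our own assumption.

```python
import numpy as np
from scipy.interpolate import RBFInterpolator

def random_tps_warp(points, ctrl_src, eps_r=0.1, eps_t=50.0, rng=None):
    """Perturb TPS control points in polar coordinates (radius within +-eps_r relative,
    angle within +-eps_t degrees) and warp `points` (M, 2) with a thin-plate spline
    fitted between the original and perturbed control points.  Illustrative sketch."""
    rng = rng or np.random.default_rng()
    r = np.linalg.norm(ctrl_src, axis=1)
    theta = np.arctan2(ctrl_src[:, 1], ctrl_src[:, 0])
    r_new = r * (1.0 + rng.uniform(-eps_r, eps_r, size=r.shape))
    theta_new = theta + np.deg2rad(rng.uniform(-eps_t, eps_t, size=theta.shape))
    ctrl_dst = np.stack([r_new * np.cos(theta_new), r_new * np.sin(theta_new)], axis=1)
    warp = RBFInterpolator(ctrl_src, ctrl_dst, kernel="thin_plate_spline")
    return warp(points)

# Example: warp a 10 x 10 grid of points with 6 random control points around the origin.
rng = np.random.default_rng(0)
ctrl = rng.uniform(-1.0, 1.0, size=(6, 2))
grid = np.stack(np.meshgrid(np.linspace(-1, 1, 10), np.linspace(-1, 1, 10)), -1).reshape(-1, 2)
warped = random_tps_warp(grid, ctrl, rng=rng)
```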
Vertex augmentation by 3D TPS. We also applied augmentation directly on the 3D vertex coordinates of the meshes by 3D TPS <cit.>. The vertex coordinates are perturbed according to a set of control points. We uniformly perturb the control points in range [-ϵ_TPS, ϵ_TPS]. See Supplementary Material for the visualization.
Together with the texture warping, we apply mesh augmentations during optimization to reduce the gap between the 3D meshes and the real-world ones.
Other augmentation. Since the colors will change when they are printed on fabric materials, we calibrate the digital color on the texture maps to the physical color following <cit.>. See Supplementary Material for the details. During 3D rendering, we sample the viewing angles of the camera adaptively according to the mean confidence score of the target bounding boxes, where the angles with higher scores are more likely to be sampled. We also choose the simulated lights from ambient lights, directional lights, and point lights uniformly at random. Moreover, we apply other image augmentations on the rendered images following previous works <cit.>, such as randomizing the scales, positions, contrast and brightness.
§.§ Adversarial Loss Function
In this section, we present the objective functions for attacking detectors.
Detection loss. Object detectors predict bounding boxes with confidence scores. Since our goal is to evade the detectors from detecting humans, we minimize the confidence score of the person class in the box which has the maximum Intersection over Union (IoU) score with the ground truth. For each input x, suppose that the victim detector 𝒟 outputs a set of bounding boxes b_i^(x), each with a confidence Conf_i^(x). We define the detection loss as
ℒ_det = ∑_xConf_i^*^(x),
i^* = argmax_i IoU(gt^(x), b_i^(x)),
where gt^(x) stands for the ground truth bounding box of the foreground person on the input image x.
Concentration loss for camouflage texture.
In order to increase the stability of the camouflage texture generation, we prevent the polygons from being too small by introducing a concentration loss that encourages control points to move away from each other:
ℒ_con = ∑_j=1^N_C∑_1 ≤ k_1 < k_2 ≤ N_Pexp(-b_jk_1 - b_jk_2^2/σ^2),
where σ is a constant.
The total adversarial loss for minimization is
ℒ = ℒ_det + α_conℒ_con,
where α_con is the weight between the two losses.
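For concreteness, the following PyTorch sketch combines the detection loss and the concentration loss as in the total loss above. It is an illustration, not the authors' code: the data layout of the detector outputs, the use of torchvision's box_iou helper, and the defaults are our own assumptions.

```python
import torch
from torchvision.ops import box_iou

def adversarial_loss(det_confs, det_boxes, gt_boxes, ctrl_points, sigma=0.1, alpha_con=1.0):
    """det_confs: list of (B_i,) person-confidence tensors per image;
    det_boxes: list of (B_i, 4) xyxy box tensors; gt_boxes: (N, 4) ground-truth boxes;
    ctrl_points: (N_C, N_P, 2) Voronoi control points.  Illustrative sketch."""
    l_det = 0.0
    for confs, boxes, gt in zip(det_confs, det_boxes, gt_boxes):
        ious = box_iou(boxes, gt.unsqueeze(0)).squeeze(1)   # IoU of each box with the GT box
        l_det = l_det + confs[torch.argmax(ious)]           # confidence of the max-IoU box

    # Concentration loss: push control points of the same color away from each other.
    l_con = 0.0
    for pts in ctrl_points:                                  # pts: (N_P, 2)
        d2 = torch.cdist(pts, pts).pow(2)
        mask = torch.triu(torch.ones_like(d2, dtype=torch.bool), diagonal=1)
        l_con = l_con + torch.exp(-d2[mask] / sigma ** 2).sum()

    return l_det + alpha_con * l_con
```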
§ EXPERIMENTS
§.§ Experimental Setup
Subjects.
Three actors (age mean: 26.3; age range: 25-27; height range: 175-178 cm) are recruited to collect physical test data. We also recruited 93 subjects (age mean: 30.2; age range: 18-57) to evaluate the naturalness score of different clothes. The recruitment and study procedures were approved by the Department of Psychology Ethics Committee, Tsinghua University, Beijing, China.
Baseline methods.
According to the previous work <cit.>, the Attack Success Rates (ASRs) of the adversarial clothes printed with isolated patches will drop catastrophically when the viewing angle changes. Printing repetitively tiled patches on the clothes is helpful to prevent the ASRs from dropping. Therefore, we tiled the patches produced by patch-based attacks for fair comparison. We mainly evaluated three patch-based attacks AdvPatch<cit.>, AdvTshirt<cit.>, NatPatch<cit.>, and a texture-based attack, AdvTexture<cit.>. We also evaluate RandColor, a random texture with random colors in a lattice, and RandCaT, a random camouflage texture pattern.
See Supplementary Material for the datasets, target detectors, evaluation metric and the implementation details.
§.§ Naturalness Score by Subjective Evaluation
Following Hu et al. <cit.>, we conducted a subjective evaluation on the naturalness score of the adversarial clothes. For a fair comparison, we applied different patterns on an identical garment model using FAB3D[https://tri3d.in/]. We showed eight pictures of different T-shirts (<ref>) aggregated on a scrollable page in random orders to the subjects and required them to give a naturalness score for each picture using a 7-level Likert scale (1 = not natural at all to 7 = very natural).
As shown in <ref>, the naturalness score of AdvCaT targeting Faster RCNN (4.89) is significantly higher than those of the other five adversarial patterns (p<0.001, Student's t-test), and is close to the scores of the control group with common textures (the second column, 6.08, and the third column, 5.05, in the table).
§.§ Digital World Results
Evaluation with different IoU thresholds.
We noticed that the IoU threshold τ_IoU during evaluation is usually set to 0.5 in previous works <cit.>, since they mainly evaluate their adversarial patches or textures on datasets that contain multiple people, e.g., the Inria dataset <cit.>. On such datasets, a relatively high threshold can prevent confusing the boxes of overlapping objects. However, a high threshold could result in an overestimation of the attack's effectiveness. The target detector may output a considerably large bounding box with an IoU score smaller than the threshold, which still provides strong evidence of having detected the person. See Supplementary Material for examples. Therefore, we evaluated the ASRs with different IoU thresholds: 0.01, 0.1, 0.3, and 0.5. See <ref> for the ASRs of different methods targeting Faster RCNN. According to <ref>, the ASR of AdvTexture is slightly higher than that of our method at an IoU threshold of 0.5, but it decreases significantly as the IoU threshold decreases. In contrast, the ASR of our method, AdvCaT, is consistently high across different IoU thresholds, even when the threshold equals 0.01, which indicates the strong adversarial effectiveness of AdvCaT. Since an IoU threshold of 0.01 is so small that it may introduce undesirable noise, we report the ASRs with an IoU threshold of 0.1 in the rest of our paper unless explicitly mentioned.
Ablation study of augmentation strategies. In order to investigate the effect of different 3D model augmentation strategies on the generalizability of the optimized patterns, we optimized four AdvCaT patterns targeting Faster RCNN with different 3D model augmentation strategies.
The first pattern used no augmentation on 3D meshes, denoted by NoAug. The second pattern used 3D TPS augmentation, denoted by AugTPS. The third pattern used topologically plausible projection, denoted by AugTopo. The final one, AugTPS+AugTopo, incorporated both augmentations. Moreover, we used four different intensities of 3D mesh deformation during evaluation, denoted by None (no deformation), Mild ((ϵ_r, ϵ_t, ϵ_TPS)=(0.1, 50, 0.15)), Middle ((ϵ_r, ϵ_t, ϵ_TPS)=(0.1, 65, 0.22)), and Huge ((ϵ_r, ϵ_t, ϵ_TPS)=(0.1, 80, 0.3)), respectively. Note that the hyper-parameters used during training were the same as those of Mild. See <ref> for the ASRs of the patterns with different augmentation strategies and deformation intensities. The ASRs of the patterns applied on a new 3D person without any 3D deformations are also plotted in the figure.
As shown in <ref>, the ASR of NoAug drops significantly as the deformation intensity increases, which implies a catastrophic drop of its adversarial effectiveness in the real world. Using 3D TPS alone is better, but it still suffers a considerable drop under huge deformation intensity. The ASRs of AugTopo, which only uses TopoProj, remain high even when the deformation intensity is huge. Combining 3D TPS with TopoProj is slightly better than using TopoProj alone. The ASRs of the different strategies evaluated on a new, unseen 3D model are consistent with the previous observations, which indicates the good generalization ability of the pattern using both augmentations.
Attacking different detectors. We optimized camouflage patterns to attack different detectors including YOLOv3 <cit.>, FasterRCNN <cit.> and deformable DETR <cit.> and show their ASRs in <ref>. We also used the trained patterns to attack other detectors to study their transferability. The ASR of the AdvCaT trained to attack Faster RCNN was relatively high when targeting MaskRCNN (92.22%) and Deformable DETR (65.11%), but relatively low when targeting YOLOv3 (23.26%). See Supplementary Material for the visualization of these AdvCaTs and the full transfer study.
Parameter sensitivity. We varied the value of λ in <ref> during optimization. When λ increased, the ASR increased, while the AdvCaT became less like a camouflage pattern. In addition, we optimized AdvCaT with different styles by using various color combinations c_i, all of which achieve high ASRs targeting Faster RCNN. See Supplementary Material for details of these experiments.
§.§ Physical World Results
We produced three garments covered with different AdvCaT patterns in the physical world. We cropped the different parts of the clothes from the texture map and printed them on fabric materials. These pieces were then tailored into wearable adversarial clothes. In <ref> we visualized the clothes and presented their ASRs targeting Faster RCNN, where Random denotes the clothes covered with random camouflage textures; AdvCaT w/o aug and AdvCaT w aug denote the clothes covered with AdvCaT optimized without and with mesh augmentation (i.e., TopoProj and 3D TPS), respectively. The ASR of AdvCaT w aug (85.94%) was significantly higher than those of AdvCaT w/o aug (19.27%) and Random (0.00%).
<ref> shows the ASRs at different viewing angles, indicating the strong attack ability of the AdvCaT clothes. In addition, we found that our designed clothes were relatively robust to environmental changes. When the distance between the actor and the camera was less than 4 m, the ASR stayed high (above 61.5%). See Supplementary Material for details of these experiments. We also provide a video demo in Supplementary Video.
§ CONCLUSION
We proposed to optimize clothes textures via 3D modeling to produce natural-looking adversarial clothes that are adversarially effective at multiple viewing angles.
The adversarial T-shirt with AdvCaT patterns has a high naturalness score in a subjective test evaluated by a group of subjects. Experimental results indicate that our adversarial clothes can hide people from detectors at multiple viewing angles with high ASRs in the digital and physical world.
Limitations
Though the AdvCaT patterns sometimes have a relatively high ASR targeting unseen detectors, their transferability is not universal, since the ASRs targeting certain detectors are not very good. One could use model ensembling to improve their transferability.
§ ACKNOWLEDGEMENT
This work was supported by the National Natural Science Foundation of China (Nos. U19B2034, 62061136001, 61836014).
§ SUPPLEMENTARY METHODS
§.§ Topologically Plausible Projection Generation
See <ref>a for the visualization of the vertices of a T-shirt mesh, and <ref>b-c for the visualization of a GeoProj and a TopoProj. We generate the topologically plausible projection (TopoProj) by a process named Zipping. When the 3D vertex points are mapped to the TopoProj, the connectivity of all point pairs remains unchanged. In the GeoProj, however, only the connectivity of the inner points within each piece is preserved, while some boundary point pairs that are connected in the 3D mesh end up separated. Therefore, we use Zipping to generate the TopoProj by joining these point pairs starting from the GeoProj. See <ref>d for an example.
Zipping is a continuous transformation from the GeoProj to the TopoProj inspired by the physical mechanism of elastic stress. Intuitively, Zipping slowly pulls the point pairs (red and black points in <ref>d) toward each other until they overlap. The triangle elements are deformed as evenly as possible, and their chiralities must not be flipped during Zipping. Specifically, we update the coordinates of the points based on carefully designed forces. Each force is an inner force exerted by a triangle (acting like a spring) that pushes the point back to restore the triangle's original shape. It prevents the shape of the entire frame from changing too much during Zipping. As illustrated in <ref>, the force component F_i,k on point i from triangle element k is proportional to the relative displacement Δ_i,k^all of this point in the triangle element. Since each point belongs to multiple triangle elements, the resultant force is
F_i = ∑_k F_i,k = ∑_k Δ_i,k^all / ‖v_i,k^perp‖_2,
v_i,k^perp = ( (v_i,k^b, v_i,k^c) v_i,k^a - (v_i,k^a, v_i,k^c) v_i,k^b ) / (v_i,k^c, v_i,k^c),
where (·, ·) is the inner product and v_i,k^a, v_i,k^b and v_i,k^c are the edge vectors of the triangle element illustrated in <ref>. v_i,k^perp is the altitude from the edge v_i,k^c to the point i, also illustrated in <ref>. We divide the forces by the norm of the altitude ‖v_i,k^perp‖_2 in <ref> to prevent the triangle element from flipping.
Moreover, the points in each point pair (colored red and black in <ref>d) are subjected to attractive forces that point to each other. We then iteratively update the coordinates of each point according to the resultant force. Suppose that the resultant force of the point i at step t is F_i^(t), and its current coordinate is x_i^(t). We update the coordinate by
x_i^(t + 1) = x_i^(t) + β^(t) * F_i^(t),
where β^(t) is an adaptive time interval that prevent the chirality of the triangles from being flipped. Specifically, we have
β^(t) = γmin(𝒮^(t)_β∪{β_max}),
𝒮_β ^(t) ={β|β>0, and ∃ i,k, s.t. v_i,k^perp = 0 when coordinates x_i = x_i^(t) + β * F_i^(t)},
where we used γ=0.5 and β_max=0.1 in practice.
With a proper initialization of the points' coordinates (by bending the pieces from GeoProj), we generated the TopoProj for T-shirt and trouser meshes. See the visualization in <ref>.
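A minimal numpy sketch of the Zipping iteration is given below. It is only illustrative: the exact altitude-based force of the equations above is replaced by simple edge-spring forces, and the names `verts`, `edges`, `pairs`, and `rest_len` are assumed inputs (GeoProj vertex coordinates, triangle edges with their rest lengths, and the boundary point pairs to merge).

```python
import numpy as np

def zipping(verts, edges, pairs, rest_len, steps=2000, beta_max=0.1, gamma=0.5, k_pull=1.0):
    """Iteratively pull boundary point pairs together while edge-spring forces
    keep the triangulation from deforming too much (simplified Zipping sketch)."""
    x = verts.copy()
    for _ in range(steps):
        f = np.zeros_like(x)
        # Spring forces on triangle edges resist distortion (a stand-in for the
        # altitude-based restoring force F_{i,k} in the text).
        d = x[edges[:, 1]] - x[edges[:, 0]]
        length = np.linalg.norm(d, axis=1, keepdims=True) + 1e-12
        spring = (length - rest_len[:, None]) * d / length
        np.add.at(f, edges[:, 0], spring)
        np.add.at(f, edges[:, 1], -spring)
        # Attractive forces pulling each boundary point pair together.
        gap = x[pairs[:, 1]] - x[pairs[:, 0]]
        np.add.at(f, pairs[:, 0], k_pull * gap)
        np.add.at(f, pairs[:, 1], -k_pull * gap)
        # Conservative step, loosely mimicking beta^(t) = gamma * min(S_beta, beta_max).
        x += gamma * beta_max * f
        if np.linalg.norm(gap, axis=1).max() < 1e-3:  # point pairs have overlapped
            break
    return x
```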
§.§ TPS Warping on TopoProj
We initialize a set of control points and uniformly perturb the polar coordinates of each control point. The warped coordinates of all the other points are calculated according to the control points, as shown in <ref>.
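A rough sketch of this warping step is shown below, assuming 2D coordinates on the TopoProj. The polar-coordinate perturbation range and the use of scipy's thin-plate-spline interpolator are illustrative choices, not the paper's exact implementation.

```python
import numpy as np
from scipy.interpolate import RBFInterpolator

def tps_warp(points, ctrl, eps=0.15, rng=np.random.default_rng(0)):
    """Warp 2D points with a thin-plate spline fitted on uniformly perturbed control points.
    The perturbation is applied in polar coordinates (radius, angle) around the centroid."""
    center = ctrl.mean(axis=0)
    rel = ctrl - center
    r = np.linalg.norm(rel, axis=1)
    theta = np.arctan2(rel[:, 1], rel[:, 0])
    r_new = r * (1.0 + rng.uniform(-eps, eps, r.shape))
    theta_new = theta + rng.uniform(-eps, eps, theta.shape)
    ctrl_new = center + np.stack([r_new * np.cos(theta_new), r_new * np.sin(theta_new)], axis=1)
    warp = RBFInterpolator(ctrl, ctrl_new, kernel='thin_plate_spline')
    return warp(points)
```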
§.§ Visualization of 3D TPS
An example of the perturbed mesh is shown in <ref>.
§.§ Physical Color Calibration on the 3D Texture Map
As shown in <ref>a, we first generate a color palette with 9×9×9=729 different colors in the digital world. We then print it on a piece of cloth, take a picture of the cloth, and extract the corresponding colors. We use a polynomial regression model of degree six to fit the color transformation. Specifically, suppose the original digital color is x=(R, G, B) and the final physical color is y^*=(y_1^*, y_2^*, y_3^*)=(R^*, G^*, B^*). The regression model is
y_i = ∑_a_1,a_2,a_3w^i_a_1,a_2,a_3x_1^a_1x_2^a_2x_3^a_3, i = 1,2,3,
a_1 + a_2 + a_3≤d,
where a_1, a_2, a_3 and the degree d are non-negative integers. A linear regression model is then applied to fit the polynomial features to y^* with coefficients w^i_a_1,a_2,a_3. In order to choose an optimal degree d, we randomly divide the 729 color pairs into training and validation sets, each containing 50% of the pairs. We fit the model on the training set, and calculate the MSE loss on the validation set. As shown in <ref>b, the MSE loss is the smallest when the degree is around 6. Therefore, we choose d=6.
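The calibration can be reproduced with a short sketch like the one below; scikit-learn is used here only for illustration, and the array names `digital` and `printed` (the 729 palette colors before and after printing, as float RGB arrays) are assumptions.

```python
import numpy as np
from sklearn.preprocessing import PolynomialFeatures
from sklearn.linear_model import LinearRegression
from sklearn.metrics import mean_squared_error
from sklearn.model_selection import train_test_split

def fit_color_transform(digital, printed, degrees=range(1, 9)):
    """Fit the digital-to-physical color transform and pick the degree by validation MSE."""
    x_tr, x_va, y_tr, y_va = train_test_split(digital, printed, test_size=0.5, random_state=0)
    best = None
    for d in degrees:
        poly = PolynomialFeatures(degree=d)   # all monomials x1^a1 x2^a2 x3^a3 with a1+a2+a3 <= d
        reg = LinearRegression().fit(poly.fit_transform(x_tr), y_tr)
        mse = mean_squared_error(y_va, reg.predict(poly.transform(x_va)))
        if best is None or mse < best[0]:
            best = (mse, d, poly, reg)
    return best  # validation MSE, chosen degree (about 6 in the paper), feature map, regressor
```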
§ SUPPLEMENTARY RESULTS
§.§ Experimental Setup
Datasets and Target Detectors.
We collected 506 background images in total from Google Search for 3D rendering. See <ref> for some examples. We split them randomly into two sets for training and testing respectively, where the training set consists of 376 images and the test set consists of 130 images.
We attacked three different detectors, including YOLOv3 <cit.>, Faster RCNN <cit.>, and Deformable DETR <cit.>, under the white-box setting. We also attacked other detectors, including YOLOv2 <cit.>, Mask RCNN <cit.>, and DETR <cit.>, to evaluate the transferability of the adversarial camouflage textures.
Evaluation Metric. We extracted the bounding boxes from the output of the target detector on each input image and filtered out the boxes whose IoU scores with the ground-truth box were lower than a specified IoU threshold τ_IoU. The threshold was set to 0.1 in our paper except for Tab. 2 in the main text.
An image is regarded as an adversarial success example as long as the confidence scores of all the remaining boxes are lower than a confidence threshold τ_conf=0.5. The ASR is defined as the proportion of adversarial success examples among all the test images.
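This metric can be summarized by the following sketch, assuming boxes in [x1, y1, x2, y2] format and one ground-truth person box per image; all function and variable names are illustrative.

```python
import numpy as np

def iou(box, gt):
    """Intersection over union of two axis-aligned boxes [x1, y1, x2, y2]."""
    x1, y1 = np.maximum(box[:2], gt[:2])
    x2, y2 = np.minimum(box[2:], gt[2:])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    area = lambda b: (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area(box) + area(gt) - inter + 1e-12)

def is_adversarial_success(boxes, scores, gt_box, tau_iou=0.1, tau_conf=0.5):
    """True if every detection overlapping the person (IoU >= tau_iou) has score < tau_conf."""
    kept = [s for b, s in zip(boxes, scores)
            if iou(np.asarray(b, float), np.asarray(gt_box, float)) >= tau_iou]
    return all(s < tau_conf for s in kept)

def attack_success_rate(per_image_detections, gt_boxes):
    """ASR = fraction of test images that are adversarial successes."""
    flags = [is_adversarial_success(b, s, gt)
             for (b, s), gt in zip(per_image_detections, gt_boxes)]
    return float(np.mean(flags))
```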
Implementation Details.
We fixed the temperature τ in main text Eq. (5) to 0.3 during training and use discrete sampling (τ→0) during evaluation and physical evaluation. We set the parameter λ=0.7 in main text Eq. (6). During training, we randomly perturbed the 3D models with the hyper-parameters (ϵ_r,ϵ_t, ϵ_TPS) = (0.1, 50.0, 0.15).
We optimized the parameters for 600 epochs with Adam <cit.> optimizer with learning rate 0.001 for the coordinates b_ij in main text Eq. (2) and 0.01 for the trainable Gumbel seed u_i^train in main text Eq. (6).
For digital evaluation, we rendered the mesh as we did for training, while using backgrounds sampled from the test set. We averaged the ASR over 37 viewing angles ranging from -180^∘ to 180^∘. Note that the 3D person model exactly faces the simulated camera when the viewing angle equals 0^∘.
For physical evaluation, we asked three actors to wear the adversarial clothes and turn a circle slowly in front of a camera, which was fixed 1.55 m above the ground and 3.0 m away from the actor unless otherwise specified. For each actor and each adversarial garment, we recorded one video indoors and one outdoors. We extracted 32 frames from each video and therefore collected 32×3×2=192 frames for each garment. We labeled the ground truth manually and evaluated the ASRs on these frames as we did in the digital evaluation.
§.§ Scene Dataset and Synthesized Images
See <ref>a for examples of the background images in the scene dataset, and see <ref>b for the synthesized images with rendered 3D person meshes as the foreground images. The 3D person meshes were rendered at different viewing angles, and were stuck onto the background images with random scales and positions.
§.§ Visualization of the Bounding Boxes during Evaluation
See <ref> for the examples of the detection results of different patterns. Previous methods were more likely to output some bounding boxes within the area of the foreground image, which had small but non-negligible IoU scores with the ground truth boxes. See Sec. 4.3 in the main text for the ASRs evaluated with different IoU threshold.
§.§ Transfer Study in the Digital World
We optimized camouflage patterns against different detectors including YOLOv3 <cit.>, Faster RCNN <cit.>, and Deformable DETR <cit.>. <ref> shows the camouflage patterns against different detectors. See <ref> for the ASRs of the transfer attacks in the digital world. The ASRs usually drop when the patterns are transferred to attack unseen detectors. In some cases, the ASRs still had high values (source Faster RCNN & target Mask RCNN, 92.22%; source Deformable DETR & target Faster RCNN, 71.73%).
§.§ Parameter Sensitivity of λ
We optimized the AdvCaT patterns with different values of λ in Eq. (6) in the main text, which controls the percentage of the trainable variable in the Gumbel seed. As shown in <ref>, small λ resulted in poor adversarial effectiveness while large λ resulted in strong adversarial effectiveness. Meanwhile, the AdvCaT pattern with small λ was more like typical camouflage texture patterns, while the pattern with λ=1 looked somewhat strange. Therefore, we chose λ=0.7 as a trade-off between adversarial effectiveness and appearance.
§.§ Different Color Combinations of the AdvCaT Patterns
We used different color combinations to produce different styles of the AdvCaT patterns. See <ref> for the visualization. All of the AdvCaT patterns had high ASRs (>97%) targeting Faster RCNN in the digital world.
§.§ Attack in the Physical World under Different Physical Settings
We evaluated the ASRs of the AdvCaT clothes under different physical settings. See <ref> for the ASRs of the adversarial clothes in the indoor and outdoor scenes. See Sec. 4.1 in the main text for the collection of the physical test set. The ASRs of AdvCaT clothes were high (above 85.4%) both indoor and outdoor. In addition, <ref> shows the ASRs at different distances between the camera and actors. The ASRs stayed high (above 61.5%) when the distance was less than 4 m.
|
http://arxiv.org/abs/2307.00828v2
|
20230703081601
|
Model-Assisted Probabilistic Safe Adaptive Control With Meta-Bayesian Learning
|
[
"Shengbo Wang",
"Ke Li",
"Yin Yang",
"Yuting Cao",
"Tingwen Huang",
"Shiping Wen"
] |
eess.SY
|
[
"eess.SY",
"cs.LG",
"cs.SY",
"math.OC"
] |
1]Shengbo Wang
2]Ke Li
3]Yin Yang
4]Yuting Cao
5]Tingwen Huang
6]Shiping Wen
[1]College of Computer Science and Engineering, University of Electronic Science and Technology of China, Chengdu 611731, China
[2]Department of Computer Science, University of Exeter, EX4 4QF, Exeter, UK
[34]College of Science and Engineering, Hamad Bin Khalifa University, 5855, Doha, Qatar
[5]Science Program, Texas A & M University at Qatar, Doha 23874, Qatar
[6]Australian AI Institute, Faculty of Engineering and Information Technology, University of Technology Sydney, NSW 2007, Australia
[∗]Email: ,
,
Model-Assisted Probabilistic Safe Adaptive Control With Meta-Bayesian Learning[This manuscript is submitted for potential publication. Reviewers can use this version in peer review.]
[
=======================================================================================================================================================================================
Abstract: Breaking safety constraints in control systems can lead to potential risks, resulting in unexpected costs or catastrophic damage. Nevertheless, uncertainty is ubiquitous, even among similar tasks. In this paper, we develop a novel adaptive safe control framework that integrates meta learning, Bayesian models, and the control barrier function (CBF) method. Specifically, with the help of the CBF method, we learn the inherent and external uncertainties by a unified adaptive Bayesian linear regression (ABLR) model, which consists of a forward neural network (NN) and a Bayesian output layer. Meta learning techniques are leveraged to pre-train the NN weights and priors of the ABLR model using data collected from similar historical tasks. For a new control task, we refine the meta-learned models using a few samples, and introduce pessimistic confidence bounds into CBF constraints to ensure safe control. Moreover, we provide theoretical criteria to guarantee probabilistic safety during the control processes. To validate our approach, we conduct comparative experiments in various obstacle avoidance scenarios. The results demonstrate that our algorithm significantly improves the Bayesian model-based CBF method, and is capable of efficient safe exploration even with multiple uncertain constraints.
Keywords:
Safety-critical control, control barrier functions, meta learning, adaptive Bayesian linear regression, neural networks, safe exploration.
§ INTRODUCTION
Despite the existence of numerous designs, significant research efforts, and successful applications in the field of control systems, the development of a reliable and secure controller that combines robust theoretical foundations with exceptional performance continues to present a formidable challenge. This challenge has captured the attention of researchers from diverse fields, including robotics <cit.> and healthcare <cit.>, among others. In the context of control systems, safety is evaluated based on the system state. In this study, we focus on probabilistic safe control, wherein a safe controller is expected to prevent the system from entering hazardous states with an acceptable probability <cit.>. Due to the intricate nature of calculating the safe state space for a general dynamics-driven system, ensuring safety by designing or learning a safe controller is rather complex. Existing safe control strategies include model predictive control <cit.>, reachability analysis <cit.>, and control barrier function (CBF) method <cit.>. In our research, we build upon the CBF method, which ensures that the system state remains within safe regions by defining a forward invariant set. This set is a subset of the safe region and restricts the system state within its boundaries. Furthermore, we take into account the presence of uncertainty, which not only has a more significant impact on the system state than small disturbances <cit.>, but also lacks an analytical form <cit.>. Prior work has addressed this issue in <cit.> by introducing non-structured learning-based techniques. However, these methods, as we will discuss later, suffer from limitations in terms of real-time safe deployment efficiency and practicality in data-deficient online control tasks.
The learning-based safe control methods face a dilemma. To effectively model uncertainty, it is preferable to sample data randomly and independently from the control space. However, considering safety issues, driving an unknown system for data collection can lead to a high risk of unsafe behaviors. In addition, in the context of control system, data collection is typically based on a few state trajectories, which means the data is not independently distributed in most cases. This is exemplified in Fig. <ref>, where we aim to estimate the unsafe region by sampling data from a given safe trajectory. Previous works <cit.> have utilized Gaussian processes (GPs) <cit.> to recognize the safe set. However, this approach often leads to conservative estimations, as indicated by the large portion of the safe area labeled as 'unsafe' in the green covered region. To address this issue, researchers have explored the concept of safe exploration, which involves actively expanding system trajectories into unknown regions for better estimation <cit.>. However, we argue that on-line exploration introduces additional time costs in the current control task. Moreover, it relies on a more ideal control environment than the practical one, such as the smoothness of the safety measurements, to minimize the risk of unsafe behaviors. It also struggles to ensure safety effectively when uncertainty exists.
We employ the concept of meta learning to effectively address inherent uncertainties in the current control task by leveraging empirical knowledge learnt from historical similar tasks <cit.>. In the field of control systems, meta learning has been utilized for various purposes, such as modeling system dynamics <cit.>, conducting control design <cit.>, and adapting to new environments in the presence of uncertainties <cit.>. With regards to safety, meta learning can significantly enhance the estimation of unsafe regions, as depicted by the yellow covered region in Fig. <ref>. Despite its empirical effectiveness, there exists a theoretical gap that hinders the application of meta learning in facilitating safe control tasks <cit.>. In this work, we aim to present a novel safe control framework combining the CBF method and meta learning techniques. Moreover, we will explore its theoretical criteria to ensure probabilistic safety mathematically.
§.§ Related Works
§.§.§ CBF-based Adaptive Control Design
In order to ensure safety, it is inevitable to introduce more conservativeness into a controller when model uncertainty exists. Taking the worst-case robustness into account results in the most conservative controllers <cit.>. This method is not appealing to researchers because of its infeasibility for general cases <cit.>. To solve this problem in a more elegant way, it is necessary to further assume that the uncertainty conforms to specific forms.
For structured parametric uncertainty, i.e., the uncertainty only exists in some unknown parameters of a structured function, the ideas from the adaptive control design are adopted to obtain a better safe controller <cit.>. To achieve parameter identification simultaneously, an adaptive tuning law is developed for finite-time estimation <cit.>, which is further improved for a less conservative design in <cit.>.
For unstructured uncertainty as studied in this paper, accurate estimation or even point-wise safety may not be guaranteed. We roughly divide these works into two categories: (i) directly modelling the unknown dynamics, and (ii) merely reducing the uncertainty in CBF constraints. For the first case, (deep) neural networks (NNs) are developed to build effective learning frameworks empirically <cit.>. For rigorous theoretical analysis, Bayesian methods are more attractive for safety-critical applications <cit.>. A principled Bayesian dynamics learning framework under CBF constraints is presented in <cit.>. However, these methods do not scale well on high dimensional systems and real-time control tasks due to high computational costs. For the latter case, NN-based control scheme is studied in <cit.> and <cit.> for one- and high-order CBF methods respectively. The critical limitation of these works lies in the lack of theoretical analysis. In addition, the framework in <cit.> needs episodic training and cannot work in on-line control tasks. From a Bayesian perspective, GPs are introduced to model the unknown parts of CBF constraints with data collected in an event-triggered manner to reduce the computational complexity <cit.>. Similar to our method, the Bayesian linear regression (BLR) model is introduced to learn the unknown residual part of CBF constraints with point-wise guaranteed safety <cit.>. However, the basis functions used in <cit.> should be chosen from a set of class 𝒦_∞ functions, which is often hard to determine for accurate modeling and effective adaptation. Besides, all methods assume the unknown part is static, which is impractical in dynamic control environments.
§.§.§ Meta Learning for Safe Control
For dynamic control scenarios where the control environment is changing, e.g. time-varying disturbances <cit.> or switched external inputs <cit.>, the generalization ability of a controller is significant. In terms of CBF methods, the generalization ability can be enhanced by (i) the adaptive structure of CBFs considering dynamic environment <cit.>, and (ii) the dynamic parameter in the CBF constraints <cit.>. The former is achieved either automatically by machine learning approaches or manually by domain experts. The latter is related to parameter optimization, current methods including differential convex optimization techniques <cit.> and reinforcement learning <cit.>.
Nevertheless, by assuming perfect knowledge of environments, these methods can hardly work when uncertainty exists, which is an important issue to be addressed in this paper.
Meta-learning techniques are well known to be capable of modeling uncertainties across similar tasks. By assuming shared implicit variables <cit.> or unified hyper-priors of learning models <cit.>, a meta model pre-trained with data of historical tasks can effectively adapt to a new task <cit.>. The meta-adaptive control strategy has been studied in <cit.>. In <cit.>, the probably approximately correct (PAC) framework is integrated with Bayesian learning for provable generalization ability of the designed controllers. However, neither study works in safety-critical environments. Very recently, safe meta-control algorithms were developed based on Bayesian learning and reachability analysis in <cit.>, where, unfortunately, the estimation processes and safe control algorithms are rather complex. Differently, we are interested in estimating the scalar uncertainty in a CBF constraint, which is much simpler than estimating vector uncertainties for high-dimensional systems. Moreover, we argue that our method can adapt to changeable environments, while the work in <cit.> only considers uncertain dynamics in a deterministic environment.
§.§ Our Contributions
In this work, we make three key contributions.
* We propose a systematic development of an adaptive and probabilistic safe control framework by integrating meta learning techniques into the control barrier function (CBF) method. Our algorithm achieves computational efficiency in on-line control through the direct modeling of scalars in CBF constraints and the introduction of finite feature spaces based on adaptive BLR models.
* We provide a theoretical investigation into the criteria for ensuring probabilistic safety in our designed algorithm. Under mild assumptions, we establish upper bounds on the weight estimation errors and a relationship between these bounds and the predicted confidence levels, ensuring probabilistic safety during control.
* We conduct empirical evaluations to assess the performance of our algorithm in various obstacle avoidance control scenarios. The results demonstrate the superior performance of our algorithm compared to robust and GP-based CBF methods, both with and without online sampling. Furthermore, our method exhibits enhanced efficiency in safe exploration through online sampling.
§.§ Notations
Throughout this paper, ℝ denotes the set of real numbers, while ℝ^n denotes the n-dimensional real space. For a vector x∈ℝ^n and a differentiable scalar function h(x), ∇_x h = ∂ h(x)/ ∂ x. The Lie derivative of h(x)∈ℝ along f(x)∈ℝ^n is denoted by ℒ_f h(x) = ∇_x h^⊤ f(x). χ_d^2(p) represents the p-th quantile of the χ^2 distribution with d degrees of freedom (DOF). For a matrix A, λ_max(A) and λ_min(A) denote the maximum and minimum eigenvalues of A, respectively. Let ‖ x ‖_A = √(x^⊤ A x) be the weighted 2-norm of a vector x.
§ PRELIMINARIES
§.§ Problem Formulation
The autonomous control systems are considered to take the following control-affine structure in this paper:
ẋ = f(x) + g(x) u + φ_ω(x) + ϵ,
where x∈ℝ^n represents the system state and u∈ℝ^m is the system input. Both the drift dynamics f: ℝ^n→ℝ^n and the input dynamics g: ℝ^n→ℝ^n× m are known and locally Lipschitz continuous, corresponding to the prior knowledge of the system. In addition, we introduce an unknown vector field φ_ω: ℝ^n→ℝ^n. Here, ω denotes the label of a certain control environment. Unknown small noise ϵ is also considered due to ubiquitous disturbances or observation errors. This definition of the system is also adopted in other works <cit.>. In terms of safety, the system state should stay in certain regions 𝒳_ω⊂ℝ^n with high probability in case of potential risks. Note that 𝒳_ω is allowed to change in different environments. The control input should be taken from a given set 𝒰⊂ℝ^m. We formalize the safe control problem as follows.
Probabilistic Safe Control Problem (PSCP):
min_u∈𝒰 l(x, u)
subject to ẋ = f(x) + g(x) u + φ(x, ω) + ϵ, <ref>a
ℙ( x ∈𝒳_ω) > 1 - δ. <ref>b
In the above equations, δ∈ [0, 1) is a given threshold, and l: ℝ^n×ℝ^m→ℝ denotes a control objective, such as tracking a nominal input. The PSCP is not uncommon in real-life applications. For example, a vehicle is expected to avoid moving on irregular areas of the road because of high mechanical wear and tear or other unpredictable risks, which serves as a probabilistic safety constraint. Meanwhile, the resistance, load, and road condition change across scenarios and are naturally uncertain a priori, leading to the unknown dynamics φ and different admissible regions 𝒳_ω. We endeavor to propose a safe control method to solve the PSCP. Before further investigation, we present a brief overview of CBF and adaptive BLR (ABLR) models with some necessary assumptions.
§.§ Control Barrier Function Method
The CBF is defined as a measure of the safety distance. Formally, a valid CBF is given as follows.
Consider a continuously differentiable function h_ω(x):^n→, and a closed convex set defined by 𝒞_ω = { x | h_ω(x)≥0 }. If 𝒞_ω⊆𝒳_ω, then h_ω(x) is a valid CBF for systems (<ref>).
Dependent to ω, a CBF may differ in changing environments. To ensure safety during the entire control procedures, the concept of invariant set is introduced to obtain a valid CBF-based safety constraint below.
For an initial state x_0 ∈𝒞_ω, i.e. h_ω(x_0) ≥ 0, if there exists a locally Lipschitz continuous controller satisfying
sup_u ∈𝒰∇_x h^⊤_ω(x) ẋ≥ - α (h_ω(x)),
where α represents an extended class 𝒦 function, then the set 𝒞_ω is forward invariant. As a consequence, system (<ref>) is safe with probability 1, i.e., x ∈𝒳_ω when starting from x_0.
For systems (<ref>), CBF constraints (<ref>) are linear to u, making it tractable to compute a safe input. We assume the admissible control input set 𝒰 is described by a linear matrix inequality as u_min≤ A u ≤ u_max. When A=I_m, 𝒰 represents the input saturation. Moreover, the loss function is designed quadratic to the input vector, such as l = u - u_ref^2 with u_ref the task specific nominal control input. In all, the safe controller is obtained by solving the following CBF-based quadratic programming (CBF-QP) problem at each control step:
u^* = min_u∈^m u - u_ref^2
ℒ_f h_ω(x) + ℒ_g h_ω(x) u + Δ_ω(x) + α(h_ω(x)) ≥ 0, <ref>a
u_min≤ A u ≤ u_max, <ref>b
where Δ_ω(x) = ℒ_φ_ω h_ω(x) + ϵ_ω with ϵ_ω = ℒ_ϵ h_ω(x). Note that uncertainty exists in (<ref>). To tackle this, there are two common routes: (i) finding the largest bound of Δ_ω(x) as a worst-case estimate <cit.>, and (ii) estimating Δ_ω(x) with adaptive error bounds <cit.>. In this paper, the latter is chosen for better generalization. The following assumption is critical and widely made for estimation and adaptive design.
[Measurable Variables]
It is assumed that ℒ_f h_ω(x), ℒ_g h_ω(x) u, and ḣ_ω are all measurable.
The above statement is equivalent to assuming that x is fully observable and that the first derivative of h_ω can be computed or estimated. For simplicity, we denote the estimate of Δ_ω(x) by Δ̃_ω(x) = ḣ_ω - ℒ_f h_ω(x) - ℒ_g h_ω(x) u, and assume an unknown parametric regressor Δ̃_ω(x) = ϕ^⊤(x)𝐰^*. The following assumption limits the regression error in theory.
[Noise Bound <cit.>]
For a sample sequence {ϕ_t}_t=1^∞ and a noise sequence {η_t}_t=1^∞, the estimation noise η_t = Δ_ω^t - ϕ_t^⊤𝐰^* is conditionally σ_0-sub-Gaussian, where σ_0 > 0 is a fixed constant. Thus for an integer t>1, there is
∀λ∈ℝ, 𝔼[exp(λη_t) | ϕ_1:t, η_1:t-1] ≤ exp(λ^2 σ_0^2/2).
Assumption <ref> naturally assigns a zero mean and bounded variance to η_t. Zero-mean Gaussian noise 𝒩(0, σ_0^2), a common noise assumption in observer-based control research <cit.>, meets this condition.
§.§ Adaptive Bayesian Linear Regression
To estimate the unknown part in (<ref>), we introduce the ABLR <cit.>, a nonparametric model with tractable uncertainty quantification. A BLR models an unknown function y(x) by ỹ(x) = ϕ^⊤ (x) 𝐰, where ϕ(x) = [ϕ_1(x), …, ϕ_D(x)]^⊤, and weights 𝐰∈^D. Instead of point estimation, we assume the weight priors follow a multivariate Gaussian distribution 𝒩( μ_0, σ_0^2 K_0 ). Given a set of data 𝒟 = {(x_i, y_i)}_i=1^N, applying the Bayes rule, the weight posteriors are computed by p(𝐰|𝒟) = 𝒩(μ_τ, K_τ) where
μ_τ = K_τ(Φ𝐲 + K_0^-1μ_0 ), K_τ = (K_0^-1 + ΦΦ^⊤)^-1.
In the above equation, Φ = [ϕ(x_1), …, ϕ(x_N) ], and 𝐲 = [y_1, …, y_N]^⊤. As a result, the posterior predictive distribution at a test point x_t is given by
p(y | x_t, 𝒟) = ∫ p(y| x_t, 𝐰) p(𝐰|𝒟) d𝐰
= 𝒩(μ_τ^⊤ϕ(x_t), Σ_t ),
with
Σ_t = σ_0^2(1 + ϕ(x_t)^⊤ K_τϕ(x_t)).
The BLR can adapt to different tasks if ϕ is able to represent the inductive biases among tasks with appropriate priors of 𝐰. However, it is always difficult to determine a suitable combination of basis functions ϕ to strike a balance between prediction accuracy and computational efficiency <cit.>. The ABLR <cit.> shares the same structure as the vanilla BLR, differently, modifying the fixed ϕ(x) into a trainable mapping function, such as a (deep) NN <cit.> with D outputs. Consequently, ABLR models require training to master insightful inductive biases as well as a good priors from data. The general structure of ABLR models is depicted in Fig. <ref>.
Compared to GPs <cit.>, ABLR explicitly defines a kernel k(x, x^') = ϕ^⊤(x) K_0 ϕ(x^') in a finite-dimensional feature space. This approximation benefits meta-Bayesian learning in two aspects: (i) flexibility for adaptation by tuning kernel structures, and (ii) computational efficiency with respect to the number of collected data. Specifically, for n data points, the computational complexity is 𝒪(D^3n^2) for ABLR and 𝒪(n^3) for GPs <cit.>. Since D is fixed during control, ABLR is more efficient for large-scale on-line sampling.
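The posterior update and predictive distribution of the Bayesian output layer can be written compactly as below; `Phi` stands for the D×N matrix of forward-network features, and all names are illustrative.

```python
import numpy as np

def blr_posterior(Phi, y, mu0, K0):
    """Posterior over weights w ~ N(mu_tau, K_tau) given features Phi (D x N) and targets y (N,)."""
    K0_inv = np.linalg.inv(K0)
    K_tau = np.linalg.inv(K0_inv + Phi @ Phi.T)
    mu_tau = K_tau @ (Phi @ y + K0_inv @ mu0)
    return mu_tau, K_tau

def blr_predict(phi_t, mu_tau, K_tau, sigma0):
    """Predictive mean and variance at a test feature vector phi_t (D,)."""
    mean = mu_tau @ phi_t
    var = sigma0**2 * (1.0 + phi_t @ K_tau @ phi_t)
    return mean, var
```

For the fine-tuning stage of a new task, the same two functions can be reused with the meta-learned μ_0 and K_0 as priors.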
§ MAIN RESULTS
This section provides the systematic design and theoretical analysis of our model-assisted probabilistic safe adaptive control framework, dubbed . We begin by proposing the overall framework in Section <ref>. Then, we exploit its theoretical verification of safety in Section <ref>. Finally in Section <ref>, we present the detailed algorithms and practical implementation of .
§.§ Model-Assisted Probabilistic Safe Adaptive Control
The overall structure of the is depicted in Fig. <ref>. In what follows, we investigate the off-line meta-learning design of the ABLR model and the on-line probabilistic safe control framework based on CBF method.
§.§.§ Meta-training an ABLR model
The parameters of an ABLR model are learned under the framework of meta-learning, which contains a 'meta-learner' that extracts knowledge from many sampled trajectories of historical control tasks, and a 'base-learner' that can efficiently adapt to new tasks in an on-line manner <cit.>. For a Bayesian model, it is common to first learn the priors from historical data and then exploit them for the new tasks <cit.>.
The mapping functions, i.e. the feed-forward network ϕ, and the priors of the Bayesian layer, i.e. 𝒩( μ_0, σ_0^2 K_0 ), are trained together. For brevity, we use a fixed network structure and focus on learning its weights, denoted by W. In all, the trainable parameters of an ABLR model are given by 𝒲 = {W, μ_0, K_0}. Denote the set of K historical tasks by 𝒯 = {T_ω_i(ξ_i)}_i=1^K where ξ_i ∼ p(ξ). In each task, the control trajectories are sampled from the environment labeled by ω_i. It is assumed that uncertainties across tasks follow a fixed unknown distribution p(ξ). For the i-th task, let the collected dataset be 𝒮_ω_i = { (x^j, Δ̃^j_ω_i) }_j=1^t_i where t_i denotes the total number of data pairs. Note that Δ̃_ω_i is an estimate of the true uncertainty Δ_ω_i. To train our model, the Kullback-Leibler divergence between the true model p^*(Δ_t | x_t, 𝒮_ω_i) and the ABLR model p(Δ̃_t | x_t, 𝒮_ω_i) is minimized. This is equivalent to minimizing the negative log likelihood <cit.> given by
L(𝒲) = - 𝔼_x, Δ̃, 𝒮∼ p(x, Δ, 𝒮|ξ)log p(Δ̃_t | x_t, 𝒮_ω).
Moreover, thanks to the Gaussian priors, the log likelihood can be computed analytically. Using Monte-Carlo estimation and substituting (<ref>), we have
L(𝒲) ∝ ∑_i=1^K∑_j=1^t_ilogΣ_i^j + μ_i^j^⊤Σ_i^j^-1μ_i^j,
in which μ_i^j = Δ̃^j_ω_i - μ_i^⊤ϕ(x^j), μ_j and Σ_i^j are the predicted mean and variance at point x^j according to (<ref>) and (<ref>). We train 𝒲 by minimizing (<ref>) using historical tasks T_ω_i and trajectory data 𝒮_ω_i, i=1,…,K, in the 'meta-learner' phase. For a new task T_ω_K+1, we collect trajectory data 𝒮_ω_K+1 online. As designed in <cit.>, only weight priors in Bayesian layer are adjusted for fine-tuning in the 'base-learner' phase.
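For fixed forward-network features, one reading of the loss above can be evaluated as in the sketch below, using the full-task posterior for each point; in practice the network weights W would be trained by automatic differentiation through this quantity, and the data layout and names are illustrative.

```python
import numpy as np

def meta_nll_loss(tasks, mu0, K0, sigma0):
    """Negative log likelihood (up to constants), summed over tasks.
    Each task is a tuple (Phi, y) with features Phi (D x t_i) and targets y (t_i,)."""
    loss = 0.0
    K0_inv = np.linalg.inv(K0)
    for Phi, y in tasks:
        # Posterior of the Bayesian layer given this task's data.
        K_tau = np.linalg.inv(K0_inv + Phi @ Phi.T)
        mu_tau = K_tau @ (Phi @ y + K0_inv @ mu0)
        for j in range(y.shape[0]):
            phi_j = Phi[:, j]
            var = sigma0**2 * (1.0 + phi_j @ K_tau @ phi_j)   # predictive variance
            resid = y[j] - mu_tau @ phi_j                      # predictive residual
            loss += np.log(var) + resid**2 / var
    return loss
```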
§.§.§ Fine-Tuning the meta-learned ABLR model
The ABLR model obtained from the meta-learning stage is able to adapt to new tasks by fine-tuning with sequentially collected on-line data 𝒮^t_ω_K+1 = { (x^j, Δ̃^j_ω_K+1) }_j=1^t in which t is the number of data. In this stage, we only adjust the prior mean and covariance of 𝐰, expecting that the forward network ϕ(x) has captured informative features among tasks from diverse historical data. According to (<ref>), the posteriors are given by:
μ_t = K_t-1(Φ_t-1𝐲_t-1 + K_0^-1μ_0 ),
K_t = (K_0^-1 + Φ_t-1Φ_t-1^⊤)^-1,
Φ_t = [ϕ(x^1), …, ϕ(x^t-1) ],
𝐲_t = [Δ̃^1_ω_K+1, …, Δ̃^t-1_ω_K+1]^⊤.
§.§.§ Integrating the ABLR model in safe control framework
For a Bayesian model, the confidence interval drawn from its predictive distribution provides insightful information about the learning quality. With informative priors obtained through meta-learning, the ABLR model can make reasonable predictions of the uncertainty term Δ_ω for on-line safe control. For simplicity, denote the predicted Gaussian distribution of the ABLR model at a test point x_t by 𝒩(μ̃_t, σ̃_t^2), where μ̃_t and σ̃_t are the mean and standard deviation of (<ref>). The confidence interval associated with a confidence level β>0 is computed as ℐ_t = [ μ̃_t - βσ̃_t, μ̃_t + βσ̃_t ]. Therefore, we can integrate this interval into the modified CBF constraint below to obtain a probabilistic form of the CBF constraint.
For a valid CBF defined in Definition <ref> and for a Bayesian model such as ABLR, the probabilistic safety constraint is given as
ℒ_f h_ω(x) + ℒ_g h_ω(x) u + μ̃(x) - βσ̃(x) + α(h_ω(x)) ≥ 0.
In this vein, we can design the CBF-based probabilistic safe controller by substituting (<ref>) with (<ref>) in (<ref>). Before deployment, we provide a theoretical investigation to show to what degree probabilistic safety can be ensured.
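A sketch of one control step solving the resulting QP with the pessimistic bound is shown below; cvxpy is used only for illustration, and the handles for ℒ_f h, ℒ_g h, the ABLR prediction (μ̃, σ̃), and α(h) are assumed to be supplied by the caller.

```python
import numpy as np
import cvxpy as cp

def safe_control_step(u_ref, Lf_h, Lg_h, mu_hat, sigma_hat, alpha_h, beta, u_min, u_max, A=None):
    """Solve min ||u - u_ref||^2 subject to
       Lf_h + Lg_h @ u + mu_hat - beta*sigma_hat + alpha_h >= 0 and u_min <= A u <= u_max."""
    m = u_ref.shape[0]
    A = np.eye(m) if A is None else A
    u = cp.Variable(m)
    constraints = [
        Lf_h + Lg_h @ u + mu_hat - beta * sigma_hat + alpha_h >= 0,
        A @ u <= u_max,
        A @ u >= u_min,
    ]
    prob = cp.Problem(cp.Minimize(cp.sum_squares(u - u_ref)), constraints)
    prob.solve()
    return u.value
```

In practice, β would be set either to the theoretical bound derived below or to a fixed quantile such as 1.96, as discussed in the implementation section.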
§.§ Theoretical Analysis and Probabilistic Safety
We begin the analysis by evaluating the modeling effect of ABLR models through meta-learning. The following assumptions are necessary and are used in the literature <cit.>. Note that these assumptions correspond to the smoothness and correct model selection assumptions of GPs in function spaces <cit.>.
[Capacity of Meta Models]
With Assumption <ref>, ∀ξ̃∼ p(ξ), there exists 𝐰^*(ξ̃) ∈ℝ^D satisfying
Δ_ω(x) = ϕ^⊤(x)𝐰^*(ξ̃) + η(ξ̃), ∀ x∈𝒳_ω .
[Calibration of Meta Priors]
For ξ∼ p(ξ), the error between 𝐰^* and prior mean μ_0 is calibrated with probability at least 1-δ̃, where δ̃= δ/κ and κ > 0, as
ℙ( ‖𝐰^* - μ_0 ‖^2_K_0^-1 < σ_0^2 χ_D^2(1 - δ̃) ) ≥ 1 - δ̃.
Theoretically, Assumption <ref> holds with the uniform approximation ability of (multi-layer) NNs <cit.>. It can be further relaxed to permit a bounded approximate error as stated in <cit.>. To ensure Assumption <ref>, we can set a large enough σ_0 for non-informative priors, leading to a conservative theoretical analysis of the meta model.
We can now quantify the modeling effect of ABLR models by the following statements.
Let Assumptions <ref>, <ref>, <ref> and <ref> hold. The meta ABLR model is trained by (<ref>) with historical data and updates according to (<ref>) with on-line collected data 𝒮^t_ω_K+1. Let also 𝐰^* be the optimal weight as (<ref>). Then, 𝐰^* lies in the following set with probability at least 1-δ̃, as
𝒞^δ_t = {𝐰∈ℝ^D | ‖𝐰 - μ_t ‖_K_t^-1≤σ_0 Γ_t^δ̃},
where
Γ_t^δ̃ = √(2log( det(K_t)^-1/2 det(K_0)^1/2/δ̃)) + √(λ_max(K_t)/λ_min(K_0) χ^2_D (1 - δ̃)).
The derivation is presented in Appendix <ref>.
The above statement quantifies the estimation error bounds of the parameters in the ABLR model based on the meta-learned priors (K_0), the on-line collected data (K_t), and a user-specified probability (δ̃). In the context of the CBF method, we can combine it with the worst-case adaptive CBF method to obtain a valid CBF constraint that ensures safety with a specified probability. Based upon (<ref>), it is better to determine the value of β so as to correctly obtain the safety probability, rather than relying on the uncalibrated confidence intervals predicted by ABLR models.
Let Assumptions <ref>, <ref>, <ref> and <ref> hold. The meta ABLR model is trained by (<ref>) with historical data and updates according to (<ref>) with on-line collected data 𝒮^t_ω_K+1. Then, the input u satisfying probabilistic safety constraint (<ref>) renders systems (<ref>) safe with at least probability 1-δ̃ if we set β≥Γ_t^δ̃.
The derivation is presented in Appendix <ref>.
§.§ Algorithm Implementation
In this section, we discuss some implementation details of . The pseudo code for two stages of is presented in Algorithm <ref> and Algorithm <ref>.
Our discussions consist of the following aspects.
§.§.§ Meta-task data collection
Although similar tasks are accessible in many control scenarios <cit.>, sampling from them is intractable when safety issues are considered. Therefore, human-assisted control (HAC) is needed, either to inspect control procedures <cit.> or to pre-plan safe trajectories with tracking controllers <cit.>. In addition, we can use high-fidelity simulations to avoid catastrophic unsafe behaviors <cit.>. For all examples considered in this work, we pre-plan safe trajectories for all meta control tasks.
§.§.§ ABLR configurations and Meta training
The structure of the forward NN and the cell number D of the Bayesian layer determine the capacity of an ABLR model. To meet Assumption <ref>, we use a multi-layer perceptron (MLP) in our experiments. We fix the prior covariance matrix σ_0^2K_0 during training in order to meet Assumption <ref> and reduce computational complexity, referring to <cit.> otherwise. From a more practical perspective, we can add regularization terms during training to avoid over-fitting and obtain better performance <cit.>.
§.§.§ Warm start the learning models
Since meta data are also expensive to collect, the ABLR model is likely to be biased after meta-training. We refine the ABLR model using data from the current task. In this stage, we only update the prior mean vector μ_0 <cit.> by minimizing the simplified loss
L(μ_0) ∝∑_j=1^t_K+1μ_K+1^j^⊤Σ_K+1^j^-1μ_K+1^j,
which is similar to optimize the hyperparameters of a GP model while keeping kernel structure fixed. We can further speed up the optimization processes by manually computing the analytical format of gradients <cit.>. Note that at least one sample of the current task is needed to warm start.
§.§.§ Online sampling and control
Although theoretically attractive, the lower bound of β in Theorem <ref> is too conservative in practice. Typically, we can set β=1.96 for a 95% confidence level in (<ref>). In addition, we can refine the ABLR models using (<ref>) and safe-exploration techniques <cit.> for better control performance. Furthermore, a deployment with more efficient real-time computation would be a promising improvement of our algorithm <cit.>.
§ ALGORITHM VALIDATION
§.§ Experiment Setup
We test our framework on the obstacle avoidance control of a moving point, which is a popular verification application of safe control methods <cit.>. We consider multiple scenarios: (i) uncertain dynamics with one fixed obstacle, (ii) uncertain dynamics with one uncertain obstacle, and (iii) uncertain dynamics with multiple uncertain obstacles. Our algorithm is compared with three methods: the optimal CBF method () with perfect knowledge of uncertainty <cit.> (no uncertainty), the robust CBF method () with the worst-case estimation of uncertainty <cit.> (no learning), and the GP-based probabilistic CBF method () that models uncertainty by sampling data from the current task <cit.> (no meta). We expect performs nearly the same as , and significantly outperforms with non-informative priors and .
We implement GPs using GPy <cit.> with a Matérn 5/2 kernel, a constant mean function, and a fixed-noise Gaussian likelihood. We implement the ABLR model in JAX <cit.>, with a 3-layer multi-output NN in Haiku <cit.>. Specifically, the first two layers of the NN are fully connected with 256 tanh cells each, while the last layer contains D sigmoid output cells. For both models, we fix the variance of the noise distribution to 0.1. The CBF-QP is based on the convex optimization solver in <cit.>.
The dynamics of a moving point system is given by:
ẋ = [ 1 0; 0 1 ] u + φ_ω(x), with φ_ω(x) = -ω [ cos(x_1) + sin(x_2); cos(x_1) + sin(x_2) ],
where φ_ω(x) is unknown and ω takes values randomly from a known distribution p(ξ). We consider a uniform distribution of ω as ω∼U(0.5, 2.5). For simplicity, the obstacles are described by circles. The i-th obstacle is centered at x_ω^i, y_ω^i with radius r_ω^i. Therefore, a valid CBF for i-th obstacle is
h^i_ω (x) = (x_1 - x_ω^i)^2 + (x_2 - y_ω^i)^2 - r^i_ω^2.
In the current task, the moving point starts from the origin and moves towards a target point x_T = [3.0, 4.0]^⊤ under a nominal controller u_ref = -k_p (x - x_T), with a fixed parameter ω = 1.5. There are already 20 noisy samples from a known safe trajectory, shown as the gray line in Fig. <ref>. The GP and ABLR models are optimized/fine-tuned according to these samples. For all scenarios, we use the following configurations: k_p=10.0, α=1.0, u_max= 5.0, u_min = -5.0, and β=1.96.
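For reference, a single closed-loop step of the optimal baseline (perfect knowledge of φ_ω) for this setup might look like the sketch below. It assumes the sign convention φ_ω(x) = -ω[cos(x_1)+sin(x_2); cos(x_1)+sin(x_2)] from the equation above, solves the single-constraint QP in closed form, and handles the input box only by crude clipping; all names are illustrative.

```python
import numpy as np

def phi_true(x, omega=1.5):
    """True (normally unknown) uncertainty of the moving-point system."""
    s = np.cos(x[0]) + np.sin(x[1])
    return -omega * np.array([s, s])

def cbf_and_grad(x, xc=1.5, yc=1.5, r=0.8):
    """Circular-obstacle CBF h(x) and its gradient."""
    h = (x[0] - xc) ** 2 + (x[1] - yc) ** 2 - r ** 2
    grad = np.array([2 * (x[0] - xc), 2 * (x[1] - yc)])
    return h, grad

def control_step(x, x_T, k_p=10.0, alpha=1.0, u_lim=5.0, omega=1.5):
    """One CBF-QP step: constraint grad.(u + phi) + alpha*h >= 0, i.e. a^T u >= b."""
    u_ref = -k_p * (x - x_T)
    h, grad = cbf_and_grad(x)
    a = grad
    b = -alpha * h - grad @ phi_true(x, omega)
    lam = max(0.0, (b - a @ u_ref) / (a @ a + 1e-12))
    u = u_ref + lam * a
    return np.clip(u, -u_lim, u_lim)   # crude handling of the input box

def simulate(x0, x_T, dt=0.01, T=3.0):
    """Forward-Euler rollout of the closed loop with the true dynamics dx = u + phi."""
    x = np.array(x0, float)
    traj = [x.copy()]
    for _ in range(int(T / dt)):
        u = control_step(x, np.asarray(x_T, float))
        x = x + dt * (u + phi_true(x))
        traj.append(x.copy())
    return np.array(traj)
```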
§.§ Experiments Results
§.§.§ Uncertainty in Dynamics
We consider a fixed obstacle described by x_r=y_r = 1.5, and r = 0.8. We randomly sample 20 different ω as historical tasks, and generate 30 sparse samples in each task. We set D=10. The ABLR model is meta-trained according to these samples. The algorithms are evaluated on two simulations without/with on-line samplings respectively. To evaluate performance, we introduce the criteria from the field of adaptive CBF methods <cit.>. Therein, a better algorithm commonly admits a closer distance between system state and the obstacle, namely a smaller h(x) during control. Figure <ref> shows the different trajectories, when the on-line sampling is forbidden, i.e., both GP and ABLR models do not update during control. It is shown that performs the best, while other two algorithms are more conservative. When online sampling is permitted, we set the sampling frequency by 10Hz. As shown in Fig. <ref>, the performance of improves a lot, while is almost the same as the optimal controller .
§.§.§ Uncertainty in Both Dynamics and an Obstacle
In this scenario, the obstacle in each meta task is generated randomly from x_r, y_r∼ U(1.0, 4.0), r∼ U(0.2, 1.0). Since the uncertainty increases, we set D=20 and generate 50 meta-tasks, each with 30 sparse samples, to train the ABLR model. In the current task, the new obstacle is located at x_r=1.0, y_r = 2.5, with r = 1.2 (out-of-distribution). The simulation results are presented in Fig. <ref> and are consistent with the first scenario.
§.§.§ Uncertainty in Multiple Obstacles
Note that the saved model configurations from the second scenario can be directly used in the final experiment. We consider three obstacles located at x_r=[1.0, 3.0, 2.5], y_r = [2.5, 2.0, 0.5], and r = [1.0, 0.5, 0.8]. We build three ABLR models from the saved configurations, and refine them according to different pieces of data (observations are organized in pieces). We introduce three CBF constraints into the CBF-QP for safe control. Without on-line sampling, as shown in Fig. <ref>, both and fail to find a path to the target position due to conservative estimation of uncertainties, while performs significantly better. With on-line samples, as seen from Fig. <ref>, starts to explore a valid path, however in a very inefficient manner, while performs approximately the same as but is inefficient to some extent.
In Fig. <ref> and <ref>, Δ x = x - x_T. We evaluate the efficiency of the different algorithms by the error bound of Δ x. In the two simulations, without on-line sampling is more efficient than with new samples, mainly because of the rough, biased estimation of the uncertainty and the higher risk of performing unsafe behaviors. The different efficiency between and is also revealed in Fig. <ref>. As the GP model predicts a more conservative reachable set than the meta-trained ABLR model at each step, can be a more suitable choice for on-line safe exploration <cit.>.
§ CONCLUSIONS
In this work, we develop a novel framework for safe control against uncertainties by leveraging Bayesian learning models with meta learning techniques. The theoretical analysis establishes the probabilistic safety guarantee of our method, while also providing insights into the implementation details of the algorithm. We conduct comparative experiments on several control scenarios, demonstrating the effectiveness and superiority of our framework. Beyond these results, we observe that our method can be more efficient for safe on-line exploration, especially with multiple constraints.
There are several limitations of our method, which we will investigate in future research. For model-based learning, our algorithm only considers uncertainty in the drift dynamics, which is not coupled with the control input. However, uncertain input dynamics are also common in control systems. In terms of theoretical analysis, our criteria do not explain the quantitative impact of meta learning on probabilistic safety. Finally, although attractive in the studied experiments, our algorithm needs more tests on other applications to demonstrate its performance.
§ APPENDIX
§.§ Detailed Illustration
In the illustrative example, the moving point system is determined by (<ref>) with ω=1.5. The known safe trajectory, shown as the light gray line in Fig. <ref>, follows the conic 1.2 (x_1 - 1.5)^2 = x_2 - 0.3. The samples of x_1 are drawn independently from a uniform distribution U(0.3, 1.5). These samples are also used in the experiments for the warm start in Section <ref>.
The safety boundaries plotted in Fig. <ref> are mathematically the zero-value contour of different worst-case robust CBF h_a(x) = 0 of a nominal system ẋ = u + φ(x). Specifically, we follow the definition of the adaptive CBF in <cit.> as h_a(x) = h(x) - Δ(x), where Δ(x) is the worst-case estimation of φ(x). The red line represents the accurate estimation, while the green and orange lines are given by pessimistic estimation of different Bayesian models with β = 1.96. The shaded parts represent unreachable sets, different from the infeasible/unsafe set given by a classification model. In this vein, the boundaries imply different conservativeness of CBF methods.
§.§ Proofs of Main Results
Proposition <ref> is derived by leveraging the ideas of vector learning in <cit.> and linear stochastic bandits in <cit.>. The following additional definitions are needed for derivation.
We view the on-line sampling during the control process as generating sequence {ϕ_t }_t=1^∞ where ϕ_t = ϕ(x_t) is the forward network in ABLR model. Consider the σ-algebra ℱ_t = σ( ϕ_1,…,ϕ_t+1, η_1, …, η_t). Let us assume that {ℱ_t }_t=1^∞ is any filtration such that for integer t≥ 1, ϕ_t is ℱ_t-1-measurable, η_t is ℱ_t-measurable, and Δ_ω_K+1(x_t) = ϕ_t^⊤ω^* + η_t is also ℱ_t-measurable. Define a sequence { S_t}_t=0^∞ with S_t = ∑_s=1^t η_t ϕ_t as a martingale w.r.t {ℱ_t}_t=0^∞.
Let Assumption <ref> hold. For a D× D positive definite matrix K_0 and any integer t≥ 0, K_t updates according to (<ref>). Then, for any δ̃ with probability at least 1-δ̃, there is
‖ S_t ‖^2_K_t ≤ 2σ_0^2 log( det(K_t)^-1/2 det(K_0)^1/2/δ̃), ∀ t≥0.
With Assumption <ref>, The predicted mean of 𝐰 can be rewritten as
μ_t = K_t-1( Φ_t-1𝐲_t-1 + K_0^-1μ_0)
= K_t-1( Φ_t-1 (Φ_t-1^⊤𝐰^* + η) + K_0^-1μ_0)
= K_t-1( K_t-1^-1𝐰^* + Φ_t-1η + K_0^-1 (μ_0- 𝐰^*))
= 𝐰^* + K_t-1Φ_t-1η + K_t-1K_0^-1( μ_0 - 𝐰^*),
where η_t-1 = [ η_1,…, η_t-1]^⊤ denotes the observation noise during sampling and estimation. Denoting 𝐰̃_i = μ_i - 𝐰^*, subtracting 𝐰^* on both side of (<ref>), and then left multiplying a vector p ∈ℝ^D, we can get
| p^⊤𝐰̃_t |≤‖ p ‖_K_t-1( ‖Φ_t-1η‖_K_t-1 + ‖ K_0^-1𝐰̃_0 ‖_K_t-1).
The second term on the right-hand side is further scaled by
‖ K_0^-1𝐰̃_0 ‖^2_K_t-1≤λ_max ( K_t-1 )/λ_min( K_0 )‖𝐰̃_0 ‖^2_K^-1_0,
By Lemma <ref> and Assumption <ref>, with probability at least 1-2δ̃ (see <cit.>), the following inequality holds ∀ t≥0:
‖𝐰̃_t ‖_K^-1_t-1^2 ≤‖ K_t-1^-1𝐰̃_t ‖_K_t-1Γ^δ_t-1 = ‖𝐰̃_t ‖_K^-1_t-1Γ^δ_t-1 ,
where we let p=K_t-1^-1𝐰̃_t. Furthermore, we have ‖𝐰̃_t ‖_K^-1_t-1≤Γ^δ_t-1, implying 𝐰^* lies in the confidence ellipsoid 𝒞_t^δ.
With the on-line dataset 𝒮^t_ω_K+1, the predicted variance of a single test point x̂ according to (<ref>) is
Σ(x̂) = σ^2_0 (1 + ϕ(x̂)^⊤ K_t ϕ(x̂)) ≥σ_0^2.
Note that K_t is positive definite according to (<ref>). Therefore, we can derive the upper bound of Σ_t by
Σ(x̂) ≤σ^2_0 + σ^2_0 λ_max(K_t) ‖ϕ(x̂) ‖^2
Thus, the predicted standard deviation satisfies
σ̃(x̂) = √(Σ(x̂))≤ σ_0 √(1 + λ_max(K_t) ‖ϕ(x̂) ‖^2)
≤ σ_0 (1 + √(λ_max(K_t))‖ϕ(x̂) ‖).
Replacing σ̃_t in (<ref>) by the above upper bound, we have
ℒ_f h_ω(x̂) + ℒ_g h_ω(x̂) u + μ̃(x̂)
- βσ_0 (1 + √(λ_max(K_t))‖ϕ(x̂) ‖) + α(h_ω(x̂)) ≥ 0.
Note that satisfying (<ref>) is sufficient to also meet (<ref>) for any input u. We next study the relationship between the upper bound of standard deviation (<ref>) and the probabilistic error bound in (<ref>).
On the other hand, according to the adaptive CBF methods <cit.>, the upper error bound in (<ref>) can be considered in vanilla CBF method to ensure robustness against uncertainty. This results in the following adaptive CBF constraint:
ℒ_f h_ω(x_t) + ℒ_g h_ω(x_t) u + μ_t^⊤ϕ(x_t)
- sup_𝐰∈ℝ^D‖𝐰 - μ_t ‖‖ϕ(x_t) ‖ + α(h_ω(x_t)) ≥ 0.
Note that we do not consider the variation of this upper bound for better adaptation as <cit.>, but only a conservative upper bound as the worst-case CBF constraint <cit.>. Note also that the worst-case CBF constraint is sufficient to ensure safety. According to Proposition <ref>, with probability at least 1 - δ̃,
1/√(λ_max(K_t))‖𝐰^* - μ_t ‖≤‖𝐰^* - μ_t ‖_K_t^-1≤σ_0 Γ_t^δ̃.
Therefore, we have with probability at least 1 - δ̃, the following constraint is sufficient to render safety:
ℒ_f h_ω(x_t) + ℒ_g h_ω(x_t) u + μ_t^⊤ϕ(x_t)
- σ_0 √(λ_max (K_t))Γ_t^δ̃‖ϕ(x_t) ‖ + α(h_ω(x_t)) ≥ 0.
Comparing this with (<ref>), we can take β≥Γ_t^δ̃ to obtain that (<ref>) renders safety with probability at least 1 - δ̃.
§ ACKNOWLEDGMENT
This work was supported by NPRP grant: NPRP 9-466-1-103 from Qatar National Research Fund, UKRI Future Leaders Fellowship (MR/S017062/1, MR/X011135/1), NSFC (62076056), EPSRC (2404317), Royal Society (IES/R2/212077) and Amazon Research Award.
|
http://arxiv.org/abs/2307.02713v1
|
20230706012625
|
A Simple Linear Algebraic Approach to Capture the Dynamics of the Circular Flow of Income
|
[
"Aziz Guergachi",
"Javid Hakim"
] |
econ.GN
|
[
"econ.GN",
"q-fin.EC"
] |
A Simple Linear Algebraic Approach to Capture the Dynamics of the Circular Flow of Income
[
=====================================================================================
This article has one single purpose: introduce a new and simple, yet highly insightful approach to capture, fully and quantitatively, the dynamics of the circular flow of income in economies. The proposed approach relies mostly on basic linear algebraic concepts and has deep implications for the disciplines of economics, physics and econophysics.
The act of selling and buying as a driver of money circulation:
A fundamental concept that underlies the notion of scarcity, which is the central core of economics (e.g.,<cit.>, page 4; <cit.>, page 4), is the fact that there are buyers who desire to acquire the scarce goods and services in the market <cit.>. If no one is interested in these goods and services, then they would be irrelevant to the study of economics, even if they were scarce. The approach we propose in this paper to describe the circular flow of income focuses on the very exchange of goods and services between sellers and buyers, and attempts to capture the dynamics of this exchange from the ground up.
To describe the basic idea of this approach, let us imagine a community of three economic agents, who sell goods and services to each other in exchange for money, all in a closed economy. We refer to these three agents as A, B and C, also known as Alice, Bob and Carol (to reuse the fictional characters introduced by cryptographers Rivest et al. <cit.>). As a result of the business exchange taking place among them, Alice, Bob and Carol earn income. Let us denote the incomes earned by Alice, Bob and Carol up until time t and expressed in monetary units as x_1(t), x_2(t) and x_3(t) respectively. Between time t and time t+1, Alice will sell goods and services to Bob who will then pay Alice a certain price p for these goods and services. This price p paid by Bob to Alice will come out of Bob's income x_2(t) at time t; it will be a fraction of x_2(t). Let us denote this fraction of x_2(t) paid by Bob to Alice during the time interval [t,t+1] as f_12(t). In a similar way, Alice will also sell goods and services to Carol between t and t+1 and will, as a result, earn an additional income of f_13(t) × x_3(t), where f_13(t) is the fraction of x_3(t) paid by Carol to Alice between t and t+1 in exchange of the purchased goods and services. Thus, the total income earned by Alice over the time interval [t,t+1] as a result of her doing business with other economic agents is:
x_1(t+1) = f_12(t) x_2(t) + f_13(t) x_3(t)
Let us now focus on Bob as a seller of some different type of goods and services to other economic agents, namely Alice and Carol who would then have to pay some fractions f_21(t) and f_23(t) of their respective incomes x_1(t) and x_3(t) at time t to Bob. Thus, over the time interval [t,t+1], Bob earns a total income of:
x_2(t+1) = f_21(t) x_1(t) + f_23(t) x_3(t)
as result of him conducting business with the other agents.
In a similar way, Carol, as a seller, would earn a total income of:
x_3(t+1) = f_31(t) x_1(t) + f_32(t) x_2(t)
over the time interval [t,t+1], as a result of her selling other goods and services to Alice and Bob, f_31(t) and f_32(t) being the fractions of the buyers' respective incomes x_1(t) and x_2(t) that were paid to Carol.
Using matrix algebra, the above three equations can be written as follows:
𝐱(t+1) = 𝐅_t 𝐱(t)
where 𝐱(τ) = [x_1(τ), x_2(τ), x_3(τ)]^T is the income vector at time τ, and the matrix:
𝐅_τ = [f_ij(τ)]__i,j∈{1,2,3}
is the 3 × 3 matrix whose entries are the above-mentioned fractions f_ij(τ) at time τ. The superscript ^T denotes, in the entire article, the transpose of a vector or a matrix. We should note that some of the entries in 𝐅_τ might be equal to zero if there is no exchange between the corresponding agents. In particular, according to the above linear algebraic description of income circulation, the diagonal entries f_ii(τ) of the matrix 𝐅_τ will be all zeroes, because agents don't sell things to themselves. But they can save money between one time instant and the next one. As business actors, they earn money by selling goods and services to others and, as living entities, they incur expenses to maintain themselves. The difference between the income earned and the expenses incurred would be the agents' savings. To be more specific, let us consider one individual agent j which can be Alice, Bob or Carol. The total expenses that are incurred by j between t and t+1 can be calculated in the following way using the entries in the column j of 𝐅_τ:
(total expenses)_j = ∑_i∈{1,2,3}, i ≠ j f_ij(t) x_j(t)
Then, the difference:
x_j(t) - (total expenses)_j = x_j(t) - ∑_i∈{1,2,3}, i ≠ j f_ij(t) x_j(t) = ( 1 - ∑_i∈{1,2,3}, i ≠ j f_ij(t) ) x_j(t) = s_j(t) x_j(t)
is what the agent j would have saved over the time interval [t,t+1].
Let us now redefine the diagonal entries f_jj(τ) for each agent j ∈{1,2,3} in the matrix 𝐅_τ as:
f_jj(τ) = s_j (τ) = 1 - ∑_i∈{1,2,3}, i ≠ j f_ij(τ)
and redefine the coordinates x_i(τ), i∈{1,2,3}, of the vector 𝐱(τ) as the sum of the income earned and the savings made by agent i between the time instants τ -1 and τ. With these new definitions of f_jj(τ) and x_i(τ), the matrix equation:
𝐱(t+1) = 𝐅_t 𝐱(t)
remains valid with the understanding that the coordinates of 𝐱(τ) at time τ represent the sum “income + savings" made by the individual economic agents between τ-1 and τ, not just the income. At this stage of building the linear algebraic model (<ref>), banks and financial institutions who would lend money to agents are not included. Because of that, the diagonal entries in the matrix 𝐅_τ will remain non-negative. Note also that, with the new definitions of the diagonal entries and the coordinates of 𝐱, the matrix 𝐅_τ is column-stochastic.
An agent-based income circulation model:
The gedankenexperiment we used in the above discussion involved three agents — Alice, Bob and Carol. But we can easily repeat this experiment and generalize it to n agents with n>3, in which case the final version of income circulation model becomes:
𝐱(t+1) = 𝐅_t 𝐱(t)
where 𝐅_τ is a square n × n non-negative matrix that is column-stochastic, and that is referred to in this paper as the income circulation matrix at time τ≥ 0. It goes without saying that, for large values of n, the matrix 𝐅_τ will likely have in it many entries that are zeroes, because it is very improbable in a large economy that every pair of agents engages in an act of selling and buying between the times τ and τ +1.
Let us now specify the context within which the above matrix model (<ref>) would be valid:
* The economy at hand ℰ is a closed economy made up of n agents, and can be a small village, a region, or an entire country. The number n can be in the hundreds, thousands, millions or billions of agents.
* An agent in this economy ℰ can be:
* an individual who: (1) is an employee working for another agent (firm, individual, household, etc.), or (2) is running her own business (corner store, plumbing services, barber shop, handyman, and so on);
* a household made up of individuals who earn income, say the parents, and other ones who are dependent on the income earners;
* a firm which consists of a business or a collection of businesses that are, in principle, more sophisticated than the ones owned and operated by individuals. Those firms that are collections of businesses can be broken down into multiple agents in the matrix model we propose in this paper. For example, if the firm is a retail chain, then each retail store can be considered as a different agent in the income circulation matrix. Such a retail store sells goods and services to other agents in its neighborhood, while it buys goods and services from its corporate office which would be represented as a different agent in the income circulation matrix.
* At this initial stage of the development of the model, we assume that the agents in ℰ cannot be banks or financial institutions who provide credit and, as a result, agent debt is not considered in this model. In general, the financial accounting concepts of liability and asset are intentionally omitted in the current model, but can be incorporated in future versions of the model. Similarly, no governments collecting taxes are included at this stage in the current model.
* Time is discretized into uniform steps t, t+1, t+2, etc. There is no restriction on the length of these time steps which can be in the range of seconds, hours, days, or weeks, depending on the intensity of the economic activity in ℰ.
* For each integer i ∈{1, 2, ⋯, n}, let x_i(τ) ≥ 0 be the sum of income earned and savings made by agent i between times τ - 1 and τ. We will refer to x_i(τ) as the wealth of agent i at time τ. The vector 𝐱(τ) = [x_1(τ), x_2(τ), ⋯, x_n(τ)]^T ∈_+^n is the wealth vector of agents in ℰ.
Under the conditions specified in the above itemized list, the dynamics of the wealth vector 𝐱 are governed by the matrix equation (<ref>). Using the rules of matrix algebra, we can show that the wealth vector 𝐱 can be expressed as:
𝐱(t+1) = ( ∏_τ=0^t𝐅_τ) 𝐱(0)
or, equivalently:
𝐱(t) = ( ∏_τ=0^t-1𝐅_τ) 𝐱(0)
where 𝐱(0) is the wealth vector of the economy ℰ at an arbitrarily selected initial time instant t=0. The matrix equation (<ref>) constitutes what we believe to be a fundamental result about the dynamics of income circulation which, to our knowledge, has never been published in the economics literature, and which can be stated as follows:
26em
The dynamics of wealth distribution in a closed economy where no credit is available to the economic agents are governed by in-homogeneous products of column-stochastic matrices.
The sum M=∑_i=1^n x_i(0) of the coordinates of 𝐱(0) can be referred to as the monetary base. Since the financial sector is nonexistent in ℰ, no money is created and, thus, M remains constant over time. Using matrix algebraic rules, one can show that the sum of the coordinates of the wealth vector 𝐱(t) is invariant under transformations by in-homogeneous products of column-stochastic matrices. This ensures that the income circulation equation (<ref>) is consistent with the initial setup of ℰ. Also, in the cases where the time steps are so short that no business exchanges take place within a certain time interval [τ,τ+1], the income circulation matrix 𝐅_τ would just collapse to the identity matrix, as agents get to keep all their wealth to themselves during this time interval. Because the identity matrix is the neutral element for matrix multiplication, the matrix model (<ref>) is consistent with time steps of any size. We also conjecture that one can derive from the matrix model (<ref>) many of the empirical observations that economists have highlighted about the issues of economic inequality, including the fact that wealth distributions in societies follow, under some conditions on the structures of the income circulation matrices 𝐅_τ (τ >0), the Pareto law.
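As a simple numerical illustration of this invariance (our sketch, not part of the model derivation; the circulation matrices below are drawn at random), the following Python/NumPy snippet evolves a wealth vector under products of column-stochastic matrices and verifies that the monetary base M is preserved:

import numpy as np

rng = np.random.default_rng(0)

def random_circulation_matrix(n):
    # Build a non-negative matrix with random entries and normalize each
    # column so it sums to 1 (column-stochastic), as required by the model.
    F = rng.random((n, n))
    return F / F.sum(axis=0, keepdims=True)

n, T = 5, 100
x = rng.random(n)            # initial wealth vector x(0)
M = x.sum()                  # monetary base

for t in range(T):
    x = random_circulation_matrix(n) @ x   # x(t+1) = F_t x(t)

# Total wealth is invariant under column-stochastic transformations.
print(M, x.sum())            # the two sums agree up to rounding error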
About the authors:
AG and JH are two Canadian systems scientists with background in mathematics, physical chemistry and engineering science. They live in Toronto, Ontario, and can be reached at the following e-mail address:
[email protected]
Comments and feedback on this paper and its equations are welcome and can be sent to the authors at the above e-mail address.
|
http://arxiv.org/abs/2307.00947v1
|
20230703114332
|
A hybrid finite element/neural network solver and its application to the Poisson problem
|
[
"Uladzislau Kapustsin",
"Utku Kaya",
"Thomas Richter"
] |
math.NA
|
[
"math.NA",
"cs.NA"
] |
We analyze a hybrid method that enriches coarse grid finite element solutions with fine scale fluctuations obtained from a neural network.
The idea stems from the Deep Neural Network Multigrid Solver (DNN-MG) <cit.> which embeds a neural network into a multigrid hierarchy by solving coarse grid levels directly
and predicting the corrections on fine grid levels locally (e.g. on small patches that consist of several cells) by a neural network. Such local designs are quite appealing, as they allow very good generalizability. In this work, we formalize the method and describe the main components of the a-priori error analysis.
Moreover, we numerically investigate how the size of the training set affects the solution quality.
A hybrid finite element/neural network solver and its application to the Poisson problem
Uladzislau Kapustsin
Utku Kaya
Thomas Richter
Otto-von-Guericke Universität Magdeburg, Germany
August 1, 2023
=======================================================================================================================
§ INTRODUCTION
Recent advancements in employing neural networks to approximate solutions to partial differential equations (PDEs) mostly focus on
Physics-Informed Neural Networks (PINNs) <cit.> such as the Deep Ritz method <cit.>. They leverage the expressive power of neural networks while incorporating physical principles
and promise substantial efficiency increases for high-dimensional or parameter-dependent partial differential equations. One main drawback of PINNs is
that they need to be re-trained when the problem parameters change.
Also, for classical problems, such as three-dimensional fluid dynamics, highly sophisticated and well-established discretization methods are available whose efficiency and accuracy far exceed those of neural network approaches.
The method of this paper was introduced as the main component of the DNN-MG <cit.> for the instationary Navier-Stokes equations. At each time step, a coarse solution is obtained
by a classical finite element solver, and corrections on finer grids are predicted locally via neural networks.
Here, we focus on a simpler linear problem and aim to understand the mechanism of such hybrid approaches by discussing their a-priori errors and via numerical experiments.
Let Ω⊂ℝ^d, d ∈{2,3} be a domain with polygonal boundary.
We are interested in the weak solution of Poisson's equation
-Δ u = f,
u |_∂Ω = 0,
with a given force term f∈ H^-1(Ω).
For a subdomain ω⊆Ω, let 𝒯_h(ω)={T_i}_i=1^M be a non-overlapping admissible decomposition of ω into convex polyhedral elements T_i such that
ω= ∪_i=1^M T_i. The diameter of element T is denoted by h_T and h = max_T ∈𝒯_h(Ω) h_T.
With ‖·‖_2 we denote the Euclidean norm and for v ∈ C(ω) we define
‖ v ‖_l^2(ω) := ( ∑_x node of 𝒯_h(ω) v(x)^2 )^1/2.
Moreover, let V_h^(r) be the space of piecewise polynomials of degree r ≥ 1 satisfying the homogeneous Dirichlet condition on the boundary ∂Ω, i.e.
V_h := {ϕ∈ C(Ω) s.t. ϕ|_T ∈ P^(r)(T) ∀ T ∈ 𝒯_h(Ω), ϕ|_∂Ω = 0 },
where P^(r)(T) is the space of polynomials of degree r on a cell T∈𝒯_h. We assume that there is a hierarchy of meshes
𝒯_H(Ω)
:=
𝒯_0≼𝒯_1≼⋯≼𝒯_L
=:
𝒯_h(Ω),
where we denote by 𝒯_l-1≼𝒯_l, that each element of the fine mesh T ∈𝒯_l originates from the uniform refinement of a coarse element T'∈𝒯_l-1, for instance, uniform splitting of a quadrilateral or triangular element into four and of a hexahedral or tetrahedral element into eight smaller ones, respectively.
Accordingly we have the nesting V_h^(l-1)⊂ V_h^(l), l = 1, …, L where V_h^(l) is the space defined on the mesh level l.
By a patch 𝒫 we refer to a polyhedral subdomain of Ω; for simplicity we assume that each patch corresponds to a cell of the coarse mesh 𝒯_H(Ω).
By V_h(𝒫) we denote the local finite element subspace
V_h(𝒫) := span{ϕ_h |_𝒫, ϕ_h ∈ V_h
}
and R_𝒫 : V_h → V_h(𝒫) denotes the restriction to the local patch space, defined via
R_𝒫(u_h)(x_i) = u_h(x_i) for each node x_i∈𝒯_h(𝒫).
The prolongation P_𝒫 : V_h(𝒫) → V_h is defined by
P_𝒫(v)(x) = (1/n) v(x) if x is a node of 𝒯_h(𝒫), where n ∈ ℕ is the number of patches containing the node x, and P_𝒫(v)(x) = 0 otherwise.
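To make the roles of R_𝒫 and P_𝒫 concrete, the following Python/NumPy sketch (with a made-up node numbering and patch layout) checks that summing the patch-wise prolongations of the restrictions reproduces a fine-mesh nodal vector:

import numpy as np

# Hypothetical setup: global fine-mesh nodal values and, for each patch,
# the list of global node indices belonging to that patch.
n_nodes = 12
u_h = np.arange(n_nodes, dtype=float)           # nodal values of u_h
patches = [[0, 1, 2, 3, 4], [4, 5, 6, 7, 8], [8, 9, 10, 11]]

def restrict(u, patch_nodes):
    # R_P: copy the nodal values belonging to the patch.
    return u[patch_nodes]

def prolongate(values, patch_nodes, counts, n_nodes):
    # P_P: scatter the patch values back, dividing by the number of
    # patches that share each node so overlapping nodes are averaged.
    out = np.zeros(n_nodes)
    out[patch_nodes] = values / counts[patch_nodes]
    return out

# counts[i] = number of patches containing node i
counts = np.zeros(n_nodes)
for p in patches:
    counts[p] += 1

# Summing the patch-wise prolongations of the restrictions recovers u_h.
u_rec = sum(prolongate(restrict(u_h, p), p, counts, n_nodes) for p in patches)
print(np.allclose(u_rec, u_h))    # True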
The classical continuous Galerkin finite element solution of the problem (<ref>) is u_h ∈ V_h s.t.
(∇ u_h, ∇ϕ)
=
(f, ϕ) ∀ϕ∈ V_h,
with the L^2 inner product (·, ·). We are interested in the scenario where one prefers not to solve (<ref>) on the finest level V_h, due to insufficient hardware resources or excessive computational times, but rather in V_H with H ≫ h.
This is the so-called coarse solution u_H∈ V_H and fulfills (∇ u_H, ∇ϕ)
= (f, ϕ) ∀ϕ∈ V_H. The key idea of our method is to obtain the fine mesh fluctuations u_h-u_H in the form of neural network updates w_𝒩 corresponding to the inputs u_H and f. Hence, the neural network updated solution has the form u_𝒩:=u_H + w_𝒩 in the case where the network operates globally on the whole domain.
A more appealing setting is one where these updates are obtained locally, such that the network acts on the data not on the whole domain at once, but on small patches 𝒫.
In this case, while the training is performed in a global manner, the updates are patch-wise and the network updated solution has the form u_𝒩:=u_H + ∑_𝒫P_𝒫w^𝒫_𝒩.
§ HYBRID FINITE ELEMENT NEURAL NETWORK DISCRETIZATION
§.§ Neural network
In this section we introduce the neural network we use and formalize the definition of finite element/neural network solution.
Let L ∈ℕ be the number of layers and let N_i be the number of neurons on layer i ∈{1,…,L}. Each layer i ∈{ 1, … ,L-1} is associated with a nonlinear function l_i(x) : ℝ^N_i-1→ℝ^N_i with
l_i(x) = σ(W_i x + b_i)
and an activation function
σ : ℝ→ℝ.
The multilayer perceptron (MLP) 𝒩 : ℝ^N_0→ℝ^N_L is defined via
𝒩(x) = W_L (l_L-1∘…∘ l_1)(x) + b_L
where W_i ∈ℝ^N_i× N_i-1 denote the weights
and b_i ∈ℝ^N_i the biases.
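For illustration, such an MLP can be written in a few lines of NumPy; this is a sketch under the convention W_i ∈ ℝ^N_i × N_i-1 used above, with arbitrary layer sizes:

import numpy as np

def mlp(x, weights, biases, sigma=np.tanh):
    # Hidden layers l_i(x) = sigma(W_i x + b_i), i = 1, ..., L-1.
    for W, b in zip(weights[:-1], biases[:-1]):
        x = sigma(W @ x + b)
    # Last layer is affine (no activation): W_L x + b_L.
    return weights[-1] @ x + biases[-1]

rng = np.random.default_rng(1)
sizes = [10, 32, 32, 8]                      # N_0, N_1, N_2, N_3 (= N_L)
weights = [rng.standard_normal((m, n)) for n, m in zip(sizes[:-1], sizes[1:])]
biases = [rng.standard_normal(m) for m in sizes[1:]]

y = mlp(rng.standard_normal(sizes[0]), weights, biases)
print(y.shape)                               # (8,)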
§.§ Hybrid solution
On a patch 𝒫, the network receives a tuple (R_𝒫 u_H, R_𝒫 f), the restrictions of the coarse solution and of the source term, and it returns an approximation w_𝒩^𝒫 ≈ (u_h-u_H)|_𝒫 ∈ V_h(𝒫) to the fine-scale update. In order to obtain a globally continuous
function, the prolongation (<ref>) is employed.
The hybrid solution is defined as
u_𝒩
:=
u_H
+
∑_𝒫
P_𝒫 w_N^𝒫,
where
w_𝒩^𝒫 = ∑_i=1^N W^𝒫_i ϕ_i,
W^𝒫_i is the i-th output of 𝒩(y) and ϕ_i are the basis functions of V_h(𝒫). Here, y = (U_H^𝒫, F_h^𝒫)^T is the input vector, where U_H^𝒫 and F_h^𝒫 are the nodal values of u_H on the coarse mesh 𝒯_H(Ω) and of f on the mesh 𝒯_h(𝒫), respectively.
For simplicity we will mostly use the notation
u_𝒩 = u_H + 𝒩(f)
in place of (<ref>).
Since each function u_H ∈ V_H also belongs to V_h, it has the form u_H = ∑_i=1^N_dof U^i_Hhϕ^i_h with {ϕ^i_h}_i=1^N_dof being the basis of the fine finite element space V_h and U_Hh being the coefficient vector of interpolation of u_H into V_h. As we update the coarse solution u_H on fine mesh nodes, this procedure can be considered as a simple update
of coefficients U^i_Hh, i.e.
u_𝒩 = ∑_i=1^N_dof (U^i_Hh + W^i_𝒩) ϕ^i_h ∈ V_h,
or simply U_𝒩 := U_Hh+ W_𝒩 being the coefficient vector of u_𝒩.
§.§ Training
The neural network is trained using fine finite element solutions obtained on the mesh 𝒯_h(Ω) and with the loss function
ℒ := 1/(N_T N_P) ∑_i=1^N_T ∑_𝒫 ‖ (u_h^f_i - u_H^f_i) - w_𝒩^𝒫,f_i ‖^2_l^2(𝒫)
where N_T is the size of the training set and N_P is the number of patches. Here, w_𝒩^𝒫,f_i stands for the finite element function defined by the network update 𝒩(f_i) on the patch 𝒫. The training set F = {f_1, …, f_N_T} consists of N_T ∈ ℕ source terms f_i together with the corresponding coarse and fine mesh finite element solutions u_H^f_i and u_h^f_i, respectively.
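For concreteness, the patch-wise loss can be evaluated as in the following NumPy sketch (our illustration, not the authors' implementation; the per-sample, per-patch array layout is an assumption):

import numpy as np

def training_loss(fine_minus_coarse, network_updates):
    # fine_minus_coarse[i][P]: nodal values of (u_h^{f_i} - u_H^{f_i}) on patch P
    # network_updates[i][P]:   nodal values of the network update w_N^{P, f_i}
    n_train = len(fine_minus_coarse)
    n_patch = len(fine_minus_coarse[0])
    total = 0.0
    for i in range(n_train):
        for P in range(n_patch):
            diff = fine_minus_coarse[i][P] - network_updates[i][P]
            total += np.sum(diff ** 2)        # squared l2 norm on the patch
    return total / (n_train * n_patch)

# Tiny synthetic example: 2 training samples, 3 patches, 5 nodes per patch.
rng = np.random.default_rng(2)
targets = [[rng.standard_normal(5) for _ in range(3)] for _ in range(2)]
updates = [[t + 0.01 * rng.standard_normal(5) for t in sample] for sample in targets]
print(training_loss(targets, updates))       # small positive number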
§ ON THE A-PRIORI ERROR ANALYSIS
The difference between the exact solution u ∈ H^1_0(Ω) of (<ref>) and the hybrid solution u_𝒩 from (<ref>) can be split as
‖ u - u_𝒩 ‖ ≤ min_f_i∈F ( ‖ u - u_h ‖ + ‖ u_h - u_h^f_i ‖ + ‖ u_h^f_i - u_𝒩^f_i ‖ + ‖ u_𝒩^f_i - u_𝒩 ‖ ),
with u_h^f_i, u_𝒩^f_i ∈ V_h
being the finite element solution and the neural network updated solution
corresponding to the source term f_i, respectively. Let us discuss individual terms
in (<ref>).
* u-u_h is the fine mesh finite element error. Estimates of this error are well-known in the literature and are of 𝒪(h^r) in the H^1 semi-norm.
* (u_h-u_h^f_i) is a data approximation error and in the H^1 semi-norm it can be bounded by ‖ f-f_i ‖_-1 through the stability of the finite element method.
* (u_h^f_i-u_𝒩^f_i) is a network approximation error and is introduced by the approximation properties of the network architecture. This is bounded by the
tolerance ϵ, which depends on the accuracy of the minimization problem (<ref>).
* u_𝒩^f_i-u_𝒩 = (u_H - u_H^f_i) + (𝒩(f) - 𝒩(f_i)) consists of a generalization error of the network and a further error term depending on the richness of the data set. While the term u_H^f_i-u_H
can be handled via the stability of the finite element method, the remaining term requires a stability estimate of the neural network.
Overall, an estimate of
‖ ∇( u-u_𝒩) ‖ ≤ c ( h^r ‖ f ‖_r+1 + ϵ + min_f_i ∈ F { ‖ f-f_i ‖_-1 + ‖ ∇ (𝒩(f) - 𝒩(f_i)) ‖ } )
can be obtained for sufficiently smooth source term f and domain Ω. Improvements of this estimate with the consideration of patch-wise updates is part of an ongoing work.
§.§ Stability of the neural network
The network dependent term of (<ref>) is linked with the stability of the network. For a study of the importance of Lipschitz regularity in the generalization bounds we refer to <cit.>.
Let 𝒩 be a multilayer perceptron (Def. <ref>) and let σ: ℝ→ℝ satisfy |σ(y) - σ(y_i)| ≤ c_0 |y - y_i| with c_0 > 0. Then, on each patch 𝒫, for the inputs y and y^f_i and the corresponding
FE functions 𝒩(f) and 𝒩(f_i) (uniquely defined by the network updates) it holds that
‖ 𝒩(f) - 𝒩(f_i) ‖_𝒫 ≤ c · c_0^N_L · c_W · h^d ‖ y - y^f_i ‖_2
where
c_W := ∏_j=1^N_L ‖ W^j ‖_2.
The definition of the network gives
‖ 𝒩(f) - 𝒩(f_i) ‖_l^2(𝒫) = ‖ W^N_L (z_N_L-1(y) - z_N_L-1(y^f_i)) ‖_2 ≤ ‖ W^N_L ‖_2 · ‖ z_N_L-1(y) - z_N_L-1(y^f_i) ‖_2
where z_i = l_i ∘⋯∘ l_1 and l_i are as defined in (<ref>).
By using the definition of z_j and the Lipschitz constant of σ(·) we obtain for an arbitrary layer j
‖ z_j(y) - z_j(y^f_i) ‖_2 = ‖ σ(W^j z_j-1(y)) - σ(W^j z_j-1(y^f_i)) ‖_2 ≤ c_0 ‖ W^j ( z_j-1(y) - z_j-1(y^f_i) ) ‖_2 ≤ c_0 ‖ W^j ‖_2 · ‖ z_j-1(y) - z_j-1(y^f_i) ‖_2.
Then, by applying (<ref>) recursively from the second to the last layer we obtain
‖ z_N_L-1(y) - z_N_L-1(y^f_i) ‖_2 ≤ c_0^N_L-1 ∏_j=1^N_L-1 ‖ W^j ‖_2 · ‖ y - y^f_i ‖_2.
Hence, by applying it to (<ref>) and using the inequality
‖ v ‖_𝒫^2 ≤ c h^2d ‖ v ‖_l^2(𝒫)^2 ∀ v ∈ V_h(𝒫)
we arrive at the claim.
Lemma <ref> leads to
‖ ∇(𝒩(f) - 𝒩(f_i)) ‖ ≤ c_inv c_1 ( c_Ω h^-1 ‖ f-f_i ‖_-1 + h^d ∑_𝒫 ‖ f-f_i ‖_l^2(𝒫) )
with the constant c_1 = c· c_0^N_L· c_W arising from Lemma above and c_inv and c_Ω arising from inverse and Poincaré estimates, respectively.
The definition of inputs together with the triangle inequality and the inequality
‖ v ‖_l^2(𝒫)^2 ≤ h^-2d ‖ v ‖_𝒫^2 ∀ v ∈ V_h(𝒫)
provides
‖ y - y^f_i ‖_2 ≤ ‖ u_H-u_H^f_i ‖_l^2(𝒫) + ‖ f-f_i ‖_l^2(𝒫) ≤ h^-d ‖ u_H-u_H^f_i ‖_𝒫 + ‖ f-f_i ‖_l^2(𝒫)
for each patch 𝒫. In the whole domain this, together with Poincaré's inequality, leads to
‖ 𝒩(f) - 𝒩(f_i) ‖ ≤ c_1 ( c_Ω h^-1 ‖ ∇ (u_H - u_H^f_i) ‖ + h^d ∑_𝒫 ‖ f-f_i ‖_l^2(𝒫) ).
The stability of the coarse discrete solution and the inverse estimate shows the claim.
A different network architecture may include several layers that perform convolutions. This kind of networks are called convolutional neural networks.
In the two-dimensional setting, this would correspond to replacing l_i of Definition <ref> with a nonlinear function
l^c_i : ℝ^N_i^c × N_i^c→ℝ^N^c_i+1× N^c_i+1 defined as
l^c_i(x) = σ(W_i ∗ x + b_i)
with W_i ∈ℝ^N^*_i × N^*_i and b_i ∈ℝ^N^c_i+1× N^c_i+1
where ∗ is the matrix convolution operator. While N^*_i stands for the dimension of the kernel W_i of the corresponding convolution,
we assume N_i = N_i^c · N_i^c and N_i+1 = N_i+1^c · N_i+1^c. The embedding into the multilayer perceptron is usually performed with the use of
reshape_N ( ℝ^N^2→ℝ^N × N) and
flatten_N (ℝ^N× N→ℝ^N^2) operators so that the dimensions of the convolutional layers match those of the dense layers.
In a scenario where a dense layer j of MLP is replaced with a convolutional layer, equation (<ref>) must be modified as
‖ z_j(y) - z_j(y^f_i) ‖_F = ‖ σ(W^j ∗ z_j-1(y)) - σ(W^j ∗ z_j-1(y^f_i)) ‖_F ≤ c_0 ‖ W^j ∗ ( z_j-1(y) - z_j-1(y^f_i) ) ‖_F ≤ c_0 ‖ W^j ‖_F · ‖ z_j-1(y) - z_j-1(y^f_i) ‖_F.
Hence, for a neural network with an index set of dense layers S_d and convolutional layers S_c the result (<ref>) holds with the modified constant
c_W = ∏_j∈S_d ‖ W^j ‖_2 ∏_j∈S_c ‖ W^j ‖_F
by taking into account that
‖ reshape(·) ‖_F = ‖ · ‖_2 and ‖ flatten(·) ‖_2 = ‖ · ‖_F.
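The constant c_W can be evaluated directly from the network weights; the following sketch (our illustration, assuming dense weights are given as matrices and convolution kernels as 2D arrays, and taking the power of c_0 over the total number of layers) computes the stability factor appearing in the bound:

import numpy as np

def network_lipschitz_bound(dense_weights, conv_kernels, c0=1.0):
    # c_W: product of spectral norms of dense weight matrices and
    # Frobenius norms of convolution kernels; c0 is the Lipschitz
    # constant of the activation (c0 = 1 for tanh).
    c_W = 1.0
    for W in dense_weights:
        c_W *= np.linalg.norm(W, 2)           # spectral norm
    for K in conv_kernels:
        c_W *= np.sqrt(np.sum(K ** 2))        # Frobenius norm
    n_layers = len(dense_weights) + len(conv_kernels)
    return c0 ** n_layers * c_W

rng = np.random.default_rng(8)
dense = [rng.standard_normal((32, 16)), rng.standard_normal((8, 32))]
conv = [rng.standard_normal((3, 3))]
print(network_lipschitz_bound(dense, conv))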
§ NUMERICAL EXPERIMENTS
We consider the two-dimensional Poisson equation on the unit square Ω = (0,1)^2 with homogeneous Dirichlet boundary conditions.
The training data is picked randomly from the set of source terms
F :=
{
f(x,y) = ∑_i=1^4 α_isin(β_iπ(x+C_i)),
C_1,C_2∈ [0,1], C_3,C_4∈ [0,1/2],
α_1=α_2=1/2, α_3=α_4=1/10, β_1=β_2=2, β_3=β_4=4}
together with the corresponding coarse and fine finite element solutions u_H and u_h, respectively.
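A sampled source term from F can be generated as in the following NumPy sketch (function name and random seed are ours; the draws of C_1, …, C_4 and the fixed α_i, β_i follow the definition above):

import numpy as np

rng = np.random.default_rng(3)

def sample_source_term():
    # Draw C_1, C_2 ~ U[0,1] and C_3, C_4 ~ U[0, 1/2]; amplitudes and
    # frequencies are fixed as in the definition of F.
    C = np.concatenate([rng.uniform(0, 1, 2), rng.uniform(0, 0.5, 2)])
    alpha = np.array([0.5, 0.5, 0.1, 0.1])
    beta = np.array([2.0, 2.0, 4.0, 4.0])
    def f(x, y):
        return sum(a * np.sin(b * np.pi * (x + c)) for a, b, c in zip(alpha, beta, C))
    return f

f = sample_source_term()
print(f(0.3, 0.7))    # evaluate a sampled source term at one point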
We employ a multilayer perceptron as described in Definition <ref> with 4 hidden layers, each with 512 neurons and σ(·)=tanh(·) as an activation function. We train it using the Adam optimizer <cit.> and loss function ℒ from <ref>.
Figure <ref> shows the mean error of the proposed method w.r.t. a reference one, which is one level finer than the target one. Here we consider the error on training and testing datasets of different sizes. We also consider different refinement levels, i.e. h=H/2, H/4 and H/8. The x-axis corresponds to the fine step size h and the y-axis to the mean error. Here, the two topmost lines (blue) show the error of the coarse solution, which is used as an input to the neural network. The two bottom-most lines (green) show the error of the fine solution, used for the computation of the loss. The rest of the lines depict the errors of the proposed method for training data of different sizes. Here we observe that given enough data, one is able to get arbitrarily close to the fine solutions used for training.
Figure <ref> shows an example of how the loss function behaves during the training. Here we have trained a network for 400 epochs and have used learning rate decay with a factor of 0.5 every 100 epochs. Due to this one can observe significant drops in the value of loss function at 100, 200 and 300 epochs.
Figure <ref> shows an example of coarse, fine and network solution for a particular right hand side from the test data. Here we observe, that the quality of the network solution is significantly better than the quality of the original coarse solution.
§ ACKNOWLEDGEMENTS
The authors acknowledge the support of the GRK 2297 MathCoRe, funded by the Deutsche Forschungsgemeinschaft, Grant Number 314838170.
1
Margenberg2021
N. Margenberg, D. Hartmann, C. Lessig, and T. Richter.
A neural network multigrid solver for the Navier-Stokes equations.
Journal of Computational Physics, 460:110983, 2022.
raissi2019
M. Raissi, P. Perdikaris, and G.E. Karniadakis.
Physics-informed neural networks: A deep learning framework for
solving forward and inverse problems involving nonlinear partial differential
equations.
Journal of Computational Physics, 378:686–707, 2019.
weinan2018
W. E and B. Yu.
"the deep ritz method: A deep learning-based numerical algorithm for
solving variational problems".
Communications in Mathematics and Statistics, 6(1):1–12, 2018.
bartlett2017spectrally
Peter L Bartlett, Dylan J Foster, and Matus J Telgarsky.
Spectrally-normalized margin bounds for neural networks.
Advances in neural information processing systems, 30, 2017.
kingma2017adam
Diederik P. Kingma and Jimmy Ba.
Adam: A method for stochastic optimization.
In International Conference on Learning Representations (ICLR),
2015.
|
http://arxiv.org/abs/2307.03273v2
|
20230706202112
|
ADASSM: Adversarial Data Augmentation in Statistical Shape Models From Images
|
[
"Mokshagna Sai Teja Karanam",
"Tushar Kataria",
"Shireen Elhabian"
] |
cs.CV
|
[
"cs.CV",
"eess.IV"
] |
ADASSM: Adversarial Data Augmentation in SSM from Images
M. Karanam et al.
Kahlert School of Computing, University Of Utah Scientific Computing and Imaging Institute, University of Utah
{mkaranam, tushar.kataria, shireen}@sci.utah.edu
ADASSM: Adversarial Data Augmentation in Statistical Shape Models From Images
Mokshagna Sai Teja Karanam^1,2 Tushar Kataria^1,2 Shireen Elhabian^1,2
August 1, 2023
=============================================================================
Statistical shape models (SSM) have been well-established as an excellent tool for identifying variations in the morphology of anatomy across the underlying population. Shape models use consistent shape representation across all the samples in a given cohort, which helps to compare shapes and identify the variations that can detect pathologies and help in formulating treatment plans. In medical imaging, computing these shape representations from CT/MRI scans requires time-intensive preprocessing operations, including but not limited to anatomy segmentation annotations, registration, and texture denoising. Deep learning models have demonstrated exceptional capabilities in learning shape representations directly from volumetric images, giving rise to highly effective and efficient Image-to-SSM networks. Nevertheless, these models are data-hungry and, due to the limited availability of medical data, deep learning models tend to overfit. Offline data augmentation techniques that use kernel density estimation (KDE) based methods to generate shape-augmented samples have successfully aided Image-to-SSM networks in achieving comparable accuracy to traditional SSM methods. However, these augmentation methods focus on shape augmentation, whereas deep learning models exhibit an image-based texture bias that results in sub-optimal models. This paper introduces a novel strategy for on-the-fly data augmentation for the Image-to-SSM framework by leveraging data-dependent noise generation or texture augmentation. The proposed framework is trained as an adversary to the Image-to-SSM network, augmenting diverse and challenging noisy samples. Our approach achieves improved accuracy by encouraging the model to focus on the underlying geometry rather than relying solely on pixel values.
§ INTRODUCTION
Statistical shape modeling (SSM) is widely used in the fields of medical image analysis and biological sciences for studying anatomical structures and conducting morphological analysis. It enables shape analysis by facilitating the understanding of the geometrical properties of shapes that are statistically consistent across a population. SSM has diverse applications in neuroscience <cit.>, cardiology <cit.>, orthopedics <cit.>, and radiology <cit.>.
Optimization-based SSM <cit.> methods typically involve anatomy segmentation, data preprocessing (e.g., image resampling, denoising, rigid registration), and optimizing population-level shape representation i.e., correspondence points (or particles), all of which require substantial expertise-driven workflow, involving intensive preprocessing that can be time-consuming. Deep learning approaches for SSM train networks to learn the functional mapping from unsegmented images to statistical representations of anatomical structures <cit.>. This shift towards deep learning-based methods offers a more efficient and automated approach to SSM, bypassing the need for extensive manual preprocessing and leveraging the power of neural networks to learn directly from raw imagery data <cit.>. However, deep learning models are notorious for requiring enormous quantities of data to achieve acceptable performance <cit.>, necessitating the use of data augmentation to supplement the available training data <cit.>.
In the field of deep learning for medical image analysis, data augmentation plays a crucial role <cit.>. Nevertheless, unlike computer vision applications, acquiring a substantial number of segmented medical images is difficult due to privacy concerns, the substantial human effort and expertise required, and the intensive preprocessing involved <cit.>. Off-the-shelf data augmentation methods may not generate augmented samples that promote invariances and improve the task-specific generalizability of the model <cit.>. Therefore, having a large amount of data supplemented with challenging task-specific variations would be extremely beneficial for training deep neural networks and improving model performance. In the field of medical imaging, attempts have been made to employ task-driven automatic data augmentation techniques <cit.> for tasks such as image segmentation and classification <cit.>. For regression tasks, various strategies have been proposed for handling data augmentation that includes, data-dependent shape augmentation for Image-to-SSM networks <cit.> and a mix-up <cit.> based augmentation by interpolating input samples using the similarity of labels <cit.>.
DeepSSM <cit.>, and other variants <cit.>, learn to map unsegmented images to shape models, exhibiting comparable performance to traditional SSM <cit.> methods, as well as in downstream tasks such as atrial fibrillation recurrence <cit.>. DeepSSM relies heavily on offline shape-based data augmentation via kernel density estimation (KDE)
in PCA subspace <cit.>. This shape augmentation approach entails using generative modeling to sample augmented shapes from a probability distribution estimated via KDE. These offline methods have three deficiencies: (1) the generation of augmented samples is independent of the task (shape modeling) at hand, (2) the augmentation process focuses on shape augmentation rather than incorporating noise/texture augmentation, neglecting the inherent texture bias often present in deep learning models <cit.>, which can lead to sub-optimal models in shape analysis, and (3) they require extensive offline data compilation, which is time- and resource-consuming.
We draw inspiration from adversarial domain adaptation <cit.> and adversarial data augmentation for classification tasks <cit.>. Here, we adapted these ideas to regression tasks with application to Image-to-SSM networks. The regression task poses a greater challenge as it is not straightforward to generate challenging adversarial samples for the learning task at hand due to the absence of label-separating hyperplanes. Consequently, we focus our methodology on generating adversarial noise as augmentation. The proposed method results in noise-augmented samples as an alternative to KDE based shape augmentation <cit.> and implicitly drives the model to attend to the shape of the underlying object of interest instead of explicit shape augmentation <cit.>. We also demonstrate that data- and task-dependent noise augmentation is better than off-the-shelf noise augmentation with varied variance levels. The proposed augmentation approach is generic enough to be used for any Image-to-SSM network, but here we focus on DeepSSM <cit.> to showcase the efficacy of the on-the-fly noise augmentation vs offline shape and noise augmentation.
As we focus on image noise augmentation for this work, the shape representation of the augmented images should not be affected. As a result, we employ a contrastive loss to inform the deep learning model that noisy and their corresponding original images should be projected to the same latent representation. This contrastive loss acts as a regularizer to both the augmentation framework and the Image-to-SSM network.
The contributions of this paper are as follows:
* A computationally efficient, automated, on-the-fly adversarial data augmentation method for regression tasks with better generalizability.
* A contrastive loss based regularization that enables enhanced noise generation that is more task- and data-dependent.
* Image-to-SSM and downstream results on left atrium and femur datasets show the efficacy of the proposed approach.
§ METHODOLOGY
This section explains the details of the proposed method and the regularization losses. The block diagram for the proposed approach is shown in Figure <ref>.
§.§ Adversarial Data Augmentation Block
The proposed framework for augmentation aims to enhance the performance of the Image-to-SSM network by generating data-dependent noise. This architecture can be especially useful in the context of SSM tasks, where the input data size is typically limited. Due to the paucity of data samples, deep learning models may suffer from overfitting, leading to poor generalization performance on unseen data. To address this, we integrate an adversarial <cit.> data augmentation approach for regression tasks with applications in shape modeling. The generator produces adversarial data samples that are difficult for the shape model to project into statistical shape representation.
Conditional Noise Generator: The conditional noise generator receives an image 𝐱_1 from the input set X_1 and a randomly sampled Gaussian vector 𝐳 as inputs. It then generates noise 𝐧_1, which is added to the original input volume to produce a noisy augmented sample 𝐱̂_1 = 𝐱_1 + 𝐧_1.
𝐱̂_1 = G(𝐳,𝐱_1) ⊕𝐱_1
Here, G denotes the conditional generator and ⊕ represents voxelwise addition. To maintain controlled augmentations and minimize excessive perturbations, a regularization loss based on total variation (TV) is incorporated <cit.>. TV loss also enables the generation of noise variations that exhibit smooth transitions, which is crucial for capturing the inherent features of real-world images.
𝐋_TV = ||G(𝐳,𝐱_1)||_2
To further regularize the noise generation a discriminator is used. This discriminator uses the remaining input samples from set X_2, as reference distribution, and noisy samples are treated as samples of input distributions. The objective of the discriminator is to assist the generator in producing realistic noise while ensuring that the noisy augmented sample comes from the same distribution as the original data. The generative adversarial network (GAN) aims to strike a balance between meaningful data-specific augmentations and excessive perturbations. The loss of the block is given in the equation below:-
𝐋_GAN = min_G max_D 𝐄_X∈Ω [log D(𝐱_2)] + 𝐄_X∈Ω [log(1-D(G(𝐳,𝐱_1) ⊕ 𝐱_1))] + β 𝐋_TV
where β is hyperparameter.
§.§ Adversary to Image-To-SSM network
The noisy augmented sample is fed into the Image-To-SSM network to obtain the predicted shape representation(ŷ), which is compared to the original shape representation via an RMSE loss.
𝐋(ŷ,𝐲) = RMSE(𝐲,DeepSSM(𝐱̂_1))
The Image-to-SSM network and GAN framework are put in an adversarial relationship with the help of a gradient reversal layer, as shown in the block diagram above. The following objective function aims to minimize the Image-To-SSM network error while maximizing the conditional generator G perturbations, setting up a second adversarial objective:
𝐋_RMSE = 𝐄_X,Y∈Ω [ min_M max_G 𝐋(ŷ, 𝐲) ]
The framework above allows the augmentation model to search along the adversarial direction, leading to the generation of challenging noise augmentations that facilitate the learning of more robust shape features.
§.§ Image-To-SSM Network
The DeepSSM Model employs a deterministic encoder and a deterministic linear decoder. The reconstructed correspondences are obtained as the output of the Image-to-SSM network, providing both shape representation as well as low dimensional latent features for each input volume.
§.§ Shape Regularization Loss
Noise augmentation affects only the texture of the input volume and does not affect the underlying shape. We hypothesize that the shape representation (after DeepSSM) of the Augmented noisy volume (𝐱̂_1) and original volume (𝐱_1), should be closer in the shape space. We use a contrastive loss as an additional regularizer to ensure that both the Image-to-SSM network and GAN account for this.
𝐋_contrastive = -log( exp(sim(DeepSSM(𝐱_1), DeepSSM(𝐱̂_1))) / ∑_j=1^N exp(sim(DeepSSM(𝐱_1), DeepSSM(𝐱̂_j))) )
We propose to use contrastive loss at two different latent representations as shown in Figure <ref>: 1) Correspondences (𝐋_p_contrastive, PC), which are the predicted SSM representation by the Image-to-SSM network. This loss ensures that the augmentation does not affect the final statistical shape representation of the augmented image. 2) Bottleneck (𝐋_b_contrastive, BC), which is the low-dimensional space representation obtained from the Image-to-SSM network. This loss helps the model learn the same latent representation despite noise augmentation. These regularization losses encourage both the generator and shape model to focus more on shape-related information during the learning process and factor out texture variations.
The overall objective function for the proposed model :
𝐋 = α𝐋_GAN + 𝐋_RMSE + λ_1 𝐋_b_contrastive + λ_2 𝐋_p_contrastive
where α,λ_1,λ_2 are hyperparameters.
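A minimal sketch of how the total objective could be assembled is given below (our NumPy illustration, not the authors' code; the InfoNCE-style form of the contrastive terms and the example weights are assumptions):

import numpy as np

def rmse(y_true, y_pred):
    return np.sqrt(np.mean((y_true - y_pred) ** 2))

def contrastive(z_orig, z_aug):
    # InfoNCE-style loss over a batch: each original latent z_orig[i] should
    # be most similar to the latent of its own noisy augmentation z_aug[i].
    z1 = z_orig / np.linalg.norm(z_orig, axis=1, keepdims=True)
    z2 = z_aug / np.linalg.norm(z_aug, axis=1, keepdims=True)
    sim = z1 @ z2.T                      # cosine similarities, shape (N, N)
    log_prob = sim - np.log(np.sum(np.exp(sim), axis=1, keepdims=True))
    return -np.mean(np.diag(log_prob))

def total_loss(l_gan, y, y_hat, b_orig, b_aug, p_orig, p_aug,
               alpha=1.0, lam1=1.0, lam2=1.0):
    return (alpha * l_gan
            + rmse(y, y_hat)
            + lam1 * contrastive(b_orig, b_aug)     # bottleneck latents
            + lam2 * contrastive(p_orig, p_aug))    # correspondence latents

rng = np.random.default_rng(0)
b1, b2 = rng.standard_normal((8, 64)), rng.standard_normal((8, 64))
p1, p2 = rng.standard_normal((8, 3072)), rng.standard_normal((8, 3072))
y, y_hat = rng.standard_normal((8, 3072)), rng.standard_normal((8, 3072))
print(total_loss(0.5, y, y_hat, b1, b2, p1, p2))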
§ RESULTS
We use the same Image-to-SSM (DeepSSM <cit.>) model architecture across all experiments to ensure that variations in model performance can only be attributed to different augmentation techniques. As a baseline for comparison, we train an Image-to-SSM architecture without any augmentations (NoAug). Additionally, we train another model using KDE <cit.> augmentation (KDE <cit.>). We also compute two other baselines with off-the-shelf Gaussian noise augmentation with different variances (σ = 1 and 10).
§.§ Metrics
Root Mean Squared Error (RMSE): To measure the error, we calculate the average root mean squared error (RMSE) between the predicted and ground truth 3D correspondences; this is achieved by computing the RMSE for the x, y, and z coordinates and averaging them as shown in (7)
RMSE = 1/3 ( RMSE_x + RMSE_y + RMSE_z)
For N 3D correspondences, RMSE_x = √(||C_x - C_x^'||_2^2/N). The same calculation is applied to RMSE_y and RMSE_z for the respective coordinates. Additionally, we calculate the RMSE error for each correspondence point as, RMSE_i = √(||C_x^i - C_x^'i||_2^2 + ||C_y^i - C_y^'i||_2^2 + ||C_z^i - C_z^'i||_2^2/3) The per-point RMSE helps us assess the accuracy of DeepSSM in modeling various local anatomical features. For all experiments, the shape representations were calculated on same test data (held-out data) using the trained DeepSSM model and were only used for inference.
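These correspondence metrics can be computed as in the following NumPy sketch (our illustration; the array shapes are assumptions):

import numpy as np

def correspondence_rmse(C_true, C_pred):
    # C_true, C_pred: arrays of shape (N, 3) with the x, y, z coordinates
    # of the N ground-truth / predicted correspondence points.
    per_axis = np.sqrt(np.mean((C_true - C_pred) ** 2, axis=0))   # RMSE_x, _y, _z
    rmse = per_axis.mean()                                        # Eq. (7)
    per_point = np.sqrt(np.mean((C_true - C_pred) ** 2, axis=1))  # per-point RMSE_i
    return rmse, per_point

rng = np.random.default_rng(4)
C = rng.standard_normal((1024, 3))
rmse, per_point = correspondence_rmse(C, C + 0.1 * rng.standard_normal(C.shape))
print(round(rmse, 3), per_point.shape)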
Surface-to-Surface Distance (mm): The surface-to-surface distance is measured between the ground truth mesh and the mesh reconstructed from the predicted correspondences by DeepSSM. This distance provides a more precise measure of how well the correspondences adhere to the shape and indicates their suitability for anatomy segmentation.
Furthermore, we validate the effectiveness of DeepSSM with the learned shape representations by utilizing its correspondences for various downstream analysis applications. The specific downstream applications vary for each dataset and are described in separate subsections below.
§.§ Femur
Data Description and Processing:
The femur dataset consists of 49 CT images of the femur bone, of which 42 are considered healthy with no morphological abnormalities. DeepSSM <cit.> requires generating point distribution models (or correspondences) for the training images. We use ShapeWorks <cit.> to optimize a shape model with 1024 correspondences. Along with the training and validation data, we also randomly selected 7 controls and 2 CAM-FAI scans for testing the DeepSSM model.
To meet GPU memory requirements, each image has been downsampled by a factor of 2 from 260 × 184 × 235 (0.5mm isotropic voxel spacing). The resulting downsampled 3D volumes have isotropic voxel spacing of 1mm, with dimensions of 130 × 92 × 117 voxels. The training images are divided into training and validation sets with 80% -20% split.
Training Specifics: Empirically we set α = 1 and β = 0.1. The default parameters and configuration are used to get the result for KDE <cit.> augmentation for Femur dataset. The training process involves optimizing the loss on correspondences for 1500 epochs, employing a data augmentation framework based on validation loss. For all ADASSM experiments, involving the proposed regularization losses, a learning rate of 5e-5 is utilized while ADASSM itself is trained with a learning rate of 1e-5. The generator and discriminator learning rates are set to 5e-3, except for ADASSM+BC+PC, which employs a learning rate of 1e-3. A batch size of 4 is used for training all the proposed models and baseline models. In the generator, the noise perturbation range is set to 500 for ADASSM experiments.
Evaluation and Analysis: Figure <ref> visualizes the RMSE results alongside the surface-to-surface distance. Augmenting the DeepSSM model with Gaussian noise with different variances without any adversarial training improves the RMSE when compared to the KDE augmentation<cit.>. However, an increase in the surface distance of the predicted correspondences indicates misalignment with the ground truth femur bone segmentation. This result shows that standard noise augmentation can provide better results for Image-to-SSM networks.
We can observe that for the proposed ADASSM (data-dependent noise augmentation), RMSE results are better compared to both KDE <cit.> and Gaussian noise. The addition of contrastive regularization losses further improves RMSE error and surface distance, which proves that data-dependent noise-augmented samples are better compared to Gaussian augmented samples and shape augmentation <cit.>.
Visualizations of surface-to-surface distance for the best, median, and worst cases for the test set are shown in Figure <ref>. Upon careful examination of these visualizations, we can observe that the proposed models demonstrate a remarkable reduction in errors in critical regions such as the greater trochanter, growth plate, femoral neck, and epiphyseal lines in the best-case scenario. In the median case, a detailed analysis of both views reveals that in View (1), the KDE baseline <cit.> exhibits some errors around the trochanter region that are substantially reduced by the proposed models. In View (2), the error around the trochanter region is completely reduced with the ADASSM+BC+PC model. In the worst-case scenario, View (2) displays the majority of errors in the KDE baseline <cit.>, but these errors are significantly diminished, particularly in the lower trochanter region, with the employment of the ADASSM variants.
Downstream Task - Group Differences: To evaluate the effectiveness of the learned shape representations using the proposed models, we conducted a downstream analysis aimed at evaluating whether the models can accurately capture group differences <cit.> in medically relevant regions. To achieve this, we formed two groups: a control group and a pathology group consisting of CAM-FAI cases. We calculated the mean differences (μ_normal and μ_cam-FAI) between these groups and visualized the differences on a mesh and compared the group differences obtained using the predicted correspondences from the proposed models and the ShapeWorks PDM model. We utilized the entire dataset, including both training and testing samples, for these group differences, and the results are presented in Figure <ref>.
Each group difference illustrates the transition from the mean shape of the pathological group to that of the control group, overlaid on the mean pathological scan. Interestingly, we observed that the group differences between the state-of-the-art PDM model and ADASSM+PC were quite similar compared to baseline KDE <cit.>. In some cases, the proposed models exhibited similar differences in medically relevant regions of the femur, whereas in other areas, the models identified additional variations. These findings suggest that the established correspondences can be employed to characterize CAM deformity effectively.
§.§ Left Atrium
Data Description and Processing:
The left atrium dataset consists of 176 late gadolinium enhancement (LGE) MRI images from patients that have been diagnosed with atrial fibrillation (AF). These scans are acquired after the first ablation. An 80-20% split is used, where 146 volumes are used to train the model and 30 scans are used to test the DeepSSM network. To generate point distribution models (or correspondences) for training images, we utilize ShapeWorks <cit.> to optimize a shape model with 1024 correspondences.
For training purposes, the MRIs are downsampled from 235 × 138 × 175 (0.625mm isotropic voxel spacing) to 117 × 69 × 87 (1.25mm voxel spacing) by a factor of 2.
Training Specifics : The results for KDE augmentation <cit.> on the Left Atrium dataset are obtained using the default parameters and configuration. With the proposed data augmentation framework DeepSSM model is trained for 1000 epochs. For the various ADASSM models with the aforementioned regularization losses, a learning rate of 1e-4 is utilized, while ADASSM itself is trained with a learning rate of 5e-3. A batch size of 6 is employed during the training of both the proposed and baseline models. In all ADASSM experiments, the noise perturbation range in the generator is set to 100. Following the publication of the manuscript, we plan to make the training models, implementation code, and relevant hyperparameters publicly available.
Evaluation and Analysis:
Figure <ref> displays the RMSE results alongside the surface-to-surface distance. We can make the following observations from the bar graph:- 1) When augmenting the DeepSSM model with Gaussian noise of varying variances without any adversarial training, we find that it does not improve performance, which may be because the original dataset already has more intensity variation when compared to CT volumes. 2) By enhancing the data- and task-dependency of the noise and integrating the proposed data augmentation framework with various regularization losses, the methodology surpasses the baseline Gaussian noise and KDE shape augmentation framework <cit.>. In the Left Atrium, the performance disparity between the ADASSM variants is more pronounced compared to the Femur. This can be attributed to the significant variations observed in the Left Atrium, where the proposed method excels in effectively regulating the Image-to-SSM task.
In Figure <ref>, visualizations of the surface-to-surface distance are presented for the best, median, and worst cases in the test set. For best-case and median-case views, the proposed model plainly outperforms other methods, while worst-case views are comparable.
Downstream Task - AF Recurrence Prediction: The shape of the left atrium can provide insights into the recurrence of atrial fibrillation (AF) <cit.>. The dataset has binary outcome labels indicating whether patients experienced AF recurrence after ablation. The goal is to estimate the probability of AF recurrence based on the learned shape representations. We use PCA projections of the shape representations as features for a Multi-Layer Perceptron (MLP) for classification.
The results are summarized in Table <ref>. Compared with the traditional SSM <cit.> and KDE, we observe similar performance. All ADASSM variants perform on par with the PDM, with the ADASSM+PC model outperforming the baselines by capturing better shape descriptors for the left atrium than the PCA scores learned in other models. Due to the fact that the classification model is based on the PCA scores of correspondences, ADASSM+PC has the highest accuracy, as the contrastive loss will bring the correspondence's latent space representation closer. But if we train a classifier with non-linear features (other than PCA), ADASSM+BC+PC might result in the best accuracy.
§.§ Training Time
The proposed augmentation method not only improved model performance for both the left atrium and femur datasets but also significantly reduces the training time by approximately 60% compared to the baseline method <cit.> as shown in Table <ref>.
§ CONCLUSION AND FUTURE WORK
In this study, we introduced a novel methodology by proposing an adversarial data augmentation framework for generic regression tasks with applicability to Image-to-SSM networks. Using data-dependent noise augmentation, the proposed method seeks to discover effective shape representations for three-dimensional volumes. By generating challenging augmentations during model training, the proposed method eliminates the need for offline data augmentation, effectively training a more accurate Image-to-SSM network. The proposed noise augmentation framework outperforms the shape augmentation framework <cit.> and standard noise augmentation, demonstrating that data-dependent noise aids the model by implicitly attending to shape. Through downstream task analysis, we confirmed that the proposed method effectively taught models robust shape descriptors that capture pertinent pathology information. In addition, compared to earlier frameworks for shape augmentation, the proposed methodology is not only more robust but also faster. The limitation of the proposed framework is that it trains only on data-dependent intensity/noise augmentations and does not take shape augmentation into account. We plan to extend this framework to data-dependent shape augmentation as well.
splncs04
|
http://arxiv.org/abs/2307.01378v1
|
20230703221617
|
A CNN regression model to estimate buildings height maps using Sentinel-1 SAR and Sentinel-2 MSI time series
|
[
"Ritu Yadav",
"Andrea Nascetti",
"Yifang Ban"
] |
cs.CV
|
[
"cs.CV",
"cs.AI",
"eess.IV"
] |
A CNN regression model to estimate buildings height maps using Sentinel-1 SAR and Sentinel-2 MSI time series
Ritu Yadav, Andrea Nascetti, Yifang Ban
August 1, 2023
=====================================================================
Accurate estimation of building heights is essential for urban planning, infrastructure management, and environmental analysis. In this study, we propose a supervised Multimodal Building Height Regression Network (MBHR-Net) for estimating building heights at 10m spatial resolution using Sentinel-1 (S1) and Sentinel-2 (S2) satellite time series. S1 provides Synthetic Aperture Radar (SAR) data that offers valuable information on building structures, while S2 provides multispectral data that is sensitive to different land cover types, vegetation phenology, and building shadows. Our MBHR-Net aims to extract meaningful features from the S1 and S2 images to learn complex spatio-temporal relationships between image patterns and building heights. The model is trained and tested in 10 cities in the Netherlands. Root Mean Squared Error (RMSE), Intersection over Union (IOU), and R-squared (R2) score metrics are used to evaluate the performance of the model. The preliminary results (3.73m RMSE, 0.95 IoU, 0.61 R^2) demonstrate the effectiveness of our deep learning model in accurately estimating building heights, showcasing its potential for urban planning, environmental impact analysis, and other related applications.
Building Height Estimation, Sentinel, Deep Learning, Fusion, Regression, Time Series.
§ INTRODUCTION
More than half of the world's population currently lives in cities. By 2050, an estimated 7 out of 10 people will likely live in urban areas. While cities contribute more than 80% of global GDP, they are also responsible for major energy consumption and carbon emissions <cit.>. Urbanization monitoring is essential to assess its impact on the environment and support sustainable development. Accurate building height estimation plays an important role in urban planning, as it is an indicator of the urban heat island effect, population, energy consumption, and urban climate.
Earth Observation (EO) has been highlighted as an effective tool for mapping large-scale human settlements. Several methodologies have been developed to extract building footprints in the last decades. Various large-scale global urban footprint data sets are available and widely used by the scientific community <cit.>. However, these data sets are intrinsically two-dimensional and do not provide information on building height.
In recent years, some studies have tried to fill this gap and estimate building heights from satellite imagery. For example, <cit.>, proposed a Random Forest (RF) regressor for continental-scale height mapping at 1 km spatial resolution. The authors used Landsat-8 OLI, Sentinel-1 SAR and various handcrafted spatial features along with auxiliary data. Reference data was derived from a combination of open street maps, government websites and commercial maps.
The authors of <cit.> developed a VVH building height indicator from Sentinel-1 SAR data and estimated building heights at 500m resolution. The indicator was evaluated in major cities in the US with ICESat data as reference and achieved an RMSE of 1.5m.
<cit.> extended the World Settlement Footprint (WSF) <cit.>, including the building heights derived from the DSM collected by the TanDEM-X mission, and generated the WSF 3D data set at 90m resolution. The estimated building heights have been validated, showing a promising accuracy with a 6.01m RMSE score. However, it relies on a commercial DSM that is not easy to update frequently.
<cit.> presented a Support Vector Machine (SVM) regression model to derive building heights at 10m resolution with RMSE of 3.2m to 4.2m; the authors used a set of handcrafted spatial and temporal features from Sentinel-1 and Sentinel-2 time series as input to the model. The approach is tested in several cities in Germany using available ALS (Airborne Laser Scanner) data as a reference.
<cit.> estimated building height for China at 10m resolution and achieved 6.1m RMSE. The authors used a combined approach from <cit.> and <cit.> with additional ALOS PALSAR, WFS footprints and DEM data. The reference data is derived from Baidu map services with an assumption of each floor height to be 3m.
The aim of this study is to investigate a supervised Convolutional Neural Network (CNN) based regression model for estimating building height using only freely available Sentinel-1 SAR and Sentinel-2 MSI time series data. We frame the task of generating building height maps as a pixel-wise regression task, assuming the following: (1) zero-pixel values represent no buildings (as usual in urban footprint data), and (2) pixel values greater than 1 directly correspond to estimated building height. We developed a CNN regression model based on the U-Net architecture that takes Sentinel-1 and Sentinel-2 multi-temporal data and estimates building height at a 10m spatial resolution.
§ DATA DESCRIPTION
The data used in this study include the Sentinel-1 Ground Range Detected (GRD) and Sentinel-2 MSI Level-2A products. We collected data on the ten largest cities in the Netherlands, namely Amsterdam, Rotterdam, The Hague, Utrecht, Eindhoven, Groningen, Breda, Tilburg, Nijmegen and Almere.
For reference, we used the 3D BAG data developed by the 3D Geoinformation Group of the Delft University of Technology. The database is completely open source and automatically generated from the Buildings and Addresses Register (BAG), providing 2D footprints, and from the National Altimetric Model (AHN) derived from ALS data. The database contains multiple Levels Of Detail (LOD) building models (LOD1.2, LOD1.3 and LOD 2.2). We selected LOD1.3 data for this study.
We created non-overlapping tiles of size 128x128 pixels within the administrative boundaries of the 10 metropolitan areas provided by the European Environment Agency. Corresponding to each tile we collected S1 and S2 time series using Google Earth Engine's python API <cit.> and collected LOD1.3 data from 3D BAG database. For S2 MSI data, we generated monthly cloud-free composites and downloaded 5 bands (Red, Green, Blue, NIR and SWIR). For S1 SAR data we first computed the monthly average to reduce the speckle for both ascending and descending orbits, then downloaded 4 bands (VV, VH polarizations for both orbits). The S2 data contains 12 images per tile (one image for each month) and the year is matched with the acquisition of AHN data. The input data is downloaded at 10m spatial resolution, and the reference data are rasterized and resampled to match 10m. We divided the dataset into training and test set using an 80-20 ratio, resulting in 1,737 training samples and 434 test samples.
§ METHODOLOGY
§.§ MBHR-Net Architecture
Figure <ref> shows the architecture of the proposed MBHR-Net, consisting one branch for learning multispectral features of S2 images and another branch for learning SAR backscatter features of S1 images. The S2 branch takes a five-channel input (red, green, blue, NIR and SWIR bands) and the S1 branch takes a four-channel input (VV, VH for both descending and ascending orbits).
We adopted the U-Net architecture, a widely used encoder-decoder based segmentation network. An encoder compresses the salient features (feature maps) of the input images and a decoder upsamples the compressed features to predict an output of the same size as the input. The encoder block has a number of repetitions (levels) of the sequence: convolutional layer, max-pooling layer and batch normalization layer. The decoder has a sequence of convolutional layers, upsampling layers and batch normalization layers to output a segmentation map.
Our MBHR-Net contains two encoders, one in each branch, to learn different modality features separately. We adopted ResNet50 with four levels as the encoder. The output feature maps of each level are of different sizes, capturing different semantics. From each encoder, four feature maps are extracted. The sizes of the feature maps are (64x64), (32x32), (16x16) and (8x8). These multiscale features are fused using an element-wise concatenation operation. Via skip connections, the fused feature maps are combined with the decoder layers of the same size. These skip connections help the decoder network to condition not only on the latent representation but also on intermediate representations of the encoder, which leads to fine-grained details in predictions <cit.>. The combined feature maps are upsampled and processed through the decoder layers. At the end of the decoder network we use a convolutional layer with a (1x1) kernel and ReLU activation function, making it a regression layer.
§.§ Augmentation strategy and Training
For one reference patch we have 24 input images i.e. 12 time series images for each modality. These 12 images capture surrounding features in different months. We assume that in a 12-month period, there are negligible changes in building heights but the season changes surrounding conditions. Therefore, the 12-month images can be treated as augmented images, creating 12 augmented pairs. With augmentation, the size of training samples increased from 1734 to 1734x12.
As training losses we used a weighted combination of two regression losses, Mean Squared Error loss (MSE) and Cosine Similarity (CS) loss given in equation Eq. <ref> – <ref>. We used 0.8 weight for CS loss and 0.2 for MSE loss.
MSE = 1/N ∑_i=1^N (y_true,i - y_pred,i)^2
CS = -∑_i ŷ_true,i · ŷ_pred,i, where ŷ_true and ŷ_pred denote the l2-normalized height maps
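A minimal sketch of the weighted loss is given below (our NumPy illustration; the l2 normalization in the cosine-similarity term is an assumption consistent with the CS definition above):

import numpy as np

def combined_loss(y_true, y_pred, w_cs=0.8, w_mse=0.2):
    # Per-pixel mean squared error between reference and predicted heights.
    mse = np.mean((y_true - y_pred) ** 2)
    # Cosine-similarity loss on the flattened, l2-normalized height maps.
    a = y_true.ravel() / (np.linalg.norm(y_true) + 1e-8)
    b = y_pred.ravel() / (np.linalg.norm(y_pred) + 1e-8)
    cs = -np.dot(a, b)
    return w_cs * cs + w_mse * mse

rng = np.random.default_rng(5)
ref = rng.uniform(0, 30, (128, 128))           # reference height map (m)
pred = ref + rng.normal(0, 2, ref.shape)       # noisy prediction
print(combined_loss(ref, pred))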
We trained the model for 100 epochs with batch size 4, adam optimizer and 0.0001 as initial learning rate. For better convergence, the learning rate is decayed until 0.00001. The decay steps are controlled with the "reduce on plateau" method. The code is implemented in Keras and the model is trained for 6 hours on Google colab GPU.
§ RESULTS AND EVALUATION
The predicted building height maps of MBHR-Net are first filtered using building footprints obtained by binarizing the reference height maps. For binarization, a pixel is set to building (1.0) if the pixel value is > 1.0, otherwise it is set to no building (0.0). The filtered building height maps are evaluated using two metrics, Root Mean Square Error (RMSE) and the R^2 score, given in Eq. <ref>, <ref>, where n is the number of validation samples, BH_pred, i is the predicted building height and BH_ref, i is the reference building height. RMSE indicates the accuracy of predicted heights with respect to the reference and the R^2 score estimates the model effectiveness in learning variance in the building heights.
RMSE = √(∑_i=1^n(BH_ref, i - BH_pred, i)^2/n)
R^2 = 1 - (n-1)∑_i=1^n(BH_ref, i - BH_pred, i)^2/(n-2)∑_i=1^n(BH_ref, i - BH̄_ref)^2, where BH̄_ref denotes the mean reference building height
For accurate building height estimation, it is important to ensure the alignment of predicted buildings with the reference. We evaluated building alignment using the well-known Intersection over Union (IoU) metric. RMSE and R^2 scores are calculated using reference and network predictions directly, whereas IoU is calculated on binarized (building and no building) references and predictions.
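For concreteness, the evaluation can be sketched as follows (our NumPy illustration; the 1 m binarization threshold follows the text above and the adjusted form of R^2 follows the equation above):

import numpy as np

def evaluate(height_ref, height_pred, threshold=1.0):
    # Building masks from the binarized reference and prediction maps.
    mask = height_ref > threshold
    pred_mask = height_pred > threshold

    # RMSE and R^2 on the predictions filtered by the reference footprint.
    ref, pred = height_ref[mask], height_pred[mask]
    n = ref.size
    rmse = np.sqrt(np.mean((ref - pred) ** 2))
    ss_res = np.sum((ref - pred) ** 2)
    ss_tot = np.sum((ref - ref.mean()) ** 2)
    r2 = 1.0 - (n - 1) * ss_res / ((n - 2) * ss_tot)

    # IoU between binarized prediction and reference footprints.
    iou = np.logical_and(mask, pred_mask).sum() / np.logical_or(mask, pred_mask).sum()
    return rmse, r2, iou

rng = np.random.default_rng(7)
ref = rng.uniform(0, 30, (117, 87))                      # synthetic reference heights (m)
pred = np.clip(ref + rng.normal(0, 3, ref.shape), 0, None)
print(evaluate(ref, pred))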
A high-quality building height estimation model is characterized by a low RMSE, a high IoU, and a high R^2 score. These metrics provide insight into different aspects of the model's performance. The RMSE, IoU and R^2 scores of the proposed MBHR-Net are 3.73 m, 0.95 and 0.61. The RMSE score of 3.73 m suggests that, on average, the model's accuracy is approximately one floor showing a remarkable accuracy considering the spatial resolution of S1 and S2 imagery (10-20 m). The IoU score of 0.95 indicates that the predicted height regions overlap significantly with the ground truth regions, demonstrating the model's ability to precisely identify buildings. The R^2 score is 0.61, suggesting that the model can explain a substantial portion of the variance in building heights but the model may have limitations in accurately capturing certain factors or complexities affecting building heights.
Figure <ref> shows a scatter plot between predicted heights and reference heights of all pixels with values greater than 1 in the test set. The scatter plot shows a fair correlation between reference heights and predictions, but there is an overall underestimation of heights, which we aim to improve in future work.
We also present a few test samples for qualitative evaluation (see Figure <ref>). The first two rows show good height estimation examples where the predicted heights are accurate with few underestimations. The third sample shows comparatively less accurate height estimations with error ranges of 0 to 4 meters. This is possibly an example of the type of complexity which is not accurately learned by the model. The high IOU score of the model can be verified from all samples showing a good alignment between the predicted buildings and the ground truth buildings, including the boundary areas.
§ CONCLUSION
In this study, we developed a deep learning model for building height estimation using combined Sentinel-1 SAR and Sentinel-2 MSI time series. The performance evaluation of MBHR-Net demonstrated promising accuracy in both height estimation, with an RMSE of 3.73 m, and building footprint delineation, with a 95% IoU. These results indicate the potential of MBHR-Net for estimating building heights with accurate building footprint delineation. Potential directions for future investigation include expanding the dataset to cover different geographic regions and building types, incorporating additional data sources, and developing more advanced deep learning models to handle the complexity of such large data and build a more generalized approach.
|
http://arxiv.org/abs/2307.02661v1
|
20230705213037
|
Many-objective Optimization via Voting for Elites
|
[
"Jackson Dean",
"Nick Cheney"
] |
cs.NE
|
[
"cs.NE",
"cs.AI"
] |
0009-0007-3276-1890
[email protected]
University of Vermont
Burlington
Vermont
USA
05401
0000-0002-7140-2213
[email protected]
University of Vermont
Burlington
Vermont
USA
05401
Real-world problems are often comprised of many objectives and require solutions that carefully trade-off between them. Current approaches to many-objective optimization often require challenging assumptions, like knowledge of the importance/difficulty of objectives in a weighted-sum single-objective paradigm, or enormous populations to overcome the curse of dimensionality in multi-objective Pareto optimization. Combining elements from Many-Objective Evolutionary Algorithms and Quality Diversity algorithms like MAP-Elites, we propose Many-objective Optimization via Voting for Elites (MOVE). MOVE maintains a map of elites that perform well on different subsets of the objective functions. On a 14-objective image-neuroevolution problem, we demonstrate that MOVE is viable with a population of as few as 50 elites and outperforms a naive single-objective baseline. We find that the algorithm’s performance relies on solutions jumping across bins (for a parent to produce a child that is elite for a different subset of objectives). We suggest that this type of goal-switching is an implicit method to automatic identification of stepping stones or curriculum learning. We comment on the similarities and differences between MOVE and MAP-Elites, hoping to provide insight to aid in the understanding of that approach – and suggest future work that may inform this approach’s use for many-objective problems in general.
Many-objective Optimization via Voting for Elites
Nick Cheney
August 1, 2023
=================================================
§ INTRODUCTION
Complex real-world optimization problems often require solutions that carefully balance non-linear and non-intuitive trade-offs between many competing objectives. As a result, evolutionary algorithms excel at finding solutions to multi-objective problems, with approaches such as Pareto optimization able to simultaneously explore a wide range of trade-offs between objectives and optimize each goal synergistically, in spite of, or agnostic to other competing objectives <cit.>.
However, evolving solutions to many-objective problems becomes increasingly challenging as the number of objectives grows <cit.>. The population size required to maintain Pareto fronts scales exponentially and quickly becomes computationally infeasible <cit.>. Alternative approaches that collapse the many objectives down into a single weighted-sum objective or optimize them on a schedule may not properly weight/order each objective in the aggregate fitness function/schedule, as doing so would require knowledge of the dynamics and trade-offs between the objectives a priori <cit.>.
Existing approaches to optimizing many objectives (see <cit.> for a review) often assume that this privileged knowledge is available and assign different weights to the objectives.
Pareto dominance is the traditional way to compare solutions on multiple objectives without weighting <cit.>. Existing algorithms, such as NSGA-II <cit.> and SPEA2 <cit.>, are effective at approximating a theoretical Pareto front (the set of non-dominated solutions) when there are few objectives. However, extrapolating from the population sizes determined in <cit.>, we estimate that we would need a population of over 8000 to optimize our 14 objectives via Pareto optimization.
A previous attempt to improve on Pareto dominance for use with many objectives is PPD-MOEA, which uses partial Pareto dominance on subsets of objectives <cit.>. Unlike our approach, PPD-MOEA uses only two subsets, fixes the ordering of objectives in advance, and is tested on at most 10 objectives.
Determining the attributes of goals that lead to intermediate stepping stones towards a complex solution is a significant challenge for evolutionary algorithms
<cit.>.
Open-ended novelty-seeking and Quality Diversity algorithms similarly seek to explore a diversity of novel solutions in hopes of discovering promising stepping stones, and may combine this with a pressure to become highly fit within a wide variety of behavioral/phenotype niches <cit.>.
MAP-Elites <cit.> is a popular quality diversity algorithm with a map of discrete cells, each representing a phenotypic behavioral characteristic. Each cell maintains one member of the population: the top-performing (elite) individual, according to a global objective function, among solutions matching the subset of behavior characteristics assigned to that cell.
This approach generates significant Quality Diversity, while implicitly overcoming the problem of determining stepping stones or learning curriculum order, as offspring for any given subset of the phenotypic space compete to become the elite in every other cell within the map – constantly goal-switching to find potential stepping stones that may enable them to perform well on other phenotypic subsets <cit.>.
Multi-Objective MAP-Elites <cit.> fills each cell in the behavior map with a Pareto front rather than a single individual.
Unlike in our work, each cell corresponds with a unique behavior descriptor and optimizes all objectives simultaneously.
§ APPROACH
Inspired by these approaches, we introduce a new variation of the MAP-Elites algorithm for optimizing many-objective search problems. Our approach, Many-objective Optimization via Voting for Elites (MOVE), defines each cell within a map as a subset of the many objectives that make up the overall search problem, and maintains a map of elites that are the best solutions found for each objective-combination in the map (Fig. <ref>).
While this approach inherits the structure, goal-switching, and diverse stepping-stones of the above approaches, its goal of finding the single optimal solution for the many-objective problem means that it is not exactly a quality diversity algorithm. Quality diversity algorithms seek to maximize the sum of fitness across all cells (phenotypic niches) in the map, while MOVE is more similar to approaches which construct building blocks that all ultimately lead to a single aggregate solution <cit.> as it explores many different subsets of the possible trade-offs and relationships between objectives in parallel.
In MOVE, the population of genomes is stored in a map of cells. Each cell is assigned a fixed number of fitness functions selected randomly from all possible combinations.
Every cell contains up to one elite, the most fit solution found thus far on the objectives in that cell. During selection and reproduction, each elite produces one child, a mutated copy of itself.
For every cell on the map, each child is compared to the current elite
by summing the number of fitness functions (of the cell’s assigned functions) on which the child scores higher than the elite.
If the child earns more than half of these ‘votes’, it replaces the elite.
Further intuition for MOVE is provided in Appendix <ref>
and a full implementation is available at <github.com/uvm-neurobotics-lab/MOVE>.
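Since the full implementation is linked above, the following is only a minimal, unoptimized sketch of one MOVE generation (in the default setting where a child may replace any number of cells); the data structures and function names are illustrative.

def move_generation(cells, mutate, fitness_fns):
    # cells maps cell_id -> (elite_genome, tuple of indices into fitness_fns).
    children = [mutate(elite) for elite, _ in cells.values()]
    for child in children:
        child_scores = [f(child) for f in fitness_fns]
        for cell_id, (elite, fn_idx) in cells.items():
            elite_scores = [fitness_fns[i](elite) for i in fn_idx]
            # One vote per objective assigned to this cell that the child wins.
            votes = sum(child_scores[i] > s for i, s in zip(fn_idx, elite_scores))
            if votes > len(fn_idx) / 2:
                cells[cell_id] = (child, fn_idx)
    return cells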
While this generic algorithm is agnostic to the particular type of solution being evolved, here we use a 14-dimensional optimization problem known to be fraught with local optima <cit.>: target image generation via Compositional Pattern Producing Networks <cit.>.
In the experiments below we set out to evolve synthetic images which maximize 14 different Image Quality Assessment metrics from the PyTorch Image Quality (PIQ) library <cit.> (Appendix <ref>).
This approach allows evolution to exploit goal-switching, since a solution that performs well on one fitness function may produce offspring that perform well on different fitness functions. Unlike traditional scheduling approaches, no information about the ideal order of tasks is required as the algorithm designer does not have to choose when solutions jump between cells.
Results reported here employ the target image in Fig. <ref>.
Results from additional target images are reported in Appendix <ref>.
Each condition includes 20 trials of 1000 generations. Unless otherwise noted, each trial used a map (i.e. population size) of 100 cells. We evaluate significance via a Wilcoxon rank-sum test with α=0.05.
To analyze how MOVE compares with a naive multi-objective approach, we conducted two baseline experiments:
Single-objective hillclimbers each optimize only one of the 14 objectives. Each of the 14 hillclimbers produces 7 children per generation, approximating the computational cost of MOVE.
The all-objective hillclimber control consists of a single hillclimber with 100 children per generation, resulting in the same total compute as MOVE. The fitness of a given image is the mean normalized fitness of all 14 objectives, as is the case in a traditional multi-term fitness function.
§ RESULTS
§.§ Functions per cell
While the general concept and flow of the MOVE algorithm is straightforward to describe, its ideal implementation and the impact of such decisions on performance relative to existing baselines are largely uncertain.
Firstly, it is not immediately clear how many objectives should be in each subset. Having more functions per cell would lead each subset to more closely approximate the full many-objective problem, while having fewer functions per cell would enable greater diversity across cells and potentially simpler subproblems to solve with fewer competing trade-offs in each.
Here we explore subsets of 1, 3, 5, 7, 9, and 11 random objective functions per cell. Only odd numbers of functions were included to avoid ties.
With any number of functions per cell, MOVE results in a significantly higher final overall fitness than the all-objective hillclimber (all p < 0.01; Table <ref>).
The top performer according to overall fitness average across all 14 functions is the setting with 3 functions per cell,
but it was not significantly better than 5, 7, or 9.
As the number of objective functions per cell grows, the number of objectives that any two cells have in common is expected to increase. Thus one might expect it to become easier for an offspring to goal-switch with more functions per cell and perform well on a different cell than its parent. In all cases, parents were far more likely to produce offspring that replaced other cells in the map than themselves (Table <ref>).
If replacements happened randomly to each of the 100 cells, 99% would be to a new (non-parent) cell. With 11 functions per cell, we see 98.55% of replacements jumping cells.
Even with just 3 functions per cell, we already see 96.26% of cells replacing a parent other than their own, suggesting both the importance of goal switching above replacing parents – and perhaps the potential of this algorithm to find many unique stepping stones and learning trajectories towards an ultimate solution.
Looking at the diversity within search, the total number of unique solutions across the 100 cells at the end of training was highly correlated with the number of functions per cell – with 3 func./cell resulting in the greatest unique solutions (p < 0.001) and diversity of this final population dropping as func./cell increase.
All trials in Table <ref> employ 100 cells, but we also ran experiments with 25 and 50 cells. With 5 functions per cell, MOVE worked just as well with 50 cells but worse with 25 (Appendix <ref>), suggesting the potential for even further computational savings than provided here.
§.§ Goal-switching
The results from the number of functions per cell experiments suggest that an important feature of MOVE is the ability of offspring to replace cells other than their parent cell. We hypothesized that this allows MOVE to find stepping stones and escape local optima along the path of any one cell in isolation. To investigate further, we conduct an ablation study that tests the importance of switching between cells in the map.
We explore three experimental conditions in which children were eligible to replace a different set of elites.
No jumping: children can only replace their parents. This is functionally similar to 100 parallel hillclimbers.
One jump: children can replace any cell, including their parent, but only one. If a child is eligible to replace multiple elites, the one that lost the vote by the largest margin is chosen.
Unlimited jumping: children can replace any cell and replace multiple cells each generation.
With 5 functions/cell, allowing unlimited jumping (all-objective fitness: 1.19) or a one jump (1.18) significantly increased performance compared with no jumping (1.09).
Allowing multiple jumps per offspring performed the same as (for 1, 7, and 9 functions per cell) or better than (for 3, 5, and 11 functions per cell) allowing only one jump. These data support the hypothesis that MOVE benefits from the ability to goal-switch.
Furthermore, when an elite produced a surviving offspring, the replaced cell shared significantly more functions with the parent cell than would be expected from two random cells (Appendix <ref>), suggesting that shared objectives do enable increased goal-switching. Additionally, with 5 functions per cell, the path taken by the ancestors of a final solution included almost half of all the cells in the map on average, suggesting that a wide variety of cells serve as stepping stones en route to a final solution (Table <ref>, Appendix <ref>).
§.§ By function
Table <ref> demonstrated that MOVE can achieve a higher mean performance across all objectives at once than an all-objective hillclimber attempting to optimize an aggregate function of all objectives. But how does this translate to performance on the individual objectives? In Fig. <ref>, we compare the performance of MOVE (5 fn/cell; with unlimited jumping) to the all-objective hillclimber, and also to each single-objective hillclimber on how well each algorithm performs on each single objective.
MOVE significantly outperforms the all-objective (aggregate) hillclimber on 13 of the 14 objectives.
Surprisingly, on 9 of the 14 objective functions MOVE also performs as well as, or better than, the single-objective hillclimber, which does not have to compromise on any trade-offs between objectives. This suggests that not only is MOVE able to overcome the trade-offs between objectives, it is able to find synergistic relationships between the different objectives and identify stepping stones to make the 14-dimensional many-objective optimization problem even easier than solving 14 independent single-objective searches.
§ DISCUSSION
This paper offers a proof-of-concept for a novel way to optimize many-objectives, and tests the approach on an image neuroevolution problem.
Our method maintains solution diversity by separating a many-dimensional fitness landscape into parallel, but not fully independent, searches on smaller-dimensional spaces. Rather than maintaining a single many-dimensional Pareto front, MOVE stores a population of elites which each perform well on a different subset of the objective functions. This allows MOVE to successfully find stepping stones while maintaining a much smaller population than would be necessary in traditional Pareto optimization.
This study finds important relationships between hyperparameters of this model and its overall performance.
The number of objectives per cell influenced overall aggregate fitness found, the propensity of offspring to replace their parents within the same cell vs goal-switching to become the elite in a new cell, and the diversity of solutions found (Table <ref>).
Much like MAP-Elites <cit.> and Innovation Engines <cit.>, one of the driving forces behind the success of MOVE is the ability of solutions to jump between cells and generate their own trajectories of stepping stones over time (as seen in Appendix <ref>). We find that final solutions have used as many as 48% of the cells in the map as stepping stones throughout their evolutionary trajectory – switching goals as many as 100 times over the course of evolution (Table <ref>), and that removing the ability of MOVE to goal-switch across cells significantly hinders performance (Table <ref>). These migrations between cells evolving in parallel for different goals is also reminiscent of migration in island models <cit.>.
The ease and order of jumps between cells may also inform weightings or orderings for other many-objective optimization approaches.
Future extensions of MOVE may incorporate concepts from multi-objective evolutionary algorithms, such as weighted objectives or other ways to condense multiple objectives into a single criterion to determine the ranking of an elite and a competing offspring in a cell. Though, in our limited investigations, voting over objectives appeared to outperform normalized-weighted-sum fitness functions.
Random objective weightings of many or all objectives (as opposed to the current binary inclusion of a random subset of functions) assigned to each cell in the map could be employed as a softened version of the voting mechanism.
The results presented in this paper are likely highly dependent on the specific target images, encoding, and task chosen (see Appendix <ref> for experiments with other target images and <ref> for discussion of potential extensions).
Future work should consider anti-correlated objectives, since MOVE may behave differently in problems where objectives are less aligned with one another than is the case with different Image Quality Assessment metrics here.
Despite its simplicity,
MOVE is an effective method for many-objective optimization in this particular setting.
We are curious about the potential for this approach to generalize to the challenging problems of many-objective optimization across various domains.
This material is based upon work supported by NSF Grant No. 2218063.
Computations were performed on the Vermont Advanced Computing Core supported in part by NSF award No. 1827314. Thanks to Rachel Gehman for helpful brainstorming.
§ APPENDIX:
§ DETAILED APPROACH
§.§ Algorithm
Intuition for the MOVE algorithm is provided in Algorithm <ref>, however these operations can be done much more efficiently in practice (see <github.com/uvm-neurobotics-lab/MOVE> for an open-source implementation).
§.§ Encoding
Similar to <cit.>, we represent the individuals within our map as Compositional Pattern Producing Networks (CPPNs) <cit.>.
The CPPNs in these studies contained the activation functions: sine, cosine, gaussian, identity, and sigmoid. The inputs were x, y, and a bias of +1.0. The outputs were R, G, B color values <cit.>. No recombination/crossover was used.
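For intuition, a fixed, hand-written CPPN (rather than the evolvable NEAT-style topologies used in the cited work) can be rendered over the pixel grid as follows; all weights and the two hidden nodes are arbitrary illustrative choices.

import numpy as np

def render_cppn(width=64, height=64):
    xs, ys = np.meshgrid(np.linspace(-1, 1, width), np.linspace(-1, 1, height))
    bias = 1.0
    sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))
    gaussian = lambda z: np.exp(-z ** 2)
    # Two hidden nodes drawn from the activation set {sin, cos, gaussian, identity, sigmoid}.
    h1 = np.sin(2.5 * xs + 0.7 * ys + 0.3 * bias)
    h2 = gaussian(1.8 * xs - 2.2 * ys)
    # RGB outputs as functions of the hidden nodes.
    r = sigmoid(1.2 * h1 - 0.8 * h2)
    g = sigmoid(0.5 * h1 + 1.5 * h2 + 0.1 * bias)
    b = sigmoid(np.cos(h1 + h2))
    return np.stack([r, g, b], axis=-1)  # height x width x 3 image in [0, 1]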
§.§ Objective functions
The 14 target functions used were: Mean Squared Error (MSE), Peak Signal-to-Noise Ratio (PSNR), Feature Similarity Index (FSIM), Structural Similarity Index (SSIM), Multiscale Structural Similarity Index (MS-SSIM), Visual Information Fidelity (VIF), Visual Saliency-Induced index (VSI), Haar wavelet-based Perceptual Similarity Index (HaarPSI), Style loss, Gradient Magnitude Similarity Deviation (GMSD), Mean Deviation Similarity Index (MDSI), DCT Subband Similarity (DSS), Deep Image Structure and Texture Similarity (DISTS), and Learned Perceptual Image Patch Similarity (LPIPS). See <cit.> for a full description and references for all metrics. Metrics that measure image dissimilarity were inverted so that higher values were associated with more similar images.
Many of the Image Quality Assessment metrics have been shown to strongly correlate with human perception. Some metrics focus on low-level structure like gradients and textures while others reward for high-level similarities. Newer approaches leverage deep neural network feature extractors trained for other tasks and compare images based on differences in extracted features.
Fitness values were normalized within each target and function by the mean highest fitness found by the all-objective hillclimber for that function and target. Normalizing the range of each fitness function enables more intuitive visualization and analysis of individual-objective and mean-overall fitness values (e.g. Figure <ref>), though the algorithm does not rely on these normalized values to assess the overall quality across multiple objectives, as majority voting across individual objectives in a cell is employed during fitness evaluation/selection. A version of this algorithm that instead determined fitness by the mean normalized fitness of the objectives within a cell was also tested and resulted in similar performance, so the simpler, scale-agnostic voting mechanism is used throughout.
Figure <ref> shows the top image found across 20 runs of each single-objective hillclimber for 4 different target images.
§.§ Number of cells
Results in this work are reported from trials with 100 cells in the MOVE map. However we also ran experiments with as few as 25 cells. With 5 functions per cell, the 25, 50, and 100 cell conditions all resulted in higher final overall fitness than the all-objective hillclimber (p<0.001). Runs with 25 cells (1.15 ± 0.03) performed significantly worse (p ≤ 0.04) than 50 (1.19 ± 0.03) and 100 (1.19 ± 0.03). Interestingly, 50 cells did not perform significantly worse than 100 cells (p=0.83), suggesting that MOVE can perform effectively with surprisingly small population sizes.
§ DETAILED RESULTS
§.§ Functions per cell
With 5 functions per cell, when we restrict children to only replacing their parent within the same cell, performance drops significantly from 1.19 to 1.09 (p<0.001). Similarly, allowing multiple jumps per offspring was the same (for 1, 7, 9 functions per cell) or better (for 3, 5, and 11 functions per cell; p ≤ 0.02) than only allowing one.
The no-jumping condition still performed better than the single-objective baseline for 3, 5, 7, 9 and 11 functions per cell (p≤9.41e-3) but worse than the baseline for 1 function per cell (p=3.74e-06), demonstrating the importance of including unique combinations of objectives within cells.
§.§ Goal-switching
Are there any observable patterns suggesting when goal-switching occurs? Intuitively we might expect cells with overlapping objectives to enable more jumping between them. With 5 functions per cell, the average number of shared functions between any two cells in our trials was 1.87 / 5 (±.03).
When an elite produced a surviving offspring, the replaced cell shared significantly more functions with the parent cell than the overall average (jumping to one: 1.90 / 5 ±0.03, p<0.01; jumping to any: 1.92 / 5 ±0.03, p<0.001). As expected, when no jumping was allowed, the average was 5.0 / 5 (±0, p<0.001) since the children can only replace their parent. Findings for other number of functions per cell were similar, suggesting that shared objectives do enable increased goal-switching.
When considering the complete evolutionary history, 5 functions per cell resulted in more unique cells in the genealogy of the final solutions than any other number of functions per cell (Table <ref>, p<0.01). While 1 function per cell resulted in significantly less unique cells in final solution ancestries (6.73/100 ± 1.15) than all other conditions (p <0.001). With 5 functions per cell, the path taken by the ancestors of a final solution included almost half of all the cells in the map on average, suggesting that a wide variety of cells serve as stepping stones en route to a final solution. Since so many different cells (each with a random subset of objective functions) are present in the evolutionary history of a solution, it is understandable how MOVE can produce solutions that perform well on the mean of all 14 fitness functions, even when not ever being exposed to more than a handful at any given time.
In addition to more exploration of the map, with 5 functions per cell, there were significantly more total replacements (i.e. successful offspring) over the 1000 generations (p ≤0.01; Table <ref>), followed by 3 functions per cell which had more than the others (p ≤8.03e-3). With only 1 function per cell, there were significantly fewer replacements than all other conditions (p ≤6.3e-08). As the number of functions per cell increases, the number of votes that a child needs (i.e. the number of objectives it must simultaneously satisfy) to replace an elite increases, perhaps helping to explain why there are fewer replacements with 11 functions per cell.
Ablating the ability for offspring to jump to a cell other than its parents' (goal-switch) predictably resulted in fewer replacements and more unique solutions. Similarly, limiting the offspring to only be able to jump to one cell resulted in more unique solutions and fewer jumps than trials with unlimited jumps (Table <ref>).
§.§ By function
MOVE champions outperformed the single-objective hillclimber on 9/14 objective functions. Even when we gave the hillclimber far more compute by increasing the number of offspring per generation per hillclimber to 100 each (1400/gen overall across all hillclimbers), MOVE still performed better on 4 objectives (p ≤ 0.03), the same on 7, and worse on just 3 (p ≤ 1.13e-3).
§ OTHER TARGET IMAGES
The results reported in this work were from trials with the sunrise target image. Hyperparameters were highly tuned to that specific simple image. Trials with more complicated images show that MOVE champions still outperform the all-objective hillclimber, but no longer are very likely to outperform the single-objective hillclimbers (Fig. <ref>). These more complicated images are presumably more difficult to find stepping stones for, and a version of MOVE that is tuned for the sunrise image does not generalize its ability to escape local optima. More research is required to understand how MOVE can be generalized to other target images and other tasks.
§ LIMITATIONS AND FUTURE WORK
The specific objective functions in this work were mostly chosen for convenience of implementation in <cit.>.
Preliminary exploratory data analysis suggested that many, but not all, pairwise combinations of objectives were well aligned and complementary – such that, on average, increasing one would also increase the other. This matches the intuition of the different fitness functions being alternative ways to assess the similarity of two images (the evolved solution to the target images). However, the notable differences in how the various Image Quality Assessment metrics make this assessment (color-based, texture-based, deep-learning-feature-based, and so on), the visual inspection of the single-objective hillclimber solutions evolved for each metric (e.g. Figure <ref>), and the lower performance of the all-objective hillclimber all suggest that these are not perfectly complementary objectives. It is not clear how much the alignment between objectives influences the MOVE algorithm, which is a major limitation for the generalizability of this work.
Future work should consider more carefully crafted fitness metrics which may be more or less compatible with one another, as well as a variety of complex real-world many-objective optimization problems.
Additionally, recombination is an important tool for evolutionary many-objective optimization algorithms and was not implemented for this paper. Future work should consider different encodings, mutation operators and recombination.
|
http://arxiv.org/abs/2307.01089v1
|
20230703150902
|
Synthesizing Control Laws from Data using Sum-of-Squares Optimization
|
[
"Jason J. Bramburger",
"Steven Dahdah",
"James Richard Forbes"
] |
math.OC
|
[
"math.OC",
"cs.SY",
"eess.SY",
"math.DS"
] |
Synthesizing Control Laws from Data using Sum-of-Squares Optimization
This work was supported by the NSERC discovery grant program.
Jason J. Bramburger
Dept. of Mathematics and Statistics
Concordia University
Montréal, Canada
[email protected]
Steven Dahdah
Dept. of Mechanical Engineering
McGill University
Montréal, Canada
[email protected]
James Richard Forbes
Dept. of Mechanical Engineering
McGill University
Montréal, Canada
[email protected]
===================================================================================================================================================================================================================================================================================================================================================================================
The control Lyapunov function (CLF) approach to nonlinear control design is well established. Moreover, when the plant is control affine and polynomial, sum-of-squares (SOS) optimization can be used to find a polynomial controller as a solution to a semidefinite program. This letter considers the use of data-driven methods to design a polynomial controller by leveraging Koopman operator theory, CLFs, and SOS optimization. First, Extended Dynamic Mode Decomposition (EDMD) is used to approximate the Lie derivative of a given CLF candidate with polynomial lifting functions. Then, the polynomial Koopman model of the Lie derivative is used to synthesize a polynomial controller via SOS optimization. The result is a flexible data-driven method that skips the intermediary process of system identification and can be applied widely to control problems. The proposed approach is used to successfully synthesize a controller to stabilize an inverted pendulum on a cart.
Control Lyapunov function, Koopman operator, Extended Dynamic Mode Decomposition, lifting functions, sum-of-squares optimization
§ INTRODUCTION
Nonlinear systems can be found throughout engineering, science, economics, and other domains. Often nonlinear systems must be controlled to realize useful behavior. There are a plethora of nonlinear control design methods to choose from, such as gain-scheduling, feedback linearization, integrator backstepping, sliding-mode control, and others <cit.>.
A popular nonlinear control design method is the control Lyapunov function (CLF) approach <cit.>. The gist of the CLF approach to control design is that, given a control affine system, a controller is sought such that a Lyapunov function candidate associated with the closed-loop system satisfies V(x_*) = 0, V(x) > 0, ∀ x ∈𝒟∖{x_* }, and V̇(x) < 0, ∀ x ∈𝒟∖{x_* } where x_* is an equilibrium point and 𝒟⊆ℝ^d is some domain. See <ref> for a review of the CLF approach to controller design.
One means of designing a controller via the CLF approach is to employ techniques from sum-of-squares (SOS) optimization. Roughly speaking, by exploiting the polynomial form of the plant and the Lyapunov function candidate, a sufficient condition for the existence of a polynomial controller is the existence of a solution to a linear matrix inequality (LMI) feasibility problem <cit.>. The SOS optimization approach to control design via a CLF is attractive because semidefinite programming can be leveraged to solve LMI feasibility problems in a simple and efficient manner.
There have been numerous SOS optimization approaches to CLF-based control design. For instance, <cit.> consider controller design using a CLF approach to ensure that the region of attraction of the closed-loop system about the equilibrium point x_* is as large as possible. In <cit.>, an approximate solution to the Hamilton-Jacobi-Bellman (HJB) equation is found using SOS optimization, yielding a suboptimal controller. State- and output-feedback control design in a SOS optimization framework for parabolic PDE systems is considered in <cit.>.
To use the CLF approach in concert with SOS optimization for control design, a model of the nonlinear system is needed. When a model is not available, but a plethora of data is available, a data-driven approach to modelling and control design is natural. The Koopman operator approach to data-driven modelling of nonlinear systems has garnered significant attention recently <cit.>. The basic idea behind the Koopman operator is that a finite-dimensional nonlinear system can be expressed as an infinite-dimensional linear system using lifting functions <cit.>. The linearity of the Koopman operator is attractive because standard linear systems tools, such as the eigenspectrum <cit.>, can be used to analyze nonlinear systems. A finite-dimensional approximation of the Koopman operator can be readily identified from data <cit.>. This approximate Koopman operator can then be used as the basis for control design. For instance, <cit.> considers model predictive control (MPC), <cit.> considers an active learning approach in a Koopman framework, and <cit.> considers discounted optimal control using the Perron-Frobenius operator, the dual to the Koopman operator.
This paper proposes a data-driven approach to CLF-based control design using the Koopman operator. The CLF approach relies on the generator of the Koopman operator, the so-called Lie derivative, of the closed-loop Lyapunov function candidate being strictly negative. When the lifting functions associated with the Koopman operator are polynomials, the Lie derivative of the closed-loop Lyapunov function candidate is almost polynomial. Forcing the controller to be polynomial enables the use of SOS optimization to find a suitable controller that renders the Lie derivative of the closed-loop Lyapunov function candidate negative definite. As a consequence, the equilibrium point x_* of the closed-loop system is made asymptotically stable.
The novel contribution of this letter is nonlinear controller synthesis via the following two-step method. First, the Lie derivative of a given CLF candidate is approximated using the Koopman operator with polynomial lifting functions. Next, this Koopman representation of the Lie derivative is incorporated into a SOS optimization problem that parameterizes the controller as a polynomial of the state variables. Convex SOS optimization routines are then leveraged to find a control law that renders the equilibrium point x_* of the closed-loop system asymptotically stable.
This letter is organized as follows. CLFs, SOS optimization, and Koopman operator theory are reviewed in <ref>. The main theoretical results are presented in <ref>. A numerical example involving the control of an inverted-pendulum on a cart is provided in <ref>. The paper is drawn to a close in <ref>.
§ PRELIMINARIES
In this section we provide the necessary preliminary information to present our method of synthesizing control laws from data. Throughout we will have ⟨ u,v⟩ denote the inner product of vectors u,v∈ℝ^d. The temporal argument of the state x(t), the control u(t), etc., will be suppressed unless required for clarity.
§.§ Control Lyapunov functions
Consider a control affine system
ẋ = f(x) + ∑_i = 1^m g_i(x)u_i, x ∈ℝ^d, u_i ∈ℝ,
for which we wish to specify a control input u = [u_1, …, u_m]^T that forces all initial conditions belonging to some domain 𝒟⊆ℝ^d into an equilibrium point x_* as t →∞ under the flow of (<ref>). To do so, one may specify a control Lyapunov function (CLF) <cit.> V: 𝒟→ℝ satisfying V(x) > 0 ∀ x ∈𝒟∖{x_*} and V(x_*) = 0, and then seek a control input so that
⟨∇ V(x),f(x) + ∑_i = 1^m g_i(x)u_i⟩ < 0, ∀ x∈𝒟∖{x_*}.
Indeed, (<ref>) guarantees that V decreases monotonically along trajectories of (<ref>), eventually reaching the global minimum at V(x_*) = 0. Crucial for our work later in this letter, the system being control affine means that u enters (<ref>) linearly.
When the inequality in (<ref>) is not strict, only Lyapunov stability can be concluded rather than asymptotic stability. Precisely, this means that 0 ≤ V(x(t)) ≤ V(x(0)) for all t ≥ 0, meaning that the motion of x(t) is constrained by the level set V(x(0)). Moreover, LaSalle's invariance principle guarantees the limiting behaviour of x(t) is contained in the set of x values for which (<ref>) is exactly zero. Such non-strict inequalities become important when imposing a tractable relaxation of the inequality (<ref>) in the following subsection.
§.§ Synthesizing controllers with semidefinite programming
Although finding a control input u to satisfy (<ref>) is difficult in general, under certain assumptions on (<ref>) this process can be automated using standard optimization methods. Specifically, if f(x), g_i(x), and V(x) are all polynomial, then we can search for a polynomial state-dependent feedback control law u(x). Fixing the degree of u(x) allows its coefficients to be optimized to satisfy the polynomial inequality constraint (<ref>) for all x ∈𝒟. Although these coefficients appear linearly in (<ref>), tuning them to satisfy such a polynomial inequality is an NP-hard task in general <cit.>.
To make the controller synthesis problem tractable, we replace the polynomial inequalities with the sufficient conditions that the polynomials are sum-of-squares (SOS). Precisely, a polynomial p(x) is SOS if there exists polynomials q_1(x),…,q_k(x) so that
p(x) = ∑_i = 1^k q_i(x)^2.
Identifying an SOS representation of a polynomial trivially verifies that it is nonnegative. Furthermore, verifying an SOS representation constitutes a semidefinite program, since p(x) is SOS if and only if there exists a vector of monomials v(x) and a positive semidefinite matrix P such that p(x) = v(x)^TPv(x).
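As a small, self-contained illustration of this equivalence (using CVXPY rather than the YALMIP/MOSEK toolchain discussed below, and a hypothetical polynomial), the program below certifies that p(x) = x^4 + 2x^2 + 1 is SOS by finding a positive semidefinite Gram matrix P with p(x) = v(x)^T P v(x) for v(x) = [1, x, x^2]^T.

import cvxpy as cp

P = cp.Variable((3, 3), symmetric=True)
constraints = [
    P >> 0,                      # Gram matrix must be positive semidefinite
    P[0, 0] == 1,                # constant term of p
    2 * P[0, 1] == 0,            # x term
    2 * P[0, 2] + P[1, 1] == 2,  # x^2 term
    2 * P[1, 2] == 0,            # x^3 term
    P[2, 2] == 1,                # x^4 term
]
prob = cp.Problem(cp.Minimize(0), constraints)
prob.solve()
print(prob.status)  # 'optimal' certifies the SOS decomposition, e.g. p(x) = (x^2 + 1)^2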
Recall (<ref>) and suppose that 𝒟 is a semialgebraic set, meaning there exists polynomials {a_j}_j = 1^J, {b_ℓ}_ℓ = 1^L so that
𝒟 = {x ∈ℝ^d| a_j(x) ≥ 0, b_ℓ(x) = 0, ∀ j,ℓ}.
A sufficient condition for identifying a stabilizing polynomial state-dependent controller u(x) = [ u_1(x), …, u_m(x) ]^T is <cit.>
(a) -⟨∇ V(x),f(x) + ∑_i = 1^m g_i(x)u_i(x)⟩
+ ∑_j = 1^Ja_j(x)σ_j(x) + ∑_ℓ = 1^L b_ℓ(x)ρ_ℓ(x) is SOS,
(b) σ_j(x) is SOS,
for polynomials σ_j and ρ_ℓ. Note that “is SOS” means there exists an SOS representation of the given polynomial. By fixing the degrees of u_i, σ_j, and ρ_ℓ, the SOS constraints can be translated into semidefinite programs by freely available software packages like YALMIP <cit.>. Solvers such as MOSEK <cit.> can efficiently solve these semidefinite programs to determine the polynomial coefficients.
§.§ The Koopman operator
Let Φ(t;x):ℝ_+× X → X represent the flow of a dynamical system at time t ≥ 0 with the initial condition Φ(0;x) = x. The Koopman operator 𝒦_t is a linear transformation that, for each t, maps a lifting function φ:X →ℝ to
𝒦_tφ = φ(Φ(t;x)), ∀ x ∈ℝ^d.
Koopman lifting functions are also often called observables, but are unrelated to the concept of observability.
The family {𝒦_t| t ∈ℝ_+} is a one-parameter semigroup whose generator is referred to as the Lie derivative, acting on differentiable lifting functions via
ℒφ = lim_t → 0^+𝒦_t φ - φ/t.
Intuitively, the Lie derivative represents a derivative of the lifting function φ in the direction of the flow of the system.
Lyapunov functions are examples of lifting functions whose values decrease along trajectories. Moreover, given a candidate Lyapunov function V:𝒟→ℝ for (<ref>), the Lie derivative of V is evaluated using the chain rule to be exactly
ℒV = ⟨∇ V(x),f(x) + ∑_i = 1^m g_i(x)u_i⟩,
which demonstrates a connection between the Koopman operator and Lyapunov functions.
§ MAIN RESULT
Our method of synthesizing control laws from data comes as a two-step process. First, we estimate the Lie derivative from data using well-developed techniques for approximating the action of the Koopman operator on finite dictionaries of lifting functions <cit.>. Second, we integrate our estimated Lie derivative into the SOS framework of <ref> and provide an SOS optimization problem that determines control laws directly from data.
§.§ Estimating Lie derivatives from data
Begin by supposing that snapshots of a control system are given in the form of triples {(x_k,u_k,y_k)}_k = 1^n ⊂ℝ^d×ℝ^m×ℝ^d. Here, y_k is the state of the system exactly τ > 0 time units after having state and control values (x_k,u_k). We then consider two dictionaries of polynomial lifting functions of the state, ϕ_1,…,ϕ_p and ψ_1,…,ψ_q. Denote
ϕ = [ ϕ_1 ⋯ ϕ_p ]^T, ψ = [ ψ_1 ⋯ ψ_q ]^T .
Lyapunov functions are assumed to belong to span{ϕ}, while their image under the Lie derivative is projected into span{ψ}. Two dictionaries are required because the Lie derivative associated with a polynomial system is expected to be of a higher degree than the original lifting function it is being applied to. To see why this is the case, let F denote the right-hand-side of our (assumed polynomial) control-affine system (<ref>). Note that the degree of ℒV = ⟨∇ V,F⟩, the Lie derivative (<ref>), is
deg(∇ V) + deg(F) = deg(V) - 1 + deg(F) ≥deg(V),
with a strict inequality when F is nonlinear, that is, deg(F) > 1. In practice one should aim to have span{ϕ}⊆span{ψ} for the approximation of the Lie derivative later in this section.
Following the method of EDMD <cit.>, define the p× n matrix
Φ = [ ϕ(y_1) ϕ(y_2) ⋯ ϕ(y_n) ],
and the q (m+1)× n matrix
Ψ = [ ψ(x_1) ψ(x_2) ⋯ ψ(x_n); ψ(x_1)u_1,1 ψ(x_2)u_1,2 ⋯ ψ(x_n)u_1,n; ⋮ ⋮ ⋱ ⋮; ψ(x_1)u_m,1 ψ(x_2)u_m,2 ⋯ ψ(x_n)u_m,n ],
where we assume that the dynamics are control affine, leading to the specific form of Ψ <cit.>. With these matrices we can approximate the Koopman operator by first obtaining the matrix K ∈ℝ^p × q(m+1) as the solution to the minimization problem
K = ΦΨ^† = argmin_K ‖Φ - KΨ‖_F,
where (·)^† denotes the Moore-Penrose pseudoinverse and ‖·‖_F denotes the Frobenius norm. The matrix K can be broken up into m+1 size p × q matrices
K = [ A B_1 ⋯ B_m ],
representing the lifted components corresponding to f and g_1,…,g_m in a control affine system (<ref>). Then, for any given control input u = [ u_1, …, u_m ]^T, the matrix K leads to an approximation of the Koopman operator 𝒦̃ acting on lifting functions φ = ⟨ c, ϕ⟩∈span{ϕ} by
𝒦̃(φ) := ⟨ c, Aψ⟩ + ∑_i = 1^m⟨ c, B_iψ⟩ u_i,
which one can verify belongs to span{ψ} for each u ∈ℝ^m.
With the approximate Koopman operator 𝒦̃ we can further estimate the Lie derivative as a finite-difference approximation of (<ref>). Precisely, ℒ̃ acts on φ = ⟨ c, ϕ⟩∈span{ϕ} by
ℒ̃(φ) = 𝒦̃(φ) - φ/τ.
Then, using (<ref>), it follows that
ℒ̃(φ) = τ^-1⟨ c, Aψ - ϕ⟩ + τ^-1∑_i = 1^m⟨ c, B_iψ⟩ u_i,
for a given control input u ∈ℝ^m. Having span{ϕ}⊆span{ψ} guarantees that ℒ̃:span{ϕ}→span{ψ}, just like 𝒦̃. We refer the reader to <cit.> for convergence proofs regarding the Lie derivative approximation ℒ̃ in the limits of infinite data (n→∞), sampling rates (τ→ 0^+), and dictionaries (q →∞).
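A minimal NumPy sketch of this regression and of evaluating the approximate Lie derivative is given below; phi and psi are user-supplied dictionary evaluations, and all names are illustrative.

import numpy as np

def fit_koopman_blocks(X, U, Y, phi, psi):
    # X, Y: (n, d) state snapshots separated by tau; U: (n, m) control inputs.
    n, m = U.shape
    Phi = np.column_stack([phi(y) for y in Y])        # p x n
    Psi0 = np.column_stack([psi(x) for x in X])       # q x n
    blocks = [Psi0] + [Psi0 * U[:, i] for i in range(m)]
    Psi = np.vstack(blocks)                           # q(m+1) x n
    K = Phi @ np.linalg.pinv(Psi)                     # least-squares solution
    q = Psi0.shape[0]
    A = K[:, :q]
    B = [K[:, (i + 1) * q:(i + 2) * q] for i in range(m)]
    return A, B

def lie_derivative(c, A, B, phi_x, psi_x, u, tau):
    # Approximate Lie derivative of V = <c, phi> at a state with dictionary
    # values phi_x, psi_x under control u, per the finite-difference formula above.
    drift = (c @ (A @ psi_x) - c @ phi_x) / tau
    return drift + sum((c @ (Bi @ psi_x)) * ui for Bi, ui in zip(B, u)) / tau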
§.§ Synthesizing control laws from data
The work in the previous subsection allows one to estimate the Lie derivative direction from data. We now leverage this method to incorporate it into a SOS-driven method for synthesizing control laws. Let us begin by assuming that V ∈span{ϕ}, that is, there is some c ∈ℝ^p so that V = ⟨ c,ϕ⟩ is our candidate control Lyapunov function. The Lie derivative condition (<ref>) is then replaced with the data-driven Lie derivative condition,
ℒ̃(V) = τ^-1⟨ c, Aψ - ϕ⟩ + τ^-1∑_i = 1^m⟨ c, B_iψ⟩ u_i < 0.
Since c is fixed because V is given, (<ref>) is an affine constraint for the control input u. Moreover, since ϕ and ψ are dictionaries of polynomial lifting functions, it follows that all terms in ℒ̃(V) are polynomials in the state variable x. Thus, we now follow a similar procedure to <ref> by considering u as a polynomial function of the state variable x and relaxing the inequalities to SOS conditions.
In detail, we consider a third dictionary of polynomials in x, denoted
χ = [ χ_1,…,χ_r ]^T,
and consider u = Cχ for some coefficient matrix C ∈ℝ^m × r. Hence, the data-driven SOS relaxation for synthesizing control laws on the semialgebraic set 𝒟 as in (<ref>) is given by
(a) -τ^-1⟨ c, Aψ - ϕ⟩ - τ^-1∑_i = 1^m⟨ c, B_iψ⟩ [Cχ]_i
+ ∑_j = 1^Ja_j(x)σ_j(x) + ∑_ℓ = 1^L b_ℓ(x)ρ_ℓ(x) is SOS,
(b) σ_j(x) is SOS,
which comes from replacing the exact Lie derivative, ℒV in (<ref>), with that approximated from data, ℒ̃(V), in (<ref>). The goal is then to determine the coefficients of the matrix C appropriately to derive a control law from data. Since there could be many choices for C, we propose the following convex optimization problem:
min_C ∈ℝ^m × r{h(C): (<ref>) is satisfied}
where h:ℝ^m× r→ℝ is a convex optimization objective.
The optimization objective is user-specified and should be guided by the specifics of the problem. Well-motivated examples of h:ℝ^m× r→ℝ are as follows.
* Sparsity — A convex proxy for producing control laws that use the fewest elements of χ possible can be implemented by setting h(C) = C_1, the absolute sum of all elements in C. The resulting controller relies on the fewest elements of χ and may help to tame the influence of noisy measurements.
* Boundedness — Physical limitations of actuators often require that the controller output cannot exceed certain values. Synthesizing a controller that has bounded fluctuation over the set 𝒟 can be implemented by minimizing h(C) = M so that M - u_i ≥ 0 and u_i + M ≥ 0 for all x ∈𝒟. Indeed, these conditions guarantee that |u_1|,…,|u_m| ≤ M for all x ∈𝒟, while the inequalities can be relaxed to SOS conditions on 𝒟 similar to (<ref>) and appended to the conditions in (<ref>). One can similarly impose boundedness on derivatives of u in the same way, thereby enforcing a bound on the rate of control.
§ APPLICATION TO THE INVERTED PENDULUM
As a demonstration of the efficacy of our proposed method, we apply it to synthetic data generated from a simple inverted pendulum on a cart. Let θ denote the angle of the pendulum arm, with θ = 0 denoting the upright position. The motion of the pendulum arm can be derived from first principles and is captured by the control affine system <cit.>
θ̈= sin(θ) - εθ̇- cos(θ)u,
where ε > 0 is the scaled viscous friction coefficient and the control input u is the acceleration of the cart on which the pendulum is placed. We fix ε = 0.1 throughout our demonstrations. Motivated by the control Lyapunov functions presented in <cit.>, we choose our candidate Lyapunov function to be
V(θ,θ̇) = 1/2θ̇^2 + 1 - cos(θ) + α(1 - cos^3(θ)),
for some α > 0. This is, of course, only one of many possible Lyapunov functions, and experiments with other Lyapunov functions are similarly promising. For the purposes of reproducing our results, all code related to this demonstration can be found at https://github.com/jbramburger/data-clfgithub.com/jbramburger/data-clf, which contains both MATLAB and Python implementations of the method.
Synthetic data is produced by simulating 20 random initial conditions (θ(0),θ̇(0)) ∈ [0,2π)×[-2,2] subject to a sinusoidal external forcing of the form
u(t) = Asin(t + B),
where A and B are drawn from the uniform distribution on [-1,1] and [-π,π], respectively. We integrate each initial condition from t = 0 up to t = 20, collecting data at evenly spaced intervals of length τ = 0.01. It should be noted that with a physical pendulum on a cart one can easily perform such experiments quickly and collect the resulting data for θ(t), θ̇(t), and u(t) <cit.>. Collecting data directly from the experiment could also improve control results as models such as (<ref>) are only approximate, thus circumventing the need for parameter estimation from a physical model.
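A sketch of this data-generation step (using SciPy, with illustrative function names) is shown below.

import numpy as np
from scipy.integrate import solve_ivp

eps, tau = 0.1, 0.01

def simulate_trajectory(theta0, dtheta0, A, B, t_final=20.0):
    # Forced pendulum-on-a-cart model with sinusoidal exploration input.
    def rhs(t, s):
        theta, dtheta = s
        u = A * np.sin(t + B)
        return [dtheta, np.sin(theta) - eps * dtheta - np.cos(theta) * u]
    t_eval = np.arange(0.0, t_final + tau, tau)
    sol = solve_ivp(rhs, (0.0, t_final), [theta0, dtheta0], t_eval=t_eval, rtol=1e-9)
    u_vals = A * np.sin(t_eval + B)
    return sol.y.T, u_vals  # (len(t_eval), 2) states and matching control values

rng = np.random.default_rng(0)
data = [simulate_trajectory(rng.uniform(0, 2 * np.pi), rng.uniform(-2, 2),
                            rng.uniform(-1, 1), rng.uniform(-np.pi, np.pi))
        for _ in range(20)]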
Considering the phase variable θ modulo 2π constrains the dynamics of the pendulum to a cylinder parameterized by the state variables (θ,θ̇). We embed this cylinder in ℝ^3 by introducing the lifted state variables
(x_1,x_2,x_3) = (cos(θ),sin(θ),θ̇).
In these lifted variables the Lyapunov function (<ref>) becomes
V(x_1,x_2,x_3) = 1/2x_3^2 + 1 - x_1 + α(1 - x_1^3),
which is now a polynomial and has a global minimum at the unstable equilibrium x_* = (1,0,0). Due to the restriction 1 - x_1^2 - x_2^2 = 0, our dictionaries need only have linear terms in x_2 since higher-order terms can equivalently be provided through constant terms and powers of x_1. With this in mind, we take ϕ to be a dictionary of all monomials x_1^α_1x_2^α_2x_3^α_3 with α_1,α_3 = 0,1,2,3 and α_2 = 0,1, so that V ∈span{ϕ}. For demonstration, we take ψ to be a dictionary of monomials for which α_1,α_3 = 0,1,2,3,4 and α_2 = 0,1, although using larger values of α_1,α_3 returns similar results.
To implement the synthesis of a control law u(x_1,x_2,x_3) as an optimization problem, we use the state space
𝒟 = {(x_1,x_2,x_3)∈ℝ^3| η^2 - x_2^2 ≥ 0, 1 - x_1^2 - x_2^2 = 0}
for some parameter η∈ (0,1), which must be built into our SOS programs as in (<ref>) and (<ref>).
The equality condition 1 - x_1^2 - x_2^2 = 0 was discussed above, while the inequality condition η^2 - x_2^2 ≥ 0 excludes from the domain a strip on the cylinder where x_1 = 0 corresponding to θ = π/2,3π/2. The reason that this is excluded from the domain is that the control input to the system, given by cos(θ)u = x_1u, vanishes at this point and the Lie derivative is not necessarily negative for all x_2 = ± 1 and x_3 ∈ℝ. In practice, we take η as close to 1 as possible to encapsulate as much of the the full state space as we can. Numerical results presented in this demonstration use η^2 = 0.95 and we promote sparsity in the controller using the optimization objective h(C) = C_1, as presented previously.
Experiments reveal that increasing α causes a proportional increase in the exponential rate of convergence of the pendulum arm to the upright position (θ,θ̇) = (0,0). As an example, Figure <ref> presents controlled solutions with α = 100. The initial condition is taken to be close to the hanging down position so that the movement of the cart is forced to swing the pendulum up into the upright position. The synthesized state-dependent control law is given by
u_*(x_1,x_2,x_3) = 212.5755x_1x_2+54.1296x_1x_3,
which, in terms of the original state variables (θ,θ̇), is given by
u_*(θ,θ̇) = 212.5755cos(θ)sin(θ) + 54.1296cos(θ)θ̇.
We see in Figure <ref> this control law leads to a quick jerk of the cart to the left and then right to swing the pendulum up, after which the cart ceases to move. Notice further that the controller (<ref>) vanishes at both equilibria (θ,θ̇) = (0,0) and (θ,θ̇) =(π,0), meaning that the control is inactive at the hanging down (θ = π) position. To circumvent this issue, a discontinuous control would be needed <cit.>, which is not explored herein. Nonetheless, this is little issue from a practical point of view as one may randomize an initial disturbance in the cart to throw oneself away from the hanging down state and then activate the control law to stabilize the system.
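The reported control law is easy to check in closed-loop simulation; the sketch below uses the pendulum model and the coefficients quoted above, starting near the hanging-down position.

import numpy as np
from scipy.integrate import solve_ivp

eps = 0.1

def u_star(theta, dtheta):
    # State-feedback law with the coefficients reported in the text.
    return 212.5755 * np.cos(theta) * np.sin(theta) + 54.1296 * np.cos(theta) * dtheta

def closed_loop(t, s):
    theta, dtheta = s
    return [dtheta, np.sin(theta) - eps * dtheta - np.cos(theta) * u_star(theta, dtheta)]

sol = solve_ivp(closed_loop, (0.0, 10.0), [np.pi - 0.1, 0.0], rtol=1e-9, atol=1e-9)
# theta should settle at a multiple of 2*pi (the upright position) and dtheta at 0.
print(sol.y[:, -1])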
§ CONCLUSION
In this letter we have presented a simple, flexible, and efficient method for synthesizing control laws directly from data. We emphasize that our method does not require the intermediary step of identifying the nonlinear system dynamics or performing parameter estimation. Instead, we use the EDMD framework to produce an approximation of the Koopman operator on a dictionary of polynomial lifting functions. The result is a linear description of the dynamics in the lifted coordinates which can then be integrated with well-established CLF methods and convex SOS optimization routines. We therefore provide a two-step data-driven method that can be applied broadly to problems in control.
Although we have explored the application to continuous-time data in this letter, the results can equivalently be applied to discrete-time processes as well by simply fixing τ = 1. Moreover, the EDMD framework will equally approximate the Koopman operator for general stochastic processes with no modifications to the method and similar convergence guarantees <cit.>. This means that our results could also be applied to data generated by stochastic systems, with the only minor difference being that the Lie derivative condition ℒV ≤ 0 is now considered in expectation <cit.>. The ability to apply these methods to stochastic systems could be promising for overcoming the inevitable noise that is produced when gathering real-world data. A report on the applicability and noise-robustness of the method on laboratory data will be left to a follow-up investigation.
|
http://arxiv.org/abs/2307.03022v1
|
20230706143758
|
Strong Purcell enhancement of an optical magnetic dipole transition
|
[
"Sebastian P. Horvath",
"Christopher M. Phenicie",
"Salim Ourari",
"Mehmet T. Uysal",
"Songtao Chen",
"Łukasz Dusanowski",
"Mouktik Raha",
"Paul Stevenson",
"Adam T. Turflinger",
"Robert J. Cava",
"Nathalie P. de Leon",
"Jeff D. Thompson"
] |
physics.optics
|
[
"physics.optics",
"physics.atom-ph",
"quant-ph"
] |
These authors contributed equally to this work.
Department of Electrical and Computer Engineering, Princeton University, Princeton, NJ 08544, USA
These authors contributed equally to this work.
Department of Electrical and Computer Engineering, Princeton University, Princeton, NJ 08544, USA
These authors contributed equally to this work.
Department of Electrical and Computer Engineering, Princeton University, Princeton, NJ 08544, USA
Department of Electrical and Computer Engineering, Princeton University, Princeton, NJ 08544, USA
Present address: Department of Electrical and Computer Engineering, Rice University, Houston, Texas 77005, USA
Department of Electrical and Computer Engineering, Princeton University, Princeton, NJ 08544, USA
Department of Electrical and Computer Engineering, Princeton University, Princeton, NJ 08544, USA
Present address: AWS Center for Quantum Networking, Boston, Massachusetts 02135, USA
Department of Electrical and Computer Engineering, Princeton University, Princeton, NJ 08544, USA
Present address: Department of Physics, Northeastern University, Boston, Massachusetts 02115, USA
Department of Electrical and Computer Engineering, Princeton University, Princeton, NJ 08544, USA
Department of Electrical and Computer Engineering, Princeton University, Princeton, NJ 08544, USA
Department of Chemistry, Princeton University, Princeton, NJ 08544, USA
Department of Electrical and Computer Engineering, Princeton University, Princeton, NJ 08544, USA
[email protected]
Department of Electrical and Computer Engineering, Princeton University, Princeton, NJ 08544, USA
Engineering the local density of states with nanophotonic structures is a powerful tool to control light-matter interactions via the Purcell effect. At optical frequencies, control over the electric field density of states is typically used to couple to and manipulate electric dipole transitions. However, it is also possible to engineer the magnetic density of states to control magnetic dipole transitions. In this work, we experimentally demonstrate the optical magnetic Purcell effect using a single rare earth ion coupled to a nanophotonic cavity. We engineer a new single photon emitter, Er^3+ in MgO, where the electric dipole decay rate is strongly suppressed by the cubic site symmetry, giving rise to a nearly pure magnetic dipole optical transition. This allows the unambiguous determination of a magnetic Purcell factor P_m=1040 ± 30. We further extend this technique to realize a magnetic dipole spin-photon interface, performing optical spin initialization and readout of a single Er^3+ electron spin. This work demonstrates the fundamental equivalence of electric and magnetic density of states engineering, and provides a new tool for controlling light-matter interactions for a broader class of emitters.
Strong Purcell enhancement of an optical magnetic dipole transition
Jeff D. Thompson
August 1, 2023
===================================================================
The interaction of electromagnetic radiation with matter is of fundamental importance and underlies numerous current and future technologies. In particular, the absorption and emission of light by atomic or molecular transitions has enabled technologies such as the laser, MRI and atomic clocks <cit.>. The ability to control absorption and emission through engineering the environment is particularly relevant for quantum technologies requiring efficient light-matter interfaces, and has been demonstrated using optical cavities and numerous emitters including atoms and ions <cit.>, quantum dots <cit.> and atom-like defects in the solid state <cit.>.
Light-matter interaction can take place through multiple processes, including electric dipole (ED), magnetic dipole (MD), or higher order multipole transitions <cit.>. The natural scale of ED transitions is the largest, and therefore ED transitions are most often targeted for controlling light-matter interactions. However, in certain atoms ED transitions are suppressed, such that higher-order processes become dominant. Many experiments have demonstrated magnetic local density of states (LDOS) engineering in the microwave frequency domain, using metallic or superconducting cavities coupled to spin ensembles <cit.> and, recently, individual spins <cit.>.
Demonstrating magnetic LDOS engineering in the optical domain is more challenging, as many emitters with significant MD decay pathways have competing decay processes that must be disentangled, such as forced electric dipoles and nonradiative decay. In emitters with mixed MD and ED decay pathways, the relative contributions can be distinguished through the angular spectrum of the emitted radiation <cit.>. Furthermore, placing emitters near dielectric interfaces and in thin film structures has enabled small modifications of ED and MD decay rates through the Purcell effect <cit.>. However, demonstrating strong modification of the emission of magnetic dipole emitters in the optical domain via magnetic density of states engineering is a long-standing goal <cit.>.
In this work, we demonstrate strong Purcell enhancement of an optical MD transition in a single Er^3+ ion using a nanophotonic cavity. This is enabled by engineering a new single photon emitter, Er^3+:MgO, which by symmetry has a nearly pure MD optical transition at a wavelength of 1540.48 nm. The MD nature of the transition is experimentally confirmed via lifetime measurements, and comparison of the measured fluorescence and absorption spectra with a crystal field model. By evanescently coupling individual Er^3+ ions to a silicon nanophotonic cavity with a small magnetic mode volume of V_m = 0.068 μm^3, we demonstrate a large Purcell enhancement factor of P_m = 1040 ± 30 which can be unambiguously attributed to the magnetic dipole. Additionally, we use the cavity-enhanced MD transition to realize a spin-photon interface. With this, we determine the ground state spin structure and measure the lifetime and coherence time of a single spin. This work opens the door to using nanophotonic structures to control MD emission and enables the use of a wider class of atoms and atom-like systems for quantum technologies.
Realizing a large magnetic Purcell effect requires two components: a cavity with a large magnetic LDOS, and an emitter with a dominant MD decay pathway. The magnetic LDOS of a cavity can be quantified using the magnetic Purcell factor P_m, which is defined analogously to the electric Purcell factor as <cit.>:
P_e(m) = 3/4π^2(λ/n)^3 Q/V_e(m)( r),
Here, Q denotes the quality factor of the cavity, while the electric (magnetic) mode volume V_e(m)( r) describes the field strength at the position r of the emitter. For electric fields, the mode volume is defined by the electric field E( r) and relative permittivity ϵ_r( r) as:
V_e( r) = ∫ϵ_r( r) | E( r)|^2 d^3 r/max_r ϵ_r( r) | E( r)|^2.
For a magnetic dipole transition, the analogous expression is:
V_m( r) = ∫ | B( r)|^2/μ_r( r) d^3 r/max_r | B( r)|^2/μ_r( r),
where B( r) is the magnetic field of the cavity mode and μ_r( r) the relative magnetic permeability. In a simple cavity such as a Fabry-Perot resonator, V_e and V_m are identical for optimally positioned emitters, as the distributions of the E and B fields are the same except for a phase shift along the cavity axis by a quarter wavelength (Fig. <ref>(a)).
In this work, we use a dielectric photonic crystal cavity to achieve very small mode volumes. The behavior of the electric and magnetic fields is qualitatively similar to the Fabry-Perot case (Fig. <ref>(b)). However, differences in the dielectric boundary conditions for the E and B fields result in a slight difference in mode volumes, and from numerical simulations we find V_e = 0.05 μm^3 and V_m = 0.068 μm^3. Therefore, this type of cavity is well-suited for attaining large electric and magnetic Purcell factors, depending on the type of emitter.
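The mode-volume integrals above are straightforward to evaluate on the discretized fields produced by an electromagnetic solver. The following minimal sketch (Python/NumPy) is illustrative only: the array layout and the synthetic standing-wave test are assumptions for demonstration, not the simulation data used for the device.

import numpy as np

def mode_volumes(E, B, eps_r, mu_r, dV):
    """Discretized electric and magnetic mode volumes.

    E, B   : field arrays with the vector component on the first axis
    eps_r  : relative permittivity on the same grid
    mu_r   : relative magnetic permeability on the same grid
    dV     : volume of one grid cell
    """
    e_density = eps_r * np.sum(np.abs(E) ** 2, axis=0)    # eps_r |E|^2
    m_density = np.sum(np.abs(B) ** 2, axis=0) / mu_r      # |B|^2 / mu_r
    V_e = e_density.sum() * dV / e_density.max()
    V_m = m_density.sum() * dV / m_density.max()
    return V_e, V_m

# Sanity check on a 1D standing wave (the Fabry-Perot-like case): E ~ sin(kx)
# and B ~ cos(kx) in a uniform medium give equal mode volumes up to
# discretization error.
x = np.linspace(0.0, 2.0 * np.pi, 401)
E = np.zeros((3, x.size)); B = np.zeros((3, x.size))
E[2] = np.sin(x); B[1] = np.cos(x)
print(mode_volumes(E, B, np.ones_like(x), np.ones_like(x), dV=x[1] - x[0]))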
To address the second requirement of an emitter with a dominant MD decay pathway, we focus on Er^3+ ions. The 1.5 μm optical transition in Er^3+ connects the ^4I_15/2 and ^4I_13/2 levels; since these have the same parity, an electric dipole transition between them is forbidden in a spherically symmetric environment. In host crystals with low site symmetry, admixtures of 5d orbitals can lead to a so-called forced ED transition, which is often the dominant decay pathway <cit.>. However, this is forbidden in centrosymmetric environments <cit.>, which leads us to consider MgO as a host.
We incorporate Er into MgO using ion implantation followed by annealing (see Supplementary Information for additional information). To spectroscopically identify the cubic site in MgO, we perform site-selective excitation spectroscopy <cit.> in a heavily implanted sample (1× 10^14 Er/cm^2, sample A) and measure the lowest few ground (Z_i) and excited state (Y_i) crystal field levels for six distinct sites. Previous studies of Er^3+:MgO have found a large number of spectral lines, suggesting that Er incorporates into the crystal in several different configurations <cit.>. Many of these sites do not have cubic point group symmetry, most likely because the Er sits next to a vacancy or interstitial. However, one of the observed sites, with a Z_1 → Y_1 transition at 1540.48 nm, can be reproduced with a cubic crystal-field model (Fig. <ref>(b)) with only four free parameters, with an r.m.s. deviation of 1.6 cm^-1 (see Supplementary Information for further details).
To provide further confirmation of the MD nature of this transition, we measure the excited-state lifetime of the lowest excited crystal field state for this site, ^4I_13/2(Y_1). The experimentally measured value, τ_0 = 21.04 ± 0.02 ms (Fig. <ref>(c)), is considerably longer than lifetimes in many other materials (typically 5-10 ms <cit.>) and only slightly shorter than the theoretically predicted MD lifetime of 23.1 ms, calculated from the cubic crystal-field model and the refractive index of MgO (see Supplementary Information). Therefore, we conclude that the overall ^4I_13/2(Y_1) decay is approximately 91% MD, with the remainder being nonradiative or forced ED arising from a small distortion of the crystal. We note that other emitters in MgO have been observed to have significant nonradiative or forced ED phonon sideband transitions <cit.>; their absence for Er^3+ is a consequence of the isolated nature of the 4f electrons. The contribution of the next-order multipole, the electric quadrupole (E2), is estimated to be a factor of ∼ 1× 10^-7 of the MD rate for the ^4I_13/2 state of Er^3+ (see Supplementary Information) <cit.>.
To study the magnetic Purcell effect, we fabricate silicon nanophotonic resonators from a silicon-on-insulator wafer, and bond them onto Er-implanted MgO crystals using a stamping process described previously <cit.>. Each device consists of an array of cavities evanescently coupled to a single bus waveguide (Fig. <ref>(c)), which is coupled to an optical fiber using a grating coupler <cit.>. The in-plane distribution of the B field is shown in Fig. 1(d); the peak field strength with a single photon in the cavity is B_z = 8.4 G. While the field decays exponentially into the substrate, it remains larger than 2.50 G at depths up to 100 nm.
In a first experiment to probe the magnetic Purcell enhancement, we use a sample implanted with 1× 10^12 Er/cm^2, distributed uniformly between the surface and a depth of 100 nm (sample B). After tuning the cavity resonance to the 1540.48 nm transition using gas deposition <cit.>, we probe the cavity-coupled ions using photoluminescence excitation spectroscopy (PLE) by sweeping a pulsed laser over the resonance while collecting time-delayed fluorescence photons with a superconducting nanowire single photon detector (SNSPD). PLE spectroscopy reveals a forest of single ion lines (Fig. <ref>(a)). The single-ion nature is confirmed by measuring the second-order autocorrelation function (g^(2)) of the emitted photons (Fig. 3(c)). Individual ions have linewidths as narrow as 690 kHz (Fig. 3(b)), which is significantly smaller than previous measurements of shallow ions in Y_2SiO_5 <cit.>, LiNbO_3 <cit.>, or silicon <cit.> (though we note that narrower linewidths for single ions have been observed for deeper ions in YSO <cit.>, and for shallow ions in a non-polar site in CaWO_4 <cit.>). This is likely a consequence of the absence of a permanent electric dipole moment for Er^3+ substituted at the cubic site, which renders the emitter insensitive to charge noise.
Focusing on a single ion, we determine the MD coupling strength to the cavity from the fluorescence lifetime. A representative time trace is shown in Fig. 3(d) alongside the bulk fluorescence lifetime for comparison. The single-ion fluorescence lifetime is τ = 20.3 ± 0.5 μs, which is shorter than the bulk lifetime by a factor of P_m = τ_0/τ = 1040 ± 30, thereby demonstrating strong magnetic Purcell enhancement. Using the relationship P_m = 4 g_m^2/(κγ), with the cavity decay rate κ = 2π× 3.14 GHz (Q = 6.2 × 10^4), we extract an atom-cavity coupling strength of g_m = 2π× 2.49 MHz. With g_m = μ B/ħ and a theoretical dipole moment of μ = 0.62 μ_B (where μ_B is the Bohr magneton), we determine a single-photon magnetic field of B= 2.86 G at the position of the ion. This is in good agreement with the simulated magnetic field strength for an optimally positioned ion at a depth of 65 nm (Fig. 1(d)), consistent with the expected ion distribution for sample B.
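The quoted numbers can be cross-checked against the stated relations P_m = τ_0/τ = 4 g_m^2/(κγ) and g_m = μ B/ħ. The short script below (values copied from the text, constants in SI units) is a consistency check, not part of the data analysis pipeline.

import numpy as np

hbar = 1.054571817e-34        # J s
mu_B = 9.2740100783e-24       # J / T

tau0 = 21.04e-3               # bulk lifetime (s)
tau = 20.3e-6                 # Purcell-enhanced lifetime (s)
kappa = 2.0 * np.pi * 3.14e9  # cavity decay rate (rad/s)
mu = 0.62 * mu_B              # MD transition dipole moment

P_m = tau0 / tau                           # Purcell factor
gamma = 1.0 / tau0                         # bulk decay rate
g_m = np.sqrt(P_m * kappa * gamma / 4.0)   # from P_m = 4 g^2 / (kappa * gamma)
B1 = hbar * g_m / mu                       # single-photon field from g = mu * B / hbar

print(round(P_m))                 # ~1036, consistent with 1040 +/- 30
print(g_m / (2.0 * np.pi) / 1e6)  # ~2.48 MHz
print(B1 * 1e4)                   # ~2.86 G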
Lastly, we use the Purcell-enhanced MD transition to realize a spin-photon interface. We use a third sample with an even lower implantation dose to allow clearer resolution of single ions (sample C, 2× 10^10 Er/cm^2). Based on the crystal field model, the ground state is expected to be an effective spin-3/2 quartet state (Γ_8 in Bethe notation), which is only allowed at a cubic site <cit.>. However, an infinitesimal distortion breaking the cubic symmetry can lift this degeneracy, resulting in two Kramers' doublets, Z_1 and Z_2, and the presence of such a perturbation is suggested by the double-peak structure in Fig. 3(a), with a splitting of 1.5 GHz. A small magnetic field (50 G) further separates the Kramers' doublets into single states, that is, Z_1± and Z_2± (Fig. 4(a)).
With the laser and cavity tuned into resonance with the Z_1-→ Y_1- transition, repeated application of laser pulses results in an exponential decay of the fluorescence (Fig. 4(b)). The initial fluorescence amplitude is proportional to the spin population in Z_1-, while the decay indicates optical pumping into other states. After initializing a population imbalance in this way, we were able to observe coherent Rabi oscillations between Z_1- and Z_1+ while applying a microwave (MW) drive with frequency f = 1.63 GHz.
By using the initialization and readout sequence from Fig. 4(b), and calibrated microwave rotations from Fig. 4(c), we can measure the coherence properties of the Z_1 ground state. To measure the lifetime, we first initialize with optical pumping, and then measure the final state population in Z_1- (using a direct fluorescence measurement) and Z_1+ (using a MW π pulse followed by direct fluorescence measurement). We find a population relaxation time of 2.6 ms. In most other materials, the lowest Kramers' doublet has a relaxation time exceeding several seconds at sub-Kelvin temperatures <cit.>, because there is no direct phonon matrix element between the spin sublevels <cit.>. We attribute the shorter observed lifetime to thermal transitions to the low-lying Z_2 level. If the separation of this level is much less than the Boltzmann energy k_B T/h ≈ 10 GHz (here, T=0.5 K is the environment temperature, k_B is the Boltzmann constant, and h is the Planck constant), then we expect the population of the lowest four levels {Z_1-,Z_1+,Z_2-,Z_2+} to change from approximately {0,0.33,0.33,0.33} after optical pumping (assuming the Y_1 branching ratio to Z_1 and Z_2 is similar), to {0.25,0.25,0.25,0.25} after thermalization. This is in agreement with the observed change in population of the Z_1+ state, providing support for the role of the Z_2 levels in the spin relaxation, although we note that we did not directly observe the Z_2 levels for the single ion. A similar relaxation timescale has been observed for ensembles in bulk EPR measurements <cit.>.
Finally, we measure the spin coherence of the |Z_1±⟩ states using Ramsey and Hahn echo sequences (Fig. <ref>(e-f)), finding T_2^* = 54 ± 6 ns and T_2^Hahn = 560 ± 40 ns, respectively. The measured value of T_2^* is in good agreement with the predicted range of values from the ^25Mg nuclear spins bath (I=5/2, g_n=-0.34, relative abundance = 10%), estimated using cluster correlation expansion (CCE) <cit.> simulations. However, the Hahn echo time, T_2, is shorter than predicted from the nuclear spin dynamics alone, indicating an additional source of noise with fast fluctuations. The most likely sources are native defects (that is, F centers <cit.>), paramagnetic impurities or surface spins.
In conclusion, we have demonstrated magnetic Purcell enhancement with a Purcell factor of P_m = 1040 ± 30. This was enabled by engineering a new single photon emitter, Er^3+:MgO, and coupling it to a silicon nanophotonic cavity. This large Purcell enhancement establishes the equivalence of electric and magnetic density of states for nanophotonic engineering and opens the door to engineering strong light-matter interaction with a wider range of emitters.
There are several aspects of these results worth discussing. First, the observed magnetic Purcell factor P_m = 1040 ± 30 is not only the first strong Purcell enhancement for an optical magnetic dipole transition, but it is also comparable to the largest Purcell factors ever reported for electric dipole transitions in any context, using nanophotonic cavities <cit.> or plasmonic devices <cit.>. This provides a striking demonstration of the equivalence of optical density of states engineering with electric and magnetic fields.
A particularly valuable application of magnetic Purcell enhancement is to develop new quantum emitters and spin-photon interfaces based on magnetic dipole transitions. Rare earth ions in centrosymmetric sites are one interesting class of MD emitters, which includes the example studied in this work. The main benefit of a centrosymmetric site is the absence of a permanent electric dipole moment and of the corresponding linear DC Stark shift in the presence of charge noise, which can suppress the spectral diffusion that prevents indistinguishable photon generation <cit.>, a key requirement for applications in quantum networks <cit.>. In fact, we attribute the narrow single-ion optical linewidth observed in this work to this effect. The observed nearly-quartet ground state is problematic for use as a qubit, because of fast spin relaxation. This effect could be overcome by the deliberate introduction of strain to further separate the Z_1 and Z_2 levels (e.g., using mechanical structures <cit.> or epitaxial growth <cit.>), or by using host crystals that are centrosymmetric but not cubic. Finally, materials with strong optical MD transitions are of particular interest for the development of negative refractive index materials <cit.>.
Acknowledgements This work was primarily supported by the U.S. Department of Energy, Office of Science, National Quantum Information Science Research Centers, Co-design Center for Quantum Advantage (C2QA) under contract number DE-SC0012704. We also acknowledge support from the DOE Early Career award (for modeling of decoherence mechanisms and spin interactions), as well as AFOSR (FA9550-18-1-0334 and YIP FA9550-18-1-0081), the Eric and Wendy Schmidt Transformative Technology Fund, the Princeton Catalysis Initiative, and DARPA DRINQS (D18AC00015) for establishing the materials spectroscopy pipeline and developing integrated nanophotonic devices. CMP was supported by a National Defense Science and Engineering Graduate (NDSEG) Fellowship.
§ SUPPLEMENTARY INFORMATION
§ MGO SAMPLE PREPARATION
The MgO samples used in this study were procured from MTI Corporation. They have a double-sided epi polish, and are specified by the vendor to have a chemical purity >99.95 %. Erbium was introduced into the samples using ion implantation (II-VI Inc.). A total of three different implantation and annealing treatments were used for different experiments. Sample A was implanted with an erbium density of 1× 10^14 Er/cm^2 targeting a uniform erbium distribution between the surface and a depth of 100 nm, using the fluences and energies shown in Tab. <ref>. After implantation, this sample was annealed in air at 600 ^∘C for 6 hours. Sample B was implanted with an erbium density of 1× 10^12 Er/cm^2 and targeted a uniform erbium density between the surface and a depth of 100 nm (Tab. <ref>), with no post-implantation anneal. Finally, sample C was implanted with an erbium density of 2× 10^10 Er/cm^2 using a 35 keV implantation energy corresponding to a target depth of 15 nm. After implantation, this sample was annealed in air at 400 ^∘C for 8 hours. The implantation energies were chosen to match the target depth based on simulations using the Stopping and Range of Ions in Matter (SRIM) package <cit.>.
§ EXPERIMENTAL DETAILS
Nanophotonic devices were fabricated from silicon on insulator wafers and transferred to the MgO substrate. Light was coupled to the devices via a grating coupler using an angle polished fiber and MW delivery was achieved using a scanning probe head. Details of the fabrication procedure and experimental apparatus can be found in Ref. <cit.>.
The single ion fluorescence measurement (Fig. 3(a)) was performed at 4 K with the sample cooled with a Montana Inc Cryostation. All other single ion measurements were conducted at 500 mK using a BlueFors ^3He cryostat.
The fluorescence measurement (Fig. 3(a)) was performed using 4.2 μs excitation pulses interleaved with 101 μs fluorescence collection windows, repeated 1×10^4 times for each frequency step. The narrow single ion measurement (Fig. 3(b)) utilized 4.2 μs excitation pulses and 41 μs fluorescence collection windows repeated 4×10^5 times. The second-order autocorrelation experiment (Fig. 3(c)) used 4.2 μs excitation pulses and 500 μs fluorescence collection windows, repeated 7.36×10^6 times. The single ion lifetime measurement (Fig. 3(d)) utilized 4.2 μs excitation pulses interleaved with 101 μs fluorescence collection windows, repeated 2×10^4 times per shot. The lifetime was averaged over a total of 91 shots.
Spin characterization experiments were all performed with sample C using two distinct magnetic field orientations. Initialization experiments used an external magnetic field magnitude of 50 G and an orientation of (θ, ϕ) = (60^o, 65^o) selected to maximize cyclicity (see Fig. 1(c) of main text for coordinate system definition). Using this configuration, 41 pulses were used to initialize the spin state (Fig. 4(b) of main text). Increasing the number of pulses beyond this did not improve the initialization since an equilibrium between the optical pumping rate and phonon cross-relaxation rate (discussed in the main text) is reached. The same field configuration was utilized for spin T_1 characterization. The optical pulse sequence used for Fig. 4(b) and Fig. 4(d) consisted of a 4.2 μs optical excitation pulse interleaved with 61 μs fluorescence collection windows, and was repeated a total of 1× 10^4 times.
Rabi, Ramsey and Hahn echo measurements were performed using a magnetic field magnitude of 125 G, with an orientation of (θ, ϕ) = (90^o, 38^o), selected to minimize background fluorescence from neighboring ions. This field corresponded to a ground state splitting of f = 1.63 GHz. The optical pulses used for acquiring Rabi oscillation data consisted of a 4.2 μs excitation pulses in tandem with 101 μs fluorescence windows, with each point repeated 4 × 10^4 times. The Ramsey data used optical excitation pulses of length 4.2 μs with fluorescence collection windows of 101 μs length, repeated 4×10^4 times for each data point. Finally, the Hahn data used optical excitation pulses of length 47 ns interleaved with 101 μs length fluorescence collection windows and a total of 4.2 × 10^5 repetitions. The very short optical excitation pulses corresponded to calibrated optical π pulses and were performed with a peak optical power four orders of magnitude larger than the uncalibrated 4.2 μs excitation pulses used in other experiments.
For ensemble spectroscopy (Fig. 2(b-c) and Fig. S1), we used a separate setup at 4 K, based on an Oxford Instruments Optistat cryostat.
§ CRYSTAL-FIELD MODELING OF THE CUBIC SITE
A crystal field model was developed to verify that the observed transitions are consistent with a cubic point-group symmetry, and to determine a theoretical lifetime of the optical excited state.
The complete Hamiltonian for the 4f electrons has the form
H = H_FI + H_CF + H_NMD,
where H_FI is the free-ion contribution, H_CF is the crystal-field Hamiltonian, and H_NMD is the nuclear magnetic-dipole interaction. The free-ion Hamiltonian follows the parameterization of Carnall et al. <cit.>
H_FI = E_AVG + ∑_1,2,3 F^k f_k + ζ_4f A_SO + α L(L+1) + β G(G_2) + γ G(R_7) + ∑_i = 2,3,4,6,7,8 T^i t_i,
with E_AVG the central field Hamiltonian. Subsequent terms are arranged in pairs of parameters multiplied by the matrix elements of an operator. Specifically, F^k are the Slater parameters and f_k the components of the angular part of the electrostatic repulsion, ζ_4f is the spin-orbit coupling constant with A_SO the spin-orbit coupling operator. The parameters α, β, and γ account for two-body interactions (Trees parameters) and the parameters T^i account for three-body interactions (Judd parameters). Furthermore, G(G_2) and G(R_7) are the eigenvalues of the Casimir operators of the groups G_2 and R_7 <cit.>.
For a substitutional site with a cubic point-group symmetry, the crystal-field Hamiltonian reads
H_CF = B^4_C [C^4_0 + √(5/14)(C^(4)_4 + C^(4)_-4)] + B^6_C [C^6_0 + √(7/2)(C^(6)_4 + C^(6)_-4)].
Here, the B^4_C and B^6_C are the crystal field parameters and C^(k)_q are spherical tensor operators expressed using the normalization of Wybourne <cit.>. The last term in Eq. (<ref>), H_NMD, accounts for the nuclear magnetic dipole; a detailed description of this contribution to H can be found in Ref. <cit.>.
The Hamiltonian parameters, shown in Tab. <ref>, were optimized by fitting the eigenvalues of the Hamiltonian (<ref>) to the energy levels of (Fig. <ref>) reported in Ref. <cit.>.
The experimental and theoretical energies are summarized in Tab. <ref>, yielding an r.m.s. difference of 1.6 cm^-1. This close agreement establishes that the studied site in has cubic point-group symmetry.
We note that due to available experimental data being limited to the ^4I_15/2 and ^4I_13/2 multiplets, it was only possible to fit the two crystal-field parameters B^4_C and B^6_C, as well as the central field contribution E_AVG and the spin-orbit coupling constant ζ. The remaining parameters were fixed to values of Er^3+:LaF_3 from Carnall et al. <cit.>. As a consequence, the obtained fit is unlikely to accurately reproduce the inter-multiplet spacing of higher energy terms. However this does not affect the accuracy of the fitted cubic crystal-field parameters to leading order. Additionally, high-resolution spectroscopy of the ^4I_15/2(Z_1) to ^4I_13/2(Y_1) transition showed a lifting of the Γ_8 quartet into two Kramers' doublets with a splitting of 1.5 GHz. It was possible to reproduce this by adding a small axial perturbation to the cubic crystal field of B^2_0 = 1.4 cm^-1. To achieve close agreement with the observed lineshape, it was necessary to include an additional rank 2 contribution of B^2_2 = 0.6 cm^-1, along with the nuclear magnetic dipole hyperfine interaction with a coupling strength of 0.0054 cm^-1 <cit.>.
§ PREDICTED MAGNETIC DIPOLE DECAY RATE
In order to evaluate the optical lifetime of the ^4I_13/2(Y_1) state for the cubic site, we use the Hamiltonian (<ref>) to determine the magnetic dipole moment. Following the treatment of Ref. <cit.>, we note that
μ^(1)_q = -e ħ/2 m_e c (𝐋 + 2 𝐒)^(1)_q,
where μ^(1)_q is the rank 1 magnetic dipole operator with polarization q = {-1, 0, 1}. The transition strength for magnetic dipole transitions from initial state I to final state F can be found by summing over all the i and f components of these states:
S^MD_FI,q = ∑_i ∑_f |⟨Ff|μ^(1)_q |Ii⟩|^2.
We note that the excited state lifetime is the inverse of the sum over emission rates from all polarizations. Therefore, one can define an effective emission rate by <cit.>
A^MD_FI = 1/4 πϵ_04 ω^3/ħ c^3 n^3 1/ξ1/3∑_q S^MD_q.
Here ϵ_0 is the permittivity of free space, ω the angular frequency, ħ the reduced Planck constant, c the speed of light, n the refractive index of the host crystal, and ξ the degeneracy of the initial state.
The transition dipole moment μ can be determined from the wavefunctions obtained from Eq. (<ref>). We find μ = 0.62 μ_B. For a refractive index of n=1.715 <cit.> we then find A^MD=43.23 s^-1, corresponding to a lifetime of 23.1 ms. We note that the ^4I_13/2 lifetime of has been previously calculated for the free-ion case in Ref <cit.>, yielding 19.6 ms after scaling by n^3 to account for the MgO host. The slightly slower decay rate that we predict is related to the MD matrix element. We believe our predicted matrix element is more accurate because we include the effect of the crystal field (H_CF), and also fit the spin-orbit constant ζ to the spectroscopic data.
§ OTHER CONTRIBUTIONS TO THE DECAY RATE
The theoretically calculated magnetic dipole relaxation rate for MgO is slightly slower than the experimentally observed decay rate, implying an additional relaxation path. It is therefore possible that the observed Purcell enhancement could be attributed to the other decay pathway, instead of the MD decay. However, given the comparable electric and magnetic mode volumes of the cavity, the maximum attainable Purcell factor for pure ED and pure MD emitters are also comparable. The observed Purcell factor agrees well with the predicted enhancement for a pure MD emitter. Therefore, it is not plausible to attribute the bulk of the Purcell enhancement to the < 10% ED decay fraction.
Finally we note that in addition to a small ED decay component, one may also consider the possibility of an electric quadrupole contribution. The electric quadrupole decay rate for the ^4I_13/2 term of has been estimated using a free-ion Hamiltonian to be Γ^EQ_FI = 1.16 × 10^-6 s^-1 <cit.>. To first order, the free-ion case can be used to estimate the expected decay rate in a host material using Γ^EQ = n^5 Γ^EQ_FI <cit.>, which yields Γ^EQ = 1.72 × 10^-5 s^-1. This decay rate is negligible compared to the MD decay rate and may therefore be disregarded.
|
http://arxiv.org/abs/2307.00561v1
|
20230702130132
|
SAT-based Formal Fault-Resistance Verification of Cryptographic Circuits
|
[
"Huiyu Tan",
"Pengfei Gao",
"Taolue Chen",
"Fu Song",
"Zhilin Wu"
] |
cs.CR
|
[
"cs.CR",
"cs.AR",
"cs.SE"
] |
SAT-based Formal Fault-Resistance Verification of Cryptographic Circuits
Huiyu Tan, Pengfei Gao, Taolue Chen, Fu Song, Zhilin Wu
August 1, 2023
===================================================================
Fault injection attacks represent a type of active, physical attack against cryptographic circuits.
Various countermeasures have been proposed to thwart such attacks, the design and implementation of which are, however, intricate, error-prone, and laborious.
The current formal fault-resistance verification approaches are limited in efficiency and scalability. In this paper,
we formalize the fault-resistance verification problem which is shown to be NP-complete.
We then devise a novel approach for encoding the fault-resistance verification problem as the Boolean satisfiability (SAT) problem so that
off-the-shelf SAT solvers can be utilized. The approach is implemented in an open-source tool which is evaluated extensively on realistic cryptographic circuit benchmarks.
The experimental results show that our tool is able to verify fault-resistance of almost all (46/48) benchmarks within 3 minutes (the other two are verified within 35 minutes).
In contrast, the prior approach fails on 23 fault-resistance verification tasks even after 24 hours (per task).
§ INTRODUCTION
Cryptographic circuits have been
widely used in providing secure authentication, privacy, and integrity,
due to rising security risks in sensor networks, healthcare, cyber-physical systems, and the Internet of Things <cit.>.
However, cryptographic circuits are vulnerable to various effective physical attacks, and defending against them remains an open challenge even after two decades of research.
This paper focuses on a class of infamously effective attacks, namely fault injection attacks <cit.>.
Fault injection attacks deliberately inject disturbances into a cryptographic circuit when it is running cryptographic computation, and analyze the information from the correct (non-faulty) and the incorrect (faulty) outputs, attempting to deduce information on the secret key. Fault injection attacks allow the adversary to bypass certain assumptions
in classical cryptanalysis methods
where the cipher is considered to be a black box and therefore cannot be tampered with.
The disturbances could be injected in various different ways, such as clock glitches <cit.>, underpowering <cit.>,
voltage glitches <cit.>, electromagnetic pulses <cit.>
and laser beams <cit.>.
Secret information can be deduced by differential fault analysis <cit.>,
ineffective fault analysis <cit.>, statistical fault analysis <cit.>, and statistical ineffective fault analysis <cit.>.
Thus, fault injection attacks pose a severe security threat to embedded computing devices with cryptographic modules.
Both detection-based and correction-based countermeasures have been proposed to defend against fault injection attacks <cit.>.
The former aims to detect fault injections and, in the presence of faults, to infect the output result and raise an error flag so that the output cannot be exploited by an attacker;
the latter aims to correct the faulty cryptographic computation in the presence of faults.
An effective countermeasure must be fault-resistant, i.e., it must detect or correct faults in time once they occur. Designing and implementing secure, efficient, and low-cost cryptographic circuits
is notably error-prone, hence it is crucial to rigorously verify
fault-resistance, especially
at the gate level (which is closer to the final circuit sent to the fab for the tape-out).
Typically this is done at the last stage of the front-end design so the flaws introduced by front-end tools (e.g., optimization passes) can be detected.
There are more specialized works for assuring fault-resistance (e.g., <cit.>), but almost all of them
focus on finding flaws or checking the effectiveness of user-specified fault test vectors (i.e., certain instances).
In principle, to achieve completeness, all the possible test vectors (varying in fault types, injected gates and clock cycles)
must be
checked under all valid input combinations, which is virtually infeasible in practice.
To alleviate this issue, recently, a binary decision diagram (BDD) <cit.> based approach, called FIVER <cit.> was proposed,
which does not explicitly enumerate all valid input combinations and is optimized to avoid some fault test vectors.
However, it still has to repeatedly build BDD models for a huge number of fault test vectors, failing to verify relatively larger circuits in a reasonable amount of time.
(For instance, it fails to prove fault-resistance of a single-round 2-bit protected AES in 24 hours.)
Contributions. In this work,
we first formalize the fault-resistance verification problem,
which lays a solid foundation for the subsequent reasoning.
We show that this problem is NP-complete.
Highlighted by this computational complexity,
we then propose a novel SAT-based approach for verifying fault-resistance
by utilizing modern efficient SAT solvers.
The encoding for fault-resistance verification into SAT solving requires a novel treatment.
Technically, with
a countermeasure and a fault-resistance model, we generate a new conditionally-controlled faulty circuit, which is in turn reduced
to the SAT problem.
Intuitively we replace each vulnerable gate with a designated gadget (i.e., sub-circuit) with (1) a control input for controlling if a fault is injected on the gate,
and (2) selection inputs for choosing which fault type is injected. This approach
avoids explicit enumeration of all the possible fault test vectors and can fully utilize
the conflict-driven clause learning (CDCL) feature of modern SAT solvers.
Furthermore, we introduce a
reduction technique to safely reduce the number of vulnerable gates when verifying fault-resistance, which significantly improves the verification efficiency.
We implement our approach in an open-source tool (Fault Injection counteR Measure verifiER), which is
based on Verilog gate-level netlists.
We evaluate the tool on 19 realistic cryptographic circuits (i.e., rounds of AES, CRAFT and LET64) with both detection- and correction-based countermeasures, where
the number of gates ranges from 925 to 40,184. The results show
that our approach is effective and efficient in verifying fault-resistance against various fault-resistance models.
Almost all the benchmarks (46 out of 48) can be verified in less than 3 minutes (except for two which take
31 and 35 minutes respectively). In comparison,
FIVER runs out of time (with timeout 24 hours) on 23 (out of 48) fault-resistance verification tasks.
To summarize, we make the following major contributions:
* We formalize the fault-resistance verification problem
and identify its NP-complete computational complexity for the first time.
* We propose a novel SAT-based approach for formally verifying fault-resistance with an accelerating technique.
* We implement an open-source tool for verifying fault-resistance in Verilog gate-level netlists.
* We extensively evaluate our tool on realistic cryptographic circuits to show its effectiveness and efficiency.
Outline.
Section <ref> briefly recaps circuits, fault injection attacks and their countermeasure.
Section <ref> formulates the fault-resistance verification problem and studies its complexity.
Section <ref> presents our SAT-based verification approach.
Section <ref> reports experimental results.
We discuss related work in Section <ref> and finally conclude this work in Section <ref>.
With the development of fault injection analysis, research has proceeded along three main directions: enhancing fault injection methods, proposing new analysis techniques, and designing effective countermeasures.
For the first direction, several physical methods have been proposed, including clock glitch attacks <cit.>, voltage glitch attacks <cit.>, laser beam attacks <cit.>, and electromagnetic pulse attacks <cit.>, among others <cit.> <cit.> <cit.> <cit.>. The first two are relatively simple to mount but offer limited precision. Laser beam attacks are highly precise but difficult to carry out, while electromagnetic pulse attacks are easier to mount than laser beam attacks but less powerful.
Regarding the second direction, Biham and Shamir proposed Differential Fault Analysis (DFA) <cit.> in 1997. Since then, a number of further attack techniques have been proposed, including Ineffective Fault Attacks (IFA) <cit.>, Statistical Fault Attacks <cit.>, and Statistical Ineffective Fault Analysis <cit.>.
In the third direction, researchers have proposed a number of countermeasures against fault injection attacks, typically based on temporal, spatial, or information redundancy. These countermeasures can be further divided into detection-based, correction-based, and infection-based schemes. Anita Aghaie et al. proposed a detection-based countermeasure <cit.> in 2020. Soon after, Aein Rezaei et al. proposed a correction-based countermeasure <cit.> extending <cit.>. In 2012, Benedikt Gierlichs et al. proposed an infection-based countermeasure <cit.>.
In 2020, Victor Arribas et al. proposed the VerFI framework to address these challenges. The framework uses simulation to detect faults in the system and requires the user to input the type and location of the faults. However, this requirement can also be seen as a drawback since it requires the user to have expertise in the field and may result in false positives if the inputs are not good enough. Additionally, since the framework relies on user input, there is a risk of missing faults that may not be detected by the framework.
To address the limitations of the VerFI framework, Jan Richter-Brockmann et al. proposed a new framework called FIVER<cit.>. FIVER is based on Binary Decision Diagram (BDD). The framework transfers a digital logic circuit to BDD and uses a general fault model to describe the attacker's capability. The verification process is automated and performed directly on the gate-level circuit.
One advantage of FIVER is that it eliminates the problem of false positives caused by specific inputs. BDD is able to automatically generate inputs, which ensures that all input cases are covered. Additionally, all fault combinations can also be automatically generated and verified, providing a comprehensive solution. The user is only required to provide the attacker's capability, so it does not require expertise in the field. The automation of the verification process ensures that all fault combinations are fully covered.
Despite its advantages, the FIVER framework still has several limitations. One major limitation is the difficulty in verifying large-scale circuits. The verification process requires the rebuilding of a BDD for each combination of faults, making it time-consuming and inefficient.
To overcome these limitations, we develop an approach based on a Boolean satisfiability (SAT) solver, together with three optimizations that significantly reduce the verification time. Using a SAT solver instead of BDDs allows for a more efficient verification process, making it possible to verify large-scale circuits in a reasonable amount of time.
In this paper, we present a formal verification method for countermeasures against fault injection attacks (FIA) based on a SAT solver. The method injects faults into the abstract model of a gate-level circuit, converts the model into conjunctive normal form (CNF), and solves the resulting CNF with a SAT solver.
This approach offers several benefits. Firstly, it automatically considers all possible input combinations, thereby avoiding false positives caused by inappropriate selection of test vectors. Secondly, we present a method of fault injection that covers all possible locations, avoiding any situation that might result in a successful but undetected injection. The use of a SAT solver allows for a comprehensive and efficient verification process that considers all possible scenarios, ensuring the security and correctness of the countermeasures against FIA.
In an effort to enhance the performance of the proposed formal verification method, we introduce three optimization techniques to reduce the verification time. These optimization techniques are rigorously analyzed and proven theoretically in the paper. The implementation of these optimizations results in a more efficient verification process, allowing for quicker results.
Outline.
Section <ref> introduces the notation and circuit models, reviews fault injection attacks and the fault model, and defines the security notion. Section <ref> presents the SAT-based verification in detail. Section <ref> gives three optimization algorithms together with their proofs. Section <ref> reports the experimental results. We then discuss related work in Section <ref> and conclude in Section <ref>.
§ PRELIMINARY
In this section, we introduce (gate-level) circuits, fault injection attacks
and their countermeasures.
§.§ Notations
We denote by the Boolean domain {,}
and by [n] the set of integers {1,⋯, n} for any integer n≥ 1.
To describe standard circuits, we consider the logic gates:
AND (∧), OR (∨), NAND, NOR, XOR (⊕), XNOR, and NOT (¬),
all of which are binary gates except for NOT.
We note that NAND, NOR, and XNOR compute ¬∙(x_1,x_2) for the corresponding ∙∈{∧,∨,⊕},
so ¬∙ may be used to denote the negated gate.
The set of these gates is denoted by .
In addition, we introduce three auxiliary logic gates to describe faulty circuits:
the identity (buffer) gate and two constant gates,
where the identity gate simply outputs its input unchanged, and the two constant gates output 1 and 0, respectively.
The extended set of logic gates is denoted by ^⋆.
§.§ Synchronous Circuits
A combinational circuit C is a tuple
(V,I,O, E,), where
* V is a set of vertices, I⊂ V is a set of inputs, and O⊂ V is a set of outputs;
* E⊆ (V∖ O) × (V∖ I) is a set of edges
each of which represents a wire connecting two vertices and carrying a digital signal
from the domain ;
* (V,E) forms a directed acyclic graph (DAG);
* each internal vertex v∈ V∖ (I∪ O) is a logic gate associated with its function (v)∈^⋆ whose fan-in size
is equal to the in-degree of the vertex v.
Intuitively, combinational circuits represent Boolean
functions.
The behavior of a combinational circuit is
memoryless, namely, the outputs depend solely on the
inputs and are independent of the circuit's past history.
The semantics of the combinational circuit C is described by the associated
Boolean function C:^|I|→^|O|
such that for any signals x⃗∈^|I| of the inputs I,
C(x⃗)=y⃗ iff under the input signals x⃗ the output signals O of the circuit C are y⃗ .
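As an illustration of this semantics, a combinational circuit can be evaluated by a memoized traversal of its DAG. The Python sketch below uses a simple dictionary-based netlist; this representation is for exposition only and is not the encoding used by our tool.

from typing import Callable, Dict, List, Tuple

# A combinational circuit as a DAG: each internal vertex maps to
# (gate function, list of fan-in vertices).  Input vertices are free.
Gate = Tuple[Callable[..., bool], List[str]]

def eval_combinational(gates: Dict[str, Gate],
                       inputs: Dict[str, bool],
                       outputs: List[str]) -> Dict[str, bool]:
    """Evaluate the Boolean function of the circuit by memoized DFS over the DAG."""
    value: Dict[str, bool] = dict(inputs)

    def visit(v: str) -> bool:
        if v not in value:
            fn, fan_in = gates[v]
            value[v] = fn(*(visit(u) for u in fan_in))
        return value[v]

    return {o: visit(o) for o in outputs}

# Example: a 1-bit half adder (sum = a XOR b, carry = a AND b).
gates = {
    "sum":   (lambda a, b: a ^ b, ["a", "b"]),
    "carry": (lambda a, b: a and b, ["a", "b"]),
}
print(eval_combinational(gates, {"a": True, "b": True}, ["sum", "carry"]))
# {'sum': False, 'carry': True}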
A (synchronous) sequential circuit is a combinational circuit with feedback synchronized by a global clock.
It has primary inputs, primary outputs,
a combinational circuit and memory in the form of registers (or flip-flops).
The output signals of registers at a clock cycle represent an internal state.
At each clock cycle, the combinational circuit produces its output using the current internal state
and the primary inputs as its inputs. The output comprises two parts: one is used as primary outputs
while the other is stored in the registers, which will be the internal state for the next clock cycle and
can be seen as the feedback of the combinational circuit to the next clock cycle.
We focus on round-based circuit implementations of cryptographic primitives
so that the synchronous circuits always have bounded clock cycles
and can be unrolled
by clock cycles.
A k-clock cycle synchronous (sequential) circuit for k≥ 1 is a tuple
(,,, s⃗_0, ), where
* (resp. ) is a set of primary inputs (resp. primary outputs);
* =R_0⊎⋯⊎ R_k is a set of registers, called memory gates;
* s⃗_0∈^|R_0| gives initial signals to the memory gates in R_0;
* ={C_1,⋯, C_k} where C_i=(V_i,I_i, O_i, E_i,_i) for each i∈[k]. Moreover,
the inputs I_i consist of the primary inputs and memory gates R_i-1,
the outputs O_i
consist of the primary outputs and memory gates R_i, and V_i∩ V_j=∅ for any j≠ i.
Since
memory gates are for synchronization only and are essentially the same as the identity function, for the sake of
presentation, we extend the function _i such that
for every memory gate r∈ R_i-1, _i(r) is the identity (buffer) gate.
However, we emphasize that it may be changed if fault injections are considered.
A state s⃗ of the circuit comprises the output signals of the memory gates.
At any clock cycle i∈[k-1],
given a state s⃗_i-1 and signals x⃗_i of the primary inputs , the next state
s⃗_i
is C_i(s⃗_i-1,x⃗_i) projected onto the registers R_i, and
C_i(s⃗_i-1,x⃗_i) projected onto the primary outputs gives y⃗_i.
In general, we write s⃗_i-1x⃗_i|y⃗_i⟶s⃗_i for the state transition at the i-th clock cycle.
A run π under a given sequence of primary inputs (x⃗_1,⋯,x⃗_k) is a sequence
s⃗_0x⃗_1|y⃗_1⟶s⃗_1x⃗_2|y⃗_2⟶s⃗_2x⃗_3|y⃗_3⟶s⃗_3⋯s⃗_k-1x⃗_k|y⃗_k⟶s⃗_k.
The semantics of the circuit
is described by its associated Boolean function
:(^||)^k→ (^||)^k
such that for any sequence of input signals x⃗_1,⋯, x⃗_k∈ (^||)^k,
(x⃗_1,⋯, x⃗_k)=(y⃗_1,⋯, y⃗_k)
iff
s⃗_0x⃗_1|y⃗_1⟶s⃗_1x⃗_2|y⃗_2⟶s⃗_2x⃗_3|y⃗_3⟶s⃗_3⋯s⃗_k-1x⃗_k|y⃗_k⟶s⃗_k.
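The run semantics can likewise be phrased as threading the register state through the k unrolled combinational stages. The sketch below is again purely illustrative; each per-cycle evaluator could be built from the combinational evaluation shown earlier, and the 1-bit accumulator used here as a demonstration is not one of our benchmarks.

def run_sequential(rounds, s0, primary_inputs):
    """Unrolled run of a k-clock-cycle circuit.

    rounds         : list of k per-cycle evaluators, each mapping
                     (state, primary input vector) -> (next state, primary outputs)
    s0             : initial register contents
    primary_inputs : input sequence x_1, ..., x_k
    """
    state, trace = s0, []
    for step, x in zip(rounds, primary_inputs):
        state, y = step(state, x)
        trace.append(y)
    return trace              # y_1, ..., y_k

# Example: a 1-bit accumulator, s_i = y_i = s_{i-1} XOR x_i, i.e. each round's
# combinational part is a single XOR gate feeding one register.
def xor_round(state, x):
    s = state[0] ^ x[0]
    return [s], [s]

print(run_sequential([xor_round] * 3, [False], [[True], [True], [False]]))
# [[True], [False], [False]]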
We remark that in practice, the combinational circuits C_i's in round-based hardware implementations of a cryptographic primitive
are often similar (many of them are actually the same up to renaming of the vertices),
because the internal rounds of a cryptographic primitive often perform similar computations.
Furthermore, only partial signals of primary inputs may be used in one clock cycle and the signals of primary outputs may only be produced in the last clock cycle.
Our formalization tries to be general.
§.§ Fault Injection Attacks
Fault injection attacks are a type of physical attacks that actively inject faults on some logic and/or memory gates during the execution of a cryptographic circuit
and then statistically analyze the faulty primary outputs to deduce sensitive data such as the
cryptographic key <cit.>. Over the last two decades, various fault injection mechanisms have been proposed
such as clock glitches <cit.>, underpowering <cit.>,
voltage glitches <cit.>, electromagnetic pulses <cit.>
and laser beams <cit.>.
Clock glitch
causes transient faults in circuits by tampering with a clock signal with
glitches. Under the normal clock, the clock cycle is larger than the maximum path delay in
combinational circuits, allowing full propagation of the signals
so that the input signals to memory gates are stable before the next clock signal triggers the
sampling process of the memory gates. In contrast, under a clock with glitches,
some clock periods are shorter than the maximum path delay
so the input signals to memory gates become unstable (i.e., only parts of input signals have reached).
As a result, the memory gates may sample faulty results.
Underpowering and voltage glitches are similar to clock glitches except that
underpowering lowers the supply voltage of the device throughout the entire
execution while voltage glitches only lower the supply voltage for a
limited period of time during the execution.
In contrast to clock glitches that decrease clock periods,
lowering the supply voltage increases the maximum path delay in combinatorial circuits
which can also induce memory gates to sample faulty results.
Electromagnetic pulses induce currents in wire loops
which are the power and ground networks in integrated circuits.
The induced current in a wire loop leads to a (negative or positive) voltage swing between the power
and ground grid. The negative (resp. positive) voltage swings can decrease (resp. increase) the clock signal and the input signals
to memory gates, often leading to reset (resp. set) of the corresponding memory gates, thus injecting faults on memory gates.
A laser beam on a transistor produces a dense distribution of electron-hole pairs along the laser path,
leading to a reduced voltage and eventually a temporary drift current.
The temporary drift current can be used to alter the output signal of a (logic or memory) gate.
Clock glitches, underpowering and voltage glitches are non-invasive,
as they do not require a modification of the targeted device,
thus are considered as rather inexpensive.
In contrast, electromagnetic pulses and laser beams are semi-invasive, allowing the adversary to inject localized faults,
thus have higher precision than non-invasive attacks,
but still at reasonable equipment and expertise requirement.
§.§ Countermeasures against Fault Injection Attacks
Various countermeasures have been proposed to defend against fault injection attacks.
For clock glitches, underpowering and voltage glitches,
an alternative implementation of the circuit can be developed where signal path delays in combinatorial circuits
are made independent of the sensitive data.
For instance, delay components can be added to
certain signal paths <cit.>,
or combinational circuits can be reorganized <cit.>, so that the arrival time of all
output signals of logic gates are independent of the sensitive data.
However, such countermeasures fail to defend against
electromagnetic pulses and laser beams.
Redundancy-based countermeasures are proposed to detect the presence of a fault.
For instance, spatial redundancy recomputes the output multiple times in parallel <cit.>;
temporal redundancy recomputes the output multiple times consecutively <cit.>,
and information redundancy leverages linear error code from coding theory <cit.>.
Once a fault is detected, the output is omitted or the sensitive data is destroyed, with an error flag signal.
However, such countermeasures are still vulnerable against
advanced fault injection attacks such as Ineffective Fault Attack (IFA) <cit.> and Statistical Ineffective Fault Analysis (SIFA) <cit.>.
The linear error-code based approach proposed in <cit.>
was extended in <cit.>, which can correct
faults to protect against IFA and SIFA.
§ THE FAULT-RESISTANCE VERIFICATION PROBLEM
We are interested in verifying redundancy-based countermeasures.
In this section, we first formalize the fault-resistance verification problem,
and then present an illustrating example.
§.§ Problem Formulation
Fix a k-clock cycle circuit =(,,, s⃗_0, ),
where ={C_1,⋯, C_k} and C_i=(V_i,I_i, O_i, E_i,_i) for each i∈[k].
We assume that is a cryptographic circuit without deploying any countermeasures.
Let '=(,',', s⃗_0', ') be the protected counterpart of using
a redundancy-based countermeasure,
where '={C_1',⋯, C_k'} and C_i'=(V_i',I_i', O_i', E_i',_i') for each i∈[k]. We assume that '=∪{o_ flag}, where o_ flag is an error flag output indicating whether a fault was detected when
the circuit ' adopts a detection-based countermeasure.
If ' adopts a correction-based countermeasure, i.e., no error flag output is involved,
for clarity, we assume that the error flag output o_ flag is added but is always 0 (i.e., never raised).
To formalize the fault-resistance verification problem, we first introduce some notations.
We denote by the blacklist of gates that are protected against
fault injection attacks.
usually contains the gates used in the sub-circuits implementing a detection or correction mechanism.
Note that the effects of faults injected on the other gates can
be propagated into the gates in .
To model the effects of different fault injections, we introduce the following three fault types:
* bit-set fault τ_s: when injected on a gate, its output signal becomes 1;
* bit-reset fault τ_r: when injected on a gate, its output signal becomes 0;
* bit-flip fault τ_bf: when injected on a gate, its output signal is flipped, i.e., from 1 to 0 or from 0 to 1.
These fault types are able to capture all the effects of faults induced
by both non-invasive fault injections (i.e., clock glitches, underpowering and voltage glitches) and semi-invasive fault injections
(i.e., electromagnetic pulses and laser beams). The detailed discussion refers to <cit.>.
We denote by 𝒯 = {τ_s,τ_r,τ_bf} the set of fault types.
A fault injection with fault type τ∈𝒯 on a gate
can be exactly characterized by replacing its associated function ∙
with τ(∙):
τ(∙):=
{[ 1, if τ= τ_s;; 0, if τ= τ_r;; ¬∙, if τ= τ_bf,; ].
where 1 and 0 denote the constant-1 and constant-0 gates and ¬∙ the negated gate.
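Behaviourally, τ simply substitutes the gate's function, and the conditionally-controlled faulty circuit mentioned in the introduction wraps every vulnerable gate with a control input and selection inputs that decide, at solving time, whether and how the gate is faulted. The Python sketch below illustrates this idea at the functional level only; the concrete gadget and its CNF encoding are developed later in the paper, and the selection-bit convention used here is an arbitrary choice for illustration.

def faulted(fn, fault):
    """tau(fn): the gate function obtained by injecting a fault of the given type."""
    if fault == "set":
        return lambda *xs: True          # bit-set: output stuck at 1
    if fault == "reset":
        return lambda *xs: False         # bit-reset: output stuck at 0
    if fault == "flip":
        return lambda *xs: not fn(*xs)   # bit-flip: output negated
    raise ValueError(fault)

def controlled(fn):
    """Behavioural gadget: control bit c activates the fault, selection bits
    (s1, s0) choose the fault type (00: flip, 01: set, 10: reset)."""
    def gate(c, s1, s0, *xs):
        if not c:
            return fn(*xs)               # fault not activated
        if not s1 and not s0:
            return not fn(*xs)           # bit-flip
        return bool(s0)                  # stuck-at-1 if s0, else stuck-at-0
    return gate

and_gate = lambda a, b: a and b
g = controlled(and_gate)
print(g(False, 0, 0, True, True))   # True : no fault, ordinary AND
print(g(True, 0, 1, False, False))  # True : bit-set forces the output to 1
print(g(True, 1, 0, True, True))    # False: bit-reset forces the output to 0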
To specify when, where and how a fault is injected, we introduce fault events.
A fault event
is given by
(σ,β,τ), where
* σ∈[k] specifies the clock cycle of the fault injection, namely, the fault injection occurs at the σ-th clock cycle;
* β∈ R_σ-1'∪ V_σ'∖ (I_σ'∪ O_σ') specifies the gate on which the fault is injected; we require that β∉;
* τ∈𝒯 specifies the fault type of the fault injection.
A fault event (σ,β,τ) yields the faulty circuit
'[(σ,β,τ)]=(,',', s⃗_0', {C_1”,⋯, C_k”}),
where for each i∈[k] and every
β'∈ R_σ-1'∪ V_σ'∖ (I_σ'∪ O_σ'),
* C_i”:=
{[ (V_i',I_i', O_i', E_i”,_i”), if i= σ;; C_i', if i≠σ; ].
* E_i” is obtained from E_i' by removing the incoming edges of the gate β if τ∈{τ_s,τ_r},
* _σ”(β'):=
{[ τ(_σ'(β)), if β'=β ;; _σ'(β'), if β'≠β, ].
Intuitively, the faulty circuit '[(σ,β,τ)] is the same as
the circuit ' except that the function _σ'(β) of the gate β
is transiently replaced by τ(_σ'(β)) in the σ-th clock cycle,
while all the other gates at all the clock cycles
remain the same.
We denote by τ(β) the faulty counterpart of β with fault type τ.
In practice, multiple fault events can occur simultaneously during the same clock cycle
and/or consecutively in different clock cycles, allowing the adversary to conduct sophisticated fault injection attacks.
To formalize this, we introduce fault vectors, as a generalization of
fault events.
A fault vector (',,T)
is given by a (non-empty) set of fault events
(',,T)={[ (α_1,β_1,τ_1),; ⋯,; (α_m,β_m,τ_m) ]|[ ∀ i,j∈ [m].; β_i≠β_j∧; α_i∈[k]∧τ_i∈ T ]}.
A fault can be injected into a gate at most once, but multiple faults can be injected into different gates,
in the same or in different clock cycles.
Note that ' is unrolled with clock cycles where each gate is renamed in different clock cycles. Thus, some gates in a fault vector may be the same in the unrolled counterpart.
Similarly, a fault vector (',,T) on the circuit ' yields the faulty circuit
'[(',,T)], which is obtained by iteratively applying fault events in (',,T),
i.e.,
'[(',,T)]:='[(α_1,β_1,τ_1)]⋯ [(α_m,β_m,τ_m)].
A fault vector (',,T) is effective if there exists a sequence of primary inputs (x⃗_1,⋯,x⃗_k)
such that the sequences of primary outputs '(x⃗_1,⋯,x⃗_k)
and '[(',,T)](x⃗_1,⋯,x⃗_k)
differ at some clock cycle which is before the clock cycle when the error flag output o_ flag differs.
Intuitively, an effective fault vector breaks the functional equivalence between and '
and the fault is not successfully detected (i.e., setting the error flag output o_ flag).
Note that there are two possible cases for an ineffective fault vector:
either '(x⃗_1,⋯,x⃗_k)
and '[(',,T)](x⃗_1,⋯,x⃗_k)
are the same or the fault is successfully detected
in time.
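For intuition, this definition can be checked directly by exhaustive input enumeration on toy circuits; such a brute-force check is exponential and is exactly what our SAT encoding avoids. The sketch below is illustrative only: golden and faulty are assumed to be simulators that map an input sequence to per-cycle pairs of (data outputs, error flag).

from itertools import product

def is_effective(golden, faulty, n_inputs, k):
    """A fault vector is effective iff some input sequence makes the data
    outputs diverge strictly before the error-flag output does."""
    for bits in product([False, True], repeat=n_inputs * k):
        xs = [bits[i * n_inputs:(i + 1) * n_inputs] for i in range(k)]
        ref, fau = golden(xs), faulty(xs)
        flag_cycle = next((i for i, (r, f) in enumerate(zip(ref, fau))
                           if r[1] != f[1]), k)   # first cycle where flags differ
        data_cycle = next((i for i, (r, f) in enumerate(zip(ref, fau))
                           if r[0] != f[0]), k)   # first cycle where data differs
        if data_cycle < flag_cycle:
            return True, xs                        # witness input sequence
    return False, None

# Toy demo: a 1-cycle "circuit" whose faulty copy inverts the output and
# never raises the flag -- clearly an effective fault.
golden = lambda xs: [((xs[0][0],), False)]
faulty = lambda xs: [((not xs[0][0],), False)]
print(is_effective(golden, faulty, n_inputs=1, k=1))   # (True, [(False,)])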
Hereafter, we denote by ♯ Clk((',,T)) the size of the set {α_1,⋯,α_m}, i.e.,
the number of clock cycles when fault events can occur, and by
MaxEpC((',,T)) the maximum number of fault events per clock cycle,
i.e., max_j∈ [m] |{(α_i,β_i,τ_i)∈(',,T) : α_i=α_j}|. Inspired by the consolidated fault adversary model <cit.>,
we introduce the security model of fault-resistance which characterizes
the capabilities of the adversary.
A fault-resistance model
is given by ζ(_e,_c,T,ℓ), where
* _e is the maximum number of fault events per clock cycle;
* _c is the maximum number of clock cycles in which fault events can occur;
* T⊆𝒯 specifies the allowed fault types; and
* ℓ∈{,,} defines the vulnerable gates: one option restricts faults to logic gates in combinational circuits, one to memory gates,
and one allows both logic and memory gates.
For instance, the fault-resistance model ζ(_e,k,𝒯,) gives the strongest capability
to the adversary for a large _e
allowing the adversary to inject faults
to all the gates simultaneously at any clock cycle (except for
those protected in the blacklist ). The fault-resistance model ζ(1,1,{τ_bf},)
only allows the adversary to choose one logic gate to
inject a bit-flip fault in one chosen clock cycle.
Formally, ζ(_e,_c,T,ℓ) defines the set ζ(_e,_c,T,ℓ) of possible fault vectors
that can be conducted by the adversary, i.e.,
ζ(_e,_c,T,ℓ) is
{(',_ℓ,T)|[ MaxEpC((',_ℓ,T))≤ n_e; ∧; ♯ Clk((',_ℓ,T))≤_c ]},
where all the fault types involved in the fault vectors are limited in T
and
_ℓ:=
{[ , if ℓ=;; ∪, if ℓ=;; ∪⋃_i∈ [k] V_i'∖ (I_i'∪ O_i'), if ℓ=.; ].
The circuit '
is fault-resistant w.r.t. a blacklist and a fault-resistance model ζ(_e,_c,T,ℓ),
denoted by
⟨',⟩ζ(_e,_c,T,ℓ),
if all the fault vectors (',,T)∈ζ(_e,_c,T,ℓ) are ineffective on the circuit '.
The fault-resistance verification problem is to determine if ⟨',⟩ζ(_e,_c,T,ℓ), and in particular,
if ⟨',⟩ζ(_e,_c,𝒯,).
If ⟨',⟩ζ(_e,_c,𝒯,), then ⟨',⟩ζ(_e,_c,T,ℓ)
for any T⊆𝒯 and any ℓ∈{,,}.
The problem of determining whether a circuit ' is not fault-resistant is
NP-complete.
The NP upper bound is relatively easy.
The NP-hardness is proved by reducing from the SAT problem. For a detailed proof, we refer readers to the appendix.
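The NP membership also suggests the practical route taken in this paper: pose the existence of an effective fault vector as a satisfiability query and let a CDCL solver search for it. The fragment below is a deliberately compressed, illustrative sketch assuming the PySAT package (python-sat): the golden gate y = a ∧ b, its faulty copy y' = y ⊕ c with control variable c, and the miter m = y ⊕ y' are encoded by hand, whereas the actual encoding, including cardinality constraints that bound the number of asserted control variables, is developed later in the paper.

from pysat.solvers import Glucose3   # assumed dependency: python-sat

a, b, y, c, yf, m = 1, 2, 3, 4, 5, 6
clauses = [
    [-a, -b, y], [a, -y], [b, -y],                          # y  <-> a AND b
    [-yf, y, c], [-yf, -y, -c], [yf, -y, c], [yf, y, -c],   # yf <-> y XOR c (controlled flip)
    [-m, y, yf], [-m, -y, -yf], [m, -y, yf], [m, y, -yf],   # m  <-> y XOR yf (miter)
    [m],                                                    # ask for a disagreement
]
with Glucose3(bootstrap_with=clauses) as solver:
    if solver.solve():
        print("effective fault found, witness:", solver.get_model())
    else:
        print("fault-resistant under this fault model")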
Note that we focus on transient fault events that have a dynamic nature
and become inactive after certain periods or changes in the circuit.
There are persistent and permanent fault events that have a static nature and will remain active for several clock cycles or even the entire execution.
As mentioned by <cit.>,
it suffices to consider only transient fault events when modeling fault events,
as a persistent or permanent fault event can be
modeled as a repetitive transient fault event.
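As a small illustration of this remark, the Python sketch below (gate and type names are placeholders) expands a permanent fault into one transient fault event per clock cycle; a persistent fault would simply use a bounded range of cycles instead.

def expand_permanent_fault(gate, tau, first_cycle, last_cycle):
    # one transient fault event per clock cycle models a permanent fault on `gate`
    return [(cycle, gate, tau) for cycle in range(first_cycle, last_cycle + 1)]

print(expand_permanent_fault("s7", "set", 2, 4))
# [(2, 's7', 'set'), (3, 's7', 'set'), (4, 's7', 'set')]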
§.§ An Illustrating Example
Consider the S-box used in the cipher RECTANGLE <cit.>, which is a 4-bit to 4-bit mapping S:^4→^4 given in Table <ref> (the top two rows).
It can be implemented in a combinational circuit as shown in Fig. <ref> (grey-area).
It has four 1-bit inputs { a,b,c, d} denoting the binary representation of the 4-bit input x⃗,
and four 1-bit outputs { w,x,y,z} denoting the binary representation of the 4-bit output S(x⃗), where
a and w are the most significant bits.
The values of the inputs a,b,c and d depend upon the secret key.
If a fault with fault type τ is injected on the gate s7 (i.e., the gate whose output is s7),
its function ( s7) is changed from ∧ to τ(∧).
As highlighted in red color in Fig. <ref>,
the effect of this fault will be propagated to the output w.
We denote by S[ s7,τ] the faulty S-box, given in Table <ref> for each τ∈𝒯,
where the faulty output is highlighted in bold.
Since the distribution of the XOR-difference S[ s7,τ](x⃗)⊕ S(x⃗) is biased,
the adversary can narrow down the solutions for x⃗ according to the value of S[ s7,τ](x⃗)⊕ S(x⃗) which is known to
the adversary.
Finally, the adversary solves x⃗ uniquely,
based on which a round key can be obtained (Details refer to <cit.>).
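The narrowing step can be illustrated by the following Python sketch; the S-box entries and the set of inputs for which the fault propagates to w are placeholders chosen for illustration, not the actual values of Table <ref>.

S = [0x6, 0x5, 0xC, 0xA, 0x1, 0xE, 0x7, 0x9,
     0xB, 0x0, 0x3, 0xD, 0x8, 0xF, 0x4, 0x2]          # illustrative S-box table
AFFECTED = {0x1, 0x4, 0x6, 0x7, 0x8, 0xA, 0xB, 0xD}    # assumed inputs where the fault reaches w
S_FAULT = [S[x] ^ (0x8 if x in AFFECTED else 0x0) for x in range(16)]

def candidates(observed_diff):
    # all inputs x consistent with the observed XOR-difference S_FAULT(x) ^ S(x)
    return {x for x in range(16) if S_FAULT[x] ^ S[x] == observed_diff}

# a difference of 0x8 restricts x to AFFECTED, a difference of 0x0 to its complement;
# combining observations for several fault types shrinks the set until x is unique
print(sorted(candidates(0x8)))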
To thwart single-bit fault injection attacks, one may adopt a single-bit parity protection mechanism <cit.>, as shown in
Fig. <ref>. The sub-circuit in the blue-area is a redundancy part which computes the Hamming weight of
the output of the S-box from the input but independent on the sub-circuit in the grey-area,
i.e., p6.
The sub-circuit in the yellow-area checks the parity of the Hamming weights of S(x⃗) computed in two independent sub-circuits, i.e.,
flag= p6⊕ w⊕ x⊕ y⊕ z.
If no faults occur, flag is .
Re-consider the fault injected on the gate s7.
We can see that either flag becomes , i.e.,
this fault injection can be successfully detected, or the outputs of S[ s7,τ](x⃗) and S(x⃗) are the same,
thus the fault injection is ineffective.
However, the entire circuit is still vulnerable against single-bit fault injection attacks, as
one single-bit fault injection can yield an even number of faulty output bits so that the Hamming weight of the faulty output
remains the same. For instance, the fault injection on the gate s9 affects both the outputs x and z.
As shown in Table <ref>, the fault injection cannot be successfully detected if one of the following holds:
* τ=τ_s∧x⃗∈{0000,1001,1110,1111},
* τ=τ_r∧x⃗∈{0001,0110,0111,1000,1111},
* τ=τ_bf∧x⃗∈{0000,0001,0110,0111,1000,1001,1110,1111}.
It is fault-resistant against single-bit fault injection attacks when the blacklist includes
all the logic gates in the parity checking (i.e., yellow-area) and all the logic gates in the S-box (i.e., grey-area) whose out-degree is larger than 2 (highlighted in blue color in Fig. <ref>).
This issue could also be avoided by leveraging the independence property defined by <cit.>, which ensures that
an n-bit fault injection attack affects at most n output bits, at the cost of a larger circuit.
The revised implementation is given in the appendix
which is fault-resistant
against single-bit fault injection attacks when the blacklist includes only the logic gates in the parity checking.
Following the definition of fault injection attacks, we abstract them into a fault model. Jan Richter-Brockmann et al. introduced such a fault model<cit.> as follows:
Given the fault number n, the fault type t and the fault location l, the definition of the fault model is ζ(n,t,l).
The parameters are defined as follows:
Fault Type. The fault types can be expressed as 𝒯 = {τ_s,τ_r,τ_sr,τ_bf,τ_fm}. Typically, the fault types include τ_s:=set, τ_r:=reset, τ_sr:=set-reset, τ_bf:=bit-flip, and τ_fm for a custom, user-defined fault type. Set and reset correspond to stuck-at faults that force an individual signal to a constant value (logical 1 and logical 0, respectively, for a binary gate). Bit-flip inverts a signal with respect to its fault-free value (e.g., it changes signal 1 to signal 0).
Fault Location. The fault location indicates where a fault is injected. The possible areas are combinational gates only, sequential gates only, or both; we denote them as the set ℒ={ c, s, cs}. In this paper, we only consider ℒ={cs}.
Such a fault model limits the attacker in the spatial dimension. In addition, the attacker is also limited in the temporal dimension.
For the temporal dimension, we use the variate to represent it, which can be further divided into 𝑢𝑛𝑖𝑣𝑎𝑟𝑖𝑎𝑡𝑒 and 𝑚𝑢𝑙𝑡𝑖𝑣𝑎𝑟𝑖𝑎𝑡𝑒. 𝑈𝑛𝑖𝑣𝑎𝑟𝑖𝑎𝑡𝑒 only considers the injection of faults in a single clock cycle, while for 𝑚𝑢𝑙𝑡𝑖𝑣𝑎𝑟𝑖𝑎𝑡𝑒, faults can occur in multiple clock cycles.
Taken together, the total number of faults an attacker can inject is determined by both the temporal and spatial dimensions. In the spatial dimension, we use n to denote the number of faults injected per clock cycle, and in the temporal dimension, we use v to denote the number of variates. Thus, the total number of faults that an attacker can inject into a single circuit is n× v.
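As a small illustration, the Python sketch below models the main fault types as transformations of a binary gate function; the stuck-at convention (set forces logical 1, reset forces logical 0) is our assumption here, and τ_fm is omitted since it is user-defined.

def tau_set(gate_fn):      return lambda a, b: True               # stuck-at-1 (assumed convention)
def tau_reset(gate_fn):    return lambda a, b: False              # stuck-at-0
def tau_bitflip(gate_fn):  return lambda a, b: not gate_fn(a, b)  # invert the fault-free output

AND = lambda a, b: a and b
for name, tau in [("set", tau_set), ("reset", tau_reset), ("bit-flip", tau_bitflip)]:
    faulty = tau(AND)
    print(name, [faulty(a, b) for a in (False, True) for b in (False, True)])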
§.§ Countermeasures of Fault Injection attack
The current mainstream research directions are detection-based countermeasures and correction-based countermeasures.
Detection-based countermeasures are security techniques that utilize redundancy in different aspects such as temporal, spatial, area, and information to detect faults or potential attacks. Tal Malkin et al. proposed two methods for detection-based countermeasures<cit.>. The first method involves duplicating the cryptographic algorithm and comparing the outputs. The second method involves recomputing the output and comparing the results of the two computations. Both methods utilize either spatial redundancy or temporal redundancy to detect errors.
Anita Aghaie et al. proposed a code-based Concurrent Error Detection scheme that takes into account the propagation of faults<cit.>. This method ensures that faults are detected at any location, including data paths, finite state machines, control signals, and at any clock cycle induced in the design.
For certain types of attacks, such as Statistical Ineffective Fault Attacks (SIFA)<cit.>, detection-based countermeasures are not sufficient. In these cases, correction-based countermeasures are necessary to provide effective protection. Joan Daemen et al.<cit.> proposed a countermeasure against SIFA that utilizes error-correcting codes to correct faults at the non-linear layer. Additionally, Aein Rezaei Shahmirzadi et al. proposed an error-correcting code-based scheme that takes into account fault propagation and extends the capabilities of detection-based methods<cit.>.
With such countermeasures, we can further distinguish faults by their effects. We usually divide them into 𝑒𝑓𝑓𝑒𝑐𝑡𝑖𝑣𝑒, 𝑖𝑛𝑒𝑓𝑓𝑒𝑐𝑡𝑖𝑣𝑒 and 𝑑𝑒𝑡𝑒𝑐𝑡𝑒𝑑 faults. An 𝑖𝑛𝑒𝑓𝑓𝑒𝑐𝑡𝑖𝑣𝑒 fault is one that does not cause an observable misbehavior. An 𝑒𝑓𝑓𝑒𝑐𝑡𝑖𝑣𝑒 fault is one that causes a significant misbehavior and is not detected. A 𝑑𝑒𝑡𝑒𝑐𝑡𝑒𝑑 fault is one that causes a significant misbehavior but is detected by the circuit.
§.§ Example
Here, we give an example circuit of a RECTANGLE S-box implementation<cit.>, protected by a single-bit parity.
The parity relies on spatial redundancy; the model is shown in Fig. <ref>:
Here, S is the S-box function that receives the 4-bit input S_in=<a, b, c, d> and produces the 4-bit output S_out=<w, x, y, z>. R is the single-bit parity redundancy that has the same input as S and produces a 1-bit output representing the parity of ones in S_out=<w, x, y, z>. The error flag indicates whether the parity of the S function's output is consistent with the output of the redundancy R, where error flag=0 means the results are consistent and error flag=1 means they are inconsistent. We then provide the code of the abstract model of the circuit in Example <ref>.
Alongside the original circuit, we provide the circuit after injecting a fault. The red parts in Fig. <ref> show the fault injection. We inject the fault at the gate g_s7. After the fault propagates, the output w is affected; it is then detected by the countermeasure, and the error flag outputs 1.
§.§ Definition of Security
Based on the characteristics of this attack and its countermeasures, we propose a definition of the corresponding security. For a given abstract model 𝒟, the model injected with faults is denoted by 𝒟_fault, while the fault-free model is denoted by 𝒟_golden. The outputs of the model are represented as the set 𝒪(𝒟). Suppose the number of outputs is m. Since we do not change the number of gates, the sizes of 𝒪(𝒟_fault) and 𝒪(𝒟_golden) are the same. Then the definition of security is as follows:
Given an abstract model 𝒟, the model with fault injection is 𝒟_fault and the fault-free model is 𝒟_golden. The circuit with the countermeasure is secure iff
(⋁_i=1^m(𝒪(𝒟_golden)_i⊕𝒪(𝒟_fault)_i)) ∧ ( flag)
is UNSATISFIABLE, where 𝑓𝑙𝑎𝑔 is calculated by the countermeasure, as shown in Fig. <ref>.
Through Definition <ref>, we can divide the results into four cases (a small classification sketch is given after this list):
* 𝒪(𝒟_golden) is identical to 𝒪(𝒟_fault) and flag is 0. In this case, either no fault was injected or the fault is 𝑖𝑛𝑒𝑓𝑓𝑒𝑐𝑡𝑖𝑣𝑒.
* 𝒪(𝒟_golden) is identical to 𝒪(𝒟_fault) and flag is 1. In this case, the fault is 𝑖𝑛𝑒𝑓𝑓𝑒𝑐𝑡𝑖𝑣𝑒.
* 𝒪(𝒟_golden) differs from 𝒪(𝒟_fault) and flag is 1. In this case, the fault is 𝑑𝑒𝑡𝑒𝑐𝑡𝑒𝑑.
* 𝒪(𝒟_golden) differs from 𝒪(𝒟_fault) and flag is 0. In this case, the fault is 𝑒𝑓𝑓𝑒𝑐𝑡𝑖𝑣𝑒.
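The following Python sketch (names are ours) classifies a single run into these cases from the golden outputs, the faulty outputs, and the error flag.

def classify(golden_out, faulty_out, flag):
    # golden_out and faulty_out are tuples of output bits, flag is the error flag bit
    if golden_out == faulty_out:
        return "ineffective"                      # cases 1 and 2
    return "detected" if flag else "effective"    # cases 3 and 4

print(classify((1, 0, 1, 1), (1, 0, 1, 1), 0))    # ineffective
print(classify((1, 0, 1, 1), (1, 1, 1, 1), 1))    # detected
print(classify((1, 0, 1, 1), (1, 1, 1, 1), 0))    # effective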
§ SAT-BASED FAULT-RESISTANCE VERIFICATION
In this section, we propose an SAT-based countermeasure verification approach,
which reduces the fault-resistance verification problems to SAT solving.
§.§ Overview
The overview of our approach is depicted in Fig. <ref>.
Given a circuit (without any countermeasures), a protected circuit ' (i.e., with a countermeasure), a blacklist of gates on which faults cannot be injected, and a fault-resistance model ζ(n_e,n_c,T,ℓ),
outputs a report on whether the protected circuit ' is fault-resistant or not.
consists of three key components: vulnerable gate reduction, fault encoding and SAT encoding.
The vulnerable gate reduction safely reduces the number of vulnerable gates,
thereby reducing the size of the resulting Boolean formulas and improving efficiency.
The fault encoding replaces each vulnerable gate with a gadget (i.e., sub-circuit) with additional
primary inputs controlling if a fault is injected and selecting a fault type.
The SAT encoding is an extension of the one for checking functional equivalence, where
(i) the maximum number of fault events per clock cycle
and the maximum number of clock cycles in which fault events can occur are both expressed by constraints over control inputs,
and (ii) a constraint on the error flag output is added.
Below,
we present the details of our fault encoding method,
SAT encoding method
and vulnerable gate reduction.
§.§ Fault Encoding
Gadgets.
To encode a fault injection on a gate β with fault type τ∈𝒯 and (β)=∙,
we define a gadget G_β,τ shown in Fig. <ref>. Note that τ(β) denotes the faulty counterpart of the gate β,
i.e., (τ(β))=τ(∙).
Indeed, the gadget G_β,τ for a binary gate β defines
a Boolean formula G_β,τ with
G_β,τ(in_1,in_2,c)= (c ? (in_1 ♢ in_2) : (in_1 ∙ in_2) ),
where ♢=τ(∙), and c is a control input indicating whether a fault is injected or not.
Namely, G_β,τ is equivalent to the faulty gate τ(β) if c=, otherwise G_β,τ is equivalent to the original gate β.
Note that the incoming edges of τ(β) should be omitted if τ∈{τ_s,τ_r}.
The gadget G_β,τ for a unary gate β is defined similarly as
G_β,τ(in,c)= (c ? (♢ in) : (∙ in) ).
We now generalize the gadget definition to accommodate different fault types 𝒯={τ_s,τ_r,τ_bf}.
Besides a control input,
selection inputs are introduced to choose fault types.
The gadget G_β,𝒯 for a binary logic gate β defines
a Boolean formula G_a,𝒯 such that G_β,𝒯(in_1,in_2,c,b_1,b_2) is
c ? ( b_1 ? (b_2 ? (in_1♢ in_2) : (in_1 † in_2)) : (in_1 in_2) ) : (in_1 ∙ in_2),
where ♢=τ_s(∙), †=τ_r(∙) and =τ_bf(∙).
Intuitively,
* c= means that no fault is injected, i.e., G_β,𝒯 is equivalent to the original logic gate β;
* c= means that a fault is injected. Moreover, the selection inputs b_1,b_2 are defined as:
* if b_1=b_2=, then G_β,𝒯 is equivalent to the faulty logic gate τ_s(β);
* if b_1= and b_2=, then G_β,𝒯 is equivalent to the faulty logic gate τ_r(β);
* if b_1=, then G_β,𝒯 is equivalent to the faulty logic gate τ_bf(β);
The gadget G_β,𝒯 for a unary gate β can be defined as
the Boolean formula G_β,𝒯 such that G_β,𝒯(in,c,b_1,b_2)
is
c ? ( b_1 ? (b_2 ? (♢ in) : († in)) : ( in) ) : (∙ in).
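The gadget for a concrete binary gate can be written down directly as a nested if-then-else; the z3py sketch below does this for an AND gate (variable names are ours, and the constant values used for τ_s and τ_r follow the assumed stuck-at-1/stuck-at-0 semantics).

from z3 import Bools, BoolVal, And, Not, If, simplify

in1, in2, c, b1, b2 = Bools("in1 in2 c b1 b2")   # data, control and selection inputs

original = And(in1, in2)        # the fault-free AND gate
set_f    = BoolVal(True)        # tau_s(AND): assumed stuck-at-1
reset_f  = BoolVal(False)       # tau_r(AND): assumed stuck-at-0
bitflip  = Not(And(in1, in2))   # tau_bf(AND)

# G_{beta,T}(in1, in2, c, b1, b2): c selects faulty vs. original, (b1, b2) select the fault type
gadget = If(c, If(b1, If(b2, set_f, reset_f), bitflip), original)
print(simplify(gadget))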
For a subset of fault types T={τ_1,τ_2}⊂𝒯,
the gadget G_β,T for a binary or unary gate β can be defined accordingly such that
G_β,T(in_1,in_2,c,b) is
c ? ( b ? (in_1 ♢ in_2) : (in_1 † in_2) ) : (in_1 ∙ in_2)
and
G_β,T(in,c,b) is
c ? ( b ? (♢ in) : († in) ) : (∙ in),
where ♢=τ_1(∙) and †=τ_2(∙).
We remark that the faulty counterpart τ(β) of a register β is implemented by adding a logic gate so that no additional registers are introduced.
More specifically, τ_s(β) (resp. τ_r(β)) is a constant logic gate that always outputs the signal (resp. ),
and τ_bf(β) is a logic gate with the incoming edge from the output of the register β.
Conditionally-controlled faulty circuits.
From ',
we construct
a conditionally-controlled faulty circuit ”, where each vulnerable gate
is replaced by a gadget defined above.
Fix a fault-resistance model ζ(n_e,n_c,T,ℓ).
Assume that the control input c and the set of selection inputs of each gadget G_β,T are distinct
and different from the ones used in the circuit '.
We define the conditionally-controlled faulty circuit ” w.r.t. and ζ(n_e,n_c,T,ℓ)
as
'[,ζ(n_e,n_c,T,ℓ)]:=(⊎',',', s⃗_0', ”),
where '=⋃_i∈ [k] I_i”
and ”={C_1”,⋯, C_k”}.
For every i∈ [k], the circuit C_i”=(V_i'⊎ V_i”,I_i'⊎ I_i”, O_i', E_i'⊎ E_i”,_i”) is obtained from the combinational circuit C_i' as follows.
[size=title,colback=white]
For every gate β∈ R_i-1'∪ V_i'∖ (I'_i∪ O'_i), if β∉_ℓ, then β is replaced by the gadget G_β,T,
the control and selection inputs of the gadget G_β,T are added into I_i”, the gates and edges of G_β,T are added into V_i” and E_i” respectively,
the mapping _i' is expanded to _i” accordingly.
Intuitively, a fault vector (',,T)∈ζ(n_e,n_c,T,ℓ)
is encoded as a sequence (b⃗_1,⋯,b⃗_k) of the primary inputs ' for controlling fault types such that (α,β,τ)∈(',,T)
iff the gadget G_β,T is equivalent to the faulty gate τ(β) under the inputs b⃗_α, i.e., the control input of G_β,T is and the selection inputs of G_β,T choose τ(β).
We note that if (α,β,τ)∉(',,T) for any τ∈ T,
then the gadget G_β,T is equivalent to the original gate β under the inputs b⃗_α, i.e., the control input of the gadget G_β,T is
the signal .
We say that the fault vector (',,T) and the sequence (b⃗_1,⋯,b⃗_k) of the primary inputs ' are compatible
if the sequence (b⃗_1,⋯,b⃗_k) encodes the fault vector (',,T).
Note that each sequence (b⃗_1,⋯,b⃗_k) of the primary inputs '
determines a unique compatible fault vector (',,T), but
each fault vector (',,T) determines a unique compatible sequence (b⃗_1,⋯,b⃗_k) of the primary inputs '
only if T⊂𝒯, because G_β,𝒯 is equivalent to the faulty logic gate τ_bf(β) if b_1= no matter the value of b_2.
Thus, we can get:
The number of gates of the circuit ” (i.e., '[,ζ(n_e,n_c,T,ℓ)]) is at most 6|T| times than that of the circuit ', and
the following statements hold:
* for each fault vector (',,T)∈ζ(n_e,n_c,T,ℓ), there exists a compatible sequence (b⃗_1,⋯,b⃗_k) of the primary inputs '
such that for each sequence (x⃗_1,⋯,x⃗_k) of primary inputs ,
'[(',,T)](x⃗_1,⋯,x⃗_k)=”((x⃗_1,b⃗_1),⋯,(x⃗_k,b⃗_k));
* for each sequence (b⃗_1,⋯,b⃗_k) of the primary inputs ', there exists a unique compatible fault vector (',,T)∈ζ(n_e,n_c,T,ℓ)
such that for each sequence (x⃗_1,⋯,x⃗_k) of primary inputs ,
'[(',,T)](x⃗_1,⋯,x⃗_k)=”((x⃗_1,b⃗_1),⋯,(x⃗_k,b⃗_k)).
Hereafter, for any sequence (b⃗_1,⋯,b⃗_k) of the primary inputs ', we denote by
♯ Clk(b⃗_1,⋯,b⃗_k) the number of clock cycles i such that at least one control input of b⃗_i is ,
and by MaxEpC(b⃗_1,⋯,b⃗_k) the maximum sum of the control inputs of b⃗_i per clock cycle i∈[k].
§.§ SAT Encoding
Recall that
⟨',⟩ζ(_e,_c,T,ℓ) iff all the fault vectors (',,T)∈ζ(_e,_c,T,ℓ) are ineffective,
i.e., for any sequence (x⃗_1,⋯,x⃗_k) of primary inputs,
either '(x⃗_1,⋯,x⃗_k)='[(',,T)](x⃗_1,⋯,x⃗_k)
or the fault is successfully detected by setting the error flag output o_ flag in time.
By Proposition <ref>, ⟨',⟩ζ(_e,_c,T,ℓ) iff
for any sequence ((x⃗_1,b⃗_1),⋯,(x⃗_k,b⃗_k)) of primary inputs ∪'
such that
* ♯ Clk(b⃗_1,⋯,b⃗_k)≤_c and MaxEpC(b⃗_1,⋯,b⃗_k)≤_e,
* and '(x⃗_1,⋯,x⃗_k)=”((x⃗_1,b⃗_1),⋯,(x⃗_k,b⃗_k))
or the fault is successfully detected by setting the error flag output o_ flag in time.
The above conditions can be reduced to the SAT problem by adapting the SAT encoding for equivalence checking <cit.>, with additional constraints
♯ Clk(b⃗_1,⋯,b⃗_k)≤_c and MaxEpC(b⃗_1,⋯,b⃗_k)≤_e.
Formally, the fault-resistance problem of the circuit ' can be formulated as:
[ ∀x⃗_1,⋯,x⃗_k∈^||. ∀b⃗_1∈^|I_1”|,⋯,∀b⃗_k∈^|I_k”|.; ∀ i∈[k]. ∀ o∈∖{o_ flag}.; (♯ Clk(b⃗_1,⋯,b⃗_k)≤_c∧ MaxEpC(b⃗_1,⋯,b⃗_k)≤_e); ⇒(ψ_i,o≠ψ_i,o”⇒∃ j∈[i]. ψ_j,o_ flag”) ]
where
* ψ_i,o is a Boolean formula that is satisfiable under the assignment
(x⃗_1,⋯, x⃗_i) iff _i_↓ o(x⃗_1,⋯,x⃗_i)=.
* ψ_i,o” denotes a Boolean formula that is satisfiable under the assignment
((x⃗_1,b⃗_1),⋯, (x⃗_i,b⃗_i)) iff _i”_↓ o((x⃗_1,b⃗_1),⋯, (x⃗_i,b⃗_i))=.
Intuitively, the above formula is valid iff for any sequence ((x⃗_1,b⃗_1),⋯,(x⃗_k,b⃗_k)) of primary inputs ∪'
such that ♯ Clk(b⃗_1,⋯,b⃗_k)≤_c and MaxEpC(b⃗_1,⋯,b⃗_k)≤_e,
if some primary output o (except for the error flag o_ flag) differs at some clock cycle i, then the error flag o_ flag should be
at some clock cycle j with j≤ i, i.e., the fault injection is detected in time.
By negating the above formula, the fault-resistance verification problem is reduced to the satisfiability of the Boolean
formula (Ψ_fr)
[ Ψ_fr:= ( [ Ψ__c∧Ψ__e∧⋁_i∈[k]⋁_o∈∖{o_ flag}; (ψ_i,o≠ψ_i,o”∧⋀_j∈[i]ψ_j,o_ flag”), ]) ] where
[ Ψ__c:=(⋀_i∈[k] (d_i⇔⋁b⃗_i,ctrl) )∧∑_i∈ [k] d_i≤_c,; Ψ__e:=⋀_i∈[k] (∑b⃗_i,ctrl≤_e). ]
and for each i∈[k], b⃗_i,ctrl denotes the set of control inputs in the primary inputs b⃗_i.
Intuitively, Ψ__c encodes the constraint ♯ Clk(b⃗_1,⋯,b⃗_k)≤_c, where for each i∈[k], d_i is a fresh
Boolean variable such that d_i is iff some control input in b⃗_i,ctrl is . Thus, ∑_i∈ [k] d_i
is the total number of clock cycles during which at least one fault is injected on some gate.
Ψ__e encodes the constraint MaxEpC(b⃗_1,⋯,b⃗_k)≤_e, where for each i∈[k], ∑b⃗_i,ctrl is the total number of faults injected at the i-th clock cycle.
Though cardinality constraints of the form ∑_i∈[n] b_i≤ k are used in both Ψ__c and Ψ__e,
they can be efficiently translated into Boolean formulas in polynomial time, and the size of the resulting Boolean formula is also
polynomial in the size of the cardinality constraint <cit.>.
In our implementation, we use the sorting network implemented in Z3 <cit.> for translating cardinality constraints into Boolean formulas.
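For illustration, the z3py sketch below (variable names are ours) builds Ψ__c and Ψ__e for a small instance using the built-in cardinality primitive AtMost; in the actual implementation the constraints are translated through the sorting network mentioned above.

from z3 import Bool, Or, And, AtMost, Solver

k, q, n_c, n_e = 3, 4, 1, 2                      # clock cycles, control inputs per cycle, bounds
ctrl = [[Bool(f"c_{i}_{j}") for j in range(q)] for i in range(k)]
d = [Bool(f"d_{i}") for i in range(k)]           # d_i <=> some fault is injected in cycle i

psi_nc = And(*[d[i] == Or(ctrl[i]) for i in range(k)], AtMost(*d, n_c))
psi_ne = And(*[AtMost(*ctrl[i], n_e) for i in range(k)])

s = Solver()
s.add(psi_nc, psi_ne)
print(s.check())    # sat: the bounds alone are satisfiable, e.g., by injecting no fault at all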
⟨',⟩ζ(_e,_c,T,ℓ) iff
the formula Ψ_fr is unsatisfiable, where the size of Ψ_fr is polynomial in the size of the circuit '.
Note that if the circuit is not available, the circuit ' can be used
for building Ψ_fr though the size of Ψ_fr may increase.
Consider the fault-resistance model ζ(1,1,𝒯,). Suppose S is the circuit in Fig. <ref> (grey-area),
S' is the entire circuit in Fig. <ref>, and the blacklist contains all the logic gates in the redundancy and parity checking parts.
The Boolean formula Ψ_fr of the example is:
Ψ__c∧Ψ__e∧(⋁_o∈{ w,x,y,z}ψ_1,o≠ψ_1,o”) ∧ψ_1, flag”∧⋀_i=1^12ϕ_i
where
Ψ__c:=(d_1⇔⋁_i=1^12 c_i)∧ d_1≤ 1, Ψ__e:= (∑_i=1^12 c_i≤ 1),
ψ_1, x:= ((b ⊕ c) ∨ z) ⊕ (( c ∨ a) ⊕ d),
ψ_1, y:= b ⊕ ( c ∨ a) ⊕ d, ψ_1, z:= b ⊕ a ⊕ ( c ∧ d),
ψ_1, w:= (b ⊕ c) ⊕ ((b ⊕ a) ∧ (( c ∨ a) ⊕ d)),
ψ_1, w”:= g_9, ψ_1, y”:= g_10, ψ_1, x”:= g_11, ψ_1, z”:= g_12,
ϕ_1:=g_1⇔ G_⊕,𝒯”(b,c,c_1,b_1,1,b_1,2),
ϕ_2:=g_2⇔ G_⊕,𝒯”(b,a,c_2,b_2,1,b_2,2),
ϕ_3:=g_3⇔ G_,𝒯”(c,c_3,b_3,1,b_3,2),
ϕ_4:=g_4⇔ G_∨,𝒯”(g_3,a,c_4,b_4,1,b_4,2),
ϕ_5:=g_5⇔ G_⊕,𝒯”(g_4,d,c_5,b_5,1,b_5,2),
ϕ_6:=g_6⇔ G_∧,𝒯”(g_2,g_5,c_6,b_6,1,b_6,2),
ϕ_7:=g_7⇔ G_∨,𝒯”(g_1,z,c_7,b_7,1,b_7,2),
ϕ_8:=g_8⇔ G_∧,𝒯”(g_3,d,c_8,b_8,1,b_8,2),
ϕ_9:=g_9 ⇔ G_⊕,𝒯”(g_1,g_6,c_9,b_9,1,b_9,2),
ϕ_10:=g_10⇔ G_⊕,𝒯”(b,g_5,c_10,b_10,1,b_10,2),
ϕ_11:=g_11⇔ G_⊕,𝒯”(g_7,g_5,c_11,b_11,1,b_11,2),
ϕ_12:=g_12⇔ G_⊕,𝒯”(g_2,g_8,c_12,b_12,1,b_12,2),
ψ_1, flag”:=⊕_i=9^12 g_i⊕( (a ∧ (c d)) ∨(((a ∨ c) d) ∧ b)).
Note that g_i for each i∈[12] is a fresh Boolean variable as a shortcut of a common gadget via ϕ_i,
b_i,1 and b_i,2 (resp. c_i) for each i∈[12] are fresh Boolean variables denoting the selection inputs (resp. control input) of the corresponding gadget,
Ψ__c can be removed from Ψ_fr since it always holds, and
Ψ__e can be efficiently translated into an equivalent Boolean formula.
We can show that Ψ_fr is satisfiable, thus S' is not fault-resistant w.r.t. and ζ(1,1,𝒯,).
Note that in practice, only contains all the logic gates in the parity checking. For the sake of simplicity, also contains all the logic gates in the redundancy part in this example.
§.§ Vulnerable Gate Reduction
Consider a fault event (α,β,τ) to the circuit '.
For any fixed sequence of primary inputs (x⃗_1,⋯,x⃗_k),
if the output signal of the gate β
does not change, then the fault event (α,β,τ) will not affect
the primary outputs, thus can be omitted.
If it changes, then the effect of (α,β,τ) must be propagated to the successor gates.
Assume the output of the gate β is only connected to one vulnerable logic gate β'.
If the output signal of β' does not change,
then the effect of the fault event (α,β,τ) is stopped at the gate β', thus (α,β,τ_bf) can be omitted as well.
If it changes, it is flipped either from to or from to ; the same effect can be achieved by applying
the fault event (α,β',τ_bf), or the fault event (α,β',τ_s) if it is flipped from to , or the fault event (α,β',τ_r) if it is flipped from to .
As a result, it suffices to consider fault injections on the gate β' instead of both β and β'
when τ_bf∈ T or {τ_s,τ_r}⊆ T,
which reduces the number of vulnerable gates.
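The set of gates that can be dropped in this way is computable by a simple fan-out analysis; the Python sketch below shows the idea on a toy fan-out map (the map and gate names are assumptions, not the exact structure of Fig. <ref>).

def reducible_gates(fanout, blacklist, logic_gates):
    # gates whose output feeds exactly one non-blacklisted logic gate: fault injections
    # on them are subsumed by fault injections on that unique successor
    return {g for g, succs in fanout.items()
            if len(succs) == 1 and next(iter(succs)) in logic_gates - blacklist}

fanout = {"s4": {"s6"}, "s5": {"s6"}, "s7": {"w"}, "s3": {"s4", "s8"}}   # toy fan-out map
print(reducible_gates(fanout, blacklist={"flag"}, logic_gates={"s4", "s5", "s6", "s8", "w"}))
# contains s4, s5 and s7; s3 is kept because it has two successors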
Consider a fault-resistance model ζ(_e,_c,T,ℓ) such that τ_bf∈ T or {τ_s,τ_r}⊆ T, and ℓ∈{,}.
Let _1(',,T)=(',,T)∪{(α,β,τ)}∈ζ(_e,_c,T,ℓ) be an effective fault vector on the circuit '.
If the output of the gate β is only connected to one logic gate β'∉,
then
there exists a fault vector '(',,T)⊆(',,T)∪{(α,β',τ')} for some τ'∈ T such
that '(',,T) is also effective on the circuit '.
Moreover, if (',,T)=∅, then {(α,β',τ')} for some τ'∈ T is effective on the circuit '.
Theorem <ref> is proved by distinguishing whether the output signal of the gate β' differs in the circuits
' and '[_1(',,T)] under the same sequence of primary inputs (x⃗_1,⋯,x⃗_k).
If it is the same in the circuits
' and '[_1(',,T)] under the same sequence of primary inputs (x⃗_1,⋯,x⃗_k), then
(α,β,τ) can be removed from _1(',,T), otherwise the same effect of (α,β,τ) can be achieved by (α,β',τ') for some τ'∈ T.
For a detailed proof, we refer readers to the appendix.
Let ' be the set of gates β such that the output of the gate β is only connected to one logic gate β'∉,
which can be computed by a graph traversal of the circuit '.
By applying Theorem <ref>, ' can be safely merged with the blacklist while
no protections are required for those gates.
Given a fault-resistance model ζ(_e,_c,T,ℓ) such that τ_bf∈ T or {τ_s,τ_r}⊆ T, and ℓ∈{,},
if ⟨',∪'⟩ζ(_e,_c,T,ℓ), then ⟨',⟩ζ(_e,_c,T,ℓ).
Consider the example in Section <ref> and the fault-resistance model ζ(1,1,𝒯,).
All the gates in the redundancy part except for
the gate p6 (i.e., the gate whose output is p6)
can be added into ', as the effect of an effective fault injection on any of those gates
can be achieved by at most one bit-flip fault injection on the gate p6. Note that the gate p6 itself cannot be added into
' because the gate flag is in .
Similarly, the gates s4,s5,s7,s8 in the grey-area can be added into ',
but the other gates cannot as their outputs are connected to more than one gate or
some outputs of { w,x,y,z}.
Now, the fault-resistance verification problem w.r.t the fault-resistance model ζ(1,1,𝒯,)
and the blacklist is reduced to SAT solving of
the Boolean formula (Ψ_fr”)
Ψ_fr”:= Ψ__c'∧Ψ__e'∧(⋁_o∈{ w,x,y,z}ψ_1,o≠ψ_1,o”) ∧ψ_1, flag”∧⋀_i∈ Zϕ_i,
where Z={1,2,3,6,9,10,11,12}, Ψ__c':=(⋀_i∈ Z (d_i⇔⋁b⃗_i,ctrl) )∧∑_i∈ Z d_i≤_c
and Ψ__e':= (∑_i∈ Z c_i≤ 1).
Although there are many verification tools, they all have significant limitations. Victor Arribas et al. proposed VerFI as a fault diagnosis tool. The tool works directly on gate-level circuits and can analyze countermeasures based on detection, correction, and infection. To use the tool, the user needs to provide the fault types, the fault locations (which gates to inject), and the ability of the attacker (the number of faults). Meanwhile, the user also needs to provide test vectors as inputs to the circuit. However, this tool has an obvious limitation: since the circuit inputs are provided by the user, faults may be misjudged due to poorly chosen inputs, e.g., a fault that leads to an erroneous output may be judged as ineffective. In addition, since the fault information is also provided by the user, the user may miss some fault combinations during verification, resulting in false positives.
To solve the problems of VerFI, Jan Richter-Brockmann et al. provide a framework named FIVER. This framework translates the gate-level netlist into Binary Decision Diagrams (BDDs) and can thus validate all input and fault combinations automatically, which avoids false positives for faults. In addition, because the framework introduces a general fault model, the user can complete the verification without providing detailed fault information, such as the gates into which faults are injected. This considerably lowers the barrier to using the framework.
However, while the framework solves some of these problems, it also introduces new ones. Since an abstract model of the faulty circuit must be built after each fault is injected, FIVER needs to construct a BDD for every fault combination. As a result, the efficiency of the algorithm is not satisfactory. Inspired by this, we propose a framework based on a SAT solver to avoid the time consumption caused by repeated BDD constructions.
In this section, we introduce the SAT-based verification. In general, the verification is divided into three steps:
* Encoding the faults into the abstract model according to the fault model ζ (n, τ, l) to obtain the model 𝒟_fault.
* Constructing the formula to be solved according to the definition of security.
* Encoding the definition of security.
We introduce the three steps in Section <ref> and give the algorithm in Section <ref>.
§.§ The Encoding
Let Φ denote the SAT formula to be created for checking the security of the circuit. For a given digital circuit 𝐂 with a fault model ζ (n, τ , l) and the variate v, Φ can be expressed as follows:
Φ = Ψ _f-in-m∧Ψ _secu
where the subformulas are defined as follows:
- Encoding the fault into the model (Ψ _f-in-m). This subformula encodes the fault into the model. Suppose the number of clock cycles in the circuit is r and the number of gates in every clock cycle is q. For each gate g_i in the abstract model 𝒟, assume its output is p_out^i. If gate g_i is injected with a fault, we express the output as p_out_τ^i. We give it a condition c_gi that determines whether or not a fault injection occurs: if c_gi is true, then the operation of gate g_i is τ, else the operation remains unchanged, where τ∈𝒯. The subformula can then be expressed as ψ_enc=⋀_i=1^q(p_out^i=c_gi?(p_in1^i τ p_in2^i):(p_in1^i p_op^i p_in2^i)). The next step is encoding the positions at which we select to inject faults: we need to select the clock cycles and the gates in every clock cycle. The location ℒ does not need to be selected for reasons explained in Section <ref>.
For the selection of clock cycles, we give each clock cycle i a condition c_ci representing whether this clock cycle is selected: if it is true, we inject faults in this clock cycle, otherwise we do not. The variate v denotes the maximal number of clock cycles in which faults can be injected. The subformula for selecting clock cycles can thus be represented as ψ_v=(∑_i=0^r c_ci≤ v).
The selection of gates in each clock cycle is similar. n is the maximal number of gates into which faults can be injected. Thus the subformula for selecting gates in a clock cycle can be represented as ψ_n = (∑_j=0^q c_gj≤ n). The subformula Ψ_f-in-m can then be represented as Ψ_f-in-m=ψ_enc∧ψ_v ∧ψ_n = (⋀_i=1^q(p_out^i=c_gi?(p_in1^i τ p_in2^i):(p_in1^i p_op^i p_in2^i)))∧((∑_i=0^r c_ci≤ v) ∧ (∑_j=0^q c_gj≤ n))
- Security (Ψ _secu). This subformula encodes the security definition given in Section <ref>. It can be represented as Ψ_secu=(⋁_i=1^m(𝒪(𝒟_golden)_i⊕𝒪(𝒟_fault)_i)) ∧ ( flag)
Overall, Φ can be represented as:
Φ = (⋀_i=1^q(p_out^i=(c_gi?(p_in1^i τ p_in2^i):(p_in1^i p_op^i p_in2^i))))∧((∑_i=0^r c_ci≤ v)
∧ (∑_j=0^q c_gj≤ n))∧((⋁_i=1^m(𝒪(𝒟_golden)_i⊕𝒪(𝒟_fault)_i)) ∧ ( flag))
Next, we give Equation <ref> to show the encoding formula Φ for the abstract models 𝒟_fault and 𝒟_golden, based on Example <ref>. The fault model is ζ (n, τ, l) with τ∈𝒯.
Φ=
(
[ s1 = c_g1 ? (b τ c): (b ⊕ c) ∧; s2 = c_g2 ? (τ c):( c) ∧; s3 = c_g3 ? (b τ a):(b ⊕ a) ∧; s4 = c_g4 ? (s2 τ d):(s2 ∧ d) ∧; s5 = c_g5 ? (s2 τ a):(s2 ∨ a) ∧; z = c_g6 ? (s3 τ s4):(s3 ⊕ s4) ∧; s6 = c_g7 ? (s5 τ d):(s5 ⊕ d) ∧; s7 = c_g8 ? (s3 τ s6):(s3 ∧ s6) ∧; s8 = c_g9 ? (s1 τ z):(s1 ∨ z) ∧; w = c_g10 ? (s1 τ s7):(s1 ⊕ s7) ∧; x = c_g11 ? (s8 τ s6):(s8 ⊕ s6) ∧; y = c_g12 ? (b τ s6):(b ⊕ s6) ∧; p1 = c_g13 ? (c τ d):(c ⊕ d) ∧; p2 = c_g14 ? (c τ a):(c∨ a) ∧; p3 = c_g15 ? (a τ p1):(a ∧ p1) ∧; p4 = c_g16 ? (d τ p2):(d ∧ p2) ∧; p5 = c_g17 ? (b τ p4):(b ∧ p4) ∧; p6 = c_g18 ? (p3 τ p5):(p3 ∨ p5) ])
∧ (∑_i=1^18c_gi≤ n) ∧ ((⋁_out=w^z(out⊕ out'))∧( flag))
After the encoding, we give a proposition about the security of the circuit in terms of the formula Φ.
Given an encoding formula Φ, the model is secure iff Φ is UNSATISFIABLE.
Based on this formula, we can solve it with a SAT solver; Algorithm <ref> shows the process.
Finally, we obtain the result of whether the circuit is secure or not.
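A minimal sketch of the solving step is given below, using z3py as the SAT back end for brevity; the stand-in formula is for illustration only and is not the Φ of Equation <ref>.

from z3 import Bool, And, Not, Solver, sat

def verify(phi):
    # Phi unsatisfiable => secure; a satisfying model encodes a concrete effective fault
    s = Solver()
    s.add(phi)
    return ("insecure", s.model()) if s.check() == sat else ("secure", None)

x, flag = Bool("x"), Bool("flag")
print(verify(And(x, Not(flag)))[0])    # "insecure": an assignment with x = 1, flag = 0 exists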
§ EXPERIMENTS
We have implemented our approach as an open-source tool .
Given circuits and ' in the form of Verilog gate-level netlists,
and a configuration file describing the blacklist and fault-resistance model,
verifies the fault-resistance of '.
first expresses the constraints in quantifier-free bit-vector theory (QF_Bitvec) using our SAT encoding methods and then
translates to Boolean formulas (in the DIMACS format) via
Z3 <cit.>.
Those Boolean formulas can be solved by off-the-shelf SAT solvers.
Currently, uses the parallel
SAT solver, Glucose 4.2.1 <cit.>.
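A rough sketch of this translation step is shown below (the constraint is a stand-in and the exact tactic pipeline of the tool may differ): a QF_BV goal is bit-blasted with z3 tactics and printed in DIMACS format for an external SAT solver.

from z3 import BitVec, Goal, Then

a, b = BitVec("a", 4), BitVec("b", 4)
g = Goal()
g.add(a ^ b == 5, a & b != 0)          # a stand-in QF_BV constraint

cnf = Then("simplify", "bit-blast", "tseitin-cnf")(g)[0]
print(cnf.dimacs())                    # CNF that can be handed to a solver such as Glucose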
We evaluate to answer the following two research questions:
* RQ1. How efficient and effective is for fault-resistance verification?
* RQ2. How effective is the vulnerable gate reduction for accelerating fault-resistance verification?
Benchmarks.
We use the VHDL
implementations of three cryptographic algorithms (i.e., CRAFT, LED and AES), taken from <cit.>.
The VHDL implementations are unrolled manually and transformed into Verilog
gate-level netlists using the Synopsys design compiler (version O-2018.06-SP2).
The blacklists are the same as the ones used in FIVER <cit.>.
The statistics of the benchmarks are given in Table <ref>, where the first three columns
respectively give the benchmark name, number of clock cycles,
size of the blacklist , and the other columns give the numbers of primary inputs, primary outputs, gates and each specific gate.
The CRAFT benchmarks adopt both detection- (D) and correction-based (C) countermeasures.
The CRAFT and LED benchmarks vary with the number of rounds (Ri) and the maximal number of protected faulty bits (bi), i.e., the number of faulty bits up to which the circuit ' is claimed to be fault-resistant.
The number of gates ranges from 925 to 40,184 so that the scalability of can be
evaluated.
The experiments were conducted on a machine with Intel Xeon Gold 6342 2.80GHz CPU, 1T RAM, and Ubuntu 20.04.1. Each verification task is run with 24 hours timeout.
§.§ RQ1: Efficiency and Effectiveness of for Fault-resistance Verification
To answer RQ1, we compare with (1) the state-of-the-art verifier FIVER <cit.> and (2) an SMT-based approach which directly
checks the constraints generated by our encoding method without translating to Boolean formulas.
We use the SMT solver bitwuzla <cit.>, which is the winner of QF_Bitvec (Single Query Track) Division at SMT-COMP 2021 and 2022.
The results are reported in Table <ref>, where both the SAT solver Glucose and the BDD-based verifier FIVER are run with 8 threads
while the SMT solver bitwuzla is run with a single thread, because there are no promising parallel SMT solvers for (quantifier-free) bit-vector theory.
Columns (2CNF) and (Solving) give the execution time of building and solving Boolean formulas in seconds, respectively.
Columns (Total) and (Time) give the total execution time in seconds.
Mark 51 (resp. 55) indicates that the protected circuit ' is fault-resistant (resp. not fault-resistant).
Overall, both (i.e., the SAT-based approach) and the SMT-based approach solved all the verification tasks, while FIVER ran out of time on 23 verification tasks within the time limit (24 hours per task).
The SAT- and SMT-based approaches become increasingly more efficient than FIVER
as the number of rounds (i.e., Ri) and the maximal number of protected faulty bits (i.e., bj) increase.
is significantly more efficient than the SMT-based approach on relatively larger benchmarks (e.g., AES-R1-b1, AES-R1-b2, CRAFT-R1-b2-C, CRAFT-R2-b2-C, CRAFT-R3-b3-D, CRAFT-R4-b3-D, LED64-R2-b1-D, LED64-R2-b2-D, LED64-R3-b1-D and LED64-R3-b2-D) while they are comparable on smaller benchmarks.
Interestingly, we found that
(i) implementations with correction-based countermeasures are more difficult to prove than that with detection-based countermeasures (e.g., CRAFT-Ri-bj-C vs. CRAFT-Ri-bj-D, for i=1,2 and j=1,2), because implementing correction-based countermeasures require more gates;
(ii) is more efficient at disproving fault-resistance than proving fault-resistance, because UNSAT instances are often more difficult to prove than SAT instances in CDCL SAT solvers.
(iii) often scales very well with increase of the round numbers (i.e., Ri for i=1,2,3,4),
maximal number of protected faulty bits (i.e., bj for j=1,2,3), maximum number of fault events per clock cycle (i.e., _e)
and the maximum number of clock cycles in which fault events can occur (i.e., _c),
but FIVER has very limited scalability.
To understand the effect of the number of threads, we evaluate on the benchmarks AES-R1-b1-D, AES-R1-b2-D, and CRAFT-R4-b3-D
by varying the number of threads from 1 to 12. The results are depicted in Fig. <ref>
and Fig. <ref>, respectively, where _ei and _cj denote the fault-resistance mode ζ(i,j,𝒯,).
Detailed results are reported in the appendix.
We observe that almost always outperforms the SMT-based approach.
On the fault-resistant benchmarks (i.e., curves with bj-_e k such that j≥ k), becomes more and more efficient
while the improvement becomes less and less, with the increase of the number of threads.
On the non-fault-resistant benchmarks (i.e., curves with bj-_e k such that j<k), multi-threading does not improve performance
and instead slightly worsens it, because these tasks are easy to disprove and thread scheduling incurs overhead.
[size=title]
Answer to RQ1:
is able to efficiently and effectively verify the fault-resistance of realistic cryptographic
circuits with detection- and correction-based countermeasures.
It performs significantly better than the state-of-the-art, as well as an SMT-based approach.
§.§ RQ2: Effectiveness of the Vulnerable Gate Reduction
To answer RQ2, we evaluate with/without the vulnerable gate reduction using 8 threads for SAT solving,
considering all the fault types 𝒯 = {τ_s,τ_r,τ_bf}.
The results are reported in Table <ref>,
where columns (#Gate) give the number of vulnerable gates that should be considered when verifying
fault-resistance. We can observe that our vulnerable gate reduction is able to significantly reduce the number of vulnerable gates that should be considered when verifying
fault-resistance, achieving more than 72% reduction rate on average, consequently, significantly reduce the size of the resulting Boolean formulas.
Interestingly, reducing the size of the resulting Boolean formulas does not necessarily improve the overall verification efficiency.
Indeed, the vulnerable gate reduction is very effective in proving fault-resistant benchmarks regardless of the adopted countermeasure, fault-resistance model,
and size of the benchmarks, but slightly worsens the performance for disproving non-fault-resistant benchmarks, as they are easy to disprove
and the vulnerable gate reduction itself has some overhead.
[size=title]
Answer to RQ2:
Our vulnerable gate reduction achieves on average more than 72% reduction rate of vulnerable gates and is very effective in proving fault-resistant benchmarks.
The results w.r.t. the fault-resistance models limited to the fault type τ_bf are given in the appendix
from which a similar conclusion can be drawn.
In this section, we evaluate our method and the state-of-the-art technology on cryptographic hardware implementations. Specifically, we evaluate CRAFT, LED and AES circuits with detection-based and correction-based countermeasures. All designs come from <cit.><cit.>. To obtain the Verilog gate-level netlists, we use the Synopsys design compiler, version O-2018.06-SP2. After that, we use a parser to translate the Verilog gate-level netlist into an nl file. The machine runs Ubuntu 20.04.1 with an Intel(R) Xeon(R) Gold 6342 CPU at 2.80GHz and 1T RAM. For the SAT solver, we use Glucose 4.2.1<cit.>, which won first place in the parallel track at the 2017 SAT competition. For the SMT solver, we choose bitwuzla<cit.>, the champion at SMT-COMP 2021. The tool that encodes the abstract model into the formula Φ_simp is Z3. All experiments are limited to 24 hours; if the time goes beyond 24 hours, we stop the run and report 𝐭𝐢𝐦𝐞𝐨𝐮𝐭. All experiments use 8 threads except the SMT solver, which runs single-threaded since we did not find a good parallel SMT solver.
We present the experiments from the following aspects:
* We compared the time of our algorithm with FIVER proposed by Jan Richter-Brockmann et al. <cit.> and SMT solver.
* For each of the three optimizations, we performed ablation experiments and compared the time.
§.§ Compare with FIVER and SMT
In this section, we compare the results of our method, the SMT-based approach and FIVER. We report our optimal results, and likewise for FIVER. Since we provide optimization scheme 1, our fault model is unified as ζ(_e,_c,𝒯,), where n_e and n_c are defined before.
We give the comparison of our method, SMT and FIVER in <ref>.
§.§.§ CRAFT
For CRAFT, we verify 1–4 round design protected against 1-bit, 2-bit and 3-bit attack with detection-based countermeasure. For 2, 3 and 4 round, we also consider the design which protected against multivariate attack.
The evaluation of the 1-round and 2-round designs with the detection-based countermeasure using our method takes around 1 s, which is better than FIVER on most benchmarks. Only on the 1-round design with 1-bit protection and fault model ζ (1, 1, τ_bf, ) and the 2-round design with 2-bit protection and fault model ζ (2, 1, τ_bf, ) is FIVER better than ours.
For the 3-round and 4-round designs, FIVER cannot handle such large circuits within 24 hours, while our method still completes the verification very quickly. For the 3-round design with 3-bit protection, our method takes less than 15 s to complete the verification; for the 4-round design with 3-bit protection, it costs less than 30 s. Meanwhile, the SMT solver achieves a time close to ours on the 1-round designs with 1-bit and 2-bit protection, and is significantly slower than ours on the other benchmarks.
For the correction-based countermeasure, FIVER is better than us on the 1-round and 2-round designs with 1-bit protection and fault model ζ (1, 1, τ_bf, ). For all other designs, our method is faster than FIVER. For all benchmarks with the correction-based countermeasure, the SMT solver takes longer than ours.
It is worth noting that if the circuit is not secure after injecting a fault, our method quickly concludes 𝐒𝐀𝐓𝐈𝐒𝐅𝐈𝐀𝐁𝐋𝐄, whereas FIVER does not reach a quick conclusion because it may also need to traverse combinations of secure gates. The same observation holds in the later experiments.
§.§.§ LED-64
For LED-64, we analyze the 1-, 2- and 3-round designs with 1-bit, 2-bit and 3-bit protection. As shown in Table <ref>, FIVER is faster than us on the 1-round design with 1-bit protection and fault model ζ (1, 1, τ_bf, ). On the other benchmarks, our method is better than FIVER, and the longest time is no more than 6 s. FIVER also cannot handle the 2-round and 3-round circuits within 24 h because the circuits are too large for FIVER. The SMT solver achieves a time very close to ours on the LED-64 1-round design, but is slower than ours on the other benchmarks.
§.§.§ AES-128
For AES-128, we analyze the 1-round design protected against 1-bit and 2-bit attacks. Table <ref> shows that FIVER is better than us on the 1-bit protected design with fault model ζ (1,1, τ_bf, ). For the benchmark with 2-bit protection, FIVER cannot deal with the circuit within 24 h, while our method can verify the 2-bit protected circuit with fault model ζ(2,1,τ_bf,) in 1500 s. For the benchmarks with 1-bit protection and fault model ζ(2,1,τ_bf, ) and with 2-bit protection and fault model ζ(3,1,τ_bf, ), our method completes the verification within 10 s, which is much faster than FIVER. The SMT solver is always slower than ours on all AES-128 benchmarks.
§.§.§ Different threads
We further compare the verification time with different numbers of threads. Fig. <ref> shows the results of AES128 under different numbers of threads, and Fig. <ref> shows the results of CRAFT 4round under different numbers of threads. As shown in the figures, if a task takes a short time, increasing the number of threads does not speed things up and may even increase the time (e.g., AES128 1bit 2fault, AES128 2bit 3fault and CRAFT 4round 3bit 4fault). But if a task takes a long time, increasing the number of threads significantly reduces the solving time (i.e., the other examples). We also compared with the SMT solver under 1 thread. As the figures show, even when comparing a single thread for the SMT solver with a single thread for the SAT solver, the SMT solver takes much longer than the SAT solver. Table <ref> and Table <ref> in the Appendix show the times in detail.
§.§ Enhancement from optimization strategies
In this section, we show the performance gains from our optimizations. We give ablation experiments of two optimizations.
§.§.§ Bit flip vs Bit flip + static analysis
In this section, we show the effectiveness of static analysis. Table <ref> shows the verification time with and without static analysis; the fault model for both is ζ (_e,_c, τ_bf, ). As we can see, the total time to verify the circuits with static analysis is significantly shorter than that without static analysis for all benchmarks. In addition, after static analysis, the number of gates that need to be verified is significantly reduced, which is reflected in the CNF size. Meanwhile, the security verdict of the circuit is the same as before, which further supports the correctness of the optimization.
§.§.§ Mixed fault + static analysis vs Bit flip + static analysis
In this section, we show the effectiveness of the first optimization. The fault model for mixed faults with static analysis is ζ(_e,_c,𝒯,), and that for bit-flip with static analysis is ζ(_e,_c, τ_bf, ). Table <ref> compares the results of mixed faults with static analysis against bit-flip with static analysis. In this table, we can observe that the time of the latter is always shorter than that of the former, which shows that our optimization is effective. The fact that bit-flip and mixed faults yield the same security verdict further supports the correctness of our theory.
§ RELATED WORK
This work focuses on formal verification
of countermeasures against fault injection attacks.
In this section, we discuss related work on functional equivalence checking, safety and fault-resistance verification of hardware designs.
Functional equivalence checking of hardware designs can be roughly divided into
combinational and sequential equivalence checking <cit.>,
where the former requires that the given gates match in the circuits under the same inputs,
while the latter only requires that the outputs match in the circuits under the same inputs.
Various combinational and sequential equivalence checking techniques have been proposed such as SAT/SMT-based ones (e.g., <cit.>)
and BDD-based ones (e.g., <cit.>).
Safety verification of hardware designs is usually done by model-checking, where safety properties are expressed as assertions using temporal logic.
Both SAT/SMT-based (e.g., <cit.>) and BDD-based (e.g., <cit.>) methods have been widely studied.
However, all the approaches and tools for checking functional equivalence and safety properties cannot be directly applied
to check fault-resistance, though our SAT encoding method is inspired by the existing SAT-based ones for checking sequential equivalence.
Indeed, the fault-resistance problem is significantly different from
the functional equivalence problem and cannot be easily expressed as a safety property.
Moreover, our vulnerable gate reduction, which is very effective in improving verification efficiency,
cannot be leveraged using existing tools.
Due to the severity of fault injection attacks, many simulation- and SAT-based approaches have been proposed to find effective fault vectors
or check the effectiveness of a user-specified fault vector (e.g., <cit.>).
Though promising, it is infeasible if not impossible to verify a given hardware design considering all possible fault vectors that could occur under all valid input combinations.
To fill this gap,
a BDD-based approach, named FIVER, was proposed <cit.>, which exclusively focuses on fault-resistance verification.
In general, for each possible fault vector, FIVER builds a BDD model to represent the concrete faulty circuit w.r.t. the fault vector,
and analyzes fault-resistance w.r.t. the fault vector by comparing the BDD model with the BDD model of the original circuit.
Though several optimizations were proposed, it suffers from the combinatorial explosion problem with the increase of fault types, vulnerable gates and clock cycles,
thus is limited in efficiency and scalability.
Thanks to our novel fault event encoding method and effective vulnerable gate reduction, our approach does not need to explicitly enumerate all the possible fault vectors
and the verification process can fully utilize the conflict-driven clause learning feature of modern SAT solvers.
Thus, scales very well and is significantly more efficient than FIVER on relatively larger benchmarks.
Countermeasure synthesis techniques have also been proposed to repair flaws (e.g., <cit.>).
However, they do not provide security guarantees (e.g., <cit.>) or
are limited to one specific type of fault injection attacks (e.g., clock glitches in <cit.>) and thus are still vulnerable to other fault injection attacks.
§ CONCLUSION
We formalized the fault-resistance verification problem of cryptographic circuits and proved that it is NP-complete for the first time.
We proposed novel fault encoding and SAT encoding methods to reduce the fault-resistance verification problem into the SAT problem so that state-of-the-art SAT solvers can be harnessed.
We also presented a novel vulnerable gate reduction technique to effectively reduce the number of vulnerable gates, which can significantly improve the verification efficiency.
We implemented our approach in an open-source tool and extensively evaluate it on a
set of realistic cryptographic circuits.
Experimental results show that our approach significantly outperforms the state-of-the-art
for fault-resistance verification.
Our tool enables hardware designers to assess and verify the
countermeasures in a systematic and automatic way.
For future research, it would be interesting to develop automated flaw repair techniques by leveraging the
verification results produced by our approach.
IEEEtranS
§ MISSING PROOFS
§.§ Proof of Theorem <ref>
Theorem <ref>.
The problem of determining whether a circuit ' is not fault-resistant is
NP-complete.
To show that the problem is in NP, we can first non-deterministically guess a sequence of primary inputs (x⃗_1,⋯,x⃗_k) and a fault vector (',,T)∈ζ(_e,_c,T,ℓ),
then construct the faulty circuit '[(',,T)] in polynomial time,
finally compute and check if the sequences of primary outputs '(x⃗_1,⋯, x⃗_k) and '[(',,T)](x⃗_1,⋯, x⃗_k)
differ at some clock cycle before the error flag output o_ flag differs in polynomial time.
If not, then ⟨',⟩ζ(_e,_c,T,ℓ).
The NP-hardness is proved by reducing from the SAT problem.
Let C_φ be a combinational circuit representing a Boolean formula φ, where the inputs of C_φ are the Boolean variables
of φ, and the output indicates the result of φ.
We create a circuit '=(,,, s⃗_0, ) as shown in Fig. <ref>, where
* ={x_1,⋯,x_m} is the set of inputs of the circuit C_φ;
* is the set {o_i,o_ flag| 1≤ i≤ 2_e+1};
* =R_1∪ R_2, where R_1={r_i| 1≤ i≤ 2_e+1} and R_2={r_i'| 1≤ i≤ 2_e+1};
* s⃗_0 is a vector consisting of ;
* ={C_1,C_2,C_3}, where
* C_1 comprises 2_e+1 copies of the circuit C_φ: all the copies share the same inputs , the output
of the i-th copy is connected to r_i, and the output o_ flag is always ;
* C_2 outputs signals of the memory gates R_1 and store them into the memory gates R_2 again,
checks if 1≤∑_i=1^2_e+1 r_i≤_e, and the output o_ flag is iff 1≤∑_i=1^2_e+1 r_i≤_e;
* C_3 checks whether 1≤∑_i=1^2_e+1 r_i'≤ 2_e, and the output o_ flag is iff 1≤∑_i=1^2_e+1 r_i'≤ 2_e.
Claim. The circuit ' is not fault-resistant w.r.t. the blacklist =∅ and the fault-resistance model ζ(_e,1,{τ_bf},) iff
the Boolean formula φ is satisfiable.
(⇐) Suppose φ is satisfiable. Let x⃗ be the satisfying assignment of φ.
Obviously, under the primary inputs x⃗, the output o_ flag is and the outputs {o_i| 1≤ i≤ 2_e+1} are in all the clock cycles.
Consider the fault event (2,r_1,τ_bf).
Along the sequence of primary outputs '[(2,r_1,τ_bf)](x⃗),
the output o_ flag is at the first two clock cycles and becomes at the 3-rd clock cycle.
However, the output o_1 differs in '(x⃗) and '[(2,r_1,τ_bf)](x⃗)
at the 2-nd clock cycle due to the bit-flip fault injection on the memory gate
r_1. Thus, ⟨',∅⟩ζ(_e,1,{τ_bf},).
(⇒) Suppose φ is unsatisfiable.
Obviously, under any primary inputs x⃗, all the primary outputs {o_ flag,o_i| 1≤ i≤ 2_e+1} are in all the clock cycles.
For any fault vector (',,T)∈ζ(_e,1,{τ_bf},),
at most _e memory gates can be bit-flipped in one single clock cycle. If
some memory gates in R_1 are bit-flipped at the 2-nd clock cycle, then the output o_ flag is at the 2-nd clock cycle.
If no memory gates of R_1 are bit-flipped at the 2-nd clock cycle and some memory gates in R_2 are bit-flipped at the 3-rd clock cycle,
the primary outputs {o_i| 1≤ i≤ 2_e+1} are at the 2-nd clock cycle,
o_ flag is at the 3-rd clock cycle although some primary outputs of {o_i| 1≤ i≤ 2_e+1}
become at the 3-rd clock cycle. Thus, ⟨',∅⟩ζ(_e,1,{τ_bf},).
§.§ Proof of Theorem <ref>
Theorem <ref>.
Consider a fault-resistance model ζ(_e,_c,T,ℓ) such that τ_bf∈ T or {τ_s,τ_r}⊆ T, and ℓ∈{,}.
Let _1(',,T)=(',,T)∪{(α,β,τ)}∈ζ(_e,_c,T,ℓ) be an effective fault vector on the circuit '.
If the output of the gate β is only connected to one logic gate β'∉,
then
there exists a fault vector '(',,T)⊆(',,T)∪{(α,β',τ')} for some τ'∈ T such
that '(',,T) is also effective on the circuit '.
Moreover, if (',,T)=∅, then {(α,β',τ')} for some τ'∈ T is effective on the circuit '.
Since _1(',,T) is an effective fault vector on the circuit ',
there exists a sequence of primary inputs (x⃗_1,⋯,x⃗_k) such that
'(x⃗_1,⋯, x⃗_k)
and '[_1(',,T)](x⃗_1,⋯, x⃗_k) differ at
some clock cycle before the error flag output o_ flag differs.
We proceed by distinguishing whether the output signal of the gate β' differs in the circuits
' and '[_1(',,T)] under the same sequence of primary inputs (x⃗_1,⋯,x⃗_k).
* If the output signal of the gate β' is the same in the circuits
' and '[_1(',,T)] under the same sequence of primary inputs (x⃗_1,⋯,x⃗_k),
then the effect of the fault event (α,β,τ) is stopped at the gate β', as
the output of the gate β is only connected to the gate β'. Thus,
the sequences of primary outputs '[(',,T)](x⃗_1,⋯, x⃗_k)
and '[_1(',,T)](x⃗_1,⋯, x⃗_k) are the same.
It implies that '(x⃗_1,⋯, x⃗_k)
and '[(',,T)](x⃗_1,⋯, x⃗_k) differ at
some clock cycle before the error flag output o_ flag differs.
The result immediately follows.
* If the output signal of the gate β' differs in the circuits
' and '[_1(',,T)] under the same sequence of primary inputs (x⃗_1,⋯,x⃗_k),
then the fault propagation from the fault event (α,β,τ) flips the output signal of the gate β'.
It implies that '[_1(',,T)](x⃗_1,⋯, x⃗_k)
and '[_2(',,T)](x⃗_1,⋯, x⃗_k) are the same, as
the output of the gate β is only connected to the gate β'.
Let _2(',,T)=(',,T)∪{(α,β',τ')}, where τ'=τ_bf if τ_bf∈ T,
otherwise τ=τ_s if the output signal of the gate β' flips from to due to the fault event (α,β,τ)
and τ=τ_r if the output signal of the gate β' flips from to due to the fault event (α,β,τ).
Thus, '(x⃗_1,⋯, x⃗_k)
and '[_2(',,T)](x⃗_1,⋯, x⃗_k) differ at
some clock cycle before the error flag output o_ flag differs. The result immediately follows.
Moreover, if (',,T)=∅, then the output signal of the gate β' must differ in the circuits
' and '[_1(',,T)] under the same sequence of primary inputs (x⃗_1,⋯,x⃗_k),
otherwise _1(',,T) is ineffective on the circuit '.
The result follows from the fact that _2(',,T)={(α,β',τ'}.
Proof for only the bit-flip fault type
Theorem <ref>.
Let _1(',)=(',)∪{(α,β,τ_bf)}∈ζ(_e,_c,{τ_bf},ℓ) be an effective fault vector on the circuit '
and _2(',)=(',)∪{(α,β',τ_bf)}.
If the output of the gate β is only connected to one logic gate β'∉ and ℓ∈{,},
then _2(',)∈ζ(_e,_c,{τ_bf},ℓ)
and
there exists a fault vector '(',)⊆_2(',)
that is also effective on the circuit '.
Moreover, if (',)=∅, then {(α,β',τ_bf)} is an effective fault vector on the circuit '.
Following from the facts that β' is a logic gate, β'∉ and ℓ∈{,},
it is easy to see that _2(',)∈ζ(_e,_c,{τ_bf},ℓ).
Below, we show that there exists an effective fault vector '(',)⊆_2(',)
on the circuit '.
Since _1(',) is an effective fault vector on the circuit ',
there exists a sequence of primary inputs (x⃗_1,⋯,x⃗_k) such that
'(x⃗_1,⋯, x⃗_k)
and '[_1(',)](x⃗_1,⋯, x⃗_k) differ at
some clock cycle before the error flag output o_ flag differs.
We proceed by distinguishing whether the output signal of the gate β' differs in the circuits
' and '[_1(',)] under the same sequence of primary inputs (x⃗_1,⋯,x⃗_k).
* If the output signal of the gate β' is the same in the circuits
' and '[_1(',)] under the same sequence of primary inputs (x⃗_1,⋯,x⃗_k),
then the effect of the fault event (α,β,τ_bf) is stopped at the gate β', as
the output of the gate β is only connected to the gate β'. Thus,
the sequences of primary outputs '[(',)](x⃗_1,⋯, x⃗_k)
and '[_1(',)](x⃗_1,⋯, x⃗_k) are the same.
It implies that '(x⃗_1,⋯, x⃗_k)
and '[(',)](x⃗_1,⋯, x⃗_k) differ at
some clock cycle before the error flag output o_ flag differs.
The result immediately follows.
* If the output signal of the gate β' differs in the circuits
' and '[_1(',)] under the same sequence of primary inputs (x⃗_1,⋯,x⃗_k),
then the fault propagation from the fault event (α,β,τ_bf) flips the output signal of the gate β'.
It implies that '[_1(',)](x⃗_1,⋯, x⃗_k)
and '[_2(',)](x⃗_1,⋯, x⃗_k) are the same, as
the output of the gate β is only connected to the gate β'.
Thus, '(x⃗_1,⋯, x⃗_k)
and '[_2(',)](x⃗_1,⋯, x⃗_k) differ at
some clock cycle before the error flag output o_ flag differs. The result immediately follows.
Moreover, if (',)=∅, then the output signal of the gate β' must differ in the circuits
' and '[_1(',)] under the same sequence of primary inputs (x⃗_1,⋯,x⃗_k),
otherwise _1(',) is ineffective on the circuit '.
The result follows from the fact that _2(',)={(α,β',τ_bf)}.
§ PSEUDO-CODE OF THE ILLUSTRATING EXAMPLE AND ITS REVISED VERSION
The corresponding pseudo-code of the illustrating example is given in Fig. <ref>, where the left two columns implement the function
of the S-box and the right column implements a single-bit parity protection mechanism.
The circuit representation of the revised implementation of the RECTANGLE S-box
is shown in Fig. <ref> and its pseudo-code is shown in Fig. <ref>,
following the independence property defined by <cit.>.
We can observe that any fault injection on a single logic gate in the redundancy part does not change any of the outputs { w,x,y,z},
while any fault injection on a single logic gate in the S-box part changes only one of the outputs { w,x,y,z}
and also changes the error flag output flag. Thus, the revised implementation is fault-resistant
w.r.t. the blacklist and the fault-resistance model ζ(1,1,𝒯,), where the blacklist contains only the logic gates used for the parity checking.
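To make the structure of such a protected implementation concrete, the following minimal Python sketch mimics the idea described above on a toy, hypothetical 4-bit S-box: the redundant part recomputes the expected output parity directly from the input, and the error flag is raised whenever the parity of the (possibly faulted) S-box outputs disagrees with it. The S-box table and function names are illustrative assumptions and are neither the actual RECTANGLE S-box nor the pseudo-code of Fig. <ref>.

# Toy illustration (not the actual RECTANGLE S-box): a 4-bit S-box with
# single-bit parity protection and an error flag, mirroring the structure
# sketched above (functional part + redundant parity part).
SBOX = [0xC, 0x5, 0x6, 0xB, 0x9, 0x0, 0xA, 0xD,
        0x3, 0xE, 0xF, 0x8, 0x4, 0x7, 0x1, 0x2]  # hypothetical table

def parity4(v):
    """XOR of the four bits of v."""
    v ^= v >> 2
    v ^= v >> 1
    return v & 1

def protected_sbox(nibble, fault_mask=0):
    """Return (output nibble, error flag).

    `fault_mask` flips bits of the S-box output, emulating bit-flip faults
    injected into the functional part only (the redundant part is untouched).
    """
    out = SBOX[nibble] ^ fault_mask          # functional part (possibly faulted)
    expected_parity = parity4(SBOX[nibble])  # redundant part, recomputed from the input
    flag = parity4(out) ^ expected_parity    # raised iff an odd number of output bits flipped
    return out, flag

if __name__ == "__main__":
    # Any single bit-flip on the functional output is detected by the flag.
    for x in range(16):
        for bit in range(4):
            out, flag = protected_sbox(x, fault_mask=1 << bit)
            assert flag == 1
    print("every single-bit fault on the S-box output raises the error flag")

In this sketch a fault confined to the parity part can only disturb the flag and never the data outputs, which mirrors why the gates of the detection circuitry are placed on the blacklist.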
§ DETAILED RESULTS OF AES-R1-B1-D, AES-R1-B2-D, AND CRAFT-R4-B3-D BY VARYING THE NUMBER OF THREADS
Detailed results of AES-R1-b1-D, AES-R1-b2-D, and CRAFT-R4-b3-D by varying the number of threads
from 1 to 12 are reported in Table <ref>.
§ RESULTS OF FAULT-RESISTANCE VERIFICATION WITH ONLY THE FAULT TYPE Τ_BF
We also evaluate our tool for verifying fault-resistance with only the fault type τ_bf.
§.§ RQ1: Efficiency and Effectiveness of for Fault-resistance Verification
We still compare our tool with (1) the state-of-the-art verifier FIVER and (2) an SMT-based approach that directly
checks the constraints generated by our encoding method without translating them to Boolean formulas.
The results are reported in Table <ref>, where both the SAT solver Glucose and the BDD-based verifier FIVER are run with 8 threads
while the SMT solver bitwuzla is run with a single thread.
Columns (2CNF) and (Solving) give the execution time of building and solving Boolean formulas in seconds, respectively.
Columns (Total) and (Time) give the total execution time in seconds.
Mark ✓ (resp. ✗) indicates that the protected circuit ' is fault-resistant (resp. not fault-resistant).
Overall, both our tool (i.e., the SAT-based approach) and the SMT-based approach solved all the verification tasks, while FIVER runs out of time on 22 verification tasks within the time limit (24 hours per task). The SAT- and SMT-based approaches become more and more efficient relative to FIVER
with the increase of the round number (i.e., Ri) and the maximal number of protected faulty bits (i.e., bj).
Our tool is significantly more efficient than the SMT-based approach on relatively larger benchmarks (e.g., AES-R1-b1, AES-R1-b2, CRAFT-R1-b2-C, CRAFT-R2-b2-C, CRAFT-R3-b3-D, CRAFT-R4-b3-D, LED64-R2-b1-D, LED64-R2-b2-D, LED64-R3-b1-D and LED64-R3-b2-D), while they are comparable on smaller benchmarks.
Interestingly, we found that
(i) implementations with correction-based countermeasures are more difficult to prove than those with detection-based countermeasures (e.g., CRAFT-Ri-bj-C vs. CRAFT-Ri-bj-D, for i=1,2 and j=1,2), because implementing correction-based countermeasures requires more gates;
(ii) our tool is more efficient at disproving fault-resistance than at proving fault-resistance, because UNSAT instances are often more difficult to solve than SAT instances for CDCL SAT solvers;
(iii) our tool often scales very well with the increase of the round number (i.e., Ri for i=1,2,3,4),
the maximal number of protected faulty bits (i.e., bj for j=1,2,3), the maximum number of fault events per clock cycle (i.e., _e),
and the maximum number of clock cycles in which fault events can occur (i.e., _c),
but FIVER has very limited scalability.
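To illustrate the flavor of the SAT-based check (the (2CNF)/(Solving) split reported above), the following self-contained Python sketch encodes, for a toy two-gate circuit, the question "does a single bit-flip fault change a primary output without raising the error flag?" as a CNF formula and hands it to Glucose via the python-sat package. The circuit, the clause-level encoding, and the variable naming are illustrative assumptions and are not our tool's actual encoding.

from pysat.formula import CNF
from pysat.solvers import Glucose4

# variables: inputs a, b; golden AND output y_ok; redundant copy y_red;
# fault-selection bit s; faulty output y_f; error flag
a, b, y_ok, y_red, s, y_f, flag = range(1, 8)
cnf = CNF()

def add_and(out, x, y):
    # out <-> (x AND y)
    cnf.extend([[-out, x], [-out, y], [out, -x, -y]])

def add_xor(out, x, y):
    # out <-> (x XOR y)
    cnf.extend([[-out, x, y], [-out, -x, -y], [out, -x, y], [out, x, -y]])

add_and(y_ok, a, b)        # functional AND gate (fault-free value)
add_and(y_red, a, b)       # redundant copy used by the detection circuitry
add_xor(y_f, y_ok, s)      # s = 1 injects a bit-flip on the functional gate
add_xor(flag, y_f, y_red)  # comparator raises the error flag on mismatch

cnf.append([s])            # a fault is injected ...
cnf.append([-flag])        # ... and the error flag stays silent
# (with s = 1 the faulty output y_f automatically differs from y_ok)

with Glucose4(bootstrap_with=cnf.clauses) as solver:
    if solver.solve():
        print("not fault-resistant, witness:", solver.get_model())
    else:
        print("fault-resistant w.r.t. this single bit-flip fault")

Here the query is unsatisfiable, i.e. the toy circuit is fault-resistant against this fault: any bit-flip on the functional gate necessarily raises the flag. A satisfiable query would instead return a concrete input/fault assignment as a counterexample, matching the ✓/✗ verdicts reported in the tables.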
To understand the effect of the number of threads, we evaluate our tool on the benchmarks AES-R1-b1-D, AES-R1-b2-D, and CRAFT-R4-b3-D
by varying the number of threads from 1 to 12. The results are depicted in Fig. <ref>
and Fig. <ref>, respectively, where _ei and _cj denote the fault-resistance model ζ(i,j,τ_bf,).
Detailed results are reported in Table <ref>.
We can observe that our tool always outperforms the SMT-based approach.
On the fault-resistant benchmarks (i.e., curves with bj-_e k such that j≥ k), our tool becomes more and more efficient
with the increase of the number of threads, although the marginal improvement diminishes.
On the non-fault-resistant benchmarks (i.e., curves with bj-_e k such that j<k), multi-threading does not improve performance
and instead slightly worsens it, because these instances are easy to disprove and thread scheduling incurs overhead.
§.§ RQ2: Effectiveness of the Vulnerable Gate Reduction
We evaluate our tool with and without the vulnerable gate reduction using 8 threads for SAT solving,
considering only the fault type τ_bf.
The results are reported in Table <ref>,
where columns (#Gate) give the number of vulnerable gates that should be considered when verifying
fault-resistance. We can observe that our vulnerable gate reduction significantly reduces the number of vulnerable gates that have to be considered when verifying
fault-resistance (more than 72% reduction rate on average), consequently significantly reducing the size of the resulting Boolean formulas and
accelerating fault-resistance verification, regardless of the adopted countermeasure, the fault-resistance model,
and the size of the benchmarks.
On the Well-posedness of Hamilton-Jacobi-Bellman Equations of the Equilibrium Type
Qian Lei and Chi Seng Pun
===================================================================================
This paper studies the well-posedness of a class of nonlocal parabolic partial differential equations (PDEs), or equivalently equilibrium Hamilton-Jacobi-Bellman equations, which has a strong tie with the characterization of the equilibrium strategies and the associated value functions for time-inconsistent stochastic control problems. Specifically, we consider nonlocality in both time and space, which allows for modelling of the stochastic control problems with initial-time-and-state dependent objective functionals. We leverage the method of continuity to show the global well-posedness within our proposed Banach space with our established Schauder prior estimate for the linearized nonlocal PDE. Then, we adopt a linearization method and Banach's fixed point arguments to show the local well-posedness of the nonlocal fully nonlinear case, while the global well-posedness is attainable provided that a very sharp a-priori estimate is available. On top of the well-posedness results, we also provide a probabilistic representation of the solutions to the nonlocal fully nonlinear PDEs and an estimate on the difference between the value functions of sophisticated and naïve controllers. Finally, we give a financial example of time inconsistency that is proven to be globally solvable.
Existence and Uniqueness, Time-inconsistent stochastic control problems, Equilibrium Hamilton-Jacobi-Bellman equation, Nonlocal partial differential equation, Method of Continuity, Linearization
93E20, 35A01, 35A02, 35K10, 35Q93, 49L12
§ INTRODUCTION
Stochastic control problems can be categorized into two classes, time-consistent or not, depending on whether Bellman's principle of optimality (BPO) holds for the problem. The classical stochastic control problems are time-consistent, and the mathematical tools for solving them are well documented in <cit.>. However, the violation of BPO is common in many decision-making problems, especially in behavioral finance and economics, as long as the objective functionals depend on the initial time or initial state. For example, hyperbolic discounting problems involve initial-time-dependent objectives; see <cit.>. Endogenous habit formation problems and portfolio selection with state-dependent risk aversion involve initial-state-dependent objectives; see <cit.>. This paper is devoted to addressing the open problems related to time-inconsistent (TIC) stochastic control problems.
When the BPO does not hold, the globally optimal solution may no longer be optimal as time evolves, which gives rise to the questions about different types of dynamic optimality and how to solve for them. These questions were first studied in <cit.> and they categorize three types of agents facing TIC, namely myopic agents (who ignore dynamic optimality completely and thus are normally not considered), pre-committers, and sophisticated agents. We refer the readers to <cit.> for a comprehensive review on treatments for time inconsistency (also abbreviated as TIC), while we will also elaborate the latter two with more technical details in Section <ref>. Pre-commitment policy is difficult to identify as the dynamic programming (DP) techniques are no longer available. Though embedding techniques may be employed to convert the TIC problem to a time-consistent problem, only for some specific TIC models, such as mean-variance analysis of the linear-quadratic type, would be approachable; see <cit.>. This paper is focused on the equilibrium policy by the sophisticated agents as there would be systematic framework of identifying such policies with a game-theoretic concept. However, there remain some open problems about the mathematical framework and we aim to fill in the research gap with this paper.
Specifically, by ways of establishing DP or Hamilton-Jacobi-Bellman (HJB) equations, there are generally two analytical frameworks for TIC models while they agree with each other, namely extended DP (or extended HJB system) and equilibrium HJB equation, originated from <cit.> and <cit.>, respectively. The extended DP technique introduces auxiliary (unknown) functions and adjustment terms to revive the BPO from the viewpoint of subgame perfect equilibrium. Although one can show the verification theorem for the solutions to the corresponding extended HJB system, its derivation and the definition of equilibrium policies are heuristic. <cit.> properly address the mathematical shortcomings of <cit.> by a discretization approach for formulating subproblems partitioned arbitrarily over the investment horizon and showing the convergence of the recursive equations to an equilibrium HJB equation for the value function. Noteworthy is however that the framework of equilibrium HJB equation can by far cover the initial-time-and-state dependence of the objective functionals but not the nonlinearity of expectation operator nested in the extended HJB system. We mainly refer to the framework of equilibrium HJB equation for the aforementioned TIC models, while both frameworks agree with each other in this case; see Section <ref> below for more details. The open problem of our interest that persists in both frameworks is the well-posedness issues, including existence, uniqueness, and stability of the solutions, of the equilibrium HJB equation (or the extended HJB system).
Although hundreds of works have adopted the frameworks of <cit.> or <cit.> to obtain an equilibrium policy, the mathematical property, especially uniqueness, of the policy and the associated value function is underexplored. Moreover, the relation between the equilibrium HJB equations and the TIC stochastic control problems requires further analyses. On one hand, it is expected (Sufficiency) that if a regular solution to the equilibrium HJB equation exists, one can identify an equilibrium value function and the corresponding equilibrium policy. On the other hand, it is expected (Necessity) that for every equilibrium policy, the corresponding value function solves the equilibrium HJB equation. We refer the readers to <cit.> and <cit.> for the studies on sufficiency and necessity, respectively. However, the concerns above are built on top of the solvability (well-posedness) of the equilibrium HJB equation and it can be viewed as a standalone mathematical problem. The equilibrium HJB equation elaborated in the subsequent subsection is a nonlocal fully nonlinear partial differential equation (PDE), whose well-posedness is not covered in the classical PDE theory. It should be noted that we may also approach the TIC stochastic control problem with a backward stochastic Volterra integral equation (BSVIE) approach (see <cit.> for example) but the well-posedness issues persist. We do not adopt the BSIVE approach while based on our established well-posedness results, we will show (in Theorem <ref>) that the solution to the equilibrium HJB equation solves a flow of backward stochastic differential equations (BSDEs), which is equivalent to a second-order BSVIE.
§.§ Related Literature and Challenges
This paper aims to address the well-posedness issues for a broader class of nonlocal fully nonlinear parabolic PDEs, called the HJB equations of the equilibrium type, which nests the equilibrium HJB equation stemming from TIC stochastic control problems, of the form
{[ u_s(t,s,x,y) = F(t,s,x,y,(∂_I u)_|I|≤ 2(t,s,x,y), (∂_I u)_|I|≤ 2(s,s,x,y)|_x=y),; u(t,0,x,y) = g(t,x,y), t,s∈[0,T], x,y∈ℝ^d. ].
where the mapping (nonlinearity) F could be nonlinear with respect to all its arguments, and both s and y are dynamical variables while (t,x) should be considered as an external space-time parameter. Here, I=(i_1,…,i_j) is a multi-index with j=|I|, and ∂_I u:=∂^|I|u/∂ y_i_1⋯∂ y_i_j. The nonlocality comes from the dependence on the unknown function u and its derivatives evaluated at not only the local point (t,s,x,y) but also at the diagonal line of the space-time domain (s,s,y,y). A more specific and relevant application of (<ref>) is the equilibrium HJB equation (<ref>) in Section <ref> that characterizes the equilibrium solution to a TIC stochastic control problem.
The majority of the existing literature that attempted the well-posedness of (<ref>) or (<ref>) has been focused on the cases without the term (∂_I u)_|I|= 2(s,s,x,y)|_x=y, which corresponds to the TIC stochastic control problems without controls on the diffusion part of the state process; see <cit.>. In this case, the equilibrium HJB equation is a nonlocal quasilinear PDE (of the form (<ref>) below), whose well-posedness is relatively easy to achieve with an explicit form of the fundamental solution to the corresponding homogeneous PDE. However, free of (∂_I u)_|I|= 2(s,s,x,y)|_x=y or leaving the diffusion part uncontrolled largely limits the modelling of the stochastic control problems and its negative impact is more pronounced for risk-sensitive tasks. In fact, only when the controls take effect on the magnitude or uncertainty, the stochastic control problems essentially differ from the deterministic counterparts. Moreover, the well-posedness results obtained in <cit.> do not consider x-dependence of the objective functional, which imposes significant analytical challenges. Recently, <cit.> have extended the well-posedness studies of <cit.> to controlled diffusion setting or fully nonlinearity of (<ref>) and to higher order systems for TIC stochastic differential games but they still consider nonlocality only in time rather than in both time and state.
To elaborate the challenges of extension from <cit.> and from <cit.>, we may revisit the classical contraction mapping approach to the well-posedness issue in <cit.>. Specifically, we attempt to
construct a nonlinear operator from u to U defined by the solution to the PDE
U_s = ∑_|I|= 2 a^I(s,y)∂_I U + F(t,s,x,y,(∂_I U)_|I|≤ 1, (∂_I u)_|I|≤ 1|_{t=s, x=y}), U|_s=0=g.
With the unmanageable diagonal terms at (s,s,y,y) replaced with a known function u, the classical PDE theory promises that the mapping from u to U is a contraction such that it admits a unique fixed point solving the nonlocal PDE (with u replaced by U in (<ref>)). Note that (<ref>) still has local terms ∂_|I|= 2U appearing in a linear manner. However, if the PDE involves the highest order nonlocal terms nonlinearly, i.e., (<ref>), the similar mapping is no longer contractive but just continuous. Even <cit.> got rid of this challenge with an integral representation of the nonlocal terms, the methods used in the existing literature are still limited to handling nonlocality in time only. When we further advance the well-posedness to nonlocality in both time and state, we encounter the following two main challenges:
* If we represent the diagonal (nonlocal) terms (∂_I u)_|I|≤ 2(s,s,x,y)|_x=y of (<ref>) in the form (∂_I u)_|I|≤ 2(t,s,x,y)-∫^t_s·-∫^x_y·, it is clear that the first integral is a usual definite integral over a bounded time interval, while the second one is taken over a spatial segment whose length can be arbitrarily large since x and y range over all of ℝ^d. Hence, a natural question is whether the space of bounded functions is adequate for studying the general nonlocal PDE (<ref>); we need to specify suitable norms and the induced spaces, as well as topologies, for (<ref>).
* A set of approaches in <cit.> cannot provide Schauder's prior estimate for solutions of the linearized version (<ref>) (given below) of (<ref>), which gives a certain compactness to the class of possible solutions. Such a compactness is necessary for Banach's fixed-point theorem in the study of more complicated PDEs. In the study of x-independent nonlocal PDEs in <cit.>, the prior estimate can be established by studying a Cauchy initial value problem over time interval near the external temporal parameter t. However, it is difficult to consider such an associated PDE in a small neighbourhood of the spatial parameter x, since there are no boundary conditions imposed to verify the well-posedness of such an initial-boundary-value problem.
§.§ Our Approach
The methods in the highly related literature <cit.> are not feasible to address the well-posedness of the general
nonlocal PDE (<ref>). In this paper, we provide a new approach for proving the well-posedness, which is compatible with all previous results. With the designs of norms and function spaces tailored for (<ref>), the main procedure of our analysis is outlined as follows:
Step 1a. We first study a linearized version of (<ref>) of the form
L_0u:=u_s-∑_|I|≤ 2 a^I(s,y)∂_Iu+∑_|I|≤ 2 b^I(s,y)(∂_Iu)|_{t=s, x=y}=f, u|_s=0=g,
where both a^I and b^I are independent of (t,x). It turns out that (<ref>) is mathematically equivalent to a decoupled system of PDEs (see (<ref>) below) for a unknown vector-valued function. By proving that the system admits a regular enough solution and satisfies some important properties, we can show that there also exists a unique classical solution satisfying (<ref>) in [0,T]^2×ℝ^d;d. Noteworthy is that due to the appearance of (∂_I u)_|I|= 2(s,s,x,y)|_x=y, (<ref>) is not a special case of (<ref>);
Step 1b. We then investigate a linearized PDE of (<ref>) with general coefficients
Lu:=u_s-∑_|I|≤ 2 A^I(t,s,x,y)∂_Iu+∑_|I|≤ 2 B^I(t,s,x,y)(∂_Iu)|_{t=s, x=y}=f, u|_s=0=g,
where A^I and B^I depend on not only (s,y) but also (t,x). The most difficult part of proving the solvability of (<ref>) is to provide a Schauder's estimate of its solutions. Based on the well-designed norms and function spaces, we not only provide quantitative information on the regularity of solutions, but also show that the behavior of the solutions of (<ref>) can be controlled by the non-homogeneous term f and the initial data g;
Step 1c. Let us consider a family of operators parameterized by τ∈[0,1]:
L_τ u:=(1-τ)L_0 u+τ L_1 u
where L_1u:=Lu. Thanks to the Schauder estimate of solutions of (<ref>), we will take advantage of the method of continuity to prove the global well-posedness of (<ref>) in [0,T]^2×ℝ^d;d;
Step 2. We analyze the operator Λ(u)=U, where U is the solution of
U_s=LU+F(t,s,x,y,(∂_I u)_|I|≤ 2, (∂_I u)_|I|≤ 2|_{t=s, x=y})-Lu, U|_s=0=g,
which is well-defined, provided that the nonlocal linear PDE (<ref>) is well-posed. Moreover, it is obvious that each fixed point of (<ref>) solves (<ref>). Thanks again to the Schauder estimate of solutions of (<ref>), we first prove that Λ is a contraction and then make use of Banach's fixed point theorem to justify the local well-posedness of (<ref>). Subsequently, we show its global solvability, provided that a very sharp a prior estimate is available.
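As a purely numerical illustration of the fixed-point operator Λ in Step 2 (not used anywhere in the proofs), the following Python sketch iterates u↦Λ(u) for a toy one-dimensional, x-independent model u_s(t,s,y)=u_yy(t,s,y)+b·u(s,s,y) with u(t,0,y)=g(t,y) and homogeneous Dirichlet data in y: at each sweep the diagonal term u(s,s,y) is frozen at the previous iterate, so every t-slice reduces to a standard local heat equation marched forward in s by an explicit scheme. All grids, coefficients, and data are assumptions made for the illustration.

import numpy as np

# Toy model: u_s(t,s,y) = u_yy(t,s,y) + b*u(s,s,y) on y in [0,1],
# homogeneous Dirichlet data in y, t-dependent initial condition g(t,y).
T, b = 1.0, 0.5
Ny, Ns = 11, 250                          # t- and s-grids coincide, so u(s,s,.) is a grid value
ys = np.linspace(0.0, 1.0, Ny)
ss = np.linspace(0.0, T, Ns + 1)
dy, ds = ys[1] - ys[0], ss[1] - ss[0]
assert ds <= 0.5 * dy ** 2                # stability of the explicit heat step

def g(t, y):
    return (1.0 + 0.3 * t) * np.sin(np.pi * y)   # vanishes at y = 0, 1

u = np.zeros((Ns + 1, Ns + 1, Ny))        # indices: [t, s, y]
u[:, 0, :] = np.array([g(t, ys) for t in ss])

for sweep in range(30):                   # fixed-point iteration u <- Lambda(u)
    diag = u[np.arange(Ns + 1), np.arange(Ns + 1)]   # frozen diagonal slice u(s,s,.)
    new = np.zeros_like(u)
    new[:, 0, :] = u[:, 0, :]
    for it in range(Ns + 1):              # each t-slice is now a *local* heat equation
        for k in range(Ns):
            v = new[it, k]
            lap = np.zeros_like(v)
            lap[1:-1] = (v[2:] - 2.0 * v[1:-1] + v[:-2]) / dy ** 2
            new[it, k + 1] = v + ds * (lap + b * diag[k])
    err = np.max(np.abs(new - u))
    u = new
    if err < 1e-10:
        break
print(f"fixed point reached after {sweep + 1} sweeps (residual {err:.1e})")

In this zeroth-order toy model the frozen-diagonal iteration in fact contracts on the whole interval; in the fully nonlinear case (<ref>), where second-order diagonal terms enter F nonlinearly, the analogous map is only shown to be contractive on a small time interval, which is why the analysis below first establishes local well-posedness.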
§.§ Contributions and Organization of Our Paper
Our contributions are mainly threefold: (PDE) For such kind of nonlocal PDEs with initial-dynamic space-time structure arising from TIC stochastic control problems, we devise an analytical framework under which nonlocal linear/nonlinear PDEs are well-posed in the sense that we can establish the existence, uniqueness, and stability of their solutions.
This paper provides a detailed exploration of the underlying function spaces as well as the mathematical properties of mappings between these spaces. (Stochastic control) Our framework allows the control variable to enter the diffusion of the state process, which successfully breaks through the existing bottleneck of TIC stochastic control problems. Together with the sufficiency and necessity analyses in the existing literature, our well-posedness results directly indicate the solvability of TIC control problems, at least on a maximally-defined time interval. Thanks to our well-posedness and regularity results, some long-standing open problems in TIC stochastic control theory can be solved in our analytical framework; see Proposition <ref> and the discussion following it.
Moreover, the difference between “naïve" and “sophisticated" value functions is first examined; (BSDE) We provide a probabilistic expression for solutions of PDEs (<ref>). The proposed concept of flow of second-order BSDEs provides a unified and general treatment for the related studies.
The rest of this paper is organized as follows. Section <ref> is devoted to the preliminaries for our study. We review the concepts of equilibrium controls and the associated equilibrium HJB equations for time-inconsistent stochastic optimal control problems. Section <ref> studies the linearized version of the nonlocal PDEs. We first establish the Schauder prior estimate for solutions of the nonlocal linear PDEs, and then take advantage of the method of continuity to prove its global well-posedness. In Section <ref>, by the linearization method and Banach's fixed point theorem, we show that the nonlocal fully nonlinear PDE is locally solvable in a small time interval. Subsequently, we study how to extend the well-posedness result to a larger interval. Furthermore, it turns out that the fully nonlinear PDE is solvable as well if a very sharp a priori estimate holds. As a corollary, we show the global solvability of nonlocal quasilinear PDEs by using the newly acquired well-posedness results. In addition to the contributions in PDE theory, we also propose a flow of BSDEs to provide a probabilistic interpretation for our nonlocal PDEs. In Section <ref>, we apply our PDE results to equilibrium HJB equations for the analysis of TIC stochastic control problems. We also estimate the difference between "naïve" and "sophisticated" value functions. Moreover, we provide a financial TIC example that is globally solvable. Finally, Section <ref> concludes.
§ TIME-INCONSISTENT STOCHASTIC CONTROL PROBLEMS AND EQUILIBRIUM HJB EQUATIONS
In this section, we introduce the time-inconsistent (TIC) stochastic control problems, whose (equilibrium) solutions can be characterized by Hamilton-Jacobi-Bellman (HJB) equations (of the equilibrium type). While our focus is on the well-posedness of the HJB equations of the equilibrium type, we simply introduce the differential equations under our considerations and their rigorous derivation can be found in <cit.>. With the well-posedness of the HJB equations, we will convert the conclusions for the TIC stochastic control problems.
Let (Ω,ℱ,𝔽,ℙ) be a complete filtered probability space that supports a n-dimensional standard Brownian motion, whose natural filtration augmented by all the ℙ-null sets is given by 𝔽={ℱ_s}_s≥ 0. Let T>0 be a finite horizon and U⊆ℝ^m be a non-empty set that could be unbounded. The set of all admissible stochastic control processes over [t,T] for t∈ [0,T) is defined as
𝒰[t,T]:={α:[t,T]×Ω→ U:α(·) is 𝔽-progressively measurable
with 𝔼∫^T_t|α(·)|^2ds<∞}.
The controlled state process and the cost functional will be characterized with forward-backward stochastic differential equations (FBSDEs) below, while the admissibility of the stochastic controls also ensures the well-posedness of the FBSDEs. To define a TIC problem, we often fix the time t∈[0,T] and consider a time variable s∈[t,T]. It is convenient to introduce a set notation for the time pair (t,s): ∇[0,T]:={(t,s):0≤ t≤ s≤ T}; similarly, we also define Δ[0,T]:={(t,s):0≤ s≤ t≤ T}. To ease notational burden, we also introduce ℝ^d;d:=ℝ^d×ℝ^d.
§.§ Stochastic Controls with Time-and-State-Varying Objectives
For a given pair (t,x)∈[0,T]×ℝ^d, we aim to find an α(·)∈𝒰[t,T] such that
J(t,x;α(·)):=inf_α(·)∈𝒰[t,T]J(t,x;α(·))
where the cost functional J(t,x;α(·)):=Y(t;t,x,α(·)) with (X(·),Y(·),Z(·)) (in greater detail, (X(·;t,x,α(·)),Y(·;t,x,α(·)),Z(·;t,x,α(·)))) being the adapted solution to the following controlled FBSDEs:
{[ dX(s) = b(s,X(s),α(s))ds+σ(s,X(s),α(s))dW(s), s∈[t,T],; dY(s) = -h(t,s,X(t),X(s),α(s),Y(s),Z(s))ds+Z(s)dW(s), s∈[t,T],; X(t) = x, Y(T)=g(t,X(t),X(T)), ].
where b:[0,T]×ℝ^d× U→ℝ^d and σ:[0,T]×ℝ^d× U→ℝ^d× n are the drift and volatility of the state process X(·), respectively, h:∇[0,T]×ℝ^d;d× U×ℝ×ℝ^1× n→ℝ and g:[0,T]×ℝ^d;d are the generator and terminal condition of the utility process (Y(·),Z(·)), respectively, and they are all deterministic functions. Under some suitable conditions (see <cit.>), for any (t,x)∈[0,T]×ℝ^d and α(·)∈𝒰[t,T], the controlled FBSDEs (<ref>) admit a unique adapted solution (X(·),Y(·),Z(·)). Moreover, <cit.> reveals that it admits a probabilistic representation:
J(t,x;α(·)) =𝔼_t,x[g(t,X(T),X(T))+∫^T_t h(t,s,X(t),X(s),α(s),Y(s),Z(s))ds],
where 𝔼_t,x[·] is the conditional expectation 𝔼[·|ℱ_t] under X(t)=x. One can easily observe that the Bellman's principle of optimality (BPO) for the stochastic control problem (<ref>) is not available as the h and the g in (<ref>) depend on the current time t and the current state X(t). Mathematically, the time-and-state-varying objectives are against the flow property of Y in (<ref>) or the tower property of J with its probabilistic representation. As a result,
even if the agent can find an optimal control, denoted by α(·):=α(·;t,x), for the problem over [t,T] with any initial pair (t,x)∈[0,T]×ℝ^d, we can anticipate that at a later time point s∈(t,T],
due to the time-and-state-dependence of objectives,
J(s,X(s);α(·;t,x)|_[s,T])>J(s,X(s);α(·;s,X(s))) almost surely,
where X(·) is the adapted solution to (<ref>) with α(·) and (t,x) fixed. Hence, there is an incentive to deviate from the optimal control policy derived at (t,x), α(·;t,x), as time evolves. In other words, without BPO, we can no longer leverage that “local optimal is global optimal" to resolve the sequential decision-making problems. Such problems are called TIC problems.
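To fix ideas, consider the classical hyperbolic-discounting special case of (<ref>)-(<ref>) mentioned above, in which the generator does not depend on (X(t),Y,Z) and the cost functional reads

J(t,x;α(·)) = 𝔼_t,x[∫^T_t 1/(1+k(s-t)) c(s,X(s),α(s))ds + 1/(1+k(T-t)) g(X(T))],  k>0,

i.e. h(t,s,x,y,a,·,·)=c(s,y,a)/(1+k(s-t)) depends on the initial time t only through the elapsed time s-t. Unless the discounting is exponential, the discount factors seen from two different initial times are not proportional to each other, so the conditional expectation cannot be peeled off recursively and a deviation of the form (<ref>) is to be expected; an initial-state-dependent ingredient, such as a terminal cost g(t,x,X(T)) measuring X(T) relative to the initial state x, produces the same effect through x.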
TIC issues are common in financial and behavioral economic problems, ranging from mean-variance analysis <cit.> to hyperbolic discounting <cit.> and cumulative prospect theory <cit.>. The key to addressing the TIC problems is to clarify the dynamic optimality and the properties of the optimal policy. There are three prevailing approaches to TIC problems. First, we may consider only pre-commitment policies, i.e., α^pc(·):={α(s;0,x_0)}_s∈[0,T]; see <cit.>. The pre-committers care about only the performance anticipated at time t=0 and thus ignore the TIC issue (<ref>). Second, we may consider dynamically optimal policy that consists of the instantaneous strategies at time t for problems indexed by t∈[0,T], i.e., repeated pre-commitment strategies α^rpc(·):={α(t;t,X(t))}_t∈[0,T]; see <cit.>. These two approaches require the knowledge of α(·;t,x) for any (t,x)∈[0,T]×ℝ^d but it may not be available for general TIC problems. The policy α^pc(·) has clear objective but suffers from the TIC issue (<ref>), while the policy α^rpc(·) is time-consistent (free of (<ref>)) but its constitution does not lead to any dynamic optimality. The third approach, as we advocate, is an equilibrium subgame approach that finds the locally optimal controls under the constraint of no events like (<ref>) occurring. In what follows, we define the equilibrium strategies in greater details.
A continuous map e:[0,T]×ℝ^d→ U is called a closed-loop equilibrium strategy of the TIC stochastic control problem (<ref>) if the following two conditions hold:
1. For any x∈ℝ^d, the dynamics equation
{[ dX(s) = b(s,X(s),e(s,X(s)))ds+σ(s,X(s),e(s,X(s)))dW(s), s∈[0,T],; X(0) = x, ].
admits a unique solution X(·);
2. For each (t,a)∈[0,T)× U, let X^ϵ(·) satisfy
{[ dX^ϵ(s) = b(s,X^ϵ(s),a)ds+σ(s,X^ϵ(s),a)dW(s), s∈[t,t+ϵ),; dX^ϵ(s) = b(s,X^ϵ(s),e(s,X^ϵ(s)))ds+σ(s,X^ϵ(s),e(s,X^ϵ(s)))dW(s),; s∈[t+ϵ,T],; X^ϵ(t) = X(t), ].
then the following inequality holds:
lim_ϵ↓ 0 [J(t,X(t);a·1_[t,t+ϵ)⊕e)-J(t,X(t);e)]/ϵ≥ 0,
where
(a·1_[t,t+ϵ)⊕e)(s)={[ a, s∈[t,t+ϵ),; e(s,X^ϵ(s)), s∈[t+ϵ,T]. ].
Furthermore, {X(s)}_s∈[0,T] and V(t,X(t)):= J(t,X(t);{e(τ,X(τ))}_τ∈[t,T]) are called the equilibrium state process and the equilibrium value function, respectively.
Condition (<ref>) characterizes a subgame perfect equilibrium (SPE) solution to a game played by the incarnations of the agent at different time points. Hence, the closed-loop equilibrium strategy achieves local optimality in a proper sense. Considering the violation of the BPO and the deviation of optimal controls as time evolves in TIC problems, such a locally optimal control revives the recursive relationship between two sub-problems initiating at (t,X(t)) and (t+ϵ,X^ϵ(t+ϵ)), respectively. As a result, the closed-loop equilibrium strategy is time-consistent and free of (<ref>). The game-theoretic interpretation of the closed-loop equilibrium strategy above can be referred to <cit.> or the highlights of the multi-person differential game approach of <cit.> below.
§.§ Equilibrium HJB Equations
The equilibrium solution, referring to equilibrium strategy, equilibrium state process, and equilibrium value function in Definition <ref>, was first elaborated with game-theoretic concept and characterized with extended dynamic programming and extended HJB systems in <cit.>. Recently, <cit.> point out that the derivation in <cit.> is not rigorous in the sense that the solvability of the state process with the ϵ-policy (<ref>) was not properly addressed and the extended dynamic programming equation was introduced heuristically. Though the framework of <cit.> can also accommodate the nonlinearity of conditional expectations as a TIC source, we follow a more rigorous mathematical framework of <cit.> for our problem (<ref>). For which, both frameworks are indeed boiled down to solving the same (equilibrium) HJB equation.
While all adopt a multi-person differential game approach, <cit.> consider only time-varying objectives and a more recent work of <cit.> considers time-and-state-varying objectives. Here, we do not repeat the mathematical derivation of the equilibrium HJB equation while we recap its main idea as follows: with a partition of the time interval,
a family of approximate equilibrium strategy and their associated value functions are first defined and characterized piecewise in a small time interval; then, they are stitched together to form time-consistent solutions with a dynamic structure; finally, we look for the continuous-time limits of the solutions by sending the mesh size of the partition to zero.
More specifically, we consider a partition 𝒫:0=t_0<t_1<⋯<t_N-1<t_N=T of the whole time interval [0,T], which yields N subintervals: [t_0,t_1),…,[t_N-1,t_N). The TIC stochastic control problem is then regarded as a N-person stochastic differential game, where the players are labelled from 1 through N and each Player k for k=1,…,N takes over the dynamics system and controls the system over [t_k-1,t_k) by selecting her own admissible control α^k(·)∈𝒰[t_k-1,t_k). The individual stochastic control problems for N players are connected with the sophisticated cost functionals, where the cost functional for Player k is determined by the solution to (<ref>) over [t_k,t_k+1] (t=t_k) with the action of Player k+1, α^k+1(·)∈𝒰[t_k,t_k+1), and the sophisticated cost functional Y(t_k+1)=Y(t_k+1;t_k+1,X(t_k+1),α^k+1(·)). Hence, the sophisticated cost functionals and players' actions are determined backwardly, while we note that for each k, Player k solves a conventional (time-consistent) stochastic control problem over [t_k-1,t_k) according to her own (t_k-1,X(t_k-1))-dependent preference. The agent/players at different time subintervals with sophisticated cost functionals are called sophisticated agent, who thinks globally and acts locally.
Suppose that all players can identify the optimal controls for their own problems (that will be converted into assumption (<ref>) below), we can then
construct a 𝒫-dependent equilibrium strategy and the corresponding equilibrium value function of the N-person game. Subsequently, by sending the mesh size of the partition, i.e. |𝒫|:=max_1≤ i≤ N(t_i-t_i-1), to zero, we can obtain the continuous-time limits of the equilibrium strategy e(s,y)=Ψ(s,y) and the equilibrium value function V(s,y)=u(s,s,y,y) of the original TIC optimal control problem (<ref>) for (s,y)∈[0,T]×ℝ^d, where u(t,s,x,y) solves the following parabolic PDE (<ref>) with an initial-dynamic space-time structure:
{[ u_s(t,s,x,y) + ℋ(t,s,x,y,Ψ(s,y),u(t,s,x,y),u_y(t,s,x,y),u_yy(t,s,x,y)) = 0,; (t,s,x,y)∈∇[0,T]×ℝ^d;d,; u(t,T,x,y) = g(t,x,y), (t,x,y)∈[0,T]×ℝ^d;d ].
with the Hamiltonian given by
ℋ(t,s,x,y,a,u,p,q) =1/2tr[q·(σσ^⊤)(s,y,a)]+p^⊤ b(s,y,a)
+h(t,s,x,y,a,u,p^⊤·σ(s,y,a))
for (t,s,x,y,a,u,p,q)∈∇[0,T]×ℝ^d;d× U×ℝ×ℝ^d×𝕊^d, in which the superscript ⊤ denotes the transpose of vectors or matrices and 𝕊^d⊆ℝ^d× d denotes the set of all d× d-symmetric matrices and
Ψ(s,y)=ψ(s,s,y,y,u(s,s,y,y),u_y(s,s,x,y)|_x=y,u_yy(s,s,x,y)|_x=y)
for (s,y)∈[0,T]×ℝ^d, in which we assume that there exists a map ψ:∇[0,T]×ℝ^d;d×ℝ×ℝ^d×𝕊^d→ U with all needed smoothness and boundedness of its derivatives such that
ψ(t,s,x,y,u,p,q)∈{a∈ U:ℋ(t,s,x,y,a,u,p,q)=min_a∈ Uℋ(t,s,x,y,a,u,p,q)}
holds for all (t,s,x,y,u,p,q)∈∇[0,T]×ℝ^d;d×ℝ×ℝ^d×𝕊^d.
Equation (<ref>) is called an equilibrium HJB equation. The initial-dynamic space-time structure refers to the arguments of the unknown function involving both initial space-time variable (t,x) and dynamic space-time variable (s,y). Note that (t,x) cannot be viewed as parameters since Ψ(s,y) in the equation involves the nonlocal terms u(s,s,y,y),u_y(s,s,x,y)|_x=y,u_yy(s,s,x,y)|_x=y and thus the equilibrium HJB equation (<ref>) is fully nonlinear and nonlocal.
Heuristically speaking, for the sophisticated players, “thinking globally” induces the u-terms evaluated at (t,s,x,y) while “acting locally” implies the u-terms evaluated at (s,s,y,y) in the equilibrium HJB equation (<ref>). In the case that the equilibrium HJB equation (<ref>) is well-posed, the formal convergence (as the mesh size |𝒫| goes to zero) will become rigorous. Moreover, by the similar arguments in <cit.>, one can prove that (e,V)(s,y):=(Ψ(s,y),u(s,s,y,y)) is exactly the closed-loop equilibrium strategy and the corresponding equilibrium value function in the sense of Definition <ref>. Therefore, the well-posedness of the equilibrium HJB equation (<ref>) is a crucial piece of puzzle in the frameworks of addressing TIC stochastic control problems.
<cit.> prove the well-posedness (existence and uniqueness of the solution) of (<ref>) when it is free of u_yy(s,s,x,y)|_x=y. Their relaxation, however, prohibits the controls from entering into the diffusion of the state process, i.e., σ(s,y,a)≡σ(s,y), which poses a limitation in formulating the TIC stochastic control problem. Recently, <cit.> proves the local well-posedness of (<ref>) when it is free of x. Subsequently, <cit.> extends the results to a multi-dimensional setting (vector function u that facilitates the formulation of TIC stochastic differential games) and global well-posedness (where the linear and quasilinear cases are complete while the fully nonlinear case requires a sharp a-priori estimate). <cit.> significantly advance the investigation of the well-posedness of (<ref>) but since they exploited the time ordering property, the objectives are limited to the time-varying ones. In fact, the mathematical treatment of the initial-state-dependence in (<ref>) is scarce in the literature, though <cit.> has mentioned that one can establish parallel results following <cit.>'s treatment on the initial-time-dependence. On top of the challenges due to the unbounded space for x and y, the desired involvement of u_yy(s,s,x,y)|_x=y in (<ref>) brings the difficulty of proving its well-posedness to another level.
By the change of time variables in Theorem <ref>, (<ref>) can be reformulated as an initial value problem in a forward form of (<ref>), which can ease the notational burden compared to the terminal value problem. Moreover, the order relation t≤ s between the initial time point t and the running time s in (<ref>) can be removed in the study of nonlocal PDEs, since it is natural to extend the solutions of (<ref>) from the triangular time zone ∇[0,T] to a rectangular one [0,T]^2. In the next two sections, we will establish the well-posedness of (<ref>).
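Concretely, one convenient change of variables (a sketch of the reformulation just mentioned) is the time reversal v(t,s,x,y):=u(T-t,T-s,x,y). Since both time arguments are reversed simultaneously, diagonal terms are mapped to diagonal terms:

v_s(t,s,x,y) = -u_s(T-t,T-s,x,y),  (∂_I v)(s,s,x,y)|_x=y = (∂_I u)(T-s,T-s,x,y)|_x=y,  v(t,0,x,y) = u(T-t,T,x,y) = g(T-t,x,y),

so the terminal-value problem (<ref>) posed on ∇[0,T] becomes an initial-value problem of the form (<ref>) posed on Δ[0,T], and dropping the order constraint between the two time variables simply extends the unknown to the rectangle [0,T]^2.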
§ NONLOCAL LINEAR PDES
To establish the well-posedness of the fully nonlinear PDE (<ref>), we first study its linearized version (<ref>).
While the nonlocal linear PDE is simpler by right, it plays a crucial role in the study of nonlocal fully nonlinear PDEs with a linearization method in Section <ref>.
§.§ Function Spaces and Nonlocal Differential Operators
We first introduce some suitable norms, spaces, and differential operators for the study of the nonlocal differential equations arising from time inconsistency.
Given generic real numbers a,b∈[0,T] and a≤ b, we denote by C([a,b]×ℝ^d;ℝ) the set of all the continuous and bounded real functions in [a,b]×ℝ^d endowed with the supremum norm |·|^(0)_[a,b]×ℝ^d (for short |·|^(0)). In the studies of local parabolic equations of second order <cit.>, we usually introduce the spaces of Hölder continuous functions to ensure the solvability of PDEs in the classical solution sense. Let C^l/2,l([a,b]×ℝ^d;ℝ) be the Banach space of the functions φ(s,y) such that φ(s,y) is continuous in [a,b]×ℝ^d, its derivatives of the form D^i_sD^j_yφ for 2i+j<l exist, and it has a finite norm defined by
|φ|^(l)_[a,b]×ℝ^d := ∑_k≤[l]∑_2i+j=k|D^i_sD^j_yφ|^(0)+∑_2i+j=[l]⟨ D^i_sD^j_yφ⟩^(l-[l])_y
+∑_0<l-2i-j<2⟨ D^i_sD^j_yφ⟩^(l-2i-j/2)_s,
where l is a non-integer positive number with [l] being its integer part and for α∈(0,1),
⟨φ⟩^(α)_y:=sup_{a≤ s≤ b, 0<|y-y^'|≤ 1}|φ(s,y)-φ(s,y^')|/|y-y^'|^α,  ⟨φ⟩^(α)_s:=sup_{a≤ s< s^'≤ b, y∈ℝ^d}|φ(s,y)-φ(s^',y)|/|s-s^'|^α.
Wherever no confusion arises, we do not distinguish between |φ|^(l)_[a,b]×ℝ^d and |φ|^(l)_ℝ^d for functions φ(y) independent of s.
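Although the seminorms above are purely analytical objects, a quick numerical illustration may help fix what they measure. The following short Python sketch (the grids, the sample function, and the truncation |y-y'|≤ 1 are assumptions of the illustration, not part of the analysis) approximates ⟨φ⟩^(α)_y on a finite grid:

import numpy as np

def holder_seminorm_y(phi, s_grid, y_grid, alpha):
    """Discrete approximation of <phi>_y^(alpha): sup over s and y != y'
    of |phi(s,y)-phi(s,y')| / |y-y'|**alpha, restricted to |y-y'| <= 1."""
    vals = np.array([[phi(s, y) for y in y_grid] for s in s_grid])
    best = 0.0
    for i, y in enumerate(y_grid):
        for j, yp in enumerate(y_grid):
            d = abs(y - yp)
            if 0.0 < d <= 1.0:
                best = max(best, np.max(np.abs(vals[:, i] - vals[:, j])) / d ** alpha)
    return best

s_grid = np.linspace(0.0, 1.0, 11)
y_grid = np.linspace(-2.0, 2.0, 81)
# phi(s,y) = sqrt(|y|) is y-Hölder continuous of order alpha = 1/2
print(holder_seminorm_y(lambda s, y: np.sqrt(abs(y)), s_grid, y_grid, 0.5))

For φ(s,y)=√(|y|) and α=1/2 the printed value is close to 1, the exact y-Hölder seminorm of order 1/2 of this function.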
Considering the pair of space-time arguments (i.e. (t,x) and (s,y)) of solutions of nonlocal PDEs, we can similarly introduce the space C([a,b]^2×ℝ^d;d;ℝ). For a real-valued function ψ(t,s,x,y) and a vector-valued Ψ=(ψ^1,ψ^2,⋯,ψ^m)(t,s,x,y), we introduce the following norms:
|Ψ|^(l)_[a,b] := ∑_m|ψ^m(t,·,x,·)|^(l)_[a,b], for any fixed (t,x)∈[a,b]×ℝ^d,
[Ψ]^(l)_[a,b] := sup_(t,x)∈[a,b]×ℝ^d{∑_m|ψ^m(t,·,x,·)|^(l)_[a,b]},
and
‖ψ‖^(l)_[a,b] := sup_(t,x)∈[a,b]×ℝ^d{|(ψ,ψ_t,ψ_x,ψ_xx)(t,·,x,·)|^(l)_[a,b]}
:=sup_(t,x)∈[a,b]×ℝ^d{|ψ(t,·,x,·)|^(l)_[a,b]+|ψ_t(t,·,x,·)|^(l)_[a,b]+|ψ_x(t,·,x,·)|^(l)_[a,b]
+|ψ_xx(t,·,x,·)|^(l)_[a,b]},
which induces the following Banach space
Ω^(l)_[a,b]:={ψ∈ C([a,b]^2×ℝ^d;d;ℝ):
‖ψ‖^(l)_[a,b]<∞}.
To ease the notational burden, we introduce the following vector functions:
ψ(t,s,x,y)=[ ψ; ψ_t; ψ_x; ψ_xx ], ψ(t,s,x,y)=[ ψ; ψ_t; ψ_x ], ψ(t,s,x,y)=[ ψ_t; ψ_x; ψ_xx ].
Hence, ‖ψ‖^(l)_[a,b] can be rewritten as [ψ]^(l)_[a,b].
Now, we turn to regulate the nonlocal linear differential operator:
Lu:= u_s(t,s,x,y) - ∑_|I|≤ 2A^I(t,s,x,y)∂_I u(t,s,x,y)
+ ∑_|I|≤ 2B^I(t,s,x,y)∂_I u(s,s,x,y)|_x=y
where I=(I_1,I_2,⋯,I_d) is a multi-index of non-negative integers, |I|=I_1+I_2+⋯+I_d. The operator ∂_I is interpreted as the partial derivative ∂^|I|/∂ y^I_1_1∂ y^I_2_2⋯∂ y^I_d_d of order I_i in y_i and u_s as the derivative of u in s. Moreover, for each I, the coefficients A^I, B^I∈Ω^(α)_[0,T] satisfy the uniform ellipticity conditions, i.e. there exists some λ>0 such that
∑_|I|= 2A^I(t,s,x,y)ξ^I ≥ λ|ξ|^2,
∑_|I|= 2(A^I(t,s,x,y)+B^I(t,s,x,y))ξ^I ≥ λ|ξ|^2,
for any (t,s,x,y)∈[0,T]^2×ℝ^d;d and ξ∈ℝ^d, where ξ^I=ξ^I_1_1ξ^I_2_2⋯ξ^I_d_d. It is noteworthy that the nonlocal operator (<ref>) and the uniformly ellipticity conditions (<ref>)-(<ref>) reduce to the classical counterparts when B^I=0.
With the introduction of (<ref>) and its regularity conditions, we study the nonlocal linear PDE of the form
{[ Lu(t,s,x,y) = f(t,s,x,y),; u(t,0,x,y) = g(t,x,y), t,s∈[0,T], x,y∈ℝ^d. ].
where the non-homogeneous term f∈Ω^(α)_[0,T] and the initial condition g∈Ω^(2+α)_[0,T]. Roughly speaking, the existence, uniqueness, and stability of solutions of nonlocal linear PDE (<ref>) corresponds to the surjection, injection, and continuity (boundedness) properties of nonlocal operator (<ref>) within Ω^(l)_[a,b], respectively. In what follows, we will find that (<ref>) is well-posed in Ω^(l)_[0,T] under some mild conditions. Consequently, the nonlocal linear operator L:{u∈Ω^(2+α)_[0,T]:u|_s=0=g}→Ω^(α)_[0,T] is bijective and continuous.
§.§ Schauder's Estimate of Solutions to Nonlocal Linear PDEs
We leverage the method of continuity to prove the global existence of nonlocal linear PDE (<ref>). To this end, we embed our original problem (<ref>) into a family of parameterized PDEs. Compared with studying directly the solvability of (<ref>), it is more practical and feasible to investigate a simplified problem in the family of PDEs. Subsequently, the method of continuity promises the solvability of (<ref>) from the simplified PDE. Though, the key transition between the two problems requires a parameter-independent Schauder's estimate of solutions of the family of PDEs.
Loosely speaking, the prior estimate not only controls the behaviour of the solutions and provides quantitative information on their regularity but also guarantees certain compactness of the set of all possible solutions. Such a compactness is necessary for the method of continuity in the study of nonlocal linear PDEs with variable coefficients and for fixed-point arguments in the fully nonlinear setting.
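For later reference, we record (as a sketch, in the form needed here) the standard functional-analytic principle behind the method of continuity: let X be a Banach space, Y a normed space, L_0, L_1: X→Y bounded linear operators, and L_τ:=(1-τ)L_0+τ L_1 for τ∈[0,1]. If there exists a constant C, independent of τ, such that

‖ x‖_X ≤ C‖ L_τ x‖_Y  for all x∈ X and all τ∈[0,1],

then L_1 maps X onto Y if and only if L_0 does (and, by the estimate, every L_τ is injective with bounded inverse). In our setting X={u∈Ω^(2+α)_[0,T]: u|_s=0=0}, Y=Ω^(α)_[0,T], the uniform estimate is supplied by the Schauder prior estimate established below, since the coefficients of L_τ are convex combinations of those of L_0 and L_1 and therefore satisfy the same regularity and ellipticity assumptions with bounds uniform in τ, while the required solvability of the simplified operator L_0 is proved separately.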
In what follows, we establish the prior estimate of solutions to nonlocal linear PDEs (<ref>). First of all, let us rewrite (<ref>) as the following form
{[ u_s(t,s,x,y) = ∑_|I|≤ 2A^I(t,s,x,y)∂_I u(t,s,x,y); +∑_|I|≤ 2B^I(t,s,x,y)∂_I u(s,s,x,y)|_x=y+f(t,s,x,y),; u(t,0,x,y) = g(t,x,y), t,s∈[0,T], x,y∈ℝ^d. ].
Suppose that (<ref>) admits a solution u(t,s,x,y)∈Ω^(2+α)_[0,T]. Then, for any (t,x)≐(x_0,x_1,⋯,x_d)^⊤∈[0,T]×ℝ^d, we have
{[ (∂ u/∂ x_i)_s(t,s,x,y) = ∑_|I|≤ 2A^I∂_I (∂ u/∂ x_i)(t,s,x,y)+∑_|I|≤ 2A^I_x_i∂_I u(t,s,x,y); +∑_|I|≤ 2B^I_x_i∂_I u(s,s,x,y)|_x=y+f_x_i, i=0,1,…,d,; (∂ u/∂ x_i)(t,0,x,y) = g_x_i(t,x,y), t,s∈[0,T], x,y∈ℝ^d, ].
where the dependence of A^I, B^I, and f on their arguments (t,s,x,y) is suppressed here. Furthermore, we also have
{[ (∂^2 u/∂ x_i∂ x_j)_s(t,s,x,y) = ∑_|I|≤ 2A^I∂_I (∂^2 u/∂ x_i ∂ x_j)(t,s,x,y)+∑_|I|≤ 2A^I_x_j∂_I (∂ u/∂ x_i)(t,s,x,y); +∑_|I|≤ 2A^I_x_ix_j∂_I u(t,s,x,y)+∑_|I|≤ 2A^I_x_i∂_I (∂ u/∂ x_j)(t,s,x,y); +∑_|I|≤ 2B^I_x_ix_j∂_I u(s,s,x,y)|_x=y+f_x_ix_j, i,j=1,…,d,; (∂^2 u/∂ x_i∂ x_j)(t,0,x,y) = g_x_ix_j(t,x,y), t,s∈[0,T], x,y∈ℝ^d. ].
To simplify, (<ref>)-(<ref>) can be reorganized in a compact way as a parabolic system for a vector-valued function u^⊤=(u,∂ u/∂ t,∂ u/∂ x,∂^2 u/∂ x∂ x)^⊤(t,s,x,y):
{[ u^⊤_s(t,s,x,y) = ∑_|I|≤ 2P^I∂_Iu^⊤(t,s,x,y) + ∑_|I|≤ 2(B^I)^⊤∂_I u(s,s,x,y)|_x=y + f^⊤,; u^⊤(t,0,x,y) = g^⊤(t,x,y), t,s∈[0,T], x,y∈ℝ^d, ].
where each of P^I=P^I(t,s,x,y) for |I|≤ 2 is a lower-triangular matrix, whose diagonal elements are exactly A^I while the off-diagonal elements do not matter the subsequent analyses. Moreover, B^I, f, and g are all vector-valued, consisting of themselves and their derivatives in t and x. Thanks to the structure of such a matrix P^I, the existence and regularity of the fundamental solution of the parabolic operator Du:=u_s-∑ P^I∂_Iu is promised by the uniformly ellipticity condition (<ref>) of A^I; see <cit.>.
Next, we take advantage of the integral representations below to replace all diagonal terms ∂_I u(s,s,x,y)|_x=y in (<ref>)-(<ref>) with a relatively manageable ∂_I u(t,s,x,y).
∂_I u(t,s,x,y)-∂_I u(s,s,x,y)|_x=y
= ∫^t_s∂_I(∂ u/∂ t)(θ_t,s,x,y)dθ_t+∫^x_1_y_1∂_I(∂ u/∂ x_1)(s,s,θ_1,x_2,⋯,x_d,y)dθ_1
+∫^x_2_y_2∂_I(∂ u/∂ x_2)(s,s,x_1,θ_2,x_3,⋯,x_d,y)|_x_1=y_1 dθ_2
+⋯
+∫^x_d-1_y_d-1∂_I(∂ u/∂ x_d-1)(s,s,x_1,⋯,x_d-2,θ_d-1,x_d,y)|_{x_i=y_i, i=1,⋯,d-2} dθ_d-1
+∫^x_d_y_d∂_I(∂ u/∂ x_d)(s,s,x_1,⋯,x_d-1,θ_d,y)|_{x_i=y_i, i=1,⋯,d-1} dθ_d
≐ -ℐ^I[∂ u/∂ t,∂ u/∂ x](t,s,x,y)
Note that the (d+1)-dimensional (∂ u/∂ t,∂ u/∂ x)^⊤ constitutes a conservative vector field and the potential function of which is u(t,s,x,y). Hence, ∂_I u(s,s,x,y)|_x=y has various integral representations when we alter the integral paths from (s,y) to (t,x).
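For instance, in the one-dimensional case d=1 the identity above takes the simple form

∂_I u(t,s,x,y) - ∂_I u(s,s,x,y)|_x=y = ∫^t_s ∂_I(∂ u/∂ t)(θ,s,x,y)dθ + ∫^x_y ∂_I(∂ u/∂ x)(s,s,θ,y)dθ,

i.e. every diagonal value is recovered from the value at the off-diagonal point (t,s,x,y) minus integrals of ∂ u/∂ t and ∂ u/∂ x along a path from (s,y) to (t,x); this is precisely what allows the diagonal terms to be traded for lower-order nonlocal integral terms in the systems below.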
Thanks to the integral representation (<ref>), the equations (<ref>) and (<ref>) can be rewritten as the following coupled system of PDEs:
{[ u_s(t,s,x,y) = ∑_|I|≤ 2(A^I+B^I)∂_I u(t,s,x,y); +∑_|I|≤ 2B^Iℐ^I[∂ u/∂ t,∂ u/∂ x](t,s,x,y) + f,; (∂ u/∂ x_i)_s(t,s,x,y) = ∑_|I|≤ 2A^I∂_I (∂ u/∂ x_i)(t,s,x,y)+∑_|I|≤ 2(A^I_x_i+B^I_x_i)∂_I u(t,s,x,y); +∑_|I|≤ 2B^I_x_iℐ^I[∂ u/∂ t,∂ u/∂ x](t,s,x,y)+f_x_i, i=0,1,2,⋯,d,; (u,∂ u/∂ x_i)(t,0,x,y) = (g,g_x_i)(t,x,y), t,s∈[0,T], x,y∈ℝ^d, ].
which is equivalent to a parabolic system for u^⊤=(u,∂ u/∂ t,∂ u/∂ x)^⊤(t,s,x,y):
{[ u^⊤_s(t,s,x,y) = ∑_|I|≤ 2Q^I∂_Iu^⊤(t,s,x,y)+∑_|I|≤ 2B^⊤ℐ^I[∂ u/∂ t,∂ u/∂ x](t,s,x,y) + f^⊤,; u^⊤(t,0,x,y) = g^⊤(t,x,y), t,s∈[0,T], x,y∈ℝ^d, ].
where each Q^I for |I|≤ 2 is a lower-triangular matrix, whose off-diagonal elements do not matter the subsequent analyses while the diagonal elements are either A^I or A^I+B^I; specifically, the coefficients in front of ∂_Iu(t,s,x,y) are A^I+B^I while all other coefficients related to ∂_Iu_x_i(t,s,x,y) are A^I. Consequently, by the classical theory of PDE systems <cit.>, the differential operator D^' u:=u_s-∑ Q^I∂_Iu admits a fundamental solution Z(s,τ,y,ξ;t,x), which is ensured by the uniformly ellipticity conditions (<ref>)-(<ref>) of A^I and A^I+B^I.
After showing a variety of equations/systems, we are ready to prove the Schauder prior estimate of solutions to nonlocal linear PDE (<ref>).
Suppose that u is a solution of (<ref>) (i.e. (<ref>)) in Ω^(2+α)_[0,T]. Then we have
* u and u solve (<ref>) and (<ref>) on [0,T]^2×ℝ^d;d, respectively;
* there exists a constant C depending only on λ, α, d, T, ‖ A^I‖^(α)_[0,T], and ‖ B^I‖^(α)_[0,T] such that
‖ u‖^(2+α)_[0,T]≤ C(‖ f‖^(α)_[0,T]+‖ g‖^(2+α)_[0,T]).
The first claim is straightforward as it follows by our introductions of the systems (<ref>) and (<ref>) before. Next, we focus on the proof of the second claim.
We first show that the inequality (<ref>) holds for a suitably small δ∈[0,T] and then the conclusion can be extended to the case of δ=T. According to the classical theory of parabolic system <cit.>, for any fixed (t,x)∈[0,δ]×ℝ^d and system (<ref>), there exists a constant C>0 such that
|u(t,s,x,y)|^(2+α)_(s,y)∈[0,δ]×ℝ^d
≤ C(∑_|I|≤ 2|∂_I u(s,s,x,y)|_x=y|^(α)_(s,y)∈[0,δ]×ℝ^d+|f(t,s,x,y)|^(α)_(s,y)∈[0,δ]×ℝ^d
+|g(t,s,x,y)|^(2+α)_(s,y)∈[0,δ]×ℝ^d).
Next, we estimate |∂_I u(s,s,x,y)|_x=y|^(α)_(s,y)∈[0,δ]×ℝ^d for |I|=0,1,2. In addition to the estimates of |∂_I u(s,s,x,y)|_x=y|^(0)_(s,y)∈[0,δ]×ℝ^d, we need to evaluate the difference between ∂_I u(s,s,x,y)|_x=y and ∂_I u(s^',s^',x,y^')|_x=y^' for any 0≤ s<s^'≤δ and y,y^'∈ℝ^d with 0<|y-y^'|≤ 1. It is obvious that the evaluation requires not only ∂_Iu but also the partial derivatives ∂_Iu_t and ∂_Iu_x. All of them are characterized by (<ref>) for u. As usual (without loss of generality), we assume that g=0; otherwise, we consider L'v:=f-Lg with v|_s=0=0 noting that the problems of (<ref>), (<ref>), and (<ref>) are all of linear-type. By the classical theory of parabolic systems, the vector-valued classical solution u of (<ref>) can be represented as
u(t,s,x,y)
=∫^s_0dτ∫_ℝ^dZ(s,τ,y,ξ;t,x)∑_|I|≤ 2B^I(t,τ,x,ξ)ℐ^I[∂ u/∂ t,∂ u/∂ x](t,τ,x,ξ)dξ
+ ∫^s_0dτ∫_ℝ^dZf(t,τ,x,ξ)dξ
where Z is the fundamental solution for the differential operator D^' u:=u_s-∑ Q^I∂_Iu. Since B, f∈Ω^(α)_[0,T] and u∈Ω^(2+α)_[0,T], its first-/second derivatives with respect to y, i.e. |I|=1,2, satisfy
∂_Iu(t,s,x,y)
= ∫^s_0dτ∫_ℝ^d∂_IZ∑_|I|≤ 2[(B^Iℐ^I)(t,τ,x,ξ)-(B^Iℐ^I)(t,τ,x,y)]dξ
+ ∫^s_0(∫_ℝ^d∂_IZdξ)∑_|I|≤ 2(B^Iℐ^I)(t,τ,x,y)dτ
+ ∫^s_0dτ∫_ℝ^d∂_I Z[f(t,τ,x,ξ)-f(t,τ,x,y)]dξ
+∫^s_0(∫_ℝ^d∂_IZdξ)f(t,τ,x,y)dτ
Generally speaking, in order to obtain |∂_I u(s,s,x,y)|_x=y|^(α)_(s,y)∈[0,δ]×ℝ^d for |I|=0,1,2, we need to evaluate not only the absolute value |∂_I u(s,s,x,y)|_x=y|^(0)_(s,y)∈[0,δ]×ℝ^d but also the difference
|∂_I u(s^',s^',x,y^')|_x=y^'-∂_I u(s,s,x,y)|_x=y|
≤ |∂_I u(s^',s^',x,y^')|_x=y^'-∂_I u(s,s^',x,y^')|_x=y|+|∂_I u(s,s^',x,y^')|_x=y-∂_I u(s,s,x,y)|_x=y|
≤ |∂_I(∂ u/∂ t,∂ u/∂ x)(η_t,s^',x,y^')|_x=η_x|(|s^'-s|+|y^'-y|)
+ [|∂_I u(s,s^',x,y^')|_x=y-∂_I u(s,s,x,y)|_x=y|/(|s^'-s|^α/2+|y^'-y|^α)](|s^'-s|^α/2+|y^'-y|^α)
for any 0≤ s<s^'≤δ and y, y^'∈ℝ^d with 0<|y-y^'|≤ 1, where η=(η_t,η_x)=(1-c)(s,y)+c(s^',y^') for some c∈(0,1) for the mean value theorem in several variables. We denote by ρ the parabolic distance √((s^'-s)+|y-y^'|^2) between (s,y) and (s^',y^'). Hence, we need to estimate the eight terms E_i (i=1,2,⋯,8) in Table <ref>. The estimation, however, appears to be standard while lengthy and thus we place it in Appendix <ref>.
The estimates of E_i-terms and (<ref>) imply that for a suitably small δ∈[0,T] and any fixed (t,x)∈[0,δ]×ℝ^d,
|u(t,s,x,y)|^(2+α)_(s,y)∈[0,δ]×ℝ^d ≤1/2[u]^(2+α)_[0,δ]
+C(‖ f‖^(α)_[0,T]+‖ g‖^(2+α)_[0,T])
≤1/2[u]^(2+α)_[0,δ]
+C(‖ f‖^(α)_[0,T]+‖ g‖^(2+α)_[0,T]).
Thanks to the integral representation of ℐ^I[u_t,u_x] in (<ref>), we can set the coefficient 1/2 in front of [u]^(2+α)_[0,δ] in (<ref>)
by choosing a small enough δ. Consequently, we have
‖ u‖^(2+α)_[0,δ]≤ C(‖ f‖^(α)_[0,δ]+‖ g‖^(2+α)_[0,δ]).
To complete the proof, we ought to show that the small δ in (<ref>) can be extended to an arbitrarily large T<∞. It can simply follow the proof of Theorem 3.3 in <cit.>. Essentially, since we can obtain prior estimates similar to (<ref>) in any subinterval, we can extend the horizon by solving the same PDE with the initial condition updated by the upper bound of the current interval. It follows that (<ref>) for any finite T holds as well.
§.§ Global Well-Posedness of Nonlocal Linear PDEs
By the Schauder prior estimate for the solutions to (<ref>) in Ω^(2+α)_[0,T] in Theorem <ref>, we apply the method of continuity to prove the global well-posedness of (<ref>). To this end, we ought to show the global solvability of a simplied version of (<ref>) with constant coefficients and (t,x)-independent variable coefficients:
{[ L_0u(t,s,x,y) = f(t,s,x,y),; u(t,0,x,y) = g(t,x,y), t,s∈[0,T], x,y∈ℝ^d ].
with a nonlocal parabolic differential operator of the form
L_0u:= u_s(t,s,x,y) - ∑_|I|≤ 2a^I(s,y)∂_I u(t,s,x,y) + ∑_|I|≤ 2b^I(s,y)∂_I u(s,s,x,y)|_x=y
where both a^I and b^I belong to Ω^(α)_[0,T] and satisfy the uniform ellipticity conditions (<ref>)-(<ref>). We aim to establish the global existence, uniqueness, and stability of solutions to (<ref>), from which we make use of the method of continuity and the Schauder prior estimate to transfer the well-posedness results to (<ref>).
If f∈Ω^(α)_[0,T] and g∈Ω^(2+α)_[0,T], then the simplified nonlocal linear PDE (<ref>) admits a unique solution in Ω^(2+α)_[0,T].
In order to show the global existence of solutions of (<ref>) in Ω^(2+α)_[0,T], we directly construct a regular enough solution w for it by studying the following decoupled system (<ref>) of PDEs for the unknown vector-valued function W(t,s,x,y)=(w,w_0,w_1,⋯,w_d,w_11,w_12,⋯,w_dd)(t,s,x,y),
{[ W^⊤_s(t,s,x,y) = ∑_|I|≤ 2 Q^I(s,y)∂_IW^⊤(t,s,x,y); + ∑_|I|≤ 2(B^I,O)^⊤(s,y)ℐ^I[w_0,w_1,⋯,w_d](t,s,x,y) + f^⊤,; ; W^⊤(t,0,x,y) = g^⊤(t,x,y), t,s∈[0,T], x,y∈ℝ^d. ].
where O is (d^2+d+1)-dimensional zero vector and Q^I is a lower triangular matrix, whose main diagonal elements are a^I(s,y) or (a^I+b^I)(s,y). The coefficients in front of ∂_Iw is (a^I+b^I)(s,y) and other coefficients are all a^I(s,y). The construction of the system (<ref>) for W is inspired by (<ref>) and (<ref>).
Next, we will show that
1.) the system (<ref>) admits a unique classical solution W;
2.) (∂_Iw_0,∂_Iw_1,⋯,∂_Iw_d) of W is a conservative vector field, the potential function of which is just ∂_Iw. Furthermore, ∂_Iw_0=∂_Iw_t, ∂_Iw_i=∂_Iw_x_i, and ∂_Iw_ij=∂_Iw_x_ix_j for i,j=1,…,d and |I|=0,1,2;
3.) the first component w of W solves the simplified nonlocal linear PDE (<ref>);
4.) the estimate [W]^(2+α)_[0,T]<∞ holds such that w∈Ω^(2+α)_[0,T];
5.) the nonlocal PDE (<ref>) is solvable in Ω^(2+α)_[0,T].
1.) We are to prove that the system (<ref>) admits a unique solution W. Note that (<ref>) is a decoupled system and the PDEs of (w_0,w_1,⋯,w_d,w_11,⋯,w_dd) are all classical equations. Hence, by the classical PDE theory <cit.>, we can find a unique classical solution (w_0,w_1,⋯,w_dd) satisfying
sup_(t,x)∈[0,T]×ℝ^d|(w_0,w_1,⋯,w_dd)(t,s,x,y)|^(2+α)_(s,y)∈[0,T]×ℝ^d≤ C(‖ f‖^(α)_[0,T]+‖ g‖^(2+α)_[0,T])<∞.
Moreover, after solving for (w_0,w_1,⋯,w_d), the nonlocal term ℐ^I[w_0,w_1,⋯,w_d] in (<ref>) is known as well. Consequently, the nonlocal PDE of w reduces to a classical equation. Considering the boundedness of (w_0,w_1,⋯,w_d) and the integral structures of ℐ^I[w_0,w_1,⋯,w_d], there exists a unique classical solution w although it possibly increases with x, y→∞. We will show that w is bounded later. Now, we have shown that the decoupled system (<ref>) exists a unique classical solution W.
2.) We are to prove that (∂_Iw_0,∂_Iw_1,⋯,∂_Iw_d) of W is a conservative vector field, the potential function of which is just ∂_Iw. Here, we only consider the case |I|=0 while the other two cases for |I|=1,2 can be proved similarly. For (<ref>), it is clear that the solution (w_0,w_1,⋯,w_d) can be represented with a fundamental solution in an integral form
(w_0,w_1,⋯,w_d)(t,s,x,y)
=∫^s_0dτ∫_ℝ^dZ(s,τ,y,ξ)(f_t,f_x_1,⋯,f_x_d)(t,τ,x,ξ)dξ
+ ∫_ℝ^dZ(s,0,y,ξ)(g_t,g_x_1,⋯,g_x_d)(t,x,ξ)dξ,
where the real-valued fundamental solution Z is independent of (t,x) since a^I=a^I(s,y). Next, in order to show that it is a conservative vector field, we need to prove that a line integral of the vector field (w_0,w_1,⋯,w_d) is path-independent. Let us consider any two paths r_a(θ)=(t^a(θ),x^a(θ)) and r_b(θ)=(t^b(θ),x^b(θ)) connecting two fixed endpoints (t^',x^') and (t^'',x^''), both of which are parameterized by θ∈[0,1] such that r_a(0)=r_b(0)=(t^',x^') and r_a(1)=r_b(1)=(t^'',x^''). Then, we have
∫_r_a(w_0,w_1,⋯,w_d)· dr
= ∫^1_0(w_0,w_1,⋯,w_d)(t^a(θ),s,x^a(θ),y)·(d t^a(θ)/dθ,d x^a(θ)/dθ)^⊤ dθ
= ∫^s_0dτ∫_ℝ^dZ(s,τ,y,ξ) ∫^1_0(f_t,f_x)(t^a(θ),τ,x^a(θ),ξ)·(d t^a(θ)/dθ,d x^a(θ)/dθ)^⊤ dθ dξ
+ ∫_ℝ^dZ(s,0,y,ξ)∫^1_0(g_t,g_x)(t^a(θ),x^a(θ),ξ)·(d t^a(θ)/dθ,d x^a(θ)/dθ)^⊤ dθ dξ
= ∫^s_0dτ∫_ℝ^dZ(s,τ,y,ξ) (f(t^'',τ,x^'',ξ)-f(t^',τ,x^',ξ)) dξ
+ ∫_ℝ^dZ(s,0,y,ξ)(g(t^'',x^'',ξ)-g(t^',x^',ξ)) dξ
= ∫_r_b(w_0,w_1,⋯,w_d)· dr,
which shows that the choice of paths between two points does not change the value of the line integral. Hence, we obtain that (w_0,w_1,⋯,w_d) is a conservative vector field. Similarly, (∂_Iw_0,∂_Iw_1,⋯,∂_Iw_d) for |I|=1,2 is also a conservative vector field.
From the claims above, there exist some (continuously differentiable) scalar fields ϕ^I (i.e. real-valued functions) such that ∇_t,xϕ^I=(∂_Iw_0,∂_Iw_1,⋯,∂_Iw_d). Next, we will prove that ∂_Iw (|I|=0,1,2) are simply the corresponding potential functions, i.e. ϕ^I=∂_Iw. Since (∂_Iw_0,∂_Iw_1,⋯,∂_Iw_d) is a conservative vector field, we will show that the nonlocal term ℐ^I[w_0,w_1,⋯,w_d](t,s,x,y) satisfies the following properties:
∂ℐ^I[w_0,w_1,⋯,w_d](t,s,x,y)/∂ t =-∂_Iw_0(t,s,x,y),
∂ℐ^I[w_0,w_1,⋯,w_d](t,s,x,y)/∂ x_k =-∂_Iw_k(t,s,x,y)
for any k=1,2,⋯,d and |I|=0,1,2. From the definition (<ref>) of ℐ^I, the first equation of (<ref>) is clear. As for the second equation, we can rearrange the order of w_i in ℐ^I[w_0,w_1,⋯,w_d] such that the integral of w_i appears in the first position. Thanks to the property of path-independence, we have
-ℐ^I[w_0,w_1,⋯,w_d](t,s,x,y)
= ∫^t_s∂_Iw_0(θ_t,s,x,y)dθ_t+∫^x_1_y_1∂_Iw_1(s,s,θ_1,x_2,⋯,x_d,y)dθ_1
+∫^x_2_y_2∂_Iw_2(s,s,x_1,θ_2,⋯,x_d,y)|_x_1=y_1dθ_2
+∫^x_3_y_3∂_I w_3(s,s,x_1,x_2,θ_3,⋯,x_d,y)|_c x_1=y_1
x_2=y_2dθ_3
⋯+∫^x_d_y_d∂_Iw_d(s,s,x_1,x_2,⋯,x_d-1,θ_d,y)|_c x_i=y_i
i=1,2,⋯,d-1dθ_d
= ϕ^I(t,s,x,y)-ϕ^I(s,s,y,y)
= ∫^x_k_y_k∂_Iw_k(t,s,x_1,x_2,⋯,x_k-1,θ_k,x_k+1,⋯,x_d,y)dθ_k
+ ∫^t_s∂_Iw_0(θ_t,s,x,y)|_x_k=y_kdθ_t + ∫^x_1_y_1∂_I w_1(s,s,θ_1,x_2,⋯,x_d,y)|_x_k=y_kdθ_1
⋯ + ∫^x_k-1_y_k-1∂_Iw_k-1(s,s,x_1,⋯,x_k-2,θ_k-1,x_k,⋯,x_d,y)|_x_i=y_i, i=1,2,⋯,k-2; x_k=y_k dθ_k-1
+ ∫^x_k+1_y_k+1∂_Iw_k+1(s,s,x_1,⋯,x_k,θ_k+1,x_k+2,⋯,x_d,y)|_x_i=y_i, i=1,2,⋯,k dθ_k+1
⋯+∫^x_d_y_d∂_Iw_d(s,s,x_1,x_2,⋯,x_d-1,θ_d,y)|_x_i=y_i, i=1,2,⋯,d-1 dθ_d,
which directly indicates that the second equation of (<ref>) holds.
Next, we will show that the potential function of (∂_Iw_0,∂_Iw_1,⋯,∂_Iw_d) is just ∂_Iw. Furthermore, ∂_Iw_0=∂_Iw_t, ∂_Iw_i=∂_Iw_x_i, and ∂_Iw_ij=∂_Iw_x_ix_j for i,j=1,2,⋯,d and |I|=0,1,2. Note that ∂_Iw_i is the (i+1)-th component of ∂_IW while ∂_Iw_x_i is the partial derivative of the first component ∂_I w of ∂_I W with respect to x_i. Hence, it is not trivial to check if they are identical.
With the differentiability of coefficients a^I and b^I, the nonhomogeneous terms ℐ^I[w_0,w_1,⋯,w_d](t,s,x,y) and f, and the initial condition g in (t,x), the implicit function theorem guarantees that the solution w of the first PDE of (<ref>) is also differentiable in (t,x). Thanks to (<ref>), we first differentiate the first PDE of (<ref>) for w and then subtract the equation of (<ref>) for w_i from it. We find that the difference w_x_i-w_i satisfies the following classical PDE
{[ (w_x_i-w_i)(t,s,x,y) = ∑_|I|≤ 2(A^I+B^I)(s,y)∂_I(w_x_i-w_i)(t,s,x,y); (w_x_i-w_i)(t,0,x,y) = 0, t,s∈[0,T], x,y∈ℝ^d. ].
By the classical PDE theory <cit.>, we have ∂_Iw_x_i=∂_Iw_i for |I|=0,1,2. Hence, the potential function of (∂_Iw_0,∂_Iw_1,⋯,∂_Iw_d) is just ∂_Iw.
3.) We are to prove that w solves the simplified nonlocal linear PDE (<ref>). Since ∂_Iw_i=∂_Iw_x_i for i=0,1,2,⋯,d and |I|=0,1,2, we replace all ∂_Iw_i of the nonlocal terms ℐ^I[w_0,w_1,⋯,w_d] by ∂_Iw_x_i. Then the first PDE of (<ref>) is exactly the simplified nonlocal linear PDE (<ref>).
4.) We are to show that the estimate [W]^(2+α)_[0,T]<∞, i.e., w∈Ω^(2+α)_[0,T]. First of all, it is obvious that [w]^(2+α)_[0,T]<∞ with the regularities of f and g. Similar to the proof of the Schauder prior estimate (<ref>), by the simplified nonlocal PDE (<ref>) and the system (<ref>), we have
|w(t,s,x,y)|^(2+α)_(s,y)∈[0,T]×ℝ^d ≤ C([w]^(2+α)_[0,T]+‖ f‖^(α)_[0,T]+‖ g‖^(2+α)_[0,T]) <∞
for any (t,x)∈[0,T]×ℝ^d, which implies [w]^(2+α)_[0,T]<∞. Hence, we have [W]^(2+α)_[0,T]<∞ and w∈Ω^(2+α)_[0,T]. Furthermore, from 3.) and 4.), the nonlocal PDE (<ref>) is solvable in Ω^(2+α)_[0,T].
5.) Finally, for the uniqueness and stability of solutions of (<ref>), both of them come directly from the Schauder estimate (<ref>). Suppose that u_1, u_2∈Ω^(2+α)_[0,T] are two solutions of (<ref>), then we have
{[ L_0(u_1-u_2)(t,s,x,y) = 0,; (u_1-u_2)(t,0,x,y) = 0, t,s∈[0,T], x,y∈ℝ^d, ].
which shows that u_1=u_2.
Similarly, we can also show that the map from data (f,g) to solutions of (<ref>) is continuous in the Ω^(l)_[0,T]-topology. Specifically, let u and ū correspond to (f,g) and (f̄,ḡ) satisfying the
assumptions of Theorem <ref>, respectively. Then, we have
‖ u-ū‖^(2+α)_[0,T]≤ C(‖ f-f̄‖^(α)_[0,T]+‖ g-ḡ‖^(2+α)_[0,T]).
With the claims 1.)-5.), the proof is completed.
With the Schauder estimate (<ref>) and the well-posedness of the simplified version (<ref>) of (<ref>), we are ready to prove the global solvability of (<ref>) by the method of continuity.
If f∈Ω^(α)_[0,T] and g∈Ω^(2+α)_[0,T], then the nonlocal linear PDE (<ref>) admits a unique solution in Ω^(2+α)_[0,T].
Since the problem (<ref>) is of linear type, we assume g=0 without loss of generality. Consider the family of equations:
L_τ u:=(1-τ)L_0 u+τ L_1 u
where L_1u:=Lu. It is clear that
‖ L_τ u‖^(α)_[0,T]≤ C‖ u‖^(2+α)_[0,T]
where C is a positive constant depending only on d, α, T, ‖ A^I‖^(α)_[0,T], and ‖ B^I‖^(α)_[0,T]. Hence, for each τ∈[0,1], the nonlocal parabolic L_τ is a bounded linear operator from B:={u∈Ω^(2+α)_[0,T]: u(t,0,x,y)=0} to V:=Ω^(α)_[0,T]. We know that L_0 is solvable (i.e., L_0 is surjective) by Theorem <ref>. Moreover, there exists a constant C such that the following a-priori estimate holds for all u∈ B and τ∈[0,1]
‖ u‖^(2+α)_[0,T]≤ C‖ L_τ u‖^(α)_[0,T],
since u solves the equation with L_τ u as the nonhomogeneous term. By the method of continuity, L_1 is also solvable (i.e., surjective). Furthermore, the uniqueness directly follows from the Schauder estimate for the homogeneous, linear, and strongly parabolic PDE with zero initial value, which is satisfied by the difference of any two solutions in Ω^(2+α)_[0,T] to the equation. For the stability of solutions of (<ref>), one can refer to the counterpart of Theorem <ref>.
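For the reader's convenience, the continuity-method step invoked above can be sketched as follows (a standard argument; the constants are schematic). Suppose L_τ_0 is solvable for some τ_0∈[0,1] and let f∈Ω^(α)_[0,T]. Since L_τ_0-L_τ=(τ-τ_0)(L_0-L_1), the equation L_τ u=f can be rewritten as the fixed-point problem
u=L_τ_0^-1(f+(τ-τ_0)(L_0-L_1)u),
and, by (<ref>) and (<ref>), the right-hand side defines a contraction on B whenever |τ-τ_0| is below a threshold depending only on those two constants, not on τ_0. Hence solvability propagates from τ=0 to τ=1 in finitely many steps of uniform size.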
By Theorem <ref>, we can immediately conclude the properties of the linear operator L below.
Given g(t,x,y)∈Ω^(2+α)_[0,T], the nonlocal operator L from {u∈Ω^(2+α)_[0,T]:u|_s=0=g} to Ω^(α)_[0,T], defined in (<ref>), is linear, bijective, continuous, and bounded.
§ NONLOCAL FULLY NONLINEAR PDE
In this section, we make use of the linearization method and Banach's fixed point theorem to prove the local existence, uniqueness, and stability of solutions to nonlocal fully nonlinear PDE:
{[ u_s(t,s,x,y) = F(t,s,x,y,(∂_I u)_|I|≤ 2(t,s,x,y), (∂_I u)_|I|≤ 2(s,s,x,y)|_x=y),; u(t,0,x,y) = g(t,x,y), t,s∈[0,T], x,y∈ℝ^d. ].
where the mapping (nonlinearity) F may be nonlinear with respect to all of its arguments. With the local well-posedness in hand, we then extend the results to the largest possible time horizon, resulting in a maximally defined solution. Finally, regarding global solvability, we show that it holds provided a very sharp a-priori estimate is available. In particular, for a special class of (<ref>), the nonlocal quasilinear PDEs, global solvability can be achieved.
§.§ Local Well-posedness of Fully Nonlinear PDEs
To take advantage of the results on nonlocal linear PDEs in Section <ref>, we impose some regularity assumptions on the nonlinearity F and the initial data g. In addition to g∈Ω^(2+α)_[0,T], it is required that the nonlinear mapping (t,s,x,y,z)→ F(t,s,x,y,z) is defined in Π=[0,T]^2×ℝ^d;d× B(z̄,R_0) for a positive constant R_0 and a fixed center z̄∈ℝ×ℝ^d×ℝ^d^2×ℝ×ℝ^d×ℝ^d^2, and satisfies that
* (Uniformly ellipticity condition) for any ξ=(ξ_1,…,ξ_d)^⊤∈ℝ^d, there exists a constant λ>0 such that
∑_|I|=2∂_I F(t,s,x,y,z)ξ^I ≥ λ|ξ|^2,
∑_|I|=2(∂_I F+∂̄_I F)(t,s,x,y,z)ξ^I ≥ λ|ξ|^2
hold uniformly with respect to (t,s,x,y,z)∈Π;
* (Locally Hölder continuity) for every δ≥ 0 and z∈ B(z̄,R_0), there exists a constant K>0 such that
sup_(t,x,z){| ℱ(t,·,x,·,z)|^(α)_[0,δ]×ℝ^d}=K;
* (Locally Lipschitz continuity) for any (t,s,x,y,z_1),(t,s,x,y,z_2)∈Π, there exists a constant L>0 such that
|ℱ(t,s,x,y,z_1)-ℱ(t,s,x,y,z_2)|≤ L|z_1-z_2|,
where ∂_I F denotes the partial derivative of F with respect to ∂_Iu(t,s,x,y), while ∂̄_I F denotes the derivative of F with respect to the diagonal argument ∂_I u(s,s,x,y)|_x=y. Moreover, the generic notation ℱ in the conditions (<ref>) and (<ref>) represents F itself and some of its first-, second-, and third-order partial derivatives, namely those indicated by “√” in Tables <ref>, <ref>, and <ref>.
For these second-order derivatives denoted by “√" in Table <ref>, we require further regularities listed in Table <ref>.
where ∂^3_𝒳𝒴𝒵F represents the first partial derivative, with respect to the argument 𝒵, of a second-order derivative ∂^2_𝒳𝒴F marked “√” in Table <ref>.
After introducing the regularities for F and g, we are now in position of stating a local existence and uniqueness result for the nonlocal fully nonlinear PDE (<ref>).
Suppose that F satisfies the conditions (<ref>)-(<ref>), g∈Ω^(2+α)_[0,T], and that the range of (∂_I g(t,x,y),∂_I g(s,x,y)|_x=y) is contained in the ball centered at z̄ with radius R_0/2 for a positive constant R_0. Then, there exist δ>0 and a unique u∈Ω^(2+α)_[0,δ] satisfying (<ref>) in [0,δ]^2×ℝ^d;d.
We adopt the linearization method and Banach's fixed point argument to prove the local well-posedness of nonlocal fully nonlinear PDE. Overall speaking, we search for the solution of (<ref>) as a fixed point of the operator Λ, defined by Λ(u)=U over the space
𝒰={u∈Ω^(2+α)_[0,δ]:u(t,0,x,y)=g(t,x,y),‖ u-g‖^(2+α)_[0,δ]≤ R}
for two constants δ and R (determined later), where U is the solution to
{[ U_s(t,s,x,y) = ℒU+F(t,s,x,y,(∂_I u)_|I|≤ 2(t,s,x,y),; (∂_I u)_|I|≤ 2(s,s,x,y)|_x=y)-ℒu,; U(t,0,x,y) = g(t,x,y), t,s∈ [0,δ], x,y∈ℝ^d, ].
in which
ℒu(t,s,x,y)
=∑_|I|≤ 2∂_I F(t,0,x,y,θ_0(t,x,y))·∂_I u(t,s,x,y)
+∑_|I|≤ 2∂̄_I F(t,0,x,y,θ_0(t,x,y))·∂_I u(s,s,x,y)|_x=y
with θ_0(t,x,y):=((∂_I g)_|I|≤ 2(t,x,y), (∂_I g)_|I|≤ 2(0,x,y)|_x=y). Note that the partial derivative ∂_I F(t,0,x,y,θ_0(t,x,y))
is meant to be evaluated at (t,0,x,y,θ_0(t,x,y)), i.e. (t,0,x,y,(∂_I g)_|I|≤ 2(t,x,y), (∂_I g)_|I|≤ 2(0,x,y)|_x=y). The same convention applies to ∂̄_I F(t,0,x,y,θ_0(t,x,y)). Note that the nonlinear operator Λ defined by (<ref>) is well-defined thanks to the well-posedness of the nonlocal linear PDE (<ref>).
In order to apply the Banach's fixed point theorem, we need to strike a balance between δ and R such that they satisfy the following three conditions:
* To validate F(t,s,x,y,(∂_I u)_|I|≤ 2(t,s,x,y), (∂_I u)_|I|≤ 2(s,s,x,y)|_x=y), we require that the range of various derivatives of u in 𝒰 is contained in B(z̄,R_0). Noting that
sup_(t,s,x,y)∈[0,δ]^2×ℝ^d;d∑_|I|≤ 2(|∂_I u(t,s,x,y)-∂_I g(t,x,y)|
+|∂_I u(s,s,x,y)|_x=y-∂_I g(s,x,y)|_x=y|)≤ Cδ^α/2R,
it should hold that Cδ^α/2R≤R_0/2;
* After a rather lengthy verification (see Appendix B), we can show the core inequality
‖Λ(u)-Λ(ū)‖^(2+α)_[0,δ]≤ C(R)δ^α/2‖ u-ū‖^(2+α)_[0,δ],
and thus a small enough δ can be chosen to ensure C(R)δ^α/2≤1/2 so that Λ is a 1/2-contraction;
* Before applying a fixed point argument, the last step is to prove that Λ maps 𝒰 into itself, i.e. ‖Λ(u)-g‖^(2+α)_[0,δ]≤ R. Hence, R should be suitably large such that ‖Λ(g)-g‖^(2+α)_[0,δ]≤ R/2; a schematic summary of how δ and R are balanced is given right after this list.
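Schematically, the order in which the two constants are fixed is the following (the constants C and C(R) are those of the first two conditions, and C^' is the constant appearing in the self-mapping step below, none of which depends on δ once R is fixed):
R ≥ 2C^', δ ≤min{(R_0/2CR)^2/α, (2C(R))^-2/α},
that is, R is chosen first and δ is shrunk afterwards so that Cδ^α/2R≤ R_0/2 and C(R)δ^α/2≤1/2 hold simultaneously.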
(Contraction property of Λ) Let us consider the equation for U-Ū:=Λ(u)-Λ(ū):
{[ (U-Ū)_s(t,s,x,y) = ℒ(U-Ū)+F(t,s,x,y,(∂_I u)_|I|≤ 2(t,s,x,y),; (∂_I u)_|I|≤ 2(s,s,x,y)|_x=y); -F(t,s,x,y,(∂_Iū)_|I|≤ 2(t,s,x,y),; (∂_Iū)_|I|≤ 2(s,s,x,y)|_x=y); -ℒ(u-ū),; (U-Ū)(t,0,x,y) = 0, t,s∈[0,δ], x,y∈ℝ^d. ].
According to the prior estimates (<ref>) and (<ref>) of nonlocal linear PDEs (<ref>), we have
‖ U-Ū‖^(2+α)_[0,δ]≤ C‖φ‖^(α)_[0,δ],
where the constant C is independent of δ and the inhomogeneous term φ is given by
φ(t,s,x,y)= F(t,s,x,y,(∂_I u)_|I|≤ 2(t,s,x,y), (∂_I u)_|I|≤ 2(s,s,x,y)|_x=y)
-F(t,s,x,y,(∂_Iū)_|I|≤ 2(t,s,x,y), (∂_Iū)_|I|≤ 2(s,s,x,y)|_x=y)-ℒ(u-ū),
for convenience, which can be rewritten as an integral representation:
∫^1_0d/dσF(t,s,x,y,θ_σ(t,s,x,y))dσ-ℒ(u-ū)
= ∫^1_0∑_|I|≤ 2∂_I F(t,s,x,y,θ_σ(t,s,x,y))×(∂_I u(t,s,x,y)-∂_Iū(t,s,x,y))dσ
+∫^1_0∑_|I|≤ 2∂̄_I F(t,s,x,y,θ_σ(t,s,x,y))
×(∂_I u(s,s,x,y)|_x=y-∂_Iū(s,s,x,y)|_x=y)dσ -ℒ(u-ū)
= ∫^1_0∑_|I|≤ 2(∂_I F(t,s,x,y,θ_σ(t,s,x,y))-∂_I F(t,0,x,y,θ_0(t,x,y)))
×∂_I(u-ū)(t,s,x,y)dσ
+∫^1_0∑_|I|≤ 2(∂̄_I F(t,s,x,y,θ_σ(t,s,x,y))-∂̄_I F(t,0,x,y,θ_0(t,x,y)))
×∂_I(u-ū)(s,s,x,y)|_x=ydσ,
in which
θ_σ(t,s,x,y) :=σ((∂_I u)_|I|≤ 2(t,s,x,y), (∂_I u)_|I|≤ 2(s,s,x,y)|_x=y)
+(1-σ)((∂_Iū)_|I|≤ 2(t,s,x,y), (∂_Iū)_|I|≤ 2(s,s,x,y)|_x=y).
To estimate ‖φ‖^(α)_[0,δ], we need to estimate the Hölder regularities of |φ(t,s,x,y)|^(α)_(s,y)∈[0,δ]×ℝ^d, |φ_t(t,s,x,y)|^(α)_(s,y)∈[0,δ]×ℝ^d, |φ_x(t,s,x,y)|^(α)_(s,y)∈[0,δ]×ℝ^d, and |φ_xx(t,s,x,y)|^(α)_(s,y)∈[0,δ]×ℝ^d for any fixed t∈[0,δ] and x∈ℝ^d, all of which are listed in Table <ref>. After a lengthy but straightforward investigation of K_1-K_12 in Table <ref> (see Appendix <ref>), for a small enough δ, we have
‖ U-Ū‖^(2+α)_[0,δ]≤ C‖φ‖^(α)_[0,δ]≤ C(R)δ^α/2‖ u-ū‖^(2+α)_[0,δ]≤1/2‖ u-ū‖^(2+α)_[0,δ].
(Self-mapping of Λ) Before applying Banach's fixed point theorem, we need to choose a suitably large R such that Λ maps 𝒰 into itself. If δ and R satisfy
C(R)δ^α/2≤1/2,
then Λ is a 1/2-contraction and, for any u∈𝒰, we have
‖Λ(u)-g‖^(2+α)_[0,δ]≤R/2+‖Λ(g)-g‖^(2+α)_[0,δ].
Define the function G:=Λ(g)-g as the solution of the equation
{[ G_s(t,s,x,y)=ℒG+F(t,s,x,y,(∂_I g)_|I|≤ 2(t,x,y), (∂_I g)_|I|≤ 2(s,x,y)|_x=y),; G(t,0,x,y)=0, t,s∈ [0,δ], x,y∈ℝ^d. ].
By (<ref>), there is C>0 independent of δ such that
‖ G‖^(2+α)_[0,δ]≤ C‖ψ‖^(α)_[0,δ]=:C^',
where ψ(t,s,x,y)=F(t,s,x,y,(∂_Ig)_|I|≤ 2(t,x,y), (∂_Ig)_|I|≤ 2(s,x,y)|_x=y). Hence, we have
‖Λ(u)-g‖^(2+α)_[0,δ]≤R/2+C^'.
Therefore for a suitably large R, Λ is a contraction mapping 𝒰 into itself and it has a unique fixed point u in 𝒰 satisfying
{[ u_s(t,s,x,y) = F(t,s,x,y,(∂_I u)_|I|≤ 2(t,s,x,y), (∂_I u)_|I|≤ 2(s,s,x,y)|_x=y),; u(t,0,x,y) = g(t,x,y), t,s∈ [0,δ], x,y∈ℝ^d. ].
(Uniqueness) To complete the proof, we ought to show that u is the unique solution of (<ref>) in the whole space Ω^(2+α)_[0,δ], not merely in the subset 𝒰. We can directly study the nonlocal PDE satisfied by the difference of any two solutions u and ū in Ω^(2+α)_[0,δ] to (<ref>). By using the mean value theorem and the technique of (<ref>), it is clear that the difference u-ū solves a nonlocal, homogeneous, linear, and strongly parabolic PDE with zero initial value. Consequently, similar to the proof of Theorem <ref>, we obtain the uniqueness of (<ref>).
Alternatively, the uniqueness of solutions of (<ref>) can also be proven by contradiction. Specifically, suppose that (<ref>) admits two solutions u and ū and let
t_0=sup{θ∈[0,δ]:u(t,s,x,y)=ū(t,s,x,y) for all (t,s,x,y)∈[0,θ]^2×ℝ^d;d}.
If t_0=δ the proof is completed. Otherwise, we can consider a new nonlocal fully nonlinear PDE with an updated initial condition g^'(t,x,y)=u(t,t_0,x,y)=ū(t,t_0,x,y) in (t,s,x,y)∈[t_0,T]^2×ℝ^d;d. As the proof of Theorem <ref> shows, there exist a sufficiently small δ^' and a suitably large R^' such that the updated (<ref>) admits a unique solution in a ball centered at g^' with radius R^'. By choosing R^' large enough that both u and ū are contained in this ball, we have u=ū in [0,t_0+δ^']^2×ℝ^d;d. This contradicts the definition of t_0. Consequently, t_0=δ and u=ū.
With some extra efforts, it is provable that the solution of (<ref>) depends continuously on the initial datum g.
Under the assumptions on F and g in Theorem <ref>, there exist r>0 and C>0 such that for each ḡ∈Ω^(2+α)_[0,T] with ‖ g-ḡ‖^(2+α)_[0,T]≤ r, the solution ū of (<ref>) with initial datum ḡ is defined in [0,δ]^2×ℝ^d;d and satisfies
‖ u-ū‖^(2+α)_[0,δ]≤ C‖ g-ḡ‖^(2+α)_[0,δ],
where δ>0 is given by Theorem <ref>.
This stability result comes directly from the prior estimate (<ref>) and the proof of Theorem <ref>. Thus, its proof is omitted here.
§.§ Extensions to a Larger Time Horizon
In this subsection, we extend the local well-posedness result in Theorem <ref> to the largest possible time horizon, leading to a maximally defined solution. In fact, we have proven that there exists a δ_1>0 such that the nonlocal fully nonlinear PDE (<ref>) is well-posed in [0,δ_1]^2×ℝ^d;d, i.e. the time region R_1 in Figure <ref>.
Hence, the diagonal condition can be determined for s∈[0,δ_1], which means all nonlocal terms (∂_I u)_|I|≤ 2(s,s,x,y)|_x=y of (<ref>) are known for s∈[0,δ_1]. Subsequently, (<ref>) reduces to a family of classical fully nonlinear PDEs parameterized by (t,x)∈[0,T]×ℝ^d. By the classical PDE theory, it is natural to extend the solution of (<ref>) from R_1 to R_1⋃ R_2. Next, we take s=δ_1 as the initial time and g^'(t,x,y)=u(t,δ_1,x,y) as the initial datum, and then consider the nonlocal PDE
{[ u_s(t,s,x,y) = F(t,s,x,y,(∂_I u)_|I|≤ 2(t,s,x,y), (∂_I u)_|I|≤ 2(s,s,x,y)|_x=y),; u(t,δ_1,x,y) = g^'(t,x,y), t,s∈[δ_1,T], x,y∈ℝ^d. ].
Since g^'∈Ω^(2+α)_[δ_1,T], Theorem <ref> guarantees that there exists δ_2>0 such that (<ref>) is solvable as well in R_3×ℝ^d;d. Similarly, after solving (<ref>) in the time region R_3, the nonlocal terms of (<ref>) are all known for s∈[δ_1,δ_2] and (<ref>) reduces to a family of conventional PDEs. The solution of (<ref>) can then be extended from R_3 to R_3⋃ R_4⋃ R_5. So far, we have obtained a solution of (<ref>) in [0,T]×[0,δ_2]×ℝ^d;d. One can continue extending the solution to a larger time interval by repeating the procedure above until a maximally defined solution u(t,s,x,y) defined in [0,T]×[0,τ)×ℝ^d;d is reached. The time region [0,T]×[0,τ) is maximal in the sense that if τ<T, then there does not exist any solution of (<ref>) belonging to Ω^(2+α)_[0,τ].
It is noteworthy that the problem of existence in the large for arbitrary initial data is a difficult task even in the local fully nonlinear case. The difficulty is caused by the fact that an a-priori estimate in a very high norm |·|^(2+α)_[a,b]×ℝ^d (or ‖·‖^(2+α)_[a,b]) is needed to establish existence in the large. To this end, there should be severe restrictions on the nonlinearities. More details are discussed in <cit.>. Next, let us denote by τ(g) the maximal time horizon associated with g. To achieve global existence, it is desired to prove that τ(g)=T.
Inspired by <cit.>, we will show the existence of solutions to (<ref>) for an arbitrarily large T>0, any ϵ>0, and initial data g∈Ω^(2+α+ϵ)_[0,T], provided a very sharp a-priori estimate is available.
Let F and g satisfy the assumptions of Theorem <ref> with α replaced by α+ϵ. For a fixed g∈Ω^(2+α+ϵ)_[0,T], let u be the maximally defined solution of problem (<ref>) for s∈[0,τ(g)). Further assume that there exists a constant M>0 such that
‖ u‖^(2+α+ϵ)_[0,σ]≤ M, for all σ∈[0,τ(g)),
then we have either lim_s→τu(t,s,x,y)∈∂𝒪 or τ(g)=T, where ∂𝒪 is the boundary of the open set 𝒪 of Ω^(2+α)_[0,T] with
𝒪={u | ((∂_I u)_|I|≤ 2(t,s,x,y), (∂_I u)_|I|≤ 2(s,s,x,y)|_x=y)⊆ B(z̄,R_0)}.
We refer the readers to <cit.> for the similar claim and proof. Assume that lim_s→τu(t,s,x,y)=u(t,τ,x,y)∈𝒪. To obtain a global solution, the maximally defined solution in [0,τ(g)) has to be extended into the bigger interval [0,τ(g)] such that we can update the initial data with u(·,τ,·,·)∈Ω^(2+α)_[0,T]. It requires that the mapping u:s↦ u(t,s,x,y) from [0,τ) to Ω^(2+α)_[0,T] is at least uniformly continuous. By the estimate (<ref>), we have
u∈ B([0,τ);Ω^(2+α+ϵ)_[0,T]), u_s∈ B([0,τ);Ω^(α+ϵ)_[0,T]),
where B([a,b);X) denotes the space of bounded functions over [a,b) valued in the Banach space X. By an interpolation result for θ∈[0,1] (see <cit.>), it follows that u∈ C^1-θ([0,σ];Ω^(α+ϵ+2θ)_[0,T]) for every σ∈(0,τ) with a Hölder constant independent of σ. By choosing θ=1-ϵ/2, we have u∈ C^ϵ/2([0,σ];Ω^(2+α)_[0,T]). Consequently, u can be continued at s=τ(g) in such a way that the extension belongs to u(·,τ,·,·)∈Ω^(2+α)_[0,T]. Then, by Theorem <ref>, (<ref>) admits a unique solution u∈Ω^(2+α)_[0,τ+τ_1] for some τ_1>0, which contradicts the definition of τ(g). Therefore, we have τ(g)=T.
Remarkably, in order to obtain the 1/2-contraction property of Λ defined by (<ref>), we need to strike a balance between R and δ in C(R)δ^α/2 of (<ref>). In the study of nonlocal fully nonlinear PDEs, the radius R of the solution u of (<ref>) encodes its ‖·‖^(2+α)_[0,δ]-norm information. Hence, an a-priori estimate ‖ u‖^(2+α)_[0,σ]≤ M for σ∈[0,τ(g)) is not sufficient for existence in the large. The ϵ of (<ref>) provides the additional regularity of the maximally defined solution that is needed to ensure the extension. Similar sufficient conditions to obtain a-priori estimates like (<ref>) for classical PDEs can be found in <cit.>. However, it is not straightforward to express such conditions in terms of the coefficients and data of the local and nonlocal fully nonlinear PDEs.
Next, we will make use of Theorems <ref> and <ref> to show the global solvability of a special class of (<ref>), which is called nonlocal quasilinear PDEs, of the form:
{[ u_s(t,s,x,y) = ∑_|I|= 2A^I(s,y)∂_I u(t,s,x,y); +Q(t,s,x,y,(∂_I u)_|I|≤ 1(t,s,x,y), (∂_I u)_|I|≤ 1(s,s,x,y)|_x=y),; u(t,0,x,y) = g(t,x,y), t,s∈[0,T], x,y∈ℝ^d, ].
where the nonlinearity Q(t,s,x,y,p^I,q^I) satisfies that there exists a constant K<∞ such that Q and its first-/second-derivatives satisfy
|Q|≤ K(1+∑|p^I|),
max_|I|≤ 1, 1≤ i,j≤ d{|Q_p^I|, |Q_t|, |Q_x_i|, |Q_p^Ip^J|, |Q_tp^I|, |Q_x_ip^I|, |Q_x_ix_j|}≤ K.
Such a simplified PDE (<ref>) contains the equilibrium-type HJB equations originating from TIC stochastic control problems driven by state processes controlled solely through the drift. For such cases, we obtain the following global well-posedness result.
Suppose that A^I satisfies the uniform ellipticity condition (<ref>) and the nonlinearity Q meets the conditions (<ref>)-(<ref>). The nonlocal quasilinear PDE (<ref>) admits a unique solution in Ω^(2+α)_[0,T] in [0,T]^2×ℝ^d;d.
We leverage Theorem <ref> for the analysis of (<ref>). It is clear that (<ref>) admits a unique maximally defined solution in Ω^(2+α)_[0,τ) in [0,τ)^2×ℝ^d;d. If τ=T, the proof is completed. Otherwise, we ought to examine if the maximally defined solution can be extended uniquely into [0,T]^2.
Following the proof of Theorem <ref> and the definition of Λ, we can find that the argument R of C(R)δ^α/2 in (<ref>) depends on ‖·‖^(1+α)_[0,δ]-norm of u and g rather than their ‖·‖^(2+α)_[0,δ]-norm, which is a key difference between (<ref>) and (<ref>).
In the case of (<ref>), we only need to control the behavior of solutions in the ‖·‖^(1+α)_[0,δ]-topology and show that the mapping u:s↦ u(t,s,x,y) from [0,τ) to Ω^(1+α)_[0,T] is uniformly continuous.
By (<ref>) restricted in [0,τ)^2×ℝ^d;d, we can differentiate the equation once and twice with respect to x_i, i=0,1,⋯,d, then
{[ (∂ u/∂ x_i)_s(t,s,x,y) = ∑_|I|= 2A^I(s,y)∂_I (∂ u/∂ x_i)(t,s,x,y); +∑_|I|≤ 1Q_p^I(u)∂_I(∂ u/∂ x_i)(t,s,x,y)+Q_x_i(u),; (∂ u/∂ x_i)(t,0,x,y) = g_x_i(t,x,y), t,s∈[0,τ), x,y∈ℝ^d, i=0,1,⋯,d ].
and
{[ (∂^2 u/∂ x_i∂ x_j)_s(t,s,x,y) = ∑_|I|= 2A^I(s,y)∂_I (∂^2 u/∂ x_i∂ x_j)(t,s,x,y); +∑_|I|≤ 1Q_p^I(u)∂_I(∂^2 u/∂ x_i∂ x_j)(t,s,x,y); +∑_|I|≤ 1,|J|≤ 1Q_p^Ip^J(u)(∂_I(∂ u/∂ x_i)∂_J(∂ u/∂ x_j))(t,s,x,y); +∑_|I|≤ 1Q_p^Ix_j(u)∂_I(∂ u/∂ x_i)(t,s,x,y); +∑_|I|≤ 1Q_x_ip^I(u)∂_I(∂ u/∂ x_j)(t,s,x,y)+Q_x_ix_j(u),; (∂^2 u/∂ x_i∂ x_j)(t,0,x,y) = g_x_ix_j(t,x,y), t,s∈[0,τ), i,j=1,⋯,d. ].
With the conditions (<ref>)-(<ref>) and the Grönwall–Bellman inequality, it is clear from (<ref>), (<ref>), and (<ref>) that u∈Ω^(1+α)_[0,τ) and there exists a constant K^' such that ‖ u‖^(1+α)_[0,τ)≤ K^'. Consequently, the nonlinearity Q of (<ref>) belongs to Ω^(α)_[0,τ) as well. Furthermore, Theorem <ref> implies that (<ref>) admits a unique solution u∈Ω^(2+α)_[0,τ) in [0,τ)^2×ℝ^d;d and ‖ u‖^(2+α)_[0,τ)≤ K^', where K^' could vary from line to line. Therefore, by Lemma 8.5.5 in <cit.>, the mapping u:s↦ u(t,s,x,y) from [0,τ) to Ω^(1+α)_[0,T] has an analytic continuation at s=τ and u(·,τ,·,·)∈Ω^(2+α)_[0,T]. With the same spirit of the proof of Theorem <ref>, we can extend the maximally defined solution until [0,T]^2 by updating the initial condition.
§.§ Probabilistic Representation for Nonlocal PDEs
Before we end this section, we provide a probabilistic representation for the solutions of nonlocal fully nonlinear PDEs on top of their well-posedness. With such a representation, it is promising to combine the Monte Carlo simulations and deep learning techniques to devise a numerical scheme of solving the nonlocal PDEs (even in a high-dimensional setting); see <cit.>.
Suppose that σ(s,y)∈ C^1,2([0,T]×ℝ^d) and (<ref>) admits a unique solution u(t,s,x,y) that is first-order continuously differentiable in s and third-order continuously differentiable with respect to y in ∇[0,T]×ℝ^d. Furthermore, let
Y(t,s) := u(t,s,X(t),X(s)), Z(t,s) := (σ^⊤ u_y)(t,s,X(t),X(s)),
Γ(t,s) :=(σ^⊤(σ^⊤ u_y)_y)(t,s,X(t),X(s)), A(t,s) := 𝒟(σ^⊤ u_y)(t,s,X(t),X(s)),
where (σ^⊤ u_y)(t,s,x,y)=σ^⊤(s,y)u_y(t,s,x,y) and the operator 𝒟 is defined by
𝒟φ=φ_s+1/2∑^d_i,j=1(σσ^⊤)_ij∂^2φ/∂ y_i∂ y_j+∑^d_i=1b_i∂φ/∂ y_i,
then the family of random fields (X(·),Y(·,·),Z(·,·),Γ(·,·),A(·,·)) is an adapted solution of the following flow of 2FBSDEs:
X(s) = X(t)+∫^s_tb(τ,X(τ))dτ+∫^s_tσ(τ,X(τ))dW(τ),
Y(t,s) = g(t,X(t),X(T))
+∫^T_sℱ(t,τ,X(t),X(τ),Y(t,τ),Y(τ,τ),Z(t,τ),Z(τ,τ),Γ(t,τ),Γ(τ,τ))dτ
-∫^T_s Z^⊤(t,τ)dW(τ),
Z(t,s) = Z(t,t)+∫^s_tA(t,τ)dτ+∫^s_tΓ(t,τ)dW(τ), 0≤ t≤ s≤ T,
where ℱ is defined by
ℱ(t,τ,X(t),X(τ),Y(t,τ),Y(τ,τ),Z(t,τ),Z(τ,τ),Γ(t,τ),Γ(τ,τ))
= F̄(t,τ,X(t),X(τ),(∂_I u)_|I|≤ 2(t,τ,X(t),X(τ)),(∂_I u)_|I|≤ 2(τ,τ,X(τ),X(τ)))
with F̄ defined by
F̄(t,τ,x,y,(∂_I u)_|I|≤ 2(t,τ,x,y), (∂_I u)_|I|≤ 2(τ,τ,x,y)|_x=y)
:= F(t,τ,x,y,(∂_I u)_|I|≤ 2(t,τ,x,y), (∂_I u)_|I|≤ 2(τ,τ,x,y)|_x=y)
-1/2∑^d_i,j=1(σσ^⊤)_ij(τ,y)∂^2 u/∂ y_i∂ y_j(t,τ,x,y)-∑^d_i=1b_i(τ,y)∂ u/∂ y_i(t,τ,x,y).
The results come directly from an application of Itô's lemma. We refer the readers to the similar claims and proofs in <cit.>. We make three important observations about the stochastic system (<ref>): (I) when the generator ℱ is independent of the diagonal terms, i.e. Y(τ,τ), Z(τ,τ), and Γ(τ,τ), the flow of FBSDEs (<ref>) reduces to a family of 2FBSDEs parameterized by (t,X(t)), which is exactly the 2FBSDE in <cit.> and equivalent to the ones in <cit.> for any fixed t; (II) (<ref>) is more general than related results in the previous literature <cit.> since it allows for nonlinearity in (Y(t,τ),Z(t,τ),Γ(t,τ)) by introducing an additional SDE and also contains their diagonal terms (Y(τ,τ),Z(τ,τ),Γ(τ,τ)) in an almost arbitrary way; (III) inspired by <cit.> and <cit.>, it would be interesting to establish the well-posedness of (<ref>) within the theoretical framework of SDEs; however, this is beyond the scope of this paper and is left on our research agenda.
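As an illustration of how the representation above could be exercised numerically, the following sketch simulates the forward diffusion X by an Euler–Maruyama scheme and evaluates the random fields Y(t,s)=u(t,s,X(t),X(s)) and Z(t,s)=(σ^⊤u_y)(t,s,X(t),X(s)) along the simulated paths; coupling such simulations with a deep-learning parameterization of u is the numerical direction alluded to above. The one-dimensional drift, diffusion, and candidate field u below are placeholder choices made only for illustration and are not taken from the paper.

```python
import numpy as np

# Placeholder model ingredients (illustrative assumptions only, d = 1):
# drift b, diffusion sigma, and a smooth candidate field u(t, s, x, y).
def b(s, y):
    return -0.5 * y

def sigma(s, y):
    return 0.3

def u(t, s, x, y):
    return np.exp(-t) * (1.0 + y ** 2)

def u_y(t, s, x, y):
    return np.exp(-t) * 2.0 * y

def simulate(t0=0.0, T=1.0, x0=1.0, n_steps=200, n_paths=10_000, seed=0):
    """Euler-Maruyama simulation of X and evaluation of
    Y(t0, s) = u(t0, s, X(t0), X(s)) and Z(t0, s) = sigma * u_y(t0, s, X(t0), X(s))."""
    rng = np.random.default_rng(seed)
    dt = (T - t0) / n_steps
    X = np.full(n_paths, x0)
    X_t0 = X.copy()                              # frozen first argument X(t0)
    Y = [u(t0, t0, X_t0, X)]
    Z = [sigma(t0, X) * u_y(t0, t0, X_t0, X)]
    s = t0
    for _ in range(n_steps):
        dW = rng.normal(0.0, np.sqrt(dt), n_paths)
        X = X + b(s, X) * dt + sigma(s, X) * dW  # forward SDE step
        s += dt
        Y.append(u(t0, s, X_t0, X))
        Z.append(sigma(s, X) * u_y(t0, s, X_t0, X))
    return np.array(Y), np.array(Z)

Y, Z = simulate()
print("E[Y(t0, T)] ~", Y[-1].mean(), " E[Z(t0, T)] ~", Z[-1].mean())
```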
§ WELL-POSEDNESS OF EQUILIBRIUM HJB EQUATIONS AND AN EXAMPLE
With the well-posedness results in Section <ref>, we now return to the TIC stochastic control problem of our interest (<ref>) and analyze the well-posedness of the equilibrium HJB equation (<ref>). We first summarize the results for the latter and then give a financial example.
In the previous two sections, we consider initial value PDE problems for notational simplicity. To study stochastic control problem, we first show the equivalence between the solvability of (<ref>) and that of the nonlocal boundary value problems (<ref>) with a terminal condition:
{[ u_s(t,s,x,y) + F(t,s,x,y,(∂_I u)_|I|≤ 2(t,s,x,y), (∂_I u)_|I|≤ 2(s,s,x,y)|_x=y) = 0,; u(t,T,x,y) = g(t,x,y), t,s∈[0,T], x,y∈ℝ^d. ].
The solvabilities of forward problem (<ref>) and backward problem (<ref>) are equivalent.
Let u(t,s,x,y)=v(T-t,T-s,x,y), then (<ref>) can be reformulated as
{[ -v_T-s(T-t,T-s,x,y) + F(t,s,x,y,(∂_I v)_|I|≤ 2(T-t,T-s,x,y),; (∂_I v)_|I|≤ 2(T-s,T-s,x,y)|_x=y) = 0,; v(T-t,0,x,y) = g(t,x,y), t,s∈[0,T], x,y∈ℝ^d. ].
Let (t^',s^')=(T-t,T-s) and (x^',y^')=(x,y). Then we have
{[ v_s^'(t^',s^',x^',y^') = F(T-t^',T-s^',x^',y^',(∂_I v)_|I|≤ 2(t^',s^',x^',y^'),; (∂_I v)_|I|≤ 2(s^',s^',x^',y^')|_x^'=y^'),; v(t^',0,x^',y^') = g(T-t^',x^',y^'), t^',s^'∈[0,T], x^',y^'∈ℝ^d. ].
By introducing F^'(t^',s^',x^',y^',p,q)=F(T-t^',T-s^',x^',y^',p,q) and g^'(t^',x^',y^')=g(T-t^',x^',y^'), the backward problem (<ref>) is equivalent to
{[ v_s^'(t^',s^',x^',y^') = F^'(t^',s^',x^',y^',(∂_I v)_|I|≤ 2(t^',s^',x^',y^'),; (∂_I v)_|I|≤ 2(s^',s^',x^',y^')|_x^'=y^'),; v(t^',0,x^',y^') = g^'(t^',x^',y^'), t^',s^'∈[0,T], x^',y^'∈ℝ^d. ].
which completes the proof.
With the connection in Theorem <ref>, we now can use the results (theorems) in Section <ref> for boundary value (backward) problems. The descriptive extension in Section <ref> can be derived in parallel as shown in Figure <ref>.
§.§ Analyses of Equilibrium HJB Equations
In this subsection, we apply Theorems <ref> and <ref> to equilibrium HJB equations. Moreover, we examine how close the value functions of the sophisticated and the “naïve" controllers are, which will be detailed later.
To this end, we first introduce some useful notations:
* For two fixed bounded subsets P, Q⊆ℝ×ℝ^d×ℝ^d^2×ℝ×ℝ^d×ℝ^d^2, we define
d(P,Q):=inf{|p-q|:p∈ P,q∈ Q}.
* For any u∈Ω^(2+α)_[0,T], the range of u is defined by
R(u):={(∂_I u(t,s,x,y),∂_Iu(s,s,x,y)|_x=y):t,s∈[0,T]^2,x,y∈ℝ^d}.
We have the following conclusions.
Let ℋ̄(t,s,x,y,z)=ℋ(T-t,T-s,x,y,z) with ℋ defined in (<ref>), and ḡ(t,x,y)=g(T-t,x,y). Suppose that ℋ̄ satisfies the conditions (<ref>)-(<ref>) in an open ball B with d(∂ B,R(ḡ))>0. Then, for the TIC stochastic control problem (<ref>)-(<ref>), we have that
* if both the drift and the diffusion of (<ref>) are controlled, there exist τ∈(0,T] and a unique maximally defined solution u∈Ω^(2+α)_[T-τ,T] satisfying the equilibrium HJB equation (<ref>) in ∇[T-τ,T]×ℝ^d;d. Consequently, one has a feedback equilibrium control and a C^1,2 equilibrium value function of the form
{[ e(s,y): = ψ(s,s,y,y,(∂_I u)_|I|≤ 2(s,s,x,y)|_x=y),; V(s,y): = J(s,y;e(s,y))=u(s,s,y,y), ].
at least in the maximally defined time interval. If, in addition, the domain of ℋ is large enough and (<ref>) holds, then τ=T;
* in the case where only the drift is controlled while the diffusion of (<ref>) is uncontrolled, i.e. σ(s,y,a)=σ(s,y), the equilibrium HJB equation (<ref>) is solvable globally.
Proposition <ref> comes directly from Theorems <ref>, <ref>, and <ref>. The only important thing to note in the proposition is that the condition d(∂ B,R(ḡ))>0 is weaker than its counterpart in Theorem <ref>, where d(∂ B,R(ḡ))>R_0/2 was required. In fact, from the inequality (<ref>) in the proof of Theorem <ref>, we can see that the local solution exists as long as the range of the initial data is contained in the interior of the domain of the nonlinearity.
Proposition <ref> is significant as it partially addresses some open research problems listed in <cit.>, i.e.,
* to provide conditions on primitives which guarantee that the functions V and u are regular enough to satisfy the extended HJB system; and
* to prove existence and/or uniqueness for solutions of the extended HJB system.
From our well-posedness and regularity results for the nonlocal PDEs (<ref>), it is clear that these open problems can be solved at least within the maximally defined time interval. Note that the extended HJB equation in <cit.> and the equilibrium HJB equation (<ref>) are equivalent; see <cit.>. Moreover, our function space Ω^(2+α)_[a,b] supports the C^1,1,2,2-regularity condition for possible solutions of the extended HJB equation. In addition, our function space requires neither second-order partial derivatives in t nor mixed ones between t and x, which suits the formulation of TIC stochastic control problems well.
While the sophisticated controllers think globally and act locally, it is also interesting to investigate the “naïve" controllers, who think and act locally. Specifically, at each time t and state x, the naïve controllers fix their originally time-varying objective as the one at (t,x) throughout the stochastic control problem over [t,T]×ℝ^d. Given (t,x), the naïve controllers only need to solve a time-consistent stochastic control problem, whose value function is denoted by u^n(t,s,x,y) for (s,y)∈ [t,T]×ℝ^d. By the dynamic programming approach and the assumption that the optimum of the Hamiltonian (<ref>) is attainable, it is not difficult to find that the PDE for u^n is given by
{[ u^n_s(t,s,x,y) = ℋ(t,s,x,y,ψ(t,s,x,y,(∂_I u^n)_|I|≤ 2(t,s,x,y)),; (∂_I u^n)_|I|≤ 2(t,s,x,y)), (t,s,x,y)∈[0,T]×ℝ^d;d,; u^n(t,0,x,y) = g(t,x,y), (t,x,y)∈[0,T]×ℝ^d;d. ].
Its key difference from (<ref>) is that there are no u-function terms with (t,x) replaced by (s,y) in (<ref>). Thus, the PDE (<ref>) is local, as (t,x) can be viewed as fixed parameters. Subsequently, the function defined by V^n(s,y)=u^n(s,s,y,y) naturally serves as a lower bound for the corresponding equilibrium value function V, since V^n(s,y) is the local solution (in value) of problem (<ref>). However, we should note that there is no consistent control policy (in the face of TIC) that can attain the value function V^n(s,y). In other words, u^n and V^n are merely nominal and not achievable.
Our next proposition estimates the difference between the functions V(s,y) and V^n(s,y), which, in a sense, quantifies the goodness of using the equilibrium strategy. The result builds on top of the well-posedness of the nonlocal fully nonlinear PDEs obtained in the previous sections. To avoid repeated discussions on the global solvability, we leverage only the local well-posedness results and restrict the discussion to a small time interval. Though the result is extendable, the extension may not bring additional insights.
Suppose that ℋ and ψ possess the needed regularity. Then there exists a constant C(R)>0 such that
V^n(s,y)≤ V(s,y)≤ V^n(s,y)+C(R)((T-s)^2+(T-s)^3/2)
for any s∈[T-δ,T] and y∈ℝ^d, where R and δ are determined by Theorem <ref>.
In order to evaluate the difference between V^n and V, we study the following two nonlocal PDEs (<ref>) and (<ref>) for (t,s,x,y)∈[0,δ]×ℝ^d;d.
Consequently, given the well-posedness of u(t,s,x,y), we obtain a classical PDE for (u^n-u)(t,s,x,y) of the following form
{[ (u^n-u)_s(t,s,x,y) = ∑_|I|≤ 2M^I(t,s,x,y)∂_I(u^n-u)(t,s,x,y); +∑_|I|≤ 2N^I(t,s,x,y)ℐ^I[∂ u/∂ t,∂ u/∂ x](t,s,x,y); +K(t,s,x,y), (t,s,x,y)∈[0,δ]×ℝ^d;d,; (u^n-u)(t,0,x,y) = 0, (t,x,y)∈[0,δ]×ℝ^d;d, ].
where
{[ M^I(t,s,x,y)= ∫^1_0(∂ℋ/∂ψ∂ψ/∂(∂_Iu)+∂ℋ/∂(∂_Iu))(s,y,η_σ(t,s,x,y))dσ,; N^I(t,s,x,y)= ∫^1_0∂ℋ/∂ψ∂ψ/∂(∂_Iu)(s,y,η_σ(t,s,x,y))dσ,; K(t,s,x,y)= ∫^1_0∂ℋ/∂ψ∂ψ/∂(t,x)(s,y,η_σ(t,s,x,y))dσ·∫^(t,x)_(s,y)1dθ,; η_σ(t,s,x,y)= σ·(t,x,(∂_I u^n)_|I|≤ 2(t,s,x,y)); +(1-σ)·(s,y,(∂_I u)_|I|≤ 2(s,s,x,y)|_x=y). ].
Then, by setting (t,x)=(s,y) and following earlier analyses, we have
0 ≤(V-V^n)(T-s,y)=|(V^n-V)(T-s,y)|=|(u^n-u)(s,s,y,y)|
≤ C(R)∫^s_0dτ∫_ℝ^d(s-τ)^-d/2exp{-c(R)ϖ(s,τ,y,ξ)}(|s-τ|+|y-ξ|)dξ
≤ C(R)(s^2+s^3/2), s∈[0,δ],
where ϖ(s,τ,y,ξ) is defined in Appendix <ref>. The proof is completed.
Compared with the nonlocal PDE (<ref>), the equation (<ref>) can be considered as a family of classical PDEs parameterized by (t,x). Hence, after solving (<ref>), it is helpful to use the difference between V and V^n to estimate equilibrium value function V. Moreover, if (<ref>) is globally solvable, the estimate can be extended to the whole time horizon.
§.§ A Financial Example
In this subsection, we give an example of a TIC stochastic control problem (<ref>) and illustrate its global solvability. With a specific model, we can obtain explicit expressions for the equilibrium control and the equilibrium HJB equation, while the latter can be further reduced to an ordinary differential equation (ODE) system with an ansatz. In a spirit similar to the proof of Theorem <ref>, we can show the global solvability of the ODE system and thus obtain the global well-posedness of the equilibrium HJB equation. However, although our results are applicable to state processes with controlled drift and volatility, which is the case in this example, they do not cover this example directly, as one would first need to refine the norm and Banach space used in our results. The example in this subsection is therefore mainly illustrative.
Consider a market model in which there are one bond with a constant risk-less interest rate r>0 and one risky asset with the appreciation rate μ>r and the volatility σ>0. With α(·) being the money amount invested in the risk asset (while the rest invested in the bond) and c(·) being the consumption amount, the wealth process, characterized by X(·), together with the TIC recursive utility process (Y(·),Z(·)) satisfies the following controlled FBSDE:
{[ dX(s)= [rX(s)+(μ-r)α(s)-c(s)]ds+σα(s)dW(s), s∈[t,T],; dY(s)= -[v(t,s)c(s)^β-w(t,s)Y(s)+z(t,s)x^γ]ds+Z(s)dW(s), s∈[t,T],; X(t)= x, Y(T)=g^(1)(t)X(T)^β+g^(2)(t)x^γ, ].
where β,γ∈(0,1), v, w, z, g^(1), and g^(2) are all continuous and positive functions. Next, we define the recursive utility functional for the investor as follows:
J(t,x;α(·),c(·)):=Y(t;t,x,α(·),c(·)).
Hence, the problem for the investor is to identify the investment and consumption policy such that a mixture of power utilities of the running consumption and the terminal wealth is maximized.
Due to the general (t,x)-dependence of the generator and the terminal condition of the BSDE, the aforementioned stochastic control problem is TIC. Note that we are dealing with a utility maximization problem. With some simple transformation, it can be reformulated as a minimization of a TIC functional, to align with our framework.
Following the derivation in Section <ref>, we consider the Hamiltonian function (with the abuse of notations)
ℋ(t,s,x,y,a,c,u,p,q) =1/2σ^2a^2q+[ry+(μ-r)a-c]p
+[v(t,s)c^β-w(t,s)u+z(t,s)x^γ].
Maximizing it with respect to (a,c) yields the maxima (for p>0 and q<0)
a=-(μ-r)p/σ^2q, c=(p/β v(t,s))^1/β-1.
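For completeness, these expressions follow from the first-order conditions for ℋ (a short verification; recall β∈(0,1), v>0, p>0, and q<0, so the second-order conditions hold):
∂ℋ/∂ a=σ^2aq+(μ-r)p=0 ⟹ a=-(μ-r)p/σ^2q, ∂ℋ/∂ c=-p+β v(t,s)c^β-1=0 ⟹ c=(p/β v(t,s))^1/β-1,
and ∂^2ℋ/∂ a^2=σ^2q<0, ∂^2ℋ/∂ c^2=β(β-1)v(t,s)c^β-2<0, so the stationary point is indeed a maximizer.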
Consequently, the equilibrium policy admits the forms
a(s,y)=-(μ-r)u_y(s,s,x,y)|_x=y/σ^2u_yy(s,s,x,y)|_x=y, c(s,y)=(u_y(s,s,x,y)|_x=y/β v(s,s))^1/β-1
with u(t,s,x,y) being the solution to the following equilibrium HJB equation
{[ u_s(t,s,x,y) + (μ-r)^2(u_y(s,s,x,y)|_x=y)^2/2σ^2(u_yy(s,s,x,y)|_x=y)^2u_yy(t,s,x,y); + [ry-(μ-r)^2u_y(s,s,x,y)|_x=y/σ^2u_yy(s,s,x,y)|_x=y-(u_y(s,s,x,y)|_x=y/β v(s,s))^1/β-1]u_y(t,s,x,y); + v(t,s)(u_y(s,s,x,y)|_x=y/β v(s,s))^β/β-1-w(t,s)u(t,s,x,y)+z(t,s)x^γ=0,; u(t,T,x,y) = g^(1)(t)y^β+g^(2)(t)x^γ, 0≤ t≤ s≤ T, x,y∈(0,∞). ].
Now, by observing the terminal condition of the BSDE (<ref>), we consider the following ansatz for u:
u(t,s,x,y)=φ^(1)(t,s)y^β+φ^(2)(t,s)x^γ, 0≤ t≤ s≤ T, x,y∈(0,∞),
for φ^(1)(t,s) and φ^(2)(t,s) to be determined. Then, we have φ^(1)(t,T)=g^(1)(t), φ^(2)(t,T)=g^(2)(t), and
φ^(1)_s(t,s)y^β+φ^(2)_s(t,s)x^γ +(μ-r)^2(φ^(1)(s,s)β y^β-1)^2/2σ^2(φ^(1)(s,s)β(β-1)y^β-2)^2φ^(1)(t,s)β(β-1)y^β-2
+ [ry-(μ-r)^2φ^(1)(s,s)β y^β-1/σ^2φ^(1)(s,s)β(β-1)y^β-2-(φ^(1)(s,s)β y^β-1/β v(s,s))^1/β-1]φ^(1)(t,s)β y^β-1
+ v(t,s)(φ^(1)(s,s)β y^β-1/β v(s,s))^β/β-1-w(t,s)φ^(1)(t,s)y^β-w(t,s)φ^(2)(t,s)x^γ+z(t,s)x^γ
= {φ^(1)_s(t,s) +(μ-r)^2β/2σ^2(β-1)φ^(1)(t,s) + [rβ-(μ-r)^2β/σ^2(β-1)-β(φ^(1)(s,s)/v(s,s))^1/β-1]φ^(1)(t,s)
+ v(t,s)(φ^(1)(s,s)/ v(s,s))^β/β-1-w(t,s)φ^(1)(t,s)} y^β
+{φ^(2)_s(t,s)-w(t,s)φ^(2)(t,s)+z(t,s)}x^γ=0
Therefore, φ^(1)(t,s) and φ^(2)(t,s) satisfy the following ODEs:
{[ φ^(1)_s(t,s) + [k(t,s)-β(φ^(1)(s,s)/v(s,s))^1/β-1]φ^(1)(t,s)+ v(t,s)(φ^(1)(s,s)/ v(s,s))^β/β-1 = 0,; φ^(2)_s(t,s) - w(t,s)φ^(2)(t,s)+z(t,s)=0, 0≤ t≤ s≤ T,; φ^(1)(t,T) = g^(1)(t), φ^(2)(t,T) = g^(2)(t) , 0≤ t≤ T, ].
where k(t,s):=rβ-(μ-r)^2β/2σ^2(β-1)-w(t,s). Then, by variation of constants method, we have
φ^(1)(t,s) = exp{∫^T_s[k(t,τ)-β(φ^(1)(τ,τ)/v(τ,τ))^1/β-1]dτ}g^(1)(t)
+ ∫^T_sexp{∫^λ_s[k(t,τ)-β(φ^(1)(τ,τ)/v(τ,τ))^1/β-1]dτ}v(t,λ)(φ^(1)(λ,λ)/v(λ,λ))^β/β-1dλ,
and
φ^(2)(t,s) = exp{-∫^T_sw(t,τ)dτ}g^(2)(t)+ ∫^T_sexp{-∫^λ_sw(t,τ)dτ}z(t,λ)dλ,
for 0≤ t≤ s≤ T and x,y∈(0,∞). It is clear that (<ref>) admits a unique solution since it can be considered as a family of classical ODEs parameterized by t. Taking t=s in (<ref>) gives
φ^(1)(s,s) = exp{∫^T_s[k(s,τ)-β(φ^(1)(τ,τ)/v(τ,τ))^1/β-1]dτ}g^(1)(s)
+ ∫^T_sexp{∫^λ_s[k(s,τ)-β(φ^(1)(τ,τ)/v(τ,τ))^1/β-1]dτ}v(s,λ)(φ^(1)(λ,λ)/v(λ,λ))^β/β-1dλ.
Let us denote by
φ̄^(1)(s)=φ^(1)(s,s)/v(s,s), ḡ^(1)(s)=g^(1)(s)/v(s,s), v̄(t,s)=v(t,s)/v(t,t).
Then, we obtain a nonlinear integral equation for φ̄^(1)(s):
φ̄^(1)(s) = exp{∫^T_s[k(s,τ)-βφ̄^(1)(τ)^1/β-1]dτ}ḡ^(1)(s)
+ ∫^T_sexp{∫^λ_s[k(s,τ)-βφ̄^(1)(τ)^1/β-1]dτ}v̄(s,λ)φ̄^(1)(λ)^β/β-1dλ,
If (<ref>) is solvable, it follows that there exists a unique solution φ^(1)(t,s) of (<ref>). Moreover, if the classical parameterized ODE for φ^(2) in (<ref>) is solvable, the global solvability of (<ref>) can be obtained by arguments similar to those in <cit.>.
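Before turning to the rigorous claim, we note that the integral equation above also lends itself to a simple numerical treatment. The sketch below performs a plain Picard (fixed-point) iteration on a time grid; the model functions w, v, and g^(1) are placeholder choices made only for illustration, and no convergence guarantee beyond the contraction-type arguments of this section is intended.

```python
import numpy as np

# Placeholder market data (illustrative assumptions only; the example merely
# requires v, w, g^(1) to be continuous and positive).
r, mu, sigma, beta, T = 0.03, 0.07, 0.2, 0.5, 1.0
w  = lambda t, u: 0.1 + 0.0 * (t + u)                 # w(t, s)
v  = lambda t, u: np.exp(-0.05 * (u - t))             # v(t, s)
g1 = lambda t: 1.0 + 0.0 * t                          # g^(1)(t)
k  = lambda t, u: r * beta - (mu - r) ** 2 * beta / (2 * sigma ** 2 * (beta - 1)) - w(t, u)

s = np.linspace(0.0, T, 201)                          # time grid
g1bar = g1(s) / v(s, s)                               # \bar g^(1)(s)
vbar = lambda t, u: v(t, u) / v(t, t)                 # \bar v(t, s)

def trapz(ys, xs):
    return 0.0 if len(xs) < 2 else float(np.sum(0.5 * (ys[1:] + ys[:-1]) * np.diff(xs)))

def picard_step(phi):
    """One application of the integral operator defining \\bar{varphi}^(1)."""
    new = np.empty_like(phi)
    for i, si in enumerate(s):
        integrand = k(si, s) - beta * phi ** (1.0 / (beta - 1.0))
        E = np.zeros_like(s)                          # E[j] = int_{s_i}^{s_j} integrand
        for j in range(i + 1, len(s)):
            E[j] = E[j - 1] + 0.5 * (integrand[j] + integrand[j - 1]) * (s[j] - s[j - 1])
        term1 = np.exp(E[-1]) * g1bar[i]
        inner = np.exp(E[i:]) * vbar(si, s[i:]) * phi[i:] ** (beta / (beta - 1.0))
        new[i] = term1 + trapz(inner, s[i:])
    return new

phi = np.ones_like(s)                                 # positive initial guess
for _ in range(50):
    phi_next = picard_step(phi)
    if np.max(np.abs(phi_next - phi)) < 1e-10:
        phi = phi_next
        break
    phi = phi_next
print("bar phi^(1)(0) ~", phi[0])
```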
In what follows, we claim that the ODE system (<ref>) admits a unique positive solution (φ^(1),φ^(2))(t,s) for 0≤ t≤ s≤ T such that the TIC stochastic control problem (<ref>)-(<ref>) is solvable globally.
For v, w, and z:∇[0,T]→(0,∞) and g^(1), g^(2):[0,T]→(0,∞) being continuous, and s↦ v(t,s) being continuously differentiable, the ODE system (<ref>) admits a unique positive solution (φ^(1),φ^(2))(t,s) in 0≤ t≤ s≤ T. Consequently, the equilibrium value function and the equilibrium policy for problem (<ref>)-(<ref>) are given by
{[ V(s,y)=φ^(1)(s,s)y^β+φ^(2)(s,s)y^γ,; a(s,y)=-(μ-r)/σ^2(β-1)y,; c(s,y)=(φ^(1)(s,s)/ v(s,s))^1/β-1y, (s,y)∈[0,T]×(0,∞). ].
One could take advantage of fixed-point arguments to show the local well-posedness of (<ref>). Let us focus on proving the lower and upper bounds of φ̄^(1)(s), which suffices to guarantee the global existence of solutions of (<ref>) by an analytic continuation.
First, we let
φ̂^(1)(s)=φ̄^(1)(s)exp{β∫^T_sφ̄^(1)(τ)^1/β-1dτ},
ĝ(s)=ḡ^(1)(s)exp{∫^T_s k(s,τ)dτ}, v̂(t,s)=v̄(t,s)exp{∫^s_t k(t,τ)dτ}.
Then, (<ref>) can be rewritten as
φ̂^(1)(s)=ĝ(s)+∫^T_sφ̂^(1)(τ)φ̄^(1)(τ)^1/β-1v̂(s,τ)dτ, s∈[0,T].
Under some suitable conditions (see <cit.>), the following inequalities hold:
ĝ(s)≥ g_0>0, v̂(t,s)≥exp{-ϱ(s-t)}
for some constants g_0>0 and ϱ>0. From (<ref>), it is clear that
φ̂^(1)(s)≥ g_0+∫^T_sexp{-ϱ(τ-s)}φ̂^(1)(τ)φ̄^(1)(τ)^1/β-1dτ, s∈[0,T],
which is equivalent to
φ̂^(1)(s)exp{-ϱ s}≥ g_0exp{-ϱ s}+∫^T_s[φ̂^(1)(τ)exp{-ϱτ}]φ̄^(1)(τ)^1/β-1dτ≡ξ(s).
Since
ξ^'(s) =-ϱ g_0exp{-ϱ s}-[φ̂^(1)(s)exp{-ϱ s}]φ̄^(1)(s)^1/β-1
≤ -ϱ g_0exp{-ϱ s}-ξ(s)φ̄^(1)(s)^1/β-1,
we have
[ξ(s)exp{-∫^T_sφ̄^(1)(τ)^1/β-1dτ}]^'≤ -ϱ g_0exp{-ϱ s-∫^T_sφ̄^(1)(τ)^1/β-1dτ}.
Integrating both sides over [s,T], we obtain
ξ(s) ≥exp{∫^T_sφ̄^(1)(τ)^1/β-1dτ}
×[g_0exp{-ϱ T}+ϱ g_0∫^T_sexp{-ϱλ-∫^T_λφ̄^(1)(τ)^1/β-1dτ}dλ].
Consequently,
φ̄^(1)(s) =φ̂^(1)(s)exp{-β∫^T_sφ̄^(1)(τ)^1/β-1dτ}
≥exp{-β∫^T_sφ̄^(1)(τ)^1/β-1dτ+ϱ s}ξ(s)
≥exp{(1-β)∫^T_sφ̄^(1)(τ)^1/β-1dτ+ϱ s}
×[g_0exp{-ϱ T}+ϱ g_0∫^T_sexp{-ϱλ-∫^T_λφ̄^(1)(τ)^1/β-1dτ}dλ]
≥exp{-ϱ(T-s)}g_0≥δ>0.
The lower bound of φ̄^(1)(s) leads to
φ̄^(1)(s)^β/β-1=1/φ̄^(1)(s)^β/1-β≤1/δ^β/1-β≤ K,
On the other hand, (<ref>) implies that
φ̄^(1)(s) = exp{∫^T_s[k(s,τ)-βφ̄^(1)(τ)^1/β-1]dτ}ḡ^(1)(s)
+ ∫^T_sexp{∫^λ_s[k(s,τ)-βφ̄^(1)(τ)^1/β-1]dτ}v̄(s,λ)φ̄^(1)(λ)^β/β-1dλ
≤ exp{∫^T_s k(s,τ)dτ}ḡ^(1)(s)+∫^T_sexp{∫^λ_s k(s,τ)dτ}v̄(s,λ)1/δ^β/1-βdλ
≤ K.
The proof is completed.
§ CONCLUSION
For the TIC stochastic control problems with initial-time and -state dependent objectives, their well-posedness issues are shown to be equivalent to that of a class of nonlocal fully nonlinear PDEs, provided that the optimum of the Hamiltonian is attainable. We sequentially establish the global well-posedness of the linear and the fully nonlinear nonlocal PDEs. While the fully nonlinear case would require a sharp prior estimate for the global well-posedness, we show that its special case of nonlocal quasilinear PDEs, which correspond to the state processes with only drift being controlled, possess global well-posedness with mild conditions. On top of the well-posedness results, we also provide the probabilistic representation of the solution to the nonlocal fully nonlinear PDEs and an estimate on the difference between the value functions of the sophisticated and naïve controllers.
This work has partially addressed the open problems left in <cit.>. It also provides a new way to study the similar HJB equations of the equilibrium type. Along this research direction, the following future research is promising: 1) extending our results from second-order PDE to a higher-order system (inspired by <cit.>); 2) extending our results from Markovian to non-Markovian setting (referring to <cit.>); 3) expressing conditions (<ref>) in terms of coefficients and data of the local and nonlocal fully nonlinear PDEs (<ref>).
§ HÖLDER REGULARITIES OF ∂_I u(s,s,x,y)|_x=y
We investigate the Hölder continuities of |∂_I u(s,s,x,y)|_x=y|^(α)_(s,y)∈[0,δ]×ℝ^d in s and y for |I|=0,1,2, which are required by Theorem <ref> to obtain the Schauder estimate (<ref>) of solutions of nonlocal linear PDE (<ref>). Upon the analysis of (<ref>), we need to estimate all terms (E_1-E_8) in Table <ref>. Let us begin with |∂_I u(s,s,x,y)|_x=y|^(0)_(s,y)∈[0,δ]×ℝ^d for |I|=0,1,2.
(Estimate for E_1) By making use of integral representations (<ref>)-(<ref>), for any fixed (t,x)∈[0,δ]×ℝ^d and |I|=0, we have
|u(t,s,x,y)|
≤ C∫^s_0dτ∫_ℝ^d(s-τ)^-d/2exp{-cϖ(s,τ,y,ξ)}∑_|I|≤ 2|ℐ^I[∂ u/∂ t,∂ u/∂ x](t,τ,x,ξ)|dξ
+ Cs|f(t,s,x,y)|^(α)_(s,y)∈[0,δ]×ℝ^d,
where the constants C and c are independent of (t,s,x,y) while they only depend on ‖ A^I‖^(α)_[0,T] and ‖ B^I‖^(α)_[0,T], and ϖ(s,τ,y,ξ)=∑^d_i=1|y_i-ξ_i|^2(s-τ)^-1. Consequently, when (t,x)=(s,y)∈[0,δ]×ℝ^d, it is clear that
|u(s,s,y,y)|
≤ C∫^s_0dτ∫_ℝ^d(s-τ)^-d/2exp{-cϖ}(|s-τ|+|y-ξ|)[u]^(2+α)_[0,δ]dξ+ Cs‖ f‖^(α)_[0,δ]
≤ C(δ^2+δ^3/2)[u]^(2+α)_[0,δ]+Cδ‖ f‖^(α)_[0,δ].
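Throughout this appendix, the passage from kernel integrals to powers of (s-τ), as in the last step above, relies on the elementary Gaussian moment bound (recorded here for convenience; a≥0 and c is the constant in the exponent):
∫_ℝ^d(s-τ)^-d/2exp{-cϖ(s,τ,y,ξ)}|y-ξ|^a dξ≤ C_a,c,d(s-τ)^a/2,
which follows from the change of variables ξ=y+(s-τ)^1/2η.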
(Estimate for E_2) For any fixed (t,x)∈[0,δ]×ℝ^d and |I|=1,2, we have
|∂_Iu(t,s,x,y)|
≤ C∫^s_0dτ∫_ℝ^d(s-τ)^-d+|I|/2exp{-cϖ(s,τ,y,ξ)}
×∑_|I|≤ 2|(B^Iℐ^I)(t,τ,x,ξ)-(B^Iℐ^I)(t,τ,x,y)|dξ
+ C∫^s_0(s-τ)^-|I|-α/2∑_|I|≤ 2|ℐ^I(t,τ,x,y)|dτ + Cs^2-|I|+α/2|f(t,s,x,y)|^(α)_(s,y)∈[0,δ]×ℝ^d
≤ C∫^s_0dτ∫_ℝ^d(s-τ)^-d+|I|/2exp{-cϖ(s,τ,y,ξ)}
×∑_|I|≤ 2|B^I(t,τ,x,ξ)-B^I(t,τ,x,y)||ℐ^I(t,τ,x,ξ)|dξ
+C∫^s_0dτ∫_ℝ^d(s-τ)^-d+|I|/2exp{-cϖ(s,τ,y,ξ)}
×∑_|I|≤ 2|ℐ^I(t,τ,x,ξ)-ℐ^I(t,τ,x,y)||B^I(t,τ,x,y)|dξ
+ C∫^s_0(s-τ)^-|I|-α/2∑_|I|≤ 2|ℐ^I(t,τ,x,y)|dτ + Cs^2-|I|+α/2|f(t,s,x,y)|^(α)_(s,y)∈[0,δ]×ℝ^d.
Hence, when t=s and x=y, it holds that
|∂_Iu(s,s,x,y)|_x=y|
≤ C∫^s_0dτ∫_ℝ^d(s-τ)^-d+|I|/2exp{-cϖ(s,τ,y,ξ)}
× |y-ξ|^α(|s-τ|+|y-ξ|)[u]^(2+α)_[0,δ]dξ
+
C∫^s_0dτ∫_ℝ^d(s-τ)^-d+|I|/2exp{-cϖ(s,τ,y,ξ)}
×∑_|I|≤ 2{|∫^s_τ∂_I(∂ u/∂ t)(θ_t,τ,x,ξ)|_x=ydθ_t-∫^s_τ∂_I(∂ u/∂ t)(θ_t,τ,x,y)|_x=ydθ_t|
+∑_1≤ i≤ d|∫^y_i_ξ_i∂_I(∂ u/∂ x_i)(τ,τ,x_1,⋯,x_i-1,θ_i,x_i+1,⋯,x_d,ξ)|_x_j=ξ_j, j<i; x_j=y_j, j>i dθ_i
-∫^y_i_y_i∂_I(∂ u/∂ x_i)(τ,τ,x_1,⋯,x_i-1,θ_i,x_i+1,⋯,x_d,y)|_x_j=y_j, j<i; x_j=y_j, j>i dθ_i|} dξ
+ C∫^s_0(s-τ)^-|I|-α/2∑_|I|≤ 2|ℐ^I(t,τ,x,y)|dτ + Cs^2-|I|+α/2‖ f‖^(α)_[0,δ].
Subsequently, it follows that
|∂_Iu(s,s,x,y)|_x=y| ≤ C∫^s_0((s-τ)^-|I|-α-2/2+(s-τ)^-|I|-α-1/2)[u]^(2+α)_[0,δ]dτ
+
C∫^s_0dτ∫_ℝ^d(s-τ)^-d+|I|/2exp{-cϖ(s,τ,y,ξ)}
×{(s-τ)|y-ξ|^α+|y-ξ|}[u]^(2+α)_[0,δ] dξ
+ C∫^s_0(s-τ)^-|I|-α/2(s-τ)[u]^(2+α)_[0,δ] dτ + Cs^2-|I|+α/2‖ f‖^(α)_[0,δ]
≤ C∫^s_0((s-τ)^-|I|-α-2/2+(s-τ)^-|I|-α-1/2)[u]^(2+α)_[0,δ]dτ
+ C∫^s_0((s-τ)^-|I|-α-2/2+(s-τ)^-|I|-1/2)[u]^(2+α)_[0,δ] dτ
+ C∫^s_0(s-τ)^-|I|-α/2(s-τ)[u]^(2+α)_[0,δ] dτ + Cs^2-|I|+α/2‖ f‖^(α)_[0,δ]
≤ C(δ^4-|I|+α/2+δ^3-|I|+α/2+δ^3-|I|/2)[u]^(2+α)_[0,δ]+Cδ^2-|I|+α/2‖ f‖^(α)_[0,δ]
With the estimates (<ref>) and (<ref>) of |∂_Iu(s,s,x,y)|_x=y| for |I|=0,1,2, we find that |∂_I u(s,s,x,y)|_x=y| is bounded in terms of [u]^(2+α)_[0,δ] and ‖ f‖^(α)_[0,δ]. Moreover, the power of δ appearing in front of [u]^(2+α)_[0,δ] in (<ref>) and (<ref>) is significant: it can be made small by choosing a suitably small δ∈(0,T], which is what allows us to establish the 1/2-contraction in (<ref>).
Next, we will also show that E_3-E_8 possess similar properties.
(Estimate for E_3) For |I|=0, by (<ref>), we have
|u(η_t,s^',η_x,y^')|
≤ C∫^s^'_0dτ∫_ℝ^d(s^'-τ)^-d/2exp{-cϖ(s^',τ,y^',ξ)}
×∑_|I|≤ 2{|∫^η_t_τ∂_I(∂ u/∂ t)(θ_t,τ,x,ξ)|_x=η_xdθ_t|
+∑_1≤ i≤ d|∫^(η_x)_i_ξ_i∂_I(∂ u/∂ x_i)(τ,τ,x_1,⋯,x_i-1,θ_i,x_i+1,⋯,x_d,ξ)|_x_j=ξ_j, j<i; x_j=(η_x)_j, j>i dθ_i|} dξ
+Cs^'‖ f‖^(α)_[0,δ]
≤ C∫^s^'_0dτ∫_ℝ^d(s^'-τ)^-d/2exp{-cϖ(s^',τ,y^',ξ)}
×(|η_t-τ|+|η_x-ξ|)[u]^(2+α)_[0,δ]dξ+ Cs^'‖ f‖^(α)_[0,δ]
≤ C∫^s^'_0dτ∫_ℝ^d(s^'-τ)^-d/2exp{-cϖ(s^',τ,y^',ξ)}
×(|η_t-s^'|+|s^'-τ|+|η_x-y^'|+|y^'-ξ|)[u]^(2+α)_[0,δ]dξ+ Cs^'‖ f‖^(α)_[0,δ]
≤ C(|η_t-s^'||s^'|+|s^'|^2+|η_x-y^'||s^'|+|s^'|^3/2)[u]^(2+α)_[0,δ]+ Cs^'‖ f‖^(α)_[0,δ]
≤ C(δ^2+δ+δ^3/2)[u]^(2+α)_[0,δ]+ Cδ‖ f‖^(α)_[0,δ].
(Estimate for E_4) For |I|=1,2, the representation (<ref>) implies that
|∂_Iu(η_t,s^',x,y^')_x=η_x|
≤ C∫^s^'_0dτ∫_ℝ^d(s^'-τ)^-d+|I|/2exp{-cϖ(s^',τ,y^',ξ)}
×∑_|I|≤ 2|(B^Iℐ^I)(η_t,τ,η_x,ξ)-(B^Iℐ^I)(η_t,τ,η_x,y^')|dξ
+ C∫^s^'_0(s^'-τ)^-|I|-α/2∑_|I|≤ 2|ℐ^I(η_t,τ,η_x,y^')|dτ + C|s^'|^2-|I|+α/2‖ f‖^(α)_[0,δ]
≤ C∫^s^'_0dτ∫_ℝ^d(s^'-τ)^-d+|I|/2exp{-cϖ(s^',τ,y^',ξ)}
×∑_|I|≤ 2|B^I(η_t,τ,η_x,ξ)-B^I(η_t,τ,η_x,y^')||ℐ^I(η_t,τ,η_x,ξ)|dξ
+
C∫^s^'_0dτ∫_ℝ^d(s^'-τ)^-d+|I|/2exp{-cϖ(s^',τ,y^',ξ)}
×∑_|I|≤ 2|ℐ^I(η_t,τ,η_x,ξ)-ℐ^I(η_t,τ,η_x,y^')||B^I(η_t,τ,η_x,y^')|dξ
+ C∫^s^'_0(s^'-τ)^-|I|-α/2∑_|I|≤ 2|ℐ^I(η_t,τ,η_x,y^')|dτ + C|s^'|^2-|I|+α/2‖ f‖^(α)_[0,δ]
By making use of the representation of (<ref>), we have
|∂_Iu(η_t,s^',x,y^')_x=η_x|
≤ C∫^s^'_0dτ∫_ℝ^d(s^'-τ)^-d+|I|/2exp{-cϖ(s^',τ,y^',ξ)}
×|y^'-ξ|^α(|η_t-τ|+|η_x-ξ|)[u]^(2+α)_[0,δ] dξ
+
C∫^s^'_0dτ∫_ℝ^d(s^'-τ)^-d+|I|/2exp{-cϖ(s^',τ,y^',ξ)}
×∑_|I|≤ 2{|∫^η_t_τ∂_I(∂ u/∂ t)(θ_t,τ,x,ξ)|_x=η_xdθ_t-∫^η_t_τ∂_I(∂ u/∂ t)(θ_t,τ,x,y^')|_x=η_xdθ_t|
+∑_1≤ i≤ d|∫^(η_x)_i_ξ_i∂_I(∂ u/∂ x_i)(τ,τ,x_1,⋯,x_i-1,θ_i,x_i+1,⋯,x_d,ξ)|_x_j=ξ_j, j<i; x_j=(η_x)_j, j>i dθ_i
-∫^(η_x)_i_y^'_i∂_I(∂ u/∂ x_i)(τ,τ,x_1,⋯,x_i-1,θ_i,x_i+1,⋯,x_d,y^')|_x_j=y^'_j, j<i; x_j=(η_x)_j, j>i dθ_i|} dξ
+ C∫^s^'_0(s^'-τ)^-|I|-α/2(|η_t-τ|+|η_x-y^'|)[u]^(2+α)_[0,δ] dτ + C|s^'|^2-|I|+α/2‖ f‖^(α)_[0,δ]
Moreover, we have
|∫^(η_x)_i_ξ_i∂_I(∂ u/∂ x_i)(τ,τ,x_1,⋯,x_i-1,θ_i,x_i+1,⋯,x_d,ξ)|_x_j=ξ_j, j<i; x_j=(η_x)_j, j>i dθ_i
-∫^(η_x)_i_y^'_i∂_I(∂ u/∂ x_i)(τ,τ,x_1,⋯,x_i-1,θ_i,x_i+1,⋯,x_d,y^')|_x_j=y^'_j, j<i; x_j=(η_x)_j, j>i dθ_i|
≤ |∫^(η_x)_i_ξ_i∂_I(∂ u/∂ x_i)(τ,τ,x_1,⋯,x_i-1,θ_i,x_i+1,⋯,x_d,ξ)|_x_j=ξ_j, j<i; x_j=(η_x)_j, j>i dθ_i
-∫^(η_x)_i_y^'_i∂_I(∂ u/∂ x_i)(τ,τ,x_1,⋯,x_i-1,θ_i,x_i+1,⋯,x_d,ξ)|_x_j=ξ_j, j<i; x_j=(η_x)_j, j>i dθ_i|
+|∫^(η_x)_i_y^'_i∂_I(∂ u/∂ x_i)(τ,τ,x_1,⋯,x_i-1,θ_i,x_i+1,⋯,x_d,ξ)|_x_j=ξ_j, j<i; x_j=(η_x)_j, j>i dθ_i
-∫^(η_x)_i_y^'_i∂_I(∂ u/∂ x_i)(τ,τ,x_1,⋯,x_i-1,θ_i,x_i+1,⋯,x_d,ξ)|_x_j=y^'_j, j<i; x_j=(η_x)_j, j>i dθ_i|
+|∫^(η_x)_i_y^'_i∂_I(∂ u/∂ x_i)(τ,τ,x_1,⋯,x_i-1,θ_i,x_i+1,⋯,x_d,ξ)|_x_j=y^'_j, j<i; x_j=(η_x)_j, j>i dθ_i
-∫^(η_x)_i_y^'_i∂_I(∂ u/∂ x_i)(τ,τ,x_1,⋯,x_i-1,θ_i,x_i+1,⋯,x_d,y^')|_x_j=y^'_j, j<i; x_j=(η_x)_j, j>i dθ_i|
≤ (|y^'-ξ|+|η_x-y^'||y^'-ξ|+|η_x-y^'||y^'-ξ|^α)[u]^(2+α)_[0,δ].
Consequently, we obtain
|∂_Iu(η_t,s^',x,y^')_x=η_x|
≤ C∫^s^'_0dτ∫_ℝ^d(s^'-τ)^-d+|I|/2exp{-cϖ(s^',τ,y^',ξ)}
×(y^'-ξ)^α(|η_t-τ|+|η_x-ξ|)[u]^(2+α)_[0,δ] dξ
+
C∫^s^'_0dτ∫_ℝ^d(s^'-τ)^-d+|I|/2exp{-cϖ(s^',τ,y^',ξ)}
×{|η_t-τ||y^'-ξ|^α +|y^'-ξ|+|η_x-y^'||y^'-ξ|+|η_x-y^'||y^'-ξ|^α}[u]^(2+α)_[0,δ] dξ
+ C∫^s^'_0(s^'-τ)^-|I|-α/2(|η_t-τ|+|η_x-y^'|)[u]^(2+α)_[0,δ] dτ + C|s^'|^2-|I|+α/2‖ f‖^(α)_[0,δ]
≤ C∫^s^'_0dτ∫_ℝ^d(s^'-τ)^-d+|I|/2exp{-cϖ(s^',τ,y^',ξ)}
×(y^'-ξ)^α(|η_t-s^'|+|s^'-τ|+|η_x-y^'|+|y^'-ξ|)[u]^(2+α)_[0,δ] dξ
+
C∫^s^'_0dτ∫_ℝ^d(s^'-τ)^-d+|I|/2exp{-cϖ(s^',τ,y^',ξ)}
×{(|η_t-s^'|+|s^'-τ|)|y^'-ξ|^α +|y^'-ξ|
+|η_x-y^'||y^'-ξ|+|η_x-y^'||y^'-ξ|^α}[u]^(2+α)_[0,δ] dξ
+ C∫^s^'_0(s^'-τ)^-|I|-α/2(|η_t-s^'|+|s^'-τ|+|η_x-y^'|)[u]^(2+α)_[0,δ] dτ
+ C|s^'|^2-|I|+α/2‖ f‖^(α)_[0,δ].
Simple calculation yields that
|∂_Iu(η_t,s^',x,y^')_x=η_x|
≤ C(|η_t-s^'||s^'|^4-|I|+α/2 +|s^'|^4-|I|+α/2+|η_x-y^'||s^'|^2-|I|+α/2+|s^'|^3-|I|+α/2)[u]^(2+α)_[0,δ]
+
C(|η_t-s^'||s^'|^2-|I|+α/2+|s^'|^4-|I|+α/2
+|s^'|^3-|I|/2+|η_x-y^'||s^'|^3-|I|/2+|η_x-y^'||s^'|^2-|I|+α/2)[u]^(2+α)_[0,δ]
+C(|η_t-s^'||s^'|^2-|I|+α/2+|s^'|^4-|I|+α/2
+|η_x-y^'||s^'|^2-|I|+α/2)[u]^(2+α)_[0,δ]+ C|s^'|^2-|I|+α/2‖ f‖^(α)_[0,δ]
≤ C(δ^6-|I|+α/2+δ^4-|I|+α/2+δ^3-|I|+α/2
+δ^2-|I|+α/2+δ^3-|I|/2)[u]^(2+α)_[0,δ]+ Cδ^2-|I|+α/2‖ f‖^(α)_[0,δ].
After the analyses of E_3-E_4, namely (<ref>) and (<ref>), we turn to investigate the difference quotient of (<ref>), i.e. |∂_I u(s,s^',x,y^')|_x=y-∂_I u(s,s,x,y)|_x=y|/(|s^'-s|^α/2+|y^'-y|^α). In what follows, we need to study the difference between ∂_I u(s,s^',x,y^')|_x=y and ∂_I u(s,s,x,y)|_x=y for |I|=0,1,2.
(Estimate for E_5) First of all, we consider the difference in the case that s≤ρ^2. Then, for |I|=0, the estimate of E_3 tells us that
|u(s,s^',y,y^')| ≤ C(|s-s^'||s^'|+|s^'|^2+|y-y^'||s^'|+|s^'|^3/2)[u]^(2+α)_[0,δ]+ C|s^'|‖ f‖^(α)_[0,δ].
In addition, from E_1, we also have
|u(s,s,y,y)|
≤ C∫^s_0dτ∫_ℝ^d(s-τ)^-d/2exp{-cϖ}(|s-τ|+|y-ξ|)[u]^(2+α)_[0,δ]dξ+ Cs‖ f‖^(α)_[0,δ]
≤ C(s^2+s^3/2)[u]^(2+α)_[0,δ]+Cs‖ f‖^(α)_[0,δ].
Consequently, it follows that
|u(s,s^',y,y^')-u(s,s,y,y)|
≤ C(|s-s^'||s^'|+|s^'|^2+|y-y^'||s^'|+|s^'|^3/2)[u]^(2+α)_[0,δ]+ C|s^'|‖ f‖^(α)_[0,δ]
+ C(s^2+s^3/2)[u]^(2+α)_[0,δ]+Cs‖ f‖^(α)_[0,δ]
≤ C(|s-s^'||s^'|+|s^'||s^'-s+s|+|y-y^'||s^'|+|s^'|^1/2|s^'-s+s|)[u]^(2+α)_[0,δ]
+ C(s^2+s^3/2)[u]^(2+α)_[0,δ]+Cs‖ f‖^(α)_[0,δ]+ C|s^'-s+s|‖ f‖^(α)_[0,δ]
≤
Cδ^1/2ρ^α[u]^(2+α)_[0,δ]+Cρ^α‖ f‖^(α)_[0,δ].
(Estimate for E_6) Similarly, for s≤ρ^2 and |I|=1,2, the term E_4 yields that
|∂_Iu(s,s^',x,y^')_x=y|
≤ C(|s-s^'||s^'|^4-|I|+α/2 +|s^'|^4-|I|+α/2+|y-y^'||s^'|^2-|I|+α/2+|s^'|^3-|I|+α/2)[u]^(2+α)_[0,δ]
+
C(|s-s^'||s^'|^2-|I|+α/2+|s^'|^4-|I|+α/2
+|s^'|^3-|I|/2+|y-y^'||s^'|^3-|I|/2+|y-y^'||s^'|^2-|I|+α/2)[u]^(2+α)_[0,δ]
+
C(|s-s^'||s^'|^2-|I|+α/2+|s^'|^4-|I|+α/2
+|y-y^'||s^'|^2-|I|+α/2)[u]^(2+α)_[0,δ]+ C|s^'|^2-|I|+α/2‖ f‖^(α)_[0,δ]
Moreover, from E_2, it is clear that
|∂_Iu(s,s,x,y)|_x=y|
≤ C∫^s_0((s-τ)^-|I|-α-2/2+(s-τ)^-|I|-α-1/2)[u]^(2+α)_[0,δ]dτ
+
C∫^s_0((s-τ)^-|I|-α-2/2+(s-τ)^-|I|-1/2)[u]^(2+α)_[0,δ] dτ
+ C∫^s_0(s-τ)^-|I|-α/2(s-τ)[u]^(2+α)_[0,δ] dτ + Cs^2-|I|+α/2‖ f‖^(α)_[0,δ]
≤ C(s^4-|I|+α/2+s^3-|I|+α/2+s^3-|I|/2)[u]^(2+α)_[0,δ]+Cs^2-|I|+α/2‖ f‖^(α)_[0,δ]
Hence, it holds that
|∂_Iu(s,s^',x,y^')|_x=y-∂_Iu(s,s,x,y)|_x=y|
≤ |∂_Iu(s,s^',x,y^')|_x=y|+|∂_Iu(s,s,x,y)|_x=y|
≤
C(|s-s^'||s^'|^4-|I|+α/2 +|s^'|^4-|I|/2|s^'-s+s|^α/2
+|y-y^'||s^'|^2-|I|+α/2+|s^'|^3-|I|/2|s^'-s+s|^α/2)[u]^(2+α)_[0,δ]
+
C(|s-s^'||s^'|^2-|I|+α/2+|s^'|^4-|I|/2|s^'-s+s|^α/2
+|s^'|^3-|I|-α/2|s^'-s+s|^α/2+|y-y^'||s^'|^3-|I|/2+|y-y^'||s^'|^2-|I|+α/2)[u]^(2+α)_[0,δ]
+
C(|s-s^'||s^'|^2-|I|+α/2+|s^'|^4-|I|/2|s^'-s+s|^α/2
+|y-y^'||s^'|^2-|I|+α/2)[u]^(2+α)_[0,δ]+ C|s^'-s+s|^2-|I|+α/2‖ f‖^(α)_[0,δ]
+ C(s^4-|I|/2s^α/2+s^3-|I|/2s^α/2+s^3-|I|-α/2s^α/2)[u]^(2+α)_[0,δ]+Cs^2-|I|+α/2‖ f‖^(α)_[0,δ]
≤ C(δ^3-|I|-α/2+δ^2-|I|+α/2)ρ^α[u]^(2+α)_[0,δ]+Cδ^2-|I|/2ρ^α‖ f‖^(α)_[0,δ].
(Estimate for E_7) Next, we consider ρ^2<s. We examine the difference between ∂_I u(s,s^',x,y^')|_x=y and ∂_I u(s,s,x,y)|_x=y for |I|=0. By (<ref>), we have
|u(t,s^',x,y^')-u(t,s,x,y)|
≤ |∫^s^'_0dτ∫_ℝ^dZ(s^',τ,y^',ξ;t,x)∑_|I|≤ 2B^I(t,τ,x,ξ)ℐ^I(t,τ,x,ξ)dξ.
.-∫^s_0dτ∫_ℝ^dZ(s,τ,y,ξ;t,x)∑_|I|≤ 2B^I(t,τ,x,ξ)ℐ^I(t,τ,x,ξ)dξ|
+ |∫^s^'_0dτ∫_ℝ^dZ(s^',y^')f(t,τ,x,ξ)dξ-∫^s_0dτ∫_ℝ^dZ(s,y)f(t,τ,x,ξ)dξ|=:T_1+T_2,
the first term T_1 of which is analyzed as follows:
|T_1| ≤|∫^s^'_0dτ∫_ℝ^dZ(s^',τ,y^',ξ;t,x)∑_|I|≤ 2B^I(t,τ,x,ξ)ℐ^I(t,τ,x,ξ)dξ.
.-∫^s_0dτ∫_ℝ^dZ(s^',τ,y^',ξ;t,x)∑_|I|≤ 2B^I(t,τ,x,ξ)ℐ^I(t,τ,x,ξ)dξ|
+|∫^s_0dτ∫_ℝ^dZ(s^',τ,y^',ξ;t,x)∑_|I|≤ 2B^I(t,τ,x,ξ)ℐ^I(t,τ,x,ξ)dξ.
.-∫^s_0dτ∫_ℝ^dZ(s,τ,y,ξ;t,x)∑_|I|≤ 2B^I(t,τ,x,ξ)ℐ^I(t,τ,x,ξ)dξ|.
Subsequently, we have
|T_1| ≤ C∫^s^'_sdτ∫_ℝ^d(s^'-τ)^-d/2exp{-cϖ(s^',τ,y^',ξ)}∑_|I|≤ 2|ℐ^I(t,τ,x,ξ)|dξ
+∫^s_0dτ∫_ℝ^d|Z(s,τ,y,ξ;t,x)-Z(s,τ,y^',ξ;t,x)|∑_|I|≤ 2|ℐ^I(t,τ,x,ξ)|dξ
+∫^s_0dτ∫_ℝ^d|Z(s,τ,y^',ξ;t,x)-Z(s^',τ,y^',ξ;t,x)|∑_|I|≤ 2|ℐ^I(t,τ,x,ξ)|dξ
≤
C∫^s^'_sdτ∫_ℝ^d(s^'-τ)^-d/2exp{-cϖ(s^',τ,y^',ξ)}∑_|I|≤ 2|ℐ^I(t,τ,x,ξ)|dξ
+C∫^s_0dτ |y^'-y|^α∫_ℝ^d|s-τ|^-d+α/2exp{-cϖ(s,τ,y,ξ)}∑_|I|≤ 2|ℐ^I(t,τ,x,ξ)|dξ
+C∫^s_0dτ |s^'-s|^α/2∫_ℝ^d|s-τ|^-d+α/2exp{-cϖ(s,τ,y^',ξ)}∑_|I|≤ 2|ℐ^I(t,τ,x,ξ)|dξ.
Similarly, we can obtain the estimate for T_2. Then, when (t,x)=(s,y), it follows that
|u(s,s^',y,y^')-u(s,s,y,y)|
≤
C∫^s^'_sdτ∫_ℝ^d(s^'-τ)^-d/2exp{-cϖ(s^',τ,y^',ξ)}(|s-τ|+|y-ξ|)[u]^(2+α)_[0,δ]dξ
+C∫^s_0dτ |y^'-y|^α∫_ℝ^d|s-τ|^-d+α/2exp{-cϖ(s,τ,y,ξ)}
×(|s-τ|+|y-ξ|)[u]^(2+α)_[0,δ]dξ
+C∫^s_0dτ |s^'-s|^α/2∫_ℝ^d|s-τ|^-d+α/2exp{-cϖ(s,τ,y^',ξ)}
×(|s-τ|+|y-ξ|)[u]^(2+α)_[0,δ]dξ
+C∫^s^'_sdτ∫_ℝ^d(s^'-τ)^-d/2exp{-cϖ(s^',τ,y^',ξ)}‖ f‖^(α)_[0,δ]dξ
+C∫^s_0dτ |y^'-y|^α∫_ℝ^d|s-τ|^-d+α/2exp{-cϖ(s,τ,y,ξ)}‖ f‖^(α)_[0,δ]dξ
+C∫^s_0dτ |s^'-s|^α/2∫_ℝ^d|s-τ|^-d+α/2exp{-cϖ(s,τ,y^',ξ)}‖ f‖^(α)_[0,δ]dξ.
Consequently, we have
|u(s,s^',y,y^')-u(s,s,y,y)|
≤
C(|s^'-s|^2+|y^'-y||s^'-s|+|s^'-s|^3/2)[u]^(2+α)_[0,δ]
+(|y^'-y|s^4-α/2+|y^'-y|s^3-α/2)[u]^(2+α)_[0,δ]
+(|s^'-s|^α/2s^4-α/2+|s^'-s|^α/2|y^'-y|s^2-α/2+|s^'-s|^α/2s^3-α/2)[u]^(2+α)_[0,δ]
+Cρ^α s^2-α/2‖ f‖^(α)_[0,δ]
≤ C(ρ^αδ^4-α/2+ρ^αδ+ρ^αδ^3-α/2)[u]^(2+α)_[0,δ]+Cρ^αδ^2-α/2‖ f‖^(α)_[0,δ].
(Estimate for E_8) Next, we consider the cases where ρ^2<s and |I|=1,2. According to the representation (<ref>) of ∂_Iu(t,s,x,y), we have
_s,y∂_Iu(t,s,x,y):=∂_Iu(t,s,x,y)-∂_Iu(t,s^',x,y^')
= ∫^s-λ_0dτ∫_ℝ^d_s,y∂_IZ(s,τ,y,ξ;t,x)
×∑_|I|≤ 2[(B^Iℐ^I)(t,τ,x,ξ)-(B^Iℐ^I)(t,τ,x,y)]dξ
+∫^s_s-λdτ∫_ℝ^d∂_IZ∑_|I|≤ 2[(B^Iℐ^I)(t,τ,x,ξ)-(B^Iℐ^I)(t,τ,x,y)]dξ
+∫^s-λ_0(_s,y∂_I∫_ℝ^dZdξ)∑_|I|≤ 2(B^Iℐ^I)(t,τ,x,y)dτ
+∫^s_s-λ∂_I∫_ℝ^dZdξ∑_|I|≤ 2(B^Iℐ^I)(t,τ,x,y)dτ
-∫^s^'_s-λdτ∫_ℝ^d∂_I,y^'Z(s^',τ,y^',ξ;t,x)
×∑_|I|≤ 2[(B^Iℐ^I)(t,τ,x,ξ)-(B^Iℐ^I)(t,τ,x,y^')]dξ
-∫^s^'_s-λ∂_I,y^'∫_ℝ^dZ(s^',τ,y^',ξ;t,x)dξ∑_|I|≤ 2(B^Iℐ^I)(t,τ,x,y^')+J_7=:∑^7_i=1J_i,
where λ=1/2ρ^2 and the last term J_7 is equal to the sum of the first six terms (J_1-J_6) with all ∑_|I|≤ 2(B^Iℐ^I) replaced by f.
(J_1-term) We have
|J_1|≤ ∫^s-λ_0dτ∫_ℝ^d(|_y-y^',y∂_IZ(s,τ,y,ξ;t,x)|+|_s-s^',s∂_I,y^'Z(s,τ,y^',ξ;t,x)|)
×∑_|I|≤ 2|(B^Iℐ^I)(t,τ,x,ξ)-(B^Iℐ^I)(t,τ,x,y)|dξ
≤ ∫^s-λ_0dτ∫_ℝ^d|_y-y^',y∂_IZ(s,τ,y,ξ;t,x)|
×∑_|I|≤ 2{|B^I(t,τ,x,ξ)-B^I(t,τ,x,y)||ℐ^I(t,τ,x,ξ)|
+ |ℐ^I(t,τ,x,ξ)-ℐ^I(t,τ,x,y)||B^I(t,τ,x,y)|} dξ
+∫^s-λ_0dτ∫_ℝ^d|_s-s^',s∂_I,y^'Z(s,τ,y^',ξ;t,x)|
×∑_|I|≤ 2{|(B^Iℐ^I)(t,τ,x,ξ)-(B^Iℐ^I)(t,τ,x,y^')|
+|(B^Iℐ^I)(t,τ,x,y^')-(B^Iℐ^I)(t,τ,x,y)| } dξ.
Moreover, it holds that
|J_1|≤ ∫^s-λ_0dτ∫_ℝ^d|_y-y^',y∂_IZ(s,τ,y,ξ;t,x)|
×∑_|I|≤ 2{|B^I(t,τ,x,ξ)-B^I(t,τ,x,y)||ℐ^I(t,τ,x,ξ)|
+ |ℐ^I(t,τ,x,ξ)-ℐ^I(t,τ,x,y)||B^I(t,τ,x,y)|} dξ
+∫^s-λ_0dτ∫_ℝ^d|_s-s^',s∂_I,y^'Z(s,τ,y^',ξ;t,x)|
×∑_|I|≤ 2{|B^I(t,τ,x,ξ)-B^I(t,τ,x,y^')||ℐ^I(t,τ,x,ξ)|
+ |ℐ^I(t,τ,x,ξ)-ℐ^I(t,τ,x,y^')||B^I(t,τ,x,y^')|}
+{|B^I(t,τ,x,y)-B^I(t,τ,x,y^')||ℐ^I(t,τ,x,y)|
+ |ℐ^I(t,τ,x,y)-ℐ^I(t,τ,x,y^')||B^I(t,τ,x,y^')|}dξ.
From the regularities of the fundamental solution Z and B^I, we have
|J_1| ≤ C|y^'-y|^α∫^s-λ_0dτ∫_ℝ^d(s-τ)^-d+|I|+α/2exp{-cϖ(s,τ,y,ξ)}
×∑_|I|≤ 2{|y-ξ|^α|ℐ^I(t,τ,x,ξ)|+ |ℐ^I(t,τ,x,ξ)-ℐ^I(t,τ,x,y)|} dξ
+C|s^'-s|^α/2∫^s-λ_0dτ∫_ℝ^d(s-τ)^-d+|I|+α/2exp{-cϖ(s,τ,y^',ξ)}
×∑_|I|≤ 2{|y^'-ξ|^α|ℐ^I(t,τ,x,ξ)|+ |ℐ^I(t,τ,x,ξ)-ℐ^I(t,τ,x,y^')|
+|y-y^'|^α|ℐ^I(t,τ,x,y)|+|ℐ^I(t,τ,x,y)-ℐ^I(t,τ,x,y^')|} dξ.
Consequently, when t=s and x=y, we have
|J_1| ≤ C|y^'-y|^α∫^s-λ_0dτ∫_ℝ^d(s-τ)^-d+|I|+α/2exp{-cϖ(s,τ,y,ξ)}
×{|y-ξ|^α(|s-τ|+|y-ξ|)[u]^(2+α)_[0,δ]
+ ((s-τ)|y-ξ|^α+|y-ξ|)[u]^(2+α)_[0,δ]} dξ
+ C|s^'-s|^α/2∫^s-λ_0dτ∫_ℝ^d(s-τ)^-d+|I|+α/2exp{-cϖ(s,τ,y^',ξ)}
×{|y^'-ξ|^α(|s-τ|+|y-y^'|+|y^'-ξ|)[u]^(2+α)_[0,δ]
+ ((s-τ)|y^'-ξ|^α+|y^'-ξ|+|y^'-y||y^'-ξ|+|y^'-y||y^'-ξ|^α)[u]^(2+α)_[0,δ]
+|y-y^'|^α(s-τ)[u]^(2+α)_[0,δ]+((s-τ)|y^'-y|^α+|y^'-y|)[u]^(2+α)_[0,δ]} dξ
Then
|J_1| ≤ C|y^'-y|^α∫^s-λ_0((s-τ)^-|I|-2/2+(s-τ)^-|I|-1/2)[u]^(2+α)_[0,δ]dτ
+C|y^'-y|^α∫^s-λ_0(|s-τ|^-|I|-2/2+(s-τ)^-|I|+α-1/2)[u]^(2+α)_[0,δ]dτ
+C|s^'-s|^α/2∫^s-λ_0((s-τ)^-|I|-2/2+(s-τ)^-|I|-1/2+(s-τ)^-|I|/2|y^'-y|)[u]^(2+α)_[0,δ]dτ
+C|s^'-s|^α/2∫^s-λ_0((s-τ)^-|I|-2/2+(s-τ)^-|I|+α-1/2
+(s-τ)^-|I|+α-1/2|y^'-y|+(s-τ)^-|I|/2|y^'-y|)[u]^(2+α)_[0,δ]dτ
+C|s^'-s|^α/2∫^s-λ_0(s-τ)^-|I|+α-2/2|y^'-y|^α[u]^(2+α)_[0,δ]dτ
+C|s^'-s|^α/2∫^s-λ_0((s-τ)^-|I|+α-2/2|y^'-y|^α+(s-τ)^-|I|+α/2|y^'-y|)[u]^(2+α)_[0,δ]dτ.
It is noteworthy that |y^'-y|≤ d=(2λ)^1/2≤ 2^1/2(s-τ)^1/2 if τ∈[0,s-λ]. To simplify the results, we also extend the upper bound of integrals from s-λ to s since all integrands are positive in the interval. Consequently, we have
|J_1| ≤ Cρ^α(s^4-|I|/2+s^3-|I|/2+s^3-|I|-α/2+s^4-|I|-α/2)[u]^(2+α)_[0,δ]
≤ Cρ^α(δ^4-|I|/2+δ^3-|I|/2+δ^3-|I|-α/2+δ^4-|I|-α/2)[u]^(2+α)_[0,δ].
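In passing, we record the elementary Gaussian moment bound used to carry out each of the ξ-integrations above (written here under the assumption, standard for such heat-kernel estimates, that ϖ(s,τ,y,ξ)=|y-ξ|^2/(s-τ)): substituting η=(ξ-y)/(s-τ)^1/2 gives
∫_ℝ^d(s-τ)^-d+m/2exp{-cϖ(s,τ,y,ξ)}|y-ξ|^β dξ=(s-τ)^β-m/2∫_ℝ^dexp{-c|η|^2}|η|^β dη≤ C(s-τ)^β-m/2,
which is how the powers of (s-τ) appearing in the remaining τ-integrals are generated.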
(J_2-term) Next, we investigate the second term J_2.
|J_2|≤ ∫^s_s-λdτ∫_ℝ^d|∂_IZ|∑_|I|≤ 2|(B^Iℐ^I)(t,τ,x,ξ)-(B^Iℐ^I)(t,τ,x,y)|dξ
≤ ∫^s_s-λdτ∫_ℝ^d|∂_IZ|∑_|I|≤ 2{|B^I(t,τ,x,ξ)-B^I(t,τ,x,y)||ℐ^I(t,τ,x,ξ)|
+|ℐ^I(t,τ,x,ξ)-ℐ^I(t,τ,x,y)||B^I(t,τ,x,y)|} dξ
≤ C∫^s_s-λdτ∫_ℝ^d(s-τ)^-d+|I|/2exp{-cϖ(s,τ,y,ξ)}
×{|y-ξ|^α|ℐ^I(t,τ,x,ξ)|+|ℐ^I(t,τ,x,ξ)-ℐ^I(t,τ,x,y)|} dξ
Moreover, in the case that t=s and x=y, it follows that
|J_2| ≤ C∫^s_s-λdτ∫_ℝ^d(s-τ)^-d+|I|/2exp{-cϖ(s,τ,y,ξ)}
×{|y-ξ|^α((s-τ)+|y-ξ|)[u]^(2+α)_[0,δ]
+((s-τ)|y-ξ|^α+|y-ξ|)[u]^(2+α)_[0,δ]} dξ
≤ C(λ^4-|I|+α/2+λ^3-|I|+α/2+λ^3-|I|/2)[u]^(2+α)_[0,δ]
≤ Cρ^α(δ^4-|I|/2+δ^3-|I|/2+δ^3-|I|-α/2)[u]^(2+α)_[0,δ].
(J_5-term) Now, we study J_5.
|J_5|≤ ∫^s^'_s-λdτ∫_ℝ^d|∂_I,y^'Z(s^',τ,y^',ξ;t,x)|
×∑_|I|≤ 2|(B^Iℐ^I)(t,τ,x,ξ)-(B^Iℐ^I)(t,τ,x,y^')|dξ
≤ ∫^s^'_s-λdτ∫_ℝ^d|∂_I,y^'Z(s^',τ,y^',ξ;t,x)|
×∑_|I|≤ 2{|B^I(t,τ,x,ξ)-B^I(t,τ,x,y^')||ℐ^I(t,τ,x,ξ)|
+|ℐ^I(t,τ,x,ξ)-ℐ^I(t,τ,x,y^')||B^I(t,τ,x,y^')|} dξ
≤ C∫^s^'_s^'-λdτ∫_ℝ^d(s^'-τ)^-d+|I|/2exp{-cϖ(s^',τ,y^',ξ)}
×{|y^'-ξ|^α|ℐ^I(t,τ,x,ξ)|+|ℐ^I(t,τ,x,ξ)-ℐ^I(t,τ,x,y^')|} dξ
Hence, when t=s and x=y, we have
|J_5| ≤ C∫^s^'_s^'-λdτ∫_ℝ^d(s^'-τ)^-d+|I|/2exp{-cϖ(s^',τ,y^',ξ)}
×{|y^'-ξ|^α(|s-s^'|+|s^'-τ|+|y-y^'|+|y^'-ξ|)
+ |y^'-ξ|^α(|s-s^'|+|s^'-τ|)
+ |y^'-ξ|+|y-y^'||y^'-ξ|+|y-y^'||y^'-ξ|^α}[u]^(2+α)_[0,δ] dξ
≤ Cρ^α(δ^2-|I|+α/2+δ^4-|I|/2+δ^3-|I|/2+δ^3-|I|-α/2)[u]^(2+α)_[0,δ].
(J_4-term and J_6-term) The estimates of J_4 and J_6 are similar, which are evaluated as follows:
|J_4|≤ C∫^s_s-λdτ∫_ℝ^d(s-τ)^-d+|I|/2exp{-cϖ(s,τ,y,ξ)}(s-τ)[u]^(2+α)_[0,δ] dξ
≤ Cρ^αδ^4-|I|/2[u]^(2+α)_[0,δ].
and
|J_6| ≤ C∫^s^'_s^'-λdτ∫_ℝ^d(s-τ)^-d+|I|/2exp{-cϖ(s,τ,y,ξ)}
×(|s-s^'|+|s^'-τ|+|y-y^'|)[u]^(2+α)_[0,δ] dξ
≤ Cρ^α(δ^2-|I|+α/2+δ^4-|I|/2)[u]^(2+α)_[0,δ].
(J_3-term) Next, we analyze the J_3-term.
|J_3|≤∫^s-λ_0|_s,y∂_I∫_ℝ^dZdξ|∑_|I|≤ 2|B^Iℐ^I(t,τ,x,y)| dτ.
Then
|Δ_s,y∂_I∫_ℝ^dZdξ|
≤ |Δ_y^'-y,y∂_I∫_ℝ^dZ(s,τ,y,ξ;t,x)dξ|+|Δ_s^'-s,s∂_I,y^'∫_ℝ^dZ(s,τ,y^',ξ;t,x)dξ|
≤ |Δ_y^'-y,y∂_I∫_ℝ^dZ_0(s-τ,y-ξ,τ,ξ;t,x)dξ| + |Δ_y^'-y,y∂_I∫_ℝ^dW(s,τ,y,ξ;t,x)dξ|
+|Δ_s^'-s,s∂_I,y^'∫_ℝ^dZ_0(s-τ,y^'-ξ,τ,ξ;t,x)dξ|+|Δ_s^'-s,s∂_I,y^'∫_ℝ^dW(s,τ,y^',ξ;t,x)dξ|
≤ C∑^d_i=1|y^'_i-y_i|(s-τ)^-|k|+1-α/2+C|y^'-y|^α(s-τ)^-|k|/2
+C(s^'-s)(s-τ)^-|k|+2-α/2+C(s^'-s)^α/2(s-τ)^-|k|/2.
Note that |y^'_i-y_i|≤ d≤ 2^1/2(s-τ)^1/2 and s^'-s≤ d^2≤ 2(s-τ) for τ∈[0,s-λ]. Consequently, when t=s and x=y, we have
|J_3|≤ C|y^'-y|^α∫^s_0(s-τ)^-|I|-2/2[u]^(2+α)_[0,δ]dτ + Cd^α∫^s_0(s-τ)^-|I|-2/2[u]^(2+α)_[0,δ]dτ
+C|s^'-s|^α/2∫^s_0(s-τ)^-|I|-2/2[u]^(2+α)_[0,δ]dτ
≤ Cρ^αδ^4-|I|/2[u]^(2+α)_[0,δ].
Finally, according to the classical theory of linear parabolic systems, we find that |J_7|≤ Cρ^α‖ f‖^(α)_[0,δ]. From the analyses of J_1-J_7, it follows that in the case (t,x)=(s,y), |Δ_s,y∂_Iu(t,s,x,y)| is bounded in terms of ‖ u‖^(2+α)_[0,δ] and ‖ f‖^(α)_[0,δ], and that the coefficient in front of ‖ u‖^(2+α)_[0,δ] can be made arbitrarily small by choosing a suitable δ.
Having established the estimates E_1-E_8 in Table <ref>, we conclude that there exists a small enough δ∈(0,T], independent of (t,s,x,y), such that for any (t,x)∈[0,δ]×ℝ^d,
|(u,∂ u/∂ t,∂ u/∂ x,∂^2 u/∂ x∂ x)(t,s,x,y)|^(2+α)_(s,y)∈[0,δ]×ℝ^d ≤1/2[u]^(2+α)_[0,δ]+C(‖ f‖^(α)_[0,δ]+‖ g‖^(2+α)_[0,δ])
≤1/2‖ u‖^(2+α)_[0,δ]+C(‖ f‖^(α)_[0,δ]+‖ g‖^(2+α)_[0,δ]).
Hence, taking the supremum over (t,x)∈[0,δ]×ℝ^d and absorbing the term 1/2‖ u‖^(2+α)_[0,δ] into the left-hand side, we obtain
‖ u‖^(2+α)_[0,δ]≤ C(‖ f‖^(α)_[0,δ]+‖ g‖^(2+α)_[0,δ]).
§ ESTIMATE OF ‖φ‖^(α)_[0,δ]
In this appendix, we estimate the nonhomogeneous term φ(t,s,x,y) of (<ref>) and show that
‖φ‖^(α)_[0,δ]≤ C(R)δ^α/2‖ u-u‖^(2+α)_[0,δ],
so that the mapping Λ:u↦Λ(u) defined by (<ref>) is a 1/2-contraction on the closed ball 𝒰 centered at g with radius R for a small enough δ∈(0,T]. In order to establish this inequality, we need to estimate the terms K_1-K_12 in Table <ref>.
(Estimates K_1-K_3 of |φ(t,s,x,y)|^(α)_(s,y)∈[0,δ]×ℝ^d). Let us consider |φ(t,s,x,y)-φ(t,s^',x,y)| for any 0≤ s< s^'≤δ≤ T, t∈[0,δ], and x,y∈ℝ^d. By making use of (<ref>), it is convenient to add and subtract
∫^1_0∑_|I|≤ 2∂_I F(t,s^',x,y,θ_σ(t,s^',x,y))×∂_I(u-u)(t,s,x,y)dσ
+ ∫^1_0∑_|I|≤ 2∂_I F(t,s^',x,y,θ_σ(t,s^',x,y))×∂_I(u-u)(s,s,x,y)|_x=ydσ.
Subsequently, we need to estimate
|∂_IF(t,s,x,y,θ_σ(t,s,x,y))-∂_IF(t,s^',x,y,θ_σ(t,s^',x,y))|,
|∂_IF(t,s,x,y,θ_σ(t,s,x,y))-∂_IF(t,s^',x,y,θ_σ(t,s^',x,y))|,
|∂_IF(t,s^',x,y,θ_σ(t,s^',x,y))-∂_IF(t,0,x,y,θ_0(t,x,y))|,
and
|∂_IF(t,s^',x,y,θ_σ(t,s^',x,y))-∂_IF(t,0,x,y,θ_0(t,x,y))|.
Next, we have
|∂_IF(t,s,x,y,θ_σ(t,s,x,y))-∂_IF(t,s^',x,y,θ_σ(t,s^',x,y))|
≤ K(s^'-s)^α/2+L(|u(t,·,x,·)|^(2+α)_[0,δ]×ℝ^d(s^'-s)^α/2+sup_s∈(s,s^')|u_t(s,·,x,·)|^(2+α)_[0,s]×ℝ^d(s^'-s)
+|u(s^',·,x,·)|^(2+α)_[0,s^']×ℝ^d(s^'-s)^α/2 +|u(t,·,x,·)|^(2+α)_[0,δ]×ℝ^d(s^'-s)^α/2
+sup_s∈(s,s^')|u_t(s,·,x,·)|^(2+α)_[0,s]×ℝ^d(s^'-s)+|u(s^',·,x,·)|^(2+α)_[0,s^']×ℝ^d(s^'-s)^α/2)
≤ (K+L(‖ u‖^(2+α)_[0,δ]+‖u‖^(2+α)_[0,δ]))(s^'-s)^α/2
≤ C_1(R)(s^'-s)^α/2
and
|∂_IF(t,s^',x,y,θ_σ(t,s^',x,y))-∂_IF(t,0,x,y,θ_0(t,x,y))|
≤ K(s^')^α/2+L(|(u-g)(t,·,x,·)|^(2+α)_[0,δ]×ℝ^d(s^'-0)^α/2+sup_s∈(0,s^')|g_t(s,x,·)|^(2+α)_ℝ^d(s^'-0)
+|u(s^',·,x,·)|^(2+α)_[0,s^']×ℝ^d(s^'-0)^α/2 +|(u-g)(t,·,x,·)|^(2+α)_[0,δ]×ℝ^d(s^'-0)^α/2
+sup_s∈(0,s^')|g_t(s,x,·)|^(2+α)_ℝ^d(s^'-0)+|u(s^',·,x,·)|^(2+α)_[0,s^']×ℝ^d(s^'-0)^α/2)
≤ (K+L(‖ u-g‖^(2+α)_[0,δ]+‖u-g‖^(2+α)_[0,δ]+‖ g‖^(2+α)_[0,δ]))(s^'-0)^α/2
≤ C_2(R)δ^α/2
where L>0 is a constant that may change from line to line, and the subscripts on C distinguish the different constants appearing in the derivation. In a similar manner, we can obtain
|∂_IF(t,s,x,y,θ_σ(t,s,x,y))-∂_IF(t,s^',x,y,θ_σ(t,s^',x,y))| ≤ C_3(R)(s^'-s)^α/2,
and
|∂_IF(t,s^',x,y,θ_σ(t,s^',x,y))-∂_IF(t,0,x,y,θ_0(t,x,y))|≤ C_4(R)δ^α/2.
(K_2-term) Consequently, for K_2, it holds that
|φ(t,s,x,y)-φ(t,s^',x,y)|
≤ ∫^1_0∑_|I|≤ 2|∂_IF(t,s,x,y,θ_σ(t,s,x,y))-∂_IF(t,s^',x,y,θ_σ(t,s^',x,y))|
×|∂_I(u-u)(t,s,x,y)|dσ
+ ∫^1_0∑_|I|≤ 2|∂_IF(t,s,x,y,θ_σ(t,s,x,y))-∂_IF(t,s^',x,y,θ_σ(t,s^',x,y))|
×|∂_I(u-u)(s,s,x,y)|_x=y|dσ
+ ∫^1_0∑_|I|≤ 2|∂_IF(t,s^',x,y,θ_σ(t,s^',x,y))-∂_IF(t,0,x,y,θ_0(t,x,y))|
×|∂_I(u-u)(t,s,x,y)-∂_I(u-u)^b(t,s^',x,y)|dσ
+∫^1_0∑_|I|≤ 2|∂_IF(t,s^',x,y,θ_σ(t,s^',x,y))-∂_IF(t,0,x,y,θ_0(t,x,y))|
×|∂_I(u-u)(s,s,x,y)|_x=y-∂_I(u-u)(s^',s^',x,y)|_x=y|dσ
Furthermore, we have
|φ(t,s,x,y)-φ(t,s^',x,y)|
≤ C_1(R)(s^'-s)^α/2δ^α/2|(u-u)(t,·,x,·)|^(2+α)_[0,δ]×ℝ^d
+C_2(R)δ^α/2(s^'-s)^α/2|(u-u)(t,·,x,·)|^(2+α)_[0,δ]×ℝ^d
+C_3(R)(s^'-s)^α/2δ^α/2|(u-u)(s,·,y,·)|^(2+α)_[0,s]×ℝ^d
+C_4(R)δ^α/2(s^'-s)^α/2(sup_s∈(s,s^')|(u-u)_t(s,·,y,·)|^(2+α)_[0,s]×ℝ^d
+|(u-u)(s^',·,y,·)|^(2+α)_[0,s^']×ℝ^d)
≤ C_5(R)δ^α/2(s^'-s)^α/2‖ u-u‖^(2+α)_[0,δ],
(K_1-term) The preceding estimate directly implies K_1, since φ(t,0,x,y)≡ 0:
|φ(t,·,x,·)|^(0)_[0,δ]×ℝ^d≤ C_5(R)δ‖ u-u‖^(2+α)_[0,δ].
(K_3-term) Next, to estimate |φ(t,s,x,y)-φ(t,s,x,y^')| for any y,y^'∈ℝ^d with 0<|y-y^'|≤ 1, it is convenient to add and subtract
∫^1_0∑_|I|≤ 2(∂_I F(t,s,x,y^',θ_σ(t,s,x,y^'))-∂_IF(t,0,x,y^',θ_0(t,x,y^')))
×∂_I(u-u)(t,s,x,y)dσ
+ ∫^1_0∑_|I|≤ 2(∂_I F(t,s,x,y^',θ_σ(t,s,x,y^'))-∂_IF(t,0,x,y^',θ_0(t,x,y^')))
×∂_I(u-u)^b(s,s,x,y)|_x=ydσ.
Note that
|∂_I F(t,s,x,y,θ_σ(t,s,x,y))-∂_I F(t,s,x,y^',θ_σ(t,s,x,y^'))|
+|∂_IF(t,0,x,y,θ_0(t,x,y))-∂_IF(t,0,x,y^',θ_0(t,x,y^'))|
≤ 2K|y-y^'|^α+L|θ_σ(t,s,x,y)-θ_σ(t,s,x,y^')|+L|θ_0(t,x,y)-θ_0(t,x,y^')|
≤ 2K|y-y^'|^α
+L|y-y^'|^α(|u(t,·,x,·)|^(2+α)_[0,δ]×ℝ^d+|u(s,·,y^'·)|^(2+α)_[0,δ]×ℝ^d+sup_y∈(y,y^')|u_x(s,·,y,·)|^(2+α)_[0,δ]×ℝ^d
+|u(t,·,x,·)|^(2+α)_[0,δ]×ℝ^d+|u(s,·,y^',·)|^(2+α)_[0,δ]×ℝ^d+sup_y∈(y,y^')|u_x(s,·,y,·)|^(2+α)_[0,δ]×ℝ^d
+|g(t,x,·)|^(2+α)_ℝ^d+|g(0,y^',·)|^(2+α)_ℝ^d+sup_y∈(y,y^')|g_x(0,y,·)|^(2+α)_ℝ^d)
≤ (2K+L(‖ u‖^(2+α)_[0,δ]+‖u‖^(2+α)_[0,δ]+‖ g‖^(2+α)_[0,δ]))|y-y^'|^α
≤ C_6(R)|y-y^'|^α,
and
|∂_IF(t,s,x,y,θ_σ(t,s,x,y))-∂_IF(t,0,x,y,θ_0(t,x,y))|
≤ Ks^α/2+L(|(u-g)(t,·,x,·)|^(2+α)_[0,δ]×ℝ^ds^α/2+sup_s∈(0,s^')|g_t(s,x,·)|^(2+α)_ℝ^ds
+|u(s,·,x,·)|^(2+α)_[0,s]×ℝ^ds^α/2 +|(u-g)(t,·,x,·)|^(2+α)_[0,δ]×ℝ^ds^α/2
+sup_s∈(0,s)|g_t(s,x,·)|^(2+α)_ℝ^ds+|u(s,·,x,·)|^(2+α)_[0,s^']×ℝ^ds^α/2)
≤ (K+L(‖ u-g‖^(2+α)_[0,δ]+‖u-g‖^(2+α)_[0,δ]+‖ g‖^(2+α)_[0,δ]))s^α/2
≤ C_7(R)δ^α/2
Similarly, we also have
|∂_I F(t,s,x,y,θ_σ(t,s,x,y))-∂_I F(t,s,x,y^',θ_σ(t,s,x,y^'))|
+|∂_IF(t,0,x,y,θ_0(t,x,y))-∂_IF(t,0,x,y^',θ_0(t,x,y^'))| ≤ C_8(R)|y-y^'|^α,
and
|∂_I F(t,s,x,y,θ_σ(t,s,x,y))-∂_I F(t,0,x,y,θ_0(t,x,y))|≤ C_9(R)δ^α/2.
Hence, for K_3, we have
|φ(t,s,x,y)-φ(t,s,x,y^')|
≤ C_6(R)|y-y^'|^αδ^α/2|(u-u)(t,·,x,·)|^(2+α)_[0,δ]×ℝ^d
+C_8(R)|y-y^'|^αδ^α/2|(u-u)(s,·,y,·)|^(2+α)_[0,δ]×ℝ^d
+C_7(R)δ^α/2|y-y^'|^α|(u-u)(t,·,x,·)|^(2+α)_[0,δ]×ℝ^d
+C_9(R)δ^α/2|y-y^'|^α(|(u-u)(s,·,y^',·)|^(2+α)_[0,δ]×ℝ^d
+sup_y∈(y,y^')|(u-u)_x(s,·,y,·)|^(2+α)_[0,δ]×ℝ^d)
≤ C_10(R)δ^α/2|y-y^'|^α‖ u-u‖^(2+α)_[0,δ].
Consequently, from (<ref>), (<ref>), and (<ref>), for any (t,x)∈[0,δ]×ℝ^d, we obtain
|φ(t,·,x,·)|^(α)_[0,δ]×ℝ^d≤ C_11(R)δ^α/2‖ u-u‖^(2+α)_[0,δ].
(Estimates K_4-K_6 of |φ_t(t,s,x,y)|^(α)_(s,y)∈[0,δ]×ℝ^d). We now analyze the Hölder continuity of φ_t(t,·,x,·) with respect to s and y in [0,δ]×ℝ^d. According to the integral representation (<ref>) of φ(t,s,x,y), its first derivative in t satisfies
φ_t(t,s,x,y)
= ∫^1_0∑_|I|≤ 2[∂(∂_I F(t,s,x,y,θ_σ(t,s,x,y))-∂_I F(t,0,x,y,θ_0(t,x,y)))/∂ t
×∂_I(u-u)(t,s,x,y)
+(∂_I F(t,s,x,y,θ_σ(t,s,x,y))-∂_I F(t,0,x,y,θ_0(t,x,y)))
×∂_I(u_t-u_t)(t,s,x,y)]dσ
+∫^1_0∑_|I|≤ 2∂(∂_I F(t,s,x,y,θ_σ(t,s,x,y))-∂_I F(t,0,x,y,θ_0(t,x,y)))/∂ t
×∂_I(u-u)(s,s,x,y)|_x=ydσ
By the product rule and chain rule, we have
φ_t(t,s,x,y)
= ∫^1_0∑_|I|≤ 2(∂^2_tI F(t,s,x,y,θ_σ(t,s,x,y))-∂^2_tI F(t,0,x,y,θ_0(t,x,y)))
×∂_I(u-u)(t,s,x,y)dσ
+ ∫^1_0∑_|I|≤ 2[∑_|J|≤ 2(∂^2_IJF(t,s,x,y,θ_σ(t,s,x,y))·(σ∂_Ju_t(t,s,x,y)+(1-σ)∂_Ju_t(t,s,x,y))
-∂^2_IJF(t,0,x,y,θ_0(t,x,y))·∂_Jg_t(t,x,y))]×∂_I(u-u)(t,s,x,y)dσ
+ ∫^1_0∑_|I|≤ 2(∂_I F(t,s,x,y,θ_σ(t,s,x,y))-∂_I F(t,0,x,y,θ_0(t,x,y)))
×∂_I(u_t-u_t)(t,s,x,y)dσ
+∫^1_0∑_|I|≤ 2(∂^2_tIF(t,s,x,y,θ_σ(t,s,x,y))-∂^2_tIF(t,0,x,y,θ_0(t,x,y)))
×∂_I(u-u)(s,s,x,y)|_x=ydσ
+ ∫^1_0∑_|I|≤ 2[∑_|J|≤ 2(∂^2_IJF(t,s,x,y,θ_σ(t,s,x,y))·(σ∂_Ju_t(t,s,x,y)+(1-σ)∂_Ju_t(t,s,x,y))
-∂^2_IJF(t,0,x,y,θ_0(t,x,y))·∂_Jg_t(t,x,y))]×∂_I(u-u)(s,s,x,y)|_x=ydσ
:=M_1+M_2+M_3+M_4+M_5
It is easy to see that the estimates of M_1, M_3, and M_4 are similar to the terms of |φ(t,·,x,·)|^(α)_[0,δ]×ℝ^d. Hence, we focus on the remaining two terms M_2 and M_5. We denote η(t,s,x,y)=M_2+M_5.
(Hölder continuity of η in s) In order to estimate |η(t,s,x,y)-η(t,s^',x,y)| for 0≤ s< s^'≤δ≤ T and any x,y∈ℝ^d, it is convenient to add and subtract
∫^1_0∑_|I|≤ 2[∑_|J|≤ 2(∂^2_IJF(t,s^',x,y,θ_σ(t,s^',x,y))·(σ∂_J u_t(t,s^',x,y)+(1-σ)∂_Ju_t(t,s^',x,y)))]
×∂_I(u-u)(t,s,x,y)dσ
+ ∫^1_0∑_|I|≤ 2[∑_|J|≤ 2(∂^2_IJF(t,s^',x,y,θ_σ(t,s^',x,y))·(σ∂_Ju_t(t,s^',x,y)+(1-σ)∂_Ju_t(t,s^',x,y)))]
×∂_I(u-u)(s,s,x,y)|_x=ydσ.
Subsequently, we need to estimate
|∂^2_IJF(t,s,x,y,θ_σ(t,s,x,y))·(σ∂_J u_t(t,s,x,y)+(1-σ)∂_Ju_t(t,s,x,y))
-∂^2_IJF(t,s^',x,y,θ_σ(t,s^',x,y))·(σ∂_J u_t(t,s^',x,y)+(1-σ)∂_Ju_t(t,s^',x,y))|,
|∂^2_IJF(t,s,x,y,θ_σ(t,s,x,y))·(σ∂_J u_t(t,s,x,y)+(1-σ)∂_Ju_t(t,s,x,y))
-∂^2_IJF(t,s^',x,y,θ_σ(t,s^',x,y))·(σ∂_J u_t(t,s^',x,y)+(1-σ)∂_Ju_t(t,s^',x,y))|,
and
|∂^2_IJF(t,s^',x,y,θ_σ(t,s^',x,y))·(σ∂_J u_t(t,s^',x,y)+(1-σ)∂_Ju_t(t,s^',x,y))
-∂^2_IJF(t,0,x,y,θ_0(t,x,y))·∂_J g_t(t,x,y)|,
|∂^2_IJF(t,s^',x,y,θ_σ(t,s^',x,y))·(σ∂_J u_t(t,s^',x,y)+(1-σ)∂_Ju_t(t,s^',x,y))
-∂^2_IJF(t,0,x,y,θ_0(t,x,y))·∂_J g_t(t,x,y)|.
Note that
(<ref>)≤ |∂^2_IJF(t,s,x,y,θ_σ(t,s,x,y))·(σ∂_J u_t(t,s,x,y)+(1-σ)∂_Ju_t(t,s,x,y))
-∂^2_IJF(t,s^',x,y,θ_σ(t,s^',x,y))·(σ∂_J u_t(t,s,x,y)+(1-σ)∂_Ju_t(t,s,x,y))|
+ |∂^2_IJF(t,s^',x,y,θ_σ(t,s^',x,y))·(σ∂_J u_t(t,s,x,y)+(1-σ)∂_Ju_t(t,s,x,y))
-∂^2_IJF(t,s^',x,y,θ_σ(t,s^',x,y))·(σ∂_J u_t(t,s^',x,y)+(1-σ)∂_Ju_t(t,s^',x,y))|
≤ C_12(R)(s^'-s)^α/2
and
(<ref>)≤ |∂^2_IJF(t,s^',x,y,θ_σ(t,s^',x,y))·(σ∂_J u_t(t,s^',x,y)+(1-σ)∂_Ju_t(t,s^',x,y))
-∂^2_IJF(t,s^',x,y,θ_σ(t,s^',x,y))·∂_J g_t(t,x,y)|
+ |∂^2_IJ F(t,s^',x,y,θ_σ(t,s^',x,y))·∂_J g_t(t,y)-∂^2_IJF(t,0,x,y,θ_0(t,x,y))·∂_J g_t(t,x,y)|
≤ C_13(R)δ^α/2.
Similarly, we also have
(<ref>)≤ C_14(R)(s^'-s)^α/2, (<ref>)≤ C_15(R)δ^α/2.
Hence, we obtain that
|η(t,s,x,y)-η(t,s^',x,y)|
≤ C_12(R)(s^'-s)^α/2δ^α/2|(u-u)(t,·,x,·)|^(2+α)_[0,δ]×ℝ^d
+C_14(R)(s^'-s)^α/2δ^α/2|(u-u)(s,·,y,·)|^(2+α)_[0,s]×ℝ^d
+
C_13(R)δ^α/2(s^'-s)^α/2|(u-u)(t,·,x,·)|^(2+α)_[0,δ]×ℝ^d
+C_15(R)δ^α/2(s^'-s)^α/2(sup_s∈(s,s^')|(u-u)_t(s,·,y,·)|^(2+α)_[0,s]×ℝ^d
+|(u-u)(s^',·,y,·)|^(2+α)_[0,s^']×ℝ^d)
≤ C_16(R)δ^α/2(s^'-s)^α/2‖ u-u‖^(2+α)_[0,δ],
(Boundedness of η) Then (<ref>) implies the following by noting that η(t,0,x,y)≡ 0,
|η(t,·,x,·)|^∞_[0,δ]×ℝ^d≤ C_16(R)δ‖ u-u‖^(2+α)_[0,δ].
(Hölder continuity of η in y) In order to estimate |η(t,s,x,y)-η(t,s,x,y^')|, it is convenient to add and subtract
∫^1_0∑_|I|≤ 2[∑_|J|≤ 2(∂^2_IJF(t,s,x,y^',θ_σ(t,s,x,y^'))·(σ∂_J u_t(t,s,x,y^')+(1-σ)∂_Ju_t(t,s,x,y^'))
-∂^2_IJF(t,0,x,y^',θ_0(t,x,y^'))·∂_Jg_t(t,x,y^'))]×∂_I(u-u)(t,s,x,y)dσ
+ ∫^1_0∑_|I|≤ 2[∑_|J|≤ 2(∂^2_IJF(t,s,x,y^',θ_σ(t,s,x,y^'))·(σ∂_J u_t(t,s,x,y^')+(1-σ)∂_Ju_t(t,s,x,y^'))
-∂^2_IJF(t,0,x,y^',θ_0(t,x,y^'))·∂_J g_t(t,x,y^'))]×∂_I(u-u)(s,s,x,y)|_x=ydσ.
Then we need to evaluate the estimates (for F)
|∂^2_IJF(t,s,x,y,θ_σ(t,s,x,y))·(σ∂_J u_t(t,s,x,y)+(1-σ)∂_Ju_t(t,s,x,y))
-∂^2_IJF(t,0,x,y,θ_0(t,x,y))·∂_J g_t(t,x,y)
-∂^2_IJF(t,s,x,y^',θ_σ(t,s,x,y^'))·(σ∂_J u_t(t,s,x,y^')+(1-σ)∂_Ju_t(t,s,x,y^'))
+∂^2_IJF(t,0,x,y^',θ_0(t,x,y^'))·∂_J g_t(t,x,y^')|
as well as the estimates (for F)
|∂^2_IJF(t,s,x,y,θ_σ(t,s,x,y))·(σ∂_J u_t(t,s,x,y)+(1-σ)∂_Ju_t(t,s,x,y))
-∂^2_IJF(t,0,x,y,θ_0(t,x,y))·∂_J g_t(t,x,y)
-∂^2_IJF(t,s,x,y^',θ_σ(t,s,x,y^'))·(σ∂_J u_t(t,s,x,y^')+(1-σ)∂_Ju_t(t,s,x,y^'))
+∂^2_IJF(t,0,x,y^',θ_0(t,x,y^'))·∂_J g_t(t,x,y^')|.
Moreover, we also need to estimate
|∂^2_IJF(t,s,x,y,θ_σ(t,s,x,y))·(σ∂_J u_t(t,s,x,y)+(1-σ)∂_Ju_t(t,s,x,y))
-∂^2_IJF(t,0,x,y,θ_0(t,x,y))·∂_J g_t(t,x,y)|
and
|∂^2_IJF(t,s,x,y,θ_σ(t,s,x,y))·(σ∂_J u_t(t,s,x,y)+(1-σ)∂_Ju_t(t,s,x,y))
-∂^2_IJF(t,0,x,y,θ_0(t,x,y))·∂_J g_t(t,x,y)|.
Note that
(<ref>)≤ |∂^2_IJF(t,s,x,y,θ_σ(t,s,x,y))·(σ∂_J u_t(t,s,x,y)+(1-σ)∂_Ju_t(t,s,x,y))
-∂^2_IJF(t,s,x,y^',θ_σ(t,s,x,y^'))·(σ∂_J u_t(t,s,x,y^')+(1-σ)∂_Ju_t(t,s,x,y^'))|
+ |∂^2_IJF(t,0,x,y^',θ_0(t,x,y^'))·∂_J g_t(t,x,y^')-∂^2_IJF(t,0,x,y,θ_0(t,x,y))·∂_J g_t(t,x,y)|
=: N_1+N_2.
For N_1, it holds that
N_1 ≤ |∂^2_IJF(t,s,x,y,θ_σ(t,s,x,y))·(σ∂_J u_t(t,s,x,y)+(1-σ)∂_Ju_t(t,s,x,y))
-∂^2_IJF(t,s,x,y,θ_σ(t,s,x,y))·(σ∂_J u_t(t,s,x,y^')+(1-σ)∂_Ju_t(t,s,x,y^'))|
+ |∂^2_IJF(t,s,x,y,θ_σ(t,s,x,y))·(σ∂_J u_t(t,s,x,y^')+(1-σ)∂_Ju_t(t,s,x,y^'))
-∂^2_IJF(t,s,x,y^',θ_σ(t,s,x,y^'))·(σ∂_J u_t(t,s,x,y^')+(1-σ)∂_Ju_t(t,s,x,y^'))|
≤ C_17(R)|y-y^'|^α.
For N_2,
N_2 ≤ |∂^2_IJF(t,0,x,y^',θ_0(t,x,y^'))·∂_J g_t(t,x,y^')-∂^2_IJF(t,0,x,y,θ_0(t,x,y))·∂_J g_t(t,x,y^')|
+ |∂^2_IJF(t,0,x,y,θ_0(t,x,y))·∂_J g_t(t,x,y^')-∂^2_IJF(t,0,x,y,θ_0(t,x,y))·∂_J g_t(t,x,y)|
≤ C_18(R)|y-y^'|^α.
From the estimates of N_1 and N_2, we have
(<ref>)≤ C_19(R)|y-y^'|^α
Moreover, we have
(<ref>)≤ |∂^2_IJF(t,s,y,θ_σ(t,s,y))·(σ∂_J u_t(t,s,x,y)+(1-σ)∂_Ju_t(t,s,x,y))
-∂^2_IJF(t,s,x,y,θ_σ(t,s,x,y))·∂_J g_t(t,x,y)|
+|∂^2_IJF(t,s,x,y,θ_σ(t,s,x,y))·∂_J g_t(t,x,y)-∂^2_IJF(t,0,x,y,θ_0(t,x,y))·∂_J g_t(t,x,y)|
≤ C_20(R)δ^α/2.
Similarly, for F, we have
(<ref>)≤ C_21(R)|y-y^'|^α, (<ref>)≤ C_22(R)δ^α/2.
Hence, we have
|η(t,s,x,y)-η(t,s,x,y^')|≤ C_19(R)|y-y^'|^αδ^α/2|(u-u)(t,·,x,·)|^(2+α)_[0,δ]×ℝ^d
+C_21(R)|y-y^'|^αδ^α/2|(u-u)(s,·,y,·)|^(2+α)_[0,δ]×ℝ^d
+C_20(R)δ^α/2|y-y^'|^α|∂_I(u-u)(t,·,x,·)|^(2+α)_[0,δ]×ℝ^d
+C_22(R)δ^α/2|y-y^'|^α(|(u-u)(s,·,y^',·)|^(2+α)_[0,δ]×ℝ^d
+sup_y∈(y,y^')|(u-u)_x(s,·,y,·)|^(2+α)_[0,δ]×ℝ^d)
≤ C_23(R)δ^α/2|y-y^'|^α‖ u-u‖^(2+α)_[0,δ].
Therefore, together with (<ref>), (<ref>), and (<ref>), we have
|η(t,·,x,·)|^(α)_[0,δ]×ℝ^d≤ C_24(R)δ^α/2‖ u-u‖^(2+α)_[0,δ].
Since M_1, M_3 and M_4 of (<ref>) satisfy the same estimates, the estimates of K_4, K_5 and K_6 hold as well. Hence, we have
|φ_t(t,·,x,·)|^(α)_[0,δ]×ℝ^d≤ C_25(R)δ^α/2‖ u-u‖^(2+α)_[0,δ].
Thanks to the symmetry between t and x, we also have the estimates of K_7, K_8 and K_9. Then, for any (t,x)∈[0,δ]×ℝ^d, it holds that
|φ_x(t,·,x,·)|^(α)_[0,δ]×ℝ^d≤ C_25(R)δ^α/2‖ u-u‖^(2+α)_[0,δ].
Similarly, we can also acquire an integral representation for φ_xx(t,s,x,y). Due to the chain rule and the product rule of derivatives, it is clear that the same estimate (K_10-K_12) holds for the term |φ_xx(t,·,x,·)|^(α)_[0,δ]×ℝ^d, i.e.
|φ_xx(t,·,x,·)|^(α)_[0,δ]×ℝ^d≤ C_25(R)δ^α/2‖ u-u‖^(2+α)_[0,δ].
Consequently, together with (<ref>), (<ref>), (<ref>), and (<ref>), we have
‖φ‖^(α)_[0,δ]≤ C(R)δ^α/2‖ u-u‖^(2+α)_[0,δ],
Furthermore, it follows that
‖ U-U‖^(2+α)_[0,δ]≤ C‖φ‖^(α)_[0,δ]≤ C(R)δ^α/2‖ u-u‖^(2+α)_[0,δ]≤1/2‖ u-u‖^(2+α)_[0,δ],
for a small enough δ∈(0,T].
|
http://arxiv.org/abs/2307.03226v1
|
20230706180002
|
Galaxy bias in the era of LSST: perturbative bias expansions
|
[
"Andrina Nicola",
"Boryana Hadzhiyska",
"Nathan Findlay",
"Carlos García-García",
"David Alonso",
"Anže Slosar",
"Zhiyuan Guo",
"Nickolas Kokron",
"Raúl Angulo",
"Alejandro Aviles",
"Jonathan Blazek",
"Jo Dunkley",
"Bhuvnesh Jain",
"Marcos Pellejero",
"James Sullivan",
"Christopher W. Walter",
"Matteo Zennaro"
] |
astro-ph.CO
|
[
"astro-ph.CO"
] |
Galaxy bias in the era of LSST: perturbative bias expansions
August 1, 2023
=============================================================
§ INTRODUCTION
The last three decades have seen the emergence of the ΛCDM cosmological model as the concordance model preferred by a number of different cosmological probes. Current and future surveys across all wavelengths will allow us to put this model to its most stringent test to date. In the optical wavelength range, these include the Dark Energy Survey (DES)[<https://www.darkenergysurvey.org/>.], the Hyper-Suprime Cam Survey (HSC)[<https://hsc.mtk.nao.ac.jp/ssp/0>.], the Kilo Degree Survey (KiDS)[<https://kids.strw.leidenuniv.nl/>.], the Dark Energy Camera Legacy Survey (DECaLS)[<https://www.legacysurvey.org/decamls/>.], the Baryon Oscillation Spectroscopic Survey (BOSS)[<https://www.sdss4.org/surveys/boss/>.], the Dark Energy Spectroscopic Instrument (DESI)[<https://www.desi.lbl.gov/>.], the Rubin Observatory Legacy Survey of Space and Time (LSST)[<https://www.lsst.org/>.], Euclid[<https://www.euclid-ec.org/>.], and the Roman Space Telescope[<https://roman.gsfc.nasa.gov/>.].
Alongside weak gravitational lensing, galaxy clustering is one of the main probes observable with these surveys. This powerful cosmological probe offers the promise to deliver tight constraints on modifications of ΛCDM, such as neutrino masses <cit.> and primordial non-Gaussianity <cit.>, as well as the physics of galaxy formation. Theoretical modeling of galaxy clustering on small scales is hampered for two main reasons: First, on small scales the clustering of Dark Matter (DM) becomes non-linear. Second, on these scales, the relation between galaxy tracers and the DM field also becomes non-linear and mildly non-local. In the absence of baryons, the clustering of dark matter in the mildly nonlinear regime can be modeled either using analytic perturbative approaches or using N-body simulations (see e.g. Refs. <cit.>). The relation between galaxies and dark matter on small scales depends on the physics of galaxy formation, which involves a variety of different processes and spans several orders of magnitude in scale. These processes are impossible to model ab-initio on a cosmological scale, even using the highest-accuracy hydrodynamic simulations (see e.g. Ref. <cit.>). The relation between galaxies, or any biased tracer, and the underlying DM distribution therefore presents the largest theoretical systematic uncertainty in galaxy clustering analyses. Several approaches have been developed to model these effects, and they can be subdivided in two different categories: (i) perturbative models, and (ii) phenomenological models. The former models use a perturbative expansion to jointly model the non-linear evolution of the DM distribution and its relation to the distribution of biased tracers. This expansion can be either performed in the initial conditions or at late times, leading to two distinct frameworks within which to model tracer bias, Lagrangian or Eulerian perturbation theory (PT) (see e.g. Refs. <cit.>). The effective field theory of Large-Scale Structure (EFToLSS, hereafter we use EFT for short) presents a closely related approach that treats cosmological fields at mildly non-linear scales as effective fields emerging from a more complete theory describing small-scale structure formation <cit.>. A number of studies have analyzed the reach of these methods and have found them to be accurate up to maximal wave numbers of k_max∼ 0.1 - 0.3 h Mpc^-1 at redshift z=0 (see e.g. Ref. <cit.>). Recently, Ref. <cit.> proposed a new method aimed to improve upon this reach by developing a hybrid bias model that combines the accuracy of N-body simulations with the theoretical underpinning of Lagrangian perturbation theory, called Hybrid Effective Field Theory (HEFT) hereafter. In Ref. <cit.> it was shown that this model allows for an accurate fit to N-body data up to k_max∼ 0.6 h Mpc^-1 at redshift z=0. In contrast to these methods, phenomenological models of galaxy bias typically rely on the Halo Model <cit.> coupled with a Halo Occupation Distribution (HOD). These models are built on the assumption that all matter in the Universe exists in the form of halos and that galaxies populate these halos with statistics solely determined by halo mass. The advantage of these models is that they are, despite their conceptual simplicity, surprisingly successful at explaining clustering with decent accuracy well into the non-linear regime, where perturbative approaches falter. 
Their main disadvantage is that they rely on a number of heuristic assumptions on a qualitative level, and thus cannot strictly be shown to provide a complete description of clustering. Additionally, predictions based on the Halo Model typically show inaccuracies in the transition region between the 1- and 2-halo terms and do not account for smearing of the Baryonic Acoustic Oscillation feature (see e.g. Refs. <cit.>).
All of the methods outlined above have successfully been applied to data (see e.g. Ref. <cit.>), but they tend to be computationally intensive, thus making parameter inference an expensive part of cosmological analyses. Therefore, several recent works have developed hybrid methods that couple these bias models with machine learning methods to build emulators that can generate fast predictions for statistics involving biased tracers (see e.g. Refs. <cit.>).
Combined in a so-called “3×2pt” analysis, galaxy clustering and weak lensing form a key component of the cosmological analysis planned by upcoming photometric galaxy redshift surveys such as Euclid and LSST. A large amount of cosmological and astrophysical information will be contained in galaxy clustering at small spatial scales. In order to ensure robust and unbiased constraints from these data, it is crucial to assess the performance of different bias models in this high signal-to-noise regime.
In this work, we aim to perform a consistent comparison of non-linear galaxy bias models and assess their performance in analyses including high-precision galaxy clustering data from Stage IV surveys, using as an example the LSST survey. To this end, we use the AbacusSummit simulations and generate simulated data vectors and corresponding covariances, loosely matching LSST Year 10 (Y10) data <cit.>. We then analyze these data using a number of current non-linear galaxy bias models. Specifically, we employ Eulerian (or Standard) Perturbation Theory, Lagrangian Perturbation Theory, and two implementations of HEFT, anzu and BACCO. We fit all of these models to the simulated data, and assess the performance of each model based on the accuracy of the returned cosmological parameter constraints and the goodness-of-fit. Our results have implications for photometric surveys beyond LSST, but do not directly apply to spectroscopic surveys such as DESI, as we do not model a number of effects important in this regime, such as redshift space distortions (RSDs).
There are three over-arching questions that we would like to answer in this work. Which model and approach offers the most robust and accurate constraints on fundamental cosmological parameters given the high-precision of forthcoming photometric Stage IV surveys? How deep into the non-linear regime can we go using the best-performing method? How much do constraints on cosmological parameters improve as we push to increasingly smaller scales?
The last question is particularly interesting, because it will guide us in further theoretical developments. While linear scales retain the most amount of memory regarding the primordial fields and their subsequent evolution, we expect a relative loss of cosmological sensitivity on non-linear scales[Ref. <cit.> for example, show that the halo- and matter fields start to decorrelate once one-loop corrections to the power spectrum become significant.]. Despite this fact, the number of observable modes increases significantly at smaller scales, and it is thus interesting to investigate the impact of these competing effects. Previous galaxy clustering analyses using HODs have found significant improvements in cosmological constraining power when increasing the minimal scale included in the analysis (see e.g. Refs. <cit.>), while analyses based on PT have found smaller effects, tied to degeneracies between cosmological and bias parameters (see e.g. Ref. <cit.>). Here, we aim to investigate these questions also in the light of upcoming Stage IV photometric surveys.
This manuscript is structured as follows. In Sec. <ref>, we introduce perturbative bias models, and in Sec. <ref> we describe the observables considered. Section <ref> gives an overview of the simulations employed, while Sec. <ref> describes the methodology used in our analysis. We
present our results in Sec. <ref> and conclude in Sec. <ref>. Implementation details are deferred to the Appendices.
§ PERTURBATIVE BIAS MODELS
The basic premise behind perturbative approaches to describe galaxy biasing is acknowledging the presence of complex physical, non-gravitational processes behind the formation of galaxies. These processes are non-local, and in general involve all the matter in a region around each galaxy of size R_g (e.g. the Lagrangian size of the parent halo). On scales larger than R_g however, galaxy formation can be described as an effectively local process, thus removing the need to describe these physical processes in detail. In this limit, one can invoke the Equivalence Principle which, in its non-relativistic limit, implies that the only measurable gravitational local quantities in a freely-falling frame are the second derivatives of the gravitational potential ∂_i∂_jΦ (see e.g. Ref. <cit.>). In other words, on these scales the overdensity of galaxies δ_g( x) can be described by a general function F of these second derivatives:
1+δ_g=F[∂_i∂_jΦ].
Perturbative bias models then proceed by expanding F in powers of ∂_i∂_jΦ. Since 1+δ_g is a scalar quantity, each order in this expansion can only involve scalar combinations of ∂_i∂_jΦ, which further limits the number of possible unique terms at each order in the expansion. Up to second order in powers of ∂^2Φ, only three terms are allowed: the matter overdensity δ_m (proportional to the trace ∇^2Φ), the squared overdensity δ_m^2, and the trace squared of the tidal tensor s^2≡ s_ijs^ij, where s_ij≡∂_i∂_jΦ-δ_ij∇^2Φ/3.
The approximation of local bias is expected to break down on scales close to or smaller than R_g <cit.>, and the leading correction to Eq. <ref> due to non-local processes is given by the Laplacian of the matter density field, i.e. R_g^2∇^2δ_m <cit.>. In Fourier space, we have ∇^2δ_m(𝐤) = -k^2δ_m(𝐤), which is what we include in our model given recent detections of non-local bias for halos (see e.g. <cit.>). We note that a similar expansion of Eq. <ref> at higher orders in ∂^2Φ and higher-order derivative operators can be derived when including non-local terms, but we will limit our discussion to the lowest-order contribution described here.
Finally, the details of galaxy formation are sensitive to fluctuations in the initial conditions on scales smaller than R_g. This leads to stochasticity in the galaxy bias relation which, at lowest order, can be captured by an additional stochastic field ε that is uncorrelated on large scales (i.e. it is assumed to have a white power spectrum on k≪ R_g^-1) <cit.>, and does not correlate with any of the perturbative terms described above.
Under the above assumptions, in this work, we describe the galaxy overdensity δ_g perturbatively up to second order as
1+δ_g=1+b_1δ_m+b_2/2!(δ_m^2-⟨δ_m^2⟩)+b_s^2/2!(s^2-⟨ s^2⟩)+b_∇^2/2!∇^2δ_m+ε,
where we have employed the Eulerian bias picture (though one could analogously expand the galaxy field in the Lagrangian picture). In Eq. <ref>, we have removed the variance of the quadratic fields to ensure a mean zero galaxy overdensity. Furthermore, the quantity b_1 denotes the linear bias, b_2 is the quadratic bias, b_s^2 is the tidal bias, b_∇^2 denotes the non-local bias, and finally ε is the stochastic contribution.
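As a concrete illustration of these operators, the following minimal sketch (Python/numpy; the gridding, function name, and normalisation conventions are our own assumptions rather than part of any of the codes discussed below) constructs δ_m^2-⟨δ_m^2⟩, s^2-⟨ s^2⟩ and ∇^2δ_m from a matter overdensity field sampled on a periodic cubic grid:

import numpy as np

def bias_operator_fields(delta, box_size):
    """Quadratic bias operators (delta^2, s^2, laplacian of delta) from a gridded
    overdensity field. Minimal sketch; assumes a cubic periodic box of side box_size."""
    delta = np.asarray(delta, dtype=float)
    n = delta.shape[0]
    delta_k = np.fft.rfftn(delta)
    kfull = 2.0 * np.pi * np.fft.fftfreq(n, d=box_size / n)
    khalf = 2.0 * np.pi * np.fft.rfftfreq(n, d=box_size / n)
    kx, ky, kz = np.meshgrid(kfull, kfull, khalf, indexing="ij")
    k2 = kx**2 + ky**2 + kz**2
    k2[0, 0, 0] = 1.0  # avoid 0/0; the k=0 mode is zeroed explicitly below

    # Tidal tensor s_ij(k) = (k_i k_j / k^2 - delta_ij/3) delta(k); s^2 = sum_ij s_ij s_ij
    kvec = (kx, ky, kz)
    s2 = np.zeros_like(delta)
    for i in range(3):
        for j in range(3):
            sij_k = (kvec[i] * kvec[j] / k2 - (i == j) / 3.0) * delta_k
            sij_k[0, 0, 0] = 0.0
            s2 += np.fft.irfftn(sij_k, s=delta.shape) ** 2

    # nabla^2 delta in Fourier space is -k^2 delta(k)
    lap_k = -k2 * delta_k
    lap_k[0, 0, 0] = 0.0
    lap_delta = np.fft.irfftn(lap_k, s=delta.shape)

    d2 = delta**2
    # subtract the means of the quadratic operators, as in the expansion above
    return d2 - d2.mean(), s2 - s2.mean(), lap_delta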
Different perturbative bias models perform this expansion at different points in time, and can thus be subdivided into Eulerian and Lagrangian approaches. In Eulerian Perturbation Theory (EPT), the perturbative expansion is performed locally at the time corresponding to the galaxy redshift. In Lagrangian Perturbation Theory (LPT) the bias parameters are defined with respect to the initial density field, and galaxy positions are then traced forward in time following their expected trajectories under gravity. If complete to a given order, Eulerian and Lagrangian bias expansions are equivalent <cit.>. In the following, we briefly discuss the traditional implementations of Eulerian and Lagrangian perturbation theory as well as an extension of LPT, named Hybrid Effective Field Theory, which aims to track the galaxy Lagrangian trajectories non-perturbatively. For a more detailed description of galaxy bias, the reader is referred to Ref. <cit.>.
§.§ Eulerian Perturbation Theory
In Eulerian perturbation theory, the equations of structure formation are solved by focusing on a particular point in space and following the fluid's movement through this point in time <cit.>. Keeping all terms up to second order, and allowing for non-local Eulerian bias (in both space and time), the galaxy field at any given redshift z can thus be expressed by Eq. <ref>, with all quantities (δ_g, δ_m, s^2) evaluated at the current galaxy position and time <cit.>.
The galaxy-galaxy and galaxy-matter power spectra in the Eulerian bias framework are thus given by
P_gg(k, z) = ∑_i, j b_ib_j P_ij(k, z) + P_SN,P_gm(k,z)=∑_i b_i P_iδ_m(k,z),
where i, j ∈{δ_m, δ_m^2, s^2, ∇^2δ_m}, and the set b ={b_1, b_2, b_s^2, b_∇^2} denotes the corresponding bias parameters. As a technical subtlety, we note that in purely perturbative approaches, those bias parameters are the renormalized version of the “bare” bias parameters in Equation <ref>. In full generality, the power spectrum of first and third order bias terms (proportional to b_1 b_ 3NL) gives rise to terms of the same order as those generated by the auto-correlation of second order fields <cit.>. These power spectra however, are strongly degenerate with other terms and can thus be absorbed into lower-order bias coefficients. In this work, we therefore do not consider bias terms beyond second order but note that these are crucial for fitting e.g. higher-order statistics. P_SN is the power spectrum of the stochastic term ε in Eq. <ref>, which is assumed to be scale-independent on the scales considered in this analysis. At the lowest order, this term can be thought of as the Poisson noise associated with the discrete nature of galaxy tracers, but it also incorporates a variety of other effects. Physically, these phenomena are described as halo exclusion, but in perturbation theory, they naturally arise as a renormalization of the terms that result in white power spectra on large scales <cit.>. As pure Poisson noise gives rise to a stochastic power spectrum P_SN = n̅_g^-1, we expect this term to be of the same order, but not exactly equal.
The power spectra between the different terms in the bias expansion (P_ij in Eq. <ref>) can be computed using Eulerian perturbation theory (although see Section <ref>), and to do so, we use the FAST-PT[The code can be found at <https://github.com/JoeMcEwen/FAST-PT>.] package <cit.>.
As described in more detail below, in this work we fit the set of bias parameters b ={b_1, b_2, b_s^2, b_∇^2} and the shot noise parameter P_SN to simulated data from AbacusSummit.
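Schematically, the model prediction is assembled from the basis spectra and the bias parameters as in the double sum above; the sketch below assumes a hypothetical container mapping operator pairs to P_ij(k) arrays (e.g. filled from FAST-PT output), with any prefactor conventions absorbed into the basis spectra:

import numpy as np

OPS = ["d", "d2", "s2", "lap_d"]  # delta_m, delta_m^2, s^2, nabla^2 delta_m

def _pij(basis, i, j):
    """Symmetric lookup of the basis spectrum P_ij(k)."""
    return basis[(i, j)] if (i, j) in basis else basis[(j, i)]

def pgg_pgm(basis, bias, p_shot):
    """P_gg = sum_ij b_i b_j P_ij + P_SN and P_gm = sum_i b_i P_{i,delta_m},
    following the sums above; `bias` maps operator labels to the corresponding
    bias values (b1, b2, bs2, bnabla2)."""
    pgg = np.full_like(_pij(basis, "d", "d"), p_shot, dtype=float)
    pgm = np.zeros_like(_pij(basis, "d", "d"), dtype=float)
    for i in OPS:
        pgm += bias[i] * _pij(basis, i, "d")
        for j in OPS:
            pgg += bias[i] * bias[j] * _pij(basis, i, j)
    return pgg, pgm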
§.§ Lagrangian Perturbation Theory
In the Lagrangian bias picture <cit.>, the perturbative expansion of Eq. <ref> is applied to the proto-galaxy field from which galaxies form in the initial conditions (i.e. at high redshifts during matter domination). In this case, ∂_i∂_jΦ is the Hessian of the linear gravitational potential, δ_m is the linear Lagrangian density etc., and all fields are evaluated at the initial Lagrangian coordinates q, and denoted by the subscript L (see e.g. Ref. <cit.>).
Once the initial proto-galaxy overdensity field is established, its evolution is determined by the Lagrangian trajectories of the galaxies under gravity. Thus at late times, when galaxies are actually observed, the galaxy overdensity δ_g at Eulerian coordinates x is
1 + δ_g(𝐱, z) = ∫d^3𝐪 δ^3(𝐱-𝐪-Ψ(𝐪, z)) ( 1 + δ_g^L( q) ),
where δ^3 is the 3-dimensional Dirac delta function, δ_L^g is the Lagrangian-space galaxy overdensity, given by Eq. <ref> in terms of the different bias expansion operators in the initial conditions, and Ψ is the Lagrangian displacement field. In the usual parlance of Lagrangian perturbation theory, the final galaxy overdensity is found by “advecting” the Lagrangian bias overdensity to the final Eulerian coordinates x.
As in the Eulerian bias expansion, the galaxy-galaxy and galaxy-matter power spectra are given by Eq. <ref>, with the exception that in this case the indices i,j run over an extended set of operators {1, δ_L, δ_L^2, s_L^2, ∇^2δ_L}, with corresponding bias parameters b ={b_0, b^L_1, b^L_2, b^L_s^2, b^L_∇^2}. Here, b_0≡1 is not a free parameter, and P_11 denotes the non-linear matter power spectrum. Finally, P_SN denotes a stochastic term as discussed in the previous section. The reason for the additional term is the fact that, in the Lagrangian picture, the advection of a completely homogeneous density field simply yields the inhomogeneous matter density in Eulerian space. This does not change the number of free parameters of the model (since b_0 is fixed), but it implies that the Lagrangian and Eulerian bias parameters are not identical to one another (hence the L superscripts above).
To calculate the displacement field Ψ and thus the power spectra of the advected operators (i.e. the P_ij(k) in Eq. <ref>) we can use Lagrangian perturbation theory. The details of this calculation can be found in e.g. Ref. <cit.>. In this work, we compute these LPT power spectra using velocileptors[The code can be found at <https://github.com/sfschen/velocileptors>.], which is described in Ref. <cit.>.
§.§ HEFT
The physical processes underlying galaxy formation are complex, and thus formulating a non-perturbative bias model based on first principles is commensurately difficult. However, non-linear evolution under gravity is a simpler problem that can be solved to high accuracy numerically via N-body simulations. This fact may be used to formulate a hybrid bias expansion, where the relation between galaxy overdensity and ∂_i∂_jΦ is given perturbatively at early times, and the subsequent evolution under gravity (i.e. solving for the Lagrangian displacement Ψ) is carried out numerically via simulations. First proposed in Ref. <cit.> and subsequently explored by Refs. <cit.>, this approach allows for higher accuracy predictions while keeping the physical intuition of LPT.
Building on Ref. <cit.>, two separate works <cit.> employed suites of N-body simulations to build emulators for the different power spectra in Eq. <ref>, corresponding to the advected fields {1,δ_L,δ_L^2,s_L^2,∇^2δ_L}. These emulators thus allow for the computation of HEFT predictions for a wide range of cosmological models. We describe these briefly below, but refer the reader to Refs. <cit.> for further details.
* anzu: In Ref. <cit.>, the authors use a suite of N-body simulations to compute predictions for the LPT basis spectra. This simulation suite has been designed for emulating cosmological quantities, and covers the parameter space ϑ = {Ω_b h^2, Ω_c h^2,σ_8, H_0, n_s, N_eff , w} within priors set by a combination of current observational constraints. Using the base power spectra, P_ij(k, z), computed from these simulations for a broad range of cosmological models, anzu employs polynomial chaos expansions <cit.> to emulate these quantities, and is publicly available online[The code can be found at <https://github.com/kokron/anzu>.] (see Ref. <cit.> for an updated version of this emulator).
* BACCO: The analysis presented in Ref. <cit.> uses a similar approach: the authors use the BACCO suite of numerical simulations coupled with cosmology rescaling <cit.> to create a library of LPT base power spectra that cover the parameter space ϑ = {Ω_m, Ω_b,σ_8, n_s, h, M_ν , w_0, w_a}. Using these power spectra, the authors construct an emulator based on a simple neural network[The code can be found at <https://bacco.dipc.org/emulator.html>.].
Various works <cit.> have found that this hybrid approach is able to reproduce the galaxy-galaxy and galaxy-matter power spectra in real space down to significantly smaller scales than the purely perturbative approaches. In general, precision of a few per cent can be achieved up to k_max∼ 0.6 h Mpc^-1 for redshifts 0 ≲ z ≲ 1.
§.§ Relations between bias models
As galaxies found in the evolved, Eulerian field can be traced back to proto-galaxies in Lagrangian space at early times, we can relate Lagrangian bias parameters to their Eulerian counterparts <cit.>. Assuming coevolution of the galaxy and the matter distribution, which is equivalent to galaxy number conservation and vanishing velocity bias, allows us to derive simple relations between bias parameters defined at early times to those defined at all later times. Physically, this is due to the fact that the density ratio of conserved, comoving fluids is unchanged under gravity owing to the equivalence principle. Expanding the galaxy overdensity in Lagrangian space up to second order neglecting non-local terms, and using the continuity equation, leads to a relation between Lagrangian and Eulerian bias parameters given by <cit.>
b_1 = 1 + b^L_1,
b_2 = 8/21b^L_1 + b^L_2,
b_s^2 = -4/7b^L_1 + b^L_s^2.
Note that we have adjusted the prefactors to match the bias definition in Eq. <ref>. A popular toy model for galaxy bias is the so-called local-in-matter-density (LIMD) Lagrangian bias. In this model, we assume the galaxy overdensity at early times to be solely a function of the local matter density, which amounts to setting b^L_s^2=b^L_∇^2=0 in Eq. <ref>. In this special case Eq. <ref> implies a relation between Eulerian bias parameters given by b_s^2 = -4/7(b_1 - 1) <cit.>. This shows that gravitational evolution leads to a bias with respect to the squared tidal field at late times, even in absence of such a bias at early times, i.e. LIMD at early times is inconsistent with LIMD at late times. In general, we do not expect these relations to hold exactly, as galaxy evolution is a complex process determined by forces other than gravity such as momentum transfer due to baryonic feedback and radiation pressure <cit.>. In addition, several works have investigated empirical relations between bias parameters, beyond coevolution. Refs. <cit.> for example have found strong correlations between bias parameters for halos, and Refs. <cit.> have found similar results for galaxies, albeit with a larger intrinsic scatter and slightly different relations between bias parameters. In Sec. <ref> we will test the validity of a subset of these relations for the galaxy samples considered in this work.
Finally, we note that gravitational co-evolution will generate third-order bias terms in Eulerian space from a second-order bias expansion in Lagrangian space. The models described in Sections <ref> and <ref> are therefore not fully equivalent. However, in this work we have chosen to sacrifice full theoretical consistency for consistency in the number of bias parameters used for each model. We will further discuss this choice and how it might affect our results in Sec. <ref>.
§.§ Implementation choices
When quantifying the validity of the different bias expansions introduced above, we will make use of a few implementation-specific choices.
Bias evolution:
The perturbative bias expansions explored here, in their different incarnations, determine the scale dependence of the different terms contributing to the final galaxy power spectra, but do not impose any restrictions on the redshift dependence of the bias parameters. As described in more detail below, our analysis is based on angular cross-correlations between galaxies in different tomographic redshift bins. Since each tomographic bin in principle corresponds to a distinct galaxy sample, we assume different and independent bias parameters in each bin. We consider relatively broad redshift bins, and since we expect galaxy properties and thus galaxy bias to evolve with redshift, our model must account for this. In our fiducial implementation of all perturbation theory models, we therefore allow for a redshift evolution in the lowest-order (linear) bias parameters, b_1 and b^L_1. We assume a linear bias evolution with redshift of the form (e.g. for Eulerian bias):
b_1(z) = b_1 + b_1, p(z-z̅),
where z̅ denotes the mean redshift of each bin, and we use a similar expression for b^L_1[We consider a linear relation between bias and redshift as this corresponds to the lowest-order Taylor expansion of a possibly more complex relation, but can be generalized to higher orders.]. By default all other bias parameters are assumed constant within each redshift bin, although we will also investigate the impact of allowing for a redshift-evolution of b_2 and b^L_2 in Sec. <ref>.
Decorated PT:
As stated in Sections <ref> and <ref>, in the EPT and LPT frameworks, the power spectra of the bias expansion terms (P_ij in Eq. <ref>) may be computed to a given order in Eulerian or Lagrangian perturbation theory. This perturbative approach can potentially be improved by replacing the P_δ_mδ_m(k,z) terms by their non-perturbative prediction calculated e.g. via the halofit fitting function <cit.>, effectively re-summing all PT terms contributing to them. This selective resummation approach has been found to improve the quality of the fit while remaining largely unbiased <cit.>. When studying the EPT and LPT frameworks, we therefore consider two different approaches:
PT approach: We use Eulerian or Lagrangian perturbation theory at next-to-leading order to compute all terms in Eq. <ref>. The power spectrum of the non-local term and the matter density is approximated as
P_δ_m,∇^2δ_m(k,z) → -k^2P_1- loop(k,z),
where P_1- loop is the 1-loop matter power spectrum and we have used that in Fourier space ∇^2δ_m(𝐤) = -k^2δ_m(𝐤)[We note that the functional form of this expression is equivalent to the counterterms present in EFT approaches (see e.g. Refs. <cit.>). We will further discuss this similarity and its impact on our results in Appendix <ref>.]. All other terms in the expansion of P_gm and P_gg will be calculated using EPT or LPT at next-to-leading order[In particular, the matter power spectrum is modeled as P_mm = P_1- loop.].
halofit approach: In this second approach, we use the full non-linear matter power spectrum P_ NL(k,z) from halofit to compute the term multiplying b_1 in P_gm and b_1^2 in P_gg in the EPT case. For LPT, we make the following substitutions in the case of P_gm and P_gg respectively:
P_11(k,z)+b_1^LP_1δ_m(k,z) → (1+b_1^L)P_ NL(k,z)
P_11(k,z)+2b_1^LP_1δ_m(k,z)+(b_1^L)^2P_δ_mδ_m(k,z) → (1+b_1^L)^2P_ NL(k,z).
We calculate all other terms in the expansion using EPT or LPT at next-to-leading order. As above, the cross-power spectrum of the non-local term and the matter density is approximated as
P_δ_m,∇^2δ_m(k,z) → -k^2P_ NL(k,z).
As opposed to the other bias models considered, where we keep all auto- and cross-correlations involving ∇^2δ_m, in this case we only keep the power spectra multiplying the matter density in order to not mix halofit and PT predictions for the nonlocal bias terms. In the EPT case, we thus only keep the term involving b_1 and ∇^2δ_m, while for LPT we make the substitution P_1,∇^2δ_m(k,z)+b_1^LP_δ_m,∇^2δ_m(k,z) → -(1+b_1^L)k^2P_ NL(k,z).
§ OBSERVABLES
The key cosmological constraints from LSST and other future photometric surveys will be obtained from a so-called analysis, i.e. a joint analysis of galaxy clustering, galaxy-galaxy lensing and cosmic shear. In this work, we assess the performance of different galaxy bias models in a Fourier-space-based analysis, closely matching the specifications for LSST Y10. In full generality, the spherical harmonic power spectrum between probes a and b, and redshift bins i, j can be computed using the Limber approximation[We note that the Limber approximation has been shown to lead to biased results at low multipoles (see e.g. Ref. <cit.>), but we do not expect this to affect our conclusions as these scales are still well within the linear regime and we choose a conservative minimal multipole of ℓ_min=32.] <cit.> as:
C^a_i b_j_ℓ = ∫dz c/H(z)q^a_i(χ(z)) q^b_j(χ(z))/χ^2(z) P_ab(z,k=ℓ+1/2/χ(z)),
where c is the speed of light, H(z) denotes the Hubble parameter, and χ(z) is the comoving distance. In addition, q^a_i(χ(z)) denotes the window function for probe a and bin i, and P_ab(z,k) is the three-dimensional power spectrum between probes a and b. For galaxy tracers, the window function is given by
q^δ_g,i(χ(z)) = H(z)/cp^i(z),
where p^i(z) denotes the normalized redshift distribution of the considered sample. The window function for lensing tracers is
q^γ_i(χ(z)) = 3/2Ω_mH_0^2/c^2χ(z)/a∫_χ(z)^χ_hdz' p^i(z')χ(z')-χ(z)/χ(z'),
where Ω_m denotes the fractional matter density today, H_0 is the current expansion rate, a denotes the scale factor, and χ_h is the comoving distance to the horizon. Furthermore, we have P_γγ(z,k)=P_δ_mδ_m(z,k).
Additionally, we also consider a combination of the data vector with CMB lensing, and the window function in this case is given by:
q^κ_CMB(χ(z))=3/2Ω_mH^2_0/c^2χ(z)/aχ(z_*)-χ(z)/χ(z_*),
where z_* denotes the redshift to the surface of last scattering.
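The Limber projections of Eq. <ref> with these kernels can be evaluated with standard tools; below is a minimal sketch using the public pyccl library, with placeholder cosmological parameters and redshift distributions and a linear-bias stand-in for the full P_gg (exact call signatures may differ between CCL versions):

import numpy as np
import pyccl as ccl

# Placeholder cosmology and toy redshift distribution for a single bin
cosmo = ccl.Cosmology(Omega_c=0.25, Omega_b=0.05, h=0.67, sigma8=0.81, n_s=0.96)
z = np.linspace(0.0, 3.0, 300)
nz = np.exp(-0.5 * ((z - 0.8) / 0.2) ** 2)   # toy dN/dz
bz = 1.5 * np.ones_like(z)                    # linear bias stand-in

# Tracers carrying the kernels q(chi) defined above
gals = ccl.NumberCountsTracer(cosmo, has_rsd=False, dndz=(z, nz), bias=(z, bz))
shear = ccl.WeakLensingTracer(cosmo, dndz=(z, nz))
kappa = ccl.CMBLensingTracer(cosmo, z_source=1100.0)

ell = np.unique(np.geomspace(32, 2000, 30).astype(int))
cl_gg = ccl.angular_cl(cosmo, gals, gals, ell)      # clustering
cl_gk = ccl.angular_cl(cosmo, gals, shear, ell)     # galaxy-galaxy lensing
cl_gkcmb = ccl.angular_cl(cosmo, gals, kappa, ell)  # clustering x CMB lensing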
In this work, we model the uncertainties associated to this data vector analytically. The covariance matrix generally consists of three different parts: a Gaussian contribution, a non-Gaussian contribution and a contribution due to super-sample covariance (SSC) (see e.g. Ref. <cit.>). Here we make the simplifying assumption that the data covariance is Gaussian, and thus neglect non-Gaussian and SSC contributions. Including those contributions will have two main effects on the covariance: (i) increasing the size of the error bars, and (ii) correlating different ℓ-modes. While we expect the first effect to lead to our analysis being conservative since our smaller error bars lead to tighter requirements, the second might affect goodness-of-fit tests in a nonlinear way. We defer an investigation including non-Gaussian covariances to future work. Under the assumption of Gaussianity, we compute the covariance between the spherical harmonic power spectra C^ij_ℓ and C^i'j'_ℓ' using
Cov_G(C^ij_ℓ, C^i'j'_ℓ') = ⟨Δ C^ij_ℓΔ C^i'j'_ℓ'⟩ = δ_ℓℓ'/(2ℓ+1)Δℓ f_sky[(C^ii'_ℓ + N^ii'_ℓ)(C^jj'_ℓ + N^jj'_ℓ)
+(C^ij'_ℓ + N^ij'_ℓ)(C^i'j_ℓ + N^i'j_ℓ)],
where C^ij_ℓ denotes the signal part of the spherical harmonic power spectrum, N^ij_ℓ is the corresponding noise, f_sky denotes the fraction of sky covered by the experiment, and Δℓ is the width of the ℓ-bin used for power spectrum estimation. Usually, we set N^ij_ℓ = δ_ij N^ii, but here we keep a more generic expression in order to cater for potentially scale-dependent noise correlated across different probes.
In addition, in Refs. <cit.>, it was shown that the three-dimensional power spectra obtained from the HEFT emulators exhibit relative errors of around 1% when compared to the power spectra measured directly from the N-body simulations. Both works account for this systematic uncertainty through an additional term in their covariance matrix. We follow these analyses assuming full correlation of theoretical errors of any two power spectra, and add a systematic error floor to our covariance matrix given by
Cov_sys(C^ij_ℓ, C^i'j'_ℓ') = f_sys^2C^ij_ℓC^i'j'_ℓ'δ_ℓℓ',
where f_sys is the fractional error of the theoretical model, which we set to f_sys=0.01 as per Ref. <cit.>. The total covariance matrix is thus given by
Cov(C^ij_ℓ, C^i'j'_ℓ') = Cov_G(C^ij_ℓ, C^i'j'_ℓ')+Cov_sys(C^ij_ℓ, C^i'j'_ℓ').
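For a single pair of bandpower spectra, this covariance can be assembled as in the following sketch (diagonal in ℓ under the Gaussian approximation; array names and shapes are our own conventions):

import numpy as np

def bandpower_cov(t_iip, t_jjp, t_ijp, t_ipj, c_ij, c_ipjp,
                  ell, delta_ell, f_sky, f_sys=0.01):
    """Covariance between the bandpowers C^{ij}_ell and C^{i'j'}_ell.
    t_xy = C^{xy}_ell + N^{xy}_ell are total (signal + noise) spectra entering the
    Gaussian term; c_ij, c_ipjp are the signal spectra entering the systematic floor."""
    gauss = (t_iip * t_jjp + t_ijp * t_ipj) / ((2.0 * ell + 1.0) * delta_ell * f_sky)
    sys_floor = f_sys**2 * c_ij * c_ipjp
    return np.diag(gauss + sys_floor)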
§ SIMULATIONS
We use the AbacusSummit suite of high-performance cosmological N-body simulations <cit.> to create the simulated galaxy samples used in this analysis. The AbacusSummit suite was designed to meet the simulation requirements of the Dark Energy Spectroscopic Instrument (DESI) survey and was run with the high-accuracy cosmological code Abacus <cit.>. We utilize one of the “base”-resolution AbacusSummit boxes at the fiducial cosmology: Ω_b h^2 = 0.02237, Ω_c h^2 = 0.12, h = 0.6736, 10^9 A_s = 2.0830, n_s = 0.9649. Its box size is 2000 Mpc/h, and it contains 6912^3 particles with mass M_ part = 2.1 × 10^9 M_⊙/h.
To construct the mock galaxy samples from the N-body outputs, we utilize the 10% particle subsample (i.e., including both particle subsamples, which constitute 3% and 7% of the particles, respectively), which is selected randomly and is consistent across redshift, and the on-the-fly halo catalogues, which are generated using the halo-finding algorithm CompaSO <cit.>. Specifically, we apply the AbacusHOD model <cit.> to the halo catalogue outputs at z = 0.1, 0.3, 0.5, 0.8, 1.1, 1.4, 1.7, 2.0, 2.5, 3.0. The AbacusHOD model builds upon the baseline halo occupation distribution (HOD) model by incorporating various generalizations pertaining to halo-scale physics and assembly bias.
For the fiducial galaxy samples considered in this analysis, we assume that they are well-approximated by the “baseline HOD” model with no decorations. The model is akin to the 5-parameter model of <cit.>, which gives the mean expected number of central and satellite galaxies per halo given halo mass M:
N̅_cent(M) = 1/2erfc[log_10(M_min/M)/√(2)σ_M],
N̅_sat(M) = [M-κ M_min/M_1]^αN̅_cent(M).
Here, M_min characterizes the minimum halo mass to host a central galaxy, M_1 is the typical halo mass that hosts one satellite galaxy, σ_M describes the steepness of the transition from 0 to 1 in the number of central galaxies, α is the power law index on the number of satellite galaxies, and κ M_min gives the minimum halo mass to host a satellite galaxy. Central galaxies are placed at the halo centre, whereas satellite galaxies are “painted” on random particles.
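A direct transcription of these occupation functions reads as follows (a minimal sketch; halo masses are assumed to be in the same units as M_min and M_1, and the satellite term is set to zero below κ M_min):

import numpy as np
from scipy.special import erfc

def n_cent(M, M_min, sigma_M):
    """Mean central occupation per halo of mass M (first equation above)."""
    return 0.5 * erfc(np.log10(M_min / M) / (np.sqrt(2.0) * sigma_M))

def n_sat(M, M_min, sigma_M, M1, kappa, alpha):
    """Mean satellite occupation (second equation above); zero below kappa*M_min."""
    M = np.atleast_1d(np.asarray(M, dtype=float))
    frac = np.clip(M - kappa * M_min, 0.0, None) / M1
    return frac**alpha * n_cent(M, M_min, sigma_M)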
In this work, we consider two main galaxy samples, which are designed to mimic two distinct galaxy sample choices for clustering analyses: (i) a homogeneous sample of Luminous Red Galaxies (LRGs) with a moderate number density (called `red' hereafter), which will constitute our fiducial sample throughout this work, and (ii) a magnitude-limited, high-number density sample (called `maglim' hereafter)[Similar samples have been considered in the DES Y3 analyses (see e.g. Ref. <cit.>), and we expect such samples to also be used for clustering analyses of LSST data.]. We model the HODs of these samples based on observational constraints using Equations <ref> and <ref>. In order to account for the evolution of both samples, we assume a redshift dependence of the three HOD masses (M_min, M_1, and M_0≡κ M_min) of the form <cit.>
log_10 M_x/M_⊙ = μ_x+μ_x,p[1/1+z-1/1+z_p],
with z_p=0.65. Using this parameterization, we model the HOD of the maglim sample using the results derived for the HSC sample studied in Ref. <cit.>. For the red sample on the other hand, we assume redshift-dependent HOD masses consistent with the DESI LRG selection described in Ref. <cit.>. The adopted HOD parameters for both samples are given in Tab.<ref>.
We complement these samples with an additional data set for which we also model assembly bias effects. A number of studies have investigated the dependence of clustering statistics on quantities other than halo mass, such as halo concentration, environment or spin (see e.g. Refs. <cit.>), and some of these works have resulted in significant detections of this so-called assembly bias effect. The nonlinear galaxy bias models considered in this work in principle offer the flexibility to also describe tracers affected by these processes. In order to test this, we generate an additional galaxy sample from AbacusSummit via the fast and efficient decorated HOD model, AbacusHOD <cit.>, incorporating assembly bias effects due to halo concentration and environment as described and defined in Refs. <cit.>. In particular, we adopt non-zero values for the concentration and environment assembly bias parameters of both the central and satellite populations, setting the concentration parameters to -0.73 (centrals) and -0.24 (satellites), and the environment parameters to -0.0093 (centrals) and 0.0037 (satellites). These parameters modify the central and satellite halo occupations by re-ranking the haloes at fixed mass according to their intrinsic concentration and environment, and their values are chosen to be close to the best fit for CMASS data <cit.>. For full definitions of the parameters and how they are used, see Ref. <cit.>.
We would like to stress that all galaxy samples considered in this work are generated using a simplified HOD model, which includes some extensions beyond the “vanilla” model described in Ref. <cit.>. However, for maximal realism, one could imagine incorporating other effects such as redshift dependence of the selection functions, deviations of the satellite occupations from a Poisson distribution, and a further dependence of the central occupations on assembly bias. These additional effects might significantly affect the clustering and stochasticity of the samples, and thus potentially also our results. We leave an investigation of more complex HOD models to future work.
In addition, in this work we do not consider the additional uncertainties in realistic samples due to galaxy selection effects and photo-z estimation. These uncertainties are specific to photometric surveys; recent analyses from Stage III surveys, e.g. Ref. <cit.>, have found that imperfect correction of “survey properties” appears to dominate statistical uncertainties for red samples in DES. Given LSST's statistical precision, these issues may require significant effort. In addition, the sample selection and redshift binning interplay with bias evolution, magnification and other effects not considered here. Finally, we note that the Euclid survey has planned an ambitious tomographic analysis comprising about a dozen redshift bins. The opportunities and challenges posed by such an analysis require separate investigation.
§ METHODS
§.§ Generating smooth spectra
We use the simulated galaxy samples from AbacusSummit to compute three-dimensional galaxy-galaxy auto- and galaxy-matter cross-power spectra for all the redshifts covered by the simulations. Sample variance in the power spectra extracted from AbacusSummit is significant on large scales compared to the expected error bars, which could lead to biases in our analysis. As described in detail in Appendix <ref>, we suppress this noise by combining the spectra measured from the simulations with theoretical predictions for the linear power spectrum on large scales. This procedure allows us to model the measured power spectra to an accuracy below the sample variance uncertainties of the AbacusSummit simulation box. The reader is referred to Appendix <ref> for a more detailed description of the methodology.
§.§ Computing simulated data vectors
In order to compute spherical harmonic power spectra using Eq. <ref>, we need to specify redshift distributions for the source galaxies as well as the two clustering samples considered in this work. For the source sample, we assume a number density and overall redshift distribution based on the LSST Y10 sample as specified in the LSST DESC Science Requirements document <cit.>. The sample is divided into 5 redshift bins, enforcing an equal number of galaxies in each bin, and assuming Gaussian photometric redshift uncertainties with standard deviation σ_z=0.05(1+z). The resulting redshift distributions are shown in the left panel of Fig. <ref>, with the inter-bin tails caused by the photo-z uncertainties. For the clustering on the other hand, we assume different distributions for the maglim and the red sample: For the maglim sample we make the assumption that we use the same set of galaxies for lensing and clustering measurements, and thus set the clustering redshift distributions to those assumed for the source sample. For the red sample on the other hand, we assume smaller photometric redshift uncertainties and a lower maximal redshift. The number density and overall redshift distribution is calculated from the luminosity function of red galaxies, following the procedure detailed in Ref. <cit.>. The sample is then divided into 6 redshift bins, evenly spaced in photometric redshift, and assuming a photo-z scatter σ_z=0.02(1+z). The resulting tomographic redshift bins are shown in the right panel of Fig. <ref>.
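As an illustration of how such tomographic bins can be built, the sketch below convolves a parent redshift distribution with a Gaussian photo-z model; the parent dn/dz and the bin edges are placeholders, not the SRD or luminosity-function-based inputs used in the analysis.

import numpy as np
from scipy.special import erf

# Minimal sketch (not the actual SRD implementation): split a parent dn/dz into
# tomographic bins assuming Gaussian photo-z scatter sigma_z = sigma0 * (1 + z).
z = np.linspace(0.0, 3.0, 600)
dndz_parent = z**2 * np.exp(-(z / 0.3)**1.5)    # placeholder parent distribution

def tomographic_nz(z, dndz, z_lo, z_hi, sigma0):
    """True-redshift distribution of galaxies whose photo-z falls in [z_lo, z_hi]."""
    sz = sigma0 * (1.0 + z)
    # Probability that the photo-z of a galaxy at true redshift z lands in the bin.
    p_bin = 0.5 * (erf((z_hi - z) / (np.sqrt(2.0) * sz)) -
                   erf((z_lo - z) / (np.sqrt(2.0) * sz)))
    nz = dndz * p_bin
    return nz / np.trapz(nz, z)

# E.g. six clustering bins evenly spaced in photo-z (placeholder edges), sigma_z = 0.02(1+z).
edges = np.linspace(0.0, 1.2, 7)
nz_bins = [tomographic_nz(z, dndz_parent, lo, hi, 0.02)
           for lo, hi in zip(edges[:-1], edges[1:])]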
Using these redshift distributions, we obtain spherical harmonic power spectra for an LSST-like analysis. We use the computed three-dimensional power spectra from AbacusSummit and interpolate those in redshift to evaluate Eq. <ref>. We verified that the number of samples in redshift used in this interpolation was large enough to produce accurate angular power spectra. In a next step, we compute all possible auto- and cross-correlations between probes and tomographic bins, except that we do not include cross-correlations between clustering bins in our analysis. In practice, we perform all projections into angular power spectra using the DESC Core Cosmology Library ([<https://github.com/LSSTDESC/CCL>.]) <cit.>. This leaves us with a set of 51 or 45 spherical harmonic power spectra for the red or maglim clustering samples respectively, which we evaluate for 22 bandpowers with edges ℓ = {32, 42, 52, 66, 83, 105, 132, 167, 210, 265, 333, 420, 528, 665, 838, 1054, 1328, 1672, 2104, 2650, 3336, 4200, 5287}. When applying a scale cut based on a maximal wavenumber k_max for power spectra involving clustering, we only keep bandpowers for which the mean of the ℓ-bin satisfies ℓ < k_maxχ(z̅), where χ(z̅) denotes the comoving distance to the mean redshift of the respective bin. For the cosmic shear power spectra on the other hand, we fix ℓ_max=2000 for all cases.
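The projection step can be sketched with pyccl (the Python interface to the DESC Core Cosmology Library mentioned above); the cosmology, redshift distributions and unit linear bias below are placeholders, and in the actual analysis the three-dimensional spectra measured from AbacusSummit are used in place of CCL's internal power spectrum.

import numpy as np
import pyccl as ccl

# Placeholder cosmology and n(z) for a single clustering and a single source bin.
cosmo = ccl.Cosmology(Omega_c=0.27, Omega_b=0.049, h=0.67, sigma8=0.81, n_s=0.96)
z = np.linspace(0.01, 2.0, 200)
nz_clu = np.exp(-0.5 * ((z - 0.6) / 0.05)**2)
nz_src = np.exp(-0.5 * ((z - 1.0) / 0.30)**2)

tracer_g = ccl.NumberCountsTracer(cosmo, has_rsd=False, dndz=(z, nz_clu),
                                  bias=(z, np.ones_like(z)))
tracer_s = ccl.WeakLensingTracer(cosmo, dndz=(z, nz_src))

ells = np.arange(32, 2001)
cl_gs = ccl.angular_cl(cosmo, tracer_g, tracer_s, ells)   # galaxy-galaxy lensing C_ell

# Scale cut for spectra involving clustering: keep ell < k_max * chi(mean z of the bin).
k_max = 0.4                                                # Mpc^-1
z_mean = np.average(z, weights=nz_clu)
chi_mean = ccl.comoving_radial_distance(cosmo, 1.0 / (1.0 + z_mean))  # in Mpc
cl_gs_cut = cl_gs[ells < k_max * chi_mean]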
As the aim of this work is to assess the performance of different galaxy bias models, we need to prevent biasing our results due to inaccuracies in modeling the matter power spectrum used for cosmic shear predictions. We therefore rescale the AbacusSummit matter power spectrum to match the halofit <cit.> prediction (see e.g. Ref. <cit.> for a similar approach)[As shown in Ref. <cit.>, the matter power spectrum from AbacusSummit agrees with that from halofit to a level of 2% for k<0.5 h Mpc^-1 for the fiducial AbacusSummit cosmology. For other cosmological models however, the discrepancies increase; an example is shown in Ref. <cit.>, who find differences of up to 4% between the ν simulations and <cit.> at k=0.2 h Mpc^-1. We expect these to be comparable to the differences between Abacus and halofit for a range of cosmological models.]. In Sec. <ref>, we explore the effect of not rescaling cosmic shear power spectra. We note that all other power spectra are left unchanged and taken from the AbacusSummit simulations, which means that matter power spectrum mis-modeling might affect our results for P_δ_gδ_g and P_δ_gδ_m. As described in Sec. <ref>, we test the sensitivity of LPT and EPT to implementation choices for modeling the matter power spectrum, but leave a more extensive analysis to future work.
We generate covariances for these data as outlined in Sec. <ref>, modeling LSST Y10 data based loosely on the LSST DESC Science Requirements Document (SRD) <cit.>. We set the fraction of sky covered to f_sky=0.4. We model the noise for cosmic shear following Ref. <cit.>, setting the effective number density of galaxies to n_eff = 27 arcmin^-2, and the intrinsic ellipticity to σ_e = 0.28. For galaxy clustering on the other hand, we follow an alternative approach: We find the considered samples to exhibit a significant level of non-Poissonian stochasticity. Subtracting an estimate for the Poissonian shot noise of the sample from the simulated three-dimensional power spectra would therefore lead to biased results. We circumvent this issue by considering clustering noise levels consistent with the stochasticity determined by the HODs of the simulated galaxy samples. In practice this means that we do not subtract any Poisson shot noise estimate from the simulated power spectra, but account for it in the theoretical modeling as described in Sections <ref> and <ref>[We note that this approach amounts to assuming that the noise levels determined by the assumed HODs are consistent with the clustering noise levels expected for LSST Y10 data. Given that we have modeled our samples according to observational results, we believe this to be an acceptable approximation.].
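To make explicit how these noise levels enter the error budget, the snippet below sketches a Knox-type Gaussian covariance for bandpowers. This is an illustration only (the covariance actually used is the one described above), and the factor in the shear noise depends on whether σ_e denotes the total or per-component ellipticity dispersion.

import numpy as np

def knox_covariance(cl_ik, cl_jl, cl_il, cl_jk, ell_eff, delta_ell, f_sky):
    """Gaussian (Knox-type) covariance of two bandpowers C_ell^{ij} and C_ell^{kl};
    the inputs are signal-plus-noise spectra evaluated at the effective multipole."""
    n_modes = (2.0 * ell_eff + 1.0) * delta_ell * f_sky
    return (cl_ik * cl_jl + cl_il * cl_jk) / n_modes

# Noise levels quoted in the text, converted to per-steradian quantities.
arcmin2_to_sr = (np.pi / (180.0 * 60.0))**2
n_eff_src = 27.0 / arcmin2_to_sr           # source density per steradian
sigma_e = 0.28
noise_shear = sigma_e**2 / n_eff_src        # shear noise spectrum (convention-dependent factor)
# For clustering, no Poisson term is subtracted from the simulated spectra; the
# (possibly non-Poissonian) stochasticity is instead kept as part of the model.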
Finally we consider an extension of our LSST-like analysis: we additionally model a joint analysis of our fiducial red galaxy sample from LSST with CMB lensing from CMB S4. Practically, we thus extend our fiducial data vector with all possible cross-correlations with the CMB lensing convergence leading to a set of 62 spherical harmonic power spectra. To model CMB S4 lensing and its associated covariance through Equations <ref> and <ref>, we follow the specifications given in the CMB S4 wiki[The description can be found at <https://cmb-s4.uchicago.edu/wiki/index.php/Survey_Performance_Expectations>.] (private communication Toshiya Namikawa and Colin Hill), and use the CMB lensing reconstruction noise computed using the [The package can be found at <https://github.com/toshiyan/cmblensplus>.] code. We assume a common sky coverage of LSST and CMB S4 given by f_sky=0.4.
To fully simulate the expected data vectors, one should draw a noise realization from the covariance matrix and add it to the noiseless prediction for the observables. We choose not to do this: for an ideal model, our result should thus center on the true values of cosmological parameters and the corresponding χ^2 should satisfy χ^2=0. Therefore, any deviation in χ^2 and parameter values indicates systematic errors in the theory modeling (see also Section <ref>).
§.§ Deriving best-fit parameters and associated uncertainties
For all bias models and galaxy samples analyzed in this work, we assume a Gaussian likelihood with simulated data and covariance matrix as described above. In order to compare the performance of different bias models and test if the fits to the simulated data return unbiased constraints on cosmological parameters, we would ideally sample the likelihood with a Markov Chain Monte Carlo (MCMC) method and thus derive the full posterior of our model parameters. As we are considering a large number of bias models and different implementations, this is computationally expensive (although see Ref. <cit.> for potential ways to accelerate this process). We therefore resort to a simplified approach: for all models considered, we run an optimization algorithm to maximize the likelihood and determine the corresponding values of cosmological and bias parameters. Specifically, we use the [The code can be found at <https://numericalalgorithmsgroup.github.io/pybobyqa/build/html/index.html>.] algorithm <cit.> through the [The code can be found at <https://cobaya.readthedocs.io/en/latest/>.] package as well as Powell's optimization method as implemented in <cit.>. We choose the latter method as the parameter space considered in this analysis exhibits significant degeneracies, and we find that the directional optimization implemented in Powell's method outperforms the former in most cases. We therefore use Powell's method for most of our fiducial results but always compare to the results obtained with the other minimizer to test the robustness of our conclusions. In addition, we perform a number of further tests to ensure the stability of our results: we rerun each case multiple times to test stability against changes in the initial conditions, and we also vary the convergence criterion for both methods as well as the number of starting points, finding consistent results for reasonable settings of these parameters.
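A minimal sketch of this likelihood maximization, using scipy's Powell method on a toy Gaussian likelihood, is given below; the data vector, inverse covariance and model are placeholders standing in for the full set of bandpowers and the bias-model predictions.

import numpy as np
from scipy.optimize import minimize

# Toy stand-ins for the full data vector, inverse covariance and theory model.
data = np.array([1.0, 2.1, 2.9])
inv_cov = np.eye(3) / 0.1**2
model_fn = lambda theta: theta[0] * np.array([1.0, 2.0, 3.0]) + theta[1]

def neg_log_like(theta):
    """Gaussian -log(likelihood), 0.5 (d - m)^T C^{-1} (d - m), up to a constant."""
    resid = data - model_fn(theta)
    return 0.5 * resid @ inv_cov @ resid

theta0 = np.array([0.9, 0.1])   # starting point; in practice several are tried
result = minimize(neg_log_like, x0=theta0, method="Powell",
                  options={"xtol": 1e-8, "ftol": 1e-8, "maxiter": 100000})
best_fit, chi2_min = result.x, 2.0 * result.fun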
In order to characterize biases on cosmological parameters, we additionally need to estimate parameter uncertainties. In this work, we use a Fisher matrix (FM) formalism to forecast uncertainties on cosmological and nonlinear bias parameters. The Fisher matrix allows for propagation of uncertainties on observables (in our case, the spherical harmonic power spectra) to corresponding uncertainties on model parameters. Under the assumption that the dependence of the data covariance matrix 𝐂 on the parameters of interest θ_α can be neglected (which is a very good approximation <cit.>), the Fisher matrix F is given by (see e.g. Refs. <cit.>)
F_αβ = (∂𝐃/∂θ_α)^T 𝐂^-1 (∂𝐃/∂θ_β),
where 𝐃 denotes the data vector of a given experiment (in our case, a list containing all the power spectra used in the analysis).
The Cramér-Rao bound then states that the uncertainty on θ_α, marginalized over all other θ_β satisfies
Δθ_α≥√((F^-1)_αα).
In this analysis, we consider a fiducial model characterized by two cosmological parameters and six nonlinear bias parameters per clustering redshift bin, i.e. θ = {σ_8, Ω_c, b^p, i_1, b^p, i_1p, b^p, i_2, b^p, i_s^2, b^p, i_∇^2, P^p, i_SN}, where i denotes the redshift bin (i.e. i=1,…,5 or i=1,…,6 depending on the clustering sample), and p = L in the case of LPT and HEFT. Computing the Fisher matrix requires the assumption of fiducial values for the model parameters, which we set to the best-fit values derived from the minimizer. We compute derivatives of the observables with respect to the model parameters numerically using a five-point stencil with step size ϵ = 0.01θ, where θ denotes any parameter considered in our analysis[For parameters with fiducial value of zero, we set ϵ = 0.01.]. We test the stability of our results by varying the parameter ϵ and find our results to be largely insensitive to this choice.
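The sketch below illustrates this procedure on a toy model: five-point-stencil derivatives of the model prediction around the fiducial point, the Fisher matrix defined above, and marginalized errors from the Cramér–Rao bound; the model function, fiducial values and covariance are placeholders.

import numpy as np

def five_point_derivative(model_fn, theta, i, frac=0.01):
    """d(model)/d(theta_i) from a five-point stencil with step eps = frac * |theta_i|."""
    eps = frac * abs(theta[i]) if theta[i] != 0 else frac
    vals = []
    for shift in (-2, -1, 1, 2):
        t = np.array(theta, dtype=float)
        t[i] += shift * eps
        vals.append(model_fn(t))
    m2, m1, p1, p2 = vals
    return (m2 - 8.0 * m1 + 8.0 * p1 - p2) / (12.0 * eps)

def fisher_matrix(model_fn, theta_fid, inv_cov):
    derivs = [five_point_derivative(model_fn, theta_fid, i) for i in range(len(theta_fid))]
    n = len(theta_fid)
    fisher = np.zeros((n, n))
    for a in range(n):
        for b in range(n):
            fisher[a, b] = derivs[a] @ inv_cov @ derivs[b]
    return fisher

# Toy usage with a linear model standing in for the full C_ell prediction:
model_fn = lambda th: th[0] * np.array([1.0, 2.0, 3.0]) + th[1]
theta_fid = np.array([0.8, 0.3])
inv_cov = np.eye(3) / 0.05**2
F = fisher_matrix(model_fn, theta_fid, inv_cov)
marginalized_errors = np.sqrt(np.diag(np.linalg.inv(F)))   # Cramer-Rao bound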
Fisher matrix analyses are prone to numerical instabilities (see e.g. Refs. <cit.>). An additional complication in our case is that the posterior is likely to exhibit non-Gaussian features due to degeneracies between model parameters. We therefore test the robustness of our results by comparing our FM constraints to those derived using an MCMC for selected cases, finding reasonable agreement as described in detail in Appendix <ref>.
§.§ Assessing model performance
For all the bias models considered in this analysis, we assess model performance as a function of the smallest scale (largest wavenumber) k_max considered for galaxy clustering and galaxy-galaxy lensing data. As we are working with angular power spectra, we convert this quantity to a maximal angular multipole for each redshift bin using the approximate relation ℓ_max = k_maxχ(z̅), where z̅ denotes the mean redshift of each bin.
In this work, we assess model performance in several different ways. First, we test if the values of the cosmological parameters recovered from our fits are consistent with their fiducial values within statistical uncertainties as derived from the Fisher matrix analysis. This test does not guarantee good model performance, as even models that do not fit the data can potentially yield unbiased parameter constraints. We therefore additionally assess goodness-of-fit (GOF) of all models considered. As described in Section <ref>, we work with noiseless data vectors.
We therefore do not expect the χ^2 of the fit to follow a χ^2-distribution with mean given by the number of degrees of freedom in the data, but rather to be significantly smaller and dominated by model performance and numerical errors. Based on this, we devise a set of two χ^2-tests which we use to validate our results:
Noiseless GOF: First, we determine the minimal χ^2-value of the fit to the noiseless data vector, which we will call χ^2_theory. We need to make sure that this quantity is significantly smaller than the Δχ^2 allowed by statistical uncertainties and the number of degrees-of-freedom of our model. In the presence of noise, we expect the minimal χ^2 of the fit to roughly follow a χ^2-distribution with number of degrees-of-freedom dof = n_data-n_param, where n_data denotes the number of data points, and n_param is the number of parameters in the model[Strictly, this is only valid for linear models with independent basis functions (see e.g. Ref. <cit.>). We will nevertheless use this criterion throughout this analysis, as we are mainly interested in deriving a threshold and not accurate probabilities-to-exceed.]. We call this latter quantity χ^2_noise, which in our case is given by χ^2_noise = n_C_ℓ - n_param - n_C_ℓ^γγ, where n_C_ℓ denotes the total number of spherical harmonics C_ℓ, and n_C_ℓ^γγ denotes the number of shear-only data points. As described above, we set the matter power spectrum of the simulations to its halofit prediction, and we therefore do not count the shear power spectra as degrees-of-freedom of the model.
In our first test we thus check that the sum of the expected χ^2_noise and χ^2_theory is reasonably likely under a χ^2 distribution with mean χ^2_noise. This test ensures that the differences due to systematic uncertainties in the theory are subdominant to statistical uncertainties. For the noiseless data vector, we also examine the size of the fit residuals.
Noisy GOF: Secondly, we use our synthetic data vector alongside the covariance matrix to create a noisy realization of the data. We then fit these data using all the bias models discussed and test that the best fit yields a χ^2 consistent with χ^2_noise + χ^2_theory. In contrast to the previous test, we set χ^2_noise = n_C_ℓ - n_param, as the noise realization now also affects the cosmic shear data. We additionally check that the histogram of the fit residuals is consistent with a Gaussian.
In principle, we require the bias models to pass both these tests in order to be deemed acceptable. Given the large number of cases considered in our analysis, and the computational cost of performing the `Noisy GOF' test for each case, in practice we only perform the second test for our fiducial case. As we find largely consistent results between the two tests in this case, we resort to only the `Noiseless GOF' test for the remainder of this work. As is customary, we set a threshold of a p-value larger than p=0.05 for a model to pass a specific test.
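Concretely, the acceptance criterion for the noiseless test can be written as in the sketch below; the counts used in the example correspond to the red-sample numbers quoted in the appendix, and drawing a noisy realization for the second test amounts to adding a multivariate-normal draw from the covariance to the data vector.

import numpy as np
from scipy.stats import chi2

def noiseless_gof_pvalue(chi2_theory, n_cl, n_param, n_cl_shear_only):
    """p-value of the noiseless GOF test: is chi2_noise + chi2_theory plausible
    under a chi^2 distribution with chi2_noise degrees of freedom?"""
    chi2_noise = n_cl - n_param - n_cl_shear_only
    return chi2.sf(chi2_noise + chi2_theory, df=chi2_noise)

# Red-sample counts from the appendix: 786 spectra, 38 parameters, 270 shear-only
# spectra, i.e. chi2_noise = 478; chi2_theory <= 51 then corresponds to p >= 0.05.
p_value = noiseless_gof_pvalue(chi2_theory=17.0, n_cl=786, n_param=38, n_cl_shear_only=270)
passes = p_value > 0.05

# For the noisy GOF test, a noisy realization of the data can be drawn as
# data_noisy = data + np.random.default_rng(0).multivariate_normal(np.zeros(len(data)), cov).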
§ RESULTS
In the following sections, we present the results of our analysis. Unless stated otherwise, all results will be presented for our fiducial, red sample. At the end of the section, we also give a summary of the results obtained for the alternative samples considered.
§.§ HEFT
We first investigate the performance of the two HEFT implementations. As discussed in Sec. <ref>, we minimize the likelihood with respect to two cosmological parameters {Ω_c, σ_8}, as well as six bias parameters per redshift bin, {b_1, b_1p, b_2, b_s^2, b_∇^2, P_SN}. We keep the remaining cosmological parameters fixed at their fiducial values adopted in AbacusSummit.
The upper panels of Fig. <ref> show the values of the cosmological parameters σ_8 and Ω_c recovered from fitting our fiducial red galaxy sample with and . As can be seen, both HEFT methods are able to recover the true parameter value within 1σ uncertainty for all values of k_max considered[For as well as the PT-based methods discussed in Sec. <ref>, we see a sign that the recovered values of σ_8 and Ω_c are systematically under- or over-estimated. These effects are not significant at the precision we are working in, but might be a sign for biases that could become important at higher signal-to-noise. However, this could also be a feature of the particular simulation realization as the bias parameters derived as a function of k_max are highly correlated.]. The bottom left panel of Fig. <ref> shows the minimal χ^2 obtained for both models, and we can use these values to assess the goodness-of-fit in all cases. The most stringent test for each model is obtained at the maximal k_max considered in our analysis, k_max=0.4 Mpc^-1 (corresponding to 0.6 hMpc^-1), and in the following we only discuss this case. The results of our goodness-of-fit tests are described in detail in Appendix <ref>, and here we only give a summary of our main findings. For the noiseless data vector (noiseless GOF), we find that both models, and , pass our χ^2 tests[We note that for both HEFT methods as well as for our fiducial implementations of LPT and EPT, we find consistent results when we restrict our data vector to only include galaxy clustering and galaxy-galaxy lensing in a so-called 2x2pt analysis.]. In addition, most obtained fit residuals lie within 1σ of the synthetic data and the relative differences are generally of order 1%. We additionally consider two noisy realizations of the data vector (noisy GOF), finding all tests to pass with the exception of the analysis of one of the noisy realizations with , which we consider to be a statistical fluctuation. In all cases considered, we further find the distribution of fit residuals to be consistent with a Gaussian.
In order to test for possible systematic modeling uncertainties, we compare the consistency of the bias parameter values obtained for the two HEFT implementations. While a detailed description is deferred to Appendix <ref>, we generally find consistency between the bias parameters derived with the two codes. The only exception is b_∇^2, for which one implementation yields consistently higher values than the other. We attribute these differences to slightly different implementations of the nonlocal power spectra in the two HEFT models, as well as known sensitivities of b_∇^2 to small-scale implementation details such as different smoothing scales used when deriving template spectra (see e.g. Ref. <cit.>).
Finally, we expect that systematics in the modeling might manifest themselves as dependencies of the derived bias parameter values on the maximal wavenumber k_max used in the analysis. In order to test for this, we thus compare the bias parameter values obtained for when varying k_max from k_max=0.1 Mpc^-1 to k_max=0.4 Mpc^-1, finding largely consistent results.
Based on these results, the two HEFT implementations considered in this work thus appear promising for analyzing high-precision data as expected from LSST.
§.§ LPT/EPT
In addition to the HEFT implementations discussed above, we also assess the performance of two PT-based models, Lagrangian and Eulerian perturbation theory. As described in Sec. <ref>, we consider two different ways to model the matter power spectrum P_δ_mδ_m(z, k) as well as the nonlocal contributions in P_δ_gδ_g and P_δ_gδ_m: a halofit-based variant and a fully perturbative variant. We remind the reader that this is separate from our rescaling of the matter power spectrum for modeling cosmic shear (as described in Sec. <ref>), which is performed for both variants.
The results obtained from fitting the red galaxy sample with the LPT and EPT models are shown in Fig. <ref>. Similarly to the results for the two HEFT implementations, we find that for the models, the recovered values for σ_8 and Ω_c agree with their fiducial values within 1σ in all cases. For the models on the other hand, we find the recovered values for the cosmological parameters to start showing biases of around 1σ for k_max≳ 0.3 Mpc^-1, increasing to 1.5 to 2σ for k_max = 0.4 Mpc^-1. This is borne out by our goodness-of-fit tests, as described in Appendix <ref>. We find that both methods pass our tests on the noiseless data. Despite both models returning biased constraints on cosmological parameters, we find that the fully perturbative implementation of LPT still passes our goodness-of-fit test, while EPT does not. These results suggest that predicting the galaxy power spectra considered in this analysis up to a maximal wavenumber of k_max = 0.4 Mpc^-1 requires more accurate modeling of the matter power spectrum and nonlocal terms, which we achieve here through our use of the halofit fitting function. These findings are consistent with previous studies, such as e.g. Refs. <cit.>, which also found significantly improved performance of EPT bias models when using non-perturbative predictions for the matter power spectrum.
Nevertheless, comparing to previous analyses that roughly find LPT and EPT bias models to only reach up to k_max∼ 0.3 h Mpc^-1 (see e.g. Ref. <cit.>), these results are somewhat surprising. We believe this to be due both to differences in the considered data vector as well as the criterion chosen to assess goodness-of-fit: first, in this work we focus on spherical harmonic power spectra, which constitute a line-of-sight projection of the underlying three-dimensional power spectrum usually studied in the literature. The line-of-sight projection results in the smoothing of power spectrum features and might thus increase the reach of perturbative nonlinear bias models. In contrast to other works, we additionally do not include redshift space distortions (RSDs) in our analysis, as their impact on projected statistics is expected to be small (although potentially not negligible <cit.>), which might also lead to better performance of PT-based models. Finally, our results could also be affected by our pragmatic approach to assess the goodness-of-fit of a given model: we only require the fit to return unbiased constraints on Ω_c and σ_8, as well as returning a χ^2-value passing the criteria discussed in Sec. <ref>. In particular, we do not make any requirements on model residuals as was done in previous analyses, and we include a theoretical relative error floor of 1% in all our covariances. We believe it is for these reasons that our results find perturbative bias models to be applicable up to slightly smaller scales than previous analyses. In Appendix <ref> we further investigate this by performing an analogous analysis of three-dimensional power spectrum data in real space from the UNIT simulation <cit.>.
As above, in a final test we compare the bias parameter values obtained using our fiducial implementation of LPT and EPT. As discussed in Appendix <ref>, we generally find good agreement between both methods, although the EPT approach seems to prefer significantly larger values for b_∇^2 and P_ SN compared to LPT. In addition, the recovered bias parameters for the two models are generally consistent with those obtained from the HEFT methods. The only exception is b_∇^2, for which we obtain significantly lower values for the HEFT implementations than we do for the perturbation-theory-based models, particularly at high redshift. As further discussed in Appendix <ref>, this could be a possible sign of larger model inaccuracies in the PT-based methods as compared to HEFT.
§.§ Bias consistency relations
As discussed in Sec. <ref>, we expect Eulerian and Lagrangian bias parameters to exhibit consistency relations under purely gravitational evolution. While we do not anticipate these relations to hold exactly due to non-gravitational processes involved in galaxy formation, they present a complementary means for validating our results, and our bias parameter fitting procedure in particular. We therefore investigate the validity of the coevolution relations given in Eq. <ref>. The results using the models with k_max = 0.4 Mpc^-1 are illustrated in Fig. <ref> both for our fiducial, red sample as well as the maglim sample, which we discuss in more detail below: in the upper panels we show the value of b_2 and b_s^2 as a function of b^L_1, b^L_2 and b^L_1, b^L_s^2, respectively, while the lower panel shows the relation between b_s^2 and b_1. In these figures, we also show the uncertainties associated with the various bias parameters. However, as discussed in more detail in Sections <ref> and <ref>, we find the Fisher matrix-derived uncertainties on bias parameters to be rather unstable for the full model, while they match their MCMC counterparts if we fix either b_s^2 or b_∇^2. Where possible we thus always show error bars obtained setting b_s^2=0 (even though this has no effect on cosmological parameter uncertainties). As this is not possible in this case, we caution the reader to keep these instabilities in mind when interpreting the uncertainties. As can be seen from the figure, the obtained bias values largely follow the theoretically expected trends. However, we do see signs of significant deviations, particularly in the consistency between b_2 and b^L_1, b^L_2. Further investigating this, we also compare our results to the empirical relations between Lagrangian bias parameters found in Ref. <cit.> and repeat the analysis for the UNIT simulations, finding similar results in both cases. This suggests that these findings are not driven by our use of AbacusSummit, and that empirical bias relations do not provide a better fit to our data than those derived from coevolution. Another possible reason for these results might be our choice to not consider third-order bias parameters in EPT, which breaks the full correspondence between EPT and LPT (as discussed in Sec. <ref>), and might lead to some of these differences being absorbed by lower-order bias parameters. We leave a further investigation to future work.
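For reference, one commonly quoted form of these gravity-only coevolution relations is sketched below; the exact prefactors depend on the conventions adopted for the bias operators, so the expressions referenced above should be taken as the authoritative form used in this analysis.

def eulerian_from_lagrangian(b1_L, b2_L, bs2_L, bnabla2_L=0.0):
    """One common convention for the gravity-only coevolution mapping from
    Lagrangian to Eulerian bias parameters; prefactors vary between conventions."""
    b1 = 1.0 + b1_L
    b2 = b2_L + (8.0 / 21.0) * b1_L
    bs2 = bs2_L - (2.0 / 7.0) * b1_L
    bnabla2 = bnabla2_L
    return b1, b2, bs2, bnabla2

# E.g., a purely local-in-matter-density Lagrangian bias (b2_L = bs2_L = 0) predicts
# bs2 = -(2/7) * (b1 - 1), i.e. the type of trend shown in the lower panel.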
§.§ Alternative samples
We investigate the generality of the results presented so far by applying the nonlinear bias models considered in this work to the analysis of three alternative samples, as described in Sections <ref> and <ref>: a magnitude-limited galaxy sample, a galaxy sample featuring assembly bias, and a joint analysis of our fiducial data vector with CMB lensing data from a CMB S4-like experiment. For all the following cases, we only show the results for σ_8 as the results for Ω_c are qualitatively similar.
§.§.§ Magnitude-limited sample
We repeat our analysis using the maglim sample discussed in Sec. <ref>, using the same redshift bins for galaxy clustering and weak gravitational lensing. In its fiducial implementation, the emulator only covers redshifts z≤ 1.5, and we thus need to extend it using LPT at high redshift in order to model the maglim sample. The results obtained for all bias models after implementing this extension are shown in Fig. <ref>. As can be seen from the upper panels, we generally recover unbiased results for all our four fiducial bias models[The only exception is the fit using LPT at k_max=0.25 Mpc^-1, which is an outlier as compared to the other cases. We suspect this to be due to a parameter degeneracy specific to this case preventing the minimizer from converging to the global minimum. This hypothesis is strengthened by the fact that evaluating the likelihood at k_max=0.25 Mpc^-1 with the best-fit parameter values derived using data up to k_max=0.4 Mpc^-1 leads to a significantly lower χ^2-value.]. For the maglim sample we only consider five clustering bins, thus leading to an effective number of degrees-of-freedom of dof = 436. Following Sec. <ref>, we thus require χ^2_theory, max≤ 49 to pass our goodness-of-fit test. From Fig. <ref> we can see that all minimal χ^2-values derived using our fiducial bias models satisfy this criterion. In the lower panels of Fig. <ref>, we additionally show the recovered parameter values for the variants of LPT and EPT. As opposed to the results obtained for the red sample, we recover unbiased constraints on σ_8 and Ω_c also for those models. This suggests a higher reach of fully-perturbative bias models for the maglim sample, likely related to the fact that its associated linear bias is significantly smaller than that of the red sample.
§.§.§ Assembly bias
Given the current uncertainties on the dependence of galaxy clustering on quantities beyond halo mass, it is important to assess the performance of nonlinear bias models in the presence of these effects. To this end we apply our four fiducial models to the galaxy sample with assembly bias described in Sec. <ref>, jointly fitting the combination of galaxy clustering, galaxy-galaxy lensing and cosmic shear. The results for both HEFT methods as well as the implementation of EPT and LPT are shown in Fig. <ref>[Given the results found for the red sample in Sec. <ref>, we do not consider the implementation of EPT and LPT here.]. As can be seen, all four models yield unbiased constraints on cosmological parameters, thus confirming theoretical expectations that these models offer the flexibility to capture the effects of assembly bias as implemented in our simulated data for LSST Y10-like precision (see Ref. <cit.> for similar results).
§.§.§ Including CMB lensing convergence
The combination of LSS surveys with CMB measurements has been shown to be a powerful way to constrain deviations from ΛCDM (see e.g. Refs. <cit.>), and thus constitutes a key priority for current and future surveys. We therefore additionally test the applicability of current nonlinear bias modeling techniques to the combination of galaxy clustering, galaxy-galaxy lensing, cosmic shear and their cross-correlations with the CMB lensing potential. Specifically, we use the implementation of HEFT to analyze the simulated joint LSST and CMB S4 data vector described in Sec. <ref>. The inclusion of CMB lensing leads to reduced uncertainties on cosmological parameters by roughly 10% to 30% as can be seen from Fig. <ref>. Nevertheless, we find unbiased recovery of σ_8 and Ω_c for all maximal wavenumbers considered even for this extended data vector, which suggests that these methods meet the accuracy-requirements for future joint analyses of LSST with CMB S4 or SO.
§.§.§ No halofit rescaling
In a final test, we investigate the impact of not rescaling the matter power spectrum to match the halofit prediction. While we do not explicitly show the results, in this case we recover significantly biased constraints on cosmological parameters for all bias models considered. These results suggest that at the precision of LSST Y10, we are sensitive to systematic differences between the halofit fitting function and matter power spectra measured from simulations. This is not related to the problem of characterizing galaxy bias, which is the focus of this paper, and instead is limited to the shear-shear component of the data vector. A thorough study of the precision with which the matter power spectrum needs to be modeled for LSST (including the impact of baryons) will be the focus of future work.
§.§ Stochasticity
A number of studies have found evidence for non-Poissonian stochasticity in simulated galaxy samples (see e.g. Refs. <cit.>). It is therefore interesting to compare the levels of stochasticity obtained in this work with their Poissonian expectations. The results presented so far have been derived using spherical harmonic power spectra, which constitute line-of-sight projections of the associated three-dimensional power spectra. It is therefore difficult to relate the redshift-averaged stochasticities obtained in our analysis to their Poisson counterparts, and we thus use measurements of three-dimensional power spectra to investigate stochasticity. As described in detail in Appendix <ref>, we consider a data vector consisting of 𝐝={P_gg(z, k), P_gm(z, k), P_mm(z, k)} at discrete redshifts, and determine the best-fitting bias parameters.
The ratio of the derived values of P_ SN to their Poissonian expectations for k_max=0.4 Mpc^-1 is shown in Fig. <ref> as a function of redshift. As can be seen, for the red sample we detect significantly sub-Poissonian stochasticity, with the largest discrepancies at low redshift. As shown in Ref. <cit.>, non-Poissonian stochasticity is due to two different effects: halo exclusion and HOD stochasticity. Halo exclusion refers to the fact that large halos are not a Poisson realization of the underlying density field, as halos cannot overlap <cit.>. This effectively decreases the volume available to the halos and thus reduces stochasticity. HOD stochasticity on the other hand denotes the additional stochasticity in galaxy samples caused by the variance in the galaxy-halo distribution. This effect is always positive and thus acts to increase P_ SN. The stochasticity of a given galaxy sample is thus determined by the combined effects of halo exclusion and HOD stochasticity. Our results therefore suggest that the host halos of the red galaxy sample exhibit significant halo exclusion, which is qualitatively and quantitatively comparable to the findings presented in Ref. <cit.> for DESI LRGs. However, our results suggest lower values for P_ SN, especially at low and high redshift. We believe these changes to be due to either a difference in the methodology used to constrain stochasticity, or differences in the adopted HODs for the DESI LRG sample, or a combination of both[The differences in HODs might be caused by our assumption of a redshift-evolving HOD model, and comparing secondary halo properties of our sample to those reported in Ref. <cit.>, we find the largest differences at low and high redshift, which is also where we see the largest discrepancies with Ref. <cit.>.].
Performing an analogous analysis for the maglim sample, we find a significantly lower level of sub-Poissonian noise than that detected for the red sample. Specifically, we find at most a 7% reduction compared to the Poissonian expectation. Comparing the mean halo masses of both samples at redshift z=0.65, we find M̅_h = 4.1 × 10^13 M_⊙ for the red sample, while we find 1.3 × 10^13 M_⊙ for the maglim sample. In addition, we find significantly higher satellite fractions for the maglim sample. These higher satellite fractions for the maglim sample can be interpreted using the results of e.g. Ref. <cit.>, which finds the satellite fraction to increase steeply with decreasing luminosity but also with increasing redness of the sample. The higher satellite fraction of the maglim sample thus suggests that its lower luminosity dominates over its reduced fraction of red galaxies. Combined, these two properties of the maglim sample indicate that it exhibits stochasticity closer to its Poissonian expectation because its host halos show weaker halo exclusion and its larger satellite fractions give rise to higher HOD stochasticity.
As shown in Appendix <ref>, the analysis in terms of spherical harmonic power spectra leads to similar conclusions, as we find good agreement between bias parameter constraints derived using two- and three-dimensional power spectra.
§.§ Minimal bias model
The results presented so far have been derived considering galaxy bias terms up to quadratic order, resulting in a model with six bias parameters per redshift bin. As shown in Sections <ref>, <ref> and <ref>, this model allows us to fit the AbacusSummit data reasonably well, but it is interesting to ask if we can reduce the complexity of the model while maintaining its performance. To this end, we first focus on the red sample and repeat our analysis, setting a number of bias parameters to zero. Specifically, we consider the cases b_s^2=0, b_∇^2=0 and b_1, p=0. This choice corresponds to the most drastic approach to model reduction, and we note that one could also explore alternative methods such as placing tight priors on bias parameters. The values recovered for σ_8 and their corresponding minimal χ^2-values when setting b_s^2=0 and b_∇^2=0 respectively are shown in Fig. <ref> for the HEFT methods as well as the implementation of EPT and LPT. As can be seen, we find that the HEFT methods return unbiased constraints on σ_8 even for these reduced models, while the perturbation-theory-based methods show significant biases, most pronounced at high k_max. This is borne out by looking at the χ^2_min-values: for b_s^2=0 both HEFT methods pass our goodness-of-fit test at k_max=0.4 Mpc^-1, while the PT methods do not. For b_∇^2=0 on the other hand, only recovers a χ^2 low enough to pass our test at the largest wavenumber considered. It is interesting to note that while the biases in the PT methods are more pronounced when setting b_s^2=0, the χ^2_min-values are significantly worse for the model with b_∇^2=0. Additionally, while we do not show the results, we find that all models yield biased constraints on cosmological parameters when we set b_1, p=0, highlighting the need to account for the evolution of at least the linear bias across each redshift bin at the level of precision achieved by LSST Y10.
We test the stability of these results by comparing the values of the common bias parameters recovered for the three levels of model complexity considered ({b_1,b_1,p,b_2,b_s^2,b_∇^2}, {b_1,b_1,p,b_2,b_∇^2}, and {b_1,b_1,p,b_2,b_s^2}) using the implementation of HEFT. Without explicitly showing the results, we find all recovered values to be consistent within uncertainties. This suggests that the values recovered for the linear and quadratic bias, and, perhaps more importantly, for the stochasticity P_ SN, are not driven by inaccuracies in the bias model, but rather represent a genuine physical feature of the sample under consideration.
In order to investigate the sample-dependence of these findings, we repeat our analysis for the maglim and assembly bias samples. The results obtained when setting the tidal bias to zero for the magnitude-limited sample, i.e. b_s^2=0, are shown in Fig. <ref> for the HEFT methods as well as the implementation of EPT and LPT. As can be seen, all bias models recover unbiased constraints on cosmological parameters, but the fits lead to χ^2-values significantly higher than our threshold[We note that in contrast to the results presented in Sec. <ref>, the models without tidal bias, b_s^2=0, give biased constraints on cosmological parameters.]. We find similar results when setting the non-local bias to zero, i.e. b_∇^2=0, and thus do not show the results explicitly. The only exception is that in this case all models, including , pass our goodness-of-fit tests. Investigating this further, we repeat our analysis for b_s^2=0 excluding the highest redshift bin for the clustering statistics. In this case, we find unbiased constraints on cosmological parameters as well as minimal χ^2-values passing our test criteria for all bias models. The model residuals for thus appear predominantly driven by the highest redshift bin, and we suspect that the low model performance observed might be caused by inaccuracies in our extension of to high redshift, but we leave a more detailed analysis to future work. For the assembly bias sample, we find results very similar to those obtained for the red sample, i.e. EPT and LPT yield biased constraints in all cases, while both HEFT implementations show good performance.
These results confirm the expectation that the minimal bias model depends on the data set as well as the actual model. For the red and the assembly bias samples, we find that all PT-based methods ( and ) lead to biases when setting b_s^2=0 or b_∇^2=0, while the HEFT methods perform generally well. We attribute this to the fact that a less accurate model will need more parameters to fit a given data set in order to compensate for inaccuracies in the template power spectra. It is interesting that a vanishing tidal bias appears to be more consistent with the data, while we start seeing biases even for the HEFT methods when setting b_∇^2=0. In addition, it is worth noting that the scale-dependent modification induced by b_∇^2≠0 is qualitatively different in real and harmonic space. Thus, these results may be sensitive to our analysis choices, in which scale cuts are imposed in harmonic space. Finally, we find both the HEFT and methods to yield unbiased results for the maglim sample, thus suggesting that the inaccuracies of PT-based models are most pronounced for more highly biased galaxy samples as compared to their less biased counterparts, and thus their modeling requires a smaller number of bias parameters.
It is worth noting that an alternative approach to finding a minimal bias model would be to make use of the rather tight relations between bias parameters found in the literature (see Sec. <ref> or e.g. Ref. <cit.>) and use them to fix the value of certain bias parameters rather than set them to zero. We leave an investigation thereof to future work.
§.§ Error as a function of maximal wavenumber
In order to determine optimal scale cuts for a given analysis involving galaxy clustering data, it is essential to investigate to what extent parameter constraints are tightened by the inclusion of additional small-scale information. In particular it is interesting to investigate if in the setup considered in this analysis, additional small-scale information results in tighter constraints on cosmological parameters, or rather serves to constrain bias parameters more tightly. To this end we compare the uncertainties on cosmological parameters as well as the four bias parameters b_1, b_2, b_∇^2 and P_ SN obtained as a function of k_max[We focus on these, as we find b_1p to be only weakly constrained in our analysis. We further note that we compute uncertainties setting b_s^2=0, as discussed in Sec. <ref>.]. The relative uncertainties for σ_8 and Ω_c are shown in the lower right panel of Fig. <ref> while Fig. <ref> illustrates the corresponding relations for the bias parameters. All uncertainties are normalized with respect to those obtained for at k_max=0.4 Mpc^-1, and as can be seen from Fig. <ref>, the marginalized 1σ errors on cosmological parameters decrease only by roughly 15% as we increase k_max from 0.05 Mpc^-1 to 0.4 Mpc^-1. These gains are small given the significant increase in the amount of small-scale information included in the fits. Looking at Fig. <ref> on the other hand, we see that the bias parameter constraints tighten significantly, by up to 2 orders of magnitude, for the same increase in k_max, in particular for b_∇^2 and P_ SN. These results suggest that in a joint analysis of galaxy clustering, galaxy-galaxy lensing and weak lensing with LSST Y10-like specifications, the information contained in small-scale galaxy clustering mainly serves to constrain galaxy bias parameters as compared to cosmological parameters.
The lack of improvement in cosmological constraining power with increasing k_max appears to be driven by the cosmic shear data, as we find that the errors on σ_8 and Ω_c decrease by roughly a factor of 1.5-2 when we increase k_max from 0.1 Mpc^-1 to 0.4 Mpc^-1 and only consider galaxy-galaxy and galaxy-shear correlations, as can be seen from Fig. <ref>[While the figure only shows , we find similar results for the other bias models.]. To further test this hypothesis, we rerun our fiducial analysis, freeing up the Hubble parameter h and the scalar spectral index n_s in addition to σ_8 and Ω_c. While the improvements in uncertainties for σ_8 and Ω_c are similar, we find larger gains for h and n_s of up to 30%. In particular, the gains do not seem to saturate at high k_max as they do in our fiducial analysis.
While the relatively modest gains in cosmological constraining power seem to be partly driven by our inclusion of weak lensing data, which yields tight constraints on σ_8 and Ω_c, and thus limits the gains from small-scale clustering, we do see a general trend that small-scale clustering improves bias parameter constraints more significantly than constraints on their cosmological counterparts. However, we caution the reader that these conclusions might change when constraining a larger number of cosmological parameters, including systematic uncertainties, or considering data vectors different from the ones we are investigating in this work (e.g. including higher-order correlators, or redshift-space distortions in spectroscopic survey analyses). As an example, galaxy-galaxy lensing at small scales has been shown to help constrain intrinsic galaxy alignments and photometric redshift systematics, which would otherwise significantly degrade the constraining power of cosmic shear (see e.g. Ref. <cit.>).
§ CONCLUSIONS
In this work, we compare the performance of a number of nonlinear galaxy bias models when applied to an LSST Y10-like tomographic joint analysis of galaxy clustering, galaxy-galaxy lensing and cosmic shear (a so-called “3×2pt” analysis). Specifically, we compare two perturbative approaches, Lagrangian perturbation theory (LPT) <cit.> and Eulerian perturbation theory (EPT), to two implementations of Hybrid Effective Field Theory (HEFT), which combines a perturbative bias expansion in Lagrangian space with an exact treatment of the gravitational evolution via cosmological simulations <cit.>. We test all the methods using simulated data vectors computed from the AbacusSummit <cit.> cosmological simulation, considering several different galaxy samples: a DESI-like red sample, a magnitude-limited sample based loosely on HSC DR1, and a galaxy sample with assembly bias. We fit these simulated data using all bias models considered, keeping terms up to second order, and account for nonlocal bias as well as deviations from Poissonian stochasticity. In a final step, we compare their performance based on the accuracy and precision of the constraints obtained for the cosmological parameters σ_8 and Ω_c as well as the goodness-of-fit.
For our fiducial, red galaxy sample we find that the two HEFT implementations allow us to jointly model galaxy clustering, galaxy-galaxy lensing and cosmic shear with LSST Y10-like precision up to at least a maximal wavenumber of k_max=0.4 Mpc^-1. This is also true for LPT and EPT when we combine these methods with non-perturbative predictions for the matter power spectrum entering some of the terms in the expansion. In contrast, when we use the predictions from perturbation theory for these terms, the LPT and EPT implementations lead to biased constraints on cosmological parameters for k≳0.2 Mpc^-1. We find comparable results when analyzing the galaxy sample with assembly bias. For the magnitude-limited sample on the other hand, we find good performance for all bias models, including EPT and LPT with a perturbative prediction for the matter power spectrum. We further consider an extension of our fiducial galaxy sample with CMB lensing cross-correlations loosely matching the specifications for CMB S4, finding unbiased constraints on cosmological parameters as well as good minimal χ^2-values. In all these analyses, we find significant detections of non-Poissonian stochasticity in the galaxy clustering auto-correlations.
We further investigate the effect of reducing bias model complexity by setting the tidal and nonlocal bias to zero respectively, finding sample- and model-dependent results. We find that the HEFT approaches are able to obtain unbiased constraints and provide a good fit to the data with these reduced models in most cases, the only exception being for vanishing nonlocal bias and k_max=0.4 Mpc^-1. In turn, while LPT and EPT perform well on the magnitude-limited sample, they lead to biases on cosmological parameters when applied to the more highly-biased red sample (with or without assembly bias) within these reduced parameterizations. This is the case regardless of the prescription used to model the matter power spectrum.
Investigating the constraints on cosmological and bias parameters obtained as a function of maximal wavenumber k_max, we find the uncertainties on cosmological parameters to decrease only by around 15 % as we increase k_max from 0.05 Mpc^-1 to 0.4 Mpc^-1. The bias parameter uncertainties on the other hand decrease significantly, in some cases by more than an order of magnitude. Removing the weak lensing auto-correlations from our data vector yields larger relative improvements on cosmological parameter uncertainties, of up to a factor of 2. This is a useful case to consider as the weak lensing auto-correlations can have separate systematic uncertainties (e.g. PSF induced additive shear contributions) while not being sensitive to galaxy bias, so separating out the cosmology inferred from those is a powerful consistency check. Nevertheless, the qualitative result is that pushing towards smaller scales seems to lead to significantly larger improvements in bias parameters than in cosmological parameters. These results are subject to a number of caveats: most importantly, we only consider constraints on galaxy bias as well as two cosmological parameters, σ_8 and Ω_c, and we do not account for systematics such as photometric redshift uncertainties or calibration biases in cosmic shear. This might artificially increase the constraining power of weak lensing, thus reducing the gain obtained from small-scale clustering. We have also not considered uncertainties due to magnification, intrinsic alignments and their interplay with photo-z uncertainties – these effects can modify our findings on the improvements from small scale information (likely in the direction of greater gain). We leave an investigation of these effects to future work.
In summary, our results confirm the performance of HEFT approaches found in previous work, while suggesting a potentially higher reach for EPT and LPT than previously found. This is likely in part due to our focus on projected clustering statistics (as opposed to three-dimensional clustering in redshift space), and to the specific metrics used to quantify goodness of fit (based on the expected performance of an LSST-like experiment, rather than on ad-hoc precision requirements).
With regards to LSST, the results of our analysis suggest that current nonlinear bias models appear promising for the analysis of tomographic galaxy clustering, galaxy-galaxy lensing and weak lensing from LSST Y10 data. Amongst the methods investigated, we find the recently developed HEFT methods to show particular promise. This bodes well for future tomographic galaxy clustering and analyses using small-scale information for both current and future photometric surveys such as LSST and Euclid. Nevertheless, more work is necessary in order to be able to fully exploit these methods in a robust and reliable manner. First, application of these methods to future data will require investigating the impact of observational effects such as photometric redshift uncertainties or large-scale galaxy clustering systematics and associated scale cuts. In addition, as we have shown, a good characterization of the theoretical uncertainties associated with the non-linear scheme used is vital to obtain unbiased constraints with adequate errors. A more precise treatment and modeling of these uncertainties than that used in our analysis will therefore be needed before the bias models studied here can be applied to future data sets. Furthermore our results have highlighted the susceptibility of perturbative bias prescriptions to modeling of the matter power spectrum, thus requiring higher-accuracy models. Although our analysis has covered the most likely target samples for galaxy clustering (LRG-like and magnitude-limited samples), including some amount of assembly bias, the results found here should be validated against a wider variety of galaxy samples, incorporating different physical effects such as satellite segregation and baryonic effects in the dark matter distribution <cit.>. Furthermore, as the internal consistency relations between different bias coefficients provide an avenue to reduce the freedom allowed to the bias model, studying the applicability of these relations to these samples in the context of LSST would be a useful exercise. Finally, higher-order statistics have the potential to unlock a significant amount of untapped non-Gaussian information in the galaxy distribution, and thus studying the ability of the bias models explored here to describe these observables is of high priority.
This paper has undergone internal review by the LSST Dark Energy Science Collaboration. We kindly thank the internal reviewers Simone Ferraro, Andrew Hearin and Shivam Pandey for providing helpful comments, which helped us improve the quality and clarity of the paper.
We are very happy to thank Toshiya Namikawa and Colin Hill for sharing the CMB S4 lensing noise curves and for help with their usage. With pleasure we would also like to thank Martin White for many very helpful comments and suggestions.
The contributions from the primary authors are as follows: DA: Co-designed the project, constructed simulated data vectors, contributed to likelihood software. NF: Performed initial fits with LPT and EPT models to determine the minimum scales to which unbiased results could be recovered. CGG: Implemented CMB lensing in likelihood, data and covariances. ZG: Implemented and tested Fisher matrix for uncertainty estimation. BH: Generated mocks from AbacusSummit and contributed to some iteration of the likelihood. BJ: Helped compare the results in this study with prior work on galaxy bias models and discussed/edited Sections 1, 2.4, 2.5, 4, 5.2, 6.7 and 7. NK: Implemented the hybrid EFT code anzu into the CCL-based likelihood used to compare models and data in the challenge. AN: Co-designed the project, led the analysis and writing of the paper. AS: Contributed to the design of the experiment, helped with interpretation of results, contributed to the paper text.
CGG acknowledges support from the European Research Council Grant No: 693024 and the Beecroft Trust. DA acknowledges support from the Beecroft Trust, and from the John O'Connor Research Fund, at St. Peter's College, Oxford. ZG and CW were supported by the Office of Science of the U.S. Department of Energy, grant DE-SC0010007.
The DESC acknowledges ongoing support from the Institut National de Physique Nucléaire et de Physique des Particules in France; the Science & Technology Facilities Council in the United Kingdom; and the Department of Energy, the National Science Foundation, and the LSST Corporation in the United States. DESC uses resources of the IN2P3 Computing Center (CC-IN2P3–Lyon/Villeurbanne - France) funded by the Centre National de la Recherche Scientifique; the National Energy Research Scientific Computing Center, a DOE Office of Science User Facility supported by the Office of Science of the U.S. Department of Energy under Contract No. DE-AC02-05CH11231; STFC DiRAC HPC Facilities, funded by UK BEIS National E-infrastructure capital grants; and the UK particle physics grid, supported by the GridPP Collaboration. This work was performed in part under DOE Contract DE-AC02-76SF00515.
The author(s) are pleased to acknowledge that the work reported on in this paper was substantially performed using the Princeton Research Computing resources at Princeton University, which is a consortium of groups led by the Princeton Institute for Computational Science and Engineering (PICSciE) and the Office of Information Technology's Research Computing.
This work made use of the following software packages: astropy[<https://www.astropy.org/>.], matplotlib[<https://matplotlib.org/>.], numpy[<https://numpy.org/>.] and scipy[<https://scipy.org/>.].
§ DESCRIPTION OF METHODOLOGY TO CREATE SMOOTHED POWER SPECTRA
To eliminate the noise in the measured power spectra due to the finite size of the AbacusSummit simulation boxes, we start by modeling the overdensity of a given tracer x as
δ_x( k)=b_x(k) δ_ IC( k) + n_x( k),
where δ_ IC denotes the linear matter overdensity in the initial conditions, b_x(k) is a deterministic function of k and, by definition, n_x( k) is the small-scale component of the overdensity field, δ_x, that does not correlate with δ_ IC.
Then at each snapshot, we compute the following power spectra from the simulation:
P_gg(k), P_gm(k), P_mm(k), P_g, IC(k), P_m, IC(k), and P_ IC, IC(k), where m and g represent the overdensities of matter and of the target sample of galaxies. We also have a theoretical prediction for the linear power spectrum of the initial conditions, P̅_ IC, IC(k). From the model in Eq. <ref>, we compute a first estimate of the bias functions b_g(k) and b_m(k) as
b̂_x(k)≡P_x, IC(k)/P_ IC, IC(k).
Since both P_x, IC and P_ IC, IC come from the same realization, the resulting bias function is reasonably smooth on large scales. To obtain a fully smooth function that is defined on the full continuum of k, we fit the resulting measured b̂_x to a smooth function of the form:
b_x(k)=b_0 e^{-(k/k_0)^α} [1+c e^{-((k-k_1)/0.1)^2}],
with b_0, k_0, α, c and k_1 as free parameters. We found this functional form to provide a good fit in all cases explored (see top left panel of Fig. <ref>).
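For concreteness, a minimal Python sketch (with hypothetical array names and toy values; not the analysis code itself) of fitting this functional form to a measured b̂_x(k) could look as follows:

import numpy as np
from scipy.optimize import curve_fit

def bias_template(k, b0, k0, alpha, c, k1):
    # b_x(k) = b0 exp[-(k/k0)^alpha] * (1 + c exp[-((k - k1)/0.1)^2])
    return b0 * np.exp(-(k / k0) ** alpha) * (1.0 + c * np.exp(-((k - k1) / 0.1) ** 2))

# hypothetical measured bias function b_hat(k) = P_{x,IC}(k) / P_{IC,IC}(k)
k = np.linspace(0.01, 1.0, 200)
b_hat = bias_template(k, 2.0, 5.0, 1.5, 0.1, 0.3) + 0.01 * np.random.default_rng(0).normal(size=k.size)

p0 = [b_hat[0], 5.0, 1.0, 0.0, 0.3]            # rough starting guess
popt, _ = curve_fit(bias_template, k, b_hat, p0=p0, maxfev=20000)
b_smooth = bias_template(k, *popt)              # smooth b_x(k), defined at any k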
After determining the b_x, we estimate the power spectrum between the small-scale components n_x and n_y as
P^n_xy(k)=P_xy(k)-P_x, IC(k)P_y, IC(k)/P_ IC, IC(k).
The resulting curve is fairly smooth on large k, but exhibits residual noise for small and intermediate wavenumbers (k≲0.2 h Mpc^-1), which we correct for as follows (for illustration, see the top right panel of Fig. <ref>):
* P^n_xy(k) reaches a maximum at a transition scale k_ trans around ∼ 0.1 h Mpc^-1, which provides a subdivision between the large-scale and small-scale components of P^n_xy(k). In a first step, we determine this transition scale as the value at which the P^n_xy(k) estimated from the simulation reaches its maximum value.
* We fit the data on scales below k_ trans using a 4th-order polynomial. Since this polynomial can take negative values, which are purely driven by large-scale noise, we apply a positivity prior on the resulting function. We note that on the largest scales, where this occurs, the final power spectrum is dominated by the large-scale correlated part from before, so these choices have a negligible impact on the final result.
* On scales above k_ trans, we find the measured spectra to exhibit small noise-like oscillations, which we reduce by smoothing the spectra with a Savitzky-Golay (SG) filter of order 1 and window size 25.
* Finally, we combine the large-scale polynomial P^n, L_xy(k) and the SG-smoothed small-scale component P^n, S_xy(k) by smoothing the transition between the two regimes such as:
P^n_xy(k)=e^{-(k/k_ trans)^5} P^{n, L}_xy(k)+[1-e^{-(k/k_ trans)^5}] P^{n, S}_xy(k).
This procedure thus provides us with a smooth set of tabulated measurements of P^n_xy, which we then interpolate linearly in log(k). In a last step, we compute the final power spectrum from the theoretical linear power spectrum of the initial conditions combined with the models for b_x(k) and P^n_xy(k) as:
P_xy(k)=b_x(k)b_y(k)P̅_ IC, IC(k)+P^n_xy(k).
To go beyond the smallest scale measured by AbacusSummit (k_ max≃ 2.7 Mpc^-1) we use a power law extrapolation with a logarithmic slope calculated from the last 50 points in the measurement. As our results will be mostly based on scales k≲0.4 Mpc^-1, the effects of this extrapolation are largely irrelevant.
Note that other approaches have been recently proposed in the literature <cit.> to reduce the impact of noise on simulation-based measurements of summary statistics. We verified that the scheme described above is able to provide a good description of the measured power spectra, well below the statistical uncertainties of the AbacusSummit simulation box (see bottom panel of Fig. <ref>), and leave a comparison to other approaches to future work.
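For illustration, a minimal Python sketch of the noise-reduction steps described above (assuming tabulated arrays k and pn holding the measured P^n_xy(k); array names and fit choices are placeholders) is:

import numpy as np
from scipy.signal import savgol_filter

def smooth_pn(k, pn):
    # 1) transition scale: maximum of the measured spectrum
    k_trans = k[np.argmax(pn)]
    # 2) large scales: 4th-order polynomial fit with a positivity prior
    low = k <= k_trans
    pn_large = np.clip(np.polyval(np.polyfit(k[low], pn[low], 4), k), 0.0, None)
    # 3) small scales: Savitzky-Golay filter of order 1, window size 25
    pn_small = savgol_filter(pn, window_length=25, polyorder=1)
    # 4) blend the two regimes with the exp[-(k/k_trans)^5] weight
    w = np.exp(-(k / k_trans) ** 5)
    return w * pn_large + (1.0 - w) * pn_small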
§ DETAILED DESCRIPTION OF GOODNESS-OF-FIT TESTS
§.§ HEFT
In the following, we describe the goodness-of-fit tests employed to test the performance of the two HEFT implementations considered.
The number of degrees-of-freedom for our fiducial red sample is given by χ^2_noise=478, where we have used n_C_ℓ=786, n_C_ℓ^γγ=270, and n_p=38. Therefore, the maximal χ^2_theory allowed by our p-value criterion is given by χ^2_theory, max=51. The recovered χ^2-values for both HEFT models pass this test, with χ^2_theory≃ 11 for one implementation and χ^2_theory≃ 17 for the other. The associated p-values are p=0.36 and p=0.28, respectively. In Fig. <ref>, we also show the normalized fit residuals for the auto-correlation of the highest clustering redshift bin as well as its cross-correlation with the highest weak lensing bin[We choose these particular two combinations as they display some of the largest differences between the simulated data and the models, and thus serve as illustration of the worst model performance.]. As can be seen, we find most of the residuals to be within 1σ, and the relative differences generally lie within 1%.
Additionally, we consider two noisy realizations of the synthetic data, finding χ^2 p-values of p=0.096 (p=0.63) and p=0.012 (p=0.11) for the two HEFT implementations, respectively. The distributions of galaxy clustering and galaxy-galaxy lensing residuals are consistent with Gaussians, as can be seen in Fig. <ref> for one of the realizations, and a Kolmogorov-Smirnov (KS) test yields p-values of p=0.38 (p=0.93) and p=0.21 (p=0.89) for the two implementations. Given these results, we conclude that the two HEFT implementations we consider in this work are both suited to the analysis of high-precision data as expected from LSST, and we regard the failed p-value test for one of the noisy realizations as a statistical fluctuation.
§.§ LPT/EPT
For our fiducial implementations of LPT and EPT, we obtain minimal χ^2-values of χ^2_theory=8.6 for LPT, and χ^2_theory=13.8 for EPT when using the noiseless data vector. Furthermore, as can be seen from Fig. <ref>, we find the residuals between model and data to mostly lie within their 1σ uncertainties, with relative differences largely smaller than 1%, as we found for the HEFT models. For the alternative implementations, on the other hand, we find χ^2-values of χ^2_theory=28.7 for LPT, and χ^2_theory=59.0 for EPT at our minimal cutoff scale (k_ max=0.4 Mpc^-1).
Finally, we also analyze the same two noisy realizations of the data vector as used in the HEFT case with our fiducial implementation of LPT and EPT, finding χ^2 p-values of p=0.16 (p=0.77) and p=0.064 (p=0.83) for LPT and EPT respectively. As shown in Fig. <ref> for one of the realizations, we also find the distribution of fit residuals to be consistent with a Gaussian (with corresponding p-values of p=0.99 for LPT and p=0.98 for EPT).
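The goodness-of-fit statistics quoted above can be reproduced with standard tools; a minimal sketch using hypothetical whitened residuals (not the actual likelihood pipeline) is:

import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
n_data, n_params = 1056, 38                # placeholder numbers of data points and free parameters
residual = rng.normal(size=n_data)         # (data - model) in "whitened" (unit-variance) units

chi2 = float(residual @ residual)          # with a full covariance C: r @ inv(C) @ r
ndof = n_data - n_params
p_chi2 = stats.chi2.sf(chi2, df=ndof)      # probability to exceed

# KS test of the normalized residuals against a unit Gaussian
p_ks = stats.kstest(residual, "norm").pvalue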
§ CONSISTENCY OF BIAS PARAMETER VALUES FROM DIFFERENT MODELS
In order to test for possible systematic modeling uncertainties, we compare the consistency of the bias parameter values obtained for the two HEFT implementations. As discussed in more detail in Appendix <ref> below, we find the Fisher matrix uncertainties on the bias parameters to be rather unstable when varying both b_s^2 and b_∇^2. The constraints obtained for the cosmological parameters, on the other hand, are stable in all cases considered. Fixing one of the two parameters, b_s^2 or b_∇^2, yields stable error bars in all cases, which are additionally consistent with their MCMC analogs (see Appendix <ref>). We therefore compare the recovered bias parameters for the two HEFT implementations, fixing b_s^2 to its recovered value, and the results are shown in Fig. <ref> for all parameters except b_s^2[We find the recovered values for b_s^2 to be generally consistent between all four models considered here, and thus do not show the plots as we are unable to provide error bars for the measurements.]. As can be seen, these are generally consistent, the only exception being b_∇^2, for which we find consistently higher values for one implementation than for the other. These differences might be due to the fact that the nonlocal bias parameter b_∇^2 depends on small-scale properties of the fields considered and is thus affected by implementation details such as different smoothing scales used when deriving template spectra (see e.g. Ref. <cit.>). In addition, the two implementations employ different methods to model power spectra involving higher derivative terms: while one determines these terms from the simulations, the other uses the approximation ⟨ X, ∇^2δ_L⟩ = -k^2⟨ X, 1 ⟩, where X denotes one of the fields described in Sec. <ref>.
We find similar results when comparing the bias parameter values obtained using our fiducial implementation of LPT and EPT. As can be seen from Fig. <ref>, we generally find good agreement between both methods, although the EPT approach seems to prefer significantly larger values for b_∇^2 and P_ SN compared to LPT. It is also interesting to note that the recovered bias parameters for the two PT-based models are generally consistent with those obtained from the HEFT methods. As above, the notable exception is b_∇^2, for which we obtain significantly lower values for the HEFT implementations than we do for the perturbation-theory-based models, particularly at high redshift. We see two possible explanations for these findings: (i) As discussed above, our results could be another consequence of different bias parameter normalizations due to implementation details such as smoothing. (ii) Nonlocal bias terms parameterized by b_∇^2 have the same functional form as EFT counter-terms, which absorb corrections to the bias model due to small-scale physics. These corrections are partly accounted for in the HEFT approach, while they are not present in PT-based models. Without providing rigorous confirmation, these results suggest that the larger nonlocal bias values obtained for LPT and EPT are due to larger contributions from EFT counterterms for these models (see Ref. <cit.> for similar results).
§ COMPARISON TO MCMC RESULTS
Fisher matrix analyses are prone to numerical instabilities (see e.g. Refs. <cit.>), and it is therefore essential to validate our results by comparing our FM constraints to those derived using a Monte Carlo Markov Chain (MCMC). Here we focus on our fiducial case using with k_max=0.4 Mpc^-1, and perform two separate MCMC analyses using the same specifications as used for the Fisher matrix computation: in our first analysis we allow for variations in all cosmological and bias parameters, while in the second case we fix b_s^2=0. The comparison of the MCMC constraints on cosmological parameters obtained in the first case and their FM counterparts are shown in the left hand panel of Fig. <ref>. As can be seen, we find the two approaches to yield consistent constraints on the Ω_c-σ_8 plane. This is not the case for the constraints on bias parameters as discussed above and we thus repeat this analysis setting b_s^2=0 in both cases. The ratio of the 1-σ uncertainties obtained from the FM and MCMC analyses respectively are shown in the right hand panel of Fig. <ref> for all bias parameters considered. Similarly to before, we find those to be consistent within roughly 30%. This is a reasonable level of agreement, given the approximations and numerical instabilities involved in Fisher matrix analyses.
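As a reference for the comparison above, a minimal numerical-derivative Fisher-matrix sketch for a Gaussian likelihood with parameter-independent covariance (toy model and step sizes; not the forecasting code used here) is:

import numpy as np

def fisher_matrix(model, theta0, cov, steps):
    """F_ab = (dm/dtheta_a)^T C^-1 (dm/dtheta_b), via central finite differences."""
    cinv = np.linalg.inv(cov)
    derivs = []
    for i, h in enumerate(steps):
        tp, tm = theta0.copy(), theta0.copy()
        tp[i] += h
        tm[i] -= h
        derivs.append((model(tp) - model(tm)) / (2.0 * h))
    d = np.array(derivs)                     # shape (n_params, n_data)
    return d @ cinv @ d.T

# toy usage: linear model m(theta) = A theta, identity covariance
A = np.array([[1.0, 0.5], [0.2, 1.0], [0.3, 0.1]])
F = fisher_matrix(lambda t: A @ t, np.array([1.0, 2.0]), np.eye(3), np.array([1e-3, 1e-3]))
sigma = np.sqrt(np.diag(np.linalg.inv(F)))   # 1-sigma marginalized uncertainties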
§ FITTING DATA FROM
As an additional consistency test of the results presented in Sections <ref> and <ref>, we repeat our analysis using the sample from the simulations <cit.> employed in Ref. <cit.>. Specifically, we work with the three-dimensional power spectrum and fit the combination of 𝐝={P_gg(z, k), P_gm(z, k)} using the covariance matrix derived in Ref. <cit.>. The results for all bias models considered in this work are shown in Fig. <ref>. As can be seen, we find results very similar to those obtained for AbacusSummit data, i.e. both the HEFT models and the fiducial implementations of EPT and LPT yield unbiased constraints on σ_8 and Ω_c, while the alternative methods give rise to biases in the recovered cosmological parameters for large maximal wavenumbers k_max. Compared to the results shown in Fig. <ref>, we find these models to break down at smaller wavenumbers. This is particularly true for EPT, which yields significantly biased constraints on cosmological parameters for k_max≳ 0.2 Mpc^-1. These results suggest that spherical harmonic power spectra might indeed be less susceptible to nonlinear bias modeling systematics than their three-dimensional counterparts, and provide confirmation that the findings reported in Sections <ref> and <ref> are not driven by our usage of data from the AbacusSummit simulations.
§ THREE-DIMENSIONAL POWER SPECTRUM ANALYSIS
In order to investigate stochasticity in the red and maglim galaxy samples, we analyze three-dimensional power spectrum data. Specifically, we use power spectra at six discrete redshifts covered by the AbacusSummit simulations, i.e. z={0.1, 0.3, 0.5, 0.8, 1.1, 1.4}. We consider a data vector 𝐝={P_gg(z, k), P_gm(z, k), P_mm(z, k)} and assume a Gaussian likelihood with Gaussian covariance matrix given by (see e.g. <cit.>)
Cov(P_ij(k), P_ln(k')) = κ [2π^2 δ_kk' / (k^2 Δ k V)] ×
2 P^2_ii(k), if i=j=l=n,
2 P_gg(k) P_gm(k), if i,j=g and l=g, n=m,
P_gg(k) P_mm(k) + P^2_gm(k), if i,l=g and j,n=m,
where V denotes the volume of the survey and Δ k is the width of each k-bin. For consistency with our previous analysis, we aim to ensure a similar signal-to-noise ratio for the P(k) measurements. We therefore follow a rather crude approach and introduce a scaling factor κ that scales the relative P(k) errors to be equal to those for C_ℓ data obtained for bins with matching mean redshift[We note that we repeat this analysis without the scaling factor and recover very similar values for the stochasticity.]. We then use this likelihood and fit the data with the implementation of HEFT, keeping the cosmological parameters fixed at their fiducial AbacusSummit values[We find this to be necessary, as the single bin fits exhibit a strong degeneracy between cosmological and bias parameters, and thus the minimization and error bars become unstable when fitting all parameters jointly.].
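A minimal sketch of assembling this single-bin covariance for the ordered data vector (P_gg, P_gm, P_mm), filling in the remaining entries with the standard Gaussian formula Cov(P_ab, P_cd) ∝ P_ac P_bd + P_ad P_bc, could look as follows (all inputs are illustrative):

import numpy as np

def pk_cov_block(pgg, pgm, pmm, k, dk, volume, kappa=1.0):
    pref = kappa * 2.0 * np.pi ** 2 / (k ** 2 * dk * volume)
    c = np.empty((3, 3))                      # ordering: (gg, gm, mm)
    c[0, 0] = 2.0 * pgg ** 2
    c[1, 1] = pgg * pmm + pgm ** 2
    c[2, 2] = 2.0 * pmm ** 2
    c[0, 1] = c[1, 0] = 2.0 * pgg * pgm
    c[1, 2] = c[2, 1] = 2.0 * pmm * pgm
    c[0, 2] = c[2, 0] = 2.0 * pgm ** 2
    return pref * c

# e.g. one k-bin: pk_cov_block(pgg=4e4, pgm=2.5e4, pmm=2e4, k=0.1, dk=0.01, volume=8e9)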
§ CONSISTENCY OF RESULTS FOR DIFFERENT DATA SETS
Extending the analysis described in Sec. <ref> and as a further consistency test of our results, we compare the bias values obtained using different combinations of our simulated data. Specifically, we compare our fiducial constraints obtained using spherical harmonics and fitting all redshift bins simultaneously to those obtained by fitting each bin separately both using spherical harmonics and three-dimensional power spectra. For the P(k) data we proceed as described in Sec. <ref>, while for the single-bin spherical harmonic fits, we choose to only fit the galaxy auto-correlation and the cross-correlation between the galaxies and DM while keeping the cosmological parameters fixed at their fiducial values as above. As discussed in Sec. <ref>, we find that while our fiducial analysis fits for all bias parameters simultaneously, the Fisher matrix constraints obtained in this setup are numerically unstable. In order to compare the constraints obtained from different data sets, we therefore follow the approach described in Sec. <ref>, and consider the FM errors obtained at the best-fit values when setting b_s^2=0. In addition, given the ad-hoc procedure to rescale the covariance matrix used for analyzing P(k) data, we do not consider any error bars for these measurements. The results obtained for are shown in Fig. <ref>, and as can be seen we find a very good agreement between the bias values derived in all cases. This suggests that the fits to the spherical harmonic power spectra show internal consistency, and that the redshift-averaged bias parameters obtained from these are consistent with the single-redshift fits obtained from the P(k) analysis.
|
http://arxiv.org/abs/2307.02274v1
|
20230705131752
|
RBDCore: Robot Rigid Body Dynamics Accelerator with Multifunctional Pipelines
|
[
"Yuxin Yang",
"Xiaoming Chen",
"Yinhe Han"
] |
cs.RO
|
[
"cs.RO",
"cs.AR"
] |
RBDCore: Robot Rigid Body Dynamics Accelerator with Multifunctional Pipelines
Yuxin Yang, Xiaoming Chen, Yinhe Han
Institute of Computing Technology, Chinese Academy of Sciences
Email: [email protected], [email protected], [email protected]
August 1, 2023
Rigid body dynamics is a key technology in the robotics field.
In trajectory optimization and model predictive control algorithms, there are usually a large number of rigid body dynamics computing tasks.
Using CPUs to process these tasks consumes a lot of time, which will affect the real-time performance of robots.
To this end, we propose a multifunctional robot rigid body dynamics accelerator, named RBDCore, to address the performance bottleneck.
By analyzing different functions commonly used in robot dynamics calculations, we summarize their reuse relationship and optimize them according to the hardware.
Based on this, RBDCore can fully reuse common hardware modules when processing different computing tasks.
By dynamically switching the dataflow path, RBDCore can accelerate various dynamics functions without reconfiguring the hardware.
We design Structure-Adaptive Pipelines for RBDCore, which can greatly improve the throughput of the accelerator.
The accelerator can be specialized for robots with different structures and parameters.
Compared with state-of-the-art CPU and GPU dynamics libraries and a prior FPGA accelerator, RBDCore significantly improves performance.
§ INTRODUCTION
Recently, large multimodal models, represented by GPT-4 <cit.>, have been developing at an astonishing speed and have sparked endless imagination.
This has also spawned many works combining large models and robots, such as Google's PaLM-E <cit.> and Microsoft's ChatGPT for Robotics <cit.>.
These studies make it possible for robots to become artificial general intelligent agents.
However, the large model can only be used as a user-friendly human-computer interaction tool at present,
and still needs to call the low-level interfaces of the robot to obtain specific information or perform specific actions.
In order to sort out the capabilities needed for artificial general intelligent robots, let us look back at the origin of intelligence, the human brain.
As shown in Fig. <ref>, the human brain can be divided into several main regions, which correspond to different capabilities.
The frontal lobe carries out the highest level of abstraction.
It collects information from other parts and makes decisions after integration.
The motor cortex in it then generates movement signals and cooperates with the cerebellum to control specific muscle movements <cit.>.
Inspired by the human brain, robots could also have similar software structures.
And with the help of multimodal model, this structure can be greatly simplified.
GPT-4 can cover most of the capabilities in Fig. <ref> (colored in green).
But as mentioned earlier, it can only generate higher-level instructions, and still needs to call the low-level planning and control interfaces.
End-to-end control with large models is difficult, because the control-frequency requirements of robots (>100 Hz is recommended) are much higher than the output frequency of the large models.
So we need separate processes for planning and control algorithms, just like the human brain.
After the above analysis, we believe that a reasonable robot computing architecture in the future is shown in Fig. <ref>.
A major feature of this architecture is that the CPU only needs to run the top-level system and user-defined software, just like the current computer operating system.
Large computing loads are assigned to additional accelerators.
For example, large model can be deployed on GPUs/TPUs or on the cloud, SLAM can be deployed on GPUs (or be integrated into the large model in the future), and planning/control computation can be deployed on dedicated accelerators.
The problem to be discussed in this paper is how to accelerate the computation related to the robot planning and control.
Trajectory optimization (TO) and model predictive control (MPC) <cit.> are two important techniques in the field of robot planning and control.
These algorithms contain a large number of robot dynamics computing tasks, which will seriously affect the real-time performance of robots <cit.>.
Depending on the complexity of the robot's structure, rigid body dynamics computations can account for 30%-90% of the total algorithm running time <cit.>.
In fact, most robot dynamics algorithms have forward and backward passes, similar to the training algorithms of neural networks.
These are cache-unfriendly calculations that require many memory accesses <cit.>.
Therefore, memory bottlenecks are encountered when using CPU multithreading for acceleration.
GPUs suffer from the same problem, and their single-task latency is relatively large, making them a poor choice for some tasks.
In addition, high-performance CPUs and GPUs generally have high power consumption, so they are not well suited to battery-dependent mobile robots.
To verify the above feature, we build a robot application example (Fig. <ref>a) and test it in the robot simulator Webots <cit.> with the control framework OCS2 <cit.>.
As shown in Fig. <ref>b, when the number of threads increases to a certain level, the performance is no longer significantly improved.
On the other hand, from Fig. <ref>c, we can observe that the proportion of parallelizable parts (LQ Approximation, dark blue) is large and contains various types of tasks.
This implies that the entire task has the potential to be accelerated in parallel.
For energy-efficient real-time computing, many end-to-end domain-specific hardware accelerators designed for the MPC algorithms are proposed, such as <cit.>.
But they can only deal with linear models or simple robot structures, and cannot handle dynamics models for robots with high degrees-of-freedom (DOF).
Robomorphic <cit.> tries to use FPGA/ASIC to accelerate rigid body dynamics derivatives.
But it only supports this single function (forward dynamics derivatives) in the dynamics calculations, and the throughput is not high.
In addition, it requires the cooperation of the CPU,
which brings substantial communication overhead and extra computational load to the CPU.
In response to the above problems, we design a multifunctional robot dynamics accelerator, RBDCore.
It is also a general rigid body dynamics accelerator design framework that can be applied to a wide variety of robots.
RBDCore needs to solve two interrelated design challenges.
The first is how to achieve high performance with resource constraints.
The second is how to optimize performance and resource usage when implementing a general dynamics accelerator.
To deal with the first challenge, our design uses medium-grained pipelines at the submodule level, with deep pipeline stages that support massive parallelism.
Previous work <cit.> lacked this pipeline depth.
They spent a lot of resources on the fine-grained pipeline inside the submodule.
Instead we use as few resources as possible inside submodules.
This strategy can greatly improve throughput while maintaining low resource consumption, at the cost of a small increase in computational latency.
We use a series of innovative optimizations to address the second challenge.
We summarize commonly used dynamics algorithms and propose hardware-specific optimizations to support the design of the entire RBDCore architecture.
By utilizing the software and hardware co-optimization, many hardware resources can be reused, and a considerable degree of parallelism can be guaranteed.
Different functions can be implemented by dynamically switching the dataflow without reconfiguring the hardware.
We design Structure-Adaptive Pipelines (SAPs) for RBDCore.
It uses the medium-grained pipeline strategy mentioned above, and can be further optimized according to the structure of the robot itself.
In addition, RBDCore can independently and completely calculate different functions without the assistance of the CPU.
This can greatly reduce the communication overhead and further improve the performance of the entire system.
We evaluate our architecture using the same FPGA chip as that used in Robomorphic <cit.>.
Compared with the existing state-of-the-art CPU<cit.>, GPU<cit.> and FPGA accelerators<cit.>, RBDCore can achieve 10.3×, 3.4× and 6.3× higher throughput, respectively, in the derivatives of dynamics calculations.
Compared with the CPU dynamics library Pinocchio <cit.> and GPU dynamics library GRiD <cit.>, the performance of RBDCore has also been significantly improved.
The main contributions of this paper are summarized as follows:
* To our knowledge, RBDCore is the first multifunctional domain-specific hardware accelerator for robot rigid body dynamics.
* We summarize the characteristics and relationships of commonly used dynamics algorithms, and propose a new mass matrix generation/inversion algorithm.
On this basis, we implemented all necessary basic dynamics functions using hardware, and carried out corresponding software-hardware co-optimization.
* SAPs are proposed to improve the throughput. Independent pipeline arrays can further perform asynchronous calculations.
* The RBDCore architecture can dynamically adjust the dataflow to support different functions, so that hardware resources can be fully reused.
§ BACKGROUND
In order to design a general-purpose robot dynamics accelerator, we need to model the robot in a general format.
As discussed in <cit.>, we can use a topological tree to describe an open-chains robot.
As the example shown in Fig. <ref>, the robot has 5 limbs connected to the body.
Four of them are legs composed of three links, and the remaining one is a robotic arm composed of six links.
We can assume that the body of the robot is a floating base that connects to a fixed world coordinate system through a virtual 6-DOF joint.
In this way, each link of the robot can correspond to a joint.
We can describe each link and joint in the form of matrices.
We assume that the robot has N_B (number of bodies/links) joints and links, with a total of N DOF.
Each link has its own mass and rotational inertia, which can be represented by the symmetric inertia matrix I_i∈ℝ^6×6.
Each joint has a specific type.
Types of joints include revolute, prismatic, helical, cylindrical, planar, spherical, 3-DOF translation, 6-DOF joint, etc.
Different types of joints have different motion subspaces S_i∈ℝ^6× N_i, where N_i is the DOF of ith joint.
For the most common joints in robots (revolute and prismatic), N_i = 1 and S_i are one-hot vectors.
The definitions of S_i for other type of joints can be found in <cit.>.
When the joint state q_i is given, we can calculate the pose relationship between the two links connected by the joint, which can be represented by the transformation matrix ^iX_λ_i∈ℝ^6×6, where λ_i is the parent link's id.
The transformation matrix ^iX_λ_i has a unique sparsity. Its top right 3×3 elements are always 0.
The inertia matrix I_i also has sparsity according to the joint and link.
All the above parameters are defined in their respective joint coordinate systems.
For the robots with the same model, N_B, S_i and the sparsity of I_i are all the same.
But we may need to calibrate the parameters in I_i and ^iX_λ_i for different robots with the same model.
For a specific robot, I_i, S_i and parameters in ^iX_λ_i can be seen as constant.
Most of calculations in robot dynamics are essentially related to the equation of motion:
M(q)q̈ + C(q, q̇, f^ext) = τ,
where q, q̇, q̈, τ are vectors of position, velocity, acceleration and force/torque variables, respectively,
M(q) is the symmetric positive-definite mass matrix of the robot,
C(q, q̇, f^ext) is the generalized bias forces accounts for the Coriolis and centrifugal forces, gravity, and any other forces (f^ext) acting on the system other than those in τ.
Different applications require the calculation of the equation of motion (Eq. (<ref>)) from different perspectives.
For example:
* Find τ in control algorithms and TO <cit.>;
* Find q̈ in simulation and MPC <cit.>;
* Find M in optimal control and TO <cit.>;
* Find M^-1 in kinematics and MPC <cit.>;
* Find ∂_u τ in optimal control and TO <cit.>;
* Find ∂_u q̈ in MPC<cit.> (u=[q;q̇]);
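To make the roles of these quantities concrete, a toy numerical sketch (made-up 2-DOF values, not taken from any specific robot) of the relations behind Eq. (<ref>) is:

import numpy as np

M = np.array([[2.5, 0.3],
              [0.3, 1.2]])                 # symmetric positive-definite mass matrix M(q)
C = np.array([0.4, -0.1])                  # bias forces C(q, qd, f_ext)

qdd = np.array([1.0, -2.0])                # desired joint accelerations
tau = M @ qdd + C                          # inverse dynamics: required joint torques

tau_cmd = np.array([3.0, 0.5])             # applied joint torques
qdd_fd = np.linalg.solve(M, tau_cmd - C)   # forward dynamics: resulting accelerations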
§ SOFTWARE-HARDWARE CO-DESIGN
In this section, we summarize the characteristics and relationships of commonly used dynamics functions and their various algorithm implementations.
For all necessary basic dynamics functions, we select, optimize, and redesign the algorithms from a hardware-design perspective.
All these basic functions are related to Eq. (<ref>), as listed in Table <ref>.
Here the variable u represents both q and q̇, and ∂_u τ means (∂τ/∂ q, ∂τ/∂q̇).
The definitions of these functions are consistent with those in <cit.>.
All implementations in this section are specifically designed for a serial arm.
They are used to explain various optimization details in the design.
In the next section, complete implementations for more complex robots will be presented.
§.§ Inverse Dynamics
Inverse dynamics can be calculated using the Recursive Newton-Euler Algorithm (RNEA) <cit.>, which is the most basic dynamics algorithm.
This algorithm consists of a forward pass calculation and a backward pass calculation.
In order to adapt to our accelerator, the RNEA with appropriate modifications is shown in Algorithm <ref>.
The expression of the inverse dynamics function is exactly the same as the equation of motion Eq. (<ref>).
With the help of modified RNEA Algorithm, the inverse dynamics function can be written as
τ = ID(q, q̇, q̈, f^ext)
= M(q)q̈ + C(q, q̇, f^ext)
= RNEA(q, sinq, cosq, q̇, q̈, f^ext).
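For reference, a schematic Python sketch of the forward/backward recursion underlying RNEA for a serial chain with single-DOF joints is shown below; the per-joint transforms X[i], motion subspaces S[i] and spatial inertias I[i] are assumed to be given, and unlike the paper's Algorithm <ref> it omits external forces and the global trigonometric precomputation:

import numpy as np

def skew(w):
    return np.array([[0.0, -w[2], w[1]], [w[2], 0.0, -w[0]], [-w[1], w[0], 0.0]])

def crm(v):                                  # spatial motion cross-product operator
    m = np.zeros((6, 6))
    m[:3, :3] = skew(v[:3])
    m[3:, :3] = skew(v[3:])
    m[3:, 3:] = skew(v[:3])
    return m

def crf(v):                                  # spatial force cross-product operator
    return -crm(v).T

def rnea(X, S, I, qd, qdd, a_grav):
    """X[i]: 6x6 transform from parent to joint i; S[i]: 6-vector motion subspace;
    I[i]: 6x6 spatial inertia; returns joint torques tau for a serial chain."""
    n = len(X)
    v, a, f = [np.zeros(6)] * n, [np.zeros(6)] * n, [np.zeros(6)] * n
    tau = np.zeros(n)
    for i in range(n):                       # forward pass: velocities, accelerations, forces
        v_p = v[i - 1] if i > 0 else np.zeros(6)
        a_p = a[i - 1] if i > 0 else -a_grav # gravity enters as a fictitious base acceleration
        v[i] = X[i] @ v_p + S[i] * qd[i]
        a[i] = X[i] @ a_p + S[i] * qdd[i] + crm(v[i]) @ (S[i] * qd[i])
        f[i] = I[i] @ a[i] + crf(v[i]) @ (I[i] @ v[i])
    for i in range(n - 1, -1, -1):           # backward pass: torques and force propagation
        tau[i] = S[i] @ f[i]
        if i > 0:
            f[i - 1] = f[i - 1] + X[i].T @ f[i]
    return tau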
We design the RNEA pipeline according to Algorithm <ref>, as shown in Fig. <ref>a.
Each iteration of the loop in the algorithm is mapped to a submodule.
For example, for a robot arm with N_B=n links, the RNEA pipeline has 2n submodules, namely R_fi and R_bi, where i ∈ [1, n];
These submodules are independent of each other and transmit data through FIFO streams.
In this way, each submodule can just handle the calculation of the corresponding joint and link.
For the submodule R_fi, it needs to update X_i, calculate v_i,a_i,f_i according to the Algorithm <ref>, then transfer ftr_i forward to R_fi+1 and dtr_i downward to R_bi.
For the submodule R_bi, it needs to update X_i again, update f_i, compute btr_i-1 and τ_i, then transfer btr_i-1 backward to R_bi-1 and τ_i to output_i.
For each RNEA calculation, it needs to go through forward pass and backward pass.
For the previous submodules, intermediate data needs to be cached for a long time.
This is why the CPU's cache system doesn't handle this type of task very well.
Certain bypass buffers are added to our hardware design to avoid pipeline stalls.
In this way, the multi-stage pipeline can greatly improve the parallelism of calculations, thereby improving throughput.
This is the medium-grained pipeline mentioned earlier.
Although there are multiple similar matrix multiplications inside the submodule, we did not implement a fine-grained matrix multiplication pipeline unit.
Doing so will consume a lot of resources and will also make the sparse optimization less efficient.
We have some methods to further optimize the design.
§.§.§ Global Trigonometric Calculation
We do not calculate sin and cos in any submodule because the latency and cost of these two operators are very high.
So we centralize the computation of the required trigonometric functions in a separate module, and then provide the sin and cos values as inputs to the submodules.
This is reflected in both algorithms and hardware.
§.§.§ Sparsity and Constant Optimization
As shown in Fig. <ref>b, we can optimize the design according to the fixed sparse characteristics and constants in the robot parameters.
We assume that the nth joint is the most common revolute joint in robots.
Then only 12 elements in the 6 × 6 matrix X_n are non-constant, and these 12 elements have only 8 different values, all of which are in the form of c*sinq or c*cosq.
The symmetric matrix I_n has only 8 distinct non-zero constants, and S_n is a one-hot vector.
These features are also mentioned in Section <ref>, which can greatly reduce the computational cost.
Therefore, we don't need to design a complete 6 × 6 matrix vector computing unit, but only need to consider the non-zero part.
We can also further optimize the design according to the characteristics of the hardware, such as saving the constants on-chip, or using the look up tables in FPGAs, thereby reducing memory access overhead and improving performance.
§.§.§ Reupdate Transformation Matrix
As shown in Fig. <ref>c, we reupdate X_2.
Instead of buffering and transferring the calculated X_i, we recalculate it.
As mentioned above, the calculation cost of X_i is very small, while its transmission cost is relatively large.
In matrix X_i, apart from trigonometric functions, only one or two multiplications are required.
Since the forward submodules are more complex than the backward submodules, we need to minimize the number of interfaces and tasks of the forward module.
Therefore, we choose to transmit less data downward.
Only one data q_i is needed for the prismatic joint, and only two data sinq_i, cosq_i are needed for the revolute joint.
In Fig. <ref>c, f_i is lazily updated in submodules R_bi.
This avoids data loopback dependencies in the pipeline.
We will introduce this optimization method in detail in the next subsection.
§.§ Derivatives of Inverse Dynamics
The structure of the Δ RNEA algorithm is the same as that of the RNEA algorithm, the difference is the calculation in the loop body.
For specific algorithm details, please refer to <cit.> and <cit.>.
With the v,a,f results generated by the ID function, we can calculate the derivatives of inverse dynamics by the following steps
∂_u τ = Δ ID(q, q̇, q̈, f^ext)
= Δ RNEA(q, sinq, cosq, q̇, [v, a, f])
= Δ RNEA(q, sinq, cosq, q̇, ID(q, q̇, q̈, f^ext)).
We design the Δ RNEA pipeline, as shown in Fig. <ref>a.
It has the same structure as the RNEA pipeline, but the calculation content in each submodule is much more complicated.
In addition to the three optimization methods mentioned above, the Δ RNEA pipeline has the following further optimization methods.
§.§.§ Column Vector Type
We design a unified column vector type for data communication between submodules.
With the help of column vector type, the matrices can be decomposed into multiple column vectors.
Once a column vector is ready, it can be streamed to the next submodule instead of waiting for the entire matrix to be ready.
This allows for a smoother pipeline and reduces computational latency.
The interface complexity of submodules can also be reduced, thereby reducing resource consumption.
The smallest square in Fig. <ref>b is the column vector.
§.§.§ Incremental Calculation
The matrix variables within the submodules also have a sparsity characteristic related to the number of iterations.
In fact, the number of their useful columns is proportional to the iteration depth, as shown in Fig. <ref>b.
We can further utilize column vectors to only pass and calculate useful columns, while each submodule only needs to incrementally initialize newly added columns.
§.§.§ Lazy Update
There are some operations that need to add or subtract variables from other loop bodies, such as Algorithm <ref> line 10, and the corresponding calculation in the Δ RNEA algorithm.
If we update the data sequentially as described by the original algorithm, the loopback dependency between loop bodies will destroy the dataflow of the pipeline.
Actually we do not need to read them first and then write them back.
We can just pass the addend or subtrahend to the corresponding submodule, and wait for the next cycle to calculate.
We call this approach lazy update, as shown in Fig. <ref>c.
The example of this method can also be seen in Fig. <ref>c.
§.§ Forward Dynamics
There are two methods for the forward dynamics calculation.
The first method is the Articulated Body Algorithm (ABA) <cit.>.
The time complexity of this algorithm is proportional to the number of links of the robot.
For robots with high DOF, it is currently the best performing algorithm on CPUs.
However, it requires three forward or backward passes.
Therefore, ABA is not efficient for low-complexity robot dynamics, and the entire algorithm structure is also very unfriendly to hardware implementation.
The second method is the mass matrix inverse method.
According to the rigid body equation of motion (Eq. (<ref>)), we can easily deduce the following formula,
q̈ = FD(q, q̇, τ, f^ext)
= M^-1(q)(τ - C(q, q̇, f^ext))
= Minv(q)(τ - ID(q, q̇, 0, f^ext)),
where C(q, q̇, f^ext) = ID(q, q̇, 0, f^ext) can be derived from Eq. (<ref>), and it can be calculated by the RNEA (Algorithm <ref>).
For hardware accelerators, the mass matrix inverse method is worth adopting, because this method can reuse the RNEA module, which can save a lot of computing resources.
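In software terms, this reuse amounts to something like the following sketch, where rnea and minv are assumed callables returning ID(q, q̇, q̈) and M^-1(q), respectively, and external forces are omitted for brevity:

import numpy as np

def forward_dynamics(q, qd, tau, rnea, minv):
    bias = rnea(q, qd, np.zeros_like(q))   # C(q, qd) = ID(q, qd, 0), reusing the RNEA routine
    return minv(q) @ (tau - bias)          # qdd = M^-1(q) (tau - C)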
The only remaining question is how to efficiently compute the matrix M^-1(q) by the function Minv(q).
§.§ Mass Matrix and Its Inverse
There are many ways to compute the inverse of the mass matrix.
We can first use the Composite Rigid Body Algorithm (CRBA) <cit.> to calculate the mass matrix, and then use a certain decomposition algorithm to calculate the inverse of the mass matrix.
Because the mass matrix is a symmetric positive definite matrix, it can be decomposed by Cholesky factorization or LDL^T factorization, which can be written as M = LDL^T.
After the decomposition, we can get M^-1 efficiently <cit.>.
In many cases, the inverse of the mass matrix is calculated to aid in the calculation of forward dynamics.
If we just want to calculate the forward dynamics, we do not even need to know what M^-1 is.
We can transform Eq. (<ref>) into q̈ = L^{-T}D^{-1}L^{-1}(τ - C),
and then solve for q̈ via two triangular substitution procedures.
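A dense software sketch of this factor-and-substitute route (no exploitation of branch-induced sparsity, purely for illustration) is:

import numpy as np

def ldlt(M):
    """Unit lower-triangular L and diagonal d with M = L diag(d) L^T (M symmetric positive definite)."""
    n = M.shape[0]
    L, d = np.eye(n), np.zeros(n)
    for j in range(n):
        d[j] = M[j, j] - (L[j, :j] ** 2) @ d[:j]
        for i in range(j + 1, n):
            L[i, j] = (M[i, j] - (L[i, :j] * L[j, :j]) @ d[:j]) / d[j]
    return L, d

def ldlt_solve(L, d, rhs):
    y = np.linalg.solve(L, rhs)      # forward substitution: L y = rhs
    z = y / d                        # diagonal scaling:     D z = y
    return np.linalg.solve(L.T, z)   # back substitution:    L^T qdd = z

# toy check against a direct solve
rng = np.random.default_rng(0)
A = rng.normal(size=(4, 4)); M = A @ A.T + 4.0 * np.eye(4)
L, d = ldlt(M)
rhs = rng.normal(size=4)
assert np.allclose(ldlt_solve(L, d, rhs), np.linalg.solve(M, rhs))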
But this method has shortcomings.
We use the CRBA to first calculate the mass matrix, and then calculate the inverse of the matrix or solve the linear equation system through matrix decomposition.
The two serial steps will introduce long latency when accelerated in parallel using dedicated hardware.
In fact, we can do part of the matrix decomposition and back-substitution procedures in advance while computing the M matrix, especially the reciprocal operation.
This can overlap the computational latency of the two parts, thereby reducing the latency of the overall computation.
We propose MMinvGen (Algorithm <ref>) to generate the mass matrix or the inverse of the mass matrix.
It combines the CRBA algorithm <cit.> and a simplified ABA algorithm <cit.>.
We analyze the differences and similarities of these two algorithms, and further simplify and unify them through software-hardware co-optimization.
Compared with the original algorithms, we avoid a whole forward loop, which can greatly reduce hardware resource consumption.
Compared with the CRBA algorithm, our algorithm greatly improves the parallelism of computation.
We can choose whether to do matrix factorization and inversion while building the M matrix through the inv parameter.
The intermediate variables I_i^A and F_i are initialized to all-zero matrices.
The notation tree(i) represents the set of id of all nodes contained in the subtree of node i, and tree_e(i) = tree(i) \ i.
Then we can easily get the mass matrix and its inverse:
M = M(q)
= MMinvGen(q, sinq, cosq, 0),
M^-1 = Minv(q)
= MMinvGen(q, sinq, cosq, 1).
Based on Algorithm <ref>, we design the MMinvGen pipeline, as shown in Fig. <ref>a.
The overall structure of the MMinvGen pipeline is basically similar to that of the RNEA pipeline, but the direction of the dataflow is reversed.
It uses all the optimization methods used in RNEA pipeline and Δ RNEA pipeline,
such as the Incremental Calculation and the Lazy Update for the matrix F_i, and the Sparsity Optimization for the symmetric matrices M, M^-1 and I^A_i, as show in Fig. <ref>b.
In addition, it has the following optimization methods:
§.§.§ Priority Vector
The symmetric matrix I^A_i needs to be updated lazily in each submodule, yet its computation is in the critical path of the entire pipeline.
We no longer divide it into column vectors in the previous way, but divide it into four vectors according to specific priorities, calculate and pass the vectors in the critical path first.
The division method is shown in Fig. <ref>b.
§.§.§ Fixed-point vs Floating-point Reciprocal
Compared with floating-point numbers, fixed-point addition, subtraction and multiplication are very resource-efficient and fast.
However, there are reciprocal operations in Algorithm <ref> line 5.
And fixed-point division is very slow.
Therefore, we need to convert fixed-point numbers into floating-point numbers first, and then use the characteristics of floating-point numbers to quickly find the reciprocal <cit.>.
After getting the result, it is converted back to a fixed-point number to participate in subsequent operations.
This can greatly improve performance.
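A software analogue of this idea is sketched below; the magic-constant seed, the Q16.16 format and the iteration count are illustrative placeholders (the actual hardware implementation differs), and positive inputs are assumed:

import struct

FRAC_BITS = 16                                    # hypothetical Q16.16 fixed-point format

def fixed_to_float(x_fixed):
    return x_fixed / float(1 << FRAC_BITS)

def float_to_fixed(x):
    return int(round(x * (1 << FRAC_BITS)))

def approx_reciprocal(x, iters=3):
    """Cheap 1/x for x > 0: seed from the float's exponent bits, refine with Newton-Raphson."""
    bits = struct.unpack("<I", struct.pack("<f", x))[0]
    y = struct.unpack("<f", struct.pack("<I", 0x7EEEEEEE - bits))[0]
    for _ in range(iters):                        # y <- y (2 - x y): multiplications only
        y = y * (2.0 - x * y)
    return y

def fixed_reciprocal(x_fixed):
    return float_to_fixed(approx_reciprocal(fixed_to_float(x_fixed)))

# e.g. fixed_reciprocal(float_to_fixed(3.0)) is close to float_to_fixed(1.0 / 3.0)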
§.§ Derivatives of Forward Dynamics
Corresponding to forward dynamics, there is a derivative version of the ABA algorithm <cit.>.
However it is too complex and cannot take advantage of the hardware resources we already have.
Fortunately, as discussed in <cit.>, the following formula holds
Δ FD = -M^-1 Δ ID.
So we can implement Δ FD function by reusing the functions Δ ID and FD, which can be derived from the following formula
∂_u q̈ = Δ FD(q, q̇, τ, f^ext)
= -M^-1(q) Δ ID(q, q̇, q̈, f^ext)
= -M^-1(q) Δ ID(q, q̇, FD(q, q̇, τ, f^ext), f^ext),
where the matrix M^-1(q) is already calculated in function FD, so we can reuse it directly.
The forward dynamics derivatives function implemented in <cit.> is different from Δ FD.
To distinguish these two functions, we call the former Δ iFD, which can be defined as following
∂_u q̈ = Δ iFD(q, q̇, q̈, M^-1, f^ext)
= -M^-1Δ ID(q, q̇, q̈, f^ext).
Δ iFD is not a complete forward dynamics derivatives calculation, because both M^-1 and q̈ need to be known in advance and passed in as input parameters.
§ ARCHITECTURE OF RBDCORE
The hardware in the previous section is designed for a serial robotic arm with a fixed base, but the actual robot has a more complex structure and does not have a fixed base.
RBDCore needs to take this into account.
Based on this, the versatility of RBDCore needs to have two aspects, supporting multiple functions and supporting multiple robots.
It should be noted that RBDCore's support for multiple robots is not dynamic; the hardware needs to be reconfigured for each robot model. For a specific robot model, only one initial configuration is required.
In this section, we discuss how RBDCore optimizes performance and resource usage while implementing a general dynamics accelerator.
§.§ Multifunction Support
If all the functions are implemented one by one with the above-mentioned hardware design directly, it will cause waste of resources.
In fact, if we implement the function Δ FD according to Eq. (<ref>)-Eq. (<ref>), we can find that it already contains almost all the necessary hardware resources, as shown in Fig. <ref>a.
In order to completely calculate the result of Δ FD, we must go through the six steps ① to ⑥.
We find that each of the five functions ID, FD, Minv, Δ ID and Δ FD only requires a subset of these six calculation steps.
Therefore, we can take out part of this implementation as the interface of these five functions.
For RNEA and Δ RNEA modules, the intermediate result v,a,f needs to be passed between them.
We further optimized the data paths between them and interleaved their submodules together, as shown in Fig. <ref>b.
The submodules of Δ RNEA can be seen as bypass buffer when we need to calculate ID.
We can further find that both RNEA and matrix multiplication are used twice.
We can further reuse the hardware to perform these two calculations.
The format of matrix multiplication is unified as shown in Fig. <ref>c.
A is a symmetric matrix that can be optimized for sparsity.
Therefore, after the software-hardware co-design in the previous section, RBDCore can naturally realize multi-function support.
§.§ Architecture Overview
RBDCore needs to be configured according to the model and parameters of the robot before calculation.
To demonstrate the architecture of RBDCore, we use a quadruped robot with an arm (Fig. <ref>) as an example to configure the accelerator.
This robot has N_B=19 links, and its total DOF is N=24 (including the 6-DOF floating base).
After the configuration, the structure of RBDCore will be fixed, as shown in Fig. <ref>.
The inputs of RBDCore include type, q, q̇, q̈/τ, f^ext and M^-1.
Parameter type indicates which function RBDCore should run.
It also contains some flag informations to provide function options and output types.
The outputs of RBDCore contain τ, q̈, M, M^-1, ∂_u τ, and ∂_u q̈.
They can be selected and combined into the output of any function.
The function Δ FD can optionally output M^-1.
RBDCore contains 8 modules, namely Decode Module, Encode Module, Global Trigonometric Module, Input Stream Module,
Forward-Backward Module, Backward-Forward Module, Schedule Module and Feedback Module.
Next, we will briefly introduce the main design methods of the RBDCore architecture.
§.§.§ Decode and Encode
Depending on the chosen function, RBDCore will have different inputs and outputs.
In order to facilitate the design of the multifunctional pipeline, we unify the formats of all inputs and outputs.
The Decode Module deserializes and decodes the data from the input interfaces.
The Encode module can convert data into a CPU-friendly type for subsequent use.
§.§.§ Scheduling System
The scheduling system of RBDCore consists of Input Stream Module, Schedule Module and Feedback Module.
They constitute a state machine with a feedback structure.
With the support of this scheduling system, RBDCore can provide timely and accurate data to the multifunctional pipelines, and coordinate conflicts within and between functions.
The Input Stream Module collects the corresponding data according to the module instruction inst, and then provides data to the specified module in a certain order.
There are some subtle differences between the function type type and the module instruction inst.
An instruction type will be translated into multiple different microinstructions inst during its life cycle, so as to facilitate the scheduling of multi-function pipelines.
In this way, modules and submodules can automatically select the output type and path according to the current module instruction, thereby realizing the dynamic switching of the dataflow.
The Schedule Module is responsible for integrating and scheduling the calculation results of multifunctional pipelines.
It also provides the remaining few vector subtraction and matrix multiplication calculations for the FD, Δ FD and Δ iFD functions.
When calculating numerical integration in TO or MPC, Schedule Module can also combine the calculation results of the FD function with the current state to generate a new integration step.
This new task will be passed to the Feedback Module, waiting for the next calculation.
§.§.§ Forward-Backward and Backward-Forward
Forward-Backward and Backward-Forward are an abstraction of the dataflow types of dynamic algorithms.
The dataflow of the Forward-Backward module pass forward first, and then pass backward,
while the dataflow of the Backward-Forward module is just the opposite.
Various dynamics algorithms can be implemented through combinations of these two dataflows; only the calculation logic differs.
The Forward-Backward Module is used to compute RNEA and its gradient Δ RNEA.
It has 4N_B submodules, namely R_fi, R_bi, D_fi and D_bi, where i ∈ [1,N_B].
The submodules are divided into root and multiple branches according to the structure of the robot.
The data between them needs to be broadcast and reduced.
In each branch, all submodules can form a Dynamics Array.
Dynamics Array can work similarly to a systolic array.
Its submodules can take turns reading data in joint order, and perform calculations along the array in the form of a pipeline, and then output them in turn.
The Dynamics Array interleaves the submodules of RNEA and Δ RNEA together.
Therefore, the results from each submodule of RNEA pipeline can be directly passed to the corresponding submodule of Δ RNEA pipeline.
If there is no need to compute the derivatives, the Δ RNEA submodules can be switched to a data pass mode to support standalone operation of the RNEA.
In the current implementation, the Backward-Forward Module is used to compute MMinvGen.
It has 2N_B+2 submodules, namely M_bi and M_fi, where i ∈ [0,N_B], and also be divided into root and multiple branches.
It has the potential to implement the ABA algorithm, but due to resource constraints we do not currently implement it.
§.§ Structure-Adaptive Pipelines
Robots come in many different structures,
such as the mobile manipulator (Tiago, Fig. <ref>),
the quadruped robot (Spot-arm, Fig. <ref>),
and the humanoid robot (Atlas, Fig. <ref>).
These different robot structures will change the dependencies between iterations of the dynamics algorithms.
Therefore, the pipeline structure needs to be adjust according to the robot structure.
So we call it Structure-Adaptive Pipelines.
§.§.§ Organization of Submodules
We design a general optimization method that adapts the organization of RBDCore submodules according to the robot structure.
We illustrate this approach with the three robot examples above, as shown in Fig. <ref>.
The left side of each subfigure is the topological relationship of the robot structure, and the right side is the organizational relationship of the submodules in Backward-Forward Module.
The organizational relationship of the submodules in Forward-Backward Module is the same as that of Backward-Forward Module.
Tiago has a 3-DOF mobile base and a 7-DOF arm, its topology is linear.
So we can just use a root and a branch to organize all submodules.
The root joint type is planar, which only has 3 degrees of freedom, so it does not need to be split into two joints.
As shown in Fig. <ref>, the root corresponds to the base link, and the branch corresponds to the robotic arm.
Spot-arm has a 6-DOF body, four 3-DOF legs and a 6-DOF arm.
It has a tree topology.
We can organize all submodules into a root and three branches.
As shown in Fig. <ref>, the root corresponds to the body link, a 6-size branch corresponds to the robotic arm, and two 3-size branches corresponds to the four legs.
Here we optimize the number of leg branches.
Since the legs of the Spot are all symmetrical, the sparsity of the leg parameters is the same, and only a few parameters differ, most of which differ only in sign.
According to this observation, we can handle two symmetrical legs with one 3-size branch.
This saves hardware resources.
At the same time, due to the small topological depth of the leg branches, the complexity of calculating the leg dynamics is much smaller.
Therefore, the delay of each submodule will be much smaller, and the processing time of one robotic arm can fully support the processing of two legs.
Atlas is a humanoid robot.
It has a body with a waist, two arms and two legs, forming a topological tree.
The waist is made up of three joints (torso1, torso2 and torso3) that connect the torso to the pelvis.
This makes it impossible for branching structures to exist only at the root.
Traditionally, people will define the pelvis as the root, as shown in Fig. <ref>.
Under this definition, the depth of the topological tree will become 11.
As we have discussed before, the complexity of submodules will increase as the iteration depth increases (section <ref>).
So we can optimize RBDCore by adjusting the depth of robot topology tree, as shown in Fig. <ref>.
We redefine torso2 as the root of the robot.
This does not change the robot's topological connectivity, but the depth of the topological tree was reduced to 9, and the depth of each branch is balanced.
This can reduce a lot of resource consumption, while reducing computing latency.
Because Atlas is a symmetrical robot, we can also handle two arms or two legs with a single branch array.
It is worth noting that in Fig. <ref>, the broadcast_s and reduce_s connected to the root (torso2) submodules are oriented to different hardware branches,
while the broadcast_t and reduce_t connected to torso3 or pelvis submodules are oriented to the same hardware branches, which are time-division multiplexed.
This difference also exists in the submodules' organization of the Spot-arm robot.
§.§.§ Pipeline Stages
We do not perform fine-grained pipelining for the components of matrix operations, or the resource consumption would be very large.
Nor do we perform only coarse-grained pipelining for each module.
Such fine-grained pipelines combined with coarse-grained pipelines are precisely the architecture proposed in previous work <cit.>.
Instead, RBDCore disassembles the algorithm modules into submodules, and then performs medium-grained pipelining among these submodules.
Fig. <ref> shows the pipeline stages for the Forward-Backward Module.
The pipeline of Backward-Forward Module is similar to this.
Each branch array is independent of each other, so they can perform asynchronous parallel computing.
Since the complexity of submodules increases with iteration depth (Section <ref>), the pipeline cycle of branch 1 is almost double that of branches 2/3.
So branch 2/3 can handle twice as many tasks as branch 1.
§.§.§ Pipeline Schedule
In many TO and MPC algorithms, computations on multiple sampling points can be processed in parallel.
We can easily use SAPs to speed up these tasks without worrying about dependencies.
But sometimes we need to call the dynamics function serially.
For example, the 4th-order Runge-Kutta sensitivity analysis has 4 serial sub-tasks on each sampling point.
These tasks need to be scheduled appropriately, as shown in Fig. <ref>.
Subsequent sub-tasks need to be scheduled after the predecessor tasks are completed.
Before that, RBDCore can compute other independent batched tasks first.
Under this scheduling strategy, RBDCore can effectively avoid the negative impact of serial sub-tasks on parallelism.
For comparison, we also plot how the multithreaded CPU schedules these tasks.
The CPU performs spatial parallelism through multi-core resources, while RBDCore performs temporal parallelism through SAPs.
§.§.§ Branch-induced Sparsity
In the dynamics algorithms mentioned in this article, the structure of the robot determines the dependencies between variables (λ_i).
Therefore, there will be no direct dependencies between submodules of different branches.
According to this feature, submodules can only keep relevant matrix columns instead of the entire parameter matrix.
This is robot branch-induced sparsity.
§.§.§ Root Submodules
In order to avoid complicated calculations, the base link is split into two parts.
The robot assumed in the architecture diagram has a floating base with 6 DOF.
We can split the 6-DOF virtual joint into two 3-DOF virtual joints, which are spherical joint and 3-DOF translation joint.
This reduces the computational complexity of the root node.
According to the different needs of the application, RBDCore can provide different modes to handle the dynamics for the root.
For the virtual joint corresponding to the base link, we can treat it as an ordinary joint and use the standard algorithms for calculation.
For some bases that are almost not affected by dynamics (for example a robotic arm attached to a very heavy base), we can just provide a state for the base link from the input as an initialization for the following links, or even ignore the root node directly.
Both Forward-Backward Module and Backward-Forward Module can select the mode of the root submodules through the module instruction inst.
This can improve the versatility of the RBDCore accelerator and reduce unnecessary calculations in some cases.
§.§ Dataflow
All the modules and submodules described above are completely driven by data, and the data passed between them are implemented by FIFO streams.
Therefore, these modules and submodules can be organized in the form of dataflow.
As shown in Fig. <ref>, the dataflow path and activated modules/submodules for different functions are marked in dark color.
For each function, the Decode Module, Global Trigonometric Module, Input Stream Module, Schedule Module and Encode Module all need to be activated.
For functions ID, FD, M, Minv, Δ ID and Δ iFD,
their dataflow is relatively simple, which is a combination of some modules and submodules of RBDCore.
For the function Δ FD, on the other hand, the dataflow is more complex, especially when batched tasks are calculated at the same time.
According to Eq. (<ref>), the Δ FD function has three stages, the first stage is to calculate the FD function, the second stage is to calculate the Δ ID function, and the third stage is to calculate the final result.
Both the first and second stages use the Forward-Backward Module for calculations, so we need the Feedback Modules to write data back to the Input Stream Module.
Between stages, the Feedback Module is also responsible for saving necessary intermediate results for a second use, such as the inverse of the mass matrix.
These different stages are distinguished by the module instruction inst, so that even with multiple in-flight tasks, the overall dataflow does not become disordered.
§.§ Accelerator Configuration and Interface
In order to achieve acceleration capabilities for different robots, the accelerator first needs to be configured according to the specific robot model and parameters.
Specifically, this process determines the structure of the accelerator according to the structure of the robot model,
and optimizes the hardware computing unit according to the sparsity of the robot's parameter matrices.
For different robots of the same model, we can adjust the values of the constant parameters according to the calibration results without changing the structure of the accelerator.
For a specific robot, the accelerator only needs to be configured once.
The programming interface of the accelerator is in the form of supported functions.
Users can call the functions in a unified form of RBDCore(type, &in, &out).
The accelerator can be connected to the host via shared memory or a bus.
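As an illustration only, a host-side call might look like the sketch below; the paper specifies the unified form RBDCore(type, &in, &out), while the transport object and its methods here are invented placeholders rather than a documented driver API.

```python
# Hypothetical host-side wrapper mirroring RBDCore(type, &in, &out).  The
# function identifiers and the `dev` transport (shared memory or bus) are
# assumptions for illustration, not part of the accelerator's documentation.
import numpy as np

FUNC_ID = {"ID": 0, "FD": 1, "M": 2, "Minv": 3, "dID": 4, "dFD": 5, "diFD": 6}

def rbdcore(func_type, x_in, dev):
    dev.write_instruction(FUNC_ID[func_type])           # module instruction `inst`
    dev.stream_in(np.asarray(x_in, dtype=np.float32))   # Input Stream Module
    return dev.stream_out()                             # result from the Encode Module
```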
§ EVALUATION
§.§ Methodology
We implement and evaluate the RBDCore architecture on the Xilinx platform.
Vitis and Vivado are used to synthesize the architecture, simulate the dataflow and estimate the power.
To compare with the previous state-of-the-art CPU<cit.>, GPU<cit.> and FPGA<cit.> works, we use the same FPGA chip (xcvu9p-flga2104-2L-e) as the target.
The hardware configurations used in the following evaluations are listed in Table <ref>.
There are two main reasons why we use these configurations of hardware.
The first is that the previous papers used the same hardware, and the second is that they have similar manufacturing technology (12nm-16nm, ours 14nm) and release year (2016-2018, ours 2016), which makes the comparison more fair.
We compare the latency and throughput of RBDCore with the widely-used C++ dynamics library Pinocchio <cit.> and the state-of-the-art GPU dynamics library GRiD <cit.>.
We perform the best compilation optimization options (-O3,-mavx) on these libraries.
We use the RBDCore architecture to realize the dynamics functions for three robots, LBR iiwa <cit.>, HyQ <cit.> and Atlas <cit.>, respectively.
This is consistent with two previous works <cit.>.
When comparing latency, to avoid memory access bottlenecks, we use a single-threaded CPU, and the GPU uses only a small number of blocks.
When comparing throughput, 4 persistent threads are used, which is the same as in <cit.>.
In our CPU, it is also difficult to further improve performance with more threads (see Fig. <ref>b).
All the results are the averages of millions of runs.
§.§ Latency and Throughput
The performance comparison between RBDCore and CPU, GPU libraries are shown in Fig. <ref>.
For all supported functions, the latency of RBDCore is better than that of the tested CPU and GPU.
For CPU, we implement these functions using the most performant methods in the Pinocchio library.
For GPU, GRiD <cit.> does not implement the mass-matrix computation, so the corresponding entries are missing from the figure.
Benefiting from the SAPs, RBDCore can achieve very high throughput.
When comparing the throughput, 128 batched tasks are tested, and the I/O overhead is considered.
The input and output of RBDCore are in the form of data streams, so the I/O overhead of RBDCore can be greatly masked.
For the iiwa robot, the latency of RBDCore is 1.2× to 5.7× better than CPU, and 6.2× to 16.0× better than GPU (Fig. <ref>);
The throughput of RBDCore is 5.1× to 20.5× higher than CPU and 2.4× to 5.8× higher than GPU (Fig. <ref>).
For the HyQ robot, the latency of RBDCore is 2.0× to 5.4× better than CPU, and 7.2× to 10.4× better than GPU (Fig. <ref>);
The throughput of RBDCore is 5.5× to 27.1× higher than CPU and 2.2× to 7.4× higher than GPU (Fig. <ref>).
For the Atlas robot, the latency of RBDCore is 1.5× to 7.1× better than CPU, and 5.5× to 10.4× better than GPU (Fig. <ref>);
The throughput of RBDCore is 7.4× to 21.6× higher than CPU and 7.0× to 9.1× higher than GPU (Fig. <ref>).
It is worth noting that here we assume that the memory interface bandwidth can be at most 32GB/s, and the function Δ ID has reached the memory access bottleneck.
RBDCore is not specifically optimized for latency. Instead, it is designed to maximize the computing throughput.
As a result, compared with the previous work Robomorphic<cit.>, our computational latency is slightly higher for the same robot configuration on the same FPGA chip.
Specifically, the latency of computing Δ iFD for robot iiwa is 0.76μs in RBDCore, while the latency in Robomorphic is 0.61μs.
This is mainly due to the additional data transfer overhead caused by the dataflow in RBDCore for supporting multiple functions and the SAPs.
On the other hand, we compare the throughput with the previous works <cit.>, as shown in Fig. <ref>.
They implemented the function Δ iFD with CPU, GPU and FPGA.
As a comparison, when running batched tasks, RBDCore further gives 10.3× to 13× performance improvement over their optimized CPU implementation,
3.4× to 11.3× performance improvement over their optimized GPU implementation,
and 6.3× to 7.0× performance improvement over their FPGA implementation.
§.§ End-to-End Application
In the robot application mentioned in Fig. <ref>, three kinds of tasks can be accelerated by RBDCore, namely forward dynamics, the inverse of the mass matrix, and the derivatives of dynamics,
which can be calculated by functions FD and Δ FD.
With the help of the SAPs, RBDCore can achieve 11.2× performance speedup on these tasks.
When RBDCore accelerates the supported tasks, the CPU can compute other batch tasks at the same time.
Compared with only using 4-thread CPU for calculation, RBDCore can help the entire system increase the control frequency by 80%.
§.§ Resource Usage, Power and Energy
Because each submodule needs to perform computation independently, RBDCore requires many computing resources.
But compared with the previous work <cit.>, our architecture is able to achieve higher performance with similar resource usage.
Our implementation takes up 62% DSP, 17% FF and 54% LUT of the target FPGA.
The previous work noted that, due to the lack of DSPs, they could not instantiate an additional independent unit on the same FPGA chip to parallelize the computation, so their design also occupies at least half of the DSPs.
The power dissipation of our accelerator varies for different functions and robot configurations.
We estimate RBDCore's power for LBR iiwa.
For different functions the power dissipation ranges from 6.2W to 36.8W.
For comparison, the previous work Robomorphic<cit.>, which only supports the Δ iFD function, dissipates 9.6W,
while RBDCore's power for the function Δ iFD is up to 31.2W.
With 3.25× higher power, RBDCore is 6.6× faster than Robomorphic, so the energy consumption of Robomorphic is 2.0× higher than RBDCore's.
In terms of energy-delay product, RBDCore is 13.2× better than Robomorphic.
§ CONCLUSION
In this paper, we analyze the computing requirements and bottlenecks of current robot applications, and show that the calculation of robot dynamics and its derivatives is a demanding task.
Several prior studies have targeted the related acceleration problem.
We further analyze the shortcomings of these works and propose our own solutions.
Through the analysis and summary of the robot dynamics algorithms, our proposed architecture RBDCore provides multifunctional support through resource reuse and dynamic dataflow reconfiguration.
By further designing SAPs and specializing the architecture according to the robot's topological structure and model parameters, the performance and throughput are greatly improved.
IEEEtranS
|
http://arxiv.org/abs/2307.00880v1
|
20230703092028
|
Co-Learning Meets Stitch-Up for Noisy Multi-label Visual Recognition
|
[
"Chao Liang",
"Zongxin Yang",
"Linchao Zhu",
"Yi Yang"
] |
cs.CV
|
[
"cs.CV"
] |
Journal of Class Files, Vol. 14, No. 8, August 2021
Shell et al.: A Sample Article Using IEEEtran.cls for IEEE Journals
Co-Learning Meets Stitch-Up for Noisy Multi-label Visual Recognition
Chao Liang, Zongxin Yang, Linchao Zhu, Yi Yang
Chao Liang, Zongxin Yang, Linchao Zhu, Yi Yang are with School of Computer Science, Zhejiang University, Zhejiang, China. E-mail: {cs.chaoliang, yangzongxin, zhulinchao, yangyics}@zju.edu.cn.
This work is supported by National Key R&D Program of China under Grant No. 2020AAA0108800. This work is partially supported by the Fundamental Research Funds for the Central Universities (No. 226-2022-00051)
August 1, 2023
============================================================================================================================================================================================================================================================================================================================================================================================================================================================================
In real-world scenarios, collected and annotated data often exhibit the characteristics of multiple classes and long-tailed distribution. Additionally, label noise is inevitable in large-scale annotations and hinders the applications of learning-based models. Although many deep learning based methods have been proposed for handling long-tailed multi-label recognition or label noise respectively, learning with noisy labels in long-tailed multi-label visual data has not been well-studied because of the complexity of long-tailed distribution entangled with multi-label correlation. To tackle such a critical yet thorny problem, this paper focuses on reducing noise based on some inherent properties of multi-label classification and long-tailed learning under noisy cases. In detail, we propose a Stitch-Up augmentation to synthesize a cleaner sample, which directly reduces multi-label noise by stitching up multiple noisy training samples. Equipped with Stitch-Up, a Heterogeneous Co-Learning framework is further designed to leverage the inconsistency between long-tailed and balanced distributions, yielding cleaner labels for more robust representation learning with noisy long-tailed data. To validate our method, we build two challenging benchmarks, named VOC-MLT-Noise and COCO-MLT-Noise, respectively. Extensive experiments are conducted to demonstrate the effectiveness of our proposed method. Compared to a variety of baselines, our method achieves superior results.
Noisy labels, multi-label long-tailed recognition, deep learning
§ INTRODUCTION
The remarkable breakthroughs of convolutional neural networks <cit.> in visual recognition can be largely attributed to the emergence of large-scale data resources. In image recognition, conventional classification settings <cit.> typically assume that each image is annotated with a single label and each class contains the same number of instances. In real-world scenarios, however, collected data often exhibit the characteristics of long-tailed distribution <cit.> and multi-label annotations <cit.>. Besides, label noise <cit.> generally exists in large-scale annotations. Training models with noisy labels inevitably degrades networks' learning performance <cit.> and thus hinders the
development of robust learning-based models.
To improve networks' classification performance under long-tailed multi-label scenarios, several recent works extend widely used long-tailed methods from the single-label setting to the multi-label case. Wu <cit.> proposed a distribution-balanced loss to deal with label co-occurrence. Guo <cit.> leveraged a network consistency loss to learn a robust representation under different sampling strategies. As for handling label noise, current methods <cit.> on learning with noisy labels mostly focus on the single-label setting. DivideMix <cit.> divided the training data into a labeled and an unlabeled set, and then employed semi-supervised learning to tackle this problem. MoPro <cit.> resorted to label correction to mitigate the impact of noisy labels. Although the above progress is remarkable, the problem of label noise in long-tailed multi-label data has been barely explored, since the combination of long-tailed distribution and multi-label correlation further complicates the label-noise problem. When encountered with long-tailed multi-label data, existing strategies neglect the properties of multi-label annotations <cit.> or fail to generalize well under imbalanced data distribution <cit.>.
In order to relieve the noise problem within the challenging long-tailed multi-label scenarios, we go back to one of the keys of learning with noisy labels, i.e., reducing noise in the training stage, based on some inherent properties of multi-label classification and long-tailed learning under noisy cases. First, the negative effect of noisy labels can be alleviated by training with cleaner data. Multi-label classification aims to detect the existence of an object of each class in a given image. In noisy cases, a set of images annotated with the same label is more likely to contain all labeled classes than any single image from the set. As shown in Figure <ref>, when given only one image with a noisy multi-label containing the cat, the model is uncertain whether the cat exists in this image or not. However, when a pair of images is provided to indicate the cat's presence, the probability of the cat's existence increases considerably. In other words, if we stitch up a set of images and take the union of their labels, a training example with a lower probability of being noisy can be synthesized. Second, disagreement from different sampling priors helps us distinguish noisy labels when the training distribution is long-tailed. Label correction is an effective tool to correct noisy labels in the literature <cit.>.
Differently, Co-teaching<cit.> leveraged the inconsistency between two networks to select clean samples. This inconsistency is obtained from the different network initialization. Motivated by these methods, our intuition is that the disagreement <cit.> under different sampling policies can help us correct noisy labels in the distribution of long-tailed. More specifically, random sampling prefers head classes, while class re-balanced sampling tends to handle the tail better. We inherit such a two-branch structure to rectify noisy labels from cross supervision using the discrepancy between different sampling priors.
Based on the motivations mentioned above, this paper tackles the multi-label long-tailed classification problem with noisy labels. First, we propose a simple but effective augmentation called “Stitch-Up" to synthesize cleaner training samples directly. Specifically, Stitch-Up concatenates several images sharing the same classes and takes the union of their labels simultaneously. Such a strategy reduces label noise in the training data from the standpoint of probability theory. At the same time, it preserves information losslessly.
Second, a Heterogeneous Co-Learning framework is designed to perform label correction by exploiting the sampling prior. Our framework consists of two branches jointly trained with random and balanced sampling. Each branch rectifies noisy labels for those with high confidence and these corrected pseudo-labels are used to guide the training procedure of its peer network. The above two branches can recognize different noisy samples, which benefits from the heterogeneous structure with different sampling distributions. And Co-Learning can reduce error accumulation caused by training with wrong labels so that the performance is improved.
We conduct extensive experiments on the noisy version of two multi-label long-tailed datasets named VOC-MLT-Noise and COCO-MLT-Noise, respectively. Compared to a variety of baseline methods, our superior results demonstrate the effectiveness of the proposed method.
The main contributions are summarized as follows:
* We introduce a novel Stitch-Up augmentation to synthesize cleaner training samples. The generated cleaner training data facilitate the learning of more robust models.
* By leveraging different sampling priors and loss functions, we design a Heterogeneous Co-Learning framework to rectify noisy labels. The corrected pseudo-labels from one network cross-guide the training process of its peer network, which boosts the network's performance.
* To validate the effectiveness of our method, we propose two synthetic multi-label long-tailed benchmarks named VOC-MLT-Noise and COCO-MLT-Noise, respectively, with multiple noisy rates.
* Thorough experiments on these two datasets demonstrate the effectiveness of our method.
§ RELATED WORK
§.§ Long-tailed recognition
When learning with long-tailed data, one of the obstacles is that the frequent classes dominate the training procedure. But the test criterion typically prefers a uniform distribution or places more attention on the less representative classes <cit.>. Such inconsistency between the training and test phase leads to poor performance.
A wide range of strategies have been proposed to mitigate the effect of long-tailed distribution, including data resampling <cit.>, cost-sensitive reweighting <cit.>, margin aware loss <cit.>, two-stage finetune <cit.>, transfer learning <cit.> and meta-learning <cit.>. Among them, resampling and reweighting are the two most prominent directions. Resampling methods tend to adjust a more balanced training distribution. <cit.> advocates oversampling the minority classes while <cit.> claims that undersampling the frequent classes is better. In recent work, Zhou <cit.> proposed a two-branch architecture called BBN with different sampling strategies. The unified conventional and re-balancing branches promote both representation and classifier learning. Reweighting techniques <cit.> assign large weights for the training samples in the tail classes, to resist the skew prior distribution.
The majority of the aforementioned works focus on the multi-class setting, where each image has a single label. In real-world scenarios, large-scale datasets are often annotated with multiple labels and exhibit a long-tailed distribution <cit.>. Recently, Wu <cit.> proposed to combine rebalancing and reweighting methods to handle long-tailed multi-label recognition. Guo <cit.> extended the BBN <cit.> framework to support multi-label learning, enforcing consistency between different branches. Despite their improved performance, they ignored the label noise and assumed the collected datasets are clean. Our work targets handling the label noise in the multi-label long-tailed setting.
§.§ Multi-label classification
Before the era of deep learning, multi-label classification was primarily tackled by turning it into multiple independent binary classification problems <cit.> or adapting existing algorithms, e.g. k-nearest neighbors <cit.>, decision trees <cit.>, and kernel learning <cit.>. However, training separate binary classifiers neglects the relationships between labels, and it is impractical to enumerate all label combinations. As ConvNets achieved significant success, modern methods <cit.> rely on deep networks to model label dependencies. In <cit.>, a recurrent neural network is utilized to embed the label correlations. Lee <cit.> leveraged the knowledge graph <cit.> to describe the relationships between labels. ML-GCN <cit.> captured label co-occurrence with graph structures. SDE <cit.> sought a selective, discriminative and equalizing feature representation via a learning-based feature pooling framework.
In contrast to these works, we focus on multi-label noise in image classification. Since images annotated with multi-labels often share the same label, label noise in the training set can be reduced when we synthesize cleaner images by stitching up collections of such images.
§.§ Learning with noisy labels
Deep neural networks are prone to fit noisy labels <cit.>. Training with corrupted labels can inevitably yield poor generalization performance. Existing works on learning with noisy labels can be roughly divided into three categories: (1) sample selection <cit.>. It works by filtering out label noise and retraining with clean data. Small loss trick plays an important role in noise identification, based on the observation <cit.> that deep neural networks often memorize simple patterns first and then noisy samples. (2) label correction <cit.>. Unreliable supervision from noisy labels can make optimization difficult. Several methods perform label correction by prediction from the network. (3) sample reweighting <cit.>. This approach is a commonly used strategy against noisy labels. Ren <cit.> allocated weights based on the gradient direction. Meta-Weight-Net <cit.> adopted the meta-learning framework to learn a weighting function mapping from the training loss.
These methods mostly address single-label noise and have great limitations in the long-tailed and multi-label scenarios. Our approach considers the properties of multi-label and long-tailed distribution. We propose a heterogeneous structure that allows better label correction during the training procedure.
§ METHOD
§.§ Overview
Our proposed method aims to address the multi-label long-tailed classification problem with noisy labels. Suppose that we are given a training dataset 𝒟_train={(𝐱_i, ỹ_i)|i=1,2,...,N}, where N is the number of training samples and each sample is annotated with a noisy multi-label ỹ_i. Specifically, ỹ_i ∈{0,1}^C contains C binary labels, with 1 indicating the presence of the label and 0 otherwise. Compared to the clean label 𝐲_i, ỹ_i might be incomplete or mislabeled, with more categories marked as absent. In addition, the number of samples per class is imbalanced. The goal of this task is to learn a robust model with good generalization ability on unseen test data when training on a multi-label long-tailed noisy dataset.
In this work, we expect to handle label noise under the multi-label long-tailed setting. As deep models are prone to memorizing wrong labels <cit.>, learning with noisy labels poses great challenges to train deep neural networks effectively. One of the keys to alleviating the negative effect of noisy labels is reducing noisy training samples. Following this direction, we take the advantage of inherent properties within multi-label long-tailed circumstances to combat label noise. First, we propose a novel Stitch-Up augmentation to obtain less noisy training samples (Section <ref>). We stitch up multiple images and their multi-labels simultaneously. This results in cleaner training samples. Second, we introduce a Heterogeneous Co-Learning framework to perform online noisy label correction in Section <ref>. By leveraging inconsistency between different sampling priors, we rectify the wrong labels based on the confidence from our model. The corrected pseudo-labels are utilized to cross-guide the training procedure of its peer network. Then, we introduce the overall pipeline in Section <ref>. In the end, Section <ref> and Section <ref> details the loss function we optimize and inference procedure.
§.§ Stitch-Up
Intuitively, training with less noisy examples can boost the model's performance. Since multi-label visual recognition aims to predict the existence of the object in an image for each class, it is more likely to find the presence of a class when given a set of images. This motivates us to synthesize new cleaner training samples by stitching up a collection of images that may share the same label in noisy cases.
For each training sample 𝐱_i with a multi-label ỹ_i, we stitch up a set of examples with overlapping labels in the training set. Sample Selection: We construct the candidate set 𝒮_i^k composed of K samples that share the same class k for Stitch-Up. Specifically, in the first step, we choose an existing object class k from ỹ_i, where ỹ_ik = 1. In the second step, a collection of K-1 samples with class k are selected from the subset of the training data 𝒟_train^k={(𝐱_j, ỹ_j)| ỹ_jk=1}. Combined with the original sample (𝐱_i, ỹ_i), these K samples are formed as 𝒮_i^k where |𝒮_i^k| = K. Stitch-Up Synthesis: Then, we obtain a new training sample (𝐱̅_i, 𝐲̅_i) by stitching up these K samples and performing label union. This process can be expressed as:
𝐱̅_i = ⋃_{(𝐱_j, ỹ_j) ∈ 𝒮_i^k} 𝐱_j,
𝐲̅_i = ⋃_{(𝐱_j, ỹ_j) ∈ 𝒮_i^k} ỹ_j.
When training with deep neural networks, we treat Stitch-Up as means of data augmentation and apply this augmentation with a probability of p. Note that our Stitch-Up can be also applied at the feature level. In practice, Stitch-Up can be implemented in various forms. We show three regular types of Stitch-Up augmentation in Figure <ref>. All three types perform label union as means of Label Stitch-Up. For input images concatenation, we concatenate the images directly after sample selection and then feed the concatenated image into deep models. For features concatenation, we obtain the intermediate feature for each image and then concatenate these features. For features average, we average the intermediate features instead. The intermediate features can be extracted from different stages of deep neural networks. Empirically, we adopt the features average in the experiments.
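As a concrete illustration, the feature-average variant with K = 2 can be sketched as follows (PyTorch-style pseudocode; the dataset index and tensor shapes are placeholders, not the authors' released code).

```python
# Sketch of Stitch-Up with K = 2 at the feature level.  `index_by_class[k]` is
# assumed to list the training indices whose noisy label contains class k.
import random
import torch

def pick_partner(i, noisy_labels, index_by_class):
    """Sample Selection: choose a partner that shares one of sample i's classes."""
    k = random.choice(noisy_labels[i].nonzero(as_tuple=True)[0].tolist())
    return random.choice(index_by_class[k])

def stitch_up_features(f_i, f_j, y_i, y_j):
    """Stitch-Up Synthesis: average the intermediate features, union the labels."""
    f_bar = (f_i + f_j) / 2
    y_bar = torch.clamp(y_i + y_j, max=1)   # union of 0/1 multi-label vectors
    return f_bar, y_bar
```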
Why does Stitch-Up work?
We provide a simple explanation based on the probability theory. For an object class k with a noise rate of γ_k (0<γ_k<1), the probability of the existence for class k is 1-γ_k when we are given only one image annotated with a noisy multi-label containing class k. If we stitch up two such images that share the same class k, we get a higher probability of 1-γ_k^2. This reveals that stitch-up augmentation can reduce label noise explicitly.
Comparison against Mix-Up. Our Stitch-Up is similar to the Mix-Up <cit.> augmentation. Both combine samples and labels simultaneously to synthesize new training samples. However, the motivation is quite different. Mix-Up encourages the model to behave linearly, reflecting a good inductive bias, while Stitch-Up synthesizes more training samples with less label noise. Besides, our Stitch-Up preserves information losslessly. Mix-Up linearly interpolates two images, which suffers from the unnatural artifact problem <cit.>: the interpolated result looks unnatural and discards information to some extent. In contrast, our Stitch-Up concatenates the input images, which leaves each image intact without losing information. Our experiments further verify that Stitch-Up can outperform Mix-Up in multi-label long-tailed recognition with noisy labels. The information loss might affect the head, medium and tail classes differently due to multi-label noise.
§.§ Heterogeneous Co-Learning
In this section, we introduce a Heterogeneous Co-Learning framework to overcome overfitting when training with noisy labels. Co-Learning<cit.> has shown promising results in dealing with single-label noise; its main idea is to leverage the inconsistency arising from different network initializations to select clean training examples. In the long-tailed situation, we notice that different sampling priors have different preferences. To be more specific, classifiers trained with random sampling tend to behave well on head classes, while those under balanced sampling recognize tail classes better. Based on this observation, we propose to exploit the sampling prior to detect noisy labels. We enforce each network to be cross-guided by the pseudo-labels corrected by its peer network.
As illustrated in Figure <ref>, we jointly train two branches f and g with different sampling strategies. The first branch f takes the uniform sampling distribution, where each instance has the same sampling probability 1/N. The second branch g adopts the class-rebalanced sampling and each class achieves an equal probability 1/C of being selected. Both f and g are shared with the same backbone Φ.
We rely on the output from the network to rectify noisy labels.
In the multi-label classification problem, each label only has two states: 1 indicates the existence of the class and 0 otherwise. Therefore, we can perform label correction separately. Pseudo Labeling: We convert the probability q_ik produced by the network f(or g) into pseudo-label ŷ_ik (Eq. <ref>) based on the following rules: if the probability q_ik is extremely high or low, which is above or below some certain threshold (α or β), we trust the confidence from the network. Otherwise, we keep the original noisy label unchanged. We define the whole process as follows:
ŷ_ik = 1, if q_ik > α,
0, if q_ik < β,
ỹ_ik, otherwise.
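In code, this correction rule amounts to a simple element-wise thresholding, sketched below with placeholder threshold values.

```python
# Sketch of the pseudo-labeling rule: trust the peer branch's sigmoid output
# when it is very confident, otherwise keep the original noisy label.
import torch

def correct_labels(q, y_noisy, alpha=0.8, beta=0.2):
    y_hat = y_noisy.clone().float()
    y_hat[q > alpha] = 1.0
    y_hat[q < beta] = 0.0
    return y_hat
```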
When noisy labels are corrected, we use the generated pseudo-labels to teach the learning of the other branch. Here, we take the random sampling branch as an example. For the training example (𝐱, ỹ) sampled from uniform distribution, we feed it into two branches. Then, given the output g(Φ(𝐱)) from the class-rebalanced branch and the noisy label ỹ, we perform label correction and obtain the pseudo-label ŷ. The pseudo-label is subsequently used to directly guide the training procedure of the random branch f.
Co-Learning benefits from the heterogeneous structure between long-tailed and balanced distributions. The inconsistency helps label correction, which could potentially improve the robustness of the learned model.
Relations to Co-Learning based approaches. We compare our proposed method with other Co-Learning based approaches. Although our method is motivated by other Co-Learning idea <cit.>, there are fundamental differences. Our Heterogeneous Co-Learning is designed to combat label noise in long-tailed multi-label data. First, to tackle the long-tailed distribution problem, we leverage different sampling priors and loss functions for two branches where random sampling prefers head classes and class re-balanced sampling tends to handle the tail better. In contrast, other Co-Learning based approaches <cit.> use different network initialization to help filter out label noise without the consideration of the long-tailed issue. They keep the sampling strategy and loss function the same for the two branches. Second, we exploit the predictions from the network to correct noisy multi-labels directly. Instead, other Co-Learning based approaches <cit.> use the small-loss criterion to select clean data. The selected clean data are utilized to train the network. These approaches are designed for single-label noise. When encountered with multi-label noise, some data might have partially correct labels. It is hard to select totally clean data. Label correction is a more effective way to handle multi-label noise.
§.§ Overall framework
We illustrate our Heterogeneous Co-Learning framework equipped with Stitch-Up. We take K = 2 for example. The full algorithm is described in Algorithm <ref>. The overall framework is presented in Figure <ref>. Given two subsets (𝐗_1, 𝐘_1) and (𝐗_1', 𝐘_1') by random sampling and class re-balanced sampling from 𝒟_train respectively, we perform Sample Selection (Section <ref>) for each training sample in 𝐗_1 and 𝐗_1' to obtain (𝐗_2, 𝐘_2) and (𝐗_2', 𝐘_2') for Stitch-Up. For each (𝐱_1, 𝐱_2) ∈ (𝐗_1, 𝐗_2) with the noisy multi-label (ỹ_1, ỹ_2) ∈ (𝐘_1, 𝐘_2), we feed them into random branch and take Stitch-Up Synthesis (Section <ref>) at the feature level:
𝐟̅ = (f_1(Φ(𝐱_1)) + f_1(Φ(𝐱_2)))/2.
Then, logit 𝐳̅ is generated from f_2:
𝐳̅ = f_2(𝐟̅) .
On the other hand, we cross input (𝐱_1, 𝐱_2) into the class-rebalanced branch to generate the label. First, we get two probabilities 𝐪_1 and 𝐪_2 from the branch g as follows:
𝐪_1 = σ(g_2(g_1(Φ(𝐱_1)))),
𝐪_2 = σ(g_2(g_1(Φ(𝐱_2)))),
where σ denotes the sigmoid activation function. Second, each probability 𝐪_1 and 𝐪_2 is used to correct the corresponding original noisy label ỹ_1 and ỹ_2 via Pseudo Labeling (see Eq. <ref>). As a result, we obtain the pseudo labels ŷ_1 and ŷ_2. We produce the synthesized training label 𝐲̅ by Label Stitch-Up:
𝐲̅ = ŷ_1 ∪ŷ_2.
In the end, the logit and the newly synthesized training label are fed into the loss function for the optimization of the network. For the class-rebalanced branch, we take a similar approach and obtain the logit 𝐳̅' and the label 𝐲̅'.
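Putting the pieces together, one training step of the random-sampling branch can be sketched as below, reusing the helper sketches above; phi, f1, f2 and g denote the shared backbone and the branch heads, and the class-rebalanced branch is handled symmetrically.

```python
# Condensed sketch of one random-branch update with Stitch-Up and Co-Learning.
# The peer (class-rebalanced) branch g produces the corrected pseudo-labels; its
# own update mirrors this function with the roles of f and g exchanged.
import torch

def random_branch_step(x1, x2, y1, y2, phi, f1, f2, g, bce_loss):
    h1, h2 = phi(x1), phi(x2)
    f_bar, _ = stitch_up_features(f1(h1), f1(h2), y1, y2)   # feature-level Stitch-Up
    z_bar = f2(f_bar)                                       # logits of the random branch
    with torch.no_grad():                                   # peer branch corrects labels
        y1_hat = correct_labels(torch.sigmoid(g(h1)), y1)
        y2_hat = correct_labels(torch.sigmoid(g(h2)), y2)
    y_bar = torch.clamp(y1_hat + y2_hat, max=1)             # Label Stitch-Up
    return bce_loss(z_bar, y_bar)
```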
§.§ Loss function
The common approach to multi-label problems is to use the binary cross-entropy (BCE) loss, which casts the multi-label classification as several binary classifications. We define the BCE loss as follows:
ℒ_BCE(𝐳_i, 𝐲_i) = -1/C ∑_k=1^C ( y_ik log(σ(z_ik)) + (1-y_ik) log(1-σ(z_ik)) ),
where 𝐳_i denotes the logit, 𝐲_i is the multi-label and σ is the sigmoid function.
However, the vanilla BCE loss fails to work when the training dataset also exhibits the long-tailed distribution <cit.>. Considering the label co-occurrence and negative classes dominance issues in the multi-label long-tailed recognition, Wu <cit.> proposed Distribution-Balanced (DB) loss under class-rebalanced sampling. Given the logit 𝐳_i and the multi-label 𝐲_i, this loss is formulated as:
ℒ_DB(𝐳_i, 𝐲_i) = -1/C ∑_k=1^C r̂_ik ( y_ik log(σ(z_ik-ν_k)) + (1/λ)(1-y_ik) log(1-σ(λ(z_ik-ν_k))) ),
where
r̂_ik = θ + 1/(1+exp(-ϕ×(r_ik-μ))),
r_ik = (1/N_k) / ∑_{j: y_ij=1} (1/N_j),
ν_k = κ log(1/p_k-1).
Herein, N_k denotes the number of instances in the class k, r_ik is the re-balancing weight. ν_k represents the class-specific bias, p_k = N_k/N is the class prior, σ is the sigmoid function and λ, θ, ϕ, μ, κ are hyper-parameters.
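A direct transcription of this loss (without the additional Focal term) is sketched below; the hyper-parameter defaults are placeholders rather than the tuned values used in the experiments.

```python
# Sketch of the Distribution-Balanced loss as written above.  class_freq is a
# float tensor holding N_k (training samples per class); n_total is the number
# of training images, so p_k = N_k / n_total.
import torch
import torch.nn.functional as F

def db_loss(z, y, class_freq, n_total, lam=5.0, theta=0.1, phi=6.0, mu=0.2, kappa=0.05):
    p_k = class_freq / n_total
    nu = kappa * torch.log(1.0 / p_k - 1.0)                    # class-specific bias nu_k
    inv = 1.0 / class_freq
    denom = (y * inv.unsqueeze(0)).sum(dim=1, keepdim=True).clamp(min=1e-12)
    r = inv.unsqueeze(0) / denom                               # re-balancing weight r_ik
    r_hat = theta + 1.0 / (1.0 + torch.exp(-phi * (r - mu)))   # smoothed weight
    pos = y * F.logsigmoid(z - nu)
    neg = (1.0 - y) / lam * F.logsigmoid(-lam * (z - nu))      # 1 - sigmoid(x) = sigmoid(-x)
    return -(r_hat * (pos + neg)).mean()
```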
Our two branches f and g are optimized by the different loss functions. For random branch f, we use the BCE loss while DB with Focal Loss<cit.> is applied to the class re-balanced branch g. The overall loss is summarized as:
ℒ = 1/|𝐗_1|∑_i=1^|𝐗_1|ℒ_BCE(𝐳̅_i, 𝐲̅_i) + 1/|𝐗_1'|∑_i=1^|𝐗_1'|ℒ_DB-Focal(𝐳̅_i', 𝐲̅_i').
This inconsistency also prevents Co-Learning from degenerating to Self-Training and helps noisy label correction.
§.§ Inference
To evaluate the test data, we ensemble the outputs from two branches. For an unseen image 𝐱, we obtain the output 𝐳:
𝐳 = τ f(Φ(𝐱)) + (1-τ) g(Φ(𝐱)),
where τ is a balanced factor.
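In code, the ensemble is a single convex combination of the two branch outputs, with τ = 0.1 used in our experiments.

```python
# Test-time ensemble of the two branches with balancing factor tau.
def ensemble_logits(f_out, g_out, tau=0.1):
    return tau * f_out + (1.0 - tau) * g_out
```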
§ EXPERIMENTS
§.§ Datasets
We evaluate the effectiveness of our method on two synthetic benchmark datasets: VOC-MLT-Noise and COCO-MLT-Noise. These datasets are artificially derived from VOC-MLT and COCO-MLT <cit.>, which are proposed for the evaluation of multi-label long-tailed image recognition. Since the original datasets are clean, we need to synthesize noisy labels. The details about noisy labels generation and the corrupted datasets are introduced as follows.
§.§.§ Noisy labels generation
Referring to the conventional ways to generate label noise in the single-label settings <cit.>, we flip the original clean labels by using the noise transition matrix. In the context of the multi-label long-tailed problem, we take the label co-occurrence and imbalanced distribution into consideration. We define the noise transition matrix T_ij as the probability of being flipped to noisy label j when given an instance with a clean label i. Formally, assume that noise rate γ∈ [0, 1], the noise transition matrix can be expressed as:
T_ij(X) = ℙ(Y̅=j | Y=i, X=x) =
1-γ, if j = i,
(N_ij / ∑_{k ≠ i} N_ik) γ, if j ≠ i,
where X is the training sample, Y and Y̅ represent the original clean label and the generated noisy label, respectively. Here, N_ij denotes the number of instances in frequency that the label i and label j co-occur in the dataset. Note that our construction does not care about the label combinations that can rarely appear in the same image, e.g. airplane and cow.
In the experiments, we investigate the robustness of our method under the noise rate γ∈{0.3, 0.5, 0.7, 0.9}.
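For reproducibility, the flipping procedure can be sketched as follows; `cooc` is the class co-occurrence count matrix computed from the clean training labels, and implementation details such as the random generator are placeholders.

```python
# Sketch of co-occurrence-aware label corruption: each clean positive label is
# kept with probability 1 - gamma, otherwise flipped to a co-occurring class j
# with probability N_ij / sum_{k != i} N_ik.
import numpy as np

def corrupt_multilabel(y, cooc, gamma, rng):
    y_noisy = np.zeros_like(y)
    for i in np.flatnonzero(y):
        if rng.random() < 1.0 - gamma:
            y_noisy[i] = 1                       # keep the clean label
        else:
            w = cooc[i].astype(float).copy()
            w[i] = 0.0
            j = rng.choice(len(w), p=w / w.sum())
            y_noisy[j] = 1                       # flip to a co-occurring label
    return y_noisy
```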
§.§.§ VOC-MLT-Noise
This dataset is extended from VOC-MLT dataset <cit.> by noisy labels generation. The original clean long-tailed dataset is sampled from VOC 2012 train-val set by pareto distribution. The training dataset consists of 1,142 images and 20 classes with a range from 4 to 775 images. Note that the label distribution can be slightly shifted after introducing label noise. We perform the evaluation on VOC2007 clean test set with 4,952 images.
§.§.§ COCO-MLT-Noise
COCO-MLT-Noise is constructed from COCO-MLT <cit.> in a similar way. This dataset is based on MS COCO-2017. There are 4,783 images from 80 classes in the training set.
The maximum training samples per class of the original dataset is 1,356 and the minimum is 6. The test set is from COCO-2017 with 5,000 clean images.
§.§ Implementation Details
§.§.§ Training details
In our experiments, we use ResNet-50 pretrained on ImageNet as the backbone. The input images are randomly cropped and resized to 224× 224 with standard augmentation. The batch size is 32 for the random sampling branch and 256 for the class re-balanced sampling branch. We use SGD with a momentum of 0.9 and a weight decay of 0.0001 for optimization. We use linear warm-up for the first 100 iterations with a ratio of 1/3. The total number of training epochs is 8 and the initial learning rate is cross-validated in {0.02, 0.08, 0.14, 0.2}, which decays by a factor of 10 after 5 and 7 epochs. We follow the same DB-Focal loss configuration as <cit.>. We use K=2 images for Stitch-Up augmentation with a probability of p=1.0. The hyperparameters α and β are cross-validated in {0.7, 0.8, 0.9} and {0.1, 0.2, 0.3, 0.4}, respectively. The balanced factor τ for evaluation is 0.1.
§.§.§ Evaluation metric
Following <cit.>, we adopt the mean average precision (mAP) to measure performance. We report the average mAP and the 95% confidence interval over 5 trials for all classes and for three subsets: head, medium and tail classes. Head classes have more than 100 samples, medium classes contain 20-100 samples, and those with fewer than 20 samples are classified as tail classes. Besides, we also report the mAP of each branch to observe the impact of our method.
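Concretely, the reported numbers can be computed per class and then averaged over the frequency-defined splits, as in the sketch below (using scikit-learn's average precision; the split thresholds follow the definition above).

```python
# Sketch of the evaluation protocol: class-wise AP on the clean test set, then
# mean AP overall and over the head (>100), medium (20-100) and tail (<20) splits.
import numpy as np
from sklearn.metrics import average_precision_score

def subset_map(y_true, scores, train_counts):
    ap = np.array([average_precision_score(y_true[:, k], scores[:, k])
                   for k in range(y_true.shape[1])])
    head = ap[train_counts > 100].mean()
    medium = ap[(train_counts >= 20) & (train_counts <= 100)].mean()
    tail = ap[train_counts < 20].mean()
    return ap.mean(), head, medium, tail
```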
§.§.§ Baseline settings
We compare our method with several baselines: (1) Empirical Risk Minimization (ERM): This approach treats all the training instances with the same sampling probabilities and the same weights. We use the random sampling strategy and the BCE loss in the experiment. (2) Focal Loss <cit.>: This loss is proposed to solve the class-imbalance problem. We set both the focusing parameter and the weighting factor to 2. (3) Re-Sampling (RS) <cit.>: We apply the class-rebalanced sampling with the vanilla BCE loss. (4) RS-Focal <cit.>: This is the combination of the class-rebalanced sampling strategy and focal loss. (5) Label Distribution Aware Margin loss (LDAM) <cit.>: This margin-based loss encourages each class has the optimal margin. We extend the original softmax-based implementation to BCE-based one for multi-label classification. (6) Bilateral-Branch Network (BBN) <cit.>: Similar to our method, this framework also inherits the two-branch structure with uniform and reversed samplers. This method considers the single-label long-tailed case without label noise. We make some modifications so that it can be adapted in the multi-label setting. (7) ML-GCN <cit.>: A graph-based framework for multi-label image classification. (8) DivideMix <cit.>: A two-branch framework combines the sophisticated semi-supervised technique and sample selection to deal with single-label noise. We replace the sampling strategy and loss function for long-tailed and multi-label classification. Balanced sampling and BCE loss are used. (9) Distribution-Balanced loss (DB) <cit.>: A recently proposed loss to solve multi-label classification in long-tailed datasets. (10) DB-Focal <cit.>: Compared to DB <cit.>, Focal Loss <cit.> is further applied.
§.§ Results
Baseline methods are mostly based on one branch with random sampling or class-rebalanced sampling except BBN <cit.> and DivideMix <cit.>. To verify the effectiveness of our proposed method, we conduct extensive experiments on our two synthetic benchmarks, VOC-MLT-Noise and COCO-MLT-Noise respectively, under the noise rate γ∈{0.3, 0.5, 0.7, 0.9}. We report the total mAP in Table <ref>.
First of all, we observe that with more training samples containing noisy labels, the performance is worse for all methods on both datasets. For VOC-MLT-Noise, the best total mAP under noise rates γ of 0.3 and 0.9 is 76.48% and 34.41%, respectively; the relative gap is around 42%. For COCO-MLT-Noise, the best total mAP under the noise rate γ of 0.3 is 21% better than that under the noise rate γ of 0.9. These results indicate that label noise can significantly hinder the learning of robust models in the multi-label and long-tailed setting. Second, DB and DB-Focal <cit.> are better than the other baseline approaches since they consider both the multi-label and long-tailed distribution. Especially when the noise rate is low (γ = 0.3), they still show fairly robust performance (73.75% and 72.87%). Third, compared to several baseline methods, our method gains significant improvements in mAP by reducing label noise. The total mAP of our proposed method on VOC-MLT-Noise at the noise rates γ∈{0.3, 0.5, 0.7, 0.9} is 76.48%, 69.10%, 62.29% and 34.41%, respectively, and the performance gap relative to the state-of-the-art baseline is approximately +2.7%, +5.5%, +7.2% and +5.3%. For COCO-MLT-Noise, the improvement is about +1.7%, +2.0%, +2.8% and +3.2%, respectively. The lower confidence intervals also suggest that our method is more stable. When we take a closer look at the independent evaluation results of the two branches, we find that the class-rebalanced branch plays a more important role in the recognition. Furthermore, the mAP results on VOC-MLT-Noise for the three subsets are presented in Table <ref>. We find that the performance improves for head, medium and tail classes alike. The recognition ability on the tail classes is notably enhanced for the class-rebalanced branch, which outperforms DB <cit.> by 3.2% at the noise rate γ of 0.5. Meanwhile, the random branch achieves consistently better performance on the head classes. When the noise rate γ is 0.5, our random branch reaches 65.39% mAP for the head classes, which is 3.4% better than BBN <cit.>. Compared to DivideMix <cit.>, our proposed method achieves about 9.8% improvement on VOC-MLT-Noise under the noise rate of 0.5. Although DivideMix is designed to deal with label noise, it does not consider the long-tailed and multi-label issues. Table <ref> shows the head, medium and tail performance on COCO-MLT-Noise. We notice that our method achieves 56.78% (+3.2%) mAP for the head classes and 45.17% (+1.7%) mAP for the tail classes. These results confirm the superiority of our proposed method.
§.§ Ablation Study
In this subsection, we conduct several ablation studies: (1) Ablation study on the two components: Stitch-Up and Co-Learning; (2) How to apply Stitch-Up augmentation? (3) Comparison against Mix-Up augmentation; (4) Effect of the sampling strategy for Stitch-Up; (5) Effect of K images used in Stitch-Up augmentation; (6) Effect of the probability p to apply Stitch-Up augmentation; (7) Stitch-Up brings more noisy labels? (8) Pseudo labels from Co-Learning or Self-Training? (9) Effect of different sampling priors. (10) Running time analysis. All results are reported with the mAP on VOC-MLT-Noise under the noise rate γ of 0.5.
§.§.§ Ablation analysis
To further understand our proposed method, we first establish a stronger baseline with a two-branch structure based on DB <cit.>. We adopt random and balanced samplers for the two branches, and the loss functions remain the same as ours. This brings around a 3.25% mAP improvement upon DB <cit.>. Then, based on this strong baseline, we perform the ablation analysis on the Stitch-Up augmentation and Heterogeneous Co-Learning, named Stitch-Up and Pseudo Labeling, respectively. The results can be found in Table <ref>. As we can see, both Stitch-Up augmentation and Co-Learning improve the model's performance. Stitch-Up augmentation achieves a total mAP of 68.53% (+1.69%), which suggests that training with the synthesized, less noisy samples relieves the effect of label noise. However, the augmentation can also impair the evaluation performance on head classes, and we notice that the improvement primarily comes from the medium and tail classes. We attribute this to Stitch-Up affecting the effective sampling distribution. For the Pseudo Labeling component, the test result shows that the total mAP (68.46%) is significantly improved. Compared to the baseline without any additional modules, Co-Learning brings an improvement on the head classes, which is complementary to the Stitch-Up augmentation. Finally, we obtain the best total mAP of 69.10%, which is better than employing Stitch-Up or Co-Learning alone. It indicates that these two mechanisms collaboratively foster learning with less noisy training samples.
§.§.§ How to apply Stitch-Up augmentation?
As discussed in Section <ref>, Stitch-Up augmentation can be implemented in various forms. We explore three regular types in deep learning based methods: input images concatenation, features concatenation and features average. Table <ref> shows that features average reaches the best result (68.53%). We notice that all three types of Stitch-Up augmentation perform better than the baseline without Stitch-Up. Note that our Stitch-Up augmentation is easy to implement.
§.§.§ Comparison against Mix-Up augmentation
Our Stitch-Up shares similarities with Mix-Up <cit.>, which is widely used in addressing label noise<cit.>. We are curious whether Stitch-Up can outperform Mix-Up in the multi-label long-tailed problem with label noise. Following <cit.>, Mix-Up samples the interpolation parameter from a Beta distribution. For a fair comparison, our Stitch-Up is applied at the image level. Table <ref> shows that our Stitch-Up achieves better results (68.08%) than Mix-Up (63.61%). It is seen that Mix-Up performs better on head classes while its performance on medium and tail classes drops. We hypothesize that medium and tail classes are affected more severely by multi-label noise. As discussed in Section <ref>, Mix-Up leads to the loss of information. Because samples of medium and tail classes are limited and often co-exist with head classes, the effect on medium and tail classes is amplified. This makes learning on medium and tail classes harder; instead, the model focuses more on the optimization of head classes and therefore recognizes head classes better.
§.§.§ Effect of the sampling strategy for Stitch-Up
We investigate the effect of the sampling strategy for Stitch-Up. For random and balanced sampling, we compare our Stitch-Up with the baseline respectively. The results are reported in Table <ref>. We observe that Stitch-Up augmentation can enhance the performance no matter what sampling strategy is used. The improvement mostly benefits from the medium and tail classes while sacrificing the head classes.
§.§.§ Effect of K images used in Stitch-Up augmentation
Intuitively, stitching up too many images provides no benefit and can even hurt performance. If we stitch up the whole dataset, it is highly likely that all object classes appear in the generated image. Such an easy training sample might force the network to learn less useful representations. We perform Stitch-Up augmentation on the two-branch baseline without Co-Learning and conduct a series of experiments to investigate the effect of different numbers of images (K) when employing Stitch-Up augmentation. Here, we conduct the ablation study on K=2, 3, 4, 5. The results with p=1.0 fixed are presented in Figure <ref>. We have two major observations. First, the total mAP gets worse (67.06%, 66.52%) when K is large (K=4, 5), which is consistent with our conjecture. Second, we find that the performance for head classes drops considerably while tail classes are less affected. In the experiments, we choose K=2.
§.§.§ Effect of the probability p to apply Stitch-Up augmentation
We study the influence of the probability with which this augmentation is applied to the training samples. The experiments are conducted on the two-branch Stitch-Up augmented baseline without Co-Learning, with K=2 fixed. We evaluate our Stitch-Up under p ∈{0.0, 0.3, 0.5, 0.8, 1.0}. As shown in Figure <ref>, p=1.0 achieves the overall best performance of 68.53%, which is significantly better than the case p=0.0 (66.84%) in which no Stitch-Up augmentation is performed. Meanwhile, we observe that the Stitch-Up augmentation yields consistent improvement with increasing probability p. This confirms that our Stitch-Up augmentation can relieve label noise by synthesizing cleaner training samples. As a result, we set p=1.0 in the experiments.
§.§.§ Stitch-Up brings more noisy labels?
This can occur when we stitch up two images, neither of which contains an object of some class, but one of which is annotated with a noisy label for that class. Due to the feature-average option, the gradients are back-propagated for both images, which can misguide the direction of optimization. However, we find that in practice the positive effect of Stitch-Up (reducing label noise) outweighs its negative effect (introducing label noise). We calculate the ratio between the amount of label noise reduced and the amount of label noise introduced in one epoch on VOC-MLT-Noise; the ratio is around 2.34. This indicates that the overall effect of Stitch-Up is to reduce label noise.
§.§.§ Pseudo labels from Co-Learning or Self-Training?
As self-training is also a promising approach to solving the noisy label learning problem<cit.>, can Co-Learning perform better than Self-Training? We compare the performance between Co-Learning and Self-Training in the multi-label long-tailed setting when introduced label noise. We evaluate the two strategies under different sampling distributions. Note that we conduct a hyperparameter search for Self-Training. In Table <ref>, we refer “cross" to Co-Learning and “self" to Self-Training. It is observed that Co-Learning achieves better total mAP performance no matter what sampling policies we use. Especially when training with random and balanced sampling simultaneously, the gap between Co-Learning (69.75%) and Self-Training (67.70%) can be expanded to 2%. When we look at the results of each branch, Co-Learning achieves superior performance for both branches. We argue that Self-Training is prone to accumulate errors. When training with noisy labels under the long-tailed distribution, the negative impact is extremely amplified. Co-Learning leverages the inconsistency between sampling to avoid the confirmation bias with wrong noisy labels so that we can learn a robust model effectively.
§.§.§ Effect of different sampling priors
We claim that the inconsistency that comes from different sampling priors can help us correct label noise. To further substantiate our hypothesis, we investigate various combinations of random sampling and balanced sampling, including random+balanced, random+random and balanced+balanced. As shown in Table <ref>, random sampling with balanced sampling can bring about 2% performance gain while the same sampling priors for two branches can slightly improve upon baseline. It demonstrates that disagreement under different sampling priors is key to the success of Co-Learning in challenging scenarios with both label noise and long-tailed distribution.
§.§.§ Running time analysis
We compare the total inference time of DB <cit.> and our proposed method on the VOC2007 clean test set in Table <ref>. We report the average running time over 5 trials on the whole test set with a batch size of 1. The experiment is conducted on a single Nvidia RTX2080Ti GPU. Our method is slower than DB due to Co-Learning: the overhead mainly comes from the two-branch ensemble architecture, which introduces an extra forward pass.
§.§ Qualitative Evaluation
§.§.§ Lower noise level after Stitch-Up
We define the noise level as the real noise rate of the training data, different from the original noise rate γ of the training dataset. As we discussed in Section <ref>, Stitch-Up can synthesize cleaner training samples, resulting in a lower noise level of our training data. To verify our motivation, we visualize the change in the noise level with or without Stitch-Up during the training stage. The result on VOC-MLT-Noise and COCO-MLT-Noise is presented in Figure <ref> and Figure <ref>, respectively. It is clear to see the noise level of the training data is decreasing after we perform Stitch-Up augmentation no matter what sampling strategies we use. This demonstrates that the improvement of the performance benefits from cleaner samples introduced by Stitch-Up. We also notice that the noise level keeps steady during the whole training procedure. The noise level of the head class is reduced more compared to the medium and tail class. This is possibly due to the influence of the long-tailed distribution.
§.§.§ Visualization of pseudo labels from Co-Learning
We show several training images with their pseudo labels in Figure <ref> for an intuitive illustration of our Co-Learning. We observe that our model can assign relatively high scores to those labels that might occur in the given images and low scores to those absent labels. For example, the car class does not exist in the first example at the top. And the chair class and dining table class are missing in the annotation. Through the label correction from Co-Learning, we rectify the noisy labels by adding the chair and dining table classes and removing the car class. The generated pseudo labels are closer to the clean labels. Therefore, Co-Learning can facilitate the learning of the model with cleaner labels.
§ CONCLUSION
In this paper, we address multi-label long-tailed visual recognition with noisy labels. Training with noisy labels can hinder the development of a robust model. Considering inherent properties of multi-label classification and long-tailed learning under noisy cases, we propose a Heterogeneous Co-Learning framework equipped with a novel Stitch-Up augmentation to mitigate the impact of label noise. Through extensive experiments on two synthetic noisy datasets, VOC-MLT-Noise and COCO-MLT-Noise, we show that our method achieves substantial improvements over a variety of baselines.
IEEEtran
|
http://arxiv.org/abs/2307.02868v2
|
20230706091125
|
High-speed photon correlation monitoring of amplified quantum noise by chaos using deep-learning balanced homodyne detection
|
[
"Yanqiang Guo",
"Zinan Hu",
"Jianchao Zhang",
"Chenyu Zhu",
"Xiaomin Guo"
] |
quant-ph
|
[
"quant-ph",
"nlin.CD",
"physics.optics"
] | |
http://arxiv.org/abs/2307.01159v1
|
20230703171019
|
Soft Gripping: Specifying for Trustworthiness
|
[
"Dhaminda B. Abeywickrama",
"Nguyen Hao Le",
"Greg Chance",
"Peter D. Winter",
"Arianna Manzini",
"Alix J. Partridge",
"Jonathan Ives",
"John Downer",
"Graham Deacon",
"Jonathan Rossiter",
"Kerstin Eder",
"Shane Windsor"
] |
cs.RO
|
[
"cs.RO",
"cs.AI",
"D.2.1; I.2.9"
] |
Soft robotics is an emerging technology in which engineers create flexible devices for use in a variety of applications. In order to advance the wide adoption of soft robots, ensuring their trustworthiness is essential; if soft robots are not trusted, they will not be used to their full potential. In order to demonstrate trustworthiness, a specification needs to be formulated to define what is trustworthy. However, even for soft robotic grippers, one of the most mature areas in soft robotics, the soft robotics community has so far given very little attention to formulating specifications. In this work, we discuss the importance of developing specifications during the development of soft robotic systems, and present an extensive example specification for a soft gripper for pick-and-place tasks for grocery items. The proposed specification covers both functional and non-functional requirements, such as reliability, safety, adaptability, predictability, ethics, and regulations. We also highlight the need to promote verifiability as a first-class objective in the design of a soft gripper.
§ INTRODUCTION
Soft robotics is an emerging technology in which engineers create flexible devices for use in a variety of applications, such as surgery, prosthetics, space exploration, and grocery picking.
In order to advance the wide adoption of soft robots, ensuring their trustworthiness is essential.
Trust can be earned and lost over time, and it is defined in different ways by different research disciplines; by comparison, something is trustworthy when it deserves trust.
Autonomous systems (AS) (e.g. soft robotic systems) can be trustworthy when their design, engineering, and operation generate positive outcomes and mitigate harmful outcomes <cit.>.
The trustworthiness of AS depends on many factors like (i) robustness in uncertain and dynamic environments; (ii) accountability, explainability, and understandability to different users; (iii) assurance of their design and operation through verification and validation (V&V) activities; (iv) confidence to adapt their functionality; (v) security to counter attacks; (vi) ethics and human values in their use and deployment; and (vii) governance and regulation of their design and operation <cit.>.
Different techniques can be used to demonstrate the trustworthiness of a system, such as formal verification at design-time, monitoring and runtime verification, synthesis, and test-based methods <cit.>.
However, common to all these techniques is the need to formulate specifications.
A specification is a detailed formulation that provides “a definitive description of a system for the purpose of developing or validating the system” <cit.>.
Soft robotic grippers for manipulating objects are considered to be one of the most mature areas in soft robotics.
However, even for this relatively well advanced application area the soft robotics community has so far given very little attention to formulating specifications (e.g. <cit.>). Creating a specification as part of a development process is an important step in which the requirements for the system are agreed and defined precisely.
The specification then provides a benchmark against which to assess different design options. A well-defined specification also provides a set of criteria against which to verify the performance of the system during design and in operation.
In addition to formulating a well-defined specification, a key technique which can be explored to improve the trustworthiness of a soft robotic gripper is verifiability.
Verifiability considers verification as an integral part of the system specification and the system design <cit.>, where it can be promoted to a primary system design objective <cit.>.
Verifiability will essentially give rise to systems, which by their construction, deserve our trust <cit.>.
The main contributions of this paper are as follows:
* We provide an extensive example specification to ensure the trustworthiness of a soft robotic gripper <cit.> with both functional and non-functional requirements, such as reliability, safety, adaptability, predictability, ethics, and regulations.
* We highlight the importance of promoting verifiability as a first-class objective in the design of a soft robotic gripper. Also, we provide illustrations of how to formulate verifiable requirements for soft grippers.
We explore this novel topic using a case study of pick-and-place tasks of grocery items <cit.> involving a recycled soft gripper <cit.>.
The rest of the paper is organised as follows.
In Section <ref>, we provide background information to this work, key related works, and a brief description of the case study.
Section <ref> provides a detailed specification of a soft gripper, and in Section <ref>, we highlight the significance of promoting verifiability.
Finally, Section <ref> concludes the paper.
§ BACKGROUND, RELATED WORK AND CASE STUDY
§.§ Background
§.§.§ Recycled Soft Gripper
The soft robotic gripper used in this work is a two-finger fluidic elastomer actuator, measuring 12 x 134 x 6 mm <cit.> (see Fig.<ref>). The gripper is fabricated using a two-part moulding process with a fabric-constraining layer and comprises a mix of 70% pristine EcoFlex 00-30 silicone elastomer and 30% recycled EcoFlex 00-30 granules that are 1 mm to 2 mm in size.
The operating procedure for the gripper is simple. To actuate the gripper, a positive pressure is introduced to the system that acts to inflate chambers within the gripper body. As the chambers expand, the constraining layer on the base of the gripper prevents expansion of the base of each chamber, while the top of each chamber is free to expand. This differential expansion of top and bottom results in a curving of each chamber, which then leads to an overall curling of the gripper fingers to grasp an object. When vented, this pressure is removed, and the gripper fingers return to a flattened state.
§.§.§ Standards
Although no direct industry standards have been defined for soft grippers so far, there are several standards in the area of rigid robotics that provide: (i) a set of terminology and definitions for robotic grippers (ISO 14539:2000) and hands <cit.>; and (ii) guidance related to the safe spaces, speeds, and forces that a gripping system needs to function (ISO 10218-1/10218-2, and ISO/TS 15066).
§.§ Related Work
Most existing approaches (e.g. <cit.>) only describe some technical requirements and parameters of the end-effector to drive its physical design, such as its dimensions, weight, and material properties.
For example, Netzev et al. <cit.> list several technical requirements that are essential for achieving the desired result of their grasping method, such as dimensions and heights of the largest and smallest objects which can be gripped, maximum object weight, and gripping force.
Also, most specifications provided for soft grippers only describe their functional requirements (e.g. <cit.>), and only a few approaches describe non-functional requirements, such as adaptability and performance (e.g. <cit.>).
For example, Shi et al. <cit.> summarise functional requirements to characterise soft robots with respect to their force, dynamics, and stiffness identification.
Cheng et al. <cit.> conduct a series of static and dynamic gripping tests that use different forces and fingertip displacements, and demonstrate enhancements to adaptability and performance of their three-finger soft-rigid gripper.
However, these works do not cover a wide range of properties which affect the trustworthiness of a soft gripper, as proposed in this work.
§.§ Case Study: Pick-and-Place Tasks of Grocery Items
Let us consider an automated warehouse, where items are picked from storage crates using a soft robotic gripper and are then placed in delivery crates for delivery.
An example grocery use case is that of Ocado, which is considered the world's largest online-only supermarket <cit.>.
A range of uncertainties make automation of this process challenging: (i) items can vary in their shape, size, packaging, and orientation; (ii) some items are fragile or deformable; (iii) geometrically constrained and relatively cluttered operating environment <cit.>; (iv) manufacturing inconsistencies (low tolerances) in the gripper; and (v) elastic nature of the materials used in the gripper, which can lead to performance degradation.
In this case study, we consider four classes of items: (i) soft-fragile items (e.g. cake, bread, strawberry, bayberry), (ii) soft-non-fragile items (e.g. dish sponge), (iii) hard-fragile items (e.g. light bulb, egg), (iv) hard-non-fragile items (e.g. plastic spoon).
The objects being picked can be regular-shaped items (e.g. sphere, cube, cone, pyramid, cylinder) or irregular-shaped ones (e.g. strawberries). The robotic pick-and-place task can be structured into a pipeline of four main tasks: (i) pre-grasping, (ii) ascension (grasping), (iii) translation (transport), and (iv) descension (placement).
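For later requirement checks, the item classes and pipeline stages above map naturally onto simple data structures. The sketch below shows one possible encoding; the names are hypothetical and not part of the original case study.

from dataclasses import dataclass
from enum import Enum, auto

class Fragility(Enum):
    FRAGILE = auto()
    NON_FRAGILE = auto()

class Stiffness(Enum):
    SOFT = auto()
    HARD = auto()

class PipelineStage(Enum):
    PRE_GRASPING = auto()
    ASCENSION = auto()    # grasping
    TRANSLATION = auto()  # transport
    DESCENSION = auto()   # placement

@dataclass
class GroceryItem:
    name: str
    stiffness: Stiffness
    fragility: Fragility
    regular_shape: bool

# Example items from two of the four classes used in the case study.
strawberry = GroceryItem("strawberry", Stiffness.SOFT, Fragility.FRAGILE, regular_shape=False)
egg = GroceryItem("egg", Stiffness.HARD, Fragility.FRAGILE, regular_shape=True)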
§ SPECIFICATION OF A SOFT GRIPPER
In this section, we present an example of a wide-ranging specification with functional and non-functional properties which affect the trustworthiness of a soft gripper. We define these in terms of predictability, reliability, adaptability, safety, ethics, and regulations (see Fig.<ref>).
This example specification was developed through consultation with requirements engineers, soft robotic developers, industrial users, ethicists, and sociologists. It is an example of the aspects that may need to be specified for a soft gripper rather than an exhaustive specification for every aspect of the system.
The specification includes functional requirements – those that specify behaviour the system shall perform <cit.>; and non-functional requirements – those that specify not what the system will do but how it will do it (quality attributes) <cit.>.
The engineering requirements (iii a–d) are formulated as `shall' statements following the guidance on writing good requirements <cit.>, and the ethics and the regulatory requirements (iii e–f) are discussed using key frameworks from ethics <cit.> and social science <cit.>.
Below we define the set of requirements {RQx} for a soft gripper across these six properties.
We define the boundaries to measure the success of gripping an item using the conventional bounds of 95% for success and 5% for failure.
Also, in the following, whenever a performance threshold is given, where possible we aim to provide a reference from a published work/experiment.
Predictability
Predictability is “a property of interaction concerning the degree of confidence with which a user can determine the effect subsequent task execution will have on the achievement of the goal" <cit.>. A soft gripper needs to be predictable in its behaviour, so it can build a degree of confidence and trust for the end-user.
* RQ1.1: The fingers of the gripper shall curve when inflated (pump turned on).
* RQ1.2: The fingers of the gripper shall straighten when deflated (pump turned off).
* RQ1.3: The curvature of a finger shall be proportional to its internal pressure.
* RQ1.4: The gripper shall grasp, transport, and place an item successfully with a repeatability of ≥95%.
* RQ1.5: The fingers shall be inflated with a pressure in the 3 to 4 psi range <cit.>.
* RQ1.6: The fingers shall be inflated with a flow rate in the 2 to 3.2 L/min range <cit.>.
Reliability
Reliability is described as the “ability of a system or component to perform its required functions under stated conditions for a specified period of time" <cit.>. A soft gripper needs to be reliable by not dropping or damaging items during pick-and-place tasks, and should be tolerant of graceful degradation of performance.
* RQ2.1: The gripper shall hold the item being gripped without damaging it.
* RQ2.2: The gripper shall hold the item being gripped without dropping it for at least 10 seconds 95% of the time (<cit.>, p. 652).
* RQ2.3: The gripper shall successfully maintain grasp during the translation of the gripped item for a maximum velocity and acceleration of 0.03 m/s and 0.15 m/s^2 (<cit.>; <cit.>).
* RQ2.4: The gripper shall successfully grasp when the rate of inflation is in the range of 2-3.2 L/min <cit.>.
* RQ2.5: The gripper shall experience ≤ 5% increase in the dropping of an item across 100 hours of operation (graceful degradation).
* RQ2.6: The gripper shall experience ≤ 5% increase in the damaging of an item across 100 hours of operation (graceful degradation).
Adaptability
Adaptability is the “degree to which a product or system can effectively and efficiently be adapted for different or evolving hardware, software or other operational or usage environments" <cit.>. In a soft gripper, adaptability is key to grasping objects of different shapes, sizes, masses, and positions or orientations.
* RQ3.1: The gripper shall hold items of different sizes up to a maximum of 95% of the opening width of the two fingers without dropping them for at least 10 seconds 95% of the time.
* RQ3.2: The gripper shall hold items of different shapes (e.g. sphere, cube, cone, pyramid, cylinder) without dropping them for at least 10 seconds 95% of the time.
* RQ3.3: The gripper shall hold items, which can be of regular or irregular shape (e.g. soft-fragile items like strawberry), without dropping them for at least 10 seconds 95% of the time.
* RQ3.4: The gripper shall hold an item independent of its orientation without dropping it for at least 10 seconds 95% of the time.
Safety
This is described as an “expectation that a system does not, under defined conditions, lead to a state in which human life, health, property, or the environment is endangered" <cit.>.
In this work, we consider safety from the perspective of the item being gripped, where physical damage should be avoided during pick-and-place tasks; a minimal check of the force and displacement thresholds specified below is sketched after the list.
* RQ4.1: The item being gripped shall be motionless (to minimise harm) before contact with the gripper.
* RQ4.2: The gripping system shall not collide with the item being gripped.
* RQ4.3: The gripping system shall only make contact with the item using the gripper.
* RQ4.4: When grasping a hard-fragile item (e.g. light bulb, egg), the soft actuator shall be inflated until the gripping force does not exceed 2 N (<cit.>).
* RQ4.5: When grasping a soft-fragile item like cake or bread, the soft actuator shall be inflated until the fingertip displacement does not exceed 3 mm (<cit.>).
* RQ4.6: When grasping a soft-fragile item like strawberry or raspberry, the soft actuator shall be inflated until the gripping force does not exceed 1 N and the fingertip displacement does not exceed 1 mm (<cit.>, p. 14).
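To make the force and displacement limits in RQ4.4-RQ4.6 machine-checkable, they can be expressed as a simple runtime check, as in the illustrative sketch below; the item-class labels and the sensor interface are assumptions, not part of the gripper system described here.

# Illustrative runtime check for RQ4.4-RQ4.6 (assumed sensor interface).
# Thresholds follow the specification: hard-fragile items <= 2 N;
# soft-fragile baked goods <= 3 mm fingertip displacement;
# soft-fragile berries <= 1 N and <= 1 mm.
LIMITS = {
    "hard_fragile": {"force_n": 2.0, "displacement_mm": float("inf")},
    "soft_fragile_baked": {"force_n": float("inf"), "displacement_mm": 3.0},
    "soft_fragile_berry": {"force_n": 1.0, "displacement_mm": 1.0},
}

def grasp_within_limits(item_class: str, force_n: float, displacement_mm: float) -> bool:
    """Return True if the measured grip stays within the class-dependent limits."""
    limits = LIMITS[item_class]
    return force_n <= limits["force_n"] and displacement_mm <= limits["displacement_mm"]

# Example: a 0.8 N grasp with 0.6 mm fingertip displacement on a strawberry passes.
assert grasp_within_limits("soft_fragile_berry", force_n=0.8, displacement_mm=0.6)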
Ethics
Human values inform the design and use of technologies, and the resulting distribution of benefits and burdens, and those values underpin system trustworthiness. Rather than describing whether people trust a technology, ethicists are interested in prescribing whether they should trust it. Below are some key ethical requirements for specifying for trustworthiness in a soft gripper in pick-and-place tasks.
* RQ5.1: The gripper shall
be environmentally sustainable.
* RQ5.2: The gripper shall avoid distressing human workers.
* RQ5.3: The gripper shall avoid exploiting humans' trust heuristics.
* RQ5.4: The gripper shall accommodate fair treatment of current human workers.
The use of a recycled gripper may make its manufacturers or the company deploying it look like they care for the environment and so appear more trustworthy, thus increasing public trust in them. However, such a system may not perform as well as, or may degrade more rapidly than, a system made of virgin material, thus making it more likely for items to be dropped or damaged. When the item being gripped is food, the risk of food waste increases, with detrimental implications for the environment. Unless the above reliability requirements are met by design, the use of a recycled gripper could be a form of `greenwashing' <cit.>.
Moreover, while technical requirements aim to prevent physical harm, specifying for trustworthiness should include considerations of potential psychological harm <cit.>. By replicating a human feature (fingers/hand) and behaviour (gripping) without being human-like enough to convince, a soft gripper could produce an uncanny valley effect <cit.> that distresses humans. User studies should identify gripper shapes that are acceptable to humans, whilst aiming for optimal gripping.
Simultaneously, the biomimetic features of soft grippers risk exploiting humans' trust heuristics, which make them trust things they are already familiar with <cit.>. As the use of soft robotics technologies can result in many unsafe conditions (e.g. material failure, crushing <cit.>), trust in the safety of a soft gripper could be misplaced if it was only based on a sense of familiarity with it. To reduce risks of harm, the robot area should be clearly separated from the space accessible to humans.
Regulations
To design successful AS we must formulate specifications from a sociotechnical perspective, that is not only regarding the technical features of the system as discussed in III a–d, but also social features of its development and use as fundamentally interrelated. So, in designing a soft gripper for pick-and-place tasks for grocery items, we need to be sensitive to sociotechnical requirements, where the analysis of human, social, cultural, and organisational dynamics can help us think about why and how failures happen with regards to AS. In this context, Macrae's <cit.> `SOTEC' (structural, organisational, technological, epistemic, and cultural) framework is a useful approach for identifying domains of sociotechnical risk in AS. The schema can help inform emergent regulation by identifying often-neglected risks. Below we outline each category and consider its application to the pick-and-place tasks of grocery items.
Structural sources of risk arise from interactions between human and nonhuman elements in a system.
In a pick-and-place task, such risks may arise from unanticipated disruptions, such as a human moving a grocery item, confusing the perception system. Regulatory requirements should ensure that the system anticipates and accommodates the complexities of human-machine interactions.
Organisational sources of risk emerge when organisational structures, such as rules and expectations, are insensitive to the vagaries of real human behaviour. For example, protocols for checking capabilities required for grasping might presuppose an unrealistic degree of diligence. Regulatory requirements should be alert to the practical dimensions of potential rules and procedures.
Technological sources of risk arise from the shortcomings of the system itself. These are the myriad risks that engineers would conventionally consider as part of their work. In the context of the pick-and-place pipeline, there may be concern about recycled materials degrading and shedding particles into food. Regulators should ensure that designers effectively test the properties of the recycled materials, their limits and behaviours.
Epistemic sources of risk arise from the inherent indeterminacies of knowledge, which create pockets of ignorance that hide unexpected hazards.
Such pockets are difficult to mitigate (since we don't know what we don't know), but regulators can anticipate that an attempt by the system to grasp an object could fail for unexpected reasons, and tailor regulatory provisions in ways that promote resilience and learning.
Cultural sources of risk occur from collective values (beliefs and norms) that frame and influence AS design and operation <cit.>.
For instance, if a gripper's grasp is optimised for western food items (e.g. tins), this may mean that it underperforms for eastern food items (e.g. bags), negatively affecting a minority of stakeholders. Regulatory requirements must reflect critically on their underlying values, and work to counter potential inequities.
§ VERIFIABILITY OF A SOFT GRIPPER
In addition to formulating a well-defined specification as discussed in the preceding section, verifiability can also be explored to improve the trustworthiness of a soft robotic gripper.
Verification is the process that can be used to increase confidence in the system's correctness against its specification <cit.>.
One can achieve verifiability by giving consideration to verification early in the process, such as during specification and system design <cit.>, where it can be promoted to a primary system design objective <cit.>.
A unified and holistic approach to verifiability will give rise to systems, which by their construction, deserve our trust <cit.>.
In order to make a system verifiable, a person or a tool must be able to check its correctness <cit.> in relation to its specification <cit.>.
The main challenge is in specifying and designing the system in such a way that this process is made as easy and intuitive as possible.
The specific challenges for AS include <cit.>:
(i) capturing and formalising requirements including functionality, performance, safety, security and, beyond these, any additional non-functional requirements purely needed to demonstrate trustworthiness;
(ii) handling flexibility, adaptation, and learning; and
(iii) managing the inherent complexity and heterogeneity of both the AS and the environment it operates in.
Gerson <cit.> identifies several techniques for ensuring verifiability: bounding the verification task, prioritising effort, ensuring traceability, and breaking down high-level requirements into verifiable portions.
According to Gerson <cit.>, formulating requirements for verifiability should go beyond avoiding negative requirements and including numerical tolerances; one should also aim to design for verifiability.
This is because the ultimate purpose of specifications is verifying that the end product exhibits the intended properties.
This requires the intended properties to be demonstrable; that is, knowledge of the end result needs to be attainable.
In this work, we have considered this early when formulating requirements, by planning and confirming how these properties can be verified subsequently (see Table <ref>).
For example, requirements RQ1.1, 1.2, 1.4, 2.1, 2.2, and 2.4 can be verified by observing the gripping system during operation.
Observation is “a technique that provides a direct way of viewing individuals in their environment performing their jobs or tasks and carrying out processes" <cit.>.
Let us describe a unit test that can be conducted to verify requirement RQ1.5.
A soft pneumatic gripper with two fingers can be fabricated as proposed in <cit.> using 30% recycled material.
One can use black pigments to track the granules and chambers, and the constraining layer's curvature can be tracked using red pigments.
The fingers can be actuated and a camera can be used to capture their motion.
The curvature with time can be determined by conducting image processing on the video, and the internal pressure with time can be monitored.
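A minimal post-processing check of the logged pressure against RQ1.5 could then look as follows; the log format and function names are assumptions made for illustration, not part of the test described above.

import numpy as np

PSI_MIN, PSI_MAX = 3.0, 4.0  # RQ1.5 actuation pressure range

def rq1_5_satisfied(time_s: np.ndarray, pressure_psi: np.ndarray,
                    inflated: np.ndarray) -> bool:
    """Check that, whenever the finger is flagged as inflated, the logged
    internal pressure stays inside the specified 3-4 psi range."""
    p = pressure_psi[inflated.astype(bool)]
    return bool(p.size) and bool(np.all((p >= PSI_MIN) & (p <= PSI_MAX)))

# Example with synthetic log data: 10 s of samples at 100 Hz.
t = np.linspace(0.0, 10.0, 1000)
pressure = 3.5 + 0.2 * np.sin(t)      # stays within the allowed range
inflated = (t > 1.0) & (t < 9.0)      # actuation window
print(rq1_5_satisfied(t, pressure, inflated))  # True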
As mentioned previously, the engineering requirements (iii a–d) have been formulated by following the guidance provided in <cit.>.
On the one hand, most engineering requirements contain numeric tolerances (verifiable).
For example, RQ1.5 identifies a specific pressure range.
Similarly, in RQ4.4, we identify a maximum value of 2 N for gripping force when gripping a hard-fragile item.
On the other hand, there are several high-level requirements which need to be refined into verifiable terms before any verification method can be applied.
For instance, requirements RQ1.1–1.3 have been formulated at a high-level because the amount of curvature and straightening of a finger can often be dependent on the application.
In order to be verifiable, we can refine RQ1.1 as: the fingers of the gripper shall curve within 2% of a curve of 10 cm radius when inflated.
Similarly, we can refine requirement RQ4.2 as: no part of the body of gripping system shall have a position equal to the position of the item being grasped.
This can be verified by monitoring the distance between each of the joints of the gripping system to ensure that there is no collision between the body of the gripping system and the item.
Meanwhile, with respect to RQ4.1, one does not need to impose an initial velocity on the food item in the test bed, so this requirement can be met by design (i.e. no need to formally verify or monitor it). Thus, the above examples illustrate two versions of the specification for a soft robotic gripper – one a verifiable one from the start (e.g. RQ1.5, RQ4.4); and the other a more high-level, unrefined one (e.g. RQ1.1, RQ4.2), for which, through examples, we demonstrate how to make it verifiable. In this manner, this work not only provides a wide-ranging specification for a pick-and-place application, but also provides illustrations of how to formulate verifiable requirements for soft grippers.
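As an illustration of how the refined RQ4.2 above could be monitored, the sketch below checks the distance between each joint of the gripping system and the item being grasped; the joint representation and the small clearance margin are assumptions for illustration, not values from the specification.

import numpy as np

def no_body_collision(joint_positions_m: np.ndarray,
                      item_position_m: np.ndarray,
                      clearance_m: float = 0.01) -> bool:
    """Refined RQ4.2: no joint of the gripping system may coincide with the
    item being grasped. Joint positions are an (N, 3) array of Cartesian
    coordinates; a small clearance margin (assumed 1 cm) guards against
    sensor noise."""
    distances = np.linalg.norm(joint_positions_m - item_position_m, axis=1)
    return bool(np.all(distances > clearance_m))

# Example: three joints, all well clear of the item.
joints = np.array([[0.10, 0.00, 0.30], [0.12, 0.00, 0.25], [0.14, 0.00, 0.20]])
item = np.array([0.14, 0.00, 0.05])
print(no_body_collision(joints, item))  # True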
§ CONCLUSION
As soft robotics continue to expand into new and diverse industries it is important to consider the design processes used, particularly the development of detailed specifications which can be used to define the requirements for a system and ensure that both functional and non-functional aspects are considered from an early stage of the development process.
Within soft robotics, soft gripping is considered to be one of the most mature areas.
For wider adoption and acceptability, we must build soft grippers worthy of our trust.
In this context, this work proposed an extensive specification for a soft gripper covering several functional and non-functional properties, including predictability, reliability, adaptability, safety, ethics, and regulations.
In addition, we promoted the notion of verifiability of a soft gripper as a first-class design objective, and provided illustrations of how to formulate verifiable requirements.
This work was explored using the pick-and-place tasks of grocery items in an automated warehouse.
We conclude that specifying for trustworthiness in soft robotics as complete systems for real world applications should use a multi-disciplinary approach with inputs from a range of experts including soft roboticists and engineers, as well as experts from the social sciences and humanities. A multi-disciplinary approach will help ensure that any specification covers both functional and non-functional requirements and will help develop trustworthy soft robots.
§ ACKNOWLEDGEMENTS
This work has been supported by the UKRI Trustworthy Autonomous Systems Node in Functionality under Grant EP/V026518/1. J.R. is supported by EPSRC grants EP/R02961X/1, EP/S026096/1, EP/V062158/1, and EP/T020792/1, and the Royal Academy of Engineering through the Chair in Emerging Technologies scheme, grant CiET17182\22.
|
http://arxiv.org/abs/2307.00180v1
|
20230701002210
|
Tuning a magnetic energy scale with pressure in UTe$_2$
|
[
"Hyunsoo Kim",
"I-Lin Liu",
"Wen-Chen Lin",
"Yun Suk Eo",
"Sheng Ran",
"Nicholas P. Butch",
"Johnpierre Paglione"
] |
cond-mat.supr-con
|
[
"cond-mat.supr-con",
"cond-mat.str-el"
] |
Present affiliation: Department of Physics, Missouri University of Science and Technology, Rolla, MO 65409, USA
Maryland Quantum Materials Center and Department of Physics, University of Maryland, College Park, Maryland, USA
Maryland Quantum Materials Center and Department of Physics, University of Maryland, College Park, Maryland, USA
Maryland Quantum Materials Center and Department of Physics, University of Maryland, College Park, Maryland, USA
Maryland Quantum Materials Center and Department of Physics, University of Maryland, College Park, Maryland, USA
Maryland Quantum Materials Center and Department of Physics, University of Maryland, College Park, Maryland, USA
Maryland Quantum Materials Center and Department of Physics, University of Maryland, College Park, Maryland, USA
NIST Center for Neutron Research, National Institute of Standards and Technology, Gaithersburg, MD 20899, USA
Maryland Quantum Materials Center and Department of Physics, University of Maryland, College Park, Maryland, USA
Canadian Institute for Advanced Research, Toronto, Ontario M5G 1Z8, Canada
[email protected]
A fragile ordered state can be easily tuned by various external parameters. When the ordered state is suppressed to zero temperature, a quantum phase transition occurs, which is often marked by the appearance of unconventional superconductivity. While the quantum critical point can be hidden, the influence of the quantum criticality extends to fairly high temperatures, manifesting non-Fermi-liquid behavior over a wide range of the p-H-T phase space. Here, we report the tuning of a magnetic energy scale in the heavy-fermion superconductor UTe_2, previously identified as a peak in the c-axis electrical transport, with applied pressure and magnetic field as complementary tuning parameters. Upon increasing pressure, the characteristic c-axis peak moves to a lower temperature before vanishing near the critical pressure of about 15 kbar. The application of a magnetic field broadens the peak at all studied pressures. The Fermi-liquid behavior observed at ambient pressure is violated near the critical pressure, where the resistivity becomes nearly linear in temperature with an enhanced prefactor.
Our results provide a clear picture of energy scale evolution relevant to magnetic quantum criticality in UTe_2.
Tuning a magnetic energy scale with pressure in UTe_2
Johnpierre Paglione
August 1, 2023
=====================================================
Unconventional superconductivity is often found in the vicinity of a fragile long-range magnetic order (LRMO) <cit.>.
When LRMO is suppressed by either chemical substitutions, magnetic field or physical pressure, the system undergoes a quantum phase transition at a critical value of the tuning parameter <cit.>.
However, the quantum critical point (QCP) is often hidden inside a dome of the superconducting instability around QCP.
The pairing glue responsible for this superconductivity is associated with LRMO, giving rise to the magnetically mediated superconducting state.
While the majority of magnetic unconventional superconductors are found near an antiferromagnetic instability, in several uranium-based superconductors, including URhGe and UCoGe, superconductivity coexists with ferromagnetism <cit.>, making them promising candidates for topological spin-triplet superconductivity.
Recently, UTe_2 joined this family as a new U-based superconductor <cit.>, with T_c as high as 2 K <cit.>.
The normal state of UTe_2 can be best described by the Kondo lattice where the localized magnetic moment of uranium is hybridized with the conduction electrons at low temperatures.
UTe_2 does not magnetically order, but the superconductivity in this paramagnetic heavy fermion is believed to be in the vicinity of the magnetic instability <cit.>.
The application of moderate pressure induces a long-range magnetic order <cit.>.
Because of the relatively small energy scales of the superconductivity and LRMO in UTe_2, it displays a rich phase diagram when the system is subjected to external parameters.
However, the understanding of competition and interplay between magnetism and superconductivity in UTe_2 remains elusive.
Furthermore, the nature of QCP is not transparent.
In this work, we investigate electrical transport properties in UTe_2 to elucidate its quantum criticality and relevant energy scales by tuning a magnetic field and pressure. We measured the electrical resistance R with the current along the crystallographic c-axis <cit.> under pressure up to 17.4 kbar and in magnetic fields up to 18 T. We determined the pressure and field evolution of the characteristic c-axis peak, H_c2(T), and the power-law behavior of the c-axis electrical resistance. Our results clearly show energy scale evolution relevant to magnetic quantum criticality in UTe_2.
Figure 1 shows the electrical resistance R(T) in UTe_2 with the current along the crystallographic c-axis at various applied pressures up to p=17.4 kbar.
The ambient pressure (0 kbar) R(T) curve, possibly affected by the residual strain in the pressure cell, exhibits the characteristic c-axis peak around 13 K <cit.>.
Panel (a) shows that the peak monotonically moves towards the lower temperature with increasing pressure.
Inset in panel (a) shows the pressure evolution of the resistive superconducting transition.
The application of a small pressure of 0.5 kbar slightly lowers T_c, but higher pressures subsequently enhance T_c.
The maximum T_c was observed at p=9.7 kbar above which T_c rapidly decreases.
Superconductivity was observed up to 14.2 kbar.
The resistive superconducting transition in R(T) exhibits distinct features under different pressures.
The R(T) with 0 and 0.5 kbar exhibits a shoulder-like feature in the transition (black and red curves), which may be associated with the double transitions observed in heat capacity measurements <cit.>.
The R(T) curves with p=0.5, 5.3, 7.5, and 9.7 kbar exhibit an upturn before the superconducting transition upon cooling.
At p=9.7 kbar, the resistive superconducting transition is the sharpest.
At p=14.2 kbar, the transition becomes very broad with a long tail, and R(T) gradually becomes zero well below the onset of the superconducting transition.
Above p ≈ 15.6 kbar, low-temperature R(T) curves exhibit a substantial increase with shoulder- and peak-like features, below which R(T) decreases rapidly.
Superconductivity was not observed down to T≈ 0.3 K for both 15.6 and 17.4 kbar.
Figure 1(b) shows R(T,p), which displays the pressure evolution of the c-axis peak spectacularly.
The darker green indicates the larger electrical resistance.
The relatively wide c-axis peak around 13 K at 0 kbar moves to lower temperatures, and the feature becomes narrower.
The characteristic temperatures are overlaid on the plot with symbols.
The red and black symbols represent the superconducting transition <cit.> and a shoulder-like feature in χ_a <cit.>, respectively. The triangle and diamond symbols observed above 14.5 kbar are associated with a long-range magnetic ordering <cit.>. We found that the pressure evolution of the c-axis peak coincides with the pressure evolution of χ_a, which suggests its magnetic origin in nature.
We note that the R(T) data shown in Fig. 1 were taken with a fixed current I = 0.1 mA.
Figure 2(a-e) shows the field-evolution of R(T) with applied pressure from 5.3 kbar to 14.2 kbar where R(T) curves exhibit a local maximum.
We define T^* and R^*, which respectively correspond to the temperature and resistance at the c-axis peak and represent the field evolution of the low-temperature scattering rate at each pressure. The field-dependent T^* and R^* show common features at all pressures.
First, R^* decreases with the field, and the peak evolves to a broad feature at high magnetic fields.
Second, T^* increases as the field increases. The field-dependence of T^* and R^* at various pressures is summarized in panels (f) and (g), respectively.
T^* monotonically increases with the field, and the field-dependence becomes virtually linear above 6 T. The increasing rate dT^*/d(μ_0 H) determined with a field range between 6 T and 18 T is shown in panel (h) as a function of the applied pressure. The pressure evolution of dT^*/d(μ_0H) is nearly linear.
On the other hand, R^* generally decreases with the field at all applied pressures. At low fields, it is weakly field-dependent. At p=14.2 kbar, it is nearly field-independent up to 4 T.
Above 6 T, R^* varies as a^* (H+H_0)^-1 where a^* and H_0 depend on the pressure. a^* corresponds to the field-suppression rate of R^*, and its pressure variation is shown in panel (h).
Figure 3(a) shows the temperature-dependent upper critical field H_c2(T) at various pressures.
The H_c2(T) curves were determined from R(T) measurements with the electrical current along the c-axis and the magnetic field applied parallel to the a-axis at the applied pressures up to p=14.2 kbar.
We used the zero resistance criteria for the superconducting transition temperature T_sc.
While the H_c2(T) curve without the applied pressure exhibits a smooth variation, the application of pressure drastically changes the shape of the superconducting H-T phase lines.
Near T_c, the slope of H_c2(T) increases by almost five-fold with pressure at p=9.7 kbar, and it slightly decreases at 11.8 kbar. Surprisingly, the application of 14.2 kbar induces reentrant behavior of superconductivity.
The large slope change of H_c2(T) at T_c with pressure indicates a significant variation in the orbital-limiting H_c2(0) <cit.>. However, the overall observed μ_0 H_c2(T) at the lowest temperature remains around 8± 2 T as shown in panel (a).
When the field-driven superconducting-to-normal-state transition occurs due to the orbital limiting effect, the orbital limiting field H_HW can be estimated from the slope at T_c through H_HW = -λ T_c (dH_c2/dT)|_T_c <cit.>, where λ ≈ 0.73 and 0.69 correspond to the clean and dirty limits, respectively <cit.>.
Alternatively, spin-singlet superconductivity can be suppressed by Pauli paramagnetism, and the corresponding limiting field can be estimated from the relation
H_P = Δ_0/(√(2) μ_B), where Δ_0 and μ_B are the superconducting energy gap at zero temperature and the Bohr magneton, respectively. For a weak-coupling BCS superconductor, μ_0 H_P = α T_c with α ≈ 1.87 T/K.
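As a numerical illustration of these two limiting fields, the short Python snippet below evaluates both expressions; the values of T_c and the slope used here are placeholders chosen for illustration, not measured values from this work.

# Illustrative evaluation of the orbital (Helfand-Werthamer) and Pauli
# limiting fields discussed above. Input numbers are placeholders.
LAMBDA_CLEAN = 0.73   # clean-limit prefactor
ALPHA_PAULI = 1.87    # T/K, weak-coupling BCS

def orbital_limit_tesla(t_c_kelvin, slope_t_per_k, lam=LAMBDA_CLEAN):
    """mu_0 H_HW = -lambda * T_c * (dH_c2/dT)|_Tc; the slope is negative."""
    return -lam * t_c_kelvin * slope_t_per_k

def pauli_limit_tesla(t_c_kelvin):
    """mu_0 H_P = 1.87 T/K * T_c for a weak-coupling BCS superconductor."""
    return ALPHA_PAULI * t_c_kelvin

t_c = 2.0      # K, placeholder transition temperature
slope = -10.0  # T/K, placeholder slope of H_c2(T) at T_c
print(orbital_limit_tesla(t_c, slope))  # 14.6 T
print(pauli_limit_tesla(t_c))           # 3.74 T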
Figure 3(b) compares the experimental H_c2(0) to both limiting fields, H_HW and H_P.
Figure 3(c) shows the pressure evolution of H_HW/H_c2(0) and H_P/H_c2(0).
While H_P remains less than H_c2(0), indicating non-singlet pairing, H_HW exhibits a substantial variation.
The large H_HW prediction is generally evidence for the heavy-fermion normal state.
The pressure-evolution of H_HW, which exhibits a significant enhancement around 10 kbar, indicates increasing effective mass with pressure.
However, the orbital limiting effect is interrupted, and the largest discrepancy between H_c2(0) and H_HW occurs at 9.7 kbar, where the highest T_c is observed.
A similar effect was observed in other heavy fermion superconductors near the quantum critical point, suggesting the existence of QCP near 10 kbar.
A drastic slope change appears at low temperatures for pressures between 5.3 and 11.8 kbar.
The slope change in UTe_2 was previously reported by Aoki et al., which was attributed to the existence of other superconducting phases <cit.>.
A similar H_c2(T) behavior was reported by Kasahara et al. in FeSe <cit.>, which was attributed to the Fulde–Ferrell–Larkin–Ovchinnikov (FFLO) state <cit.>.
We found the width of the superconducting phase transition in resistivity is closely related to this anomalous behavior in H_c2(T). To shed light on the origin of this feature, we determined the field-dependent transition width compared to the T_sc that is determined at the zero resistivity, Δ T_c/T_sc. In all studied pressure, Δ T_c/T_sc exhibits strong enhancement where the sudden slope change occurs as shown in panels (d-h). We define H^* where the slope of H_c2(T) changes.
It is notable that Δ T_c/T_sc decreases above H^* at p=7.5, 9.7, and 11.8 kbar, where low-temperature data above H^* are available.
The broad superconducting transition is usually associated with inhomogeneity <cit.> or a filamentary superconducting state. However, the systematic field dependence rules out these simple scenarios.
Figure 4(a-e) shows the field-dependent exponent n^* of the low-temperature R(T) determined under various pressures from the relation n^* = d[log(R(T)-R(0))]/d[log T].
R(0) is estimated by extrapolating the low-temperature tail of R(T) under the assumption of a power-law form. Provided R(0) is accurate, n^* is equivalent to the exponent of the power law R(T) = R(0) + A T^n.
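A straightforward numerical implementation of this local exponent, assuming R(0) has already been estimated, is sketched below; it is illustrative Python, not the analysis code used for the published figures.

import numpy as np

def local_exponent(temperature, resistance, r0):
    """n*(T) = d[log(R(T) - R(0))] / d[log T], evaluated by finite differences.
    Assumes R(T) > R(0) over the supplied temperature range."""
    log_t = np.log(np.asarray(temperature))
    log_dr = np.log(np.asarray(resistance) - r0)
    return np.gradient(log_dr, log_t)

# Synthetic check: R = R0 + A*T^1.5 should return n* close to 1.5 everywhere.
T = np.linspace(0.5, 5.0, 200)
R = 0.1 + 0.02 * T**1.5
print(np.allclose(local_exponent(T, R, r0=0.1), 1.5, atol=1e-3))  # True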
In the study by Eo et al., the Fermi liquid (FL) behavior, i.e., n=2, in the c-axis transport was reported in the absence of both field and applied pressure <cit.>.
Here we focus on the pressures between 5.3 kbar and 14.2 kbar.
At 5.3 kbar, it exhibits the FL behavior (yellow) just above T_c in H=0, but the low temperature n^* decreases down to n^*≈ 1.5 (light green) with increasing field near H_c2(0).
At 7.5 kbar and 9.7 kbar, while the c-axis transport exhibits the non-FL behavior near H_c2(0), the FL behavior (yellow) is recovered at high fields between 15 T and 18 T.
At 11.8 kbar and 14.2 kbar, the exponent reached n=2.5 (red) at high fields.
We performed least-squares fitting on selected R(T) curves by fitting the relation R(T) = R(0)+A T^n to the experimental data with T ≤ T^*/2.
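A minimal version of this bounded-range fit, using a generic nonlinear least-squares routine, could look as follows; it is a sketch under the stated T ≤ T^*/2 restriction, not the exact procedure behind the published values.

import numpy as np
from scipy.optimize import curve_fit

def power_law(T, r0, a, n):
    # R(T) = R(0) + A*T^n
    return r0 + a * T**n

def fit_power_law(temperature, resistance, t_star):
    """Fit R(T) = R(0) + A*T^n over T <= T*/2 and return (R0, A, n)."""
    T = np.asarray(temperature)
    R = np.asarray(resistance)
    mask = T <= 0.5 * t_star
    popt, _ = curve_fit(power_law, T[mask], R[mask], p0=[R.min(), 0.01, 2.0])
    return popt

# Synthetic example: quadratic (Fermi-liquid-like) resistance recovers n ~ 2.
T = np.linspace(0.3, 20.0, 400)
R = 0.05 + 0.004 * T**2
print(fit_power_law(T, R, t_star=10.0))  # approx. [0.05, 0.004, 2.0]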
The fitting results for n and A are summarized as a function of the field in panels (f,g) and pressure in panels (h,i).
For p=5.3 kbar, n=2 in the absence of a field. However, n varies smoothly with the field, reaching a minimum value of n=1.5 near 10 T; it increases weakly at higher fields while remaining sub-quadratic at 11 T.
For the higher pressures between 7.5 and 11.8 kbar, n shows a more drastic decrease, with a minimum near 6-8 T where H_c2(T) changes slope.
The lowest exponent n≈ 1 is observed near x T at 9.7 kbar.
At higher fields, n increases substantially to about 2.5 for 11.8 kbar and 14.2 kbar.
The A coefficient is correlated with n, showing a significant enhancement around 6-10 T.
Panels (h, i) show the pressure variation of n and A as a function of pressure at each fixed field.
The appearance of a dip in n and a peak in A at a field near the suppression of the superconducting state is a typical manifestation of a quantum critical point in the electrical transport property.
In a typical metal, the low-temperature resistivity follows Fermi-liquid behavior, i.e., a T^2 dependence. An exponent less than 2, or non-Fermi-liquid behavior, signals unconventional scattering due to enhanced spin fluctuations, often near a magnetic quantum critical point. Recently, n=1 was reported by Thomas et al. in Ref. <cit.>.
In summary, we investigate electrical transport properties in UTe_2 to elucidate its quantum criticality and relevant energy scales by tuning a magnetic field and pressure. We measured the electrical resistance R with the current along the crystallographic c-axis under pressure up to 17.4 kbar and magnetic fields up to 18 T. We found the temperature of the characteristic c-axis peak decreases with increasing pressure accompanied by decreasing R value at the peak. It is no longer discernible above 14.2 kbar where the resistance values are substantially increased due to the appearance of an apparent long-range magnetic ordering. The peak rapidly broadens with the increasing field, and the peak R-value decreases while moving to a higher temperature. The superconducting H-T phase diagrams are constructed at each pressure, and the estimated H_c2(0) values are compared to the orbital and Pauli limiting fields. Overall, H_c2(T) exhibits anomalous behavior by changing its slope, including reentrance of superconductivity at 14.2 kbar. We observed that the superconducting transition becomes very broad where the slope changes at all measured pressure values. We examined the power-law behavior in the H-p phase space and found the signatures of typical behavior from the magnetic quantum critical point where the A-coefficient and the exponent n exhibit the maximum and minimum, respectively.
The authors are grateful for the useful discussions with Andriy Nevidomskyy.
Research at the University of Maryland was supported by the Department of Energy Award No. DE-SC-0019154 (transport experiments), the Gordon and Betty Moore Foundation’s EPiQS Initiative through Grant No. GBMF9071 (materials synthesis), NIST, and the Maryland Quantum Materials Center.
[1] N. D. Mathur, F. M. Grosche, S. R. Julian, I. R. Walker, D. M. Freye, R. K. W. Haselwimmer, and G. G. Lonzarich, Nature 394, 39 (1998). doi:10.1038/27838
[2] O. Stockert and F. Steglich, Annual Review of Condensed Matter Physics 2, 79 (2011). doi:10.1146/annurev-conmatphys-062910-140546
[3] T. Shibauchi, A. Carrington, and Y. Matsuda, Annual Review of Condensed Matter Physics 5, 113 (2014). doi:10.1146/annurev-conmatphys-031113-133921
[4] D. Aoki, K. Ishida, and J. Flouquet, Journal of the Physical Society of Japan 88, 022001 (2019). doi:10.7566/JPSJ.88.022001
[5] S. Ran, C. Eckberg, Q.-P. Ding, Y. Furukawa, T. Metz, S. R. Saha, I.-L. Liu, M. Zic, H. Kim, J. Paglione, and N. P. Butch, Science 365, 684 (2019). doi:10.1126/science.aav8645
[6] P. F. S. Rosa, A. Weiland, S. S. Fender, B. L. Scott, F. Ronning, J. D. Thompson, E. D. Bauer, and S. M. Thomas, arXiv:2110.06200 (2021).
[7] S. Ran, H. Kim, I.-L. Liu, S. R. Saha, I. Hayes, T. Metz, Y. S. Eo, J. Paglione, and N. P. Butch, Phys. Rev. B 101, 140503 (2020). doi:10.1103/PhysRevB.101.140503
[8] Y. S. Eo, S. Liu, S. R. Saha, H. Kim, S. Ran, J. A. Horn, H. Hodovanets, J. Collini, T. Metz, W. T. Fuhrman, A. H. Nevidomskyy, J. D. Denlinger, N. P. Butch, M. S. Fuhrer, L. A. Wray, and J. Paglione, Phys. Rev. B 106, L060505 (2022). doi:10.1103/PhysRevB.106.L060505
[9] S. M. Thomas, C. Stevens, F. B. Santos, S. S. Fender, E. D. Bauer, F. Ronning, J. D. Thompson, A. Huxley, and P. F. S. Rosa, Phys. Rev. B 104, 224501 (2021). doi:10.1103/PhysRevB.104.224501
[10] D. Li, A. Nakamura, F. Honda, Y. J. Sato, Y. Homma, Y. Shimizu, J. Ishizuka, Y. Yanase, G. Knebel, J. Flouquet, and D. Aoki, Journal of the Physical Society of Japan 90, 073703 (2021). doi:10.7566/JPSJ.90.073703
[11] Y. S. Eo, S. R. Saha, H. Kim, S. Ran, J. A. Horn, H. Hodovanets, J. Collini, W. T. Fuhrman, A. H. Nevidomskyy, N. P. Butch, M. S. Fuhrer, and J. Paglione, arXiv:2101.03102 (2021).
[12] D. Braithwaite, M. Vališka, G. Knebel, G. Lapertot, J. P. Brison, A. Pourret, M. E. Zhitomirsky, J. Flouquet, F. Honda, and D. Aoki, Communications Physics 2, 147 (2019). doi:10.1038/s42005-019-0248-z
[13] D. Aoki, F. Honda, G. Knebel, D. Braithwaite, A. Nakamura, D. Li, Y. Homma, Y. Shimizu, Y. J. Sato, J.-P. Brison, and J. Flouquet, Journal of the Physical Society of Japan 89, 053705 (2020). doi:10.7566/JPSJ.89.053705
[14] S. M. Thomas, F. B. Santos, M. H. Christensen, T. Asaba, F. Ronning, J. D. Thompson, E. D. Bauer, R. M. Fernandes, G. Fabbris, and P. F. S. Rosa, Science Advances 6 (2020). doi:10.1126/sciadv.abc8709
[15] E. Helfand and N. R. Werthamer, Phys. Rev. 147, 288 (1966). doi:10.1103/PhysRev.147.288
[16] V. G. Kogan and R. Prozorov, Reports on Progress in Physics 75, 114502 (2012). doi:10.1088/0034-4885/75/11/114502
[17] S. Kasahara, Y. Sato, S. Licciardello, M. Čulo, S. Arsenijević, T. Ottenbros, T. Tominaga, J. Böker, I. Eremin, T. Shibauchi, J. Wosnitza, N. E. Hussey, and Y. Matsuda, Phys. Rev. Lett. 124, 107001 (2020). doi:10.1103/PhysRevLett.124.107001
[18] P. Fulde and R. A. Ferrell, Phys. Rev. 135, A550 (1964). doi:10.1103/PhysRev.135.A550
[19] A. I. Larkin and Y. N. Ovchinnikov, Zh. Eksperim. i Teor. Fiz. 47, 1136 (1964).
[20] Y. Matsuda and H. Shimahara, Journal of the Physical Society of Japan 76, 051005 (2007). doi:10.1143/JPSJ.76.051005
[21] R. Casalbuoni and G. Nardulli, Rev. Mod. Phys. 76, 263 (2004). doi:10.1103/RevModPhys.76.263
[22] T. Park, H. Lee, I. Martin, X. Lu, V. A. Sidorov, K. Gofryk, F. Ronning, E. D. Bauer, and J. D. Thompson, Phys. Rev. Lett. 108, 077003 (2012). doi:10.1103/PhysRevLett.108.077003
|
http://arxiv.org/abs/2307.01741v1
|
20230704141754
|
Ben-ge: Extending BigEarthNet with Geographical and Environmental Data
|
[
"Michael Mommert",
"Nicolas Kesseli",
"Joëlle Hanna",
"Linus Scheibenreif",
"Damian Borth",
"Begüm Demir"
] |
cs.CV
|
[
"cs.CV"
] |
Ben-ge: Extending BigEarthNet with Geographical and Environmental Data
Michael Mommert, Nicolas Kesseli, Joëlle Hanna, Linus Scheibenreif, Damian Borth, Begüm Demir
August 1, 2023
=======================================================================
Deep learning methods have proven to be a powerful tool in the analysis of large amounts of complex Earth observation data. However, while Earth observation data are multi-modal in most cases, only single or few modalities are typically considered.
In this work, we present the ben-ge dataset, which supplements the BigEarthNet-MM dataset by compiling freely and globally available geographical and environmental data. Based on this dataset, we showcase the value
of combining different data modalities for the downstream tasks of patch-based land-use/land-cover classification and land-use/land-cover segmentation. ben-ge is freely available and expected to serve as a test bed for
fully supervised and self-supervised Earth observation applications.
Earth Observation, Dataset, Multimodal, Supervised Learning, Self-supervised Learning
§ INTRODUCTION
The amount of Earth observation data grows at an ever-increasing rate. To cope with the vast amount and the complexity of the data, scalable and flexible methods are required for their systematic analysis.
End-to-end Deep Learning approaches have proven highly successful in extracting insights from such complex data across a variety of downstream tasks.
Furthermore, data fusion, the combination of different data modalities of the observed scene, is beneficial for most applications as it provides additional, and in many cases complementary, information on the scene.
Most Deep Learning applications for Earth observation rely on supervised learning approaches that require (large amounts of) annotated data, which are typically expensive and tedious to acquire.
Self-supervised learning approaches, which do not require annotated data, have shown the ability to successfully pretrain deep learning models, which in turn require a significantly smaller amount of
annotated data for a given downstream task while at the same time outperforming fully supervised approaches <cit.>.
In combination with data fusion across multiple data modalities, richer representations of the underlying data can be learned, further improving the trained model performance and data efficiency.
In order to evaluate the impact of combining different data modalities on fully-supervised and self-supervised learning approaches for Earth observation applications, a dedicated dataset is needed.
Currently available large-scale Earth observation datasets <cit.> comprise multiple data modalities, but those are typically limited to multispectral and synthetic aperture radar (SAR) data.
However, other data modalities such as meteorological conditions at the time of observation, the topography of the scene or other geographic features are likely to
support the learning process and improve the performance of the Deep Learning model.
In this work, we present ben-ge, an extension to the BigEarthNet-MM dataset <cit.>, in which we supplement the already existing multispectral imaging (Sentinel-2) and SAR polarization (Sentinel-1) data by adding freely and globally
available data modalities related to geography and environmental conditions. This extension will enable researchers to readily experiment with a wide variety of data modalities for a range of downstream tasks and use cases.
§ THE BEN-GE DATASET
ben-ge supplements each of the 590,326 BigEarthNet patches with freely available geographic and environmental data. Geographic data is provided in the form of patch-based climate-zone classifications, topographic maps, as well as land-use/land-cover maps, while environmental data
is provided in the form of the season and meteorological data concurrent with the Sentinel-1 and Sentinel-2 observations.
Patch-based climate-zone classifications, following the Köppen-Geiger classification scheme, were extracted from <cit.>.
Topographic maps are generated based on the Copernicus Digital Elevation Model (GLO-30) <cit.> and interpolated (bilinear resampling) to 10 m resolution on the ground.
We also provide cropped land-use/land-cover (LULC) classification maps for each BigEarthNet patch from the ESA WorldCover 10 m 2021 dataset <cit.>. As part of this work, we use these LULC maps as targets for model benchmarking (see Section <ref>), but they could also be used as additional input modality.
We encode the season at the time of observation (separately for Sentinel-1/2 data) on a range from zero (winter solstice) to unity (summer solstice) based on a sinusoidal projection of the day of the year at the time of observation.
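One plausible implementation of such an encoding is sketched below; the exact projection used for ben-ge (including how observations from the southern hemisphere are handled) may differ, so the formula and the solstice day-of-year are assumptions for illustration.

import numpy as np

WINTER_SOLSTICE_DOY = 355  # approx. Dec 21 (northern hemisphere), assumed

def season_encoding(day_of_year: int) -> float:
    """Map the day of year to [0, 1]: 0 at the winter solstice, 1 at the
    summer solstice, via a sinusoidal projection. Illustrative only."""
    phase = 2.0 * np.pi * (day_of_year - WINTER_SOLSTICE_DOY) / 365.25
    return 0.5 * (1.0 - np.cos(phase))

print(round(season_encoding(355), 3))  # ~0.0 (winter solstice)
print(round(season_encoding(172), 3))  # ~1.0 (summer solstice)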
Weather data at the time of observation (temperature at 2 m above the ground, relative humidity, wind vectors at 10 m above the ground) are extracted from the
ERA-5 global reanalysis <cit.> for the pressure level at the mean elevation of the observed scene and the time of observation (separately for Sentinel-1/2 observations; only
temperature values are used in the experiments listed in Section <ref>).
Map-like data products are available in the form of GeoTiff files, while patch-based data are stored in csv files. The different modalities of the dataset are available for download separately and can therefore be combined in a highly modular fashion.
Download links, additional information on the dataset and useful software tools are available at .
§ EXPERIMENTAL RESULTS
We explore the utility of the different ben-ge data modalities and perform a range of experiments in a fully supervised setting. For this purpose, we consider the two downstream tasks of multi-label patch-based classification and pixel-wise segmentation.
For both tasks we utilize the extracted ESA WorldCover LULC maps as targets. We note that with regard to pixel-wise frequency of the different classes, the LULC data are highly imbalanced:
[43.1, 1.1, 20.9, 14.1, 1.7, 0.1, 0, 18.4, 0.6, 0, 0]% of pixels fall into the classes [tree cover, shrubland, grassland, cropland, built-up, bare/sparse vegetation, snow and ice, permanent water bodies, herbaceous wetland, mangroves, moss and lichen].
The classification task uses as target a one-hot encoded vector considering only those classes from the ESA WorldCover scheme that cover at least 5% of the corresponding patch, leading to a similar class imbalance.
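A minimal sketch of how such a multi-label target could be derived from a pixel-wise LULC map is given below; the class-index mapping and patch size are assumptions for illustration, not the exact ben-ge implementation.

import numpy as np

NUM_CLASSES = 11  # ESA WorldCover classes used here, assumed indexed 0..10

def multilabel_target(lulc_map: np.ndarray, threshold: float = 0.05) -> np.ndarray:
    """Build a multi-label target vector from a pixel-wise LULC map: a class
    is marked positive if it covers at least `threshold` of the patch."""
    counts = np.bincount(lulc_map.ravel(), minlength=NUM_CLASSES)
    fractions = counts / lulc_map.size
    return (fractions >= threshold).astype(np.float32)

# Example: a 120x120 patch that is ~60% tree cover (0) and ~40% cropland (3).
patch = np.zeros((120, 120), dtype=np.int64)
patch[:, 72:] = 3
print(multilabel_target(patch))  # 1.0 at indices 0 and 3, else 0.0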
For the classification task we utilize a ResNet-18 model with a BCE loss function and for the segmentation task we use a U-Net model with a cross entropy loss function.
In the case of several input data modalities, we combine these modalities in a late fusion approach: each modality is processed by a separate backbone, resulting representations are concatenated and then passed through a number of linear layers (classification) or convolutional layers (segmentation).
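A condensed PyTorch sketch of this late-fusion pattern for the classification task is given below; the backbone choice, embedding width and channel counts are placeholders and not necessarily the exact configuration used in our experiments.

import torch
import torch.nn as nn
from torchvision.models import resnet18

class LateFusionClassifier(nn.Module):
    # One backbone per modality, concatenated features, then linear layers.
    def __init__(self, in_channels_per_modality, n_classes=11, embed_dim=512):
        super().__init__()
        self.backbones = nn.ModuleList()
        for c in in_channels_per_modality:
            backbone = resnet18()
            backbone.conv1 = nn.Conv2d(c, 64, kernel_size=7, stride=2, padding=3, bias=False)
            backbone.fc = nn.Identity()  # keep the 512-dimensional pooled features
            self.backbones.append(backbone)
        self.head = nn.Sequential(
            nn.Linear(512 * len(in_channels_per_modality), embed_dim),
            nn.ReLU(),
            nn.Linear(embed_dim, n_classes),
        )

    def forward(self, inputs):  # inputs: list of tensors, one per modality
        feats = [bb(x) for bb, x in zip(self.backbones, inputs)]
        return self.head(torch.cat(feats, dim=1))  # logits, e.g. for BCEWithLogitsLoss

# Example: Sentinel-2 RGBNIR (4 channels) fused with Sentinel-1 (2 polarizations)
model = LateFusionClassifier([4, 2])
s2 = torch.randn(8, 4, 120, 120)
s1 = torch.randn(8, 2, 120, 120)
print(model([s2, s1]).shape)  # torch.Size([8, 11])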
In the model training process we use an Adam optimizer, apply no additional data augmentations, and train each model for 20 epochs. Learning rates were chosen separately for each downstream task and follow a schedule; the same schedule was applied to all experiments.
To evaluate our models, we make use of the F1 and accuracy metrics for the classification task and the intersection-over-union (IoU) and pixel-wise accuracy metrics for the segmentation task.
We note that, as a result of the class imbalance inherent to the dataset, the accuracy metric is highly compromised and therefore only reported for comparison purposes.
To quantify the uncertainties inherent to our trained models, we perform each model training with 5 different random seed values and report the means and standard deviations of the resulting performances.
We split our dataset into a training/validation/test dataset using a 0.8/0.1/0.1 split; the composition of these splits is consistent across all experiments (see Section <ref> for details).
We explore our dataset in the following sections with regard to the usefulness of the different Sentinel-2 channels, the dataset size and the different combinations of data modalities. In our evaluations, we will focus on the performance on the downstream tasks and computational efficiency.
§.§ Sentinel-2: Multispectral Data
For most BigEarthNet or ben-ge applications, Sentinel-2 multispectral data will form the fundamental data modality.
However, it is unclear whether all 12 bands (Level-2A data products) contribute equally to solving the downstream tasks defined above. We therefore begin our experiments by
training both models on three variations of Sentinel-2 data (based on the ben-ge-0.2 dataset split, see Section <ref>): all 12 bands, only the highest resolution bands (bands 4, 3, 2 and 8 with 10 m resolution: “RGBNIR”) and RGB (bands 4, 3 and 2),
resulting in F1 scores of 79.5%±0.5%, 77.1%±0.6% and 75.6%±0.9% for the classification task and IoU scores of 40.2%±0.1%, 39.2%±0.1% and 37.1%±0.1% for the segmentation task, respectively.
While the combination of all 12 bands offers the best performance, the RGBNIR subset offers a good compromise, using only one third of the data and thereby providing a more efficient learning process; we adopt the RGBNIR band subset in the following experiments.
§.§ Dataset Size
ben-ge, just like BigEarthNet, contains 590,326 different locations and patches. We address the hypothesis that, for training the downstream tasks of classification and segmentation in a fully supervised setup,
a smaller dataset would suffice. We therefore generate random splits of ben-ge that contain only 20% of the data (in the following referred to as ben-ge-0.2, ∼118k samples), 40% of the data (ben-ge-0.4, ∼236k samples), 60% of the data (ben-ge-0.6, ∼354k samples) and
80% of the data (ben-ge-0.8, ∼472k samples) across all modalities. In addition, we also create a small-scale dataset with 8k samples that are sampled in such a way as to contain the same number of samples of preferably homogeneous LULC maps per class. ben-ge-8k is publicly available to enable quick testing on a limited, but meaningful dataset.
While dataset splits are random, care was taken to include the smaller datasets in the larger datasets (e.g., ben-ge-0.2 is a subset of ben-ge-0.4, which in turn is a subset of ben-ge-0.6);
the same applies to the corresponding training/validation/test splits[Index files for the individual datasets, as well as corresponding training/validation/test splits are available at .].
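One simple way to realize such nested splits, sketched below, is to shuffle the sample indices once and take growing prefixes; this illustrates the nesting property only and is not the script used to generate the published index files.

import numpy as np

def nested_splits(n_samples: int, fractions=(0.2, 0.4, 0.6, 0.8, 1.0), seed: int = 42):
    # Shuffle once, then take growing prefixes so that every smaller split
    # is contained in every larger one (ben-ge-0.2 inside ben-ge-0.4, etc.).
    rng = np.random.default_rng(seed)
    order = rng.permutation(n_samples)
    return {f: order[: int(f * n_samples)] for f in fractions}

splits = nested_splits(590_326)
assert set(splits[0.2]).issubset(set(splits[0.4]))
print({f: len(idx) for f, idx in splits.items()})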
To investigate the impact of the size of the dataset on the performance of the trained model, we train models for both downstream tasks
on the training subsets of the different dataset splits.
We find that the classification performance increases only insignificantly between 20% of the data and the full dataset. In the case of the segmentation task, a gradual increase in IoU can be observed between 20% of the data (37.9%±0.7%) and the full dataset (41.0%±0.2%).
Based on these findings and in order to improve the efficiency of the training process, we decide to perform all of the following experiments on the ben-ge-0.2 dataset split.
§.§ Data Modalities
We use each of the BigEarthNet-MM and ben-ge data modalities, as well as different combinations thereof, as input for the training of our models. Training and evaluation are based on the ben-ge-0.2 dataset split.
Table <ref> lists the resulting performances for the classification and segmentation tasks. While we use every data modality as a single input in the training, only few combinations of modalities are used as model input;
the selection of data modality combinations is based on the performances of the
individual modalities and thus provides only a glimpse of the possibilities. It can be observed that the impact on performance is closely correlated with the complexity of the data modality, especially in the case of the segmentation task.
Furthermore, we find that the classification performance increases with the number of modalities used in the process; this effect, however, is much less pronounced in the case of the segmentation task.
§ DISCUSSION AND CONCLUSION
Our experiments provide insights into the capabilities and usefulness of the ben-ge dataset and data fusion across a number of data modalities.
Results from Sections <ref> and <ref> indicate that there is only a moderate
benefit to using more than the RGBNIR subset of the Sentinel-2 bands or more than 20% of the dataset in a fully supervised training scenario for the LULC classification task. The comparison with the segmentation-task results suggests that other downstream tasks, other targets, or other data modalities and combinations thereof may behave differently.
Results reported in Table <ref> show the varying value of the different data modalities for the downstream tasks defined in Section <ref>.
In general, it can be observed that more complex (map-like) data types contribute more strongly to the model performance, especially in the case of the segmentation task.
Interestingly, patch-based climate zone information provides rather strong constraints for the classification task. Furthermore, it seems that the overall performance improves with the number of data modalities.
Our results provide a glimpse of the usefulness of extending Earth observation datasets across a range of data modalities. By relying only on freely and globally available data products, the data modalities presented here can be generated for and
therefore utilized to enhance any other Earth observation dataset, offering new opportunities to improve the performance of deep learning models for Earth observation applications in general and self-supervised model pretraining in particular.
This work is funded by Swiss National Science Foundation research project grant 213064.
IEEEbib
|
http://arxiv.org/abs/2307.01589v1
|
20230704092803
|
Anomalies in String-inspired Non-local Extensions of QED
|
[
"Fayez Abu-Ajamieh",
"Pratik Chattopadhyay",
"Anish Ghoshal",
"Nobuchika Okada"
] |
hep-th
|
[
"hep-th",
"hep-ph"
] |
|
http://arxiv.org/abs/2307.01343v1
|
20230703202848
|
HPC-driven computational reproducibility
|
[
"Yufeng Luo",
"Qian Zhang",
"Roland Haas",
"Zachariah B. Etienne",
"Gabrielle Allen"
] |
gr-qc
|
[
"gr-qc",
"cs.CE",
"physics.comp-ph"
] |
^1 Department of Physics and Astronomy, University of Wyoming, Laramie, Wyoming, 82071, USA
^2 School of Computing, University of Wyoming, Laramie, Wyoming, 82071, USA
^3 Department of Physics, University of Illinois at Urbana-Champaign, Urbana, Illinois, 61801, USA
^4 NCSA, University of Illinois at Urbana-Champaign, Urbana, Illinois, 61801, USA
^5 Strategy & Planning, Digital Research Alliance of Canada, Toronto, Ontario M4S 3C6 Canada
^6 Department of Astronomy, University of Illinois at Urbana-Champaign, Urbana, Illinois, 61801, USA
^7 Department of Physics, University of Idaho, Moscow, ID 83844, USA
Reproducibility of results is a cornerstone of the scientific method.
Scientific computing encounters two challenges when aiming for this goal.
Firstly, reproducibility should not depend on details of the runtime environment, such as the compiler version or computing environment, so that results are verifiable by third parties. Secondly, different versions of software code executed in the same runtime environment should produce consistent numerical results for physical quantities.
In this manuscript, we test the feasibility of reproducing scientific results
obtained using the IllinoisGRMHD code that is part of an open-source community software for simulation in relativistic astrophysics, the
Einstein Toolkit. We verify that numerical results of simulating a single isolated
neutron star with IllinoisGRMHD can be reproduced, and compare them to results reported by the code authors in 2015. We use two different supercomputers: Expanse at SDSC, and Stampede2 at
TACC.
By compiling the source code archived along with the paper on both Expanse and
Stampede2, we find that IllinoisGRMHD reproduces results published in its
announcement paper up to errors comparable to round-off level changes in initial
data parameters. We also verify that a current version of
IllinoisGRMHD reproduces these results once we account for bug fixes which have occurred since the original publication.
Keywords: High-performance computing, Computational reproducibility, Numerical Relativity
§ INTRODUCTION: COMPUTATIONAL REPRODUCIBILITY
§.§ Defining computational reproducibility
Computational research, or scientific computing, uses advanced research computing capabilities to understand and solve complex problems in science. Computational research spans many disciplines, but at its core, it involves the development and implementation of mathematical models and numerical simulations applied to data. The main purpose of reproducibility is to verify the scientific method and outputs, and provide a mechanism to confirm or refute a study's conclusions. That is why reproducibility is a Process, not an Achievement <cit.>. In this study, HPC-driven computational reproducibility has a loose definition, which is to obtain consistent scientific outputs rather than exact results using the original artifacts. This reproducibility experiment is conducted by a different team with the same experimental setup, including
the same input data, the same numerical model, and following the same method and computational steps, but using a different computational environment (HPC cluster). Due to differences in compilers and hardware, the source code needs to be modified so that it can be compiled and run on a new cluster.
§.§ Computational Reproducibility and FAIR principles
The FAIR (Findable, Accessible, Interoperable, and Reusable) data principles <cit.>, aim to enhance and support the reuse of digital material by both humans and machines.
A high-level principle of computational reproducibility is to provide a clear, specific, and complete description of how a reported result was reached, although different areas of study or types of inquiry may require different kinds of information.
Although scientific software differs from research data, the high-level FAIR data principles also apply to software code, in terms of the goals that ensure and improve the findability, accessibility, interoperability, reusability, transparency and optimal use of research objects.
A computationally reproducible research package may include data (primary and secondary data), software program(s) and documentation (including software dependencies and runtime / computational environment) for replicating published results, and capturing related provenance information, etc.
Over the last few years, a number of groups have been working towards the development of a set of FAIR guiding principles for research software (RS), including the FAIR For Research Software Working Group (FAIR4RS WG) <cit.> which is co-led by RDA <cit.>, FORCE11 <cit.>, and the efforts of Research Software Alliance (ReSA) <cit.>, the Software Sustainability Institute (SSI) <cit.> and grassroots communities (e.g., UK Reproducibility Network <cit.>).
§.§ Computational reproducibility challenges
Although research reproducibility is a critical and continuous component of the
scholarly communications process, computational irreproducibility cannot be
traced to one single cause. From the research software (RS) perspective, there
are multiple factors that contribute to the lack of reproducibility: RS is not
widely disseminated or shared and not readily discoverable and thus
inaccessible, inhibiting research transparency, reproducibility and
verification. As one of the steps toward scientific reproducibility, RS should
be properly cited so that it is uniquely identified (e.g., the specific version
of any RS package that is used to produce respective results), which also
benefits transparency and traceability of research results. The Accessibility
principle of the FORCE11 Software Citation Principles states that “software
citations should permit and facilitate access to the software itself and to its
associated metadata, documentation, data, and other materials necessary for
both humans and machines to make informed use of the referenced software.”
While this does not require that the RS be freely available, the metadata
should be, and should provide sufficient information for the RS to be accessed
and used. The development, deployment, and maintenance of reusable RS (whether
computational in nature, or that relies on any software-based
analysis/interpretation) are increasingly recognized internationally as a key
part of facilitating trusted, reproducible research outputs and open science.
Software versioning, a robust testing/quality framework (e.g. verification and
validation), code repositories, and portability, all of which are recognized as
desirable aspects of software quality, have all helped to drive the rapid
evolution of research reproducibility. Software sustainability is key to
reproducible science too, as it provides a critical tool for the effective
review and analysis of published results, which may lead to new research
efforts. However, the wide range of robust frameworks and approaches for
curating and preserving RS as a complex digital object represents a significant
challenge for sustainable access, thereby hindering research reproducibility.
At the cultural and societal level, transformation to an open science-driven RS culture depends on the creation of tools, platforms and services that enable researchers to mobilize knowledge and make research processes more efficient, transparent, reproducible, and responsive to societal challenges. Specific elements of this shift include: increasing collaboration and interaction among researchers; the development of technical infrastructure that promotes the adoption of emerging research practices; the development, promotion, and adoption of open-source and open-science practices. These shifts require an agile and responsive ecosystem with strong RS workforce support and sustainable funding.
§.§ Computational reproducibility with the Einstein Toolkit
The lack of direct observational data in numerical relativity and relativistic
astrophysics led to simulation and the development of robust and reliable
software codes being a primary scientific approach. Numerical relativity (NR)
is a discipline that combines general relativity with numerical simulations to
study the physics of massive systems, such as binary neutron stars and black
holes. NR transforms theoretical models for a system into executable codes and
simulates the system using the codes to produce physical observables, such as
gravitational waves, that can be detected and verified by experiments or
astronomy observations, e.g. the Laser Interferometer Gravitational-Wave
Observatory (LIGO). NR simulations can be used to predict and probe the
observables of a given model. In a recent case, it has been used to understand
the physical parameters of the binary black hole system emitting the first
gravitational wave signal detected by LIGO <cit.>. With
both open source and reproducibility being considered important aspects of
numerical simulations, from among a selection of current open- and closed-source
astrophysics
codes GRChombo <cit.>, SpECTRE <cit.>, SpEC <cit.>,
DendroGR <cit.> and BAM <cit.>
we use the Einstein Toolkit to perform our experiments due to its wide use and
support of many different compute clusters.
The reproducibility experiment described here is based on a use-case for the Einstein Toolkit.
The Einstein Toolkit is an open-source, community-driven cyberinfrastructure
ecosystem that provides key computational tools to support research in computational astrophysics, gravitational physics,
and fundamental science. The Einstein Toolkit community involves experts with
diverse backgrounds, from gravitational physics and astronomy to computer
science and engineering. As such, the Einstein Toolkit evolves and grows—just as
fundamental science itself progresses—to facilitate novel applications with
ambitious science goals and high productivity of its users, and to respond to the
needs of new community members.
The Einstein Toolkit is built on the <cit.> computational framework to connect different
modules and to achieve a clean separation between science and infrastructure
components. This enables domain experts in astrophysics and computer engineering
to focus their efforts on the components they are most comfortable dealing with.
All components within the Einstein Toolkit are distributed using free and open-source licenses enabling users to mix and match modules, adapt modules to their
own needs and share these modules freely with collaborators. This arrangement, while flexible and allowing for easy collaboration among distributed and non-coordinating groups, poses both opportunities and challenges with respect to
ensuring reproducible simulations.
It is worth mentioning that the Cactus numerical simulation framework <cit.> already puts a premium on reproducibility and portability.
In particular, the Cactus framework includes basic infrastructure to ensure the reproducibility of results as the code evolves, via
its included test suite mechanism. A set of system-level regression test suites consists of input descriptions for Cactus in the form of parameter files, together with expected output files and an error threshold value, all provided by the code authors. Cactus' infrastructure lets developers and users re-run these test suites to verify that the current code passes all of them; any code change that alters the data beyond the test suite threshold value is detected, and the developers can then choose to either update the test data or fix the newly introduced bug.
also contains a module, “Formaline”, that collects all
source files used to compile the simulation executable and embeds an archive of
these files in the executable itself. In addition, Formaline generates a
unique identifier for each simulation executable and each simulation run.
At run time the
executable outputs a copy of the included archive files along with its regular
simulation output. Each output file is also tagged with the unique identifier
of the simulation executable and simulation run. This way all code used to
generate a set of
output files are included, alongside those files and all output files record the exact
code used to produce them.
Together Formaline and the test-suites provide mechanisms ensuring
reproducible simulations by recording the code version used to produce results and
tracking code changes that affect results.
The computational reproducibility experiment described in this paper follows current FAIR practices for data and RS, respectively. The raw simulation results and analysis code are findable and accessible through the WyoScholar data repository with . The figures in this paper are reproducible with the Docker-based containerized environment included in the analysis code.
§ USE CASE STUDY
As a concrete example of the challenges faced in achieving computational
reproducibility in HPC computations, we reproduce results obtained using the IllinoisGRMHD
code <cit.> which was first officially included in the ET_2015_11 “Somerville” release of the Einstein Toolkit.
In the manuscript announcing IllinoisGRMHD <cit.> the authors evolved solutions for a TOV star in general relativistic hydrodynamics
and compared their results to those obtained by other codes. The TOV star is a spherically symmetric, nonrotating neutron star assuming an equation of state that represents initially cold, degenerate nuclear matter. This TOV star does not have a magnetic field.
Our aim is to reproduce
the results described in that paper. In the following text, we refer to IllinoisGRMHD manuscript <cit.> as , and we refer to the results as .
We performed the case study
on two different
supercomputers,
SDSC Expanse <cit.> and TACC Stampede2 <cit.>, each evolving
the same dataset, consisting of the initial condition that was used in , and we compared our results to . In order to differentiate between
changes due to modifications to simulation code and changes due to differences in the supercomputer environment, we used two versions of the IllinoisGRMHD:
* the most recent IllinoisGRMHD from
the ET_2022_11 “Sophie Kowalevski” release of the Einstein Toolkit, called 2022 in the following
* the original IllinoisGRMHD used in <cit.>, dubbed 2015 available in <cit.>.
IllinoisGRMHD's
complete code history
is available in its public source code repository <cit.> using git, which we used
to track down commits introducing any observed change in behavior. 2015 can be found on the original publication authors' website <cit.>.
We choose two different
supercomputers to test the consistency of
Einstein Toolkit and to obtain an estimate for the sensitivity of
results on
the runtime environment. Consistency in our test means that a simulation that uses the same parameter file and is created using the same version of the simulation software, but run in different runtime environments (different compiler versions, hardware configurations, etc.), generates consistent numerical results for physical quantities. For example, we expect that the central density of the star oscillates with the same amplitude and frequency for simulations on Stampede2 and Expanse, but with slight differences in the numerical results due to compiler optimization, code versions and CPU model.
Thus a bitwise notion of reproducibility is not useful in
this context and instead a relaxed notion of reproducibility based on minimal
expected changes in results due to roundoff errors is used. Roundoff error in this paper is defined the same way as in <cit.>. That is, our simulated result should agree with the claimed result at least as well as when simulating otherwise identical data perturbed by the round-off error of the underlying floating point format. More details are discussed in <ref> and figure <ref>.
Both Expanse and Stampede2 are supported by
Einstein Toolkit's Simulation Factory <cit.> module,
which contains information
on how to compile code and submit simulations using the clusters' resource
management system. Simulation Factory is Einstein Toolkit's primary means
to maintain compatibility with computing clusters,
simplifying deployment of code on supported clusters.
§.§ Experimental setup
Both 2015 and 2022 were compiled using
Simulation Factory of 2022.
This is required to account for changes in the cluster environment. Both SDSC Expanse and TACC Stampede2 came online after 2015, so none of the compilation instructions of these two clusters is present in 2015.
In addition, intermediate versions of IllinoisGRMHD obtained from the
source code repository were compiled to pinpoint the exact commit that
introduces any significant changes in output.
On all
clusters the code was compiled with value-unsafe optimizations enabled implying
slightly different realizations of each mathematical expression in compiled
code, both between different clusters and between different compiler versions.
Cluster configurations and compiler versions are shown in table <ref>. Compiling IllinoisGRMHD on the two clusters is slightly different since compiler versions and queuing systems infrastructure differ between the two clusters.
In each case, we use settings taken from Simulation Factory in
2022, which supports both clusters. The key difference
between the clusters, for IllinoisGRMHD, is the different CPU
microarchitecture used: AMD EPYC, launched in 2017, and Intel Skylake, launched
in 2015. This, combined with different compilers and
aggressive optimization settings used, results in round-off level differences
when evaluating mathematical expressions. These differences then propagate
and, potentially, could amplify to levels incompatible with consistent physical results. On
the other hand, different MPI stacks on clusters, the MPICH-based Intel MPI
stack on Stampede2 and OpenMPI on Expanse, do not influence the numerical results since
IllinoisGRMHD's evolution code solely uses data transfer primitives, e.g., MPI_Send and MPI_Recv, that copy data identically, and does not rely on reduction operations that act on values.
The code published in <cit.> does not include scripts to
post-process the raw simulation output and plot graphs shown in the manuscript.
As part of the experiment, the required scripts were implemented in Python
based on information available in the published material. Additionally, copies
of the original scripts were obtained from the
author and are now available without modification on <cit.>. Their output, given identical input files, was
compared to that of the newly implemented Python code.
§.§ Simulation parameters and diagnostics
simulations are controlled via parameter files, which define numerical
simulation inputs, such as grid spacing, evolution method, and
initial
data of the physics setup. targets backward compatibility of parameter files – a parameter file
run using Einstein Toolkit
version 2015 should produce numerically consistent results to that same parameter file run using any later version such as 2022.
We reproduced one of the tests
in using the original parameter file to verify this. Two simulations were
created: one using 2015 and the other using
2022. The parameter file, tov_star_parfile_for_IllinoisGRMHD.par, used as the basis for this experiment is included in 2015 and was used with only modifications to the grid spacing, corresponding to the different resolutions.
All simulations use a cubic Adaptive Mesh Refinement (AMR) grid, and x, y, z dimensions all have the same
number of grid points. The grid spacing in high, medium, and low-resolution
simulations are (0.32, 0.4, 0.5) code units, respectively, in the coarsest refinement level. Four
refinement levels of size (1.5, 3.0, 6.0, 12.0) R_NS, where R_NS is the radius of the star, are used. Therefore, the grid spacings in the finest refinement level are
Δ x_finest=(0.02, 0.025, 0.03125) code units, which correspond to (75, 60, 48) grid points inside the finest refinement level.
Following the conventions in <cit.>, we use two outputs, namely, the change in central density (Δρ_c) and the L2-norm of the
Hamiltonian constraint violation (ℋ) in the numerical tests.
Change in central density is defined as
Δρ_c = (ρ_c(t) - ρ_c(t=0)) / ρ_c(t=0) .
For a stable TOV star in equilibrium, both the change in central density and
the Hamiltonian constraint violation are expected to converge to zero when resolution increases. For finite resolution both Δρ_c and ℋ will have a small but nonzero value.
Convergence order
is used in <cit.> to evaluate the performance of the code for
different resolutions and as a basic check on the correct implementation of the
evolution equations.
For a quantity Q ∈{Δρ_c, ℋ} whose expected value is 0,
convergence order for a set of
two resolutions Δ x_1 and Δ x_2 is computed as
n = log(Q(Δ x_1)/Q(Δ x_2)) / log(Δ x_1/Δ x_2)
where Q(Δ x_1,2) is the quantity as computed in the simulation with
grid spacing Δ x_1 and Δ x_2, respectively.
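A direct implementation of this formula is straightforward; the following Python sketch (not part of the published analysis scripts) computes the convergence order for two resolutions and checks it on synthetic second-order-convergent data.

[language=python, basicstyle=, caption=Convergence order (illustrative)]
import numpy as np

def convergence_order(q1, q2, dx1, dx2):
    # n = log(Q(dx1)/Q(dx2)) / log(dx1/dx2) for a quantity whose exact value is zero,
    # e.g. |Delta rho_c| or the Hamiltonian-constraint norm; q1, q2 may be arrays.
    return np.log(np.asarray(q1) / np.asarray(q2)) / np.log(dx1 / dx2)

# Example: synthetic second-order data, Q proportional to dx^2, recovers n = 2
dx_low, dx_high = 0.5, 0.32
print(convergence_order(3e-5 * dx_low**2, 3e-5 * dx_high**2, dx_low, dx_high))  # ~2.0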
We use two additional quantities to measure the difference in numerical results between different setups. The absolute difference in change of central density is defined as
Δ^abs(Δρ_c) = |Δρ_c,1 - Δρ_c,2|
and the relative difference in Hamiltonian constraint violation is defined as
Δ^rel(ℋ) = | ℋ_1 - ℋ_2|/ℋ_2 ,
where relative differences are used for ℋ to remove dependencies on an
arbitrary overall scale for ℋ.
In order to perform this comparison between simulations, one of the time series in the comparison may have to be interpolated to match the time steps of the other time series. We resample time series 1 to match the timesteps of time series 2. Additional details are specified in the caption of each figure.
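The resampling and difference computations can be expressed compactly with linear interpolation, as in the following illustrative sketch; the synthetic time series are placeholders, not simulation output.

[language=python, basicstyle=, caption=Resampled differences (illustrative)]
import numpy as np

def compare_series(t1, q1, t2, q2, relative=False):
    # Resample series 1 onto the time stamps of series 2 (linear interpolation) and
    # return the point-wise absolute or relative difference, as used for
    # Delta^abs(Delta rho_c) and Delta^rel(H) in the text.
    q1_on_t2 = np.interp(t2, t1, q1)
    diff = np.abs(q1_on_t2 - q2)
    return diff / np.abs(q2) if relative else diff

# Example with two differently sampled versions of the same oscillation
t1 = np.linspace(0.0, 150.0, 601)
t2 = np.linspace(0.0, 150.0, 433)
q1 = 1e-3 * np.cos(0.4 * t1)
q2 = 1e-3 * np.cos(0.4 * t2) + 1e-7
print(compare_series(t1, q1, t2, q2).max())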
§.§ Results
After simulating the test cases included in the 2015
announcement manuscript <cit.>, we compared our simulation results
with those results in <cit.>. In particular, the Hamiltonian constraint
ℋ, change in central density Δρ_c, and convergence order of
these quantities are used to compare results across
different supercomputers and to original publication results in figure 3
of <cit.>.
This section presents our reproducibility study results. We first compute results using two different versions of the code, running the same parameter file once on each of TACC Stampede2 and SDSC Expanse. We then compare the simulation results produced by the same version of the code on the two clusters against each other, as well as different versions on the same cluster.
The simulations on both clusters were run until at least t = 55 t_dyn≈ 153.0, where t_dyn=√(1/ρ_c), and ρ_c ≈ 1.29×10^-1 is the central density of the star at t=0.
The numerical simulation setup and running steps of this reproducibility experiment are reproducible with details in <ref>. Our simulation setup files, raw data, and analysis code are available at .
§.§.§ Round-off level agreement between simulations
In <cit.> a notion of agreement up to round-off
errors is introduced by observing how two simulations, using the same executable
code, whose input parameters relative difference is no more than the
floating point ϵ (see <ref> for details)
deviate from each
other over the course of the simulation. To understand the round-off error of our numerical experiments, we performed the same significant digits agreement test as in <cit.>.
An initial 15th-digit random perturbation was added to the initial data on the grid in the manner described in and repeated in appendix <ref>. That is, at each grid point all primitive variables in IllinoisGRMHD are multiplied by a common factor 1+ϵ, where ϵ is a random number in the interval [0, 10^-15). All conserved physical quantities are re-calculated based on the new perturbed initial primitive variables. After 30 dynamical timescales, the significant digits agreement for both the 2015 and 2022 cases oscillates between 6 – 8 digits. Our results, as shown in figure <ref>, agree with the original publication result in <cit.> figure 1. Hence, for the set of tests considered in this manuscript, two simulations that differ by no more than an absolute error of 10^-6 in ρ after 30 dynamical time scales are considered to agree up to round-off level errors.
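The following sketch illustrates the perturbation and a simple agreement measure; it mimics the procedure described above rather than reproducing the IllinoisGRMHD implementation, and defining the number of common significant digits as -log10 of the maximum relative difference is our assumption for the example.

[language=python, basicstyle=, caption=Round-off perturbation (illustrative)]
import numpy as np

def perturb_primitives(primitives: np.ndarray, seed: int = 0) -> np.ndarray:
    # Multiply every primitive variable at every grid point by 1 + eps,
    # with eps drawn uniformly from [0, 1e-15), as described in the text.
    rng = np.random.default_rng(seed)
    eps = rng.uniform(0.0, 1e-15, size=primitives.shape)
    return primitives * (1.0 + eps)

def significant_digits_agreement(a: np.ndarray, b: np.ndarray) -> float:
    # Assumed metric: -log10 of the maximum relative difference between two fields.
    rel = np.abs(a - b) / np.maximum(np.abs(b), 1e-300)
    return -np.log10(rel.max())

rho = np.full((48, 48, 48), 1.29e-1)
print(significant_digits_agreement(perturb_primitives(rho), rho))  # about 15 digits at t = 0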
§.§.§ Expanse
Figure <ref> shows the agreement between our simulation result for Δρ_c and the original published result.
The top row shows how Δρ_c evolves when simulated used 2015
and 2022 and compares to the results of . On the left, for
2015 which uses the code published in , our results, shown
in solid, dashed and dot-dashed lines, lose track the (dotted) results of
. This is especially evident in the convergence order plot in the
middle section, which shows near perfect agreement. This is shown explicitly in
the bottom panel, which demonstrates that our results and differ
by no more than 10^-6 and are consistent within round-off error as
introduced in section <ref>.
The right-hand column displays corresponding results comparing 2022
with . Here differences are much more obvious, already in the top
plot for Δρ_c, where there is much less overlap visible between the
curves. This difference is much more obvious in the convergence plot in the
middle, which shows 2022 having much smaller oscillations around the
expected convergence of 2. The bottom right absolute difference graph
quantifies this and shows that the difference starts out very large of order
10^-3 and slowly decreases to 10^-5 as the simulation progresses.
However, even at the end of the simulation, the difference still exceeds the
threshold magnitude established for round-off level agreement. This significant
difference can be tracked down to git commit
https://bitbucket.org/zach_etienne/wvuthorns/commits/8b562af09a888b2d795506e5711cc42a72f840c48b562af09, which is present in 2022 but not in 2015. We have verified
that reverting this single commit brings 2022 into round-off level
agreement with . Inspecting the commit message reveals that the
commit fixes a minor bug present in 2015 which results in an incorrect
energy density being present at t=0, physically corresponding to an out of
equilibrium configuration, which explains why the observed difference decreases
over time as the system relaxes back to the equilibrium configuration.
These same observations also hold in figure <ref>
which displays results for the Hamiltonian constraint ℋ.
Similar to the situation for Δρ_c, our result agrees with the originally published result when using 2015 and differs significantly when using 2022, due to the double-adding issue.
§.§.§ Stampede2
Figures <ref> and <ref> display equivalent
results for Δρ_c and ℋ obtained on Stampede2. The same effects
as observed on Expanse are evident, including differences caused by changes in
's source code.
§.§.§ Simulation result comparison between Stampede2 and Expanse
Figure <ref> compares results obtained using 2022 on
Expanse and Stampede2, respectively. For both Δρ_c and ℋ
results agree well within the threshold for round-off agreement. The overall
results between the clusters are very similar to the results displayed in the
left-hand columns of Figures <ref>, <ref>,
<ref>, and <ref> illustrating that simulations
using identical source code for are in round-off level agreement
both when using the code in , and 2022.
§ CONCLUSIONS
In this study, we explore two aspects of reproducibility of computational research:
* different computing clusters using the same simulation code,
* same computing cluster with different versions of the simulation code.
This poses challenges unique to scientific software, which is expected
to compile and perform under a variety of runtime environments which are not
known in advance.
For example,
different compute clusters employ different compilers and different compiler
versions optimized for the cluster, some of which may fail to compile the
scientific software without modification.
In general application software, this issue is addressed by packaging all
dependencies with the software, for example, in the form of container images as
used by the Ubuntu SNAP format <cit.>.
Containers are increasingly available on HPC systems <cit.>
providing a way to encapsulate all code dependencies except the operating system
kernel with the science code. This, however, typically entails a loss of
efficiency that may be unacceptable for high-performance
computing <cit.>, and compiling from source remains
the norm for HPC codes in the current state of scientific software.
With this caveat, we observed that the same simulation code produced results agreeing to the round-off level.
Our simulations using 2015 agree with the original
publication results on both Hamiltonian constraint and central density drift to
round-off level, as shown in the lower left
panel of figures <ref>, <ref>, <ref>, <ref>, and
<ref>.
On the other hand, our simulations using 2022 show
discrepancies from the original publication results, which we track down to a
change in the source code. Reverting this change restores reproducibility.
Computational reproducibility is essential to the continuing development of
scientific software, especially when a new module or functionality is added. In
addition, it is also critical for the use of scientific code by others and the
verification of results by others.
Furthermore, long-term computational reproducibility is an important
aspect researchers should be aware of. New researchers joining the field should be
able to track and understand how to reproduce and interpret existing numerical
experiment results. With this in mind, we
suggest best practices for manuscripts announcing new scientific software
or simulation results.
* Creating a DOI for specific versions of source code or depositing the source code used in the paper. To help the scientific community reproduce the results, it is recommended that papers include a trackable identifier such as a DOI or git hash. With these unique identifiers, both the simulation results and the software itself can be traced and understood.
* Creating a DOI for parameter files used to run simulations that produce results claimed in the paper. Due to newer versions of the Einstein Toolkit, we found that we have to change some parameters of the simulation parameter files to reproduce the results claimed in . Along with a trackable version of the simulation software, a parameter file associated with a DOI makes the simulation easy to reproduce, and the simulation workflow transparent.
* Separate the computational and physical discussion. Authors of a paper with numerical experiment results should include separate sections discussing the physical results, the computational results, and how to reproduce them. Theories and problem setups should be discussed in the physical part. Numerical experiment setups, such as the simulation framework and parameter files, should be introduced in the computational section.
In conclusion, this work studied the challenges encountered when reproducing scientific results
obtained with a state-of-the-art, real-world science code when applied to a
common test problem. This test problem was originally used to verify both code
correctness and performance in .
Science codes, and compilers used in high-performance computing
architectures, are not designed to provide bit-identical results despite identical
source code and input, instead allowing for deviations to some extent between results as
long as those
deviations are considered “small” compared to intrinsic approximation and
discretization errors of the method used.
Additionally, scientific codes are undergoing
constant changes as bugs are discovered and fixed and new features are added to them
to study new phenomena. These changes result in numeric values differing at
levels above round-off error deviations. Often these fixes are
only documented in a revision control system and not in an explicit change log
file. A key task of determining reproducibility is thus to identify these value
changing bug-fixes and quantify whether the observed differences are compatible
with the change introduced to the code.
Hitherto, this process requires an expert
understanding of the science code and is not automated.
We demonstrate that, within these constraints, results obtained using
2015 can
be independently reproduced on multiple clusters and with multiple versions of
IllinoisGRMHD.
We also provide suggestions to remedy some computational challenges encountered in this reproducibility study.
§ FUTURE WORK
In this work, we have not attempted to extend the notion of reproducibility
past value changing bug fixes in the code. Some codes, for
example <cit.>, attempt to address this
issue by explicitly marking commits that introduce single-bit changes in
results while others <cit.> record a level of
fuzziness within which
results are considered equal. Neither approach seems fully satisfactory
and a more robust method based on the notion of equal up to round-off error may
help provide a better handle on code changes that affect reproducing scientific
results.
Y.L. and R.H. acknowledge support from the National Science Foundation, Office of Advanced Cyberinfrastructure (OAC) through Award Number 2004879. Y.L. was also partially supported by the U.S. Department of Energy, Office of Science, Office of High Energy Physics, under Award Number DE-SC0019022.
§ STEPS TO COMPILE THE CODES
In the following section, we provide detailed instructions on how to compile the
codes, 2015 and 2022, on the two
clusters, Expanse and Stampede2.
2015 is the code used in available from its author's website and
from <cit.>. 2022 is the “Sophie Kowalevski”
() release of the Einstein Toolkit available
from <cit.>.
To compile 2022 on either Expanse or Stampede2-skx the steps documented on the Einstein Toolkit website <cit.>
[language=sh, basicstyle=, caption=Compiling 2022]
curl -kLO https://raw.githubusercontent.com/gridaphobe/CRL/ET_2022_11/GetComponents
chmod a+x GetComponents
./GetComponents –parallel https://bitbucket.org/einsteintoolkit/manifest/raw/ET_2022_11/einsteintoolkit.th
cd Cactus
./simfactory/bin/sim setup-silent
./simfactory/bin/sim build –thornlist thornlists/einsteintoolkit.th
is sufficient.
Compiling 2015 on the other hand, due to changes in compilers and cluster environment, requires some additional steps.
We begin by downloading the code from <cit.>:
[language=sh, basicstyle=, caption=2015: step 1]
curl -OL 'https://zenodo.org/record/7545717/files/IllinoisGRMHD_''Sept_1_2015_public_release__based_on_ET_2015_05.tar.gz'
tar xf IllinoisGRMHD_Sept_1_2015_public_release__based_on_ET_2015_05.tar.gz
cd IllinoisGRMHD_Sept_1_2015_public_release__based_on_ET_2015_05
Next we replace Simfactory2015 with Simfactory2022 to use its machine definition files
[language=sh, basicstyle=, caption=2015: step 2]
rm -r simfactory
cp -a ../Cactus/simfactory/ simfactory
Step 3: 2015 contains legacy code that GCC version 10 and newer flags as invalid and requires adding to and variables, and to and variables in .
Step 4: Simfactory2022 no longer provides , , , , , , , and and instead uses etc. throughout. To compile 2015 we duplicate all settings and rename the copies to settings in .
Intel compilers version 17 or higher contain an apparent compiler bug at high optimization levels that makes them fail to compile file . Thus we add
[language=c, basicstyle=, caption=2015: step 5]
#if __INTEL_COMPILER >= 1700
#pragma GCC optimization_level 1
#endif
at the top of .
With these modifications in place we compile using the file supplied in the main directory of 2015:
[language=sh, basicstyle=, caption=2015: step 6]
cd IllinoisGRMHD_Sept_1_2015_public_release__based_on_ET_2015_05
./simfactory/bin/sim setup-silent
./simfactory/bin/sim build –thornlist ThornList
§ STEPS TO START THE SIMULATIONS
We use a modified version of the file included in 2015. The included parameter file is set up for a short validity test and requires some modification to match the file used in <cit.>.
Step 1: we change the value of , , and from 1.0 to 0.5, 0.4, and 0.32 for low-resolution, medium-resolution, and high-resolution simulations, respectively.
Step 2: we enable the termination condition and set the final time to 155
[language=perl, basicstyle=, caption=Final time]
#cactus::cctk_itlast = 128
Cactus::terminate = "time"
Cactus::cctk_final_time = 155
Step 3: for convenience of output we change 's value to .
Step 4: finally we add checkpointing and recovery options at the end of the file:
[language=perl, basicstyle=, caption=Termination and checkpoint]
ActiveThorns = TerminationTrigger
TerminationTrigger::max_walltime = @WALLTIME_HOURS@
# Trigger termination 30 minutes before the walltime is reached
TerminationTrigger::on_remaining_walltime = 30
TerminationTrigger::output_remtime_every_minutes = 30
TerminationTrigger::termination_from_file = yes
TerminationTrigger::termination_file = "terminate.txt"
TerminationTrigger::create_termination_file = yes
CarpetIOHDF5::checkpoint = yes
IO::recover = "autoprobe"
IO::checkpoint_on_terminate = yes
§ STEPS TO COMPUTE EXPECTED ROUND-OFF LEVEL DIFFERENCES
We estimate the effect of round-off level changes induced, e.g., by compiler
applied code optimization on the time evolution of results by explicitly adding
a small, random perturbation of 10^-15 relative size to the initial data
for all primitive variables. Comparing the results of this perturbed initial data with an
unperturbed simulation provides an estimate for these effects and establishes
an order of magnitude estimate for which differences are compatible with
round-off level changes in the data.
2022 already contains facilities to add such a perturbation
in the ID_converter_ILGRMHD module:
[language=perl, basicstyle=, caption=Random perturbation]
ID_converter_ILGRMHD::random_pert = 1e-15
2015 on the other hand contains a bug that renders the
parameter ineffectual and we apply the git commit hash
f822e2278695615a9ad508d58fe25b0c94451a31
“WVUThorns/ID_converter_ILGRMHD: When adding an optional perturbation to
the initial data, the perturbation should be applied to all IllinoisGRMHD
quantities, not HydroBase, at this part of the routine. Behavior was correct
except for density. This one-line patch fixes that.” from
2022 which fixes the bug.
§ REFERENCES
iopart-num
|
http://arxiv.org/abs/2307.01942v1
|
20230704221827
|
Minimax rates for latent position estimation in the generalized random dot product graph
|
[
"Hao Yan",
"Keith Levin"
] |
math.ST
|
[
"math.ST",
"stat.TH"
] |
Latent space models play an important role in the modeling and analysis of network data. Under these models, each node has an associated latent point in some (typically low-dimensional) geometric space, and network formation is driven by this unobserved geometric structure. The random dot product graph (RDPG) and its generalization (GRDPG) are latent space models under which this latent geometry is taken to be Euclidean. These latent vectors can be efficiently and accurately estimated using well-studied spectral embeddings. In this paper, we develop a minimax lower bound for estimating the latent positions in the RDPG and the GRDPG models under the two-to-infinity norm, and show that a particular spectral embedding method achieves this lower bound. We also derive a minimax lower bound for the related task of subspace estimation under the two-to-infinity norm that holds in general for low-rank plus noise network models, of which the RDPG and GRDPG are special cases. The lower bounds are achieved by a novel construction based on Hadamard matrices.
§ INTRODUCTION
Networks encoding relations among entities are a common form of data in a broad range of scientific disciplines.
In neuroscience, networks encode the strength of connections among brain regions <cit.>.
In biology, networks encode which pairs of genes or proteins are co-expressed or are involved in the same pathways <cit.>.
In the social sciences, networks arise naturally in the form of social network data <cit.>.
Network embeddings are a broadly popular tool for exploring and analyzing network data.
These methods seek to represent the vertices of a network in a lower-dimensional (typically Euclidean) space, in such a way that the geometry of these embeddings reflects some network structure of interest.
Most commonly, these embeddings arise either via spectral methods <cit.>, which construct embeddings from the leading eigenvalues and eigenvectors of the adjacency matrix, or via representation learning methods <cit.>.
Embeddings are especially appropriate in settings where we believe that data is well-approximated by a latent space network model <cit.>.
Under these models, each vertex has an associated latent variable (often a point in Euclidean space), and network formation is driven by these latent variables, with pairs of vertices more likely to form edges if their latent variables are “similar” according to some measure (e.g., proximity in space).
Examples of such models include Hoff models <cit.>, random geometric graphs <cit.>, graph root distributions <cit.> and graphons <cit.>, to name just a few.
Among these latent space models is the random dot product graph <cit.> and its generalization <cit.>.
Under this model, each node v has an associated low-dimensional vector _v ∈^d, called its latent position.
Conditional on these latent positions, the probability of two nodes u and v sharing an edge is given by the inner product of the associated vectors _u^T_v.
Although the RDPG is simple and widely applicable, one limitation of the model is that it can only produce graphs whose expected adjacency matrices are positive semidefinite.
To overcome this drawback, <cit.> introduced the generalized random dot product graph (GRDPG), which allows this expected adjacency matrix to be indefinite.
This model includes many classical models as special cases, including the stochastic block model <cit.>, degree corrected stochastic block model <cit.> and mixed membership stochastic block model <cit.>.
Under the RDPG and GRDPG, the most basic inferential problem involves estimation of the latent positions based on an observed network.
Once estimates of the latent positions are obtained, they can be used in many downstream tasks such as clustering <cit.>, graph hypothesis testing <cit.>, and bootstrapping <cit.>.
A widely-used approach to estimating the latent positions in the RDPG is the adjacency spectral embedding <cit.>.
The consistency of the ASE has been established previously under both the spectral <cit.> and two-to-infinity <cit.> norms and the asymptotic distributional behavior of this estimate was further explored in <cit.>.
For other related approaches to estimating the latent positions under the RDPG, see <cit.>.
The latent positions of the GRDPG can also be estimated consistently using a slight modification of the ASE <cit.>, with similar asymptotic distributional behavior to that established in previous work for the RDPG <cit.>.
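For reference, the following minimal numerical sketch shows the standard form of the adjacency spectral embedding used in this line of work: the d eigenpairs of the adjacency matrix that are largest in eigenvalue magnitude are retained and scaled by the square roots of the absolute eigenvalues. The stochastic block model example and all parameter values are purely illustrative.

import numpy as np

def adjacency_spectral_embedding(A: np.ndarray, d: int):
    # Keep the d eigenvalues of the symmetric matrix A largest in magnitude and return
    # X_hat = U |Lambda|^{1/2}, together with the eigenvalue signs, which give the
    # signature (p, q) in the GRDPG variant.
    evals, evecs = np.linalg.eigh(A)
    idx = np.argsort(np.abs(evals))[::-1][:d]
    lam, U = evals[idx], evecs[:, idx]
    X_hat = U * np.sqrt(np.abs(lam))
    return X_hat, np.sign(lam)

# Example: a two-block stochastic block model, embedded in d = 2 dimensions
n = 200
z = np.repeat([0, 1], n // 2)
P = np.where(z[:, None] == z[None, :], 0.5, 0.2)
rng = np.random.default_rng(1)
A = rng.binomial(1, P)
A = np.triu(A, 1); A = A + A.T  # symmetric, zero diagonal
X_hat, signs = adjacency_spectral_embedding(A, 2)
print(X_hat.shape, signs)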
These previous results suggest that the estimation rate, as measured in two-to-infinity norm, obtained by the ASE and related methods should be optimal, perhaps up to logarithmic factors.
In this paper, we show that this is indeed the case (see Theorem <ref>), establishing minimax lower bounds for estimation of the latent positions in a class of low-rank network models that includes both the RDPG and GRDPG.
This matches estimation upper bounds previously established in the literature <cit.>, up to logarithmic factors, and is in accord with previous work by <cit.> establishing the minimax rate under the Frobenius norm for the RDPG model.
Our proof is based on a novel construction using Hadamard matrices, which may be of interest to researchers in subspace estimation.
Indeed, as a corollary of our main result, we obtain minimax bounds for the closely related problem of singular subspace estimation in low-rank network models.
Previous results along these lines include <cit.>, who established a lower bound under Gaussian noise, and <cit.>, who provided a lower bound for random bipartite graphs under the spectral norm and Frobenius norm.
Notation.
For a vector , we use _2 to denote its Euclidean norm.
For a matrix , ,_F and _2, ∞ denote the spectral, Frobenius, and two-to-infinity (see Equation (<ref>)) norms, respectively.
We use _ij to denote the element in the i-th row and j-th column of the matrix .
For a sequence of matrices, we use subscripts _1, _2, …, _n to index them if we do not need to specify an element of them.
To specify the (i, j) entry of a sequence of matrices, we use the notation ^(1)_ij, ^(2)_ij, …, ^(n)_ij.
Similarly, we use subscripts _1, _2, …, _n to index a sequence of vectors.
We use letters C and c to denote constants, not depending on the problem size n, whose specific values may change from line to line.
_d denotes the set of all d× d orthogonal matrices.
_d denotes the d× d identity matrix.
denotes a matrix of all zeros.
For a positive integer n, we let [n] = {1, 2, …, n}.
We denote the standard basis in ^n by _1, _2, …, _n, where the components of _i are all zero, save for the i-th component, which is equal to 1.
We make standard use of Landau notation.
Thus, for positive sequences (a_n) and (b_n), if there exists a constant C such that a_n ≤ C b_n for all suitably large n, then we write a_n=O(b_n) or a_n ≲ b_n, and we write b_n = Ω(a_n).
We write a_n=Θ(b_n) to denote that a_n=O(b_n) and b_n=O(a_n).
If a_n / b_n → 0 as n →∞, then we write a_n=o(b_n) and b_n = ω(a_n).
§ LOW-RANK MODELS AND EMBEDDINGS
We are concerned in this paper with low-rank network models, in which the expected value of the adjacency matrix, perhaps conditional on latent variables, is of low rank.
These models are exemplified by the RDPG, where conditional on the latent positions, the adjacency matrix has expectation given by the Gram matrix of the latent positions.
Let F be a distribution on ^d such that for all ,∈ F, 0 ≤^T ≤ 1.
Let _1,_2,…,_n ∈^d be drawn i.i.d. according to F, and collect them in
the rows of ∈^n × d.
Conditional on , generate a symmetric adjacency matrix ∈{0,1}^n × n according to
_ij∼( _i^T _j ) independently over all 1 ≤ i < j ≤ n.
Then we say that is the adjacency matrix of a random dot product graph (RDPG), and write
(,) ∼(F,n).
For a fixed choice of , we write ∼() and say that the resulting network is
distributed as a conditional RDPG with latent positions .
As defined, the (conditional) expected adjacency matrix [ | ] is always positive semidefinite under the RDPG, restricting the range of network structures it can express.
The generalized RDPG (GRDPG) resolves this issue.
Let d = p+q where p,q ≥ 0 are integers, and define the matrix
_p,q = diag( 1,1,…,1,-1,…,-1 ), with p entries equal to +1 followed by q entries equal to -1.
Suppose that F is a distribution on ^d such that 0 ≤^T _p,q≤ 1 for all ,∈ F.
Draw _1,_2,…,_n ∈^d i.i.d. according to F, and collect them in the rows of ∈^n × d.
Conditional on , generate a symmetric adjacency matrix ∈{0,1}^n × n according to
_ij∼( _i^T _p,q_j ) independently over all 1 ≤ i < j ≤ n.
We say that is the adjacency matrix of a generalized random dot product graph (GRDPG) with signature (p,q), and write
(,) ∼(F,p,q,n).
For a fixed and signature (p,q), we write ∼(, p, q) and say that the resulting network is
distributed as a conditional GRDPG with latent positions and signature (p,q).
We can naturally extend the conditional versions of these models to a generic “low-rank plus noise” network model, in which the expected adjacency matrix is low-rank.
Let d = p+q for non-negative integers p and q, and let _p,q be as defined in Equation (<ref>).
Let ∈^n × d be such that = _p,q^T has all its entries between 0 and 1.
Given , generate a symmetric binary adjacency matrix ∈{0,1}^n × n according to _ij∼( _ij ), independently over all 1 ≤ i < j ≤ n.
We say that the resulting network is distributed according to a low-rank plus noise model with expectation .
Under both the RDPG and GRDPG as well as under their generalization in Definition <ref>, we have
[ | ] = = _p,q^T
for _p,q as in Equation (<ref>).
Note that we recover the RDPG by taking q=0.
Under these models, the matrix ∈^n × d is a natural inferential target.
The aim of this paper is to establish the limits on estimating this low-rank part under network models like those in Definitions <ref>, <ref> and <ref>.
For non-negative integers p,q, define the set
_n^(p, q) = {∈^n × d : 0 ≤_p,q^T ≤ 1 },
where the inequality is meant entry-wise, so that for each 1 ≤ i < j ≤ n, the element (_p,q^T)_i,j is a probability.
That is, the set _n^(p,q) corresponds to the collection of all possible collections of n latent positions whose indefinite inner products under a signature (p,q) are valid probabilities.
In other words, any ∈^(p,q)_n is a potential collection of latent positions under Definition <ref> or <ref>.
When p = d, the GRDPG model recovers the random dot product graph (RDPG) model as a special case.
As such, we define
^d_n = {∈^n × d : 0 ≤^T ≤ 1 }.
To establish estimation rates for network latent positions (i.e., elements of the set ^(p,q)_n or ^d_n), we must endow the set with a distance.
One such distance, surely the most studied in the context of network modeling, derives from the ()-norm.
Given two matrices ,∈^n × d, this norm is defined according to
- _ = max_i ∈ [n]_i - _i _2,
where ·_2 is the standard Euclidean norm in ^d and _i∈^d denotes the i-th row of ∈^n × d, viewed as a column vector.
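Numerically, this norm is simply the largest row-wise Euclidean norm; the helper below is a minimal sketch (the function name is ours).

import numpy as np

def two_to_infty(M):
    # The two-to-infinity norm: the largest Euclidean norm among the rows of M.
    return np.max(np.linalg.norm(M, axis=1))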
We will use this norm to construct a distance on the set ^d_n, once we account for a non-identifiability inherent to latent space models <cit.>.
Observe that for any orthogonal transformation ∈_d, we have ^T = ()^T.
As a result, given an adjacency matrix generated from an RDPG, we can only hope to estimate a particular ∈^d_n up to such an orthogonal transformation.
We thus endow ^d_n with an equivalence relation ∼, writing ∼ if = for some ∈_d.
Our notion of recovering the rows of the true up to orthogonal rotation yields a natural notion of distance on these equivalence classes.
Let ^d_n denote the quotient set of ^d_n by ∼.
Denoting elements of ^d_n by [] for any class representative ∈^d_n, define a distance on ^d_n by
( [],[] )
= min_∈_d - _.
is a distance on ^d_n.
Symmetry and non-negativity of are immediate from the definition and invariance of the ()-norm under right-multiplication by elements of _d.
Similarly, it follows by definition that ([],[] )=0 if and only if []=[].
To establish the triangle inequality, note that for [],[],[] ∈^d_n, we have
( [],[] )
= min_∈_d - _
= min_∈_d, ' ∈_d - ' + ' - _
≤min_∈_d, ' ∈_d - ' _
+ ' - _
= min_∈_d - _
+ min_∈_d - _
= ( [],[] ) + ( [],[] ),
where we have used the fact that the ()-norm is invariant under
right-multiplication by an orthogonal matrix.
Under the GRDPG and other low-rank network models (i.e., Definitions <ref> and <ref>), a similar non-identifiability occurs, but its structure is complicated by the presence of the matrix _p,q.
Analogous to the orthogonal group _d, we denote the indefinite orthogonal group by
= {∈^d× d : _p,q^T = _p, q}.
For any matrix ∈ and any ∈_n^(p,q), we have _p,q^T = _p,q()^T.
As a result, under the GRDPG, the conditional distribution of remains unchanged if we replace with for any ∈.
Thus, we also consider an equivalence relation ∼ on _n^(p, q), whereby for ,∈_n^(p,q), we write ∼ if and only if = for some ∈.
Lemma <ref> shows that the equivalence classes under this relation correspond precisely to the matrices ∈_n^(p,q) that give rise to the same distribution over networks.
A proof can be found in Appendix <ref>.
For , ∈_n^(p,q), define respective probability matrices _ = _p,q^T and _ = _p,q^T.
Then ∼ if and only if _=_.
In light of Lemma <ref>, our equivalence relation can also be understood as ∼ if and only if _ = _.
Under this equivalence relation, we denote by _n^(p, q) the set of equivalence classes of _n^(p,q) under ∼.
When it is clear from the context, we also use [] to denote the element of _n^(p, q) corresponding to the equivalence class of ∈^(p,q)_n.
In order to show minimax results for estimation of the latent positions in the GRDPG model and related low-rank network models, we first need to fix a notion of distance over the parameter set _n^(p, q).
To account for non-identifiability in the GRDPG, it is natural to follow Definition <ref> and define the distance between [] and [] according to
inf__1, _2 ∈_1 - _2_.
Unfortunately, this definition is not necessarily a valid distance.
To see a simple example, consider the case when n=1, p=1 and q=1.
For any _0 = (x_0,1, x_0,2) ∈^2 such that x_0,1^2 - x_0,2^2 = r, we observe that _0 moves _0 along the curve C_r: x_1^2 - x_2^2 = r.
Notice that for all r ∈, C_r shares a common asymptote l: x_1 - x_2 = 0.
Therefore, it follows that for any and ∈^2,
inf__1, _2 ∈𝒪_1,1_1 - _2_2 = 0.
Furthermore, the quantity defined in Equation (<ref>) may not satisfy the triangle inequality.
We include an example for n=2, p=1 and q = 1 in Section <ref>.
Instead, we must take a slightly more careful route to define a distance on _n^(p,q).
We begin by noting that for any ∈_n^(p,q), Sylvester's law of inertia implies that _ = _p,q^T has p positive eigenvalues and q negative eigenvalues, while the remaining n-p-q eigenvalues are zero.
Thus, we can always decompose _ as
_ = __^1/2_p, q_^1/2_^T,
where _∈^n× d is a matrix with orthonormal columns and _∈^d× d is a diagonal matrix with positive on-diagonal entries.
By Lemma <ref>, we have __^1/2∈ [], since __^1/2 and both produce the same probability matrix _.
In light of this, we can define a distance _2,∞ on _n^(p,q) according to
( [],[] )
= min_∈_d∩__^1/2 - __^1/2_.
The reader may notice that we have used the same notation as in Definition <ref>.
This can be done without risk of confusion:
when p = d and q = 0, since __^1/2∈ [] and __^1/2∈ [], there exist _, _∈ such that _ = __^1/2 and _ = __^1/2. As a result, since = _d when p=d, we have
( [],[] )
= min_∈_d__^1/2 - __^1/2_
= min_∈_d_ - __
= min_∈_d - _,
which is precisely our definition of for the RDPG as given in Definition <ref>.
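As a numerical sanity check on this construction, the sketch below (our own helper, assuming the latent position matrix has full column rank d) recovers a canonical representative of the class [] by eigendecomposing the associated probability matrix.

import numpy as np

def canonical_factor(X, p, q):
    # Sketch (our helper): given latent positions X of full column rank d = p + q,
    # return the canonical representative W S^{1/2} of the class [X], obtained from
    # the eigendecomposition of P = X I_{p,q} X^T, whose nonzero spectrum has exactly
    # p positive and q negative eigenvalues by Sylvester's law of inertia.
    n, d = X.shape
    Ipq = np.diag(np.r_[np.ones(p), -np.ones(q)])
    P = X @ Ipq @ X.T
    evals, evecs = np.linalg.eigh(P)
    idx = np.argsort(-np.abs(evals))[:d]                              # the d nonzero eigenvalues
    idx = np.concatenate([idx[evals[idx] > 0], idx[evals[idx] < 0]])  # positives first
    W, s = evecs[:, idx], np.abs(evals[idx])
    return W * np.sqrt(s)                                             # equals W S^{1/2}

One can check numerically that canonical_factor(X, p, q) and X itself yield the same probability matrix, as Lemma <ref> requires.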
is a distance on _n^(p, q).
Symmetry of follows from the fact that ^T ∈_d∩ whenever ∈_d∩, and non-negativity is immediate from the fact that ·_ is a norm.
The triangle inequality follows from the same argument as given in Observation <ref>.
It remains to show that
( [],[] ) = 0
if and only if
[] = [].
Toward this end, suppose that ( [],[] ) = 0.
Since _d ∩ is compact, there exists ^⋆∈_d ∩ such that
__^1/2 - __^1/2^⋆_2,∞ = 0,
that is to say, __^1/2 = __^1/2^⋆.
We therefore have
_ = _p,q^T
= __^1/2_p, q_^1/2_^T
= __^1/2^⋆_p, q^⋆ T_^1/2_^T
= __^1/2_p, q_^1/2_^T
= _p,q^T = _,
and Lemma <ref> implies that ∼.
To show the other direction of the equivalence in Equation (<ref>), let ,∈^(p,q)_n be representatives of [],[] ∈^(p,q)_n, respectively, and suppose that [] = [].
We will show there exists a matrix ^⋆∈_d ∩ such that __^1/2 = __^1/2^⋆, whence it will follow that ([],[]) = 0.
Recall that we associate to and the probability matrices
_ = _p,q^T
= __^1/2_p,q_^1/2_^T and
_ = _p,q^T
= __^1/2_p,q_^1/2_^T,
where _, _∈^n × d both have orthonormal columns and _, _∈^d × d are diagonal and positive definite.
Since [] = [], by Lemma <ref> there exists ∈ such that
__^1/2 = __^1/2.
There also exists a ∈ such that _ = _, since _ and _ correspond to the same singular subspaces. We also have a permutation matrix such that _^1/2_p,q_^1/2 = _^1/2_p,q_^1/2^T. The presence of _p,q forces to be of the form
= [ _p 0; 0 _q ],
where _p ∈^p× p and _q ∈^q× q are permutation matrices.
Hence, ∈∩ and we also have that _^1/2 = _^1/2^T. It follows from Equation (<ref>) that
= _^-1/2_^1/2.
Denote = for ease of notation.
Since ∈, we have
_^-1/2_^1/2_p,q_^1/2^T_^-1/2 = _p,q.
Rearranging and using the fact that diagonal matrices commute,
__p,q = __p,q.
Therefore, for any i, j ∈ [d], we have
_ij(__p,q)_jj = _ij(__p,q)_ii.
If _ij≠ 0, we have (__p,q)_jj = (__p,q)_ii and thus (_^1/2_p,q)_jj = (_^1/2_p,q)_ii. Otherwise, we have
_ij(_^1/2_p,q)_jj = _ij(_^1/2_p,q)_ii = 0.
Hence, _ij(_^1/2_p,q)_jj = _ij(_^1/2_p,q)_ii always holds and
it follows that
_^1/2_p,q = _^1/2_p,q.
Thus, we have
_p,q
= _^-1/2_^1/2_p,q
= _^-1/2_^1/2_p,q
= _p,q.
Moving _p,q to the right hand side, we have = _p,q _p,q^T, implying that is an orthogonal matrix, whence ∈∩. Taking ^⋆ = completes the proof.
The minimax risk for estimating ∈^(p,q)_n under the ()-norm after accounting for the equivalence structure encoded in ^(p,q)_n is given by <cit.>
inf_sup_∈_n^(p,q)( [], [] )
=
inf_sup_∈_n^(p,q)min_∈_d∩_^1/2_ - _^1/2__,
where the infimum is over all estimators .
Our goal in the remainder of this paper is to lower-bound this minimax risk.
§ MAIN RESULTS
We consider estimation (up to orthogonal non-identifiability) of a low-rank matrix = ^1/2, where is an element of the Stiefel manifold of all d-frames in ^n,
_d(^n) = {∈^n× d: ^T = _d}.
The structure of plays a crucial role in the estimation of .
When the smallest eigenvalues of [|] are especially close to zero, it is hard to distinguish the d “signal” eigenvalues of from the “noise” associated with the remaining n-d eigenvalues.
As such, we consider a particular structure on = (λ_1,λ_2,…,λ_d).
Assuming without loss of generality that λ_1 ≥λ_2 ≥…≥λ_d and defining the condition number κ = κ() = λ_1/ λ_d, this spectral structure is captured by membership in the set
(, )
= { = (λ_1,λ_2,…,λ_d) ∈^d× d: κ() ≤, λ_d ≥ > 0}.
With this notation in hand, we can state our main result.
With the sets _d(^n) and (κ_⋆,λ_⋆) as defined above, define the set
(, , p, q) = {(, ): ∈_d(^n), ∈𝒞(, ), ^1/2∈𝒳_n^(p,q)}.
If κ_⋆ = o(n), κ_⋆≥ 3d and 3≤ n, then
inf_(, ) sup_(, ) ∈(, , p, q) (
[^1/2],
[^1/2] )
≳ √(κ_⋆(λ_⋆∧log n )/n).
Our main tool is a standard packing argument <cit.>.
The main technical hurdle is constructing a collection of elements of _d(^n) all of which produce valid elements of (, , p, q) when paired with a particular choice of .
Our construction is based on stacking Hadamard matrices to form ∈_d(^n).
In particular, we require very different constructions depending on the growth rate of the condition number , and we divide our proof of Theorem <ref> into two cases accordingly.
Details are given in the Appendix.
As a remark, we note that the factor 3 in the conditions ≥ 3d and 3≤ n can each be relaxed to (1+ϵ) and (2+ϵ), respectively, for any constant ϵ > 0.
Details are provided in the Appendix.
§.§ Illustrative Examples and Applications
We now apply our main result to some well-studied special cases from the network modeling literature, starting with the GRDPG.
The assumption in Theorem <ref> that = Ω(d) is a natural one for the RDPG and GRDPG setting.
To see this, we first state Lemma <ref>.
Assume that = ^T, where the row vectors _1, _2, …, _n ∈^d of are independent identically distributed random vectors and let = [_1 _1^T].
For n sufficiently large, it holds with probability at least 1 - 2n^-1 that
λ_1()-δ/λ_d()+δ≤κ() ≤λ_1()+δ/λ_d()-δ,
where δ = 4√(log d/n) + 8log d/3n.
Applying the definition of κ and using basic properties of eigenvalues,
κ() = κ(^T)
= λ_1(^T /n)/λ_d(^T /n).
Since is a probability matrix, for any i ∈ [n], we have 0 ≤_i^T_i ≤ 1, and
_i _i^T - ≤_i _i^T + ≤_i_2^2 + _i _2^2
≤ 2.
Similarly, we also have
[(_i _i^T - )(_i _i^T - )]≤ 4.
Therefore, by a matrix version of Bernstein's inequality <cit.>, with probability at least 1 - 2 n^-1, we have
1/n^T -
= 1/n∑_i=1^n (_i_i^T - )≤ 4√(log d/n) + 8log d/3n.
Hence, by Weyl's inequality, it follows that with probability at least 1 - 2n^-1,
|λ_1() - λ_1(1/n^T)| ≤δ and |λ_d() - λ_d(1/n^T)| ≤δ,
where we set δ = 4√(log d/n) + 8log d/3n.
Rearranging the inequalities completes the proof.
Put simply, Lemma <ref> implies that under the RDPG, when n is sufficiently large, we have κ() ≈κ(). Without loss of generality, we assume that _1, _2, …, _n are sampled from a distribution whose support is a subset of _d(1) ∩^d_+, where _d(1) is the unit ball in ^d.
Denote the covariance matrix as = (_1 - )(_1 - )^T. Notice that for any ℓ∈ [d], _1,ℓ^2 ≤_1_2^2 ≤ 1, hence _1,ℓ^2 ≤_1,ℓ and we have
_ℓ = _1, ℓ≥_1, ℓ^2 = μ_ℓ^2 + _ℓℓ.
This implies that _ℓ≥_ℓℓ.
If = γ_d for some γ > 0, then κ() = γ^-1^T + 1 ≥γ d+1, and hence κ() = Ω_(d).
One sufficient condition for this is that each element of _1 be drawn i.i.d.
For example, if the entries of _1 are generated i.i.d. from the uniform distribution over [0, 1/√(d)], then κ() = 3d + 1.
As another example, if we sample _1, _2, …, _n uniformly from _d(1) ∩^d_+, then one can show that κ() = (2d + π - 2)/(π - 2) > d.
The case for the GRDPG is more complicated, owing to replacing the RDPG's inner product with an indefinite inner product.
We first state Lemma <ref>, which allows us to relate the spectrum of the indefinite matrix = _p,q^T to the spectrum of the positive semidefinite = _1 _1^T.
Assume that = _p,q^T, where the row vectors _1, _2, …, _n ∈^d of are i.i.d. random vectors with second moment matrix = _1 _1^T.
If there exists 0 < δ < 1 such that
^T/n - ≤δ,
then for a suitably chosen constant C > 0, we have
λ_1(_p,q) - C√(δ)/λ_d(_p,q) + C√(δ)≤κ() ≤λ_1(_p,q) + C√(δ)/λ_d(_p,q) - C√(δ).
Since is a symmetric positive semidefinite matrix, its square root ^1/2 is well-defined, as is that of = ^T /n.
Using basic spectral properties,
κ() = κ(_p,q^T)
= |λ_1(_p,q^T /n)/λ_d(_p,q^T /n)|
= |λ_1(^1/2_p,q^1/2)/λ_d(^1/2_p,q^1/2)|.
Applying the triangle inequality and basic properties of the spectral norm, we have
^1/2_p,q^1/2
- ^1/2_p,q^1/2 ≤^1/2_p,q(^1/2 - ^1/2)
+ ^1/2_p,q(^1/2 - ^1/2)
≤(^1/2 + ^1/2) ^1/2 - ^1/2
≤ 2 ^1/2^1/2 - ^1/2 + ^1/2 - ^1/2^2.
Since and are both positive semidefinite matrices, by Theorem X.1.1 in <cit.>, we have
^1/2 - ^1/2≤ - ^1/2 and ^1/2 = ^1/2.
Therefore, using the fact that δ∈ (0,1), we obtain
^1/2_p,q^1/2
- ^1/2_p,q^1/2 ≤ 2^1/2 - ^1/2
+ -
≤ (2√(δ) + δ)≤ 3√(δ).
Applying Weyl's inequality, it follows that
|λ_1(^1/2_p,q^1/2)
-λ_1(^1/2_p,q^1/2)|
≤ C√(δ)
and
|λ_d(^1/2_p,q^1/2)
- λ_d(^1/2_p,q^1/2)| ≤ C√(δ).
Applying these two bounds to Equation (<ref>), it follows that
κ()
≥λ_1(^1/2_p,q^1/2) - C√(δ)/λ_d(^1/2_p,q^1/2) + C√(δ)
= λ_1(_p,q) - C√(δ)/λ_d(_p,q) + C√(δ),
and
κ()
≤λ_1(_p,q) + C√(δ)/λ_d(_p,q) - C√(δ),
completing the proof.
For many distributions, Equation (<ref>) holds with high probability for small choices of δ.
As an example, suppose that for some constant K ≥ 1, _i_2 ≤ K(_i_2^2)^1/2 almost surely. Then
^T/ n - ≤ C (√(K^2 d(log d+log n)/n) + K^2 d(log d + log n)/n)
holds with probability at least 1 - 2n^-1. See Theorem 5.6.1 and Exercise 5.6.4 in <cit.>.
As another example, if the first p entries of _1 are independently drawn from the uniform distribution over the interval [1/(2√(p)), 1/√(p)] and the last q entries are independently drawn from the uniform distribution over the interval [0, 1/(2√(q))], then one can show that κ(_p,q) ≥ 5d - 1/2 and we can show that _i_2 ≤ 3(_i_2^2)^1/2 almost surely, so that κ() = Ω_(d).
On the other hand, if we treat d as a constant with respect to n, then κ() = O_(1) and Theorem <ref> implies the following corollary.
Under the GRDPG, with latent dimension d fixed with respect to n, suppose that the latent position matrix ∈^n× d satisfies 2d ≤κ(_p,q^T) = O(1) and λ_d(_p,q^T) ≥λ_⋆.
Then
inf _sup _∈_n^(p, q)d̃_2, ∞([],[])
≳√(λ_⋆∧log n/n).
Under the RDPG, <cit.> derived a similar minimax lower bound for estimation in Frobenius norm, rather than ()-norm, under the setting where the latent dimension is a constant.
For the sake of comparison, we restate their lower bound using our notation.
Let ∼() for some ∈_n,d, where d is a constant with respect to n.
Let be an estimator of the latent position matrix satisfying _F≲√(n) with probability one.
Then
inf_sup _∈^d_n{1/ninf _∈_d-_F^2}≳1/n.
Directly applying Theorem <ref> in the RDPG setting and using the fact that
_≥_F /√(n)
for any ∈^n× d, we obtain a lower bound of O(n^-1/2).
This has a gap of order λ_⋆∧√(log n) compared to our result in Corollary <ref>.
Further, we note that the techniques used in <cit.> are specialized to the RDPG, and it is not obvious how to adapt their strategy to the more general setting considered here.
§.§ Singular Subspace Estimation
For a matrix = ^T, instead of estimating the latent positions, singular subspace estimation aims to estimate the matrix ∈^n× d. There is a vast literature on singular subspace estimation, and we refer the interested reader to the recent survey by <cit.>.
<cit.> derives a minimax lower bound for subspace estimation for sparse high-dimensional principal component analysis (PCA), and <cit.> provides a more general framework to establish lower bounds in structured PCA problems.
We note that PCA is distinct from the low-rank network models considered here, and that these two papers consider estimation in the Frobenius or spectral norm in the presence of Gaussian noise, while we are concerned with estimation under the ()-norm with Bernoulli-distributed noise.
To the best of our knowledge, the prior work closest to the present manuscript is that by <cit.>, where the authors obtain minimax lower bounds for singular subspace estimation of random bipartite graphs.
A few existing works address minimax lower bounds for singular subspace estimation under the ()-norm.
<cit.> provides a lower bound under the ()-norm for subspace recovery in an incomplete low-rank matrix setting.
Lower bounds can also be found in <cit.>, derived from lower bounds on the spectral norm.
Below, we discuss why such approaches result in lower bounds weaker than those proved in the present work.
As a corollary to Theorem <ref>, we also obtain a minimax lower bound for singular subspace estimation.
The proof uses the same construction as Theorem <ref>, and thus details are omitted.
Under the same setup as Theorem <ref>, we have
inf_sup_∈_d(^n)min_∈_d- _≳√(κ_⋆(λ_⋆∧log n)/λ_⋆ n).
We note that the minimum in Equation (<ref>) is taken over rather than ∩, since our proof of Theorem <ref> only makes use of the fact that ∈, while the restriction to ∩ is necessary to ensure that our distance on ^(p,q)_n is well-defined.
We remark that lower bounds for subspace estimation derived from the Frobenius norm or the spectral norm cannot be optimal in the ()-norm setting.
These lower bounds use the fact that for any ∈^n× d,
_≥1/√(n)
Taking = to be our estimator, we have
inf_sup_∈_d(^n)min_∈_d- ≤sup_∈_d(^n)min_∈_d = 1
or
inf_sup_∈_d(^n)min_∈_d- _F ≤√(d),
where this second bound follows from Equation (<ref>).
It follows that any lower bound on the ()-norm minimax rate can be no larger than O(√(d/n)) if we derive it from the Frobenius norm or the spectral norm through Equation (<ref>) or Equation (<ref>), respectively.
Comparing this with Equation (<ref>), our lower bound in Corollary <ref> improves on this rate by a factor of order √((∧log n) / ) if d is bounded by a constant.
§.§ Upper bounds
In order to see the tightness of our lower bounds in Theorem <ref> and Corollary <ref>, we now consider upper bounds on the ()-norm estimation error in different asymptotic regimes.
Before doing so, we must introduce the concept of average node degree and sparsity of a network.
For a node in a network, its degree is defined as the number of edges incident to it. For a random network with n nodes generated from a probability matrix , the i-th node has expected degree ∑_j=1^n _ij. We define the average node degree of the network as the expected degree averaged over all nodes, namely n^-1∑_i=1^n∑_j=1^n _ij. If the average node degree grows as Θ(n), we are in the dense network regime; random networks generated by the GRDPG model are dense in this sense. In applications, however, networks are typically observed to be sparse, with average node degree growing as o(n). To incorporate the sparse regime into the GRDPG model, we scale the probability matrix by a sparsity factor ρ_n ∈ (0,1], so that the probability matrix becomes ρ_n and the average node degree grows as Θ(n ρ_n). When ρ_n = 1, we recover the dense regime, while allowing ρ_n → 0 as n →∞ produces sparse networks.
For latent position estimation under the GRDPG model, Theorem 3 in <cit.> established an upper bound on the estimation error of the adjacency spectral embedding (ASE) under the ()-norm.
We restate this result here.
There exists a universal constant c>1 and a matrix _⋆∈_d ∩ such that, provided the sparsity factor satisfies nρ_n = ω{log^4c n},
^1/2_⋆-^1/2_
=O_(log ^c n/n^1 / 2).
In the setting of Theorem <ref>, the condition number of the probability matrix satisfies κ = O(1) and λ_d = Ω( n ρ_n ) = ω( log n ).
Applying Theorem <ref>, the lower bound in Equation (<ref>) implies that the minimax estimation rate should be n^-1/2log^1/2 n, which matches the upper bound up to a polylogarithmic factor.
This also suggests the near optimality of the ASE in the GRDPG model. Note that Theorem <ref> also applies to the RDPG model since the latter is a special case of the GRDPG model.
For singular subspace estimation of low-rank plus noise models like that in Definition <ref>, an upper bound for the estimation error of the truncated SVD estimator is given by Theorem 4.2 in <cit.>. Adapted to our setting, Theorem 4.2 in <cit.> states that there exists a matrix _⋆∈, such that
_⋆-_≲κ√(ρ_n μ)+ √(ρ_n log n)/λ_d,
where μ= n _/d is the incoherence parameter of the probability matrix .
Notice that we always have μ≥ 1.
Under the GRDPG, both μ and κ are bounded by constants, and λ_1/n = O(ρ_n).
Hence, the lower bound in Equation <ref> ensured by Theorem <ref> also matches the upper bound in Equation (<ref>) up to a constant.
More generally, by the Perron-Frobenius theorem, for any probability matrix , we have
λ_1 ≥min_i∈[n]∑_j = 1^n _ij.
Hence, if we assume that = ρ_n _0 for some probability matrix _0 with entries strictly bounded between 0 and 1, then λ_1 = Θ(nρ_n), and our lower bound in Equation (<ref>) can be rewritten as Ω( √(ρ_n(λ_d∧log n))/λ_d ).
In this setting, if we further assume that μ = O(log n), the upper bound in Equation (<ref>) becomes O( √(ρ_n (λ_d ∧log n) )/λ_d ), and we see that there is a O(κ) gap (up to log factors) between the upper bound derived by <cit.> and our lower bound in Corollary <ref>.
We study this gap through simulations in Section <ref> (see Figure <ref> and Table <ref>).
Based on those experiments, we conjecture that the upper bound in <cit.> can be improved to match our lower bound (up to logarithmic factors), but we leave further exploration of this point for future work.
§ EXPERIMENTS
In this section, we compare our theoretical lower bounds from Section <ref> with the empirical estimation performance of the ASE, which, according to existing results (e.g., Theorem <ref>), matches these lower bounds up to logarithmic factors.
Recall that for a pair of estimates (, ), the ()-norm between it and the ground truth (_0, _0) is given by
min_∈∩^1/2 - _0 _0^1/2_.
Finding the exact minimizer of Equation (<ref>) is non-trivial.
Instead, we approximate it by first solving a similar Procrustes problem under the Frobenius norm,
min_∈∩^1/2 - _0 _0^1/2_F
and then plugging in the minimizer to the ()-distance.
In practice, the minimizer under the Frobenius norm provides a good approximation to the exact minimizer.
In fact, the matrix ^⋆ in Theorem <ref> is precisely the minimizer of this Procrustes problem under the Frobenius norm, and therefore the same upper bound on the latent position estimation error still holds when κ = O(1).
For details, we refer the reader to the proof of Theorem 3 in <cit.>.
In general, approximating the problem in Equation (<ref>) with the minimizer of Equation (<ref>) serves as a valid upper bound for Equation (<ref>), and if it matches the lower bound, Equation (<ref>) will as well.
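A minimal numpy sketch of this approximation follows. It relies on the observation (ours, easily verified, though not stated above) that an orthogonal matrix preserving the signature (p,q) must be block-diagonal with orthogonal blocks of sizes p and q, so the Frobenius problem in Equation (<ref>) splits into two standard orthogonal Procrustes problems; the function names are ours.

import numpy as np

def align_and_error(est, truth, p, q):
    # est, truth: scaled embeddings (e.g., hatX hatS^{1/2} and X_0 S_0^{1/2}).
    # Minimize ||est @ Q - truth||_F over block-diagonal orthogonal Q = diag(Q_p, Q_q)
    # (our assumption: these are exactly the orthogonal matrices preserving the
    # signature), then report the two-to-infinity norm of the aligned residual.
    def procrustes(A, B):
        # argmin over orthogonal Q of ||A @ Q - B||_F, via the SVD of A^T B.
        U, _, Vt = np.linalg.svd(A.T @ B)
        return U @ Vt
    d = p + q
    Q = np.zeros((d, d))
    Q[:p, :p] = procrustes(est[:, :p], truth[:, :p])
    if q > 0:
        Q[p:, p:] = procrustes(est[:, p:], truth[:, p:])
    resid = est @ Q - truth
    return np.max(np.linalg.norm(resid, axis=1)), Q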
Recall from Section <ref> that when κ = O(1), our minimax lower bounds in Theorem <ref> and Corollary <ref> match the corresponding upper bounds up to logarithmic factors.
On the other hand, when κ = ω(1), as discussed in Section <ref>, there is no matching upper bound to our lower bound.
Rather, the best upper bound of which we are aware has a O(κ√(μ/log n)) gap with our minimax lower bound.
In light of this, we consider two different asymptotic regimes, both under the sparse GRDPG as discussed in Section <ref>.
In the first, we fix κ to be a constant and vary the growth rate of the sparsity factor ρ_n.
In the second, where κ = ω(1), we fix the sparsity ρ_n to be a constant and vary the growth rate of κ.
To emphasize the dependence of κ on n, we also write κ as κ_n below.
In both asymptotic regimes, we consider networks generated from a GRDPG with latent position dimension d = 3, and signature (p,q)=(2,1).
The probability matrix _0 ∈ [0, 1]^n× n is set to be _0 = ρ_n _0 _0 _0, where _0 ∈^n× d is constructed according to Equation (<ref>) with suitably chosen constants and
_0 = (n/3, n/3κ_n, -n/3κ_n).
We vary n from 9,000 to 20,000 with a step size of 1000.
In the setting where κ = O(1), we fix κ_n = 6 and vary
ρ_n ∈{0.2, 20n^-1/2, 90n^-2/3, 190n^-3/4, 300n^-4/5, 400n^-5/6, 1800n^-1},
where the constants are chosen so that all the ρ_n are approximately equal to 0.2 when n = 9000.
In the second setting, where κ = ω(1), we fix ρ_n = 0.9 and vary
κ_n ∈{ 1207/500 n^1 / 10,971/1000 n^1 / 5,391/1000 n^3 / 10,157/1000 n^2 / 5,
63/1000 n^1 / 2,1/40 n^3 / 5, 1/100 n^7 / 10, 1/250 n^4 / 5}.
The constants here are chosen to satisfy that all κ_n are approximately equal to 6 when n = 9000.
For each combination of (n, ρ_n, κ_n), we generate 240 Monte Carlo trials when we keep κ_n = 6 and 200 trials when we keep ρ_n = 0.9.
We approximate their latent position and subspace estimation errors as described by Algorithm <ref>.
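For reference, a rough sketch of one step of this pipeline is given below; it is our own abridged illustration rather than Algorithm <ref> itself, and the function name is ours. The ASE retains the d eigenpairs of the adjacency matrix that are largest in magnitude and scales the eigenvectors by the square roots of the absolute eigenvalues.

import numpy as np

def ase(A, d):
    # Adjacency spectral embedding (sketch): keep the d eigenpairs of A with the
    # largest absolute eigenvalues, positive ones first, and return hatU |hatS|^{1/2}.
    evals, evecs = np.linalg.eigh(A)
    idx = np.argsort(-np.abs(evals))[:d]
    idx = np.concatenate([idx[evals[idx] > 0], idx[evals[idx] < 0]])
    return evecs[:, idx] * np.sqrt(np.abs(evals[idx]))

# One replicate (schematic): draw A from the scaled probability matrix rho_n * P_0,
# embed it with ase(A, 3), and align against the ground-truth embedding using the
# block-diagonal Procrustes helper sketched above.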
Figure <ref> shows the results when we fix κ_n = 6 and vary ρ_n.
The left subplot shows the estimation errors for the latent positions as a function of the number of vertices n.
We see that the lines by and large overlap one another, indicating that the growth rate of ρ_n has little effect on the latent position estimation error rate, in agreement with what our lower bounds suggest.
The right subplot shows the estimation error for subspace recovery, again as a function of the number of vertices n.
Examining the different lines in the plot, we see that as the growth rate of ρ_n gets smaller, the estimation error has a slower convergence rate, as suggested by our lower bound.
Of course, our lower bounds make predictions about the precise slope these lines should have, a point we explore in more detail below (see Table <ref> and discussion thereof).
Figure <ref> shows the results of the same experiment when we fix ρ_n = 0.9 and vary κ_n, once again showing estimation error as a function of the number of vertices n.
The left subplot shows the estimation error for the latent positions while the right subplot shows the log-estimation error for the subspaces.
In both subplots, the estimation error has a slower convergence rate as the growth rate of κ_n gets larger, again in agreement with our lower bounds in Theorem <ref> and Corollary <ref>.
The plots in Figures <ref> and <ref> suggest a roughly log-log linear relationship between the estimation error and the number of vertices n.
Given a pair (ρ_n, κ_n), if the estimation error is of order n^α, then the log estimation error should be of order αlog n.
Therefore, the slope of a line in the log-log plot provides an estimate of the exponent of the growth rate of the estimation error.
To better compare the growth rate obtained from the simulations against our lower bounds in Theorem <ref> and Corollary <ref>, we regress the log estimation errors against log n for each (ρ_n, κ_n)-pair in our simulation.
That is, we fit a linear model to the points in each line in Figures <ref> and <ref>.
The estimated slopes are listed in Tables <ref> and <ref> in the columns labeled “latent rate” and “subspace rate”.
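Concretely, each fitted slope is obtained from a one-line least-squares fit of log errors on log n; a sketch with hypothetical variable names follows.

import numpy as np

def loglog_slope(ns, errs):
    # ns: network sizes; errs: the corresponding average estimation errors.
    # The slope of log(err) regressed on log(n) estimates the exponent alpha
    # in the approximate relationship err ~ n^alpha.
    slope, _intercept = np.polyfit(np.log(ns), np.log(errs), deg=1)
    return slope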
We wish to compare these estimation rates against our theoretical lower bounds from Theorem <ref> and Corollary <ref>.
We note that these lower bounds include logarithmic factors, which have no bearing on the predicted slope of the lines in Figures <ref> and <ref> when n tends to infinity, but may lead to appreciably different lower bounds for finite n.
To account for this, we fit a second linear model, this time regressing the logarithm of our minimax lower bound against log n.
The estimated slopes are listed in Tables <ref> and <ref> in the columns labeled “latent lower” and “subspace lower”.
As an example, if we exclude the log n factor from our minimax lower bound in Theorem <ref>, then the "latent lower" column of Table <ref> would all be equal to -0.5, since our lower bound becomes Ω(n^-1/2).
In comparison, fitting a linear model to the lower bounds with logarithmic terms included yields a fitted slope of -0.447, in better agreement with the observed estimation rate.
Examining Tables <ref> and <ref>, we see that the estimated error rates are close to the rates suggested by our lower bounds.
We note, however, that for most (n, ρ_n, κ_n) triples, the estimated error rates are slightly larger than predicted by the lower bounds. One reason for this might be that the ASE is minimax optimal only up to logarithmic factors: since the minimax lower bounds are taken over estimators minimizing the worst-case risk, the ASE may be near optimal in some worst-case configurations, and the logarithmic factors in its rate can inflate the estimated slopes at finite sample sizes. It is also possible that randomness in our simulations still has a significant effect on the estimated slopes in the two tables, though we doubt this is the case.
All told, we do not necessarily expect the estimated error rates to be exactly those appearing in our minimax lower bounds. Nonetheless, our simulations do seem to suggest that our lower bounds are near optimal.
As mentioned in the beginning of this section, one of our goals is to see how the estimation errors grow when κ_n grows with n, since in this setting there is a gap between our minimax results and the best known upper bound on subspace recovery.
When we vary κ_n, we see in Table <ref> that the estimation error rates hew closely to our lower bounds, rather than approaching the upper bound in Equation (<ref>), in agreement with our conjecture in Section <ref>.
§ DISCUSSION
We have presented minimax lower bounds for the estimation error of the latent positions and singular subspaces in the generalized random dot product graph and more general low-rank network models. We addressed the non-identifiability that arises due to the use of the indefinite inner product in the GRDPG model. To account for this non-identifiability, we defined a distance on the equivalence classes of latent positions. This distance includes as a special case a commonly used distance defined for the well-studied RDPG model.
To derive our minimax lower bounds, we constructed packing sets of singular subspaces for probability matrices by stacking Hadamard matrices. We divided our analysis into two parts based on different regimes of the condition number κ = λ_1/λ_d of these probability matrices.
When κ = O(1), we proved minimax lower bounds that hold for sparse GRDPG models with a bounded latent position dimension, under the condition κ≥ 3d. We note that this condition can be relaxed to κ≥ (1+ϵ) d for any constant ϵ > 0; we have used 3 here for the sake of simplicity. The resulting lower bounds show that the adjacency spectral embedding <cit.> for estimating the latent positions is minimax optimal up to logarithmic factors. We provided examples to show that the assumption κ≥ (1+ϵ) d is not a stringent condition under either the GRDPG model or the RDPG model.
In the regime where κ = ω(1), we established minimax lower bounds that also hold for a growing latent dimension d, as long as κ≥ 3d. Here again, the constant 3 can be relaxed to 1+ϵ for any constant ϵ > 0. Under this regime, we are not aware of any matching upper bound for latent position estimation or subspace estimation; the best upper bound currently known to us has a gap of O(κ√(μ/log n)) compared to ours. To evaluate how close our lower bounds are to the actual performance of the adjacency spectral embedding, we conducted simulations under different regimes of κ. The results are in agreement with our lower bounds.
In our future work, we would like to relax the assumption on κ. The main difficulty is that constructing packing sets for singular subspaces of probability matrices with small κ is nontrivial, as it requires a careful combinatorial analysis of the positive and negative patterns of Hadamard matrices or other construction techniques. In addition, we would like to close the theoretical gap between the upper bounds and lower bounds when κ = ω(1). As suggested by our simulation results, we conjecture that in the regime where the condition number is allowed to grow, the existing upper bounds are not sharp. A tighter upper bound requires a more careful study of how noise perturbs singular subspaces and singular values of probability matrices. Lastly, low-rank matrices with a growing rank d are a less studied regime, yet this provides a more realistic model for many real world networks <cit.>. Future work should investigate the estimation error when d ≥κ.
§ PROOF OF LEMMA <REF>
By definition of our equivalence relation, if ∼, then there exists ∈ such that =, so that, expanding our definition of _,
_ = _p,q^T
= _p,q^T ^T
= _p,q^T
= _.
Conversely, suppose that _ = _.
Write = (_1, _2), where _1 ∈^n × p has its p columns corresponding to the “positive” part of _ and _2 corresponds to the q negative eigenvalues of _.
Writing = (_1, _2) similarly, since _ = _, we have
_1_1^T - _2_2^T
= _p,q^T
= _
= _
= _p,q^T
= _1_1^T - _2_2^T
Rearranging, we have
[_1 _2][ _1^T; _2^T ] = [_1 _2][ _1^T; _2^T ] ,
and it follows that [_1, _2]^T and [_1, _2]^T have the same null space and thus [_1, _2] and [_1, _2] span the same column space.
As a result, there exists a matrix ∈^d× d such that
[_1 _2] = [_1 _2].
Writing in block matrix form,
= [ _11 _12; _21 _22; ]
where _11∈^p× p, _12, _21^T ∈^p× q and _22∈^q× q.
Rearranging Equation (<ref>), we have
_1 _11 = _1 - _2 _21
_2 - _1 _12 = _2 _22 .
Writing Equation (<ref>) in matrix form, we have
[_1 _2] [ _11 -_12; _q ] = [_1 _2] [ _p ; -_21 _22 ].
We note that _22 is invertible, since otherwise there exists a nonzero vector ∈^q such that _22 = 0, from which it would follow that _2 - _1 _12 =, which contradicts the fact that has full column rank.
Since _22 is invertible, we can invert the matrix on the right-hand side, and rearranging Equation (<ref>),
it follows that there exists a matrix ∈^d× d such that =.
To see that ∈, note that since _ = _, we have (_p, q - _p,q^T) ^T =.
Since has full column rank, we must have _p, q - _p,q^T = 0 and therefore ∈.
§ EXAMPLE: EQUATION (<REF>) IS NOT A DISTANCE
In Section <ref>, we made a first attempt at defining a distance on the set ^(p,q)_n according to Equation (<ref>), which we restate here for the sake of convenience:
inf__1, _2 ∈_1 - _2_.
We stated in the text that this quantity fails to be a distance.
We illustrate that point here by constructing a triple of points in ^(1,1)_2 for which the triangle inequality appears to fail.
We have n=2, p=1 and q=1. By Propositions 6.1 and 6.2 in <cit.>, any ∈_1,1 is of the form
(α) = [ coshα sinhα; sinhα coshα; ],
where α∈ and is one of the matrices
[ 1 0; 0 1 ],
[ -1 0; 0 1 ],
[ 1 0; 0 -1 ] or [ -1 0; 0 -1 ].
For , ∈^2× 2, write
= [ x_11 x_12; x_21 x_22 ], = [ y_11 y_12; y_21 y_22 ],
observe that we have
inf__1, _2_1 - _2_ = inf_α_1, α_2, _1, _2(α_1)_1
- (α_2)_2_
= inf_α_1, α_2, _1, _2(α_1)- (α_2)_2_1 _
= inf_α_1, α_2, (α_1)- (α_2)_.
Since
[ coshα sinhα; sinhα coshα; ][ 1 0; 0 -1; ] =
[ coshα -sinhα; sinhα -coshα; ]
= [ cosh(-α) sinh(-α); -sinh(-α) -cosh(-α); ]
= [ 1 0; 0 -1; ][ cosh(-α) sinh(-α); sinh(-α) cosh(-α); ]
and a similar commutative property holds for -_1,1, we have
inf__1, _2_1 - _2_ = inf_α_1, α_2, (α_1)- (α_2)_.
Suppose that the first columns of and are strictly positive and that
x_i1^2 - x_i2^2, y_i1^2 - y_i2^2> 0
for i=1,2.
Then neither = -_1,1 nor = -_2 will be the minimizer of the quantity on the right-hand side of Equation <ref>. To see this, we notice that ((α_1))_i1 > 0 and ((α_2))_i1 > 0 hold for i=1,2 and any α_1, α_2 ∈. It follows that for = -_1,1 or -_2, ((α_2))_i1 is always strictly negative, and we always have (α_1)- (α_2)_ > (α_1)- (-_1,1) (α_2)_ by flipping the term ((α_2))_i1 to be strictly positive. Therefore, we need only to consider
f(, ) = min{inf_α_1, α_2(α_1)- (α_2)_, inf_α_1, α_2(α_1)- _1,1(α_2)_}
= min{inf_α_1, α_2g(, , α_1, α_2), inf_α_1, α_2g(, _1,1, α_1, α_2)},
where we define
g(, , α_1, α_2) = (α_1)- (α_2)_.
Now consider three matrices
= [ 1.9 1.2; 4 -3.8 ],
= [ 12.7 -9.8; 4.1 -0.9 ],
and = [ 0.03 -0.02; 2.3 -1.9 ].
Note that if we divide , and by a sufficiently large constant C, then they become valid latent positions for probability matrices.
Hence, if the triangle inequality does not hold for , and , then Equation (<ref>) is also not a valid distance when we restrict the matrices in its arguments to be latent positions of probability matrices.
For our choice of , and , we use gradient descent to find the approximate values of f(, ), f(, ) and f(, ).
We find that f(, ) ≈ 2.7324, f(, ) ≈ 1.2291, f(, ) ≈ 7.8288 and from this approximation, we have f(, ) + f(, ) < f(, ).
Finding the value of f(, ) turns out to be a nonconvex optimization problem, and we have no guarantee of finding the global minimum with gradient descent methods.
To better understand the landscape of the optimization problem, we provide contour plots of g(, , α_1, α_2), g(, _1,1, α_1, α_2), g(, , α_1, α_2), g(, _1,1, α_1, α_2), g(, , α_1, α_2), and g(, _1,1, α_1, α_2) as functions of α_1 and α_2 in Figures <ref>, <ref> and <ref>.
The contour plots in Figures <ref> through <ref> suggest that the global minimizers of all six of these functions lie in bounded regions.
We provide an intuitive argument to show this.
Noting that
[ coshα sinhα; sinhα coshα ] =
[ 1/√(2) 1/√(2); 1/√(2) -1/√(2) ][ e^α 0; 0 e^-α ][ 1/√(2) 1/√(2); 1/√(2) -1/√(2) ]
=: ^* [ e^α 0; 0 e^-α ]^*,
we denote := ^* and := ^* and have
g(, , α_1, α_2) = [ e^α_1 0; 0 e^-α_1 ] - [ e^α_2 0; 0 e^-α_2 ]_.
Letting t_1 = e^α_1 and t_2 = e^α_2, it follows that
[ g(, , α_1, α_2) ]^2
= max_i=1, 2{(t_1x_i1 - t_2y_i1)^2+(1/t_1x_i2 - 1/t_2y_i2)^2}.
Clearly, if (α_1, α_2) → (-∞, ∞) or (α_1, α_2) → (∞, -∞), then g(, , α_1, α_2) →∞.
On the other hand, we have
[ g(, , α_1, α_2) ]^2 ≥max_i=1, 2{(t_1x_i1 - t_2y_i1)^2},
and
[ g(, , α_1, α_2) ]^2 ≥max_i=1, 2{(1/t_1x_i2 - 1/t_2y_i2)^2}.
Therefore, if x̃_11ỹ_21≠x̃_21ỹ_11, then g(, , α_1, α_2) →∞ when (α_1, α_2) → (∞, ∞).
Similarly, if x̃_12ỹ_22≠x̃_22ỹ_12, then g(, , α_1, α_2) →∞ when (α_1, α_2) → (-∞, -∞).
Thus, combining all the cases, it follows that g(, , α_1, α_2) is coercive if
x̃_1iỹ_2i≠x̃_2iỹ_1i for i=1,2.
One can verify that , , , _1,1 and _1,1 satisfy the condition in Equation (<ref>).
Hence, we indeed have that the global minimizers of all the functions plotting in Figures <ref> through <ref> lie in bounded regions.
For a more careful characterization of these bounded regions, we use polar coordinates to represent , and (t_1, t_2). For i, j ∈{1, 2}, define
(x̃_ij, ỹ_ij) = r_ij(cosθ_ij, sinθ_ij)
(t_1, t_2) = s_1(cosψ, sinψ)
(1/t_1, 1/t_2) = s_2(cosϕ, sinϕ)
for ψ, ϕ∈ [0, π/2].
Note that by our choice of , since _i1 > |_i2| holds for i = 1, 2, we have = ^* > 0 elementwise.
The same holds for , , _1,1 and _1,1.
Hence, we can restrict θ_ij∈ [0, π/2].
Assuming that r_ij≥ l_j > 0 for i, j ∈{1,2}, we have
max_i=1, 2{(t_1x_i1 - t_2y_i1)^2} = s_1^2 max{r_11^2 cos^2(θ_11 - ψ),
r_21^2 cos^2(θ_21 - ψ)
}
≥ s_1^2 l_1^2 max{cos^2(θ_11 - ψ), cos^2(θ_21 - ψ)}.
Note that for α, β, ψ∈ [0, π/2], if α≠β, then the minimum of
max{cos^2(α - ψ), cos^2(β - ψ)} = 1/2 +1/2max{-cos(2ψ-2α), -cos(2ψ-2β)}
occurs at ψ = (β+α)/2.
Therefore, we have
max_i=1, 2{(t_1x_i1 - t_2y_i1)^2}≥s_1^2 l_1^2/2(1 - cos (θ_11 - θ_21) ),
and similarly,
max_i=1, 2{(x_i2/t_1 - y_i2/t_2)^2} ≥s_2^2 l_2^2/2(1 - cos (θ_12 - θ_22) ).
Therefore, it follows from Equations (<ref>) and (<ref>) that
g(, , α_1, α_2)^2 ≥min{s_1^2 l_1^2/2 (1 - cos (θ_11 - θ_21)), s_2^2 l_2^2/2 (1 - cos (θ_12 - θ_22))}.
Based on Equation (<ref>), one can verify that as long as s_i ≥ 1000 or |α_i| ≥ 7 for either i=1 or i=2, then each of g(, , α_1, α_2), g(, _1,1, α_1, α_2), g(, , α_1, α_2), g(, _1,1, α_1, α_2), g(, , α_1, α_2), and g(, _1,1, α_1, α_2) is greater than 8.
Further providing bounds for f(, ), f(, ) and f(, ) within |α_i| ≤ 7 for both i = 1 and i=2 would show that the triangle inequality does not hold.
Rather than providing exact bounds, we evaluate each function on a 1000-by-1000 grid in [-7, 7]×[-7, 7], and the minima on this grid do not fall below the values found by gradient descent.
Combining the approximations provided by gradient descent and the contour plots in Figures <ref> through <ref>, our results suggest that Equation (<ref>) fails to obey the triangle inequality for certain triples of points, and hence is not a distance.
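The grid evaluation described above is easy to reproduce; the sketch below is our own code, using the three example matrices displayed earlier, taking the group elements to act on the right of the latent position matrices as in the reduction above, and defaulting to a 1000-point grid in each coordinate.

import numpy as np

X = np.array([[1.9, 1.2], [4.0, -3.8]])
Y = np.array([[12.7, -9.8], [4.1, -0.9]])
Z = np.array([[0.03, -0.02], [2.3, -1.9]])
I11 = np.diag([1.0, -1.0])

def Q(a):
    # Hyperbolic element of the indefinite orthogonal group with signature (1, 1).
    return np.array([[np.cosh(a), np.sinh(a)], [np.sinh(a), np.cosh(a)]])

def f(A, B, grid=np.linspace(-7.0, 7.0, 1000)):
    # Grid approximation of the quantity in Equation (<ref>): minimize the
    # two-to-infinity norm of A Q(a1) - B' Q(a2) over the grid and over the two
    # sign variants B' in {B, B I_{1,1}} retained by the reduction above.
    best = np.inf
    for B2 in (B, B @ I11):
        QB = np.stack([B2 @ Q(a2) for a2 in grid])        # precompute over a2
        for a1 in grid:
            diff = (A @ Q(a1))[None, :, :] - QB            # shape (len(grid), 2, 2)
            best = min(best, np.linalg.norm(diff, axis=2).max(axis=1).min())
    return best

# Comparing f(X, Y) + f(Y, Z) with f(X, Z) then exhibits the failure of the
# triangle inequality reported above.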
§ PROOF OF THEOREM <REF>
Here, we give a detailed proof of Theorem <ref>, drawing on a number of technical results that can be found in later sections of this appendix.
Our main tool is Theorem 2.7 of <cit.>, which we restate here for ease of reference.
Let Θ be a set of parameters endowed with a semi-distance δ, let M ≥ 2 and suppose that Θ contains elements θ_0, θ_1, …, θ_M such that:
(i) δ(θ_j, θ_k) ≥ 2 s>0, ∀ 0 ≤ j<k ≤ M
(ii) Letting P_0,P_1,…,P_M be probability measures associated to respective parameters θ_0,θ_1,…,θ_M, it holds for all j=1,2,…,M that P_j ≪ P_0
and
1/M∑_j=1^M (P_j P_0) ≤αlog M,
with 0<α<1 / 8.
Then we have
inf _θ̂sup_θ∈Θ_θ[ δ(θ̂, θ)] ≥ c_α s,
where inf_θ̂ denotes the infimum over all estimators and c_α>0 is a constant depending only on α.
To apply Theorem <ref>, we must construct a collection of elements of (,,p,q) whose pairwise distances as measured by are lower-bounded, but whose pairwise KL-divergences are close (i.e., pairs of these elements give rise to similar distributions over the set of n-vertex networks).
We break our proof into two separate cases, based on the growth rate of .
These two different regimes require slightly different constructions, owing to the different spectral structures they imply.
We first consider the case where = O(1).
For latent dimension d, let k_0 be such that 2^k_0-1 < d ≤ 2^k_0, and define m = ⌊ n/2^k_0⌋.
Lemma <ref>, proved in Section <ref>, ensures the existence of a collection of M = 2^k_0m = Ω(n) matrices
= {_i : i = 0,1,2,…, M}⊂_d(^n)
such that, with any λ_1 ≤ n/3 and λ_d = λ_1/κ, for
= ( λ_1, λ_1/κ,
…, λ_1/κ),
it holds for all i ∈ [M] ∪{0} and all j ∈ [M] not equal to i,
min_∈_d ∩_i^1/2 - _j^1/2_≥ C √(κ (log n ∧λ_d)/n)
for a suitably-chosen constant C > 0.
Taking λ_d = and λ_1 =, our assumption that 3 ≤ n implies
λ_1 ≤ n/3 and κ =.
Therefore, the pairs (, ), where ∈, are indeed elements of (,,p,q).
Thus, the set of matrices in Equation (<ref>) forms a 2s-packing set of (, , p, q) under the ()-norm, where s = C√((log n ∧) / n ) for a suitably chosen constant C > 0.
Writing = ^1/2_p,q^1/2 for ease of notation and taking _i = _i _i^T for all i ∈ [M]∪{0}, we note that (_i,) induces a distribution over n-vertex networks via _i.
Lemma <ref>, proved in Section <ref>, upper bounds the KL divergences between these distributions over networks as
( _i _0 )
≤1/10log n for all i∈ [M]
for all suitably large n.
Averaging over i ∈ [M],
1/M∑_j=1^M ( _i _0 )
≤1/10log n.
Thus, applying Theorem <ref> with s = C√((log n ∧) / n ) and α = 1/10, our result holds for the setting where = O(1).
In the setting where = ω(1), we use a different construction, but our proof largely parallels the argument given above.
Let κ= and set λ_2=λ_3=…=λ_d = and λ_1 =; by assumption, we then have λ_1 ≤ n/3. Define the matrix = (λ_1,λ_2,…,λ_d) ∈^d × d.
By Lemma <ref>, proven in Section <ref>, there exists a collection
= {_i : i=1,2,…, ⌊ n/2 ⌋}⊂_d(^n)
such that for any pair of indices 0 ≤ i < j ≤⌊ n/2 ⌋ and any ζ_d ≤ 1/√(640 d),
min_∈∩_i ^1/2 - _j ^1/2_2, ∞≥ζ_d √(d-1)/2√(κ(λ_d ∧log n)/n).
Since κ = and λ_d = by construction,
the set constitutes a 2s-packing set for (,,p,q) with s = C√( (log n∧) / n) for C>0 chosen suitably small.
It remains for us to upper bound the KL divergence between the distributions induced by and the elements of .
Recall our notation = ^1/2_p,q^1/2 and _i = _i _i^T for i=0,1,2,…,⌊ n/2 ⌋.
Applying Lemma <ref>, proven in Section <ref>, for any i =1,2,…,⌊ n/2 ⌋,
_i - _0 _F^2
≤1/80λ_1 (λ_d ∧log n)/n.
Lemma <ref>, also proven in Section <ref>, ensures that _0 has entries bounded by
λ_1 / 3n ≤^(0)_ij≤2/3 for all i,j.
Thus, applying Lemma <ref>, for any i=1,2,…,⌊ n/2 ⌋,
( _i _0 )
≤9 n/λ_1_i - _0 _F^2
≤9/80log n.
Averaging over our packing set,
1/⌊ n/2 ⌋∑_i=1^⌊ n/2 ⌋( _i _0 )
≤9/80log n.
Applying Theorem <ref> with s = C√( (log n∧) / n) and α = 9/80 < 1/8 establishes our result in the regime where = ω(1), completing the proof.
§ THEOREM <REF>: CONSTANT CONDITION NUMBER
Here, we prove Theorem <ref> in the regime where κ = λ_1/λ_d = O(1).
Recall that we have = ^1/2_p,q^1/2^T, where is a diagonal matrix with positive on-diagonal entries λ_1 ≥λ_2 ≥…≥λ_d.
By assumption in Theorem <ref>, κ≥ 3d, so the latent position dimension d may be considered bounded throughout this section.
§.§ Constructing a Packing Set
We begin by constructing our collection of elements of (,,p,q) and establishing a lower bound on their pairwise distances under .
As we mentioned in Section <ref>, our construction makes use of Hadamard matrices to construct a collection of matrices with orthonormal columns, which will correspond to the singular subspaces of a collection of probability matrices.
We will then show that this collection of singular subspaces, multiplied by a diagonal matrix of suitable eigenvalues, constitutes the representatives of equivalence classes that form a packing set over (,,p,q).
In Section <ref>, we establish that the KL divergences of their associated probability matrices are suitably bounded.
Recall that a Hadamard matrix of order n is an n-by-n matrix whose entries are drawn from {-1,1} and whose rows are mutually orthogonal.
In particular, Hadamard matrices have the useful property that if is a Hadamard matrix of order n, then the matrix
[ ; - ]∈{-1,1}^2n × 2n
is a Hadamard matrix of order 2n.
According to Sylvester's construction <cit.>, we can construct Hadamard matrices of order 2^k recursively by
_1 = [ 1 ], _2 = [ 1 1; 1 -1 ],
and
_2^k+1=[ _2^k _2^k; _2^k -_2^k ]
for any integer k ≥ 0.
We write _n to denote a Hadamard matrix of order n = 2^k for integer k ≥ 0 constructed in this manner.
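Sylvester's construction is immediate to implement; the helper below is a short sketch (the function name is ours).

import numpy as np

def sylvester_hadamard(k):
    # Hadamard matrix of order 2**k, built by the recursion H -> [[H, H], [H, -H]].
    H = np.ones((1, 1), dtype=int)
    for _ in range(k):
        H = np.block([[H, H], [H, -H]])
    return H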
Under the conditions of Theorem <ref>, suppose that = O(1).
There exists a matrix _0 ∈^n × d with orthonormal columns such that
max_j∈[n], k∈[d]|^(0)_jk| ≤1/√(n-r),
1/√(n)≤max_k∈[d]|^(0)_ik| ≤1/√(n-r) for all i ∈ [2^k_0m],
and
√(d/n)≤√(∑_k=1^d (^(0)_ik)^2)≤√(d/n-r) for all i ∈ [2^k_0m].
For a given latent space dimension d = p + q, we let k_0 > 0 be the integer such that 2^k_0-1 < d ≤ 2^k_0.
Let _2^k_0, d denote the matrix obtained by retaining only the first d columns of the Hadamard matrix _2^k_0.
By construction, _2^k_0, d is a 2^k_0× d matrix with orthogonal columns.
Assume that n = 2^k_0 m + r where m>0 is an integer and r is a remainder term such that 0 ≤ r < 2^k_0 < 2d.
To obtain an n× d matrix with orthonormal columns, we first stack m copies of _2^k_0, d together vertically to obtain a matrix _m ∈^m 2^k_0× d given by
_m = . [ _2^k_0, d; _2^k_0, d; ⋮; _2^k_0, d ]} m copies of _2^k_0, d.
Defining
_r = [ _r ; ]∈^r × d,
we construct a matrix _0 ∈^n × d with orthonormal columns by stacking _m and _r and rescaling their columns.
For i ∈ [2^k_0] and j ∈ [d], let h_i, j be the (i, j) entry of _2^k_0, d.
Noting that h_i,1 = 1 for all i ∈ [2^k_0], our construction of _0 is then given by
_0 = [ 1/√(n) h_12/√(n-r) h_13/√(n-r) … h_1 d/√(n-r); ⋮ ⋮ ⋮ ⋯ ⋮; 1/√(n) h_2^k_0,2/√(n-r) h_2^k_0, 3/√(n-r) … h_2^k_0, d/√(n-r); ⋮ ⋮ ⋮ ⋯ ⋮; 1/√(n) h_1,2/√(n-r) h_1,3/√(n-r) … h_1, d/√(n-r); ⋮ ⋮ ⋮ ⋯ ⋮; 1/√(n) h_2^k_0,2/√(n-r) h_2^k_0, 3/√(n-r) … h_2^k_0, d/√(n-r); _r/√(n) ].
Noting that |h_i,j| = 1 for all i ∈ [2^k_0] and j ∈ [d], Equations (<ref>)-(<ref>) all follow from the construction in Equation (<ref>).
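A numerical sketch of this construction is given below; the helper is our own, and we take each of the r remainder rows to be (1, 0, …, 0) before rescaling, an assumption consistent with the bounds stated in the lemma.

import numpy as np
from scipy.linalg import hadamard   # Sylvester-type Hadamard matrix of order 2**k

def build_U0(n, d):
    # Sketch of the frame U_0: stack m copies of the first d columns of H_{2^{k_0}},
    # append r remainder rows equal to e_1^T, and rescale the columns so that
    # U_0 has orthonormal columns (U_0.T @ U_0 = I_d).
    k0 = int(np.ceil(np.log2(d)))
    block = hadamard(2 ** k0).astype(float)[:, :d]
    m, r = divmod(n, 2 ** k0)
    tail = np.zeros((r, d))
    tail[:, 0] = 1.0
    U0 = np.vstack([block] * m + [tail])
    U0[:, 0] /= np.sqrt(n)            # first column: every entry equals 1/sqrt(n)
    U0[:, 1:] /= np.sqrt(n - r)       # other columns: supported on the first 2^{k0} m rows
    return U0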
To form our packing set, we will construct a collection of matrices that are far from _0 (and from one another) in ()-distance, but yield similar distributions over networks as measured by KL-divergence.
We will do this by selectively modifying one row of _0 at a time.
Toward this end, Lemma <ref> ensures the existence of a collection of vectors from which we will construct these perturbed versions of _0.
Under the conditions of Theorem <ref>, suppose that = O(1).
Let _0 be the matrix guaranteed by Lemma <ref> and let λ_1 ≥λ_2 ≥⋯≥λ_d > 0 be arbitrary.
For each i ∈ [n], let _i ∈^d denote the i-th row of _0.
For latent space dimension d, let k_0 be such that 2^k_0-1 < d ≤ 2^k_0 and let m = ⌊ n/2^k_0⌋.
For n sufficiently large, for each i ∈ [2^k_0m], there exists a vector _i such that
_i^T _i ≥ 0,
|_i^T _i/_i_2_i_2| < √(3)/2,
and for any constant c_0>0,
|_i,ℓ|
= c_0/λ_ℓ√(λ_1 (λ_d ∧log n)/n d), ℓ∈ [d].
Fix i ∈ [2^k_0 m].
We will construct _i ∈^d satisfying Equations (<ref>), (<ref>) and (<ref>).
Toward this end, consider the vector
= c_0 √(λ_1 (λ_d ∧log n)/n d)( λ_1^-1, λ_2^-1, …,
λ_d^-1)^T ∈^d,
where c_0>0 is any constant of our choosing.
Define _i ∈^d by
_i, ℓ
= 1/_i_2( _i, ℓ) |_i,ℓ|
ℓ∈ [d],
where for each i ∈ [2^k_0m], we denote the i-th row of _0 as _i ∈^d.
By Lemma <ref>, there exists a vector _i ∈^d such that
|_i^T _i| < √(2/3).
We define _i ∈^d by _i, ℓ = (_i, ℓ) |_i, ℓ|, and Equation (<ref>) is satisfied trivially.
Define vector ∈^d according to
_i, ℓ = (_i, ℓ) |_i, ℓ|/_i_2
for ℓ∈ [d].
Substituting and applying the triangle inequality,
| _i^T _i /_i _2 _i _2 |
≤| _i^T (_i - _i ) | + | _i^T _i |
≤_i - _i _2 + √(2/3),
where the second inequality follows from Cauchy-Schwarz, the fact that _i = 1 and Equation (<ref>).
Plugging in the definitions of _i and _i,
_i - _i _2
=
√(∑_ℓ = 1^d
(|_i, ℓ|/_i_2
- 1/√(d))^2).
From Equations (<ref>) and (<ref>), we have
(√(n-r/n) - 1)·1/√(d)≤|_i, ℓ|/_i_2 - 1/√(d)≤(√(n/n-r) - 1)·1/√(d)
for any ℓ∈ [d].
As a result, Equation (<ref>) is further bounded by
_i - _i _2
≤max{(1-√(n-r/n)),
(√(n/n-r) - 1) }
≤max{r/n, r/2(n-r)},
where the second inequality follows from the fact that for any x ∈ [0,1], we have
1 - √(1 - x)≤ x and √(1 + x) - 1 ≤x/2.
Hence, choosing n sufficiently large, for example n ≥ 42 d ≥ 21 r, it follows that
| _i^T _i /_i _2 _i _2 |
≤1/21
+ √(2/3)≤√(3)/2.
To see that _i can be chosen to satisfy Equation (<ref>), simply note that if _i^T _i < 0, we may replace _i with -_i without violating Equations (<ref>) and (<ref>).
With Lemma <ref> in hand, we are ready to construct perturbations of the matrix _0 guaranteed by Lemma <ref>.
Our packing set argument in Theorem <ref> requires that the matrices {_i : i=0,1,2,…,2^k_0m } be suitably well separated in ()-distance.
Lemma <ref> establishes that this is the case.
Under the setting of Theorem <ref>, suppose that κ = O(1).
For latent space dimension d, let k_0 be such that 2^k_0-1 < d ≤ 2^k_0 and let m = ⌊ n/2^k_0⌋.
Let = (λ_1,λ_2,…,λ_d) for λ_1 ≥λ_2 ≥⋯≥λ_d > 0 obeying κ = λ_1 / λ_d.
For all suitably large n, there exists a collection of matrices {_i : i =0,1,2,…,2^k_0 m } such that for all i ∈ [2^k_0m] and j ∈ [2^k_0m] ∪{0} not equal to i,
min_∈∩_i^1/2 - _j ^1/2_≥1/4_i^T ^1/2_2,
where for each i ∈ [2^k_0m], _i is the vector guaranteed by Lemma <ref>.
For each i ∈ [2^k_0m], define the matrix
_i = _0 + _i_i^T ∈^n × d ,
Then, denoting the SVD of _i by
_i = _i_i ^T_i, define for each i ∈ [2^k_0m] the matrix
_i = _i _i^T ∈^ n × d .
For the sake of simplicity, we prove Equation (<ref>) for i, j ∈ [2^k_0 m].
When either i=0 or j=0, the proof follows a similar idea.
For a fixed ∈∩, by the triangle inequality, we have
_i ^1/2 - _j ^1/2_2, ∞ ≥_i ^1/2 - _j ^1/2_2, ∞
- _i^1/2 - _i^1/2_2,∞
- _j^1/2 - _j^1/2_2,∞.
Thus, to obtain our desired lower bound on _i ^1/2 - _j ^1/2_2, ∞, it will suffice to prove
* an upper bound for every _i^1/2 - _i^1/2_2,∞ and
* a lower bound for every _i ^1/2 - _j ^1/2_2, ∞.
To establish Item <ref>, note that by our definitions and using basic properties of the ()-norm and the operator norm,
_i^1/2 - _i^1/2_2,∞ = _i _i^T ^1/2 - _i _i _i^T ^1/2_2, ∞
≤_i_2, ∞(_i - _d)_i^T^1/2
≤√(λ_1)_i_2, ∞_i - _d .
By our choice of _i in Equation (<ref>), we have
_i^2_2 ≤c_0^2 κ/nλ_d ∧log n/λ_d≤c_0^2κ/n,
where c_0 > 0 is a constant of our choosing.
For n > c_0^2κ, we have _i_2 < 1, so that Lemmas <ref> and <ref> apply, and it follows that
_i^1/2 - _i^1/2_2,∞≤ 2√(λ_1)√(d/n-r + _i_2/1 - _i_2)(√(d/n-r) + 1/4_i_2)_i_2.
Observing that ^1/2_i ≥√(λ_d)_i,
_i^1/2 - _i^1/2_2,∞≤ 2√(κ)√(d/n-r + _i_2/1 - _i_2)(√(d/n-r) + 1/4_i_2)^1/2_i_2.
Hence, in order to show that
_i^1/2 - _i^1/2_2,∞≤1/8^1/2_i_2,
which will suffice for our upper bound in Item <ref>, it suffices to have
√(d/n-r + _i_2/1 - _i_2)(√(d/n-r) + _i_2 /4)
≤3d/2(n-r) + _i_2/1 - _i_2 + _i_2^2/32≤1/16√(κ)
where the first inequality uses the fact that a(b+c) ≤ a^2 + 1/2b^2 + 1/2c^2.
This can be satisfied by requiring
d/n-r≤1/96√(κ), _i_2/1-_i_2≤1/48√(κ) and _i_2^2 ≤2/3√(κ),
where the last inequality is satisfied once n ≥max{(96√(κ) + 2)d, 2500c_0^2 κ^2}.
Turning our attention to Item <ref>, by construction, we have for any ∈∩,
_i^1/2 - _j^1/2_2, ∞
= max {^T ^1/2(_i + _i)
- ^1/2_i_2,.
. ^T ^1/2( _j + _j)
- ^1/2_j _2, .
. max_ℓ∈[n]
ℓ≠ i,j^T ^1/2_ℓ
- ^1/2_ℓ_2
}.
If ^1/2_i_2 ≥ 2^T ^1/2_i - ^1/2_i _2, then trivially
^T ^1/2(_i + _i)
- ^1/2_i _2
≥^1/2_i _2
- ^T ^1/2_i - ^1/2_i _2
≥1/2^1/2_i_2.
Otherwise, suppose that ^1/2_i_2 < 2^T ^1/2_i - ^1/2_i _2.
When n ≥ 8 d, we have m ≥ 3.
Note that n≥ 8d holds eventually, since 3d ≤κ = O(1) by assumption.
From our construction and using the fact that m≥ 3, we can always find an ℓ∈ [2^k_0m] distinct from i and j such that _ℓ = _i and
^T ^1/2_ℓ - ^1/2_ℓ_2
= ^T ^1/2_i - ^1/2_i _2
≥1/2^1/2_i_2.
Thus, combining the two cases with Equation (<ref>),
_i^1/2 - _j^1/2_2, ∞≥1/2^1/2_i_2.
Combining this with the upper bound in Equation (<ref>), we have
_i ^1/2 - _j ^1/2_2, ∞ ≥^1/2_i_2 / 2
- ^1/2_i_2 / 8
- ^1/2_i_2 / 8 ≥1/4^1/2_i_2.
Noting that the right-hand side does not depend on , minimizing over ∈∩ completes the proof.
Under the conditions of Theorem <ref>, suppose that κ = O(1).
For latent space dimension d, let k_0 be such that 2^k_0-1 < d ≤ 2^k_0 and let m = ⌊ n/2^k_0⌋.
Let = (λ_1,λ_2,…,λ_d) for λ_1 ≥λ_2 ≥⋯≥λ_d > 0 obeying κ = λ_1 / λ_d.
For all suitably large n, there exists = (λ_1,λ_2,…,λ_d) ∈^d and a collection of matrices {_i : i =0,1,2,…,2^k_0 m } such that for all distinct i, j ∈ [2^k_0m]∪{0},
min_∈∩_i^1/2 - _j ^1/2_≥ c_0 / 8√( (λ_d ∧log n) κ/ n ),
where c_0 > 0 is as in Lemma <ref>.
Let λ_1 be such that λ_1 ≤ n/2 and
λ_2=λ_3 =⋯=λ_d = λ_1/κ,
and set = ( λ_1,λ_2,…,λ_d) ∈^d× d.
By Lemma <ref>, there exists a collection of matrices {_i : i =0,1,2,…,2^k_0 m }⊂_d(^n) such that for all i ∈ [2^k_0m] and j ∈ [2^k_0m] ∪{0} not equal to i,
min_∈∩_i^1/2 - _j ^1/2_≥1/4_i^T ^1/2_2,
where, letting _i ∈^d denote the i-th row of _0, _i satisfies
| _i,ℓ|
= c_0 /λ_ℓ√(λ_1 (λ_d ∧log n)/ n d ).
Expanding and plugging in our choice of ,
_i^T ^1/2_2^2
= ∑_j=1^d _i,j^2 λ_j
= c_0^2 / nd ∑_j=1^d
λ_1 (λ_d ∧log n)/λ_j
= c_0^2 / nd ( 1 + (d-1)κ)
(λ_d ∧log n).
Using the fact that κ=1 when d=1 and that (d-1) ≥ 1 otherwise, it follows that, taking square roots and using the fact that d is a constant,
_i^T ^1/2_2
≥ c_0
√(1/d + (1-1/d)κ)√(λ_d ∧log n/n)≥ c_0 / 2√( (λ_d ∧log n) κ/ n ).
Plugging this lower-bound into Equation (<ref>), it follows that
min_∈∩_i^1/2 - _j ^1/2_≥ c_0 / 8√( (λ_d ∧log n) κ/ n ),
completing the proof.
Lemma <ref> guarantees the existence of a collection elements of _d(^n) that are well separated in ()-norm after right-multiplication by some = ( λ_1,λ_2,…,λ_d) ∈^d.
To construct a collection of valid expected adjacency matrices from this collection of d-frames, we must choose so that ^1/2_p,q^1/2^T has all entries between 0 and 1 for every in our collection.
Lemma <ref> ensures that this is possible.
Under the conditions of Theorem <ref>, suppose that κ = O(1).
Let _0 ∈_d(^n) be the matrix guaranteed by Lemma <ref>.
There exist λ_1 ≥λ_2 ≥⋯≥λ_d > 0 such that, letting = ( λ_1,λ_2,…,λ_d) ∈^d × d, the matrix
_0 = _0 ^1/2_p,q^1/2_0^T
obeys
λ_1/3n≤^(0)_ij≤2/3.
for all i,j ∈ [n] and all n sufficiently large.
We will choose λ_1 ≥λ_2 ≥⋯≥λ_d > 0 so that
∑_j=2^d λ_j ≤λ_1/1+ϵ and ∑_j=1^d λ_j ≤n/1+ϵ,
for any constant ϵ > 0.
So long as κ = λ_1/λ_d ≥ (1+ϵ)d, we can satisfy Equation (<ref>) by taking
λ_1 ≤ n/(2+ϵ) and λ_d = λ_d-1 =… = λ_2 = λ_1/κ, so that
∑_j=2^d λ_j = d-1/κλ_1 ≤λ_1/1+ϵ
and
∑_j=1^d λ_1 ≤2+ϵ/1+ϵλ_1 ≤n/1+ϵ.
To find lower and upper bounds for each entry of _0, we unroll the definition in Equation (<ref>) to write
^(0)_ij≥1/nλ_1 - 1/n - r∑_l = 2^d λ_ℓ≥(1/n - 1/(1+ϵ)(n-r)) λ_1
≥ϵλ_1/(1+2ϵ)n,
where the last inequality holds for n sufficiently large.
To upper bound the entries of _0, we have
^(0)_ij≤1/nλ_1 + 1/n - r∑_l = 2^d λ_ℓ≤n/(1+ϵ)(n-r)≤1+ϵ/1+2ϵ,
where the second inequality holds for sufficiently large n.
Combining this with Equation (<ref>) and taking ϵ = 1,
λ_1/3n≤^(0)_ij≤2/3,
as we set out to show.
Our perturbation of _0 to obtain _i comes from _i (guaranteed by Lemma <ref>), whose norm _i_2 is of order O(n^-1/2) as established in Equation (<ref>).
As a result, it is not hard to imagine that this perturbation should not change the entries of _0 too much.
Lemma <ref> shows that this is indeed the case.
Under the setting of Theorem <ref>, suppose that κ = O(1).
For latent space dimension d, let k_0 be such that 2^k_0-1 < d ≤ 2^k_0 and define m = ⌊ n/2^k_0⌋.
Let {_i : i ∈ [2^k_0m] } be the collection of d-frames guaranteed by Lemma <ref>, and let ∈^d × d be the diagonal matrix guaranteed by Lemma <ref>.
For sufficiently large n, letting = ^1/2_p,q^1/2 it holds for all i ∈ [2^k_0m] that _i = _i _i^T has all entries strictly bounded between 0 and 1.
Our strategy will be to show that for any i ∈ [m2^k_0], the matrix _i is sufficiently close to _0 entry-wise.
From this fact, it will follow that the entries of _i are all close to those of _0.
Toward this end, we begin by recalling from the proof of Lemma <ref> that the matrices _i are defined according to
_i - _0 = _i - _i + _i - _0
= _i(_d - _i) _i^T + _i _i^T,
where _i is as defined in Equation (<ref>) and _i = _i_i ^T_i is its SVD.
Define = _i(_d - _i) _i^T.
By Lemma <ref>, we have for j ∈ [n] and k ∈ [d],
|_jk|
= |∑_ℓ = 1^2 ^(i)_jℓ(1 - √(1 + σ_iℓ)) ^(i)_kℓ|
≤ 2max_ℓ∈ [d]{|_jℓ^(i)|} - _d
≤ 2 _i_2,∞ - _d.
From Equation (<ref>), we have _i_2 ≤ c_0√(κ/n), and for suitably large n we may ensure that _i _2 < 1/2 so that Lemmas <ref> and <ref> apply.
If, in addition, we have n ≥ 4d ≥ 2r, then it follows that
|_jk|
≤√(d/n-r + _i_2/1 - _i_2)(_i_2^2+4 √(d/n-r)_i_2)
≤ 2√(d/n + _i_2)(_i_2+8 √(d/n))_i_2.
Again recalling that _i _2 = O(n^-1), it holds for suitably large n that
2√(d/n + _i_2)(_i_2+8 √(d/n))
≤1/√(d).
Noting that _i_∞≥_i_2/√(d), it follows that |_jk| ≤_i_∞ for n suitably large, and thus
max_j∈[n], k∈[d]|^(i)_jk - ^(0)_jk|
≤|_jk| + _i_∞≤ 2 _i_∞.
From Equation (<ref>), we have
max_j∈[n], k∈[d]|^(i)_jk - ^(0)_jk|
≤ 2c_0√(κ/d n).
For any ∈_d(^n), unrolling the definition _jk = (^T)_jk,
_jk = ∑_ℓ=1^d _ℓℓ_jℓ_kℓ≥^(0)_jk - ∑_ℓ=1^d λ_ℓ |^(0)_jℓ^(0)_kℓ - _jℓ_kℓ|
≥(1/3n - ∑_ℓ=1^d |^(0)_jℓ^(0)_kℓ - _jℓ_kℓ| ) λ_1.
If each entry of differs from the corresponding entry of _0 by at most C/d√(n) for some constant C > 0, it follows from Equation (<ref>) that
|^(0)_jℓ^(0)_kℓ - _jℓ_kℓ|
≤|_jℓ - _jℓ^(0)||_kℓ^(0)|
+ |_kℓ - _kℓ^(0)||_jℓ^(0)|
+ |_jℓ - _jℓ^(0)||_kℓ - _kℓ^(0)|
≤2C/d√(n)1/√(n-r) + C^2/d^2 n≤c/d n,
provided C>0 and c > 0 are chosen suitably small.
Combining the above two displays,
_jk≥(1/3n - c/n)λ_1.
Thus, when c > 0 is sufficiently small, we have _jk > 0.
Similarly, we have
_jk = ∑_ℓ=1^d _ℓℓ_jℓ_kℓ≤^(0)_jk + ∑_ℓ=1^d λ_ℓ |^(0)_jℓ^(0)_kℓ - _jℓ_kℓ|
≤2/3 + ∑_ℓ=1^d |^(0)_jℓ^(0)_kℓ - _jℓ_kℓ| λ_1 ≤2/3 + c/nλ_1 ≤2/3+c.
It follows that _jk < 1 for c sufficiently small.
Hence, fixing such a c > 0, recalling that the constant c_0 > 0 in Lemma <ref> is ours to choose, we can pick c_0 < C/2√(dκ) in
Equation (<ref>) to be small enough so that the bound in Equation (<ref>) becomes
max_j∈[n], k∈[d]|^(i)_jk - ^(0)_jk|
≤ C / d √(n).
It follows that for n suitably large, all entries of _i obtained from _i lie between 0 and 1, as we set out to show.
§.§ Bounding the KL Divergence
In addition to the lower-bound guaranteed by Lemma <ref>, Theorem <ref> requires an upper bound on the KL divergences between the distributions encoded by our _i matrices.
To control the KL divergence between _i and _0, we use a basic result established by <cit.>, which we restate here for ease of reference.
Let 0 < a ≤ b < 1 be such that a ≤^(0)_ij≤ b for any i, j ∈ [n].
Then
(_i _0 ) ≤_i - _0_F^2/a(1-b),
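As a quick numerical sanity check (an illustrative sketch only: we assume the divergence decomposes as the entrywise sum of Bernoulli KL divergences, as is standard for independent-edge models, and the matrix size, perturbation scale and variable names below are our own choices), the bound can be verified directly:

```python
import numpy as np

def bernoulli_kl_sum(P, Q):
    # Sum over entries of KL( Bernoulli(P_ij) || Bernoulli(Q_ij) ).
    return np.sum(P * np.log(P / Q) + (1 - P) * np.log((1 - P) / (1 - Q)))

rng = np.random.default_rng(0)
n, a, b = 50, 0.2, 0.8
P0 = rng.uniform(a, b, size=(n, n))                          # entries bounded in [a, b]
Pi = np.clip(P0 + 0.01 * rng.standard_normal((n, n)), a, b)  # a small perturbation

kl = bernoulli_kl_sum(Pi, P0)
bound = np.linalg.norm(Pi - P0, "fro") ** 2 / (a * (1 - b))
assert kl <= bound
```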
In light of this result, it will suffice for us to bound the Frobenius norm between _0 and each of the other elements of our packing set.
Under the setting of Theorem <ref>, suppose that κ = O(1).
For latent dimension d, let k_0 be such that 2^k_0-1 < d ≤ 2^k_0 and let m = ⌊ n/2^k_0⌋.
With _0 as in Lemma <ref>, it holds for all i ∈ [2^k_0m] that
_i - _0_F^2 ≤λ_1(λ_d∧log n)/90 n.
Recall that _i is defined in Equation (<ref>) and _i = _i_i_i^T. We also have _i = _i_i^T. Applying these definitions and the triangle inequality,
_i _i^T - _i _i^T_F = _i_i^T_i _i^T - _i_i_i^T_i _i_i^T_F
≤_i^T_i - _i_i^T_i _i_F
= (_d - _i)_i^T_i + _i_i^T_i (_d - _i)_F
Applying the triangle inequality once more, we obtain
_i _i^T - _i _i^T_F ≤(_d - _i)_i^T_i_F + _i_i^T_i (_d - _i)_F
≤(_d - _i)_i^T_F + _i_i^T_i (_d - _i)_F
= (1 + _i)(_i-_d)_i^T_F.
Also recalling that _i = _0 + _i_i^T, we have
_i _i^T - _0 _0^T_F = (_0+_i_i^T)(_0+_i_i^T)^T - _0 _0^T_F
≤ 2^T_i _2 + |_i^T_i|
≤ 3^T_i _2
where the last inequality holds if _i_2 < 1.
Plugging in Equation (<ref>) and (<ref>),
_i - _0_F
= _i _i^T - _0 _0^T_F
≤_i _i^T - _i _i^T_F
+ _i _i^T - _0 _0^T_F
< (1 + _i)
(_i - _d) _i^T _F
+ 3_i^T _2.
Applying Lemma <ref>, if _i_2 < 1, then we have
_i ≤ 1 + _d - _i
≤ 1 + 1/2_i_2^2 + 2√(d/n-r)_i_2
< 3/2 + 2√(d/n-r)_i_2.
From Equation (<ref>), we have _i_2 ≤ c_0 √(κ/n), hence, we have
_i_2 ≤1/4√(n-r/d)
when n > 2d + 4c_0√(κ d). It suffices to require n > c_0^2 κ to satisfy that _i_2 < 1. Therefore, if n > 2d + 4c_0 √(κ d) + c_0^2 κ, then we have _i < 2, and Lemma <ref> implies
_i - _0_F
≤ 3 √(17(_i_2^2 + _i_2^2)(_i^T ^2 _i +_i^T ^2 _i)) + 3_i^T _2
≤(15 _i_2 + 15_i_2 + 3)_i^T _2 + 15(_i_2 + _i_2 )_i^T _2.
If n > 902d, then we have _i_2 ≤ 1/30 by the fact that _i _2^2 ≤ d/(n-r). Furthermore, if n ≥ 900c_0^2κ, then _i_2 ≤ 1/30. Hence, setting n ≥ 904d + 4c_0√(κ d) + 900c_0^2κ and collecting terms, we have
_i - _0_F
≤ 4_i^T _2 + 15(_i_2 + _i_2)_i^T _2
≤ 4_i^T _2 + 15(√(d/n-r) + c_0√(κ/n))_i^T _2
By construction and Equation (<ref>),
_i^T _2 = √(∑_ℓ = 1^d λ_ℓ^2 (_iℓ)^2)≤λ_1√(d/n - r).
and
_i^T ^2 _i = c_0^2 λ_1 (λ_d∧log n)/n.
Applying the above bounds to Equation (<ref>), we obtain
_i - _0_F ≤ 4c_0√(λ_1(λ_d ∧log n)/n) + 15 d λ_1/n-r + 15c_0λ_1√(κ d/n(n-r))
≤(4c_0 + 30d/√(n)√(κ∨λ_1/log n) + 15√(2)c_0 √(κ d/n(κ∨λ_1/log n)))√(λ_1(λ_d ∧log n)/n),
where the last inequality holds if n ≥ 4d ≥ 2r.
To complete the proof, it suffices to have
4c_0 + 30 d/√(n)√(κ∨λ_1/log n) + 15√(2)c_0 √(κ d/n(κ∨λ_1/log n))≤1/3√(10),
which holds true if c_0 ≤1/36√(10) and
d ≤max{√(n)/270√(10)(1/√(κ)∧√(log n/λ_1)), n/364500 c_0^2 κ(1/κ∧log n/λ_1)}.
Since both κ and d are assumed to be bounded, the requirements all hold for n sufficiently large.
Using Equation (<ref>) to bound the entries of ^(0) away from zero and applying Lemma <ref>, we have
(_i _0) ≤9n_i - _0_F^2/λ_1.
Finally, the following Lemma yields our desired bound on the KL divergence for use in Theorem <ref>.
Under the setting of Theorem <ref>, suppose that κ = O(1).
For latent space dimension d, let k_0 be such that 2^k_0-1 < d ≤ 2^k_0 and define m = ⌊ n/2^k_0⌋.
There exists a matrix _0 ∈ [0,1]^n × n and a collection of 2^k_0 m matrices {_i : i ∈ [ 2^k_0 m ] }⊂ [0,1]^n × n such that for all suitably large n,
( _i _0 ) ≤1/10log n.
Let _0 and _i be as defined in Lemmas <ref> and <ref>, respectively.
We note that by Lemma <ref>, we can bound the elements of _0 as
λ_1 / 3n ≤^(0)_ij≤2/3 for all i,j ∈ [n].
Applying Lemma <ref> followed by Lemma <ref>, it follows that
(_i _0 )
≤9n _i - _0_F^2/λ_1 ≤λ_d ∧log n / 10 ≤ 1 /10log n
for all suitably large n, completing the proof.
§ THEOREM <REF>: GROWING CONDITION NUMBER
When κ = ω(1), we require a different construction for our packing set than that used in Section <ref>.
Were we to use the construction for the κ = O(1) case here, we would further require additional assumptions on the growth of λ_1.
To obtain more general results, we pursue a different construction here.
§.§ Constructing the Packing Set
Our approach, as in the κ = O(1) case, is to first construct a “base” parameter _0 ∈^n × d to have orthonormal columns.
We will then construct additional d-frames _i ∈^n × d by swapping pairs of rows in _0.
To construct _0, we take its first column to be /√(n).
To construct the remaining columns, we stack columns from a 2^k_0× 2^k_0 Hadamard matrix _2^k_0.
Under the setting of Theorem <ref>, suppose that κ = ω(1) and that κ≥ 3d for all suitably large n.
Define = (λ_1,λ_2,…,λ_d) with
λ_1 ≤ n / 3 and λ_j = λ_1 /κ for 2 ≤ j ≤ d.
There exist matrices _0 ∈_d(^n) such that _0 = _0 ^1/2_p,q^1/2_0^T satisfies, for all suitably large n,
λ_1/3n≤_ij^(0)≤2/3 for all i,j ∈ [n].
Define
β_d
= ζ_d
√(λ_1 (λ_d ∧log n) / n ),
where ζ_d is a quantity depending on n (via dependence on d) that we will specify below.
For rows i=1,2,…,2^k_0 and columns j=2,3,…,d, we take
^(0)_ij
=
β_d h_i,j/λ_j .
Letting M_d = ⌊ n/2^k_0+1⌋≥ 2, we take the next 2^k_0(M_d-1) rows of _0 to be, for i=2^k_0+1,2^k_0+2,…,2^k_0 M_d and j=2,3,…,d,
^(0)_ij
= η_d h_i^*, j/√(n)
where i^* = (i 2^k_0) and η_d is a quantity, possibly dependent on n, to be specified below.
Finally, for i > 2^k_0 M_d and j ≥ 2, we take ^(0)_ij = 0, so that
_0 = [ 1/√(n) β_d h_1, 2/λ_2 … β_d h_1, d-1/λ_d-1 β_d h_1, d/λ_d; ⋮ ⋮ ⋱ ⋮ ⋮; 1/√(n) β_d h_2^k_0, 2/λ_2 … β_d h_2^k_0, d-1/λ_d-1 β_d h_2^k_0, d/λ_d; 1/√(n) η_d h_1,2/√(n) … η_d h_1, d-1/√(n) η_d h_1, d/√(n); ⋮ ⋮ … ⋮ ⋮; 1/√(n) η_d h_2^k_0, 2/√(n) … η_d h_2^k_0, d-1/√(n) η_d h_2^k_0, d/√(n); ⋮ ⋮ ⋱ ⋮ ⋮; 1/√(n) 0 … 0 0; 1/√(n) 0 … 0 0; ⋮ ⋮ … ⋮ ⋮; 1/√(n) 0 … 0 0; ].
To ensure that _0 has orthonormal columns, we require for every 2≤ j ≤ d,
2^k_0(β_d^2/λ_j^2 + (M_d - 1) η^2_d/n) = 1.
Plugging in the definition of β_d from Equation (<ref>) and under our assumption that κ = o(n), we require that
2^k_0ζ_d^2 κ/n·λ_d ∧log n/λ_d
= o(1).
Since 2^k_0 = Θ(d), Equation (<ref>) holds when we take ζ_d = c/√(d) for any constant c ≤√(1/640).
We then pick η_d in such a way that η_d = √(2) + o(1), so that
η_d^2
= n/2^k_0(M_d-1)
- o(1) = n/⌊ n/2 ⌋ - 2^k_0 - o(1) = 2 + o(1),
ensuring that Equation (<ref>) holds.
Unrolling the definition of _0 = _0 ^1/2_p,q^1/2_0^T, it holds for all 1≤ i, j ≤ 2^k_0,
^(0)_ij ≤∑_ℓ=1^d λ_k_ik^(0)_jk^(0) = λ_1/n + (d-1)β_d^2 h_i,kh_j,k/λ_d≤λ_1/n + (d-1) β_d^2/λ_d
= λ_1/n + (d-1) ζ_d^2λ_1(λ_d∧log n)/λ_d n≤ (1+(d-1) ζ_d^2)λ_1/n
≤ (1 + c^2)λ_1/n≤2/3,
where the last inequality holds by our choice of λ_1 ≤ n/3 and any 0 < c ≤ 1.
To lower-bound the entries of ^(0), observe that
_ij^(0) ≥λ_1/n - (d-1) ζ_d^2λ_1(λ_d∧log n)/λ_d n≥λ_1/n - (d-1) ζ_d^2λ_1/n
≥ (1- c^2)λ_1/n.
Choosing c > 0 sufficiently small, we have _ij^(0)≥λ_1/3n.
Combining the above two displays, we conclude that
λ_1 / 3n ≤^(0)_ij≤2/3 for i, j ∈ [2^k_0].
For 1≤ i ≤ 2^k_0 < j ≤ 2^k_0 M_d,
^(0)_ij ≤λ_1/n + (d-1)β_dη_d h_i,kh_j,k/√(n)≤λ_1/n + (d-1) β_d η_d/√(n)
= λ_1/n + (d-1) η_d ζ_d√(λ_1(λ_d∧log n))/n = λ_1/n + o(1) ≤2/3,
where once again the last inequality holds for n suitably large by our choice of λ_1 ≤ n/3.
To lower-bound ^(0), we observe that
^(0)_ij ≥λ_1/n - (d-1)β_dη_d h_i,kh_j,k/√(n)≥λ_1/n - (d-1) η_d ζ_d√(λ_1(λ_d∧log n))/n
= λ_1/n + o(λ_1/n)
≥λ_1/3n,
using the fact that κ = ω(1).
Combining the above two displays, we have
λ_1 / 3n ≤^(0)_ij≤2/3 for 1≤ i ≤ 2^k_0 < j ≤ 2^k_0 M_d.
For 2^k_0<i≤ j≤ 2^k_0 M_d, since κ≥ 3d, η_d^2 = 2 + o(1) and λ_1 ≤ n/3, we have
_ij^(0)≤λ_1/n + (d-1)η_d^2λ_d/n≤(1+d η_d^2/κ) λ_1/n≤2λ_1/n≤2/3
and
_ij^(0)≥λ_1/n - (d-1)η_d^2λ_d/n≥(1-d η_d^2/κ) λ_1/n≥λ_1/3n.
Combining the above two displays, we have
λ_1 / 3n ≤^(0)_ij≤2/3 for 2^k_0<i≤ j≤ 2^k_0 M_d.
Finally, for i > 2^k_0 M_d and j ∈ [n], we have ^(0)_ij = λ_1/n; since λ_1 ≤ n/3, we again have
λ_1 / 3n ≤^(0)_ij≤2/3 for i>2^k_0 M_d, j ∈ [n].
Thus, combining Equations (<ref>) through (<ref>), we have for sufficiently large n,
λ_1/3n≤_ij^(0)≤2/3 for all i,j ∈ [n],
completing the proof.
As a remark, note that we can also take λ_1 ≤ n/(2+ϵ), so that the condition (2+ϵ)κλ_d ≤ n would suffice for our proof. The condition κ≥ 3d can also be relaxed to κ≥ (1+ϵ) d for any constant ϵ > 0. In order to achieve this, we would take 2^k_0M_d = ⌊ n / (1+ϵ/2) ⌋, and we have η_d^2 = 1+ϵ/2 - o(1) in this case. Repeating the previous steps of unrolling the definition of _0, we would be able to show that ϵλ_1/(2+2ϵ)n≤^(0)_ij≤ 2/(2+ϵ) for n sufficiently large. Furthermore, following the proofs of Lemmas <ref> and <ref>, we can construct a packing set with ⌊ϵ n⌋ instances, so the rest of the proof also goes through with a more careful analysis. We omit the details.
To construct the rest of our packing set, {_i : i =1,2,…,⌊ n/2 ⌋}, we obtain the i-th element _i by swapping the first row of _0 with the (i + ⌊ n/2 ⌋)-th row of _0.
That is, for i ∈ [ ⌊ n/2 ⌋ ], _i is the same as _0 except in its first and (i + ⌊ n/2 ⌋)-th rows.
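The construction of _0 and of the swapped frames _i is easy to reproduce numerically. The following minimal sketch (illustrative only: the values n = 1024, d = 4, κ = 20, the choice ζ_d = 1/√(640 d), and all variable names are ours) builds _0 and checks that it is a d-frame:

```python
import numpy as np
from scipy.linalg import hadamard

n, d, kappa = 1024, 4, 20.0                    # illustrative values
k0 = int(np.ceil(np.log2(d)))                  # 2**(k0-1) < d <= 2**k0
lam1 = n / 3.0
lam = np.full(d, lam1 / kappa); lam[0] = lam1  # lambda_1 >= lambda_2 = ... = lambda_d
M_d = n // 2 ** (k0 + 1)

zeta = 1.0 / np.sqrt(640 * d)
beta = zeta * np.sqrt(lam1 * min(lam[-1], np.log(n)) / n)
eta = np.sqrt((n / (M_d - 1)) * (1.0 / 2 ** k0 - beta ** 2 / lam[1] ** 2))  # eta^2 = 2 + o(1)

H = hadamard(2 ** k0)                          # Hadamard matrix, entries +-1
U0 = np.zeros((n, d))
U0[:, 0] = 1.0 / np.sqrt(n)                    # first column is 1/sqrt(n)
U0[: 2 ** k0, 1:] = beta * H[:, 1:d] / lam[1:]
U0[2 ** k0 : 2 ** k0 * M_d, 1:] = np.tile(eta * H[:, 1:d] / np.sqrt(n), (M_d - 1, 1))
# remaining rows of columns 2..d stay zero
assert np.allclose(U0.T @ U0, np.eye(d))       # U0 has orthonormal columns

def swap_frame(i):
    # i-th packing element: swap the first row of U0 with its (i + n//2)-th row (1-indexed).
    Ui = U0.copy()
    Ui[[0, i + n // 2 - 1]] = Ui[[i + n // 2 - 1, 0]]
    return Ui
```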
Lemma <ref> lower bounds the distance between the elements of our packing set _0, _1, …, _⌊ n/2 ⌋ .
Let _0 ∈_d(^n) and = (λ_1,λ_2,…,λ_d) ∈^d × d be the matrices guaranteed by Lemma <ref>.
There exists a collection {_i : i =1,2,…,⌊ n/2 ⌋}⊂_d(^n), such that for any pair of indexes 0≤ i < j ≤⌊ n/2 ⌋, we have
min_∈∩_i ^1/2 - _j ^1/2_2, ∞≥ζ_d √(d-1)/2√(κ(λ_d ∧log n)/n),
where ζ_d is any quantity such that ζ_d ≤ 1/√(640 d).
Recalling the definition of _0 from Equation (<ref>), for each i ∈ [ ⌊ n/2 ⌋ ], define _i ∈_d(^n) to have the same rows as _0, but switching the first and (i+⌊ n/2 ⌋)-th rows of _0.
Define the vectors
_0 = ( 1/√(n), 0, …, 0 )^T ∈^d
and
_1 = (
1/√(n),
β_d h_1,2/λ_2 ,
β_d h_1,3/λ_3 ,
… ,
β_d h_1,d/λ_d )^T ∈^d,
noting that _1 ∈^d is the first row of _0.
Fix a matrix ∈∩.
Trivially lower-bounding the maximum in the ()-norm by the maximum of two particular rows and making use of the construction of _i and _j for any distinct i,j ∈{0,1,2,…,⌊ n/2 ⌋}, we have
_i ^1/2 - _j ^1/2_2, ∞
≥max{^T ^1/2_1 - ^1/2_0 _2,
^T ^1/2_0 - ^1/2_0 _2 }.
Suppose that
^T ^1/2_0 - ^1/2_0 _2
≥ζ_d √(d-1)/2√(κ(log n ∧λ_d)/n).
Then it holds trivially that
_i ^1/2 - _j ^1/2_2, ∞≥ζ_d √(d-1)/2√(κ(log n ∧λ_d)/n).
If, on the other hand, Equation (<ref>) does not hold, the triangle inequality implies
^T ^1/2_1 - ^1/2_0 _2
≥^T ^1/2_1 - ^T ^1/2_0 _2
- ^T ^1/2_0 - ^1/2_0 _2
= ^1/2_1 - ^1/2_0 _2
- ^T ^1/2_0 - ^1/2_0 _2.
Plugging in the definitions of _0 and _1 and using the fact that λ_2=λ_3=⋯=λ_d,
^1/2_1 - ^1/2_0 _2^2
= ∑_j=2^d ζ_d^2 λ_1 (λ_d ∧log n) /λ_j n
= (d-1) ζ_d^2 λ_1 (λ_d ∧log n) /λ_d n.
Further, since Equation (<ref>) fails to hold by assumption,
^T ^1/2_0 - ^1/2_0 _2^2
≤ (d-1) ζ_d^2 / 4λ_d λ_1 (log n ∧λ_d)/n.
Taking square roots and applying the above two displays to Equation (<ref>),
^T ^1/2_1 - ^1/2_0 _2
≥ζ_d √(d-1)/ 2 √(λ_1 (log n ∧λ_d)/ n λ_d ),
so that
_i ^1/2 - _j ^1/2_2, ∞≥√(d-1)ζ_d/2√(κ(log n ∧λ_d)/n).
Note that we have shown this bound to hold whether Equation (<ref>) holds or not.
Since the right-hand side of this bound does not depend on , minimizing over ∈∩ completes the proof.
§.§ Bounding the KL Divergence
Now, we proceed to control the KL divergence among the parameters _0, _1, …, _⌊ n/2 ⌋.
To do this, we must again ensure that the Frobenius norms between different probability matrices are small in order to apply Lemma <ref>.
Under the conditions of Theorem <ref>, suppose that κ = ω(1).
Let _0 ∈_d(^n) and = (λ_1,λ_2,…,λ_d) ∈^d × d be the matrices guaranteed by Lemma <ref> and let {_i : i ∈ [ ⌊ n/2 ⌋ ] be the packing set guaranteed by Lemma <ref>.
For any 1 ≤ i ≤⌊ n/2 ⌋, we have
_i _i^T - _0 _0^T_F^2
≤1/80λ_1 (λ_d ∧log n)/n.
Fix i ∈ [ ⌊ n/2 ⌋ ].
Adding and subtracting appropriate quantities and applying the triangle inequality,
_i _i^T - _0 _0^T_F
≤_i (_i-_0)^T _F
+ (_i-_0) _0^T _F .
Since _0 and _i are d-frames, basic properties of the Frobenius norm imply
_i _i^T - _0 _0^T_F
≤
2 (_i-_0)_F.
We observe that for i∈ [⌊ n / 2⌋], _i and _0 differ in exactly two rows.
Define, as in the proof of Lemma <ref>,
_0 = ( 1/√(n), 0, …, 0 )^T ∈^d
and
_1 = (
1/√(n),
β_d h_1,2/λ_2 ,
β_d h_1,3/λ_3 ,
… ,
β_d h_1,d/λ_d )^T ∈^d,
noting that _1 ∈^d is the first row of _0 by construction.
The structure of _0 and _i is such that _i-_0 has all rows equal to zero except for two, so that
(_i-_0)_F^2
= 2 ( _1 - _0 ) _2^2.
Plugging in the definitions of , _0 and _1,
(_i-_0)_F^2
= 2 ζ_d^2 (d-1)
λ_1 ( λ_d ∧log n ) /n.
Applying this to Equation (<ref>), we conclude that
_i _i^T - _0 _0^T_F^2
≤ 8 ζ_d^2 (d-1)
λ_1 ( λ_d ∧log n ) /n.
Lemma <ref> guarantees ζ_d^2 ≤ 1/640 d < 1/640(d-1), completing the proof.
§ TECHNICAL LEMMAS
Here we collect a number of technical lemmas related to our packing set constructions.
For i∈ [2^k_0m], let _i be as defined in Equation (<ref>). Assume that _i_2 < 1, then
the singular values of _i are given by
( √(1 + σ_i+), √(1 + σ_i-), 1, …, 1 )^T ∈^d,
where
σ_i± = ^T_i _i + _i^T _i α_i±
and
α_i ± = 1/2±1/2√(1 + 4_i^2_2 + 4^T_i _i/_i_2^2),
Write the (reordered) right singular subspace matrix as _i = (_i, ^⊥_i), where _i ∈^n × 2 has as its columns the singular vectors corresponding to √(1+σ_i±).
Then _i is given by
_i^T = [ α_i+/α_i+_i + _i_2 1/α_i+_i + _i_2; α_i-/α_i-_i + _i_2 1/α_i-_i + _i_2; ][ _i^T; _i^T ] .
Fix i ∈ [2^k_0 m ].
Recalling that _i = _i _i _i^T is the SVD of _i,
_i _i^2 _i^T
=
_i^T _i
= (_0 + _i _i^T)^T (_0 + _i _i^T)
= _d + _i_i^T + _i _i^T + _i_i^T.
We observe that for any vector ∈^d, if is orthogonal to both _i and _i, we have _i^T _i =.
Since _i and _i are linearly independent in our construction, the subspace orthogonal to the span of _i and _i has dimension d-2, and it follows that 1 appears as a singular value of _i with multiplicity d-2.
Now, suppose that = α_i + _i is an eigenvector of _i^T _i, so that
(_d + _i _i^T +_i _i^T + _i _i^T)
= λ
for some λ∈.
One can verify that = α_i±_i + _i and λ = 1 + σ_i± satisfy the above, with α_i± and σ_i± as given in Equations (<ref>) and (<ref>), respectively. Renormalizing appropriately yields the claimed value of _i. It remains to show that 1 + σ_i- > 0. Explicitly writing down the equation for σ_i-, we have
σ_i- = _i^T_i + 1/2_i_2^2 - 1/2_i_2 √(_i_2^2 + 4_i^T_i + 4 _i_2^2).
Since
1/4_i_2^4 + _i_2^2 _i^T _i + (_i^T_i)^2 ≤1/4_i_2^4 + _i_2^2 _i^T _i + _i_2^2_i_2^2,
it follows that σ_i- < 0. Noting that
σ̃_i - ≥_i^T _i+ 1/2_i^T _i
- 1/2 (_i_2 + 2_i_2)
= _i^T _i - _i_2 _i_2
≥ - _i_2 _i_2.
From our construction, we have _i_2 ≤√(d/n-r). Since _i_2 < 1 and we always have d ≤ n-r, it follows that 1 + σ_i- > 0.
For any i∈ [2^k_0m], let _i be as defined in Equation (<ref>), with singular value decomposition _i = _i _i _i^T.
Recalling the vector _i ∈^d from Lemma <ref>, if _i_2 < 1,
then
_i - _d≤1/2_i_2^2 + 2√(d/n-r)_i_2.
Notice that for a ∈ [0, 1], we have
√(a) ≥ (a+1)/2 - (a-1)^2/2
and that for any b ∈ [0,1], substituting a = 1-b into Equation (<ref>), we have
√(1 - b)≥ 1 - b/2 - b^2/2.
Applying Lemma <ref> and then applying Equation (<ref>) to σ̃_i- we have
_i - _d = max{√(1 + σ_i+) - 1,
1 - √(1 + σ_i-)}
≤max{σ_i+/1+√(1 + σ_i+), σ̃_i-^2/2 - σ̃_i-/2}
≤1/2max{σ̃_i+, σ̃_i-^2-σ̃_i-}.
From the proof of Lemma <ref>, we have |σ_i-| < 1 if _i_2 < 1 and thus, σ_i-^2 ≤ - σ_i-.
Therefore,
_i - _d≤1/2max{σ̃_i+,
σ̃_i-^2-σ̃_i-}≤1/2( σ̃_i+
+ σ̃_i-^2-σ̃_i-)
≤1/2σ̃_i+ - σ̃_i-.
Since
σ̃_i + =_i^T _i+α_i +_i^T _i
= _i^T _i + 1/2_i^T _i
+ 1/2_i_2 √(_i^T_i
+ 4 _i^T _i + 4_i_2^2)
≤ 2_i_2_i_2+_i_2^2,
applying this and Equation (<ref>) to Equation (<ref>), using the fact that _i^T_i ≥ 0 and recalling the construction of _i from Equation <ref>, we conclude that
_i - _d≤ 2_i_2_i_2
+1/2_i_2^2
= 1/2_i_2^2 + 2√(d/n-r)_i_2,
completing the proof.
For any i∈ [2^k_0m], recall the vector _i ∈^d from Lemma <ref> and the matrix _i from Equation (<ref>).
If _i_2 < 1, then, with _i as defined in
_i _≤√(d/n-r + _i_2/1 - _i_2),
where _i ∈^n × d is the left singular subspace of _0 + _i_i^T ∈^n × d.
For any i ∈ [2^k_0 m],
| _i _^2 - _0 _^2 |
=
|
max_ℓ∈ [n](_i _i^T)_ℓℓ
-
max_ℓ^'∈ [n](_0 _0^T)_ℓ^'ℓ^'|
≤max_ℓ∈ [n]| ( _i _i^T - _0 _0^T )_ℓℓ|
= _i _i^T - _0 _0^T .
Noting that _0 _≤√( d/(n-r) ) by construction, it follows that
_i_^2
≤_0_^2
+ | _i _^2 - _0 _^2 |
≤d/n-r
+ _0 _0^T - _i _i^T .
We then apply Wedin's sinΘ theorem <cit.> to the left singular subspace of _0 and _0 + _i _i^T.
Denote the d-th singular value of _0 as σ_d(_0) and the (d+1)-th singular value as σ_d+1(_0).
From our assumption that _i_2≤ 1 and the fact that _i _i^T = _i_2, we have
_0 _0^T - _i _i^T ≤_i _i^T/σ_d(_0) - σ_d+1(_0) - _i _i^T
= _i_2/1 - _i_2.
Applying this bound to Equation (<ref>) and taking square roots,
_i_≤√(d/n-r + _i_2/1 - _i_2),
which completes the proof.
For any i∈ [2^k_0m], let _i be as defined in Equation (<ref>), with singular value decomposition _i = _i _i _i^T. If _i_2 < 1, then writing = ^1/2_p,q^1/2,
(_i - _d)_i^T ^2_F
≤ 17( _i_2^2 + _i_2^2 )
( _i^T^2 _i + _i^T ^2 _i ).
We first note that Lemma <ref> ensures that
_i - _d
= (
√(1 + σ_i+) - 1,
√(1 + σ_i-) - 1,
0, …, 0 ) ∈^d × d
where σ_i± are defined in Equation (<ref>) and we have σ_i- < 0 < σ_i+.
Recalling α_i± as defined in Equation (<ref>), Lemma <ref> further implies that _i ∈^n × 2, given by
_i^T = [ α_i+/α_i+_i + _i_2 1/α_i+_i + _i_2; α_i-/α_i-_i + _i_2 1/α_i-_i + _i_2 ][ _i^T; _i^T ],
encodes the singular vectors of _i corresponding to √(1+σ_i±).
Defining the quantities
d_i± = √(1 + σ_i±) - 1,
and defining
_i = [ α_i+d_i+/α_i+_i + _i_2 d_i+/α_i+_i + _i_2; α_i-d_i-/α_i-_i + _i_2 d_i-/α_i-_i + _i_2; ],
we have
(_i - _d)_i^T _F^2
= _i [ _i^T; _i^T ]_F^2
= (_i^T _i [ _i^T ^2 _i _i^T ^2 _i; _i^T ^2 _i _i^T ^2 _i ]).
Applying our definition of _i from Equation (<ref>),
(_i - _d)_i^T _F^2
=
(α_i+^2 d_i+^2/α_i+_i + _i_2^2 + α_i-^2 d_i-^2/α_i-_i + _i_2^2) _i^T^2 _i
+ 2(α_i+ d_i+^2/α_i+_i + _i_2^2 + α_i- d_i-^2/α_i-_i + _i_2^2) _i^T ^2 _i
+ (d_i+^2/α_i+_i + _i_2^2 + d_i2^2/α_i-_i + _i_2^2) _i^T ^2 _i.
Our proof will be complete once we establish an upper bound on the α_i± and d_i± terms and a lower-bound on α_i±_i + _i.
Toward this end, rearranging the definition of α_i+ and applying the triangle inequality,
α_i+
= 1/2
+ √(_i_2^2 + 4 _i^T _i + 4 _i ^2_2 )/ 2_i_2
= 1/2 + _i + 2_i _2 /2_i_2
≤ 1 + _i _2 /_i_2 .
A similar argument yields
|α_i -| ≤_i_2/_i_2.
Observing that the function z ↦√(1+z)-1 is upper-bounded by z/2 for z ≥ 0, and recalling our definition of d_i± from Equation (<ref>) above,
Equation (<ref>) (established in the proof of Lemma <ref>) implies
d_i+≤σ_i+/ 2 ≤_i_2 ( _i_2
+1/2_i_2 ).
From the proof of Lemma <ref> and <ref>, as long as _i_2 < 1, we have
|d_i-| = 1 - √(1+σ_i-)
≤ -σ_i-
≤_i_2_i_2.
To control the entries of , we must also control the denominator terms,
α_i +_i+_i_2
and α_i -_i+_i_2.
Expanding the square and using the fact that by construction from Equation (<ref>), α_i+≥ 1 and _i^T _i ≥ 0,
α_i +_i + _i_2^2
=_i_2^2 + 2 α_i +_i^T _i
+α_i +^2 _i^T _i
≥_i_2^2 + _i_2^2.
Expanding the definition of α_i- and rearranging,
α_i-^2 _i^T _2^2
= ( 1/2
- _i + 2 _i _2 / 2 _i _2 )^2 _i _2^2
= 1/4( _i _2 - _i + 2 _i _2 )^2.
Again expanding the definition of α_i-,
2 α_i-_i^T _i
= ( 1 - _i + 2_i _2 /_2 )
_i^T _i
= _i^T _i /_i _2 ( _i _2 - _i + 2_i _2 )
and it follows that
α_i-_i + _i _2^2
= α_i-_i _2^2
+ 2 α_i-_i^T _i + _i _2^2
= 1/4( _i _2 - _i + 2 _i _2 )^2
+ _i^T _i /_i _2 ( _i _2 - _i + 2_i _2 ) + _i _2^2.
Using non-negativity of the square and the reverse triangle inequality,
α_i-_i + _i _2^2
≥_i _2^2
- _i^T _i _i _2 /_i _2
= _i _2^2
( 1 - _i^T _i /_i _2 _i _2 ).
By construction, _i and _i obey Equation (<ref>), from which
α_i-_i + _i _2^2 ≥1/8_i _2^2.
Combining Equations (<ref>), (<ref>) and (<ref>) and using the fact that (a+b)^2 ≤ 2(a^2+b^2),
( α_i+ d_i+)^2 /α_i +_i + _i_2^2 ≤( _i_2 + _i _2 )^2 /_i_2^2 + _i_2^2 ( _i_2
+1/2_i_2 )^2
≤ 4 _i _2^2 + _i _2^2.
Similarly, combining Equations (<ref>), (<ref>) and (<ref>),
| α_i- d_i-|^2 /α_i -_i + _i_2^2 ≤ 8 /_i _2^2 _i_2^2/_i_2^2_i_2^2 _i_2^2
≤ 8_i_2^2.
Combining the above two displays,
(
α_i+^2 d_i+^2/α_i+_i + _i_2^2
+ α_i-^2 d_i-^2/α_i-_i + _i_2^2) _i^T^2 _i
≤( 12 _i _2^2 + _i _2^2 ) _i^T^2 _i.
By Equations (<ref>) and (<ref>),
again using the fact that (a+b)^2 ≤ 2(a^2+b^2),
d_i+^2 /α_i +_i + _i_2^2 ≤_i_2^2 /_i_2^2 + _i_2^2 ( _i_2
+1/2_i_2 )^2
≤
2_i_2^2 + 1/2_i_2^2,
and Equations (<ref>) and (<ref>) yield
d_i-^2 /α_i -_i + _i_2^2 ≤ 8 _i_2^2 _i_2^2 /_i _2^2 ≤ 8 _i_2^2.
Combining the above two displays,
(d_i+^2/α_i+_i + _i_2^2
+ d_i2^2/α_i-_i + _i_2^2)
_i^T ^2 _i
≤( 2_i_2^2
+ 17 / 2 _i_2^2 )
_i^T ^2 _i.
Combining Equations (<ref>), (<ref>) and (<ref>),
α_i+ d_i+^2 /α_i+_i + _i_2^2 ≤( _i_2 + _i _2 )
( 2_i_2^2 + _i_2^2 / 2 )
_i_2 /_i_2^2 + _i_2^2
≤
2 ( _i_2 + _i _2 ) _i_2
≤ 3 _i_2^2 + _i _2^2,
where we have used the fact that 2ab ≤ a^2 + b^2.
Combining Equations (<ref>), (<ref>) and (<ref>) and again using the fact that 2ab ≤ a^2 + b^2,
α_i- d_i-^2 /α_i-_i + _i_2^2 ≤_i_2/_i_2_i_2^2 _i_2^2
8 /_i _2^2 ≤
8 _i_2 _i_2
≤ 4( _i_2^2 + _i_2^2 ).
Combining the above two displays,
2(α_i+ d_i+^2/α_i+_i + _i_2^2
+ α_i- d_i-^2/α_i-_i + _i_2^2) _i^T ^2 _i
≤
2(7 _i_2^2 + 5_i _2^2 )
_i^T ^2 _i.
Using the fact that 2 _i^T ^2 _i ≤_i^T ^2 _i + _i^T ^2 _i,
2 (α_i+ d_i+^2/α_i+_i + _i_2^2
+ α_i- d_i-^2/α_i-_i + _i_2^2) _i^T ^2 _i
≤(7 _i_2^2 + 5_i _2^2 )
( _i^T ^2 _i + _i^T ^2 _i ).
Applying this, along with Equations (<ref>) and (<ref>) to bound the right-hand side of Equation (<ref>),
(_i - _d)_i^T _F^2
≤( 12 _i _2^2 + _i _2^2 ) _i^T^2 _i
+ (7 _i_2^2 + 5_i _2^2 )
( _i^T ^2 _i + _i^T ^2 _i )
+
( 2_i_2^2
+ 17 / 2 _i_2^2 )
_i^T ^2 _i
≤( 17 _i_2^2 + 8 _i_2^2
) _i^T^2 _i
+
( 7 _i_2^2 + 27/2_i_2^2
) _i^T ^2 _i .
The result follows by trivially upper bounding the coefficients of _i_2^2 and _i _2^2.
For any vector ∈^d for d ≥ 2 such that _2 = 1, there exists a vector ∈^d with |_l| = 1/√(d) for all l ∈ [d], such that |^T | ≤√(2/3).
For a set S ⊆ [d], define ∈^d according to
_ℓ = sign(_ℓ)/√(d) if ℓ∈ S, and _ℓ = -sign(_ℓ)/√(d) otherwise.
To see that |^T | ≤√(2/3), note that by definition of ,
|^T |
= 1/√(d)|∑_l ∈ S |_l| - ∑_l ∈ S^c |_l||
≤1/√(d)max{∑_l ∈ S |_l| - ∑_l ∈ S^c |_l|,
∑_l ∈ S^c |_l| - ∑_l ∈ S |_l|
}.
Letting _S denote the vector with indices outside of S set to zero, and defining _S^c analogously, Jensen's inequality implies
|^T |
≤1/√(d)max{√(|S|)_S_2,
√(|S^c|)_S^c_2
}
If there exists a set S such that both _S^2_2 ≤ 2/3 and _S^c^2_2 ≤ 2/3, then the proof is complete, since then
|^T | ≤√( 2 / 3 d max{ |S|, |S^c| })≤√( 2 / 3 ).
Suppose, then, that no such S exists.
That is, for any S ⊆ [d],
either _S^2_2 > 2/3 or _S^c^2 > 2/3.
Observe that _ℓ^2 ≤ 1/3 for any ℓ∈ [d],
since if _ℓ^2 > 1/3,
taking S = {ℓ} so that |S| = 1,
Equation (<ref>) implies
|^T |
≤max{√(|S|)_S_2 /√(d), √(|S^c|)_S^c_2 /√(d)}≤max{1/√(d), √(2(d-1)/3d)}≤√( 2/3 ).
Without loss of generality,
suppose that S is such that _S^2_2 > 2/3
and _S^c^2 < 1/3.
For any ℓ∈ S, consider removing ℓ from S to obtain
= S ∖{ℓ}.
If _^2 ≤ 2/3 and _^c^2 ≤ 2/3,
then we have contradicted our assumption.
Thus, either _^2 > 2/3 or _^c^2 > 2/3.
If the latter, then _ℓ^2 > 1/3, leading to a contradiction.
Therefore,
2/3 < _^2_2 ≤_S ^2.
Note that the resulting set must be non-empty, since otherwise _=0, and therefore we can repeat our argument.
Repeating this argument enough times, we arrive at a minimal set T ⊆ S
such that _T _2^2 > 2/3,
and for any ℓ∈ T, _T ∖{ℓ}_2^2 ≤ 2/3.
If _T ∖{ℓ}_2^2 ≤ 1/3, we have again found ℓ∈ [d] such that |_ℓ|>1/3, a contradiction.
Therefore,
1/3≤_T ∖{ℓ}_2^2 ≤2/3,
and a similar bound holds for _T^c ∪{ℓ}_2^2,
completing the proof.
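As an aside, the claim of this lemma is straightforward to confirm numerically by brute force over all sign patterns; the short sketch below (the dimensions tested and the number of random trials are arbitrary choices of ours) does so for random unit vectors:

```python
import itertools
import numpy as np

rng = np.random.default_rng(1)
bound = np.sqrt(2.0 / 3.0)
for d in range(2, 8):
    # all vectors s with |s_l| = 1/sqrt(d)
    signs = np.array(list(itertools.product([-1.0, 1.0], repeat=d))) / np.sqrt(d)
    for _ in range(200):
        x = rng.standard_normal(d)
        x /= np.linalg.norm(x)
        # the lemma guarantees some sign pattern achieves |x^T s| <= sqrt(2/3)
        assert np.min(np.abs(signs @ x)) <= bound + 1e-12
```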
arXiv:2307.02681v1 [cond-mat.str-el, cond-mat.supr-con]
Department of Physics, Royal Holloway, University of London, Egham, Surrey, UK, TW20 0EX
Département de Chimie, Biochimie et Physique, Institut de Recherche sur l’Hydrogène, Université du Québec à Trois-Rivières, Trois-Rivières, Québec, Canada G9A 5H7
Département de physique, Institut quantique & RQMP, Université de Sherbrooke, Sherbrooke, Québec, Canada J1K 2R1
Département de physique, Institut quantique & RQMP, Université de Sherbrooke, Sherbrooke, Québec, Canada J1K 2R1
Corresponding author: [email protected]
Department of Physics, Royal Holloway, University of London, Egham, Surrey, UK, TW20 0EX
Doping a Mott insulator gives rise to unconventional superconducting correlations. Here we address the interplay between d-wave superconductivity and Mott physics using the two-dimensional Hubbard model with cellular dynamical mean-field theory on a 2×2 plaquette.
Our approach is to study superconducting correlations from the perspective of a cluster quantum impurity model embedded in a self-consistent bath.
At the level of the cluster, we calculate the probabilities of the possible cluster electrons configurations.
Upon condensation we find an increased probability that cluster electrons occupy a four-electron singlet configuration, enabling us to identify this type of short-range spin correlation as key to superconducting pairing. The increased probability of this four-electron singlet comes at the expense of a reduced probability of a four-electron triplet, with no significant redistribution of probability among charge fluctuations. This allows us to establish that superconductivity at the level of the cluster primarily involves a reorganisation of short-range spin correlations rather than charge correlations.
We gain information about the bath by studying the spectral weight of the hybridization function. Upon condensation, we find a transfer of spectral weight leading to the opening of a superconducting gap.
We use these insights to interpret the signatures of superconducting correlations in the density of states of the system and in the zero-frequency spin susceptibility.
Superconductivity in the two-dimensional Hubbard model with cellular dynamical mean-field theory: a quantum impurity model analysis
C. Walsh, M. Charlebois, P. Sémon, A.-M. S. Tremblay, G. Sordi
August 1, 2023
===================================================================================================================================
§ INTRODUCTION
The nature of the correlations leading to superconducting pairing in doped Mott insulators remains a central challenge to the understanding of high temperature cuprate superconductors <cit.>. The two-dimensional (2D) Hubbard model <cit.>, which encodes both electrons hopping t and Coulomb onsite repulsion U, is the simplest model to study superconducting correlations arising from a purely electronic mechanism <cit.>. The large value of the interaction strength U, which is necessary to open a Mott gap in the half filled model, requires the use of nonperturbative approaches. Cluster extensions <cit.> of dynamical mean-field theory <cit.> have proved to be powerful tools for exploring strongly correlated superconductivity in the 2D Hubbard model <cit.>.
Over the years, many cluster DMFT studies have established that upon doping the Mott insulator realised by the 2D Hubbard model, a d-wave superconducting state occurs, with a dome-like shape in the temperature-doping phase diagram <cit.>. This indicates that in a strongly correlated superconducting state the short range superexchange interaction leads electrons to pair up into coherent Cooper pairs.
Doping a Mott insulator generates a rich phase diagram with many states that are close in energy <cit.>, and ongoing effort is devoted to clarifying whether or not the zero-temperature ground state of the system is superconducting, and for what parameters <cit.>. Despite the fact that long-range order is excluded by thermal fluctuations in 2D <cit.>, and irrespective of whether superconductivity is a true zero-temperature ground state, superconductivity obtained in cluster DMFT is a locally stable state of physical relevance. For example, the study of the emergence of superconductivity from the underlying normal state upon reducing the temperature gives information about the pairing mechanism. Furthermore, tweaking temperature and model parameters (such as a third dimension and frustration) may cause superconductivity to become the state with the lowest free energy <cit.>.
Within cluster DMFT, there is a variety of ways by which we can gain insights on this microscopic mechanism.
One possible approach to the study of superconducting correlations in the 2D Hubbard model takes an energetic viewpoint, and thus analyses the relative change in potential and kinetic energy between the superconducting and the underlying normal state. The rationale of this approach is to identify whether the condensation energy arises from a gain in potential energy (potential energy driven superconductivity), as described by conventional BCS theory, or from a gain in kinetic energy (kinetic energy driven superconductivity).
Intense cluster DMFT investigations <cit.> have shown that upon doping a Mott insulator, the kinetic energy decreases upon superconducting condensation and the doping interval where this kinetic energy driven mechanism occurs progressively extends to larger doping with increasing the interaction strength U.
Another approach to study superconducting correlations in the 2D Hubbard model is through the dynamics of pairing. Within cluster DMFT, this approach comprises the study of the superconducting gap, of the frequency dependence of the anomalous part of the self-energy, of the anomalous spectral weight and its cumulative order parameter <cit.>, and of several fluctuation diagnostic techniques <cit.>.
Overall, cluster DMFT studies on the dynamics of pairing have indicated the importance of short-range spin fluctuations for the pairing mechanism in the doped Hubbard model, at least at moderate coupling.
Yet another approach to the study of the superconducting correlations in the 2D Hubbard model is from a quantum information perspective. The rationale of this approach is to characterise the entanglement properties of superconductivity <cit.>. Common measures of quantum and classical correlations are entanglement entropies and quantum mutual information. Ref. CaitlinPNAS2021 has shown that the local entropy reflects the source of condensation energy and the quantum mutual information is enhanced in the superconducting state.
In this article, we study the superconducting correlations in the 2D Hubbard model at finite temperature from a complementary perspective, that of a cluster impurity embedded in a self-consistent bath, which underlies the cluster DMFT method. This is emphasised in Figure <ref>.
Cluster DMFT maps the lattice system (panel (a)) onto a cluster quantum impurity model fulfilling a self-consistent condition (panel (b)). Hence, in cluster DMFT one focuses on a cluster coupled to a bath of electrons which describes the rest of the lattice. The cluster fluctuates among different electronic configurations and exchanges electrons with the self-consistent bath (illustrated in the Figure), so that both spatial fluctuations (within the cluster) and temporal fluctuations are taken into account.
Therefore in cluster DMFT we can focus on the properties of both the cluster and the bath.
At the level of the cluster, we can analyse the probability that cluster electrons are found in a given configuration, i.e. the relative time the cluster electrons spend in a given configuration <cit.>. At the level of the bath, we can analyse the hybridization function, which fully encodes the dynamics of the hopping processes between cluster and bath.
The strategy of analysing both the probabilities of the impurity configurations and the hybridization function is a standard and often used approach in single-site DMFT studies <cit.>, and especially for multi-orbital systems, where electrons fluctuations among different atomic configurations can be related to a generalised concept of valence <cit.>.
However, this approach has not been explored much in the context of cluster DMFT studies, and little is known about the signatures of superconductivity in both the cluster electron configurations and the bath hybridization function. Our work addresses this problem, i.e. the effects of superconducting correlations on both cluster electron configurations and bath hybridization function.
More precisely, at the level of the cluster impurity, several cluster DMFT studies have revealed that cluster electrons are locked into short-range singlets in the normal state of the doped Mott insulator <cit.> realised by the 2D Hubbard model.
However, to our knowledge, only a few studies have investigated the impact of superconducting pairing on the cluster electron configurations: Haule and Kotliar <cit.> have demonstrated amplified short-range singlet correlations for the superconducting state of the t-J model around optimal doping. Yet it is still not clear to what extent this mechanism extends to the 2D Hubbard model, and how the onsite interaction strength, doping, and temperature affect this mechanism. Hence, we present here a study of the cluster electron configurations in the superconducting state of the 2D Hubbard model, for a wide range of doping levels and of interaction strength. We shall show that upon entering the superconducting state, for all values of U and doping, the cluster electrons spend more time in a four-electron singlet configuration, allowing us to identify short-range spin correlations in the form of singlets as key to superconducting pairing. The increased probability of the four-electron singlet comes at the expense of a reduced probability of the four-electron triplet, with no significant redistribution of probability among charge fluctuations. This allows us to establish that at the level of the cluster superconductivity primarily involves a reorganisation of short-range spin correlations rather than charge correlations.
At the level of the dynamics of the fluctuations between cluster and bath, few existing cluster DMFT studies have analysed the behavior of the hybridization function in the normal state of the 2D Hubbard model <cit.>. They revealed a mild momentum and doping dependence, in sharp contrast with the Green's function of the system which showed a marked dependence.
Even less work exists on the behavior of the hybridization function in the superconducting state <cit.>, and it is primarily focused on the t-J model. A detailed characterization of the hybridization function is still missing for the 2D Hubbard model in the superconducting state.
Hence, we present here a systematic analysis of the interaction strength, doping and temperature dependence of the spectral function of the bath in the superconducting state. We shall show that upon condensation, there is a redistribution of spectral weight leading to the opening of a superconducting gap. This enables us to infer that singlet pairs propagate coherently throughout the lattice.
Our work is organised as follows. In Section <ref> we briefly outline the cellular extension of DMFT (CDMFT) used in our study and how the probabilities of the different electronic configurations of the cluster can be extracted. In Section <ref> we overview the salient features of the established CDMFT solution of the 2D Hubbard model in the superconducting state. In Section <ref> we analyse the probabilities of the different electronic configurations of the cluster in the superconducting state. In Section <ref> we analyse the behavior of the spectral function of the self-consistently determined bath in the superconducting state. Then in Section <ref> we discuss the insights that can be gained on the behavior of the density of state of the system and of the zero-frequency spin susceptibility in the superconducting state. Finally, Section <ref> summarises our main findings.
§ MODEL AND METHOD
§.§ 2D Hubbard model
The 2D Hubbard model on a square lattice is
H = - ∑_⟨ ij ⟩σ t_ij c_iσ^† c_jσ + U∑_i n_i↑ n_i↓ -μ∑_iσ n_iσ .
Here, t_ij is the hopping amplitude between nearest neighbor sites ⟨ ij ⟩, U is the onsite Coulomb repulsion, μ is the chemical potential, c_iσ and c_iσ^† respectively destroy and create an electron at site i with spin σ, and n=c_iσ^†c_iσ is the number operator. t_ij=t=1 fixes our units.
§.§ CDMFT
We solve Eq. <ref> at finite temperature with the cellular <cit.> extension of dynamical mean field-theory <cit.> (CDMFT). For the purpose of discussing the approach from a cluster plus bath perspective of our work, we outline the CDMFT procedure. In this subsection we focus on the CDMFT equations in the normal state; the generalisation to the superconducting state will be discussed in subsection <ref>. CDMFT partitions the lattice into a superlattice of clusters, singles out one (any) cluster of size N_c from the lattice and embeds it in a self-consistent bath of noninteracting electrons. Hence CDMFT relies on the self-consistent solution of a cluster quantum impurity model.
The cluster quantum impurity Hamiltonian (cluster plus bath) is
H_ imp = H_ cl + H_ hyb +H_ hyb^† + H_ bath ,
where H_ cl=H_ cl(d_iσ, d_iσ^†) is the Hamiltonian of the cluster described by the operators d_iσ, d_iσ^†, H_ bath = ∑_μσϵ_μ a_μσ^† a_μσ is the Hamiltonian of the bath described by the bath energies ϵ_μ and operators a_μσ, a_μσ^†, and H_ hyb = ∑_iμσ V_μ i a_μσ^† d_iσ is the Hamiltonian describing the hybridization between the cluster and the bath via the amplitude V_μ i for an electron to hop from the cluster to the bath.
By integrating out the bath degrees of freedom, the action of the cluster quantum impurity (cluster plus bath) is:
S = S_ cl (ψ̂^†, ψ̂ ) + ∫_0^β dτ∫_0^β dτ' ψ̂^† (τ) Δ(τ,τ') ψ̂(τ'),
where S_ cl is the action of the cluster resulting from the tiling of the lattice, ψ̂ = (d̂_1 ↑ ⋯ d̂_N_c ↑ d̂_1 ↓ ⋯ d̂_N_c ↓)^T is a vector of the Grassmann variables d̂_iσ corresponding to the operators on the lattice,
and Δ(τ,τ') is the hybridization matrix function.
It describes the amplitude for hopping processes via any bath orbital from the cluster site i at time τ to the cluster site j at time τ'. Eq. <ref> can be rewritten as
S = - ∫_0^β dτ∫_0^β dτ' ψ̂^†(τ) G_0^-1 (τ,τ') ψ̂(τ')
+ U ∫_0^β dτ n̂_i↑(τ) n̂_i↓(τ),
where the Green's function of the noninteracting impurity, G_0, has been introduced as
G_0^-1 (iω_n) = (iω_n + μ) I - t_ cl - Δ (iω_n) .
Here t_ cl is the cluster hopping matrix t_ cl = ∫ d k̃ t(k̃) with t(k̃) the lattice hopping matrix in the supercell notation and with k̃ running over the reduced Brillouin zone of the superlattice. The elements of the hybridization matrix function Δ(iω_n) can be written in the form
Δ_ij (iω_n) = ∑_μV_iμ V_μ j^†/iω_n - ϵ_μ
i.e. as a function of the bath degrees of freedom ϵ_μ, V_μ i.
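To make the supercell notation concrete, the following sketch constructs t(k̃) for the 2× 2 tiling of the square lattice with nearest-neighbour hopping (illustrative only: the site ordering, the Fourier convention and the function name are our choices):

```python
import itertools
import numpy as np

t = 1.0
sites = np.array([[0, 0], [1, 0], [1, 1], [0, 1]])   # plaquette sites 1..4, counter-clockwise
A = np.array([[2, 0], [0, 2]])                        # superlattice vectors of the 2x2 tiling
nn = [np.array(v) for v in ((1, 0), (-1, 0), (0, 1), (0, -1))]

def t_ktilde(ktilde):
    # Supercell hopping matrix t(ktilde); ktilde lies in the reduced Brillouin zone.
    tk = np.zeros((4, 4), dtype=complex)
    for i, j in itertools.product(range(4), repeat=2):
        for n1, n2 in itertools.product((-1, 0, 1), repeat=2):
            R = n1 * A[0] + n2 * A[1]
            if any(np.array_equal(sites[j] + R - sites[i], v) for v in nn):
                tk[i, j] += -t * np.exp(1j * np.dot(ktilde, R))
    return tk

# The cluster hopping t_cl is the average of t(ktilde) over the reduced zone,
# so that only the R = 0 (intra-plaquette) bonds survive.
```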
For a given Δ(iω_n), the solution of the cluster quantum impurity model Eq. <ref> gives the cluster Green's function
G_ cl (τ -τ') = - ⟨ T_τ ψ̂(τ) ψ̂^† (τ') ⟩_S .
The Dyson equation defines the cluster self-energy as
Σ_ cl (iω_n) = G_0^-1 (iω_n) - G_ cl^-1 (iω_n) .
To fix Δ, we need to apply the self-consistency condition. The self-consistency condition requires that the cluster Green's function G_ cl computed from the cluster quantum impurity model coincides with the projection onto the cluster of the lattice Green's function G_ latt, i.e. the superlattice averaged Green's function G̅:
G̅ (iω_n) = ∫ d k̃ G_ latt (k̃, iω_n)
= ∫ d k̃ [ (iω_n +μ) I - t(k̃) -Σ_ latt (k̃, iω_n) ]^-1 .
The approximation that allows one to identify G_ cl with G̅ is that Σ_ latt (k̃, iω_n) ≈Σ_ cl (iω_n), i.e.
G̅ (iω_n) ≈∫ d k̃ [ (iω_n +μ) I - t(k̃) -Σ_ cl (iω_n) ]^-1 .
The self-consistency condition can then be written as
Δ (iω_n) = (iω_n +μ) I - t_ cl -Σ_ cl (iω_n) - G̅^-1(iω_n) .
In practice, we solve the CDMFT equations with an iterative procedure: starting from an initial guess for Δ, we solve the cluster quantum impurity model to obtain G_ cl[Δ], and then compute G̅ with Eq. <ref>. From G̅ we obtain an updated hybridization matrix Δ using Eq. <ref>, and we iterate the process until convergence is reached.
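Schematically, the iteration can be summarised as in the sketch below (illustrative only: the CT-HYB impurity solver is abstracted into a placeholder callable, the array layout and variable names are our choices, and convergence handling is simplified):

```python
import numpy as np

def cdmft_loop(delta0, wn, mu, t_cl, t_k_list, solve_impurity, tol=1e-6, max_iter=100):
    # delta0, Green's functions and self-energies are arrays of shape (n_w, Nc, Nc);
    # wn are the Matsubara frequencies, t_k_list the supercell hopping matrices t(ktilde).
    eye = np.eye(t_cl.shape[0])
    omega = (1j * wn + mu)[:, None, None] * eye
    delta = delta0
    for _ in range(max_iter):
        g_cl = solve_impurity(delta)                               # cluster Green's function G_cl[Delta]
        sigma_cl = (omega - t_cl - delta) - np.linalg.inv(g_cl)    # Dyson equation for Sigma_cl
        # superlattice-averaged Green's function with Sigma_latt approximated by Sigma_cl
        g_bar = sum(np.linalg.inv(omega - tk - sigma_cl) for tk in t_k_list) / len(t_k_list)
        # self-consistency condition: update the hybridization function
        delta_new = omega - t_cl - sigma_cl - np.linalg.inv(g_bar)
        if np.max(np.abs(delta_new - delta)) < tol:
            break
        delta = delta_new
    return delta_new, sigma_cl
```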
§.§ CT-HYB impurity solver
We solve the cluster quantum impurity model Eq. <ref> using the hybridization expansion continuous-time quantum Monte Carlo method (CT-HYB) <cit.>. Here we limit ourselves to outlining the key aspects of the CT-HYB algorithm that are relevant for our discussion.
In order to reduce the size of the matrices involved and to speed up the calculation, we choose a single-particle basis that transforms as the irreducible representations of the cluster Hamiltonian symmetries <cit.>. For a 2× 2 plaquette with vertices 1,2, 3, 4 oriented counter-clockwise with 1 on the left bottom corner, the point group symmetry C_2v with mirrors along the plaquette axes leads to the following single-particle basis (cluster momentum basis):
d_A_1, σ = d_(0,0), σ = 1/2(d_1σ+d_2σ+d_3σ+d_4 σ)
d_B_1, σ = d_(π,0), σ = 1/2(d_1σ-d_2σ-d_3σ+d_4 σ)
d_B_2, σ = d_(0,π), σ = 1/2(d_1σ+d_2σ-d_3σ-d_4 σ)
d_A_2, σ = d_(π,π), σ = 1/2(d_1σ-d_2σ+d_3σ-d_4 σ) ,
where A_1, B_1, B_2, A_2 are the irreducible representations of C_2v, respectively denoted with the cluster momenta K = { (0,0), (π,0), (0,π), (π,π) }. In the previous section, every operator was expressed in the position basis, but here they are expressed in this new K basis. In this basis, the 8 × 8 hybridization matrix Δ acquires a block diagonal form:
Δ = ( [ Δ_↑ 0; 0 Δ_↓ ]) ,
with
Δ_σ = ( [ Δ_(0,0) 0 0 0; 0 Δ_(π,0) 0 0; 0 0 Δ_(0,π) 0; 0 0 0 Δ_(π,π) ]) .
Furthermore, time-reversal symmetry restricts the matrix blocks Δ_↑, Δ_↓ to take the same value, and C_4 symmetry (π/2 rotation) prescribes that Δ_(0,π) = Δ_(π,0). As a result, there are only 3 independent components of the hybridization matrix Δ.
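For concreteness, the orthogonal transformation to the cluster momentum basis can be written down explicitly; the short sketch below (the notation and the choice of checking the transformation on the intra-plaquette hopping matrix are ours) also shows that it renders the intra-plaquette hopping diagonal:

```python
import numpy as np

# Rows of S: K = (0,0), (pi,0), (0,pi), (pi,pi);
# columns: sites 1..4 (counter-clockwise, site 1 at the bottom-left corner).
S = 0.5 * np.array([[1,  1,  1,  1],
                    [1, -1, -1,  1],
                    [1,  1, -1, -1],
                    [1, -1,  1, -1]], dtype=float)
assert np.allclose(S @ S.T, np.eye(4))            # orthonormal single-particle basis

t = 1.0
t_cl = -t * np.array([[0, 1, 0, 1],               # intra-plaquette bonds 1-2, 2-3, 3-4, 4-1
                      [1, 0, 1, 0],
                      [0, 1, 0, 1],
                      [1, 0, 1, 0]], dtype=float)
print(np.round(S @ t_cl @ S.T, 12))               # diag(-2t, 0, 0, 2t): diagonal in K
```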
The CT-HYB algorithm writes the impurity partition function Z_ imp = e^-β H_ imp in the interaction representation and expands it in powers of the hybridization, obtaining
Z_ imp = ∫ D[ d̂, d̂^†] e^-S
= Z_ bath∑_k=0^∞∫_0^β dτ_1⋯ dτ_k∫_0^β dτ_1^'⋯ dτ_k^'
×∑_ K_1⋯ K_k∑_ K_1^'⋯ K_k^' w {C} ,
where the integrands
w {C} =
_1 ≤ m, n ≤ |C|[
Δ_ K_m K_n^' (τ_m - τ_n^')
]
×_ cl[ T_τ e^-β H_ cl∏_r=1^|C| d_ K_r (τ_r) d_ K_r^'^† (τ_r^') ]
are the weights of a distribution over the configuration space C= { ( K_1, τ_1), ( K_1^', τ_1^') ⋯ ( K_k, τ_k), ( K_k^', τ_k^') }. This configuration space is sampled with Markov chain Monte Carlo. In order to reuse matrix products previously calculated, we use the Lazy Skip List algorithm <cit.>.
§.§ Probabilities of the plaquette sectors
In this work we are interested in the reduced density matrix of the cluster, ρ_ cl. Within the CT-HYB algorithm it is possible to measure the diagonal elements of ρ_ cl. This procedure was developed in Ref. hauleCTQMC. Here we limit ourselves to outlining the key aspects of the CT-HYB algorithm that are relevant for our discussion. Further details can be found in Refs. hauleCTQMC, patrickSkipList.
For a 2× 2 plaquette, the cluster Hamiltonian H_ cl conserves charge, spin and cluster momentum.
We can group the 256 eigenstates {|μ⟩} of H_ cl according to the quantum numbers N= ∑_i n_i↑ + n_i↓, S_z = ∑_i (n_i↑ - n_i↓)/2 and cluster momentum K, so that both H_ cl and ρ_ cl become block diagonal. This grouping results in 84 blocks where each block, or sector, is labeled by the set of quantum numbers N,S_z, K. Let m be the index of the sector (or matrix block) and {|μ⟩}_m be the set of cluster eigenstates belonging to m. Table <ref> in Appendix <ref> lists the sectors m grouped according to the quantum numbers N, S_z and K.
The reduced density matrix is ρ_ cl = _ bath [e^-β H_ imp/Z_ imp], and the estimator for its diagonal elements is <cit.>
(ρ_ cl)_μμ = ⟨μ| T_τ e^-β H_ cl∏_r=1^|C| d_ K_r (τ_r) d_ K_r^'^† (τ_r^') |μ⟩/_ cl[ T_τ e^-β H_ cl∏_r=1^|C| d_ K_r (τ_r) d_ K_r^'^† (τ_r^') ] .
The probability associated to the cluster eigenstates belonging to a given sector m is
P_m = ∑_μ∈{|μ⟩}_m (ρ_ cl)_μμ .
The probabilities { P_m } will be analysed in Sec. <ref>.
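Schematically, once the diagonal of ρ_ cl has been accumulated, the sector probabilities follow from a simple grouping over quantum numbers, as in the sketch below (the container types and names are our choices):

```python
from collections import defaultdict
import numpy as np

def sector_probabilities(rho_diag, labels):
    # rho_diag: the 256 diagonal elements of rho_cl in the cluster eigenbasis;
    # labels:   one (N, Sz, K) tuple per eigenstate, identifying its sector m.
    P = defaultdict(float)
    for p, m in zip(rho_diag, labels):
        P[m] += p
    assert np.isclose(sum(P.values()), 1.0)       # Tr rho_cl = 1
    return dict(P)
```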
§.§ d-wave superconducting state
The CDMFT formalism outlined in subsections <ref>, <ref>, <ref> applies to the normal state.
For the d-wave superconducting state, the CDMFT method can be generalised <cit.> as follows.
It is useful to introduce the Nambu basis Ψ_ K = (d_(0,0)↑ d_(π,0)↑ d_(0,π)↑ d_(π,π)↑ d_(0,0)↓^† d_(π,0)↓^† d_(0,π)↓^† d_(π,π)↓^† )^T, where we use that -(0,0) = (0,0), -(π,0) = (π,0), etc. because of Umklapp processes. In this basis the 8 × 8 matrix hybridization function becomes
Δ(τ) = ([ Δ_↑(τ) F(τ); F^†(τ) -Δ_↓(-τ) ]) ,
where Δ_σ has the same structure of Eq. <ref>, and the anomalous component F is block diagonal.
Note that the 2 × 2 cluster Hamiltonian H_ cl has C_4v = C_2v⊗ C_4 point group symmetry.
The d-wave superconducting order parameter breaks the C_4 symmetry (i.e. it changes sign under a π/2 rotation), but preserves the C_2v symmetry with mirrors along the plaquette axes, of the original C_4v group.
Therefore the superconducting order parameter has the same C_2v symmetry as the one particle cluster basis (Eq. <ref>).
This choice implies that the d-wave superconducting order parameter transforms in space as the A_1 representation of the C_2v symmetry group with mirrors along the plaquette axes [This has to be contrasted with Ref. Hebert:2015, where, for the anisotropic Hubbard model within 2 × 2 CDMFT, the C_2v symmetry with mirrors along the plaquette diagonals was used as a representation for the one particle basis. This choice dictates that the superconducting order parameter transforms in space as the A_2 representation of the C_2 v symmetry group with mirrors along the plaquette diagonals.].
Hence only the entries in F transforming as A_1 can be finite, implying that only diagonal components of F are nonzero.
Furthermore, the d-wave superconducting order parameter changes sign under a π/2 rotation, imposing the constraints that F_(0,0)↑, (0,0)↓ and F_(π,π)↑, (π,π)↓ are zero and F_(0,π) ↑, (0,π) ↓ = - F_(π,0) ↑, (π,0) ↓. Thus, written explicitly, the anomalous component of the matrix hybridization function reads
F= ( [ 0 0 0 0; 0 F_(π,0) ↑, (π,0) ↓ 0 0; 0 0 -F_(π,0) ↑, (π,0) ↓ 0; 0 0 0 0 ]) .
To allow for a superconducting solution, we start the CDMFT loop scheme with a guess for Δ containing a finite off-diagonal component F_(π,0) ↑, (π,0) ↓. In all subsequent iterations Δ evolves unconstrained
and F_(π,0) ↑, (π,0) ↓ will either survive or vanish. When self-consistency is reached and F_(π,0) ↑, (π,0) ↓ survives, the solution is superconducting.
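As an illustration, the anomalous block used to seed the superconducting solution can be written as follows (a sketch; the seed amplitude is an arbitrary small value of our choosing):

```python
import numpy as np

def dwave_anomalous_seed(f0=0.05):
    # Cluster-momentum order: (0,0), (pi,0), (0,pi), (pi,pi).
    F = np.zeros((4, 4))
    F[1, 1] = f0            # F_{(pi,0)up,(pi,0)down}
    F[2, 2] = -f0           # F_{(0,pi)up,(0,pi)down}: sign change under a pi/2 rotation
    return F

# F enters the off-diagonal Nambu blocks of the initial hybridization matrix Delta.
```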
The d-wave symmetry is broken in the bath (Δ) but not in the cluster. In other words, the superconducting state breaks the C_4 symmetry but this symmetry is still present in the cluster Hamiltonian H_ cl. This broken symmetry in Δ (corresponding to the non vanishing F_(π,0) ↑, (π,0) ↓ component) propagates to G_ cl and Σ_ cl of the cluster through Eq. (<ref>), even though H_ cl still has the full C_4v symmetry.
From the point of view of the impurity solver, the calculation of the Monte Carlo weight of each configuration is influenced by the bath, whose degrees of freedom are in the determinant in Eq. <ref>. That bath has components that are off-diagonal in Nambu space. The trace on the cluster in Eq. <ref> however always conserves the number of particles since the symmetry is never explicitly broken in the cluster. It is through the bath that the cluster Green's function acquires off-diagonal components.
Finally, as demonstrated in Ref. patrickERG, the ergodicity of the CT-HYB algorithm in the d-wave superconducting state can only be obtained by allowing four operator updates in the sampling of the Markov chain.
§ SUPERCONDUCTING STATE PHASE DIAGRAM WITH PLAQUETTE CDMFT
Prior work revealed the finite temperature aspects of the superconducting phase diagram of the 2D Hubbard model on a square lattice with 2× 2 plaquette CDMFT <cit.>. Here, we briefly survey two key features of the superconducting state that are relevant for our discussion: the behavior of the superconducting transition temperature T^d_c (where d emphasises it is the critical temperature at the cluster dynamical mean-field level), and the link between T^d_c and the onset temperature of the pseudogap T^*.
Although in 2D long-range order is excluded by thermal fluctuations <cit.>, T_c^d physically denotes when superconducting pairs develop within the 2× 2 plaquette <cit.>. Also, here we focus only on the superconducting and normal states; competition with other states is not considered.
Figure <ref>a-d shows the temperature hole-doping phase diagram for four different values of the interaction strength U, ranging from U=5.2 to U=12. Within 2× 2 plaquette CDMFT, the value of U needed to transform a metal to a Mott insulator at half-filling (δ=0) is U_MIT≈ 5.95 <cit.>. All data points in Figure <ref> are extracted from our previous work of Ref. CaitlinPNAS2021, which employs the same methodology as used in this work.
The superconducting state is indicated in red. It is bounded by T_c^d, and it is the region below which the superconducting order parameter Φ = ⟨ d^†_(0,π) ↑d^†_(0,π)↓⟩ is nonzero. Figure <ref>e-h shows Φ(δ) for T=1/50 as a sample of the calculations performed across the U-T-δ space.
By systematically varying the interaction strength U and doping δ, interesting trends emerge, from which insights on microscopic mechanisms of superconductivity can be derived <cit.>.
Below U_MIT, T_c^d(δ) decreases with increasing doping; above U_MIT, T_c^d(δ) has the shape of a dome, with the highest T_c^d just above U_MIT; the superconducting dome is asymmetric in doping with a steep slope upon doping the parent Mott state.
Additional insights can be gained from contrasting the superconducting phase and the pseudogap phase (shown in blue) <cit.>. The pseudogap is bounded by a crossover T^*(δ), which can be calculated from the drop in the zero-frequency spin susceptibility as a function of T <cit.> (also see Section <ref>). It is a strongly correlated phase that only appears for U> U_MIT. This suggests a link to the superconducting phase, which has a dome-like shape for U> U_MIT only. However, at large doping superconductivity can emerge from a metal in the absence of a pseudogap, implying that they are two distinct phenomena <cit.>. This is because the doping at which the pseudogap ends is contained within the centre of the superconducting dome.
This is where a hidden strongly correlated pseudogap - correlated metal transition occurs. The nature of this transition is first order, and it is a purely electronic transition without symmetry breaking <cit.>. Upon increasing temperature, this transition ends at a critical endpoint which gives way to crossover lines <cit.>. These crossovers mark anomalies <cit.> in observables as a function of doping, including electronic specific heat <cit.>, charge compressibility <cit.>, nonlocal density fluctuations <cit.>, entanglement entropy <cit.>, velocity of sound <cit.>. T^*(δ) is a high temperature precursor of such crossovers <cit.>.
Furthermore by studying the difference in kinetic and potential energy between the normal and superconducting states, Ref. LorenzoSC shows that the doping at which the hidden transition occurs correlates with the largest condensation energy.
Finally, Ref. Lorenzo3band shows that the main features obtained in the 2D Hubbard model reviewed here (the superconducting dome, the pseudogap to correlated metal hidden transition and its associated supercritical crossovers, and the source of pairing energy), are also found in the three-band Emery model, suggesting they are emergent phenomena of doped Mott insulators, robust against microscopic details.
§ FLUCTUATIONS BETWEEN CLUSTER EIGENSTATES IN THE SUPERCONDUCTING AND NORMAL PHASES
This work aims at obtaining new insights on the superconducting correlations in the 2D Hubbard model by taking the perspective of the cluster impurity embedded in a self consistent bath.
This section focuses on the properties of the embedded cluster (here, a 2× 2 plaquette). The next section will focus on the properties of the bath. Here we calculate the probability that the electrons in the cluster are in any of the cluster eigenstates {|μ⟩}_m characterised by the quantum numbers N,S_z, K of the sector m. This probability can be viewed <cit.> as representing the relative time the plaquette electrons occupy the cluster eigenstates {|μ⟩}_m.
§.§ Probability distribution of plaquette sectors
The behavior of the probabilities of the plaquette sectors for the normal state solution have been analysed in Ref. sht2. In this work we report the behavior of the probabilities in the superconducting phase, and contrast with that in the normal state.
Figure <ref> shows the histogram of the probability of the plaquette sectors { m }, for the T-U-δ values indicated by filled hexagons in Figure <ref>. Data are at T=1/50, chosen as it is well below (T_c^d)_ max for each U. The x-axis shows the index m of each sector (see Table <ref>). Each bar of the histogram has two solutions that are superimposed: the superconducting solution for each sector m is shown with a filled bar, whereas the normal state solution is shown by an unfilled bar.
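As an illustration of this plotting convention, the following minimal matplotlib sketch draws the superconducting probabilities as filled bars and the normal-state probabilities as unfilled bars on a logarithmic axis; the arrays p_sc and p_n (one entry per sector) are hypothetical inputs assumed to be produced by the impurity solver.

import matplotlib.pyplot as plt

def plot_sector_histogram(p_sc, p_n):
    # p_sc, p_n: probabilities of the 84 sectors in the superconducting and normal states
    m = range(len(p_sc))
    plt.bar(m, p_sc, color='tab:red', alpha=0.8, label='superconducting')   # filled bars
    plt.bar(m, p_n, fill=False, edgecolor='black', label='normal')          # unfilled bars
    plt.yscale('log')                     # note the logarithmic probability axis
    plt.xlabel('sector index m')
    plt.ylabel('probability')
    plt.legend()
    plt.show()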
From this analysis a few trends emerge. First, of the 84 available sectors { m } of the plaquette, very few have a large probability <cit.> (note the logarithmic scale of the y-axis). Second, for each U > U_MIT, there are fewer highly probable states at low doping than at high doping, and even fewer upon increasing U. We note that for U < U_MIT at δ=0, the system is particle-hole symmetric, which is reflected in the probabilities of the sectors. The overall difference in the probability of a given sector { m } between the normal and the superconducting phase is small (less than 0.1). This small difference is consistent with the small superconducting condensation energy, which Ref. LorenzoSC estimated to be smaller than 0.01t for the same range of U and T considered here.
To gain further insights, we highlight sectors { m } with the highest probabilities for given N and S_z in color in Figure <ref>.
Upon inspection, we find that they can be grouped in the following sets:
𝒮_2= { N=2, S_z=0, K=(0,0) }
𝒟_3= {N=3, S_z=±1/2, K=(0,π), (π,0) }
𝒮_4= {N=4, S_z=0, K=(0,0) }
𝒯_4= {N=4, S_z=0, ± 1, K=(π,π) } .
𝒮_2 and 𝒮_4 denote the sets of all eigenstates with two and four electrons, respectively, with S_z=0 and cluster momentum K=(0,0). As a shorthand, we call these sets the 'two-electron singlet' and the 'four-electron singlet', because each contains only the sector with S_z=0.
The set 𝒯_4 denotes all eigenstates with four electrons, S_z=0, ± 1, and cluster momentum K=(π,π). We call this set the 'four-electron triplet' as it contains the 3 sectors with S_z=0, ± 1.
The set 𝒟_3 contains 4 sectors: all eigenstates with three electrons, S_z=± 1/2, and cluster momenta K=(π,0), (0,π). We call this set the 'three-electron doublet' as it contains the 2 spin sectors with S_z=± 1/2. Our naming convention follows Refs. hauleDOPING, sht, sht2.
The probabilities of these four key sets are shown in Figure <ref> for both the normal and superconducting states. The remaining sectors are grouped into a set denoted ℛ.
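To make the grouping explicit, the sketch below shows one way the 84 sector probabilities, labelled by (N, S_z, K), could be accumulated into the four key sets and the remainder ℛ. The dictionary layout and the string encoding of K are hypothetical choices for illustration; the probabilities themselves are assumed to come from the impurity solver.

def group_sectors(probs):
    # probs: dict mapping (N, Sz, K) -> probability, with K encoded as
    # 'G' = (0,0), 'X' = (pi,0), 'Y' = (0,pi), 'M' = (pi,pi)
    sets = {'S2': 0.0, 'D3': 0.0, 'S4': 0.0, 'T4': 0.0, 'R': 0.0}
    for (n, sz, k), p in probs.items():
        if n == 2 and sz == 0 and k == 'G':
            sets['S2'] += p                      # two-electron singlet
        elif n == 3 and abs(sz) == 0.5 and k in ('X', 'Y'):
            sets['D3'] += p                      # three-electron doublet
        elif n == 4 and sz == 0 and k == 'G':
            sets['S4'] += p                      # four-electron singlet
        elif n == 4 and sz in (-1, 0, 1) and k == 'M':
            sets['T4'] += p                      # four-electron triplet
        else:
            sets['R'] += p                       # remainder
    return sets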
For U> U_MIT, previous studies <cit.> have demonstrated that the dominant sector in the normal state pseudogap phase is the four-electron singlet 𝒮_4. Physically, this is because superexchange locks the electrons of the plaquette into one prevailing singlet configuration. When superconductivity emerges from a pseudogap (panels c, e, g), the probabilities of the plaquette sectors do not undergo a drastic change, and in particular the four-electron singlet 𝒮_4 remains the dominant configuration with a slightly increased probability.
On the other hand, when superconductivity emerges from a metal without a pseudogap (panels d, f, h for U > U_MIT, as well as a, b for U < U_MIT) there is a more marked, though still overall small, redistribution of plaquette probabilities; in particular the probability of the four-electron singlet 𝒮_4 increases at the expense of the probability of the four-electron triplet 𝒯_4. In this regime the 𝒮_4 singlet has the largest probability but is not dominant, i.e. its probability is comparable with that of other sectors, suggesting the plaquette electrons spend similar amounts of time in the other sectors. Regardless of whether superconductivity emerges from a pseudogap or a metal, the probability of the doublet 𝒟_3 remains essentially unchanged upon superconducting condensation.
§.§ Doping evolution of plaquette sector probabilities
Figure <ref> shows the probabilities of the four key cluster sets of sectors identified in Eq. <ref> as a function of doping, in the superconducting and normal states (filled and unfilled symbols, respectively), for different values of U at T=1/50.
The doping evolution of the plaquette sector probabilities in the normal state has been discussed in Refs. sht, sht2; here we report the behavior in the superconducting state and compare it with the normal state.
Hole doping enables both charge and spin fluctuations. Consistent with those previous reports, for U>U_MIT we find: 1) the probability of the four-electron singlet 𝒮_4 decreases rapidly upon doping the parent Mott insulating state, where superexchange is largest; 2) the probability of the three-electron doublet 𝒟_3 increases, in line with the charge fluctuations introduced by hole doping, which break the singlet bonds; 3) the probability of the four-electron triplet 𝒯_4 first increases due to the decay of superexchange with doping, and then decreases as the total number of electrons in the system is reduced; 4) the two-electron singlet 𝒮_2 undergoes a slow but steady increase as the system evolves away from the dominant four-electron singlet sector with doping. On the other hand, for U<U_MIT, the reduced role of superexchange depletes the four-electron singlet 𝒮_4, so at low doping the probability of the four-electron triplet 𝒯_4, and to a lesser degree that of the three-electron doublet 𝒟_3, is no longer suppressed.
Let us now turn to the doping evolution of the probabilities of the cluster sectors in the superconducting state (filled symbols). Overall, upon condensation the probability of the four-electron singlet 𝒮_4 increases at the expense of the probability of the four-electron triplet 𝒯_4, whereas the probabilities of the three-electron doublet 𝒟_3 and the two-electron singlet 𝒮_2 do not change appreciably.
Physically, this means that, for all values of U, upon condensation the system lowers its energy by a redistribution of mainly short-range spin, but not charge, excitations: electrons in Cooper pairs are locked into short-range spin singlets by superexchange.
These four-electron 𝒮_4 singlets are already the dominant configuration in the underlying normal state pseudogap. As we shall discuss in Section <ref>, upon condensation these singlets propagate coherently in the lattice. Our results complement the findings for the t-J model around optimal doping of Refs. hauleAVOIDED, hauleDOPING.
To better understand the doping evolution of the plaquette fluctuations between the singlet 𝒮_4 and the triplet 𝒯_4, we show, in the insets of each panel, the difference Δ P of the probabilities of each of these two sets between the normal and the superconducting phases (red and blue, respectively). For U>U_MIT, this difference is nonmonotonic in doping.
Examining panel (b) for U=6.2, upon condensation the probability of the 𝒮_4 singlet shows minimal change at low doping, with a rapid increase as the doping is increased past the critical endpoint of the normal-state pseudogap-metal first-order transition (grey dot). The difference Δ P eventually decreases again approaching the end of the superconducting dome, recovering the normal-phase probabilities. At higher U, the trend of the difference for the singlet is similar, but instead shows a gradual increase to a broad maximum. This is possibly because of the temperature dependence of the probabilities in the normal state; indeed, the critical endpoint of the pseudogap-metal transition in the normal state shifts to lower temperature and higher doping with increasing U. Note that for U=6.2, T=1/50 is in close proximity to this transition.
Overall, our analysis of the plaquette sectors provides two main insights into the superconducting correlations. First, it establishes that, at the level of the cluster, superconductivity mainly entails a reorganisation of short-range spin correlations rather than charge correlations. Second, it identifies short-range spin correlations in the form of four-electron singlets as key to superconducting pairing.
Note that, even if isolated plaquettes show tendencies to singlet formation <cit.>, it is only when these singlets are immersed in the self-consistent bath, which accounts for the effect of the infinite lattice, that superconductivity can arise. The behavior of the bath is thus analysed in the next section.
§ SELF-CONSISTENT BATH HYBRIDIZATION FUNCTION
The preceding section demonstrated that upon condensation there is a redistribution of the probabilities of the plaquette sectors. This redistribution is small and involves mainly short-range spin (singlet 𝒮_4, triplet 𝒯_4) but not charge (doublet 𝒟_3) excitations. In CDMFT, however, the plaquette is not isolated but is embedded in a self-consistent bath of noninteracting electrons. The plaquette exchanges electrons with the bath, and it is through this exchange that the plaquette is able to make transitions between the 256 available plaquette eigenstates.
Furthermore, the d-wave symmetry is broken in the bath.
The goal of this section is therefore to analyse the behavior of the bath upon condensation, which is described by the Nambu diagonal hybridization matrix function Δ. Note that although we separate the discussion of the behavior of the bath from that of the plaquette for practical purposes, they are not independent quantities: the plaquette is immersed in the bath which is self-consistently determined. Therefore the behavior of the plaquette influences that of the bath and vice versa.
Figure <ref> shows -ImΔ_K(ω) both in the normal and superconducting states (dashed and filled lines respectively). In the superconducting case, Δ_K(ω) is the Nambu diagonal (i.e. normal) hybridization function (see Eq. <ref>). For both normal and superconducting cases, -ImΔ_K(ω) gives the (normal) spectral function of the bath resolved in the cluster momentum K. This quantity is shown for the values of interaction, doping, and temperature corresponding to the filled hexagons of Fig. <ref>.
We perform the analytical continuation from imaginary to real frequencies using the method of Ref. DominicMEM and plot the independent diagonal components K= (0,0), (π, 0), (π, π) of the matrix Δ, in the energy window ω∈ (-2,2).
The behavior of the bath in the normal state has been described in Refs. hauleDOPING, sht2, and in Ref. michelPRB for a 2-site cluster. The focus of the present work is to analyse the behavior in the superconducting state and contrast it with that in the normal state. Hence, for better visualisation, we show the superconducting solution shaded.
We shall first briefly review the properties of the K-resolved spectral function of the bath in the normal state, as prior work hauleDOPING, sht2 mostly focused on the spectral function of the bath on the Matsubara axis. The bath shows finite spectral weight close to the Fermi energy, displaying metallic (panels a, b, d, f, h) or pseudogap (panels c, e, g) behavior. The latter takes a markedly asymmetric shape, with a more pronounced peak below the Fermi energy. In all cases, the bath is weakly K-dependent close to the Fermi energy.
Upon entering into the superconducting phase, the Nambu diagonal spectral function of the bath shows a dramatic redistribution of spectral weight at low frequency.
A redistribution of the spectral weight in the bath is expected because in CDMFT, d-wave symmetry is broken in the bath but not in the cluster.
For all interaction strengths and dopings considered, the bath opens a superconducting gap. Superconductivity emerging from a pseudogap (panels c, e, g) leaves a distinct signature in the spectral function of the bath, in the form of an inherited asymmetry of the superconducting gap. The position of the superconducting coherence peaks is different from the position of the peaks of the pseudogap. Upon doping, the size of the gap narrows (panels b, d, f, h) and becomes more symmetric, particularly for K= (π,0). Similarly to the normal state, the bath in the superconducting state shows a weak K-dependence at low frequency.
The results of Figure <ref> can be summarised by computing the local density of states of the bath N_bath(ω)=-ImΔ_R=(0,0)(ω). Its low frequency part is shown in Figure <ref> with the same color code as Figure <ref>. The inset of each panel shows the full frequency range of N_bath(ω), where the lower and upper Hubbard bands can be seen.
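The relation between the K-resolved components and N_bath(ω) can be made explicit. The sketch below assumes the standard convention for the 2×2 plaquette that the local (R=(0,0)) component is the uniform average over the four cluster momenta, with (π,0) and (0,π) degenerate; the inputs are assumed to be the analytically continued spectral functions -ImΔ_K(ω) on a common frequency grid.

import numpy as np

def bath_dos(im_delta_G, im_delta_X, im_delta_M):
    # Arguments: -Im Delta_K(omega) for K = (0,0), (pi,0) and (pi,pi);
    # (0,pi) is taken to be degenerate with (pi,0), hence the factor of 2.
    return 0.25 * (im_delta_G + 2.0 * im_delta_X + im_delta_M)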
To conclude this section on the behavior of the (Nambu diagonal) spectral function and local density of states of the bath, our main finding is therefore a dramatic spectral weight redistribution at low frequency upon superconducting condensation, which leads to the opening of a superconducting gap.
The bath hybridization function Δ_K(ω) describes the hopping processes of the electrons between the plaquette and the bath; a superconducting gap in Δ_K(ω) therefore suggests that these one-particle hopping processes occur without dissipation.
The increased coherence in the superconducting state can also be deduced from the suppressed electronic entropy <cit.>.
This study also paves the way for future investigations of the Nambu off-diagonal (i.e. anomalous) component of the spectral function of the bath, which gives information about the pairing dynamics. Since this anomalous component is not positive definite, analytical continuation is more challenging <cit.>.
§ CONSEQUENCES ON THE DENSITY OF STATES OF THE SYSTEM AND SPIN SUSCEPTIBILITY
The preceding Sections <ref> and <ref> have shown how superconductivity is realised in CDMFT at the level of the cluster quantum impurity problem. We found that upon entering the superconducting state, there is a redistribution of the probabilities of the plaquette sectors and of the spectral weight of the bath hybridization function. In this section, we show how this analysis provides new insights into the behavior of the density of states of the system and of the zero-frequency spin susceptibility.
§.§ Density of states of the system
Figure <ref> shows the imaginary part of the (normal) Green's function -ImG̅_K(ω), which gives the (normal) spectral function of the system. Figure <ref> shows the resulting local density of states of the system N(ω)=-ImG̅_R=(0,0)(ω). Data are shown for both the normal and superconducting state (dashed lines and filled lines with shaded regions, respectively). Previous studies have analysed the behavior of these quantities <cit.>, but the new insight brought by our study lies in the comparison of the Green's function and the hybridization function, in the superconducting state. In single-site DMFT on the Bethe lattice, the self-consistency condition requires Δ= t^2G̅, so the bath is directly proportional to the Green's function. Therefore in CDMFT, differences between Δ and G̅ should be ascribed to the short range correlations that are incorporated in the cluster.
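For orientation, the single-site Bethe-lattice self-consistency condition quoted above can be illustrated with a short toy iteration. The sketch below assumes U=0 (so the "impurity solver" reduces to the free propagator) and only illustrates the structure of the Δ = t²G̅ condition, not the CDMFT calculation used in this work.

import numpy as np

t, mu, beta, n_iw = 1.0, 0.0, 50.0, 512
iw = 1j * (2 * np.arange(n_iw) + 1) * np.pi / beta   # fermionic Matsubara frequencies

delta = np.zeros(n_iw, dtype=complex)                # initial guess for the hybridization
for _ in range(2000):
    g = 1.0 / (iw + mu - delta)                      # impurity Green's function at U = 0
    delta_new = t**2 * g                             # Bethe-lattice condition Delta = t^2 * G
    if np.max(np.abs(delta_new - delta)) < 1e-10:
        break
    delta = delta_new
# At convergence, G(iw_n) reproduces the semicircular density of states of half-width 2t.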
First, we will briefly recap the behavior of the spectral function of the system in the normal state, analysed in Refs. sht2, ssht. In sharp contrast with Fig. <ref> for the bath, the Green's function is strongly K-dependent. At low doping, the spectral function of the bath in Fig. <ref> shows an asymmetric pseudogap for all K-components. In Fig. <ref> there is instead an asymmetric pseudogap in the K=(π,0) component only, with the K=(π,π) component showing insulating-like behavior. Therefore the strong K-differentiation is linked to short range correlations within the cluster. At high doping, the flat behavior of the bath hybridization function at low frequency for all K in Fig. <ref> is replaced in Fig. <ref> by a quasiparticle peak in the K=(π,0) component.
We now turn to the analysis of the superconducting phase. Again, for all values considered here, the Nambu diagonal spectral function of the system shows marked cluster momentum differentiation, in contrast with the behavior of the bath, emphasising again the importance of short-range correlations included in the cluster.
The bath hybridization function in Fig. <ref> shows superconducting coherence peaks for all K components. In Fig. <ref> the coherence peaks manifest predominantly in the K=(π,0) component and to a lesser degree in the K=(0,0) component. At low doping for U>U_MIT (panels c, e, g), the spectral function in the superconducting state reflects the inherited particle-hole asymmetry of the underlying normal-state pseudogap.
We can summarise the results of Fig. <ref> in the local density of states, Fig. <ref>. Upon condensation, the density of states develops a superconducting gap across the Fermi energy. The redistribution of spectral weight between the normal and superconducting states occurs over a range of frequency larger than the gap, a typical signature of strongly correlated superconductivity. The asymmetry in the superconducting state is inherited from the asymmetry in the pseudogap. However, as already observed in Refs. Gull:2013, Verret:PRB2019, the magnitude of the superconducting gap differs from that of the pseudogap, implying that they are two distinct phenomena. The width of the superconducting gap decreases with increasing doping. Note that the system is a d_x^2-y^2 superconductor; however, a cluster larger than a 2×2 plaquette is needed to resolve the nodes along the diagonals of the Brillouin zone.
Comparing Fig. <ref> to Fig. <ref>, the overall shape is similar; however, the magnitude of the coherence peaks is enhanced in G̅ compared to Δ.
This reflects the fact that superconducting fluctuations are also present in the cluster through the Nambu off-diagonal hybridization function and self-energy.
Overall, the comparison between the spectral functions of Δ and G̅ enabled us to identify key signatures of short-range correlations in the spectral functions of the system: enhanced coherence peaks and a strong cluster momentum dependence.
§.§ Zero-frequency spin susceptibility
Next, we turn to the signatures of the superconducting correlations in the zero-frequency spin susceptibility. This quantity is defined by χ_0 (T) = ∫_0^β⟨ S_z(τ) S_z(0) ⟩ dτ, where S_z is the projection of the total spin of the plaquette along the z direction. Figure <ref>a, b, c, d shows χ_0(T) as a function of temperature T, both in the superconducting and normal states (filled and open symbols, respectively). Data are shown for several values of U and δ, corresponding to the color-coded hexagons in Figure <ref>.
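For concreteness, the sketch below shows how χ_0 could be evaluated from a spin-spin correlator measured on a uniform imaginary-time grid. The grid and the correlator are assumed inputs (in practice they are accumulated by the CT-HYB impurity solver); the example correlator is purely hypothetical and serves only to make the snippet runnable.

import numpy as np

def chi0(tau, szsz):
    # tau: uniform grid from 0 to beta; szsz: <S_z(tau) S_z(0)> on that grid
    dtau = tau[1] - tau[0]
    return dtau * (szsz.sum() - 0.5 * (szsz[0] + szsz[-1]))   # trapezoidal rule

beta = 50.0
tau = np.linspace(0.0, beta, 201)
szsz = 0.05 * np.exp(-4.0 * tau * (beta - tau) / beta**2)     # hypothetical correlator
print(chi0(tau, szsz))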
The temperature and doping behavior of χ_0 in the normal state has been discussed in Refs. sht2, ssht, sshtRHO. In the normal state, χ_0(T) is Pauli-like in the correlated metal found below U_ MIT (green and red circles in panel (a)) and above U_ MIT at large δ (red circles in panels (b), (c), (d)). In contrast, for U>U_ MIT and small doping (blue circles in panels (b), (c), (d)), χ_0(T) shows a low-temperature drop.
By looking at the behavior of the probabilities of the cluster sectors as a function of temperature, it can be seen that the drop in the spin susceptibility coincides with an increase in the probability of the four-electron singlet (see lower panels e, f, g, h) as noted in Ref. ssht. The maximum in χ_0(T) signals the crossover temperature T^* in Fig. <ref>, which indicates the opening of the strongly correlated pseudogap.
Let us now turn to the superconducting state. Our main finding is that χ_0(T) dramatically drops below the superconducting temperature T_c^d, for all doping levels and all values of U considered here (filled squares in Figure <ref> panels a, b, c, d).
The drop can be associated with the further increase of the probability of the four-electron singlet that occurs upon entering the superconducting state below T_c^d (see panels e, f, g, h). Physically, Cooper pairs in the superconducting state are locked into singlets, and hence spin fluctuations to other cluster configurations are reduced.
When superconductivity emerges from a metallic state (red curve), the drop in the spin susceptibility is more pronounced than when it emerges from a pseudogap (blue curve), because in the latter case singlet formation has already occurred at T^* in the underlying normal state.
Note that in the superconducting state at low temperature, χ_0(T) saturates at a nonzero value, mirroring the saturation of the singlet probability with temperature (see lower panels). This saturation is due to the persisting probability of states such as the four-electron triplet 𝒯_4 and the three-electron doublet 𝒟_3 at low temperature; the residual spin flips associated with fluctuations between configurations require χ_0 ≠ 0 at finite doping. In contrast, the spin susceptibility χ_0(T→ 0) drops to zero in the normal-state Mott insulator at δ=0 and large U (see Refs. sht2, ssht), because in the Mott state at low T and large U the four-electron singlet probability approaches 1.
In the superconducting state, the magnitude of χ_0(T) when superconductivity condenses from a metal exceeds that when it condenses from a pseudogap. This follows from the doping evolution of the probabilities of the plaquette sectors: upon doping, the high probability of the dominant 𝒮_4 singlet is redistributed across other plaquette sectors, meaning an increase in fluctuations between different configurations, and thus an increase in χ_0(δ).
The results for χ_0 should be considered only as a proxy for understanding the trend of the Knight shift in NMR <cit.>. In a pure singlet superconductor, χ_0 drops to zero at zero temperature, because electrons in Cooper pairs locked into singlets cannot be polarised by an applied magnetic field; thus χ_0(T) drops below the superconducting critical temperature. However, the χ_0 we calculate here is a cluster quantity, and, as seen above, it cannot drop to zero at finite doping.
Overall, our comparison between the zero-frequency susceptibility and the cluster sectors probabilities allowed us to link key features of χ_0 to the redistribution of short-range correlations with temperature and doping.
§ CONCLUSIONS
We address the interplay between superconducting correlations and Mott physics in the two-dimensional Hubbard model solved with CDMFT on a 2×2 plaquette. Our approach takes the perspective of a cluster quantum impurity model embedded in a self-consistent bath. Thus we focus on the properties of both the cluster and the bath. To unveil microscopic trends in the superconducting correlations, our analysis (a) compares the superconducting state with the underlying normal state and (b) covers a wide range of interaction strength, doping, and temperature.
First, at the level of the plaquette, we compute the probabilities that the cluster electrons are found in the cluster sectors. We observe that few states have high probability. Upon entering the superconducting state, the cluster electrons spend more time in the four-electron singlet set of sectors 𝒮_4, suggesting that the electrons in the Cooper pairs are bound into short-range spin singlets owing to the superexchange mechanism. This finding enables us to identify short-range spin correlations in the form of singlets as central to superconducting pairing.
Furthermore, our results show an increase in the probability of the four-electron singlet mostly at the expense of a decrease of the four-electron triplet probability, with a negligible probability redistribution of the charge fluctuations.
The implication of this finding is that superconductivity at the level of the cluster mainly involves a reorganisation of short-range spin correlations but not of charge correlations.
At the level of the self-consistent bath, upon entering the superconducting state we find a redistribution of the spectral weight of the cluster-momentum-resolved (normal) spectral function and of the resulting local density of states of the bath. The most notable feature is the appearance of a superconducting gap and the weak K dependence of the diagonal bath hybridization function.
Our analysis from the perspective of a cluster quantum impurity model in a self-consistent bath can help us to unveil the links between superconducting correlations and some features of the spectral function and of the density of states of the system, as well as the zero-frequency spin susceptibility of the plaquette.
In the superconducting state, short-range correlations give rise to a marked K-dependence of the spectral function of the system, and pronounced coherence peaks in the density of states. Upon superconducting condensation the spin susceptibility drops, mirroring the increase in the probability of the four-electron singlet state.
Overall, our work underscores the importance of short-range spin correlations in the formation of Cooper pairs in a doped Mott insulator. This suggests the possibility of controlling superconducting properties by tuning the probability of the four-electron singlet, for example by introducing frustration at the level of the hopping or the lattice geometry.
From a broader perspective, our work illustrates the value of the approach of analysing both cluster impurity and bath.
Thus our work may open up a new direction for analysing other strongly correlated models with cluster DMFT methods, from the perspective of a quantum impurity model embedded in a self-consistent bath.
This work may also contribute to the goal of understanding quantum phases of matter using measures of entanglement <cit.>. Further refinement of the method may enable access to the off-diagonal elements of the reduced density matrix <cit.>, which would be key to enabling the calculation of more sophisticated measures of quantum entanglement <cit.>.
This work has been supported by the Canada First Research Excellence Fund. Simulations were performed on computers provided by the Canadian Foundation for Innovation, the Ministère de l'Éducation des Loisirs et du Sport (Québec), Calcul Québec, and Compute Canada.
§ LIST OF PLAQUETTE SECTORS
Table <ref> shows the list of the plaquette sectors.
§ REFERENCES

[lee] P. A. Lee, N. Nagaosa, and X.-G. Wen, Doping a Mott insulator: Physics of high-temperature superconductivity, Rev. Mod. Phys. 78, 17 (2006).
[Norman2011] M. R. Norman, The challenge of unconventional superconductivity, Science 332, 196 (2011).
[keimerRev] B. Keimer, S. A. Kivelson, M. R. Norman, S. Uchida, and J. Zaanen, From quantum matter to high-temperature superconductivity in copper oxides, Nature 518, 179 (2015).
[Hubbard1963] J. Hubbard, Electron correlations in narrow energy bands, Proc. R. Soc. London Ser. A 276, 238 (1963).
[ArovasAnnuRev2022] D. P. Arovas, E. Berg, S. A. Kivelson, and S. Raghu, The Hubbard model, Annu. Rev. Condens. Matter Phys. 13, 239 (2022).
[QinAnnuRev2022] M. Qin, T. Schäfer, S. Andergassen, P. Corboz, and E. Gull, The Hubbard model: A computational perspective, Annu. Rev. Condens. Matter Phys. 13, 275 (2022).
[AMJulich] A.-M. S. Tremblay, Strongly correlated superconductivity, in Emergent Phenomena in Correlated Matter: Modeling and Simulation, Vol. 3, edited by E. Pavarini, E. Koch, and U. Schollwöck (Verlag des Forschungszentrum Jülich, Jülich, 2013), Chap. 10.
[maier] T. Maier, M. Jarrell, T. Pruschke, and M. H. Hettler, Quantum cluster theories, Rev. Mod. Phys. 77, 1027 (2005).
[kotliarRMP] G. Kotliar, S. Y. Savrasov, K. Haule, V. S. Oudovenko, O. Parcollet, and C. A. Marianetti, Electronic structure calculations with dynamical mean-field theory, Rev. Mod. Phys. 78, 865 (2006).
[tremblayR] A.-M. S. Tremblay, B. Kyung, and D. Sénéchal, Pseudogap and high-temperature superconductivity from weak to strong coupling. Towards a quantitative theory, Low Temp. Phys. 32, 424 (2006).
[rmp] A. Georges, G. Kotliar, W. Krauth, and M. J. Rozenberg, Dynamical mean-field theory of strongly correlated fermion systems and the limit of infinite dimensions, Rev. Mod. Phys. 68, 13 (1996).
[maierSC] T. Maier, M. Jarrell, T. Pruschke, and J. Keller, d-wave superconductivity in the Hubbard model, Phys. Rev. Lett. 85, 1524 (2000).
[lkAF] A. I. Lichtenstein and M. I. Katsnelson, Antiferromagnetism and d-wave superconductivity in cuprates: A cluster dynamical mean-field theory, Phys. Rev. B 62, R9283 (2000).
[maierSystem] T. A. Maier, M. Jarrell, T. C. Schulthess, P. R. C. Kent, and J. B. White, Systematic study of d-wave superconductivity in the 2D repulsive Hubbard model, Phys. Rev. Lett. 95, 237001 (2005).
[hauleDOPING] K. Haule and G. Kotliar, Strongly correlated superconductivity: A plaquette dynamical mean-field theory study, Phys. Rev. B 76, 104509 (2007).
[kancharla] S. S. Kancharla, B. Kyung, D. Sénéchal, M. Civelli, M. Capone, G. Kotliar, and A.-M. S. Tremblay, Anomalous superconductivity and its competition with antiferromagnetism in doped Mott insulators, Phys. Rev. B 77, 184516 (2008).
[sshtSC] G. Sordi, P. Sémon, K. Haule, and A.-M. S. Tremblay, Strong coupling superconductivity, pseudogap, and Mott transition, Phys. Rev. Lett. 108, 216401 (2012).
[Gull:2013] E. Gull, O. Parcollet, and A. J. Millis, Superconductivity and the pseudogap in the two-dimensional Hubbard model, Phys. Rev. Lett. 110, 216405 (2013).
[Chen:2015] X. Chen, J. P. F. LeBlanc, and E. Gull, Superconducting fluctuations in the normal state of the two-dimensional Hubbard model, Phys. Rev. Lett. 115, 116402 (2015).
[Dagotto:Science2005] E. Dagotto, Complexity in strongly correlated electronic systems, Science 309, 257 (2005).
[Zheng:Science2017] B.-X. Zheng, C.-M. Chung, P. Corboz, G. Ehlers, M.-P. Qin, R. M. Noack, H. Shi, S. R. White, S. Zhang, and G. K.-L. Chan, Stripe order in the underdoped region of the two-dimensional Hubbard model, Science 358, 1155 (2017).
[Qin:PRX2020] M. Qin, C.-M. Chung, H. Shi, E. Vitali, C. Hubig, U. Schollwöck, S. R. White, and S. Zhang (Simons Collaboration on the Many-Electron Problem), Absence of superconductivity in the pure two-dimensional Hubbard model, Phys. Rev. X 10, 031016 (2020).
[Chung:PRB2020] C.-M. Chung, M. Qin, S. Zhang, U. Schollwöck, and S. R. White (The Simons Collaboration on the Many-Electron Problem), Plaquette versus ordinary d-wave pairing in the t'-Hubbard model on a width-4 cylinder, Phys. Rev. B 102, 041106 (2020).
[MWtheorem] N. D. Mermin and H. Wagner, Absence of ferromagnetism or antiferromagnetism in one- or two-dimensional isotropic Heisenberg models, Phys. Rev. Lett. 17, 1133 (1966).
[maierENERGY] T. A. Maier, M. Jarrell, A. Macridin, and C. Slezak, Kinetic energy driven pairing in cuprate superconductors, Phys. Rev. Lett. 92, 027005 (2004).
[carbone2006] F. Carbone, A. B. Kuzmenko, H. J. A. Molegraaf, E. van Heumen, V. Lukovac, F. Marsiglio, D. van der Marel, K. Haule, G. Kotliar, H. Berger, S. Courjault, P. H. Kes, and M. Li, Doping dependence of the redistribution of optical spectral weight in Bi2Sr2CaCu2O8+δ, Phys. Rev. B 74, 064510 (2006).
[millisENERGY] E. Gull and A. J. Millis, Energetics of superconductivity in the two-dimensional Hubbard model, Phys. Rev. B 86, 241106 (2012).
[LorenzoSC] L. Fratino, P. Sémon, G. Sordi, and A.-M. S. Tremblay, An organizing principle for two-dimensional strongly correlated superconductivity, Sci. Rep. 6, 22715 (2016).
[maierPRL2008] T. A. Maier, D. Poilblanc, and D. J. Scalapino, Dynamics of the pairing interaction in the Hubbard and t-J models of high-temperature superconductors, Phys. Rev. Lett. 100, 237001 (2008).
[Kyung:2009] B. Kyung, D. Sénéchal, and A.-M. S. Tremblay, Pairing dynamics in strongly correlated superconductivity, Phys. Rev. B 80, 205109 (2009).
[civelli1] M. Civelli, Evolution of the dynamical pairing across the phase diagram of a strongly correlated high-temperature superconductor, Phys. Rev. Lett. 103, 136402 (2009).
[civelli2] M. Civelli, Doping-driven evolution of the superconducting state from a doped Mott insulator: Cluster dynamical mean-field theory, Phys. Rev. B 79, 195113 (2009).
[senechalPRB2013] D. Sénéchal, A. G. R. Day, V. Bouliane, and A.-M. S. Tremblay, Resilience of d-wave superconductivity to nearest-neighbor repulsion, Phys. Rev. B 87, 075123 (2013).
[reymbautPRB2016] A. Reymbaut, M. Charlebois, M. Fellous Asiani, L. Fratino, P. Sémon, G. Sordi, and A.-M. S. Tremblay, Antagonistic effects of nearest-neighbor repulsion on the superconducting pairing dynamics in the doped Mott insulator regime, Phys. Rev. B 94, 155146 (2016).
[DongPNAS2022] X. Dong, L. Del Re, A. Toschi, and E. Gull, Mechanism of superconductivity in the Hubbard model at intermediate interaction strength, Proc. Natl. Acad. Sci. USA 119, e2205048119 (2022).
[DongNatPhys2022] X. Dong, E. Gull, and A. J. Millis, Quantifying the role of antiferromagnetic fluctuations in the superconductivity of the doped Hubbard model, Nat. Phys. 18, 1293 (2022).
[amicoRMP2008] L. Amico, R. Fazio, A. Osterloh, and V. Vedral, Entanglement in many-body systems, Rev. Mod. Phys. 80, 517 (2008).
[zhengBOOK] B. Zeng, X. Chen, D.-L. Zhou, and X.-G. Wen, Quantum Information Meets Quantum Matter (Springer-Verlag, New York, 2019).
[CaitlinPNAS2021] C. Walsh, M. Charlebois, P. Sémon, G. Sordi, and A.-M. S. Tremblay, Information-theoretic measures of superconductivity in a two-dimensional doped Mott insulator, Proc. Natl. Acad. Sci. USA 118, e2104114118 (2021).
[hauleCTQMC] K. Haule, Quantum Monte Carlo impurity solver for cluster dynamical mean-field theory and electronic structure calculations with adjustable cluster base, Phys. Rev. B 75, 155113 (2007).
[shim:nature] J. H. Shim, K. Haule, and G. Kotliar, Fluctuating valence in a correlated solid and the anomalous properties of δ-plutonium, Nature 446, 513 (2007).
[Gabi:Science2018] P. R. C. Kent and G. Kotliar, Toward a predictive theory of correlated materials, Science 361, 348 (2018).
[gullEPL] E. Gull, P. Werner, X. Wang, M. Troyer, and A. J. Millis, Local order and the gapped phase of the Hubbard model: A plaquette dynamical mean-field investigation, Europhys. Lett. 84, 37009 (2008).
[michelEPL] M. Ferrero, P. S. Cornaglia, L. De Leo, O. Parcollet, G. Kotliar, and A. Georges, Valence bond dynamical mean-field theory of doped Mott insulators with nodal/antinodal differentiation, Europhys. Lett. 85, 57009 (2009).
[michelPRB] M. Ferrero, P. S. Cornaglia, L. De Leo, O. Parcollet, G. Kotliar, and A. Georges, Pseudogap opening and formation of Fermi arcs as an orbital-selective Mott transition in momentum space, Phys. Rev. B 80, 064501 (2009).
[sht] G. Sordi, K. Haule, and A.-M. S. Tremblay, Finite doping signatures of the Mott transition in the two-dimensional Hubbard model, Phys. Rev. Lett. 104, 226402 (2010).
[sht2] G. Sordi, K. Haule, and A.-M. S. Tremblay, Mott physics and first-order transition between two metals in the normal-state phase diagram of the two-dimensional Hubbard model, Phys. Rev. B 84, 075161 (2011).
[LorenzoPRB2022] L. Fratino, S. Bag, A. Camjayi, M. Civelli, and M. Rozenberg, Doping-driven resistive collapse of the Mott insulator in a minimal model for VO2, Phys. Rev. B 105, 125140 (2022).
[hauleAVOIDED] K. Haule and G. Kotliar, Avoided criticality in near-optimally doped high-temperature superconductors, Phys. Rev. B 76, 092503 (2007).
[millisRMP] E. Gull, A. J. Millis, A. I. Lichtenstein, A. N. Rubtsov, M. Troyer, and P. Werner, Continuous-time Monte Carlo methods for quantum impurity models, Rev. Mod. Phys. 83, 349 (2011).
[Werner:2006] P. Werner, A. Comanac, L. de Medici, M. Troyer, and A. J. Millis, Continuous-time solver for quantum impurity models, Phys. Rev. Lett. 97, 076405 (2006).
[patrickSkipList] P. Sémon, C.-H. Yee, K. Haule, and A.-M. S. Tremblay, Lazy skip-lists: An algorithm for fast hybridization-expansion quantum Monte Carlo, Phys. Rev. B 90, 075149 (2014).
[Hebert:2015] C.-D. Hébert, P. Sémon, and A.-M. S. Tremblay, Superconducting dome in doped quasi-two-dimensional organic Mott insulators: A paradigm for strongly correlated superconductivity, Phys. Rev. B 92, 195112 (2015).
[patrick21] C. Melnick, P. Sémon, K. Yu, N. D'Imperio, A.-M. Tremblay, and G. Kotliar, Accelerated impurity solver for DMFT and its diagrammatic extensions, Comput. Phys. Commun. 267, 108075 (2021).
[patrickERG] P. Sémon, G. Sordi, and A.-M. S. Tremblay, Ergodicity of the hybridization-expansion Monte Carlo algorithm for broken-symmetry states, Phys. Rev. B 89, 165113 (2014).
[Note1] This has to be contrasted with Ref. Hebert:2015, where, for the anisotropic Hubbard model within 2×2 CDMFT, the C_2v symmetry with mirrors along the plaquette diagonals was used as a representation for the one-particle basis. This choice dictates that the superconducting order parameter transforms in space as the A_2 representation of the C_2v symmetry group with mirrors along the plaquette diagonals.
[water1] L. Xu, P. Kumar, S. V. Buldyrev, S.-H. Chen, P. H. Poole, F. Sciortino, and H. E. Stanley, Relation between the Widom line and the dynamic crossover in systems with a liquid-liquid phase transition, Proc. Natl. Acad. Sci. USA 102, 16558 (2005).
[supercritical] P. F. McMillan and H. E. Stanley, Fluid phases: Going supercritical, Nat. Phys. 6, 479 (2010).
[ssht] G. Sordi, P. Sémon, K. Haule, and A.-M. S. Tremblay, Pseudogap temperature as a Widom line in doped Mott insulators, Sci. Rep. 2, 547 (2012).
[CaitlinSb] C. Walsh, P. Sémon, D. Poulin, G. Sordi, and A.-M. S. Tremblay, Thermodynamic and information-theoretic description of the Mott transition in the two-dimensional Hubbard model, Phys. Rev. B 99, 075122 (2019).
[sshtRHO] G. Sordi, P. Sémon, K. Haule, and A.-M. S. Tremblay, c-axis resistivity, pseudogap, superconductivity, and Widom line in doped Mott insulators, Phys. Rev. B 87, 041101 (2013).
[Giovanni:PRBcv] G. Sordi, C. Walsh, P. Sémon, and A.-M. S. Tremblay, Specific heat maximum as a signature of Mott physics in the two-dimensional Hubbard model, Phys. Rev. B 100, 121105 (2019).
[CaitlinOpalescence] C. Walsh, P. Sémon, G. Sordi, and A.-M. S. Tremblay, Critical opalescence across the doping-driven Mott transition in optical lattices of ultracold atoms, Phys. Rev. B 99, 165151 (2019).
[Caitlin:PRXQ2020] C. Walsh, P. Sémon, D. Poulin, G. Sordi, and A.-M. S. Tremblay, Entanglement and classical correlations at the doping-driven Mott transition in the two-dimensional Hubbard model, PRX Quantum 1, 020310 (2020).
[CaitlinSoundVelocity] C. Walsh, M. Charlebois, P. Sémon, G. Sordi, and A.-M. S. Tremblay, Prediction of anomalies in the velocity of sound for the pseudogap of hole-doped cuprates, Phys. Rev. B 106, 235134 (2022).
[Lorenzo3band] L. Fratino, P. Sémon, G. Sordi, and A.-M. S. Tremblay, Pseudogap and superconductivity in two-dimensional doped charge-transfer insulators, Phys. Rev. B 93, 245147 (2016).
[scalapino1996] D. J. Scalapino and S. A. Trugman, Local antiferromagnetic correlations and d_{x^2-y^2} pairing, Philos. Mag. B 74, 607 (1996).
[Danilov:2022] M. Danilov, E. G. C. P. van Loon, S. Brener, S. Iskakov, M. I. Katsnelson, and A. I. Lichtenstein, Degenerate plaquette physics as key ingredient of high-temperature superconductivity in cuprates, npj Quantum Mater. 7, 50 (2022).
[DominicMEM] D. Bergeron and A.-M. S. Tremblay, Algorithms for optimized maximum entropy and diagnostic tools for analytic continuation, Phys. Rev. E 94, 023303 (2016).
[AlexisPRB2015MEM] A. Reymbaut, D. Bergeron, and A.-M. S. Tremblay, Maximum entropy analytic continuation for spectral functions with nonpositive spectral weight, Phys. Rev. B 92, 060509 (2015).
[yue2023maximum] C. Yue and P. Werner, Maximum entropy analytic continuation of anomalous self-energies, arXiv:2303.16888 [cond-mat.supr-con] (2023).
[Verret:PRB2019] S. Verret, J. Roy, A. Foley, M. Charlebois, D. Sénéchal, and A.-M. S. Tremblay, Intrinsic cluster-shaped density waves in cellular dynamical mean-field theory, Phys. Rev. B 100, 224520 (2019).
[Alloul:1989] H. Alloul, T. Ohno, and P. Mendels, 89Y NMR evidence for a Fermi-liquid behavior in YBa2Cu3O6+x, Phys. Rev. Lett. 63, 1700 (1989).
[Takigawa:1989] M. Takigawa, P. C. Hammel, R. H. Heffner, and Z. Fisk, Spin susceptibility in superconducting YBa2Cu3O7 from 63Cu Knight shift, Phys. Rev. B 39, 7371 (1989).
[Caitlin:PRL2019] C. Walsh, P. Sémon, D. Poulin, G. Sordi, and A.-M. S. Tremblay, Local entanglement entropy and mutual information across the Mott transition in the two-dimensional Hubbard model, Phys. Rev. Lett. 122, 067203 (2019).
[Udagawa_Motome:2015] M. Udagawa and Y. Motome, Entanglement spectrum in cluster dynamical mean-field theory, J. Stat. Mech. 2015, P01016 (2015).
[Humeniuk2019] S. Humeniuk, Quantum state tomography on a plaquette in the two-dimensional Hubbard model, Phys. Rev. B 100, 115121 (2019).
Improving Text Matching in E-Commerce Search with A Rationalizable, Intervenable and Fast Entity-Based Relevance Model

Jiong Cai, Yong Jiang, Yue Zhang, Chengyue Jiang, Ke Yu, Jianhui Ji, Rong Xiao, Haihong Tang, Tao Wang, Zhongqiang Huang, Pengjun Xie, Fei Huang, and Kewei Tu (arXiv:2307.00370, cs.IR, cs.CL)
Discovering the items intended by user queries from a massive repository is one of the main goals of an e-commerce search system. Relevance prediction is essential to the search system since it helps improve performance. When a relevance model is served online, it must perform fast and accurate inference. Currently, the widely used models such as Bi-encoders and Cross-encoders are limited in accuracy or inference speed, respectively. In this work, we propose a novel model called the Entity-Based Relevance Model (EBRM). We identify the entities contained in an item and decompose the QI (query-item) relevance problem into multiple QE (query-entity) relevance problems; we then aggregate their results to form the QI prediction using a soft logic formulation. The decomposition allows us to use a Cross-encoder QE relevance module for high accuracy as well as cache QE predictions for fast online inference. Utilizing soft logic makes the prediction procedure interpretable and intervenable. We also show that pretraining the QE module with auto-generated QE data from user logs can further improve the overall performance. The proposed method is evaluated on labeled data from e-commerce websites. Empirical results show that it achieves promising improvements in accuracy with high computational efficiency.
§ INTRODUCTION
Nowadays, e-commerce platforms have become a major shopping channel in people's lives. The top e-commerce shopping platforms (such as Amazon, Taobao, and eBay) have hundreds of millions of items and active users. An e-commerce search system helps users find the items they want within these extremely large repositories. To achieve this goal, a relevance model is designed to measure the relevance score of the user query and item <cit.>. Accurate relevance measurement is crucial for an e-commerce search system, since displaying irrelevant items for a query degrades the user experience and harms retention. On the query side, users write short and ambiguous text to describe their search intention. On the item side, vendors tend to write long item titles with redundant phrases and do not follow any specific structure. We present a few examples in Figure <ref>. This discrepancy between queries and item titles makes it difficult to build a fast and accurate relevance model.
Term-based methods like BM25 <cit.> and TF-IDF are widely used to estimate relevance in search systems. These methods rely on term frequency, so they suffer from the vocabulary mismatch between queries and item titles. For example, they would falsely predict relevance for case (b) and irrelevance for case (c) in Figure <ref>. To alleviate this problem, neural-based methods <cit.> map the queries and items to vectors in a dense semantic space and measure the relevance accordingly. Recently, using deep contextualized representations from Transformer-based models, such as BERT <cit.>, has been proven effective in information retrieval tasks <cit.>. These Transformer-based models fall into two categories: Bi-encoders, which learn separate representations for the query and item; and Cross-encoders, which utilize attention to learn a joint representation of the query and item. By modeling full interaction between the words of the query and item, Cross-encoders perform better than Bi-encoders.
Cross-encoders have unbearable latency in practice since they cannot pre-compute the vector representations like Bi-encoders.
A common drawback of existing relevance models is that they only provide predicted outputs and offer little transparency about their decisions. In contrast, a human can not only judge the relevance between the query and item but also explain the reason. For example, we annotate case (c) in Figure <ref> as relevant because the shopping intent of "gym weight" matches the product type entity "dumbbell". Human interpretation is thus often based on the matching or mismatching of the query with entities in the item title, so it is reasonable to formulate and interpret relevance based on entities. In the e-commerce scenario, item titles consist of entities, which makes entities a natural form of interpretable evidence for a prediction. Besides, when wrongly predicted results appear in the online search system, human experts need to intervene on these bad cases directly and efficiently. However, in current search systems, bad cases can only be fixed individually, which is very labor-intensive.
In this paper, we introduce Entity-based Relevance Model (EBRM), containing a query-entity (QE) relevance module and a prediction module using soft-logic. EBRM has comparable performance with Cross-encoders and faster inference speed by caching entity-based rules. Meanwhile, EBRM can provide entity-level justification of its query-item (QI) prediction. With interpretable entity-level justifications, the intervention of its prediction is efficient. The contributions of our work are:
* We propose a novel entity-based relevance model with desired properties for practical online search systems. (section <ref>)
* Training the QE relevance model is achieved through the indirect signal from the labeled QI relevance data. (section <ref>)
* We further propose a novel and effective method to pretrain EBRM with a massive amount of search logs, which significantly improves performance. (section <ref>)
* Experimental results show that EBRM satisfies the desired properties of online serving: accurate, fast and memory-efficient, easy to interpret and intervene. (section <ref>)
§ PRELIMINARY
Problem Setup
In this work, we refer to the user query as query and the product in the e-commerce system as item. For the product relevance prediction task, we first define the notations used in this paper. Let x = (Q, I) denote a pair of query text and item title text, y be its relevance label. For this task, the relevance relation can be labeled into two classes: "relevant" and "not relevant", or more fine-grained classes to distinguish the subtle relevance difference <cit.>, such as "strongly relevant", "relevant", "weakly relevant", "not relevant". In this work, we mainly focus on binary classes.
We aim to build a binary classifier that takes x = (Q, I) as input and predict whether the pair is relevant or not: y = 1 if the pair x is relevant or y = 0 otherwise.
Pretrained Language Model
BERT <cit.>, a pretrained language model with transformers <cit.>, is trained on a huge amount of unlabelled text with Masked Language Model (MLM) and Next Sentence Prediction (NSP) losses. Recent works apply BERT to NER <cit.>, IR <cit.>, syntactic parsing <cit.>, and achieve impressive performance. Several studies <cit.> demonstrate that the representation of BERT could improve semantic tasks.
Bi-encoder and Cross-encoder
Two common architectures for pairwise comparisons between texts are Bi-encoders and Cross-encoders.
In a Bi-encoder, the query and item are encoded into two separate vectors:
h̅_Q = pool(T(Q)) h̅_I = pool(T(I))
where h̅_Q and h̅_I are contextualized representations of the query and item. T(·) is a contextualized text encoder. A typical choice of the encoder is the pretrained BERT and the input text is in the form of "[CLS] <query text> [SEP]" and "[CLS] <item text> [SEP]". T(·)={h_1, ..., h_n} is the output of the transformer. pool(·) is the pooling function. Following previous works, we choose the representation of the special token "[CLS]" as the representation of the whole input sequence.
To predict the relevance label of query-item pair x, a logistic classifier is used. In the Bi-encoder, the score is computed with a biaffine scoring function:
P(y = 1 | Q, I) = σ(s(Q, I))
s(Q, I) = h̅_Q^T W h̅_I + b
In the encoding procedure of the Bi-encoder, h̅_Q and h̅_I are encoded independently. As a result, one advantage of the Bi-encoder is that it allows us to precompute and cache the representations of queries and items, which accelerates the inference. This property enables the Bi-encoder to be deployed for online serving with low latency.
On the other hand, in a Cross-encoder, the query and title text are concatenated into a single text sequence and jointly encoded into a single vector. By using the transformer, the model obtains a representation with more interaction between the query and item than the ones from the Bi-encoder.
h̅_Q, I = pool(T(concat(Q, I)))
where concat(Q, I) is the function that concatenates the query text and the title text with the special token "[SEP]". Thus, the input text of the transformer is in the form of "[CLS] <query text> [SEP] <item text> [SEP]".
For the Cross-encoder, the score is computed with an MLP.
s(Q, I) = MLP(h̅_Q,I)
Since the representation can capture strong interaction between queries and items, the Cross-encoder could get better results than the Bi-encoder. However, precomputing and caching are impossible in the Cross-encoder. It has to re-encode every new (query, item) pair and hence cannot satisfy the need for low latency for online serving.
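To make the caching trade-off concrete, the sketch below (a toy illustration; the encoders are replaced by random vectors and all identifiers are ours, not part of the original systems) shows how a Bi-encoder can serve from precomputed vectors with only a biaffine product at request time, whereas a Cross-encoder would have to re-encode every new query-item pair:

import numpy as np

rng = np.random.default_rng(0)
DIM = 8

# Stand-ins for pooled [CLS] vectors; in practice these would come from BERT.
cached_query_vecs = {"phone case": rng.normal(size=DIM)}
cached_item_vecs = {"silicone phone cover for iphone": rng.normal(size=DIM)}

W = rng.normal(size=(DIM, DIM))  # biaffine weight of the logistic classifier
b = 0.0

def bi_encoder_prob(query, item):
    # Serving is a cache lookup plus a small matrix product.
    h_q = cached_query_vecs[query]
    h_i = cached_item_vecs[item]
    score = h_q @ W @ h_i + b
    return 1.0 / (1.0 + np.exp(-score))

print(round(bi_encoder_prob("phone case", "silicone phone cover for iphone"), 3))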
As we mentioned above, both Bi-encoders and Cross-encoders have their pros and cons and fail to satisfy all the properties desired for serving: accuracy, rationalizability, intervenability, and fast inference. As all these properties are essential for real-world applications, we propose EBRM to reach a decent and well-balanced performance on all these perspectives, as shown in Table <ref>, and empirically demonstrate the comprehensiveness of EBRM in Section <ref>.
§ ENTITY-BASED SEARCH RELEVANCE MODEL
The goal of general relevance prediction is to determine whether the query and the item are relevant or not. For e-commerce search, the product type is the most important entity type as all queries must have intended products to search for and all items must have intended products to sell. Thus, we focus on product type relevance prediction regardless of attribute mismatch in this work [Nonetheless, our method can be easily extended to general relevance prediction with some modifications, which we discuss in the Section <ref>].
§.§ Model
Item titles are often verbose and contain more information than user queries. As a commonly used information extraction method, NER locates and classifies entity mentions in an unstructured text. We apply an off-the-shelf e-commerce NER model to extract entities from the item title. The extraction allows us to decompose the prediction procedure and offers us interpretable justifications for predictions, as will be explained below.[Entity extraction is seldom utilized in previous research on product search relevance models <cit.>. To make fair comparisons, in our experiments, we add comparisons with designed baselines that utilize NER prediction results.] The overall architecture is shown in Figure <ref>.
The e-commerce NER result of a title is a bag of entities E_I = {e_1, ... , e_k}. Focusing on product type relevance prediction, we only keep product type entity in the NER result and discard the others, such as brand entities. For example, the product type entity in the item "Intel Xeon Processor" is "Processor". "Intel Xeon" is the brand entity.
We follow the insight that a query and an item are relevant if and only if there exist some product type entities of the item that are relevant to the query. If a product type entity is a synonym or a hyponym of the query intent, it is relevant to the query. For example, a product type entity “phone cover” is relevant to the query “phone case”, because they are synonymous, while “case” is irrelevant to the query because it is more general and is a hypernym of the query.
Relevant(Q, I) ⟺⋁_e ∈ E_I Relevant(Q, e)
Our EBRM consists of a Cross-encoder QE relevance module and a soft logic aggregation layer for the final QI relevance prediction. The Cross-encoder QE relevance module uses one pretrained transformer to jointly encode the query and entity into a single vector.
h̅_Q, e = pool(T(concat(Q, e)))
For every QE (query, entity) pair in a QI (query, item) pair, the relevance probability is computed with an MLP scoring layer followed by a sigmoid function.
P(r = 1 | Q, e) = σ(S(Q,e)) = σ(MLP(h̅_Q, e))
where r is the relevance label of the query and product type entity.
To predict the relevance label of the query and item, we regard the probabilities computed by the QE relevance module as soft matching of QE and aggregate them into soft matching of QI by applying a soft logic operator. Using the Zadeh soft logic <cit.>, we use max to replace disjunction and obtain the following formula.
P(y=1 | Q, I) = max_e ∈ E_IP(r=1|Q, e)
= max_e ∈ E_Iσ(S(Q, e))
= σ(max_e ∈ E_IS(Q, e))
From the derivation, we define the score of QI as
S(Q, I) = max_e ∈ E_IS(Q, e)
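For illustration, a minimal Python sketch of this scoring scheme is given below; the Cross-encoder QE module is abstracted into a plain function with made-up scores, so only the soft-logic aggregation is shown:

import math

def qe_score(query, entity):
    # Placeholder for the Cross-encoder score S(Q, e); the values are toy numbers.
    toy_scores = {("gym weight", "dumbbell"): 2.1,
                  ("gym weight", "storage rack"): -1.7}
    return toy_scores.get((query, entity), -3.0)

def qi_probability(query, item_entities):
    # P(y=1|Q,I) = sigma(max_e S(Q,e)): a soft disjunction over product type entities.
    s_qi = max(qe_score(query, e) for e in item_entities)
    return 1.0 / (1.0 + math.exp(-s_qi))

print(round(qi_probability("gym weight", {"dumbbell", "storage rack"}), 3))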
§.§ Inference & Learning
The binary QI relevance prediction is obtained by
y = argmax_y' ∈{0, 1} P(y' | Q, I) = [S(Q, I) ≥ 0]
= [max_e ∈ E_I S(Q, e) ≥ 0] = max_e ∈ E_I [S(Q, e) ≥ 0]
where [·] is Iverson bracket.
It can be seen that the QI relevance prediction is divided into several QE relevance predictions. The results of QE model act as direct explanations and signals for the QI predictions.
In practice, we can cache the QE relevance prediction results together with the entity recognition results of items and use them as rules for fast and accurate online inference, as we explained in Section <ref> in Appendix.
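A possible shape of the cached online inference, with toy rule tables standing in for the precomputed QE predictions and NER outputs (identifiers are ours, for illustration only):

# Offline: for each query, cache the entities the QE module predicts as relevant;
# for each item, cache the product type entities extracted by NER.
query_rules = {"phone case": {"phone cover", "phone case"}}
item_entities = {"item_123": {"phone cover"}, "item_456": {"case"}}

def online_relevance(query, item_id):
    # Serving reduces to a set intersection; no neural inference in the request path.
    relevant_entities = query_rules.get(query, set())
    return bool(relevant_entities & item_entities.get(item_id, set()))

print(online_relevance("phone case", "item_123"))  # True
print(online_relevance("phone case", "item_456"))  # False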
Given a dataset D={(x_1, y_1), ..., (x_n,y_n) }, our training objective is to minimize the commonly-used negative log-likelihood to learn the parameters θ. x_i denotes a QI pair and y_i denotes the relevance label.
θ^* = argmin_θ 1/n∑_(x_i,y_i) ∈ D -log P(y=y_i|x_i)
From the perspective of QE relevance learning, the supervision signals come from the labeled QI data.
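One way to realize this objective with automatic differentiation is sketched below, assuming the QE scores of the entities of a single item have already been computed and stacked into a tensor (the encoder itself and batching are omitted):

import torch

def qi_loss(entity_scores, label):
    # entity_scores: tensor of S(Q, e) over the item's entities; label: 0 or 1.
    s_qi = entity_scores.max()  # S(Q, I) = max_e S(Q, e)
    target = torch.tensor(float(label))
    # Negative log-likelihood of P(y|Q,I) = sigma(S(Q,I)).
    return torch.nn.functional.binary_cross_entropy_with_logits(s_qi, target)

scores = torch.tensor([0.4, -1.2, 2.3], requires_grad=True)
loss = qi_loss(scores, 1)
loss.backward()
# The gradient flows only through the max-scoring entity, which receives the QI signal.
print(loss.item(), scores.grad)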
§.§ Warm-up Pre-Training
QI Pretraining with Search Log
The QE relevance module is the core component of our model. As mentioned above, we train the QE relevance module with indirect supervision from QI data. However, such human-annotated QI data is laborious to obtain. At the same time, an e-commerce platform generates millions of user logs every day, and user-behavioral signals such as clicks or purchases can be used for model learning.
Therefore, we propose to collect large-scale pseudo-labeled QI pairs from search logs according to user behavior. Given a query, we collect all items exposed for it and count their clicks on the e-commerce platform over the past two months. We take as positive examples the QI pairs whose click counts are in the top N among all QI pairs, and take M random QI pairs as negative examples if their exposure is more than K and they were never clicked.
QE Pretraining with Search Log Directly utilizing the click-through QI data in e-commerce might be misleading. Previous work <cit.> finds that clicks can be attributed to many factors including price, attractive titles or images[For example, during data collection, we find that some items are displayed many times but never or seldom clicked although the items are relevant to the query. This may be because these items do not have eye-catching photos.]. Therefore, we design a different approach that gathers QE pairs, which alleviates the mismatch problem within a single QI pair [An example of the procedure is shown in Figure <ref> in the Appendix.]. Firstly, given one query, we collect all exposed items and count their clicks on the e-commerce platform over the past two months. Then, we extract entities from the collected items and compute the click count of each entity by summing the click counts of all items which contain this entity. Lastly, we collect as positive examples the QE pairs whose click counts are the top N among all QE pairs and take the lowest M pairs as negative examples. We construct a pseudo-labeled QE dataset in such a way and train our QE module on it. Compared with QI relevance data, the QE training data is more useful to our QE module because it can be seen as a distillation of the large-scale QI data that is less noisy and more accurate. Moreover, the QE label is a direct training signal for our QE module. In our experiments, we set N to 3, M to 10 and K to 10 by default.
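A simplified version of this QE pseudo-labeling step is sketched below with a toy log; the real pipeline aggregates two months of exposure and click records, and the record format used here is our own assumption:

from collections import Counter

# Toy click log: (query, item product-type entities, click count).
log = [
    ("phone case", {"phone cover", "screen protector"}, 120),
    ("phone case", {"phone case"}, 300),
    ("phone case", {"laptop sleeve"}, 0),
    ("phone case", {"case"}, 2),
]
N_POS, M_NEG = 2, 1  # the paper uses N = 3 and M = 10 on the real logs

def build_qe_pseudo_labels(query, log):
    clicks_per_entity = Counter()
    for q, entities, clicks in log:
        if q == query:
            for e in entities:
                clicks_per_entity[e] += clicks
    ranked = clicks_per_entity.most_common()
    positives = [(query, e, 1) for e, _ in ranked[:N_POS]]
    negatives = [(query, e, 0) for e, _ in ranked[-M_NEG:]]
    return positives + negatives

print(build_qe_pseudo_labels("phone case", log))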
QI Pretraining without Search Log
For scenario where we cannot collect search logs from the same domain of the datasets, we propose to utilize the parameters of a well-trained Cross-Encoder model, which is named as "QIRM (Cross)" in Section <ref>, to initialize the parameters of the EBRM model.
§.§ Properties of EBRM
Our proposed entity-based relevance model utilizes entities to bridge the gap between queries and items. Here we discuss why our entity-based relevance model satisfies some of the properties list in Table <ref>. Empirical results verifying these properties are shown in Section <ref>.
Rationalizable Since our model uses a soft logic aggregation layer for final prediction, the logic layer allows us to interpret every prediction result of a QI pair at the entity level. If the predicted result of a query and an item is relevant, the cached entities for the query and item should have at least one overlapping entity, which can be regarded as the explanation of the relevance prediction.
Intervenable Since the prediction procedure is rationalizable, if we find a critical error in a QE relevance rule, we can hotfix the error to improve the performance of the relevance module. Specifically, we can add or delete a specific entity for each query to change the search results. By intervening on one specific QE relevance rule, we influence the predictions of the query and all items containing this entity, which is efficient.
Speed With precomputation and caching the QE relevance rules, the inference procedure only needs simple and fast entity matching, that is, the model only needs to check whether entities in QE rules appear in the item. The speed of our model could be comparable with Bi-encoders.
§ EXPERIMENTS
To verify the advantages of our model, we conduct a variety of experiments with different datasets.
§.§ Datasets
The experiments are conducted on a private dataset and an open-source dataset. For the private dataset, our training data is collected from the search logs of an e-commerce website. For the QI dataset, the relevance labels are annotated by humans. The train-split is used for model training and the dev-split is used for model selection. As mentioned above, we automatically generate a large amount of pseudo-labeled QI and QE data for pretraining.
Meanwhile, to enable reproducible research, we also utilize a recently released open-source e-commerce dataset WANDS <cit.> [According to its annotation guidelines, we treat the labels "exact match" and "partial match" as "product type relevant" and "irrelevant" as "product type irrelevant". We split the whole WANDS dataset into train, dev, test sets in 3:1:1 ratio. The split dataset will be released in our git project.].
The statistics of the datasets are shown in Table <ref> in Appendix.
§.§ Compared Approaches
We organize the approaches into five groups:
§.§.§ Existing Text Matching Models
We conduct experiments with several existing neural text matching models: Arc-II <cit.>, CDSSM <cit.>, MatchPyramid <cit.>, KNRM <cit.> and ConvKNRM <cit.>.
§.§.§ Entity Recognition (Aug by KBs)
Relevance Prediction with Pure NER
We extract entities from the query and item in each QI pair with an off-the-shelf NER tool, and then match these extracted entity texts. If one or more entities appear in both the query and the item, we predict the QI pair as relevant; otherwise, we predict it as irrelevant. If a query does not have a product type entity, we regard all items exposed to the query as relevant.
Relevance Prediction with Pure NER & Existing Knowledge Base To alleviate the surface-form gap between the query and item, we try to enrich the information of the query based on the publicly released knowledge base ConceptNet <cit.>, which stores triplets ⟨ent_1, relation, ent_2⟩. We can expand the query entities using only the "Synonym" relation, or using the "Synonym", "SimilarTo" and "RelatedTo" relations simultaneously.
Following <cit.>, we take an entity extracted from the query as ent_1 and collect every ent_2 as an expanded entity whenever the relation between the two entities is one of "Synonym", "SimilarTo" and "RelatedTo". In this way, we extend the structured information of the query representation.
We compare the two settings, which differ only in the relation types taken into consideration.
After obtaining the synonym or related entities of the entities extracted from the query, we match the original and expanded query entities against the entities from the title in the same way as in the "Relevance Prediction with Pure NER" approach.
§.§.§ Bi-encoder Models
QIRM (Bi) One important baseline model is the Bi-encoder model, which is described in Section <ref>. The Bi-encoder models have been widely used in previous work <cit.>, which have been shown to be a strong baseline. For QIRM (Bi), the inputs are the query and item title. We also enhance it by QI pretraining.
QEsRM (Bi) To analyze the influence of entity extraction, we modify the inputs of the Bi-encoder. Specifically, the item title is replaced with a concatenated product type entities text.
§.§.§ Cross-encoder Models for reference
QIRM (Cross) The input of the Cross-encoder is the concatenation of the query and item title.
QEsRM (Cross) Similar to the QEsRM (Bi) baseline, but with concatenated product type entity texts as the input of the item.
§.§.§ Our proposed Entity-based Models
EBRM
Our proposed entity-based relevant model (EBRM) is trained with the labeled QI data. We enhance it with QE pretraining and initialization with Cross-encoder.
For EBRMs, QIRMs and QEsRMs, we use BERT-base-uncased as the encoder[Since BERT uses the NSP objective in pretraining and uses different embeddings for different segments, we set the segment of the query to 0 and the other to 1. When using the Bi-encoder, the input of the transformer is single text. We set the segment of input to 0.].
§.§ Results on QI Relevance
For evaluation, we use the following metrics: Accuracy, macro F1 score. The experimental results are shown in Table <ref>.[Please refer to the Table <ref> in Appendix for detailed performance.]
Observation #1: By capturing deep interaction between queries and items, the Cross-encoders have better performance than the Bi-encoders as expected. Compared with the Bi-encoder baselines QIRM (Bi) and QEsRM (Bi), our EBRM outperforms them on accuracy and F1 score, which demonstrates the effectiveness of our model. The advantage over QEsRM (Bi) also verifies that the performance gain does not completely come from the entity extraction model.
Observation #2: On E-commerce dataset, with the help of QE module, EBRM has substantial improvement in performance and is comparable with the QIRM (Cross). We think the reason is that the QE pseudo-labeled data provides clean negative samples for the QE module, which improves its discriminative ability. On both datasets, initialization with Cross-encoder parameters can slightly improve the performance of EBRM.
Observation #3:
For both datasets, we find that the entity recognition methods achieve much poorer overall accuracy than our proposed EBRM. On the E-commerce dataset, the entity recognition methods show an extremely poor balance between the accuracy on positive and on negative samples.
This is because the gap between the surface forms of the query and item is large, so that entities in relevant query and title pairs are often different (although their semantic meanings are similar).
With the help of the public knowledge base ConceptNet, we can reduce the gap.
However, such entity recognition methods have an obvious disadvantage that the coverage of entity augmentation for queries is low, because lots of queries may not have product type entities and hence cannot be easily expanded. Besides, the publicly released knowledge base may not match the e-commerce scenario well while building an accurate and high-coverage e-commerce knowledge base is difficult.
Observation #4:
Compared with the classic neural text matching models, all the models (including QIRMs, QEsRMs, EBRMs) with pretrained language models perform better on all Acc and F1 metrics, which shows the effectiveness of pretrained language models.
§.§ Intervention Ability
We simulate the intervention process of EBRM on our collected pseudo-labeled QI data from user logs to investigate the intervention efficiency.
We randomly select 120 queries, and for each query we collect about 500 purchased items as positive examples and 100 items that were never clicked as negative examples.
We intervene our QI prediction by supplying/deleting entities to each query from its false negative/positive examples respectively.
From Table <ref>, we find that our intervention process improves the performance of QI prediction. Furthermore, our method takes on average only about 0.26 actions per corrected QI pair, i.e., roughly 1/4 of the actions required by the naive QI intervention method.
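Operationally, the intervention can be as simple as editing the cached QE rules; the sketch below (toy data, names ours) shows how adding or deleting a single (query, entity) rule propagates to every item carrying that entity:

# Cached QE rules act as the intervention surface.
query_rules = {"dog medal": {"dog id tag"}}

def add_rule(query, entity):       # fix a false negative
    query_rules.setdefault(query, set()).add(entity)

def delete_rule(query, entity):    # fix a false positive
    query_rules.get(query, set()).discard(entity)

add_rule("dog medal", "collar accessories")
delete_rule("dog medal", "dog id tag")
print(query_rules)  # every item containing "collar accessories" is now matched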
§.§ Inference Speed
As we mentioned before, one advantage of EBRM is its reasonable inference speed. We compare with the Cross-encoder model and the Bi-encoder model. For the Cross-encoder, we further test the speed of two small variants with BERT-small-uncased and BERT-tiny-uncased.
For the Bi-encoder, we obtain the vectors of queries and items in the experimental data from a trained Bi-encoder. For EBRM, we precompute with the QE relevance module and cache the QE relevance rules.[In our statistics of a real anonymous ecommerce website, we can cache over 80 percent of user queries that previously searched.] All experiments are conducted with the test set of the E-commerce dataset.
Experiment results are presented in Table <ref>. With smaller BERT variants, the Cross-encoders become faster but less accurate. With the help of precomputation, EBRM is much faster than QIRM (Cross) with different BERT variants. EBRM has a slight advantage in speed over QIRM (Bi) since its simpler computation. Besides, EBRM only needs to cache the sparse QE relevance rules, which is more memory-efficient than caching vectors from QIRM (Bi). These results show that our model is suitable for practical applications from the perspective of efficiency.
§ DISCUSSION: EXTENDING TO GENERAL RELEVANCE PREDICTION
In this paper, we focus on product type relevance prediction. For the general QI relevance prediction problem, we make modifications to the input processing and the soft logic layer of the model. In general, there are many other entity types except product, such as Brand and Color. We apply a NER model to extract the entities of the query. Our belief of general relevance is: for each entity type c that occurs in the query, a matched item needs to have a relevant entity of this type.
Relevant(Q, I) ⟺⋀_c ∈ C_Q⋁_e ∈ E_I^c Relevant(Q, e)
where C_Q is the set of entity types appearing in the query and E_I^c is the entity set of type c in the item.
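A sketch of this extension, with the learned QE module replaced by a toy substring oracle purely for illustration, could look as follows:

def general_relevance(query, query_types, item_entities_by_type, qe_relevant):
    # AND over the entity types present in the query, OR over item entities of that type.
    for etype in query_types:
        candidates = item_entities_by_type.get(etype, set())
        if not any(qe_relevant(query, e) for e in candidates):
            return False
    return True

qe = lambda q, e: e in q  # toy stand-in for the Cross-encoder QE module
item = {"product": {"phone case"}, "color": {"red"}, "brand": {"acme"}}
print(general_relevance("red phone case", {"product", "color"}, item, qe))   # True
print(general_relevance("blue phone case", {"product", "color"}, item, qe))  # False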
§ RELATED WORK
§.§ Search Relevance Models
The relevance model is an important part of a search system and is used in retrieval and ranking <cit.>. Recent studies <cit.> utilize pretrained language models, notably BERT <cit.>. However, PLM-based relevance models are time-consuming for online serving in practice. In order to narrow the gap, some works redesign the architecture for speedup <cit.>. Compared with these approaches, we focus on the relevance model for e-commerce product search and propose a novel entity-based model architecture. It can employ the expensive PLM-based relevance computation offline and provide a speedup for online prediction.
§.§ Entity-Oriented Search
Entities are semantic units for organizing information, which can be utilized as strong relevance ranking features <cit.>. For example, entity-based information can be utilized in improving language models <cit.>.
Another application is entity search <cit.>, which aims to retrieve relevant items from a semantic data set about entities. Compared with these studies, we propose to bridge the gap between queries and items with product type entities. Furthermore, our proposed approach explicitly utilizes entities to directly compute the relevance results.
§.§ Marrying Neural & Symbolic Methods
Several recent research works seek to integrate symbolic knowledge (such as logic rules) into neural networks. One direction is integrating logic rules into training through different methods, such as posterior regularization <cit.>, consistency loss <cit.> and discrepancy loss <cit.> between neural networks and logic, construction of adversarial sets <cit.>.
Another direction directly augments neural networks with logic neurons <cit.>. Motivated by these studies, our method integrates the logic rules of product relevance matching as a soft logic layer in the neural network and retains strong interpretability of prediction from the logic rules.
§ CONCLUSION
For solving the product type relevance prediction problem in e-commerce search, we propose a novel relevance model called EBRM. EBRM decomposes the QI prediction problem into several QE prediction problems and aggregates the results with a soft logic module. From the perspective of online serving, EBRM has advantages over the commonly-used Bi-encoders and Cross-encoders and is rationalizable, intervenable, fast and accurate. Experimental results verify that our model is effective and beneficial for practical applications.
§ SERVING EBRM ONLINE
Similar to previous work on caching vectors for both queries and items <cit.>, we propose to cache the tagged entities for both queries and items. During the online serving period, we check whether a specific query and item pair has at least one overlapping entity to produce the relevance result.
Caching Entities for Query
To obtain the candidate relevant entities of each query and reduce the computation of QE prediction, we design a data collection method containing five steps. The first three steps are used to collect candidate QE pairs and follow the same procedure as described in Section 3.3. Based on the collected QE pairs and their click counts, we take two more steps. 1) We choose as candidate entities those whose click counts are in the top 100 for each query. 2) We use our proposed model to predict whether each candidate QE pair is relevant. These procedures are performed in an offline manner. Finally, we store the entities relevant to each query for online prediction [In our statistics of a real anonymous e-commerce website, we can cache over 80 percent of user queries that were previously searched.].
Caching Entities for Item We utilize an off-the-shelf entity extraction tool to directly recognize the product type entities in the item title and store them respectively.
§ CASE STUDY
In this section, we analyze the difference between models as shown in Figure <ref>. From these two examples, we find that our model EBRM correctly tags the query with accurate entities compared with the NER methods. Compared with the Bi-encoder, EBRM not only precisely predicts relevance output, but also offers entity-level explanations.
In example (a), EBRM finds that entities from the item, including “Dog ID Tag”, “Collar Accessories”, and “Dog Anti-lost Pendant”, are relevant to the query, and hence predicts the QI pair to be relevant. However, due to the different surface forms of the query and item, the NER methods fail to make the right prediction. Besides, we cannot easily analyze why the Bi-encoder makes erroneous predictions for a specific case.
In example (b), EBRM regards all entities in the title as irrelevant to the query and therefore predicts the label as irrelevant, whereas the Bi-encoder again makes a mistake.
§ EXPERIMENTS: ONLINE EVALUATION
We deployed the proposed model online in the search system of a real anonymous e-commerce website. Compared with the online search relevance model as the baseline, we perform online A/B testing to investigate the effectiveness of EBRM. Both experiments take about 10% proportion of search traffic. To evaluate the empirical results on search relevance, we ask two human annotators to label the search results of randomly sampled queries over the website. If these two annotators produce different annotations for a QI pair, we will ask a third annotator to label the QI. We find that the annotation consistency is 95%. Motivated by previous work on online search relevance evaluation <cit.>, we evaluate the ratio of relevant QI for both the base and test buckets. Results show that our proposed method outperforms the online search system and improves the search relevance of the website by 1.29%.
Besides, we find that the scale of queries or items is much bigger than that of entities as shown in Table <ref>. Entities per query or item is not more than 10, which means that our proposed EBRM has a small memory storage cost, which is important for online e-commerce systems.
|
http://arxiv.org/abs/2307.03081v1
|
20230706154912
|
Power-Aperture Resource Allocation for a MPAR with Communications Capabilities
|
[
"Augusto Aubry",
"Antonio De Maio",
"Luca Pallotta"
] |
eess.SP
|
[
"eess.SP"
] |
Power-Aperture Resource Allocation for a MPAR with Communications Capabilities
Augusto Aubry, Senior Member, IEEE, Antonio De Maio, Fellow, IEEE, and Luca Pallotta, Senior Member, IEEE
A. Aubry and A. De Maio are with the Department of Electrical Engineering and Information Technology (DIETI), Università degli Studi di Napoli “Federico II”, via Claudio 21, I-80125 Napoli, Italy. E-mail: {augusto.aubry, ademaio}@unina.it.
L. Pallotta is with School of Engineering, University of Basilicata, via dell'Ateneo Lucano 10, 85100 Potenza, Italy. E-mail: [email protected].
Multifunction phased array radars (MPARs) exploit the intrinsic flexibility of their active electronically steered array (ESA) to perform, at the same time, a multitude of operations, such as search, tracking, fire control, classification, and communications. This paper aims at addressing the MPAR resource allocation so as to satisfy the quality of service (QoS) demanded by both line of sight (LOS) and non line of sight (NLOS) search operations along with communications tasks. To this end, the ranges at which the cumulative detection probability and the channel capacity per bandwidth reach a desired value are introduced as task quality metrics for the search and communication functions, respectively. Then, to quantify the satisfaction level of each task, for each of them a bespoke utility function is defined to map the associated quality metric into the corresponding perceived utility. Hence, assigning different priority weights to each task, the resource allocation problem, in terms of radar power aperture (PAP) specification, is formulated as a constrained optimization problem whose solution optimizes the global radar QoS. Several simulations are conducted in scenarios of practical interest to prove the effectiveness of the approach.
dynamic resource allocation, integrated sensing and communication (ISAC), quality of service, resource management, RIS.
§ INTRODUCTION
Modern radar systems are becoming more and more sophisticated due to the stressing requirement of multifunctionality, which can be defined as the capability of performing and managing a multitude of different operations. This is of vital importance in the modern battlefield scenario, which may comprise a plethora of challenging requirements so as to account for possibly different threats. Therefore, the MPAR must perform different functions, such as search, tracking, fire control, classification, communications, ECCM, and also a multitude of tasks associated with each radar function <cit.>. To realize the aforementioned operations, the radar exploits the intrinsic flexibility provided by its active ESA antenna, which allows it to synthesize multiple diverse beams, as well as to steer them into specific directions with negligible delays and without angular continuity requirements. Moreover, on the transmit side different waveforms, PRIs, dwells, and energy values can be used. The management of the system degrees of freedom is delegated to the RRM, which assigns priorities to the functions and to the tasks composing them. Additionally, it performs their dynamic scheduling together with the parameter selection and optimization <cit.>. Accordingly, the mentioned functions and tasks are generally accomplished by dedicating to each of them specific amounts of the available radar resources, for instance multiplexing them over different time intervals and/or looking angles. It is also clear that, in assigning the resources to each function/task, the RRM has to comply with physical and technical constraints, so as to appropriately handle the limited resource budget and the task-induced performance constraints. In this respect, the RRM must decide, on the basis of the assigned priorities, on the optimal allocation of the controllable resources in order to guarantee the necessary quality for the high priority tasks at the expense of the others. Needless to say, in the scheduling process, once the resources to manage are specified, a tailored figure of merit for each involved task as well as the associated utility function must be defined to realize an optimized distribution of the available radar degrees of freedom <cit.>. Additionally, priorities are represented via scalar weights associated with each task. Then, the optimization problem for the resource sharing is set up on the basis of the above quantities, where the objective function describing the satisfaction with the overall success of the radar mission is maximized <cit.>. In this respect, the RRM can use different optimization tools to perform resource allocation. Among them, it is worth mentioning the Q-RAM <cit.> and the CDAPS <cit.>.
The Q-RAM consists of a few steps to handle a constrained optimization problem for discrete parameter selection. In a nutshell, starting from the situation where the resource assigned to each task is zero, it iteratively allocates the degrees of freedom to the tasks in order from the highest to the lowest marginal utility. Once the available resource is entirely allocated, the algorithm ends. Other interesting applications of the Q-RAM within the framework of radar resource management can be found in <cit.>. Analogously, the CDAPS models the tasks as agents, each of them having its own resource to utilize. Since the total amount of resources for all tasks should not exceed a specific quantity, the problem is tackled through the application of a CDA market algorithm <cit.>. Some other interesting uses of the CDAPS related to the radar resource management problem can be found in <cit.>. Other studies devoted to the optimization of the power allocation in a distributed MIMO system performing both radar and communication functions have also been developed in recent years <cit.>. In particular, in <cit.>, an optimization problem is formulated to approach as closely as possible the desired performance in terms of target detection along with the desired data rate of the communication function. Moreover, in <cit.>, the allocation paradigm is modified to boost the performance of the distributed MIMO system in terms of its LPI. Finally, in <cit.>, the above-described resource allocation is extended to the context of multi-target tracking.
Unlike the mentioned references, in this paper, a QoS optimization is developed for a suitable allocation of the resources in a MPAR system performing ISAC <cit.> activities, via a multitude of functions and tasks ranging from surveillance in both LOS and NLOS environments to data transmission operations. To this end, following the lead of <cit.>, after defining parameters characterizing multiple search sectors, RIS-aided search, as well as multiuser COM tasks, their respective quality metrics and utility functions are introduced. Hence, the resulting resource allocation paradigm is formulated as a constrained optimization problem, where the variable is the PAP whose bespoke distribution allows to maximize the overall QoS. Several case studies of practical interest are analyzed to demonstrate the validity of the approach.
The paper is organized as follows. In Section <ref>, the MPAR system is presented and the QoS optimization problem is formulated considering the PAP as degree of freedom. Then, in Section <ref> the quality metrics are defined for each task together with their respective utility functions. The problem is particularized and solved for some case studies of practical interest in Section <ref>. Finally, some concluding remarks are given in Section <ref>.
§ PROBLEM STATEMENT
In this paper, an MPAR system equipped with an active ESA antenna and capable of performing multiple functions, e.g., to mention just a few, radar surveillance (search) in LOS scenarios, radar surveillance in NLOS scenarios (a.k.a. detection over the corner) by means of RISs, COM activities, tracking, etc., is considered (see Fig. <ref> for a notional operating scenario).
To allocate appropriately the resources required by each task, the radar employs a dynamic radar parameter assignment. In an ideal context, the system has the possibility to assign to each task the resources demanded to reach the desired performance. However, due to the limited availability, the radar system has to settle for a suitable distribution of the degrees of freedom over the different tasks. Therefore, in an MPAR, the resources at the radar's disposal are not fixed a priori as in classic surveillance systems; rather, they are dynamically allocated during its operation on the basis of the specific mission and its actual state, as well as depending on some priorities associated with each task. From a practical point of view, the active ESA is composed of many tiles, each with a given PAP. They are clustered according to the requirements of the system tasks so that each group realizes an overall PAP value. A pictorial description of the concept can be seen in Figure <ref>.
The PAP (defined as the product between the transmitted power and the radar aperture) is considered as the limited resource that must be granted to perform the different tasks. Obviously, if the available PAP exceeds that needed to satisfy the requirements of all the active tasks, enough PAP is given to each of them. Nevertheless, since the PAP is physically and practically limited, only a percentage of the resource demanded by each task can be, in general, allocated by the RRM at each schedule time. The aforementioned assignment is performed on the basis of a pool of figures of merit and utility functions depending, in general, on the specific resources to distribute as well as on the control and environmental parameters, say _i, i=1,…,L, where L is the number of tasks.
To proceed further, let us indicate by q_i(PAP_i;_i) the quality metric characterizing the performance of the i-th task. The RRM should find the optimal partition of PAP between tasks such that the weighted sum of their utilities is maximized <cit.>, <cit.>. In this context, the task utility function provides the satisfaction level corresponding to the achieved task quality metric value. Moreover, to partially account for different degrees of relevance and priorities, these utilities are suitably weighted in the formation of the overall RRM utility metric. In other words, denoting by PAP = [PAP_1,PAP_2,…, PAP_L]^T ∈ℝ^L the vector containing as i-th entry the PAP attributed to the i-th task, i=1,…,L, the PAP distribution is obtained as the optimal solution to the following constrained optimization problem <cit.>
max_PAP   u(PAP)
s.t.   ∑_i=1^L PAP_i ≤ PAP_tot
       PAP_i ≥ 0,   i = 1, …, L,
where
u(PAP) = ∑_i=1^L w_i u_i(q_i(PAP_i; _i)),
PAP_tot is the total amount of PAP available at the MPAR, u_i(·), i=1,…, L, is the utility function of the i-th task, whereas w_i, i=1,…, L, are the weights reflecting the priorities among the L tasks.
The next section describes the task quality metrics together with their corresponding utilities herein considered for the dynamic PAP allocation paradigm described by (<ref>).
§ TASK QUALITY AND UTILITY FOR QOS RESOURCE MANAGEMENT
The allocation strategy formalized by Problem (<ref>) depends on the considered figure of merits q_i(·;·), and utility functions u_i(·), i=1,…, L. The goal of this section is to specify them, so as to concretely define the scheduling machinery.
A meaningful figure of merit for the surveillance functions (both in the LOS and NLOS scenarios) is provided by the cumulative detection range, denoted as R, that is the range where the cumulative P_d is larger than or equal to a desired value <cit.>. The cumulative P_d is indeed defined as the probability that a target is detected at least once in a given number of dwells <cit.>. In fact, when a target enters in a search sector, its detection can be performed over multiple scans. Moreover, the cumulative P_d increases at each scan especially as the target approaches the radar.
Similarly, for the COM function, the quality metric can be defined as the communication range, indicated as R_com, corresponding to the maximum distance at which a minimum bit-rate can be conveyed reliably. These two metrics are deeply described in Subsections <ref> and <ref>.
Before proceeding further, it is worth recalling the one-way link equation, which is useful for subsequent derivations.
Remark 1: Let us consider a source located at the point A_1∈ℝ^3, transmitting an EM wave with a peak power of P_T and an antenna steered in the direction described by the azimuth and elevation angles ϕ_0 and θ_0, according to the coordinate system depicted in Fig. <ref>. Denoting by G_T the peak antenna gain when it points in the boresight direction, the spatial power density at point A_2 is
𝒫^in = P_T G_T/4π R^2 L_s L_steer,
where R=A_2-A_1, i.e., the distance between the transmitter and the receiver, and L_s is the combined system operational loss <cit.>. Moreover, L_steer is the term accounting for the scanning gain loss of the steered antenna in the pointing direction[It is worth to underline that, even if L_steer depends on the considered pointing angles, to simplify the notation the dependence on (θ_0, ϕ_0) is omitted in the rest of the paper.] (θ_0, ϕ_0), which implicitly embeds the spatial selectivity in the antenna gain. In fact, as the pointing angles deviates from the boresight, the beam broadens while its peak drops out. The loss in peak gain due to scanning for a generic planar array depends on both the pointing direction (i.e., azimuth and elevation) and the single element radiating pattern. Practically, the values of these losses are off-line evaluated and then stored in a look-up table to be applied during radar's operation. However, in the particular case of a URA under some technical assumptions as for instance large array size and omnidirectional array elements, L_steer assumes a simplified approximated form, depending only on the elevation angle cosine <cit.>.
§.§ Search task quality metric
Let us indicate with P_d(R') the single-look P_d at range R', and assume that S is the number of scans the target needs to reach the range R from the pop-up range R_m. Hence, the respective cumulative P_d for the search sector of interest at range R is given by <cit.>
P_c(R|R_m) = 1 - ∏_n=0^S-1[1 - P_d(R_m - n v_r t_f - Δ)],
with v_r the target radial speed, t_f the frame time (i.e., the time necessary to perform a single scan of the sector), and Δ a sample of a uniform random variable[Without loss of generality, Δ is set equal to zero in the next analyses.] in the interval [0, v_r t_f], with v_r t_f the distance traveled by the target in a single scan, modelling the initial target position in the corresponding radar cell. Note that R=R_m-(S-1)v_r t_f - Δ. The single-look P_d can be evaluated once the desired false alarm probability, say P_fa, is set. More specifically, assuming a SW 0 (respectively a SW 1) model for the target amplitude and assuming a coherent integration of the pulses in a dwell, the single-look detection probability at range R' can be obtained as <cit.>
P_d(R') = Q_M(√(2SNR), √(-2log P_fa)) (SW0)
and
P_d(R') = P_fa^1/(1+ SNR) (SW1),
where SNR is the coherent SNR at range R', and Q_M(·, ·) the Marcum Q-function. Note that, the functional dependence on the variable R' of the P_d is embedded in the expression of the coherent SNR.
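As a numerical illustration of the cumulative detection probability above (not tied to the system parameters used in the case studies), the following Python fragment evaluates the Swerling 1 single-look P_d and accumulates it over successive scans as the target closes in; the R^-4 SNR law and all numbers are illustrative assumptions of ours:

import numpy as np

def pd_sw1(snr, pfa=1e-6):
    # Single-look detection probability for a Swerling 1 target.
    return pfa ** (1.0 / (1.0 + snr))

def cumulative_pd(snr_of_range, r_m, v_r, t_f, n_scans, pfa=1e-6):
    # P_c(R | R_m) = 1 - prod_n [1 - P_d(R_m - n v_r t_f)], with Delta = 0.
    ranges = r_m - np.arange(n_scans) * v_r * t_f
    pd = pd_sw1(snr_of_range(ranges), pfa)
    return 1.0 - np.prod(1.0 - pd)

snr_law = lambda r: 10 ** 1.5 * (40e3 / r) ** 4  # toy law: 15 dB at 40 km
print(cumulative_pd(snr_law, r_m=70e3, v_r=300.0, t_f=2.0, n_scans=20))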
Let us now consider a radar located at point A_1^LOS aimed at detecting a (possible) target at point A_2^LOS in a LOS environment. To contextualize the cumulative P_d expression to the resource allocation process, it is necessary to particularize the result of Remark 1 (<ref>) to the links A_1^LOS-A_2^LOS and A_2^LOS-A_1^LOS. Accordingly, the SNR can be expressed as <cit.>
SNR^LOS = P_T G_T G_R λ_0^2 σ n_p/(4π)^3 R_LOS^4 k T_s B L_s^LOSL_steer^LOS,
where G_R is the receiving antenna peak gain, R_LOS=A_1^LOS-A_2^LOS, T_s is the system noise temperature, L_s^LOS is the combined two-way system operational loss <cit.>, L_steer^LOS is the total scanning loss in the LOS scenario, σ is the target RCS, k is the Boltzmann's constant, λ_0 is the operating wavelength, and n_p is the number of integrated pulses in a dwell.
Assuming a monostatic radar configuration using the same beam in transmission and reception, (<ref>) can be arranged in the search-form of the RRE <cit.>. To this end, recall that <cit.>
t_d = t_f/M = t_f λ_0^2/(Ω A_e),
where M is the number of beam positions to cover the solid angle search sector Ω and the effective area of the radar antenna A_e is related to the radar peak gain by <cit.>
G_T = 4πA_e/λ_0^2.
Hence, substituting (<ref>)-(<ref>) in (<ref>), the search-form of the RRE (<ref>)-(<ref>) boils down to
SNR^LOS = PAPσ/4π k T_s R_LOS^4 L_s^LOSL_steer^LOSt_f/Ω
where PAP = P_T A_e.
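A direct transcription of the search-form RRE, useful to see how the PAP enters the detection budget, is sketched below; the loss, noise temperature, and sector values are illustrative placeholders of ours, not those of the case studies:

import numpy as np

K_BOLTZ = 1.380649e-23

def snr_los(pap, r, sector_sr, t_f, rcs=1.0, t_sys=500.0, loss=10 ** (8 / 10)):
    # SNR^LOS = PAP * sigma * t_f / (4*pi*k*T_s*R^4*L*Omega); loss lumps L_s and L_steer.
    return pap * rcs * t_f / (4 * np.pi * K_BOLTZ * t_sys * r ** 4 * loss * sector_sr)

snr = snr_los(pap=40.0, r=40e3, sector_sr=0.11, t_f=2.0)
print(f"{10 * np.log10(snr):.1f} dB")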
As to the NLOS scenario, encompassing a gapfiller RIS that aids the detection over the corner <cit.>, let us indicate with A_1^NLOS, A_2^NLOS, and A_3^NLOS the positions of the radar, RIS, and target, respectively, and, accordingly, R_1,NLOS = A_1^NLOS - A_2^NLOS and R_2,NLOS = A_2^NLOS - A_3^NLOS. Therefore, leveraging Remark 1, the expression for the SNR can be derived accounting for the multiple paths involved in the surveillance process, i.e., A_1^NLOS-A_2^NLOS, A_2^NLOS-A_3^NLOS, A_3^NLOS-A_2^NLOS, and A_2^NLOS-A_1^NLOS, along with the target RCS and the radiation patterns synthesized at the RIS equipment.
Specifically, the RRE assumes the form <cit.>
SNR^NLOS = G_T^2 G_RIS^2 A_RIS^2 η_RIS^2 λ_0^2 σ P_avg t_d/R_1,NLOS^4 R_2,NLOS^4 (4π)^5 k T_s L_s^NLOSL_steer^NLOS,
with P_avg the average transmit power, L_s^NLOS the combined system operational loss in the NLOS case <cit.>, L_steer^NLOS the total scanning loss in the NLOS scenario. A_RIS is the RIS area, that for a uniform rectangular geometry can be expressed as δ_x δ_y N_1 N_2, with δ_x=δ_y=λ_0/2 the patch size along x- and y-direction, respectively, and N_1, N_2 the respective number of patches. Additionally, η_RIS is the RIS efficiency (assumed, for simplicity, common to all the patches), which accounts for taper and spillover effects <cit.>. Hence, the product A_RISη_RIS is the effective aperture of the RIS. Finally, G_RIS is the RIS peak gain.
The SNR of a RIS-aided search radar can be again expressed in terms of PAP. Precisely, substituting (<ref>)-(<ref>) in (<ref>), the search-form of the RIS-aided RRE is
SNR^NLOS = PAP G_RIS^2 A_RIS^2 η_RIS^2 σ/R_1,NLOS^4 R_2,NLOS^4 (4π)^3 k T_s L_s^NLOSL_steer^NLOSt_f/Ω.
Before concluding this section, it is worth observing that a common reference value for the objective P_c is 0.9. For this reason, the corresponding cumulative detection range, denoted by R_90^LOS for LOS tasks, can be expressed as
R_90^LOS = P_c,LOS^-1(0.9, R_m),
having denoted by P_c,LOS^-1(x|R_m) the inverse of the function in (<ref>) for the LOS case, i.e., when the SNR is dictated by SNR^LOS in (<ref>). Analogously, for the NLOS search task
R_90^NLOS = P_c,NLOS^-1(0.9, R_m),
with P_c,NLOS^-1(x|R_m) the inverse of the function in (<ref>) for the NLOS case, i.e., when the SNR is given by SNR^NLOS in (<ref>).
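In practice R_90 can be obtained numerically; one simple way, sketched below with the same toy Swerling 1 model and R^-4 SNR law used in the earlier sketch (illustrative assumptions only), is to march the target inward scan by scan and stop at the first range where the cumulative probability reaches 0.9:

def pd_sw1(snr, pfa=1e-6):
    return pfa ** (1.0 / (1.0 + snr))

snr_law = lambda r: 10 ** 1.5 * (40e3 / r) ** 4  # toy R^-4 law, 15 dB at 40 km

def r90(snr_of_range, r_m, v_r, t_f, pfa=1e-6, target=0.9):
    # March the target inward scan by scan until the cumulative P_d reaches the target.
    r, miss = r_m, 1.0
    while r > 0:
        miss *= 1.0 - pd_sw1(snr_of_range(r), pfa)
        if 1.0 - miss >= target:
            return r
        r -= v_r * t_f
    return 0.0

print(r90(snr_law, r_m=70e3, v_r=300.0, t_f=2.0) / 1e3, "km")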
§.§ COM task quality metric
The metric that describes the quality for a COM task is the maximum range, indicated as R_com, for which the channel capacity per bandwidth is equal to a specific value. Before evaluating R_com, let us consider the transmission of a signal composed by the superposition of U≤ B^COMT^sym orthogonal waveforms, x_i(t), i=1,…, U, to U COM receiving users, with B^COM the bandwidth reserved by the radar to COM operations, and T^sym the symbol interval. Then, the transmitted signal is
x(t) = ∑_i=1^U ∑_h=0^N^sym-1 s_i(ϕ_i,θ_i) x_i(t - h T^sym)α_i(h),
0≤ t ≤ T^COM,
where T^COM = T^symN^COM, N^COM indicates the number of symbols transmitted in each scheduled interval, α_i(h), h=0, …, N^sym-1, accounts for the information symbols for the i-th user, and s_i is the beamformer pointing toward the i-th user at position (θ_i, ϕ_i) w.r.t. the coordinate system centered at the transmitting antenna phase-center position.
Assuming an AWGN channel, with w(t) the noise contribution, the signal acquired at the k-th receiver can be expressed as
r_k(t) = a^†_k β_k x(t-τ_k) + w(t)
= β_k ∑_i=1^U a^†_k s_i x_i(t-τ_k) α_i + w(t),
with a_k the steering vector in the direction (θ_k,ϕ_k), β_k the complex scaling factor accounting for channel propagation effects, and τ_k the propagation time of the k-th user. Note that the functional dependence of s_k on (ϕ_k,θ_k) is omitted for brevity.
At receiver side, the samples of the incoming signal after matched filter operation to x_k(t-τ_k) becomes
⟨ r_k(t), x_k(t-τ_k) ⟩ = β_k g_k α_k(h) + w_k(h), h = 0, …, N^sym-1,
where g_k is the transmitter beamformer complex gain in the direction (ϕ_k,θ_k) of the k-th user, and ⟨·, ·⟩ denotes the inner product operator.
Finally, the channel capacity per bandwidth (expressed in bit/s/Hz) for the k-th user can be defined as <cit.>
C = log_2(1 + SNR_k^COM),
where SNR_k^COM is the SNR at the k-th COM user receiver.
Let us indicate with A_1^COM and A_2^COM the positions of the transmitter and the k-th COM user, respectively, and R_k,COM = A_1^COM - A_2^COM. According to Remark 1, the SNR in (<ref>) can be computed with respect to the link A_1^COM-A_2^COM as
SNR_k^COM= P_k|g_k|^2 β_k/σ_k^2,
where P_k = 𝔼[|α_k|^2] is the transmitting power for the k-th communication link, and σ_k^2 = k T_s^COM B^COM is the noise power at the k-th receiver, with T_s^COM and B^COM the respective noise system temperature and effective bandwidth. Let us observe now that
|g_k|^2 β_k = G_T A^rx,k_e/4π R_k,COM^2 L_s^COML_steer^COM
with A^rx,k_e the effective area of the k-th user receiving antenna, and L_steer^COM the total scanning loss in the COM scenario. Hence, following the above definitions, SNR_k in (<ref>) can be expressed in terms of PAP, i.e.,
SNR_k = P_k |g_k|^2 β_k/σ_k^2 = PAP_k A^rx,k_e/λ_0^2 R_k,COM^2 L_s^COM L_steer^COMσ_k^2.
Finally, denoting by C_desired the reference value for the objective channel capacity, its corresponding range, say R_com, is derived as follows
R_com = √(PAP_k A^rx,k_e/(λ_0^2 L_s^COM L_steer^COM(2^C_desired-1)σ_k^2)).
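For a quick sanity check of this relation, the snippet below evaluates R_com for an illustrative set of parameters; the user antenna area, bandwidth, wavelength, and lumped losses are assumptions of ours, not the values used in the case studies:

import numpy as np

K_BOLTZ = 1.380649e-23

def r_com(pap, a_rx, c_desired, bandwidth, t_sys=500.0, lam=0.03, loss=10 ** (20 / 10)):
    # Range at which the capacity per bandwidth drops to c_desired bit/s/Hz.
    noise = K_BOLTZ * t_sys * bandwidth          # sigma_k^2
    snr_needed = 2 ** c_desired - 1
    return np.sqrt(pap * a_rx / (lam ** 2 * loss * snr_needed * noise))

print(f"{r_com(pap=20.0, a_rx=0.002, c_desired=8, bandwidth=100e6) / 1e3:.1f} km")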
§.§ Task utility
Once the task quality metrics are defined, the joint optimum allocation of tasks' PAPs can be computed as the optimal solution to the QoS optimization problem in (<ref>). In this respect, the RRM needs to map the quality metrics to their corresponding utilities. As a matter of fact, the utility provides a description of the degree of satisfaction reached when each task is completed. A possible way to define the utility for the i-th considered task is through the following model <cit.>
u_i(R_c) =
   0,                                  R_c < R_t_i
   (R_c - R_t_i)/(R_o_i - R_t_i),      R_t_i ≤ R_c ≤ R_o_i
   1,                                  R_c > R_o_i,
where R_t_i and R_o_i are the threshold and objective ranges of the i-th task, respectively. Obviously, at ranges lower than the threshold, the utility is zero, because the considered ranges are too close to the MPAR, making the function useless. Then, the utility increases linearly as the range increases until it reaches its objective value, beyond which it saturates to 1. It is worth noticing that both the threshold and objective ranges are task-dependent parameters.
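The piecewise-linear mapping is a one-liner in practice; the two calls below reuse the threshold/objective ranges later assigned to the horizon and long-range search tasks purely as examples:

import numpy as np

def task_utility(r_c, r_threshold, r_objective):
    # 0 below the threshold range, 1 above the objective range, linear in between.
    return float(np.clip((r_c - r_threshold) / (r_objective - r_threshold), 0.0, 1.0))

print(task_utility(30e3, 25e3, 38e3))  # horizon task, partially satisfied
print(task_utility(70e3, 45e3, 65e3))  # long-range task, saturated at 1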
§.§ Optimization algorithm
To obtain a solution to the challenging and non-convex resource allocation problem defined in (<ref>), the iterative optimization algorithm in <cit.> is exploited. Therein, the interior-point approach to constrained optimization[Maximizing a utility is tantamount to minimizing the associated cost, given by the opposite of the utility.] is employed, which amounts to solving a sequence of approximate minimization problems that include non-negative constrained slack variables (as many as the inequality constraints of the original problem) and equality constraints. These are easier to solve than the original inequality-constrained problem and are handled either via a direct solution of the corresponding KKT equations (via a linear approximation, i.e., a Newton step) or via a conjugate gradient method <cit.>. Specifically, the algorithm first attempts to pursue a direct step. If it cannot be applied, it employs a conjugate gradient approach. Notably, one relevant case where the direct step is not exploited arises when the approximate problem is not locally convex near the current iterate.
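To give a feel for the end-to-end allocation, the sketch below solves a scaled-down instance of problem (<ref>) with SciPy's SLSQP solver instead of the interior-point routine referenced above; the four tasks, their weights, and the toy R ∝ PAP^(1/4) quality curves (a rough consequence of the R^4 dependence in the search-form RRE) are our own illustrative assumptions:

import numpy as np
from scipy.optimize import minimize

PAP_TOT = 100.0
weights = np.array([0.3, 0.3, 0.2, 0.2])
r_t = np.array([25.0, 45.0, 30.0, 20.0])    # threshold ranges (km)
r_o = np.array([38.0, 65.0, 45.0, 50.0])    # objective ranges (km)
scale = np.array([12.0, 18.0, 14.0, 16.0])  # toy quality curves: R(PAP) = scale * PAP^0.25

def total_utility(pap):
    r = scale * np.maximum(pap, 0.0) ** 0.25
    u = np.clip((r - r_t) / (r_o - r_t), 0.0, 1.0)
    return weights @ u

res = minimize(lambda p: -total_utility(p), x0=np.full(4, PAP_TOT / 4),
               method="SLSQP", bounds=[(0.0, PAP_TOT)] * 4,
               constraints=[{"type": "ineq", "fun": lambda p: PAP_TOT - p.sum()}])
print(np.round(res.x, 1), round(-res.fun, 3))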
§ CASE STUDIES
In this section, some case studies for the pondered MPAR system performing both search and COM operations are analyzed. Specifically, the resource allocation is done after defining the priority weight for each task as well as the overall PAP available at the system. Problem (<ref>) is solved using the Mathworks Matlab® Quality-of-Service Optimization for Radar Resource Management <cit.> which performs a constrained minimization of a given objective function. The focus is on a scenario involving seven different tasks: three refer to search in LOS scenarios (shortly referred to as horizon, long-range, and high-elevation, respectively), three COMs with three different users, and a RIS-aided search to tackle a NLOS surveillance.
§.§ Parameter setting
Tests conducted in this paper refer to an MPAR operating in X-band with central frequency f_0 = 10 GHz. Now,
before providing the definition of all the involved parameters, the antenna coverage sector of each considered task is specified in terms of angle limits and observation range. In particular, the angular parameter setup specifies the following sector limits:
* horizon, [-45, 45] degrees in azimuth and [0, 4] degrees in elevation,
* long-range, [-30, 30] degrees in azimuth and [0, 30] degrees in elevation,
* high-elevation, [-45, 45] degrees in azimuth and [30, 45] degrees in elevation,
* COM functions, [-45, 45] degrees in azimuth and [0, 45] degrees in elevation,
* RIS-aided, [15, 20] degrees in azimuth and [28, 32] degrees in elevation.
Additionally, the maximum range of interest (a.k.a. range limit) for each task is set as:
* horizon, 40 km,
* long-range, 70 km,
* high-elevation, 50 km,
* COM user 1, 45 km,
* COM user 2, 55 km,
* COM user 3, 65 km,
* RIS-aided, 4 km.
Other parameters for the three search tasks are summarized in Table <ref>, for the three COM tasks in Table <ref>, and for the RIS-aided search (a uniform rectangular RIS is considered during the analysis) in Table <ref>. It is worth highlighting that a practical example of a search radar, which in part agrees with Table <ref>, is that of a ground surveillance SHORAD (short range air defence) system for air reconnaissance. In fact, it can possibly transmit with a low effective radiated power, and can also operate above C-band, where free-space loss is high <cit.>.
§.§ Case study 1
The first case study refers to a MPAR with the parameters described in Section <ref> assuming a SW1 fluctuating target model for both the high-speed targets considered in three LOS search functions and for the small UAV to be detected via RIS-aided surveillance. In this scenario, the cumulative P_d (<ref>) and channel capacity per bandwidth (<ref>) are shown in Fig. <ref> versus range for three different values of the PAP assigned to each task, viz. [20, 40, 80] W·m^2. Subfigures a) and c) of Fig. <ref> refer to search tasks, whereas subfigure b) to COM operations.
For all the subfigures of Fig. <ref>, the corresponding range limit is also shown. QoS values beyond these limits are not of interest and are set to zero, as is evident for the COM tasks. Moreover, the desired value for the cumulative P_d (i.e., P_c_desired=0.9) and for the channel capacity per bandwidth (i.e., C_desired = 8 bit/s/Hz) are highlighted in the same graph. Hence, the corresponding range values R_90 and R_com are derived for each PAP value, numerically solving the equations P_c^LOS(R_LOS|R_m)-P_c_desired = 0, P_c^NLOS(R_2,NLOS|R_m)-P_c_desired = 0, and C(R_COM)-C_desired=0 with respect to the variables R_LOS, R_2,NLOS, and R_COM, respectively. These results are reported in Fig. <ref>, where the task quality is shown versus the resource allocated to any specific task, i.e., PAP_i = PAP_h, for any i,h=1,…,7. As expected, increasing the assigned PAP produces a growth of the task quality until its limit is attained. This means that if the current value of PAP for a specific task is such that the range limit is almost attained, it is no longer required to allocate additional resources, since they do not produce appreciable improvements in the corresponding quality.
In Fig. <ref> the utility functions for the above considered tasks are reported, particularizing the general form given by (<ref>) by setting the objective ranges to R_o = [38, 65, 45, 35, 45, 50, 2] km and the threshold ranges to R_t = [25, 45, 30, 5, 15, 20, 0.153] km for the three search (subfigure a), three COM (subfigure b) and RIS-aided (subfigure c) tasks, respectively. Note that the threshold ranges are set according to different requirements for each task under study. Precisely, for the LOS search functions, it is the minimum range below which the mission is considered failed, because the target is too close to the radar to successfully activate subsequent actions. As to COM tasks, the communication is assumed valid within a specific annular region between two circles centered at the radar location, i.e., with the user located beyond a minimum distance from the radar and within the maximum range of interest. For the RIS-aided detection, the threshold range is set equal to the FFD, which can be computed as <cit.>
R_FFD = 2 (max(δ_x N_1, δ_y N_2))^2/λ_0.
Therefore, for the parameter values summarized in Table <ref>, the FFD computed via (<ref>) is approximately 153 m. Finally, the objective ranges, which allow the maximum utility to be reached, are set according to the mission requirements.
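The FFD value is easy to reproduce numerically; in the sketch below the element spacing and the array size are assumed values chosen only so that the result matches the roughly 153 m quoted above, not figures taken from Table <ref>.

```python
# Far-field distance of a uniform rectangular RIS; spacings and element counts are
# assumptions picked to reproduce an FFD of about 153 m at f_0 = 10 GHz.
lam0 = 3e8 / 10e9                   # wavelength [m]
delta_x = delta_y = lam0 / 2        # element spacing (assumed half-wavelength)
N1 = N2 = 101                       # elements per side (assumed)
R_FFD = 2 * max(delta_x * N1, delta_y * N2) ** 2 / lam0
print(f"R_FFD = {R_FFD:.1f} m")     # about 153 m
```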
Moreover, using the above-described utility functions, the PAP (namely, the resource) can be mapped to the utility space as shown in Fig. <ref>. From the inspection of these curves, it appears that the long-range search, high-elevation search, and COM users 2 and 3 need non-negligible PAP values to reach nonzero utilities, viz., 56, 74, 22, and 40 W·m^2, respectively. Conversely, the rest of the tasks are capable of reaching nonzero utilities with very low values of assigned PAP. Moreover, the long-range and high-elevation search functions demand high PAP values to obtain the maximum utility, i.e., 422 and 435 W·m^2. Interestingly, the operation that requires the minimum PAP value to attain the maximum utility is the RIS-aided search, with a PAP of 38 W·m^2.
Now, the first simulation analyzes the case where the resource allocation is performed under normal operational conditions (i.e., no optimization is performed) in which the maximum utility is reached for each of the operating tasks. Hence, each task exploits all the necessary resource (i.e., the maximum utility PAP) to fulfill its demanded nominal objective, viz. cumulative P_d and/or channel capacity per bandwidth. To highlight this distribution, Fig. <ref> proposes a graphical representation of the antenna coverage sectors as well as the objective value R_90 (respectively R_com) for the different radar operations. Subfigures refer to a)
LOS search, b) COM, and c) NLOS search tasks. Additionally, on the right side of this diagram a bar chart indicating the PAP allocated to each task is also reported. Specifically, the maximum utility values are obtained with the allocation PAP = [74, 435, 422, 103, 190, 248, 37]^T W·m^2, corresponding to a total PAP used by the MPAR of about 1509 W·m^2 (i.e., the sum of the maximum utility PAP values for each task).
Comparing the bar chart of Fig. <ref> with the diagram representing the utility versus resource of Fig. <ref>, it is evident that in the case of normal operational conditions, all tasks are capable of obtaining the maximum utility. In this situation, therefore, independently of the task, the respective quality metric is greater than or equal to its desired objective value. However, in some operating conditions, the total amount of resources available at the MPAR does not allow assigning the ideally required PAP to each task. This can also be explained by observing that, often, a non-negligible part of the available resources should be reserved for other tasks (e.g., tracking) <cit.>. For the above reasons, the RRM should compute the optimal PAP allocation, once its maximum available value is set. Hence, in this case study, the maximum PAP is set to 50% of that under normal operational conditions, that is approximately 755 W·m^2. Moreover, the following set of priority weights is enforced, = [0.4, 0.1, 0.2, 0.05, 0.05, 0.05, 0.15]^T, providing low priorities to COM tasks with respect to search ones. Solving Problem (<ref>) with the above constraints results in the resource distribution reported in Fig. <ref>, where as before subfigures refer to a) LOS search, b) COM, and c) NLOS search tasks. More specifically, the allocated PAPs are equal to PAP = [74, 158, 293, 70, 63, 60, 37]^T W·m^2. To give insights into the obtained results, Fig. <ref> shows for each task the optimal resource allocation in terms of PAP versus R_90 (respectively R_com) together with the corresponding utility, with subfigures referring to a)-c) LOS search, d)-f) COM, and g) NLOS search operations. As expected, the RRM allocates PAP so that the maximum utility is reached for the horizon search function, being the task with the highest priority, with a corresponding R_90=38 km. Analogously, the RIS-aided search also experiences an allocation of PAP that allows the maximum utility to be reached, with R_90=2 km. This is because it has a medium priority (i.e., a weight of 0.2) together with the fact that it has low requirements in terms of resources. The worst case is observed for the COM user 3 task, where the PAP allocation only ensures a utility of 0.15, its priority weight being quite low and equal to 0.05.
§.§ Case study 2
In this situation, the PAP allocation is performed for a different set of priority weights, again setting its maximum value to 755 W·m^2, i.e., half of that used under normal operational conditions. As a matter of fact, the priority weights for the COM tasks are fixed to 0, resulting in the vector = [0.4, 0.2, 0.2, 0, 0, 0, 0.2]^T. The solution to Problem (<ref>) with the above constraints produces the PAP assignment over the considered tasks illustrated in Fig. <ref>, where subfigures refer to a) LOS search tasks, c) COM tasks, and d) RIS-aided search task. Specifically, the allocated PAP values are PAP = [74, 266, 378, 0, 0, 0, 37]^T W·m^2. Again, Fig. <ref> shows for each task the optimal resource distribution in terms of PAP versus R_90 (respectively R_com) together with the corresponding utility, with subfigures referring to a)-c) LOS search tasks, d)-f) COM tasks, and g) RIS-aided search task. As expected, the RRM does not allocate any PAP to the COM tasks, reflecting the associated zero priority weights. On the contrary, the long-range and high-elevation search functions experience a growth in their assigned resources, with the corresponding utilities increasing from 0.65 to 0.88 and from 0.83 to 0.95 with respect to case study 1, respectively. Obviously, the other two tasks (namely, horizon and RIS-aided search), having already reached their maximum utility, maintain the same allocation as before.
§.§ Case study 3
The test performed in this subsection is devoted to the impact of the antenna pointing direction on the performance of the MPAR in terms of resource distribution over the different tasks. In particular, for all tasks, the term accounting for scanning losses is fixed according to the values summarized in Table <ref>. Moreover, as to the other parameters, this study refers to the same simulation setting as in Section <ref>, apart from the fact that, as already specified, the losses accounting for the spatial selectivity of the antenna gain are set equal to their respective worst case for each angular sector.
The conducted test considers the availability of a maximum PAP of 755 W·m^2 (that is, again approximately 50% of that under normal operational conditions in case study 1), with the same priority weights as in the first case study. Solving Problem (<ref>) with the above constraints results in the PAP assignment illustrated in Fig. <ref>, where subfigures refer to a) LOS search, c) COM, and d) RIS-aided search tasks. More in detail, the allocated PAPs are now equal to PAP = [74, 202, 323, 0, 0, 0, 155]^T W·m^2. Again, to further shed light on the results, Fig. <ref> shows for each task the optimal resource allocation in terms of PAP versus R_90 (respectively R_com) along with the corresponding utility, with subfigures referring to a)-c) LOS search, d)-f) COM, and g) RIS-aided search tasks. It is now interesting to observe that the resource allocation does not follow the same trend as in the scenario analyzed in Section <ref>. In fact, the COM tasks are all penalized with an assignment of zero PAP due to their very low priorities (i.e., 0.05). All resources are allocated to the other tasks, with the horizon search function attaining its maximum utility thanks to the attributed high priority. The RIS-aided search task also reaches the maximum utility because of the joint combination of a medium priority weight and the reduced PAP necessary to satisfy it. Finally, it is worth observing that all the considered tasks (except the horizon) suffer the effect of the scanning loss, which in turn reflects on a higher PAP required to reach the same utility. Therefore, the RRM tends to sacrifice the tasks with the lowest priority, i.e., the COM ones, to guarantee sufficient performance to the others.
§ CONCLUDING REMARKS
This paper has addressed the problem of optimal PAP allocation in an MPAR system performing ISAC operations. More specifically, the considered methodology has been aimed at solving the QoS optimization problem jointly accounting for search scenarios in LOS and NLOS as well as COM tasks. Therefore, to maximize the QoS, the resource allocation is formulated as a constrained optimization problem whose objective function is the weighted sum of the utilities achieved with the PAP assigned to each specific task. In this respect, the cumulative detection range is defined as the quality metric for search tasks, whereas for COM tasks it is chosen as the range ensuring a desired channel capacity per bandwidth. Several case studies have been analyzed to prove the validity of the designed allocation strategy in challenging operational scenarios, ranging from the analysis of different priority weight selections to the study of the impact of the spatial selectivity of the antenna pointing angle. The analyses of the results show that the MPAR tends to mostly allocate the available resources to the high-priority tasks at the expense of the others. By doing so, it is ensured that the utilities for the most important tasks attain values close to their objectives, whereas for the remaining tasks a lower level of satisfaction is obtained.
Possible future research could consider the extension of the framework to a multiface and/or multiband radar, as well as to multiradar systems. Moreover, the allocation of the beamformer weights to the different tasks is another valuable topic.
§ ACKNOWLEDGMENTS
The work of Augusto Aubry and Antonio De Maio was supported by the European Union under the Italian National Recovery and Resilience Plan (NRRP) of NextGenerationEU, partnership on “Telecommunications of the Future” (CUP J33C22002880001, PE00000001 - Program “RESTART”).
|
http://arxiv.org/abs/2307.01819v1
|
20230704164945
|
On the weight zero compactly supported cohomology of $\mathcal{H}_{g, n}$
|
[
"Madeline Brandt",
"Melody Chan",
"Siddarth Kannan"
] |
math.AG
|
[
"math.AG",
"math.CO",
"05E14, 14H10"
] |
On the weight zero compactly supported cohomology of _g,n
Madeline Brandt, Melody Chan, Siddarth Kannan
Department of Mathematics, Brown University, Box 1917, Providence, RI
[email protected], [email protected], [email protected]
August 1, 2023
=========================================================
For g≥ 2 and n≥ 0, let _g,n⊂_g,n denote the complex moduli stack of n-marked smooth hyperelliptic curves of genus g. A normal crossings compactification of this space is provided by the theory of pointed admissible /2-covers. We explicitly determine the resulting dual complex, and we use this to define a graph complex which computes the weight zero compactly supported cohomology of _g, n. Using this graph complex, we give a sum-over-graphs formula for the S_n-equivariant weight zero compactly supported Euler characteristic of _g, n. This formula allows for the computer-aided calculation, for each g≤ 7, of the generating function 𝗁_g for these equivariant Euler characteristics for all n. More generally, we determine the dual complex of the boundary in any moduli space of pointed admissible G-covers of genus zero curves, when G is abelian, as a symmetric Δ-complex. We use these complexes to generalize our formula for 𝗁_g to moduli spaces of n-pointed smooth abelian covers of genus zero curves.
§ INTRODUCTION
For integers g≥ 2 and n≥ 0, let _g,n⊂_g,n denote the complex moduli stack of n-marked smooth hyperelliptic curves of genus g. This space is a smooth Deligne–Mumford stack of dimension 2g + n - 1. The group S_n acts on _g,n by permuting the marked points, and the rational cohomology groups with compact support H^i_c(_g,n;) are S_n-representations in the category of mixed Hodge structures over . In particular, each cohomology group H^i_c(_g,n;) carries a weight filtration
W_0 H^i_c(_g,n;) ⊂ W_1 H^i_c(_g,n;) ⊂⋯⊂ W_4g + 2n - 2 H^i_c(_g,n;) = H^i_c(_g,n;),
which is preserved by the S_n-action. In this paper, we study the S_n-representation defined by the weight zero piece of this filtration.
When X is a smooth and separated variety or Deligne-Mumford stack, Deligne's weight spectral sequence <cit.> computes the associated graded pieces of the weight filtration on the compactly supported cohomology of X. It identifies the weight zero piece with the reduced cohomology of the dual complex of any normal crossings compactification of X. We will furnish a normal crossings compactification of _g, n using the theory of pointed admissible /2-covers, as developed by Abramovich–Vistoli <cit.>, Abramovich–Corti–Vistoli <cit.>, and Jarvis–Kaufmann–Kimura <cit.>, following Harris–Mumford's original theory <cit.>. Denoting the dual complex of the resulting boundary divisor by Θ_g,n, we then study the weight zero compactly supported cohomology of _g,n via the identification
W_0 H^i_c(_g,n;) ≅H̃^i - 1(Θ_g,n;)
mentioned above, where H̃^* denotes reduced cohomology. Along the way, we also explicitly determine the dual complex of the boundary in any space of pointed admissible G-covers of genus zero curves, for abelian groups G (Theorem <ref>).
Our main result concerns the S_n-equivariant weight zero compactly supported Euler characteristic
χ^S_n( W_0 H^*_c(_g, n;) ) := ∑_i = 0^4g + 2n - 2 (-1)^i _n(W_0 H^i_c(_g,n;)) ∈Λ,
where _n(·) denotes the Frobenius characteristic of an S_n-representation: this is an element of the ring
Λ = lim_⟵[x_1, …, x_n]^S_n
of symmetric functions, which encodes the character of the representation. See <cit.> or <cit.> for more on symmetric functions and the Frobenius characteristic.
For each g ≥ 2, we define
𝗁_g := ∑_n ≥ 0χ^S_n( W_0 H^*_c(_g, n;) )
to be the generating function for these equivariant Euler characteristics. Note that 𝗁_g is an element of Λ̂, the degree completion of Λ. In Theorem <ref> below, we prove a sum-over-graphs formula for the generating function 𝗁_g. The precise definition of the terms in the formula can be found in Section <ref>. For now, we only remark that T_2g + 2^<3 is a finite set of trees, and given such a tree C there is a canonically associated vertex-weighted graph P_C which can roughly be understood as a “tropical double cover" of C: see Section <ref> for details on this perspective.
Theorem A.
We have
𝗁_g = ∑_C ∈ T_2g + 2^<3 (-1)^|E_C|/|Aut(P_C)| ∑_τ∈Aut(P_C) sgn(τ|_E_C) ∏_k ≥ 1 (1 + p_k)^f(P_C, τ, k)
where E_C is the set of edges of the tree C,
p_k = ∑_n > 0 x_n^k∈Λ̂ is the kth power sum symmetric function, and k · f(P_C, τ, k) is given by the compactly supported Euler characteristic of the set of points in P_C which have orbit of length k, under the action of τ.
Implementing Theorem <ref> on a computer, we are able to compute 𝗁_g explicitly for 2 ≤ g ≤ 7: see Table <ref>. The code is available at <cit.>. Our data allows us to extract the polynomials F_n(t) ∈[t], for each n≤ 9, which have the property that
F_n(g) = χ^0_c(_g,n) for each g≥ 2, where
χ^0_c(_g,n) := ∑_i = 0^4g + 2n - 2 (-1)^i _ W_0H^i_c(_g, n;)
denotes the numerical weight zero compactly supported Euler characteristic. See Proposition <ref> in Section <ref> below.
Our proof of Theorem <ref> relies on our description of the cellular chain complex of Θ_g, n as a graph complex generated by certain double covers of trees, which are a special case of the theory of graph-theoretic admissible covers we develop in Section <ref>. We require that several subcomplexes of this graph complex are acyclic; the proofs are given in Section <ref>. As in earlier work on _g,n <cit.>, one conceptually important subcomplex is the repeated marking subcomplex, i.e., the subcomplex spanned by graph-theoretic admissible covers containing a vertex supporting more than one marking. This subcomplex is acyclic (Theorem <ref>), and after quotienting by it, the resulting chain complex is related to configuration spaces of distinct points on graph-theoretic admissible covers; see <cit.> for related work. Since Theorem <ref> is about Euler characteristics, we may work one graph-theoretic admissible cover at a time, summing the individual contributions. For each individual graph-theoretic admissible cover, we use Proposition <ref>, explained more below, to calculate its contribution. This proves Theorem <ref>.
Proposition <ref> may be useful in other applications, so we mention it briefly here: it gives a formula for the completed symmetric function
∑_n ≥ 0χ_c^S_n( (Conf_n(X) ×Δ^∘)/G),
where X is any finite CW complex, Δ^∘ is an open simplex, G is a finite group, and G acts on X cellularly and on Δ^∘ by permuting vertices. See Section <ref>. This proposition is closely inspired by a result of Gorsky <cit.> concerning complex quasi-projective varieties X with an action of a finite group; our specific formulation is a new contribution. In particular, it does not appear in the work of Chan–Faber–Galatius–Payne on the top weight cohomology of _g,n, where an alternate argument, which is less geometric, is used <cit.>.
Now let us turn our attention to individual cohomology groups, rather than Euler characteristics. First, for n=0,1,2, and 3, the cohomology of _g,n was completely computed by Tommasi <cit.>; see Section <ref>.
The consequences of these computations for the weight zero part of cohomology with compact supports can be interpreted via our work as statements about chain complexes of graph-theoretic admissible covers. In Section <ref>, we prove some of these statements, using the acyclicity results mentioned above. In particular, we deduce the following facts, first proved by Tommasi:
Proposition B.
For all g ≥ 2, we have
* W_0 H^i_c(_g, n; ) = 0 for all i, when n ≤ 1;
* When n = 2, we have
W_0 H_c^2g + 1(_g, 2 ;) ≅.
As an S_2-representation, we have
W_0 H_c^2g + 1(_g, 2 ;) ≅ triv if g is even, and sgn if g is odd.
Part (1) of Proposition <ref> is established via a spectral sequence argument, similar to the ones we use for acyclicity of other subcomplexes of Θ_g, n. For part (2), we write down an explicit cellular cycle on Θ_g, 2 corresponding to the nonzero class in W_0 H^2g + 1(_g, 2;): see Figure <ref> in Section <ref>. Tommasi shows additionally that W_0 H^i_c(_g, 2;) = 0 for i ≠ 2g + 1, but we do not see how to prove this directly using our graph complex, nor have we investigated whether we can use our methods to re-deduce W_0 H^*_c(_g,3;) for all g.
§.§ The support of W_0H^*_c(_g, n;)
It is worth noting that the weight zero compactly supported cohomology of _g,n is supported in at most two degrees. Precisely,
W_0 H^i_c(_g,n;)=0 unless i=2g-2+n or i=2g-1+n.
We now explain the claim (<ref>), which follows from an argument we learned from D. Petersen. To sidestep stack-theoretic issues, let us momentarily replace _g, n by its coarse moduli space H_g, n; this is inconsequential on the level of rational cohomology. It is well-known that H_g is affine, as it can be identified with the quotient _0, 2g+2/S_2g + 2. In general, H_g, n is not far from affine: as explained by D. Petersen in a MathOverflow post <cit.>, the affine stratification number <cit.> of H_g, n is 1 for all n > 0. By <cit.>, we may conclude that
H^i(_g, n; ) = 0 for i> 2g + n, and
H^i_c(_g, n; ) = 0 for i < 2g -2 + n,
the latter by Poincaré duality.
As the dual complex Θ_g, n of the normal crossings compactification of _g, n by pointed admissible /2-covers is a generalized cell complex of dimension 2g - 2 + n (Section <ref>), the claim (<ref>) follows immediately from (<ref>).
Thus, our formula for 𝗁_g is a formula for the difference of the two S_n-representations in (<ref>) and can be used to bound the multiplicities of Specht modules appearing in them individually. We have not investigated whether 𝗁_g is in fact a cancellation-free formula for this difference.
§.§ Related work on the cohomology of _g, n
Recently, there have been a number of significant advances on the geometry of moduli spaces of pointed hyperelliptic curves. Canning–Larson study the rational Chow ring of _g,n, in particular determining it completely for n≤ 2g+6 <cit.>. Their results also have implications for rationality of _g,n. More generally, there has been progress on understanding the birational geometry of _g,n; see, for example, the overview and references in that paper. In another direction, Bergström–Diaconu–Petersen–Westerland <cit.> compute the stable homology of braid groups with coefficients in (any Schur functor applied to) the Burau representation. These results have implications for the stable homology of moduli spaces of hyperelliptic curves with twisted coefficients. They can also be related to the Serre spectral sequence on rational cohomology for the fiber bundle _n(S_g) →_g,n→_g, as C. Westerland has explained to us.
Our focus here is the cohomology groups of _g,n with (untwisted) -coefficients, and specifically the weight zero compactly supported cohomology groups.
The topological Euler characteristic of _g, n has been computed by Bini <cit.>, but his techniques are not compatible with the weight filtration. Gorsky <cit.> calculates the equivariant Euler characteristic
χ^S_n(_g, n) := ∑_i = 0^4g + 2n - 2 (-1)^i_n(H^i(_g, n;)),
by fibering _g, n over _g. The fiber of this morphism over a point of _g representing a curve C is equal to Conf_n(C)/(C). Gorsky proceeds by stratifying _g by the S_n-equivariant Euler characteristic of the fibers, and then calculating the Euler characteristic of each stratum. Our techniques are similar in spirit to Gorsky's. The S_n-equivariant weight zero compactly supported Euler characteristic of _g, n is equal to h_n - χ^S_n(Θ_g, n), where h_n ∈Λ is the nth homogeneous symmetric function. As explained above, we first remove an acyclic locus from Θ_g, n, and then stratify the remaining space in terms of configuration spaces of graphs, summing up these contributions to give our formula (Section <ref>).
§.§ Relation to point-counting
For higher n, Bergström <cit.> studies the cohomology of _g, n via point-counting: for all g ≥ 2, he gives an algorithm to determine the count of _q-points of _g, n for n ≤ 7 and for all prime powers q. Together with the results of <cit.>, Bergström's work implies that for odd q, the number of _q-points of _g, n agrees with a polynomial P_g, n(q) for n ≤ 9 (there is a different polynomial for even q). By <cit.>, we have an equality
P_g, n(q) = ∑_j = 0^2g + n - 1χ_c^2j(_g,n) q^j,
where
χ_c^k(_g,n) := ∑_i = 0^4g + 2n - 2 (-1)^i_Gr_k^W H^i_c(_g, n;),
and
Gr_k^W H^i_c(_g, n;) := W_k H^i_c(_g, n;) / W_k - 1 H^i_c(_g, n;)
is the kth associated graded piece of the weight filtration. In particular, the constant term of P_g, n(q) is equal to the weight zero compactly supported Euler characteristic. Bergström's original work <cit.> is S_n-equivariant, and we have confirmed that our data agrees with his for n ≤ 7. He has explained to us that <cit.> and <cit.> imply that for each n ≤ 9, there exists a polynomial F_n(t) ∈[t], with degree bounded by n - 2 if n is even and n - 3 if n is odd, such that
χ_c^0(_g, n) = F_n(g)
for all g. With these bounds on the degrees, our formula allows us to compute this polynomial for all n ≤ 9, using the data in Table <ref>. The polynomials F_n(t) can certainly be calculated from Bergström's work, but did not explicitly appear there, so we record them below. In each case, the degree of F_n(t) attains the bound stated above.
Proposition C.
We have χ^0_c(_g, n) = 0 for n ∈{0, 1, 3}, while χ_c^0(_g, 2) = -1. For 4 ≤ n ≤ 9, we have the following:
χ^0_c(ℋ_g,4) = g(1-g)
χ^0_c(ℋ_g,5) =5g(-1+g)
χ^0_c(ℋ_g,6) = 1/8 g (198 - 203 g + 18 g^2 - 13 g^3)
χ^0_c(ℋ_g,7) =7/4 g (-78 + 83 g - 18 g^2 + 13 g^3)
χ^0_c(ℋ_g,8) = 1/4 g (3420 -3784 g + 1355 g^2 - 1005 g^3 + 25 g^4 - 11 g^5)
χ^0_c(ℋ_g,9) = 9/4 g (-2700 + 3092 g - 1545 g^2 + 1195 g^3 - 75 g^4 + 33 g^5).
§.§ Relation to previous work on _g, n
Our calculations are a new step in understanding weight zero compactly supported rational cohomology of moduli spaces via combinatorics of normal crossings compactifications
<cit.>.
In our calculation of 𝗁_g, we proceed in a similar fashion to Chan–Faber–Galatius–Payne <cit.>, who calculate the S_n-equivariant weight zero Euler characteristic of _g, n. They use the dual complex Δ_g, n of the Deligne–Mumford–Knudsen compactification _g, n⊂_g, n, which can be interpreted as a tropical moduli space of curves <cit.>. They express the generating function
𝗓_g := ∑_n ≥ 0χ^S_n( W_0 H^*_c(_g, n; ))
as a sum over contributions from configuration spaces of graphs. The contribution from each graph is a sum of monomials in the inhomogeneous power sum symmetric functions P_i := 1+p_i, of degree equal to the topological Euler characteristic of the graph. A crucial difference between their work and ours, which has been an unexpected subtlety here, is that they find that the only graphs contributing to their formula are connected with first Betti number g. As such, their formula for 𝗓_g is a Laurent polynomial in the P_i's, homogeneous of degree 1 - g. The ability to focus on graphs with fixed Euler characteristic is a significant conceptual aid to their work. In contrast, we find that while all of the graphs contributing to 𝗁_g are connected double covers of metric trees, they do not have fixed first Betti number, so their topological Euler characteristics vary, and indeed for g ≥ 3 the formulas for 𝗁_g are not homogeneous in the P_i's. When g = 2, we have _2, n = _2,n, so 𝗁_2 = 𝗓_2 is homogeneous of degree -1.
§.§ Applications to moduli spaces of admissible G-covers in genus zero
While our main focus in this paper is the moduli space _g, n, our techniques are more general. As mentioned above, Theorem <ref> in Section <ref> contains a description of the dual complex of the boundary divisor in any moduli space of pointed admissible G-covers of genus zero curves, when G is an abelian group. We specialize to G = /2 in order to study _g, n. We can prove a generalization of Theorem <ref> to more general moduli spaces of pointed G-covers: see Remarks <ref> and <ref>, and Theorem <ref> in Section <ref>.
§.§ Acknowledgments
We are grateful to Dan Abramovich for teaching us about twisted stable maps and admissible G-covers, and to Jonas Bergström for explaining his work <cit.> and sharing his data on the weight zero compactly supported Euler characteristic of _g, n. Jonas Bergström, Dan Petersen, and Dhruv Ranganathan provided extremely valuable comments on a draft of this paper; we thank them very much. MB is supported by the National Science Foundation under Award No. 2001739. MC was supported by NSF CAREER DMS-1844768, a Sloan Foundation Fellowship and a Simons Foundation Fellowship. SK was supported by an NSF Graduate Research Fellowship. Any opinions, findings, and conclusions or recommendations expressed in this material are those of the authors and do not necessarily reflect the views of the National Science Foundation.
§ POINTED ADMISSIBLE G-COVERS AND THEIR MODULI
In this section we recall moduli spaces of pointed admissible G-covers, following <cit.>.
We determine the connected components of these spaces when g=0 and G is abelian (Proposition <ref>), and give a normal crossings compactification (Proposition <ref>). Later, in Section <ref>, we will determine the dual complex of this compactification. Ultimately, we will obtain a normal crossings compactification of ℋ_g,n and the corresponding dual complex as a special case
in Section <ref>.
§.§ Admissible G-covers
Let G be a finite group, and let g,n≥ 0 be integers such that 2g-2+n>0.
We recall the notion of an admissible G-cover of nodal curves of type (g,n) over an arbitrary base scheme T (<cit.>).
It is the data of an n-marked, stable genus g curve (C,p_1,…,p_n) over T, and a covering of nodal curves ϕ P→ C with an action of G on P leaving ϕ invariant, such that:
* ϕ is a principal G-bundle away from the nodes and markings of C,
* The analytic local equations for P→ C→ T at a point p∈ P over a node of C are
A[z,w]/(zw-t) → A[x,y]/(xy-t^r)→ A,
where t∈ A, x = z^r and y=w^r for some integer r>0.
* The analytic local equations for P→ C→ T at a point p∈ P over a marked point of C are
A[z]→ A[x] → A,
where x=z^s for some integer s>0.
* if x∈ P is a geometric node, then the action of the stabilizer G_x of x on the tangent spaces of the two analytic branches at x is balanced: the characters of these two one-dimensional representations of G_x are inverse to each other.
Admissible G-covers of type (g,n) form a Deligne-Mumford stack, denoted _g,n(G); this is a consequence of the identification of _g,n(G) with the space ^bal_g,n(G) of balanced twisted G-covers of type (g,n)
which is proven in <cit.> to be a Deligne-Mumford stack. We may write G-cover rather than admissible G-cover for short.
§.§ Admissible covers of smooth curves
Let _g,n^∘(G) denote the open substack of
G-covers in which the target curve (and hence also the source curve) is smooth. In this section, we will determine the connected components of _0,n^∘(G) (Proposition <ref>). We will use this result later when determining the connected components of the corresponding space of pointed admissible G-covers.
There is a forgetful map
π: _g,n^∘(G)→_g,n
sending a G-cover P→ (C,p_1,…,p_n) to the n-pointed curve (C,p_1,…,p_n). This morphism is étale. Working over , the fiber over
(C,p_1,…,p_n)
is identified with the set
Hom(π_1(C-{p_1,…,p_n},p_0), G)/G
where G acts by conjugation, and p_0 ∈ C-{p_1,…,p_n} is any choice of basepoint. An element of the set (<ref>) specifies a G-cover of the punctured curve C-{p_1,…,p_n}, which can be extended uniquely over the punctures. Then the data of the morphism π is equivalent to the data of the action of π_1 of the base on the fiber (<ref>) above.
We shall now consider this action in the case g=0, when the action may be understood via the classical Hurwitz theory of ^1. We denote by
ε^ni_n(G) := {(g_1,…,g_n)∈ G^n: g_1⋯ g_n = 1}
the set of Nielsen classes. We do not impose that g_1,…,g_n generate G; correspondingly, our source curves are not required to be connected. The group G acts by conjugation on ε^ni_n(G), and the elements of ε^ni_n(G)/G are called inner Nielsen classes. Recall the following relationship between the set (<ref>) and the set of inner Nielsen classes: choose loops ρ_1,…,ρ_n around p_1,…,p_n, respectively, based at p_0, such that ρ_1,…,ρ_n generate π_1(C-{p_1,…,p_n},p_0) subject only to the relation
ρ_1·⋯·ρ_n = 1.
Such a choice identifies the set (<ref>) with the inner Nielsen classes.
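As a purely illustrative sanity check, the short script below enumerates the Nielsen classes for a small cyclic group (written additively) and verifies that the conjugation action of an abelian G is trivial, so that Nielsen classes and inner Nielsen classes coincide in this case.

```python
from itertools import product

def nielsen_classes(m: int, n: int):
    """Tuples (g_1, ..., g_n) in (Z/m)^n whose entries sum to 0 (the product-one condition)."""
    return [g for g in product(range(m), repeat=n) if sum(g) % m == 0]

def conjugate(h: int, g: tuple, m: int) -> tuple:
    """h.(g_1,...,g_n) = (h g_1 h^{-1}, ..., h g_n h^{-1}); written additively this is the identity."""
    return tuple((h + gi - h) % m for gi in g)

m, n = 2, 6                          # e.g. G = Z/2 with six punctures
classes = nielsen_classes(m, n)
assert all(conjugate(h, g, m) == g for h in range(m) for g in classes)
print(f"{len(classes)} Nielsen classes for Z/{m}, n = {n}")   # 2^(n-1) = 32 of them
```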
Now the following diagram of pullback squares relates _0,n^∘(G) to Hurwitz spaces of G-covers.
_0,n^∘(G)  →  _^1,n^G  →  U^G_^1,n
     ↓              ↓              ↓
  _0,n      →   _n(^1)   →   UConf_n(^1)
The spaces above are defined as follows. The configuration spaces (ordered and unordered) of n points in ^1 are denoted _n(^1) and UConf_n(^1), respectively.
The space U^G_^1,n is the moduli space parametrizing sets S⊂^1 of n points, together with a ramified G-cover f P→^1 whose branch locus is contained in S. The space ^G_^1,n is the ordered version of this space, obtained by pullback. The map _0,n→_n(^1) fixes (p_1,p_2,p_3) to be (0,1,∞), for instance.
If G is abelian, then U^G_^1,n→UConf_n(^1), and hence also _0,n^∘(G)→_0,n, is a trivial bundle. As a variety, _0,n^∘(G) is isomorphic to _0,n×ε^ni_n(G).
For an arbitrary finite group G, the way in which U_^1,n^G is a covering space over UConf_n(^1) is classically understood, essentially going back to Hurwitz <cit.>, see <cit.>.
The following is a complete description. For an appropriate choice of basis, Hom(π_1(^1-S,p_0),G) is identified with ε^ni_n(G), and the spherical braid group π_1(UConf_n(^1)) has a presentation with generators γ_1,…,γ_n-1, acting on ε^ni_n(G) via
γ_i · (g_1,…,g_n) = (g_1,…,g_i-1, g_i g_i+1 g_i^-1, g_i, g_i+2,…, g_n).
In the case that G is abelian, the action described above induces a trivial action of the spherical braid group π_1(UConf_n(^1)) on ε_n^ni(G), proving the claim.
Stack-theoretically, we have
_0,n^∘(G) ≅_0,n× [ε^ni(G)/G]
if G is abelian,
where G acts trivially on ε^ni(G).
Under this identification, write
_0,n^∘(G;g_1,…,g_n)
for the connected component of _0,n^∘(G) corresponding to the Nielsen class (g_1,…,g_n); it is isomorphic to _0,n× BG.
§.§ Pointed admissible covers
We study spaces of pointed admissible covers and determine the connected components of these spaces in Proposition <ref>. This is an important calculation towards the computation of the connected boundary strata in Theorem <ref>, since the boundary strata of spaces of pointed admissible covers are quotients of products of smaller spaces of pointed admissible covers.
Let G be any group, not necessarily abelian. Let _g,n^G denote the space of n-marked pointed admissible G-covers
of genus g <cit.>. It is a moduli space for nodal admissible G-covers P→ (C,p_1,…,p_n), together with a choice of a lift p_i on P of each p_i.
The open substack _g,n^G is the moduli space of pointed admissible G-covers in which source and target are smooth. Summarizing, we have a Cartesian square
_g,n^G   ⊂   _g,n^G
  π↓             ↓π
^∘_g,n(G)  ⊂  _g,n(G)
which lays out the unfortunate lack of parallelism in the notation for these spaces. The notation comes from the literature, however.
The morphisms _g,n^G→^∘_g,n(G) and _g,n^G →_g,n(G) are étale.
For easy reference, we prove Proposition <ref> below. We note, however, that the argument appears as part of the proof in <cit.> of the fact that _g,n^G is a smooth Deligne-Mumford stack, flat, proper, and quasi-finite over _g,n.
We verify the second statement, which implies the first.
Recall the construction of _g,n^G, which we summarize following <cit.>. Let E→𝒞=[E/G] denote the universal source curve and stacky target curves, respectively, over _g,n^G, and let C denote the coarse space of . For i=1,…,n, let _i → denote the closed substack of ^sm whose image in C is the universal i^th marked point; _i is an étale gerbe over _g,n^G. Let E_i = E×__i. We have the following diagram, whose
top square is Cartesian and where the morphisms known to be étale are labeled:
E_i  →  E
↓ét      ↓ét
_i  →   = [E/G]  →  C  →  _g,n^G
(the arrow _i →_g,n^G is also étale)
The morphism E_i→_g,n^G is étale since it is a composition of E_i→_i, which is a pullback of an étale morphism and hence étale, and the étale gerbe _i→_g,n^G.
Therefore
_g,n^G = E_1×__g,n^G⋯×__g,n^G E_n
is also étale over _g,n^G.
The spaces _g,n^G and ^∘_g,n(G)
need not be connected, as observed in Remark <ref>. Given g_1,…,g_n∈ G, write
_g,n^G(g_1,…,g_n)
for the open and closed substack of _g,n^G in which the monodromy at the marking p_i in the source curve is g_i.
Let G be an abelian group.
Suppose g_1⋯ g_n = 1, so that _0,n^G(g_1,…,g_n) is nonempty. The connected components of _0,n^G(g_1,…,g_n) are in bijection with orbits of functions
{1,…,n}→ G/⟨ g_1,…, g_n⟩
under left G-translation.
The restriction of the map π: _0,n^G → ^∘_0,n(G) to ^G_0,n(g_1,…,g_n) becomes a surjection
^G_0,n(g_1,…,g_n) → ^∘_0,n(G;g_1,…,g_n) ≅ _0,n× BG,
where the last isomorphism was established in Proposition <ref>. This morphism is étale by Proposition <ref>.
Now let P→ (C, p_1,…,p_n) be any unpointed admissible cover; the fiber of π over it is the action groupoid on all lifts p_1,…, p_n of p_1,…,p_n respectively, with the group G acting by simultaneous translation of the p_i. The connected components of _0,n^G(g_1,…,g_n) are in bijection with the orbits of this category under the further action of the pure mapping class group Mod_0,n. Those orbits are in bijection with orbits of functions {1,…,n}→π_0(P) under left G-translation;
and π_0(P)≅ G/⟨ g_1,…,g_n⟩.
It will be convenient to work with pointed curves labelled by arbitrary finite sets. Thus let G be a finite group, S a finite set, and ρ S→ G any function. For g≥ 0 with 2g-2+|S|>0, let
^G_g,S(ρ)
denote the space of pointed admissible G-covers of genus g curves with specified monodromy ρ. Let ^G_g,S(ρ) denote the open subset parametrizing admissible G-covers in which the target curve is smooth.
The space
^G_g,S=∐_ρ^G_g,S(ρ) is a normal crossings compactification of ^G_g,S=∐_ρ^G_g,S(ρ).
This follows from the fact that ^∘_g,n(G) ⊂_g,n(G) is a normal crossings compactification, by the proof of <cit.>, and ^G_g,S is étale over _g,S(G) (Proposition <ref>).
§ BOUNDARY COMPLEXES OF POINTED ADMISSIBLE G-COVERS
In this section we write down the boundary complex for the normal crossings compactification
^G_0,S(ρ)⊂^G_0,S(ρ)
when G is abelian (Theorem <ref>). This will be used in Section 4, to provide a normal crossings compactification of ℋ_g,n and obtain its boundary complex.
The boundary complex
is governed by graph-theoretic admissible covers of graphs, which we develop below in <ref>.
The basic notion of an admissible cover in tropical geometry was established in <cit.> and <cit.>. More recently and closely related to our approach, for an arbitrary finite group G, the notion of a graph G-cover associated to an admissible G-cover was developed by Galeotti <cit.>—see especially <cit.>—for the purpose of studying the birational geometry, and singularities, of (coarse spaces of) moduli spaces of genus g curves with a principal G-bundle. Our definition is essentially a streamlined version of Galeotti's earlier definition, in the case that both apply. By putting into place our restrictions on g and G, we are able to give a completely explicit description of the boundary complex of (<ref>). See Remarks <ref> and <ref> for further comments on the general case and for further discussion of the surrounding literature.
§.§ Categories of covers of graphs
Throughout Section <ref>, let G be a finite abelian group. The boundary strata of the compactification
_0, S^G(ρ) ↪_0, S^G(ρ)
are in correspondence with graph-theoretic admissible G-covers, which we will now define.
A graph C = (V,H,i_C,r_C) is the data of two finite sets of vertices V=V(C), and half-edges H=H(C), together with maps
i_C H→ H, r_C H→ V
such that i_C is an involution. We abbreviate i= i_C and r = r_C. We permit i to have fixed points, and let L = L(C) denote the set of fixed elements of i, called legs. View r_C as the map taking a half-edge to its incident vertex.
The edge set E = E(C) is the set of pairs {h, i(h)} for i(h) ≠ h; view i_C as the “other half” map on the half-edges.
A morphism of graphs f C→ C' is
given by set maps f_V V→ V' and f_H H→ H' such that the relevant squares commute:
f_H ∘ i_C = i_C' ∘ f_H and f_V ∘ r_C = r_C' ∘ f_H.
For a finite set S, an S-marking of C is an injection m = m_C S→ L(C). It will be convenient not to require that m is a bijection. A morphism of S-marked graphs (C,m_C) → (C', m_C') is a morphism of graphs f C→ C' that preserves the S-marking, i.e., f_H ∘ m_C = m_C'.
Let G be a finite abelian group, S a finite set.
An S-marked, admissible G-cover of graphs in genus 0 is
* A morphism f P→ C of S-marked graphs, such that C is a stable S-marked tree: for each vertex v ∈ V(C), we have |r_C^-1(v)| ≥ 3. We require that the legs of C are in bijection with S.
* A left action Φ G× P→ P leaving P→ C invariant, such that P→ P/G is canonically isomorphic to P→ C.
* A “monodromy marking” μ H(C) → G. Thus every half edge (including legs) of C is assigned an element of G. We require μ(i(h)) = μ(h)^-1 for every h ∈ H(C).
* A function g V(P) →_≥ 0; we call g(v) the weight or genus of v.
The above data must satisfy:
* For every v∈ V(C), f^-1(v) ≅ G/⟨μ(h): h∈ r^-1(v)⟩ as left G-sets, and
∏_h∈ r^-1(v)μ(h) = 1.
* For every h ∈ H(C), f^-1(h) ≅ G/⟨μ(h)⟩ as left G-sets.
* (local Riemann-Hurwitz) For all v∈ V(P), writing w = f(v) and n_w = r_C^-1(w), the genus g(v) of v is given by
2 - 2g(v) = |⟨μ(n_w) ⟩| (2 - ∑_h ∈ n_w (|⟨μ(h) ⟩| - 1)/|⟨μ(h) ⟩|).
We will use the boldface notation 𝐏→𝐂 to indicate a graph-theoretic admissible G-cover, with the understanding that this includes all of the data above. When we need to refer to the marking functions, we will write m_P for the marking of P and m_C for the marking of C.
It is clear from condition <ref> that the genus function g is determined by the monodromy marking μ as well as the morphism P → C. Moreover, since C is a tree, the data of C and μ, without the S-marking, actually determine P and Φ up to isomorphism. On the other hand, the S-marking on P is not in general determined by the S-marking on C.
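Since g is determined by μ and the cover, condition (3) can be checked, and g(v) computed, mechanically. The sketch below does this for a cyclic group G = /N, a restriction made only to keep the code short; the function names and input format are ours, not the paper's.

```python
from math import gcd

def local_genus(mu_at_w, N):
    """Genus g(v) of a vertex of P over w, from the markings mu(h), h in n_w, given in Z/N."""
    d = N
    for a in mu_at_w:
        d = gcd(d, a % N)
    stab_order = N // d                               # |<mu(n_w)>| for cyclic G = Z/N
    ram = sum((N // gcd(a, N) - 1) / (N // gcd(a, N)) for a in mu_at_w)
    two_minus_2g = stab_order * (2 - ram)             # the quantity 2 - 2 g(v)
    return round((2 - two_minus_2g) / 2)

# Hyperelliptic-type check: N = 2 and six half-edges with mu = 1 give
# 2 - 2g = 2 (2 - 6/2) = -2, i.e. g = 2: a double cover of P^1 branched at 6 points.
print(local_genus([1, 1, 1, 1, 1, 1], 2))   # -> 2
```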
If → is an S-marked admissible G-cover of nodal curves, with a stable S-marked curve of genus 0, then we obtain a corresponding S-marked admissible G-cover of dual graphs 𝐏→𝐂. The meaning of condition <ref> is that the stabilizer of an irreducible component of above a given irreducible component _v of is exactly the subgroup of G generated by the monodromy elements around the special points (nodes and marked points) on _v. The content here is that since _v is rational, π_1(_v) is generated by keyhole loops around the special points. Similarly, the data of a homomorphism π_1(_v)→ G, for appropriately chosen keyhole loops, is the data of an ordered tuple of elements of G whose product is the identity. Condition <ref> is similar.
Let 𝐏→𝐂 and 𝐏' →𝐂' be graph-theoretic S-pointed admissible G-covers.
* An isomorphism (𝐏→𝐂) → (𝐏' →𝐂') is the data of graph isomorphisms ϕ P → P' and ψ C → C', compatible with the marking functions m_P and m_C, as well as the monodromy marking μ, which fit into a commutative square.
* Let e ∈ E(C) be an edge. The edge-contraction of 𝐏→𝐂, denoted (𝐏→𝐂)/e, is obtained by contracting the edge e in C, together with its preimages in P. The new monodromy marking is obtained by restricting the previous one.
We write Γ_0, S^G for the category of all graph-theoretic S-pointed admissible G-covers, where morphisms are given by compositions of isomorphisms and edge-contractions. Given a function ρ S → G, we put Γ_0, S^G(ρ) for the full subcategory of Γ_0, S^G on those graph-theoretic S-pointed admissible G-covers 𝐏→𝐂 such that the monodromy marking on 𝐂 extends ρ. Precisely, ρ = μ|_L(C)∘ m_C where m_C S→ L(C) is the S-marking on C.
§.§ The dual complex of the boundary
We now state Theorem <ref> on the boundary complex of the space of pointed admissible covers.
Recall the category of symmetric Δ-complexes (see <cit.>), i.e., the category Fun(𝖥𝖨^op, 𝖲𝖾𝗍), where 𝖥𝖨 is the category of finite sets with injections.
For q≥ -1 an integer, we henceforth write
[q] = {0,…,q}.
This notational convention includes the special case [-1] = ∅. Given X𝖥𝖨^op→𝖲𝖾𝗍 and an integer q≥ -1, write
X_q = X([q])
for the set of q-simplices of X.
Fix g=0 and G abelian. For data G,S, and ρ as above, we define a symmetric Δ-complex
Δ_0,S^G(ρ)𝖥𝖨^op→𝖲𝖾𝗍
as follows.
For each q≥ -1, the set Δ_0,S^G(ρ)_q is the set of isomorphism classes of pairs (𝐏→𝐂, ω), where
* 𝐏→𝐂 is an object of Γ_0, S^G(ρ)
* ω [q]→ E(C)
is a bijection, called an edge-labelling.
For morphisms, given i [q']↪ [q], and given a graph-theoretic admissible cover 𝐏→𝐂 as above, contract the edges E(C) - ω(i([q'])), to obtain a new object of Γ_0, S^G(ρ), and take the unique edge-labelling by [q'] which preserves the order of the remaining edges.
Let G be an abelian group, S a finite set. There is an isomorphism of symmetric Δ-complexes
Δ_0,S^G(ρ)≅Δ(^G_0,S(ρ)⊂^G_0,S(ρ)).
Let us start with the stratification of the boundary of _0,S(G;ρ). The space _0,S(G;ρ) is nonempty if and only if ∏_s∈ Sρ(s) = 1_G. The boundary complex of ^∘_0,S(G;ρ)⊂_0,S(G;ρ) is the complex of trees C with a bijective S-marking m S→ L(C), together with a monodromy marking μ H(C)→ G extending ρ, which must satisfy, for every vertex v∈ V(C) and e = {h_1,h_2}∈ E(C),
∏_h∈ r^-1(v)μ(h) = 1, μ(h_1)μ(h_2)=1.
More formally, as a symmetric Δ-complex, the boundary complex has a q-simplex for every such datum (C, m, μ) together with an arbitrary bijective edge-labelling ω [q] → E(C), one for each isomorphism class of (C,m,μ, ω).
Now since _0,S(G;ρ) is étale over _0,S(G;ρ), there is a morphism of boundary complexes from that of the former to that of the latter. We now study the fibers of this morphism. Suppose
(C, m S→ L(C), μ H(C)→ G)
is a stable S-marked tree with monodromy marking μ.
For v∈ C, write n_v = r^-1(v) for the set of half-edges (including legs) at v, and write
G_v = ⟨μ(h) h∈ n_v⟩.
Let μ_v be the restriction of μ to n_v. Then the data of the stable S-marked tree
indexes a boundary stratum of _0,S(G;ρ). Note that this stratum is indeed connected, since it is, up to finite quotient, isomorphic to a product ∏_v∈ V(C)_0,n_v(G;μ_v) of varieties that are themselves connected (<ref>).
The preimage in _0,S(G;ρ) of this stratum is isomorphic to the variety
∏_v∈ V(C)_0,n_v^G(μ_v) / G^E(C)
e.g., by <cit.>. Let us explain the action of G^E(C) in (<ref>). For a given edge e = {h,h'}, incident to vertices v and v', the copy of G indexed by e acts by translating the lifted marked point indexed by h, respectively h', in the moduli space _0,n_v^G(μ_v), respectively _0,n_v'^G(μ_v'). (In general, G would also change the values of the marking functions μ_v(h) and μ_v'(h'), respectively, by conjugation, but G is abelian here.)
The variety (<ref>) may not be connected, and it remains to describe
its connected components.
For each v∈ V(C), let
X_v = {Fun(n_v, G/G_v)}/G
where the quotient is with respect to the G-action on G/G_v. From Proposition <ref>, the connected components of (<ref>) are in bijection with
(∏_v∈ V(C) X_v )/ G^E(C).
The last step is a combinatorial identification of (<ref>) with the set of isomorphism classes of graph-theoretic S-pointed admissible G-covers. Let us begin by considering local data at a single vertex v∈ V(C). Consider an element f_v∈ X_v, together with the data of μ|_n_v: n_v → G. From f_v and μ|_n_v we can extract a graph-theoretic n_v-pointed admissible cover involving graphs with legs but no edges: C_v is a single vertex, with legs n_v; V(P_v) = G/G_v as a left G-set; and above each leg h∈ n_v of C is a set of legs in P_v isomorphic to G/⟨μ(h)⟩, with root map compatible with the map G/⟨μ(h)⟩→ G/G_v. Finally, P_v has S-marking given by f_v.
Continue to fix a stable S-marked tree C and monodromy marking μ on C. Now, given (f_v)_v ∈∏ X_v, we assemble the local picture above into an admissible cover of graphs. For every edge e = {h,h'} of C, with root vertices v=r(h) and v'=r(h'), the half-edges of P_v above h and the half-edges of P_v' above h' are each isomorphic to G/⟨μ(h) ⟩ = G/⟨μ(h') ⟩ as G-sets. There is a unique G-equivariant bijection between these two sets that sends the chosen lift of h to the chosen lift of h', and another choice of lifts of h and h' produces the same bijection if they are related to the original choices by the same element of G. Therefore these identifications glue the half-edges above h and h' into edges above e, obtaining a graph-theoretic admissible cover P→ C which is independent of the action of G^E(C). It is straightforward to reverse this process, giving an element of the set (<ref>) starting from a graph-theoretic admissible cover.
Theorem <ref> furnishes an explicit description of the symmetric Δ-complex
Δ(_g,n^G ⊂_g,n^G)
when g=0 and G is abelian. It is sufficiently explicit that it can be programmed, and indeed we carry out computer calculations for the results in Appendix <ref>. Without restrictions on G and g, it is still possible to give a general description of (<ref>) using the framework of graphs of groups, roughly, decorating vertices of graphs with fundamental groups of punctured curves. This idea will appear in future work by M. Talpo, M. Ulirsch, and D. Zakharov; we thank Ulirsch for bringing it to our attention. This general description is not explicit in the above sense. It involves the very interesting sub-question of determining the connected components of the spaces _g,n^G in general; compare with Proposition <ref>. We also refer to forthcoming work of P. Souza, that constructs (<ref>) in the case of G cyclic with g arbitrary, and identifies it as the nonarchimedean skeleton of the toroidal pair. Moreover, that work is a precursor to further work by Y. El Maazouz, P. Helminck, F. Röhrle, P. Souza, and C. Yun studying the homotopy type of boundary complexes of unramified /p covers for g=2.
The graph-theoretic admissible G-covers in this paper (Definition <ref>) are exactly what is needed for a precise description of the boundary complex (Theorem <ref>). Thus they are reasonably expected to be similar to, but distinct from, the spaces of covers of tropical curves appearing in <cit.>, in <cit.>, and the references therein.
The work
<cit.> on tropicalizations of the space of admissible covers is
an important comparison point for this paper. Rather than G-covers, they study the admissible covers compactification of the Hurwitz space of degree d covers of smooth curves with fixed target genus h and fixed ramification profiles (and hence fixed source genus g) over n marked branch points in the target. All of the inverse images of the branch points are also marked. This moduli space is
canonically isomorphic to a cover of a component of the space _h,n(S_d).
In <cit.> the boundary complex, which may be identified with the link of the skeleton of the Berkovich analytification <cit.>, is compared, but not identified, with a certain space of tropical admissible covers, via a surjective morphism of generalized cone complexes from the former to the latter. The failure of this surjection to be an isomorphism is due to multiplicities fully accounted for in <cit.>, and is related to Remark <ref> above.
§ COMPACTIFICATIONS OF _G,N
Let g≥ 2 and n≥ 0. Throughout this section we will fix
S = {1, …, n}∪{w_1, …, w_2g + 2},
and fix G = /2 = {0,1}. We also define ρ S →/2 by ρ(i) = 0 for all i ∈{1, …, n}, and ρ(w_k) = 1 for k ∈{1, …, 2g+2}. We will discuss how the stack quotient
[^/2_0,S(ρ)/S_2g+2]
provides a normal crossings compactification of _g, n, and give an explicit description of the dual complex Θ_g, n of this compactification. The description will be in terms of the dual complexes studied in the previous section. We first consider the case of labelled Weierstrass points, and then quotient out by S_2g+2.
§.§ The complex Θ_g, n
First, let _g,n denote the moduli stack of hyperelliptic curves of genus g with n distinct marked points
and 2g+2 labelled Weierstrass points. The symmetric group on 2g+2 letters permutes the labels on Weierstrass points, and
_g,n≅ [_g,n/S_2g+2].
In this subsection, we will provide a normal crossings compactification of _g,n and give the corresponding dual complex. Then, we will quotient out by S_2g+2 to give a normal crossings compactification of _g,n.
In _g,n, a marked point is allowed to coincide with a Weierstrass point, and two marked points are allowed to form a conjugate pair under the hyperelliptic involution. Because of this,
two types of graphs will require special attention.
We call the following graph-theoretic admissible covers type (1) and type (2) respectively:
* For distinct i,j∈{1,…,n}, the admissible cover of graphs in Figure <ref> on the left.
* For each i∈{1,…,n} and w_k ∈{w_1, …, w_2g + 2}, the admissible cover of graphs in Figure <ref> on the right.
There is an open inclusion
_g,n↪^/2_0,S(ρ)
which is a normal crossings compactification, and whose boundary complex Θ_g, n is isomorphic to the subcomplex of
Δ_0,S^/2(ρ)
on simplices whose vertices are not of type (1) or (2) in Definition <ref>.
Let ^∘_g,n denote the open substack of _g,n in which a marked point may not collide with a Weierstrass point, and two marked points may not form a conjugate pair. Then
^∘_g,n≅_0,S^/2(ρ),
where _0,S^/2(ρ) denotes the interior of the moduli space _0,S^/2(ρ) of pointed admissible covers.
We define a partial compactification _g, n^* of _g, n^∘, such that
_g, n^∘⊂_g, n^* ⊂^/2_0,S(ρ),
and the second inclusion is normal crossings.
In ^/2_0,S(ρ), define _g,n^* to be the open complement of all boundary divisors except for those corresponding to dual graphs of type (1) or (2) (see Definition <ref>). Since _g, n^* is the complement of a subset of the boundary divisors, the divisor
_0, S^/2(ρ) ∖_g, n^*
still has normal crossings. Stabilization gives a canonical isomorphism _g,n^* ≅_g,n which is equivariant with respect to the action of S_n, thus giving the first part of the result.
We now turn our attention to the boundary complex.
Denote by Δ_0,S^/2(ρ) the dual complex of the compactification
^∘_g,n≅_0,S^/2(ρ)⊂_0,S^/2(ρ).
The target graphs of type (1) and (2) in Definition <ref> have one edge, and correspond to vertices in Δ_0,S^/2(ρ).
Then, the boundary complex Θ_g, n of the inclusion
_g,n⊂^/2_0,S(ρ)
is the subcomplex of
Δ_0,S^/2(ρ)
determined by those simplices which have no vertices of type (1) or (2) in Definition <ref>.
Let us now describe the complex Δ_0,S^/2(ρ) in more detail.
Its q-simplices are given by isomorphism classes of pairs (𝐏→𝐂, ω), where 𝐏→𝐂 is an object of the category Γ_0, S^/2(ρ) (Definition <ref>), and ω [q]→ E(C) is an edge-labelling. Moreover, on L(C), the monodromy marking μ satisfies μ(m_C(j)) = 0 if j ∈{1, …, n}, and μ(m_C(j)) = 1 if j ∈{w_1, …, w_2g + 2}. We will call the elements of
m_C({w_1, …, w_2g + 2}) ⊂ L(C)
the branch legs of C.
Notice that the above conditions on μ|_L(C) suffice to determine μ on all other half-edges of C, by condition (1) of Definition <ref>. Call a vertex v ∈ V(C) a leaf vertex if it is incident to only one edge. If a leaf vertex v ∈ V(C) supports an odd number of branch legs, then the non-leg half edge h incident to v must satisfy μ(h) = 1. On the other hand, if a leaf vertex v supports an even number of branch legs, then the non-leg half edge h incident to v must satisfy μ(h) = 0. Proceeding inductively, this determines μ on all half-edges incident to non-leaf vertices of C as well.
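The inductive determination of μ just described is easy to mechanize. The following is a minimal Python sketch (using networkx; it is not the code behind the computations in Appendix <ref>, and the function name is ours). It uses the equivalent observation that cutting an edge e of the tree C separates C into two components, and μ(e) is the parity of the number of branch legs supported on either component; the two parities agree because the total number of branch legs, 2g+2, is even.

# Sketch: Z/2 monodromy on the edges of a stable tree, given mu = 1 on the
# 2g+2 branch legs and mu = 0 on the n marked legs.
import networkx as nx

def edge_monodromies(tree, branch_legs):
    """tree: a networkx.Graph which is a tree;
    branch_legs: dict mapping a vertex to the number of branch legs at it."""
    mu = {}
    for e in tree.edges():
        cut = tree.copy()
        cut.remove_edge(*e)
        side = nx.node_connected_component(cut, e[0])
        mu[e] = sum(branch_legs.get(v, 0) for v in side) % 2
    return mu

# Example: a path on three vertices carrying 1, 2 and 3 branch legs (so g = 2).
T = nx.path_graph(3)
print(edge_monodromies(T, {0: 1, 1: 2, 2: 3}))   # {(0, 1): 1, (1, 2): 1}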
This discussion implies that given the monodromy data ρ and an S-marked stable tree C, the only additional data required to determine an object of the category Γ_0, S^/2(ρ) is
a lift of the marking function
m_C S→ L(C) to a function m_P S→ L(P)
such that the diagram
@R=6mm@C=12mm L(P)[d]
S [ur]^m_P[r]_m_C L(C)
commutes.
(Note that the morphism of graphs P → C, without the marking function on P, is already determined by C and μ.) Moreover, since each branch leg in C has a unique preimage in P, one only needs to choose, for each i ∈{1, …, n}, a leg in the preimage of m(i) ∈ L(C). Two such choices are equivalent if they differ by the /2-action on P. See Figure <ref> for an example.
§.§ The complex Θ_g, n
We now construct a normal crossings compactification of _g,n and the corresponding dual complex Θ_g, n.
By Proposition <ref>, in order to pass from Δ_0, S^/2(ρ) to Θ_g, n, we remove all edge-labelled pairs (𝐏→𝐂, ω) such that 𝐏→𝐂 admits a contraction to covers of type (1) or (2) in Definition <ref>. To that end, let
Γ_0, S^/2, *(ρ)
be the full subcategory of Γ_0, S^/2(ρ) on those covers which do not admit a contraction to covers of type (1) or (2).
We define the category Γ^_g, n as follows.
* The objects are S_2g + 2-orbits of objects of Γ_0, S^/2, *(ρ). Precisely, the objects are covers 𝐏→𝐂, where
* 𝐂 = (C, m_C) is the data of a stable tree C with 2g + 2 + n legs, together with an injective function m_C{1, …, n}→ L(C) such that there are no vertices v ∈ V(C) such that
|r^-1(v)| = 3, |L(C) ∩ r^-1(v)| = 2, and |m_C^-1(v)| = 1.
* 𝐏 = (P, m_P), where P is the unique graph-theoretic admissible /2-cover of C obtained by declaring each unmarked leg to have monodromy 1 ∈/2 and each marked leg to have monodromy 0, and m_P {1, …, n}→ L(P) is a marking of L(P) such that m_P(i) is a leg in the inverse image of m_C(i) for all i.
* The morphisms are compositions of isomorphisms and edge-contractions.
The inclusion _g,n⊂ [^/2_0,S(ρ)/S_2g+2] is a normal crossings compactification, and the boundary complex Θ_g,n
has the following explicit description.
* The set of q-simplices (Θ_g, n)_q is the set of isomorphism classes of pairs (𝐏→𝐂, ω) where 𝐏→𝐂 is an object of Γ_g, n^, and ω [q] → E(C) is an edge-labelling.
* Given an injection ι [q'] ↪ [q], we define ι^*(𝐏→𝐂, ω) ∈(Θ_g, n)_q' by contracting those edges which are not in the image of ι, and taking the unique induced edge-labelling which preserves the order of the remaining edges.
Since the action of S_2g+2 on _g,n⊂^/2_0,S(ρ) preserves _g,n, we have that
_g,n≅ [_g,n / S_2g+2] ⊂ [^/2_0,S(ρ) / S_2g+2]
is a normal crossings compactification with boundary complex equal to
Δ(_g,n⊂^/2_0,S(ρ)) / S_2g+2 = Θ̃_g, n/S_2g + 2,
and the described symmetric Δ-complex is precisely the quotient of Θ̃_g, n by S_2g + 2.
As a direct result of Proposition <ref>, we have the following corollary identifying the weight zero compactly supported cohomology of _g, n with the reduced cohomology of Θ_g, n: see <cit.>.
For each i, there are canonical S_n-equivariant isomorphisms
W_0 H^i_c(_g, n;) ≅H̃^i - 1(Θ_g, n;) ≅H̃_i - 1(Θ_g, n;)^∨,
where H̃^* and H̃_* denote reduced cohomology and homology, respectively.
We now establish some conventions for working with objects of the category Γ^_g, n. Given an object 𝐏→𝐂 of Γ^_g, n, we define the weight of a vertex v ∈ V(𝐂) to be the number of unmarked legs based at v.
The weight in this sense should not be confused with the notion of vertex weights corresponding to genera of irreducible curves. As a sanity check: the total weight of the vertices of C is 2g+2.
When depicting objects of Γ_g, n^, we adopt the following conventions. Instead of drawing the unmarked legs of 𝐂, we will label each vertex of 𝐂 with its weight. To avoid confusion with the genera of vertices in the source graph, we will depict the weight of a vertex in grey, and the genera of vertices in blue. Since each unmarked leg of C has a unique preimage in P, we will not draw those legs of P. When a leg of C has two preimages in P, only one of which is marked, we will suppress the unmarked one. See
Figure <ref> for the images of the Γ_0, S^/2, *(ρ) objects from Figure <ref> under the functor to Γ^_g, n. See Figure <ref> for a complete list of isomorphism classes of Γ_g, n^-objects when g = 2 and n = 0.
We remark on the case n=0. In this case, the symmetric Δ-complex Θ_g,0 is isomorphic to the quotient of the dual complex
Δ_0, 2g+2:= Δ(_0, 2g+ 2⊂_0, 2g + 2)
by the S_2g + 2-action permuting the marked points. The dual complex (<ref>) is the moduli space of (2g+2)-marked tropical curves of genus zero and volume one <cit.>, also known as the space of phylogenetic trees <cit.>. The identification
Θ_g,0 = Δ_0,2g + 2 /S_2g + 2
can be seen directly from our description of the category Γ_g^, and holds despite the fact that the morphism
[_0, 2g+2^/2(ρ)/S_2g + 2] →
[_0, 2g + 2/S_2g + 2]
is not an isomorphism or even a /2-gerbe, due to the possible presence of extra automorphisms, more than /2, in the source curves of /2-admissible covers.
§ ACYCLIC SUBCOMPLEXES OF Θ_G, N
In this section we will study the cellular chain complex of Θ_g, n, establishing Theorem <ref> below, which states that several natural subcomplexes are acyclic. This will allow us to prove Proposition <ref> later in this section. The acyclicity results will be used in Section <ref> to obtain Theorem <ref>.
Fix g ≥ 2 and n ≥ 0. Then the following subcomplexes of Θ_g, n have vanishing reduced rational homology:
* the repeated marking locus Θ_g, n^rep, namely the subcomplex determined by those Γ_g,n^-objects 𝐏→𝐂 such that there exists v ∈ V(𝐏) supporting at least two markings from {1, …, n};
* the weight 3 locus Θ_g, n^≥ 3, determined by those Γ_g,n^-objects 𝐏→𝐂 such that 𝐂 has a vertex of weight at least 3 (Definition <ref>); and
* the intersection Θ_g, n^rep∩Θ_g, n^≥ 3.
There are stronger statements that are also true, namely that the three subspaces of the space Θ_g,n corresponding to (1), (2), and (3) are in fact contractible. It is possible to convert the proofs below, of vanishing reduced rational homology, to proofs of contractibility, using the vertex property technique of <cit.>.
§.§ The cellular chain complex of Θ_g, n
Following <cit.>, the reduced rational homology of Θ_g, n is computed by the graph complex 𝒞_*^(g, n) described as follows. In degree p, 𝒞_p^(g, n) is spanned by pairs (𝐏→𝐂, ω) where 𝐏→𝐂 is an object of Γ^ℋ_g, n, and ω [p] → E(𝐂) is a bijective edge-labelling. These pairs are subject to the relation
(𝐏→𝐂, ω) = sgn(ρ) (𝐏→𝐂, ω∘ρ) whenever ρ∈𝔖_p + 1 = ([p]).
The differential ∂𝒞_p^(g, n)→𝒞_p - 1^(g, n) is given by the signed sum of edge contractions:
∂ (𝐏→𝐂, ω) = ∑_i ∈ [p] (-1)^i (δ^i)^*(𝐏→𝐂, ω),
where δ^i [p - 1] → [p] is the unique order-preserving injection which misses i.
To prove Theorem <ref>, we will show that the corresponding sub-chain complexes of 𝒞^(g, n)_* are acyclic.
Denote by ℛ^(g, n)_* the sub-chain complex of 𝒞^(g, n)_* spanned by those pairs (𝐏→𝐂, ω) such that 𝐏 has a vertex v that has at least two markings from {1, …, n}; this is the chain complex which computes the reduced rational homology of Θ_g, n^rep. Denote by 𝒬^(g, n)_* the augmented cellular chain complex of Θ_g, n^≥ 3: this is the sub-chain complex of 𝒞^(g, n)_* spanned by those pairs (𝐏→𝐂, ω) where 𝐂 has at least one vertex v with weight at least 3.
We will show that
the chain complexes ℛ^(g, n)_* and 𝒬^(g, n)_* ∩ℛ^(g, n)_* are acyclic for all g ≥ 2 and all n ≥ 2 (Theorem <ref>),
that the chain complex 𝒬^(g, n)_* is acyclic for all g ≥ 2 and all n ≥ 0 (Theorem <ref>),
and that the chain complex 𝒞^(g, n)_* is acyclic for all g ≥ 2 and n ≤ 1 (Theorem <ref>).
Thus, Theorem <ref> and Theorem <ref> prove Theorem <ref>, and Theorem <ref> gives part (1) of Proposition <ref>.
The proofs of these theorems are informed by work of Chan–Galatius–Payne on contractibility criteria for symmetric Δ-complexes <cit.>, as well as work of Conant–Gerlits–Vogtmann <cit.> on the acyclicity of the subcomplex of Kontsevich's graph complex spanned by graphs with cut vertices.
§.§ The homology of Θ_g, n^rep
It will be useful to isolate specific types of edges of covers with repeated markings.
For a Γ^_g, n-object 𝐏→𝐂 with repeated markings, we say an edge e ∈ E(𝐂) is a supporting edge, with support equal to S ⊆ [n], if, upon contracting all edges of 𝐂 which are not equal to e, as well as their preimages in 𝐏, we obtain the cover 𝐁_S →𝐄_S depicted in Figure <ref>. If |S| = i, we will call e an i-supporting edge.
Given a Γ_g, n^-object 𝐏→𝐂, we define the supporting edge retraction of → to be the cover obtained by contracting all supporting edges in 𝐂 and their preimages in 𝐏.
For all g≥ 2 and n ≥ 2, the chain complexes ℛ^(g, n)_* and ℛ^(g, n)_* ∩𝒬^(g, n)_* are acyclic.
We will prove the theorem only for ^(g, n)_*, as the same argument works for ℛ^(g, n)_* ∩𝒬^(g, n)_*. For ease of notation, fix g, n ≥ 2 and put
_* := ^(g,n)_*.
First, filter _* as follows: let
^≥ i_* ↪_*
be the subcomplex generated by covers which have a k-supporting edge for some k ≥ i. More precisely, we mean that ^≥ i_* is spanned by covers obtained by edge-contraction from covers with supporting edges of this type. We apply this definition even when i=n+1, in which case ^≥n+1_* = 0. Then we have a filtration
0 = ^≥ n+1↪^≥ n_*
↪⋯↪^≥ 2_* = _*.
Passing to the associated spectral sequence, it suffices to show that for each i=2,…,n, the successive quotient chain complexes
^i_* := ^≥ i_* / ^≥ i + 1_*
are acyclic. These quotient chain complexes are spanned by covers with i-supporting edges and their edge-contractions, but do not include any covers with k-supporting edges or their edge contractions for any k > i. Now we filter ^i_*. Define
F_p^i_* ↪^i_*
to be the sub-chain complex spanned by graphs with at most p non-supporting edges. This is an ascending filtration
0 = F_-1^i_* ↪ F_0^i_* ↪⋯↪^i_*
and again by considering the associated spectral sequence, it suffices to show that successive quotients
G_p ^i_* := F_p ^i_* / F_p -1^i_*
are acyclic, in order to conclude that ^i_* and hence is acyclic. For fixed i and p, let A_i, p denote the set of isomorphism classes of Γ^_g, n-objects → where |E()| = p and which (1) do not have any supporting edges, (2) admit a contraction from a cover with an i-supporting edge, and (3) do not admit a contraction from any covers with k-supporting edges for k > i. Then we have a direct sum decomposition
G_p ^i_* = ⊕_→∈ A_i,p^→_*,
where ^→_* is the sub-chain complex consisting of those covers whose supporting edge retraction is equal to →. This direct sum decomposition holds because the differential on G_p ^i_* is given by a signed sum of supporting edge contractions, and hence preserves the supporting edge retraction of a given cover. Next, given →∈ A_i, p, we have a tensor product decomposition
_*^→≅(⊗_v ∈ V^rep_i() ())[1 - p],
where V^rep_i() denotes the set of vertices of which contain exactly i markings, and the first copy of is in degree 1. This tensor product decomposition holds because a generator of _*^→ is determined by a choice of subset of those vertices of which contain i markings: the corresponding generator is determined by expanding a single i-supporting edge from the image of each chosen vertex in . The degree shift is required to account for the p edges of →.
Altogether, this shows that ^→_* is a tensor product of acyclic chain complexes, so ^→_* is itself acyclic, and the proof is complete.
§.§ The homology of Θ_g, n^≥ 3
We will now show that the chain complex 𝒬^(g, n)_* is acyclic. It will again be convenient to name particular types of edges.
Suppose 𝐏→𝐂 is an object of Γ^ℋ_g, n, and that 𝐂 has a vertex of weight at least 3.
* We say e ∈ E(𝐂) is a 3-end if upon contracting all edges in 𝐂 except for e, and their preimages in 𝐏, we obtain the cover 𝐃→𝐅 in Figure <ref>.
* We say a cover 𝐏' →𝐂' is a 3-end expansion of 𝐏→𝐂 if 𝐏→𝐂 is obtained from 𝐏' →𝐂' by contracting a sequence of 3-ends.
It is straightforward to see that for any cover 𝐏→𝐂, the poset of 3-end expansions of → has a maximal element, as in the following lemma. We omit the
proof: see Figure <ref> for an example of how this expansion is constructed.
Let 𝐏→𝐂 be an object of Γ^_g, n. Then the poset of 3-end expansions of 𝐏→𝐂 has a unique maximal element 𝐏' →𝐂', and this expansion is canonical in the sense that any automorphism of 𝐏→𝐂 lifts to an automorphism of 𝐏' →𝐂'.
Given a Γ^_g, n-object 𝐏→𝐂, let A(𝐏→𝐂) be the set of isomorphism classes of covers obtained from 𝐏→𝐂 by contracting 3-ends. We define a chain complex 𝒬^𝐏→𝐂_* as follows: the vector space 𝒬^𝐏→𝐂_p is spanned by pairs (𝐇→𝐊, ω), where 𝐇→𝐊 is an element of A(→) with |E(𝐊)| = p + 1, and ω [p] → E(𝐊) is an edge-labelling. These generators are subject to the usual relation
(𝐇→𝐊, ω∘ρ ) = sgn(ρ) (𝐇→𝐊, ω)
for ρ∈([p]). The differential on 𝒬^𝐏→𝐂_* is given by the signed sum of 3-end contractions; we set it equal to 0 on any generators which do not have any 3-ends.
Suppose 𝐏→𝐂 has a 3-end and is maximal with respect to expanding 3-ends. Then 𝒬^𝐏→𝐂_* is acyclic.
First consider the case where 𝐂 has no automorphisms. This implies that all 3-end contractions of 𝐂 have no automorphisms, since an automorphism of a tree must lift to an automorphism of its maximal 3-end expansion. Let q + 1 be the number of distinct 3-ends of 𝐂. We can understand 𝒬^𝐏→𝐂_* as a shift of the augmented cellular chain complex of the standard q-simplex σ^q, viewed as the space parameterizing assignments of nonnegative lengths to the q + 1 distinct 3-ends of 𝐂, such that the lengths sum to one. So in the automorphism-free case, 𝒬^𝐏→𝐂_* is acyclic.
For the general case, when 𝐂 and its contractions may have automorphisms, fix a labelling of the edges of 𝐂, to get a decorated tree 𝐂^†. This induces a labelling of the edges of each contraction of 𝐂. Let A( 𝐂^†) be the set consisting of 𝐂^† and all of its contractions. We can make a chain complex 𝒬^𝐂, †_* which in degree p is spanned by pairs [𝐊, ω] where 𝐊 is an element of A(𝐂^†) with |E(𝐊)| = p + 1, and ω [p] → E(𝐊) is a bijection, subject to the usual relations under the action of ([p]). Observe that there is a canonical action of (𝐂) on the chain complex 𝒬^𝐂, †_*, and 𝒬^𝐏→𝐂_* is identified with the (𝐂)-coinvariants of the complex 𝒬^𝐂, †_*, by the second part of Lemma <ref>. Since (𝐂) is finite, it has no homology over the rationals. Moreover, 𝒬^𝐂, † is acyclic by the first part of the proof. We conclude that
H_*((𝒬^𝐂, †_*)_(𝐂)) = (H_*(𝒬^𝐂, †_*))_(𝐂) = 0,
as desired.
We now prove that 𝒬^(g, n)_* is acyclic.
For g ≥ 2 and n ≥ 0, the chain complex 𝒬^(g, n)_* is acyclic.
Let F_p 𝒬^(g, n)_* denote the subspace spanned by those covers whose target tree has at most p edges which are not 3-ends. This defines a bounded, increasing filtration of 𝒬^(g, n). The E^0 page
E^0_p, q = F_p 𝒬^(g, n)_p + q/ F_p - 1𝒬^(g, n)_p + q
of the associated spectral sequence is spanned by covers whose target tree has exactly p edges which are not 3-ends. The differential ∂_0 E^0_p, q→ E^0_p, q-1 is given by a signed sum of 3-end contractions. Therefore, by Lemma <ref>, the pth row of the E^0 page breaks up into a direct sum of chain complexes of the form 𝒬^𝐏→𝐂_*, where 𝐂 has at least one 3-end, and the tree obtained from 𝐂 by contracting all 3-ends has p edges. Proposition <ref> then implies that the E^1 page vanishes, which completes the proof.
§.§ Calculations on Θ_g, n for n ≤ 2
We conclude this section by proving Proposition <ref>. The first part of Proposition <ref> asserts that 𝒞^(g, n)_* is acyclic for n ≤ 1, and the proof is similar to the one that 𝒬^(g, n)_* is acyclic. Once again, we isolate particular types of edges:
Let 𝐏→𝐂 be a Γ_g, n^-object. An edge e ∈ E(𝐂) is called a 2-end if upon contracting all edges of 𝐂 except for e, and their preimages in 𝐏, we obtain the cover 𝐉→𝐊 in Figure <ref>.
The key to the proof of acyclicity of 𝒞^(g, n) when n ≤ 1 is the following lemma.
Let 𝐏→𝐂 be an object of Γ^_g, n for n ≤ 1. Then the poset of expansions of 𝐏→𝐂 by 2-ends has a unique maximal element 𝐏' →𝐂'. Moreover, this expansion is canonical in the sense that any automorphism of 𝐏→𝐂 lifts to one of 𝐏' →𝐂'.
It is clear how to construct the graph 𝐂': for every vertex of 𝐂 with weight d ≥ 2, one expands ⌊ d/2 ⌋ many 2-ends from v, leaving behind a vertex of weight d - 2⌊ d/2 ⌋. This uniquely determines a cover P', but does not determine the marking function on P'. If n = 0, then there is no marking function, so 𝐏' is determined. For n = 1, the only ambiguity arises when v supports the unique marking, and the preimage of v in 𝐂' has 2 preimages in the graph P', so one has to make a choice as to which fiber to mark. However, since n = 1, both choices are equivalent, as they differ by the /2-action on P'. Therefore 𝐏' is also determined when n = 1. The statement on lifting of automorphisms is straightforward to check. The lemma fails when n > 1, because in general there is no canonical way of distributing the markings supported at v among the fibers over v in P'. See Figure <ref> for an example.
Given Lemma <ref>, the proof of the following theorem is completely analogous to the proof of Theorem <ref>; we will only outline the necessary steps.
For g≥ 2 and n ≤ 1, the chain complex 𝒞^(g, n)_* is acyclic.
First, define B(𝐏→𝐂) to be the set of isomorphism classes of Γ_g, n^-objects obtained from 𝐏→𝐂 by contracting 2-ends. Then, use this to define a chain complex ^𝐏→𝐂_* analogously to ^𝐏→𝐂_*, where the differential is given by a signed sum of 2-end contractions. The proof that ^𝐏→𝐂_* is acyclic, for 𝐏→𝐂 maximal with respect to expanding 2-ends, is exactly the same as the proof of Proposition <ref>. Finally, one proves the theorem by filtering 𝒞^(g, n): set F_p 𝒞^(g, n)_* to be the subspace of 𝒞^(g, n)_* spanned by those covers with at most p edges which are not 2-ends. Then the pth column of the E^0 page of the associated spectral sequence breaks up into a direct sum of complexes of the form ^𝐏→𝐂_* by Lemma <ref>, so the E^1 page vanishes, and the result follows.
Theorem <ref> gives part (1) of Proposition <ref>. Part (2) states that
W_0 H^2g + 1_c(_g, 2;ℚ) ≅ℚ,
and that the corresponding S_2-representation is trivial if g is even, and given by the sign representation if g is odd. We prove this now by writing down an explicit cycle in ^(g, 2)_2g corresponding to this class. See Figure <ref>.
We have an isomorphism of S_2-representations
W_0 H^2g + 1_c(_g, 2;) ≅H̃_2g(Θ_g, 2;)^∨
by Corollary <ref>. We have
H̃_2g(Θ_g, 2;) = H_2g(^(g, 2)_*).
Observe that 2g is the top homological degree of ^(g, 2): the maximal number of edges of a stable tree with 2g + 4 legs is 2g + 1. Therefore, any cycle in ^(g, 2)_2g defines a class in homology. Any target tree for a cover in ^(g, 2)_2g must be trivalent, and to be a nonzero element, it cannot have any automorphisms which act by an odd permutation of the edge set. It is straightforward to conclude that such a tree must be equal to the tree depicted in Figure <ref>. This tree has two covers, depicted in Figure <ref>. Therefore ^(g, 2)_2g is two-dimensional, where a basis is given by choosing any edge-labelling of the aforementioned tree. One can verify directly that neither one of these basis elements forms a cycle on its own, but their difference does. From this we conclude that H_2g(^(g, 2)_*) ≅ℚ. To understand the S_2-representation, we note that when g is even, the transposition in S_2 preserves any edge-labelling of the given tree, and when g is odd, the transposition induces an odd permutation of the edge labels.
Theorem <ref> generalizes to other spaces of admissible covers. Fix an integer N > 0 and let G be an abelian group, which we now write additively to be consistent with our notation for G=/2. Let
μ{w_1, …, w_N}→ G
be a function such that the image of μ generates G, which additionally satisfies
∑_i = 1^Nμ(w_i) = 0,
where 0 ∈ G denotes the identity element. For any integer n ≥ 0, we can extend μ to a function
{1, …, n}∪{w_1, …, w_N}→ G
by setting the image of each i ∈{1, …, n} to be 0; for ease of notation, we will also call this extension μ. We set the notation
_0, n + N^G(μ):= _0, {1, …, n }∪{w_1, …, w_N }^G(μ)
and define _0, n + N^G(μ) similarly. We now define an intermediate locus
_0, n + N^G(μ) ⊂_0, n + N^G(μ) ⊂_0, n + N^G(μ)
in analogy with the space _g, n of n-marked hyperelliptic curves of genus g together with a labelling of their Weierstrass points, considered in <ref>. Given a graph-theoretic pointed admissible G-cover →∈Ob(Γ_0, n+N^G(μ)), where Γ_0, n+N^G(μ) is the category defined in Definition <ref>, we say that → is forbidden if all of the following conditions hold:
* |E()| = 1,
* If we erase all of the legs labelled by {1, …, n} from , the resulting {w_1, …, w_N}-marked tree is not stable in the sense of Definition <ref>, and
* the source graph has no vertices supporting repeated markings among {1, …, n}.
Each forbidden cover → corresponds uniquely to a boundary divisor of _0, n + N^G(μ), and we define _0, n + N^G(μ) to be the complement in _0, n + N^G(μ) of those boundary divisors which are not forbidden.
When G = /2 = {0, 1}, N = 2g + 2, and μ(w_i) = 1 for all i, the forbidden divisors are precisely those of type (1) and (2) in Definition <ref>, and we have
_0, n + 2g + 2^/2(μ) ≅_g, n.
For general G and μ, the space _0, n + N^G(μ) can be identified with the moduli space of smooth N-pointed admissible G-covers of ^1, with monodromy specified by μ, together with n distinct marked points on the source curve. This space admits an S_n-action given by permuting the n marked points on the source, and the isomorphism with _0, n + N^G(μ) is S_n-equivariant.
The dual complex Θ̃_0, n+N^G(μ) of the normal crossings compactification
_0, n + N^G(μ) ⊂_0, n + N^G(μ)
is the subcomplex of Δ_0, n+ N^G(μ) on those simplices which have no forbidden vertices. The analogue of Theorem <ref> holds for Θ̃_0, n+N^G(μ): the subcomplex parameterizing graph-theoretic admissible G-covers → where has a repeated marking is acyclic. Our proof of Theorem <ref> carries through to this setting, mutatis mutandis. In Remark <ref> below, we explain how this leads to a generalization of Theorem <ref> for these spaces.
§ A GRAPH SUM FORMULA FOR 𝗁_G
Recall from the introduction that
𝗁_g = ∑_n ≥ 0∑_i = 0^4g - 2 + 2n (-1)^i _n W_0 H^i_c (_g, n; ) ∈Λ̂
denotes the generating function for the weight zero equivariant Euler characteristics of the moduli spaces _g, n. In this section we will prove Theorem <ref>, thus establishing our sum-over-graphs formula for 𝗁_g. Recall that T_2g + 2 denotes the set of isomorphism classes of trees with 2g + 2 unlabelled leaves, and each such tree C has a unique graph-theoretic admissible /2-cover P_C→ C. Let T_2g + 2^<3 denote the subset of T_2g + 2 consisting of those trees such that no vertex supports more than two leaves, and for a tree C we write E_C for its set of edges. We restate Theorem <ref> for convenience.
A
We have
𝗁_g = ∑_C ∈ T_2g + 2^<3
(-1)^|E_C|/|(P_C)|∑_τ∈(P_C)sgn(τ|_E_C) ∏_k ≥ 1 (1 + p_k)^f(P_C, τ, k)
where E_C is the set of edges of the tree C,
p_k = ∑_n > 0 x_n^k∈Λ̂ is the kth power sum symmetric function, and k · f(P_C, τ, k) is the compactly supported Euler characteristic of the set of points in P_C which have orbit of length k, under the action of τ.
We will prove Theorem <ref> through a series of intermediate results.
We have
𝗁_g = - ∑_n ≥ 0χ_c^S_n(Θ_g, n∖ (Θ_g, n^rep∪Θ_g, n^≥ 3)),
where χ_c^S_n( · ) denotes the S_n-equivariant compactly supported Euler characteristic.
Via the identification
W_0 H^i_c(_g, n ; ) ≅H_i -1(Θ_g, n; )^∨
of Corollary <ref>, we can write
𝗁_g = ∑_n ≥ 0∑_i = 0^4g - 2 + 2n (-1)^i _n H_i - 1 (Θ_g, n; )
= ∑_n ≥ 0 -χ^S_n(Θ_g, n),
where χ^S_n(·) denotes the S_n-equivariant reduced Euler characteristic. Since Θ_g, n is connected and compact, and S_n acts trivially on H_0(Θ_g, n;ℚ) ≅ℚ, we have
-∑_n ≥ 0χ^S_n(Θ_g, n) = ∑_n ≥ 0 h_n - ∑_n ≥ 0χ_c^S_n(Θ_g, n),
where h_n ∈Λ is the nth homogeneous symmetric function, defined as the Frobenius characteristic of the trivial S_n-representation. By the additivity of the compactly supported Euler characteristic under stratification, we can write
∑_n ≥ 0χ_c^S_n(Θ_g, n) = ∑_n ≥ 0( χ_c^S_n(Θ_g, n∖ (Θ_g, n^rep∪Θ_g, n^≥ 3)) + χ_c^S_n(Θ_g, n^rep∪Θ_g, n^≥ 3))
Since the union Θ_g, n^rep∪Θ_g, n^≥ 3 is compact and connected, with vanishing reduced rational homology by Theorem <ref>, and S_n acts trivially on H_0(Θ_g, n^rep∪Θ_g, n^≥ 3 ; ), we have
χ_c^S_n(Θ_g, n^rep∪Θ_g, n^≥ 3) = h_n,
and the proof is complete.
We have
𝗁_g = -∑_C ∈ T_2g + 2^<3∑_n ≥ 0χ_c^S_n( (Conf_n(P_C) × (Δ^|E_C| - 1)^∘)/(P_C)).
We can stratify the space
X_g, n : = Θ_g, n∖ (Θ_g, n^rep∪Θ_g, n^≥ 3)
by the Γ_g^-object that arises when we forget the markings of the legs. Such an object is uniquely specified by an element C of T_2g + 2^<3, which determines its covering P_C. The stratum corresponding to P_C → C is S_n-equivariantly homeomorphic to
(Conf_n(P_C) × (Δ^|E_C| - 1)^∘)/(P_C).
Above, (Δ^|E_C| - 1)^∘ denotes the interior of the standard |E_C| - 1 simplex Δ^|E_C| - 1, viewed as the space parameterizing metrics ℓ E_C →_> 0 of total length one. The space Conf_n(P_C) is the configuration space of n distinct points on P_C, and the action of (P_C) is diagonal: (P_C) naturally acts on C, and hence on |E_C| and (Δ^|E_C|- 1)^∘.
We now show how to calculate the terms in the sum, following Gorsky's calculation of the S_n-equivariant Euler characteristic of Conf_n(X)/G, where X is an algebraic variety and G is a finite subgroup of its automorphism group <cit.>.
Let X be a finite CW complex, and let E be a finite set. Set
Δ^∘ = {ℓ E →_>0|∑_e ∈ Eℓ(e) = 1 }.
Let G be a finite group acting on both X and E, and set
𝗁_X, E, G := ∑_n ≥ 0χ_c^S_n( (Conf_n(X) ×Δ^∘)/G).
Then
𝗁_X, E, G = -(-1)^|E|/|G|∑_g ∈ Gsgn(g|_E) ∏_k ≥ 1 (1 + p_k)^χ_c(X_k(g))/k,
where X_k(g) denotes the set of points of X which have orbit of length k under the action of g.
Before proving Proposition <ref>, we need two intermediate lemmas.
Suppose that X is any finite CW complex. Then
f(t) := ∑_n ≥ 0χ_c(Conf_n(X)) t^n/n! = (1 + t)^χ_c(X).
We have the identity
χ_c(X^n) = ∑_k = 1^n S(n, k) χ_c(Conf_k(X)),
where S(n, k), the Stirling number of the second kind, counts the number of partitions of an n-element set into k nonempty parts. It follows that
g(t): =∑_n ≥ 0χ_c(X^n) t^n/n! = e^χ_c(X)t
is the Stirling transform of f, so that f(t) = g(log(1+t)) = (1 + t)^χ_c(X), as claimed.
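After substituting the conclusion χ_c(Conf_k(X)) = χ_c(X)(χ_c(X)-1)⋯(χ_c(X)-k+1), the displayed identity for χ_c(X^n) becomes the classical expansion of x^n into falling factorials with Stirling numbers of the second kind as coefficients. That expansion can be checked symbolically; the following short Python/SymPy sketch is only an illustration and is not part of the paper's computations.

# Check  x^n = sum_k S(n, k) * x(x-1)...(x-k+1)  for small n.
from sympy import symbols, ff, expand
from sympy.functions.combinatorial.numbers import stirling

x = symbols('x')
for n in range(7):
    rhs = sum(stirling(n, k) * ff(x, k) for k in range(n + 1))
    assert expand(x**n - rhs) == 0
print("Stirling expansion verified for n = 0, ..., 6")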
For any group H acting on a space Y, denote by
[Y]^h
the set of fixed points of h ∈ H acting on Y. Then, for X, E, and G as above, and σ∈ G, we have
χ_c([(Conf_n(X) ×Δ^∘)/G]^σ) = - (-1)^|E|/|G|∑_g ∈ Gsgn(g|_E) ·χ_c([Conf_n(X)]^g^-1σ).
Define
S = {(g, ℓ, y) ∈ G × E ×Conf_n(X) | g · (ℓ, y) = σ· (ℓ, y) }.
Then we have a map
S →[(Conf_n(X) ×Δ^∘)/G ]^σ,
which takes (g, ℓ, y) to (ℓ, y). The fibers of this map are all nonempty and have cardinality equal to |G|,
so
χ_c([(Conf_n(X) ×Δ^∘)/G]^σ) = 1/|G|χ_c(S).
On the other hand, the projection S → G has fiber over g ∈ G isomorphic to
[Δ^∘]^g× [Conf_n(X)]^g^-1σ.
Therefore we have
χ_c([(Conf_n(X) ×Δ^∘)/G]^σ) = 1/|G|∑_g ∈ Gχ_c([Δ^∘]^g) ·χ_c([Conf_n(X)]^g^-1σ).
The proof is finished upon noting that [Δ^∘]^g is again an open simplex, whose dimension modulo 2 is equal to |E| + sgn(g|_E) - 1.
We can now prove Proposition <ref>.
We have
𝗁_X, E, G = ∑_n ≥ 01/n!∑_σ∈ S_n∑_i ≥ 0(-1)^i Tr(σ|_H^i_c((Conf_n(X) ×Δ^∘)/G;)) p_1^k_1(σ)⋯ p_n^k_n(σ)
= ∑_n ≥ 01/n!∑_σ∈ S_nχ_c([(Conf_n(X) ×Δ^∘)/G]^σ) p_1^k_1(σ)⋯ p_n^k_n(σ).
The second equality follows from the Lefschetz fixed-point theorem, applied to the one-point compactification of (Conf_n(X) ×Δ^∘)/G, where we set k_i(σ) to be the number of cycles of length i in σ. Now using Lemma <ref>, we have
𝗁_X, E, G =-∑_n ≥ 01/n!∑_σ∈ S_n(-1)^|E|/|G|∑_g ∈ Gsgn(g|_E) ·χ_c([Conf_n(X)]^g^-1σ) p_1^k_1(σ)⋯ p_n^k_n(σ).
Now the proof follows that of Gorsky <cit.>: if we set
X_k(g) := {x ∈ X | x has orbit of size k under g },
and
X̃_k(g) = X_k(g)/(g),
then for fixed ℓ_1, …, ℓ_n such that ∑_i = 1^n i ℓ_i = n, we have a map
∐_σ∈ S_n, k_i(σ) = ℓ_i ∀ i [Conf_n(X)]^g^-1σ→∏_i = 1^nConf_ℓ_i(X̃_i(g))/S_ℓ_i
which is n!-to-1, so that
1/n!∑_σ∈ S_n, k_i(σ) = ℓ_i ∀ i χ_c([Conf_n(X)]^g^-1σ) = ∏_i = 1^nχ_c(Conf_ℓ_i(X̃_i(g)))/ℓ_i!.
Now the proposition follows from Lemma <ref>, upon summing over all possible tuples (ℓ_1, …, ℓ_n).
Now Theorem <ref> is proved by combining Lemma <ref> with Proposition <ref>.
As explained in Remark <ref>, the repeated marking locus in the dual complex Θ̃_0, n+N^G(μ) of the inclusion
_0, n + N^G(μ) ⊂_0, n + N^G(μ)
is also acyclic, and _0, n + N^G(μ) is naturally identified with the moduli space of smooth N-pointed admissible covers of ^1 with μ-specified monodromy, together with n distinct marked points on the source curve.
By the acyclicity of the repeated marking locus, we can write a graph sum formula for the generating function encoding the S_n-equivariant weight zero compactly supported Euler characteristics of these moduli spaces. Define
𝗁^G_N(μ) = ∑_n ≥ 0∑_i = 0^2N + 2n - 6 (-1)^i_n(W_0 H^i_c(_0, n+N^G(μ); )).
By removing the repeated marking locus from the dual complex and emulating the techniques of this section, we obtain the following theorem.
D
We have
𝗁^G_N(μ) = ∑_→∈Ob(Γ_0, N^G(μ))(-1)^|E_|/|()|∑_τ∈()sgn(τ|_E_) ∏_k ≥ 1 (1 + p_k)^f(, τ, k)
where E_ is the set of edges of the tree ,
p_k = ∑_n > 0 x_n^k∈Λ̂ is the kth power sum symmetric function, and k · f(, τ, k) is given by the compactly supported Euler characteristic of the set of points in which have orbit of length k, under the action of τ. The first sum is taken over isomorphism classes of objects in Γ_0, N^G(μ), which is the category defined in Definition <ref>.
Taking G = /2, N = 2g + 2, and μ{w_1, …, w_N}→/2 to be the constant function 1 in Theorem <ref>, we obtain the generating function for the S_n-equivariant weight zero compactly supported Euler characteristics of the moduli spaces _g, n of n-pointed hyperelliptic curves of genus g, together with labellings of their Weierstrass points.
§ CALCULATIONS FOR G≤ 7
In this appendix we present the computational data obtained by implementing Theorem <ref> on a computer. This was implemented in Mathematica using the package IGraph/M <cit.>. The code for these computations is available at <cit.>.
We compute 𝗁_g explicitly for 2 ≤ g ≤ 7: see Table <ref>. For scale, 𝗁_5 is computed as a sum over 96 graphs and takes 8 minutes to compute on a home laptop, while 𝗁_7 is computed as a sum over 2789 graphs and takes just under 3 days to compute on a home laptop.
We extract from this data exponential generating functions for the numerical weight zero compactly supported Euler characteristic by setting P_1 to 1+t and all other P_i to 1, see Table <ref>. We display these Euler characteristics for 0 ≤ n ≤ 10 in Table <ref>.
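The extraction step just described is mechanical; the following Python/SymPy sketch illustrates it (this is not the Mathematica/IGraph-M code of <cit.>, and the expression assigned to h below is a hypothetical placeholder rather than an actual value of 𝗁_g).

# Substitute p1 -> 1 + t and p_i -> 1 (i >= 2) into a polynomial in the power
# sums to obtain the exponential generating function of the numerical weight
# zero compactly supported Euler characteristics.
from sympy import symbols, expand

t = symbols('t')
p = symbols('p1:6')                       # p[0] = p1, ..., p[4] = p5
h = p[0]**2 - p[1]                        # placeholder, NOT an actual h_g
numerical_egf = expand(h.subs({p[0]: 1 + t, **{q: 1 for q in p[1:]}}))
print(numerical_egf)                      # here: t**2 + 2*t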
|
http://arxiv.org/abs/2307.01266v1
|
20230703180007
|
Symmetry fractionalization, mixed-anomalies and dualities in quantum spin models with generalized symmetries
|
[
"Heidar Moradi",
"Ömer M. Aksoy",
"Jens H. Bardarson",
"Apoorv Tiwari"
] |
cond-mat.str-el
|
[
"cond-mat.str-el",
"hep-th"
] |
Symmetry fractionalization, mixed-anomalies and
dualities in quantum spin models with generalized symmetries
Heidar Moradi^1, Ömer M. Aksoy^2, Jens H. Bardarson^3, Apoorv Tiwari^3
^1 School of Physical Sciences, University of Kent, Canterbury CT27NZ,UK
^2Condensed Matter Theory Group, Paul Scherrer Institute, CH-5232 Villigen PSI, Switzerland
^3Department of Physics, KTH Royal Institute of Technology, Stockholm, 106 91 Sweden
[email protected], [email protected], [email protected]
We investigate the gauging of higher-form finite Abelian symmetries and their sub-groups in quantum spin models in spatial dimensions d=2 and 3.
Doing so, we naturally uncover gauged models with dual higher-group symmetries and potential mixed ‘t-Hooft anomalies.
We demonstrate that the mixed anomalies manifest as the symmetry fractionalization of higher-form symmetries participating in the mixed anomaly.
Gauging is realized as an isomorphism or duality between the bond algebras that generate the space of quantum spin models with the dual generalized symmetry structures.
We explore the mapping of gapped phases under such gauging related dualities for 0-form and 1-form symmetries in spatial dimension d=2 and 3.
In d=2, these include several non-trivial dualities between short-range entangled gapped phases with 0-form symmetries and 0-form symmetry enriched Higgs and (twisted) deconfined phases of the gauged theory with possible symmetry fractionalizations.
Such dualities also imply strong constraints on several unconventional, i.e., deconfined or topological transitions.
In d=3, among others, we find, dualities between topological orders via gauging of 1-form symmetries.
Hamiltonians self-dual under gauging of 1-form symmetries host emergent non-invertible symmetries, realizing higher-categorical generalizations of the Tambara-Yamagami fusion category.
§ INTRODUCTION
Global symmetries play a fundamental role in understanding many aspects of quantum physics.
The existence of symmetries aids in the organization of states and operators into representations of the global symmetry and imposes non-perturbative constraints on the dynamics and low-energy phases and phase transitions realized in a quantum system.
Applications of symmetry are at the heart of much of modern physics.
For instance, global symmetries constrain the particle content of the Standard Model of particle physics and provide the basis for Landau's classification scheme of phases of matter.
In the past decade, there has been a paradigm shift in the understanding of global symmetries in quantum field theory, based on the insight that any topological sub-sector of operators in a quantum field theory embodies a symmetry <cit.>.
This has led to vast generalizations beyond the conventional notion of symmetry, which relied on the existence of global symmetry operators defined on all of space, or more generally on co-dimension-1 hypersurfaces in spacetime, which satisfy group composition rules and commute with the Hamiltonian.
The operators charged under such conventional symmetries are point-like or zero-dimensional and transform in representations of the global symmetry.
Such symmetries have been generalized in two broad directions, corresponding to identifying two classes of topological operators as symmetries.
Namely, allowing symmetry operators (i) to be defined on sub-manifolds of spacetime that have co-dimension higher than one has led to so-called higher-group symmetries <cit.>, and (ii) to satisfy more general composition rules than those of a group has led to non-invertible symmetries <cit.>.
Within the domain of higher-group symmetries, a p-form symmetry is generated by a codimension-(p+1) dimensional operator and operators charged under such symmetries have dimension greater than or equal to p <cit.>[Here codimension is defined relative to the spacetime dimension. In d+1 spacetime dimensions, a p-form symmetry is a topological defect defined on (d+1)-(p+1) = d-p dimensional submanifolds.].
Non-invertible symmetries, instead, as the name suggests, involve symmetry operators that do not have any inverse.
Composition rules for non-invertible symmetry operators correspond to an algebra rather than a group.
All these generalizations, encompassing higher-group and non-invertible symmetries, collectively fall under the umbrella of global categorical symmetries.
This name owes itself to the significant role played by higher fusion categories <cit.> in describing such symmetries and their charges.
Just as group theory and group representation theory provide the mathematical language for conventional symmetries, fusion categories organize the composition rules of topological operators across all (co)-dimensions.
Moreover, fusion categories also capture intricate topological information, including quantum anomalies and other refined features such as symmetry fractionalization patterns.
In a short span of time, global categorical symmetries have already made numerous important contributions in advancing our understanding of fundamental problems in physics.
Key accomplishments include resolving the phase diagram of pure non-Abelian gauge theories <cit.>, elucidating the phase diagram of adjoint quantum chromodynamics in 1+1 dimensions <cit.> and expanding Landau's paradigm to incorporate topologically ordered phases of matter <cit.>.
Global categorical symmetries also played a central role in recent constructions that organize the symmetry aspects of a quantum system into a topological order in one dimension higher.
These go under the name of symmetry topological field theories <cit.> in the high-energy physics community and topological holography <cit.> or holographic or categorical symmetry <cit.> in the condensed-matter community.
Such constructions are an efficient way to unravel large webs of dualities, related to topologically manipulating the symmetry aspects of the system while leaving the local physics unchanged.
Constructions applicable to quantum lattice models have also been used to unify Landau and beyond Landau physics <cit.> in d=1 and in the domain of finite Abelian symmetries.
Gauging a global symmetry is a well-understood method to manipulate the symmetry structure of a system in a controlled yet nontrivial way.
It provides insights into subtle aspects of global categorical symmetries and facilitates an efficient search for quantum theories exhibiting diverse symmetries.
Gauging involves transforming a theory with a global symmetry into a theory with a local symmetry or redundancy, achieved by coupling the theory to a background gauge field and summing over its realizations.
The resulting gauged theory possesses global symmetries that extend beyond conventional 0-form global symmetries and can be deduced in full generality, allowing for the construction of models with potentially novel symmetries <cit.>.
As an example, gauging a p-form finite Abelian group G in d+1 dimensions delivers a theory which has a (d-p-1)-form symmetry corresponding to the Pontrjagin dual G^∨, the group of homomorphisms from G to U(1)[More precisely, this is the invertible sub-category inside the category of d-representations of a p-form group G. We will however only describe the invertible component. The rest of the symmetry category can be obtained by considering all possible condensation defects.].
When the total p-form group is a central extension of K by N determined by an extension class κ, gauging N⊂ G produces a gauged theory with a p-form symmetry K, a (d-p-1)-form symmetry N^∨ and a mixed anomaly between them, which depends on κ <cit.>.
Similarly, theories with non-invertible symmetries can be obtained by gauging subgroups that act on the remaining symmetry via outer automorphisms <cit.>, or have a mixed anomaly with the remaining symmetry structure <cit.>.
Another avenue for non-invertible symmetries is gauging invertible symmetries on sub-manifolds of spacetime <cit.>.
The symmetry defects thus obtained are referred to as condensation defects.
To summarize, many of the constructions of models with exotic global categorical symmetries employ some kind of generalized gauging procedure.
Gauging provides a systematic playground to start with a theory that has a familiar symmetry structure and `discover' quantum systems with novel symmetry structures.
In this early stage in the study of global categorical symmetries this contributes to a systematic understanding of these novel symmetry structures.
Yet another reason to study gauging of finite global symmetries is that such gaugings are realized as dualities in quantum systems.
For example, the well-known Kramers-Wannier and Jordan-Wigner dualities are essentially gaugings of the Z_2 internal and Z_2 fermion-parity symmetry in 1d lattice models <cit.>.
Dualities can be used to provide profound non-perturbative insights into quantum systems and are therefore very valuable.
Furthermore, gauging related dualities in dimensions higher than d=1 map short-range entangled states to long-range entangled or topologically ordered states.
For instance gauging a Z_n symmetric paramagnet in d=2 gives the Z_n topological order <cit.>.
Recently, it has been appreciated that gauging can be implemented in quantum circuits via measurements <cit.> and since it is desirable for quantum computation platforms to prepare such states <cit.>, understanding such dualities is a pre-requisite.
While much of the impetus driving the understanding of global categorical symmetries comes from quantum field theoretic studies, our work takes a distinct approach by examining various aspects of global categorical symmetries in the lattice setting.
We mostly restrict ourselves to higher-group symmetries with possible 't Hooft anomalies.
A theory has an 't Hooft anomaly with respect to a global symmetry if the partition function of the theory coupled to background symmetry gauge fields is not gauge invariant.
Instead the partition function transforms under background gauge transformations by a U(1) phase which cannot be absorbed by any local counter-terms, but can however be absorbed by an invertible topological field theory in one higher dimension <cit.>.
A related consequence is that such a symmetry cannot be promoted to a gauge symmetry.
However, certain so called mixed 't Hooft anomalies involve more than one symmetry group such that when restricted to any single symmetry group, the anomaly is nullified or trivialized.
In such cases, it is possible to gauge any single symmetry group but not the full symmetry structure.
The anomalies we encounter in this paper are mixed 't-Hooft anomalies involving higher groups.
We study spin systems defined on a lattice in d spatial dimensions, with each p-dimensional cell (i.e., vertices for p=0, edges for p=1, etc.) equipped with a finite-dimensional Hilbert space, typically the group algebra of a finite Abelian group G. In more conventional condensed matter physics language, these are nothing but spin degrees of freedom (for example, a single species of standard spin-1/2 degrees of freedom for G = ℤ_2).[In this paper we employ lattice gauge theory and simplicial calculus language, as it makes many aspects of the construction more transparent. But note that underlying everything is nothing but spin models.]
Within such a setup, a p-form symmetry corresponding to the group G is generated by operators defined on any closed (d-p)-dimensional sub-lattice (see Fig. <ref>).
We organize the space of p-form symmetric quantum Hamiltonians as an algebra B_p of operators that commute with the p-form symmetry.
Such a bond algebra has already been useful in understanding dualities and systematizing the space of quantum systems with fixed global symmetries <cit.>.
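As the simplest illustration of a bond algebra (a sketch we add for orientation, not a model analyzed at this point in the text), consider d=1, G=Z_2 and p=0: the generators Z_j Z_{j+1} and X_j of the transverse-field Ising chain all commute with the global symmetry operator U = ∏_j X_j, so any Hamiltonian built from them is Z_2 symmetric. A minimal numpy check:

import numpy as np
from functools import reduce

I2 = np.eye(2)
X = np.array([[0, 1], [1, 0]])
Z = np.array([[1, 0], [0, -1]])

def site_op(op, j, N):
    """Embed the single-site operator op at site j of an N-site chain."""
    return reduce(np.kron, [op if k == j else I2 for k in range(N)])

N = 4
U = reduce(np.dot, [site_op(X, j, N) for j in range(N)])            # global Z_2
bond_algebra = [site_op(Z, j, N) @ site_op(Z, (j + 1) % N, N) for j in range(N)]
bond_algebra += [site_op(X, j, N) for j in range(N)]
assert all(np.allclose(U @ b, b @ U) for b in bond_algebra)
print("all bond-algebra generators commute with the global Z_2 symmetry")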
Gauging the p-form symmetry amounts to making the symmetry local.[The meaning of making a symmetry local requires clarification for higher-form symmetries. For conventional 0-form symmetries, a symmetry is parameterized by a 0-form λ_0: ϕ→ϕ+λ_0 such that dλ_0=0 (i.e. λ_0 is constant). Making it local means we want it to be invariant even when dλ_0 ≠ 0. This is done by introducing a 1-form gauge field a with transformation a→ a+dλ_0 and constructing minimal couplings to ϕ (covariant derivatives in the continuum). For more general p-form symmetries, the symmetry is parameterized by p-forms λ_p such that dλ_p=0, i.e. co-cycles. Making it local means it should be invariant under any co-chain, i.e. even when dλ_p≠ 0. This requires the introduction of a (p+1)-form gauge field a_p+1 with a_p+1→ a_p+1 + dλ_p.]
This is done by introducing G-valued gauge degrees of freedom on the (p+1)-cells and demanding local gauge invariance by requiring that a collection of Gauss operators act as the identity on the gauged system.
Doing so, one obtains a dual bond algebra B^∨_d-p-1, isomorphic to B_p .
We analyze the symmetries of B^∨_d-p-1 and recover the dual (d-p-1)-form G^∨ symmetries one expects upon gauging a finite Abelian 0-form symmetry.
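To make the Gauss-operator step concrete in the simplest case (d=1, G=Z_2, p=0; a sketch with standard conventions that may differ from those fixed in later sections), one places gauge spins τ on edges, minimally couples the bond terms, and checks that the Gauss operators G_j = τ^x_j-1σ^x_jτ^x_j commute with all generators, so that imposing G_j=1 is consistent:

import numpy as np
from functools import reduce

I2, X, Z = np.eye(2), np.array([[0, 1], [1, 0]]), np.array([[1, 0], [0, -1]])

def op(pauli_at, n_qubits):
    """Tensor product with pauli_at[k] (identity by default) at qubit k."""
    return reduce(np.kron, [pauli_at.get(k, I2) for k in range(n_qubits)])

N = 3                      # matter spins at qubits 2j, gauge spins at qubits 2j+1
nq = 2 * N
gauss = [op({2 * ((j - 1) % N) + 1: X, 2 * j: X, 2 * j + 1: X}, nq) for j in range(N)]
gauged_bonds = [op({2 * j: Z, 2 * j + 1: Z, 2 * ((j + 1) % N): Z}, nq) for j in range(N)]
gauged_bonds += [op({2 * j: X}, nq) for j in range(N)]
assert all(np.allclose(G @ b, b @ G) for G in gauss for b in gauged_bonds)
print("Gauss operators commute with the minimally coupled generators")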
We carry out an analogous procedure for gauging subgroups of p-form finite Abelian groups.
Therefore one finds the following gauging-related isomorphism of bond-algebras
B_p ⇄ B_d-p-1^∨ ,
where the arrow from left to right is gauging the G p-form sub-symmetry and the arrow from right to left is gauging the G^∨ (d-p-1)-form sub-symmetry.
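In the familiar d=1, G=Z_2, p=0 case this isomorphism is the Kramers-Wannier duality mentioned above, with Z_j Z_j+1 ↦ X̃_j and X_j ↦ Z̃_j-1Z̃_j. A minimal numpy sketch (ours, not from the paper) checks that the pairwise (anti)commutation pattern of the generators is the same on both sides, which is the algebraic content of the bond-algebra isomorphism (on a periodic chain it holds sector by sector):

import numpy as np
from functools import reduce

I2, X, Z = np.eye(2), np.array([[0, 1], [1, 0]]), np.array([[1, 0], [0, -1]])

def site_op(op, j, N):
    return reduce(np.kron, [op if k == j else I2 for k in range(N)])

def commutation_pattern(ops):
    """+1 if a pair of operators commutes, -1 if it anticommutes."""
    return np.array([[1 if np.allclose(a @ b, b @ a) else -1 for b in ops] for a in ops])

N = 4
original = [site_op(Z, j, N) @ site_op(Z, (j + 1) % N, N) for j in range(N)] \
         + [site_op(X, j, N) for j in range(N)]
dual     = [site_op(X, j, N) for j in range(N)] \
         + [site_op(Z, (j - 1) % N, N) @ site_op(Z, j, N) for j in range(N)]
assert (commutation_pattern(original) == commutation_pattern(dual)).all()
print("generator (anti)commutation relations agree across the duality")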
In the case of gauging finite subgroups, this allows us to pinpoint lattice manifestations of mixed quantum anomalies.
Quantum anomalies in lattice systems are much less understood <cit.> as compared with their field theoretic counterparts.
In particular there has been an effort towards understanding mixed anomalies involving crystalline symmetries on the lattice due to their expected connection with Lieb-Schultz-Mattis constraints <cit.>.
In this regard, we hope that our work will shed light on how to diagnose mixed anomalies in lattice models.
Specifically, we find that a mixed anomaly between a p-form symmetry K and a (d-p-1)-form symmetry N^∨ manifests as the fractionalization involving the two symmetries K and N^∨ that participate in the anomaly.
See <cit.>, for a higher-categorical discussion of such anomalies.
Symmetry fractionalization is well-studied in topological orders <cit.>, with fractional quantum Hall (FQH) systems providing the paradigmatic examples where anyons display U(1) symmetry fractionalization by carrying a fractional U(1) charge.
Recently, the role of symmetry fractionalization in understanding quantum anomalies has also been emphasized <cit.>; however, a lattice study remains missing.
In this work, we detail such a relation between symmetry fractionalization patterns and mixed anomalies in lattice spin models.
The fact that gauging is realized as an isomorphism of bond algebras implies a duality between the pre-gauged system T and gauged system T^∨.
Strictly speaking, such dualities are invertible only when one considers all the symmetry sectors of T and T^∨ <cit.>, where a symmetry sector is specified by symmetry twisted boundary conditions and symmetry eigenspaces.
More precisely, the duality implies that the spectrum of a Hamiltonian in a certain symmetry sector and its dual gauged Hamiltonian in a dual symmetry sector are the same.
Another consequence is the equality of correlation functions
⟨ O_1( x_1 , t_1)⋯ O_n( x_n , t_n)⟩_Φ=
⟨ O^∨_1( x_1 , t_1)⋯ O^∨_n( x_n , t_n)⟩_Φ^∨ ,
where Φ collectively denotes the symmetry sector and twisted boundary condition labels of theory T and O_j are operators in the bond algebra B,
while Φ^∨ and O^∨_j are the images of Φ and O_j under the gauging map.
This in turn implies that the phase diagrams of T and T^∨ are isomorphic.
Therefore such dualities can be used to read off many non-perturbative constraints on the phase diagrams of the systems being investigated.
Gapped phases on either side of the duality, as well as universality classes of phases transitions, can be mapped.
Knowing the order parameters of a certain ordered phase in T can be used to immediately furnish the order parameters for the dual phase in T^∨ (see Fig. <ref>).
See also <cit.> for a detailed holographic perspective in 2+1/1+1 dimensions.
Recently, dualities in spin models related to partial gauging of finite Abelian symmetry have been studied <cit.>.
In <cit.>, the mapping of transitions under such dualities in 1d was emphasized and it was pointed out that deconfined quantum critical <cit.> transitions realized in the model after partial gauging are dual to conventional Landau transitions in the model before partial gauging.
Therefore such dualities provide a promising avenue to bootstrap the understanding of conventional transitions to explore unconventional transitions.
In this work, we have harnessed precisely this methodology to explore several unconventional transitions in d=2 and 3 dimensional models with mixed anomalies involving higher group symmetries.
For instance, in d=2 the Landau symmetry-breaking transitions in models with Z_n 0-form symmetry map to anyon condensation transitions <cit.> between topological orders after partial gauging.
Similarly, transitions between SPTs map to transitions between distinct symmetry enriched topological orders after partial gauging.
In d=3, among others, we find dualities between topological orders via gauging of 1-form symmetries.
Although in this paper we confine ourselves to studying dualities from (partial) gauging of (higher) symmetries, other types of dualities exist, related to the automorphism group or cohomology groups of global symmetries <cit.>, for example by stacking SPT phases. Gauging dualities map p-form symmetries to (d-p-1)-form symmetries, and thus there are usually no self-dualities.
The few exceptions occur in even spacetime dimensions (0-form symmetries in 1+1 dimensions or 1-form symmetries in 3+1 dimensions).
However, by combining gauging with these other dualities, such as SPT stacking, it is possible to find dualities between phases of the same type of symmetry in any dimension.
Self-dual points of such dualities will give rise to exotic non-invertible symmetries. For example, a duality between the 2+1 dimensional toric code and the double semion model can be constructed by gauging a 1-form symmetry, stacking with a 0-form SPT phase and then gauging the 0-form symmetry.
There will exist a phase-transition between these topological orders that is self-dual under this mapping.
Summary of results:
In this work, we study the gauging of finite Abelian higher-form symmetries and their subgroups in quantum spin models.
Along the way we clarify various notions related to mixed anomalies and symmetry fractionalization patterns, as well as detail how gaugings of finite generalized symmetries are realized as dualities between classes of quantum spin models with certain dual global symmetries.
We discuss the mapping of gapped phases under such gauging-related dualities and also discuss more general consequences for the structure of the phase diagrams of the dual quantum systems.
Below is a summary of the main results:
* We describe the gauging of higher-form finite Abelian symmetries and sub-symmetries on the lattice.
* We study the mapping of the energy spectrum under gauging dualities, in particular the mapping of symmetry sectors, i.e., symmetry eigen-sectors and symmetry twisted boundary conditions, under dualities related to partial gauging of higher-form symmetries.
* We clarify how mixed-anomalies involving higher-form global symmetries manifest on the lattice.
For instance, we investigate the higher-group with a Z_2 p-form and Z_2 (d-p-1)-form symmetry and a mixed anomaly given by
S_ anom= iπ∫ A_p+1∪ Bock(A_d-p) ,
where A_p+1 and A_d-p are the background gauge fields for the p-form and (d-p-1)-form symmetries, respectively.
The anomaly manifests as a symmetry fractionalization pattern such that the Z_2 p-form (respectively (d-p-1)-form) symmetry fractionalizes to Z_4 depending on the symmetry twisted boundary condition of the (d-p-1)-form (respectively p-form) symmetry.
More precisely
U_p^2(Σ^(d-p)) = T_d-p-1(Σ^(d-p)) =exp{ iπ∮_Σ^(d-p)A_d-p} ,
U_d-p-1^2(Σ^(p+1)) = T_p(Σ^(p+1)) =exp{ iπ∮_Σ^(p+1)A_p+1} ,
where U_q and T_q are the operators that implement the q-form symmetry and measure the q-form symmetry twisted boundary conditions respectively.
* We study how gapped phases dualize in d=2 and 3 dimensions under (partial) gauging of global 0-form and 1-form symmetries and point out symmetry fractionalization patterns that distinguish certain gapped phases. For example, this leads to interesting concrete spin models with anyonic excitations that carry a fractional charge of a global symmetry, reminiscent of the FQHE.
* We describe how the order parameters of all gapped phases map under gauging and partial gauging related dualities (see Fig. <ref>). These can be used to compute non-trivial phase-diagrams and study phase-transitions in higher-dimensions, similar to <cit.> in 1+1 dimensions.
* We describe a Z_n 1-form generalization of the Kramers-Wannier duality in d=3.
Amongst many things, this enables us to show a certain duality between ℤ_k and ℤ_n/k topological orders in d=3.
Furthermore it also allows us to construct spin models in 3+1 dimensions with non-invertible symmetries, for example at phase-transitions between certain topological ordered phases.
* Along the way, we connect various field-theoretic aspects of gauge models and parallel transport, as well as notions from differential and simplicial geometry, to the context of quantum spin models.
§.§.§ Organization of the paper
The paper is organized as follows. In Sec. <ref>,
we describe dualities
obtained by gauging finite Abelian
0-form (sub-)symmetries as bond algebra isomorphisms.
Section <ref> describes
such dualities from a quantum field theory point of view.
In Secs. <ref> and <ref>, we explore how the dualities act on the phase diagrams of
two- and three-dimensional spin models, respectively.
Section <ref> focuses on gauging finite Abelian
1-form global symmetries and corresponding dualities in
two- and three-dimensional space. Section <ref>
concludes.
§.§.§ Notation and conventions
Here we briefly summarize the notation and conventions adopted in this paper.
* We denote by d the spatial dimensions while spacetime dimension is denoted by (d+1).
* We denote by M the (d+1)-dimensional spacetime manifold and often assume that M = M^ _d× S^1, where M^ _d is a d-dimensional spatial manifold.
We use a triangulation of M^ _d denoted by M^ _d,△.
In Sec. <ref>, we use a square or cubic lattice, but with slight abuse of notation, we continue to denote it as M^ _d,△.
* We denote by Greek letters Σ^(p) and γ
non-contractible p and 1-cycles on M.
S^(p) and L are used for general p-chains and 1-chains on M respectively.
* On the triangulation M^ _d,△, we denote by e∈ M^ _d,△ and p ∈ M^ _d,△
the oriented edges and plaquettes, respectively.
We denote by o( e, p)=± 1 the relative orientation between the
edge e and plaquette p such that e ⊂ p, i.e.,
we assign o( e, p)= +1 or o( e, p)= -1 if
orientation of edge e aligns or anti-aligns
with that of plaquette p, respectively.
Similarly, given any 1-chain L, we denote by o( e, L)
and o( p, S^(p))
the relative orientations between 1-chain L and the edge e ⊂ L
and between p-chain S^(p) and the plaquette p ⊂ S^(p),
respectively.
* We denote by G^ _(p) a p-form symmetry group G.
* We denote by G_(0, d-1)^ϵ = [ K_(0), N_(d-1)]^ϵ a d-group with 0-form symmetry K_(0), (d-1)-form symmetry N_(d-1) and a mixed anomaly which depends on ϵ.
Other higher groups are denoted in a similar fashion.
* We denote by A^( H)_p the p-form background gauge field associated with
group H. We denote by lowercase a^( H)_p the dynamical p-form gauge fields
associated with group H.
We drop the superscript H when the group
corresponding to the p-form gauge fields A^ _p and a^ _p is clear from the context.
* ℬ^ _ G( V) denotes the bond algebra of G symmetric operators on the Hilbert space V.
* We make extensive use of simplicial calculus notation to discuss spin models. For a quick review of simplicial calculus, we refer the reader to Appendix E of
Ref. <cit.>. For more details, see the standard texts
<cit.> on algebraic topology.
For readers who are interested in spin models but unfamiliar with algebraic topology, we will briefly define the minimal set of objects and their relation to the spin-model language. A triangulation M_d, is a decomposition of a manifold M into n-simplices [ v_0, …, v_n]. A 0-simplex v is a vertex, a 1-simplex e = [ v_0, v_1] is an edge, a 2-simplex p = [ v_0, v_1, v_2] is a plaquette, and so on. The ordering of the vertices in [ v_0, …, v_n] defines an orientation. We denote by C^n(M, G) the set of G-valued n-cochains. In words,
ϕ∈ C^n(M, G) is a spin configuration (a map)
that assigns to each simplex [ v_0, …, v_n]
a G-valued spin.
For example, a 0-cochain ϕ∈ C^0(M, G) is a spin configuration of G-valued spins, i.e., an assignment of a value ϕ_ v∈ G to each vertex v. For G=ℤ_2, ϕ is simply a spin
configuration of a quantum spin-1/2 model, whereby |ϕ⟩=|ϕ_ v_1, ϕ_ v_2, …⟩
denotes the spin-1/2 basis.[For clock model type spins we need G=ℤ_n, for k-layers of spins G=ℤ_2×…×ℤ_2 and so on.] Similarly, a 1-cochain a∈ C^1(M, G) is a spin configuration living on each edge a_ e = a_[ v_0, v_1], a 2-cochain b∈ C^2(M, G) is a spin configuration on each plaquette b_ p = b_[ v_0, v_1, v_2] and so on.
Next we need the so-called coboundary map d: C^n(M, G) → C^n+1(M, G). If ϕ is a 0-cochain, then dϕ is a 1-cochain, and on edges it evaluates to dϕ([ v_0, v_1]) = ϕ_ v_1 - ϕ_ v_0. For example, for G=ℤ_2 this measures whether two neighbouring spins point in the same direction or not. Similarly, on 1-cochains it is defined as da([ v_0, v_1, v_2]) = a_[ v_1, v_2] - a_[ v_0, v_2] + a_[ v_0, v_1].
Finally we need the cup product, which from a p-cochain a_p and a q-cochain b_q defines a (p+q)-cochain c_p+q = a_p∪ b_q. It acts on a (p+q)-simplex as a_p∪ b_q([ v_0, …, v_p+q]) = a_p([ v_0, …, v_p]) b_q([ v_p, …, v_p+q]).
Cochains, coboundary maps and cup products are the discrete analogues of differential forms, exterior derivatives and wedge products from differential geometry, respectively, and they satisfy similar properties; a minimal numerical illustration is given right after this list.
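To make the simplicial notation concrete, the following short NumPy sketch (ours, purely illustrative and not part of the constructions below) implements Z_n-valued 0- and 1-cochains and the coboundary map d on a small periodic square lattice, and checks the defining property d( dϕ)=0.

import numpy as np

n, Lx, Ly = 4, 3, 3
rng = np.random.default_rng(0)
phi = rng.integers(0, n, size=(Lx, Ly))        # 0-cochain: a Z_n spin on every vertex

def d0(phi):
    # coboundary of a 0-cochain: (d phi)_e = phi_target - phi_source (mod n)
    dh = (np.roll(phi, -1, axis=0) - phi) % n   # horizontal edges (x,y) -> (x+1,y)
    dv = (np.roll(phi, -1, axis=1) - phi) % n   # vertical edges   (x,y) -> (x,y+1)
    return dh, dv

def d1(ah, av):
    # coboundary of a 1-cochain: oriented sum of the edge values around each plaquette
    return (ah + np.roll(av, -1, axis=0) - np.roll(ah, -1, axis=1) - av) % n

ah, av = d0(phi)
assert np.all(d1(ah, av) == 0)                  # d(d phi) = 0 on every plaquette
print("d(d phi) = 0 verified on a", Lx, "x", Ly, "torus for n =", n)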
§ GAUGING AS BOND ALGEBRA ISOMORPHISMS
§.§ Gauging Abelian finite symmetry
In this section we review the gauging of finite 0-form symmetries G in quantum spin models. To simplify our presentation, we focus on the case where G=ℤ_n. However, the concepts and arguments can be readily extended to encompass any finite Abelian group.
Consider a d dimensional quantum spin model defined on the triangulation of an oriented manifold M_d denoted as M_d,.
Let each vertex v of M_d, be endowed with a local n dimensional complex Hilbert space V_ v≅ C^n.
The total Hilbert space is a tensor product
V=⊗_ v V_ v .
There is an action of Z_n clock and shift operators {X_ v,Z_ v}_ v on V such that
Z_ vX_ v'=ω_n^δ_ v v'X_ v'Z_ v , Z_ v^n=X_ v^n=1 , ∀ v, v' ,
where ω_n:=exp{2π i/n}.
We are interested in the space of Hamiltonians which are symmetric with respect to the Z_n symmetry generated by
U=∏_ vX_ v .
The space of linear local operators at each vertex v is spanned by 𝒪^( h, α)_ v = X^ h_ vZ^-α_ v, where h, α∈{ 0, …, n-1}. Here α∈Rep( Z_n)≅ Z_n labels the representation the operator 𝒪^( h, α)_ v transforms in under a global symmetry transformation
U^ g O_ v^( h,α) U^- g = R_α( g) O_ v^( h,α),
where R_α( g) = ω_n^α g. This decomposes the space of linear operators acting on V_ v into irreducible representations of ℤ_n. We will use the terminology that operators with non-trivial α are charged under ℤ_n.
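As a quick sanity check of these relations, the following NumPy lines (ours, for illustration only) construct the clock and shift matrices for n=3, verify Z X=ω_n X Z, and confirm that a local operator O^( h,α)= X^ h Z^-α picks up the phase ω_n^α under conjugation by the global symmetry U.

import numpy as np

n = 3
omega = np.exp(2j * np.pi / n)
Z = np.diag(omega ** np.arange(n))                        # clock matrix
X = np.roll(np.eye(n), 1, axis=0)                         # shift matrix, X|a> = |a+1>
mp = np.linalg.matrix_power

assert np.allclose(Z @ X, omega * X @ Z)                  # Z X = omega X Z
assert np.allclose(mp(X, n), np.eye(n)) and np.allclose(mp(Z, n), np.eye(n))

# two sites: U = X (x) X generates the global Z_n symmetry; O = X^h Z^{-alpha} on the
# first site transforms in the representation alpha, i.e. U O U^dag = omega^alpha O
U = np.kron(X, X)
h, alpha = 1, 2
O = np.kron(mp(X, h) @ mp(Z.conj().T, alpha), np.eye(n))
assert np.allclose(U @ O @ U.conj().T, omega ** alpha * O)
print("clock-shift algebra and charge assignment verified for n =", n)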
Now, the space of ℤ_n symmetric Hamiltonians is isomorphic to the bond algebra
B_ Z_n,(0)( V)=⟨
X_ v , Z^_ s( e)Z^†_ t( e) | ∀ v , e
⟩ ,
where s( e) and t( e) are the source and target vertex of the oriented edge e.
Said differently, any Z_n symmetric Hamiltonian on V can be expressed as a sum of products of the generators X_ v , Z^_ s( e)Z^†_ t( e) and is therefore an element of the algebra B_ Z_n,(0)( V).
Note that the generators only act on a single vertex or edge, but products and sums of these generate any other operator that commutes with (<ref>).
As written in (<ref>), the bond algebra has no restrictions related to locality.
However when constructing physical Hamiltonians, one typically imposes locality-related constraints such that the Hamiltonian is a sum of operators that each have support
within some open ball-like region in M_d,.
We will not attempt to formalize such constraints.
Instead, we will put them in by hand when studying models in later sections.
§.§.§ Twisted boundary conditions: gauge connections and parallel transport
Gauging, and dualities in general, act non-trivially on symmetry-twisted boundary conditions and symmetry sectors <cit.>.
Therefore it is insightful to define symmetry-twisted bond-algebras to keep track of how various symmetry sectors map under gauging-related dualities.
Implementing a symmetry-twisted boundary-condition g∈ℤ_n along a non-contractible cycle γ[We denote the non-contractible 1-cycles by γ and more general paths or cycles by L.] means that any charged local operator O_ v^( h, α) transforms as
O_ v^( h, α)⟶ U^ g O_ v^( h, α) U^- g = ω_n^α g O_ v^( h, α),
when the operator is transported along γ.
From the space-time point of view such symmetry-twisted boundary conditions correspond to inserting a symmetry defect that extends along the time direction, such that operators which cross the symmetry defect transform via the symmetry action (see figure <ref>).
For example in the transverse field Ising model, anti-periodic boundary conditions are implemented this way by flipping the sign of the bond that crosses the symmetry defect: σ^z_1 σ^z_L→σ^z_1 Uσ^z_L U^-1 = -σ^z_1 σ^z_L, where U = ∏_iσ^x_i is the ℤ_2 symmetry.
An alternative point of view, that is more in the spirit of this paper, is to couple the ℤ_n symmetric theory to a background gauge field (also called a connection). For translation symmetric theories on tori, this leads to symmetry-twisted translation operators with the property T_ g^N_γ = U_ g, where N_γ is the number of sites along the γ cycle.
This naturally implements (<ref>), and corresponds to a group extension of the translation group with ℤ_n leading to fractionalized momenta.
See appendix D of <cit.> for more details.
Here we are interested in general triangulations of general manifolds, where the notion of translation symmetry might not be present.
Consider a background gauge field A∈ Z^1(M_d,, ℤ_n) (a ℤ_n connection) with non-trivial holonomy along non-trivial 1-cycles.
Practically this means that we assign an element A_ e∈ℤ_n on each edge such that dA = 0 and
∮_γ A = g(γ)∈ℤ_n,
where g(γ) is the holonomy around γ, sometimes also referred to, somewhat misleadingly, as the flux through the loop γ.
The holonomy only depends on the homology class of γ, in particular g(γ)=0 when γ is contractible. Therefore, the background gauge-field assigns a g∈ℤ_n for each non-contractible cycle [γ], corresponding to the symmetry-twisted boundary condition on that cycle.
We can use the ℤ_n connection to define a form of parallel transport along any curve L.
First on each edge e consider the T_ e operator
T_ e𝒪_ vT_ e^-1 =
𝒪_ t( e) if v = s( e)
𝒪_ s( e) if v = t( e)
𝒪_ v if v ≠ s( e), t( e)
which permutes operators and states between the vertices connected to the edge e. Here O_ v is any local operator acting on the vertex v.
The T_ e operator can be constructed explicitly as
T_ e = ∑_p, q=0^n-1ω_n^pq X_ s( e)^pZ_ s( e)^q X_ t( e)^-pZ_ t( e)^-q,
see appendix <ref> for details.
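As written, the expression above is stated up to an overall normalization; including a factor 1/n (an inessential convention, since only the conjugation action T_ e O T_ e^-1 enters the text) it is exactly the operator that permutes the two Z_n sites. The short NumPy sketch below (ours, purely illustrative) checks this for n=4.

import numpy as np

n = 4
omega = np.exp(2j * np.pi / n)
Z = np.diag(omega ** np.arange(n))
X = np.roll(np.eye(n), 1, axis=0)                         # X|a> = |a+1>
mp = np.linalg.matrix_power

# T_e = (1/n) sum_{p,q} omega^{pq} (X^p Z^q)_s (X^{-p} Z^{-q})_t
T = sum(omega ** (p * q)
        * np.kron(mp(X, p) @ mp(Z, q), mp(X, (n - p) % n) @ mp(Z, (n - q) % n))
        for p in range(n) for q in range(n)) / n

SWAP = np.zeros((n * n, n * n))
for a in range(n):
    for b in range(n):
        SWAP[b * n + a, a * n + b] = 1.0                  # |a, b> -> |b, a>
assert np.allclose(T, SWAP)
print("T_e is the permutation operator exchanging the two Z_n sites.")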
For any curve L = { e_1, ⋯, e_k} from vertex v_1 to v_k (see Fig. <ref>), the parallel transport operator is defined as
T_ g[L] = ∏_ e∈ LT_ e[X_ s( e)^ o( e, L)]^A_ e = T_ e_k[X_ v_k^ o( e_k, L)]^A_ e_k⋯ T_ e_1[X_ v_1^ o( e_1, L)]^A_ e_1,
where the arrow in ∏_ e∈ L indicates the direction of the product, and o( e, L) = +1 if the orientation of the edge e aligns with L and o( e, L) = -1 if not.
Note that we have labeled T_ g[L] using the holonomies g in (<ref>), instead of the background gauge field A since the former is the gauge-invariant content of A.
One can readily check that for the local operator 𝒪^( h, α)_ v = X^ h_ vZ^α_ v transforming in the α representation of ℤ_n we have
T_ g[L] O^( h, α)_ v_1T_ g[L]^-1 = ω_n^α∫_L A O^( h, α)_ v_k ,
where ∫_L A = ∑_ e∈ L o( e, L) A_ e.
The parallel transport of the charged operator accrues a U(1) phase corresponding to the Wilson line along L with charge α.
In particular, for closed loops we get
T_ g[L] O^( h, α)_ vT_ g[L]^-1 = ω_n^α∮_L A O^( h, α)_ v = ω_n^α g(L) O^( h, α)_ v,
which is the holonomy associated to the background connection A, (<ref>).
With this, given any Hamiltonian constructed with the above bond-algebra we can define the symmetry twisted Hamiltonian through the substitution of bond operators[Note that in the absence of translation symmetry, the definition of twisted boundary condition is relative to another Hamiltonian.
Given the Hamiltonian H, we can twist the boundary condition to obtain H_ g by inserting symmetry operators in the time-direction or equivalently coupling to a background gauge field/connection.
We can only talk about H_ g as twisted, relative to H.]
Z^_ s( e)Z^†_ t( e) = Z^_ s( e)T_0[ e]Z^†_ s( e) T_0[ e]^-1⟶
Z_ s( e)^T_ g[ e]Z^†_ s( e) T_ g[ e]^-1.
Essentially, this defines bond operators on an edge e using parallel transport from its source to its target vertex. The X_ v operators are unaffected by this as they are not charged.
Note that the product of bond-operators along a curve from v_1 to v_k becomes
∏_ e∈ L[Z^_ s( e)Z^†_ t( e)]^ o( e,L)
= Z^_ v_1Z_ v_k^† ⟶ ∏_ e∈ LZ^ o( e, L)_ s( e) T_ g[ e]Z^- o( e, L)_ s( e) T_ g[ e]^-1
= Z^_ v_1ω_n^∫_L AZ_ v_k^† ,
where ω_n^∫_L A is the Wilson line between charged operators which is the expected minimal coupling for a background gauge field.
For non-contractible loops along homology cycles γ∈ H_1(M_d,,ℤ_n) we find
∏_ e∈γ[Z^_ s( e)Z^†_ t( e)]^ o( e,L)⟶ω_n^∮_γ A=ω_n^ g(γ),
which correspond to the twisted boundary condition along that cycle.
Operators that cross the symmetry operator insertion along the time direction, transform accordingly.
With a slight abuse of notation, we will define the symmetry-twisted bond-algebra as
B_ Z_n,(0)( V ; g)= ⟨
X_ v , Z^_ s( e)Z^†_ t( e) |
∏_ e∈γ[Z^_ s( e)Z^†_ t( e)]^ o( e,L)!=ω_n^ g(γ) ∀ v , e
⟩ .
A common way to implement this in spin-chain models is as follows: if there are L sites along a cycle γ, define Z^†_L+1≡ω_n^ g(γ)Z^†_1.[Using parallel transport and the background gauge field A is a more precise way to do this; we will, however, abuse the notation somewhat for simplicity.]
It is convenient to define the operator
T = ∏_ e∈γ[Z^_ s( e)Z^†_ t( e)]^ o( e,L),
as a way of 'measuring' the twisted boundary condition, or equivalently the holonomy of the background connection A.
Since all the operators in B_ Z_n,(0)( V) (by definition) commute with U, we can simultaneously block-diagonalize B_ Z_n,(0)( V) into eigensectors of U labelled by α∈Rep( Z_n,(0)).
Doing so, the bond algebra decomposes as
B_ Z_n,(0)( V ; g) = ⊕_α B_ Z_n( V ; (α , g)) ,
B_ Z_n,(0)( V ; (α , g)) = ⟨
X_ v , Z^_ s( e)Z^†_ t( e) |
U!=ω_n^α , T!=ω_n^ g(γ) ∀ v , e
⟩ .
The notation U!=ω_n^α means that we are restricting to the corresponding eigenspace of U.
§.§.§ Gauging ℤ_n symmetry and the dual bond-algebra
Next, consider gauging the global Z_n symmetry, which effectively amounts to turning the background gauge field into a dynamical quantum field and making the theory invariant under local ℤ_n transformations.
In order to do so, we first introduce gauge degrees of freedom (ℤ_n spins) on each link e, living in the Hilbert space V_ e≅ C^n.
The edge Hilbert space also admits an action of clock and shift operators X_ e, Z_ e analogous to (<ref>).
We thus obtain the extended Hilbert space
V_ ext=⊗_ e V_ e⊗_ v V_ v=Span_ C{ |a,ϕ⟩ | a∈ C^1(M_d,, Z_n) , ϕ∈ C^0(M_d,, Z_n)} ,
where C^p(M_d,, Z_n) denotes the set of Z_n-valued p-cochains on M_d,, i.e., an assignment of Z_n elements to the p-cells of M_d,.
The clock and shift operators act on the basis states as
Z_ v|a,ϕ⟩ =ω_n^ϕ_ v|a,ϕ⟩ ,
X_ v|a,ϕ⟩ =|a,ϕ+δ^( v)⟩ ,
Z_ e|a,ϕ⟩ =ω_n^a_ e|a,ϕ⟩ ,
X_ e|a,ϕ⟩ =|a+δ^( e),ϕ⟩ ,
where δ^( v), δ^( e) are Z_n-valued 0 and 1-cochains such that
[δ^( v)]_ v'=δ_ v, v' , [δ^( e)]_ e'=δ_ e, e' .
The original spin degrees of freedoms on each vertex ϕ_ v can be thought of as a matter field while the newly introduced spins on each edge a_ e can be thought of as a ℤ_n gauge field.
The physical Hilbert space V_phys⊂ V_ ext is defined as the eigenvalue +1 subspace of the collection of Gauss operators (see Fig. <ref>)
G_ v=X_ v∏_ e| s ( e)= vX_ e^†∏_ e| t ( e)= vX_ e=: X_ vA_ v^† ,
where e| s( e)= v and e| t( e)= v denote all edges e such that v is their source or target, respectively.
The Gauss operator is defined such that the global ℤ_n transformation of charged operators
Z_ v⟶ U Z_ v U^† = ω_n^-1 Z_ v,
become local gauge transformations
Z_ v⟶ G[λ] Z_ v G[λ]^† = ω_n^-λ_v Z_ v, Z_ e⟶ G[λ] Z_ e G[λ]^† = ω_n^λ_ s( e) - λ_ t( e)Z_ e = ω_n^- dλ_ eZ_ e.
Here we have defined a general combination of Gauss operators parameterized by a Z_n-valued 0-cochain λ as G[λ]:=∏_ v( G_ v)^λ_ v that implements a Z_n gauge transformation.
On the states, the Gauss operators act as
G[λ]|a,ϕ⟩= |a+ dλ,ϕ+λ⟩ .
In this representation, the operators X_ e and Z_ e are the Z_n electric and gauge field respectively.
Since the operator Z_ v is charged under the global symmetry which is being gauged, one needs to minimally couple the bond operator Z^_ s( e)Z^†_ t( e) to the gauge field via the replacement
Z^_ s( e)Z^†_ t( e)⟶
Z_ s( e)^Z_ e^Z_ t( e)^† ,
which is the quantum version of (<ref>). One can readily see that this minimally coupled operator is invariant under local gauge-transformations (<ref>).
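This gauge invariance is easy to confirm explicitly. The following NumPy sketch (ours, for illustration; the ring geometry and the choice n=3 are arbitrary) builds the Gauss operators G_ v and the minimally coupled bonds Z^_ s( e)Z_ eZ^†_ t( e) on a ring of three vertices and three edges and checks that they commute.

import numpy as np
from functools import reduce

n = 3
omega = np.exp(2j * np.pi / n)
Zc = np.diag(omega ** np.arange(n))
Xc = np.roll(np.eye(n), 1, axis=0)
I3 = np.eye(n)

# ring of three vertices and three edges, e_i = (v_i, v_{i+1 mod 3});
# tensor factors ordered [v0, v1, v2, e0, e1, e2]
def embed(ops):
    return reduce(np.kron, [ops.get(k, I3) for k in range(6)])

def gauss(v):
    # G_v = X_v  prod_{e: s(e)=v} X_e^dag  prod_{e: t(e)=v} X_e
    out_e, in_e = 3 + v, 3 + (v - 1) % 3
    return embed({v: Xc, out_e: Xc.conj().T, in_e: Xc})

def bond(i):
    # minimally coupled bond Z_{s(e)} Z_e Z_{t(e)}^dag on edge e_i
    return embed({i: Zc, 3 + i: Zc, (i + 1) % 3: Zc.conj().T})

Gs = [gauss(v) for v in range(3)]
Ws = [bond(i) for i in range(3)]
for G in Gs:
    for W in Ws:
        assert np.allclose(G @ W, W @ G)
print("Every minimally coupled bond commutes with every Gauss operator.")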
Therefore the bond algebra after gauging is[We use the notation B_ Z_n,(d-1)( V_ext) for the gauged bond algebra B_ Z_n( V_ext)/ Z_n, since after gauging it is the bond algebra of a (d-1)-form symmetry ℤ_n,(d-1) as we will shortly see.]
B_ Z_n,(d-1)( V_ext) ≃ B_ Z_n/ Z_n=
⟨
X_ v , Z^_ s( e)Z_ e^Z^†_ t( e) | G_ v!=1 , ∏_ e∈ L[Z^_ s( e)Z_ e^Z^†_ t( e)]^ o( e,L)!=
1 ∀ e , v
⟩ .
Note that there is an additional constraint ∏_ e∈ L[Z^_ s( e)Z_ e^Z^†_ t( e)]^ o( e,L)!=1 for each contractible loop L on the lattice.
This follows from the fact that this operator is the image of the operator ∏_ e∈ L[Z^_ s( e)Z_ t( e)^†]^ o( e,L)=1 in the pre-gauged bond algebra (<ref>).
Since gauging is an isomorphism of bond algebras, it maps the identity operator in V to the identity operator in V_ ext.
See Appendix <ref>
for an alternative formulation where this appears more naturally (see
Eq. (<ref>)).
A consequence of this is that in the physical Hilbert space[Note that ∏_ e∈ LZ_ e|a,ϕ⟩ = ω^∮_L a|a,ϕ⟩ = ω^∫_S_Lda|a,ϕ⟩ by Stokes' theorem, where S_L is a surface such that ∂ S_L=L. In particular for a loop L_ p around a plaquette p we have ∮_L_ p a = (da)_ p.]
da=0 ,
and therefore a∈ Z^1(M_d,, Z_n) corresponds to a bona fide Z_n gauge field.
When mapping the symmetry twisted bond algebra (<ref>) under the gauging-related bond algebra isomorphism, one obtains
B_ Z_n,(d-1)( V_ext ; (α, g))=
⟨
X_ v , Z^_ s( e)Z_ e^Z^†_ t( e) | G_ v!=1 , ∏_ e∈γ[Z^_ s( e)Z_ e^Z^†_ t( e)]^ o( e,γ)!=ω_n^ g(γ) ,
U!=ω^α ∀ e , v
⟩ .
The constraints ∏_ e∈γ[Z^_ s( e)Z_ e^Z^†_ t( e)]^ o( e,γ)!=ω_n^ g(γ), restrict the extended Hilbert space to a single gauge class of Z_n gauge fields labelled by g ∈ H^1(M_d,, Z_n), which satisfies
da=0 , ∮_γ a= g(γ) .
Note that for contractible loops we have g(γ)=0. It is always more convenient to work in a basis in which the Gauss constraint has been solved.
To do so, we perform a unitary transformation U such that the Gauss operator U G_ vU^† only acts on the vertex Hilbert space V_ v <cit.>.
More precisely, we require a unitary U such that
U G_ vU^†=X_ v .
In the basis (<ref>), the unitary transformed Gauss operator is
U G[λ]U^† =∑_a,ϕ|a,ϕ + λ⟩⟨ a,ϕ| .
Such a unitary can be conveniently expressed in terms of controlled-X operators as
U=∏_ v[∏_ e| t( e)= v(CX^†)_ v, e∏_ e| s( e)= v(CX)_ v, e] .
Here (CX)_ v, e and (CX^†)_ v, e act on the edge e and vertex v such that
(CX)_ v, e|a_ e,ϕ_ v⟩ = |a_ e+ ϕ_ v,ϕ_ v⟩ ,
(CX^†)_ v, e|a_ e,ϕ_ v⟩ = |a_ e- ϕ_ v,ϕ_ v⟩ ,
The various operators transform under this unitary transformation as
UZ_ vU^† = Z_ v , UX_ eU^† = X_ e ,
UX_ vU^† = X_ vA_ v , UZ_ eU^† = Z_s( e)^†Z^_ e
Z^_t( e) .
One then obtains a bond algebra unitarily equivalent to (<ref>) as
B_ Z_n,(d-1)( V_ edge)
=⟨
A_ v ,
Z_ e | ∏_ e∈ LZ_ e^ o( e, L)!=
1 ∀ e , v
⟩ ,
where A_ v was defined in (<ref>).
In writing this expression, we have removed the vertex degrees of freedom by implementing the unitary transformed Gauss constraint X_ v!=1.
Therefore the bond algebra is an algebra of operators on the edge Hilbert space V_ edge=⊗_ e V_ e.
The symmetry twisted bond algebra (<ref>) has the following form in this basis
B_ Z_n,(d-1)( V_ edge ; (α , g))=⟨
A_ v ,
Z_ e | ∏_ e∈γZ_ e^ o( e, γ)!=ω_n^ g(γ) , ∏_ vA_ v!=ω_n^α ∀ e , v
⟩ .
This bond algebra after gauging is symmetric with respect to a (d-1)-form symmetry that is dual to the 0-form symmetry (<ref>) and is generated by closed (Wilson) loops on L
[see Eq. (<ref>)
in Appendix <ref>]
W_L=∏_ e ∈ LZ_ e^ o( e, L) .
Note that the symmetry twisted boundary conditions for a (d-1)-form symmetry are defined with respect to a non-contractible d-cycle, i.e. all of space.
Therefore, we obtain that in the bond-algebra B_ Z_n,(d-1), the role of α and g are swapped, i.e., g label the symmetry eigen-sectors while α label the symmetry twisted boundary condition.
In Sec. <ref>, we will describe the mapping of symmetry twisted sectors after gauging in a more general setting.
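The exchange of symmetry eigen-sectors and twisted boundary conditions can already be made completely explicit in the simplest setting d=1, n=2, where the construction above reduces to the lattice Kramers-Wannier duality of the transverse-field Ising chain. The sketch below (our own numerical illustration; the system size, the coupling, and the bonds on which the twists are placed are arbitrary choices, and any other placement related by relabeling gives the same spectra) checks that the spectrum of the twisted Ising chain in a fixed U eigensector coincides with the spectrum of its gauged dual in the corresponding dual sector.

import numpy as np
from functools import reduce

L, lam = 6, 0.7                                   # chain length and coupling (arbitrary)
sx = np.array([[0., 1.], [1., 0.]]); sz = np.diag([1., -1.]); I2 = np.eye(2)

def embed(ops):
    return reduce(np.kron, [ops.get(k, I2) for k in range(L)])

def sector_spectrum(H, P):
    # eigenvalues of H restricted to the +1 eigenspace of the projector P ([H, P] = 0)
    vals, vecs = np.linalg.eigh(P)
    basis = vecs[:, vals > 0.5]
    return np.sort(np.linalg.eigvalsh(basis.conj().T @ H @ basis))

U = reduce(np.kron, [sx] * L)                     # original Z_2 symmetry
W = reduce(np.kron, [sz] * L)                     # dual (d-1)=0-form symmetry
for alpha in (0, 1):
    for g in (0, 1):
        # Ising chain H_g: twist g inserted on the bond (L-1, 0)
        H = -sum(embed({i: sx}) for i in range(L)) \
            - lam * sum(((-1) ** g if i == L - 1 else 1) * embed({i: sz, (i + 1) % L: sz})
                        for i in range(L))
        spec = sector_spectrum(H, (np.eye(2 ** L) + (-1) ** alpha * U) / 2)

        # gauged dual on edge spins: A_v -> XX bonds, Z_s Z_t -> single Z;
        # the dual twist alpha sits on one A_v term, the dual charge sector is W = (-1)^g
        Hd = -sum(((-1) ** alpha if i == L - 1 else 1) * embed({i: sx, (i + 1) % L: sx})
                  for i in range(L)) \
             - lam * sum(embed({i: sz}) for i in range(L))
        spec_d = sector_spectrum(Hd, (np.eye(2 ** L) + (-1) ** g * W) / 2)
        assert np.allclose(spec, spec_d)
print("Sector-resolved spectra of the Ising chain and its gauged dual agree.")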
§.§.§ Dual higher-symmetries and twist defects
Gauge theories are atypical in condensed matter; however, they can emerge as low-energy descriptions of condensed-matter models.
It is often convenient to drop the constraint ∏_ e∈ L Z_ e^ o( e, L)=1 for contractible loops L, and consider a larger Hilbert space V̂_ edge with the bond algebra
B̂_ Z_n,(d-1)( V_ edge)
=
⟨ A_ v , Z_ e | ∀ e , v ⟩.
This bond algebra has a subalgebra
B_[ Z_n,(d-1), Z_n,(1)]( V_ edge) =
⟨ A_ v , B_ p | ∀ p , v
⟩⊂B̂_ Z_n,(d-1)( V_ edge) ,
which is the bond algebra of models with a Z_n,(1) 1-form symmetry in addition to the Z_n,(d-1) (d-1)-form symmetry.
Here the plaquette operator is defined as
B_ p = ∏_ e∈ pZ_ e^ o( e, p),
which is the smallest contractible loop B_ p = W_∂ p around a plaquette p.
The (d-1)-form symmetry is generated by the lines (<ref>), and the 1-form symmetry is generated by closed (d-1)-dimensional surfaces S^(d-1),∨ in the dual lattice
Γ(S^(d-1),∨) = ∏_ e X_ e^Int( e, S^(d-1),∨),
where Int( e, S^(d-1),∨) denotes the intersection number of the surface S^(d-1),∨ and the edge e:
Int( e, S^(d-1),∨)=0 when the edge and surface do not intersect, and +1 or -1 if the edge is oriented along or against the outward normal of the surface (see Fig. <ref>). Compare these to (<ref>) and (<ref>).
The vertex operators A_ v are the smallest contractible sphere S^(d-1),∨_ v in the dual lattice around the vertex v, i.e., A_ v = Γ(S^(d-1),∨_ v).
The generators of the bond algebra B_ p and A_ v all commute with each other and in fact B_[ Z_n,(d-1), Z_n,(1)]( V_ edge) is the commutant algebra of B̂_ Z_n,(d-1)( V_ edge).
The simplest Hamiltonian to write within this bond algebra is
H = -∑_ vA_ v - ∑_ pB_ p + H.c.,
which is nothing but the ℤ_n toric code in d spatial dimensions. This model spontaneously breaks the (d-1)-form and 1-form symmetries and is topologically ordered.
Ground-states of this model satisfy B_ p !=1, which is the constraint for the gauge-invariant Hilbert-space.
The gauge theory therefore emerges dynamically at low energies.
If we add a term λ∑_ eZ_e, for small λ, the 1-form symmetry is explicitly broken.
But the theory is still in the same topological phase, as the 1-form symmetry emerges at low energy and is spontaneously broken.
This is a general property of topologically ordered phases: at low energy they are described by a topological quantum field theory (TQFT), whose higher-form symmetries are spontaneously broken.
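For n=2 the spontaneous breaking of the higher-form symmetries can be confirmed directly by counting ground states. The following sketch (ours, purely illustrative) builds the vertex and plaquette operators on a 2×2 torus, i.e., on 8 edge qubits, and computes the dimension of their common +1 eigenspace, which gives the expected four-fold topological ground-state degeneracy.

import numpy as np
from functools import reduce

sx = np.array([[0., 1.], [1., 0.]]); sz = np.diag([1., -1.]); I2 = np.eye(2)
Lx = Ly = 2
def h(x, y): return (x % Lx) * Ly + (y % Ly)              # horizontal edge (x,y)->(x+1,y)
def v(x, y): return Lx * Ly + (x % Lx) * Ly + (y % Ly)    # vertical   edge (x,y)->(x,y+1)
n_edges = 2 * Lx * Ly

def embed(op, edges):
    return reduce(np.kron, [op if e in edges else I2 for e in range(n_edges)])

A = [embed(sx, {h(x, y), h(x - 1, y), v(x, y), v(x, y - 1)})   # vertex (star) terms
     for x in range(Lx) for y in range(Ly)]
B = [embed(sz, {h(x, y), h(x, y + 1), v(x, y), v(x + 1, y)})   # plaquette terms
     for x in range(Lx) for y in range(Ly)]

P = reduce(np.matmul, [(np.eye(2 ** n_edges) + O) / 2 for O in A + B])
print("ground-state degeneracy on the 2 x 2 torus:", int(round(np.trace(P).real)))   # 4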
We saw that the constraint B_ p = ∏_ e∈ pZ_ e^ o( e, p)!= 1 was necessary for the mapping between (<ref>) and (<ref>) to be invertible.
However, there is nothing inconsistent with the full unconstrained bond algebra (<ref>). It is natural to wonder whether the duality holds on this larger algebra. In order to see how that works, let us decompose the Hilbert space into simultaneous eigenspaces of all plaquette operators B_ p
V̂_ edge = ⊕_Φ∈ C^2(M_d,, ℤ_n)V̂_ edge^Φ,
where each 2-cochain Φ = {ϕ_p} is an assignment of ϕ_ p∈ℤ_n values on each plaquette. All states in V̂_ edge^Φ are eigenstates of the plaquette operators with the eigenvalues B_p|ψ⟩_Φ = ω_n^ϕ_ p|ψ⟩_Φ. In particular, the space with all ϕ_ p=0 (denoted as Φ = 0), satisfies the previous constraint B_ p=1 for all p. We can similarly decompose the bond-algebra
B̂_ Z_n,(d-1)( V_ edge) = ⊕_Φ∈ C^2(M_d,, ℤ_n)B̂_ Z_n,(d-1)(V̂_ edge^Φ).
We have already seen that B̂_ Z_n,(d-1)(V̂_ edge^Φ=0) is dual to the bond algebra (<ref>).
Gauging gives rise to the following bond algebra duality
B̂_ Z_n,(d-1)(V̂_ edge^Φ) ≅ B^Φ_ Z_n,(0)( V)=⟨
X_ v , Z̃^_ s( e)Z̃^†_ t( e) | ∏_ e∈ p[Z̃_ s( e)Z̃^†_ t( e)]^ o( e, p)!=ω_n^ϕ_ p, ∀ v , e
⟩ .
In order to understand this better, let us consider Φ such that ϕ_ p=α(δ_ p, p_2-δ_ p, p_1) or equivalently B_ p=1 for all p≠ p_1, p_2, while B_ p_1=ω_n^-α and B_ p_2=ω_n^α. In the toric code, this corresponds to the subspace with plaquette-like anyonic excitations on p_1 and p_2. In order to define Z̃^_ s( e)Z̃^†_ t( e), we need to consider a line L^∨ on the dual lattice as in figure <ref>. We then have[For general Φ∈ C^2(M_d,, ℤ_n), we couple to a background gauge field A: Z̃^_ s( e)Z̃^†_ t( e) = Z^_ s( e)ω_n^A_ e Z^†_ t( e) such that ( dA)_ p = ϕ_p. Thus the presence of twist defects violates the 1-cocycle condition at the locations of the defects.]
Z̃^_ s( e)Z̃^†_ t( e) =
ω_n^α Z^_ s( e) Z^†_ t( e) if e crosses L^∨
Z^_ s( e) Z^†_ t( e) otherwise
 ,
where all the red bonds in figure <ref> have a phase ω_n^α. This guarantees the correct mapping
B_ p = ∏_ e∈ pZ_ e^ o( e, p)!=ω_n^ϕ_ p⟶∏_ e∈ p[Z̃_ s( e)Z̃^†_ t( e)]^ o( e, p)!=ω_n^ϕ_ p.
This can be understood as the insertion of an open 0-form symmetry surface U^α[S^(d)] along the time direction, such that it crosses the Hilbert-space time-slice along the green curve in figure <ref>. Equation (<ref>) can be understood as saying that every bond operator Z^_ s( e)Z^†_ t( e) that crosses the green line gets transformed by U^α[S^(d)]. This creates two twist defects on the plaquettes p_1 and p_2. Any Hamiltonian constructed from the bond algebra B^Φ_ Z_n,(0)( V) will have extrinsic twist defects on the plaquettes where Φ is not zero.
This means that an invertible duality exists for the full unconstrained (d-1)-form symmetric bond algebra (<ref>), but the dual algebra is a direct sum of 0-form symmetric bond algebras with all possible twist defects
B̂_ Z_n,(d-1)( V_ edge) ≅⊕_Φ∈ C^2(M_d,, ℤ_n) B^Φ_ Z_n,(0)( V).
This also makes sense as the dimension of the Hilbert space of spins on edges V_ edge is larger than the spins on vertices V.
The bond algebras (<ref>) and (<ref>), their duality to each other, and their gapped phases can be derived directly and systematically from a topological order in one higher dimension using a construction we call Topological Holography <cit.>.
This is closely related to concepts in high-energy physics and string theory, such as symmetry TFTs <cit.> or holographic symmetry <cit.>.
We will, however, not pursue that approach in this paper.
Finally, it is worth mentioning that the full symmetry structure of B_ Z_n,(d-1) is a higher representation category dRep( Z_n) whose invertible subcategory is generated by W_L for L∈ Z_1(M_d,, Z) <cit.>.
The remaining higher-dimensional symmetry operators are obtained via condensations of lines corresponding to subgroups of Rep( Z_n)≅ Z_n on sub-manifolds of dimension greater than 1 <cit.>.
§.§ Gauging finite Abelian sub-symmetry
In this section, we describe the gauging of a subgroup of Z_n.
In particular, if Z_q⊂ Z_n, then there is a short exact sequence
1⟶ Z_q⟶ Z_n= Z_pq⟶ Z_n/q= Z_p⟶ 1 .
Such a sequence is determined by an extension class [ϵ]∈ H^2( Z_p, Z_q)= Z_gcd(p,q).
This means we can think of ℤ_pq as ℤ_q×ℤ_p as a set but with a ϵ-twisted product
(a, b)×(a', b') = (a + a' + ϵ(b ,b'), b+b'), a ,a'∈ℤ_q , b ,b'∈ℤ_p.
We will often use the following notation for the ϵ-twisted products: ℤ_pq≃ℤ_q×_ϵℤ_p. Note that while (a, 0) corresponds to a subgroup ℤ_q⊂ℤ_n, (0,b) is not a ℤ_p subgroup.
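For the case relevant below, p=q=2, the nontrivial extension class may be represented by ϵ( b, b')= b b', and the identification g=2 a+ b makes the twisted product explicit; the following few lines (ours, purely illustrative) check exhaustively that it reproduces the multiplication in Z_4.

from itertools import product

def twisted_mult(x, y):                 # x = (a, b), y = (a', b') in Z_2 x Z_2
    (a, b), (ap, bp) = x, y
    return ((a + ap + b * bp) % 2, (b + bp) % 2)

to_z4 = lambda ab: (2 * ab[0] + ab[1]) % 4
pairs = list(product(range(2), repeat=2))
assert all(to_z4(twisted_mult(x, y)) == (to_z4(x) + to_z4(y)) % 4
           for x in pairs for y in pairs)
print("(a, b) -> 2a + b is an isomorphism  Z_2 x_eps Z_2  ->  Z_4.")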
We will see that gauging Z_q⊂ Z_n furnishes a theory with a Z_n/q global 0-form symmetry, a (d-1)-form Z_q symmetry and a mixed anomaly between them that is determined by ϵ <cit.>. Written schematically
ℤ_n, (0) G_(0,d-1)^ϵ = [ℤ_n/q, (0), ℤ_q, (d-1)]^ϵ,
where G_(0,d-1)^ϵ = [ℤ_n/q, (0), ℤ_q, (d-1)]^ϵ is a (higher) d-group consisting of 0-form and (d-1)-form symmetries with a mixed anomaly determined by ϵ. It couples to background 1-form and d-form gauge fields.
In general, anomalies impose strong constraints on the low energy physics realized in a quantum system.
For instance, any gapped non-degenerate state can only preserve a sub-symmetry that trivializes the anomaly.
We will explore such aspects of anomalies in later sections while studying phase diagrams of spin models with mixed anomalies.
Henceforth, we will use the simplest case with a non-trivial extension class which occurs when p=q=2 to pinpoint lattice manifestations of the mixed anomaly.
Although this is the simplest case that exemplifies these features, the lessons learnt can be generalized to any finite Abelian group.
Let us again consider the triangulation M_d, endowed with the tensor product of local vector spaces V_ v= C^4≅ C[ Z_4] assigned to each vertex v.
The operator algebra acting on V=⊗_ v V_ v is generated by the operators X_ v and Z_ v that satisfy the relations
Z_ vX_ v'= iX_ v'Z_ v , Z^4_ v=X^4_ v=1 .
We are interested in the space of Hamiltonians which are symmetric with respect to the Z_4 symmetry generated by U=∏_ vX_ v, which is contained within the bond algebra
B_ Z_4,(0)( V) =⟨
X_ v , Z^_ s( e)Z^†_ t( e) | ∀ v , e
⟩ .
Similar to (<ref>) in the case of gauging Z_n, we define the bond algebra in a definite symmetry sector as
B_ Z_4,(0)( V ; (α , g))
=
⟨
X_ v , Z^_ s( e)Z^†_ t( e) | U!= i^α , ∏_ e∈γ[Z_ s( e)^Z_ t( e)^†]^ o( e, γ)!= i^ g(γ) ∀ v , e
⟩ ,
where α and g are the symmetry eigen-sector and symmetry twisted boundary condition labels respectively.
In order to gauge the Z_2 subgroup of the global symmetry, we introduce Z_2 gauge degrees of freedom on the edges such that the extended Hilbert space is
V_ext=⊗_ e C^2_ e⊗_ v C^4_ v .
There is an action of Pauli operators σ^μ_ e (μ=0 ,x ,y ,z) on the edge Hilbert space V_ e.
We define a basis {|a,ϕ⟩} that spans V_ext, where ϕ∈ C^0(M_d,, Z_4) and a∈ C^1(M_d,, Z_2) such that
Z_ v|a,ϕ⟩ = i^ϕ_ v|a,ϕ⟩ , X_ v|a,ϕ⟩ =|a,ϕ+δ^( v)⟩ ,
σ^z_ e|a,ϕ⟩ =(-1)^a_ e|a,ϕ⟩ , σ^x_ e|a,ϕ⟩ =|a+δ^( e),ϕ⟩ ,
where δ^( v)∈ C^0(M_d,, Z_4) and δ^( e)∈ C^1(M_d,, Z_2) are defined in (<ref>) and the addition is implicitly modulo 4 or modulo 2 depending on the group the cochains involved are valued in.
The physical Hilbert space is the gauge-invariant subspace of V_ext.
The notion of gauge invariance follows from considering the local representative of the Z_2 symmetry being gauged (generated by U^2) and appending it with link operators, i.e.,
G_ v=X_ v^2∏_ e⊃ vσ^x_ e=: X_ v^2A_ v ,
where ∏_ e⊃ v denotes the product over edges which are connected to the vertex v.
A general Gauss operator G[λ]=∏_ v G_ v^λ_ v with λ∈ C^0(M_d,, Z_2) acts on the above mentioned basis spanning V_ ext as a Z_2 gauge transformation
G[λ]|a,ϕ⟩ = |a-dλ,ϕ+2λ⟩ ,
or on the level of operators
Z_ v⟶ G[λ] Z_ v G[λ]^† = (-1)^λ_v Z_ v, σ^z_ e⟶ G[λ] σ^z_ e G[λ]^† = (-1)^ dλ_ eσ^z_ e.
To gauge the bond algebra, (<ref>), we lift B_ Z_4( V) to the enlarged V_ ext, impose the Gauss constraint and consider gauge-invariant versions of each of the operators.
Doing so, we find
B_ G_(0,d-1)^ϵ( V_ ext)
≃ B_ Z_4
/ Z_2
=⟨
X_ v , Z^_ s( e)σ^z_ eZ^†_ t( e) | G_ v!=1 , ∏_ e∈ L[Z^_ s( e)σ^z_ eZ^†_ t( e)]^ o( e, L)!=1 , ∀ v , e
⟩ ,
where L are contractible loops on the direct lattice and we have defined the (higher) d-group
G_(0,d-1) = [ℤ_2,(0), ℤ_2,(d-1)]^ϵ.
The constraint on ∏_ e∈ L[Z_ s( e)^σ^z_ eZ_ t( e)^†]^ o( e, L) descends from the fact that this operator is the image of ∏_ e∈ L[Z^_ s( e)Z^†_ t( e)]^ o( e, L)=1 under the bond algebra isomorphism.
We could impose additional constraints
∏_ vX_ v!= i^α , ∏_ e∈γ[Z^_ s( e)σ^z_ eZ^†_ t( e)]^ o( e, γ)!= i^ g(γ) ,
which allow us to track how the symmetry sectors map under partial gauging.
To understand this we need to ask, what these sectors mean in terms of the symmetry structure of the partially gauged bond algebra.
Symmetries of the partially gauged bond algebra:
There are two types of operators that commute with the entire algebra (<ref>).
Firstly, since we have gauged Z_2 ⊂ Z_4, we expect there to still be a residual Z_2 0-form symmetry, which is generated by an operator which acts on all of space M_d, simultaneously via the operator
U =∏_ vX_ v .
At first glance, this may look like a Z_4 symmetry generator as X_ v^4=1.
However, since the Z_2 subgroup has been gauged, U actually generates a Z_2 symmetry.
Although, as is evident from (<ref>),
U^2 is not always 1 as one would expect for a usual Z_2 symmetry.
This peculiarity is rooted in the fact that the Z_2 0-form symmetry participates in a mixed anomaly with the second kind of symmetry, which is a Z_2 (d-1)-form symmetry generated by the following operator defined on a non-contractible 1-cycle γ
W_γ=∏_ e∈γw_ e^ o( e, γ) , w_ e=Z_ s( e)^σ^z_ eZ_ t( e)^† .
Notice that the local representative of the line operator cannot be the naive choice σ^z_ e since it is not gauge invariant.
Furthermore W_L, for a contractible 1-cycle L, acts as the identity on the constrained (flux-free) Hilbert space.
Just like U, the line operators W_γ are unusual Z_2 (d-1)-form generators, since depending on g in (<ref>) they do not square to the identity.
In order to clarify the mixed anomaly, we need to define operators that measure the symmetry twisted boundary conditions for the 0-form and (d-1)-form symmetries.
As discussed in Sec. <ref>, symmetry twisted boundary conditions with respect to the 0-form symmetry (STBC^(0)) are related to the holonomy g(γ) of a 1-form Z_2 gauge field around any non-contractible 1-cycle γ.
Likewise, the symmetry twisted boundary conditions with respect to the (d-1)-form symmetry (STBC^(d-1)) correspond to a holonomy α of a d-form Z_2 background gauge field around the fundamental d-cycle, i.e., around all of space.
These holonomies are measured by the operators T^(0)_γ and T^(d-1)
T^(0)_γ = ∏_ e∈γZ_ s( e)^2Z_ t( e)^2= (-1)^ g(γ) , T^(d-1)=∏_ vA_ v=(-1)^α .
A manifestation of the mixed-anomaly is that the 0-form Z_2 symmetry is fractionalized to Z_4 in the symmetry twisted sector of the (d-1)-form symmetry and conversely, the (d-1)-form symmetry is fractionalized to a Z_4 symmetry in the symmetry twisted sector of the 0-form symmetry (see Fig. <ref>)
U^2= T^(d-1) , W_γ^2= T^(0)_γ .
Another related consequence of the mixed anomaly is the symmetry fractionalization on the local representative of a (d-1)-form symmetry generator.
More precisely, the operator W_L defined on an open line L, takes the form
W_L=Z^ _ s(L)[∏_ e∈ Lσ^z_ e] Z^†_ t(L) .
Notice that the end-points of the string are now appended with operators that are charged under U and in fact carry a fractional charge, i.e., the U^2 eigenvalue is -1 for such an operator (see Fig. <ref>).
This is the phenomenon of symmetry fractionalization <cit.> and is related to a mixed anomaly between the 0-form symmetry and the (d-1)-form symmetry, as we will describe in more detail in the next section.
Solving the Gauss constraint: Let us now try to find a unitary that disentangles the edge degrees of freedom from the Gauss operator such that the Gauss constraint may be solved.
Specifically, we seek an operator U such that
U G_ v U^†=X_ v^2 .
We make an ansatz that the action is a generalized controlled operation, i.e., it acts as
U=∑_a,ϕ|a+f(ϕ),ϕ⟩⟨ a,ϕ| .
Inserting the ansatz (<ref>) into (<ref>), one finds the constraint f(ϕ)+f(ϕ+2δ^( v))= dδ^( v),
which can be solved by f= d⌊ϕ/2 ⌋, where ⌊·⌋ denotes the floor function.
Therefore the unitary is
U=∑_a,ϕ|a+ d⌊ϕ/2 ⌋,ϕ⟩⟨ a,ϕ| .
Using (<ref>), the action on all the remaining operators in the bond algebra can be computed.
For instance σ^x_ e and Z_ v remain invariant under the action of U.
Meanwhile X_ v and σ^z_ e transform in a more involved way
UX_ vU^† = ∑_a,ϕ|a+ d(⌊ϕ/2⌋+ ⌊ (ϕ+ δ^( v))/2⌋),ϕ+δ^( v)⟩⟨ a,ϕ|
=X_ v[P_ v^(+)+A_ vP_ v^(-)] ,
UZ^_ s( e)σ^z_ eZ^†_ t( e)U^† = ∑_a,ϕexp{ iπ(a+ d⌊ϕ/2⌋)_ e + iπ ( dϕ)_ e/2}
|a,ϕ⟩⟨ a,ϕ|
=1/2(1- iZ_ s( e)^2)σ^z_ e(1+ iZ_ t( e)^2) ,
where we have defined P^(±)_ v=(1± Z_ v^2 )/2.
We are now in a position to write down the unitary transformed bond algebra
B_ G_(0,d-1)( V_ ext) =⟨
X_ v[P_ v^(+)+ A_ vP_ v^(-)] , 1- iZ_ s( e)^2/√(2)σ^z_ e1+ iZ_ t( e)^2/√(2) | X_ v^2!=1 ,
∏_ e∈ Lσ^z_ e!=1 , ∀ v , e
⟩ .
Since the constraint has now been localized on the vertices, it can be readily solved.
We define a restricted basis on the vertex Hilbert space V^rest._ v⊂ V_ v spanned by
|↑ ⟩ =1/√(2){|ϕ_ v=0 ⟩ + |ϕ_ v=2 ⟩} ,
|↓ ⟩ =1/√(2){|ϕ_ v=1 ⟩ + |ϕ_ v=3 ⟩} ,
for which X_ v^2=1.
The operators X_ v and Z_ v^2 acting on V_ v can be restricted to V^rest._ v since these operators commute with X_ v^2 and therefore leave the space spanned by (<ref>) invariant.
In this basis,
X_ v|_ V^rest._ v∼σ^x_ v , Z_ v^2|_ V^rest._ v∼σ^z_ v .
Since only combinations of X_ v and Z_ v^2 appear in the bond algebra in (<ref>), we can solve the Gauss constraint and directly work in the restricted Hilbert space
B_ G_(0,d-1)( V^rest.) =⟨σ^x_ v[P_ v^(+)+A_ vP_ v^(-)] , 1- iσ^z_ s( e)/√(2)σ^z_ e1+ iσ^z_ t( e)/√(2) | ∏_ e∈ Lσ^z_ e!=1 ∀ v , e
⟩ ,
where V^rest.=⊗_ v V_ v^rest.⊗_ e V_ e⊂ V_ext. and P^(±)_ v:= (1 ±σ^z_ v)/2 on V^rest..
Symmetry and mixed anomaly for the transformed bond algebra: Since the bond algebras (<ref>) and (<ref>) are isomorphic, they have identical symmetry structures.
In the frame of B_ G_(0,d-1), the Z_2 0-form symmetry is generated by
U=∏_ vu_ v , u_ v=σ^x_ v[P^(+)_ v+ A_ vP^(-)_ v]
while the (d-1)-form symmetry is generated by the (closed) line operator
W_γ=∏_ e∈γw_ e^ o( e, γ) , w_ e=1/2(1- iσ^z_ s( e))σ^z_ e(1+ iσ^z_ t( e)) .
These two symmetries have a mixed anomaly, which manifests as
U^2 =∏_ vA_ v= T^(d-1) ,
W_γ^2 = ∏_ e∈γσ_ e^z= T^(0)_γ ,
and reproduces (<ref>).
In the next sections, we will see that this anomaly has important consequences for the phase realized in the partially gauged model.
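The local mechanism behind these relations is the identity u_ v^2= A_ v: the local representative of the residual Z_2 symmetry squares to the flux operator A_ v rather than to the identity, so that U^2=∏_ v A_ v= T^(d-1) and U acquires eigenvalues ± i whenever T^(d-1)=-1. The following short NumPy check (ours, purely illustrative) verifies this identity on the minimal system of one restricted vertex spin and its two adjacent edge spins.

import numpy as np
from functools import reduce

kron = lambda *ops: reduce(np.kron, ops)
sx = np.array([[0., 1.], [1., 0.]]); sz = np.diag([1., -1.]); I2 = np.eye(2)

# tensor factors ordered [vertex, left edge, right edge]
Av = kron(I2, sx, sx)                              # A_v on the two adjacent edges
Pp = kron((I2 + sz) / 2, I2, I2)                   # P^(+)_v
Pm = kron((I2 - sz) / 2, I2, I2)                   # P^(-)_v
u_v = kron(sx, I2, I2) @ (Pp + Av @ Pm)            # u_v = sigma^x_v [P^+_v + A_v P^-_v]

assert np.allclose(u_v @ u_v.conj().T, np.eye(8))  # u_v is unitary
assert np.allclose(u_v @ u_v, Av)                  # but u_v^2 = A_v rather than 1
print("u_v^2 = A_v: the residual Z_2 symmetry fractionalizes when prod_v A_v = -1.")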
§ GAUGING AS TOPOLOGICAL DUALITIES
In this section, we describe a general procedure <cit.> to gauge a finite global symmetry and its sub-groups from a space-time point of view deriving relations between partition functions and energy spectra of dual theories.
We will pay attention to the global symmetry of the gauged theory thus obtained and to the mapping of the symmetry twisted sectors between the gauged and original theory.
As we described in Sec. <ref>, when gauging a finite subgroup of the full symmetry group, the dual or gauged theory has a symmetry structure with a mixed anomaly.
In such cases, the mapping of symmetry sectors can be subtle.
We detail how this works for the simplest case of gauging Z_2 ⊂ Z_4, which contains the main new result of this section.
Although we describe only this simplest case, our analysis and approach generalize to other finite Abelian groups.
§.§ Gauging finite Abelian symmetry
We begin by describing the gauging of a finite Abelian symmetry group in a d+1 dimensional quantum system T, before addressing its subgroups. Unlike the previous analysis working on the level of Hilbert spaces of spin models, here we take a space-time point of view and work on the level of partition functions.
§.§.§ Gauging, ungauging and dual symmetries
Let us consider a quantum system, denoted by T in d+1 spacetime dimensions and symmetric under a finite Abelian group G_(0).
Such a theory can be defined in the presence of a background G_(0) gauge field A_1, which can equivalently be understood as a network of codimension-1 (in spacetime) symmetry defects.
We denote the partition function of T coupled to a A_1 background by Z_ T[A_1].
If the theory T does not have any 't Hooft anomaly with respect to the group G_(0), then G_(0) can be gauged in T to obtain a new theory T^∨.
Gauging the 0-form symmetry amounts to summing over background gauge fields/symmetry defect networks <cit.>.
The partition function of the gauged theory has the form
Z_ T^∨=1/|G|^b_0(M)∑_a_1 Z_ T[a_1] ,
where we have assumed that M is path connected and the sum is over gauge classes of G-bundles, i.e., a_1∈ H^1(M, G).
Here b_n(M) is the n'th Betti number of M.
The theory T^∨ has a (d-1) form symmetry G^∨=hom( G, R/2π Z)≅ G <cit.>, denoted by G^∨_(d-1).
The symmetry operator corresponding to an element g^∨∈ G^∨ defined on a 1-cycle γ is
W_ g^∨(γ)=exp{ i g^∨∮_γa_1} .
We can couple T^∨ to a background d-form gauge field A_d^∨, which
corresponds to inserting a network of line-like symmetry operators.
The partition function of T^∨ in the presence of a A^∨_d is given by
Z_ T^∨[A^∨_d]=1/| G|^b_0(M)∑_a_1 Z_ T[a_1]exp{ i∫_Ma_1∪ A^∨_d} .
Gauge invariance of the partition function under background gauge transformations of A^∨_d is guaranteed by the fact that da_1=0.
The G^∨_(d-1) global symmetry of the gauged theory T^∨ can itself be gauged to deliver a theory T^∨∨ which again has a symmetry G_(0)
Z_ T^∨∨[A_1]=1/| G^∨|^ N_d(M)∑_a_d^∨ Z_ T^∨[a_d^∨]exp{ i∫_M a_d^∨∪ A_1} ,
where N_p(M)=∑_j=1^p(-1)^j+1b_p-j(M).
Note that N_1=b_0(M), which recovers (<ref>).
Then inserting (<ref>) into (<ref>), one obtains
Z_ T^∨∨[A_1]=exp{- χ(M)ln(| G|)} × Z_ T[(-1)^dA_1] ,
where χ(M) = ∑_j=0^d+1(-1)^j b_j(M) is the Euler characteristic of the manifold M.
Here we used the relation
δ(A_d+1-n) = 1/| G|^b_n(M)∑_a_n∈ H^n(M, G)exp{ i ∫_M a_n∪ A_d+1-n}
Hence gauging twice acts as “charge conjugation” in odd spatial dimensions <cit.>, up to a local curvature counterterm.
In fact, the so-called Euler counterterm in (<ref>) can be absorbed by redefining the normalization as N_p=√(| G|^b_p(M)) in (<ref>) and (<ref>).
We will however work with our initial choice of normalization as doing so simplifies the mapping of symmetry sectors between the original T and the gauged theory T^∨.
§.§.§ Mapping of symmetry sectors
Using (<ref>) and (<ref>), one can see how the symmetry sectors on the different sides of the gauging-related duality map into each other.
Let us consider a d+1 dimensional spacetime manifold that decomposes as M=S^1 × M_d, where M_d is the spatial d-manifold and S^1 is the circle in the time direction, such as to connect with the Hamiltonian description in the rest of the paper.
We work with imaginary time and will therefore be considering thermal partition functions.
A G background gauge field A_1 (up to gauge transformations) is valued in H^1(M, G), i.e., it is labelled by the holonomies of the gauge field A_1 along the homology cycles of M.
Using the Künneth theorem, the 1st homology group of M=S^1× M_d decomposes as
H_1(S^1× M_d, ℤ)= Z ⊕ H_1(M_d, ℤ)= Span_ Z⟨γ , γ_1 , γ_2 , … , γ_b_1(M_d)⟩ .
Then the gauge field A_1 can be labelled by its holonomies around the homology cycles of M as A_1=( g_t,g⃗) where g⃗≡ ( g_1 , g_2 , … , g_b_1(M_d))∈ H^1(M_d, G) and g_t∈ G, i.e.,
∮_γ_jA_1= g_j , ∮_γA_1= g_t .
The thermal partition function of a quantum system coupled to such a 0-form background gauge field is
Z_ T[A_1]≡ Z_ T[ g_t ,g⃗]= Tr[ U_ g_t e^-β H_g⃗] ,
where U_ g_t is the symmetry operator corresponding to g_t and H_g⃗ is the Hamiltonian of interest with g_j twisted boundary conditions along the j^ th homology cycle of M_d.
In Sec. <ref>, we could study the G symmetric bond-algebra in a definite symmetry sector labelled by α∈Rep( G) and g⃗∈ H^1(M_d, G) (see (<ref>)).
Physically the label α denoted the symmetry eigenspace we were restricting to and g⃗ was a choice of symmetry twisted boundary conditions.
In the spacetime partition function, this amounts to inserting a (codimension-1) projection operator P_α at a fixed time, extending over all of M_d, which has the form
P_α=1/| G|∑_ g_t∈ G R_α( g_t)^-1 U_ g_t ,
where R_α: G→ U(1) is the representation corresponding to the label α.
We define a symmetry character χ_ T[α,g⃗] as the thermal trace in the sector transforming in the α representation and with g⃗ twisted boundary conditions
χ_ T[α,g⃗]
= Tr_ V_M_d[ P_α e^-β H_g⃗] .
After gauging, we obtain a dual theory T^∨ which can be coupled to a d-form gauge field A_d∈ H^d(M, G)= Hom(H_d(M , Z) , G).
Again, using the Künneth theorem,
H_d(S^1× M_d , Z) = H_d-1(M_d , Z)⊕ H_d(M_d , Z)
≅ H_1(M_d, Z)⊕ Z .
In the second line we have used the assumption that M_d is closed and oriented which via the Universal coefficient theorem and Poincaré duality implies H_d-1(M_d, Z)=H_1(M_d, Z) and the fact that M_d is path connected, which implies H_d(M_d, Z)≅ Z.
More precisely we can canonically identify the generators Σ^(d-1)_j∈ H_d-1(M_d, Z) with the generators γ_j∈ H_1(M_d, Z).
The d-form background gauge field A_d can be labelled by its holonomies around the d-homology cycles of M as A_d=( g⃗^∨, g_t^∨ ) such that g_t^∨∈ G^∨ and g⃗^∨=( g^∨_1 , … , g^∨_b_1(M_d)) such that
∮_M_dA_d= g_t^∨ , ∮_Σ^(d-1)_j× S^1A_d= g^∨_j
The thermal partition function of the quantum system T^∨ coupled to A_d background has the form
Z_ T^∨[A_d^∨]= Tr[∏_j=1^b_1(M_d)W_ g_j^∨(γ_j) e^-β H^∨_ g_t^∨] ,
where W_ g^∨_j(γ_j) is the (d-1)-form symmetry operator (<ref>) defined on the homology 1-cycle γ_j and H^∨ is a Hamiltonian of interest with G^∨_(d-1)-form global symmetry and g^∨_t twisted boundary conditions.
As before, we can project onto definite G^∨ eigenspaces of each of the line operators by using the projection operator
P_ g(γ_j)=1/| G|∑_ g^∨_j∈ G^∨ R_ g_j^∨( g)^-1 W_ g_j^∨(γ_j) .
We define the symmetry character as a thermal trace in a definite symmetry sector with g_t^∨∈ G^∨ twisted boundary conditions and g_j eigenvalues of the symmetry operator on the 1-cycle γ_j
χ_ T^∨[g⃗, g_t^∨]= Tr_ V^∨_M_d[∏_j=1^b_1(M_d) P_ g_j(γ_j)
 e^-β H^∨_ g_t^∨] .
Using (<ref>), (<ref>), (<ref>) and (<ref>) it can be shown that
χ_ T[α, g⃗]= χ_ T^∨[g⃗ ,α] ,
implying a duality between the corresponding sectors of the theory T with global symmetry G_(0) and the theory T^∨ with global symmetry G_(d-1)^∨. See figure <ref> for more details.
§.§ Gauging finite Abelian sub-symmetry
§.§.§ Dual symmetries and mixed anomaly
We now describe the more interesting case, where the full symmetry group G of T is a central extension of K by N <cit.>.
More precisely, G sits in the short exact sequence
1⟶ N ⟶G⟶ K ⟶ 1 ,
whose extension class is ϵ∈ H^2( K, N).
This implies that a G background gauge field A_1 can be represented as a tuple (A_1^( N), A_1^( K)) ∈ C^1(M, N)× C^1(M, K) which satisfies the modified cocycle conditions <cit.>
d A_1^( N)=ϵ(A_1^( K)) , d A_1^( K)= 0 .
Correspondingly, the partition function of T coupled to a G gauge field is denoted by Z_ T[A_1^( N),A_1^( K)].
Recall the notation G = N×_ϵ K from section <ref>.
Instead of gauging the full group G, consider gauging N⊂ G, using Eq. (<ref>) to obtain the partially gauged theory T^∨.
The gauged theory has a residual 0-form symmetry ( G/ N)_(0)≅ K_( 0). Additionally, there is also a dual (d-1)-form symmetry N^∨_(d-1).
Hence after the partial gauging, the resulting symmetry is a higher d-group G_(0,d-1)^ϵ
G_(0) = N_(0)×_ϵ K_(0) G_(0,d-1)^ϵ = [ K_(0), N_(d-1)^∨]^ϵ.
The partition function of the gauged theory can be coupled to background gauge fields of G_(0,d-1)^ϵ = [ K_(0), N_(d-1)^∨]^ϵ as
Z_ T^∨[A_d^( N^∨) ,A_1^( K)]=1/| H|∑_a_1^( N) Z_ T[a_1^( N) ,A_1^( K)]exp{ i∫_Ma_1^( N)∪ A_d^( N^∨)} .
Interestingly, the theory T^∨ has a mixed 't Hooft anomaly between K_(0) and N^∨_(d-1) which manifests in the lack of invariance of the partition function under background gauge transformations
A_d^( N^∨)⟼ A_d^( N^∨)+ dλ_d-1^( N^∨) ,
where λ_d-1^( N^∨)∈ C^d-1(M, N^∨) is a gauge transformation parameter.
Under such a background gauge transformation, the partition function (<ref>) transforms as
Z_ T^∨[A_d^( N^∨)+ dλ_d-1^( N^∨) ,A_1^( K)]/ Z_ T^∨[A_d^( N^∨),A_1^( K)]=
exp{ i∫_Mλ_d-1^( N^∨)∪ϵ(A_1^( K))} .
The lack of gauge invariance cannot be remedied by any choice of local counterterms; however, it can be absorbed by coupling T^∨ to an invertible topological field theory <cit.>, known as the anomaly theory, with the action
S_anom=∫_M_d+2A_d^( N^∨)∪ϵ(A_1^( K)) ,
Such an invertible field theory describes the ground state physics of a symmetry protected topological phase of matter protected by G_(0,d-1)^ϵ = [ K_(0), N_(d-1)^∨]^ϵ.
It is however crucial to emphasize that there is no physical need to associate T^∨ to a `bulk'.
As described in Sec. <ref>, such a theory can very well be described on an d-dimensional lattice model.
Instead the bulk or anomaly theory is a theoretical gadget to systematize our understanding of the anomaly, which has significant non-perturbative implications for the infra-red phases/ground states realized in T^∨.
Specifically, as we will demonstrate, a system with such anomalies cannot have a gapped and symmetric (disordered) ground state.
Instead, any gapped ground state must break the symmetry down to a subgroup that trivializes the anomaly.
This is a consequence of anomaly matching between the ultraviolet and infra-red physics.
For the remainder of this section, we specialize to the case G_(0)= Z_4 and N_(0)= Z_2, in which case the symmetry of T^∨ is G_(0,d-1)^ϵ = [ Z_2,(0), Z_2,(d-1)]^ϵ with the anomaly action having the explicit form
S_anom= iπ∫_M_d+2A_d∪ Bock(A_1) ,
where A_p∈ H^p(M, Z_2) and Bock denotes the Bockstein homomorphism (see Appendix B of <cit.> for details) which is a map of cohomology classes
Bock :H^1(M, Z_2) ⟶ H^2(M, Z_2) , Bock(A_1)= 1/2 dÃ_1 ,
where Ã_1 is a lift of A_1 to a Z_4 gauge field.
§.§.§ Mapping of symmetry sectors
Under gauging of a subgroup, the symmetry sectors of the pre-gauged and gauged theories T and T^∨ respectively, map into each other in a non-trivial way.
In this section, we detail the map of sectors for the case of N= Z_2 and G= Z_4.
However the analysis readily generalizes to any finite Abelian group G with subgroup N.
Since we want to gauge Z_2 ⊂ Z_4, it will be convenient to write a group element g∈ Z_4≅{0,1,2,3} as a tuple ( n, k) such that n, k∈ Z_2≅{0 ,1}, with the identification g=2 n+ k and the product rule in Z_4 given by
( n_1 , k_1)· ( n_2 , k_2)=( n_1+ n_2 + k_1· k_2 , k_1+ k_2 ) .
A Z_4 gauge field A_1^( G)∈ H^1(M, Z_4) for M=M_d× S^1 is labelled by its holonomies on the homology 1-cycles of M and can correspondingly be expressed as a tuple of Z_2 gauge fields (A_1^( N) A_1^( K)) with a modified cocycle condition (<ref>) as
A_1^( G) =( g_t, g_1, ⋯ g_N)=( g_t,g⃗) ,
A_1^( N) =( n_t, n_1, ⋯ n_N)=( n_t,n⃗) ,
A_1^( K) =( k_t, k_1, ⋯ k_N)=( k_t,k⃗) ,
where N=b_1(M_d), g⃗∈ H^1(M_d, Z_4) and n⃗,k⃗∈ H^1(M_d, Z_2).
The thermal partition function of T coupled to the background Z_4 gauge field A_1^( G) has the form
Z_ T[ A_1^( G)]
≡ Z_ T[ g_t ,g⃗]
≡ Z_ T[ n_t , k_t ,n⃗ ,k⃗]
=Tr[ U^ 2 n_t + k_t e^-β H_n⃗,k⃗] ,
where U is the Z_4 generator and H_n⃗,k⃗ is the Z_4 symmetric Hamiltonian with g⃗=(n⃗,k⃗) twisted boundary conditions.
With the purpose, of tracking how symmetry sectors map under partial-gauging, we define P_α_ g which is a projector onto the sub Hilbert space transforming in the α_ g representation of Rep( Z_4)
P_α_ g =1/4∑_ g_t∈ Z_4 i^-α_ g g_t× U^ g_t=1/4∑_ n_t, k_t∈ Z_2 (-1)^α_ n n_t+α_ k k_t i^-α_ n k_t× U^2 n_t+ k_t ,
where we have used α_ g=2α_ k+α_ n.
Using (<ref>) we may define a symmetry character χ_ T[α_ g, g] as the thermal trace in a definite eigensector α_ g∈Rep( Z_4) and with symmetry twisted boundary conditions g⃗=2n⃗+k⃗ as
χ_ T[α_ g, g] = Tr[ P_α_ g e^-β H_n⃗,k⃗] = 1/4∑_ n_t, k_t (-1)^α_ n n_t+α_ k k_t i^-α_ n k_t Z_ T[ n_t , k_t ,n⃗ , k⃗] .
The expression (<ref>) can be readily inverted to express the partition function in terms of the symmetry characters as
Z_ T[ n_t , k_t , n⃗ , k⃗]=∑_α_ n , α_ k(-1)^α_ n n_t+α_ k k_t i^α_ n k_tχ_ T[2α_ k+α_ n , 2n⃗+k⃗] .
Next, gauging the subgroup N⊂ G simply corresponds to summing over the symmetry background A_1^( N).
The partition function of the partially gauged theory T^∨ has a d-group G_(0,d-1)^ϵ = [ K_(0), N_(d-1)^∨]^ϵ global symmetry and can therefore be coupled to a symmetry background (A_1^( K) , A_d^( N^∨)).
In particular, the backgrounds for A_d^( N^∨) are labelled by n⃗^∨ and n^∨_t, see (<ref>).
The partition function of T^∨ coupled to (A_1^( K) , A_d^( N^∨)) is
Z_ T^∨[A_d^( N^∨) , A_1^( K)] ≡ Z_ T^∨[n⃗^∨ , n^∨_t , k_t , k⃗]
=1/2∑_ n_t,n⃗ Z_ T[ n_t , k_t ,n⃗ ,k⃗](-1)^ n_t n_t^∨+ n⃗·n⃗^∨ .
As a thermal partition function, this may be expressed as
Z_ T^∨[n⃗^∨ , n^∨_t , k_t , k⃗]=Tr[ U^ k_t∏_j=1^NW^ n^∨_j_γ_j e^-β H^∨_k⃗ , n^∨_t] ,
where U and W_γ_j are the Z_2 0-form and (d-1)-form symmetry generators in the theory T^∨ respectively.
Since we are interested in relating the symmetry-resolved energy spectra of the two theories T and T^∨, we need to define the character of the dual theory T^∨ as well. However, due to the mixed anomaly in this theory, see equation (<ref>) and figure <ref>, some slight care is needed to define the characters correctly. In particular, equation (<ref>) implies that in twisted sectors the ℤ_2 symmetry operators can square to -1 instead of +1 and thus have 'fractionalized' symmetry eigenvalues ± i instead of ± 1. The appropriate symmetry projector for G_(0,d-1)^ϵ = [ K_(0), N_(d-1)^∨]^ϵ is thus[Note that technically only P^(W) and P^( U) are projectors onto symmetry eigenspaces (for the (d-1)-form and 0-form symmetries, respectively). The 'twisted BC projectors' P^( T^(d-1)) and P^( T^(0)) are not genuine projectors and are somewhat trivial, but we include them here to more easily keep track of how symmetry sectors and boundary conditions swap under gauging dualities.]
P[α⃗_ n^∨ ,α_ k , n^∨_t , k⃗]=
∏_j=1^N P^(W)_α_ n^∨_j, k_j
P^( U)_α_ k , n_t^∨ P^( T^(d-1))_ n_t^∨ P^( T^(0)_γ_j)_ k_j ,
where the superscript of each projector denotes the operator whose eigenspace is being projected onto while the subscripts denote the eigenvalues.
Explicitly, these projection operators have the form
P^(W)_α_ n^∨_j, k_j = 1+(-1)^α_ n^∨_j i^- k_jW_γ_j/2 , P^( T^(0)_γ_j)_ k_j =
1+(-1)^ k_j T^(0)_γ_j/2
P^( U)_α_ k , n_t^∨ = 1+(-1)^α_ k i^- n_t^∨ U/2 P^( T^(d-1))_ n_t^∨ = 1+(-1)^ n^∨_t T^(d-1)/2 ,
where U, W_γ_j, T^(0)_γ_j and T^(d-1) are defined in (<ref>), (<ref>) and (<ref>).
Here the symmetry characters of the gauged theory are labelled by a representation α_ k and α⃗_ n^∨=(α_ n^∨_1 ,… ,α_ n^∨_N), while twisted boundary conditions are labeled by n_t^∨ and k⃗. Inserting this into the partition function, the characters of T^∨ take the form (see also <cit.> for a similar discussion in 1+1 dimensions)
χ_ T^∨[α⃗_ n^∨ ,α_ k , n^∨_t , k⃗]=1/2^1+N∑_ k_t,n⃗^∨ Z_ T^∨[n⃗^∨ , n^∨_t , k_t , k⃗](-1)^α_ k k_t+ α⃗_ n^∨·n⃗^∨ i^- n_t^∨ k_t-k⃗·n⃗^∨ .
The appearance of i in the characters above, is a consequence of symmetry fractionalization stemming from the mixed anomaly. The symmetry characters of T^∨ can be written in terms of the characters of T by using (<ref>) and (<ref>).
However, let us instead derive this relation using the lattice realization of the symmetry structures of T and T^∨ described in Sec. <ref>.
To extract the mapping of sectors, we note that a sector in the theory T^∨ labelled as (α⃗_ n^∨ ,α_ k , n^∨_t , k⃗) is the sub-Hilbert space of V^∨_M_d in the simultaneous image of the projection operators (<ref>).
The gauging map in Sec. <ref> relates Z_2,(0)× Z_2,(d-1) symmetric operators on V^∨_M_d to Z_4 symmetric operators on V_M_d.
In particular there is the following mapping of operators
U|_ V^∨ ⟼ U|_ V= ∏_ vX_ v , T^(d-1)|_ V^∨ ⟼ U^2|_ V ,
W_γ|_ V^∨ ⟼ T^(0)|_ V=∏_ e⊂γZ_ s( e)^Z_ t( e)^† , T^(0)_γ_j|_ V^∨ ⟼[ T^(0)]^2|_ V .
Using these operator isomorphisms, the product of projectors in (<ref>) in the theory T maps to a product of projectors in the theory T^∨.
More precisely, one obtains a projector onto the 2α_ k- n_t^∨∈Rep( Z_4) eigensector of U and the sector with 2α_n^∨_j- k_j∈ Z_4 symmetry twisted boundary conditions along the cycle γ_j.
Which implies the following map of symmetry sectors
χ_ T^∨[α⃗_ n^∨ ,α_ k , n^∨_t , k⃗]= χ_ T[ 2α_ k+ n_t^∨ , 2α⃗_ n^∨+k⃗] .
§ PHASE DIAGRAMS AND DUALITIES IN D=2
Having detailed the gauging of finite Abelian symmetries and their subgroups both in the lattice setting in Sec. <ref> and more generally in a space-time approach in Sec. <ref>, we now turn our attention to the action of gauging-related dualities on phase diagrams in two-dimensional space.
More precisely, let us again consider a theory T with global symmetry G by which we mean a parameter space of G-symmetric Hamiltonians.
This space of local Hamiltonians is contained within the bond algebra B_ G( V).
Upon gauging either the full group G or a certain subgroup, we obtain a new theory T^∨, i.e., a parameter space of models in the bond algebra B_ G^∨( V^∨), where G^∨ is typically a higher group, potentially with mixed anomalies.
Since the gauging map is an isomorphism between the bond algebras, the physics before and after gauging is intimately related, or more precisely dual.
This duality is evident in several aspects.
For instance, the spectrum of a G symmetric Hamiltonian H and its G^∨-symmetric image H^∨ under (partial) gauging have the same spectrum in `dual' symmetry sectors, in the sense detailed in Sec. <ref>.
Another consequence is the equality of correlation functions
⟨ O_1( x_1 , t_1)⋯ O_n( x_n , t_n)⟩_Φ=
⟨ O^∨_1( x_1 , t_1)⋯ O^∨_n( x_n , t_n)⟩_Φ^∨ ,
where Φ collectively denotes the symmetry sector labels of theory T and O_j are operators in the bond algebra B_ G.
Φ^∨ and O^∨_j are the images of Φ and O_j under the gauging map.
§.§ Gauging finite Abelian symmetry
In this section, we describe how the phase diagrams of a theory with 0-form and 1-form Z_n symmetry are related.
Our analysis generalizes to any finite Abelian group G with a few caveats which we will elucidate as we go along.
To organize the mapping of phase diagrams under (partial) gauging, we seek to enumerate
G-symmetric gapped phases in two dimensions.
Doing so, one immediately encounters a complication in the fact that there are infinitely many G-symmetric gapped phases of matter if one includes phases with long range entanglement.
This should not come as a surprise as the set of gapped phases (which admit a lattice Hamiltonian description) without any symmetry enrichment is already infinitely large and contains all topological orders with gapped boundaries.
This set is at least as rich as fusion categories since a topological order (with a gappable boundary) can be constructed from a given fusion category as input <cit.>.
Here, we content ourselves by investigating the parameter space of G_(0) symmetric short range entangled (SRE) systems.
SRE gapped phases are classified by tuples [ H,ν] where H⊆ G and ν∈ H^3( H, U(1)).
A phase thus labelled spontaneously breaks the global symmetry down to H.
Therefore, such a phase possesses | G/ H| ground states that form an orbit under the action of G.
Furthermore, each such ground state is a symmetry protected topological (SPT) phase labelled by ν∈ H^3( H, U(1)) of the unbroken H subgroup <cit.>.
We consider the following G-symmetric Hamiltonians which can access all SRE gapped phases
H[λ] =
∑_H, νλ_[ H, ν] H^ _[ H, ν],
where the parameters λ_[ H, ν]∈R and H^ _[ H, ν] is a fixed-point Hamiltonian for each gapped phase.
Setting all λ_[ H, ν] = 0 except for one pair [ H', ν']
selects a fixed-point Hamiltonian H^ _[ H', ν'] realizing the gapped phase labelled by [ H', ν'].
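As a concrete illustration of this labelling (an illustrative aside, not part of the construction above), the following minimal Python sketch enumerates the pairs [Z_p, ν] for G = Z_n by listing the divisors p of n together with the elements ν ∈ H^3(Z_p, U(1)) ≅ Z_p; for n = 4 it reproduces the seven gapped phases encountered later in this section.

\begin{verbatim}
# Minimal sketch: enumerate the SRE gapped-phase labels [Z_p, nu] for G = Z_n,
# with p a divisor of n and nu in H^3(Z_p, U(1)) = Z_p.
def sre_phase_labels(n):
    return [(p, nu) for p in range(1, n + 1) if n % p == 0 for nu in range(p)]

print(sre_phase_labels(4))       # [(1,0), (2,0), (2,1), (4,0), (4,1), (4,2), (4,3)]
print(len(sre_phase_labels(4)))  # 7
\end{verbatim}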
Upon gauging G_(0), one realizes a model with a dual G^∨_(1) global symmetry.
Correspondingly, each G_(0) symmetric gapped phase [ H,ν] maps to a G_(1)^∨ symmetric gapped phase we label [ H,ν]^∨.
A natural question then is where does [ H, ν]^∨ fit into the classification of G^∨_(1) symmetric gapped phases?
We will see that the data [ H, ν] correspond to information pertaining to (i) the 1-form symmetry that is preserved in the dual model and (ii) the topological properties of certain emergent 1-form symmetries that arise in phases with spontaneously broken 1-form symmetry.
To unpack these statements, let us first describe what is meant by 1-form symmetry breaking.
1-form symmetry breaking:
Similar to conventional 0-form symmetry breaking, 1-form symmetry breaking is also signalled by long-range order of an operator which is charged under the relevant symmetry.
In the case of 0-form symmetry, the charged local operator is the local order parameter.
In contrast, 1-form symmetry breaking is diagnosed by a perimeter law expectation value of a closed line operator which is charged under the 1-form symmetry in question <cit.>.
In the parlance of gauge theory, the charged line operator is deconfined in the 1-form symmetry broken phase.
In the infra-red/low-energy limit, such a charged line operator becomes topological, i.e., its expectation value is invariant under topological deformations.
Furthermore, tautologically, there is a non-trivial linking between the charged line and the 1-form symmetry generator (see Fig. <ref>).
Therefore the low energy theory contains topological line operators that braid non-trivially and is topologically ordered <cit.>.
These phases are paradigmatic examples of models with long-range entanglement and therefore one observes that gauging maps short range entangled phases to long range entangled phases in 2+1 dimensions <cit.>.
Let us consider the case where G= Z_n.
The gapped phases are labelled by the pair [ Z_p, ℓ] with p a divisor of n and ℓ∈ H^3( Z_p, U(1))≅ Z_p.
Before analyzing the dualities on the lattice, let us first get some intuition about how the gapped phases are mapped under gauging.
To do so we turn to topological partition functions, which
encode the low energy physics within a given gapped phase.
For G= Z_n, the topological partition function for the gapped phase labelled by [ Z_p,ℓ] has the form
Z_ T[M ,A_1]
=n/pδ_pA_1,0exp{2π iℓ/p∫_MA_1∪ Bock(A_1)},
where the Bockstein homomorphism Bock: H^1(M, Z^ _p) → H^2(M, Z^ _p) is implemented by Bock(A_1) = 1/p dÃ_1, where Ã_1 is a lift of A_1 to Z_{p^2}.
The factor of δ_pA_1,0 implies that the theory can only be coupled to backgrounds valued in the unbroken Z_p⊂ Z_n; it cannot be coupled to a background for the spontaneously broken quotient Z_n/ Z_p≅ Z_n/p.
Meanwhile the topological action in the exponent of (<ref>) is the SPT response theory/effective action of the Z_p global symmetry which remains unbroken.
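The Bockstein map entering this response theory can also be evaluated explicitly at the cochain level. The following minimal Python sketch is purely illustrative: the tiny two-face complex and the chosen cochain are assumptions made only for the example. It lifts a Z_p-valued 1-cocycle to Z_{p^2}, takes the coboundary and divides by p; a different choice of lift changes the answer only by a coboundary.

\begin{verbatim}
import numpy as np

# Minimal sketch of Bock(A) = (1/p) d(lift of A) for a Z_p-valued 1-cocycle A.
p = 2
# hypothetical oriented 2-complex: each face is a list of (edge index, orientation)
faces = [[(0, +1), (1, +1), (2, -1)],
         [(0, -1), (3, +1), (4, -1)]]

def coboundary(A, faces):
    return np.array([sum(o * A[e] for e, o in f) for f in faces])

def bockstein(A, faces, p):
    A_lift = np.array(A) % (p * p)        # one choice of lift to Z_{p^2}
    dA = coboundary(A_lift, faces)
    assert np.all(dA % p == 0), "A must be a Z_p cocycle"
    return (dA // p) % p

A = [1, 1, 0, 1, 0]                       # a Z_2-valued 1-cocycle on 5 edges
print(bockstein(A, faces, p))             # a Z_2-valued 2-cochain, here [1 0]
\end{verbatim}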
Then the low-energy or ground state physics of the dual gapped phase obtained after gauging can be extracted using (<ref>).
The topological partition function corresponding to this dual gapped phase has the form
Z_ T^∨[M] =1/p∑_a_1∈ H^1(M, Z_p)exp{2π iℓ/p∫_Ma_1∪ Bock(a_1)} ,
= 1/p∑_a_1, b_1∈ C^1(M, Z_p)exp{2π i/p∫_M[b_1∪ da_1 +ℓ a_1 ∪da_1/p]
} .
This theory is a Z_p Dijkgraaf-Witten theory <cit.> with the topological action labelled by ℓ∈ H^3( Z_p, U(1)) <cit.>.
In the second line, we have simply re-formulated the Dijkgraaf-Witten theory in terms of 1-cochains a_1 ,b_1∈ C^1(M, Z_p), which is the quantum double formulation (see <cit.> and references therein).
The additional field b_1 serves to parametrize/probe the aforementioned emergent 1-form symmetry and can be summed over to go back to the first line.
By inspecting the equations of motion, the most general topological operator can be read off to be
W_(q,m)(L)
=
exp{2π i/p∮_L( q a_1+ m b_1)} ,
where q,m∈ Z_p.
We note that W_(0,1)(L) generates an emergent 1-form symmetry.
Notice that due to the delta function δ_pA_1,0 in (<ref>), the following holds true
⟨exp{ ip∮_La_1}⟩ = 1 ,
for any 1-cycle L.
Since the operator exp{ ip∮_La_1} generates a Z_n/p,(1)⊂ Z_n,(1) symmetry, (<ref>) implies that all lines charged under Z_n/p are confined and do not appear in the infra-red fixed point.
This is equivalent to the fact that Z_n/p,(1)⊂ Z_n,(1) remains unbroken.
Next let us inspect the fate of the remaining Z_p,(1) generated by W_(1,0).
To do so, we turn on sources for the Z_p 1-form symmetry as well as the emergent 1-form symmetry.
This can be done by introducing two 2-form background gauge fields A_2 ,B_2 ∈ Z^2(M , Z_p) that enter the partition function as
Z_ T^∨[M, A_2,B_2] = 1/p∑_a_1, b_1exp{2π i/p∫_M[b_1∪ da_1 +ℓ a_1 ∪d a_1/p+A_2∪ a_1+ B_2∪ b_1]
} .
The correlation functions of the topological line operators can be computed straightforwardly using standard methods <cit.>.
For instance, consider two 1-cycles L_1 and L_2 embedded in M=S^3 such that they form a Hopf link.
The correlation function of two line operators with support on L_1 and L_2 is
⟨ W_(q_1,m_1)(L_1) W_(q_2,m_2)(L_2) ⟩ = exp{2π i/p(q_1m_2+q_2m_1)+4π i ℓ/p^2 m_1m_2} .
Hence, the parameter ℓ∈ H^3( Z_p, U(1)) changes the self-braiding and topological spin of the emergent 1-form symmetry generators.
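These statistics can be tabulated directly from the Hopf-link formula above. The short Python sketch below simply evaluates that formula for all line labels (the values of p and ℓ are illustrative choices); in particular, the self-braiding of the emergent 1-form symmetry generator W_(0,1) is seen to depend on ℓ.

\begin{verbatim}
import numpy as np
from itertools import product

# Minimal sketch: Hopf-link phases of W_(q,m) lines in the twisted Z_p gauge theory.
def hopf_phase(q1, m1, q2, m2, p, l):
    return np.exp(2j*np.pi*(q1*m2 + q2*m1)/p + 4j*np.pi*l*m1*m2/p**2)

p, l = 2, 1                               # illustrative values
for (q1, m1), (q2, m2) in product(product(range(p), repeat=2), repeat=2):
    print((q1, m1), (q2, m2), np.round(hopf_phase(q1, m1, q2, m2, p, l), 3))
\end{verbatim}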
§.§.§ SPT Hamiltonians on the lattice
We now turn to how these features manifest on the lattice.
As before we consider a triangulated manifold M_2, with each vertex endowed with a local Hilbert space V_ v≅ C^n.
We investigate how fixed-point Hamiltonians dualize under gauging.
A fixed point Hamiltonian in the phase labelled as [ Z_p,ℓ] has the form
H_[ Z_p,ℓ]=-∑_ vX_ v^n/pexp{2π iℓ/p^2∑_ e ⊂∂Hex_ v B_ e}-∑_ eZ_ s( e)^pZ_ t( e)^-p+H.c ,
where Hex_ v denotes the smallest hexagon in the direct lattice
enclosing vertex v and ∂Hex_ v is the set of six edges at the boundary of this hexagon (see Fig. <ref>).
B_ e is defined as
B_ e=∑_α=0^p-1α P_ e^(α) , P_ e^(α)= 1/p∑_τ=0^n-1e^-2π iτα/p(Z_ s( e)Z_ t( e)^†)^τ .
We are interested in the groundstate symmetry properties of (<ref>).
Since X_ v^n/p and Z_ v^p commute, (<ref>) is a commuting projector Hamiltonian and the ground states lie in the Z_ s( e)^pZ_ t( e)^-p=1 subspace of the Hilbert space.
It follows that in this subspace Z_ s( e)Z_ t( e)^† has eigenvalues that are p^th roots of unity and consequently P^(α)_ e is a projector onto the exp{2π iα/p} eigenspace of this operator.
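The commuting-projector property used here rests only on the Z_n clock and shift algebra and is easy to confirm numerically. The following minimal Python sketch (with n = 4, p = 2 as illustrative values and one choice of phase convention) checks that Z X = ω_n X Z and that X^{n/p} commutes with Z^p.

\begin{verbatim}
import numpy as np

# Minimal sketch: Z_n clock and shift matrices and the commutation checks used above.
n, p = 4, 2
w = np.exp(2j * np.pi / n)
Z = np.diag([w**k for k in range(n)])
X = np.roll(np.eye(n), 1, axis=0)         # shift: X|k> = |k+1 mod n>

assert np.allclose(Z @ X, w * X @ Z)      # clock-shift algebra (one convention)
Xp = np.linalg.matrix_power(X, n // p)
Zp = np.linalg.matrix_power(Z, p)
assert np.allclose(Xp @ Zp, Zp @ Xp)      # X^{n/p} and Z^p commute for any p | n
print("checks passed")
\end{verbatim}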
The Hilbert space splits into n/p super-selection sectors that are dynamically disconnected in the thermodynamic (infinite size) limit as
V =⊕_ g=0^n/p-1 V_ g , V_ g= Span_ C{|ϕ⟩_ g | ϕ∈ C^0(M_, Z_p)} ,
such that
Z_ v|ϕ⟩_ g= exp{2π iϕ_ v/p+2π i g/n}|ϕ⟩_ g .
Note that the sub-Hilbert spaces V_g are distinct eigenspaces of Z_ v^p, which is the order parameter for the spontaneous breaking of the Z_n symmetry down to Z_p⊂ Z_n.
Since the different super-selection sectors are dynamically disconnected, we may look at the Hamiltonian (<ref>) in a specific subspace V_ g, which we denote by
H_[ Z_p,ℓ]^(g)= H_[ Z_p,ℓ]|_ V_ g .
Since Z_ v X_ v^n/p=exp{2π i/p} X^n/p_ v Z_ v, the operators {X_ v^n/p , Z_ v} restricted to V_ g generate a Z_p clock and shift algebra, i.e.,
Z_ s( e)Z_ t( e)^†|ϕ⟩_ g = exp{2π i/p( dϕ)_ e}|ϕ⟩_ g ,
X_ v|ϕ⟩_ g = |ϕ+δ_ v⟩_ g .
We define effective Z_p degrees of freedom in V_ g as
X_ v^n/p|_ V_ g= e^ iΘ_ v ,
Z_ v|_ V_ g= e^ iΦ_ v ,
which satisfy the commutation relations [Θ_ v , Φ_ v']=2π iδ_ v, v'/p.
In terms of these effective Z_p degrees of freedom, we have
H^( g)_[ Z_p,ℓ]=-∑_ ve^ iΘ_ vexp{2π iℓ/p^2∮_∂ Hex_ v dΦ} .
This is nothing but a fixed point Hamiltonian for the Z_p-SPT labelled by ℓ∈ H^3( Z_p, U(1)) <cit.>.
Therefore, we have demonstrated that the ground states of the Hamiltonian (<ref>) break the global symmetry down to Z_p⊂ Z_n and furthermore, each symmetry broken ground state realizes an SPT of the unbroken symmetry.
§.§.§ Dual Hamiltonian with higher-form symmetry
Now we turn to gauging the global Z_n 0-form symmetry.
The model after gauging is defined on M_2, with the degrees of freedom defined on the edges instead of the vertices.
The dual Hamiltonians can be obtained by using the bond algebra isomorphism between (<ref>) and (<ref>).
The Z_n 0-form symmetric fixed point Hamiltonian (<ref>) in the phase [ Z_p , ℓ] maps to the following dual Hamiltonian
H^∨_[ Z_p,ℓ]=-∑_ vA^n/p_ vexp{2π iℓ/p^2∑_ e⊂∂ Hex_ v B^∨_ e}-∑_ eZ_ e^p+ H.c,
where
B^∨_e
=
∑_α^∨=0^p-1α^∨ P_ e^(α^∨) , P_ e^(α^∨)= 1/p∑_τ=0^n-1e^-2π iτα^∨/p Z^τ_ e.
Since the different terms in the Hamiltonian commute, it immediately follows that the groundstate is in the Z^p_ e=1 subspace of the Hilbert space.
As a straightforward consequence, we have that
∏_ e∈ L(Z_ e^p)^ o ( e,L)=1
for any closed loop L.
Since this line operator is the generator of Z_n/p,(1) 1-form symmetry, this implies that the ground state is invariant under this 1-form symmetry.
Let us now investigate the fate of the remaining Z_p⊂ Z_n.
As before, we note that in a definite eigenspace of Z_ e^p, the operators X_ e^n/p and Z_ e generate a clock and shift algebra.
We use the representation
Z_ e|_Z^p_ e=1=e^ i a_ e , X^n/p_ e|_Z^p_ e=1=e^ i b_ e^∨ ,
where we have used the convention that the b^ _e^∨ operators
are defined on the links of the dual lattice.
They satisfy the algebra
[b_ e^∨ , a_ e']=2π i Int_ e^∨, e'/p, where Int_ e^∨, e' denotes the intersection number of edge e^∨ and e'.
Using this representation, we may express the Hamiltonian (<ref>) in the restricted Z_ e^p=1 subspace as
H^∨_[ Z_p,ℓ]|_Z^p_ e=1=-∑_vexp{ i∮_S^1_ v b+ iℓ/p∮_∂ Hex_ v a} .
This is nothing but the Z_p twisted quantum double Hamiltonian.
The Hamiltonian is a sum over framed ribbon operators linking with the vertices of the direct lattice (see Fig. <ref>).
More generally, the line
W(Γ)=exp{ i∮_L^∨ b+ iℓ/p∮_L a} ,
commutes with the Hamiltonian and is therefore topological, i.e., an emergent 1-form symmetry generator.
Note that on the lattice one needs to provide two curves, L and L^∨, since W(Γ) is a framed line operator.
In terms of the Wilson operators defined in (<ref>), this topological line operator is W_0,1(Γ) with Γ=(L,L^∨).
Table <ref> summarizes the dualities
between Hamiltonians (<ref>) and
(<ref>), when n=4 and p=1,2,4.
The discussion in this section generalizes to any finite Abelian group G.
Similar to the case of Z_n, the gapped phases are labelled by [ H,ν], where H is the symmetry preserved by the ground state and ν∈ H^3( H , U(1)) is the SPT class of each symmetry broken ground state.
There are three possible types of cocycles for any finite Abelian group which are denoted as type-I, type-II and type-III cocycles <cit.>.
The physics of type-I and type-II cocycles is a straightforward generalization of the case of Z_n presented in this section, however the physics of type-III is different.
In particular after gauging an SPT labelled by a type-III cocycle ν, one obtains a topological order with emergent non-invertible symmetries <cit.>.
§.§ Gauging finite Abelian sub-symmetry
In this section, we study the dualities between two theories related by the partial gauging of Z_2⊂ Z_4 0-form symmetry.
Concretely, we gauge the Z_2 subgroup of a Z_4 symmetric system that can access all SRE gapped phases, i.e., combinations of SPT and symmetry broken phases.
Such phases are labelled by H⊆ Z_4 and a 3-cocycle (SPT action) ν∈ H^3( H, U(1)).
There are a total of seven such gapped phases since there are three subgroups (Z_4, Z_2 and Z_1) of Z_4 and H^3( Z_n, U(1))= Z_n.
We follow two complementary strategies: (i) partially gauge the topological (fixed-point) partition functions of each of the gapped phases in the Z_4 model as in Sec. <ref> and (ii) carry out a partial gauging of a specific lattice spin model using the bond algebra isomorphism described in Sec. <ref>.
We summarize the results in Table
<ref>.
§.§.§ Dualizing topological partition functions
As in the Sec. <ref>, it is useful to define a Z_4 gauge field, A_1∈ Z^1(M, Z_4) as a tuple of Z_2 fields (A_1^( N),A_1^( K)) that satisfy the following modified cocycle conditions
dA_1^( N)= Bock(A_1^( K))=1/2 dÃ_1^( K) , dA_1^( K)=0 ,
where Bock denotes the Bockstein homomorphism (see for example App. B of <cit.> for details) and Ã_1^( K) is the lift of A_1^( K) to a Z_4 gauge field.
The expressions for the topological partition functions for the seven gapped phases labelled by [ Z_p,ℓ] with p∈{1,2,4} and ℓ∈ Z_p are
𝒵_ T^[ Z_1,0][A_1^( N),A_1^( K)] =4δ_A_1^( N),0δ_A_1^( K),0 ,
𝒵_ T^[ Z_2,ℓ][A_1^( N),A_1^( K)] =2δ_A_1^( K),0exp{ iπℓ∫_M A_1^( N)∪Bock(A_1^( N))} ,
𝒵_ T^[ Z_4,ℓ][A_1^( N),A_1^( K)] =exp{ iπℓ/2∫_M (2A_1^( N) + A_1^( K)) ∪Bock(A_1^( N))} .
The gapped phases dual to each of these phases, i.e., related via gauging of Z_2⊂ Z_4 can be obtained by mapping the topological partition functions using
Z_ T^∨^[ Z_p,ℓ]^∨[A_2^( N^∨) ,A_1^( K)]=1/2∑_a_1^( N) Z^[ Z_p,ℓ]_ T[a_1^( N) ,A_1^( K)](-1)^∫_Ma_1^( N)∪ A_2^( N^∨) .
The topological partition function dual to the fully symmetry broken phase [ Z_1 ,0] is
𝒵_ T^∨^[ Z_1,0]^∨[A_2^( N^∨) ,A_1^( K)] =2 δ_A_1^( K),0 .
The factor δ_A_1^( K),0 signals that the theory T^∨ cannot be coupled to the background A_1^( K) since the 0-form K_(0) = Z_2 symmetry is spontaneously broken.
The prefactor of 2 corresponds to the ground state degeneracy owing to this spontaneous symmetry breaking.
Furthermore since there is no constraint on A_2^( N^∨), the dual phase trivially preserves the 1-form N^∨_(1) = ℤ_2 symmetry.
The partition function of the gapped phase dual to the partial symmetry broken phases takes the form
𝒵^[ Z_2,ℓ]^∨_ T^∨[A_2^( N^∨) ,A_1^( K)]=
2δ_A_1^( K),0^0-form SSB×1/2∑_a_1^( N)(-1)^∫_M[ℓ a_1^( N)∪ Bock(a_1^( N))+ a_1^( N)∪ A_2^( N^∨)]^Dijkgraaf-Witten (1-form SSB) .
The expression in the first brace corresponds to the fact that the 0-form symmetry K is broken in the gapped phase [ Z_2,ℓ]^∨ in T^∨.
The expression in the second brace corresponds to the Dijkgraaf-Witten partition function for a topological Z_2 gauge theory with the topological action given by
S_ DW^(ℓ)[a_1^( N)]= iπℓ∫_Ma_1^( N)∪ Bock(a_1^( N)) .
Equivalently, this is the deconfined phase of the (twisted) Z_2 gauge theory which has an emergent 1-form symmetry.
The emergent 1-form symmetry is manifest in the quantum double presentation of the theory in terms of cochains b_1^( N) and a_1^( N).
In this presentation, the theory is described by the action
S_ T^∨^[ Z_2,ℓ] =
iπ∫_M[b_1^( N)∪ da_1^( N) +ℓ a_1^( N)∪ da_1^( N)/2+A_2^( N^∨)∪ a_1^( N)+ B_2^( N)∪ b_1^( N)] .
The most general topological line operator has the form
W_(q,m)(L)
=
exp{ iπ∮_L( q a_1^( N)+ m b_1^( N))} ,
with the emergent Z_2 1-form symmetry generated by W_0,1(L).
There is a non-trivial braiding between lines which is captured by the correlation function
⟨ W_(q_1,m_1)(L_1) W_(q_2,m_2)(L_2) ⟩ = exp{ iπ Link(L_1,L_2)
(q_1m_2+q_2m_1 + ℓm_1 m_2)} ,
where Link(L_1,L_2) is the linking number between the 1-cycles L_1 and L_2.
By inspecting this topological correlation function we learn two important things.
Firstly, the fact that the line W_(0,1) which is charged under the N^∨_(1) = Z_2 1-form symmetry is topological signals the spontaneous breaking of this 1-form symmetry.
Secondly, the self-braiding of the emergent 1-form symmetry depends on the choice of ℓ and therefore distinguishes the two gapped phases that are dual to the two Z_2 SPTs labelled by ℓ.
To summarize, the gapped phase [ Z_2,ℓ]^∨ in T^∨ spontaneously breaks the 0-form symmetry K_(0)= Z_2 as well as the 1-form symmetry N^∨_(1)= Z_2.
There are two ways of breaking the 1-form symmetry
that are distinguished by the choice of ℓ and,
equivalently, by the self-braiding of the emergent 1-form symmetry generated by W_(0,1).
Next, let us move on to the phases that are dual to [ Z_4, ℓ] under Z_2 gauging of the Z_4 symmetry.
Under the gauging of N_(0), a gapped phase which preserves K_(0) maps to a dual phase which also preserves K_(0).
Conversely, a phase that preserves N_(0) maps to a dual phase which breaks the dual symmetry N^∨_(1).
1-form symmetry breaking phases are topologically ordered and it can be shown using (<ref>) that the phase [ Z_4,ℓ]^∨, has the following quantum double action
S_ T^∨^[ Z_4,ℓ] = iπ∫_M[b_1^( N)∪ da_1^( N)
+
ℓ/2 a_1^( N)∪ da_1^( N)]
+ iπ∫_M[
a_1^( N)∪ A_2^( N^∨)
+
b_1^( N)∪ B_2^( N)]+ iπℓ/4∫_M A_1^( K)∪da_1^( N) .
We note that this is again simply a Z_2 Dijkgraaf-Witten theory with a topological action labelled by ℓ mod 2 ∈ H^3( Z_2, U(1)); therefore the topological line operators have the form (<ref>) and the topological correlation functions are given by (<ref>).
There is however an additional subtlety due to the 0-form symmetry.
Summing over b_1^( N) imposes that da_1^( N)=B_2^( N).
We obtain a new term in the response theory of the form
iπℓ/4∫_M A_1^( K)∪ B_2^( N)⊂ S_ T^∨, resp.^[ Z_2,ℓ][A_2^( N^∨),A_1^( K), B_2^( N)] .
This term signals that the emergent 1-form symmetry carries a fractional charge under K_(0) with the fractionalization labelled by ℓ∈ Z_4.
Concretely this means that the Z_2 eigenvalue of the K_(0) symmetry operator squares to exp{2π iℓ/4} on the emergent 1-form symmetry generator.
§.§.§ Dualizing fixed-point Hamiltonians
We now describe how fixed point Hamiltonians in each gapped phase transform under gauging Z_2⊂ Z_4.
The fixed-point Hamiltonians have the form (<ref>) with n=4, p∈{1,2,4} and ℓ∈ Z_p.
The dual Hamiltonians after gauging Z_2⊂ Z_4 can be directly obtained by noting that the bond algebra of Z_4 symmetric quantum systems (<ref>) transforms into an isomorphic bond algebra in (<ref>).
Using this bond algebra isomorphism, any Z_4 symmetric Hamiltonian can be mapped to its dual counterpart under the partial gauging.
Under the bond-algebra isomorphism, the fully symmetry breaking fixed point Hamiltonian
H_[ Z_1,0]=-1/2∑_ e[Z_ s( e)Z_ t( e)^†+H.c] ,
maps into the dual Hamiltonian
H^∨_[ Z_1,0]^∨ = -1/2∑_ e[1- iσ^z_ s( e)/√(2)σ^z_ e1+ iσ^z_ t( e)/√(2) + H.c]= -∑_ eσ^z_ e P^(+)_ e ,
where P^(+)_ e=(1+σ^ z_ s( e)σ^ z_ t( e))/2.
This Hamiltonian has two ground states
|GS⟩_↑ =⊗_ e, v|σ^z_ e =↑⟩⊗|σ^z_ v=↑⟩ ,
|GS⟩_↓ =⊗_ e, v|σ^z_ e =↑⟩⊗|σ^z_ v=↓⟩ ,
which spontaneously break the Z_2 0-form symmetry (<ref>) and preserve the Z_2 1-form symmetry (<ref>).
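Since this dual Hamiltonian is diagonal in the σ^z basis, its ground space can be found by brute force on a small lattice. The minimal Python sketch below (assuming a 2x2 square-lattice torus with spins on vertices and edges, an illustrative choice) confirms that the minimal energy is attained by exactly the two configurations written above.

\begin{verbatim}
from itertools import product

# Minimal sketch: brute-force ground space of H = -sum_e sigma^z_e P^(+)_e
# on a 2x2 torus (illustrative lattice size).
L = 2
verts = [(x, y) for x in range(L) for y in range(L)]
edges = [((x, y), d) for x, y in verts for d in ('h', 'v')]

def endpoints(e):
    (x, y), d = e
    return (x, y), (((x + 1) % L, y) if d == 'h' else (x, (y + 1) % L))

def energy(sv, se):
    return -sum(se[e] * (1 + sv[endpoints(e)[0]] * sv[endpoints(e)[1]]) / 2
                for e in edges)

energies = []
for v_cfg in product((1, -1), repeat=len(verts)):
    for e_cfg in product((1, -1), repeat=len(edges)):
        energies.append(energy(dict(zip(verts, v_cfg)), dict(zip(edges, e_cfg))))
E0 = min(energies)
print(E0, energies.count(E0))             # expect -8.0 and exactly 2 ground states
\end{verbatim}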
Similarly, the fixed point Hamiltonian describing partial symmetry breaking
from Z^ _4 to Z^ _2
H^ _[ Z^ _2, ℓ]
=
-
1/2∑_ v
X^2_ v exp{2πi ℓ/4∑_ e ⊂∂ Hex^ _ v B^ _ e}
-1/2∑_ e
Z^2_ s( e) Z^2_ t( e)+H.c.,
maps into the dual Hamiltonian
H^∨_[ Z^ _2, ℓ]^∨ =
-
∑_ v
A^ _ v exp{π iℓ/4∑_ e ⊂∂ Hex^ _ v[
1
-
1+ iσ^z_ s( e)/√(2) σ^z_ e1- iσ^z_ t( e)/√(2) ]
} -
∑_ eσ^z_ s( e) σ^z_ t( e).
The ground state properties of this Hamiltonian can be obtained by first minimizing the term σ^z_ s( e) σ^z_ t( e), which is achieved by polarizing all vertex spins, σ^z_ v = +1 or σ^z_ v = -1 uniformly.
This amounts to studying the model in either of the two superselection sectors that break the Z_2 0-form symmetry
H^∨_[ Z^ _2, ℓ]^∨|_σ^z_ v = ± 1 = - ∑_ v A^ _ v exp{π iℓ/4∑_ e ⊂∂ Hex^ _ v(1-σ^z_ e)} .
We note that this projected Hamiltonian corresponds to the Z_2 Toric code and double semion topological order for ℓ = 0 and ℓ = 1 respectively <cit.>.
We thus conclude that upon partial gauging of the phase labelled as [ Z_2,ℓ], the dual Hamiltonian realizes 2^{b_0(M_2)+b_1(M_2)} ground states such that the contributions of 2^{b_0(M_2)} and 2^{b_1(M_2)} are due to 0-form and 1-form symmetry breaking respectively.
Lastly, as was demonstrated by Levin and Gu <cit.>, ℓ can be diagnosed by the self-braiding of the emergent topological line operator in the ground state subspace.
Finally, we describe the duality of the fixed-point Hamiltonian [ Z_4 ,ℓ].
For simplicity, we restrict to the case ℓ=0, in which case the fixed-point Hamiltonian has the form
H_[ Z_4,0]=-1/2∑_ v[X_ v + X_ v^†] .
Under the bond-algebra isomorphism, (<ref>) maps into the dual Hamiltonian
H^∨_[ Z_4,0]^∨=-∑_ vσ_ v^x (1+A_ v)/2 ,
whose ground states are a disordered product state in the vertex degrees of freedom, while the edge degrees of freedom realize the Toric code or Z_2 topological order ground states.
There are a total of 2^b_1(M) ground states labelled by elements in a∈ H^1(M, Z_2).
Explicitly, the ground states have the form
| GS[a]⟩=⊗_ v|σ^x_ v=→⟩⊗∏_ v[(1+A_ v)/2] |a⟩ .
where |a⟩ is a reference state in the σ_ e^z basis such that
W_γ|a⟩=(-1)^a(γ)|a⟩ , W(γ)=∏_ e⊂γσ_ e^z .
Such ground states preserve K_(0)= Z_2 and spontaneously break N^∨_(1)= Z_2.
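The count of 2^{b_1(M)} topologically degenerate ground states can be confirmed by directly enumerating Z_2-valued 1-cocycles modulo coboundaries. The following minimal Python sketch does this by brute force on a 2x2 square-lattice torus (an illustrative choice with b_1 = 2) and returns |H^1(T^2, Z_2)| = 4.

\begin{verbatim}
from itertools import product

# Minimal sketch: |H^1(T^2, Z_2)| = |Z^1| / |B^1| on a 2x2 square-lattice torus.
L = 2
verts = [(x, y) for x in range(L) for y in range(L)]
edges = [((x, y), d) for x, y in verts for d in ('h', 'v')]
ei = {e: i for i, e in enumerate(edges)}

def plaq(x, y):      # edges bounding the plaquette based at (x, y)
    return [ei[((x, y), 'h')], ei[((x, (y + 1) % L), 'h')],
            ei[((x, y), 'v')], ei[(((x + 1) % L, y), 'v')]]

def star(x, y):      # edges meeting the vertex (x, y)
    return [ei[((x, y), 'h')], ei[(((x - 1) % L, y), 'h')],
            ei[((x, y), 'v')], ei[((x, (y - 1) % L), 'v')]]

cocycles = [a for a in product((0, 1), repeat=len(edges))
            if all(sum(a[i] for i in plaq(x, y)) % 2 == 0 for x, y in verts)]
coboundaries = {tuple(sum(lam[j] for j, v in enumerate(verts) if i in star(*v)) % 2
                      for i in range(len(edges)))
                for lam in product((0, 1), repeat=len(verts))}
print(len(cocycles) // len(coboundaries))  # 32 // 8 = 4 = 2^{b_1(T^2)}
\end{verbatim}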
Therefore, the symmetry breaking transition between the Z_4 symmetric and fully symmetry broken phases realized by the minimal model H_[ Z_4,0]+ H_[ Z_1,0]
maps to the dual model H^∨_[ Z_4,0]^∨+ H^∨_[ Z_1,0]^∨,
which realizes a topological deconfined transition between a Z_2 topological order (Toric code) and a Z_2,(0) symmetry broken phase.
While the Z_2 0-form and 1-form symmetry groups appear independent, they are related via the mixed anomaly, which is responsible for the direct unconventional transition between these two phases.
In fact, the phase diagram of the anomalous [ Z_2,(0), Z_2,(1)]^ϵ symmetric spin model after partial gauging realizes many interesting unconventional transitions that can be understood by studying the more conventional transitions realized in the phase diagram of the Z_4 symmetric spin model.
§ PHASE DIAGRAMS AND DUALITIES IN D=3
In this section, we extend our analysis in Sec. <ref> to the case of three-dimensional space.
For the group G= Z_n, we describe how gapped phases realized in G-symmetric spin models are mapped under dualities related to gauging either the full group G or a subgroup thereof.
As in Sec. <ref>, we restrict ourselves to short range entangled gapped phases.
In three dimensions, such phases are enumerated by tuples [ H,ν], where H is a subgroup of G and ν∈ H^4( H, U(1)).
In a gapped phase labelled by [ H,ν], the subgroup H remains unbroken in the | G/ H| dimensional ground-state manifold and each ground state realizes an H SPT labelled by ν.
It suffices to look at the simplest non-trivial case to illustrate the general features of how phases map under such dualities.
Therefore, in what follows, we consider models with G^ _(0) = Z^ _4,(0)
0-form symmetry and relate them to models with 2-form dual symmetries.
For G^ _(0)=ℤ^ _4,(0), there are no non-trivial SPT phases in three dimensions since H^4(ℤ^ _4, U(1)) is trivial.
This simplification allows us to limit the subsequent analysis to topologically trivial gapped phases labelled by their symmetry breaking pattern.
Such phases can be described by Hamiltonians
H^ _[ Z^ _p] = - 1/2∑_ v X^4/p_ v - 1/2∑_ e Z^p_ s( e) Z^-p_ t( e) + H.c. , p= 1,2,4.
For a given p, the Hamiltonian H^ _[ Z^ _p] has gapped ground states that are 4/p-fold degenerate.
The global 0-form Z^ _4 symmetry is broken down to Z^ _p symmetry.
This follows from the fact that operators
X^4/p_ v and Z^p_ s(e) commute with each other
and degenerate ground states are characterized by
the expectation value
⟨ Z^p_ s(e) Z^-p_ t(e)⟩=1.
More generally, one considers a superposition of Hamiltonians
H[{λ}] =
∑_ Z^ _pλ_p H^ _[ Z^ _p],
λ_p∈ R, which can access all three gapped phases and transitions between them.
§.§ Gauging finite Abelian symmetry
As described in Sec. <ref> and Sec. <ref>, upon gauging a Z_4,(0) symmetry in d=3, there is a Z_4,(2) symmetry in the dual gauged model.
Here we describe the mapping of short range entangled phases under such a duality, which is summarized in Table <ref>.
The ground state properties of Hamiltonians in a gapped phase labelled as [ Z_p] are captured by the topological partition functions
Z_ T^[ Z_p][A_1]
=n/pδ_pA_1,0 .
The dual partition function is obtained by inserting (<ref>) in (<ref>), which delivers
Z_ T^∨^[ Z_p]^∨ =1/n∑_a_1∈ H^1(M, Z_n)δ_p a_1,0 = 1/p∑_a_1∈ H^1(M, Z_p)1 =
1/p∑_a_1∈ C^1(M, Z^ _p)
b^ _2∈ C^2(M, Z^ _p)exp{2π i/p∫_M
b_2∪ da_1 } .
The Z_4 2-form symmetry is generated by exp{ i∮_γa_1}.
In the last equality in (<ref>), we have used the quantum double formulation in terms of 1 and 2-cochains a_1∈ C^1(M, Z_p) and b_2∈ C^2(M, Z_p) respectively.
Summing over b_2 gives back the second equality.
The merit of the quantum double description is that it makes an emergent 1-form symmetry in [ Z_p]^∨ for p≠ 1 manifest.
More precisely, in this formulation there are topological line and surface operators
W_L^ q=exp{ i q∮_La_1} ,
T^ m_S^(2)=exp{ i m∮_S^(2)b_2} ,
which have the correlation functions <cit.>
⟨ W_L^ q T^ m_S^(2) ⋯⟩ = exp{2π i Link(L , S^(2))/p}⟨⋯⟩ ,
where ⋯ denotes any other operators in the correlation function and we have assumed L and S^(2) do not link with any other operators in ⋯.
In other words, T_S^(2) is charged under the 2-form symmetry.
Additionally, since T_S^(2)^p=1 and T_S^(2) is topological, it generates an emergent 1-form symmetry.
The existence of a topological charged operator signals that the Z^ _4, (2) symmetry is broken down to
Z^ _4/p, (2).
The fixed point Hamiltonians (<ref>) are mapped to the following dual Hamiltonians under the isomorphism between the bond algebras
(<ref>) and (<ref>)
H^∨_[ Z^ _p, 0]^∨ = - ∑_ v A^4/p_ v - ∑_ e Z^p_ e + H.c.
In the spin model the dual 2-form symmetry ℤ^ _4,(2) is generated by W^ _L = ∏_ e ⊂ L Z^o(e, L)_ e with L being any 1-cycle [recall Eq. (<ref>)], where o( e, L)=± 1 denotes the orientation of the edge e relative to L.
In the ground state manifold, the operators A^4/p_ v have unit expectation value.
This defines topological surface operators in the low energy description.
In analogy with (<ref>), an operator defined on a 2-cycle S^(2),∨ of the dual lattice may be defined as
T^ m_S^(2),∨=∏_ v (A_ v^4/p)^ m Link(S^(2),∨ , v) .
Such an operator is charged under the 2-form symmetry.
This is to say that 2-form Z_4,(2) symmetry is spontaneously broken down to
Z_4/p,(2).
When p=1, the first term in (<ref>) becomes trivial, since A_ v^4=1.
The Hamiltonian then describes a “2-form paramagnet” with a non-degenerate and gapped ground state.
This is nothing but the Higgs phase of the 1-form gauge theory <cit.>.
This phase preserves the dual 2-form symmetry.
As we shall see, when p=2,4, the dual 2-form symmetry is spontaneously broken because of which the ground state manifold supports topological order.
When p=2,4, the ground state of Hamiltonian (<ref>)
is obtained by simultaneously minimizing Z^p_ e and A^4/p_ v terms.
Recall that the Hilbert space on which the Hamiltonian acts is constrained by the condition ∏_ e ⊂ L Z^ o( e, L)_ e = 1, where L is any contractible 1-cycle; this constraint ensures that the dimension of the Hilbert space is 4^|M^ _3,Δ|, the same as the dimension of the pre-gauged Hilbert space.
Minimizing the second term restricts
the ground state manifold to the subspace
where Z^p_ e=+1 is satisfied.
Let us denote this restricted subspace by V^4/p_ rest., i.e., V^4/p_ rest. := V_ ext |_Z^p_ e=1.
On the Hilbert space V^4/p_ rest., operators A^4/p_ v
and ∏_ e⊂ LZ^ _ e act as Z_p-valued variables.
The configurations in V^4/p_ rest., can be spanned by eigenstates |a⟩ of Z_ e such that
Z_ e|a⟩ = exp{2π i a_ e/p}|a⟩ , a∈ C^1(M_3, Z_p) .
The constraint (<ref>) imposes that, in fact, a∈ Z^1(M_3, ℤ_p), i.e., da=0.
In turn, the operator A^4/p_ v acts on a configuration a as
A^4/p_ v|a⟩ = |a+ dδ^( v)⟩ , where δ^( v)∈ C^0(M^ _3, Z^ _p).
Hence, the constraint
A^4/p_ v=+1 together with Eq. (<ref>) is satisfied on the states
|[a]⟩ = 1/|C^0(M_3, ℤ_p)|∑_λ∈ C^0(M_3, ℤ_p)|a + dλ⟩, [a]∈ H^1(M_3, ℤ_p),
where the states |[a]⟩ are labeled by the cohomology group
H^1(M_3, ℤ_p), i.e., the set of all Z^ _p-valued 1-cocycles [a ∈ Z^1(M^ _3, Z_p)] up to 1-coboundaries [dλ∈ B^1(M^ _3, Z_p)]. The topological ground state degeneracy is then given by the cardinality of
H^1(M_3, ℤ_p), i.e.,
|H^1(M_3, ℤ_p)| = p^{b_2(M_3)},
where b^ _2(M^ _3) is the second Betti number of
3-manifold M^ _3. The corresponding topological order
supported by the ground state manifold is the deconfined phase of the d=3 Z_p topological gauge theory or equivalently, the Z_p d=3 Toric code.
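The degeneracy p^{b_2(M_3)} can also be extracted directly from a lattice by mod-p linear algebra, using dim H^1 = (number of edges - rank d_1) - rank d_0 for the two coboundary maps. The Python sketch below is a minimal illustration under the assumptions p = 2 and a 2x2x2 cubic-lattice 3-torus; it recovers 2^3 = 8 ground states, consistent with b_2(T^3) = 3.

\begin{verbatim}
import numpy as np
from itertools import product

# Minimal sketch: ground-state degeneracy 2^{dim H^1(T^3, Z_2)} on a 2x2x2 torus.
L = 2
vs = list(product(range(L), repeat=3))
es = [(v, d) for v in vs for d in range(3)]              # edge at v in direction d
fs = [(v, a, b) for v in vs for a in range(3) for b in range(a + 1, 3)]
ei = {e: i for i, e in enumerate(es)}

def shift(v, d):
    return tuple((v[i] + (i == d)) % L for i in range(3))

def rank2(M):                                            # rank over GF(2)
    M = M.copy() % 2
    r = 0
    for c in range(M.shape[1]):
        piv = [i for i in range(r, M.shape[0]) if M[i, c]]
        if not piv:
            continue
        M[[r, piv[0]]] = M[[piv[0], r]]
        for i in range(M.shape[0]):
            if i != r and M[i, c]:
                M[i] = (M[i] + M[r]) % 2
        r += 1
    return r

d0 = np.zeros((len(es), len(vs)), dtype=int)             # coboundary C^0 -> C^1
for (v, d), row in ei.items():
    d0[row, vs.index(v)] += 1
    d0[row, vs.index(shift(v, d))] += 1
d1 = np.zeros((len(fs), len(es)), dtype=int)             # coboundary C^1 -> C^2
for r, (v, a, b) in enumerate(fs):
    for e in [(v, a), (shift(v, a), b), (shift(v, b), a), (v, b)]:
        d1[r, ei[e]] += 1
print(2 ** (len(es) - rank2(d1) - rank2(d0)))            # expect 8 = 2^{b_2(T^3)}
\end{verbatim}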
Table <ref> gives a summary of
phases described by fixed-point Hamiltonians in d=3
and their respective duals under gauging Z^ _4,(0)
symmetry.
§.§ Gauging finite Abelian sub-symmetry
We now describe the dualities related to gauging the Z_2,(0)⊂ Z_4,(0) symmetry.
We will focus on Hamiltonians dual to the
fixed-point Hamiltonians (<ref>) for
p=1,2,4.
The phases described by these Hamiltonians and their respective
duals are summarized in Table <ref>.
As described in Sec. <ref>,
the corresponding dual models are symmetric under a 2-form dual
symmetry Z_2,(2) and a residual 0-form symmetry Z_2,(0).
Using the isomorphism between the bond algebras
(<ref>) and
(<ref>), we identify the
generator of the remaining 0-form symmetry to be
(recall Eq. (<ref>))
U=∏_ vu_ v , u_ v=σ^x_ v[P^(+)_ v+ A_ vP^(-)_ v],
while the dual 2-form symmetry generator is (recall Eq. (<ref>))
W_L = ∏_ e ⊂ L w_ e^ o( e , L), w^ _e = 1/2(1- iσ^z_ s( e))σ^z_ e(1+ iσ^z_ t( e)).
The fixed-point Hamiltonian (<ref>) describes a phase where
Z_4,(0) symmetry is spontaneously broken down to Z_p,(0)
subgroup.
When p=1, under the isomorphism between the bond algebras
(<ref>) and
(<ref>), the fixed point Hamiltonian
(<ref>) is mapped to
H^∨_[ Z^ _1, 0]^∨ = - ∑_ eσ^z_ e (1+σ^z_ s(e) σ^z_ t(e))/2.
There are two ground states of this Hamiltonian given by
|GS⟩_↑ =⊗_ e, v|σ^z_ e =↑⟩⊗|σ^z_ v=↑⟩,
|GS⟩_↓ =⊗_ e, v|σ^z_ e =↑⟩⊗|σ^z_ v=↓⟩.
The two-dimensional ground state manifold is separated from
excited states by a finite gap.
Since σ_ e^z=1 in the ground states, the Z_2,(2) dual symmetry
is unbroken in the ground state manifold.
In contrast, the twofold degeneracy of the ground states is due to
spontaneous breaking of the Z_2,(0) symmetry that survives after gauging.
One verifies that the two ground states are mapped to each other under the generator
of Z_2,(0) symmetry, i.e.,
U |GS⟩_↑ = |GS⟩_↓ , U |GS⟩_↓ = |GS⟩_↑ .
When p=2, under the isomorphism between the bond algebras
(<ref>) and
(<ref>), the fixed point Hamiltonian
(<ref>) is mapped to
H^∨_[ Z^ _2, 0]^∨ = - ∑_ v A_ v - ∑_ eσ^z_ s(e) σ^z_ t(e).
The degrees of freedom on vertices and on edges
are decoupled. The ground state properties
can be obtained by minimizing each term separately.
The second term acts only on the vertices and imposes the constraint σ^z_ s(e) σ^z_ t(e) = +1, which has the two uniform solutions σ^z_ v=+1 for all v or σ^z_ v=-1 for all v.
Therefore, the vertex degrees of freedom are ferromagnetically ordered.
The two ground states on the vertices are
|GS_ vrt⟩_↑ =
⊗_ v|σ^z_ v=↑⟩,
|GS_ vrt⟩_↓ =
⊗_ v|σ^z_ v=↓⟩.
The Z_2,(0) symmetry is clearly spontaneously broken
by the ferromagnetically ordered ground states.
The first term acts only on the edge degrees of freedom.
The bond algebra (<ref>)
requires the condition
∏_ e ⊂ Lσ^z_ e
=
1
to hold on any contractible 1-cycle. Configurations
of edge degrees of freedom that satisfy this condition are
one-to-one with 1-cocycles b∈ Z^1(M_3, ℤ_2), represented by states |b⟩ such that
σ^z_ e|b⟩ = (-1)^b_ e|b⟩ .
In turn, the configuration |b⟩ is mapped to
A^ _ v|b⟩
=
|b+ dδ^( v)⟩,
under the action of A^ _ v,
where δ^ v∈ C^0(M^ _3, Z_2).
Hence, the edge degrees of freedom support the ground states
|[b]⟩
=
1/|C^0(M_3, ℤ_2)|∑_λ∈ C^0(M_3, ℤ_2)|b + dλ⟩, [b]∈ H^1(M_3, ℤ_2),
where the states |[b]⟩ are labeled by the cohomology group
H^1(M_3, ℤ_2).
The topological ground state degeneracy
is then given by the cardinality of
H^1(M_3, ℤ_2), i.e.,
|H^1(M_3, ℤ_2)| = 2^{b_1(M_3)} = 2^{b_2(M_3)},
where b^ _p(M^ _3) is the p^ th Betti number of 3-manifold M^ _3. This is the
ground state of three-dimensional Toric code, on which
Z_2,(2) symmetry is spontaneously broken. The corresponding order parameter is any product of A_ v operators supported on a 2-cycle, which has a non-vanishing (topological) ground state expectation value.
Together, the 2^{b_2(M_3)+1}-dimensional total ground state manifold is spanned by the states { |GS_ vrt⟩_↑⊗|[b]⟩, |GS_ vrt⟩_↓⊗|[b]⟩}.
When p=4, under the isomorphism between the bond algebras
(<ref>) and
(<ref>), the fixed point Hamiltonian
(<ref>) is mapped to
H^∨_[ Z^ _4, 0]^∨ = - ∑_ vσ^x_ v (1+A_ v)/2.
The ground state of Hamiltonian (<ref>) is similar to
that of Hamiltonian (<ref>).
Using the same argument we observe that the condition A^ _ v=+1 must be satisfied and
the ground state of
three-dimensional Toric code is
stabilized on the edge degrees of freedom.
However, as opposed to Hamiltonian (<ref>), there is no conventional order
supported by the ground state and the vertex degrees of freedom realize a paramagnet. Therefore, the ground state manifold is spanned by the states
(⊗_ v|σ^x_ v =→⟩)
⊗|[b]⟩,
[b]∈ H^1(M^ _3, Z_2),
with the total degeneracy 2^b^ _2(M^ _3)
that is only due to the topological order supported on
the edges. The dual Z_2,(2) symmetry is spontaneously broken while the remaining Z_2,(0)
symmetry is preserved.
§ GAUGING 1-FORM (SUB) SYMMETRY
In this section, we shift our attention to describing the gauging of 1-form finite Abelian (sub)-symmetries in quantum spin models.
We will closely follow the approach in Sec. <ref> and Sec. <ref> adapted to higher-form symmetries.
Higher-form symmetries have been useful in providing non-perturbative constraints that help solve the phase diagrams of quantum gauge theories <cit.>.
The goal of this section is to study how the phase diagram of 1-form Z_n symmetric quantum spin models map under a duality related to gauging either the full or partial Z_n 1-form symmetry.
§.§ Gauging finite Abelian 1-form symmetry
Let us consider a quantum spin system defined on a d=2 or 3 dimensional oriented lattice M_ d.
For concreteness, we work with a square and cubic lattice in d=2 and 3 respectively, with the orientation convention as in Fig. <ref>.
However the analysis in this section can be generalized straightforwardly to any other lattice and dimension.
We are interested in describing spin systems with 1-form symmetries, i.e., those that are implemented by co-dimension-2 operators in spacetime and act on line operators <cit.>, or more generally on operators defined on loci of dimension greater than one <cit.>.
In the Hamiltonian presentation of a quantum spin model, 1-form symmetries are generated by co-dimension-1 operators in space that commute with the Hamiltonian.
Let us restrict to Z_n 1-form symmetries, for which we consider a Hilbert space V to be the tensor product of Hilbert spaces V_ e assigned to each edge of the square or cubic lattice
V=⊗_ e V_ e , V_ e≅ C^n .
The algebra of operators acting on V_ e is generated by X_ e and Z_ e which satisfy the Z_n clock and shift algebra analogous to (<ref>).
The 1-form symmetry in d dimensions is generated by the following operators defined on a closed and oriented (d-1)-dimensional sub-lattice S^(d-1),∨ of the dual lattice
U_ g( S^(d-1),∨)
=
∏_ e ∈ S^(d-1),∨X_ e^ g o( e, S^(d-1),∨) ,
where o( e, S^(d-1),∨) denotes the orientation of e with respect to the orientation (outward normal) of S^(d-1),∨ (see Fig. <ref>).
The 1-form symmetry operators satisfy Z_n composition rules when defined on the same line or surface such that
U_ g_1( S^(d-1),∨) × U_ g_2( S^(d-1),∨)= U_ g_1+ g_2( S^(d-1),∨) , g_1 , g_2∈ Z_n ,
while more generally two co-dimension-1 operators fuse at codimension-2 junctions according to the group composition in Z_n (see Fig. <ref>).
The algebra of operators that commute with any such network of symmetry defects is the bond algebra
B_ Z_n,(1)( V)=⟨ X_ e , B_ p | U(S^(d-1),∨)
!=1 , ∀ e, p⟩ ,
where B_p are operators defined on plaquettes or 2-cells of the lattice and have the form
B_ p=∏_ e⊂ pZ_ e^ o( e , p) .
The product is over the edges on the boundary of p and o( e , p) is the orientation of e with respect to the orientation of p.
This operator may be familiar from the Toric code Hamiltonian <cit.> for the case of n=2.
See Fig. <ref> for the explicit form of the B_ p operators in terms of the Z_n clock and shift operators.
The bond algebra (<ref>) is defined on a Hilbert space with constraints on contractible (d-1)-cycles on the dual lattice labelled as S^(d-1),∨.
Such constraints are important for two reasons—(i) they ensure that the bond algebra isomorphism related to gauging the Z_n,(1) symmetry is invertible and (ii) after gauging Z_n,(1), (<ref>) maps to a bond algebra which has a trivial background of symmetry twist defects of the dual symmetry.
Instead if one relaxes the constraints, and imposes some other fixed but non-trivial assignment of symmetry eigenvalues of U(S^(d-1),∨), after gauging Z_n,(1) this becomes a non-trivial background of symmetry twist defects of the dual symmetry.
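The defining property of this bond algebra is simple to verify numerically in the smallest non-trivial setting. The Python sketch below assumes n = 2 and a 2x2 square-lattice torus (so all orientation factors drop out) and checks that every plaquette operator B_p commutes with the 1-form symmetry generator supported on a minimal dual loop encircling a vertex.

\begin{verbatim}
import numpy as np
from functools import reduce

# Minimal sketch: [B_p, U(minimal dual loop)] = 0 for n = 2 on a 2x2 torus.
L = 2
edges = [((x, y), d) for x in range(L) for y in range(L) for d in ('h', 'v')]
idx = {e: i for i, e in enumerate(edges)}
N = len(edges)
I2, X, Z = np.eye(2), np.array([[0, 1], [1, 0]]), np.diag([1, -1])

def op(single, sites):
    return reduce(np.kron, [single if i in sites else I2 for i in range(N)])

def B_p(x, y):                            # Z's around the plaquette based at (x, y)
    return op(Z, {idx[((x, y), 'h')], idx[((x, (y + 1) % L), 'h')],
                  idx[((x, y), 'v')], idx[(((x + 1) % L, y), 'v')]})

def U_loop(x, y):                         # X's on the minimal dual loop around (x, y)
    return op(X, {idx[((x, y), 'h')], idx[(((x - 1) % L, y), 'h')],
                  idx[((x, y), 'v')], idx[((x, (y - 1) % L), 'v')]})

sites = [(x, y) for x in range(L) for y in range(L)]
for p in sites:
    for v in sites:
        assert np.allclose(B_p(*p) @ U_loop(*v), U_loop(*v) @ B_p(*p))
print("every B_p commutes with every minimal 1-form symmetry loop")
\end{verbatim}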
Now, we gauge the Z_n 1-form symmetry by introducing Z_n gauge degrees of freedom on the plaquettes of the lattice.
We thus obtain the extended Hilbert space
V_ ext=⊗_ e V_ e⊗_ p V_ p=Span_ C{ |b,a⟩ | b∈ C^2(M_ d, Z_n) , a∈ C^1(M_ d, Z_n)}
such that the clock and shift operators act on the basis states as
Z_ e|b ,a⟩ =ω_n^a_ e|b ,a⟩ ,
X_ e|b ,a⟩ =|b ,a+δ^( e)⟩ ,
Z_ p|b ,a⟩ =ω_n^b_ p|b ,a⟩ ,
X_ p|b ,a⟩ =|b+δ^( p),a⟩ .
Here, δ^( p) is a Z_n-valued 2-cochain such that
[δ^( p)]_ p'=δ_ p, p' ,
and δ^( e) was introduced in (<ref>).
Additionally, one needs to impose gauge invariance via the Gauss operators (see Fig. <ref>)
G_ e= X_ e A_ e^† , A_ e^† := ∏_ p⊃ e X_ p^ o( e, p),
where the product is over plaquettes that contain the edge e on their boundary and o( e , p)=1 or -1 depending on whether the boundary of p is oriented along or against the edge e.
The most general Gauss operator can be parametrized by a 1-cochain λ∈ C^1(M_ d, Z_n) as G[λ]=∏_ e G_ e^λ_ e.
Such a Gauss operator implements the gauge transformation
G[λ]:|b,a⟩⟶ |b+ dλ,a+λ⟩ , where ( dλ)_ p= ∑_ e⊂ po( e, p)λ_ e.
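Because the Gauss operators merely relabel basis states, the gauge transformation can be implemented as plain cochain arithmetic. The Python sketch below is a minimal illustration under the assumptions n = 2 and a 2x2 square-lattice torus (so the orientation signs in dλ are immaterial); it realizes |b, a⟩ ↦ |b + dλ, a + λ⟩ on the configuration labels.

\begin{verbatim}
# Minimal sketch: G[lam] acting on labels (b, a) -> (b + d lam, a + lam) for n = 2.
L, n = 2, 2
edges = [((x, y), d) for x in range(L) for y in range(L) for d in ('h', 'v')]
plaqs = [(x, y) for x in range(L) for y in range(L)]
ei = {e: i for i, e in enumerate(edges)}

def d1(lam):                              # coboundary C^1 -> C^2 (mod n = 2)
    return [(lam[ei[((x, y), 'h')]] + lam[ei[((x, (y + 1) % L), 'h')]]
             + lam[ei[((x, y), 'v')]] + lam[ei[(((x + 1) % L, y), 'v')]]) % n
            for (x, y) in plaqs]

def gauge_transform(b, a, lam):
    return ([(bp + dp) % n for bp, dp in zip(b, d1(lam))],
            [(ae + le) % n for ae, le in zip(a, lam)])

b, a = [0] * len(plaqs), [0] * len(edges)
lam = [1] + [0] * (len(edges) - 1)        # a gauge parameter supported on one edge
print(gauge_transform(b, a, lam))         # b picks up d(lam) on the adjacent plaquettes
\end{verbatim}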
We note that the plaquette degrees of freedom Z_ p and X_ p embody a Z_n 2-form gauge field and “electric field” respectively, while the edge degrees of freedom embody the Z_n 1-form charged matter.
The physical space of states and operators are invariant under the action of G[λ] for all λ∈ C^1(M_d,, Z_n).
In order to construct the gauged bond algebra, we need to consider operators that are gauge invariant.
In particular, the operators B_p in the bond algebra (<ref>) are not gauge invariant and need to be minimally coupled to the 2-form gauge field Z_ p as B_ p⟼ B_ pZ_ p^†.
The other generator, X_ e of (<ref>) is gauge invariant as is and therefore the bond algebra after gauging the Z_n 1-form symmetry is
B_ Z_n,(1)( V_ ext)=⟨ X_ e , B_ p Z_ p^† | U(S^(d-1),∨)
!=1 , G_ e!=1 ,
∏_p ⊂ S^(2)[B^ _ p Z_ p^†]^ o( p, S^(2))!=1 ,
∀ e , p⟩,
where S^(2) is any contractible 2-cycle and we impose the constraint
∏_p ⊂ S^(2)[
B^ _ p Z_ p^†]^ o( p, S^(2)) = 1 ,
since this operator is in the image of
∏_p ⊂ S^(2) B_ p^ o( p , S^(2))=1.
As before, the Gauss constraints can be removed via a unitary transformation that makes the Gauss operators local (on the edge degrees of freedom).
More precisely, the unitary acts as follows
U X_ e U^† = X_ eA_ e , U Z_ e U^† = Z_ e ,
U Z_ p U^† = Z_ pB_ p , U X_ p U^† = X_ p .
Importantly, under the unitary action, the Gauss operator transforms as U G_ e U^† =X_ e and therefore effectively freezes out the edge degrees of freedom.
After the unitary transformation, the bond algebra is represented on the Hilbert space built up of only the plaquette degrees of freedom and has the form
B'_ Z_n,(1)( V_ plaq)
=U B_ Z_n,(1)( V_ ext) U^†
=
⟨ A_ e , Z_ p^† |
∏_ e ∈ S^(d-1),∨A_ e^ o( e,S^(d-1),∨)!=1 , ∏_p⊂ S^(2)Z_ p^ o( p , S^(2))!=1 , ∀ p ⟩ ,
where V_ plaq=⊗_ p V_p⊂ V_ ext.
Note that the constraint
∏_ e ∈ S^(d-1),∨A_ e^ o( e,S^(d-1),∨)!=1 ,
holds, unless we are working in a background of symmetry twist defects.
We will henceforth leave this constraint implicit for brevity.
Next, we implement a final isomorphism to bring (<ref>) into a convenient form.
This isomorphism comprises (i) the rotation (Z_ p , X_ p)→ (X_ p^-1 , Z_ p) and (ii) the dualization of the square or cubic lattices.
Recall that the plaquettes of a square or cubic lattices become the vertices or edges of the dual square or cubic lattices, respectively.
Implementing this isomorphism we obtain the dual bond algebra, which in d=2 dimensions is
B^ dual_ Z_n,(0)( V_ dual)=⟨ Z^_ s( e)Z_ t( e)^† , X_ v | ∀ e, v ⟩ , (d=2) .
This is nothing but the bond algebra of Z_n 0-form symmetric operators on the dual square lattice.
Note that there is no constraint on contractible 2-cycles S^(2), since there are no contractible 2-cycles on a closed two-dimensional manifold.
Furthermore, since we already discussed the mapping of sectors and the phases between 0-form symmetric and 1-form symmetric algebras/models in detail in Sec. <ref> and Sec. <ref>, we will focus on the case of d=3 in the remainder of this section.
For d=3, the dual bond algebra has the form
B^dual_ Z_n,(1)( V_ dual)=⟨ B_ p , X_ e | U(S^(2),∨)!=1 , ∀ e , p ⟩ , (d=3) ,
where edge operators in the x-direction in the original bond algebra dualize to plaquette operators in the yz plane, and so on. The converse also holds as
illustrated in Fig. <ref>.
This is essentially a generalization of the Kramers-Wannier duality to 1-form Z_n symmetric models.
Just like the usual Kramers-Wannier duality, the symmetry sectors, i.e., symmetry eigenspaces and symmetry twisted boundary conditions map non-trivially under this automorphism of the bond algebra.
Let us consider the following symmetry twisted partition function for a theory T with Z_n,(1) symmetry on a manifold M=M_3 × S^1 coupled to a background gauge field A_2∈ H^2(M, Z_n)
Z_ T[A_2]≡ Z_ T[g⃗ , h⃗ ]= Tr[∏_j=1^b_2(M_3) U_ h_j(Σ^(2)_j) exp{-β H_g⃗}] .
This equation needs some unpacking.
Firstly, the gauge field A_2 can be labelled by its holonomies around non-contractible 2-cycles in H_2(M, Z), which, by the Künneth theorem, decomposes as
H_2(S^1× M_3 , Z) = H_2(M_3 , Z)⊕ H_1(M_3 , Z) ,
with H_2(M_3 , Z) = Span_ Z{Σ^(2)_j} and H_1(M_3 , Z) = Span_ Z{Σ^(1)_j} .
Therefore, we can label A_2=(g⃗ , h⃗) where g⃗= ( g_1, …, g_N) and h⃗ = ( h_1, …, h_N) and N=b_1(M_3)=b_2(M_3), such that
∮_Σ^(2)_jA_2= g_j , ∮_S_1×Σ^(1)_j A_2= h_j .
With the purpose of tracking how the symmetry sectors map under gauging Z_n,(1), we define a projector P_α(Σ^(2)) that projects onto the sub-Hilbert space that transforms in the α representation of the 1-form symmetry operator defined on the homology 2-cycle Σ^(2).
P_α(Σ^(2))=1/n∑_ hω_n^-α h U_ h(Σ^(2)) .
Using (<ref>), we may define a symmetry character χ[α⃗ , g⃗] as the thermal trace in a definite eigensector α_j of the 1-form symmetry and symmetry twisted boundary condition g_j on Σ^(2)_j as
χ_ T[α⃗ , g⃗]= Tr[∏_j=1^b_2(M_3) P_α_j(Σ^(2)_j) exp{-β H_g⃗}] .
In the canonical approach, the following operator identities hold in this symmetry sector
U_ h(Σ^(2)_j)=ω_n^ h α_j , T(Σ^(2)_j):= ∏_ p⊂Σ^(2)_jB_ p=ω_n^ g_j .
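The projector P_α and the resulting characters are straightforward to realize numerically for any symmetry operator of finite order. The Python sketch below uses an illustrative diagonal operator U of order n = 4 as a stand-in for U_h(Σ^(2)); it builds the projectors and checks completeness and idempotence.

\begin{verbatim}
import numpy as np

# Minimal sketch: P_alpha = (1/n) sum_h w^{-alpha h} U^h for a unitary U with U^n = 1.
n = 4
w = np.exp(2j * np.pi / n)
U = np.diag([w**k for k in [0, 1, 2, 3, 1, 3]])   # illustrative charges of basis states

def projector(alpha):
    return sum(w**(-alpha * h) * np.linalg.matrix_power(U, h) for h in range(n)) / n

Ps = [projector(a) for a in range(n)]
assert np.allclose(sum(Ps), np.eye(U.shape[0]))   # completeness
assert all(np.allclose(P @ P, P) for P in Ps)     # idempotence
print("projectors OK")
\end{verbatim}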
Since gauging Z_n,(1) has the effect
U_ h(Σ^(2)_j) ⟼ T^ h(Σ^(2)_j) ,
the symmetry sectors in the original and gauged theory T and T^∨ map as
χ_ T[α⃗,g⃗]= χ_ T^∨[g⃗, α⃗] .
This mapping of sectors can also be derived directly from topological gauging as described in Sec. <ref>.
There, we recall that the partition function of the gauged theory coupled to a background Z_n,(1) gauge field A_2^∨ has the form
Z_ T^∨[A_2^∨]=1/n^{b_1(M)-b_0(M)}∑_a_2 Z_ T[a_2]exp{i∫_Ma_2∪ A_2^∨} .
§.§.§ Phase diagrams and the 1-form Kramers-Wannier duality in d=3
The automorphism of the bond algebra of Z_n,(1) symmetric operators under gauging of the Z_n,(1) symmetry imposes strong constraints on the phase diagram.
In particular, the spectrum of a Hamiltonian H in a symmetry sector (α⃗ ,g⃗) is the same as the spectrum of a dual Hamiltonian H^∨ obtained by acting with the bond-algebra automorphism on H, in the symmetry sector (g⃗ , α⃗).
This has consequences both for the Z_n,(1) symmetric gapped phases and for the phase transitions.
Let us first focus on the gapped phase where the Z_n 1-form symmetry is spontaneously broken to a Z_p subgroup in the ground state.
The fixed point Hamiltonian for such a gapped phase is
H_[ Z_p]=-1/2∑_ eX_ e^n/p-1/2∑_ pB_ p^p+H.c .
Clearly all the operators in the Hamiltonian commute with one another and therefore the ground state(s) would be in the shared eigenvalue 1 subspace of all the operators appearing in (<ref>).
First, we may restrict to the subspace of V_ rest.^(n/p)⊂ V on which X_ e^n/p=1 is satisfied.
This effectively reduces the edge Hilbert space dimension to n/p.
In this restricted Hilbert space, the Hamiltonian has the form
H_[ Z_p]|_ V_ rest.^(n/p)=
-1/2∑_ pB_ p^p+H.c .
which is an operator of order n/p, i.e.,
[B_ p^p]^n/p|_ V_ rest.^(n/p)=1 .
The Hamiltonian (<ref>) in fact describes a Z_n/p⊂ Z_n topological gauge theory.
One manifestation of this is that the Hamiltonian has (n/p)^b_2(M_3) ground states, which are locally indistinguishable but mapped into each other under the action of topological line or surface operators.
This can be seen by inspecting the ground state degeneracy on a spatial manifold M_3 with non-trivial topology[For simplicity, we assume M_3 has no torsion.].
First note that the states (or spin configurations) are one-to-one with 1-cochains
|a⟩, a∈ C^1(M_3, ℤ_n/p),
where a is nothing but an assignment of ℤ_n/p values on each edge (1-simplex).
The action of the plaquette operators are
B_ p^p|a⟩ = ω_n/p^ da_ p|a⟩ .
Therefore the subspace of states that satisfy B_ p^p|a⟩ = |a⟩ correspond to 1-cochains that satisfy da = 0.
In other words, states such that B_ p^p = 1 are labeled by 1-cocycles a∈ Z^1(M_3, ℤ_n/p).
Furthermore, we are working in a restricted space (see (<ref>)) where
U( S^(2),∨_ v)!=1 ,
where U( S^(2),∨_ v) is the 1-form symmetry generator defined on a minimal two sphere on the dual lattice that links with the vertex v.
The action of U( S^(2),∨_ v) on a state |a⟩ without such a constraint is
U(S_ v^(2),∨)|a⟩ = |a+ dδ^( v)⟩,
where δ^( v)∈ C^0(M_3, ℤ_n/p).
Hence a general product of U(S_ v^(2),∨) implements a gauge transformation as
∏_ v U(S_ v^(2),∨)^λ_ v|a⟩= |a+ dλ⟩ .
States satisfying both B_ p^p = 1 and U(S_ v^(2),∨)= 1 are thus labeled by cohomology classes H^1(M_3,ℤ_n/p) = Z^1(M_3,ℤ_n/p)/B^1(M_3,ℤ_n/p). In particular
|[a]⟩ = 1/|C^0(M_3, ℤ_n/p)|∑_λ∈ C^0(M_3, ℤ_n/p)|a + dλ⟩, [a]∈ H^1(M_3, ℤ_n/p).
The ground-state degeneracy is thus
|H^1(M_3, ℤ_n/p)| = (n/p)^b_1(M_3) = (n/p)^b_2(M_3).
The last equality comes from H^1(M_3, ℤ_n/p)≅ H^2(M_3, ℤ_n/p) for 3-manifolds.
Note that dλ correspond to contractible loop configurations in the dual lattice with ℤ_n/p branching rules while a+ dλ are loop configurations that wrap around non-contractible cycles according to the cohomology class of a.
In other words, the states (<ref>) are nothing but string-net condensates.
This condensate of strings is another manifestation of spontaneous breaking of 1-form symmetry.
The ground-state degeneracy is due to the existence of line operators for each γ∈ H_1(M_3, ℤ_n/p) and surface operators for each Σ^(2)∈ H_2(M_3, ℤ_n/p) that commute with the Hamiltonian but not with each other <cit.>.
Concretely, these line operators are
W(γ)=∏_ e∈γZ_ e^ o(γ , e) ,
while the surface operators are the 1-form symmetry generators (<ref>) defined on Σ^(2).
These operators generate an emergent 2-form symmetry and are charged under the Z_n/p 1-form symmetry.
This signals a breaking of Z_n/p 1-form symmetry and the ground state of (<ref>) only preserves Z_p⊂ Z_n, spontaneously breaking the remaining group.
Under Z_n 1-form Kramers-Wannier duality, one obtains a dual Hamiltonian to (<ref>) which is
H^∨_[ Z_p]^∨=-1/2∑_ eX_ e^p-1/2∑_ pB_ p^n/p-1/2∑_ v U(S_ v^(2),∨)+H.c≅ H_[ Z_n/p] .
We therefore learn that under such a duality,
[ Z_n/p,(1) symmetry breaking] ⟼ [ Z_p,(1) symmetry breaking] under gauging of Z_n,(1) .
The phases described by fixed-point Hamiltonians (<ref>)
and their duals (<ref>)
are summarized in Table <ref>
when n=4 and p=1,2,4.
Another interesting application of dualities is to the study of phase transitions.
Dualities are particularly useful when there are multiple symmetry breaking phases.
In such cases, dualities between different kinds of transitions can be used to constrain the universality classes of transitions, assuming knowledge about a subset of transitions.
For instance, consider a theory with Z_n= Z_p_1 p_2 p_3,(1), 1-form symmetry with p_1 , p_2 , p_3 prime numbers.
Then the 1-form Kramers-Wannier duality predicts
* The 1-form symmetry breaking transition between the fully symmetric phase [ Z_p_1 p_2 p_3,(1)] and the partial symmetry broken phase [ Z_p_1 p_3,(1)] is dual to the transition between the fully-symmetry broken phase [ Z_1,(1)] and the partial symmetry broken phase [ Z_p_2,(1)]. There are two additional dualities obtained by cyclic permutations of (1 ,2 ,3). Note that these are dualities between transitions involving 3+1 dimensional topological orders and are analogous to anyon condensation type transitions in 2+1 dimensional topological orders <cit.>.
* The deconfined topological transition between the partial symmetry broken phases [ Z_p_1 p_2,(1)] and [ Z_p_2 p_3,(1)] is dual to the deconfined topological transition between [ Z_p_3,(1)] and [ Z_p_1,(1)].
There are two additional dualities obtained by cyclic permutations of (1 ,2 ,3).
* The 1-form partial symmetry breaking transition between the phases [ Z_p_1 p_2,(1)] and [ Z_p_1,(1)] is dual to a similar transition between [ Z_p_2 p_3,(1)] and [ Z_p_3,(1)]. Again, there are two additional dualities obtained by cyclic permutations of (1 ,2 ,3).
* Additionally there are several transitions that are self-dual under gauging Z_n,(1). These are:
* The symmetry breaking transitions between the fully symmetry broken and the fully symmetric phases [ Z_1,(1)] and [ Z_p_1 p_2 p_3,(1)] respectively.
* The deconfined topological transition between [ Z_p_1 p_2,(1)] and [ Z_p_3,(1)]. Again, there are two additional dualities obtained by cyclic permutations of (1 ,2 ,3).
Since there are no 't Hooft anomalies constraining the phase diagram, one would expect such transitions to be accidental.
In principle there are several other dualities related to partial gauging of some subgroup of Z_n.
We will describe these in some detail in the next section.
Such dualities are powerful as they constrain the spectra of transitions involving combinations of topologically ordered phases in 3+1 dimensions, a subject about which little is known.
The self-dual transitions are particularly interesting from a symmetry point of view.
It was recently appreciated <cit.>, that theories that are self-dual under gauging a higher-form symmetry host non-invertible symmetries that are higher-dimensional higher-form generalizations of the Tambara-Yamagami fusion category <cit.>.
The simplest such self-dual Hamiltonian describes the symmetry breaking transition between the symmetric and fully symmetry broken phases, and is given by the minimal Hamiltonian
H=-1/2∑_ eX_ e-1/2∑_ pB_ p + H.c .
Then it is expected that this Hamiltonian has an emergent non-invertible symmetry operator D which acts on all of space and has the fusion rules
D× D = ∏_j=1^b_2(M_3)∑_ g_j=0^n-1[ U_ g_j(Σ^(2)_j)] .
We leave a detailed study of non-invertible symmetry defects in 1-form Kramers-Wannier self-dual lattice models for future work.
§.§ Gauging finite Abelian 1-form sub-symmetry
In this section, we describe the gauging of a sub-symmetry of a finite Abelian 1-form symmetry.
As in earlier sections, we focus on the simplest non-trivial case, which corresponds to gauging a Z_2,(1) subgroup of a Z_4,(1) 1-form symmetry.
Our starting point is the bond algebra of Z_4,(1) symmetric operators defined in (<ref>) with n=4.
A Z_4,(1) symmetry can be understood via the short exact sequence
1⟶ N_(1)= Z_2,(1)⟶ Z_4,(1)⟶ K_(1)= Z_2,(1)⟶ 1 .
More precisely, Z_4,(1) should be understood as the second Eilenberg-MacLane space and the short exact sequence as that between homotopy 2-types. However, we will suppress such technicalities in our presentation.
It suffices to note that such a sequence is captured by the extension class ϵ_2 ∈ H^3( Z_2,(1), Z_2,(1)), where ϵ_2=Bock.
Then a Z_4,(1) bundle can be expressed as a tuple of 2-form gauge fields A^( N)_2 and A^( K)_2 which satisfy
dA^( N)_2 = Bock(A^( K)_2) , Bock(A^( K)_2) = 1/2 dA^( K)_2 .
Starting from a d+1 dimensional theory T with Z_4, (1) symmetry, one may gauge N_(1) to obtain a dual theory T^∨ with a symmetry group N^∨_(d-2)× K_(1) where N^∨= K= Z_2.
Furthermore, there is a mixed anomaly between the two Z_2 symmetries captured by the d+2 dimensional invertible topological field theory
∫_M_d+2A^( N^∨)_d-1∪ Bock(A^( K)_2) .
Such an invertible field theory describes the ground state physics of a symmetry protected topological phase of matter protected by
G^ϵ_2_(1,d-2) = [ Z_2,(1), Z_2,(d-2)]^ϵ_2 .
It is however crucial to emphasize that there is no physical need to associate T^∨ to a `bulk'.
As we will see, such a theory can well be described by a d-dimensional lattice model.
Instead the bulk or anomaly theory is a theoretical gadget to systematize our understanding of the anomaly, which has significant non-perturbative implications for the infra-red phases/ground states realized in T^∨.
Now, we describe the gauging of Z_2,(1)⊂ Z_4,(1) on the lattice.
Our starting point is the bond algebra (<ref>) with n=4.
In order to gauge the Z_2,(1) subgroup, we introduce
Z_2 degrees of freedom on each plaquette of the lattice.
We thus obtain the extended Hilbert space
V_ ext=⊗_ e V_ e⊗_ p V_ p=Span_ C{ |b,a⟩ | b∈ C^2(M_ d, Z_2) , a∈ C^1(M_ d, Z_4)}
such that
Z_ e|b ,a⟩ = i^a_ e|b ,a⟩ , X_ e|b ,a⟩ =|b ,a+δ^( e)⟩ ,
σ^z_ p|b ,a⟩ =(-1)^b_ p|b ,a⟩ , σ^x_ p|b ,a⟩ =|b+δ^( p),a⟩ ,
where, δ^( e) and δ^( p)
were introduced in (<ref>) and (<ref>), respectively.
We impose the Gauss constraint through the operator
G_ e= X^2_ e A_ e , A_ e := ∏_ p⊃ eσ^x_ p.
The gauge-invariant bond algebra is obtained by minimally coupling
the bond algebra (<ref>) as
B_ G^ϵ_2_(1,d-2)( V_ ext)=⟨ X_ e , B_ p σ_ p^z | U_(1)(S^(d-1),∨)!=1 , U_(d-2)(S^(2))!=1 , G_ e!=1 ⟩ ,
where U_(1)(S^(d-1),∨) and U_(d-2)(S^(2)) are the 1-form and (d-2)-form symmetry generators defined on contractible cycles S^(d-1),∨ and S^(2) respectively.
These take the following form
U_(1)(S^(d-1),∨)=∏_ e∈ S^(d-1),∨ X_ e^ o( e, S^(d-1), ∨) , U_(d-2)(S^(2))= ∏_ p⊂ S^(2)(B_ pσ^z_ p)^ o( p , S^(2)) .
Importantly, there is a mixed anomaly between the 1-form and (d-2)-form symmetry.
This mixed anomaly manifests itself in the
symmetry fractionalization patterns of the two symmetries.
More precisely, although the 1-form symmetry corresponds to the group Z_2,(1),
it fractionalizes into the group Z_4,(1)
in the presence of a non-trivial background gauge field of the (d-2)-symmetry.
Conversely, the Z_2,(d-2) symmetry fractionalizes into the group Z_4,(d-2) in the presence of a non-trivial background of the Z_2,(1) symmetry.
To see this, note that the way background fields A_2 and A_d-1 for the Z_2,(1) and Z_2,(d-2) symmetries appear in the bond algebra is via the minimal coupling
B_ p⟼ B_ pe^ iπ A_2, p , A_ e⟼ A_ e e^ iπ A_d-1, e .
It is worth emphasizing that A_d-1, e should be understood as the integral/evaluation of A_d-1 on the (d-1)-cell on the dual lattice, which is dual to e.
Then one obtains the important identities
U_(1)(Σ^(d-1),∨)^2 = exp{ iπ∮_Σ^(d-1),∨ A_d-1} ,
U_(d-2)(Σ^(2)) = exp{ iπ∮_Σ^(2) A_2} ,
where Σ^(d-1),∨ and Σ^(2) are non-contractible cycles.
Having a non-trivial holonomy for A_d-1 or A_2 on a given (d-2)- or 2-cycle simply corresponds to imposing symmetry twisted boundary conditions corresponding to Z_2,(d-2) or Z_2,(1) on that cycle, respectively.
This is to say that the mixed anomaly manifest itself as the
generators for Z_2,(d-2) or Z_2,(1)
squaring to the operators that detect the twisted boundary conditions for Z_2,(1) or Z_2,(d-2)
symmetries, respectively.
Next, as in Sec. <ref>, we solve the Gauss constraint by implementing a unitary transformation that localizes the Gauss operators onto the edges
U G^ _ e U^†
=
X^2_ e ,
such that the unitary transformed Gauss constraint X^2_ e= 1 can be readily solved.
In the basis (<ref>),
this unitary operator has the form
U = ∑_b, a|b + ⌊ da/2⌋, a⟩⟨b, a|,
where ⌊·⌋ is the floor function.
Note that this operator is different from the unitary operator (<ref>) since the coboundary operator d is inside the floor function ⌊·⌋.
The remaining operators in the bond algebra transform as
UX_ eU^† = ∑_b,a|b+⌊ da/2⌋ + ⌊ d(a+ δ^( e))/2⌋ , a⟩⟨ b , a|
= X_ e[P^+_ e+P^-_ eA_ e] ,
Uσ_ p^zU^† = ∑_b,a(-1)^b_ p+⌊ da/2⌋_ p|b , a⟩⟨ b , a|
= 1/2σ^z_ p[(1- i)B_ p + (1+ i)B_ p^†] ,
U σ^z_ pB_ pU^† = ∑_b,a(-1)^b_ p+⌊ da/2⌋_ p i^( da)_ p|b , a⟩⟨ b , a|
= 1/2σ^z_ p[(1- i)B^2_ p + (1+ i)] ,
where P^(±) _ e=(1± Z_ e^2)/2 and A_ e=∏_ p ⊃ eσ^x_ p.
The unitarily transformed version of the bond algebra (<ref>) therefore has the form
B_ G^ϵ_2_(1,d-2)'( V_ ext) = U B_ G^ϵ_2_(1,d-2)( V_ ext) U^†
= ⟨ X_ e[P^(+)_ e+P^(-)_ eA_ e] ,
1/2σ^z_ p[(1- i)B^2_ p + (1+ i)] |
U_(1)(S^(d-1),∨)!=1 , U_(d-2)(S^(2))!=1 , X_ e^2!=1
⟩ ,
where the constraints are on the unitary transformed versions of the contractible symmetry operators defined in (<ref>).
The constraint X_ e^2=1 can be solved by projecting to the effective two dimensional Hilbert space on each edge on which X_ e^2=1.
The operators X_ e and Z_ e^2 commute with X_ e^2 and therefore act within this restricted subspace V_ rest..
We work in a basis where X_ e∼σ^x_ e, Z_ e^2 ∼σ^z_ e and B_p^2=∏_ e∈ pσ^z_ e (see (<ref>)) in V_ rest..
Therefore the bond algebra in its final form is
B_ G^ϵ_2_(1,d-2)( V_ rest.)
= ⟨σ^x_ e[P̅^(+)_ e+P̅^(-)_ eA_ e] ,
1/2σ^z_ p[(1- i)B̅_ p + (1+ i)] |
U_(1)(S^(d-1),∨)!=1 , U_(d-2)(S^(2))!=1
⟩ ,
where
P̅^(±)_ e=1/2(1±σ^z_ e) , B̅_ p=∏_ e∈ pσ^z_ e .
We are now ready to use the isomorphism of bond algebras (<ref>) and (<ref>) to study dualities between phase diagrams of quantum systems with Z_4,(1) symmetry and G^ϵ_2_(1,d-2) symmetry.
§.§.§ Phase diagrams and dualities
Let us study the duality in d=2 or 3 dimensional spin models arising from partial gauging of Z_2,(1)⊂ Z_4, (1).
After such a gauging, one obtains a spin model with a G^ϵ_2_(1,d-2) symmetry, i.e., a Z_2,(1)× Z_2,(d-2) global symmetry with a mixed anomaly.
For brevity, we only consider the simplest gapped phases in the spin system with Z_4, (1) symmetry, i.e., those corresponding to symmetry breaking of the 1-form symmetry to Z_p,(1)⊆ Z_4,(1).
We simply consider the fixed-point Hamiltonians in each gapped phase and obtain the dual partially gauged Hamiltonians by isomorphism of bond algebras between (<ref>) and (<ref>).
Such fixed-point Hamiltonians are given in (<ref>) for n=4 and p=1,2,4.
The results are summarized in Table <ref> for d=3 space dimensions.
The Hamiltonian corresponding to no symmetry breaking and its dual under partial gauging are
H_[ Z_4]=-1/2∑_ eX_ e + H.c⟼ H^∨_[ Z_4]^∨=-∑_ eσ^x_ e1+A_ e/2 .
The ground states of H^∨_[ Z_4]^∨ are eigenvalue +1 states of A_ e and σ^x_ e for all e.
This implies that the 1-form symmetry
U_(1)(Σ^(d-1), ∨)=∏_ e ⊂Σ^(d-1), ∨σ^x_ e[P̅^(+)_ e+P̅^(-)_ eA_ e] ,
acts as the identity on the ground states and therefore it is preserved.
Let us now consider a product of A_ e operators taken along an open line L in d=2 or along an open disc D_2 in d=3 on the dual lattice.
In d=2, such a product delivers a bi-local operator σ^x_ i(L)σ^x_ f(L) with i(L) and f(L) being the plaquettes at the two ends of L.
Since σ^x_ p is charged under the 0-form dual symmetry U_(d-2), this signals the spontaneous breaking of the 0-form dual symmetry.
Similarly, in d=3, one obtains a line operator
W(∂ D_2^∨)=∏_ e∈ D_2^∨A_ e=∏_ p ∈∂ D_2^∨σ^x_ p .
The line W is topological in the low energy subspace in the sense that it commutes with the Hamiltonian and therefore does not cost any energy to deform and is charged under the (d-2=1)-form symmetry.
Therefore, H^∨_[ Z_4]^∨ preserves the Z_2,(1) symmetry while it breaks the dual Z_2,(d-2) symmetry.
Next, we consider the partial symmetry breaking
Hamiltonian H_[ Z_2] which dualizes as
H_[ Z_2]=-∑_ eX^2_ e-1/2∑_ pB_ p^2⟼ H^∨_[ Z_2]^∨=-∑_ eA_ e-∑_ pB̅_ p .
Since the ground states of H^∨_[ Z_2]^∨ are eigenvalue +1 states of A_ e, it follows due to the above reasoning that Z_2,(d-2)
dual symmetry is spontaneously broken.
Similarly, the fact that B̅_ p has eigenvalue +1 on the ground-state subspace implies that one may consider an open disc on the direct lattice, which furnishes a line of σ^z_ e operators on the boundary of this disc.
This line operator has a unit expectation value in the ground state of the fixed-point Hamiltonian but more generally a non-vanishing expectation value anywhere in the gapped phase labelled as [ Z_2]^∨.
Finally, noting that this line operator is charged under Z_2,(1) implies that the 1-form symmetry is spontaneously broken.
To summarize, we find that in the gapped phase [ Z_2]^∨ the full symmetry Z_2,(d-2)× Z_2,(1) is broken.
Finally, we move onto the Z_4,(1) symmetric gapped phase labelled as Z_1,(1) where the 1-form symmetry is completely broken.
The fixed-point Hamiltonian and its dual under partial-gauging have the form
H_[ Z_1]=-1/2∑_ pB_ p+H.c⟼ H^∨_[ Z_1]^∨=-∑_ pσ^z_ p1+B̅_̅p̅/2 .
This Hamiltonian breaks Z_2,(1) since its ground states are in the B̅_ p=1 eigenspace.
It however preserves the Z_2,(d-2) dual symmetry as can be readily confirmed.
§ CONCLUSION
In this paper, we explored various symmetry aspects of quantum spin models with global higher-form finite Abelian symmetries on arbitrary d-dimensional lattices.
Given a p-form symmetry corresponding to a finite Abelian group G, we described (i) a systematic gauging of the group G or of
any subgroup H ⊂ G for p=0,1, (ii) the gauging-related duality maps between models with a
G p-form symmetry and models with a G_(p, d-p-1) higher-group symmetry, which has a G/ H p-form symmetry, an H^∨ (d-p-1)-form symmetry and a mixed 't Hooft anomaly between these two higher-form symmetries, and (iii) dualities between the phase diagrams of spin models with the corresponding symmetries.
In Sec. <ref>,
we detailed how the gauging of finite Abelian 0-form (sub)-symmetries can be understood as an isomorphism between a bond algebra symmetric with respect to G^ _(0) and a dual bond algebra symmetric under the dual G_(0,d-1) symmetry.
In particular, we described how the symmetry sectors, i.e., twisted-boundary conditions and symmetry eigenvalue sectors, map under such an isomorphism of bond algebras.
For the case of gauging a subgroup,
we clarified how the mixed anomaly manifests in the symmetry structure of the dual bond algebra.
In doing so, we clarified some anomaly-related subtle symmetry fractionalization patterns of higher-form symmetries in lattice spin models.
In Sec. <ref>,
we discussed these gauging-related dualities from a quantum field theory perspective.
In Secs. <ref> and <ref>, we explored consequences of such gauging related dualities to phase diagrams of two and three dimensional spin models respectively.
We specialized to Z^ _n clock models with
Z^ _n,(0) symmetry and studied a Hamiltonian built as a linear combination of fixed-point Hamiltonians, one for each Z^ _n,(0)-symmetric short-range entangled gapped phase.
By dualizing such a Hamiltonian, using the bond algebra isomorphism obtained in Sec. <ref>, we could study various aspects of the phase diagram.
In particular, we could pin-point how all these gapped phases dualize and how certain unconventional (beyond Landau) transitions are dual to Landau transitions under such gaugings.
These studies have potential applications in understanding aspects of such exotic transitions in quantum spin models with global categorical symmetries, a subject about which little is understood.
In Sec. <ref>, we studied the gauging of Z_n,(1) sub-symmetries in two and three spatial dimensions and applied the corresponding gauging related isomorphisms to spin models with such symmetries.
Among other findings, we showed that in d=3, a gauging of Z_n,(1) symmetry is realized as an automorphism on the symmetric bond algebra.
This automorphism implied dualities between three-dimensional Z_k and Z_n/k topological orders and Hamiltonians self-dual under such automorphisms host emergent non-invertible symmetry structures <cit.>.
§ ACKNOWLEDGEMENTS
We thank Lakshya Bhardwaj, Lea Bottini, Clement Delcamp, Julia D. Hannukainen, Faroogh Moosavian, Christopher Mudry and Sakura Schafer Nameki for discussions.
HM is supported by Engineering and Physical Sciences Research Council under New Horizon grant award no. EP/V048678/1.
ÖMA is supported by the Swiss National Science Foundation (SNSF) under Grant No. 200021 184637.
AT and JHB received funding from the European Research Council (ERC) under the European Union’s Horizon 2020 research and innovation program (Grant Agreement No. 101001902), the Swedish Research Council (VR) through grants number 2019-04736 and 2020-00214 and the Knut and Alice Wallenberg Foundation (KAW) via the project Dynamic Quantum Matter (2019.0068).
The work of JHB was performed in part at the Aspen Center for Physics, which is supported by National Science Foundation grant PHY- 2210452.
§ BF TYPE DESCRIPTION OF BOND ALGEBRA
In this Appendix, we present an alternative description of
the gauging procedure presented in Sec. <ref> as a BF-like theory of
compact scalar fields.
We may represent Z_n clock operators as
Z_ v∼ e^ iΦ_ v , X_ v∼ e^ iΦ̃_ v ,
which satisfy the commutation relations [Φ_ v,Φ̃_ v']=2π iδ_ v v'/n mod 2π.
Here we can think of Φ_ v as a compact scalar field and Φ̃_ v as its canonical momentum operator.
Comparing with (<ref>) and (<ref>), we may write the Z_n symmetry operator as
U=∏_ ve^ iΦ_ v ,
and the bond algebra takes the form
B_ Z_n,(0)=⟨ e^ iΦ̃_ v , e^- i∫_ e dΦ | ∀ v , e ⟩ ,
where e^- i∫_ e dΦ = e^ iΦ_ s( e)e^- iΦ_ t( e) = Z_ s( e)Z^†_ t( e).
The local Hilbert space on each vertex is n-dimensional such that tensor product Hilbert space is spanned by states labelled by ϕ∈ C^0(M_d,, Z_n), where ϕ={ϕ_ v}_ v with ϕ_ v=0,1,…, n-1.
We choose to work in the eigenbasis of e^ iΦ_ v such that
e^ iΦ_ v|ϕ⟩ = ω_n^ϕ_ v|ϕ⟩ ,
e^ iΦ̃_ v|ϕ⟩ = |ϕ+δ^( v)⟩ ,
where + denotes addition modulo n and δ^( v) is a 0-cochain which evaluates to 1 on the vertex v and 0 elsewhere, i.e., δ^( v)_ v'=δ_ v, v'.
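A minimal single-site illustration (a Python/NumPy sketch added for concreteness, not part of the original text): representing e^ iΦ_ v and e^ iΦ̃_ v by the n×n clock and shift matrices reproduces the basis action above, and in particular the exchange relation e^ iΦe^ iΦ̃=ω_n e^ iΦ̃e^ iΦ that encodes the commutator.

```python
import numpy as np

def clock_shift(n):
    """Single-site Z_n clock and shift matrices in the e^{i Phi} eigenbasis."""
    omega = np.exp(2j * np.pi / n)
    Z = np.diag(omega ** np.arange(n))   # e^{i Phi}  |phi> = omega^phi |phi>
    X = np.roll(np.eye(n), 1, axis=0)    # e^{i Phi~} |phi> = |phi + 1 mod n>
    return Z, X, omega

n = 5
Z, X, omega = clock_shift(n)
assert np.allclose(Z @ X, omega * X @ Z)                     # clock-shift exchange relation
assert np.allclose(np.linalg.matrix_power(X, n), np.eye(n))  # X^n = 1
assert np.allclose(np.linalg.matrix_power(Z, n), np.eye(n))  # Z^n = 1
print("Z_%d clock/shift algebra verified" % n)
```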
In order to gauge the Z_n global symmetry, we similarly introduce a Z_n degree of freedom on each edge of the lattice.
We denote the operators acting on the edges as e^ iA_ e and e^ i B_ e^∨, via the identification
Z_ e∼ e^ i A_ e , X_ e∼ e^ iB_ e^∨ .
Note that the B operators are defined on e^∨ which are (d-1)-cells of the dual lattice.
Since, there is a canonical bijection between these (d-1)-cells of the dual lattice and the 1-cells of the direct lattice (see, for instance, Fig. <ref>) it is always possible to do so. These operators satisfy the commutation relations
[B_ e^∨,A_ e']=2π i δ_ e, e'/n= 2π i Int_ e^∨, e'/n ,
where in the final expression, Int_ e^∨, e' is the intersection number of the (d-1)-cell e^∨ with the edge e'.
Upon introducing edge degrees of freedom, we span the Hilbert space by basis states |a,ϕ⟩, labelled by a∈ C^1(M_d,, Z_n) and ϕ∈ C^0(M_d,, Z_n). The vertex operators act on the basis states as (<ref>), while the edge operators act similarly as
e^ iA_ e|a,ϕ⟩ = ω_n^a_ e|a,ϕ⟩ ,
e^ iB_ e^∨|a,ϕ⟩ = |a+δ^( e),ϕ⟩ ,
where δ^( e) is a 1-cochain which evaluates to 1 on the edge e and 0 elsewhere, i.e., δ^( e)_ e'=δ_ e, e'. After gauging, the physical Hilbert space is the gauge invariant subspace of the full Hilbert space spanned by {|a,ϕ⟩}.
The gauge invariant subspace is obtained as the identity eigen-sector of the Gauss operator
G_ v=exp{ iΦ̃_ v+ i∮_S^(d-1),∨_ vB} ,
where S^(d-1),∨_ v is a minimal (d-1)-sphere on the dual lattice that links with the vertex v (see Fig. <ref>). This is the field-theory version of equation (<ref>), which was written in spin-model language.
A general gauge transformation is implemented by the operator G[λ]=∏_ v G_ v^λ_ v parametrized by a 0-cochain λ∈ C^0(M_d,, Z_n) which acts on the basis states as
G[λ]|a,ϕ⟩=|a+ dλ,ϕ+λ⟩ .
The bond algebra needs to be suitably modified post gauging such that all the operators are gauge invariant.
We can do so by minimal coupling dΦ⟶ dΦ + A, such that bond algebra becomes
B_ Z_n,(d-1)=⟨ e^ iΦ_ v , e^- i∫_ e( dΦ+A) | e^ i∮_LA!=1 , G_ v!=1 ∀ v , e , L ⟩ .
There is an additional constraint exp{ i∮_LA}!=1 for each loop L on the lattice.
This follows from the fact that this operator is the image of the operator exp{ i∮_L dΦ}=1 in the pre-gauged algebra.
Since gauging is a bond algebra isomorphism, it must map the identity operator to the identity operator. Compare with (<ref>).
A consequence of this is the fact that da=0 mod n and therefore a∈ Z^1(M_d,, Z_n) and corresponds to a Z_n-valued field.
Next, we seek a unitary transformation that disentangles the edges from the Gauss constraint, i.e.,
U G_ v U^†=e^ iΦ̃_ v .
Such a transformation is achieved by the unitary
U=∑_a,ϕ|a+dϕ,ϕ⟩⟨ a,ϕ|=∏_ vexp{ iΦ_ v∮_S^(d-1),∨_ vB} ,
which acts on the remaining operators as
U exp{ iA_ e} U^† = exp{ i(A+dΦ)_ e} ,
U exp{ i∮_S^(d-1),∨_ vB} U^† = exp{ i∮_S^(d-1),∨_ vB} ,
U exp{ iΦ_ v} U^† = exp{ iΦ_ v} ,
U exp{ iΦ̃_ v} U^† = exp{ iΦ̃_ v+ i∮_S^(d-1),∨_ vB} .
The bond algebra (<ref>) becomes the following after the action of the unitary U
B_ Z_n,(d-1) =
⟨
e^- iA_ e , exp{ iΦ̃_ v- i∮_S^(d-1),∨_ vB} | e^ iΦ̃_ v!=
1 ,
e^ i∮_LA!=
1 , ∀ v , e , L
⟩,
=
⟨exp{- iA_ e} , exp{- i∮_S^(d-1),∨_ vB} |
e^ i∮_LA!=
1 , ∀ v , e , L
⟩ .
In the second line we have solved the Gauss constraint and frozen out all scalar-field d.o.f.; the dual bond algebra is organized in terms of the conjugate fields A and B, reminiscent of BF-theories. This is the bond algebra of a (d-1)-form ℤ_n, (d-1) symmetry generated by the closed loop operators
W_L = exp(- i∫_L A),
compare this to equation (<ref>). Similar to the discussion in section <ref>, let us drop the constraint on contractible loops and define the bond algebra
B̂_ Z_n,(d-1)= ⟨exp{- iA_ e} , exp{- i∮_S^(d-1),∨_ vB} | ∀ v , e , L
⟩ .
Similar to the discussion in section <ref>, we can extend the duality to this larger algebra but the other side will contain algebras with twist defects. Consider the commutative subalgebra
B̂_[ Z_n,(d-1), Z_n,(1)]= ⟨exp{- i∮_L A} , exp{- i∮_S^(d-1),∨_ vB} | ∀ v , e , L
⟩ ⊂B̂_ Z_n,(d-1).
This subalgebra has an extra emergent 1-form symmetry, generated by (d-1)-dimensional submanifolds
Γ(S^(d-1)^∨) = exp(- i∮_S^(d-1)^∨B).
Compare these to (<ref>) and (<ref>). Similar to (<ref>), we can write the ℤ_n toric code Hamiltonian as
H = - ∑_ v e^- i∮_S^d-1_ v^∨B - ∑_ p e^- i∮_L_ p A + H. c.
= - ∑_ v e^- i∫_X^d_ v^∨H - ∑_ p e^- i∫_D_ p F + H. c.,
where S^d-1_ v^∨ is a (d-1)-dimensional sphere in the dual lattice wrapping around the vertex v, L_ p is the curve around a plaquette p, X^d_ v^∨ is a solid d-dimensional ball such that ∂X^d_ v^∨ = S^d-1_ v^∨, and D_ p is a disk such that ∂ D_ p = L_ p. In the second line we used Stokes' theorem and defined the 2-form and d-form field-strength operators
F = dA, H = dB.
The ground-state subspace of the toric code requires
F = 0, H = 0,
in other words flat A and B connections. This is exactly the Hilbert space of the underlying TQFT, a BF-theory with the action
S = n/2π∫_MB∧ dA.
§ PARALLEL TRANSPORT OPERATOR FOR ℤ_N SPIN-CHAINS
A ℤ_n-symmetric spin chain can be coupled to a background gauge field, also called a ℤ_n connection. This allows us to define parallel transport on the space of operators. In this appendix we construct the parallel-transport operator explicitly in terms of spin operators. The construction works in any dimension and on any triangulation, with no need for translation symmetry.
Consider a triangulation M_ d,△ of an oriented d-dimensional manifold M_d. There is a Hilbert space V_ v≃ℂ^n associated to each vertex v of M_ d,△, with the total Hilbert space V = ⊗_ v V_ v. All linear operators on 𝒱 are generated by X_ v and Z_ v with the algebra
Z_ v^αX_ v̄^ g = ω_n^α g δ_ v v̄X_ v̄^ gZ_ v^α.
We want to construct a permutation operator P_ v v̄ with the property
P_ v v̄𝒪_ vP_ v v̄^-1 = 𝒪_ v̄,
written in terms of the operators X_ v and Z_ v. Here P_ v v̄ permutes local operators acting only on the vertices v and v̄, while commuting with any other local operators.
GL( V_ v)≃span_ℂ{X_ v^ gZ_ v^α | g,α=0,…,n-1},
can be endowed with the inner product
⟨ A,B⟩ = 1/n_ V_ v[A^† B], A, B∈ GL( V_ v).
A short calculation shows that with this inner product, the above chosen basis is orthonormal
⟨ X_ v^ gZ_ v^α, X_ v^ g̅Z_ v^α̅⟩ = δ_ g, g̅δ_α,α̅.
We can extend this basis to all of GL( V) ≡ℒ( V), by using the standard isomorphism
GL(⊗_ v V_ v) ≃⊗_ v GL( V_ v),
where on V we normalize the inner product as
⟨ A,B⟩ = 1/n^#vertices_ V[A^† B], A, B∈ℒ( V).
Using the eigenbasis of the Z_ v operators
Z_ v|ϕ⟩ = ω_n^ϕ_ v|ϕ⟩,
we can explicitly construct the permutation operator as
P_ v v̄ = ∑_ϕ|S_ v v̄·ϕ⟩⟨ϕ|,
where S_ v v̄, acting on the labels ϕ = {ϕ_ v}, swaps ϕ_ v and ϕ_ v̄. One can easily see the following properties
P_ v v̄|…, ϕ_ v, …, ϕ_ v̄, …⟩ = |…, ϕ_ v̄, …, ϕ_ v, …⟩, P_ v v̄^2 = 1,
as wanted. In order to write this operator in terms of the aforementioned basis of ℒ( V), let us consider a general superposition
P_ v v̄ = ∑_ g, α=0^n-1∑_ g̅,α̅=0^n-1ℳ^ g, α_ g̅, α̅ X^ g_ vZ^α_ vX^ g̅_ v̄Z^α̅_ v̄.
We are only expanding this in the subspace of ℒ( V) corresponding to operators that act only on the vertices v and v̄, since it must commute with any operator 𝒪_ v' such that v'≠ v or v̄. The coefficients can be computed using the inner product (<ref>) and the orthonormality of our basis
⟨∏_ v'X_ v'^ g_ v'Z_ v'^α_ v', P_ v v̄⟩ = ℳ^ g_ v, α_ v_ g_ v̄, α_ v̄∏_ v'≠ v, v̄δ_ g_ v', 0δ_α_ v', 0.
In particular, using (<ref>) and only the basis vectors of the relevant two-body subspace, one can show by a short calculation that
ℳ^ g_ v, α_ v_ g_ v̄, α_ v̄ =⟨ X_ v^ g_ vZ_ v^α_ v X_ v̄^ g_ v̄Z_ v̄^α_ v̄ , P_ v v̄⟩,
=1/nω^ g_ vα_ v δ_α_ v + α_ v̄, 0 δ_ g_ v + g_ v̄, 0.
We can therefore write P_ v v̄ in the form we wanted
P_ v v̄ = 1/n∑_ g,α = 0^n-1ω^ gα X_ v^ gZ_ v^α X_ v̄^- gZ_ v̄^-α.
One can readily check that this satisfies all the needed properties such as (<ref>). For a standard spin model with n=2 we get
P_ v v̄ = 1/2[ 1 + X_ v⊗ X_ v̄ - (XZ)_ v⊗(XZ)_ v̄ + Z_ v⊗ Z_ v̄]
= 1/2[ 1 + σ^x_ v⊗σ^x_ v̄ + σ^y_ v⊗σ^y_ v̄ + σ^z_ v⊗σ^z_ v̄].
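As a quick consistency check (an illustrative Python/NumPy sketch, not part of the original text), one can build P_ v v̄ from the clock and shift matrices for generic n and confirm that it acts as the swap operator on ℂ^n⊗ℂ^n, and that for n=2 it reproduces the Pauli form above.

```python
import numpy as np

def clock_shift(n):
    omega = np.exp(2j * np.pi / n)
    Z = np.diag(omega ** np.arange(n))   # Z|phi> = omega^phi |phi>
    X = np.roll(np.eye(n), 1, axis=0)    # X|phi> = |phi + 1 mod n>
    return Z, X, omega

def permutation_operator(n):
    """P = (1/n) sum_{g,a} omega^{g a}  X^g Z^a  (x)  X^{-g} Z^{-a}."""
    Z, X, omega = clock_shift(n)
    P = np.zeros((n * n, n * n), dtype=complex)
    for g in range(n):
        for a in range(n):
            left = np.linalg.matrix_power(X, g) @ np.linalg.matrix_power(Z, a)
            right = (np.linalg.matrix_power(X.conj().T, g) @
                     np.linalg.matrix_power(Z.conj().T, a))      # X^{-g} Z^{-a}
            P += omega ** (g * a) * np.kron(left, right)
    return P / n

# for any n, P equals the swap operator SWAP|i, j> = |j, i>
n = 3
swap = np.zeros((n * n, n * n))
for i in range(n):
    for j in range(n):
        swap[j * n + i, i * n + j] = 1.0
assert np.allclose(permutation_operator(n), swap)

# for n = 2 it reproduces (1 + sx.sx + sy.sy + sz.sz)/2
sx = np.array([[0, 1], [1, 0]]); sy = np.array([[0, -1j], [1j, 0]]); sz = np.diag([1, -1])
heis = 0.5 * (np.eye(4) + np.kron(sx, sx) + np.kron(sy, sy) + np.kron(sz, sz))
assert np.allclose(permutation_operator(2), heis)
print("P_{v vbar} acts as the two-site swap")
```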
For each edge e, we can define
T_ e = P_ s( e), t( e),
where s( e) and t( e) are the source and target of e, respectively.
For regular lattices, this can be used to construct translation operators in terms of spin operators. For example for a one-dimensional lattice of L sites, we have
T = ∏_i=1^L-1P_i,i+1 = P_1,2P_2,3… P_L-1,L,
which satisfies
T|ϕ_1…ϕ_L⟩ = |ϕ_Lϕ_1ϕ_2…ϕ_L-1⟩.
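A short numerical illustration (the chain length and the chosen basis state are arbitrary examples) confirming that the product of nearest-neighbour permutations implements this cyclic shift:

```python
import numpy as np

n, L = 2, 4

# two-site permutation operator on C^n (x) C^n (the swap constructed above)
P2 = np.zeros((n * n, n * n))
for i in range(n):
    for j in range(n):
        P2[j * n + i, i * n + j] = 1.0

def embed(op, site, L, n):
    """Embed a two-site operator acting on sites (site, site+1) of an L-site chain."""
    return np.kron(np.kron(np.eye(n ** site), op), np.eye(n ** (L - site - 2)))

# T = P_{1,2} P_{2,3} ... P_{L-1,L}
T = np.eye(n ** L)
for i in range(L - 1):
    T = T @ embed(P2, i, L, n)

def basis_index(phis, n):
    idx = 0
    for p in phis:
        idx = idx * n + p
    return idx

phis = [1, 0, 1, 1]
vec = np.zeros(n ** L); vec[basis_index(phis, n)] = 1.0
expected = np.zeros(n ** L); expected[basis_index([phis[-1]] + phis[:-1], n)] = 1.0
assert np.allclose(T @ vec, expected)   # T|phi_1 ... phi_L> = |phi_L phi_1 ... phi_{L-1}>
print("T translates the chain by one site")
```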
On general triangulations we can construct parallel-transport operators by coupling the global symmetry to a background connection (gauge field); see equation (<ref>) and the surrounding discussion.
|
http://arxiv.org/abs/2307.02450v1
|
20230705172336
|
On Deep Learning Classification of Digitally Modulated Signals Using Raw I/Q Data
|
[
"John A. Snoap",
"Dimitrie C. Popescu",
"Chad M. Spooner"
] |
eess.SP
|
[
"eess.SP"
] |
|
http://arxiv.org/abs/2307.01388v1
|
20230703225440
|
Improving ATLAS Hadronic Object Performance with ML/AI Algorithms
|
[
"Benjamin Hodkinson"
] |
hep-ex
|
[
"hep-ex"
] |
Identification of Causal Relationship between Amyloid-β Accumulation and Alzheimer’s Disease Progression via Counterfactual Inference
Haixing Dai†1,
Mengxuan Hu†1,
Qing Li†2,3,
Lu Zhang4,
Lin Zhao1,
Dajiang Zhu4
Ibai Diez5,
Jorge Sepulcre5,
Fan Zhang6,
Xingyu Gao7,
Manhua Liu8,
Quanzheng Li5,
Sheng Li9,
Tianming Liu1,
and Xiang Li5
1School of Computing, University of Georgia, Athens, GA, USA
2State Key Lab of Cognitive Neuroscience and Learning, Beijing Normal University, Beijing 100875, China
3School of Artificial Intelligence, Beijing Normal University, Beijing 100875, China
4Department of Computer Science and Engineering, The University of Texas at Arlington, Arlington 76019, USA
5Department of Radiology, Massachusetts General Hospital and Harvard Medical School, Boston 02114, USA
6Department of Radiology, Brigham and Women's Hospital and Harvard Medical School, Boston 02115, USA
7School of Electronic Information and Electrical Engineering, Shanghai Jiao Tong University, Shanghai 200240, China
8The MoE Key Lab of Artificial Intelligence, AI Institute, Shanghai Jiao Tong University, Shanghai, 200240, China
9School of Data Science, The University of Virginia, Charlottesville 22903, USA
† These authors contributed equally to this paper.
Corresponding author: Xiang Li (email: [email protected]).
9 June 2023
========================================================================
§ INTRODUCTION
Hadronic objects are ubiquitous in the proton–proton collision events recorded by the ATLAS detector <cit.> at the LHC.
This includes hadronic jets, which are reconstructed from a large number of low-level calorimeter/track-based constituent objects, and missing transverse momentum (p_T^miss), which involves every detector component and final-state object.
The complexity and abundance of jets and p_T^miss in the ATLAS dataset make their reconstruction a promising setting for machine learning (ML) applications.
This contribution introduces several recent developments in ATLAS which use ML to improve the performance of p_T^miss reconstruction <cit.>, pion reconstruction <cit.> and jet tagging <cit.>.
These applications can broadly be separated into two categories:
* Regression of truth-level quantities from detector-level information.
* Classification of hadronic objects.
§ METNET: A COMBINED WORKING POINT
ATLAS employs several working points for p_T^miss reconstruction <cit.>, each of which is optimal for different event topologies and pile-up conditions.
METNet <cit.> is a neural network (NN) designed to pick and combine the reconstructed p_T^miss from each working point into a single estimate.
This is achieved by regressing the particle-level (`true') p_T^miss given the detector-level predictions for each working point and information characterising the pile-up and event topology.
The NN is trained on a mixture of tt̄ and di-boson Monte Carlo (MC) events.
The performance of two iterations of METNet is presented here: one trained using the Huber loss <cit.> function, and another including an additional Sinkhorn <cit.> contribution to the loss to reduce an observed negative bias, denoted METNet (Sk).
Figure <ref> shows the root-mean-squared error (a metric for resolution) of METNet and several current working points for tt̄ and Z→μμ events, in bins of true p_T^miss and number of primary vertices respectively.
METNet has improved resolution for both topologies and shows an ability to generalise to topologies such as Z→μμ which were not seen during training.
The p_T^miss significance variable <cit.> is also used in ATLAS to separate processes with `real' p_T^miss (from genuine invisible particles, such as neutrinos) and `fake' p_T^miss (from detector mis-measurement).
METNet is extended to produce a `confidence' σ as well as a central prediction by using the Gaussian negative log-likelihood (GNLL) loss.
A machine-learning-based significance variable is then defined as METNetSig = p_T^miss, NN / σ.
Figure <ref> shows METNetSig and object-based p_T^miss significance <cit.> (the current ATLAS state-of-the-art) for a supersymmetric signal process plus two Standard Model backgrounds.
METNetSig shows the ability to separate real and fake p_T^miss and has similar behaviour to object-based significance.
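Schematically, this kind of heteroscedastic regression can be set up as in the sketch below (a minimal illustration, not the ATLAS implementation; the event values and variable names are invented): a model outputs a central p_T^miss estimate μ together with a log-resolution, it is trained with the Gaussian negative log-likelihood, and the significance-like variable is μ/σ.

```python
import numpy as np

def gaussian_nll(mu, log_sigma, y_true):
    """Per-event Gaussian negative log-likelihood for a prediction (mu, sigma)."""
    sigma2 = np.exp(2.0 * log_sigma)
    return 0.5 * (np.log(2.0 * np.pi * sigma2) + (y_true - mu) ** 2 / sigma2)

def significance(mu, log_sigma):
    """Significance-like variable: central MET prediction divided by its confidence."""
    return mu / np.exp(log_sigma)

# toy values for a handful of events (GeV); the numbers are invented
mu        = np.array([35.0, 120.0, 8.0])   # predicted p_T^miss
log_sigma = np.log([15.0, 20.0, 12.0])     # predicted per-event resolution
y_true    = np.array([30.0, 135.0, 5.0])   # particle-level ("true") p_T^miss

print("mean GNLL loss      :", gaussian_nll(mu, log_sigma, y_true).mean())
print("significance values :", significance(mu, log_sigma))
```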
§ PION RECONSTRUCTION
The ATLAS detector has non-compensating calorimetry, so being able to distinguish charged and neutral pions allows the corresponding hadronic energy depositions to be restored to the correct scale.
Figure <ref> indicates the π^0 vs. π^± classification performance of several ML models trained in Reference <cit.> compared to a non-ML baseline classifier (labelled 𝒫^EM_clus).
All methods outperform the baseline and the Graph Neural Network (GNN) shows the best performance overall.
Additionally, several ML models are trained to calibrate pion energy.
Given that pions are produced in abundance in nearly all hadronic showers, understanding and improving pion reconstruction is central to improving jet reconstruction.
The models in Reference <cit.> show significant improvement on current (non-ML) calibration baselines, particularly when combining tracking and calorimeter information.
Figure <ref> shows energy resolution for several ML methods along with the track resolution.
The resolution is quantified as one-half the interquantile range (IQR) divided by the median predicted energy, where the IQR represents the width of the response data (where response is the ratio of the predicted pion energy to particle-level pion energy) from 1σ to -1σ of the median.
This captures a measure of the spread of energy predictions.
The ML methods approximate the tracker energy resolution up to around 50 GeV, before the calorimeter energy resolution dominates, indicating that ML is providing the best of both tracking and calorimetry performance.
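The resolution metric can be computed compactly as in the sketch below (an illustration with toy inputs, not ATLAS code; it normalises the half inter-quantile range by the median of the response distribution, with the quantiles taken at ±1σ of a Gaussian, i.e. ≈15.87% and ≈84.13%).

```python
import numpy as np

def energy_resolution(e_pred, e_true):
    """Half the 15.87%-84.13% inter-quantile range of the response (predicted over
    particle-level pion energy), divided by the median response."""
    response = np.asarray(e_pred) / np.asarray(e_true)
    q_lo, q_med, q_hi = np.percentile(response, [15.865, 50.0, 84.135])
    return 0.5 * (q_hi - q_lo) / q_med

# toy inputs: a 10% Gaussian response spread around unity
rng = np.random.default_rng(0)
e_true = rng.uniform(5.0, 50.0, size=10_000)                     # GeV
e_pred = e_true * rng.normal(1.0, 0.10, size=e_true.size)
print("resolution = %.3f" % energy_resolution(e_pred, e_true))   # close to 0.10
```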
§ BOOSTED JET TAGGING
Large-radius jets from massive particles (such as W-bosons, Z-bosons and top quarks) can be distinguished from light quark/gluon-initiated jets using jet substructure information.
ATLAS employs a variety of taggers for this purpose, and improving their classification accuracy enhances the performance of both searches and precision measurements.
The latest taggers developed in ATLAS use jets reconstructed from Unified Flow Objects <cit.> which combine topocluster and tracking information to provide improved pile-up resilience and jet mass resolution.
In Reference <cit.>, several top-quark taggers which were developed outside ATLAS using simplified Delphes simulated data-sets <cit.> are evaluated using realistic GEANT4-simulated samples.
Figure <ref> compares the efficiency of the taggers, which use constituent-based input information, to baseline deep neural networks trained on high-level (hDNN) and constituent-level (DNN) inputs.
ParticleNet and the Particle Flow Network (PFN) outperform the baselines, while the Energy Flow Network (EFN) and ResNet50 underperform relative to the previous Delphes-based studies in Reference <cit.>.
This highlights the need to develop taggers in a realistic context.
As a result, the simulated data set used in this study has been made publicly available in Reference <cit.>.
In Figure <ref>, ParticleNet shows a dependence on the QCD modelling. This is also seen for PFN, but not for EFN due to its requirement of infra-red collinear safe inputs.
References <cit.> and <cit.> include the latest developments in W/Z tagging.
A DNN shows improved performance compared to the current cut-based taggers, which can be seen by comparing z_NN with D_2 in Figure <ref>.
However, Figure <ref> shows that the z_NN tagger sculpts the QCD background jet-mass distribution to match the W jets signal topology, which would complicate background estimation strategies which use side-band regions.
To address this, an adversarial NN (ANN) is trained to de-correlate the jet mass.
This corresponds to a decrease in performance for the ANN, labelled z_ANN^(λ=10) in Figure <ref>.
The performance could partially be recovered with analysis-specific mass-window requirements.
§ CONCLUSION
Hadronic object reconstruction at the LHC is ripe for ML applications.
This contribution has presented some recent highlights in ATLAS, including regressing truth-level and pion energy and classifying pions and boosted jets.
Development of all of these applications is ongoing and promises to enhance the performance of precision Standard Model measurements and beyond-the-Standard Model searches.
|
http://arxiv.org/abs/2307.00913v1
|
20230703101639
|
Generation of narrow beams of ultrarelativistic positrons (electrons) in the resonant strong electromagnetic field-assisted Breit-Wheeler process
|
[
"S. P. Roshchupkin",
"V. D. Serov",
"V. V. Dubov"
] |
physics.plasm-ph
|
[
"physics.plasm-ph",
"hep-ph"
] |
Revisiting equilibrium condensation and rocky planet compositions
Anina Timmermann1
Yutong Shan3
Ansgar Reiners1
Andreas Pack2
Received XX XX XXXX / Accepted XX XX XXXX
===================================================================
The resonant external field-assisted Breit-Wheeler process (Oleinik resonances) has been studied theoretically for strong electromagnetic fields with strengths below the critical Schwinger field. The resonant kinematics is analysed in detail for the case of high-energy initial gamma quanta and ultrarelativistic outgoing electron-positron pairs. The resonant differential cross section is obtained. The generation of narrow beams of ultrarelativistic positrons (for Channel A) and electrons (for Channel B) is predicted, with a probability significantly exceeding that of the corresponding non-resonant process.
§ INTRODUCTION
Over the past several decades, there has been significant interest in studying the processes of quantum electrodynamics (QED) in external electromagnetic fields (see, for example, reviews <cit.>-<cit.>, monographs <cit.>-<cit.> and articles <cit.>-<cit.>). This is mainly associated with the appearance of lasers with high radiation intensities and beams of small transverse dimensions <cit.>-<cit.>.
An important feature of QED processes of higher order in the fine-structure constant in an external field is the possibility of their resonant occurrence, in which the virtual intermediate particles reach the mass shell. Such resonances were first considered by Oleinik <cit.>. Under resonance conditions, the conservation laws of energy and momentum are satisfied for the intermediate particles in the external field. As a result, a second-order process in the fine-structure constant effectively reduces to two sequential first-order processes. A detailed discussion of resonant processes is presented in the reviews <cit.>, the monographs <cit.>, and the recent articles <cit.>-<cit.>. It is important to note that the probability of resonant processes can significantly exceed the corresponding probability of non-resonant processes.
The process of electron-positron pair production by two gamma quanta was first considered by Breit and Wheeler <cit.>. Currently, there is a significant number of works devoted to the study of the Breit-Wheeler process in an external electromagnetic field (see, for example, <cit.>-<cit.>). It should be noted that a distinction should be made between the external field-stimulated Breit-Wheeler process (a first-order process with respect to the fine structure constant) and the external field-assisted Breit-Wheeler process (a second-order process with respect to the fine structure constant). In this paper, Oleinik's resonances for the external strong field-assisted Breit-Wheeler process will be investigated. It should be noted that in a weak field, this process was considered in the article <cit.>. It is important to note that under the conditions of resonance and the absence of interference between different reaction channels, the original second-order process effectively reduces to two first-order processes: the external field-stimulated Breit-Wheeler process and the external field-stimulated Compton effect <cit.>.
The main parameter for describing the Breit-Wheeler process in the field of a plane electromagnetic wave is the classical relativistic-invariant parameter
η=eFλ̄/mc^2,
numerically equal to the ratio of the work of the field over a wavelength to the rest energy of the electron. Here e and m are the charge and mass of the electron, F and λ̄=c/ω are the electric field strength and the reduced wavelength, and ω is the frequency of the wave <cit.>.
In this paper, we consider the resonant strong electromagnetic field-assisted Breit-Wheeler process for high-energy gamma quanta with energies ħω_1,2≲10^2 GeV, so that the electron-positron pair produced in the wave field is ultrarelativistic:
ħω_1,2≫ mc^2, E_±≫ mc^2.
Here ħω_1,2 and E_± are energies of the initial gamma quanta and final positron or electron. Therefore, we will assume that the magnitude of the classical parameter η is upper bounded by the condition:
η≪η_max, η_max=min(E_±/mc^2).
Let's estimate the maximum intensity of the electric field in the wave. For electron-positron pair energies E_±≲10^2 GeV, it follows from equation (<ref>) that η≪η_max∼10^5, or for the field strength we have F≪ F_max∼10^15 Vcm^-1 (I≪ I_max∼10^28 Wcm^-2). Thus, the problem will consider sufficiently large intensities of the electromagnetic wave. However, these fields must be smaller than the Schwinger critical field F_*≈1.3·10^16 Vcm^-1 <cit.>.
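For orientation, the relation between η, the wave intensity and the photon energy can be evaluated numerically. The sketch below is an independent estimate (assuming the time-averaged intensity of a circularly polarized wave, I = ε_0 c F^2, a convention that differs by an O(1) factor for linear polarization); it gives η ≈ 1 for the optical and X-ray intensities used later in the paper.

```python
import numpy as np

# SI constants
e, m_e, c = 1.602176634e-19, 9.1093837015e-31, 2.99792458e8
eps0, hbar = 8.8541878128e-12, 1.054571817e-34

def eta(intensity_W_cm2, photon_energy_eV):
    """eta = e F / (m c omega), taking I = eps0 c F^2 for a circularly polarized wave."""
    I = intensity_W_cm2 * 1.0e4              # W/m^2
    F = np.sqrt(I / (eps0 * c))              # electric field strength, V/m
    omega = photon_energy_eV * e / hbar      # angular frequency, rad/s
    return e * F / (m_e * c * omega)

print(eta(1.675e19, 3.0))     # optical case (3 eV)  -> approximately 1
print(eta(1.861e26, 1.0e4))   # X-ray case (10 keV)  -> approximately 1
```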
In the following, the relativistic system of units is used: c=ħ=1.
§ AMPLITUDE OF THE PROCESS
Let us consider this process in the field of a plane circularly
polarized wave propagating along the z axis:
A(φ)=F/ω(e_xcosφ+δ e_ysinφ), φ=(kx)=ω(t-z), δ=±1.
Here e_x, e_y are the polarization 4-vectors of the external field that have the following properties:
e_x=(0,𝐞_𝐱), e_y=(0,𝐞_𝐲), e_xe_y=0, (e_x)^2=(e_y)^2=-1. The external field-assisted Breit-Wheeler process is characterized by two Feynman diagrams (Fig.<ref>).
The amplitude of the considered process is written as follows
S_if=ie^2∬ d^4x_1d^4x_2Ψ_p_-(x_1|A)Â_1(x_1;k_1)G(x_2x_1|A)Â_2(x_2;k_2)Ψ_-p_+(x_2|A) + (k_1↔ k_2),
where k_1,2=(ω_1,2,𝐤_1,2) — 4-momenta of the initial gamma quanta, p_±=(E_±, 𝐩_±) — 4-momenta of the final electron and positron. Here and further, the notation for the convolution of a 4-vector with the Dirac gamma matrices is used: Â_1,2≡γ_μA^μ_1,2 μ=0,1,2,3. The 4-potentials of the initial gamma quanta A_j in expression (<ref>) are determined by the functions
A_j(x;k_j)=√(2π/ω_j)ε_je^-ik_jx, j=1,2,
where ε_j — 4-vectors of the polarization of the initial gamma quanta.
In the amplitude (<ref>), the electron-positron pair corresponds to the Volkov functions <cit.>:
Ψ_p(x|A)=𝔍_p(x)u_p/√(2E), 𝔍_p(x)=[1+e/2(pk)k̂Â(kx)]e^iS_p(x),
S_p(x)=-(px)-e/(kp)∫_0^kxdφ[pA(φ)-e/2A^2(φ)],
where u_p is the Dirac bispinor. The intermediate state in the amplitude (<ref>) corresponds to the Green's function of the electron in the field of a plane wave G(x_2x_1|A) <cit.>:
G(x_2x_1|A)=∫d^4p/(2π)^4𝔍_p(x_2)p̂+ m/p^2-m^2𝔍_p(x_1).
After simple transformations, the amplitude (<ref>) can be represented as follows:
S_if=∑_l=-∞^+∞S_l,
where the partial amplitude S_l corresponds to the absorption or emission of |l| photons of the external wave. For the Channel A, the partial amplitude can be represented in the following form:
S_l=iπ e^2(2π)^4e^-id/√(E_-E_+ω_1ω_2)[u_p_-M_lv_p_+]δ^(4)(k_1+k_2-p_–p_+-lk).
Here d is the phase, independent of the summation indices, M_l — the matrix determined by the expression
M_l=ε_1με_2ν∑_r=-∞^+∞K^μ_l+r(p_-,q_-)q̂_-+m/q_-^2-m_*^2K^ν_-r(q_-,-p_+), μ,ν=0,1,2,3.
In relation (<ref>), the functions K^μ_l+r and K^ν_-r have the following form:
K^μ'_n(p',p)=a^μ'L_n(p',p)+b_-^μ'L_n-1+b_+^μ'L_n+1.
Here, the matrices a^μ' and b^μ'_± have the following form:
a^μ'=γ^μ'+m^2k̂/2(kp')(kp)k^ν, b^μ'_±=1/4η m(ê_±k̂γ^μ'/(kp')+γ^μ'k̂ê_±/(kp)),
e_±≡ e_x± ie_y, μ'=μ,ν, n=l+r,-r, p=-p_+,q_-, p'=q_-,p_-.
In relations (<ref>), (<ref>) there are special functions L_n <cit.>, which in the case of circular polarization of the wave can be represented using Bessel functions with integer indices
L_n(p',p)=exp(-inχ_p'p)J_n(γ_p'p),
where is denoted
γ_p'p=mη√(-Q^2_p'p), tanχ_p'p=δ(Q_p'pe_y)/(Q_p'pe_x), Q_p'p=p'/(p'k)-p/(pk).
In the expressions (<ref>) and (<ref>), p̃_±=(Ẽ_±, 𝐩̃_±) and q̃_- are the 4-quasimomenta of the electron (positron) and of the intermediate electron, and m_* is the effective mass of the electron in the
field of a circularly polarized wave (<ref>) <cit.>:
q̃_-=k_2+rk-p̃_±,
p̃_±=p_±+η^2m^2/2(kp_±)k, q̃_-=q_-+η^2m^2/2(kq_-)k,
p̃_±^2=m_*^2, m_*=m√(1+η^2).
§ THE RESONANT KINEMATICS
Under resonance conditions, both an electron and a positron can be intermediate particles. Therefore, instead of two Feynman diagrams in the non-resonant case (see Fig. <ref>), under resonance conditions we will have 4 Feynman diagrams (see Fig. <ref>): Channels A and B, as well as Channels A' and B', which are obtained from channels A and B by rearranging the initial gamma quanta (k_1↔ k_2). Each channel in the resonance conditions effectively decays into two first-order processes by the fine structure constant: the external field-stimulated Breit-Wheeler process (EFSBWP) and the external field-stimulated Compton effect (EFSCE) with intermediate electrons and positrons entering the mass shell:
q_-^2=m_*^2, q_+^2=m_*^2.
Further consideration will be carried out for resonant Channels A and B (see Fig. <ref>). It is important to emphasize that the laws of conservation of energy-momentum for intermediate processes of resonant Channels A and B have the form:
EFSBWP: k_2+rk=q_∓+p_± r=1,2,3…;
EFSCE: k_1+q_∓=p_∓+r'k r'=1,2,3… (r'=l+r).
Since the problem considers high-energy initial gamma quanta and ultrarelativistic energies of the final electron-positron pair (<ref>), under such conditions, the momenta of the initial and final particles should lie within a narrow angle cone, which should be far away from the direction of wave propagation:
θ_j±≡∠(𝐤_j, 𝐩_±)≪1, θ_i≡∠(𝐤_1, 𝐤_2)≪1,
θ≡∠(𝐩_±,𝐤)∼1, θ_j≡∠(𝐤_j, 𝐤)∼1, j=1,2; θ≈θ_1≈θ_2.
Let us note that under conditions (<ref>), (<ref>), the expression for the positron (electron) quasienergy can be simplified:
Ẽ_±=E_±[1+(mη/E_±)^2/(4sin^2(θ_±/2))]≈ E_±.
Let us determine the resonance energy of the positron (electron) for the second vertex (see Fig. <ref>). Taking into account relations (<ref>), (<ref>), (<ref>), (<ref>) from the conservation of 4-momentum law (<ref>) for the external field-stimulated Breit-Wheeler process, we obtain the resonance energies of the positron (for Channel A) or electron (for Channel B) in units of the total energy of the initial gamma quanta:
x_j'(r)=ω_2/2ω_i(ε_2BW(r)+δ^2_2j')[ε_2BW(r)±√(ε_2BW(r)(ε_2BW(r)-1)-δ^2_2j')], j'=+,-.
Here it is indicated:
x_±(r)=E_±(r)/ω_i, ω_i=ω_1+ω_2, δ_2±=ω_2/2m_*θ_2±.
In this case, the ultrarelativistic parameter δ_2±, which determines the outgoing angle of the positron or electron, is contained within the interval
0≤δ^2_2+≤δ^2_2+max, δ^2_2+max=ε_2BW(r)(ε_2BW(r)-1).
It is important to emphasize that in equation (<ref>), the quantity ε_2BW(r) is bounded from below by unity
ε_2BW(r)=rε_2BW≥1, ε_2BW=ω_2/ω_BW,
where ω_BW is the characteristic quantum energy of the external field-stimulated Breit-Wheeler process:
ω_BW=m_*^2/(ωsin^2(θ/2))={[ 174 GeV, ω=3 eV, I=1.675·10^19 W cm^-2; 5.22 GeV, ω=0.1 keV, I=1.861·10^22 W cm^-2; 52.2 MeV, ω=10 keV, I=1.861·10^26 W cm^-2 ].
When estimating the value of the characteristic energy, frequencies of electromagnetic waves in the optical and X-ray ranges were used in equation (<ref>), as well as values of parameters η=1 and θ=π. It is worth noting that the ratio between the initial energy of the gamma quantum and the characteristic energy ω_BW determines the value of parameter ε_2BW (<ref>), which can be either greater or less than unity. This significantly affects the number of photons absorbed in the EFBWP. Specifically, if the initial energy of the gamma quantum is less than the characteristic energy, then from equations (<ref>) and (<ref>) it follows that this process occurs if the number of absorbed wave photons is above a certain minimum r_min value, which is greater than unity:
r≥ r_min=⌈ε_2BW^-1⌉ (ω_2<ω_BW).
If the initial energy of the gamma quantum is greater than the characteristic energy, then this process takes place already when one photon of the wave is absorbed:
r≥1 (ω_2≥ω_BW).
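These characteristic scales are straightforward to tabulate. The sketch below (a numerical illustration using the parameter choices η = 1 and θ = π quoted above; the 100 GeV test energy is an assumed example) evaluates ω_BW, ω_C = ω_BW/4 and the threshold photon number r_min = ⌈ω_BW/ω_2⌉.

```python
import numpy as np

m_e = 0.510998950e6                    # electron mass, eV
eta, theta = 1.0, np.pi                # wave parameter and pair emission angle
m_star2 = m_e ** 2 * (1.0 + eta ** 2)  # effective mass squared, eV^2

def omega_BW(omega_wave_eV):
    """Characteristic Breit-Wheeler energy m_*^2 / (omega sin^2(theta/2)), in eV."""
    return m_star2 / (omega_wave_eV * np.sin(theta / 2.0) ** 2)

for omega_wave, label in [(3.0, "optical, 3 eV"), (100.0, "0.1 keV"), (1.0e4, "10 keV")]:
    w_bw = omega_BW(omega_wave)
    print(f"{label:>14s}: omega_BW = {w_bw / 1e9:7.3f} GeV, omega_C = {w_bw / 4e9:7.3f} GeV")

# threshold photon number for a gamma quantum below the characteristic energy
omega2 = 100.0e9                       # e.g. a 100 GeV gamma quantum and an optical wave
r_min = int(np.ceil(omega_BW(3.0) / omega2))
print("r_min =", r_min)                # 2, since 100 GeV < 174 GeV
```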
Thus, the resonant energy of a positron (for Channel A) or an electron (for Channel B) is determined by two parameters: the corresponding outgoing angle of the positron (δ^2_2+) or electron (δ^2_2-), and the parameter ε_2BW(r). At the same time, with a fixed parameter ε_2BW(r), for each outgoing angle of the positron or electron, there are two possible energies (see equation (<ref>)).
Figure <ref> shows the dependence of the energy of the positron (for Channel A) or electron (for Channel B) (see equations (<ref>)-(<ref>)) for the external field-stimulated Breit-Wheeler process with absorption of one and two photons of the wave at different frequencies, intensities of the electromagnetic wave (equation (<ref>), and various initial gamma quanta energies. From this figure, it follows that the interval for the outgoing angle of the positron (electron) significantly depends on the number of absorbed photons of the wave. Additionally, for the same outgoing angle, there are two possible particle energies, except for the maximum outgoing angle.
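To make the two-branch structure of the resonant energies explicit, the following snippet evaluates the expression for x_±(r) as printed in equation (<ref>) (assuming the grouping x = ω_2[ε_2BW(r) ± √(ε_2BW(r)(ε_2BW(r)-1) - δ^2_2±)]/[2ω_i(ε_2BW(r)+δ^2_2±)], which is consistent with the quoted maximal angle δ^2_2+max); the numerical values of ε_2BW(r) and ω_2/ω_i are arbitrary examples. It checks that the two branches merge at the maximal outgoing angle.

```python
import numpy as np

def x_resonant(delta2, eps, w2_over_wi, branch=+1):
    """Resonant energy fraction x = E/omega_i at the EFSBWP vertex (two branches)."""
    disc = eps * (eps - 1.0) - delta2
    if disc < 0.0:
        raise ValueError("outgoing angle beyond the kinematic limit delta2_max = eps*(eps-1)")
    return w2_over_wi * (eps + branch * np.sqrt(disc)) / (2.0 * (eps + delta2))

eps = 2.0              # epsilon_{2BW}(r); example value
w2_over_wi = 0.9       # omega_2 / (omega_1 + omega_2); example value
delta2_max = eps * (eps - 1.0)

for delta2 in (0.0, 0.5 * delta2_max, delta2_max):
    x_hi = x_resonant(delta2, eps, w2_over_wi, +1)
    x_lo = x_resonant(delta2, eps, w2_over_wi, -1)
    print(f"delta2 = {delta2:4.2f}:  x_+ = {x_hi:.3f},  x_- = {x_lo:.3f}")
# the two branches coincide at delta2_max; at delta2 = 0 they sum to omega_2/omega_i
```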
Now let us determine the resonant electron (positron) energy at the first vertex (see Fig. <ref>). Taking into account equations (<ref>), (<ref>), (<ref>), and (<ref>), from the 4-momentum conservation law (<ref>) for the external field-stimulated Compton effect we obtain the resonant energies of the electron (for Channel A) or the positron (for Channel B) in units of the total energy of the initial gamma quanta:
x_∓(r')=ω_1/2ω_i(ε_1C(r')-δ^2_1∓)[ε_1C(r')+√(ε_1C(r')^2+4(ε_1C(r')-δ^2_1∓))].
Here is denoted:
x_∓(r')=E_∓(r')/ω_i, δ_1∓=ω_1/m_*θ_1∓.
ε_1C(r')=r'ε_1C, ε_1C=ω_1/ω_C, ω_C=1/4ω_BW.
Here ω_C s the characteristic quantum energy of the external field-stimulated Compton effect. This energy is four times less than the characteristic energy for the external field-stimulated Breit-Wheeler process. Additionally, it should be noted that the ultrarelativistic parameter δ^2_1∓, which determines the outgoing angle of the electron or positron, should not take values close to ε_1C(r'), in order to satisfy the condition x_∓(r')<1 (see equation (<ref>)). It should also be noted that there are no limitations on the parameter ε_1C(r') for the external field-stimulated Compton effect. Therefore, this process occurs for any number of emitted photons of the wave r'≥1.
Furthermore, we will assume that the energies of the initial gamma quanta, within the framework of conditions (<ref>), satisfy the additional conditions:
ω_2>ω_BW, ω_1≪ω_BW.
Conditions (<ref>) mean that parameter ε_2BW>1, and parameter ε_1BW≪1 (see equations (<ref>) and (<ref>)). Therefore, in Channels A and B, the external field-stimulated Breit-Wheeler process occurs with a number of absorbed photons of the wave r≥1, and for the exchange resonant diagrams A' and B' the number of absorbed photons is r≥ r_min=⌈ε_1BW^-1⌉≫1. Thus, within the framework of conditions (<ref>), resonant Channels A' and B' will be suppressed, and we will only consider two resonant Channels A and B (see Fig. <ref>). It is also important to consider that for Channel A, the resonant energy of the positron is determined by its outgoing angle relative to the momentum of the second gamma quantum in the EFBWP, while the resonant energy of the electron is determined by its outgoing angle relative to the momentum of the first gamma quantum in the EFSCE. For Channel B, we have the opposite situation, where the energy of the electron is determined by its outgoing angle relative to the momentum of the second gamma quantum, and the energy of the positron is determined by its outgoing angle relative to the momentum of the first gamma quantum (see Fig. <ref>). Therefore, Channels A and B are distinguishable and do not interfere with each other.
It is important to note that under resonance conditions (<ref>), the resonant energies of the positron and electron for each reaction channel are determined by different physical processes: the external field-stimulated Breit-Wheeler process (<ref>) and the Compton external field-stimulated effect (<ref>). At the same time, the energies of the electron-positron pair are related to each other by the general law of conservation of energy
x_++x_-≈1 (x_±=E_±/ω_i).
It should be noted that in equation (<ref>) we have neglected a small correction term |l|ω/ω_i≪1. Taking into account equations (<ref>) and (<ref>), as well as the law of conservation of energy (<ref>) for Channels A and B, we obtain the following equations relating the outgoing angles of the positron and electron:
δ^2_1∓=ε_1C(r')-(ω_1/ω_i)/(1-x_±(r))[ε_1C(r')+(ω_1/ω_i)/(1-x_±(r))].
Here the upper (lower) sign corresponds to Channel A (B). In equation (<ref>), the left side represents the ultrarelativistic parameter associated with the outgoing angle of the electron (positron) relative to the momentum of the first gamma quantum, and the right side is the function of the ultrarelativistic parameter δ_2±, associated with the outgoing angle of the positron (electron) relative to the momentum of the second gamma quantum. Under given parameters ε_1C(r') and ε_2BW(r), equation (<ref>) uniquely determines the outgoing angles of the electron and positron, and therefore their resonant energies (see Fig. <ref> and Fig. <ref>).
Figure <ref> presents the dependence of the energy of the electron (for channel A) or positron (for channel B) (<ref>), (<ref>) for the external field-stimulated Compton effect at different frequencies, intensities of the electromagnetic wave (<ref>), and initial gamma quanta energies under the condition of energy conservation in the first and second vertices (<ref>). The graphs are given for different numbers of absorbed (r) and emitted (r') photons of the wave.
It is also worth noting the important case when the quantum parameter ε_2BW(r)≫1. In this case, from the expression (<ref>) with the "+" sign before the square root, the energy of the positron (Channel A) or electron (Channel B) approaches the energy of the highly energetic second gamma quantum:
E_±≈ω_2[1-(1+δ^2_2±)/4ε_2BW(r)]⟶ω_2 (δ^2_2±≪ε_2BW(r)).
The expression with the "–" sign before the square root in equation (<ref>) leads to the minimum energy of the positron or electron E_±∼ω_2/ε_2BW(r)≪ω_2. However, this case is unlikely. Similarly, for the first gamma quantum, when the quantum parameter ε_1C(r')≫1, we obtain the energy of the electron (Channel A) or positron (Channel B) approaching the energy of the first gamma quantum:
E_∓≈ω_1[1-(1+δ^2_1∓)/ε_1C(r')]⟶ω_1 (δ^2_1∓≪ε_1C(r')).
Thus, if the quantum parameters ε_1C(r') and ε_2BW(r) take large values, the resonant energies of the positron and electron tend towards the energies of the corresponding initial gamma quanta.
§ THE RESONANT DIFFERENTIAL CROSS SECTION
Previously, it has been shown that in conditions (<ref>), (<ref>), and (<ref>), exchange Channels A' and B' are suppressed. In addition, Channels A and B are distinguishable and therefore do not interfere (see text after equation (<ref>)). It is also important to note that resonance processes with different numbers of absorbed and emitted wave photons correspond to significantly different probabilities and energies of electron-positron pair. Therefore, they do not interfere either. Due to this, summation over all possible processes with absorption of r wave photons is not necessary in the amplitude (<ref>):
M_rr'=ε_1με_2νK^μ_r'(p_-,q_-)q̂_-+m/q_-^2-m_*^2K^ν_-r(q_-,-p_+), r'=l+r.
The resonant differential cross section for Channels A and B and unpolarized initial gamma quanta and the final electron-positron pair is obtained from the amplitude (<ref>), (<ref>), (<ref>) in a standard way <cit.>. After simple calculations, we obtain:
dσ_rr'=2m^6r^2_e/E_-E_+m_*^2δ^2_η iK_1∓(r')P_2±(r)/|q_∓^2-m_*^2|^2δ^(4)[k_1+k_2-p_–p_+-(r'-r)k]d^3p_-d^3p_+.
Here, the upper (lower) sign corresponds to Channel A (B), r_e=e^2/m is the classical electron radius. In obtaining the resonant differential cross-section (<ref>), the resonant probability was divided by the flux density of the initial gamma quanta <cit.>:
j=(k_1k_2)/ω_1ω_2≈m^2_*/2ω_1ω_2δ^2_η i, δ^2_η i≡ω_1ω_2/m^2_*θ^2_i.
In expression (<ref>), the function P_2±(r) determines the probability of the external field-stimulated Breit-Wheeler process <cit.>, and the function K_1∓(r') determines the probability of the external field-stimulated Compton effect <cit.>:
P_2±(r)=J^2_r(γ_2±(r))+η^2(2u_2±(r)-1)[(r^2/γ_2±(r)^2-1)J^2_r+J'^2_r],
K_1∓(r')=-4J^2_r'(γ_1∓(r'))+η^2[2+u_1∓(r')^2/1+u_1∓(r')](J^2_r'-1+J^2_r'+1-2J^2_r').
The arguments of the Bessel functions for the external field-stimulated Breit-Wheeler process (<ref>) and the external field-stimulated Compton effect (<ref>) have the following form:
γ_2±(r)=2rη/√(1+η^2)√(u_2±(r)/v_2±(r)(1-u_2±(r)/v_2±(r))),
γ_1∓(r')=2r'η/√(1+η^2)√(u_1∓(r')/v_1∓(r')(1-u_1∓(r')/v_1∓(r'))).
Here, the relativistic-invariant parameters are equal to:
u_1∓(r')=(k_1k)/(p_∓k)≈(ω_1/ω_i)/x_∓(r'), v_1∓(r')=2r'(q_∓k)/m^2_*≈ε_1C(r')(x_∓(r')/(ω_1/ω_i)-1),
u_2±(r)=(k_2k)^2/4(p_±k)(q_∓k)≈(ω_2/ω_i)/4x_±(r)(1-x_±(r)/(ω_2/ω_i)), v_2±(r)=r(k_2k)/2m_*^2≈ε_2BW(r).
The elimination of resonant singularity in expression (<ref>) is carried out by the Breit-Wigner procedure <cit.>:
m_*⟶μ_*=m_*-iΓ_∓(r), Γ_∓(r)=q_∓^0/2m_*W_1,
where W_1 is the total probability (per unit of time) of the external field-stimulated Compton effect on the intermediate electron (for Channel A) or positron (for Channel B).
W_1=α m^2/4πq_∓^0K(ε_1C),
K(ε_1C)=∑_s=1^∞∫_0^sε_1Cdu/(1+u)^2K(u,sε_1C).
Here, α is the fine-structure constant, and the function K(u,sε_1C) is determined by the expression:
K(u,sε_1C)=-4J^2_s(γ_1(s))+η^2[2+u^2/1+u](J^2_s-1+J^2_s+1-2J^2_s)
γ_1(s)=2sη/√(1+η^2)√(u/sε_1C(1-u/sε_1C)).
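Evaluating the resonance width therefore requires the Bessel-function sum and integral above. A direct transcription in Python/SciPy reads as follows; the truncation of the harmonic sum at s_max and the sample parameter values are assumptions of this sketch, not of the paper.

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import jv

def K_integrand(u, s, eps1C, eta):
    """K(u, s*eps_1C) for the external field-stimulated Compton effect."""
    se = s * eps1C
    arg = np.clip((u / se) * (1.0 - u / se), 0.0, None)
    gamma = 2.0 * s * eta / np.sqrt(1.0 + eta ** 2) * np.sqrt(arg)
    js = jv(s, gamma)
    return (-4.0 * js ** 2
            + eta ** 2 * (2.0 + u ** 2 / (1.0 + u))
            * (jv(s - 1, gamma) ** 2 + jv(s + 1, gamma) ** 2 - 2.0 * js ** 2))

def K_total(eps1C, eta, s_max=20):
    """K(eps_1C) = sum_s int_0^{s eps_1C} du K(u, s eps_1C)/(1+u)^2, truncated at s_max."""
    total = 0.0
    for s in range(1, s_max + 1):
        val, _ = quad(lambda u: K_integrand(u, s, eps1C, eta) / (1.0 + u) ** 2,
                      0.0, s * eps1C, limit=200)
        total += val
    return total

print(K_total(eps1C=0.1, eta=1.0))   # enters the width Upsilon and the factor c_{eta i}
```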
Taking into account the relations (<ref>)-(<ref>), the resonant denominator in the cross-section (<ref>) takes the following form:
|q_∓^2-m_*^2|^2⟶ m_*^4x^2_∓(r')/(ω_1/ω_i)^2[(δ^2_1∓(0)-δ^2_1∓)^2+Υ^2_∓(r')].
Here, the ultrarelativistic parameter δ^2_1∓ is related to the resonance energy of the electron (for Channel A) or positron (for Channel B) by the relation (<ref>), and the corresponding parameter δ^2_1∓(0) can take arbitrary values unrelated to the energy of the electron (positron). In this case, the corresponding angular width of the resonance Υ_∓(r') is determined by the expression:
Υ_∓(r')=α m^2/4π m_*^2ω_1/ω_ix_∓(r')K(ε_1C).
Considering relation (<ref>), we can set d^3p_±≈ d^3p_± and integrate the three-dimensional momentum of the electron (positron) as well as the energy of the positron (electron) for Channel A (for Channel B) using the delta-function in expression (<ref>). After simple calculations, we obtain the following expression for the resonant differential cross-section for Channels A and B:
R_2±(rr')=dσ_rr'/dδ^2_2±=8π r^2_e(m/δ_η iω_i)^2x_±(r)/x_∓(r')^3(m/m_*)^4(ω_1/ω_2)^2K_1∓(r')P_2±(r)/[(δ^2_1∓(0)-δ^2_1∓)^2+Υ^2_∓(r')].
Here, the upper (lower) sign corresponds to Channel A (B). It should be noted that the differential cross-section (<ref>) has a characteristic Breit-Wigner resonance structure <cit.>. Let's determine the maximum resonant differential cross-section when
(δ^2_1∓(0)-δ^2_1∓)^2≪Υ^2_∓(r').
Under conditions (<ref>), the resonant cross-section (<ref>) takes its maximum value, which is equal to:
R_2±(rr')^max=dσ_rr'^max/dδ^2_2±=r^2_ec_η iΨ_±(rr').
Here, the function c_η i is determined by the initial setup parameters
c_η i=2(4π)^3/α^2K^2(ε_1C)(m/δ_η iω_2)^2∼10^8(m/δ_η iω_2)^2,
and the functions Ψ_±(rr') determine the spectral-angular distribution of the generated electron-positron pair:
Ψ_±(rr')=x_±(r)/1-x_±(r)K_1∓(r')P_2±(r).
It is important to emphasize that the magnitude of the maximum resonant differential cross-section significantly depends on the value of the function c_η i (<ref>). Let's require that the function c_η i>1. Then, from relation (<ref>), we obtain a condition on the initial ultrarelativistic parameter δ^2_η i (<ref>):
δ^2_η i<(10^4m/ω_2)^2.
It should be noted that the corresponding Breit-Wheeler differential cross-section without an external field in this kinematics (<ref>) has the following order of magnitude <cit.>:
dσ_BW/dδ^2_2±∼ r_e^2(m/δ_iω_i)^2, δ_i=√(ω_1ω_2)θ_i/m.
From relations (<ref>)-(<ref>) and (<ref>), it can be seen that the maximum resonant cross-section significantly exceeds the corresponding Breit-Wheeler cross-section without an external field.
Figure <ref> shows the dependencies of the maximum resonance differential cross-section (<ref>) on the positron outgoing angle (for Channel A) or electron outgoing angle (for Channel B) for various frequencies and intensities, as well as the numbers of absorbed and emitted photons at the first and second vertices (see Fig. <ref>). The study focused on the regions of optical and X-ray frequencies of the external strong electromagnetic wave at different sufficiently high energies of initial gamma quanta. It is important to note that the energy of the second high-energy gamma quantum for each frequency and intensity of the wave was chosen according to condition (<ref>), in order for the stimulated Breit-Wheeler process to occur with the highest probability, and the energy of the first gamma quantum was chosen to be much lower than the energy of the second gamma quantum (<ref>). In this case, with increasing frequency of the external field, the characteristic energy of the Breit-Wheeler process decreased (see relation (<ref>)). Therefore, energies of initial gamma quanta were chosen to be lower for the X-ray frequency range than for the optical frequency range. As a result, the function (<ref>) increased, leading to an increase in the maximum resonance cross-section. This case is shown in Figures <ref>a) to <ref>c). However, if the energy of initial gamma quanta remains constant and the intensity of the external field increases, then the maximum resonance cross-section decreases (see Figures <ref>c) and <ref>c')). Table 1 displays the values of positron (for Channel A) and electron (for Channel B) energies, as well as the corresponding maximum values of the resonance differential cross-section according to their spectral-angular distribution (see Figures <ref>a) to <ref>c')) for different frequencies and intensities of the wave, as well as different energies of initial the gamma quanta.
From Table 1, it can be observed that if the energy of one of the initial gamma quanta slightly exceeds the characteristic Breit-Wheeler energy, the production of electron-positron pairs occurs with a very large cross section: for the optical frequency range the resonant differential cross section exceeds r_e^2 by a factor of 44, while for the X-ray frequency range it exceeds r_e^2 by up to eight orders of magnitude. In this case, the positrons (electrons) are emitted in a narrow cone and with very high energy.
§ CONCLUSION
We considered the resonant Breit-Wheeler process modified by an external strong electromagnetic field for high-energy initial gamma quanta when the energy of one of them significantly exceeded the energy of the other. The following results were obtained:
* The resonant kinematics of the process has been studied in detail. It was demonstrated that the problem involves two characteristic energies: the Breit-Wheeler energy ω_BW (<ref>) and the Compton effect energy ω_C (<ref>). These energies differ from each other by a factor of four. The ratios of the initial gamma quanta energies to these characteristic energies significantly affect the number of absorbed or emitted wave photons and, ultimately, the probability of the process.
* The resonant energies of the positron and electron strongly depend on their outgoing angles, as well as the characteristic quantum parameters ε_2BW(r) (<ref>) and ε_1C(r') (<ref>). Furthermore, the outgoing angles of the electron and positron are interdependent (<ref>).
* The maximum resonant differential cross-section is achieved when the energy of one of the initial gamma quanta slightly exceeds the characteristic Breit-Wheeler energy. In this case, for the optical frequency range and ω_2=180 GeV, the maximum resonant cross-section is R_2±(rr')^max=44r^2_e, whereas for the X-ray frequency range it is R_2±(rr')^max∼(10^6÷10^8)r^2_e.
The obtained results can be utilized to achieve ultrarelativistic positron (electron) beams with a very high probability in the external field-modified Breit-Wheeler process. Additionally, these results can be employed to explain the fluxes of ultrarelativistic positrons (electrons) near neutron stars and magnetars, as well as in the modeling of physical processes in laser-induced thermonuclear fusion.
The research was funded by the Ministry of Science and Higher Education of the Russian Federation under the strategic academic leadership program “Priority 2030” (Agreement 075-15-2023-380 dated 20.02.2023).
99
1Ritus, V.I.; Nikishov, A.I. Quantum electrodynamics phenomena in the intense field. In Trudy FIAN; Nauka: Moscow, Russia, 1979; Volume 111, pp. 1–276.
2Roshchupkin, S.P. Resonant effects in collisions of relativistic electrons in the field of a light wave. Laser Phys. 1996, 6(5), pp. 837-858.
3Roshchupkin S.P.; Tsybul’nik V.A.; Chmirev A.N. The Probability of Multiphoton Processes in Quantum-Electrodynamic Phenomena in a Strong Light Field. Laser Phys. 2000, 10, pp. 1256–1272.
4Roshchupkin, S.P.; Lebed’, A.A.; Padusenko, E.A.; Voroshilo, A.I. Quantum electrodynamics resonances in a pulsed laser field. Laser Phys. 2012, 22, pp. 1113-1144.
5Di Piazza, A.; Müller, C.; Hatsagortsyan, K.Z.; Keitel, C.H. Extremely high-intensity laser interactions with fundamental quantum systems. Rev. Mod. Phys. 2012, 84(3), pp. 1117-1228.
6Mironov, A.A.; Meuren, S.; Fedotov, A.M. Resummation of QED radiative corrections in a strong constant crossed field. Phys. Rev. D 2020, 102(5), 053005, 18 pp.
7Gonoskov, A.; Blackburn, T.G.; Marklund, M.; Bulanov, S.S. Charged particle motion and radiation in strong electromagnetic fields. Rev. Mod. Phys. 2022, 94(4), 045001, 63 pp.
8Roshchupkin, S.P.; Voroshilo, A.I. Resonant and Coherent Effects of Quantum Electrodynamics in the Light Field; Naukova Dumka: Kiev, Ukraine, 2008.
9Roshchupkin, S.P.; Lebed’, A.A.; Padusenko, E.A.; Voroshilo, A.I. Resonant effects of quantum electrodynamics in the pulsed light field, in Quantum Optics and Laser Experiments, edited by S. Lyagushyn (Intech, Rijeka, Croatia, 2012), Chap. 6, pp. 107–156.
10Roshchupkin, S.P.; Lebed’, A.A. Effects of Quantum Electrodynamics in the Strong Pulsed Laser Fields; Naukova Dumka: Kiev, Ukraine, 2013.
11Bula, C.; McDonald, K.T.; Prebys, E.J.; et al. Observation of Nonlinear Effects in Compton Scattering. Phys. Rev. Lett. 1996, 76(17), pp. 3116-3119.
12Mourou, G.A.; Tajima, T.; Bulanov, S.V. Optics in the relativistic regime. Rev. Mod. Phys. 2006, 78(2), pp. 309-371.
13Bagnoud, V.; Aurand, B.; Blazevic, A.; et al. Commissioning and early experiments of the PHELIX facility. Appl. Phys. B 2010, 100, pp. 137-150.
15Burke, D.L.; Field, R.C.; Horton-Smith, G.; et al. Positron Production in Multiphoton Light-by-Light Scattering. Phys. Rev. Lett. 1997, 79(9), pp. 1626-1629.
16Bamber, C.; Boege, S.J.; Koffas, T.; et al. Studies of nonlinear QED in collisions of 46.6 GeV electrons with intense laser pulses. Phys. Rev. D 1999, 60(9), 092004, 43 pp.
17Kanya, R.; Morimoto, Y.; Yamanouchi, K. Observation of Laser-Assisted Electron-Atom Scattering in Femtosecond Intense Laser Fields. Phys. Rev. Lett. 2010, 105(12), 123202, 4 pp.
18Hartin, A. Strong field QED in lepton colliders and electron/laser interactions. Int. J. Mod. Phys. A 2018, 33(13), p. 1830011.
19Magnusson, J.; Gonoskov, A.; Marklund, M.; et al. Laser-Particle Collider for Multi-GeV Photon Production. Phys. Rev. Lett. 2019, 122(25), 254801, 6 pp.
20Oleinik, V.P. Resonance effects in the field of an intense laser beam. Sov. Phys. JETP 1967, 25(4), pp. 697-708.
21Oleinik, V.P. Resonance effects in the field of an intense laser ray ii. Sov. Phys. JETP 1968, 26(6), pp. 1132-1138.
22Florescu, A.; Florescu, V. Laser-modified electron bremsstrahlung in a Coulomb field. Phys. Rev. A 2000, 61(3), 033406, 12 pp.
23Flegel, A.V.; Frolov, M.V.; Manakov, N.L.; Starace, Anthony F.; Zheltukhin, A.N. Analytic description of elastic electron-atom scattering in an elliptically polarized laser field. Phys. Rev. A 2013, 87(1), 013404, 18pp.
24Zheltukhin, A.N.; Flegel, A.V.; Frolov, M.V.; Manakov, N.L.; Starace, A.F. Resonant electron-atom bremsstrahlung in an intense laser field. Phys. Rev. A 2014, 89(2), 023407, 16 pp.
25Zheltukhin, A.N.; Flegel, A.V.; Frolov, M.V.; Manakov, N.L.; Starace, A.F. Rescattering effects in laser-assisted electron–atom bremsstrahlung. Phys. B 2015, 48, p. 075202.
26Li, A.; Wang, J.; Ren, N.; Wang, W.; Zhu, W.; Li, X.; Hoehn, R.; Kais, S. The interference effect of laser-assisted bremsstrahlung emission in Coulomb fields of two nuclei. J. Appl. Phys. 2013, 114, 124904.
27Heinzl, T.; Ilderton, A. Exact Classical and Quantum Dynamics in Background Electromagnetic Fields. Phys. Rev. Lett. 2017, 118(11), 113202, 5 pp.
28Nedoreshta, V.N.; Voroshilo, A.I.; Roshchupkin, S.P. Resonant scattering of a photon by an electron in the moderately-strong-pulsed laser field. Phys. Rev. A 2013, 88, p. 052109.
29Lebed', A.A.; Padusenko, E.A.; Roshchupkin, S.P.; Dubov, V.V. Parametric interference effect in nonresonant spontaneous bremsstrahlung of an electron in the field of a nucleus and two pulsed laser waves. Phys. Rev. A 2016, 94(1), 013424, 12 pp.
31Krachkov, P.A.; Di Piazza, A.; Milstein, A.I. High-energy bremsstrahlung on atoms in a laser field. Phys. Lett. B 2019, 797, p. 134814.
32Roshchupkin, S.P.; Larin, N.R.; Dubov, V.V. Resonant effect of the ultrarelativistic electron–positron pair production by gamma quanta in the field of a nucleus and a pulsed light wave. Laser Phys. 2021, 31(4), p. 045301.
33Roshchupkin, S.P.; Larin, N.R.; Dubov, V.V. Resonant photoproduction of ultrarelativistic electron-positron pairs on a nucleus in moderate and strong monochromatic light fields. Phys. Rev. D 2021, 104(11), 116011, 11 pp.
34Roshchupkin, S.P.; Dubov, A.; Dubov, V.V. Resonant effects in the spontaneous bremsstrahlung process of ultrarelativistic electrons in the fields of a nucleus and a pulsed light wave. Laser Phys. Lett. 2021, 18, 045301, 16 pp.
35Roshchupkin, S.P.; Starodub, S.S. The effect of generation of narrow ultrarelativistic beams of positrons (electrons) in the process of resonant photoproduction of pairs on nuclei in a strong electromagnetic field. Laser Phys. Lett. 2022, 19(11), 115301, 10 pp.
36Roshchupkin, S.P.; Dubov, A.V.; Dubov, V.V.; Starodub, S.S. Fundamental physical features of resonant spontaneous bremsstrahlung radiation of ultrarelativistic electrons on nuclei in strong laser fields. New J. Phys. 2022, 24(1), p. 013020.
37Breit, G.; Wheeler, J.A. Collision of two light quanta. Phys. Rev. 1934, 46(12), pp.1087-1091.
38Zhao, Q.; Wu, Y.; Ababekri, M.; Li, Z.; Tang, L.; Li, J. Angle-dependent pair production in the polarized two-photon Breit-Wheeler process. Phys. Rev. D 2023, 107(9), p. 096013.
39He, Y.; Yeh, I.L.; Blackburn, T.G.; Arefiev, A. A single-laser scheme for observation of linear Breit–Wheeler electron–positron pair creation. New J. Phys. 2021, 23(11), p. 115005.
40Ivanov, D.Y.; Kotkin, G.L.; Serbo, V.G. Complete description of polarization effects in e+ e- pair production by a photon in the field of a strong laser wave. Eur. Phys. J. C 2005, 40(1), pp. 27-40.
41Krajewska K.; Kaminski J. Z. Breit-Wheeler process in intense short laser pulses. Phys. Rev. A 2012, 86, p. 052104.
42Bragin, S.; Di Piazza, A. Electron-positron annihilation into two photons in an intense plane-wave field. Phys. Rev. D 2021, 102(11), p. 116012.
43Titov A. I.; Takabe H.; Kämpfer B. Nonlinear Breit-Wheeler process in short laser double pulses. Phys. Rev. D 2018, 98, p. 036022.
44Titov, A.I.; Kämpfer, B. Non-linear Breit–Wheeler process with linearly polarized beams. Eur. Phys. J. D 2020, 74(11), p. 218.
45Tang, S. Fully polarized nonlinear Breit-Wheeler pair production in pulsed plane waves. Phys. Rev. D 2022, 105(5), 056018, 17 pp.
46Blackburn, T.G.; King, B. Higher fidelity simulations of nonlinear Breit–Wheeler pair creation in intense laser pulses. Eur. Phys. J. C 2022, 82(1), 44, 16 pp.
47Seipt, D.; King, B. Spin-and polarization-dependent locally-constant-field-approximation rates for nonlinear Compton and Breit-Wheeler processes. Phys. Rev. A 2020, 102(5), 052805, 22 pp.
48Pustyntsev A.A.; Dubov V.V.; Roshchupkin S.P. Resonant Breit-Wheeler process in an external electromagnetic field. Mod. Phys. Lett. A 2020, 35, 2040027, 4 pp.
49Serov, V.D.; Roshchupkin, S.P.; Dubov, V.V. Resonant Effect for Breit–Wheeler Process in the Field of an X-ray Pulsar. Universe 2020, 6(11), 190, 11 pp.
50Volkov, D. On a class of solutions of the Dirac equation. Z. Phys. 1935, 94, pp. 250–260.
51Wang, H.; Zhong, M.; Gan, L.F. Orthonormality of Volkov Solutions and the Sufficient Condition. Commun. Theor. Phys. 2019, 71, pp.1179-1186.
52Schwinger, J. On Gauge Invariance and Vacuum Polarization. Phys. Rev. 1951, 82, pp. 664–679.
53Brown, L.S.; Kibble, T.W.B. Interaction of Intense Laser Beams with Electrons. Phys. Rev. 1964, 133, pp. A705–A719.
54Breit, G.; Wigner, E. Capture of Slow Neutrons. Phys. Rev. 1936, 49(7), pp. 519-531.
55Berestetskii, V.B.; Lifshitz, E.M.; Pitaevskii, L.P. Quantum Electrodynamics, Vol. 4; Butterworth-Heinemann: London, England, 1982.
|
http://arxiv.org/abs/2307.00387v1
|
20230701170930
|
Simplifying the large mass expansion
|
[
"V. A. Smirnov"
] |
hep-ph
|
[
"hep-ph",
"hep-th"
] |
V.A. Smirnov ([email protected])
Skobeltsyn Institute of Nuclear Physics of Moscow State University, 119992 Moscow, Russia
It is shown how the well-known large mass expansion can be simplified so as to obtain more terms of the expansion in analytic form. The expansion of two-loop four-point Feynman integrals contributing to the process H → ggg is used as an example.
Keywords: Feynman integrals; dimensional regularization; large mass expansion
§ INTRODUCTION
The large mass expansion has been known for more than forty years and has been successfully applied in numerous calculations.
The large mass limit, as well as the off-shell large momentum limit, are examples of
limits typical of Euclidean type which are characterized by considering some external momenta as large or small in the
Euclidean sense. Formally, an external momentum q_i is characterized as small if it is scaled as q_i→ρ q_i
with ρ→0, and all the other external momenta are called large. So, in the large mass limit, all the external momenta are small in this sense and some of the masses are large.
The behaviour of a given Feynman integral in the large mass limit can be described
<cit.> (see also <cit.> and Chapter 9
of <cit.>) by a simple formula with a summation over subgraphs which include all the large masses and whose connectivity components are one-particle-irreducible with respect to the lines with small masses.
Let us also mention that, for a general limit defined by treating some parameters such as kinematic invariants
and masses as small, one can use the universal strategy of expansion by regions <cit.> (see also <cit.>). To do this, one can apply the public computer
code asy <cit.> (also available with the FIESTA5 distribution
package <cit.>) based on the geometry of polytopes associated with the two basic functions in the
Feynman parametric representation.
The goal of this letter is to present a setup to analytically evaluate many terms within the
large mass expansion. This setup has been developed in the framework of a
project[M. Bonetti and L. Tancredi, to appear.] on the evaluation of
two-loop form factors for the process H → ggg, where the Higgs boson couples to the quarks through a pair of
massive vector bosons V, where V is either W^± or Z. This project has been frozen for various reasons.
I believe that it will sooner or later be completed with the help of my setup. Moreover, I believe that the setup could also be applied in other similar calculations.
Applying the expansion in inverse powers of m_t^2, which is the largest parameter in the problem, looks natural because evaluating the corresponding Feynman integrals analytically is a very complicated problem.
The large mass expansion can be applied either to each of the integrals involved in the calculation or to the corresponding master integrals of a given family. The contribution of each of the corresponding
subgraphs belongs to a new family of Feynman integrals, with a new set of propagators and numerators.
These integrals can be reduced, via an integration by parts (IBP) reduction <cit.>, to the corresponding
master integrals. All the master integrals in all the contributions are considerably simpler than
the master integrals of the initial family. As will be explained later, in the case of this project, only eight ingredients appear in the expansion.
Six of these ingredients can be evaluated
in terms of gamma functions at a general value of the dimensional regularization parameter, d=4-2ε, and two of them can be evaluated in terms of multiple polylogarithms in an expansion in ε up to the desired
weight four. A technical problem at this point is to reduce integrals appearing in the expansion
to the corresponding master integrals. It turns out that, as the order of the expansion increases, the IBP reduction gets very complicated and the time needed for such a reduction grows substantially. This increase in complexity can, however, be tamed by applying the combination of FIRE and LiteRed <cit.> and using the possibility to construct explicit analytic reduction rules with LiteRed.
In the next Section, I present details of the application of the large mass expansion to the Feynman integrals which appear at two loops in the project on H → ggg mentioned above. I then discuss some other possible improvements, as well as some accompanying technical problems, in the Conclusion.
§ APPLYING THE LARGE MASS EXPANSION
There are six families of integrals appearing in two loops for the process H → ggg.
They correspond to the graphs shown in Fig. 1.
The general integral of each of these families takes the form
G_a_1,a_2,…,a_9 = ∫∫d^d k d^d l/∏_i=1^9 D_i^a_i ,
where the corresponding six sets of the propagators are
{k^2, l^2 - m_W^2, (k - l)^2 - m_t^2, (k - p_1)^2, (k - p_1 - p_2)^2, (k - p_1 - p_2 -
p_3)^2,
(l - p_1 - p_2 - p_3)^2 -m_W^2,
(l - p_1)^2, (l - p_1 - p_2)^2} ,
{k^2 - m_t^2, l^2 - m_W^2, (k - l)^2, (k - p_1)^2 - m_t^2, (k - p_1 - p_2)^2 -
m_t^2,
(k - p_1 - p_2 - p_3)^2 - m_t^2, (l - p_1 - p_2 - p_3)^2 - m_W^2, (l - p_1)^2, (l - p_1 - p_2)^2} ,
{k^2, (k - l)^2 - m_t^2, (k - p_1)^2, (l + p_3)^2-m_W^2, (k - p_1 -
p_2)^2,
(l - p_1 - p_2)^2-m_W^2,(k - l - p_3)^2-m_t^2 , (l - p_1)^2 -m_W^2, (k - p_1 - p_3)^2} ,
{k^2 - m_t^2, (k - l)^2, (k - p_1)^2 - m_t^2, (l + p_3)^2-m_W^2,
(k - p_1 - p_2)^2 - m_t^2,
(l - p_1 - p_2)^2-m_W^2, (k - l - p_3)^2, (l - p_1)^2-m_W^2, (k - p_1 - p_3)^2} ,
{k^2 - m_t^2, (k - l)^2 - m_t^2, (k - p_1)^2 - m_t^2, (l + p_3)^2-m_W^2 ,
(k - p_1 - p_2)^2 - m_t^2,
(l - p_1 - p_2)^2 -m_W^2, (k - l - p_3)^2 - m_t^2,
(l - p_1)^2 -m_W^2 , (k - p_1 - p_3)^2} ,
{k^2 - m_t^2,
l^2 - m_W^2, (k - l)^2 - m_t^2, (k - p_1)^2 - m_t^2, (k - p_1 - p_2)^2 - m_t^2,
(k - p_1 - p_2 - p_3)^2 - m_t^2, (l - p_1 - p_2 - p_3)^2 - m_W^2, (l - p_1)^2, (l - p_1 - p_2)^2} .
The last two of the nine indices can only be non-positive, so that they stand for numerators.
Let us consider the case V=W, for definiteness.
Let us apply the large mass expansion in the limit m_t →∞, in spite of the fact that three end-points are on the light cone and there are massless particles. Theoretically, there is no mathematical proof of the large mass expansion in such situations but experience shows that it works.
The mass of the top quark is bigger than the other kinematical parameters, but it is not much bigger, so that we need a setup which enables us to go to higher terms of the large mass expansion.
For Family 1, the subgraphs contributing to the expansion are
γ_0=Γ (the graph itself), γ_1={1,3,4,5,6}, γ_2={2,3,7} and γ_3={3}.
For Families 2 and 6, these are γ_0=Γ and γ_1={1,3,4,5,6}.
For Family 3, these are γ_0=Γ, γ_1 = {1, 2, 3, 5, 7}, γ_2 = {2, 4, 6, 7}, γ_3 = {2, 7}.
For Families 4 and 5, these are γ_0=Γ, and γ_1 = {1, 2, 3, 5, 7}.
The expansion in the limit m_t→∞ can equivalently be described in the language of regions, which is preferable here from the technical point of view.
For Families 1 and 3, the relevant regions corresponding to the above mentioned subgraphs are
γ_0: k and l large; γ_1: k large, l small; γ_2: k small, l large;
γ_3: k and l small. For the rest of the families, these are
γ_0: k and l large and γ_1: k large, l small.
In particular, the contribution of γ_0 is given as a Taylor expansion of the integrand in
m_W, p_1, p_2, p_3. To write down the contribution, we make the replacements m_W^2 →ρ^2 m_W^2, p_1 →ρ p_1, p_2 →ρ p_2, p_3 →ρ p_3, pull out an overall power of ρ and then perform a Taylor expansion in ρ at ρ=0. Odd powers of ρ should give zero results,
and this is a useful check. Finally, ρ is set to 1. The other contributions are similarly constructed:
all the parameters m_W, p_1, p_2, p_3 as well as the small loop momenta (momentum) for a given region are multiplied
by ρ.
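To illustrate the mechanics of this step, the following sketch (in Python with SymPy, rather than the Mathematica files attached to the paper; the toy denominator and all names are mine) performs the ρ-scaling and the Taylor expansion in ρ for a single schematic propagator. In the actual calculation, odd powers of ρ cancel only after the loop integration, which is the check mentioned above.

```python
# Minimal sketch of the rho-scaling used to generate a contribution of the
# large mass expansion; the denominator below is a toy stand-in for a real
# integrand, not one of the propagators of Families 1-6.
import sympy as sp

rho, mt2, mW2, k2, kp = sp.symbols('rho mt2 mW2 k2 kp')

# toy denominator: heavy scale mt2, light scales mW2 and a k.p-type product kp
D = k2 - mt2 - mW2 - 2*kp

# gamma_0 region: loop momenta stay "large", so only the light external
# parameters are scaled, m_W^2 -> rho^2 m_W^2, p_i -> rho p_i
D_scaled = D.subs({mW2: rho**2*mW2, kp: rho*kp})

# Taylor expansion of the integrand in rho around rho = 0
expansion = sp.expand(sp.series(1/D_scaled, rho, 0, 5).removeO())

for n in range(5):
    print(f"coefficient of rho^{n}:", sp.simplify(expansion.coeff(rho, n)))

# finally rho is set to 1, i.e. the coefficients above are the successive
# terms of the expansion of 1/D in the light scales
```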
Each contribution generates a linear combination of Feynman
integrals at every order of the large mass expansion. Even if such an integral can be evaluated in terms of Γ functions at general d, there are cumbersome numerators, so that it is more effective to apply an IBP reduction immediately; it is then enough to evaluate only the corresponding master integral(s). For example, for γ_0,
the resulting Feynman integrals are vacuum integrals with one non-zero mass.
In particular, these are propagators/numerators of the four families of integrals which arise
in the expansion in the four contributions γ_0,γ_1,γ_2,γ_3 for Family 1:
{ (k - l)^2 - m_t^2, k^2, l^2, p_1· k, p_1 · l, p_2 · k, p_2 · l, p_3 · k, p_3 · l} ,
{k^2 - m_t^2, l^2 - m_W^2, (l - p_1 - p_2 - p_3)^2 - m_W^2, k · l, p_1 · k, p_1· l,
p_2· k, p_2 · l, p_3· k } ,
{k^2, (k - p_1)^2, (k - p_1 - p_2)^2, (k - p_1 - p_2 - p_3)^2, l^2 - m_t^2, k · l,
p_1 · l,
p_2· l, p_3· l } ,
{ k^2, l^2 - m_W^2, (k - l)^2, (k - p_1)^2, (k - p_1 - p_2)^2, (k - p_1 - p_2 - p_3)^2,
(l - p_1 - p_2 - p_3)^2-m_W^2 , (l - p_1)^2, (l - p_1 - p_2)^2} .
The indices a_i which can be positive are i=1,2,3 for the first two auxiliary subfamilies,
i=1,…,5 for the third subfamily and i=1,2,4,5,6,7 for the fourth subfamily.
For each of the contributions to the large mass expansion of integrals of all the six families, the IBP reduction is much simpler than the reduction of the initial integrals. However, if we want to obtain many terms of the expansion,
the IBP reduction gets complicated because of the increase of the absolute values of the indices.
Still, it is necessary to be able to evaluate many terms, because we are oriented at the above-mentioned physical problem and because the mass m_t is not much bigger than the other parameters.
It turns out that the package LiteRed can help in this situation. The point is that, for all the subfamilies originating from the large mass expansion of integrals of the six families under consideration,
it is possible to construct explicit analytic rules for the reduction in all the corresponding sectors.
Such rules are similar in character to the analytic recurrence relations obtained when solving IBP relations `by hand'
in many calculations in the period between the discovery of the IBP method <cit.> and the appearance of
the first computer program to solve IBP relations.
These rules are constructed with the LiteRed command SolvejSector. It is enough to restrict the running time of the command to one minute.
After running FIRE <cit.> and taking into account these rules, the IBP reduction of the integrals
in the large mass expansion gets much faster and it becomes possible to go to higher order in 1/m_t^2.
We encounter only eight different ingredients
in all the contributions to the large mass expansion of integrals of all the six families.
Only one of them is two-loop: this is the two-loop vacuum integral with the masses {m_t,0,0}.
All the other ingredients are one-loop integrals.
Six of the ingredients are expressed in terms of gamma functions at general d: the one-loop vacuum integrals with the mass m_t or m_W, the two-loop vacuum integral with the masses {m_t,0,0}, the one-loop massless propagator integrals with the external momentum squared s,t or m_H^2. One more ingredient is the one-loop propagator integral with the two masses m_W, the external momentum squared m_H^2 and the indices {2,1}. Finally, we have the massless box integral with three end-points on the light cone and one end-point at m_H^2. Although it is clear that
results for them can be found somewhere in published papers, I evaluated these integrals using the method of
differential equations <cit.> with canonical bases <cit.>.
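For orientation, the simplest of these ingredients, the one-loop massless propagator integral with general indices, has the textbook closed form (quoted here from standard results for convenience; the overall normalization depends on the conventions for the integration measure, and it is not taken from the attached files):

∫ d^dk/[(-k^2)^a_1 (-(q-k)^2)^a_2] = i π^d/2 (-q^2)^d/2-a_1-a_2 Γ(a_1+a_2-d/2) Γ(d/2-a_1) Γ(d/2-a_2)/[Γ(a_1) Γ(a_2) Γ(d-a_1-a_2)] ,

so that, for example, the ingredient with the external momentum squared s is obtained by setting q^2=s.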
Analytic results for these `expansion master integrals' can be found in the files attached to the paper. I am also attaching
sample files illustrating the construction of terms of the large mass expansion, together with some useful auxiliary Mathematica commands.
Theoretically, any order of expansion can be evaluated within this setup in terms of the basic ingredients with coefficients
which are rational in the kinematic invariants and d. Practically, some technical complications arise.
To illustrate them, let us consider the large mass expansion of the integral G_1,…,1,0,0 of the first family.
It turns out that the contribution of the subgraph γ_2 is most complicated when going to higher orders
because the corresponding results become most cumbersome.
At order 1/(m_t^2)^n, the corresponding contribution is written as a linear combination of
G_1,1,1,1,1+2n,-2n,0,0,0 and many other integrals with smaller deviations from the corner point {1,…,1,0,0}.
Each of these integrals can be IBP-reduced to a linear combination of four expansion master integrals with
rational coefficients, which become cumbersome as n grows. For example, for 2n=24, the file with results for the contribution to the expansion
is around 10 GB. It is not easy to handle it, i.e. to expand the resulting expressions in ε, and this problem will only become more serious at 2(n+1).
One can switch to numerical evaluation at given values of kinematic invariants. For example, for this contribution of
γ_2, turning to this mode in the IBP reduction allows one to go to 2n=30 with results of less than 1 GB.
Then it is reasonable to turn to calculations within modular arithmetic and a subsequent rational reconstruction
<cit.>.
If all the kinematic parameters are set to given values then this is a reconstruction of a rational function of d.
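As a reminder of the basic step behind this technique, the following sketch (plain Python, written purely for illustration; it is not the code used in FIRE or in the cited work) reconstructs a single rational number from its image modulo a prime via the half-extended Euclidean algorithm. The full machinery applies this, together with the Chinese remainder theorem over several primes, to every coefficient of the rational function of d.

```python
# Illustrative rational reconstruction: recover p/q from a = p * q^{-1} mod m,
# assuming |p|, |q| <= floor(sqrt(m/2)) (Wang's algorithm).
import math

def rational_reconstruct(a, m):
    bound = math.isqrt(m // 2)
    r0, t0 = m, 0
    r1, t1 = a % m, 1
    while r1 > bound:                      # run the Euclidean algorithm until the
        quo = r0 // r1                     # remainder drops below the bound
        r0, r1 = r1, r0 - quo * r1
        t0, t1 = t1, t0 - quo * t1
    if t1 == 0 or abs(t1) > bound or math.gcd(r1, abs(t1)) != 1:
        raise ValueError("no reconstruction within the bound")
    return (r1, t1) if t1 > 0 else (-r1, -t1)

prime = 2**31 - 1
p, q = -22, 7                              # a toy "coefficient" to be recovered
image = (p * pow(q, -1, prime)) % prime    # its modular image
print(rational_reconstruct(image, prime))  # -> (-22, 7)
```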
In the case where several kinematic parameters are kept general, one can use the latest variant <cit.> of rational reconstruction based on balanced relations. Still, in the project on H → ggg at which this letter is aimed, the complication connected with large expressions in the results arises at higher orders of the large mass expansion, so that evaluation at fixed values can be more relevant.
On the other hand, it can happen that the order 1/(m_t^2)^12, which is already accessible, will be quite enough for qualitative estimates.
§ CONCLUSION
Limits typical of Euclidean space are much simpler than limits typical of Minkowski space in various respects.
In particular, contributions of subgraphs/regions can naturally be represented as Feynman integrals with quadratic propagators, which admit a standard IBP reduction.
The crucial point of the simplifications described is the possibility to construct explicit analytic rules
for an IBP reduction in all the sectors for all the subfamilies of integrals appearing in the large mass expansion.
Then the application of these rules by FIRE and LiteRed enables us to go to higher orders of
the expansion in 1/m_t^2.
In fact, one could try to derive simpler explicit reduction rules, as was done for massless four-loop propagators. The corresponding master integrals were evaluated in <cit.>
and, to weight twelve, in <cit.>. It turns out that, for each of the families of these integrals,
it is possible to construct explicit analytic rules with LiteRed. However, the authors of <cit.>
were not satisfied with such a solution of IBP relations and produced a `hand-guided computer program' <cit.> named Forcer, which is similar to its three-loop prototype MINCER <cit.>.
Forcer is certainly more powerful than the combination FIRE+LiteRed
for IBP reduction of massless four-loop propagators.
One can hope that it will be possible to construct a similar hand-guided computer program for the integrals appearing in the large mass expansion of integrals of the six families discussed in this paper.
I also believe that, in the case of other two-loop four-point Feynman integrals considered in a limit typical of Euclidean space, explicit reduction rules can exist and, alternatively, it can be possible to construct similar hand-guided computer programs for the IBP reduction of the integrals appearing in the expansion.
Acknowledgments.
The work was supported by the Russian Science Foundation, agreement no. 21-71-30003.
I am grateful to Marco Bonetti and Ben Ruijl for discussions.
|
http://arxiv.org/abs/2307.01855v1
|
20230704180002
|
Dynamic paramagnon-polarons in altermagnets
|
[
"Charles R. W. Steward",
"Rafael M. Fernandes",
"Joerg Schmalian"
] |
cond-mat.str-el
|
[
"cond-mat.str-el",
"cond-mat.mes-hall"
] |
Institute for Theory of Condensed Matter, Karlsruhe Institute of Technology,
76131 Karlsruhe, Germany
School of Physics and Astronomy, University of Minnesota, Minneapolis,
Minnesota 55455, USA
Institute for Theory of Condensed Matter, Karlsruhe Institute of Technology,
76131 Karlsruhe, Germany
Institute for Quantum Materials and Technologies, Karlsruhe Institute
of Technology, 76126 Karlsruhe, Germany
The combined rotational and time-reversal symmetry breakings that define an altermagnet lead to an unusual d-wave (or g-wave) magnetization order parameter, which in turn can be modeled in terms of multipolar magnetic moments.
Here, we show that such an altermagnetic order parameter couples to the dynamics of the lattice even in the absence of an external magnetic field.
This coupling is analogous to the non-dissipative Hall viscosity and describes the stress generated by a time-varying strain under broken time-reversal symmetry.
We demonstrate that this effect generates a hybridized paramagnon-polaron mode, which allows one to assess altermagnetic excitations directly from the phonon spectrum. Using a scaling analysis, we also demonstrate that the dynamic strain coupling strongly affects the altermagnetic phase boundary, but in different ways in the thermal and quantum regimes. In the ground state, we find that a hardening of the altermagnon mode leads to an extended altermagnetic ordered regime, whereas for non-zero temperatures, the softening of the phonon modes leads to increased fluctuations that lower the altermagnetic transition temperature. We also discuss the application of these results to standard ferromagnetic systems.
Dynamic paramagnon-polarons in altermagnets
Jörg Schmalian
August 1, 2023
===========================================
§ INTRODUCTION
Ferromagnets and antiferromagnets are states of broken time-reversal symmetry with finite uniform or staggered magnetic dipole moments arising from uniform or periodic configurations of the electronic spin. A rather different type of magnetic order, which has recently received significant attention <cit.>, is multipolar magnetism, which exhibits more complex patterns despite having zero net (staggered) magnetization. Time-reversal symmetry breaking in these cases is due to the formation of magnetic quadrupoles, octupoles, toroidal moments, or higher-order configurations of dipole moments whose averaged magnetization vanishes by symmetry. When the symmetry characterizing the ordered state involves a combination of rotations and time reversal, the system is known as an altermagnet <cit.>. In many of the cases studied so far, the altermagnetic order parameter corresponds to a d-wave or g-wave magnetization, which in turn can also be expressed in terms of multipolar magnetic moments <cit.>.
In dipolar magnetic materials, such as standard ferro- and antiferromagnets, changes in the lattice parameter modify the exchange interaction between the spins. Such a magneto-elastic coupling has important consequences, e.g. the emergence of hybrid magnon-acoustic phonon modes in the magnetically ordered state, called magnon-polarons <cit.>. In multipolar magnetic materials, however, different types of coupling between the magnetization M_i and the strain ε_ij are allowed <cit.>.
Formally, while the magneto-elastic effect is associated with the magnetostriction response tensor N_ijkl, defined as ε_ij = N_ijkl M_k M_l, several higher-order multipolar magnetic states, including altermagnets, have a non-zero piezomagnetic response tensor Λ_ijk, defined by ε_ij = Λ_ijk M_k <cit.>.
A direct consequence of the piezomagnetism of these altermagnets is that application of a magnetic field should lead to a (possibly symmetry-breaking) lattice distortion in the ordered state – or, alternatively, a lattice distortion should induce a non-zero magnetization <cit.>.
In this regard, multipolar order plays a role that is in some aspects analogous to nematic order <cit.>, in particular when the multipolar order is associated with a rotational symmetry of the lattice, e.g. octupolar magnetic order. The difference is of course that nematic order does not break time-reversal symmetry which, as we will see, gives rise to fundamental differences.
In this paper, we show that a subset of altermagnets (and even some standard ferromagnets) displays another non-trivial coupling between magnetic and elastic degrees of freedom. By this effect, a strain mode ε_Γ^+ that transforms as the Γ^+ irreducible representation of the relevant point group couples to the momentum operator π that is canonically conjugate to the fluctuating multipolar order parameter ϕ that characterizes the altermagnetic state:
ℋ^ dyn_ c=λ_0/2c^2∫ d^3xε_Γ^+(x)π(x).
This effect is reminiscent of the Hall viscosity response <cit.>,
which describes the stress σ_ij generated (non-dissipatively) by a time-varying strain, σ_ij = η_ijkl∂ε_kl/∂ t. In our case, it is the multipolar magnetic moment ϕ characterizing the altermagnetic state that is generated by the time-changing strain.
First, we study the impact of this dynamic strain-multipolar moment coupling on the elastic-magnetic collective modes of the paramagnetic phase, i.e. before any multipolar magnetic long-range order sets in. We demonstrate the emergence of a hybridized paramagnon-polaron mode, which opens the possibility of detecting the dispersion of the paramagnetic altermagnons (or magnons) directly from the phonon spectrum. Moreover, we also point out that this coupling can be understood as a two-mode squeezing <cit.> of the elastic and magnetic multipolar modes.
Second, we investigate how the altermagnetic-to-paramagnetic transition is affected by the dynamic strain coupling. We consider both thermal and quantum fluctuations – indeed, by tuning appropriate parameters, it is in principle possible to reach a quantum critical point <cit.> where the altermagnetic transition temperature vanishes, or a quantum disordered regime where altermagnetic fluctuations affect the ground state without long-range order. Surprisingly, we find distinct effects of the dynamic coupling on thermal and quantum fluctuations. In the important regime where the bare magnon velocity is larger than the phonon velocities, thermal fluctuations are boosted by the altermagnon-phonon coupling, suppressing the ordering temperature. On the other hand, quantum fluctuations are suppressed by the same coupling, leading to an increased regime of stable altermagnetic ground states. These behaviors are illustrated by the phase diagrams in Fig. <ref>.
This paper is organized as follows: in Section <ref>, we define our multipolar magnetic order parameter and its coupling to strain. In Section <ref> we construct a ϕ^4 theory for the altermagnetic degrees of freedom and write down the elastic theory for the crystal in question. Having done so, we are then able to calculate an effective field theory for the altermagnons and derive the altermagnon and phonon dispersions and spectral functions; this is presented in Section <ref>. In Section <ref> we then perform an RG (renormalization group) calculation for the altermagnon propagator in the crossover regime, yielding the phase diagram for the system. Section <ref> contains the summary and conclusions, including the possible extension of our results to certain ferromagnets.
§ DYNAMIC COUPLING BETWEEN STRAIN AND MULTIPOLAR ORDER
We start by defining the multipolar moment of an altermagnet. Consider a magnetic order parameter ϕ transforming under an irreducible
representation Γ^- which, by construction, is odd under time
reversal, as indicated here by the superscript “-”. Let Γ^-_J_α be the representation of the
magnetic field component H_α, i.e. the representation according
to which magnetic dipoles transform under the point group operations. If Γ^-=Γ^-_J_α
we have the usual ferromagnetic order parameter
ϕ^α(x)∼∑_abc_a^†(x)J_ab^αc_b(x).
where J^α is an angular momentum operator and a and b stand for spin and orbital indices. c_a^†(x) and c_a(x) are the corresponding electron creation and annihilation operators. In the simplest
case J^α is one of the Pauli matrices; in general it also includes
an orbital moment.
Other representations that are odd under time reversal but different than Γ^-_J_α form higher order multipolar magnetic order
parameters that behave like (see also Ref. <cit.>)
ϕ(x)∼∑_ab∫ d^3x'f(x')c_a^†(x+x'/2)J_ab^μc_b(x-x'/2),
with some form factor f(x). Clearly, f(x) = δ(x) recovers the ferromagnetic order parameter in Eq. (<ref>). The same is true if
f(x) transforms trivially under point-group operations.
However, other form factors that transform non-trivially give rise to higher-order multipoles <cit.>. In particular, when f(x) corresponds to d-wave, g-wave or i-wave form factors, these order parameters describe an altermagnet <cit.>. The underlying
distribution of the spin (or orbital moment) density in the unit cell is most naturally
a consequence of multiple atoms per unit cell, even in situations where there is only
one electronic band crossing the Fermi surface. In what follows we
focus on these types of multipolar order parameters.
In order to motivate the coupling of strain and multipolar magnetic order, we briefly summarize the established case of strain coupling of a nematic order parameter η. Suppose η transforms under a representation Γ^+, which is, by definition of a nematic state, time-reversal even. Then it couples to strain
in the Hamiltonian ℋ via the nemato-elastic coupling
ℋ^ n.e._ c=λ_ n.e.∫ d^3xε_Γ^+(x)η(x).
ε_Γ^+ is the combination of strain tensor elements that transforms like Γ^+; see below for examples. This coupling gives rise to a structural distortion at a nematic transition. Even without nematic long-range order one can relate the nematic susceptibility and the elastic constants <cit.>. The latter get softened whenever there is a large nematic susceptibility and a sizeable coupling constant λ_ n.e..
A coupling of the type Eq. (<ref>) is not allowed for multipolar order ϕ, even if it transforms like the elastic strain tensor under the symmetry operations of the crystal. The issue is that strain is even under time reversal and will not directly couple to magnetism. One way to resolve this is by adding an external magnetic field H_α. Then a symmetry-allowed coupling of the kind
ℋ^H_ c=∑_i∑_α=x,y,zλ^H_α, iH_α∫ d^3xε_Γ^+_i(x)ϕ(x)
emerges, provided Γ^-∈Γ^-_J_α⊗Γ^+_i, i.e. the product representation of field and strain contains Γ^-. This is just another way of expressing the piezomagnetic response of a multipolar magnet, ε_ij = Λ_ijk M_k, which makes explicit the proportionality between the relevant piezomagnetic tensor elements Λ_ijk and the multipolar magnetic order parameter ϕ. Couplings of the type Eq. (<ref>) and (<ref>) are static and thus occur for generic, non-dynamic order parameter configurations <cit.>.
If one, however, considers the dynamics of the order parameter, one can identify another direct coupling to strain that does not require a finite magnetic field. First, note that an order parameter ϕ(x) has under rather generic dynamics a conjugated momentum π(x). In the quantum regime, this implies the canonical commutation relations [ϕ(x),π(x')]=iħδ(x-x'), while in the classical regime the dynamics follows from the corresponding Poisson brackets. Since ϕ is odd under time reversal, π is even and hence transforms as Γ^+. This implies that strain and multipolar order couple like the one given in Eq. (<ref>), provided that there are combinations of the strain tensor ε_ij that transform as Γ^+. Expressed in terms of an action, this coupling takes, after eliminating the conjugated momentum π in favor of the time derivative of the order parameter ∂_τϕ, the form
𝒮^ dyn_ c=λ_0/2∫_0^β dτ∫ d^3xε_Γ^+(x,τ)∂_τϕ(x,τ).
Here, λ_0 is a coupling constant, β=1/T the inverse temperature, and τ the imaginary time. The velocity c appearing in Eq. (<ref>) will be properly defined below. As we will see below, λ_0 is dimensionless. Hence, for systems with strong coupling to the lattice, a natural value of the coupling constant is λ_0≈ 1.
In contrast to the piezomagnetic coupling of Eq. (<ref>), the interaction Eq. (<ref>) or, equivalently, Eq. (<ref>) does not affect static field configurations. However, it strongly mixes the dynamics of lattice and magnetic degrees of freedom and is present even in the absence of an external magnetic field. As we will see, it opens up the possibility to observe dynamical multipolar magnetic fluctuations via Raman <cit.> or neutron scattering <cit.>, even in the magnetically disordered state. We further motivate such a coupling in Appendix <ref>. The analysis of the coupling in Eq. (<ref>) is the content of the rest of this paper.
The formulation in Eq. (<ref>) allows us to make the aforementioned connection to the Hall viscosity explicit. The relationship between dynamic strain and stress is given by <cit.>
σ_ij=C_ijklε_kl-η_ijkl∂_t ε_kl,
with the usual elastic constants C_ijkl and
the viscosity tensor η_ijkl. The second term takes into account that deformations performed
at a finite speed are dissipative and produce heat.
Indeed, elements of the viscosity tensor that are symmetric under the
exchange of ij ⟷ kl contribute
to the entropy production. However, antisymmetric contributions are non-dissipative.
Due to the Onsager reciprocity relation such antisymmetric components
occur as a consequence of broken time-reversal symmetry. Let us consider a system without altermagnetic fluctuations but in an external magnetic field. In the
presence of a finite magnetic field it is allowed for the
antisymmetric components to be non-zero. An example, relevant to the point group D_4h, which we discuss in detail below, is the Hall viscosity
η^H≡η_xyxx(B_z)=-η_xxxy(-B_z)=⋯.
The Hall viscosity contribution in the action that yields the equation of motion Eq. (<ref>) is then
S_ Hall = -1/2η^H∫ dτ d^3x[(ε_xx-ε_yy) 2∂_τε_xy - (ε_xy+ε_yx)(∂_τε_xx-∂_τε_yy)].
Comparing this with Eq. (<ref>) shows that η^H(B_z)∂_τε_xy plays the same role as ∂_τϕ if we consider Γ^+=B_1g. Since η^H(B_z)=-η^H(-B_z) both are odd under time reversal and both transform the same way under
point group operations. The fluctuating field due to the altermagnetic order parameter suffices, which is why
Eq. (<ref>) does not require an external magnetic field. The dynamics of the order parameter induces non-dissipative stress, in analogy to the Hall viscosity response.
To proceed, we consider specific crystalline point groups and altermagnetic order parameters. Here for simplicity we first focus on layered but three-dimensional systems.
Let us consider a tetragonal system with point group D_4h <cit.>. Magnetic order that preserves lattice translations should transform under one of the five irreducible representations of the point group that are odd under time reversal (see also Ref. <cit.>). Of those, A_2g^- and E_g^- correspond to ferromagnetic states with magnetization along the z-axis and in the x-y plane, respectively. Those form magnetic dipoles, i.e. usual magnetic order. In addition, one can form higher order moments that transform like A_1g^-, B_1g^- or B_2g^-.
Along the k_z=0 plane, the form factors of Eq. (<ref>) in momentum space are f_B_1g^-(k)=sin k_xsin k_y,
f_B_2g^-(k)=cos k_x-cos k_y,
and f_A_1g^-(k)=f_B_1g^-(k)f_B_2g^-(k). In all cases, the spins point out of the plane.
Note that the A_1g^- state corresponds to a magnetic dotriacontapole, while the other
two form magnetic octupoles. They can be understood as charge
multipoles, characterized by the form factor f(x),
which modulate a pseudo-vector s^μ that by itself transforms like
a magnetic dipole <cit.>. Thus, if f(x)
describes a quadrupolar (hexadecapolar) distribution, i.e. with angular moment l=2 (l=4),
then ϕ(x) corresponds to an octupolar (dotriacontapole)
magnetic moment with j=l+1=3 (j=l+1=5).
To clarify our notation, the subscript of f_Γ^-(k) refers to the irreducible representation of the order parameter, not of the form factor function f(k) itself. From Eq. (<ref>) follows Γ^-= Γ_J_α^-⊗Γ_f, where Γ_f is the representation of the form factor. The above results follow with Γ_J_z=A_2g^-.
All three states of multipolar order in D_4h are single component states and can be described by an Ising order parameter ϕ <cit.>. To be specific, we assume below that ϕ transforms according to B_1g^-, which is the altermagnetic order parameter proposed for MnF_2 <cit.>; the modifications for the other symmetries are straightforward.
The coupling to strain given in Eq. (<ref>) is then given as
H^ dyn_ c=λ_0/2c^2 ∫ d^3xε_B_1g(x)π(x),
where ε_B_1g(x)=ε_xx(x)-ε_yy(x).
To give another example, consider the octahedral group O_h.
Dipolar magnetic order in this group transforms as the three-dimensional
irreducible representation T_1g^-, amounting to ferromagnetic order
along the crystalline axes. In addition, there are four multipolar
order parameters that do not break inversion symmetry: A_1g^-, A_2g^-, E_g^-, and T_2g^-. Strain transforms either as A_1g^+, amounting to volume changes ϵ_xx+ϵ_yy+ϵ_zz;
E_g^+, with doublet ( 2ϵ_zz-ϵ_xx-ϵ_yy, ϵ_xx-ϵ_yy);
and T_2g^+ with triplet ( ϵ_xy,ϵ_xz, ϵ_yz).
Thus, if the order parameter transforms like A_2g^-, there exists
no strain field that can dynamically couple via the conjugated momentum, while a
field induced coupling to T_2g^+-strain via Eq. (<ref>) is allowed.
Both types of couplings are allowed for the other two order parameter options.
The group O_h is relevant for PrM_2Al_20 systems
with transition metal M where multipolar order has been discussed extensively <cit.>. However, the candidate state for multipolar magnetic order is A_2g^- (ferro-octupolar order) and no strain combination transforms as this irreducible representation.
Hence, the coupling, Eq. (<ref>) is not realized here.
§ MANIFESTATIONS OF THE DYNAMIC STRAIN COUPLING ON THE COLLECTIVE MODES
The dynamic coupling in Eq. (<ref>) between strain and a multipolar magnetic order parameter is particularly interesting in the regime where quantum fluctuations are strong, i.e. in the quantum critical regime where the ordering temperature has been suppressed to zero. In this regime we can describe the multipolar order in terms of a long-wavelength collective field theory. We will first explore the implications of the interaction Eq. (<ref>) in the case of a single-component order parameter with point group D_4h. In the next subsection we will briefly summarize the theory of multipolar order and fluctuations without the dynamic coupling to strain. Then we will add the strain coupling and analyze the resulting coupled problem within a renormalized Gaussian approach. While the latter is justified by the fact that we operate at the upper critical dimension, we go beyond the Gaussian theory and include critical fluctuations using a one-loop renormalization group approach in the next section. This method is particularly suitable to determine the impact of the dynamic elastic coupling on the phase boundary of multipolar magnetism.
§.§ Field theory for coupled multipolar and elastic degrees of freedom
We briefly summarize the collective field theory of multipolar order in the absence of coupling to elastic degrees of freedom.
We consider an insulating system and analyze the regime near its quantum critical point, i.e. the regime where the quantum dynamics of the order parameter is most important. The single-component system is described in terms of an Ising order parameter and governed by the action
𝒮_ϕ=1/2∫_x ϕ(x)(r_0-c^-2∂_τ^2-∇^2)ϕ(x)+u∫_x ϕ(x)^4.
Here x=(x,τ) combines the spatial coordinates and the imaginary time, while ∫_x⋯=∫ d^3x dτ⋯. The parameter r_0, which is the mass term for the altermagnons, tunes the system through the quantum critical point. c is the altermagnon velocity, which is of order of the typical magnetic interaction J times the lattice constant. The coefficient u penalizes large-amplitude fluctuations and bounds the action.
Before proceeding, we note that the situation is slightly different for two-component multipolar order parameters, like the E_g^- state of O_h (see also Ref. <cit.>). In this case, the order parameter is governed by the action
S_ϕ = 1/2∑_i=1,2∫_x ϕ_i(x)(r_0-c^-2∂_τ^2-∇^2)ϕ_i(x)
+ u∫_x (ϕ_1(x)^2+ϕ_2(x)^2)^2
+ v∫_x (ϕ_1(x)^2+ϕ_2(x)^2)^3
+ w∫_x ϕ_1(x)^2 (ϕ_1(x)^2-3ϕ_2(x)^2)^2.
This corresponds to the six-state clock model, and as such the ground state is six-fold degenerate with the relative amplitude between ϕ_1 and ϕ_2 obtained by minimizing the last term in the action above. In the remainder of the paper, we will focus on the case of an Ising-like magnetic multipolar order parameter.
At long wavelengths, we can write the elastic action in terms of longitudinal and transverse phonon modes <cit.>
𝒮_ε=-1/2∑_ν=L,T∫_x u_ν(x)(∂_τ^2+v_ν^2∇^2) u_ν(x).
Here ν=L and T corresponds to longitudinal and transverse phonons with displacement u_L and u_T and velocity v_L and v_T, respectively. The velocities depend upon the elastic constants of the system and, for the tetragonal crystal under consideration, on the polar angle θ of the momentum.
v_T^2 = 1/4 (c_11-c_12+2c_44
+ (-c_11+c_12+2c_44)cos(2 θ)),
v_L^2 = 1/2 (c_11+c_44+(-c_11+c_44)cos(2θ)).
For the system to be stable, it must follow that v_L> v_T. Note that, for three-dimensional crystals, there is an additional transverse mode. Since we are interested in the point group operations relevant for the B_1g^- order parameter ϕ, the crucial lattice displacements are in-plane. Therefore, hereafter we focus on a single transverse mode with predominant in-plane polarization.
Finally, using the fact that ε_ij=(∂_i u_j+∂_j u_i)/2, we can rewrite the dynamic coupling in Eq.(<ref>) in terms of the longitudinal and transverse displacements. After Fourier transformation to momentum and Matsubara frequencies, we find:
𝒮^ dyn_ c = λ_0∫_qω_n/|q|ϕ(q)[(q_x^2-q^2_y ) u_L(-q) .
- . 2q_xq_y u_T(-q)].
Here ω_n=2π n T are Matsubara frequencies and q=(q,ω_n).
The coupling is anisotropic, since at q=(q_x,0) and q=(0,q_y) only the longitudinal phonon couples to the multipolar order parameter. On the other hand, for q=(q_x,± q_x), the coupling is solely to the transverse phonons. This symmetry-selective interaction can also be used to determine the nature of the multipolar magnetism of unknown symmetry. If one studies the spectrum of a phonon of well-defined symmetry and observes traces of a magnon mode that vanish along specific high symmetry directions, one can deduce the symmetry of the altermagnon.
§.§ Spectral functions and hybridized collective modes
For d=3 and T=0, the quartic interaction u in Eq. (<ref>) is marginally irrelevant. Below we will account for the corresponding logarithmic divergencies using a renormalization group approach. Anticipating the result of this analysis, we account for the effects of the dynamic coupling on the collective elastic-altermagnetic modes by considering an effective Gaussian theory with r_0 replaced by the renormalized mass r. This allows us to analyze the spectral properties that follow from the coupling in Eq. (<ref>) or, equivalently, Eq. (<ref>).
This is accomplished by integrating out one set of degrees of freedom and then calculating the propagator for the remaining one.
Starting with the altermagnetic propagator, we note that in the absence of the dynamic coupling, the collective modes in the disordered phase are gapped altermagnons (i.e. alter-paramagnons) with velocity c, as given by Eq. (<ref>). After integrating out the two phonon modes and performing the analytic continuation to real frequencies iω_n →ω +i0^+, we obtain the Gaussian renormalized altermagnon propagator:
χ(q,ω)=1/r+q^2-ω^2/c^2Δ(q,ω),
where
Δ(q,ω)=1-c^2λ_0^2/q^2((q_x^2-q^2_y)^2/ω^2-v_L^2q^2+4q_x^2q^2_y/ω^2-v_T^2q^2).
The poles of this propagator in the case c>v_L,T are shown in Fig. <ref>, where we considered q_z=0, such that θ=π/2 and the phonon velocities are constant. This regime is relevant whenever electronic energy scales are larger than the lattice ones. For completeness, we discuss the opposite regime below.
We see in Fig. <ref> that the modes which initially correspond to phonons also appear in the altermagnetic propagator. This is a direct consequence of the hybridization promoted by the dynamic coupling. As such, the phonon and alter-paramagnons are not independent modes, but hybridize into new collective modes that we dub paramagnon-polarons, inspired by the nomenclature used in Ref. <cit.>. Fig. <ref> shows that, as the dynamic coupling increases, the phonon-like modes are softened whereas the gapped altermagnon-like mode is hardened. The coupling is clearly anisotropic, as only one phonon mode is softened along each of the high-symmetry directions considered in Fig. <ref> – namely, q_x = 0 (positive horizontal axes) and q_x = q_y (negative horizontal axes). Away from the critical point (r>0), the altermagnon-like mode acquires a gap, whereas at the critical point, three gapless modes would emerge.
We emphasize that this result is valid in the regime c>v_L/T. In the other regime c<v_L/T we find instead a level repulsion and a greatly diminished softening of the phonon mode, as illustrated in Fig. <ref>. Such a regime is applicable in materials with a very small magnetic interaction J. The regime c>v_L/T is more common and forms the focus of this paper, although we will make reference to both regimes.
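A quick way to visualize these hybridized branches is to solve the pole condition of the propagator given above numerically. The short sketch below (Python/NumPy; all parameter values are illustrative choices of ours, not fit to any material) clears the two phonon denominators, which turns the pole condition into a cubic polynomial in ω^2, and returns the three paramagnon-polaron energies in the q_z=0 plane.

```python
# Sketch: paramagnon-polaron mode energies from the poles of chi(q, omega)
# in the q_z = 0 plane; parameter values are purely illustrative.
import numpy as np

def mode_energies(qx, qy, r=0.1, c=1.0, vL=0.5, vT=0.3, lam=0.5):
    q2 = qx**2 + qy**2
    x = np.poly1d([1.0, 0.0])                 # x stands for omega^2
    dL = np.poly1d([1.0, -vL**2 * q2])        # x - vL^2 q^2
    dT = np.poly1d([1.0, -vT**2 * q2])        # x - vT^2 q^2
    bare = np.poly1d([-1.0 / c**2, r + q2])   # r + q^2 - x / c^2
    # chi^{-1} = 0 multiplied by (x - vL^2 q^2)(x - vT^2 q^2):
    P = bare * dL * dT + (lam**2 / q2) * x * ((qx**2 - qy**2)**2 * dT
                                              + 4.0 * qx**2 * qy**2 * dL)
    roots = np.roots(P.coeffs)
    return np.sort(np.sqrt(roots.real[roots.real > 0]))

# along q_x = q_y the longitudinal phonon decouples and one root stays at vL*q
for q in (0.2, 0.5, 1.0):
    print(q, mode_energies(q / np.sqrt(2), q / np.sqrt(2)))
```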
After calculating all three propagators χ, we can obtain the corresponding spectral functions by computing Im(χ) from the propagators:
χ_T(q,ω)=1/Δ_T(q,ω)-ζ_T(q,ω) ω^2+v_T^2q^2,
where χ_T is the propagator for the transverse phonon with renormalized coefficient
ζ_T(q,ω)=1+λ_0^2/q^2q_x^2q_y^2/r_0+q^2-ω^2/c^2,
of the dynamic term, while Δ_T(q,ω) corresponds to coupling of the two phonons away from both high symmetry directions
Δ_T(q,ω)=(4λ_0^2ω^2q_xq_y(q_x^2-q_y^2)/q^2(r_0+q^2-ω^2/c^2))^2/-ζ_L(q,ω) ω^2+v_L^2q^2,
with
ζ_L(q,ω)=1+ λ_0^2/q^2(q_x^2-q_y^2)^2/r_0+q^2-ω^2/c^2.
For the longitudinal phonon propagator we have
χ_L(q,ω)=1/Δ_L(q,ω)-ζ_L(q,ω)ω^2+v_L^2q^2,
where
Δ_L(q,ω)=(4λ_0^2ω^2q_xq_y(q_x^2-q_y^2)/q^2(r_0+q^2-ω^2/c^2))^2/-ζ_T(q,ω)ω^2+v_T^2q^2.
To model the finite lifetimes arising from damping or other processes, we add a small imaginary part to the frequency ω on the real axis. Fig. <ref> shows the spectral functions as a density plot. For the altermagnetic propagator we see that along each high-symmetry direction, only one phonon-like mode has a non-zero spectral weight, whereas the altermagnon-like mode is symmetric. This effect mirrors the behavior of the poles of the altermagnetic propagator discussed above in Fig. <ref> and is a direct consequence of Eq. <ref>. For q_x = ± q_y, the altermagnon only couples to the transverse phonon, whereas for q_x = 0 or q_y=0, the altermagnon couples to the longitudinal mode, making this mode visible.
The anisotropy of the dynamic coupling is also manifested in the gapped altermagnon-like mode when we plot the spectrum of the longitudinal phonon. Because this phonon mode does not hybridize with the altermagnon along q_x=q_y, it can only be observed in certain directions. Similarly, in the spectrum of the transverse mode, the gapped altermagnon-like mode has a vanishing spectral weight along the q_x = 0 direction. In either case, we also see that along the directions where the altermagnon-phonon coupling is non-zero, the gapless phonon-like mode softens.
These results show that even though directly measuring an altermagnon is a nontrivial task, by measuring the phonon spectrum at finite momentum, even away from the critical point, it is possible to assess the altermagnon mode – provided the measurement is along a specific momentum-space direction. Conversely, by measuring the phonon spectrum along high-symmetry directions and identifying which ones display a gapped mode allows one to obtain the symmetry of the altermagnetic order parameter.
In order to further illustrate the anisotropic nature of the coupling between the altermagnetic and phonon modes, we show a density plot of the spectral weight of each branch of the altermagnetic spectral function along the entire q_x, q_y plane in Fig. <ref>. We see that the weight of the longitudinal phonon vanishes along q_x=± q_y, as along these directions the phonon decouples and no longer contributes to the altermagnon propagator. Analogously, the weight of the transverse phonon no longer contributes along q_x,y=0, such that the weight of this branch vanishes along these directions. While the spectral weight of the altermagnon branch of course never drops to zero, it becomes fourfold anisotropic.
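The same anisotropy can be made explicit numerically: evaluating Im χ of the propagator above with a small broadening along the two high-symmetry directions shows only two peaks in each case, the altermagnon-like branch plus the one phonon that actually couples there. The sketch below (Python/NumPy/SciPy) uses the same illustrative parameters as above; none of the numbers are taken from the paper, and the overall sign of Im χ depends on the convention for the analytic continuation, so only its magnitude is used.

```python
# Sketch: magnitude of Im chi along the two high-symmetry directions;
# all parameter values are illustrative only.
import numpy as np
from scipy.signal import find_peaks

def chi(qx, qy, w, r=0.1, c=1.0, vL=0.5, vT=0.3, lam=0.5, eta=0.01):
    q2 = qx**2 + qy**2
    wc = w + 1j * eta                      # small broadening, finite lifetime
    delta = 1.0 - (c**2 * lam**2 / q2) * ((qx**2 - qy**2)**2 / (wc**2 - vL**2 * q2)
                                          + 4.0 * qx**2 * qy**2 / (wc**2 - vT**2 * q2))
    return 1.0 / (r + q2 - wc**2 / c**2 * delta)

w = np.linspace(0.0, 1.5, 4000)
q = 0.6
A_diag = np.abs(chi(q / np.sqrt(2), q / np.sqrt(2), w).imag) / np.pi   # q_x = q_y
A_axis = np.abs(chi(q, 0.0, w).imag) / np.pi                           # q_y = 0

# only the transverse phonon + altermagnon appear along q_x = q_y,
# only the longitudinal phonon + altermagnon appear along q_y = 0
print("peaks for q_x=q_y:", w[find_peaks(A_diag, prominence=1.0)[0]])
print("peaks for q_y=0  :", w[find_peaks(A_axis, prominence=1.0)[0]])
```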
The coupling between the canonical momentum of one degree of freedom and the coordinate of another one is at the heart of our discussion. As we considered effective Gaussian theories, interesting insight can be gained by considering a simple Hamiltonian that describes two coupled oscillators in which the displacement of one is coupled to the momentum of the other:
H=∑_i=1,2(p_i^2/2m_i+m_iω_i^2/2x_i^2)+λ/2p_1x_2.
As usual, the problem can be diagonalized using a 4× 4 symplectic matrix S.
The transformation (x_1,x_2,p_1,p_2)→ S^-1(x_1,x_2,p_1,p_2)
can also be cast as a unitary transformation of the operators, such
as x_i→ Ux_iU^-1 where
U=e^i(ap_1p_2+bx_1x_2),
where the coefficients a and b can be expressed in terms of the m_i, ω_i, and the coupling λ. Applied to the vacuum, such an operator creates two-mode squeezed
states of the two coupled oscillators <cit.>. The two-mode squeezing occurs regardless of whether the magnetic system is gapped or not. It reflects the fact that the momentum-coordinate coupling strongly changes the relative fluctuations of the involved degrees of freedom, where the softening of one mode enforces the hardening of the other.
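To see the essence of this statement in the simplest setting, one can diagonalize the toy Hamiltonian above numerically through its linearized Hamilton equations; for stable parameters the eigenvalues of the resulting 4×4 matrix come in pairs ±iΩ_j and give the two hybridized frequencies. The sketch below (Python/NumPy, with arbitrarily chosen parameter values) shows the two frequencies moving toward each other as λ grows, one mode hardening while the other softens; only moderate couplings, for which all eigenvalues remain purely imaginary, are shown.

```python
# Sketch: normal-mode frequencies of two oscillators coupled through
# (lambda/2) * p1 * x2, from the Hamilton equations for z = (x1, x2, p1, p2).
import numpy as np

def normal_modes(m1=1.0, m2=1.0, w1=1.0, w2=0.6, lam=0.0):
    M = np.array([
        [0.0,          lam / 2.0,  1.0 / m1,   0.0     ],  # dx1/dt
        [0.0,          0.0,        0.0,        1.0 / m2],  # dx2/dt
        [-m1 * w1**2,  0.0,        0.0,        0.0     ],  # dp1/dt
        [0.0,         -m2 * w2**2, -lam / 2.0, 0.0     ],  # dp2/dt
    ])
    ev = np.linalg.eigvals(M)                 # come in pairs +/- i*Omega_j
    return np.unique(np.round(np.abs(ev.imag), 10))

for lam in (0.0, 0.3, 0.6):
    print(lam, normal_modes(lam=lam))   # the two frequencies approach each other
```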
§ IMPACT OF THE DYNAMIC STRAIN COUPLING ON THE ALTERMAGNETIC PHASE DIAGRAM
Having established how the collective altermagnetic and phonon modes are hybridized by the dynamic strain coupling, we now discuss how the altermagnetic phase transition is impacted by this coupling. In order to determine the phase diagram, we perform a one-loop renormalization group (RG) calculation for the quartic coefficient u and the mass term coefficient r. We integrate out the phonon modes and obtain
S=1/2∫_q ϕ(q)χ^-1(q)ϕ(-q)+u∫ϕ(x)^4,
with the phonon-renormalized inverse altermagnon propagator on the real frequency axis written in Eq. <ref>. The form of coupling Eq. <ref> implies rather different behaviors in the regimes where ω is small or large compared to c q. Considering first ω≫ v_T,L|q|, we have
Δ(q,ω) ≈ 1-c^2λ_0^2q^2/ω^2,
such that the coupling renormalizes the coefficient of the q^2 term of the altermagnetic propagator. Since the q^2 coefficient is proportional to the inverse squared correlation length, its suppression implies an enhancement of the spatial fluctuations mediated by the dynamic strain coupling. On the other hand, when ω≪ v_L,T| q| we find
Δ(q,ω) ≈ 1+c^2λ_0^2/v_T^2, q_x=q_y ,
Δ(q,ω) ≈ 1+c^2λ_0^2/v_L^2, q_x=0 .
In this regime, it is the altermagnon velocity c that is renormalized downwards by the coupling, which suppresses quantum fluctuations.
As our goal is to calculate the phase diagram at nonzero temperatures, we employ the crossover method outlined in detail in Ref. <cit.>. We start from the flow equations given by a perturbative RG calculation, which is controlled at the upper critical dimension d=3 and includes the logarithmic corrections beyond mean field theory.
dr/dl = 2r+3ud/dl∫_q^>χ_0(q)-3urd/dl∫_q^>χ_0^2(q),
du/dl = -9u^2d/dl∫_q^>χ_0^2(q).
where χ_0 is the propagator (Eq. <ref>) taken at r=0. To derive these expressions, we have integrated out the short-wavelength modes, leaving us with a set of shell integrals over Λ e^-l<|q|<Λ for some momentum cutoff Λ, with l a parameter used to vary length scales.
d/dl∫_q^> χ_0^m(q) = d/dl∫_Λ e^-l<|q|<Λd^3q/(2π)^3T∑_n=-∞^∞χ_0^m(q)
= Λ^3/(2π)^3T∑_n=-∞^∞∫_0^2π∫^π_0 dθ dϕsin(θ) χ_0^m(Λ),
where we suppress the dependency of χ_0 on ω_n, ϕ and θ. The flow equations at one-loop level are then
dm/dl = 2m+gΛ cF_1(T)-gmc^3Λ^3 F_2(T),
dg/dl = -3g^2Λ^3c^3F_2(T),
dT/dl = T .
where we defined the dimensionless quantities
m=r/Λ^2, g=3uc/(2π)^3 ,
and
F_m(T ) =T∑_n=-∞^∞∫_0^2π∫^π_0 dθ dϕsin(θ) χ_0^m(Λ,ω_n,θ,ϕ).
Note that T is the running temperature, whereas the physical temperature corresponds to T(l=0).
Taking the limit T→ 0 and solving the flow equations naturally yields a phase diagram with a quantum critical point.
We can define the distance from the critical point <cit.>
t=m+cΛ/2F_1(0)g.
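A minimal numerical sketch of this crossover procedure is given below. It integrates the flow equations for (m, g) together with dT/dl = T and locates the T=0 critical value of m_0 by bisecting on the fate of the flow. The forms of F_1(T) and F_2(T) are toy stand-ins (the actual ones require the Matsubara shell integrals of the anisotropic propagator), Λ = c = 1, and all names are ours:

```python
import numpy as np
from scipy.integrate import solve_ivp

Lam = c = 1.0
# Toy stand-ins for the shell integrals F_1(T), F_2(T) of the anisotropic propagator.
F1 = lambda T: 0.5 * (1.0 + 2.0 * T)
F2 = lambda T: 0.25 / (1.0 + T)

def rhs(l, y, T0):
    m, g = y
    T = T0 * np.exp(l)                       # running temperature, dT/dl = T
    dm = 2 * m + g * Lam * c * F1(T) - g * m * c**3 * Lam**3 * F2(T)
    dg = -3 * g**2 * Lam**3 * c**3 * F2(T)
    return [dm, dg]

def ordered(m0, g0=0.1, T0=1e-3, l_max=25.0):
    """True if m flows to large negative values (ordered side of the separatrix)."""
    sol = solve_ivp(rhs, [0.0, l_max], [m0, g0], args=(T0,), rtol=1e-8)
    return sol.y[0, -1] < 0

def critical_m0(g0=0.1, T0=1e-3):
    lo, hi = -1.0, 1.0                       # bracket assumed to contain the boundary
    for _ in range(60):
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if ordered(mid, g0, T0) else (lo, mid)
    return 0.5 * (lo + hi)

print(critical_m0())
```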
We plot the phase diagrams at T=0 in Fig. <ref> for the two regimes. For the case c > v_L,T, we used the parametrization:
v_T(θ)/c = 1/6√(9/2+7/2cos 2θ) ,
v_L(θ)/c = 1/3√(3/2+1/2cos 2θ).
As for the case c<v_L,T, we considered the following parametrization:
v_T(θ)/c = 1/6√(450 + 350cos 2θ),
v_L(θ)/c = 1/3√(150+50cos 2θ).
The key result is that increasing the dynamic strain coupling constant λ_0 expands the regime with long-range altermagnetic order. This is consistent with what we found above, namely that, in the quantum regime, λ_0 renormalizes c and suppresses quantum fluctuations. Thus, larger values of the coupling lead to a larger ordered regime characterized by a smaller |m_0| due to the hardening of the hybridized altermagnon-like mode. Hence, the dynamic coupling to phonons suppresses quantum altermagnetic fluctuations, reinforcing altermagnetic order. This is the case for both regimes c<v_L,T and c>v_L,T.
We can also solve the flow equations for the case of small but finite T, which yields an expression for the transition temperature T_c. This crossover regime is relevant when g_0=g(l=0)≪ 1 and T≪ cΛ. In this regime, to first-order in g_0, it is sufficient to simply use the T=0 solution for g(l) <cit.>
g(l)=g_0/1+3c^3Λ^3F_2(0)g_0l.
Thermal fluctuations only enter at 𝒪(g_0^2). We now turn our attention to m, and consider the ansatz
m=m_0e^ξ(l)h(l).
This ansatz solves our flow equation when
ξ(l) =2l-Λ^3c^3∫_0^lg(l')F_2(l')dl',
h(l) =1+Λ c/m_0∫_0^le^-ξ(l')F_1(l')g(l')dl'.
We can then substitute these expressions in the flow equation for m and integrate by parts to find an expression for the transition temperature T_c to first order in g_0. At the critical point, m_0 takes its critical value m_0^c by definition, and we can also set l→∞. We find:
m^c_0+Λ cg_0/2∫_0^2π∫_0^πdθ dϕsin(θ)∑_i=1^3A_i/2E_i+2g_0Λ cT_c^2∫_0^2π∫_0^πdθ dϕsin(θ)∑_i=1^3A_i(π^2/6E_i^3+1/E_i^2T_clog(1-e^-E_i/T_c)-1/E_i^3 Li_2(e^-E_i/T_c))=0.
where E_i is the energy for each mode and A_i the corresponding weight for the three branches, determined via
χ(q,ω)=∑_ic^2A_i(q)/(ω^2-E_i(q)^2)
and Li_2(z) is the poly-logarithm.
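As noted next, this condition must be handled numerically. A toy sketch is given below for a single, angle-independent branch (so the angular integral reduces to 4π), with Li_2 evaluated by its power series; the numerical values of m_0, g_0 and E are arbitrary illustrations, and all names are ours:

```python
import numpy as np
from scipy.optimize import brentq

def li2(x, kmax=200):
    # Dilogarithm via its power series; valid here since 0 < x = exp(-E/T) < 1.
    k = np.arange(1, kmax + 1)
    return np.sum(x**k / k**2)

def tc_equation(Tc, m0, g0, E, A=1.0, Lam=1.0, c=1.0):
    """Left-hand side of the T_c condition for a single isotropic branch (angular integral = 4*pi)."""
    ang = 4.0 * np.pi
    zero_T = m0 + 0.5 * Lam * c * g0 * ang * A / (2.0 * E)
    x = np.exp(-E / Tc)
    thermal = (np.pi**2 / (6.0 * E**3)
               + np.log1p(-x) / (E**2 * Tc)
               - li2(x) / E**3)
    return zero_T + 2.0 * g0 * Lam * c * Tc**2 * ang * A * thermal

# Pick m0 slightly inside the ordered region and solve for T_c by bracketing.
m0, g0, E = -0.2, 0.05, 1.0
Tc = brentq(tc_equation, 1e-4, 5.0, args=(m0, g0, E))
print(f"T_c = {Tc:.4f}")
```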
Since this equation cannot be solved analytically for T_c, we resort to numerical methods to find the solution. We plot the obtained phase diagram in Fig. <ref>, using the parametrization for the velocities of Eq. (<ref>), i.e. c > v_L,T. We see that, in general, increasing the dynamic coupling to phonons λ_0 leads to a decrease in the transition temperature. This is consistent with the results of Fig.<ref>, which shows that the coupling leads to a softening of phonons. A consequence of this softening is that for larger coupling, the system contains a larger population of soft phonons, which suppresses altermagnetic order. As T=0 is approached this effect is less relevant, as there are no phonon modes occupied at zero temperature. By comparing Fig. <ref> and Fig. <ref>, we note that, for a given coupling constant value, unless m_0 is within the T=0 ordered regime, there is no transition at non-zero temperature. Increasing λ_0 leads to an increase in m_0^c and for any m_0<m_0^c(λ_0), the transition temperature rises to a maximum before being suppressed. This initial rise is most likely due to the system being in the regime where the altermagnon hardening is still the dominant effect. While these conclusions refer to the case where c>v_L/T, we can also calculate the phase diagram for the other regime, c<v_L/T. Such a regime is less common but would be the case for systems with small magnetic interaction J. As discussed above in Fig. <ref>, the impact of the dynamic coupling on the phonon-like mode is significantly diminished, whereas the altermagnon-like mode still hardens. Consequently, as shown in Fig. <ref>, where the parametrization of Eq. (<ref>) was used, in the regime c<v_L/T we still find a very similar phase diagram near the quantum critical regime; however, in the thermal regime the coupling has a much smaller effect on the transition temperature.
§ CONCLUSIONS
In summary, we showed from symmetry considerations that a dynamic coupling between strain and the momentum of a magnetic collective mode naturally emerges in a class of systems with multipolar magnetic order. An important application of these results is for the case of altermagnets, as they are described by d-wave, g-wave, and i-wave magnetization order parameters, which in turn correspond to non-zero magnetic multipoles. While in this paper we focused on a tetragonal crystal with D_4h symmetry and an altermagnetic order parameter transforming as the irreducible representation B_1g^- (relevant for instance for the altermagnet candidate MnF_2), the results are more general, as we pointed out by commenting on crystals with O_h point group. One of our main results is the demonstration that, due to this dynamic strain coupling, altermagnons can in principle be probed directly from the phonon spectrum. This is important, as detecting such a state with zero net-magnetisation via the magnetic spectrum is a challenging task.
The coupling discussed here can be understood as an internal, fluctuation-induced non-dissipative response, which gives rise to stress σ_ij generated by a time-varying strain in the presence of the magnetic multipolar collective mode. It is analogous to the stress that occurs due to a finite Hall viscosity.
The coupling induces a symmetry-sensitive dynamic hybridization of phonon and altermagnon modes, i.e. an altermagnon-polaron. It softens the former and hardens the latter, giving rise to significant changes of the regions where altermagnetism occurs in the temperature-quantum fluctuation phase diagram.
At T=0 the effect of the coupling leads to an enhancement of order, hence in the T=0 plane of the phase diagram, the ordered regime is enlarged.
At non-zero T the situation changes. Now, thermal fluctuations (phonons) become the dominant effect, and the renormalization of these fluctuations due to the dynamic coupling leads to a high population of soft phonons. These, in turn, suppress order, leading to a reduction in the transition temperature.
While the focus of this paper was on altermagnets, it is important to note that the coupling in Eq. (<ref>) should also be relevant for certain ferromagnets. The condition for this coupling to be present is that the magnetization and some of the strain components must transform as the same irreducible representation Γ of the point group, the difference being that the former is time-reversal-odd (Γ^-) and the latter, time-reversal-even (Γ^+). While this is not possible in the cubic group O_h, it is allowed for tetragonal D_4h and hexagonal D_6h ferromagnets with in-plane moments. In those cases, respectively, the two-component in-plane magnetization transforms as E_g^- and E_1g^-, whereas the out-of-plane shear strain doublet (ε_xz, ε_yz) transforms as E_g^+ and E_1g^+. An even more promising class of systems is that of D_2h orthorhombic ferromagnets. In these cases, each of the three components of the magnetization transform separately as one of the one-dimensional irreducible representations B_ig^- with i=1,2,3. But it turns out that each of the three shear strains, ε_xy, ε_xz, and ε_yz, transforms as one of the B_ig^+ irreps. The same conclusions hold for the other two orthorhombic point groups, D_2 and C_2v. Therefore, the effects discussed here should be present in any orthorhombic ferromagnet. A promising family of materials to search for this effect are the ferromagnetic Mott insulating perovskites ATiO_3, with appropriate rare-earth A <cit.>.
§ ACKNOWLEDGEMENTS
We are grateful to B. Flebus, I. I. Mazin, J. Sinova, L. Šmejkal, R. Valentí for helpful discussions. This work was supported by the Deutsche Forschungsgemeinschaft (DFG, German Research Foundation) through TRR 288, 422213477 Elasto-Q-Mat through projects A07 (J.S) and the DFG project SCHM 1031/12-1 (C.R.W.S.) R.M.F was supported by the Air Force Office of Scientific Research under Award No. FA9550-21-1-0423. R.M.F. also acknowledges a Mercator Fellowship from the German Research Foundation (DFG) through TRR 288, 422213477 Elasto-Q-Mat.
§ ALTERNATIVE DERIVATION OF THE DYNAMIC COUPLING
The coupling
of Eq. <ref> can also be obtained by following the approach
of Refs. <cit.>. Consider a fermionic field operator
c(x) that is a spinor in spin and orbital
space. Performing a deformation of the lattice with non-symmetrized
strain ε_αβ=∂_αu_β,
we consider a coordinate transformation
x'=Γ̂(t)^Tx
where
Γ̂(t)=e^ε
We consider an arbitrary strain field such that Γ̂(t) is an arbitrary matrix with a positive determinant. The fermionic field transforms as <cit.>
c_ε(x)=√(det Γ) c(Γ^Tx)
In the case of a small change, we have ε→ε+δε
∂/∂ε_αβc_ε(x)=δ_αβ/2c_ε(x)+x_α'∂/∂ x'_βc_ε(x)
In order to study an infinitesimal change we set ε=0 such that x'=x. In this case the infinitesimal transformation can be written as:
c_ε(x)=[1-i∑_αβε_αβℒ_αβ]c(x)
where
ℒ_αβ=iδ_αβ/2+ix_α∂/∂ x_β=-1/2(x_αp_β+p_βx_α).
One can also include rotations that act in the internal space. We refer to the generator for this transformation as 𝒮_αβ and consider 𝒥_αβ=ℒ_αβ+𝒮_αβ. For specific examples, see <cit.>.
The important point is that 𝒥_αβ is even under parity, odd under time reversal and its symmetric part transforms like a symmetric second rank tensor, i.e. just like a multipolar order parameter discussed in this paper.
The field operator c_ε(x) of the strained system is hence related to that of the unstrained system via:
c_ε(x)=e^-i Tr(ε^T J)c(x)=U(t)c(x),
with strain generators J_αβ=-1/2(x_αp_β+p_βx_α)+i/8[σ_α,σ_β].
x and p=-i∇ are the position
and momentum operators and the Pauli matrices σ_α act
in orbital space.
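A quick numerical check of the internal-space part of these generators (illustrative only; names are ours) confirms that (i/8)[σ_α,σ_β] is Hermitian and reproduces the expected Pauli algebra:

```python
import numpy as np

# Pauli matrices acting in the orbital space.
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)
pauli = {'x': sx, 'y': sy, 'z': sz}

def spin_part(a, b):
    """Internal-space piece (i/8)[sigma_a, sigma_b] of the strain generator J_ab."""
    sa, sb = pauli[a], pauli[b]
    return 1j / 8 * (sa @ sb - sb @ sa)

S_xy = spin_part('x', 'y')
print(np.allclose(S_xy, -sz / 4))         # [sx, sy] = 2i sz  =>  S_xy = -sz / 4
print(np.allclose(S_xy, S_xy.conj().T))   # Hermitian, as a generator should be
```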
Because U(t) represents a time-dependent transformation, a term can be introduced into the action via
S_ c=-i∫ dτ d^3x c^†(x)U(t)d/dt(U(t)^-1)c(x).
The coupling term that emerges from the relation between the
strained and unstrained system is hence <cit.> S_ c= ∫ dτ d^3x∑_αβε_αβd/dτc^† J_αβc.
This is the coupling of strain to the time derivative of a fermionic bilinear that transforms like the multipolar magnetic order parameter. It is the analog of the coupling Eq. <ref> to the conjugate momentum that appears on the level of the Hamiltonian.
|
http://arxiv.org/abs/2307.03055v1
|
20230706151932
|
Electromagnetic cloak design with mono-objective and bi-objective optimizers: seeking the best tradeoff between protection and invisibility
|
[
"Ronald Aznavourian",
"Guillaume Demesy",
"Sébastien Guenneau",
"Julien Marot"
] |
physics.comp-ph
|
[
"physics.comp-ph"
] |
addressMarseille]Ronald Aznavourian
addressMarseille]Guillaume Demesy
addressLondres1,addressLondres2]Sébastien Guenneau
addressMarseille]Julien Marotmycorrespondingauthor
[mycorrespondingauthor]Corresponding author
[email protected]
[addressMarseille]Aix Marseille Univ, CNRS, Centrale Marseille, Institut Fresnel,
13397, Marseille, France
[addressLondres1]
UMI 2004 Abraham de Moivre-CNRS, Imperial College London, SW7 2AZ, London, UK
[addressLondres2]
The Blackett Laboratory, Department of Physics, Imperial College London, London SW7 2AZ, UK
We revisit the design of cloaks, without resorting to any geometric transform. Cancellation techniques and anomalous resonances have been applied for this purpose. Instead of a deductive reasoning, we propose a novel mono-objective optimization algorithm, namely a ternary grey wolf algorithm, and we adapt a bi-objective optimization algorithm. Firstly, the proposed chaotic ternary grey wolf algorithm searches three-valued spaces for all permittivity values in the cloak while minimizing the summation of a protection criterion and an invisibility criterion. Secondly, a bi-objective genetic algorithm is adapted to find pairs of optimal values of invisibility and protection.
ternary grey wolf optimizer, chaotic metaheuristics, finite elements, invisibility cloak
§ INTRODUCTION
Since the publication of works by the teams of Leonhardt <cit.> and Pendry <cit.> in the same issue of the Science magazine over 16 years ago, cloaking has become a mature research area in optics. It is by now well known that one can design invisibility cloaks via geometric transforms that either lead to anisotropic heterogeneous material parameters (e.g. rank-2 tensors of permittivity and permeability in optics), see <cit.>, or spatially varying, yet scalar valued, parameters <cit.>. The latter is achieved through conformal maps, hence constrained to the 2D case, and apart from that the cloak is of infinite extent. There is yet a third route to cloaking, which relaxes the severe material constraints in <cit.> (notably some infinite anisotropy on the inner boundary of the cloak, rooted in the blow up of a point onto a ball of finite extent known as the invisibility region, which can only be achieved over a narrow frequency bandwidth in practice <cit.>), and avoids the infinite extent of the cloak in <cit.>: so-called carpet cloaking is a combination of the previous two approaches that is based on quasi-conformal grids <cit.>. This third route requires only some moderate anisotropy, but only achieves invisibility for an object placed on a mirror.
Some works on cloaking focus on the mathematical aspects that are connected to famous inverse problems in particular on electric impedance tomography <cit.> wherein one wishes to uniquely determine the conductivity within a bounded region, by applying a known static voltage to the surface and recording the resulting current at the boundary (a Dirichlet-to-Neumann map).
The Dirichlet-to-Neumann map determines the conductivity <cit.>, but this can only happen if the conductivity is scalar-valued, positive and finite.
However, if some of these conditions are not met (e.g. the conductivity is matrix valued) electric impedance tomography fails <cit.>.
This has been exploited to create non-unique conductivities sharing the same boundary measurements <cit.>.
Main contributions
In the present work, we would like to revisit the design of cloaks, without resorting to any geometric transform. Not surprisingly, there is prior work that explored this alternative route, notably through scattering cancellation techniques <cit.> and anomalous resonances <cit.>. However, our rationale for the design of cloaks is not based on deductive reasoning, but rather on an optimization algorithm, and more precisely on a nature-inspired optimizer known as the Grey Wolf Optimizer (GWO). Here again, one may point out former work on the design of invisibility cloaks <cit.> based on topology optimization <cit.>. In <cit.>, a mono-objective genetic optimization algorithm has been applied to estimate the best value, in terms of bias with respect to free-space propagation conditions, of 6 parameters which define the desired cloak. However, we stress that the nature of the optimization algorithms we shall use here is radically different: we aim at estimating a much larger number of parameters, compared to previous works such as <cit.>, and we aim at minimizing two contradictory criteria instead of one.
Layout of the paper
In section <ref> we introduce the cloak design problem, pointing out the need for the minimization of two antagonist criteria: an invisibility criterion and a protection criterion. In section <ref>, we provide a state-of-the-art of mono-objective and bi-objective optimization algorithms. We focus on the mono-objective grey wolf optimizer, and on the bi-objective non dominated sorting genetic algorithm. In section <ref> we propose a novel ternary version of the grey wolf algorithm, which is dedicated to search spaces with three values. We name it chaotic ternary grey wolf optimizer (CTGWO). In section <ref> we present the results obtained on cloak design with the mono-objective approach, including CTGWO, and with the bi-objective approach involving NSGA-II. In section <ref> we discuss the results obtained. We point out the superiority of CTGWO over comparative algorithms in the mono-objective approach; and we emphasize the interest of the bi-objective approach for an end-user. Conclusions are drawn in section <ref>.
Notations
The following notations are used throughout the paper:
Manifolds are denoted by blackboard bold, A, matrices by boldface uppercase roman, A. Vectors are denoted by boldface lowercase roman, a, and scalars by lowercase or uppercase roman, a, b or A.
The P scalar components of a vector a are accessed via a^1,a^2,…,a^P, such that a=[a^1,a^2,…,a^P]^T with superscript T denoting the transpose. The interval of real values between scalars a and b is denoted by [a:b] with square brackets. A set of values is denoted by {a,…,b} with curly brackets.
The symbol ∘ denotes the Hadamard (also called component-wise) product of two vectors: a∘b is a vector whose K components are equal to a_1 b_1,a_2 b_2,…,a_K b_K.
The symbol |a| means element-wise absolute value and is a vector whose K components are equal to |a_1|, |a_2|,…, |a_K|.
§ CLOAK DESIGN PROBLEM
We consider the 2D scattering problem sketched in Fig. <ref>(a). A point source is located at (x_s,y_s) and radiates from freespace (in blue color) in the vicinity of the yellow zone S_1 which is the area to be protected. This yellow region is cloaked by the green rectangular zone. For computational purposes, the blue freespace zone is surrounded by Perfectly Matched Layers (PMLs) that model an unbounded medium <cit.>. The scalar scattering problem amounts to finding the total scalar field u such that:
∇·[ σ∇ u]+k_0^2 χ u=δ_S
where k_0 is the freespace wavenumber (associated with the unbounded region of space outside of the object and its surrounding cloak and corresponding to a freespace wavelength of λ_0=2π/k_0), and σ and χ represent the scalar material properties.
In the context of acoustic pressure waves in isotropic non-viscous fluids, σ is the inverse of density and χ the inverse of compressibility, and p is the amplitude of the pressure wave. For anti-plane shear waves in isotropic solids, σ is the shear modulus and χ the density, whereas u stands for the component of the displacement field perpendicular to the (xy)-plane. Finally, for electromagnetic waves in transverse magnetic (TM) polarization σ is the inverse of the relative permittivity and χ the relative permeability. The relative permittivity will be denoted by ϵ in the rest of the paper. This is when u represents the component of the magnetic field perpendicular to the (xy)-plane. The roles of permittivity and permeability are interchanged in transverse electric (TE) polarization, whereby the electric field is perpendicular to (xy)-plane. All these physical setups are equivalent from a mathematical standpoint. In the rest of the paper, the considered wavelength is λ=500 nm, the side of a triangle mesh has length λ/6 in the freespace; and λ/12 in the cloak and the protected zone.
In what follows, we focus on the TM case. In this scalar Helmholtz equation (<ref>), one can still consider a certain form of anisotropy <cit.>, provided that σ can be written as a 2 by 2 matrix.
σ =
[ σ_xx σ_xy 0; σ_yx σ_yy 0; 0 0 σ_zz ]
=
[ σ_xx σ_xy 0; σ_yx σ_yy 0; 0 0 1 ] [ 1 0 0; 0 1 0; 0 0 σ_zz ].
Indeed, even if isotropic cloak and protected area only are considered here, the implementation of the PMLs relies on anisotropy (and absorption), see <cit.>. As in standard topology optimization, our design space is constructed on the mesh of the cloak shown in Fig. <ref>. Each triangular element constitutes a voxel that can be filled with a particular material whose isotropic physical properties are represented by the scalar quantity σ.
The practical Finite Elements discretization and implementation of the problem follow quite closely those described in Ref. <cit.> and its associated ONELAB open source tutorials found in Ref. <cit.>. In short, a first dummy run allows to retrieve the constant-by-element table for a particular mesh, and this table is used to control the discrete values of γ throughout the whole optimization procedure thanks to and functions <cit.>. Then, Eq. <ref> is solved using second order Lagrange elements in a standard manner.
Finally, we are in a position to define the optimization criteria. The first one, C_1 is a protection criterion, the integral over the region to be protected of the square norm of the field :
C_1 = 1/|S_1|∫_S_1|u|^2dS,
where |S_1| is the area of S_1 in Fig. <ref>.
The second one, C_2 is an invisibility criterion, the integral over the freespace region of interest of the square norm of the difference between the field and a reference field denoted u_0 :
C_2 = 1/|S_2|∫_S_2|u-u_0|^2dS,
where |S_2| is the area of S_2 in Fig. <ref> and u_0 is the freespace solution to the problem (i.e. the scalar Green function of the problem without any cloak or region to protect).
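In a piecewise-constant finite-element setting, both criteria reduce to area-weighted sums over the mesh triangles. The following sketch is illustrative only; the array names and the assumption that the solver returns one complex field value per triangle are ours:

```python
import numpy as np

def protection_invisibility(u, u0, tri_area, in_S1, in_S2):
    """Discrete versions of C_1 (protection) and C_2 (invisibility).

    u, u0    : complex field values per triangle (cloaked run / free-space run),
    tri_area : area of each mesh triangle,
    in_S1/S2 : boolean masks selecting the protected region S_1 and the outer region S_2.
    """
    a1 = tri_area[in_S1]
    a2 = tri_area[in_S2]
    C1 = np.sum(a1 * np.abs(u[in_S1])**2) / np.sum(a1)
    C2 = np.sum(a2 * np.abs(u[in_S2] - u0[in_S2])**2) / np.sum(a2)
    return C1, C2
```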
Our goal is to minimize C_1 and C_2 to achieve both protection and invisibility. Intuitively, we expect there should be some trade-off between protection (i.e. a vanishing field u inside the yellow region) and invisibility (i.e. u as close as possible to u_0 in the blue region). For instance, surrounding the yellow region by an infinite conducting boundary would ensure a vanishing field u inside the yellow region, but u and u_0 would be very different in freespace blue region S_2 due to a large scattering. On the other hand, if we consider freespace in the green region, then S_1, the cloak and S_2 are impedanced matched at the interfaces between all three regions and thus u=u_0 in S_2 (the cloak and S_1 being transparent). But in that case there is no protection at all in S_1. Depending upon their need, cloak designers might just wish to give more weight to criterion C_1 or C_2. C_1 and C_2 depend both on P parameters, where P is equal to the number of voxels (equivalently triangles) in the cloak. These parameters, denoted by K^1,K^2, …,K^i, …, K^P, take their values in a so-called 'search space', which can be either discrete or continuous. In this paper we will firstly consider the realistic case where three possible permittivity values can be associated with each triangle. These values correspond to three different materials and yield the search space {7,10,12}. Equivalently, once these values are set, we may access them through the index values {0,1,2}. This will compound our 'ternary search space'.
Secondly, we will perform a study which is less realistic: in our simulations the permittivity for each voxel may be any real value in [7:12] (up to seven decimal digits).
In a nutshell, we notice that we face a bi-objective problem with either ternary, or continuous search spaces. In the rest of the paper, the criteria C_1 and C_2 will also be denoted by f_1(x): RP↦R_+ and f_2(x): RP↦R_+. Vector x contains the parameter values K^1,K^2, …,K^i, …, K^P.
Let S_1 be the region surrounded by the cloak that one would like to protect from incoming waves. Let S_2 be the region outside of the cloak that should not scatter incoming waves. Ideally, one would like to achieve both protection and invisibility, but in practice we are looking for a minimization of criteria f_1 and f_2 normalized for a solution x, with 0<f_1<1 and 0<f_2<1.
These criteria are defined as follows: We consider successively, for a given image I (of triangles I(i) with values between 0 and 1):
B_I(S) = S∑ |I(i)|/card(S)
where card(S) is the number of pixels in S.
We define I_b the uncloaked image with the area to protect, and I_e the image for for free space, I_c the image with the object and the cloak.
And the criteria
A_1 =B_I_c(S_1), A_2 =B_I_e - I_c(S_2).
One can also think of the integral criterion in Eq. (<ref>):
f = ∫^∘ |Im1 - Im2|
where Im1 is a free-space image, such as <ref>(a), and Im2 is an image with a cloaked obstacle, such as <ref>(b). In Im1 and Im2, all pixels lying in the cloaked region are set to zero.
This criterion quantifies the deformation of the wave after it has crossed the cloak. It is also this criterion that we want to choose to define the properties of the cloak.
§ BACKGROUND AND STATE-OF-THE-ART OF OPTIMIZATION METHODS
Optimization algorithms are meant to retrieve the location of the minimum value reached with a set of parameters.
In the case of single-objective optimization, only one function is considered for optimization; in the case of bi-objective optimization, two functions should be minimized simultaneously.
We assume that P parameters should be estimated: K^1,K^2, …,K^i, …, K^P, where P ≥ 1. We remind that, as explained in section <ref>, in the considered problem, P is equal to the number of voxels in the cloak.
The following notations will be used:
∙ P is the number of expected parameters, which are indexed with i.
∙ iter denotes one iteration and T_max the maximum allowed number of iterations.
∙ f(·) is a function to be optimized, also called the criterion. It depends on the P parameters mentioned above. In this paper, unless specified, minimization problems are considered.
In the case of a single-objective optimization, there is one function f(·) to be minimized. In the case of bi-objective optimization, there are two functions f_1(·) and f_2(·) to be minimized.
Both GWO and NSGA-II are agent-based algorithms.
∙ x_q( iter) is a vector corresponding to an agent q=1,…,Q, at iteration iter. It takes the form of a P-tuple of tested values x_q( iter)=[K^1,K^2, …, K^P]^T.
In subsection <ref>, we give a background on a single-objective optimization: the grey wolf algorithm (GWO). In subsection <ref>, we give a background on a bi-objective optimization algorithm, namely non-dominated sorting genetic algorithm (NSGA) <cit.> and a fast version (NSGA-II) <cit.>.
§.§ Background on single-objective optimization and Grey Wolf Optimizer
The GWO is a nature-inspired optimizer based on the observation of the social life of grey wolves in nature <cit.>. In this method an agent is called a wolf. It simulates the common behaviour and hunting strategies of grey wolves in their environment.
The seminal GWO searches a continuous space <cit.>. Among the search agents, there are three leaders α, β, and δ. All other agents are the ω wolves.
∙ x_α( iter), x_β( iter), and x_δ( iter) denote the positions of the leaders α, β, and δ respectively, at iteration iter.
The position of any wolf at iteration iter+1 is calculated as:
x_q( iter+1) = 1/3(y_α( iter) + y_β( iter) + y_δ( iter))
It results from the equal contribution of the α, β, and δ wolves. These contributions are computed at each iteration iter as follows, for any leader l, either α, β, or δ:
y_l( iter) = x_l( iter) - A∘D_l
with: D_l = |C∘x_l( iter) - x_q( iter)|, |·| denoting element-wise absolute value.
The vectors A and C are calculated as A=2a∘r_1 - a and C=2 r_2. In these expressions, vectors r_1, r_2 have random components between 0 and 1.
For the i^ th parameter (i=1,…,P):
The component b^i of A is provided by:
b^i=2 a r_1 - a,
the same whatever i.
The component d_l^i of D_l is provided by:
d_l^i=|2 r_2 x_l^i - x_q^i( iter)|
where r_1 and r_2 are two random values between 0 and 1;
x_q^i( iter) is the i^ th component of the q^ th agent at iteration iter; x_l^i is the i^ th component of leader l.
The component y_l^i of y_l( iter) is provided by:
y_l^i = x_l^i - b^i d_l^i
During the hunt, the wolves firstly diverge from each other to search for the prey, or equivalently to encircle it. Secondly, they converge to kill the prey. This is mathematically modeled through the deterministic vector a. The components of vector a are all equal to a, a scalar value which is a key parameter in the algorithm. When a>1, the search agents are obliged to diverge from the prey: this is the exploration phase. Conversely, when a ≤ 1, the search agents are obliged to attack towards the prey: this is the exploitation phase. In the vanilla version of GWO <cit.>, the key parameter a decreased regularly from 2 to 0:
a=2(1 - iter/ T_max)
In more recent works, various expressions have been proposed for a such as a quadratic <cit.> or adaptive <cit.> function. Whatever the version <cit.> the exploration phase lasts until a=1, then the exploitation phase lasts from a=1 to a=0.
Storing all values of f(x_α( iter)) across all iterations from 1 to T_max yields a so-called convergence curve. The outcomes of a single-objective optimization method are mainly the solution x_α( T_max), but also the convergence curve.
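For reference, a minimal continuous GWO following the update rules above can be sketched as follows (illustrative implementation, not the code used in this work; the toy sphere objective is ours):

```python
import numpy as np

def gwo(f, dim, bounds, n_agents=30, t_max=200, seed=0):
    """Minimal continuous grey wolf optimizer (vanilla update rules)."""
    rng = np.random.default_rng(seed)
    lo, hi = bounds
    X = rng.uniform(lo, hi, size=(n_agents, dim))      # wolf positions
    for t in range(t_max):
        scores = np.array([f(x) for x in X])
        order = np.argsort(scores)
        alpha, beta, delta = X[order[0]], X[order[1]], X[order[2]]
        a = 2.0 * (1.0 - t / t_max)                    # linear decrease from 2 to 0
        Y = np.empty((3, n_agents, dim))
        for k, leader in enumerate((alpha, beta, delta)):
            r1, r2 = rng.random((n_agents, dim)), rng.random((n_agents, dim))
            A, C = 2 * a * r1 - a, 2 * r2
            D = np.abs(C * leader - X)
            Y[k] = leader - A * D
        X = np.clip(Y.mean(axis=0), lo, hi)            # equal contribution of the leaders
    scores = np.array([f(x) for x in X])
    return X[np.argmin(scores)], scores.min()

# Example on a toy objective (sphere function).
best_x, best_f = gwo(lambda x: np.sum(x**2), dim=5, bounds=(-3.0, 3.0))
print(best_f)
```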
§.§ Background on bi-objective optimization and non-dominated sorting genetic algorithm
Non-dominated sorting genetic algorithms (NSGA and NSGA-II) are inspired by Darwin's rules of evolution. In this method an agent is called a chromosome. The interest of multi-objective optimization has already been emphasized in physical phenomenon modeling for instance in <cit.>, where NSGA-II is used to model turbulence; or in the design of finite 3D periodic structures <cit.>, etc. The main steps of the algorithm are as follows:
generate a random population of chromosomes, calculate function values f_1 and f_2 for each chromosome, sort chromosomes in the population, choose parents in the next generation by tournament algorithm, generate children by crossover and mutation, extract a new generation through ranking, and repeat the process from parent choice. The expected outcome of NSGA-II is different from the outcom of GWO: For a single-objective method, we will represent the results as convergence curves, and the solution is a single set of values extracted from the search spaces. We will try to minimize our cost function so that our objective tends as much as possible towards zero. It is up to the user to determine a threshold value for our cost function, from which we will retrieve one optimal solution. For a bi-objective method, we will no longer have convergence curves, but Pareto fronts. The solution is composed of several sets of values extracted from the search space and located on the Pareto front. The principle of the Pareto front is that we will represent the value of the first cost function on the horizontal axis, and the value of the second cost function on the vertical axis. Thus, visually, we will see very quickly if a solution favors either f_1 or f_2. The main method to estimate the quality of a solution is the 'domination' principle. In fact, for each solution, we will calculate the distance between the point of origin of coordinates (0,0), and the considered solution, with its coordinates. Of course, the solution which has the smallest distance will 'dominate' the solution which has a larger distance. Considering two solutions x_ q1 and x_ q2, we can say that x_ q1 'dominates' x_ q2, if the following condition is respected:
((f_1(x_q1)≤ f_1(x_q2)) and (f_2(x_q1)≤ f_2(x_q2)))
and
((f_1(x_q1) < f_1(x_q2)) or (f_2(x_q1) < f_2(x_q2)))
It is up to the user to choose the best solution(s) to keep. Indeed, the user may very well want to favour one of the two objectives, or seek the best compromise between the two objectives.
Actually, in the last decades, different methods to determine the 'domination' principle have been proposed, and then, the 'non-domination' principle emerged, notably thanks to Srinivas and Deb <cit.> who proposed the NSGA algorithm <cit.> and then an improved version, called NSGA-II <cit.>. This fast sorting method by 'non-domination' has been widely spread by other algorithms as an efficient technique. The particularity of NSGA-II is to hierarchize the levels of 'domination', with a first Pareto front containing only the non-dominated solutions, a second Pareto front with the solutions dominated by one or two solutions, and finally, a third Pareto front with all the other solutions, those dominated by more than two solutions. For this last category, we compute 'crowding distances' for the solutions of this category, then we sort the set of results thus obtained. The 'crowding distance' is calculated criterion by criterion. For example, for the criterion represented on the horizontal axis, we will first determine the extreme solutions, which we will call 'min' and 'max', it being understood that 'min' will be the solution which will have the smallest value on the horizontal axis, and 'max', the solution which will have the largest value on the horizontal axis. Considering that we have Q'<Q solutions on the Pareto front, we will then assign an index to each solution, with the index 1, for 'min', and the index Q', for 'max'. For each solution of index q with 1 < q < Q', we will calculate the distance on the horizontal axis between the (q-1)^ th solution and the solution (q+1)^ th solution, and we will divide this distance by the distance on the horizontal axis between 'max' and 'min'. Thus, it is this result that will be considered as the crowding distance of each solution.
The next step is to create a 'descendant'. To do this, we first organize a selection tournament, i.e. we will randomly draw pairs of solutions in the set of solutions, and for each pair of solutions, we will determine which one can become 'parent', by comparing the hierarchical levels of the Pareto front. For example, if the first solution belongs to the second Pareto front and the second solution belongs to the third Pareto front, then the selection tournament will be won by the first solution because the second Pareto front contains solutions that are better than the third Pareto front. If two solutions belonging to the same Pareto front were drawn, then the solution with the smallest crowding distance would be selected as the 'parent'. Then, for each pair of 'parents', we will generate a child which will have the values of the first 'parent' for some unknowns, and the values of the other "parent" for the other unknowns. This step is called 'cross-over'. Finally, according to a percentage defined beforehand, the values of some unknowns of the child will be slightly modified. This last step is called the 'mutation'. The whole process is repeated, as many times as there are iterations, and in the end, we obtain the set of 'optimized' solutions.
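The two building blocks of NSGA-II described above, fast non-dominated sorting and the crowding distance, can be sketched as follows (illustrative only, not the implementation used in this work):

```python
import numpy as np

def non_dominated_sort(F):
    """Rank solutions by Pareto front; F has shape (n, 2) with objectives to minimize."""
    n = len(F)
    dominated_by = [set() for _ in range(n)]
    n_dominating = np.zeros(n, dtype=int)
    for p in range(n):
        for q in range(n):
            if np.all(F[p] <= F[q]) and np.any(F[p] < F[q]):
                dominated_by[p].add(q)       # p dominates q
            elif np.all(F[q] <= F[p]) and np.any(F[q] < F[p]):
                n_dominating[p] += 1         # q dominates p
    fronts, current = [], [p for p in range(n) if n_dominating[p] == 0]
    while current:
        fronts.append(current)
        nxt = []
        for p in current:
            for q in dominated_by[p]:
                n_dominating[q] -= 1
                if n_dominating[q] == 0:
                    nxt.append(q)
        current = nxt
    return fronts

def crowding_distance(F, front):
    """Crowding distance of the solutions belonging to one front."""
    d = np.zeros(len(front))
    for j in range(F.shape[1]):
        idx = np.argsort(F[front, j])
        d[idx[0]] = d[idx[-1]] = np.inf      # keep the extremes of each objective
        span = F[front[idx[-1]], j] - F[front[idx[0]], j] or 1.0
        for k in range(1, len(front) - 1):
            d[idx[k]] += (F[front[idx[k + 1]], j] - F[front[idx[k - 1]], j]) / span
    return d

# Tiny example with four candidate solutions and two objectives.
F = np.array([[1.0, 4.0], [2.0, 2.0], [3.0, 1.0], [3.0, 3.0]])
fronts = non_dominated_sort(F)
print(fronts, crowding_distance(F, fronts[0]))
```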
§ CHAOTIC TERNARY GREY WOLF ALGORITHM
We propose a ternary version of GWO, which searches specifically ternary spaces with enhanced exploration abilities.
Our first motivation is that, in the considered application, the search space is associated to three values of epsilon. But this method could be applied to other situations and applications, involving for instance sensors with three possible states.
Our second motivation is to propose a method with enhanced exploration properties. Indeed metaheuristics with enhanced exploration properties are of great interest to cope with applications where the objective function exhibits an elevated number of local minima. We aim at achieving such enhanced exploration properties while proposing a ternary map which evolves across iterations, and inserting chaotic sequences in the update rules of the agents.
Our third motivation is to improve the diversity of the agents behavior.
Indeed, GWO exhibits premature convergence due to poor diversity of the population of wolves. So we propose to divide the wolf pack into two groups: the first with enhanced 'exploration' abilities, and the second with 'exploitation' abilities.
The proposed chaotic ternary GWO is denoted by CTGWO. In subsection <ref>, we derive the update rules which relies on specific transform functions depending on a parameter a.
We wish to preserve the original philosophy of GWO: the number of leaders ruling the update of the agents is superior to 1, and the parameter a permits to distinguish between an exploration phase at the beginning of the algorithm and an exploitation phase at the end. In subsection <ref>, we just assume about parameter a that it decreases from 2 to 0 across the iterations.
Then in subsection <ref>, we investigate a chaotic expression for a.
§.§ Ternary update rules based on dedicated transform maps
We propose here innovative update rules, dedicated to a ternary search space, performed with the help of an ad hoc transform function. Firstly, we propose a novel manner to compute the contribution of a leader, and the mean contribution of several leaders. Secondly, we propose an update rule depending on this mean contribution.
Contribution of a leader
We remind that in the continuous case, the contribution y_l^i of a leader l is computed as in Eq. (<ref>). In this case y_l^i = x_l^i - b^i d_l^i depends on the product b^i d_l^i, which decreases to 0 simultaneously with a, reaching 0 when iter=T_max. So y_l^i is a real value which gets closer to x_l^i across the iterations. In the following we propose to compute a contribution ŷ_l^i which is a real value between 0 and 2. We propose, as expression of the contribution of leader l:
ŷ_l^i=(x_l^i - b^i d_l^i) mod 2
where b^i is defined as in Eq. (<ref>) and d_l^i is defined as in Eq. (<ref>). Based on the hypothesis that the maximum value of a is 2, we deduce from Eqs. (<ref>) and (<ref>) that the values of b^i d_l^i are between -8 and 8. So the values of x_l^i - b^i d_l^i can be out of the interval [0:2].
The 'modulo' operator, denoted by mod, is meant to enforce the contributions ŷ_l^i to remain in the interval [0:2]. It is defined as follows: whatever the values z1 ∈R and z2 ∈R*:
z1 mod z2={[ z1 - z2 ⌊ z1/z2 ⌋ if z1≠ z2; z2 if z1=z2, or z1=0 ].
where ⌊ z1 ⌋ denotes the largest value in Z which is smaller than the scalar z1 ∈ R.
Weighted contribution of the leaders
We denote by ȳ_q^i the weighted contribution of four leaders α, β, δ, and ρ:
ȳ_q^i={[ 1/3(ŷ_α^i+ŷ_β^i+((1-a/2)ŷ_δ^i+a/2 ŷ_ρ^i)) if a>1; 1/3(ŷ_α^i+ŷ_β^i+ŷ_δ^i) if a ≤ 1 ].
In Eq. (<ref>), leader ρ is a wolf which is selected at random among the wolf pack.
We notice that ȳ_q^i gets closer to 1/3(ŷ_α^i+ŷ_β^i+ŷ_δ^i) when the iteration index increases.
We can now derive a ternary rule which updates the position of any wolf in a ternary search space.
Ternary update rule
We propose an evolving map which enhances exploration at the beginning of the algorithm, and exploitation at the end of the algorithm. Compared to recent works about the binary GWO <cit.>, the new feature in the proposed transform map is of course the division of the map into three regions instead of 2, but also, the fact that the map evolves across iterations: we will introduce a term which is proportional to a in the transform functions and which emphasizes exploration at the beginning of the algorithm and exploitation at the end of the algorithm.
The proposed novel process dedicated to ternary search spaces permits to select either value 0, 1, or 2. In dimension i (i=1,…,P), wolf q is updated from iteration iter to iteration iter+1 as follows:
x_q^i( iter+1)=
{[ 0 if r ≥φ^u(ȳ_q^i,a); 1 if r < φ^u(ȳ_q^i,a) and
r ≥φ^d(ȳ_q^i,a); 2 if r < φ^d(ȳ_q^i,a) ].
where the scalar r is a random value between 0 and 1 and taken from a normal distribution.
In Eq. (<ref>) we introduce two functions, which are necessary to define the ternary map. These functions are denoted by φ^u and φ^d:
φ^u: [0:2] ×ℝ_+→ [0:1]; y ↦φ^u(y,a)
φ^d: [0:2] ×ℝ_+→ [0:1]; y ↦φ^d(y,a)
Function φ^u separates the uppermost part of the map from the rest of the map; and function φ^d separates the lowermost part of the map from the rest of the map. The basic idea is that if the random number r leads to the region in-between, the value 1 will be chosen as an updated value. If r leads to the region which is above φ^u (resp. below φ^d), the value 0 (resp. 2) will be selected. We will now detail the shape of the frontiers between regions 0, 1, and 2. We set a ternary map based on a 'power function'.
The basic function we use is a power function applied to any scalar y and depending on 5 parameters c1, c2, c3, c4, and a:
P(y;c_1,c_2,c_3,c_4,a)=
P_0(y;c_1,c_2,c_3,c_4) + P_a(y,a)
In Eq. (<ref>), the first term P_0, depending on c_1, c_2, c_3, and c_4, gives the overall shape of the function. The second term P_a, depending on a, permits to get P(0;c_1,c_2,c_3,c_4,a)≥0 and P(2;c_1,c_2,c_3,c_4,a)≤1. We use two versions of this function to define the ternary map. The first one, with c_3=2 and c_4=1; the second one, with c_3=0 and c_4=0:
φ^u(y,a)=P(y;2,3,2,1,a)
φ^d(y,a)=P(y;2,3,0,0,a)
The second term depending on a (see Eq. (<ref>)) permits to get values at y=0 which are slightly larger than 0, and values at y=2 which are slightly smaller than 1:
P(0;c_1,c_2,c_3,c_4,a)=a/5(1-exp(-2)),
and P(2;c_1,c_2,c_3,c_4,a)=1-a/5(1-exp(-2)) in both Eqs. (<ref>) and (<ref>).
So:
φ^u(0,a)=φ^d(0,a)=a/5(1-exp(-2))
and
φ^u(2,a)=φ^d(2,a)=1-a/5(1-exp(-2))
The functions φ^u(y,a) and φ^d(y,a) defined in Eqs. (<ref>) and (<ref>) can then be used in Eq. (<ref>) to get the 'Power' transform map.
Representation of the ternary transform map and interpretation
Fig. <ref> shows the 'Power' ternary map. In each case the uppermost region maps for 0, the central region maps for 1, and the lowermost region maps for 2. It can be noticed that the shape of the functions φ^u and φ^d is consistent with the derivations in Eqs. (<ref>), and (<ref>).
We can check that:
* the term which is proportional to a in Eq. (<ref>) yields a map which evolves across the iterations; this is an important difference compared to the binary map presented in <cit.>;
* for a value of parameter a which is 2, either the value 0 or 2 may be selected with an elevated probability when all leader contributions are equal to 0, or 2;
* for a value of parameter a which is 0, the value 0 (resp. 2) is selected with probability 1 when ȳ_q^i=0 (resp. ȳ_q^i=2);
* whatever the value of a, either the values 0, 1, or 2 may be selected when ȳ_q^i=1.
Indeed, as defined in Eq. (<ref>) the values of ŷ_l^i are real and between 0 and 2, whatever i and l. In subsection <ref> we embed chaotic sequences in the expression of a to improve again the exploration abilities of our algorithm.
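A minimal sketch of this ternary update step is given below. The transfer maps used here are simple placeholders, not the power maps defined above; they are only chosen to respect φ^d ≤ φ^u and the boundary values quoted above, and r is drawn uniformly for simplicity. All names are ours:

```python
import numpy as np

def ternary_update(y_bar, a, rng):
    """One ternary position update for a single dimension of a single wolf.

    y_bar is the weighted leader contribution in [0, 2]; phi_u and phi_d below are
    placeholder transfer maps matching only the boundary values of the ternary map.
    """
    eps = (a / 5.0) * (1.0 - np.exp(-2.0))
    phi_u = eps + (1.0 - 2.0 * eps) * (y_bar / 2.0) ** 0.7   # upper frontier
    phi_d = eps + (1.0 - 2.0 * eps) * (y_bar / 2.0) ** 1.4   # lower frontier
    r = rng.random()
    if r >= phi_u:
        return 0
    return 1 if r >= phi_d else 2

rng = np.random.default_rng(1)
print([ternary_update(y, a=1.5, rng=rng) for y in (0.0, 1.0, 2.0)])
```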
§.§ Embedding chaotic sequences in the ternary grey wolf optimizer
Chaotic expression of parameter 'a' for improved exploration abilities
We modify the expression of parameter a with respect to other versions of GWO, and propose:
a=2(1- ( iter/ T_max)^(η_ q (1+ Γ(c_ q^1, iter))) )
In Eq. (<ref>), we notice that, for the first time in this paper, the expression of a depends on the agent index q.
Firstly, we reposition the worst agents closer to the three leaders, with a value of η which depends on the score of the agent: for the worst half of the agents (associated with the largest scores), η_ q=2/3;
for the best half of the agents (associated with the smallest scores), η_ q=3/2.
Secondly, inserting a chaotic sequence Γ(c_ q^1, iter) enhances the variability of the behavior of the agents. c_ q^1 is the initial value of the chaotic sequence, which is different for every agent q.
The principle of the proposed method is that the a parameter of the GWO, which rules the displacement of the agents, is perturbed through the value of a chaotic sequence.
Choosing a different value of c_ q^1 for each agent permits to emphasize diversity in the displacement of the agents. Meanwhile, we choose a sequence with one attractor to ensure that Γ(c_ q^1 , T_max) is the same whatever q.
Construction of the chaotic sequences
To privilege exploration abilities, the behaviors of the agents should differ one from the other. To privilege exploitation abilities at the end of the algorithm, the agents behavior should get closer to each other while the iteration index increases. So we set the following constraints on the chaotic sequences:
* the values of Γ(c_ q^1, iter) are positive, in the interval [0:1];
* the last value should be Γ(c_ q^1, T_max) = 0 whatever the initial value Γ(c_ q^1,1). In this way, at the last iteration iter= T_max, the behavior of all agents is the same.
To fulfill easily those constraints, we have to choose a sequence with one known attractor.
We base our sequence Γ on a logistic sequence, denoted by c( iter). Given an initial term c( 1), each subsequent term (for iter=2,…, T_max) is defined as:
c( iter+1) = κ c( iter)(1-c( iter))
where κ ∈ ℝ_+^*. The number of attractors for this sequence depends on the value of κ. Fig. <ref> shows chaotic sequences with one (Fig. <ref>(a)), two (Fig. <ref>(b)), or an infinite number of attractors (Fig. <ref>(c)). We chose κ=2.8 in Fig. <ref>(a), κ=3.2 in Fig. <ref>(b), κ=3.99 in Fig. <ref>(c). In Figs. <ref>(a),(b), and (c), each plot with a given color corresponds to a different value for c( 1). There are ten chaotic sequences in each case.
We choose a sequence such as in Fig. <ref>(a), where c( T_max) bears the same value, equal to 0.65 approximately, whatever the sequence. For this we set κ=2.8.
For any agent q, and for iter=1,…, T_max, we define the ad hoc chaotic sequence Γ(c_ q^1 , iter) as follows, based on the logistic sequence of Eq. (<ref>):
Γ(c_ q^1 , iter) = 0.1(c( iter)-c( T_max))
Hence, for any agent q Γ(c_ q^1 , T_max)=0. We notice that the initial term c_ q^1 is defined as follows:
c_ q^1 = 0.1(c( 1)-c( T_max))
We set the initial term c( 1) of the logistic sequence as a random value between 0 and 1, taken from a normal distribution.
As c( 1) is a random value, c_ q^1 is also random and a different sequence is generated for each agent. Fig. <ref>(d) shows the sequences of values for a for all agents.
We clearly distinguish two families of agents: the first family with η_ q=3/2 and rather elevated values of a, and the second family with η_ q=2/3 and smaller values of a, which tend more rapidly towards 0.
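The construction of these per-agent schedules can be sketched as follows (illustrative only; in particular, the split of the pack into a 'best' and a 'worst' half is emulated here by the agent index rather than by the actual scores):

```python
import numpy as np

def chaotic_a_schedule(n_agents=10, t_max=250, kappa=2.8, seed=0):
    """Per-agent schedules a_q(iter) built from logistic chaotic sequences."""
    rng = np.random.default_rng(seed)
    it = np.arange(1, t_max + 1)
    a = np.zeros((n_agents, t_max))
    for q in range(n_agents):
        c = np.empty(t_max)
        c[0] = rng.random()                        # random initial term c(1)
        for t in range(t_max - 1):
            c[t + 1] = kappa * c[t] * (1.0 - c[t]) # logistic sequence, one attractor for kappa=2.8
        gamma = 0.1 * (c - c[-1])                  # Gamma(c_q^1, iter), vanishes at T_max
        eta = 1.5 if q < n_agents // 2 else 2 / 3  # stand-in for the score-based split
        a[q] = 2.0 * (1.0 - (it / t_max) ** (eta * (1.0 + gamma)))
    return a

a = chaotic_a_schedule()
print(a[:, -1])   # all agents end with a = 0 at the last iteration
```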
§ RESULTS: MONO-OBJECTIVE AND BI-OBJECTIVE APPROACHES
§.§ Experimental conditions and metrics
In this section, the test environment is a server running Linux, equipped with 4 Intel(R) Xeon(R) CPU X7560 @ 2.27GHz (64 processors, Hyper-threading activated) and 1000 GB RAM.
The software is Python.
We consider cloaks with P=329 voxels (of triangular shape) and the expected outcomes of the algorithms are vectors x containing permittivity values K^1,K^2, …,K^i, …, K^P.
In subsection <ref> we display the results obtained with a mono-objective approach. The criterion which is minimized is f(x): RP↦R_+: f(x)=1/2(f_1(x)+f_2(x)). We remind (see section <ref>) that f_1 stands for protection, and f_2 stands for invisibility.
We remind that topological optimization limits the number of possible materials to two. Instead, and for the first time in this paper, we adapt our new variant CTGWO of the GWO, which searches for solutions among three materials. We compare the proposed CTGWO with the adaptive mixed GWO in discrete mode <cit.> (denoted by amixedGWO), and the vanilla continuous GWO <cit.>. Each trial of either f_1 or f_2 takes 0.99 s of computational time.
For GWO and amixedGWO, we use the expression of a presented in Eq. (<ref>). For CTGWO, we use the expression of a presented in Eq. (<ref>). The search space for the values of the permittivity ϵ is {7,10,12}. CTGWO and amixedGWO access these values via indices retrieved from the discrete search space {0,1,2}. GWO accesses these values via rounded indices retrieved from the continuous search space [0:2]. We run the three algorithms with Q=100 agents and T_max=250 iterations, that is, 25 10^3 trials of the objective function. The agents are initialized with random integers between 0 and 2.
In subsection <ref> we display the results obtained with a bi-objective approach. The couple of criteria which are jointly minimized are f_1(x): RP↦R_+, and f_2(x): RP↦R_+. We display the results of GWO, amixedGWO, and the proposed CTGWO, as well as the results obtained by NSGA-II in four situations.
These five experimental conditions are summarized in Table <ref>.
The ternary search space mentioned in Table <ref> is {7,10,12}: NSGA-II accesses these values via rounded indices retrieved from the continuous search space [0:2]. The continuous search space mentioned in Table <ref> is [7:12]. This last situation is prospective in the sense that we assume that we afford any material with any permittivity value between 7 and 12.
§.§ Mono-objective approach
In this subsection, we display the results obtained by CTGWO, amixedGWO, and GWO: the convergence curves in Fig. <ref>; the scores reached by each method, and corresponding protection and invisibility performances in Table <ref>; the cloak design and wave propagation field in Fig. <ref>.
In Table <ref>, we display, for GWO, amixedGWO, and CTGWO, the score of wolf α at the last iteration, f(x_α( T_max)). For instance, the solution provided by CTGWO is x_α( T_max)=[7,12,12,…,10]^T. We notice that, in CTGWO, the power map is such that there is still a small probability that the selected region of the map does not correspond to the leader contribution used as the abscissa. The interest of such a mismatch is to prevent the algorithm from being locked in a local minimum. The chaotic sequences also play a role in the good behavior of CTGWO shown by the convergence curve in Fig. <ref>. Fig. <ref>(f) indicates that the protection abilities of the cloak designed by CTGWO are clearly better than for GWO and amixedGWO. Values in Table <ref> confirm this impression: the protection criterion is two times smaller, while the mean criterion is also significantly smaller.
§.§ Bi-objective approach
In a second step, we used a bi-objective genetic algorithm as an optimization method.
Thus, we no longer work on convergence curves, but on Pareto fronts, to balance the two minimized criteria.
The parameter values for NSGA-II are as follows: mutation and crossover probabilities are p_m = 1/P ≃ 0.003 and p_c = 0.9;
mutation and crossover distribution indices are η_m=20 and η_c=15.
§.§.§ Bi-objective ternary optimization with 250 iterations, 100 research agents.
In this subsubsection, we display the results obtained by NSGA-II: the Pareto front in Fig. <ref>; the protection and invisibility performances in Table <ref>; the cloak design and wave propagation field in Fig. <ref>.
§.§.§ Bi-objective ternary optimization with 1000 iterations, 200 research agents.
In this subsubsection, we display the results obtained by NSGA-II: the Pareto front in Fig. <ref>; the protection and invisibility performances in Table <ref>; the cloak design and wave propagation field in Fig. <ref>.
§.§.§ Bi-objective continuous optimization with 250 iterations, 100 research agents.
In this subsubsection, we display the results obtained by NSGA-II: the Pareto front in Fig. <ref>; the protection and invisibility performances in Table <ref>; the cloak design and wave propagation field in Fig. <ref>.
§.§.§ Bi-objective continuous optimization with 1000 iterations, 200 research agents.
In this subsubsection, we display the results obtained by NSGA-II: the Pareto front in Fig. <ref>; the protection and invisibility performances in Table <ref>; the cloak design and wave propagation field in Fig. <ref>.
§.§ Spectral tolerance and comparison with random cloaks
To further verify the robustness of our approach to a change in frequency, we compute the invisibility and protection criteria for several frequency values around the central freespace wavelength λ_0. We compare the results obtained on the optimized cloaks to the results obtained with random cloaks obtained by filling each voxel of the design space by an arbitrary integer (resp. real) value in {7,10,12} (resp. [7,12]). For the central freespace wavelength λ_0: for both C1 (protection) and C2 (invisibility) these criteria are 20 times smaller for the optimized cloaks: 0.00015 vs 0.0022 for C_1; and 0.00006 vs 0.0012 for C_2.
We infer from these results that it is worth optimizing the structure of the cloaks: this good behavior at the target frequency is obtained at the expense of slightly worse performances at other frequencies, in particular for the invisibility which appears to be quite resonant (optimized cloaks lead to a better invisibility than any random realizations for λ/λ_0∈[0.95,1.03]), as opposed to the protection which turns out to be more broadband (the cloak optimized with C-NSGA-II+ leads to a better protection than any random realizations for λ/λ_0∈[0.82:1.22]). We note that the optimization method operating over a continuous space leads to a more broadband response (see red and purple curves in Fig. <ref>).
§ DISCUSSION
We distinguish two situations: the ternary case, where we afford three possible values for epsilon, and the continuous case, where we afford any real value between 7 and 12.
In the ternary case, when T_max=250 and Q=100, the mono-objective approach implemented with the proposed CTGWO yields the best result in terms of protection, that is, 1.56082 10^-4, and the best score, that is, (f_1+f_2)/2=5.12828 10^-4. Fig. <ref>(f) confirms this with a good protection behavior obtained with CTGWO. So in these conditions the proposed CTGWO algorithm yields the best trade-off between protection and invisibility.
NSGA-II yields the best result in terms of invisibility, that is, 5.36271 10^-4, at the expense of a mean value (f_1+f_2)/2=8.06398 10^-4 which is larger than the score obtained by CTGWO.
However, the interest of NSGA-II is to enable an end-user to choose to favor one criterion (for instance invisibility) rather than the other. When T_max=1000 and Q=200, affording therefore 8 times more trials of each objective function, the best protection reaches 1.23355 10^-4, with an associated mean value (f_1+f_2)/2=4.02089 10^-4.
In Figs. <ref> and <ref> we can check the coherence of the shape of the wavefront and the score values obtained.
A less realistic manner to improve the results is to enable the search for any real permittivity value between 7 and 12.
In the continuous case, still with 25000 trials of both f_1 and f_2, we reach a best protection criterion 7.84514 10^-5, a best invisibility criterion 3.27419 10^-5, and a best mean value 1.41643 10^-4. The results are improved but the corresponding physical constraints are very much strengthened, as we assume we can choose any material for any voxel in the cloak. In the continuous case, with T_max=1000 and Q=200, we reach very low best protection (6.30584 10^-5), best invisibility (2.59173 10^-5), and best mean (1.10185 10^-4) values. The visual results in the continuous case (see Figs. <ref> and <ref>) are very convincing, and clearly illustrate the difference between the best protection, the best invisibility, the best compromise, and the best protection cases.
Finally, the tolerance with respect to the operating freespace incident wavelength showed a broadband behavior in terms of protection and a rather narrowband behavior for the invisibility criterion, as shown by the comparison to random cloaks in Fig. <ref>.
§ CONCLUSION
In this work, we address the design of an electromagnetic cloak in the transverse magnetic (TM) polarization (whereby the magnetic field is perpendicular to the (xy)-plane containing the computational domain). This polarization has been chosen as the model can then be adapted to plasmonics by assuming some Drude-like dispersion in the permittivity <cit.>. More precisely, this means that σ in (<ref>) should be frequency dependent, and χ is set to 1.
However, the transverse electric polarization would also be worth investigating; in that case σ is set to 1 in (<ref>), and χ plays the role of the permittivity.
Our objective here is to achieve the best compromise between protection and invisibility for TM waves. In other words, we are looking for the best trade-off between protection, assessed through the wave amplitude inside the cloak, and invisibility, assessed through the wave behavior outside of the cloak.
This is a large-scale bi-objective optimization problem. We propose two approaches: in the first one, we transform this problem into a mono-objective optimization problem and seek the best mean value of the protection and invisibility criteria. In the second one, we look for the best protection, the best invisibility, and the best trade-off with a bi-objective optimization algorithm. GWO is a well-known mono-objective optimization algorithm which reaches the desired solution within a reduced number of iterations. We propose a novel mono-objective version of GWO, namely the chaotic ternary GWO, with three main innovations: ad hoc update rules to handle ternary search spaces, an evolving map and chaotic sequences to improve exploration abilities, and a division of the pack into two groups to improve diversity. We apply this algorithm, the comparative GWO and amixedGWO (both mono-objective), as well as NSGA-II (bi-objective), to the considered cloak design problem. In this application, with 25000 evaluations of these criteria, the proposed CTGWO algorithm yields the best mean value, that is, 5.128 × 10^-4, hence the best trade-off between protection and invisibility. It surpasses GWO (29.871 × 10^-4), amixedGWO (6.526 × 10^-4), and the best trade-off provided by NSGA-II (5.306 × 10^-4).
A possible prospect is to alter the ternary map in CTGWO to favor one particular material among the three chosen to build the cloak. This could help in implementing the cloak with a preferred material.
§ CREDIT AUTHORSHIP CONTRIBUTION STATEMENT
Ronald Aznavourian: Conceptualization, Software, Validation, Writing. Guillaume Demesy: Software, Validation, Formal analysis. Sebastien Guenneau: Conceptualization, Validation, Supervision, Final preparation.
Julien Marot: Conceptualization, Software, Editing, Writing, Supervision, Final preparation.
§ DECLARATION OF COMPETING INTEREST
No author associated with this paper has disclosed any potential or pertinent conflict which may be perceived to have impending conflict with this work.
|
http://arxiv.org/abs/2307.02662v1
|
20230705213556
|
Only Pick Once -- Multi-Object Picking Algorithms for Picking Exact Number of Objects Efficiently
|
[
"Zihe Ye",
"Yu Sun"
] |
cs.RO
|
[
"cs.RO"
] |
Only Pick Once – Multi-Object Picking Algorithms for Picking Exact Number of Objects Efficiently
Zihe Ye and Yu Sun
*This material is based upon work supported by the National Science Foundation under Grants Nos. 1812933 and 191004.
The authors are from the Robot Perception and Action Lab (RPAL) of Computer Science and Engineering Department, University of South Florida, Tampa, FL 33620, USA. Email: .
August 1, 2023
========================================================================================================================================================================================================================================================================================================================
Picking up multiple objects at once is a grasping skill that makes a human worker efficient in many domains. This paper presents a system to pick a requested number of objects by only picking once (OPO). The proposed Only-Pick-Once System (OPOS) contains several graph-based algorithms that convert the layout of objects into a graph, cluster nodes in the graph, and rank and select candidate clusters based on their topology. OPOS also has a multi-object picking predictor based on a convolutional neural network for estimating how many objects would be picked up with a given gripper location and orientation. This paper presents four evaluation metrics and three protocols to evaluate the proposed OPOS. The results show OPOS achieves very high success rates for two and three objects when only picking once. In terms of efficiency, using OPOS can outperform single-object picking by a factor of two to three. The results also show OPOS can generalize to objects of unseen size and shape.
§ INTRODUCTION
In warehouses, workers usually perform batch picking, also called multi-order picking, to improve efficiency: picking several identical objects from a bin at once to fill multiple orders. For instance, a worker could be instructed to pick four boxes of toothpaste or three jars of a cosmetic product from a bin. In manufacturing, when putting nuts on a set of screws, workers usually get several nuts by only picking once (OPO) and then put them on one by one. During food prep, after chopping vegetables into identical chunks, we usually pick multiple chunks from the cutting board at once and drop them into a pan. Figure <ref> shows four examples of multiple objects randomly lying in a bin.
People pick up several objects at once, rather than picking a single object multiple times, for the sake of efficiency. For example, if a robotic system can pick up one item and drop it into a bin in 3 seconds, then to pick up two identical items the robot needs to pick twice, taking 6 seconds. In contrast, a human worker can get the same two items by OPO in only 3 seconds. That is why human workers are much faster than robots.
Efficient and reliable robot picking systems are in urgent demand to relieve the recent labor shortage <cit.>. Numerous robot-picking systems have been developed for bin-picking, mainly applying single-object picking (SOP) strategies, since research and development have mainly focused on SOP. Several early works on grasp stability analysis provide fundamental investigations of the mechanics of holding multiple objects <cit.>. Studies on force closure and kinematics of multiple-object grasping have also been performed in <cit.>. Nevertheless, none of these works studied how to pick multiple objects.
Only lately has the concept of picking up multiple objects started to emerge, in response to the demand for robots to match human-level efficiency. Recently, researchers have started looking into how to pick multiple objects for different goals under different settings <cit.>.
This work tackles the problem of getting a requested number of identical objects from a shallow bin by OPO using a simple parallel gripper. It complements our previous work on multiple-object grasping (MOG) by OPO, which focused on piled-up objects in a bin <cit.> grasped with a dexterous robotic hand. That work cannot handle the scenario where all objects lie un-stacked on the bottom of the bin. However, the un-stacked scenario deserves attention since it is very common in e-commerce warehouses and flexible manufacturing, where stocks are kept low.
To pick up multiple objects efficiently, we develop the Only-Pick-Once system (OPOS), which models the relationship among the objects as a graph. The algorithms in OPOS first group nearby objects into local clusters based on their graph representation and rank them based on their clique orders and connections with out-clique nodes. OPOS also has a multi-object picking (MOP) module guided by an OPO predictor to estimate the outcome of picking a cluster of objects based on its layout. With it, OPOS can select the best collision-free picking pose from a few candidates calculated from the cluster's layout for a requested number of objects by only picking once. OPOS also includes algorithms that balance the computational cost and success rate for complex layouts, making the approach feasible in real applications.
We have designed four evaluation metrics and protocols to rigorously and thoroughly evaluate the proposed approach. The approach is evaluated in both a simulation environment and a real setup. The evaluation results show that our approach achieves high success rates in matching the requested number when only picking once. It leads to a significantly reduced number of picking actions in batch picking compared to the single-object picking strategy. The evaluation also demonstrates that the approach generalizes well to unseen objects with different sizes and shapes.
Overall this paper has the following contributions:
* as far as we know, this paper is the first attempt to provide a solution for picking up the requested number of objects by only picking once;
* this paper proposes a novel and comprehensive system to solve the OPO problem and achieves much better efficiency than SOP while ensuring picking accuracy;
* this paper models objects in a graph and applies several graph algorithms to analyze the object layouts for multi-object picking;
* this paper presents a cluster ranking algorithm to significantly reduce the computation cost in searching for the right cluster;
* this paper presents a MOP predictor that can estimate the picking outcomes based on the cluster layout;
* for evaluation, the paper defines novel metrics and protocols to measure the performance of OPOS;
* this paper presents a rigorous and thorough evaluation and results of the proposed approach in a simulation and the real world;
§ RELATED WORKS
§.§ Single Object Picking
Single-object bin-picking has been a topic of significant research for several decades. It usually uses a vision system to estimate an object's pose given its model or to directly find grasp points. Traditional approaches use the known object CAD model and features in 2D or 3D images to estimate the object's pose <cit.> and then transform predefined grasps from the CAD coordinate frame into the application coordinate frame <cit.>. Lately, with large labeled datasets and complicated prediction models (usually deep neural networks), researchers have developed approaches that can skip pose estimation and directly find proper grasp points from dense 3D point clouds <cit.>. A comprehensive review can be found in <cit.>.
§.§ Stability and Force Closure Analysis of Multi-Object Grasping
Static grasp stability analysis of multiple objects with a robotic hand has also been investigated. <cit.> discuss the enveloping grasp of multiple objects under rolling contacts and study force closure of multiple objects, building the theoretical basis for later work on active force-closure analysis for the manipulation of multiple objects <cit.>. <cit.> studied kinematics and internal forces during the multi-object grasping process. <cit.> discusses neighborhood equilibrium in multiple-object holding.
<cit.> try to achieve stable grasping of multiple objects through force-closure-based strategies. None of these works consider how to pick up multiple objects.
§.§ Multi-Object Picking in different settings
Recently several research groups have started to look into the multi-object grasping problem. Our previous work <cit.> has modeled the multi-object grasping as a stochastic process for picking multiple objects from a pile in a bin. It can pick up the requested number of objects from a pile with a good success rate when only picking once. Our later work has also analyzed the grasp types and taxonomy exhibited during grasping and holding multiple objects <cit.>.
For picking objects on a flat surface using a gripper, <cit.> uses a two-step dynamic programming approach to pick one or two objects depending on availability. For picking two objects, they either pick two objects at once or push-grasp two objects when possible; their goal is to minimize the movement length of the robot arm's end effector. <cit.> propose a novel grasp planner for multi-object picking that reduces the number of picking actions; their approach aims to quickly remove all objects on a table without any requirement on the number picked each time. These works have a workspace setting similar to ours, but their goal is to clear the entire workspace, while ours is to pick up the requested number of objects by only picking once.
<cit.> trains a neural network on data collected in the real world; the input to their network is the object cluster and picking pose information, while the neural network in our OPOS only uses the image of the area between the gripper fingers as its input. The generalization of our estimation model gives the robot the ability to pick up unseen objects of different sizes and shapes with high accuracy, and our training procedure can be used on different grippers.
§.§ Multi-Object Manipulation
Robot manipulation in cluttered environments has long been a research focus in robotics. <cit.> surveys past work on multiple-object manipulation, including singulation, navigation, decluttering, rearrangement, packing, placing, sorting-by-packing, and sorting-by-clustering.
§ PROBLEM SETTINGS
The objective is to develop a robotic solution for picking up multiple identical objects from a shallow bin by only picking once. The targeted scenario is a shallow bin with several identical objects lying loosely in it without stacking, a common setup in many warehouses. The size and location of the bin are known to the robotic system and lie within its workspace. The shapes and sizes of the objects can differ across setups; they are also known to the robotic system and compatible with the gripper's limitations. In this paper, we assume the objects have regular shapes, such as cubes, cuboids, cylinders, and hexagons.
There could be multiple goals when picking up numerous identical objects, such as picking up as many as possible, picking up the requested number, and picking up the requested number as efficiently as possible. This paper focuses on developing a solution for picking up the requested number of objects by only picking once. This paper also explores the efficiency benefit the OPOS could bring to batch picking.
§ METHODOLOGY
§.§ Overview
The proposed OPOS uses a simple hand-eye robotic system. Figure <ref> shows our robotic system in the real setup and in CoppeliaSim. The system has an RGBD vision sensor (Intel RealSense Depth Camera D435), a 6-axis robotic arm (UR5), and a simple “bang-bang” parallel gripper, since logistics environments usually prefer low-cost pneumatic grippers. We use a Robotiq 2F-85 to emulate a “bang-bang” gripper, allowing only full close and full open.
Figure <ref> provides an overview of the proposed OPOS. The inputs to the system are a single RGBD image - the top view of the bin - and the desired number k of objects to be picked up. We assume k is smaller than the maximum preset number m, set based on the capability of the gripper. OPOS processes the image through seven modules and outputs a 2-DOF planar position (x, y) and an in-plane rotation γ to the robotic arm for picking.
Module 1 of OPOS is called neighbor graph generation. In this module, an algorithm processes the RGBD image, extracts the objects from the image, obtains their locations, and generates a neighbor graph based on their relative positions.
Module 2 is called clustering, in which clusters of k to m objects in the neighbor graph are identified. In module 3, cluster ranking, clusters are ranked based on a set of rules and stored in the ranked cluster list according to their ranking.
The first cluster in the ranked cluster list will be checked through the following three modules. If it is eliminated, the next cluster in the list becomes the top-ranked cluster and will be checked until the list runs out of clusters.
In module 4, picking pose proposal, a sampling algorithm proposes several picking positions and orientations for the top-ranked cluster.
In module 5, collision checking, the proposed picking poses are checked for collision. The collision-free poses are kept in a picking pose list. If there is no collision-free pose, the cluster is eliminated.
In module 6, picking confidence estimation, a trained neural network, the multi-object picking predictor, takes the local scene between the gripper fingers for a proposed pose in the picking pose list and estimates the confidence of picking up zero, one, two, and up to m objects with that picking pose.
In module 7, picking pose selection, the confidences of picking k objects for the various picking poses and clusters are compared with a pre-defined threshold and among themselves. The optimal picking pose is selected for execution.
Sections <ref> to <ref> provide detailed descriptions of the algorithms in the seven modules.
§.§ Neighbor Graph Generation
To pick up more than one object, we would need to inspect the relationships among objects and connect the ones the gripper could pick up together. Therefore, in the neighbor graph generation module, our algorithm processes the input RGBD image, localizes the objects in the bin and generates a neighbor graph of objects based on their relative locations. So, the algorithm for this module has two parts: object center detection and graph generation.
An RGBD camera right above the bin takes an image of the bin with the objects inside. Our algorithm first segments objects in the bin from the background, detects their contours, and then estimates their centers. The algorithm then treats each object as a node and connects it with its neighbor nodes within a predefined distance. The neighborhood distance threshold H_d is defined based on the gripper specifications (the selection of H_d is described in Section <ref>). Figure <ref> illustrates the two parts of the process using an example.
The final neighbor graph is an undirected weighted graph G = {N, E}. The node set N contains the object indices and their location information. The edge set E contains the connections and distance values. Figure <ref> shows two more example neighbor graphs. In this module, the neighbor graph is generated, and the image of the bin area is normalized so that each pixel represents a 1 mm × 1 mm area in real space for the next modules.
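As an illustrative sketch of the graph construction (assuming object centers have already been extracted from the contours; names and values are placeholders, not the exact implementation):

import itertools
import math
import networkx as nx

def build_neighbor_graph(centers, h_d):
    """Build an undirected weighted neighbor graph from object centers.

    centers : list of (x, y) object centers in millimetres
    h_d     : neighbor distance threshold (gripper dependent)
    """
    G = nx.Graph()
    for idx, (x, y) in enumerate(centers):
        G.add_node(idx, pos=(x, y))
    for i, j in itertools.combinations(range(len(centers)), 2):
        d = math.dist(centers[i], centers[j])
        if d <= h_d:
            G.add_edge(i, j, weight=d)  # the edge weight stores the center distance
    return G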
§.§ Clustering
To pick up k objects, we need to identify clusters with at least k objects, which is equivalent to finding a k-clique in the neighbor graph. A k-clique is a complete subgraph in which all k nodes are fully connected. Therefore, our goal is to identify all cliques of order k or higher in the neighbor graph.
We use the algorithm in the NetworkX library <cit.>, proposed in Zhang et al. <cit.>, to find all cliques, from one-node cliques to the maximum clique, in the neighbor graph. The cliques with orders lower than k are discarded, and the remaining ones are saved in the initial cluster list (ICL).
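With NetworkX, gathering the initial cluster list amounts to a one-line filter over the enumerated cliques (a sketch; G is the neighbor graph from above):

import networkx as nx

def initial_cluster_list(G, k):
    """Return all cliques of order >= k as the initial cluster list (ICL)."""
    return [clique for clique in nx.enumerate_all_cliques(G) if len(clique) >= k]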
The clique requirement is relatively loose; not every cluster in ICL can fit in the open gripper. First, we calculate the effective gripping area of the open gripper. The width of the effective gripping area is defined as the gripper's open spread (the distance between the gripper fingers). The length of the effective gripping area is defined as the length of the gripper finger plus one target object length (counting half a target object length at both ends of the gripper finger). It is a generous definition, since when the gripper closes, the object will usually slide out of the gripper if only a small portion of the object is between the gripper fingers at the beginning. Therefore, if the center of an object is not in the effective gripping area, the object cannot be picked up by the gripper.
Our algorithm (Algorithm <ref>) therefore checks whether all objects of each cluster in ICL can fit in the effective gripping area. For each cluster, it first calculates the convex hull of all objects in the cluster, then uses the convex hull points to calculate the minimal-area rectangle enclosing the entire convex hull <cit.>. We call this rectangle the cluster rectangle. The Min_Area_Rec subroutine takes in a cluster and outputs the cluster rectangle. The Rec_in_Rec subroutine takes in two rectangles and checks if the first rectangle can fit in the second.
To test if the cluster rectangle can fit in the effective gripper area, we use the well-known rec-in-rec theory <cit.>. According to the rec-in-rec theory, as illustrated in Figure <ref>(A), the yellow rectangle (p × q area, q ≤ p) can fit in the red rectangle (a × b area, b ≤ a) if and only if either of the following conditions is true:
(a) p ≤ a and q ≤ b
(b) p > a, q ≤ b, and ((a+b)/(p+q))^2 + ((a-b)/(p-q))^2 ≥ 2
In our problem, the cluster rectangle is the yellow rectangle, while the effective gripper area is the red rectangle. After obtaining their widths and heights, we check whether the two sets of widths and heights satisfy either of the conditions. If yes, the cluster is kept in the cluster list (CL); otherwise, it is eliminated. Figure <ref>(B) shows one example of condition (a), while Figure <ref>(C) shows one example of condition (b).
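A sketch of the fit test (assuming the cluster rectangle has already been obtained, e.g. from a minimal-area rectangle on the convex-hull points; function names are illustrative):

def rec_in_rec(p, q, a, b):
    """Check whether a p x q rectangle fits inside an a x b rectangle (rec-in-rec conditions)."""
    p, q = max(p, q), min(p, q)          # enforce q <= p
    a, b = max(a, b), min(a, b)          # enforce b <= a
    if p <= a and q <= b:                # condition (a): fits axis-aligned
        return True
    if p > a and q <= b:                 # condition (b): may fit only when tilted
        return ((a + b) / (p + q)) ** 2 + ((a - b) / (p - q)) ** 2 >= 2
    return False

def cluster_fits_gripper(cluster_rect_wh, ega_wh):
    """cluster_rect_wh: (w, h) of the minimal-area cluster rectangle;
    ega_wh: (spread, length) of the effective gripping area."""
    return rec_in_rec(*cluster_rect_wh, *ega_wh)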
§.§ Cluster Ranking
To quickly find a suitable cluster among all clusters in CL for picking up k objects, we have developed an algorithm (Algorithm <ref>) that ranks them based on their clique orders and the likelihood of finding a collision-free picking pose. The Weight_Sum subroutine calculates the external connection weight sum for every cluster, and the SORT subroutine ranks CL based on these weights; a cluster with a higher weight is ranked lower, since it tends to be closer to external objects. First, the algorithm ranks all clusters in CL based on their clique orders. Since the goal is to pick up k objects, the clusters with clique order k rank highest, followed by clusters with clique order k+1, and so on.
The likelihood of finding a collision-free picking pose can be associated with how isolated the cluster is from other objects in the bin. If a cluster is more isolated than another cluster, it is more likely we can find a picking pose from where the arm can lower the gripper to the bottom of the bin collision-free.
We considered using an existing clique isolation factor proposed in Ito et al. <cit.>. It counts the number of external connections to a clique and divides it by the clique order. We found it does not fit our problem well since it does not consider how close the connected nodes are, which is an important factor in gauging the risk of collision. Therefore, we define an external crowd index that puts more weight on external objects close to the clique. The weights w_i are calculated based on Equation <ref>.
Δ l = (H_d - width)/5
w_i = 5 - round((d_i - width)/Δ l)
wif = ∑ w_i
where H_d is the neighbor distance threshold and width is the effective width of the object. For each edge connected to the clique, its length d_i is converted to a weight between 1 and 5, which decreases as the edge length grows. The total weight of all edges connected to the clique is its external crowd index. Since it is only used for ranking, normalizing it is unnecessary.
Figure <ref> illustrates how the external crowd index calculation uses the edge distances. Both examples Figure <ref> have two 3-clusters; their objects are marked as blue, and the neighbor objects to the cluster are marked as yellow.
The cluster in Figure <ref> (A) has four edges connecting to three external objects. Based on their lengths, their weights are 5, 4, 4, and 1. So, its external crowd index is 14.
The cluster in Figure <ref> (B) has six edges connecting to four external objects. Based on their lengths, their weights are all 1's. So, its external crowd index is 6.
We can see that even though the cluster in Figure <ref> (B) has four external neighbors and its isolation indicator is worse than the cluster in Figure <ref> (A), its external crowd index is less. It is consistent with our intuition that picking the cluster in Figure <ref> (A) has a higher risk of collision than picking the cluster in Figure <ref> (B). So we rank the cluster in Figure <ref> (B) higher than in Figure <ref> (A).
The cliques in CL are ranked by clique order first and then by crowd index to break ties. The cluster with k nodes and the smallest crowd index ranks at the top.
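The ranking step can be sketched as follows (assuming edge weights store distances as in the neighbor graph above; the clamping of the weight to the stated 1-5 range is our assumption):

def external_crowd_index(G, clique, h_d, width):
    """Sum of distance-based weights of edges leaving the clique."""
    delta_l = (h_d - width) / 5.0
    clique_set = set(clique)
    wif = 0
    for node in clique:
        for neighbor in G.neighbors(node):
            if neighbor not in clique_set:
                d = G[node][neighbor]["weight"]
                w = 5 - round((d - width) / delta_l)
                wif += min(max(w, 1), 5)  # clamp to the stated 1-5 range (assumption)
    return wif

def rank_clusters(cluster_list, G, h_d, width):
    """Rank by clique order first (smallest order >= k ranks highest),
    then by external crowd index (smaller = more isolated = higher rank)."""
    return sorted(cluster_list,
                  key=lambda c: (len(c), external_crowd_index(G, c, h_d, width)))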
§.§ Picking Pose Proposal
Once a candidate cluster is selected based on the ranking, our approach will process the layout of the objects in the cluster and then propose several gripper-picking poses that could pick up k objects. We want the approach to propose several poses instead of just one because many of them may not pass the collision check in the next module.
Since the objects are identical and lying un-stacked in the bin, the picking height is fixed and can be calculated. Our approach therefore focuses on obtaining the in-plane picking position and orientation, denoted x, y, and γ. We compute the cluster center (c_x, c_y) in the world coordinate system and define γ=0^∘ when the gripper's x_g axis is aligned with the world x_w axis. The definitions of the world and gripper axes are shown in Figure <ref>.
To propose poses, we first sample 12 γ's from 0^∘ to 165^∘ (the gripper is symmetric), since we found the outcomes of two picking poses are similar if they differ by less than 15^∘.
To sample the positions at each γ_i, we rotate the gripper by γ relative to the world coordinate system and sample along the rotated gripper axes x_g and y_g. The sampling ranges along x_g and y_g are computed to ensure the pose's effective gripping area still encloses the objects' convex hull, as shown in Figure <ref>. We found that overly fine sampling increases computation costs and generates many poses that produce the same outcome. Therefore, we sample 10 steps from the center toward each of the left/right/up/down range limits if the ranges are larger than 20 mm. Otherwise, we use 2 mm steps, since poses differing by less than 2 mm would produce very similar outcomes.
Algorithm <ref> describes the procedure to sample picking poses on a given cluster for the 12 rotation angles. For each rotation, we first get the four boundaries of the EGA required to cover the cluster using the GetBound subroutine. Then, we check whether the bound in the current direction can cover the cluster. If it can, we use the GetStepSize subroutine to calculate the x- and y-direction step sizes according to the procedure illustrated above. The sampled picking poses (x, y, γ) are appended to a list.
The samples along x_g and y_g are then converted back to the world coordinate system. With γ, they are associated with the cluster as its picking-pose proposals.
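A simplified sketch of the sampler (the 12 orientations and the 10-step / 2 mm rules follow the description above; get_bounds is an assumed helper returning the admissible offsets along the rotated gripper axes, or None if the EGA cannot cover the cluster at that angle):

import numpy as np

def _axis_samples(lo, hi):
    """10 steps from the center outwards if the range exceeds 20 mm, otherwise 2 mm steps."""
    span = hi - lo
    if span <= 0:
        return np.array([lo])
    step = span / 20.0 if span > 20.0 else 2.0
    return np.arange(lo, hi + 1e-9, step)

def sample_picking_poses(cluster_center, get_bounds):
    """Sample (x, y, gamma) picking poses for one cluster."""
    poses = []
    cx, cy = cluster_center
    for gamma in range(0, 180, 15):              # 12 orientations, gripper is symmetric
        bounds = get_bounds(gamma)
        if bounds is None:
            continue
        x_min, x_max, y_min, y_max = bounds
        theta = np.deg2rad(gamma)
        for dx in _axis_samples(x_min, x_max):
            for dy in _axis_samples(y_min, y_max):
                # convert offsets along the rotated gripper axes back to world coordinates
                wx = cx + dx * np.cos(theta) - dy * np.sin(theta)
                wy = cy + dx * np.sin(theta) + dy * np.cos(theta)
                poses.append((wx, wy, gamma))
    return poses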
§.§ Picking Pose Collision Checking
We should not select a picking pose that would lead to a collision between the open gripper and the objects. We assume the workspace of the gripper has been confined based on the bin's location and geometry. Our algorithm therefore checks collisions in the projected 2D plane, since the objects are identical and lying un-stacked. Figure <ref> illustrates three typical collisions and a collision-free layout: (A)-(C) are collision examples with an internal object, the bin, and an external object, respectively, while (D) is a collision-free example. Any proposed picking pose leading to a collision is removed. At this point, we have a cluster list in which each cluster has a list of collision-free picking poses.
§.§ Picking-Confidence Estimation
Using a proposed picking pose, we can obtain a local image of its effective gripping area (defined in Section <ref>). We call it the gripping area image. We use the pattern in the gripping area image to predict how many objects the gripper will pick up when only picking once. To learn these patterns, we have developed a deep neural network structure that uses MobileNet-V2 <cit.> for feature extraction and a fully connected (FC) ReLU layer combined with a softmax output layer as a classifier. For efficient training, we add a batch normalization (BN) layer between the MobileNet-V2 and the ReLU layer, as shown in Figure <ref>. The output of the network gives the confidences of picking 0 to m objects. We call this neural network the multi-object picking predictor.
For each cluster, we input the gripping area images generated from all the proposed picking poses to the multi-object picking predictor in a batch and receive their confidences of picking 0 to m objects. If the desired number of objects is k, the picking poses whose highest confidence falls on the k-th output (i.e., that are predicted to pick k objects) are kept; the rest are discarded. If all proposed poses of a cluster are discarded, the cluster is removed from the CL.
Data collection and training
We train two MOP predictor models: one for the short gripper and one for the long gripper. The short-gripper MOP predictor model is trained with the images of 92,433 random layouts of two small objects: a 1-inch cube and a 2.8-cm cylinder. The long-gripper MOP predictor model is trained with the images of 96,836 random layouts of three large objects: a 2-inch cube, a 3.8-cm cylinder, and a cuboid. The full list of the objects used in training and testing is in Table <ref>.
Each layout has 2 to m objects randomly placed in the effective gripping area. The labels are obtained from the picking outcomes in simulation.
The MobileNet-V2 is pre-trained on ImageNet. To train the rest of the neural network, we adopt the Adam optimizer with a fixed learning rate of 1e-4 and categorical cross-entropy as the loss function. We set the dropout rate to 0.3 for the fully connected layer.
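A minimal sketch of the predictor in Keras (the framework, input resolution, and FC width are our illustrative choices; the ImageNet-pretrained MobileNet-V2 backbone, BN layer, FC ReLU layer with dropout 0.3, softmax head, Adam at 1e-4, and categorical cross-entropy follow the description above):

import tensorflow as tf
from tensorflow.keras import layers, models

def build_mop_predictor(input_shape=(224, 224, 3), m=4, fc_units=256):
    """MOP predictor: confidences of picking 0..m objects from a gripping-area image."""
    backbone = tf.keras.applications.MobileNetV2(
        include_top=False, weights="imagenet",
        input_shape=input_shape, pooling="avg")   # ImageNet-pretrained feature extractor
    model = models.Sequential([
        backbone,
        layers.BatchNormalization(),
        layers.Dense(fc_units, activation="relu"),
        layers.Dropout(0.3),
        layers.Dense(m + 1, activation="softmax"),  # classes: 0, 1, ..., m objects picked
    ])
    model.compile(optimizer=tf.keras.optimizers.Adam(learning_rate=1e-4),
                  loss="categorical_crossentropy",
                  metrics=["accuracy"])
    return model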
§.§ Selecting pose with high picking confidence
So far, our approach processes the object layout in the bin and produces a list (CL) of ranked clusters with their proposed picking poses and the associated confidences of picking k objects. We could simply go through all clusters in CL and their proposed picking poses and execute the pose with the highest confidence of picking k objects. However, this is a brute-force approach and could take too much time when there are many clusters in a large bin.
Algorithm <ref> shows the procedure to find an action based on the target number and the confidence threshold. The SamplePose subroutine calculates sampled poses for one cluster. The CheckCollision subroutine filters out poses that collide with objects or the bin. The GetLocalImage subroutine gets the corresponding local image for each picking pose. The Max_Conf subroutine returns the maximum confidence among all local images that are predicted to pick k objects, and index is the index of the corresponding pose. If no pose is predicted to pick k objects, the maximum confidence is set to 0. If the maximum confidence for one cluster to pick k objects is above H_c, the corresponding picking pose is returned and executed. If it is below H_c, the action is stored as a backup in case no picking pose's confidence exceeds the threshold by the end. If all clusters are checked and no action above the threshold is found to pick k objects, the action with the highest confidence in the backup list is selected and passed to the robot.
Since efficiency is the goal, we set a good-enough confidence threshold H_c and start checking from the highest-ranked cluster in CL. The selection of H_c is described in Section <ref>. When we find a collision-free picking pose whose confidence of picking k objects exceeds H_c, we send it to the robot for execution. This way, we do not need to run the modules from Picking Pose Proposal to Picking-Confidence Estimation for all clusters in CL. However, for some difficult layouts, it is possible to run through all clusters in CL without finding a pose with confidence over H_c. The algorithm then falls back to the brute-force approach.
If CL is empty, because either no cluster has been found or all clusters have been eliminated in the process, the system reports failure and rejects the request of picking k objects with one pick.
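The early-stopping search can be summarized as follows (a sketch mirroring the procedure above; sample_poses, collision_free, local_image, and predictor stand for the modules already described):

def select_picking_pose(ranked_clusters, k, h_c, sample_poses, collision_free,
                        local_image, predictor):
    """Return the first collision-free pose whose confidence of picking k exceeds H_c;
    fall back to the best pose seen if no pose clears the threshold."""
    best_pose, best_conf = None, 0.0
    for cluster in ranked_clusters:
        poses = [p for p in sample_poses(cluster) if collision_free(p)]
        if not poses:
            continue                                 # cluster eliminated
        for pose in poses:
            conf = predictor(local_image(pose))[k]   # confidence of the k-objects class
            if conf > best_conf:
                best_pose, best_conf = pose, conf
            if conf >= h_c:
                return pose                          # good-enough pose: stop early
    return best_pose                                 # may be None if CL was empty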
§.§ Parameter Calculation and Selection
§.§.§ Neighbor Distance Threshold H_d
If two objects are too far apart, they cannot be picked together by the gripper when only picking once. The farthest distance between two objects that can still be picked together is defined as the neighbor distance threshold H_d. When the centers of two objects are at diagonally opposite corners of the open gripper, they are farthest apart while still being possible to pick up together, as shown in Figure <ref>. The threshold H_d is computed based on the gripper and object sizes. Figures <ref> (A) and <ref> (B) have thresholds of 9.5 cm and 9.4 cm, respectively.
§.§.§ Good-Enough Confidence Threshold H_c
The good-enough confidence threshold H_c is introduced to stop the search for the cluster and picking pose with the highest confidence of picking k objects early. It should be set to limit the computation cost without sacrificing much of the success rate. If we set H_c too low, the search stops very early, a low-confidence picking pose is used, and the execution has a lower chance of picking up k objects. On the other hand, if we set H_c too high, the search skips many good picking poses and does not stop until it has gone through all clusters and proposed poses.
To select a proper H_c, we designed an experiment to obtain the success-rate and number-of-clusters (SRNC) curve for picking three 1-inch cubes in simulation. The curve is plotted with seven thresholds: 0.7, 0.75, 0.8, 0.85, 0.9, 0.95, and 1 (no early stopping). As shown in Figure <ref>, when the threshold increases, the overall success rate improves while the computation cost also increases. We select 0.9 as our threshold because its success rate is close to that without early stopping, while the averaged computation time is more than 70% lower. The 0.9 threshold was also tested on the cylinder shape and other targets in simulation and achieved similar results, balancing well between success rate and the number of clusters to inspect.
We have also plotted the SRNC curves for picking two, three, and four objects. Setting H_c to 0.9 works for all those targets. Therefore, we set H_c to 0.9 for all objects and targets.
§ EXPERIMENTS AND EVALUATION
§.§ Setup
Figure <ref> shows the setting of our target objects and the grippers. Figure <ref> (A) shows four types of objects used for evaluation and their corresponding size metrics. Figure <ref> (B) shows the target objects used to evaluate our algorithms in the real-world setup. We divide all objects into two sets based on their sizes and design grippers for them accordingly. For the three objects in the upper part of Figure <ref> (B), we design a gripper with 7.5 cm length and 8.4 cm spread. The length is around three times the side length of the 1-inch wood cube, and this gripper is shown in the left half of Figure <ref> (C). For the three objects in the lower part of Figure <ref> (B), we design a gripper with 15.0 cm length and 8.4 cm spread. The length is around three times the side length of the 2-inch gift box. This paper does not focus on designing grippers, but our procedure can be transferred to other objects and grippers of different sizes.
The base size of the bin used in the real setup is 30.5 cm × 38.1 cm, the size of a standard storage bin. We select a bin with a short height for better visualization. In the simulation, we use a 38.0 cm × 38.0 cm bin. The bin height is arbitrary, as there is collision checking with the container boundary. The two setups have different bin sizes, and our algorithms work equally well on both; they are not sensitive to the bin size.
Since one important target of OPOS is logistics applications, we select three shapes common in warehouses: cube, cylinder, and cuboid, which are typical packaging or container shapes. In the real setup, we therefore use cardboard packaging boxes, small- and medium-size cosmetic jars, toothpaste boxes, and wood cubes. We then select hexagonal nuts, commonly used in manufacturing, to evaluate the generalization capability of the proposed approach. The sizes of the objects in the simulation are designed to match the real ones.
Table <ref> lists all shape types, their indices, and the gripper used for picking.
Their dimension parameters are defined in Figure <ref> (A) and the dimension specifications are shown in Table <ref>. Bold font in Table <ref> indicates the object is not in the training set, and * indicates the object is also tested in the real setup. Overall, we have evaluated the proposed approach on 12 different objects in simulation and six objects in the real setup. Seven of them are not in the training set, and none of the hexagon objects (nuts) are in the training set.
Figure <ref> shows examples of multi-object grasping of 6 different objects in the real setup.
§.§ Evaluation Metrics and Protocols
Since the goal is to pick up k objects efficiently and accurately for arbitrary scenes, we define the following evaluation metrics:
* Availability rate (AR): among all arbitrary scenes, the percentage of scenes for which OPOS believes it could pick up k objects at once;
* Execution success rate (ESR): among all available scenes for picking k objects, the percentage of scenes for which OPOS actually picks up exactly k objects;
* Overall success rate (OSR): among all arbitrary scenes, the percentage of scenes for which OPOS picks up k objects with one picking motion; OSR is the product of AR and ESR: OSR = AR × ESR;
* Number of picking motions (NP): the number of picking motions needed to pick up exactly k objects if not required to only pick once.
The difference between OSR and ESR lies in the scenes for which our approach fails to produce any picking pose. This can happen when the algorithm cannot find a cluster containing k or more objects, for example when all objects are sparsely scattered in the bin and are all farther apart than the gripper length. Since this paper does not consider pushing motions to gather the objects, our approach would not produce any picking pose in this case. It can also happen when the algorithm cannot find a collision-free picking pose because there are too many objects in the bin and the gripper is large.
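Assuming each trial is logged as a pair (found_pose, n_picked), the first three metrics reduce to simple ratios (a bookkeeping sketch; field names are illustrative):

def summarize_metrics(trials, k):
    """trials: list of (found_pose: bool, n_picked: int) over arbitrary scenes."""
    n_scenes = len(trials)
    available = [t for t in trials if t[0]]          # scenes with a proposed pick
    exact = [t for t in available if t[1] == k]      # scenes picking exactly k objects
    ar = len(available) / n_scenes
    esr = len(exact) / len(available) if available else 0.0
    osr = ar * esr                                   # equivalently len(exact) / n_scenes
    return ar, esr, osr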
The number of picking motions (NP) provides a direct measure of the efficiency of the approach. The NP of a traditional single-object picking (SOP) approach would be equal to or above k. The proposed MOP approach can pick up k objects with one picking motion for some scenes, but may need several picking motions to recover from failures so that the total number of picked objects is exactly k. If the proposed MOP has an average NP lower than k, it is considered more efficient than SOP.
To rigorously evaluate the proposed approach, we design three evaluation protocols. In the simulation, to obtain reliable OSR and ESR values in different situations, we follow this protocol:
* Train MOP predictors. We train two MOP predictors as described in Section <ref>.
* Create random layouts. For each small object in testing, we randomly create 1,000 layouts in five densities - having 20, 25, 30, 35, and 40 objects in the bin. For the median object (3.8 cm cylinder) in testing, we also randomly create 1,000 layouts in five densities - with 10, 15, 20, 25, and 30 objects in the bin. We use the testing layouts collected when training the long-gripper MOP predictor for the large objects.
* Run MOP algorithms. We run the proposed MOP algorithms on the test layouts given a desired number of objects. The number ranges from 2 to the max number indicated in the column “Max Num” in Table <ref>. The max numbers are selected based on how many objects the gripper can pick up in reality.
* Collect evaluation data. For each run, we observe and record: the desired number of objects (target), the object index, the number of objects in the bin, if the algorithms find a picking pose, and how many are actually picked up.
In the real environment, we directly use the MOP predictors trained in simulation, without any transfer learning in the real setup, so this protocol does not have the "Train MOP predictors" step. The number of layouts is 80 for each object and each density. The rest is the same as in the simulation.
To evaluate the generalization capability, we designed a protocol to test hexagons in both simulation and the real setup. We use the short-gripper MOP predictor trained in simulation on other shapes. The rest of the protocol is the same as the simulation and real protocols above.
§.§ Success Rate Results
Tables <ref> and <ref> show the AR, ESR, and OSR results in simulation and the real world, respectively.
The number in the Obj Index column is the density of objects in the bin before picking. Both tables show results on three objects seen in the training stage: cube_m_s, cylin_m_s, and cylin_l.
Table <ref> shows the AR, ESR, and OSR results for the three objects above in simulation.
It shows that OSR and ESR for all three objects are equal to or higher than 96.50% when target#=2. When target#=3, OSR and ESR for cube_m_s, cylin_m_s, and cylin_l are 57.00% and 74.03%, 89.50% and 94.31%, and 77.00% and 96.86%, respectively. When target#=4, OSR and ESR for cube_m_s and cylin_l are 6.50% and 61.90%, and 8.00% and 94.12%; the result for cylin_m_s is NA as the Max Num for this object is 3. The AR values of cube_m_s for target#=2, 3, and 4 are 100.00%, 77.00%, and 10.50%. A clear trend is that for each setting (identical object, identical density), the AR value decreases significantly as the target number k increases.
Table <ref> shows the real-world results: OSR and ESR for cube_m_s, cylin_m_s, and cylin_l are 93.75% and 96.15%, 92.50% and 94.87%, and 81.25% and 97.01%, respectively, when target#=2. When target#=3, OSR and ESR for cube_m_s, cylin_m_s, and cylin_l are 51.25% and 77.36%, 86.25% and 95.83%, and 40.00% and 91.43%. When target#=4, OSR and ESR for cube_m_s and cylin_l are 12.50% and 76.92%, and 10.00% and 88.89%; the result for cylin_m_s is NA as the Max Num for this object is 3. The pattern of AR for different target numbers is identical to the simulation result.
The results from the two tables indicate the following:
* OSR and ESR are high for all objects in both the simulation and the real setup when target#=2.
* When target#=3 or 4, both OSR and ESR decrease for all objects, and the gap between the two values widens because AR decreases as the target number increases; AR decreases because there are more unavailable cases without a solution.
* For the same setting, ESR decreases for larger target numbers because of randomness and the accuracy of the picking-number estimator.
* Under the same setup (identical object, identical density), it is harder to find an available cluster for a larger target number, which is one reason for the low AR and, in turn, the bigger difference between OSR and ESR. For instance, the OSR for cube_m_s in Table <ref> is 97.50%, 57.00%, and 6.50% for target#=2, 3, and 4 under the same density of 20. The corresponding ESR values are 97.50%, 74.03%, and 61.90%. The drop in OSR is partly because there are fewer 3- and 4-object clusters at a density of 20 objects.
* Another reason for lower OSR at larger target numbers is more collisions when trying to pick a higher number of objects. We analyze how density affects OSR in Section <ref>.
* Furthermore, for each setting (same object, same density), ESR decreases when the target number increases, which is another reason for the decrease in OSR. ESR decreases as the target number increases because there is more randomness when picking a higher number of objects, so the predictor is less accurate for higher-number picks.
Since it is difficult to loosely fit many large objects (cube_l and cuboid_l) in the bin, we use the test layouts of the longer-gripper MOP predictor for evaluation. Figures <ref> (A) and (B) show the confusion matrices of the testing set in simulation for cube_l and cuboid_l; Figures <ref> (C) and (D) show the confusion matrices in the real setting for cube_l and cuboid_l. The overall MOP predictor success rates for cube_l and cuboid_l are 96.14% and 97.74% in simulation, and 97.00% and 97.00% in the real setup.
§.§ Efficiency Evaluation
To compute the number of picking motions (NP), we define a hypothetical picking and transferring procedure that handles picking failures to make sure the system picks and transfers exactly k objects. We assume k can be 2, 3, or 4. The procedure runs the proposed approach to search for a picking pose for p=k. If it cannot find a pose, it searches for a picking pose for p=k-1, and so on, until p=1. If the approach finds a pose, the pose is executed, and the outcome is either successful or not. If it is successful, the procedure picks the remaining k-p objects using single-object picking (SOP). Otherwise, the procedure handles two kinds of failures in the following ways:
* Failure type 1 (FT1)- the number of the picked objects q is smaller than k (including nothing is picked), the procedure runs SOP to pick the remaining k-q objects.
* Failure type 2 (FT2)- the number of the picked objects q is larger than k, the procedure runs SOP to pick up q-k objects from the receiving bin.
So, for the cases where p=k and the pick is successful, the number of picking motions is 1. For cases with FT1, the number of picking motions is 1+k-q, and for cases with FT2, it is 1+q-k. We assume SOP has a 100% success rate. Using the statistics obtained for OSR and ESR, we can compute the hypothetical average number of picking motions to compare the efficiencies of MOP and SOP. Table <ref> shows the results of retrieving k objects for cube_m_s and cylin_m_s in the real world and in simulation. We use a constant density of 15 in real testing for both objects and five densities (from 20 to 40 with a step size of 5) in simulation to investigate how the density of objects affects the result.
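Under the assumptions above (SOP always succeeds; one MOP attempt followed by SOP recovery), the number of picking motions for one scene reduces to the following (a sketch; the p=0 branch is our label for scenes with no MOP pose at all):

def picking_motions(k, p, q):
    """Picking motions needed to transfer exactly k objects.

    p : number the MOP attempt targeted (p <= k); 0 if no pose was found
    q : number of objects actually picked by the MOP attempt
    """
    if p == 0:                 # no MOP pose found: fall back to single-object picking
        return k
    if q == k:                 # exact success in one pick
        return 1
    if q < k:                  # failure type 1: SOP the remaining k - q objects
        return 1 + (k - q)
    return 1 + (q - k)         # failure type 2: SOP q - k objects back from the receiving bin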
Overall, if the task is to pick and transfer two objects, on average it takes our approach less than 1.1 picking motions to pick and transfer exactly two objects, while SOP would need 2. If the task is to pick and transfer three 1-inch cubes, on average it takes our approach less than 1.7 picking motions, while SOP would need 3. If the task is to pick and transfer four 1-inch cubes, on average it takes our approach less than 2.6 picking motions, while SOP would need 4. For the small cosmetic jar (cylin_m_s), when asked to pick and transfer exactly three jars, it takes our approach, on average, 1.085 to 1.150 picking motions, while SOP would need 3. This shows that, because of the capacity of the short gripper, the best strategy is to pick either 2 or 3 objects at a time if the desired number k is over 3.
We can also see a trend: as the density increases, the average number of picking trials decreases. This is because low-density layouts are less likely to provide a decent number of clusters. We further study the effect of density on the success rates next.
§.§ Study on Density Effect
As shown in the previous tables, the number of objects inside the picking bin can greatly affect the picking result and accuracy, especially when the target number is higher. Therefore, we conducted simulation experiments to explore how different densities affect picking results. The study was done on all 12 objects, and the density effects are similar for all of them. So, here we show the results on cube_m_s as an example in Table <ref>.
When the target picking number is 2, there is no significant difference in OSR or ESR among different object densities; this is because AR is very high, since finding a pose to pick 2 objects is easy. The lowest OSR and ESR are both 96.00%, and the highest OSR and ESR are both 97.50%.
When the target number is 3 or 4, AR strictly increases as the density increases; this causes OSR to increase as well, since ESR shows no clear trend across densities. This is because when more objects are in the bin, there are more candidate clusters to select from.
Overall, since high density provides more clusters, the proposed approach is more likely to find a suitable cluster, especially for picking 3 and 4 objects. Density does not affect picking two objects much, because clusters of two are common and a picking pose is easier to find than for higher-number picking.
ESR is not associated with the density because our MOP predictors have consistent accuracy across different densities.
§.§ Ablation Study
The proposed approach (PA) could be simplified by eliminating the cluster ranking module and the confidence threshold when searching for the picking pose. To demonstrate the benefit of including them, we define three baseline approaches. Baseline #1 (B-1) does not contain the cluster ranking module but uses the good-enough confidence threshold for early stopping. Baseline #2 (B-2) has the cluster ranking module but no confidence threshold; it returns the first action found that is predicted to pick k objects according to the MOP predictor. Baseline #3 (B-3) does not contain the cluster ranking module and exhaustively searches through all clusters.
We performed three ablation studies in simulation on cube_m_s at five different densities with a target picking number of 3. The study results are shown in Table <ref>, with values separated by /: the first value is the number of clusters searched before finding the action, the second is OSR, and the third is ESR. All numbers are averages over 200 random scenes for a given density. The different algorithms are tested on the same series of random scenes.
The table shows the results for the five densities and their average.
When the density is 20, PA searches 2.76 clusters on average to find an action to pick three cubes, which is 31.4% fewer than B-1 and 20.0% more than B-2. As for the picking success rates, PA achieves 57.00% OSR and 74.03% ESR, which are 5.50% and 7.15% higher than B-1, and 1.00% and 1.30% higher than B-2. B-3 performs the worst in both cluster-searching efficiency and success rate. As the object density increases, the difference between PA and the algorithms without ranking becomes larger, since the layout becomes much more complicated.
On average over the five densities, PA reduces the number of searched clusters by 41.95% compared to B-1, and achieves 6.10% and 7.00% higher OSR and ESR compared to B-2. To conclude the ablation study, PA performs the best in success rate at the cost of slightly more computation time than B-2; however, as our algorithm processes each cluster quickly, each action generation for PA usually takes less than 3-4 seconds, which is tolerable. The cluster ranking module is a key part of the approach, increasing searching efficiency without sacrificing anything.
Our algorithm searches 8.40 clusters on average and achieves a 62.70% overall accuracy, a 41.95% saving compared to the no-cluster-ranking baseline and a 6.10% improvement compared to the no-confidence-threshold baseline. This shows that our cluster ranking algorithm and confidence threshold play important roles in reducing the number of clusters to search and improving the overall success rate.
§.§ Generalization Study
To evaluate whether the trained object-picking-number estimation model can generalize to objects of unseen sizes and shapes, we design and run a series of experiments measuring the performance of our algorithm on objects of different sizes and shapes.
§.§.§ Unseen Sizes
Table <ref> shows the simulation results on objects of sizes unseen during the training stage. cube_s_s and cube_l_s are the smaller and larger versions of the originally trained cube_m_s object.
The results show that cube_s_s achieves relatively similar results in all three target-number picking experiments: 3.00% lower than cube_m_s in both OSR and ESR when the target is 2; 9.50% and 0.27% higher than cube_m_s in OSR and ESR when the target number is 3; and 2.00% and 8.93% higher than cube_m_s in OSR and ESR when the target number is 4.
When testing on cube_l_s, the target-number-2 result is not worse than that of cube_m_s. However, the OSR decreases significantly in the target-3 and target-4 picking experiments. This is because cube_l_s is about 20% larger than cube_m_s, which makes picking more objects harder. Meanwhile, larger objects occupy more space and leave fewer available picking actions.
The results for cylin_s_s and cylin_l_s are similar to those of cylin_m_s in target-2 picking. When the target number is 3, cylin_s_s is 2.00% lower than cylin_m_s in OSR and 4.00% lower in ESR, while cylin_l_s is 31.00% lower in OSR and 21.99% lower in ESR.
This study on sizes different from the trained objects shows that our model generalizes well to objects of different sizes, especially smaller ones, thanks to more available picking actions in less crowded spaces.
§.§.§ Unseen Shape
We also evaluated our approach on an unseen shape, a hexagon, in both simulation and real testing. Our shorter-gripper MOP predictor has never seen a hexagon shape during the training stage. Therefore, this testing gives us insight into whether our model can generalize to more complex shapes in future work. Table <ref> shows the picking results for hexagons of three different sizes in simulation and for metal hexagonal nuts in the real setup. The results show that hexa_m_s has the highest OSR and ESR in simulation when picking two objects, reaching 67.00% in both success rates. We believe this is because this hexagon's size is closest to the trained 1-inch cube cube_m_s.
In real-world testing, the hexagonal metal nuts have the same size as hexa_m_s; OSR reaches 63.75% and ESR reaches 65.38%. Therefore, our trained model and algorithm can partially generalize to a shape never seen before.
Our experiments also show that the trained model can generalize to cubes and cylinders of different sizes: the 2.0 cm and 3.0 cm cubes achieved more than 90% overall accuracy when picking 2 objects. The results show the model generalizes to both sizes, but the overall success rate for objects larger than the original decreases due to more collisions. The evaluation also shows that the hexagon achieves 67.00% and 63.75% overall success rates for target#=2 picking in simulation and real, respectively. This success rate is lower than for the originally trained shapes, but still better than the theoretical SOP result.
§ CONCLUSION AND DISCUSSION
The proposed OPOS can use a robotic gripper to pick up the requested number of objects from a shallow bin by only picking once with a reasonable success rate. Using OPOS improves picking efficiency even when considering scenes that are not naturally suitable for OPO. OPOS has a novel framework built on several graph algorithms that use the graph topology of object layouts to identify suitable clusters, and a novel neural-network MOP predictor to estimate the outcome of a pick from a proposed gripper pose. For complex scenes with many objects, the framework has a cluster ranking algorithm and an early-stopping mechanism to significantly reduce the computation cost.
The approach has been evaluated with 12 objects in simulation and 9 objects in the real setup through three protocols on four metrics. The results show that when the requested number is two, OPOS achieves close to or above 95% overall OPO success rates for all tested objects of various shapes and sizes, in simulation and in the real setup, when the density is not too high. When enforcing the requested number, OPOS may have to pick more than once to reach exactly that number. The results show the OPOS approach can improve efficiency by almost three times for some objects and settings, and by close to two times across all objects and settings. The ablation study shows our algorithms achieve the best overall performance in terms of planning efficiency and overall success rate. The generalization results show the trained model can generalize to slightly different sizes and shapes without any additional training; their OPO success rates and efficiencies decrease a bit, but they are still better than a perfect SOP.
Picking a large requested number of objects (above three) by OPO is usually difficult because of the lack of clusters and the high chance of collision (a large number requires a long gripper, which has a high chance of collision). For identical settings (the same object, the same density), the AR decreases as the target number increases. The AR significantly increases when density increases for the same object and the same requested number of objects (especially a high number). This is because a higher density is more likely to provide clusters of a larger number of objects to be picked up together.
Overall, the proposed OPOS can improve the efficiency of picking multiple identical objects. When implementing OPOS for a particular setup, the requested number could be broken down into twos or threes for each OPOS pick to achieve the best efficiency. OPOS could be improved by introducing a gathering motion, because it would increase the OPO availability rate. In the future, we plan to investigate different gathering motions and how to incorporate them into OPOS.
IEEEtran
|
http://arxiv.org/abs/2307.01197v1
|
20230703175801
|
Segment Anything Meets Point Tracking
|
[
"Frano Rajič",
"Lei Ke",
"Yu-Wing Tai",
"Chi-Keung Tang",
"Martin Danelljan",
"Fisher Yu"
] |
cs.CV
|
[
"cs.CV"
] |
Segment Anything Meets Point Tracking
Frano Rajič^1,3Lei Ke^1,2Yu-Wing Tai^2Chi-Keung Tang^2Martin Danelljan^1Fisher Yu^1
^1ETH Zürich^2HKUST^3EPFL
August 1, 2023
==========================================================================================================================
Figure: Segment Anything Meets Point Tracking (SAM-PT).
SAM-PT is the first method to utilize sparse point propagation for Video Object Segmentation (VOS). The essence of SAM-PT is to extend SAM <cit.> with long-term point trackers to operate effectively on videos in a zero-shot manner. SAM-PT takes a video as input, together with annotations of the target object in the first frame. These annotations are called “query points” and denote either the target object (positive points) or non-target segments (negative points). The points are tracked throughout the video using point trackers that propagate the query points to all video frames, producing predicted trajectories and occlusion scores. SAM is subsequently prompted with the non-occluded points in the trajectories to output a segmentation mask for each video frame independently.
The Segment Anything Model (SAM) has established itself as a powerful zero-shot image segmentation model, employing interactive prompts such as points to generate masks.
This paper presents SAM-PT, a method extending SAM's capability to tracking and segmenting anything in dynamic videos.
SAM-PT leverages robust and sparse point selection and propagation techniques for mask generation, demonstrating that a SAM-based segmentation tracker can yield strong zero-shot performance across popular video object segmentation benchmarks, including DAVIS, YouTube-VOS, and MOSE.
Compared to traditional object-centric mask propagation strategies, we uniquely use point propagation to exploit local structure information that is agnostic to object semantics.
We highlight the merits of point-based tracking through direct evaluation on the zero-shot open-world Unidentified Video Objects (UVO) benchmark.
To further enhance our approach, we utilize K-Medoids clustering for point initialization and track both positive and negative points to clearly distinguish the target object. We also employ multiple mask decoding passes for mask refinement and devise a point re-initialization strategy to improve tracking accuracy.
Our code integrates different point trackers and video segmentation benchmarks and will be released at <https://github.com/SysCV/sam-pt>.
§ INTRODUCTION
Video segmentation benefits a myriad of applications, including autonomous driving, robotics, and video editing. Despite significant progress made in the past few years with deep neural networks <cit.>, the current methodologies falter when faced with unseen data, particularly in zero-shot settings. These models struggle to maintain consistent performance across diverse scenarios without specific video segmentation data for fine-tuning.
The prevailing methods <cit.> in semi-supervised Video Object Segmentation (VOS) and Video Instance Segmentation (VIS) exhibit performance gaps when dealing with unseen data, particularly in a zero-shot setting, i.e., when these models are transferred to video domains they have not been trained on and encompass object categories that fall outside of the training distribution.
A potential route towards overcoming these challenges lies in adapting successful models in the image segmentation domain for video segmentation tasks. One such promising model is the Segment Anything Model (SAM) <cit.>. SAM is a powerful foundation model for image segmentation, trained on the large-scale SA-1B dataset, which contains an astounding 11 million images and over 1 billion masks. This extensive training set enables SAM's impressive zero-shot generalization capabilities. The model is highly adaptable, able to produce high-quality masks from single foreground points, and has demonstrated robust performance across a range of downstream tasks under zero-shot transfer protocols. While SAM demonstrates powerful zero-shot capabilities for image segmentation, it is not innately suited for video segmentation tasks.
Recent efforts have been made to adapt SAM for video segmentation. For instance, TAM <cit.> integrates SAM with the state-of-the-art memory-based mask tracker XMem <cit.>. Likewise, SAM-Track <cit.> combines SAM with DeAOT <cit.>. While these methods mostly recover the performance on in-distribution data, they fall short in preserving the original performance of SAM in more challenging, zero-shot settings. Other methods that do not leverage SAM, such as SegGPT <cit.>, can successfully solve a number of segmentation problems using visual prompting, but still require mask annotation for the first video frame. This problem represents a significant barrier in zero-shot video segmentation, particularly as we seek to develop methods that can easily generalize to unseen scenarios and consistently deliver high-quality segmentation across diverse video domains.
We introduce SAM-PT (Segment Anything Meets Point Tracking), depicted in <ref>. This is the first method to utilize sparse point tracking combined with SAM for video segmentation, offering a new perspective on solving the problem. Instead of employing object-centric dense feature matching or mask propagation, we propose a point-driven approach that capitalizes on tracking points using rich local structure information embedded in videos. As a result, it only requires sparse point annotations to denote the target object in the first frame and provides better generalization to unseen objects, a strength demonstrated on the open-world UVO <cit.> benchmark. This approach also helps preserve the inherent flexibility of SAM while extending its capabilities effectively to video segmentation.
SAM-PT prompts SAM with sparse point trajectories predicted using state-of-the-art point trackers, such as PIPS <cit.>, harnessing their versatility for video segmentation. We find that initializing the points to track with K-Medoids cluster centers from a mask label is the strategy most compatible with prompting SAM. Tracking both positive and negative points enables the clear delineation of target objects from their background. To further refine the output masks, we propose multiple mask decoding passes that integrate both types of points. In addition, we devised a point re-initialization strategy that increases tracking accuracy over time. This approach involves discarding points that have become unreliable or occluded, and adding points from object parts or segments that become visible in later frames, such as when the object rotates.
Notably, our experimental results highlight that SAM-PT competes with existing zero-shot methods <cit.> or outperforms them <cit.> on several video segmentation benchmarks. This comes without the need for any video segmentation data during training, underscoring the robustness and adaptability of our approach. SAM-PT holds the potential to enhance progress in video segmentation tasks, particularly in zero-shot scenarios.
§ RELATED WORK
Point Tracking for Video Segmentation.
Classical feature extraction and tracking methods such as Lucas-Kanade <cit.>, Tomasi-Kanade <cit.>, Shi-Tomasi <cit.>, SIFT <cit.>, and SURF <cit.>, as well as newer methods such as LIFT <cit.>, SuperPoint <cit.>, and SuperGlue <cit.>, have all demonstrated proficiency in identifying or tracking sparse features and establishing long-range correspondences. However, their effectiveness is confined to a specific set of distinct interest points and they often struggle when applied to non-rigid, dynamic scenes.
Flow-based methods, such as RAFT <cit.>, excel in tracking dense points between successive frames. However, they stumble with deriving accurate long-range point trajectories. When chaining flow predictions over time, errors tend to accumulate and lead to drift, while occlusions result in tracking failures.
Significant strides have recently been made in long-term point tracking across video frames, as evinced by methods such as TapNet <cit.> and PIPS <cit.>, as well as the concurrent and state-of-the-art OmniMotion <cit.> and TAPIR <cit.> techniques. These approaches optimize long-range point trajectories across an entire video, navigating mostly well through periods of occlusion.
Our work stands apart as the first to integrate these successful long-term point tracking methods, utilizing them to guide a promptable foundation model for image segmentation toward performing video segmentation tasks.
Segment and Track Anything models.
SAM <cit.> is an innovative image segmentation model for promptable image segmentation, trained on over 1 billion segmentation masks. It showcases remarkable zero-shot generalization abilities and can produce high-quality masks from a single foreground point.
To further improve the quality of the masks, especially when segmenting objects with intricate structures, HQ-SAM <cit.> extends SAM with a learnable high-quality output token which proves efficient in diverse segmentation domains. However, SAM and HQ-SAM cannot be directly used to solve video segmentation tasks.
A few concurrent works extend SAM, for example, TAM <cit.> and SAM-Track <cit.> combine SAM with state-of-the-art mask trackers (such as XMem <cit.> and DeAOT <cit.>) to perform interactive video object segmentation. These methods employ SAM for mask initialization or correction and XMem/DeAOT for mask tracking and prediction. Using the pre-trained mask trackers recovers the in-distribution performance, but hinders the performance in zero-shot settings. PerSAM <cit.> also demonstrates the ability to track multiple reference objects in a video.
Instead of building an interactive tracking pipeline or SAM fine-tuning, we focus on learning robust associations for diverse objects in zero-shot scenarios.
Zero-shot VOS / VIS.
Among the non-SAM-based methods, Painter <cit.> and its SegGPT <cit.> extension form another class of generalist models for solving a variety of image and segmentation tasks. These methods likewise use visual prompting techniques but are inherently different frameworks from SAM. Despite its wide applicability, Painter shows weak performance in video segmentation tasks. Conversely, SegGPT successfully uses in-context prompting to achieve one-shot video object segmentation performance comparable to ours, also without training on any video data. The training domains, however, notably differ between SegGPT and our method.
STC <cit.> and DINO <cit.> also do not use any video segmentation data during training. In the semi-supervised video object segmentation, they take a reference mask as input and perform frame-by-frame feature matching, which propagates the reference mask across the entirety of the video. Our SAM-PT, on the other hand, diverges substantially from these methodologies by adopting point tracking, eschewing the process of frame-by-frame feature matching. Additionally, our method requires only sparse points to represent the target object, rather than a full reference mask, and yields superior performance on conventional semi-supervised video object segmentation benchmarks.
§ METHOD
We propose SAM-PT to adapt SAM, a foundation model for image segmentation, for addressing video segmentation tasks in a zero-shot setting. SAM-PT combines the strengths of existing prominent point trackers, such as PIPS <cit.> and TapNet <cit.>, with the powerful image segmentation of SAM to enable tracking of anything in videos. First, <ref> briefly describes SAM. <ref> then introduces our SAM-PT method with its four constituent steps. Finally, <ref> analyzes and highlights the method's novelty as the first point-driven video segmentation method compared to existing works.
§.§ Preliminaries: SAM
The Segment Anything Model (SAM) <cit.> is a novel vision foundation model designed for promptable image segmentation. SAM is trained on the large-scale SA-1B dataset, which contains 11 million images and over 1 billion masks. SA-1B has 400 times more masks than any existing segmentation dataset. This extensive training set facilitates SAM's impressive zero-shot generalization capabilities to new data. SAM has showcased its ability to produce high-quality masks from a single foreground point, and has demonstrated robust generalization capacity on a variety of downstream tasks under a zero-shot transfer protocol using prompt engineering. These tasks include, but are not limited to, edge detection, object proposal generation, and instance segmentation.
SAM comprises three main components: an image encoder, a flexible prompt encoder, and a fast mask decoder. The image encoder is a Vision Transformer (ViT) backbone and processes high-resolution 1024 ×1024 images to generate an image embedding of 64 × 64 spatial size. The prompt encoder takes sparse prompts as input, including points, boxes, and text, or dense prompts such as masks, and translates these prompts into c-dimensional tokens. The lightweight mask decoder then integrates the image and prompt embeddings to predict segmentation masks in real-time, allowing SAM to adapt to diverse prompts with minimal computational overhead.
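For concreteness, the following is a minimal sketch of prompting SAM with a point, using the interface of the publicly released segment-anything package; the checkpoint path, image file, and point coordinates are placeholders.

import numpy as np
from PIL import Image
from segment_anything import sam_model_registry, SamPredictor

# Load a ViT-H SAM checkpoint (the path is a placeholder).
sam = sam_model_registry["vit_h"](checkpoint="sam_vit_h.pth")
predictor = SamPredictor(sam)

# Encode the image once; the embedding is reused for every subsequent prompt.
image = np.array(Image.open("frame_0000.jpg").convert("RGB"))
predictor.set_image(image)

# A single positive point (label 1) placed roughly on the target object.
point_coords = np.array([[320, 240]])   # (x, y) in pixels
point_labels = np.array([1])            # 1 = foreground, 0 = background

masks, scores, low_res_logits = predictor.predict(
    point_coords=point_coords,
    point_labels=point_labels,
    multimask_output=True,              # return several candidate masks
)
best_mask = masks[np.argmax(scores)]    # keep the highest-scoring candidate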
§.§ Ours: SAM-PT
While SAM shows impressive capabilities in image segmentation, it is inherently limited in handling video segmentation tasks. Our Segment Anything Meets Point Tracking (SAM-PT) approach effectively extends SAM to videos, offering robust video segmentation without requiring training on any video segmentation data.
SAM-PT is illustrated in <ref> and is primarily composed of four steps: 1) selecting query points for the first frame; 2) propagating these points to all video frames using point trackers; 3) using SAM to generate per-frame segmentation masks based on the propagated points; 4) optionally reinitializing the process by sampling query points from the predicted masks. We next elaborate on these four steps.
1) Query Points Selection. The process begins with defining query points in the first video frame, which either denote the target object (positive points) or designate the background and non-target objects (negative points). Users can manually and interactively provide query points, or they may be derived from a ground truth mask. For example, in the case of semi-supervised video object segmentation, the ground truth mask is provided for the first frame where the object appears. We derive the query points from ground truth masks using different point sampling techniques by considering their geometrical locations or feature dissimilarities, as depicted in <ref>. These sampling techniques are:
* Random Sampling: An intuitive approach where query points are randomly selected from the ground truth mask.
* K-Medoids Sampling: This technique takes the cluster centers of K-Medoids clustering <cit.> as query points to ensure good coverage of different parts of the object and robustness to noise and outliers.
* Shi-Tomasi Sampling: This method extracts Shi-Tomasi corner points from the image under the mask as they have been shown to be good features to track <cit.>.
* Mixed Sampling: A hybrid method combining the above techniques since it might benefit from the unique strengths of each.
While each method contributes distinct characteristics that influence the model's performance, our ablation study reveals that K-Medoids sampling yields the best results, with good coverage of the various segments of the object. Shi-Tomasi sampling follows closely, indicating their respective strengths in this context. The selection and arrangement of these points considerably affect the overall video segmentation performance, so determining the optimal method is crucial.
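As an illustration, here is a minimal, self-contained sketch of K-Medoids query-point sampling from a binary first-frame mask; it uses a simple alternating loop rather than a full PAM implementation, and the default of 8 points follows the ablation above.

import numpy as np

def kmedoids_query_points(mask, k=8, iters=20, max_pixels=2000, seed=0):
    """Sample k query points inside a boolean HxW mask via a simple k-medoids loop."""
    rng = np.random.default_rng(seed)
    pts = np.argwhere(mask).astype(float)            # (N, 2) array of (row, col)
    if len(pts) > max_pixels:                        # subsample for speed
        pts = pts[rng.choice(len(pts), max_pixels, replace=False)]
    medoids = pts[rng.choice(len(pts), min(k, len(pts)), replace=False)]
    for _ in range(iters):
        d = np.linalg.norm(pts[:, None] - medoids[None, :], axis=-1)
        assign = d.argmin(axis=1)                    # nearest-medoid assignment
        for j in range(len(medoids)):
            members = pts[assign == j]
            if len(members) == 0:
                continue
            intra = np.linalg.norm(members[:, None] - members[None, :], axis=-1)
            medoids[j] = members[intra.sum(axis=1).argmin()]   # best-centered member
    return medoids[:, ::-1]                          # (x, y) order for SAM prompts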
2) Point Tracking. Initiated with the query points, we employ robust point trackers to propagate the points across all frames in the video, resulting in point trajectories and occlusion scores. We adopt the state-of-the-art point tracker PIPS <cit.> to propagate the points, as PIPS shows moderate robustness toward long-term tracking challenges such as object occlusion and re-appearance. In our experiments section, this also proves more effective than alternatives such as chained optical-flow propagation or first-frame correspondences.
3) Segmentation. In the predicted trajectories, the non-occluded points serve as indicators of where the target object is throughout the video. This allows us to use the non-occluded points to prompt SAM, as illustrated in <ref>, and leverage its inherent generalization ability to output per-frame segmentation mask predictions. Unlike conventional tracking methods that require training or fine-tuning on video segmentation data, our approach excels in zero-shot video segmentation tasks.
We combine positive and negative points by calling SAM in two passes. In the initial pass, we prompt SAM exclusively with positive points to define the object's initial localization. Subsequently, in the second pass, we prompt SAM with both positive and negative points along with the previous mask prediction. Negative points provide a more nuanced distinction between the object and the background and help by removing wrongly segmented areas.
Lastly, we execute a variable number of mask refinement iterations by repeating the second pass. This utilizes SAM's capacity to refine vague masks into more precise ones. Based on our ablation study, this step notably improves video object segmentation performance.
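The two-pass prompting and refinement just described can be sketched as follows, assuming a segment-anything SamPredictor whose image has already been set and per-frame positive/negative point arrays coming from the tracker; the number of refinement iterations is a free parameter (12 in our best non-reinitialization setting).

import numpy as np

def segment_frame(predictor, pos_pts, neg_pts, refine_iters=12):
    """Two-pass SAM prompting: positive points only, then positive + negative
    points with the previous low-resolution mask logits fed back for refinement."""
    # Pass 1: localize the object with positive points only.
    labels = np.ones(len(pos_pts), dtype=int)
    _, _, logits = predictor.predict(
        point_coords=pos_pts, point_labels=labels, multimask_output=False)

    # Pass 2 (+ refinement): positive and negative points plus the previous
    # mask prediction as an additional dense prompt.
    coords = np.concatenate([pos_pts, neg_pts]) if len(neg_pts) else pos_pts
    labels = np.concatenate([np.ones(len(pos_pts), dtype=int),
                             np.zeros(len(neg_pts), dtype=int)])
    masks = None
    for _ in range(1 + refine_iters):
        masks, _, logits = predictor.predict(
            point_coords=coords, point_labels=labels,
            mask_input=logits, multimask_output=False)
    return masks[0]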
4) Point Tracking Reinitialization. We optionally execute a reinitialization of the query points using the predicted masks once a prediction horizon of h=8 frames is reached, and denote the variant as SAM-PT-reinit. Upon reaching this horizon, we have h predicted masks and take the last predicted mask to sample new points. At this stage, all previous points are discarded and substituted with the newly sampled points. Following this, steps 1) through 4) are repeated with the new points, starting from the horizon timestep where reinitialization occurs. The steps are iteratively executed until the entire video is processed. The reinitialization process serves to enhance tracking accuracy over time by discarding points that have become unreliable or occluded, while incorporating points from object segments that become visible later in the video. Other reinitialization variants are discussed in <ref> and included in the ablation study in <ref>.
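A sketch of the horizon-based reinitialization loop is given below; track_points, segment_frame, and sample_query_points are hypothetical helpers standing in for the point tracker, the SAM prompting step above, and the query-point sampling step, respectively.

import numpy as np

def sam_pt_reinit(frames, first_mask, predictor, h=8, k=8):
    """Propagate a first-frame mask through a video, resampling the query points
    from the latest predicted mask every h frames (helpers are hypothetical)."""
    masks = [first_mask]
    query = sample_query_points(first_mask, k=k)         # e.g. K-Medoids centers
    start = 0
    while start < len(frames) - 1:
        stop = min(start + h, len(frames) - 1)
        # Track the current query points over the next horizon of frames.
        trajectories, visible = track_points(frames[start:stop + 1], query)
        for t in range(1, stop - start + 1):
            pos = trajectories[t][visible[t]]            # drop occluded points
            predictor.set_image(frames[start + t])
            masks.append(segment_frame(predictor, pos, neg_pts=np.empty((0, 2))))
        # Discard the old points and start fresh from the last predicted mask.
        query = sample_query_points(masks[-1], k=k)
        start = stop
    return masks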
§.§ SAM-PT vs. Object-centric Mask Propagation
With sparse point tracking combined with prompting SAM, SAM-PT distinguishes itself from traditional video segmentation methods that depend on dense object mask propagation, as noted in <ref>. To propagate the first-frame GT label to the remaining video frames, traditional techniques commonly use feature matching with masks cached to a mask memory <cit.>, frame-by-frame feature matching <cit.>, feature matching with the first-frame mask <cit.>,
optical flow <cit.>,
and, recently, in-context visual prompting <cit.>. In contrast, SAM-PT introduces a unique approach to video object segmentation, employing the robust combination of point tracking with SAM, which is inherently designed to operate on sparse point prompts.
The point propagation strategy of SAM-PT offers several advantages over traditional object-centric tracking methods. First, point propagation exploits local structure context that is agnostic to global object semantics.
This enhances our model's capability for zero-shot generalization, an advantage that, coupled with SAM's inherent generalization power, allows for tracking diverse objects in diverse environments, such as on the UVO benchmark.
Second, SAM-PT allows for a more compact object representation with sparse points, capturing enough information to characterize the object's segments/parts effectively. Finally, the use of points is naturally compatible with SAM, an image segmentation foundation model trained to operate on sparse point prompts, offering an integrated solution that aligns well with the intrinsic capacities of the underlying model.
Compared with conventional methods in <ref>, SAM-PT emerges as superior or comparable to methods that refrain from utilizing video segmentation data during training. However, a performance gap exists between such methods and those that leverage video segmentation training data in the same domain, such as XMem <cit.> or DeAOT <cit.>. Further, the potential of our model extends beyond video object segmentation to other tasks, such as Video Instance Segmentation (VIS), thanks to the inherent flexibility of our point propagation strategy.
In summary, SAM-PT is the first method that combines sparse point propagation with prompting an image segmentation foundation model to perform zero-shot video object segmentation.
It provides a fresh perspective and adds a new dimension to the study of video object segmentation.
§ EXPERIMENTS
§.§ Datasets
In the following subsections, we present an overview of the datasets used in our study. Section <ref> provides a brief introduction to the Video Object Segmentation task and outlines the specific datasets we utilize for this task. Similarly, Section <ref> discusses the Video Instance Segmentation task and the dataset associated with it.
§.§.§ Video Object Segmentation
Video Object Segmentation (VOS) refers to the process of segmenting a specific object across an entire video sequence. Semi-supervised VOS (also known as one-shot VOS or semi-automatic VOS) is the primary setting for VOS on which we evaluate our method. In this setting, the ground truth object mask of the first frame is provided, and the task is to predict the masks for subsequent frames. Alternatively, the first frame label can be a bounding box instead of a segmentation mask, or a set of points as is the case for our method.
We evaluate our method on four VOS datasets: DAVIS 2016, DAVIS 2017 <cit.>, YouTube-VOS 2018 <cit.>, and MOSE 2023 <cit.>.
DAVIS 2016 <cit.>.
DAVIS 2016 is a single-object VOS benchmark, consisting of 20 highly diverse video sequences, each of which possesses well-annotated segmentation masks.
DAVIS 2017 <cit.>.
A multi-object extension of its 2016 version, DAVIS 2017 includes 60 videos in the training set and 30 videos in the validation set, comprising a total of 197 different objects. The video scenarios within this dataset are small but diverse.
YouTube-VOS 2018 <cit.>.
YouTube-VOS 2018 is a large-scale dataset collected from YouTube, comprising 3471 training videos encompassing 65 categories and 474 validation videos with an additional 26 unseen categories. The diversity in categories and the inclusion of seen and unseen classes allow for a comprehensive evaluation of a given model's generalization capability.
MOSE 2023 <cit.>.
MOSE 2023 is a recently introduced dataset designed for multiple object segmentation and tracking in complex scenes. This dataset is replete with challenges such as the transient visibility of objects, the presence of minute or less noticeable entities, extensive occlusions, and scenes with a high object density. By design, each video in this dataset must contain multiple objects so that occlusions must be present, and objects must show sufficient motion, as opposed to being stationary or showing little movement.
Metrics. We report the standard evaluation metrics for video object segmentation <cit.>, including region similarity 𝒥, contour accuracy ℱ, and their average, 𝒥&ℱ.
§.§.§ Video Instance Segmentation
Video Instance Segmentation (VIS) is a task that combines object detection, instance segmentation, and object tracking across video frames, which aims to identify and segment each object instance over the whole video sequence. This is a much less explored task compared to VOS but has been gaining interest. We evaluate our method on the dense-video task of the UVO v1.0 <cit.> dataset.
UVO v1.0. The Unidentified Video Objects (UVO) dataset is designed to recognize and segment all objects regardless of the categories, even those unseen during training, thereby focusing on VIS in the open world. Each video in UVO features on average 12.3 object annotations, a considerable increase from previous datasets having only 2 or 3 objects per video on average. UVO sources its videos from the Kinetics-400 <cit.> dataset and contains three different splits: FrameSet, VideoSparseSet, and VideoDenseSet. The VideoDenseSet consists of 3-second clips annotated densely at 30fps and tracked over time. The primary goal of VideoDenseSet is to study video open-world segmentation. Objects identifiable under COCO categories carry their respective COCO labels, while ambiguous objects or those outside the COCO taxonomy are labeled as “other”. This meticulous and exhaustive annotation structure makes the VideoDenseSet ideal for research areas that require an understanding of videos in a dense and comprehensive manner, such as robotics, autonomous driving, and augmented-reality applications.
Metrics.
We evaluate our method using standard evaluation metrics in image instance segmentation, adapted for video instance segmentation <cit.>. These include Average Precision (AP) and Average Recall (AR) IoU-based metrics. Given that each instance in a video comprises a sequence of masks, unlike image instance segmentation, IoU computation is carried out not only in the spatial dimensions but also in the temporal dimension. This implies that the sum of intersections at every single frame is divided by the sum of unions at every single frame. These metrics are generally computed on a per-category basis and subsequently averaged across all categories. However, we work with the class-agnostic version of UVO.
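The spatio-temporal IoU underlying these metrics can be sketched in a few lines: per-frame intersections and unions are summed over the whole sequence before taking their ratio.

import numpy as np

def video_iou(pred_masks, gt_masks):
    """Spatio-temporal IoU between two sequences of boolean HxW masks."""
    inter = sum(int(np.logical_and(p, g).sum()) for p, g in zip(pred_masks, gt_masks))
    union = sum(int(np.logical_or(p, g).sum()) for p, g in zip(pred_masks, gt_masks))
    return inter / union if union > 0 else 1.0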
§.§ Implementation Details
Training Data. For our experiments, we use pre-trained checkpoints provided by the respective authors for both PIPS <cit.> and SAM. PIPS is trained exclusively on a synthetic dataset, FlyingThings++ <cit.>, derived from the FlyingThings <cit.> optical flow dataset. This dataset includes multi-frame amodal trajectories with synthetic occlusions caused by moving objects. SAM, on the other hand, has been trained on the large-scale SA-1B dataset, the largest image segmentation dataset to date, with over 1 billion masks on 11M licensed and privacy-respecting images. It is noteworthy that neither of these datasets includes video segmentation data, and they do not overlap with any of our evaluation data. This effectively positions our model in a zero-shot video segmentation setting.
Model Variations. Our experiments led to two optimal model configurations, distinguished as SAM-PT (without reinitialization) and SAM-PT-reinit (with reinitialization). These configurations were derived from our ablation study in <ref>. However, we found that using iterative refinement negatively impacted both SAM-PT and SAM-PT-reinit on the MOSE dataset, and likewise hindered SAM-PT-reinit on the YouTube-VOS dataset. Consequently, iterative refinement was deactivated for these specific datasets. For DAVIS, we additionally report results for replacing SAM with HQ-SAM <cit.> and denote the model variants as HQ-SAM-PT and HQ-SAM-PT-reinit. The HQ-SAM variants use 3 iterative refinement iterations instead of 12.
VOS Evaluation. When evaluating on VOS, we use the provided ground truth mask for the first frame to sample the query points required by our method. Then, we give only the sampled points as input to our method, not the mask. For all datasets, we use the full-resolution data and resize it to the longest side of 1024 to match SAM's input resolution.
VIS Evaluation. For evaluating our method on the VIS task, we leverage SAM's automatic mask generation capacity to generate up to 100 mask proposals for the initial frame. We then propagate these proposed masks throughout the entire video sequence using our method. We evaluate TAM <cit.>, a concurrent method we compare against, in the same manner. Our mask proposal generation process is currently simplistic and does not create any proposals for subsequent video frames. Consequently, it cannot identify objects that emerge in later frames, placing it at a disadvantage compared to VIS methods that are capable of doing so. Despite this limitation, our approach provides a consistent platform for comparing zero-shot methods in terms of how effectively they propagate diverse mask proposals from the first frame.
§.§ Ablation Study
We conducted detailed ablation experiments on the DAVIS 2017 validation subset to validate various components and designs of SAM-PT. We employed SAM's ViT-H backbone for all tests. Each aspect was examined sequentially, integrating the optimal settings obtained from prior experiments. To ensure statistical soundness, multiple iterations of each experiment were carried out (between 4 and 12 runs per setup), with findings represented as mean and standard deviation across these runs.
While these results provide insight, there may be a risk of overfitting due to our limited validation dataset. Although we endeavored to maintain a consistent evaluation protocol, future research should aim for a larger validation set, possibly derived from the YouTube-VOS 2018 training set, to mitigate this concern.
Query Point Sampling
<ref> illustrates that the number of positive points and the choice of point selection methods significantly influence performance. Using 8 points per mask showed a remarkable 40-point performance enhancement compared to a single point. This substantiates the argument that a single positive point is inadequate for prompting SAM
as it often results in the segmentation of partial objects only. Among the point selection methods, K-Medoids and Shi-Tomasi produced comparable results, with a slight preference towards K-Medoids owing to its marginally higher mean score and resilience to the number of positive points per mask.
Point Tracking. <ref>a shows that PIPS <cit.> demonstrated superior performance over TapNet <cit.>, SuperGlue <cit.>, and RAFT <cit.>. TapNet's limitations stem from its lack of effective time consistency and its training on 256x256 images, which hampered its performance with higher-resolution images. SuperGlue, while proficient in matching sparse features across rigid scenes, grapples with effectively matching points from the reference frame in dynamic scenes, particularly under object deformations. RAFT, being an optical flow model, faced difficulties handling occlusions. Although PIPS's prior use in our experiments may have offered some hyperparameter advantages, its superior performance is primarily attributable to its more robust design that emphasizes trajectory modeling over eight subsequent frames. This approach fosters the generation of coherent point trajectories and enhances occlusion detection.
Negative Points. <ref>b highlights that incorporating negative points had a favorable impact, particularly in reducing segmentation errors when points deviated from the target object. The addition of negative points empowered SAM to better handle the point trackers' failure cases, leading to improved segmentation and a 1.8-point enhancement over the non-use of negative points. Note that throughout all experiments, we always used the mixed point sampling method for sampling negative points which amounts to using random sampling when there is only one negative point per mask.
Iterative Refinement. The iterative refinement approach contributed to higher-quality masks and mitigated the impact of artifacts in SAM's output. <ref>c displays that this yielded an improvement of 2.2 points over the non-refinement approach.
Patch Similarity. Our initial findings in <ref>d suggest that using patch similarity to filter unreliable tracking points was overly restrictive in our context, leading to substantial deletion of points. Although it did not prove beneficial in our current setup, this aspect certainly warrants further exploration, particularly in scenarios involving point re-initialization.
Reinitialization.
<ref> presents the performance of different reinitialization variants. In <ref> and <ref>, we also show that it brings 2.5- and 2.0-point improvements on the MOSE and UVO benchmarks, respectively. The re-initialization process enhanced robustness against points falling off objects. By reinitializing all points based on the current mask prediction, we account for errors in the point tracker's outputs by discarding incorrect points and starting fresh from the current mask prediction. However, this assumes that the current mask prediction can be trusted, which may not always be the case and sometimes leads to failures.
In summary, our best-performing SAM-PT model employs K-Medoids for point selection with 8 points per mask, PIPS for point tracking, a single negative point per mask, and 12 iterations of iterative refinement without patch-similarity filtering. Meanwhile, using reinitialization achieved optimum performance with 12 refinement iterations and 72 negative points per mask.
§.§ Comparison with State-of-the-art Methods
All reported results were computed with official tools or official evaluation servers. <ref> reports Video Object Segmentation results, including qualitative results on unseen web videos. <ref> reports Video Instance Segmentation results.
§.§.§ Video Object Segmentation
Performance Overview.
On the DAVIS 2017 dataset, our proposed method outperforms others that have not been trained on any video object segmentation data, as reflected in <ref>. A mean 𝒥&ℱ score of 76.6 points exceeds PerSAM-F by 4.7 points and the SegGPT generalist model by a single point. The experiments were repeated 8 times for statistical robustness, and we report the mean and standard deviation of our method's performance.
We also outperform PerSAM-F on the YouTube-VOS 2018 and MOSE 2023 datasets, achieving mean scores of 67.0 and 41.0 as shown in <ref>. However, compared to SegGPT, which uses different mask training data, our performance falls short on these two datasets.
Qualitative Analysis.
Visualizations of successful video segmentation on DAVIS 2017 for SAM-PT and SAM-PT-reinit can be seen in <ref> and <ref> respectively. Notably, <ref> presents successful video segmentation on unseen web videos – clips from the “Avatar: The Last Airbender” anime-influenced animated television series, demonstrating the zero-shot capabilities of our method.
Limitations and Challenges.
Despite the competitive zero-shot performance, certain limitations persist, primarily stemming from our point tracker's difficulty in handling occlusion, small objects, motion blur, and re-identification. In such scenarios, the point tracker's errors propagate into future video frames. <ref> illustrates these problematic instances on DAVIS 2017, while <ref> presents additional cases on “Avatar: The Last Airbender” clips. Although point re-initialization and negative points somewhat alleviate these failures, the remaining errors still prevent performance from being on par with methods trained on video data.
§.§.§ Video Instance Segmentation
Results and Analysis. Given the same mask proposals, SAM-PT outperforms TAM <cit.> significantly even though SAM-PT was not trained on any video segmentation data. TAM is a concurrent approach combining SAM and XMem <cit.>, where XMem was pre-trained on BL30K <cit.> and trained on DAVIS and YouTube-VOS, but not on UVO. On the other hand, SAM-PT combines SAM with the PIPS point tracking method, both of which have not been trained on video segmentation tasks.
§ CONCLUSION
We present SAM-PT, an innovative solution that extends SAM's segmentation ability from static images to dynamic videos. Integrated with long-term point trackers, our approach demonstrates strong performance across several benchmarks including DAVIS, YouTube-VOS, MOSE, and UVO. While our method has limitations such as difficulty handling occlusions, small objects, and motion blur, and inconsistencies in mask predictions, it contributes a simple and effective new point-based perspective to video object segmentation research. By illustrating a promising way to extend foundational models like SAM into the video domain, our research provides a potential pathway for advancements in diverse applications from autonomous driving to video labeling. Furthermore, the future incorporation of more advanced point trackers can enhance the performance of SAM-PT.
ieee_fullname
§ POINT TRACKING REINITIALIZATION
In our SAM-PT-reinit method, we introduce a reinitialization strategy. Here, the point tracker begins anew after every h frames, where h represents a pre-set tracking horizon (e.g., 8 frames), or is dynamically determined based on SAM's mask predictions for each timestep within the horizon (e.g., using most-similar-mask-area heuristics). Upon reaching this horizon, the query points given to the tracker are reinitialized according to the mask prediction SAM outputted at the horizon frame. While this method may increase the computational load (especially if some of SAM's computed masks are disregarded), it demonstrates substantial performance improvement in demanding video sequences, such as those in the MOSE dataset.
We explored four reinitialization methods, each varying in how they compute the value of h:
* Reinit-on-Horizon-and-Sync-Masks: This straightforward variant reinitializes points after a fixed number of frames (e.g., every 8 frames). However, it may stumble if the mask is absent at the reinitialization timestep. Despite this potential pitfall, it operates at the same speed as methods that do not employ reinitialization.
* Reinit-at-Median-of-Area-Diff: In this variant, the tracker outputs trajectory points for each frame within the horizon, and SAM predicts masks based on these trajectories. Reinitialization happens at the frame within the horizon that has the mean mask area among the non-empty masks predicted by SAM. Notably, this approach may be significantly slower than methods without reinitialization, as it may reject several SAM masks (e.g., out of 8 computed masks, reinitialization might occur on the second one, necessitating recomputation of the remaining 6 masks in the next step).
* Reinit-on-Similar-Mask-Area: This method triggers reinitialization when the mask area is similar to the initial mask area, causing it to be several times slower than methods without reinitialization.
* Reinit-on-Similar-Mask-Area-and-Sync-Masks: This variant reinitializes when the mask area for all masks in the batch is similar to the initial mask areas, synchronizing the masks to be tracked from the same timestep. This synchronization allows for the use of negative points from other masks when querying SAM, but it also runs several times slower than methods without reinitialization.
From our investigations, we found the (A) Reinit-on-Horizon-and-Sync-Masks strategy to be the most effective, as indicated by its superior performance on the DAVIS 2017 validation subset. The choice of reinitialization method may depend on the specific validation subset and the degree of hyperparameter tuning involved. Note that we have always used reinitialization along with negative points.
§.§ Computational Cost and Speed Optimization
The introduction of reinitialization in SAM-PT-reinit comes with a trade-off: it slows down the inference speed by a factor of 2 to 8, depending on the reinitialization method and parameters used. The major bottleneck is the invocation of SAM's backbone for each video frame. We propose caching the backbone outputs for unprocessed video frames as a possible solution to mitigate this slowdown. This strategy requires storing embeddings for all video frames in the working memory but offers the potential for significant speedup, particularly useful for applications requiring faster inference.
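A minimal sketch of the proposed caching is given below, assuming a wrapper around the backbone call; encode_image is a stand-in, not part of the released SAM API.

class CachedEncoder:
    """Cache per-frame image embeddings so that repeated passes over the same
    frames (as happens with reinitialization) do not re-run SAM's ViT backbone."""

    def __init__(self, encode_image):
        self.encode_image = encode_image   # stand-in for the backbone forward pass
        self._cache = {}

    def __call__(self, frame_idx, frame):
        if frame_idx not in self._cache:
            self._cache[frame_idx] = self.encode_image(frame)
        return self._cache[frame_idx]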
§ MORE DAVIS SUBSETS
We report results on DAVIS 2016 Validation and DAVIS 2017 Test-dev in <ref>.
§ PER-SEQUENCE DAVIS 2017 VALIDATION RESULTS
See figure <ref> for per-sequence DAVIS 2017 Validation results. For exact numbers and tables, check our GitHub experiment summaries or the Wandb project.
|
http://arxiv.org/abs/2307.02475v1
|
20230705174845
|
The Calissons Puzzle
|
[
"Jean-Marie Favreau",
"Yan Gerard",
"Pascal Lafourcade",
"Léo Robert"
] |
cs.CG
|
[
"cs.CG",
"05B45 52C20"
] |
The Calissons Puzzle
Jean-Marie Favreau, Yan Gerard, Pascal Lafourcade, Léo Robert
August 1, 2023
====================
In 2022, Olivier Longuet, a French mathematics teacher, created a game called the calissons puzzle. Given a triangular grid in a hexagon and some given edges of the grid, the problem is to find a calisson tiling such that no input edge is overlapped and calissons adjacent to an input edge have different orientations.
We extend the puzzle to regions R that are not necessarily hexagonal.
The first interesting property of this puzzle is that, unlike the usual calisson or domino problems, it is solved neither by a maximal matching algorithm, nor by Thurston's algorithm. This raises the question of its complexity.
We prove that if the region R is finite and simply connected, then the puzzle can be solved by an algorithm that we call the advancing surface algorithm, whose complexity is O(|∂ R|^3), where |∂ R| is the size of the boundary of the region R. In the case where the region is the entire infinite triangular grid, we prove that the existence of a solution can be decided with an algorithm of complexity O(|X|^3), where X is the set of input edges. To prove these theorems, we revisit William Thurston's results on the calisson tilability of a region R. The solutions involve an equivalence between calisson tilings, stepped surfaces and certain DAG cuts that avoid passing through a set of edges that we call unbreakable. This allows us to generalize Thurston's theorem characterizing tilable regions by rewriting it in terms of descending paths or absorbing cycles. Thurston's algorithm appears as a distance calculation algorithm following Dijkstra's paradigm. The introduction of a set X of interior edges brings negative weights that force a Bellman-Ford strategy to be preferred. These results extend Thurston's legacy by using computer science structures and algorithms.
§ INTRODUCTION
Tilings have been a subject of interest for mathematicians for centuries, and more recently for famous mathematicians such as John Conway or William Thurston. Some of the most common tilings are tilings by calissons, i.e., lozenges or rhombi. The name calisson comes from the name of a French sweet made in Aix-en-Provence, a small town in the south of France.
Calisson tilings have the nice property of being interpretable in 3D as the perspective image of a stepped surface.
In this framework, Olivier Longuet, a French mathematics teacher, created in 2022 an interesting logic puzzle called the Calissons Puzzle (in French, le jeu des calissons). This puzzle has the merit of developing children's sense of the third dimension while being recreational.
A full description (in French), with many instances and an app to play online, is available on a blog led by Olivier Longuet: https://mathix.org/calisson/blog/.
The rules are very simple. The problem is presented in a triangular grid bounded by a regular hexagon.
A calisson is a pair of adjacent triangles. There are three types of calissons, each associated with a yellow, red or blue color, depending on their direction.
An instance of a calissons puzzle is made up of edges of the triangular grid. The problem is to tile the grid with calissons in such a way that the edges given as input are not overlapped by the interior of a calisson and are adjacent to two calissons of different colors (Fig. <ref>).
For a first try, two instances of the puzzle are drawn in figure <ref>.
Our first goal is to determine the complexity of the puzzle.
We solve this question and a bit more in the triangular grid.
§.§ Notations
The triangular grid can be defined as the projection of the cubic grid.
The grids □ and □ _n. The primary cube C ⊂ ℝ^3 is [0,1]^3. The cubes of the cubic grid are simply denoted (x,y,z)+C with (x,y,z) ∈ ℤ^3; these are the translates of C by (x,y,z).
The sets of cubes, faces, edges and vertices of the cubic grid are respectively denoted □ ^3, □ ^2, □ ^1
and □ ^0 according to their dimension. Their union is a cubic complex denoted □ = □ ^3 ∪□ ^2 ∪□ ^1 ∪□ ^0.
For an integer n, we focus on the cellular complex □ _n =□ ^3 _n ∪□ _n ^2 ∪□ _n ^1 ∪□ _n ^0 containing the cubes,
faces, edges and vertices of cubes (x,y,z)+C
where (x,y,z)∈{ 0 ⋯ n-1}^3 with particular interest in the set of its cubes □ _n ^3.
The grids and _n. The infinite triangular grid and its restriction _n to the regular hexagon φ([0,n]^3) are obtained by projecting the cell complexes □ and □ _n along φ, where φ is the projection of the 3D space ℝ^3 onto a plane H of equation x+y+z=h in the direction (1,1,1).
Rather than using two coordinates in the planar grid, the classic choice for working in the triangular grid is to use so-called homogeneous coordinates. A point φ(x,y,z) of the plane is identified by its three coordinates (x,y,z), but to avoid any ambiguity, we keep the letter φ to differentiate between points (x,y,z) in space and points φ(x,y,z) in the plane. We obviously have φ(x,y,z)=φ (x+k,y+k,z+k). Adding k changes the depth of the point in the (1,1,1) direction without changing its projection. This notion of depth was put forward by the mathematician William Thurston under the name of height, which we use from now on, knowing that it is the height in the (1,1,1) direction.
The sets ^0 and _n ^0 of the vertices of the triangular grids and _n are respectively the projections of the vertices
of □ and □ _n. The sets of edges ^1 and _n ^1 of the triangular grids and _n are the projections of the sets of edges □ ^1 and □ _n ^1. From any vertex in ^0, we have six edges. Their directions are φ(1,0,0), φ(0,1,0), φ (0,0,1) and their opposite.
The faces of the and _n grids, whose sets are ^2 and ^2 _n, are not projections of the faces of the □ or □ _n complexes, but triangles. We have two types of triangles. All have a vertical edge, but some point to the left and others to the right. We call them left or right.
A calisson (or rhombus or lozenge) is the φ projection of a face of the □ grid. These are lozenges obtained by joining a left triangle to an adjacent right triangle of . As the faces of □ have three directions, we have three types of calissons:
blue, red and yellow calissons are respectively the projections of faces of normal direction (1,0,0), (0,1,0) and (0,0,1).
The set of calissons of the grids and _n are denoted and _n. We have =φ (□ ^2) and _n=φ (□ _n ^2).
§.§ Statements and Results
With previous notations, original Olivier Longuet's calissons puzzle can be stated as follows.
* Input: an integer n and a subset X⊂ ^1 _n of edges of the triangular grid.
* Output: a tiling of _n by 3n^2 calissons so that (i) no edge of X is overlapped by the interior of a calisson and (ii) the two calissons adjacent to any edge of X have different colors.
Condition (i), called the non-overlap condition, is a natural condition in the definition of a tiling. Condition (ii), which we call the saliency condition, takes on its full meaning in dimension 3, where it means that the edges of X are salient edges of the stepped surface associated with the solution.
The initial problem we are interested in is to determine the complexity of the calissons puzzle. Passing through the notion of stepped surfaces defined as a cut of a DAG, we show the following theorem.
An instance of the calissons puzzle can be solved with an algorithm of complexity O(n^3).
The algorithm that we use is called the advancing surface. It can be implemented directly on a printed puzzle with a pencil and a rubber.
This first calissons puzzle is, however, a bit frustrating because there is no specific reason to be interested only in tiling the triangular grid in the hexagon _n. This class of hexagonal puzzles is merely a warm-up before extending the puzzle to more general regions.
The extended version of the puzzle takes as parameters a region R to be tiled and a set X of imposed salient edges.
* Input: a region R⊂^2 and a subset X⊂ ^1 of edges of the triangular grid.
* Output: a calisson tiling of the region R so that (i) no edge of X is overlapped by the interior of a calisson and (ii) the two calissons adjacent to any edge of X have different colors.
We show how to solve this puzzle without using complex algorithms. The tools that allow us to solve it are in fact two of the simplest graph algorithms: the computation of a connected component and the Bellman-Ford algorithm for computing the distances of the vertices of a graph from a source <cit.>.
This stems from the extremely simple structure of the calisson tilability problems that William Thurston highlighted in the early 1990s.
We rewrite our general tilability problem in three different ways in Theorem <ref>.
The exact statement requires notations introduced in the later, but without going into the details,
the existence of a solution of the extended calissons puzzle is equivalent to the existence of a cut in a graph, itself equivalent to the non-existence of a descending path, and finally to the non-existence of an absorbing cycle in a weighted projected graph.
The DAG cut formulation can be resolved by computing a connected component, while the absorbing cycle can be detected with the Bellman-Ford algorithm.
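As a reminder, the following minimal sketch shows the classical Bellman-Ford detection of an absorbing (negative-weight) cycle on an edge list; initializing all distances to 0 plays the role of a virtual super-source, so a cycle anywhere in the graph is found.

def has_absorbing_cycle(num_vertices, edges):
    """edges is a list of (u, v, w) triples with possibly negative weights w."""
    dist = [0.0] * num_vertices            # all-zero start = virtual super-source
    for _ in range(num_vertices - 1):
        for u, v, w in edges:
            if dist[u] + w < dist[v]:
                dist[v] = dist[u] + w
    # Any edge that still improves after |V|-1 rounds certifies an absorbing cycle.
    return any(dist[u] + w < dist[v] for u, v, w in edges)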
By solving the general tilability problem, we revisit Thurston's legacy in the light of computer science, with very classical structures (DAGs, cuts, absorbing cycles) and classical algorithms.
We decompose the problem into two classes of instances depending on whether the region R is finite or not.
In the case where the region R is simply connected and finite, we denote its boundary by ∂ R and generalize the advancing surface algorithm from the hexagonal puzzle to this extended setting. It leads to the next result.
Any instance of the extended calissons puzzle for a finite, simply connected region R can be solved with an algorithm of complexity O(|∂ R |^3).
In the case of an unbounded region with no holes, the question is not to provide an explicit tiling of R but to determine whether the instance admits a solution. The infinity of the region R introduces an obstacle, namely the computation of distances in an infinite graph. When this obstacle is removed, as it is for the infinite triangular grid, we use the absorbing cycle formulation to show the following result:
Any instance of the extended calissons puzzle on the entire triangular grid can be solved with an algorithm of complexity O(|X|^3).
Following this introduction, the paper is organized into five sections. Section <ref> presents William Thurston's legacy on the question of calisson tilability.
Section <ref> shows that standard methods fail to solve the calissons puzzles. Then, contrary to usual practice, we do not present the general theory first and then apply it to the particular case of the hexagonal puzzle: we first present in Section <ref> how to solve an instance of the hexagonal puzzle. Section <ref> ends the paper with the extended version and its resolution through equivalent propositions.
§ THURSTON'S LEGACY
One of the questions explored by John Conway and William Thurston is whether a region is tilable by a given set of tiles, a question that applies to the triangular grid with calissons. John Conway gave an algebraic expression to the tilability problem by reducing it to the word problem. This problem consists in determining whether the word at the edge of the region represents the neutral element in the group generated by elementary displacements equipped with relations defined by the boundary of the tiles <cit.>.
Thurston revisited Conway's work in the papers <cit.> by introducing a notion of height.
§.§ Height in the triangular grid .
Height is naturally defined in the three-dimensional space ℝ^3. We define it according to the direction (1,1,1). The height of a point (x,y,z) ∈ □^0 is h(x,y,z) = x+y+z. We cannot define the height of a point of the grid in an absolute manner, but we can define it in relative terms for points on a path.
Consider a path δ made up of consecutive points δ _i∈ ^0 linked by edges {δ_i , δ _i+1}∈ ^1. This path can be lifted in □ to a path γ, a consecutive sequence of points γ _i∈□ ^0 such that φ(γ _i)=δ_i and {γ _i,γ _ i+1}∈□ ^1. This lift is not unique, as it can be made at different heights, but it is unique up to translation by a vector (k,k,k). The height differences between the points γ _i are therefore independent of the chosen lift. If we set the height of γ _0 to h(γ _0)=0, we have a sequence of heights h(δ _i) defined by h(δ _i)=h(γ _i). The heights of the vertices on the δ path can be computed directly in the triangular grid. A step in the directions -φ(1,0,0), -φ(0,1,0), or -φ(0,0,1) increases the height by 1, while a step in the directions +φ(1,0,0), +φ(0,1,0), or +φ(0,0,1) decreases the height by 1 (Fig. <ref>).
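The relative heights along a path can be sketched as follows; each planar step is represented here by its lift, a signed unit vector of ℤ^3, so the height is simply the running sum of coordinates h = x + y + z (the direction labels are placeholders, not the paper's notation).

# Lifts of the six edge directions of the triangular grid (placeholder labels).
STEP = {
    "+x": (1, 0, 0), "-x": (-1, 0, 0),
    "+y": (0, 1, 0), "-y": (0, -1, 0),
    "+z": (0, 0, 1), "-z": (0, 0, -1),
}

def height_profile(steps):
    """Relative heights h = x + y + z along a lifted path, starting from 0."""
    h, heights = 0, [0]
    for s in steps:
        h += sum(STEP[s])          # each step changes the height by +1 or -1
        heights.append(h)
    return heights

# A closed loop in the plane comes back to its starting height.
assert height_profile(["+x", "+y", "-x", "-y"])[-1] == 0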
§.§ Tilability Characterization
William Thurston has left his mark on problems involving the tilability of a region by calissons. We recall the two main results. The first theorem characterizes simply connected regions R tilable by calissons.
A simply connected region R⊂ is tilable by calissons if and only if for any pair u,v of vertices on the edge of R, we have h(u)-h(v)≤ d(u,v) where h denotes the height computed from a vertex on the edge of R and where d(u, v) is the distance between u and v in the graph with vertices ^0 ∩ R and edges oriented in the directions -φ(1,0,0), -φ(0,1,0) and -φ(0,0,1).
The second result is an optimal algorithm for determining whether a simply connected region can be tiled by calissons and providing a solution tiling if there exists one.
§.§ Thurston's Algorithm
The algorithm is illustrated in Fig. <ref>. It is a beautiful algorithm based simply on height computations. We decompose it into two steps.
* Start from a vertex on the boundary ∂ R ⊂ ^0 of the region R to be tiled, and set its height to 0. Then follow the edges of the boundary and increase the height by 1 for a step φ (1,0,0), φ (0,1,0), φ (0,0,1) or decrease it by 1 for a step -φ (1,0,0), -φ (0,1,0), -φ (0,0,1).
If, on returning to the starting point after the tour of R, the height is different from 0, then the region R is not tilable. If the height is 0 after one turn, proceed to the next step.
* The second step consists in progressively tiling the region R from its boundary. The remaining region to be tiled is denoted R' and its boundary ∂ R'.
The algorithm repeats the following routine. Select a vertex s of the path ∂ R' of minimum height. Tile it so that the vertices adjacent to s in the tiling have a larger height. In other words, the edges of the new calisson(s) from s must be directed by φ (1,0,0), φ (0,1,0) or φ (0,0,1). Then compute the heights of the new vertices of ∂ R'.
Repeat the second step until one of the following two situations is reached:
* An inconsistency arises because we want to overlap a vertex on the edge of R with a new vertex of smaller height. In this case, according to Theorem <ref>, there is no solution because h(u)-h(v) > d(u,v) for some pair of vertices u and v on the edge of R.
* In the second case, the region R is decimated until an empty R' region is obtained. The region R is tiled by calissons.
We have a symmetrical version of the algorithm in which vertices s of maximum height are tiled with calissons whose edges are directed from s by -φ (1,0,0), -φ (0,1,0), -φ (0,0,1).
These two versions of the algorithm respectively provide a maximum-height tiling and a minimum-height tiling.
The complexity of Thurston's algorithm is linear in the size of the region (O(|R|)), i.e. linear in the size of the solution tiling. It is optimal.
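To make the first step concrete, here is a minimal Python sketch of the boundary-height check (our illustration, not part of the original presentation); a boundary step is encoded simply by the sign of its height variation, following the convention above:

def boundary_heights(steps):
    # steps: one entry per boundary edge, +1 for a step phi(1,0,0), phi(0,1,0)
    # or phi(0,0,1), and -1 for a step in one of the opposite directions
    h, heights = 0, [0]
    for s in steps:
        h += s
        heights.append(h)
    return heights

def passes_first_step(steps):
    # the region can only be tilable if the height returns to 0 after one turn
    return boundary_heights(steps)[-1] == 0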
For more details on domino and calisson tiling problems, apart from Thurston's work <cit.>, there is of course a large literature on the subject. See, for example, <cit.> or Vadim Gorin's recent book <cit.>.
What else? Thurston's results have a definitive character, as they elegantly and optimally solve a natural geometric problem. Nevertheless, we take on the challenge of revisiting them in the light of the calissons puzzle.
The puzzle is more general than a simple tilability problem: it introduces other constraints and can be posed in an infinite region.
Thurston's algorithm cannot solve it.
This perspective, at the frontier of computer science and mathematics, with discrete structures and classical algorithms, provides an enlightening vision of the subject. It allows us to understand in depth the nature of Thurston's legacy... and to extend it a little further.
§ MATCHING AND 3-SAT
A reasonable idea for solving calissons puzzles is to use classical techniques from tiling problems. We already noticed that Thurston's algorithm cannot take into account the interior edges of X, nor the saliency constraints. It is therefore unable to solve the calissons puzzles.
However, there are other approaches, either used for tilability by dominoes or for general combinatorial problems. Two methods are worth examining. The first reduces the problem to 3-SAT, while the other involves the computation of a matching in a bipartite graph.
§.§ 3-SAT
The calissons puzzle is easily expressed as a 3-SAT formula.
Consider a variable a_c for each calisson c in _n. It is equal to 1 if the calisson c is included in the solution's tiling and 0 otherwise.
We have four classes of clauses.
* The first clauses express the conditions that all triangles of ^2 _n must be covered by at least one calisson. This constraint is expressed in the form of 3-clauses, since there are no more than three calissons covering a triangle. For each triangle t∈ ^2 _n, we impose a_c ∨ a_c'∨ a_c” where c, c' and c” are the calissons covering the triangle t (for boundary triangles, these are 2-clauses and even 1-clauses).
* The second class of clauses is still necessary to guarantee that we have a tiling: the tiles must not overlap. For each pair c,c' of calissons with a triangle in common, we impose ¬a_c ∨ ¬a_c' to ensure that they do not overlap.
* The third class of clauses expresses the non-overlap constraint (i) of the puzzle: the variables of the calissons that overlap an edge of X are set to 0 by unit clauses.
* The last class of clauses expresses the saliency constraints (ii).
Around an edge that is, for instance, covered by the interior of a yellow calisson, the red calisson c on one side imposes a blue calisson c' on the other side, and vice versa. We thus have the clauses ¬a_c ∨ a_c' and ¬a_c' ∨ a_c.
The number of variables and clauses is O(n^2). It provides a simple way of expressing the problem and solving it with a solver. As the 3-SAT problem is NP-complete, this reduction does not allow us to solve the calissons puzzle in polynomial time.
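As an illustration, the clause generation can be sketched in a few lines of Python (a sketch under our own encoding assumptions: calissons are frozensets of two adjacent triangles, var maps each calisson to a positive integer used as a DIMACS-style literal, forbidden lists the calissons overlapping an edge of X, and saliency_pairs lists the implications imposed by rule (ii)):

from itertools import combinations

def build_clauses(triangles, calissons, var, forbidden, saliency_pairs):
    clauses = []
    # each triangle must be covered by at least one calisson
    for t in triangles:
        clauses.append([var[c] for c in calissons if t in c])
    # two calissons sharing a triangle must not both be chosen
    for c, c2 in combinations(calissons, 2):
        if c & c2:
            clauses.append([-var[c], -var[c2]])
    # rule (i): calissons overlapping an edge of X are excluded (unit clauses)
    for c in forbidden:
        clauses.append([-var[c]])
    # rule (ii): the calisson on one side of a covered edge imposes the one
    # on the other side, and conversely
    for c, c2 in saliency_pairs:
        clauses.append([-var[c], var[c2]])
        clauses.append([-var[c2], var[c]])
    return clauses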
§.§ Matching
A classic, non-exponential approach to compute tilings by dominoes or dimers (and calissons are dominoes made up of two adjacent triangles) is to compute a matching in the adjacency graph of triangles (see for example <cit.>). This approach is illustrated in Fig. <ref>.
From the set of edges X of the calissons puzzle instance, we create the graph Γ whose vertices are the triangles of the grid and whose edges are the pairs of adjacent triangles that are not separated by an edge of X. A perfect matching of the graph Γ is computed. If there is no perfect matching, the calissons instance admits no solution. If there is a perfect matching M of Γ, then M provides a tiling of _n that satisfies the edge non-overlap rule (i) but may violate the saliency conditions (ii), as is the case on the right-hand side of the example shown in Fig. <ref>.
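This construction is easy to express with networkx (an illustrative sketch, assuming the adjacency data of the triangular grid is precomputed; up- and down-pointing triangles form the two sides of the bipartition):

import networkx as nx

def tiling_by_matching(triangles, adjacent_pairs, X_edges, up_triangles):
    # adjacent_pairs: iterable of (t1, t2, shared_edge) for adjacent triangles
    G = nx.Graph()
    G.add_nodes_from(triangles)
    for t1, t2, shared_edge in adjacent_pairs:
        if shared_edge not in X_edges:
            G.add_edge(t1, t2)
    matching = nx.bipartite.hopcroft_karp_matching(G, top_nodes=up_triangles)
    if len(matching) != len(triangles):
        return None                       # no perfect matching: no solution
    # each matched pair of triangles is one calisson; rule (ii) is not enforced
    return {frozenset((t, matching[t])) for t in matching}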
To take into account the saliency constraints (ii), we might want to adapt the matching algorithms so as to guarantee that if one edge is chosen, so is another. However, this seems unrealistic, as it can easily be shown that such forced associations make the matching problem harder. The problem of computing an intersection-free matching in a geometric graph, for example, is NP-hard, which is all the more detrimental as we can easily reduce to this problem.
In other words, the matching approach does not solve the calissons puzzle.
§ THE ADVANCING SURFACE ALGORITHM FOR SOLVING
For solving the calissons puzzles, we start by introducing the 3D notion of stepped surfaces of _n (a term used in <cit.>). We define them as the cuts of a DAG whose vertices are in □_n. Then we express the constraints induced by the non-overlap rule (i) and the saliency conditions (ii) on the DAG in order to analyse the problem from this perspective.
§.§ Stepped Surface of _n as DAG cuts
We first introduce the stepped surfaces above _n.
We complete the set of cubes □ ^3 _n (its cubes are (x,y,z)+C with 0≤ x ≤ n-1, 0≤ y ≤ n-1, 0≤ z ≤ n-1) with two other sets denoted _n and _n. The set _n contains the cubes (x,y,z)+C with two integral coordinates between 0 and n-1 and the last coordinate equal to -1.
The set _n contains the cubes (x,y,z)+C with two integral coordinates between 0 and n-1 and the last coordinate equal to n.
Then we introduce a first DAG structure =̋(□ ^3,∧) on the whole set of cubes □ ^3 with an edge from any cube (x,y,z)+C to the cubes (x+1,y,z)+C, (x,y+1,z)+C and (x,y,z+1)+C. We use the notation ∧ for the set of edges since they are ascending with respect to the height x+y+z. Notice that each edge of this DAG can be represented geometrically by the common face of the two cubes.
We denote_̋nthe induced graph of$̋ on the set of vertices _n ∪□ _n ∪ _n. In other words, we have the DAG _̋n=( _n∪□ ^3_n ∪ _n, ∧). The transitive closure of _̋n is a partial ordered set (poset). This partial order relation is denoted (x,y,z)+C ≤ (x',y',z')+C so that we have (x,y,z)+C ≤ (x',y',z')+C if and only if x≤ x' and y≤ y' and z≤ z'. Incomparable cubes are denoted by (x,y,z)+C ∼ (x',y',z')+C.
DAG Cut.
There is a general notion of a cut in a graph that we call graph cut. It is a partition of the set of vertices into two parts, and we are particularly interested in the edges going from one part to the other.
There is another notion of a cut in a poset or DAG which is more restricted and that we call equivalently DAG cut or poset cut.
In a poset, a set is said to be low if it contains every element less than or equal to one of its elements, and high if it contains every element greater than or equal to one of its elements <cit.>. Given a low part of a poset, its complement is necessarily high, and vice versa.
A poset cut is then a non-trivial partition (no empty parts), of the set of vertices into a lower part L and an upper part H. A DAG cut of a DAG Γ is the poset cut of the transitive closure of Γ. Rather than focusing on the subsets L and H, it is natural to look at the edges of the DAG from L to H.
A stepped surface of _n is the set of the edges E⊂∧ of the DAG =̋(□ ^3 , ∧) going from the lower part to the upper part of a DAG cut of _̋n separating _n from _n (Fig. <ref>).
The projection φ is a one-to-one map between the stepped surfaces and the calisson tilings of _n. This theorem can be seen as folklore. We do not prove it, but a closely related theorem (Theorem <ref>) relating tilings and cuts is proved later in the paper.
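The definition is straightforward to express computationally; the following sketch (our illustration, with cubes encoded as coordinate triples) returns the stepped surface associated with the lower part of a DAG cut:

def ascending_edges(cube):
    x, y, z = cube
    return [((x, y, z), (x + 1, y, z)),
            ((x, y, z), (x, y + 1, z)),
            ((x, y, z), (x, y, z + 1))]

def stepped_surface(lower_part):
    # edges of the DAG going from the lower part to its complement
    L = set(lower_part)
    surface = set()
    for c in L:
        for _, d in ascending_edges(c):
            if d not in L:
                surface.add((c, d))
    return surface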
§.§ Constraints
Thanks to the one-to-one map φ between calisson tilings and stepped surface, the calissons puzzle consists in determining a stepped surface that satisfies the constraint (i) of not overlapping the edges of X and the saliency constraint (ii).
Given an edge e in X, what condition do the constraints (i) and (ii) imposed by e induce on the DAG cuts?
The translation of these constraints onto a stepped surface can be expressed through the following lemmas:
We consider a vertical edge e=φ(x,y,z),φ(x,y,z+1)∈ ^1 _n. The cubes of □ ^3 one of whose projected faces is adjacent or overlapping e are denoted L_k=(x+k,y+k-1,z+k)+C, R_k=(x+k-1,y+k,z+k)+C,
F_k=(x+k,y+k,z+k)+C and B_k=(x+k-1,y+k-1,z+k)+C (Fig. <ref>).
The calisson tiling of the stepped surface S satisfies the non overlapping constraint (i) of e if and only if the cut S does not separate a pair of cubes F_k and B_k+1.
The calisson tiling of the stepped surface S satisfies the saliency constraint (ii) of e if and only if the cut S separates neither a pair of cubes F_k and B_k+1 (this is constraint (i)), nor a pair of cubes L_k and R_k.
The oriented edges F_k→ B_k+1, B_k+1→ F_k, L_k → R_k and R_k → L_k of $̋ are said unbreakable.
The key point is that the union of the four cube sequences L_k, R_k, B_k, F_k (Fig. <ref>) is almost totally ordered according to the partial order relation ≤ given by the transitive closure of the DAG =̋(□ ^3 , ∧ ). Only L_k and R_k are incomparable.
Then we have
⋯≤ F_k-1≤ B_k≤ L_k∼ R_k≤ F_k≤ B_k+1≤ L_k+1∼ R_k+1≤ F_k+1≤⋯
We have a chain of cubes, and any stepped surface intersects it at a certain level. Up to an index shift, we have four different DAG cut cases, illustrated in Fig. <ref>, each giving a different configuration around the edge e:
* The DAG cut separates B_k and the two cubes L_k∼ R_k. In this case, the stepped surface contains the face common to B_k and L_k and the face common to B_k and R_k. These are the two faces adjacent to e and they are of different colors. Conditions (i) and (ii) are satisfied.
* The DAG cut separates the two cubes L_k and R_k. We have the sub-case where L_k is under the DAG cut/behind the surface and R_k is in front of the stepped surface. In this sub-case 2, the stepped surface contains the face common to L_k and F_k and the face common to B_k and R_k. These are the two faces adjacent to e and they are both red.
Then there's the sub-case where R_k is under the DAG cut/behind the surface and L_k is in front of the stepped surface. In this sub-case 2', the stepped surface contains the face common to B_k and L_k and the face common to R_k and F_k. These are the two faces adjacent to e and they are both blue.
In these two sub-cases, condition (i) is satisfied and condition (ii) is violated.
* The DAG cut separates the two cubes L_k∼ R_k from F_k. In this case, the stepped surface contains the face common to L_k and F_k and the face common to R_k and F_k. These are the two faces adjacent to e and they are of different colors. In this case, both conditions (i) and (ii) are satisfied.
* The DAG cut separates F_k and B_k+1. In this case, the stepped surface contains the face f common to F_k and B_k+1. The projected calisson φ (f) of this face overlaps the edge e. In this case, condition (i) is violated.
Rockmate: an Efficient, Fast, Automatic and Generic Tool for Re-materialization in PyTorch

Xunyi Zhao (Inria, equal contribution), Théotime Le Hellard (ENS, equal contribution), Lionel Eyraud-Dubois (Inria), Julia Gusak (Inria), Olivier Beaumont (Inria)

Inria Center at the University of Bordeaux; École Normale Supérieure, PSL University, Paris.
Correspondence: Xunyi Zhao, Lionel Eyraud-Dubois.
We propose Rockmate to control the memory requirements when training
PyTorch DNN models. Rockmate is an automatic tool that starts from
the model code and generates an equivalent model, using a predefined
amount of memory for activations, at the cost of a few re-computations.
Rockmate automatically detects the structure of computational
and data dependencies and rewrites the initial model as a sequence of
complex blocks. We show that such a structure is widespread and can be
found in many models in the literature (Transformer based models, ResNet,
RegNets,...). This structure allows us to solve the problem in a fast
and efficient way, using an adaptation of Checkmate (too slow on the
whole model but general) at the level of individual blocks and an
adaptation of Rotor (fast but limited to sequential models) at the level
of the sequence itself. We show through experiments on many models
that Rockmate is as fast as Rotor and as efficient as Checkmate,
and that it allows in many cases to obtain a significantly
lower memory consumption for activations (by a factor of 2 to 5)
for a rather negligible overhead (of the order of 10% to 20%).
Rockmate is open source and available at <https://github.com/topal-team/rockmate>.
§ INTRODUCTION
In recent years, very large networks have emerged.
These networks induce huge memory requirements both
because of the number of parameters and
the size of the activations that must be kept in memory to perform
back-propagation.
Memory issues for training have been identified for a long time. Indeed,
training is usually performed on computing resources such as GPUs or TPUs, on
which memory is limited. Therefore, different approaches have been proposed.
The first category of solutions consists in relying on parallelism. Data
parallelism allows to distribute the memory related to the activations, at the
cost of exchanging the network weights between the different resources using
collective communications such as MPI_AllReduce which can be expensive
for networks such as those of the GPT2 class. On the contrary, model
parallelism allows to distribute the weights of the network, at the cost of the
communication of activations and memory overheads in case it is used in a
pipelined way, and its scalability is limited by nature.
The second category of solutions is purely sequential. Offloading makes it
possible to move some activations computed during the forward phase from the
memory of the accelerator (GPU or TPU) to the memory of the CPU, and then to
prefetch them back at the appropriate moment into the memory of the GPU during
the backward phase. This solution therefore consumes bandwidth on the PCI-e bus
between the CPU and the accelerator, which is also used to load training data.
Another solution, called re-materialization, consists in deleting from
accelerator memory some activations computed during the forward phase and then
recomputing them during the backward phase. This approach does not consume
communication resources, but it does induce a computational overhead.
In the present paper, we focus on the latter re-materialization approach on a
single GPU or TPU, which is sufficient in practice for the size of the networks
we consider in the experiments and which can be trivially combined with data parallelism to
accelerate training. In this framework, for a given memory constraint, the optimization problem consists in finding a
sequence of computing, forgetting and
recomputing actions which allows to perform the training for given
inputs
and batch sizes, while fulfilling the memory constraint and minimizing the
computational overhead.
To find the optimal sequence, different approaches have been
proposed. In the first approach, like in
Rotor <cit.>, it is assumed that the dependencies within the model
have a particular structure, typically a sequence of
operations. In this case, using dynamic programming, it is possible to
find the optimal order of computations in reasonable time. On the other hand, in
the case where the computations performed by the model do not
naturally consist in a sequence of operations, this approach requires
to aggregate elementary operations into complex blocks to make the
chain structure emerge. In this case, re-materialization decisions
have to be made at the level of blocks, which reduces optimization
opportunities. The left of Figure <ref> shows the graph of a GPT-like model, where each block
corresponds to one half of a transformer block. On such a graph, this
approach has to decide during the forward phase whether to keep all
internal activations or to delete all of them (and to recompute them during
the backward phase).
In the case of general graphs that are not structured as a sequence of
elementary operations, another approach has been proposed in
Checkmate <cit.>. It consists in describing the
operations corresponding to both forward and backward phases as a
Directed Acyclic Graph (DAG) and to find the optimal solution through
solving an Integer Linear Program (ILP).
The number of integer
variables is proportional to V × E, where V is the number of
operations and E is the number of arcs of the DAG. Hence, a major
shortcoming of this approach is the computational time induced by
solving the ILP. Typically, even using commercial solvers such as
CPLEX or Gurobi, it is not possible (in one day of computation) to
consider GPT2 models with more than 10 transformer blocks (see Figure <ref>),
while classical instances include several dozens.
In the present paper we propose Rockmate, a new re-materialization strategy,
in which models are seen as a sequence of blocks (in the sense of
Rotor), but where several optimal strategies are pre-computed for each
block (using a Checkmate-like approach). A simple example of the
resulting execution is shown on the right of
Figure <ref>, where in each block, a different set of
activations is saved, resulting in different backward execution
times. In reality, for a GPT model, Rockmate divides each block into 9
or 6 operations for the first or second half of the transformer block
respectively, and the execution can also contain re-executions of some
blocks.
As our experimental results demonstrate, for a
large variety of networks Rockmate can compute
near-optimal solutions (close to Checkmate quality in terms of
throughput) in a reasonable time (close to Rotor runtime, faster than Checkmate), by
combining the advantages of both approaches. A preview shown on
Figure <ref> presents the throughput and
solving time of all three solutions for GPT neural networks with
varying number of transformer blocks.
Another contribution is that we have built a framework which can be easily applied to any PyTorch nn.Module. It contains a complete implementation of our algorithm (Algorithm <ref>), including the main phases with the newly proposed computation-data graph builder, the integer linear programming and the dynamic programming techniques described in Section <ref>.
Rockmate takes the model as input and automatically builds the data-flow
graph with measurements (computation time, output size and peak memory
of each operation, etc.). The optimal schedule is then determined based on the graph
and used to build a new nn.Module which runs forward and backward phases
within a given memory constraint. In Section <ref>, we demonstrate that
the resulting new GPT2 models can achieve the same result as the original ones
with 25% computational overhead, while using only 25% of the original memory
needs to store the activations.
Note that all the benefits
of Rockmate do not induce any accuracy loss for the model:
given the same batch of training data, the Rockmate model will
compute exactly the same gradient values for every trainable parameter compared to the
original model. Hence, both models achieve the same accuracy after the same number
of training epochs.
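For illustration, a typical use of such a tool could look as follows (a hypothetical sketch: we assume a top-level entry point Rockmate(model, sample, budget) returning the rewritten module; the exact name and signature in the released package may differ):

import torch
from torchvision.models import resnet101
from rockmate import Rockmate   # assumed entry point

model = resnet101().cuda()
sample = torch.randn(8, 3, 224, 224, device="cuda")

# ask for an equivalent module whose activations fit in roughly 2 GB
rk_model = Rockmate(model, sample, budget=2 * 1024**3)

out = rk_model(sample)      # forward pass within the activation budget
out.sum().backward()        # backward replays the planned (re)computations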
§ RELATED WORKS
During training, memory requirements are very demanding. On the one hand, they
come from the storage of the network weights, and the associated intermediate data,
such as gradients and optimizer states. On the other hand, memory requirements also come from the
storage of the activations associated with gradient descent, since (almost) all
the results computed during the forward phase must be kept in memory until they
are used by the gradient computation during the backward phase.
There are different strategies for saving memory when training Deep Neural Networks (DNNs), adapted to
these different memory requirements. We can differentiate between strategies
that rely on the use of parallelism (data parallelism, model parallelism),
those that use the possibility of transferring data to another device than the
memory of the accelerator (denoted as offloading or paging in the literature),
and those that rely on the redundant computation of activations deleted from
the memory (denoted as checkpointing or re-materialization in the literature).
Of course, these strategies can naturally be combined since they rely on
different resource consumption (use of several computing resources for
parallelism, external storage for offloading or re-computations for
re-materialization). This combination is more or less difficult because some
resources are consumed by several approaches (computations on the GPU or the
TPU of course, but also communications on the PCI-e bus or on the NVLink). In
the rest of this section, we will focus mainly on re-materialization
strategies after having briefly discussed the other approaches.
Among the most popular parallel strategies for DNN training are data
parallelism and model parallelism. Data parallelism is based on the idea of
performing forward and backward phases in parallel on different data and on
several GPUs. The gradients computed on the different GPUs must then be
reduced, which requires collective communication of all the weights. This
approach used in isolation was very popular in more or less
synchronous <cit.> or
asynchronous <cit.> variants for convolutional networks
in which the network weights were small compared to the activations, but is nowadays
mainly used in combination with model parallelism. Model parallelism
consists in distributing layers and weights over different resources and
communicating forward activations and their gradients between GPUs. This
approach has been popularized in frameworks like GPipe <cit.>,
Pipedream <cit.> or Varuna <cit.> and its complexity has been
studied in <cit.>. It is often combined with systematic
re-materialization such as in GPipe <cit.>.
Offloading (sometimes denoted as paging) can be applied to both network weights
and activations. The idea is simply to remove the memory load from the GPU and
store data in CPU memory, that is typically much larger than GPU memory. This
idea is relevant both for
activations, computed during the
forward phase but which will not be used for a long time by the backward phase <cit.>
and for network weights <cit.>, which are also used only
once during the forward phase and once during the backward. Offloading can also
be combined with re-materialization <cit.>, which is
particularly relevant for decentralized training on tiny
devices <cit.>.
Historically, re-materialization strategies have their origins in the
checkpointing techniques developed in the context of automatic differentiation
(AD). Because of this application context, these works have focused mainly on
the case of homogeneous chains, i.e. models consisting of a sequence of
identical blocks. In this context, it is possible to rely on dynamic
programming to find optimal solutions and even closed form formulas can be
derived to automatically find the activations to keep and the ones to delete.
From the complexity point of view, it has been shown that the problem is
NP-complete as soon as one considers graphs more general than chains in
<cit.>. In the re-materialization literature dedicated to DNN
training, we can distinguish approaches that focus on the case of sequential models
and those that consider more general graphs.
In the case of sequences, in <cit.>, the sequence of length N
is divided into √(N) equal-length segments of length √(N),
and only the input of each segment is
materialized during the forward phase.
This strategy is implemented in PyTorch in torch.utils.checkpoint.
Rotor <cit.> provides optimal solutions in the case of fully
heterogeneous sequential models. Rotor is based on the evaluation of the
parameters of each block of the sequence (computational and memory costs), on the
resolution of the re-materialization problem using dynamic programming and on
the implementation of the resulting computation sequence. Rotor is fast, but
it is limited to sequential models. It is one of the two ingredients (with
Checkmate <cit.> described below) of the present contribution.
Other contributions target more general graphs. For example, the approach
described in <cit.> is based on the computation of a
tree-width decomposition of the graph to determine the minimum computational
cost associated with the minimum possible memory footprint.
In <cit.>, an enumeration of subgraphs is required to design
efficient re-materialization strategies. In general, finding the evaluation
order of the graph that minimizes the memory consumption is a hard problem,
independently of any re-materialization strategy, as demonstrated
in <cit.>. The case of (non-optimal) dynamic re-materialization,
especially in the case where the input size is unknown in advance, has been
addressed in <cit.>.
An important contribution in the case of general graphs
has been provided in Checkmate <cit.>, which proposes an
Integer Linear Program (ILP) to find the optimal re-materialization sequence
for general graphs, which is consistent with the NP-Completeness results
of <cit.>. An important limitation of Checkmate (see
Section <ref>)
is the long solving time of the ILP solver, which limits its use to relatively small graphs.
This paper addresses this
limitation of Checkmate by proposing to combine it with Rotor.
§ ROCKMATE
§.§ Sketch of the Algorithm
As explained in Section <ref>, the main idea of this paper is to
combine the ideas of (i) Checkmate, which finds good solutions in the case of
general graphs but is slow, and (ii) Rotor, which finds the optimal solution
only in the case of sequential networks, but is fast.
The GPT neural network used as a motivational example above is not completely sequential, but it can be decomposed into a sequence of blocks, where each block contains
several operations. It is a typical example where, in order to use Rotor, it is necessary to
aggregate all the operations of the same block together. Rotor
therefore decides at the scale of the whole block whether to keep all
the data or to delete them all during the forward phase. Checkmate, on
the other hand, sees the whole graph describing the model and can
therefore decide, independently and at the level of each operation,
whether to keep its data or not.
The solution we propose is called Rockmate; a pseudo-code is provided
in Algorithm <ref> and explained below. The main idea is
to apply Checkmate inside each block and to apply
Rotor on the complete sequence of blocks. For this purpose,
it is necessary to obtain the complete graph of all operations of the
neural network, and to adapt both Checkmate and Rotor to this new
setting.
The first phase is called rk-GB (for GraphBuilder). It
occurs on line 2 of Algorithm <ref> and is described in more
details in Section <ref>. rk-GB takes as input a model expressed as
a PyTorch nn.Module and automatically
(i) extracts the Directed Acyclic Graph
(DAG) of all the operations performed in the model, (ii) divides it
into a sequence of blocks and (iii) detects all the blocks which have
identical structures. For each unique block, the processing times of
all operations and the sizes of all intermediate data that are
produced by these operations are measured. These measurements (graphs
of each block, labeled with the execution times and memory footprints
of the produced data) contain all the necessary information to find
the re-materialization sequence.
In the second phase of Rockmate (line 3-8 of
Algorithm <ref>), we consider each single block
independently. As we saw in the example in Figure <ref>,
Rotor fails to compute very good re-materialization strategies because
it can only choose between two options: keep all or delete all
activations in the block. In Rockmate, we use a refined version of
Checkmate to generate a larger set of re-materialization
strategies. This refined version is denoted as rk-Checkmate and
described in Section <ref>.
A re-materialization strategy is characterized by (i) the memory peak
during the execution of the block (either during forward or backward)
and (ii) the total size of the internal activations of the block that
are kept between the forward phase and the backward phase. The first
one ensures that this strategy can be executed within a given memory
limit. The second one allows the dynamic program to know how much
memory will be left for the next blocks. The number of different
options to consider is a parameter of Rockmate. We analyze its effect
on performance in Section <ref> and show that
quantizing each parameter into 20 different thresholds is enough to
get good solutions in practice. This leads to at most 400 different
strategies in total for each block. Since rk-Checkmate is applied at
the level of a block (and not on the whole network), the
corresponding graph is small enough that the runtime remains small,
even for generating the whole family of strategies. Moreover, as
rk-GB automatically detects identical blocks, rk-Checkmate is
performed only on unique types of blocks (for instance, GPT2 models
only involve five unique types of blocks). In practice, it takes less
than 2 minutes to solve rk-Checkmate 400 times for a rk-block in GPT2,
while it's impossible to solve the entire network with the ILP method
because of its exponential complexity.
The third phase of Rockmate (line 9 of Algorithm <ref>,
described in Section <ref>) is called rk-Rotor and computes the global
re-materialization strategy. rk-Rotor features an adapted dynamic
program of Rotor that, instead of having two solutions per block, can exploit the different
re-materialization strategies computed during the second phase. The
output of rk-Rotor therefore consists in a schedule which
describes which block should be computed, in which order, and with
which re-materialization strategy. If necessary, some blocks can be
computed without keeping any data at all, and thus be recomputed later
(possibly several times).
Finally, the fourth phase (line 10 of Algorithm <ref>,
described in Section <ref>) is called rk-Exec. It transforms this schedule into a new
PyTorch , which performs all the
corresponding elementary operations in the correct order. The
resulting module computes exactly the same gradients as the original
version while respecting a global constraint on the memory usage of
activations, at the cost of duplicating some computations.
§.§ Phase 1: rk-GB, Graph Builder
A typical training iteration of a neural network can be separated into forward and backward phases. Both phases can be represented by
a data-flow graph. The computational graph
is explicit in TensorFlow, for which Checkmate was originally implemented.
In PyTorch, however, graphs need to be obtained by certain tools.
We developed a tool named rk-GraphBuilder (rk-GB)
which takes as input a torch.nn.Module and
an example input for it, and builds the data-flow graph of the module.
Having an example input is necessary to inspect the time and memory cost
of all the operations used during forward and backward phases.
Obtaining the graph
rk-GB does not require any modification or annotation of the module source code,
instead it uses torch.jit.trace_module to trace the forward execution of the
module on the example input. This function executes the forward
code and provides the list of all primitive operations used.
Based on this list, we build a forward graph where each node represents one assignment.
However, multiple variables may share the same memory space
due to view and in-place operations in PyTorch.
Such variables would thus be kept or
removed together when performing re-materialization.
Therefore, rk-GB merges all the nodes sharing the same memory space
to obtain a simplified forward graph.
For a 12-layer GPT model, the number of nodes decreases from 934 to
185 after simplification. The simplified forward graph is further cut
around 1-separators: a node is a 1-separator if by removing it, we
obtain a disconnected graph (1 node to separate the graph).
This produces a sequence of blocks, as required by rk-Rotor.
For a 12-layer GPT model, this results in 26 blocks, where each
Transformer layer is separated into a Multi-Head Attention block
and an MLP block.
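A possible way to perform this cut, sketched below with networkx (our illustration, not the rk-GB code), is to take the articulation points of the undirected version of the simplified forward graph and to group the nodes lying between two consecutive separators in topological order:

import networkx as nx

def cut_into_blocks(G, source, sink):
    # 1-separators of the simplified forward DAG G (plus its endpoints)
    seps = set(nx.articulation_points(G.to_undirected())) | {source, sink}
    order = [v for v in nx.topological_sort(G) if v in seps]
    blocks = []
    for u, v in zip(order[:-1], order[1:]):
        between = {n for n in G
                   if u in nx.ancestors(G, n) | {n}
                   and v in nx.descendants(G, n) | {n}}
        blocks.append(G.subgraph(between).copy())
    return blocks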
Identical blocks
Afterwards, rk-GB goes through all the blocks to recognize identical
blocks, i.e. blocks whose computational graphs are the same.
Since rk-GB is deterministic,
two blocks representing the same function share the same graph structure,
including the same topological ordering of nodes.
Following this ordering, rk-GB checks equivalency node by node.
A group of identical blocks can be measured and solved
together to improve the solving time.
Identifying identical blocks is an optimization
of the Rockmate solving time but does not change
its solution quality. So even if some blocks are
wrongly declared as different by Rockmate,
this would not change the memory gains or the computational overhead.
For a 12-layer GPT model, this procedure identifies only 5 unique block types among the 26 rk-blocks produced after separation.
CD-graphs
One underlying assumption in the original Checkmate graph model is
that each operation has exactly one output data. However, when
several forward operations share the same input, the corresponding
backward operations contribute to the same data (by summing all the
contributions). This means that removing the result of one of these
backward operations has an impact on the other operations, which can
not be taken into account in the graph model of
Checkmate. Additionally, some elementary operations in PyTorch
actually create intermediate data (they are called
), which can be deleted independently of the
output of the operation.
For these reasons, we introduce a new graph called a CD-graph, which contains two categories of nodes: Computation and Data. A Computation node represents an operation, labeled with the time it takes and the temporary memory overhead during execution. A Data node represents a data tensor stored in memory. A Data node can be forgotten to free memory, and restored by recomputing the corresponding Computation nodes. An edge between a Computation node and a Data node represents the execution dependency between the operation and its output data tensors. The benefit of considering such a graph is to enable finer re-materialization, such as releasing the memory of a subset of the outputs of one operation.
The final product of rk-GB is a sequence of CD-graphs.
More details about rk-GB can be found in
Section <ref> of the Appendix.
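A minimal sketch of the two node kinds of such a graph is given below (names and fields are illustrative, not the actual rk-GB classes):

from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class ComputationNode:
    name: str
    time: float                      # measured execution time
    overhead: int                    # temporary memory used while running
    inputs: List["DataNode"] = field(default_factory=list)
    outputs: List["DataNode"] = field(default_factory=list)

@dataclass
class DataNode:
    name: str
    mem: int                         # size of the stored tensor
    produced_by: Optional[ComputationNode] = None
    required_by: List[ComputationNode] = field(default_factory=list)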
§.§ Phase 2: rk-Checkmate, Options at Block Level
Given a CD-graph, it is a non-trivial problem to find the
optimal execution schedule of all the operations within a given memory
limitation. To solve this problem, we use rk-Checkmate, an Integer
Linear Programming (ILP) adapted from
Checkmate <cit.>. Just like Checkmate, rk-Checkmate
requires a topological order of all the operations, which is provided
by rk-GraphBuilder. rk-Checkmate provides several improvements over
the original Checkmate formulation.
First, additional variables are introduced to represent the execution
of each separately from the memory allocation of each
. Constraints are also adapted to ensure that the
execution order follows the dependencies between computational nodes
and data nodes. In the case where one operation generates multiple
outputs, there are multiple s depending on the same
. Deleting these outputs is considered separately in
rk-Checkmate, whereas they are grouped together in the Checkmate
formulation. For example, this improvement is useful for an operation
which produces two large outputs, each required by a different
operation: with rk-Checkmate, it is possible to delete the second
output before performing the operation that requires the first output,
which reduces the memory usage.
Second, rk-Checkmate takes into account the temporary memory usage of
all operations: because of temporary data allocated and deleted during the operation,
the peak memory might be higher than the size of input and output.
Checkmate ignores this possibility, and
thus may produce solutions whose actual peak memory is higher than the
budget.
Finally, since rk-Checkmate is aware of the separation between forward
and backward phases, it is possible to include a constraint on the
memory usage when going from the forward to the backward phase. This
constraint expresses the limit M_save on the size of the
activations which are kept in memory between both phases of a block
(and thus, during the execution of the following blocks).
This memory occupancy is necessary to control the overall
memory cost of all the blocks.
More details about rk-Checkmate can be found in
Section <ref> of the Appendix.
For each block, rk-Checkmate will be applied with different values for
the memory budgets M_peak and M_save, as explained in
Section <ref>. We first compute the minimum and maximum
possible values for M_peak, by analyzing the memory usage of the
schedule which deletes activations as soon as possible, and of the
schedule which performs no recomputation, respectively. The number of
budgets is a hyperparameter of Rockmate whose effect is analyzed in
Section <ref>. The values of M_peak are evenly
spaced within [min_peak; max_peak]. Given one value
for M_peak, the values of M_save are evenly spaced within
[output_size; M_peak]. This ensures that all pairs (M_peak,
M_save) given to rk-Checkmate are relevant.
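The generation of the budget grid can be sketched as follows (illustrative; the actual Rockmate code may differ):

import numpy as np

def budget_options(min_peak, max_peak, output_size, nb_peak=20, nb_save=20):
    # evenly spaced peak budgets, and for each of them evenly spaced
    # "saved activations" budgets between the block output size and the peak
    pairs = []
    for m_peak in np.linspace(min_peak, max_peak, nb_peak):
        for m_save in np.linspace(output_size, m_peak, nb_save):
            pairs.append((m_peak, m_save))
    return pairs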
Note that different budgets may lead to the same optimal solution.
In practice, when the numbers of M_peak and M_save values are set to (20, 20) for GPT2, there are fewer than 30 unique solutions per rk-block.
Note that identical blocks are solved only once with the same budgets,
so that all identical blocks have the same set of block-level
execution options provided to rk-Rotor.
However, rk-Rotor sees all of these blocks as different parts of the sequence,
which just happen to have the same set of options.
In the resulting sequence, each of these identical blocks may be
executed with a different option in the output of rk-Rotor.
§.§ Phase 3: rk-Rotor, Global Sequence Generation.
Principle of Rotor
The main idea of the dynamic programming algorithm of Rotor is as
follows. An optimal solution for the forward-backward computation from
block s to t with memory m can be of two different types: either
the first block s is computed only once, or more than once. In the
first case, the computation starts with computing block s and
keeping all intermediate data, and continues with an optimal solution
for blocks s+1 to t (with less memory available). In the second
case, the computation starts with computing blocks s to s+i for
some i, stores the result of s+i, continues with an optimal
solution for blocks s+i to t, and finally recomputes from s to
s+i with an optimal solution for this part.
Note that no intermediate data is saved for blocks s to s+i.
An illustration of each
case is represented on Figure <ref>.
In each case, the subproblems that need to be solved have a smaller
value of t-s. Assuming that the solutions to these smaller problems
are known, the algorithm can make the choice which leads to the
smallest overhead among all valid choices, ie those for which
the memory usage is not higher than the budget m. We can thus
iteratively compute optimal solutions until we find the solution for
the complete model.
rk-Rotor
In the Rockmate context, we have several different options for the
first case: we can choose to keep more or less intermediate data for
the first block s. Each of these options leads to a different memory
usage for storing the intermediate data and for computing the backward
operation. There is thus a larger set of choices to choose from, but
the main idea is still there: assuming that solutions to all smaller
problems are known, we can select the option that yields the lowest
overhead among all options which respect the memory budget. This
improved case is represented at the bottom of Figure <ref>.
Complexity analysis
With a model that contains L blocks, and a memory of size M, the
Rotor algorithm has a complexity in O(L^3M): for each value of s,
t and m, there are O(L) choices to consider. In the Rockmate
case, with B budget options, the dynamic programming algorithm
considers O(L+B) choices at each step, and thus has a complexity in
O(L^2M(L+B)). More details about the rk-Rotor algorithm can be found
in Section <ref> of the Appendix.
Sub-optimality of the solution
Although both rk-Checkmate and rk-Rotor obtain optimal solutions
for the given sub-tasks, the final Rockmate solution is not always optimal
on the overall network. Two reasons can lead to sub-optimality
in Rockmate: (i) since the number of memory budgets is finite, only
a limited number of execution schedules are produced by rk-Checkmate.
(ii) in rk-Rotor, intermediate data is only used to improve the
execution time of the backward phase. However, if the forward phase of
a block is executed several times, it might be beneficial to save some
intermediate tensor on the first pass, and use it to compute the
output faster on subsequent passes. This possibility is not considered
in rk-Rotor: forward passes in case 2 do not save
intermediate data.
§.§ Phase 4: rk-Exec
Rockmate creates a PyTorch nn.Module that performs the forward-backward computation based on the optimal schedule produced by the algorithms described above.
The execution of the forward phase is based on the Python code obtained via torch.jit.trace_module.
For backward, the PyTorch autograd engine stores
the “computational graph” during the forward phase, which allows backward
computation from the output back to the input.
For the sake of clarity, we call this an autograd graph.
In Rockmate, we detach the operations during the forward phase,
so that the full network is represented as many small autograd graphs.
This allows the backward operations to be performed separately, thus
deletions of tensors can be easily inserted between two backward operations.
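The following sketch illustrates the underlying detach pattern (our simplified illustration, not the rk-Exec code): each block is run on a detached copy of its input, so that its backward can be triggered separately and tensors can be freed in between.

import torch

def forward_detached(blocks, x):
    saved = []                               # (detached input, output) per block
    for blk in blocks:
        inp = x.detach()
        inp.requires_grad_(True)
        out = blk(inp)                        # autograd graph limited to this block
        saved.append((inp, out))
        x = out
    return x, saved

def backward_detached(saved, grad_out):
    # walk the blocks backwards, one small autograd graph at a time
    for inp, out in reversed(saved):
        torch.autograd.backward(out, grad_out)
        grad_out = inp.grad                   # gradient fed to the previous block
        # rk-Exec could delete (and later recompute) activations here
    return grad_out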
Specifically, rk-Exec creates one autograd graph for each simplified node defined in Section <ref>. Every autograd graph contains all the operations which create tensors sharing the same memory space, such as view operations.
During the backward phase, the gradient of each tensor is
automatically supplied to the previous backward function.
Furthermore, the recomputation of the operations has to be performed differently, so that the existing autograd graph is not rebuilt. Also, when an operation is recomputed right before a backward, the execution is performed in a mode in which saved_tensors are not created.
More details about rk-Exec can be found
in Section <ref> of the Appendix.
§ EXPERIMENTS
§.§ Experimental Settings
All experiments presented in this paper are performed using Python 3.9.12 and PyTorch 1.13.0. Rockmate, Rotor, and Checkmate compute their solutions on a 40-core Intel Xeon Gold 6148, while training is performed on an Nvidia Tesla V100 GPU with 15.75 GB of memory. For comparison with Rotor on ResNet and GPT2, both networks are implemented as a module of PyTorch. For RegNet, we do not provide a comparison with Rotor due to the lack of a Rotor-compatible implementation.
All experiments use the version of Rockmate available at:
<https://github.com/topal-team/rockmate/releases/tag/v1.0>.
Rockmate is used to reduce peak memory usage due to storage of activations. We measure and control the memory footprint of activations during the experiments. The memory used by the model parameters is excluded from our peak memory budget, as it remains constant during training. In practice, we adjust the maximum peak memory for activations by subtracting the size of the model from the total memory.
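As a simple illustration of this adjustment (our sketch, not the exact experimental script):

import torch

def activation_budget(model: torch.nn.Module, total_mem_bytes: int) -> int:
    # memory permanently occupied by the parameters is removed from the budget
    param_bytes = sum(p.numel() * p.element_size() for p in model.parameters())
    return total_mem_bytes - param_bytes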
§.§ Precision
As discussed in Section <ref>, the same number of budget options are
used to solve each block in rk-Checkmate. Increasing the number of options increases
the time to run Rockmate, since it is directly related to the number of times rk-Checkmate
is called for each block. However, it provides finer re-materialization
strategies for rk-Rotor. Figure <ref> shows how
the number of budget options affects the quality of the Rockmate solution.
Budget options range from (3,3) to (30,30) for GPT2-medium and ResNet-101.
Overall, increasing the number of budget options improves Rockmate performance up to a point.
Specifically, the improvement in the Rockmate solution is stronger on GPT2-medium
than on ResNet when more budget options are allowed.
This is because a GPT2 block contains more complicated structures (more nodes in block),
while a ResNet block is too small to apply sophisticated re-materialization strategies.
In the following experiments, we use (M_peak, M_save) = (20, 20) budget options for rk-Checkmate.
§.§ Efficiency
In Figure <ref>, we compare the solving time and performance of Rockmate,
Checkmate <cit.>, and Rotor <cit.> on GPT networks
with 2 to 10 Transformer blocks.
For each network, we choose the smallest memory budget
for which we obtain feasible solutions.
Checkmate is solved until convergence. Rockmate achieves very similar throughput to Checkmate,
while Rotor can be significantly worse if the network is not deep enough.
Since Rotor has only the option to recompute a whole block, it is more effective when the network is deep.
The time to solution of Rockmate remains nearly the same as the network gets deeper.
The processing time of Rockmate consists of three parts:
1. inspection time during graph building;
2. rk-Checkmate processing time;
3. rk-Rotor processing time.
As described in Section <ref>,
rk-GB automatically detects identical blocks.
Only one inspection is performed for a class of identical blocks,
and they are solved by the same rk-Checkmate models.
Therefore, the inspection time and the rk-Checkmate time remain
the same when the number of identical transformer blocks increases.
The rk-Rotor solving time is similar to Rotor's,
which is much faster than the total Rockmate solving time.
The complexity of Checkmate's ILP model grows exponentially with the size of the network,
making it infeasible on modern neural networks with thousands of nodes.
In Figure <ref> we compare the solving time and
performance of Checkmate and Rockmate on GPT2 with 2 to 10 Transformer blocks.
The memory budgets are chosen as the minimum achievable budgets for both models.
The solving time of Checkmate is exponential in the number of blocks,
exceeding 30 hours on the 10-blocks GPT2. On the other hand,
the solution time of Rockmate remains almost constant because
the same rk-Checkmate models are applied to all identical
Transformer blocks. Despite the significant difference in
solution time, Rockmate achieves similar or better overhead than Checkmate within the same budget.
§.§ Performance
We compare Rockmate with Rotor on ResNet and GPT2 models over a range of memory budgets.
Figure <ref> shows the computational overhead in terms of peak memory
usage during the forward-backward computations.
For the same memory peak, Rockmate has a lower overhead than Rotor in most cases.
For ResNet models, Rockmate does not show a significant improvement over Rotor,
especially when the neural networks are deep enough,
in which cases Rotor has more re-materialization options.
On the other hand, Rockmate shows much better performance
than Rotor on GPT2 networks. For GPT2-large it is noteworthy
that Rockmate saves 50% memory by introducing only 5% overhead,
while Rotor has more than 10% overhead for the same budget.
In addition, Rockmate allows training with a smaller memory budget. To train GPT2-large, Rotor requires at least 720 MB memory budget, while Rockmate only requires 440 MB.
The reason why Rockmate significantly outperforms Rotor is that there are "cheap" operations inside a Transformer block. The tensors generated by these operations consume a lot of memory, but there is almost no cost to recompute them. Because Rotor rematerializes one block at a time, it cannot take advantage of these "cheap" operations to optimize performance. Rockmate works particularly well on models with a sequential-like structure, where each part contains a complicated internal structure.
While Rotor can only handle nn.Sequential-type models as input,
Rockmate can be applied to more general types of neural networks.
In Figure <ref> we show the results of using Rockmate directly on the RegNet model
imported directly from torchvision, whereas using it with Rotor would require rewriting the code to expose the sequential part.
Although performance may vary depending on the structure of the neural networks,
Rockmate can be used to save memory on general PyTorch models.
§ CONCLUSION AND PERSPECTIVES
In this paper, we propose Rockmate, a fully automatic tool that takes as input a
PyTorch model in the form of an nn.Module and a memory limit for activations, and automatically generates another nn.Module, perfectly
equivalent from the numerical point of view, but that fulfills the memory limit
for activations at the cost of a small computational overhead. Through
experiments on various models, we show that the time needed to generate the resulting nn.Module is negligible in practice and that the computational overhead
is acceptable, even for drastic reductions in memory footprint. Rockmate is
therefore a tool that can transparently allow increasing model size, data
resolution and batch size without having to upgrade GPUs. This work opens
several new scientific questions. First, Rockmate is very efficient for graphs
that can be written as a sequence of blocks, which corresponds to numerous
models in practice but not to all of them, which raises the question of its
extension to any type of graph. Then, the combination of Rockmate with data
parallelism is trivial, but the question of finding a partition of the model
adapted to model parallelism that balances well the computational load and the
memory footprint on the different nodes is also an open problem.
Acknowledgements
This work has been partially funded by the Inria DFKI ENGAGE Project: nExt geNeration computinG environments for Artificial intelliGEnce.
§ RK-GRAPHBUILDER
§.§ Outline
We have developed the Rockmate Graph Builder (rk-GB) as a tool for producing the graphs required for Rockmate. Although it was developed for this specific purpose, it can also be used independently. Given a torch.nn.Module and a given input for that module, our goal is to generate a graph showing all the operations that occur during the forward and backward of the module on that given input. Having a specific input is important for both the resolution of if statements in the forward code and the inspection of each operation. Indeed, since our goal is to identify recomputations, we have to know the time spent and the memory footprint for each operation, that both depend on the input.
rk-GB relies on torch.jit.trace_module to trace the execution of the
forward module on the input. This function runs the
forward code and returns the list of all performed operations.
Based on this list, we build the forward graph and transform
it through several steps to generate what is needed by Rockmate.
Note that currently rk-GB may fail due to limitations of
torch.jit.trace, for instance it does not work properly
on the GPT model from HuggingFace.
However, PyTorch 2.0 was recently announced with a new
way to capture graphs: TorchDynamo.
According to its authors, TorchDynamo is expected to work
with almost all modules.
We plan to try changing from jit.trace to TorchDynamo.
rk-GB consists of five steps. First, we build the forward graph based on
jit.trace, also collecting some information about each node.
Then we simplify the graph. This part significantly reduces the
number of nodes, which is a nice feature for the ILP used in rk-Checkmate.
This is more than just an optimization: it is a requirement for
correct re-materialization. Next, we split the
simplified forward graph into blocks using the 1-separator[a node is a 1-separator if by removing it, we
obtain a disconnected graph (1 node to separate the graph)] list.
This produces a sequence of forward graphs, which we refer to as
blocks. Rockmate will process each block with rk-Checkmate,
and then process the whole chain with rk-Rotor. At this point, similar
blocks are detected to avoid solving the same problem multiple times.
Finally, for each unique forward block, we build the forward + backward
graph and monitor the time and memory usage of each node during this step.
§.§ The forward graph
First, we call torch.jit.trace_module to get the forward
code of any given torch.nn.Module. Specifically, we use the
code_with_constants attribute of the output object, which
is a code string of the assignments made during the forward phase.
In Rockmate, a list of assignments is required, where each line consists
of exactly one target and one operation.
Therefore, an assignment that combines several operations is rewritten as a sequence of single-operation assignments. We also need to inline the code of submodules, so that calls to submodules are replaced by the primitive operations they perform. We can access the code of a submodule through the object returned by jit.trace_module, via jit_output.<submodule name>.code.
We assign a unique number to each target to avoid name collisions when building the submodule code.
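The concrete example originally given here was lost in extraction; the following hypothetical snippet (with our own variable names) illustrates the kind of rewriting performed:

import torch

x = torch.randn(4, 8)
b = torch.randn(4, 3)
linear = torch.nn.Linear(8, 3)

# a compound statement such as   y = torch.relu(linear(x)) + b
# is split into single-operation assignments with fresh numbered targets,
# and the submodule call is inlined into its primitive operation:
v_0 = torch.nn.functional.linear(x, linear.weight, linear.bias)
v_1 = torch.relu(v_0)
y = torch.add(v_1, b)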
§.§ Simplification
In Rockmate, we want to perform both forward and backward operations on nodes, with the ability to delete and recompute tensors. Both view and in-place operations share their data with their input, so that deleting a node without also deleting its views is not allowed. All nodes related to the same data must be merged. After the simplification part, each node consists of exactly one primary allocation, which creates a data item, and some secondary allocations associated with that data, that do not allocate new memory.
First, we need to run each node to analyze it,
since the name of the function might not be enough to determine whether the operation creates a new data item or not. For example, the function torch.contiguous may create a new data item depending on whether the input data is contiguously stored in memory or not. Another common example is torch.reshape, which is a visualization function if and only if the input and the requested shape have compatible strides. A robust way to check if two tensors refer to the same data is to compare their attribute data_ptr. To do this, we must create each variable, determine its type, and, if it is a tensor, check whether its data_ptr is new or not.
*Analysis of each node
Note that we cannot analyze all of the code at once, as this would result in all intermediate activations being stored in memory, which is inconsistent with Rockmate's goal of using as little memory as possible. We will analyze each node separately, without storing any tensor. We proceed as follows
* First, we randomly generate its inputs. Since we traverse the nodes in a topological order, the inputs have already been analyzed. For the reasons mentioned earlier, the tensors have not been stored, but their data type and shape have been recorded. Therefore, we can randomly generate the inputs based on this information. For example, as mentioned earlier, the behavior of torch.reshape depends on whether the input and the required shape have compatible strides. Therefore, to generate the inputs correctly, we regenerate the data and perform all view operations on it before analyzing the node.
* Then we run the code to analyze, which consists of an assignment,
so that we get a value. If it is a torch.Size (or something similar),
we store it and mark it as a node of type size.
Otherwise it consists of either a tensor or a list of tensors.
In both cases it has a data_ptr. First we check if the value already exists in the local dictionary of previously computed values, in which case it is the result of an in-place operation. Indeed, an in-place operation returns the very same Python object as one of its inputs. So to detect an in-place operation, we compare the address of the new value with those of its inputs (using the Python keyword is). If we find a match, we mark the node as in-place and record the name of its data owner (the input whose data it modifies). Otherwise, we check whether the value shares its data_ptr with one of its inputs, in which case it is a node of type view, and we record the name of its data owner. Finally, by default, the tensor is original data and its data owner is itself; we call this default case a real operation. A sketch of these checks is given after this list.
* Finally, we store information such as the type and shape of the data so that we can randomly regenerate data when analyzing upcoming nodes. Note that we also need to store the requires_grad information. It is essential for building the backward graph, and we need to use it when regenerating tensors.
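The checks described in the list above can be sketched as follows (our illustration; the original worked example was lost in extraction):

import torch

def classify(value, inputs):
    # inputs: dict mapping input names to the (regenerated) input values
    for name, inp in inputs.items():
        if value is inp:                              # same Python object: in-place
            return "inplace", name
    for name, inp in inputs.items():
        if isinstance(inp, torch.Tensor) and isinstance(value, torch.Tensor) \
                and value.data_ptr() == inp.data_ptr():
            return "view", name                       # shares its data: view
    return "real", None                               # new data, owns itself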
*Simplification process
Once the analysis has been performed, the simplification of the graph can be started. The simplification is done in three steps:
* First, we process so-called cheap operations, such as list and tuple constructors. In contrast to the following simplification steps, here we insert the code to be simplified directly into the user code, for example
becomes
Simplifying the list and tuple constructors is
mandatory to ensure that each node represents only one tensor; it also
makes the code easier to read.
Cheap operations also include torch.add, sub, mul, and div. This is due to the way autograd handles intermediate results. Normally, nested operations (neither primary nor inlined) create intermediate results that are stored in grad_fn anyway; this is why we keep decomposing the sub-expressions until we reach primary operations. But these specific operations do not create intermediate data in grad_fn (because it is not needed during backward). Therefore, it is preferable to inline them, as otherwise they would explicitly create intermediate variables and thus use more memory. Note that we could duplicate the code:
becomes
Since these operations are fast enough (e.g., compared to torch.matmul), duplicating them does not take too much time. However, these simplifications represent a trade-off between the number of intermediate variables and the number of dependencies. In the example above, D now depends on both B and C. Therefore, treating Add, Sub, Mul and Div as cheap operations is optional.
* The second step in the simplification process concerns nodes of type size. Since these operations do not create data, we move them into the node they refer to as secondary assignments, which we call the body_code. Care is needed to avoid creating new dependencies, for example when a node depends only on the shape of a tensor and not on the tensor itself:
In this example, we are not yet sure whether depends directly on
, but merging into would actually enforce
the dependency. That would be wrong because it would cause
Rockmate to conclude that cannot be forgotten before computing
. To avoid this, even though the assignment of
is inserted into the body code of , we do not delete the node of
, but simply mark it as artifact.
Artifacts are nodes concerning size-type operations that are needed to avoid creating
dependencies between real nodes. After each simplification, we perform
a test to determine if any of the artifact nodes can be removed. In the example above, if it turns out that is a view of , in the third simplification step, the assignment of is moved to the body code of , after which depends directly on the node of , so we can remove the artifact node of .
* Finally, the view and in-place operations are simplified.
View nodes are merged with the node of their data owner by inserting
their assignment into the body code of the data owner.
The same idea applies to in-place operations,
i.e. we insert them into the node of their data owner.
However, since Rockmate wants to control the backward execution,
after each main operation we detach the tensor
to split the backward graph. This detach operation must
be performed before creating different independent views,
otherwise we would have to detach each view independently,
which is impossible in PyTorch. On the other hand, the detach
operation must be performed after in-place operations because they
impact the data, even though these in-place operations can be applied
to views and not directly to the original tensor.
PyTorch is aware of this and ensures that it is impossible to have
in-place operations over different independent views;
therefore there is always a valid way to run the code and detach
at the right position. An attribute in-place_code
is provided to handle the detach operation (a sketch of this ordering is given at the end of this subsection).
Artifact nodes that survive until the end of the simplification
are treated as soft dependencies when we generate a topological order for the
final forward-backward graph, only to ensure that the owner of an artifact
always comes before any of its users; apart from that, they are removed.
Thus, we do not create explicit dependencies,
but since the ILP follows the topological order for the first computation of
each node, the size information is computed before it is used and is not
forgotten.
At the end of the simplification, we end up with a
forward graph where each node consists of exactly one
real operation that creates a data item,
plus some secondary operations on it. As for the topological order,
we follow the original order by using the unique number assigned
to each variable.
Thus, we keep the order in which jit executed the forward
code of the original module. Finally, although we will not discuss it in detail,
we also handle random operations.
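As a concrete illustration of the ordering discussed above (in-place operations before the detach, views after it), here is a minimal sketch with hypothetical operations:

import torch

x = torch.randn(4, requires_grad=True)

_b = x * 2.0                            # main (real) operation -> proxy variable _b
_b.add_(1.0)                            # in-place operation, applied before the detach
b = _b.detach().requires_grad_(True)    # a single detach splits the backward graph
b_view = b.view(2, 2)                   # views are created after the detach

print(b_view.data_ptr() == b.data_ptr())   # True: the views share b's data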
§.§ Cut the forward graph
The subsequent steps of Rockmate rely on a list of blocks in order to
apply rk-Checkmate to each block independently. We cut the simplified
forward graph into a sequence of blocks. The cuts are made along the
1-separators of the graph: these are nodes whose removal results in a
disconnected graph. We obtain the list of separators by using a
variant of Breadth First Search (BFS). Note that although we cut the
simplified graph, a first draft of the separators list is computed
before we perform the simplifications. We mark potential separators as
protected to avoid oversimplifying the graph, which could
break its overall structure. Since torch.add operations are
simplified by default, without this protection, all residual edges
would traverse the entire graph from input directly to output. As a
result, the simplified graph would consist of a single large block
with many undesirable dependencies. The protected nodes,
which are potential separators, bypass the cheap
simplification step. At the end of the simplification step, the list
of separators is recomputed, as it may have changed due to other
simplifications.
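A minimal sketch of the separator search is given below (hypothetical helper names; here a node is treated as a 1-separator when its removal disconnects the graph's input from its output):

from collections import deque

def reachable(adj, start, removed):
    seen, queue = {start}, deque([start])
    while queue:
        u = queue.popleft()
        for v in adj.get(u, []):
            if v != removed and v not in seen:
                seen.add(v)
                queue.append(v)
    return seen

def one_separators(adj, source, sink):
    nodes = set(adj) | {v for vs in adj.values() for v in vs}
    return [n for n in nodes - {source, sink}
            if sink not in reachable(adj, source, n)]

# Toy simplified forward graph: a -> b -> {c, d} -> e.
adj = {"a": ["b"], "b": ["c", "d"], "c": ["e"], "d": ["e"]}
print(one_separators(adj, "a", "e"))   # ['b'], the only possible cut point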
§.§ Recognition of Similar Blocks
To reduce solving time, we would like to avoid solving the same ILP
multiple times on identical blocks. For example, GPT2 consists of n
attention blocks interleaved with n feedforward blocks, and the
resolution of this ILP n times on identical instances should be
avoided. We developed a tool to generate classes of identical blocks.
* First, we provide a function to anonymize simplified graphs.
Given a simplified graph, we build a parser that maps all target
names to numbers starting with 1 and also anonymizes the parameter
names.
* To test if two blocks are identical, we compare their anonymized
versions. This is done by visiting each node in turn, following some
topological order (which should be the same if the blocks are
identical). For each pair of nodes, we compare their attributes: the
code, but also the information about each variable, including its
type and shape. The lists of parameters should also have the same
anonymized names (i.e. they are used in the same nodes), but also
the same data types and shapes.
* After building the equivalence classes, we directly build the
forward+backward graph for each unique anonymous
graph, and translate it back to un-anonymize and
get the forward+backward of each block. In
Rockmate, we solve the ILP once for each equivalence class, and the
resulting R and S matrices can be shared across
identical blocks.
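A minimal sketch of the anonymization idea, reduced here to renaming targets in code strings (the target-name pattern and the block contents are hypothetical); two blocks belong to the same equivalence class when their anonymized versions coincide:

import re

def anonymize(code_lines):
    mapping = {}
    def rename(match):
        name = match.group(0)
        mapping.setdefault(name, f"v{len(mapping) + 1}")
        return mapping[name]
    return [re.sub(r"__\d+_\w+", rename, line) for line in code_lines]

block_a = ["__12_fc = torch.matmul(__11_x, w1)", "__13_out = torch.relu(__12_fc)"]
block_b = ["__42_fc = torch.matmul(__41_x, w1)", "__43_out = torch.relu(__42_fc)"]
print(anonymize(block_a) == anonymize(block_b))   # True: same equivalence class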
§.§ Inspection and backward nodes
Since we now have exactly one data item defined per node, we can associate
one backward operation with each forward node, whose
code is given by [ The (with
an underscore) refers to the variable before detaching,
while (without) is the detached one. ]
In rk-Checkmate two categories of nodes are introduced:
Computation nodes (C_nodes) and Data nodes
(D_nodes). A C_node represents an operation that
takes a certain amount of time to execute with a certain amount of
memory overhead. It is either a Forward or a
Backward node. A D_node represents an item stored
in memory. D_nodes can be deleted to free memory, and
C_nodes can be recomputed to restore D_nodes,
although this takes some time. There are three types of
D_nodes: tensor.data attribute,
tensor.grad attribute, and what we call phantoms,
which are intermediate results (saved_tensors) stored in the
grad_fn attribute.
We consider each node of the simplified forward graph one at a time in
topological order. To create the C and D_nodes we
need to do an inspection, to run the code several times to measure
time and memory usage[ For the memory usage, since we are
assuming Rockmate is being used on GPUs, we can trust
torch.cuda.memory_allocated, which is not the case on
CPUs. rk-GB raises a warning when used on a CPU and skips the
inspection part. ]. First we run the forward code
(including the body part) and measure how much memory has been
allocated, then we set the .data attribute to
torch.empty(0) to free the memory[ If it exists, we also set the
._base.data attribute to torch.empty(0), because
it is a view of the .data (otherwise we would not be able
to free memory)].
The memory successfully freed is the memory used by the
tensor.data's D_node. If some memory was not freed,
it means that the node has phantoms. Phantoms are
intermediate values stored in tensor.grad_fn in preparation
for backward.
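A minimal sketch of this inspection step is shown below (GPU only, and simplified: the real inspection also clears the ._base.data attribute and the saved views, as described next):

import torch

def inspect_forward(fn, *inputs):
    before = torch.cuda.memory_allocated()
    out = fn(*inputs)                              # run the node's forward code
    allocated = torch.cuda.memory_allocated() - before

    out.data = torch.empty(0, device=out.device)   # try to free the .data storage
    freed = before + allocated - torch.cuda.memory_allocated()
    phantom_mem = allocated - freed                # whatever survives lives in grad_fn
    return allocated, freed, phantom_mem

if torch.cuda.is_available():
    x = torch.randn(1024, 1024, device="cuda", requires_grad=True)
    # sigmoid saves its output for backward, so most of the memory is phantom memory
    print(inspect_forward(torch.sigmoid, x))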
To handle phantoms properly, we inspect any ._saved_tensors
or .variable attributes. We use the
grad_fn.next_functions attribute to recursively open the
grad_fn graph. The .variable attribute is an
explicit reference to a variable (e.g. the input of the
forward operation), while _saved_tensors are
either views of known tensors or original tensors. The opening of
grad_fn is crucial for three reasons:
* First, we need to open grad_fn to properly build the C_node dependencies. Indeed, an input of the forward
operation is needed to perform the backward operation if and only
if there is a reference to it in the grad_fn. Consider a first
example:
In such a case, A.data is indeed necessary to run
B.backward and a reference to it can be found
in B.grad_fn.
Therefore, B's backward
C_node depends on A's data
D_node. Now consider a second example:
Here, given C.grad, both A.grad and B.grad
can be computed without A.data or B.data, so
we do not want the backward C_node of C to depend on A's and B's
data D_nodes. Note that autograd always checks
the shape of the .data attribute of inputs, even if they are not used. To solve
this problem, we introduce two types of dependencies (edges of the
graph): actual and fake. Fake edges do not appear in
the ILP, since we do not want to force the data to be alive if it is
not used. Fake edges are only used in Rockmate's final code
generator: in the example above, C's backward fakely depends on
A's and B's data, so by the time we want to run C's
backward, A.data may already be forgotten. To pass the
autograd shape equality check, we need to assign an empty tensor
with the correct shape to A.data. To avoid wasting memory for
this, we use
Using this trick we allocate only 512 bytes, whereas:
would allocate as much memory as the original
A.data and cancel our efforts to let the solver
free A's data to reduce the memory footprint (a sketch of this trick follows this list).
* In the previous paragraph we explained the user's viewpoint:
given a tensor, we want to find its dependencies. Let us now take
the input perspective. If a _saved_tensor is a view of the
input, it means that
will not free anything. To free a data item, we must set the
data attribute of all the tensors having the same
data_ptr to torch.empty(0), including the
_saved_tensors that refer to it. The correct way to
forget a data item is therefore
Similarly, after recomputing input's data, all
_saved_tensors that refer to it are rebuilt. Even if it is
not directly the same data, but a view of it, we take care to
rebuild it properly, including operations that affect the strides.
autograd can store any view of the data, but there is no way
to guess whether the strides will be affected. Instead, we go through all known
views of the data that are not phantoms and find one that is
compatible with the _saved_tensor. Therefore, for each node, we
need the names of the phantoms that are views of it.
* As mentioned before, phantoms can be either references to
existing tensors or original tensors. If original tensors are found
in grad_fn, there will be a difference between the memory
generated during forward and the memory freed when
forgetting .data. In this case we create a
phantom D_node. This node has exactly one dependency
(to the C_node) and one user (the C_node).
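Returning to the fake-dependency trick of the first item above, here is a minimal sketch of one way to satisfy autograd's shape check without re-allocating the data; the exact expression used by Rockmate is elided in the text, so this expanded-scalar variant is only an illustrative assumption:

import torch

device = "cuda" if torch.cuda.is_available() else "cpu"
A = torch.randn(1024, 1024, device=device)

# Replace the (forgotten) data by an expanded scalar: right shape, one element of storage.
A.data = torch.zeros(1, device=device).expand(*A.shape)   # ~512 bytes on GPU
# A.data = torch.zeros(A.shape, device=device)  # would re-allocate the full tensor

print(A.shape, A.data.stride())   # original shape, but all strides are 0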
By inspecting forward execution and backward when
requires_grad is set, we obtain the memory and time
attributes of all the mentioned nodes. Finally, there is one more
special node: it applies to everything related to the loss. The
special loss node depends on the model's output data and is required by the model's output grad D_node. This
node has no code attribute; it is a placeholder that splits the
final schedule into forward and backward phases. In
the rk-Checkmate generated schedule, the backward part starts as soon as
this loss node is computed. Furthermore, in rk-Checkmate's ILP, in addition
to the generation constraint memory_allocated < M_peak, we can
enforce that, at the moment when the special loss_node is
computed, memory_allocated < M_save.
Let us conclude with two final remarks:
* In addition to actual and fake edges, we introduce
global edges. For example, the first C_nodes of a
block globally depend on the data of the
previous block. These edges assist the final code
generator. Using these edges, we obtain the entire forward+backward graph.
* Measuring GPU memory usage with cuda is accurate, but running
the same operation twice in different contexts can result in
different amounts of memory being allocated. Therefore, the final
execution may allocate more or less memory than we predicted. This
is due to the way cuda allocates memory, trying to minimize
memory fragmentation, but the difference is usually very small.
§ RK-CHECKMATE: ILP DETAILS
§.§ Graph and objective
As discussed in Section <ref>, at the block level we find an
optimal re-materialization strategy for a given memory budget via the proposed rk-Checkmate
algorithm.
Note that the Checkmate paper assumes that only one output is generated from each operation across the
forward and backward graph. In PyTorch, such an assumption does not hold in
general: a backward operation tends to generate the gradients of all
relevant input tensors. Therefore, our rk-Checkmate is based on a graph where each
computation node can produce multiple data nodes. Moreover, if
is used in the computation of several different 's,
may be generated in any backward node of .
The input to rk-Checkmate is a built by rk-GB.
is a directed acyclic graph,
which contains:
* D data nodes {v_1,…,v_D} and T computational nodes {w_1,…,w_T},
* edges of type v_d → w_t and w_t → v_d that show
dependencies between computational operations and data.
For example, v_d is used to perform computation w_t,
and computation w_t outputs data v_d as a result.
One computational node can have several incoming data nodes and vice versa.
To find an optimal re-materialization strategy for one block,
given a memory budget and the computation-data dependencies described with ,
we solve an integer linear programming (ILP) problem, which minimizes
computational costs required for propagation through the block given
feasibility and memory constraints.
Denote by stage_t-1→ t the period
that starts after the result of computation w_t-1 is obtained
for the first time and ends when the computation w_t is first performed.
During one stage, several computations from {w_t'}_t'≤ t and several deletions may happen.
The solution of the ILP provides a schedule 𝐑 (a lower-triangular binary matrix of size T× T) that
determines which computations should be performed during each stage.
R_t,t'=
1, if we compute w_t' during the stage_t-1→ t
0, otherwise .
Each stage can be seen as a sequence of steps,
such that during one step_t'-1→ t' one computation w_t' is done (or not,
if the schedule does not require it, i.e. if R_t, t' = 0) and some tensors are deleted.
The solution of the ILP also provides information 𝐒 about the data nodes saved during each stage.
S_t,(t', d) =
1, if during stage_t-1→ t
an output data tensor v_d of computation w_t' is saved
0, otherwise .
Consider all edges in that connect computation nodes with their
children data nodes,
ChildrenOfComp:={(t', d) | v_d ∈ children(w_t'), t'=1,…,T},
and denote their number by |ChildrenOfComp| = E^t→ d. Then S can be seen as a binary matrix of size
T× E^t→ d.
Thus, given the memory budget, the ILP finds a schedule (R, S) such that the computational cost
∑_1 ≤ t'<t≤ T C_t'R_t,t'
is minimized subject to the feasibility and
memory constraints (where C_t' is the cost of computation w_t').
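For concreteness, the following minimal sketch (toy sizes and costs, with PuLP as a stand-in for whichever solver is actually used) shows how the variables R and S and the objective above can be declared; the feasibility and memory constraints of the next subsections are added to the same problem object:

import pulp

T = 4                                   # number of computation nodes (toy value)
C = [1.0, 2.0, 1.5, 3.0]                # cost of each computation w_t' (toy values)
children_of_comp = [(0, 0), (1, 1), (2, 2), (3, 3)]   # toy (t', d) edges

prob = pulp.LpProblem("rk_checkmate_block", pulp.LpMinimize)

# R[(t, t')] = 1 iff w_t' is (re)computed during stage t (lower-triangular).
R = pulp.LpVariable.dicts(
    "R", [(t, tp) for t in range(T) for tp in range(t + 1)], cat="Binary")
# S[(t, e)] = 1 iff output v_d of w_t' (edge e = (t', d)) is kept during stage t.
S = pulp.LpVariable.dicts(
    "S", [(t, e) for t in range(T) for e in children_of_comp], cat="Binary")

# Objective: total computation cost of the schedule.
prob += pulp.lpSum(C[tp] * R[(t, tp)] for t in range(T) for tp in range(t + 1))

# One of the feasibility constraints: w_t is executed at the end of its own stage.
for t in range(T):
    prob += R[(t, t)] == 1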
Now let us take a closer look at the constraints.
§.§ Feasibility constraints
Consider all edges in that connect data nodes with their
parent and child computation nodes:
ParentsOfData:={(t', d) | w_t'∈ parents(v_d), d=1,…,D}
ChildrenOfData:={(t', d) | w_t'∈ children(v_d), d=1,…,D}
then a set of edges, which connects each data node with its children and parent
computation nodes, can be expressed as
ChildrenParentsOfData:=ChildrenOfData∪ ParentsOfData
and denote their number by |ChildrenParentsOfData| = E^t→ d→ t.
Let us also introduce a binary matrix P of size T× D, where
P_t, d =
1, if we have data tensor v_d in memory after the end of stage_t-1→ t
0, otherwise .
Note that P_t, d≥ S_t, (t', d) / |parents(v_d)|.
The following constraints should hold for the ILP:
* ∑_t=1^T∑_t'=t+1^TR_t,t'=0: ensures that we can recompute only operations that have been executed during previous stages.
* ∑_e∑_t=1^t”-1S_t,e=0, where e=(t”, d)∈ ChildrenOfComp: ensures that data cannot be saved before its first computation.
* ∑_d=1^D∑_t=1^t'P_t, d = 0, where t' = min{t”| w_t”∈ parents(v_d)}: ensures that data tensor v_d is not stored before the execution of the first computation that contributes to its value.
* ∑_t=1^T R_t,t =T: ensures that w_t is executed at the end of stage_t-1→ t.
* ∑_t=1^T R_t, t_loss = 1, where t_loss is an index of node that computes the loss: ensures that the loss is computed only once during the forward-backward phase.
* S_t, e≤ P_t, d, where e=(t', d)∈ ChildrenOfComp and t,t'=1,…,T
* S_t+1, e≤ S_t, e + R_t, t', where e=(t', d)∈ ChildrenOfComp and t=1,…,T-1
* R_t, t'≤ R_t, t” + S_t, e, where t'∈ children(d), e=(t”, d), e∈ ChildrenOfComp, and t=1,… T: ensures that all computations w_t' which are required for generation of data v_d are present, where v_d is an input data for computation w_t.
§.§ Memory constraints
Let ChildrenOfData[d] denote the set of indices of the computation nodes
w_t that require tensor v_d,
and let ParentsOfData[d] denote the set of indices of the computation nodes
w_t that generate tensor v_d.
We remind the reader that each stage can be seen as a sequence of steps, such that during one step
one computation is performed (or not, if the schedule does not require it) and some tensors are deleted.
To represent the presence of certain tensors at different steps step_t'-1→ t' of each
stage stage_t-1→ t, we introduce the following variables:
create_t,d,t'∈{0,1}, ∀ t'∈ ParentsOfData[d]:
whether tensor v_d is created during step_t'-1→ t' at stage stage_t-1→ t.
delete_t,d,t'∈{0,1}, ∀ t'∈ (ParentsOfData[d]∪ ChildrenOfData[d]):
whether tensor v_d is deleted during step_t'-1→ t' at stage stage_t-1→ t.
Let us define the following expression for t=1,…,T and (t', d)∈ ChildrenParentsOfData:
alive[t,d,t'] = P_t,d+∑_t”≤ t'create_t,d,t” - ∑_t”≤ t'delete_t,d,t”∈{0,1}.
A tensor v_d is either alive or deleted immediately after the computation of parent nodes:
alive[t,d,t']+delete_t,d,t'≥ R_t,t',
A tensor v_d is retained during stage_t-1→ t if it is alive during the last possible step
of stage_t-2→ t-1:
alive[t,d,t'] = P_t,d, t' =
max(ParentsOfData[d]∪ ChildrenOfData[d])
A tensor can only be created from the parent computation:
create_t,d,t'≤
R_t,t'
A tensor should be deleted if it would not be needed or saved in the current
stage:
delete_t,d,t' = R_t,t' * ∏_d' ∈ att(d)(1-P_t+1,d') *
∏_t”∈ children(v_d) | t”>t'(1-R_t,t”)
No tensor should be alive after the final stage:
alive[T,d,t'] = 0, t' =
max(ParentsOfData[d]∪ ChildrenOfData[d])
Let U_t, t' denote the memory occupied at the end of step_t'-1 → t' during stage_t-1 → t, and let
M_d be the memory required to store tensor v_d; then
U_t, 1=∑_d=1^DM_d P_t,d + ∑_d=1^DM_d create_t,d,1 - ∑_d=1^DM_d delete_t,d,1
U_t, t' = U_t, t'-1+∑_d=1^DM_d create_t,d,t' - ∑_d=1^DM_d delete_t,d,t'
The peak memory at step_t'-1 → t' during stage_t-1 → t is within memory
budget:
tmpM_t' R_t, t' + U_t, t'+∑_d=1^D M_d delete_t,d,t'≤
M_budget
where tmpM_t' is the temporary memory overhead needed in the
computation node w_t'.
§ RK-ROTOR
§.§ Notations
Assume our model is a sequence of L blocks, numbered from 0 to
L-1. For each block, we have 1+B budget options, where option 0
does not save any intermediate data, and each of the other B options
saves a different amount of data. We denote by io the
forward computation of block i with option o, and if o > 0,
io is the corresponding backward computation. Since
i0 does not store any intermediate data, we consider that it
does not have a corresponding backward computation.
The input activation of i is i, and its output is
i+1. Similar to the Rotor paper <cit.>, for each
option o>0, we denote by io the union of i and of
all the intermediate data generated by io. For ease of
notation, we will also use i and io to denote the
size of the corresponding data.
For any computation, we use · to denote the temporary memory
usage of this computation: this is the amount of memory that needs to be
available for this computation to succeed, and that is released
afterwards. We also use · to denote the running time of a
computation. As an example, since the input and output data need to be
in memory, the memory usage for running i0 is i + i+1 +
i0, and this takes time i0.
§.§ Formulation
We denote by (s, t, m) the optimal execution time for computing
the sequence from block s to block t, assuming that the input
s will be kept in memory. There are two possible cases for the
start of this computation:
* If block s is only computed once in this sequence, then it is
computed with one of the io options for o > 0 so that it
is possible to perform the backward computation. This requires
having at least io memory available for the
forward and at least io available for the backward.
The corresponding execution time is io +
io, and the memory available for the rest of the
computation is m-io. The best choice is given by:
_1(s, t, m) = min_valid option oio +
io + (s, t, m-io)
In this equation, an option is considered valid if the temporary
memory requirements for the forward and backward computations are
satisfied.
* If block s is computed more than once, then its first
computation does not need to keep any intermediate data. It is thus
computed with s0, and the choice now is about which is the
next activation to be kept in memory. Let us denote by i the index
activation kept in memory, so that activations s+1,
s+2, …, i-1 are discarded just after being used. It
is possible to compute i by performing s0,
s+10, …, i-10. Once this activation is computed
and stored in memory, optimizing the rest of the computation becomes
a subproblem: we need to compute the optimal execution time from
block i to t. Afterwards, since no activation was stored between
blocks s and i, this corresponds to another subproblem, from s
to i. The best choice is given by:
_2(s, t, m) = min_valid choice i with s < i<ts0+s+10 +⋯+ i-10
+ (i, t, m-i) + (s, i, m)
In this equation, a choice is considered valid if the temporary
memory requirements for all computations s0, s+10,
…, i-10 are satisfied.
In both cases, if there is no valid choice, the corresponding min
value is considered to be +∞. Finally, the optimal decision for
our problem is computed with:
(s, t, m) = min(_1(s, t, m), _2(s, t, m))
Additionally, if t=s+1, only the first case can be considered, but
this time the rest of the computation is empty. We can thus compute
(s, s+1, m) for all s and all m. The resulting algorithm is
close to the Rotor algorithm, using the updated
equation (<ref>), and is provided in
Algorithm <ref>.
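A minimal sketch of this dynamic program is given below; the block/option objects and their cost and memory accessors (fits, saved_size, fwd_time, bwd_time, fwd_no_save_time, act_size) are hypothetical placeholders for the values measured by rk-GB and rk-Checkmate, and the recursion reflects one plausible reading of the two cases above:

INF = float("inf")

def opt(s, t, m, blocks, options):
    """Optimal time for the sub-sequence starting at block s, within memory budget m."""
    if s >= t:
        return 0.0
    best = INF
    # Case 1: block s is computed only once, with an option o > 0 that keeps
    # the intermediate data needed for its backward pass.
    for o in options:
        if o.fits(s, m):                                  # temporary-memory check
            rest = opt(s + 1, t, m - o.saved_size(s), blocks, options)
            if rest < INF:
                best = min(best, o.fwd_time(s) + o.bwd_time(s) + rest)
    # Case 2: block s will be recomputed; choose the next stored activation i.
    for i in range(s + 1, t):
        replay = sum(blocks.fwd_no_save_time(j) for j in range(s, i))
        sub_right = opt(i, t, m - blocks.act_size(i), blocks, options)
        sub_left = opt(s, i, m, blocks, options)
        if sub_right < INF and sub_left < INF:
            best = min(best, replay + sub_right + sub_left)
    return best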
§ RK-EXEC
Rockmate's final re-materialization schedule is a list of operations, either compute or forget. rk-Exec takes care of executing this schedule properly. It creates a new that produces exactly the same results (both data and gradients) while respecting the requested budget. The schedule given to rk-Exec refers to and .
§.§ Computation
Remember that a consists of a main assignment that creates the , and a that contains secondary statements about shapes, views, and in-place operations. By default in PyTorch, during forward execution, puts in the output's all the information needed to go back directly from the loss to the input's gradients. The principle of memory saving is to control the backward pass and how intermediate activations are saved. To prevent from creating the whole computational graph in the output's , rk-Exec detaches each tensor after computing it, so that only keeps track of the last operation. Consider the following example:
For the forward pass, we have:
* For C_node a
* For C_node b, viewing operations are done after
* For C_node d, in-place operations are done before
* For C_node output
We always name the variable before detaching with an underscore and call it the proxy. Based on this, the backward computation is:
§.§ Deletions
Remember that there are three types of :
* To free a , we assign all views that refer to it to . In the example above, to free d's data we set , and to 0. But as mentioned in the rk-GB appendix, this also includes views that are stored in users' grad_fn as . For example, to free b's data you must perform
* To free a we just need to perform
It will delete the , but the variable defined by the detach operation (, without the underscore) will keep the alive. So we just forget about the phantoms.
* To release a , all we need is
§.§ Recomputation
To recompute a , rk-Exec reassigns the proxy, but we do not detach again, since the post-detach variable could be referenced directly in its users' grad_fn; we simply reassign the data attribute. In the example above, to recompute we do:
Furthermore, since re-materialization is the opposite of forgetting, we need to reassign the data attribute of all views, including the that refer to them. Recomputing the and is trivial.
*Last issues
* In the graph produced by rk-GB we introduced the notion of real and fake dependencies; these imply several tricks in rk-Exec. See the last part of the rk-GB appendix for explanations.
* To generate the code correctly, we compile the list of operations described in the schedule. That is, we generate the code operation by operation, but we keep track of which tensors are alive to avoid any if statements in the final code. For example, when we need to reassign all the views of a tensor in the users' grad_fn, we know which users are alive.
* Rockmate manipulates code as either strings or Python AST objects that we can execute. Therefore, rk-Exec generates a list of strings to execute. We then assemble one big code string for the forward and one for the backward, and finally use the Python compile function to execute these functions without wasting time on parsing the strings.
* We take care of random operations; in particular, to be able to recompute a random function deterministically, we store the random states at the first computation and restore them when needed (see the sketch after this list).
* Note that sometimes, due to floating-point precision, the results obtained from the original module and the new one may differ slightly, but they are as close as the results of running the original module twice. With double precision, they are always strictly equal on the models we tested.
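A minimal sketch of the random-state handling mentioned in the list above (hypothetical helper class; Rockmate stores the states per operation in its generated code):

import torch
import torch.nn.functional as F

class RngSaver:
    def __init__(self):
        self.states = {}

    def save(self, name):
        cuda = torch.cuda.get_rng_state_all() if torch.cuda.is_available() else None
        self.states[name] = (torch.get_rng_state(), cuda)

    def restore(self, name):
        cpu_state, cuda_states = self.states[name]
        torch.set_rng_state(cpu_state)
        if cuda_states is not None:
            torch.cuda.set_rng_state_all(cuda_states)

saver = RngSaver()
saver.save("drop1")
a = F.dropout(torch.ones(5), p=0.5, training=True)   # first computation
saver.restore("drop1")
b = F.dropout(torch.ones(5), p=0.5, training=True)   # deterministic recomputation
print(torch.equal(a, b))                              # True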
§ DETAILED EXPERIMENTS ON GPT2
Model   Input size   Algorithm   Budget (GiB)   Peak mem (GiB)   Makespan mean (ms)   Makespan std (ms)
GPT2-large (2, 512) PyTorch M_GPU 6.708 457.173 1.207
GPT2-large (2, 512) Rotor 0.850 0.846 593.507 1.789
GPT2-large (2, 512) Rockmate 0.850 0.816 564.041 1.378
GPT2-large (2, 512) Rotor 2.800 2.360 522.163 0.889
GPT2-large (2, 512) Rockmate 2.800 2.741 481.650 0.992
GPT2-large (2, 512) Rotor 7.600 6.515 456.971 1.255
GPT2-large (2, 512) Rockmate 7.600 6.516 465.376 0.985
GPT2-large (4, 256) PyTorch M_GPU 5.158 432.646 1.207
GPT2-large (4, 256) Rotor 0.850 0.808 561.066 1.791
GPT2-large (4, 256) Rockmate 0.850 0.795 531.255 0.732
GPT2-large (4, 256) Rotor 1.800 1.473 526.807 1.402
GPT2-large (4, 256) Rockmate 1.800 1.745 486.976 1.305
GPT2-large (4, 256) Rotor 5.600 4.690 439.797 1.451
GPT2-large (4, 256) Rockmate 5.600 4.967 440.410 1.469
GPT2-medium (2, 1024) PyTorch M_GPU 10.782 480.330 0.975
GPT2-medium (2, 1024) Rotor 1.000 1.188 751.478 1.070
GPT2-medium (2, 1024) Rockmate 1.000 0.994 619.055 1.012
GPT2-medium (2, 1024) Rotor 4.000 3.351 558.858 1.208
GPT2-medium (2, 1024) Rockmate 4.000 3.941 516.758 0.899
GPT2-medium (2, 1024) Rotor 11.600 10.326 494.043 1.097
GPT2-medium (2, 1024) Rockmate 11.600 10.582 490.490 0.924
GPT2-medium (4, 512) PyTorch M_GPU 7.407 430.596 1.206
GPT2-medium (4, 512) Rotor 1.000 1.188 658.736 1.105
GPT2-medium (4, 512) Rockmate 1.000 0.986 547.343 0.873
GPT2-medium (4, 512) Rotor 2.000 1.824 532.637 1.686
GPT2-medium (4, 512) Rockmate 2.000 1.988 497.298 0.833
GPT2-medium (4, 512) Rotor 7.600 6.664 445.434 0.710
GPT2-medium (4, 512) Rockmate 7.600 7.207 440.769 1.410
|
http://arxiv.org/abs/2307.03269v2
|
20230706201128
|
A Hybrid Quantum-Classical Generative Adversarial Network for Near-Term Quantum Processors
|
[
"Albha O'Dwyer Boyle",
"Reza Nikandish"
] |
quant-ph
|
[
"quant-ph"
] |
A Hybrid Quantum-Classical
Generative Adversarial Network
for Near-Term Quantum Processors
Albha O'Dwyer Boyle and Reza Nikandish
A. O'Dwyer Boyle was with the School of Electrical and Electronic Engineering, University College Dublin, Ireland.
R. Nikandish is with the School of Electrical and Electronic Engineering, University College Dublin, Ireland, and also with the Center for Quantum Engineering, Science, and Technology, University College Dublin, Ireland (e-mail: [email protected]).
August 1, 2023
In this article, we present a hybrid quantum-classical generative adversarial network (GAN) for near-term quantum processors. The hybrid GAN comprises a generator and a discriminator quantum neural network (QNN). The generator network is realized using an angle encoding quantum circuit and a variational quantum ansatz. The discriminator network is realized using multi-stage trainable encoding quantum circuits. A modular design approach is proposed for the QNNs which enables control on their depth to compromise between accuracy and circuit complexity. Gradient of the loss functions for the generator and discriminator networks are derived using the same quantum circuits used for their implementation. This prevents the need for extra quantum circuits or auxiliary qubits.
The quantum simulations are performed using the IBM Qiskit open-source software development kit (SDK), while the training of the hybrid quantum-classical GAN is conducted using the mini-batch stochastic gradient descent (SGD) optimization on a classic computer. The hybrid quantum-classical GAN is implemented using a two-qubit system with different discriminator network structures.
The hybrid GAN realized using a five-stage discriminator network, comprises 63 quantum gates and 31 trainable parameters, and achieves the Kullback-Leibler (KL) and the Jensen-Shannon (JS) divergence scores of 0.39 and 0.52, respectively, for similarity between the real and generated data distributions.
Generative adversarial network (GAN), hybrid quantum-classical model, noisy intermediate-scale quantum (NISQ), quantum circuit, quantum computing, quantum machine learning, quantum neural network (QNN), variational quantum algorithm (VQA).
§ INTRODUCTION
Quantum computing can potentially lead to an unparalleled boost in computational power by using prominent features of quantum physics including superposition, entanglement, interference, and parallelism. This computational advantage can lead to revolutions in many emerging technologies which are hindered by current computing limitations in the software and hardware levels. The current quest for a practical quantum computer, when successful, can provide significant benefits to several applications including machine learning <cit.>, autonomous driving <cit.>, healthcare and drug discovery <cit.>, cryptography and secure communications <cit.>, teleportation <cit.>, and internet <cit.>.
Since the conception of quantum computing in 1980s <cit.>, this field has experienced several landmark developments to prove its promising capabilities and attract recent interests. The most notable inventions in the algorithm level include the Shor's algorithm for prime number factorization and discrete logarithm on a quantum computer <cit.>, the Grover's algorithm for efficient search in unstructured large databases <cit.>, the Deutsch–Jozsa algorithm for classification of a binary function as constant or balanced <cit.>, and the HHL algorithm for solving linear systems of equations <cit.>.
Quantum computers can outperform classic computers by the virtue of using quantum algorithms <cit.>. Quantum algorithms are traditionally developed for long-term fault-tolerant quantum computers with a large number of nearly perfect qubits (e.g., over 1 million). The quality of such algorithms is evaluated by their asymptotic computational complexity, i.e., how the quantum algorithm can perform a given computational task faster than a classic algorithm. The impressive speed advantage predicted by quantum algorithms, e.g., quadratic speedup for Grover's search algorithm, 𝒪(√(N)), and exponential speedup for quantum support vector machine (SVM), 𝒪(logN), has been a driving force in the recent extensive efforts on quantum computers <cit.>.
The near-term quantum computers, aka Noisy Intermediate-Scale Quantum (NISQ) computers <cit.>, are realized using imperfect and limited number of qubits (e.g., 10–100). The quantum circuits realized using these noisy qubits should have a limited depth to avoid the loss of quantum state by the quantum decoherence. These quantum computers cannot provide computational resources required to leverage the fault-tolerant quantum algorithms and, as a result, can outperform classic computers only in a few computational tasks <cit.>. The near-term quantum computers are realized to demonstrate quantum advantages for some specific computational tasks <cit.>.
A quantum algorithm should account for the inherent limitations of the NISQ computers, e.g., limited number of qubits, decoherence of qubits, limited depth of quantum circuits, and poor connectivity of qubits, to be able to achieve a quantum advantage. Variational quantum algorithms (VQAs) have emerged as one of the, if not the, most effective approaches in the near-term quantum computing era <cit.>.
Quantum machine learning can significantly benefit from the VQAs which can also be interpreted as quantum neural networks (QNNs) <cit.>. The quantum circuit parameters can be trained by an optimizer using a predefined loss function running on a classic computer. Nevertheless, there are several challenges in trainability, efficiency, and accuracy of the VQAs which should be addressed through innovative solutions <cit.>.
Most of quantum machine learning models are inspired by a classic model. Generative adversarial network (GAN), proposed by Goodfellow in 2014 <cit.>, is a powerful tool for classical machine learning, in which a generator model captures the data distribution and a discriminator model maximizes the probability of assigning the correct true/fake label to data. Quantum GAN was proposed in 2018 for accelerating the classical GAN <cit.>. It is shown that in a fully quantum implementation, i.e., when the generator and discriminator are realized on a quantum processor and the data is represented on high-dimensional spaces, the Quantum GAN can achieve an exponential advantage over the classic GAN <cit.>.
This pioneering work on Quantum GAN was followed by a number of other developments presented in the literature <cit.>. In <cit.>, the efficient loading of statistical data into quantum states is achieved using a hybrid quantum-classical GAN. In <cit.>, a hybrid quantum-classical architecture is proposed to model continuous classical probability distributions. An experimental implementation of the Quantum GAN using a programmable superconducting processor is presented in <cit.>, in which the generator and discriminator are realized as multi-qubit QNNs, and the generator could replicate an arbitrary mixed state with 99.9% fidelity. In another experimental Quantum GAN, also implemented on a superconducting quantum processor <cit.>, an average fidelity of 98.8% is achieved for both the pure and mixed quantum states. A Quantum GAN architecture is proposed in <cit.> which improves convergence of the minimax optimization problem by performing entangling operations between the generator output and true quantum data. These developments showcase some potentials of the Quantum GAN as a promising near-term quantum algorithm, while there are still many open challenges to the efficient realization, training, and application of the GAN in the quantum domain.
In this article, we present a hybrid quantum-classical GAN for the near-term quantum processors. The generator network comprises a quantum encoding circuit and a variational quantum ansatz. The discriminator network is realized using a modular design approach which enables control on the depth of quantum circuits to mitigate impacts of the imperfect low-fidelity qubits in the near-term quantum processors.
The article is structured as follows. In Section II, architecture of the hybrid quantum-classical GAN, the gradient evaluation for quantum circuits, and the Barren Plateaus issue are presented. The quantum neural architecture search approach is discussed in Section III. In Section IV, training of the variational quantum ansatzes in the hybrid GAN is presented. In Section V, results of the implemented hybrid GAN are presented and discussed. Finally, concluding remarks are summarized in Section VI.
§ HYBRID QUANTUM-CLASSICAL GAN
§.§ Hybrid GAN Architecture
The proposed hybrid quantum-classical GAN architecture is shown in Fig. <ref>. The generator QNN is composed of a quantum encoding circuit and a variational quantum ansatz. The encoding quantum circuit transforms the input classic data into quantum states by using a specific algorithm. The encoding is realized using a unitary quantum operator which can also be regarded as a feature map that maps the input data to the Hilbert space of the quantum system. This feature map serves similar to a kernel in classic machine learning. The encoding circuit structure can have a significant impact on the generator QNN behavior and should not be overlooked as a trivial operation. The variational quantum ansatz is controlled by the vector parameter θ_G which is trained on a classic computer. The discriminator QNN is realized as a variational quantum ansatz controlled by the vector parameter θ_D. The generator is trained using a randomized input noise and based on its success in fooling the discriminator. The discriminator is trained using a given dataset and the output of generator until it reaches an acceptable accuracy.
The generator and discriminator QNNs are trained using a zero-sum game. The loss function for the GAN is defined as
ℒ_GAN (θ_D, θ_G) = -𝔼_x[log(D(x, θ_D))] -
𝔼_z[log(1-D(G(z, θ_G), θ_D))]
where x∼ p_data(x) is the real data, z∼ p_z(z) is the noise, D(x, θ_D) denotes the discriminator network, and G(z, θ_G) denotes the generator network. The GAN ideally should solve the minimax game, i.e., the generator tries to minimize the loss function while the discriminator tries to maximize it. In practice, the two networks are trained using separate loss functions <cit.> that should be minimized through iterative optimization
ℒ_D (θ_D) = -𝔼_x[log(D(x))] - 𝔼_z[log(1-D(G(z)))],
ℒ_G (θ_G) = -𝔼_z[log(D(G(z)))],
where the dependency of the discriminator and generator functions on the training parameters are not shown for simplicity.
We will discuss the details and practical considerations for the realization and training of the two networks.
§.§ Structure of Quantum Neural Networks
A promising approach to realize the QNNs is using VQAs. A VQA is composed of an ansatz, which can be realized as a parametric quantum circuit, an input training dataset, and an optimizer which finds the optimum values of the ansatz's quantum circuit parameters to minimize a defined cost function. The optimizer usually operates on a classic computer, which permits the use of the vast knowledge developed on classical machine learning. The ansatz structure, as the core of the VQA, is generally selected based on features of the quantum processor and the specific given task.
The ansatz can be described by a general unitary function U(x, θ), which is conventionally considered as the product of two separate unitary functions U(θ) W(x). The parameter-dependent unitary U(θ) is defined as the product of cascaded unitary functions
U(θ) = U(θ_1, ..., θ_L) = ∏_k=1^L U_k(θ_k),
where U_k(θ_k), the unitary function of the layer k, is conventionally defined by an exponential function as
U_k(θ_k) = e^-j H_kθ_k.
Here, H_k is a Hermitian operator, θ_k is the element k of the training vector θ, and L is the number of layers in the ansatz <cit.>. The unitary operator (<ref>) can be interpreted as consecutive parametric rotations with an orientation determined by the Hermitian H_k. This is usually selected as one of the Pauli's operators, H_k ∈{σ_x , σ_y , σ_z }, or their linear superposition ∑_i c_i σ_i.
The data-dependent unitary function W(x) can be similarly modeled as
W(x) = ∏_i=1^N e^ - j G_ix_i,
where G_i is a constant and N is the length of dataset.
The quantum state prepared by (<ref>) is achieved by
|ψ (x, θ)⟩ = U(x, θ) |ψ_0⟩,
where |ψ_0⟩ is an initial quantum state. This quantum state can be used to define a quantum model.
Classical optimization problems in the VQAs are generally in the NP-hard complexity class <cit.>, and, therefore, should be solved using specifically developed methods, e.g., adapted stochastic gradient descent (SGD) <cit.> or meta-learning <cit.>.
Major limitations of the VQAs are related to their trainability, efficiency, and accuracy. Trainability of the VQAs is limited by the Barren plateaus phenomenon (Section II-D), where gradient of the cost function can become exponentially vanishing for a range of training parameters <cit.>. Efficiency of the VQAs is evaluated by the number of measurements required to estimate expectation values in the cost function. Generally, the efficiency is dependent on the computational task of the VQA. Accuracy of the VQAs is mostly impacted by hardware noise which is a notorious feature of the NISQ processors. Noise slows down the training process, degrades accuracy of the algorithm by impacting final value of the cost function, and can lead to the noise-induced barren plateaus <cit.>. These practical challenges should be considered for efficient architecture development and training of the VQAs.
Conventionally the VQAs are realized using parameterized quantum circuits where the parameters specify the rotation angles for the quantum gates. These angles are trained to minimize a specified loss function defined for a given computational task. The loss function is dependent on the system inputs x_k, circuit observables denoted by O_k, and trainable parameterized unitary circuit U(θ)
ℒ(θ) = f(x_k, O_k, U(θ)).
The ansatz parameters are optimized using a classic optimizer to minimize the cost function
θ^* = arg min_θ ℒ(θ).
The choice of ansatz architecture is an important consideration for QNN operation. There are some generic quantum circuit architectures which have performed well on a variety of computational problems and conventionally are used in other applications <cit.>. These ansatzes have a fixed architecture and their trainable parameters are usually rotation angles. These algorithms are relatively easy to implement and are hardware efficient, as they enable control of the circuit complexity. We implement the generator QNN using this approach. On the other hand, the quantum circuit architecture can also be modified to find optimal architecture for the given task. We use this quantum neural architecture search approach to realize the discriminator QNN.
The number of layers is an important design parameter of a QNN. Unfortunately, there is no well-defined rule for finding the optimum number of layers. In a fault-tolerant quantum computer, a deep QNN comprising a large number of layers, as allowed by the processor computational power, can provide higher accuracy. However, in the NISQ computers, the performance of the quantum algorithm can be degraded by the decoherence of qubits, especially if the QNN includes many layers. Therefore, in this work we use a modular design approach to control the depth of the QNNs. This can mitigate the impacts of noise and barren plateaus, and improve the trainability of the QNNs.
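As an illustration of such a layered structure, the following Qiskit sketch builds a small circuit of the form U(θ)W(x): an RY encoding layer followed by trainable rotation layers with CNOT entanglement (the gate choice and layout are illustrative assumptions, not the exact ansatzes used later in the paper):

from qiskit import QuantumCircuit
from qiskit.circuit import Parameter

def layered_ansatz(num_qubits, num_layers):
    qc = QuantumCircuit(num_qubits)
    # Data-dependent part W(x): one encoding rotation per qubit.
    for i in range(num_qubits):
        qc.ry(Parameter(f"x{i}"), i)
    # Parameter-dependent part U(theta): L layers of rotations and entanglement.
    for l in range(num_layers):
        for i in range(num_qubits):
            qc.ry(Parameter(f"t{l}_{i}"), i)
        for i in range(num_qubits - 1):
            qc.cx(i, i + 1)
    return qc

print(layered_ansatz(num_qubits=2, num_layers=3).draw())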
§.§ Gradient Evaluation
The cost function in VQAs is derived using hybrid quantum-classical processing which should be minimized through a classical optimization approach. It is useful to derive exact gradients of quantum circuits with respect to the gate parameters. Using the parameter shift rule, gradients of a quantum circuit can be estimated using the same architecture that realizes the original circuit
∇_θf(x, θ) = k [f(x, θ + Δθ) - f(x, θ - Δθ) ],
where Δθ denotes parameter shift and k is a constant <cit.>. This reduces the computational resources required to evaluate gradient of quantum circuits.
A quantum rotation gate with a single-variable rotation θ can be written in the form G(θ) = e^-j h θ, where h is a parameter representing the behavior of the gate. It is straightforward to show that the first-order derivative of this function can be derived as
∂ G(θ)/∂θ = k (G(θ + π/2) - G(θ - π/2)).
This leads to the conclusion that the gradient of a quantum circuit composed of rotation gates can be derived from the same quantum circuit evaluated at the shifted angles θ±π/2. This result can be extended to any quantum circuit whose Hermitian operator has two distinct eigenvalues <cit.>. For a quantum circuit with a measurement operator Ô and the quantum state |ψ(θ)⟩, as defined in (<ref>), the first-order derivative of the output expectation value ⟨ψ(θ)| Ô |ψ(θ)⟩ with respect to the element θ_i of the vector θ = {θ_1, ..., θ_i, ..., θ_n} can be derived as
∂⟨ψ(θ)| Ô |ψ(θ)⟩/∂θ_i = 1/2(⟨ψ(θ_i^+)| Ô |ψ(θ_i^+)⟩ -
⟨ψ(θ_i^-)| Ô |ψ(θ_i^-)⟩),
where
θ_i^± = {θ_1, ..., θ_i ±π/2, ..., θ_n}.
All of the angular rotation gates used in this paper feature Hermitian operators with two distinct eigenvalues. Therefore, this gradient calculation approach can be applied. It is noted that this method requires the output to be calculated twice for each trainable parameter. On the other hand, it does not need any ancilla qubit, unlike other methods <cit.>. In the hybrid GAN algorithm, there is a large number of trainable parameters in the generator and discriminator QNNs and, as a result, the use of the parameter shift rule obviates the need for many ancilla qubits.
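The following minimal numerical sketch implements this parameter-shift gradient; the expectation function is a trigonometric stand-in for the measured circuit output, so the script runs without a quantum backend:

import numpy as np

def expectation(theta):
    # Stand-in for the measured expectation value of a rotation-gate circuit.
    return float(np.cos(theta[0]) * np.sin(theta[1]))

def parameter_shift_grad(f, theta, shift=np.pi / 2):
    grad = np.zeros_like(theta)
    for i in range(len(theta)):
        plus, minus = theta.copy(), theta.copy()
        plus[i] += shift
        minus[i] -= shift
        grad[i] = 0.5 * (f(plus) - f(minus))
    return grad

theta = np.array([0.3, 1.1])
print(parameter_shift_grad(expectation, theta))
# Matches the analytic gradient [-sin(0.3)sin(1.1), cos(0.3)cos(1.1)] exactly.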
§.§ Barren Plateaus
Barren plateaus issue is one of the challenges in training process of hybrid quantum-classical algorithms where a parameterized quantum circuit is optimized by a classical optimization loop. This results in exponentially vanishing gradients over a wide landscape of parameters in QNNs <cit.>. Mechanisms of the barren plateaus are not completely understood and its behaviors are still under investigation. In <cit.>, it is shown that deep quantum circuits with random initialization are especially prone to the barren plateaus. Furthermore, the number of qubits plays an important role and for sufficiently large number of qubits there is a high chance that the QNN training end up to a barren plateau.
A number of strategies have been proposed to mitigate the effects of barren plateaus in the training of QNNs. In <cit.>, it is proposed to randomly initialize the parameters of the first circuit block U_k(θ_k,1) and add a subsequent block whose parameters are set such that the unitary evaluates to U_k(θ_k,2) = U_k(θ_k,1)^†. This means that each circuit block, and therefore the whole circuit, will initially evaluate to the identity, i.e., U_k(θ_k,1) U_k(θ_k,1)^† = 1. This technique can be effective in reducing the effects of barren plateaus, but it can alter the quantum circuit structure depending on the initial choice of the ansatz, which could degrade the generalization of the quantum classifier. Other approaches, including the use of structured initial guesses and segment-by-segment pretraining, can also mitigate this issue <cit.>.
In the hybrid quantum-classical GAN proposed in this paper, there are at least two features which reduce the barren plateaus effect. First, the modular structure of the QNNs with a primarily advantage of controlling their depth to improve fidelity of the quantum circuits realized using noisy qubits, can also reduce the gradient complexity and the barren plateaus. Second, the modular design approach used to find optimal QNN structures leads to a trainable encoding scheme (Section III-B) which can help to identify the network architectures with the barren plateaus problem and modify them to resolve the issue.
§.§ Discriminator Network
The discriminator QNN of the hybrid quantum-classical GAN architecture is composed of a trainable encoding circuit and separate layers of a parameterized circuit ansatz. Details of the encoding circuit are discussed in Section III.
The discriminator network is used to realize a quantum variational classifier. The expectation value of a single qubit in the discriminator QNN is used to compute the neural network output. The output of the discriminator network is a real number between 0 and 1 indicating the probability of an input being a real data sample. The Z-basis expectation value measured at the output of the discriminator network is given by ⟨ψ| σ_z |ψ⟩, where σ_z is the Pauli-Z matrix. Assuming |ψ⟩ = α|0⟩ + β|1⟩, where α and β are complex numbers satisfying |α|^2 + |β|^2 = 1, it can be shown that
⟨ψ| σ_z |ψ⟩ = |α|^2 - |β|^2.
This quantity changes from -1 for |ψ⟩ = |1⟩ to 1 for |ψ⟩ = |0⟩. We can scale it to achieve a number between 0 and 1 at the output of the discriminator network
D = 1/2( 1 + ⟨ψ|σ_z|ψ⟩).
The loss function for the discriminator network is defined in (<ref>). The loss can be minimized using the minibatch stochastic gradient descent (SGD). This requires the gradient of the loss function ℒ_D (θ_D) to be evaluated with respect to the vector parameter θ_D. Using (<ref>), the derivative of the loss function with respect to the component θ_D^i of the vector can be derived as
∂ℒ_D (θ_D)/∂θ_D^i = ∂ℒ_D/∂ D·∂ D/∂⟨ψ|σ_z|ψ⟩·∂⟨ψ|σ_z|ψ⟩/∂θ_D^i.
There are two terms in the discriminator loss function (<ref>) which we denote them as ℒ_D1 and ℒ_D2.
The first derivative term of (<ref>) can be derived using (<ref>) as
∂ℒ_D1/∂ D = - 1/D(x, θ_D)
∂ℒ_D2/∂ D = 1/1 - D(G(z), θ_D).
The second derivative term using (<ref>) is given by
∂ D/∂⟨ψ|σ_z|ψ⟩ = 1/2.
The third derivative term can be calculated using the parameter shift rule, (<ref>) and (<ref>), as
∂⟨ψ|σ_z|ψ⟩/∂θ_D^i =
1/2( ⟨ψ(θ_D^i+)|σ_z|ψ(θ_D^i+)⟩ - ⟨ψ(θ_D^i-)|σ_z|ψ(θ_D^i-)⟩),
where
θ_D^i± = {θ_D^1, ..., θ_D^i ±π/2, ..., θ_D^n}.
Comparing (<ref>) with (<ref>) indicates that the third derivative term can be related to the discriminator output as the difference between its value evaluated at θ_D^i+ and θ_D^i-. Therefore, using (<ref>)–(<ref>), the derivative of the two terms of the loss function can be derived as
∂ℒ_D1 (θ_D)/∂θ_D^i = - 1/2D(x, θ_D^i+) - D(x, θ_D^i-)/D(x,θ_D)
∂ℒ_D2 (θ_D)/∂θ_D^i = 1/2D(G(z), θ_D^i+) - D(G(z), θ_D^i-)/1 - D(G(z), θ_D)
This process can be performed for all elements of the vector parameter θ_D to derive gradient of the discriminator loss function as
∇ℒ_D = ∑_i=1^n∂ℒ_D (θ_D)/∂θ_D^iû_D^i,
where û_D^i is the unit vector in the direction of parameter θ_D^i.
§.§ Generator Network
The generator QNN of the hybrid quantum-classical GAN architecture is composed of a quantum encoding circuit and a parameterized circuit ansatz. A fixed encoding scheme is used in the generator network unlike the discriminator network which included a trainable encoding circuit. Details of this approach are presented in Section III.
Similar to the discriminator network, the Pauli-Z operator is used as the basis for measurement of the expectation values of the output qubits in the generator network, i.e., ⟨ψ| σ_z |ψ⟩. The expectation values can be normalized using (<ref>) to constrain the output of the generator network between 0 and 1.
The loss function for the generator network is defined in (<ref>). The network is trained using the minibatch SGD and the loss associated with each batch is derived by averaging across all data samples in that batch. Using (<ref>), the derivative of the loss function ℒ_G (θ_G) with respect to the component θ_G^i of the vector θ_G can be derived as
∂ℒ_G (θ_G)/∂θ_G^i = ∂ℒ_G/∂ D∂ D/∂ G∂ G/∂θ_G^i.
The first term can be directly calculated as
∂ℒ_G/∂ D = - 1/D(G(z), θ_D).
For the second term, we note that the output of the discriminator network is related to the generator network function as D = D(G(z, θ_G), θ_D), where θ_D is treated as a constant when the generator network is evaluated. The general form of the parameter shift rule, defined by (<ref>), can be applied provided that
∂ D/∂ G = k_G ( D(G + Δ G) - D(G - Δ G) ),
where k_G and Δ G are two properly selected parameters. G(z, θ_G) is controlled by the vector parameter θ_G and, therefore, k_G and Δ G should generally also depend on it. These parameters cannot be explicitly determined without complete information about the G(z, θ_G) function. A solution is to initialize their values based on (<ref>) and (<ref>), i.e., k_G^(1) = 1/2 and Δ G^(1) = π/2, and then optimize them during the overall SGD process.
Furthermore, the third term in (<ref>) can be derived using the parameter shift rule, (<ref>) and (<ref>), as
∂ G/∂θ_G^i = 1/2( G(z, θ_G^i+) - G(z, θ_G^i-) ),
where
θ_G^i± = {θ_G^1, ..., θ_G^i ±π/2, ..., θ_G^n}.
The gradient of the generator loss function can be derived by running this process for all elements of the vector parameter θ_G as
∇ℒ_G = ∑_i=1^n∂ℒ_G (θ_G)/∂θ_G^iû_G^i,
where û_G^i is the unit vector in the direction of parameter θ_G^i.
§ DATA ENCODING
§.§ Fundamentals
Data representation is a critical aspect for performance of quantum machine learning. Classical data must first be transformed into quantum data before it can be processed by quantum or hybrid quantum-classical algorithms. The data encoding can be realized using different schemes, including basis encoding, angle encoding, and amplitude encoding <cit.>. In the NISQ computers, trainability and robustness of an algorithm are dependent on the encoding scheme.
The encoding is conventionally realized using certain quantum circuits with a fixed structure (Section III-B). In quantum machine learning, this approach prepares the data for training of the QNN independent of the trainable parameters of the network. We use this approach for the generator network. However, in the discriminator network, we use a trainable encoding scheme which improves the ultimate accuracy (Section III-C).
§.§ Fixed Encoding Schemes
The data encoding entails mapping the classic data x to a high-dimensional quantum Hilbert space using a quantum feature map x →|ψ(x)⟩ <cit.>. We evaluate different encoding approaches for the hybrid quantum-classical GAN application on the NISQ computers[Three fundamental encoding methods are investigated in this paper.].
§.§.§ Basis Encoding
In the basis encoding, the classical data is converted to the binary representationx ∈{0,1}^⊗Nand then is encoded as the basis of a quantum state
|x⟩ = 1/√(N)∑_i=1^N|x_i⟩.
The number of qubits is the same as the number of bits in the classical data representation, Q = N. The basis encoding transforms the data samples into quantum space where qubits can store multiple data samples using the quantum superposition. However, the realization of basis encoding requires additional auxiliary qubits <cit.>. This is particularly undesirable for the NISQ computers as the extra qubits can degrade the fidelity of quantum algorithms.
§.§.§ Angle Encoding
In the angle encoding, the classical data x = [x_1, ..., x_N]^T is embedded into the angle of qubits as
|x⟩ = ⊗_i = 1^N( cos(x_i)|0⟩ + sin(x_i)|1⟩).
The realization of this approach requires the same number of qubits as the number of data features, Q = N qubits. It is therefore less qubit efficient compared to the basis and amplitude encoding methods. On the other hand, the angle encoding can be implemented using a constant-depth quantum circuit, e.g., single-qubit rotation gates such as RX, RY, and RZ, and is therefore amenable to the NISQ computers.
In the angle encoding, unlike the basis and amplitude encoding methods, the data features are consecutively encoded into the qubits. This increases the data loading time by a factor of N. It can be resolved by using N parallel quantum gates. The basic angle encoding defined by (<ref>) can be modified to the dense angle encoding <cit.>, where two data features are encoded per qubit:
|x⟩ = ⊗_i = 1^N/2( cos(π x_2i-1)|0⟩ + e^j2π x_2i sin(π x_2i-1)|1⟩).
This method needs Q = N/2 qubits and improves the data loading speed. However, a more complicated quantum circuit is required to implement the encoding.
A subtle but important point is that the angle encoding performs nonlinear operations on the data through the sine and cosine functions. This feature map can provide a powerful quantum advantage in quantum machine learning, e.g., it can separate the data in a way that simplifies the QNN circuit architecture and improves its performance.
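For illustration, a minimal Qiskit sketch of the basic angle encoding is given below, using one RY rotation per feature; the π scaling of the features is an assumption chosen to match the generator encoding described in Section IV-A.

```python
import numpy as np
from qiskit import QuantumCircuit

def angle_encode(features):
    """Basic angle encoding: one qubit and one RY rotation per data feature."""
    circuit = QuantumCircuit(len(features))
    for qubit, x in enumerate(features):
        circuit.ry(np.pi * x, qubit)  # assumes features normalized to [0, 1]
    return circuit

# Example: encode two normalized features into two qubits.
encoder = angle_encode([0.3, 0.8])
```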
§.§.§ Amplitude Encoding
The amplitude encoding embeds the classic data into the amplitudes of a quantum state
|x⟩ = 1/||x||∑_i =1^Nx_i|i⟩ ,
where ||x|| = √(∑_i |x_i|^2) denotes the Euclidean norm of x. The number of data features N is assumed to be a power of two, N = 2^n, and if necessary, this can be achieved by zero padding. This encoding method requires Q = log_2 N qubits. Quantum circuits used to realize the amplitude encoding do not have a constant depth <cit.>. Therefore, the amplitude encoding suffers from higher circuit implementation complexity and computational cost compared to the angle encoding. These considerations make the amplitude encoding less favorable for the NISQ computers.
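A minimal sketch of one possible realization of the amplitude encoding is shown below; it relies on Qiskit's state-preparation routine (`initialize`), which synthesizes a circuit of non-constant depth, and makes the normalization and zero padding explicit.

```python
import numpy as np
from qiskit import QuantumCircuit

def amplitude_encode(x):
    """Amplitude encoding of a classical vector into log2(N) qubits."""
    x = np.asarray(x, dtype=float)
    n_qubits = int(np.ceil(np.log2(len(x))))
    padded = np.zeros(2 ** n_qubits)
    padded[: len(x)] = x                 # zero-pad the feature vector to a power of two
    padded /= np.linalg.norm(padded)     # Euclidean normalization
    circuit = QuantumCircuit(n_qubits)
    circuit.initialize(padded, range(n_qubits))  # state preparation (non-constant depth)
    return circuit

encoder = amplitude_encode([0.2, 0.5, 0.1, 0.7, 0.3])
```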
§.§.§ Encoding Circuit Implementation
For each of the presented data encoding methods, there are many quantum circuits which can be used to realize that encoding. The circuits can feature different numbers of qubits, auxiliary qubits, quantum gates, and circuit depths. The problem of finding the optimal encoding scheme for a given computational task has not yet been fully explored in the literature. The encoding circuit can be selected based on criteria including the number of required qubits and auxiliary qubits, the computational cost, trainability, and robustness to noise and other imperfections in the quantum computer hardware.
The generator network of the GAN should provide a broad diversity of output synthetic quantum data to be fed to the discriminator network. There is no certain desired output data distribution to be used as the basis for optimization of the input encoding circuit. Therefore, we select a fixed encoding quantum circuit followed by a parameterized circuit ansatz as the generator network (Section IV-A). The angle encoding is realized using multiple parallel rotation gates.
§.§ Trainable Encoding Schemes
The fixed encoding schemes treat the encoding task merely as a classical-to-quantum data transformation for subsequent processing by quantum circuits. The data encoding is performed completely independently of the trainable parameterized quantum circuits. We can envision a different avenue by directly embedding the classical data into the quantum circuit as control parameters of its gates. This allows the data to be encoded using hardware-efficient quantum circuits which are conjectured to be hard to simulate classically for data encoding <cit.>. Furthermore, the nonlinear operations which can be realized using encoding circuits, e.g., the angle encoding discussed in Section III-B, can be included in different layers of the quantum circuit. The interleaving of the rotation gates conditioned by the input data with the variational rotation gates creates a trainable encoding circuit which can be trained based on the given input data and expected output. It is shown that this approach can reduce the complexity of the required parameterized quantum circuit or, in some cases, the trainable encoding circuit can accomplish the task without using any additional circuit ansatz <cit.>. The trainable encoding schemes can provide unique properties which call for future research.
In the hybrid quantum-classical GAN developed in this paper, we have implemented the discriminator network using the trainable encoding approach. Two trainable encoding circuits and an ansatz circuit are used as the possible building blocks of the discriminator network. The discriminator network architecture is selected by evaluating multiple possible architectures achieved through different combinations of the encoding and ansatz circuits (Section IV-B).
§ QNN ARCHITECTURES
§.§ Generator QNN
The generator QNN architecture is shown in Fig. <ref>, which comprises a fixed encoding quantum circuit followed by a parameterized circuit ansatz. The angle encoding is realized using parallel RY gates with scaled input data features, R_y(πx_i), to control the range of the rotation angle. The RY gate, defined as R_y(θ) = exp(-jθ/2 σ_y), performs the rotation about the Y axis. For example, R_y(πx_i) transforms the initial qubit state |0⟩ to |ψ_i⟩ = cos(π/2 x_i) |0⟩ + sin(π/2 x_i) |1⟩.
The variational ansatz circuit is realized using parameterized RY and entangling RXX gates. The RXX gate is defined as R_xx(θ) = exp(-jθ/2 σ_x ⊗σ_x), with the maximum entanglement at θ = π/2 and the identity matrix (no entanglement) at θ = 0. The generator QNN includes a total of 11 trainable parameters θ_i.
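The following Qiskit sketch illustrates a generator circuit of this form (scaled RY encoding followed by a parameterized RY layer and RXX entangling gates). The number of qubits, the nearest-neighbor entanglement pattern, and the resulting parameter count are assumptions for illustration and do not necessarily reproduce the 11-parameter layout of Fig. <ref>.

```python
import numpy as np
from qiskit import QuantumCircuit
from qiskit.circuit import Parameter

def generator_circuit(features, n_qubits=3):
    """Sketch of the generator QNN: fixed RY angle encoding + RY/RXX ansatz."""
    n_params = n_qubits + (n_qubits - 1)
    thetas = [Parameter(f"theta_{i}") for i in range(n_params)]
    qc = QuantumCircuit(n_qubits)
    # Fixed encoding layer: one scaled RY rotation per qubit.
    for q in range(n_qubits):
        qc.ry(np.pi * features[q % len(features)], q)
    # Variational layer: parameterized single-qubit RY rotations.
    for q in range(n_qubits):
        qc.ry(thetas[q], q)
    # Entangling layer: parameterized RXX gates on neighboring qubits.
    for q in range(n_qubits - 1):
        qc.rxx(thetas[n_qubits + q], q, q + 1)
    return qc, thetas
```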
§.§ Discriminator QNN
The discriminator QNN is realized using a combination of trainable encoding circuits and a parameterized ansatz circuit. We consider two trainable encoding circuits and the ansatz circuit shown in Fig. <ref> as the possible building blocks of the discriminator network.
In the first encoding circuit, E_1 in Fig. <ref>, the first layer comprises the Hadamard gates, which transform the input qubit states to a superposition of the basis states with equal probabilities. The second layer includes the RX gates with π-scaled input data features x_1 and x_2. The RX gate, given by R_x(θ) = exp(-jθ/2 σ_x), performs rotation about the X axis. The third layer is a two-qubit RYY gate which creates entanglement between its input qubits. The gate is defined as R_yy(θ) = exp(-jθ/2 σ_y ⊗σ_y), providing the maximum entanglement at θ = π/2 and the identity matrix at θ = 0. The next layers are consecutive RX and RZ gates with scaled input data features x_1 and x_2 operating on one of the qubits from the previous gate. The RZ gate, given by R_z(θ) = exp(-jθ/2 σ_z), applies rotation about the Z axis. The next layer is a parameterized RXX entangling gate and the last layer is realized using the RZ gates.
The second encoding circuit, E_2 in Fig. <ref>, has the same architecture as the first encoding circuit, except that the RXX gate is replaced with an RYY gate. This modifies the properties of the entanglement generated between the two qubits.
The ansatz circuit, A in Fig. <ref>, is realized using five layers including six trainable quantum gates. The circuit architecture is selected to include fixed Hadamard gates, trainable single-qubit rotation gates RX and RZ, and trainable two-qubit entangling gates RXX and RYY.
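As a concrete illustration, a two-qubit sketch of the first trainable encoding block E_1 is given below, following the layer ordering described above (Hadamard layer, data-controlled RX rotations, an entangling RYY gate, further data-controlled RX/RZ rotations, a parameterized RXX gate, and final RZ gates). The qubit assignments of the intermediate rotations and which gates carry trainable parameters are assumptions, since these details are fully specified only in Fig. <ref>.

```python
import numpy as np
from qiskit import QuantumCircuit
from qiskit.circuit import Parameter

def encoding_circuit_e1(x1, x2):
    """Sketch of the trainable encoding block E_1 on two qubits."""
    phi = [Parameter(f"phi_{i}") for i in range(4)]
    qc = QuantumCircuit(2)
    qc.h([0, 1])                 # layer 1: equal superposition
    qc.rx(np.pi * x1, 0)         # layer 2: data-controlled RX rotations
    qc.rx(np.pi * x2, 1)
    qc.ryy(phi[0], 0, 1)         # layer 3: entangling RYY gate (trainable, assumed)
    qc.rx(np.pi * x1, 0)         # next layers: data-controlled RX/RZ rotations
    qc.rz(np.pi * x2, 0)         #   (qubit assignment assumed)
    qc.rxx(phi[1], 0, 1)         # parameterized RXX entangling gate
    qc.rz(phi[2], 0)             # final RZ layer (trainable, assumed)
    qc.rz(phi[3], 1)
    return qc, phi
```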
The discriminator QNN has been trained using the two-moon dataset, which is well-suited to the evaluation of quantum machine learning algorithms on the NISQ computers <cit.>. This is a two-feature dataset with a crescent-shaped decision boundary. The dataset samples are normalized to the range 0 to 1. We have explored an extensive number of QNN architectures realized using different combinations of the encoding and ansatz circuits of Fig. <ref>. The networks are evaluated based on their training time and accuracy. Five representative networks considered as candidates for the discriminator QNN are shown in Fig. <ref>. This modular design approach enables control over the trade-off between accuracy and circuit complexity of the QNNs, which is particularly important for the NISQ processors.
The discriminator loss during training using the two-moon dataset for different network candidates is shown in Fig. <ref>. The networks 1 and 2, which are comprised of only trainable encoding circuits, achieve the lowest loss levels (0.14 and 0.18, respectively). This validates our intuition about the power of trainable encoding schemes discussed in Section III-C. Furthermore, it is noted that the network 1 converges to the final loss level faster than the network 2. The two networks differ only in one of their entangling layers (Fig. <ref>), yet this leads to different behaviors in their training dynamics. This conclusion highlights the importance of using proper quantum gates to create the entanglement.
The networks 3, 4, and 5, which comprise the ansatz circuit as well as one of the trainable encoding circuits, achieve higher loss levels, in spite of using more trainable quantum gates. The network 4 reaches a loss of 0.35 after only about 20 training epochs and fluctuates around this loss level upon further training. This can be an indicator of the barren plateaus which prevent efficient training of the network. It is concluded that adding more trainable gates to the network architecture does not necessarily improve its accuracy and can even degrade it.
In Fig. <ref>, the output of the discriminator network trained using the two-moon dataset is shown. The network is realized using multiple stages of the trainable encoding circuit shown in Fig. <ref>(a). Increasing the number of stages improves the effectiveness of the boundaries separating the two classes of data samples. The five-stage discriminator network, used as a quantum classifier, can efficiently separate data samples of the two classes with an accuracy of 100% for the class 0 samples and 99% for the class 1 samples.
§ IMPLEMENTATION RESULTS OF HYBRID GAN
The quantum circuit implementation and simulations are performed using IBM's Qiskit open-source software development kit (SDK), which provides access to prototype superconducting quantum processors through cloud-based quantum computing services.
The training procedure for the hybrid quantum-classical GAN is outlined in Table <ref>. The discriminator and generator networks are trained iteratively using the mini-batch SGD algorithm with a learning rate of 0.01 and the mini-batch size of 10.
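A schematic sketch of this alternating training loop is given below. The `discriminator_step` and `generator_step` callables are hypothetical placeholders that evaluate the corresponding QNNs and return batch-averaged parameter-shift gradients; the discriminator parameter count is an assumption, while the generator's 11 parameters, the learning rate, and the batch size follow the values stated above.

```python
import numpy as np

def train_hybrid_gan(real_data, discriminator_step, generator_step,
                     epochs=300, batch_size=10, lr=0.01):
    """Alternating mini-batch SGD for the hybrid GAN (schematic sketch)."""
    theta_D = np.random.uniform(0, np.pi, size=8)    # discriminator parameters (size assumed)
    theta_G = np.random.uniform(0, np.pi, size=11)   # generator parameters
    for epoch in range(epochs):
        np.random.shuffle(real_data)
        for start in range(0, len(real_data), batch_size):
            batch = real_data[start:start + batch_size]
            z = np.random.uniform(0, 1, size=len(batch))   # latent noise samples
            # Discriminator update with the current generator held fixed.
            theta_D -= lr * discriminator_step(batch, z, theta_D, theta_G)
            # Generator update with the refreshed discriminator held fixed.
            theta_G -= lr * generator_step(z, theta_D, theta_G)
    return theta_D, theta_G
```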
§.§ Uniform Data Distribution
The real (training) and generated data distributions before training the hybrid GAN are shown in Fig. <ref>(a). The generated data samples are produced by the randomly initialized generator network and are bounded between 0 and 1. The real data samples are uniformly distributed between 0.4 and 0.6. As expected, there is very low similarity between the two distributions.
In Fig. <ref>(b), the real and generated data distributions after only 60 training epochs of the hybrid GAN are shown. The distribution of the generated data has shifted significantly toward the distribution of the real data. This confirms successful training of the hybrid GAN. The Inception Score is a performance metric proposed for classical GANs <cit.> which partly measures the diversity of generated data. However, it is difficult to apply this metric to the hybrid GAN due to the need for a large number of data samples (typically 30,000) <cit.>.
In Fig. <ref>, the loss of the discriminator and generator networks over 300 training epochs of the hybrid GAN is shown. The generator loss is high at the beginning of training as a result of random initialization. As the training proceeds, the discriminator loss increases as the discriminator network is trained to distinguish the real data from the generated data. Ultimately, the discriminator and generator losses reach a steady state. In this condition, the generator is effectively able to fool the discriminator, while the discriminator identifies about half of the data samples as real data and the other half as fake data.
The class boundaries of the discriminator network and the data samples separated by the trained hybrid GAN are shown in Fig. <ref>. The hybrid GAN can effectively separate the real and generated data samples with a boundary close to 0.5.
§.§ Nonuniform Data Distribution
We also evaluate performance of the hybrid GAN using training data samples with a nonuniform distribution. The training data distribution and output data distribution of the generator with random initial states are shown in Fig. <ref>(a). There is a very low similarity between the distributions. The real and generated data distributions after training the hybrid GAN are shown in Fig. <ref>(b). The generated data distribution has significantly changed toward the real data distribution. We evaluate similarity between the real and generated data distributions using the Kullback-Leibler (KL) and the Jensen–Shannon (JS) divergence scores
KL (p_d||p_g) = ∑_i p_d(x_i) log( p_d(x_i)/p_g(x_i))
JS (p_d||p_g) = 1/2 KL( p_d || (p_d + p_g)/2 ) + 1/2 KL( p_g || (p_d + p_g)/2 )
where p_d(x) is the probability density of real data x and p_g(x) is the posterior probability density of generated data G(z). In the optimum condition p_d(x) = p_g(x), the KL and JS scores will be zero.
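A minimal NumPy sketch of the two scores is shown below, computed from histogram estimates of the real and generated distributions; the shared binning over [0, 1] and the small epsilon used to avoid division by zero are implementation assumptions.

```python
import numpy as np

def kl_divergence(p, q, eps=1e-12):
    """KL(p || q) for discrete (histogram) distributions."""
    p = p / p.sum()
    q = q / q.sum()
    return float(np.sum(p * np.log((p + eps) / (q + eps))))

def js_divergence(p, q):
    """Jensen-Shannon divergence built from two KL terms against the mixture."""
    m = 0.5 * (p / p.sum() + q / q.sum())
    return 0.5 * kl_divergence(p, m) + 0.5 * kl_divergence(q, m)

# Example: histogram real and generated samples over shared bins.
real = np.random.normal(0.50, 0.05, 1000)
fake = np.random.normal(0.45, 0.08, 1000)
bins = np.linspace(0.0, 1.0, 21)
p_d, _ = np.histogram(real, bins=bins)
p_g, _ = np.histogram(fake, bins=bins)
print(kl_divergence(p_d, p_g), js_divergence(p_d, p_g))
```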
For the data distributions in Fig. <ref>, the KL score is 1.71 before training and 0.39 after training the hybrid GAN. The JS score decreases from 2.63 before training to 0.52 after training. These results validate successful operation of the hybrid GAN for the nonuniform data distribution.
For the nonuniform training data distribution, unlike the uniform distribution, the discriminator network has different certainty in classifying samples as real or fake across the range of data samples. This mitigates a training failure of the hybrid GAN, namely mode collapse, which will be discussed in Section V-C.
§.§ Training Challenges of Hybrid GAN
The training process of the hybrid GAN has been quite challenging and has encountered multiple issues including convergence failure, slow training, and mode collapse.
§.§.§ Convergence Failure
Convergence of the GAN, in either the classical or quantum domain, is difficult as a result of the simultaneous training of the generator and discriminator networks in a zero-sum game. In every training epoch in which the parameters of one model are updated, the optimization problem for the other model changes. This means that an improvement in one model can lead to deterioration of the other model. This trend can be observed in Fig. <ref>, epochs 1 to 100, where the initial improvements in the generator loss lead to an increasing loss of the discriminator. The vanishing gradient problem in the GAN can be due to an over-trained discriminator which rejects most of the generator samples (rather than half of the samples). This issue can be prevented by avoiding over-training of the discriminator network <cit.>. If the GAN training is successful, the two networks reach a state of equilibrium between the two competing loss criteria. This phase of the training can be noted in Fig. <ref>, epochs 100 to 200. Afterward, the losses of the generator and discriminator networks plateau, indicating that the training process has converged.
§.§.§ Slow Training
Barren plateaus, a general problem in the training of QNNs, were discussed in Section II-D. A barren plateau is a vanishing-gradient issue which can severely slow down or halt the progress of the training process. It can be mitigated by modifying the initial state of the quantum circuits and, if that is not successful, by changing the quantum circuit architecture. However, a major challenge is that it cannot be predicted whether a certain initial state or network architecture will encounter this issue before the network is trained for a sufficient number of iterations, which itself is also unknown. As a result, the development and training of QNNs usually entail several trials and revisions of the network architecture.
§.§.§ Mode Collapse
The convergence of the training process is not a sufficient condition to guarantee a successful realization of the GAN. Another important issue is mode collapse, which refers to the condition in which the generator learns to produce only a limited range of samples from the real data distribution. We have encountered mode collapse during training of the hybrid GAN. In Fig. <ref>, distributions of the real and generated data samples are shown for a hybrid GAN with mode collapse failure. The generated data samples are concentrated around a limited range (close to 0.45), while for most of the real data samples there are no corresponding generated data samples. We have resolved this issue by using different quantum circuits to realize the generator QNN.
Furthermore, we have noted that mode collapse occurs more frequently when the GAN is trained using a uniform distribution. This is a result of the equal probability with which the discriminator network classifies the data samples as real data. For the nonuniform training data distribution, however, the certainty of the discriminator in real/fake classification differs across the data samples.
In classical GANs, modified methods have been developed to extend the range of generated data samples, including the Wasserstein GAN <cit.> and the Unrolled GAN <cit.>. It is an opportunity for future research to evaluate the effectiveness of such approaches in the hybrid quantum-classical GAN.
§ CONCLUSION
We presented a hybrid quantum-classical generative adversarial network (GAN). The generator quantum neural network (QNN) is realized using a fixed encoding circuit followed by a parameterized quantum circuit. The discriminator QNN is implemented using a trainable encoding circuit through a modular design approach, which enables embedding of classical data into a quantum circuit without using explicit encoding and ansatz circuits. It is shown that the gradients of the discriminator and generator loss functions can be calculated using the same quantum circuits that realize the discriminator and generator QNNs. The developed hybrid quantum-classical GAN is trained successfully using uniform and nonuniform data distributions. Using a nonuniform distribution for the training data, the mode collapse failure, which GANs are prone to, can be mitigated. The proposed approach for the realization of the hybrid quantum-classical GAN can open up a research direction for the implementation of more advanced GANs on near-term quantum processors.
|
http://arxiv.org/abs/2307.02249v1
|
20230705124452
|
Rethinking Multiple Instance Learning for Whole Slide Image Classification: A Good Instance Classifier is All You Need
|
[
"Linhao Qu",
"Yingfan Ma",
"Xiaoyuan Luo",
"Manning Wang",
"Zhijian Song"
] |
cs.CV
|
[
"cs.CV"
] |
Rethinking Multiple Instance Learning for Whole Slide Image Classification: A Good Instance Classifier is All You Need
Linhao Qu, Yingfan Ma, Xiaoyuan Luo, Manning Wang, and Zhijian Song
This work is supported by the National Natural Science Foundation of China under Grant 82072021. (Linhao Qu and Yingfan Ma contributed equally to this work. Corresponding authors: Manning Wang and Zhijian Song.)
All the authors are with Digital Medical Research Center, School of Basic Medical Science, Fudan University, Shanghai 200032, China. And they are also with Shanghai Key Lab of Medical Image Computing and Computer Assisted Intervention, Shanghai 200032, China. (e-mail: [email protected]; [email protected]; [email protected]; [email protected]).
August 1, 2023
======================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================
Weakly supervised whole slide image classification is usually formulated as a multiple instance learning (MIL) problem, where each slide is treated as a bag, and the patches
cut out of it are treated as instances. Existing methods either train an instance classifier through pseudo-labeling or aggregate instance features into a bag feature through
attention mechanisms and then train a bag classifier, where the attention scores can be used for instance-level classification. However, the pseudo instance labels constructed
by the former usually contain a lot of noise, and the attention scores constructed by the latter are not accurate enough, both of which affect their performance.
In this paper, we propose an instance-level MIL framework based on contrastive learning and prototype learning to effectively accomplish both instance classification and bag
classification tasks. To this end, we propose an instance-level weakly supervised contrastive learning algorithm for the first time under the MIL setting to effectively learn
instance feature representation. We also propose an accurate pseudo label generation method through prototype learning. We then develop a joint training strategy for weakly
supervised contrastive learning, prototype learning, and instance classifier training. Extensive experiments and visualizations on four datasets demonstrate the powerful
performance of our method. Codes will be available.
multiple instance learning, contrastive learning, prototype learning, whole slide image classification
§ INTRODUCTION
The deep learning-based whole slide image (WSI) processing technology is expected to greatly promote the automation of pathological image diagnosis and analysis <cit.>. However,
WSIs are quite different from natural images in size, ranging from 100 million to 10 billion pixels, which makes it impossible to directly apply deep learning models developed for natural
images to WSIs. It is a common approach to divide a WSI into many non-overlapping small patches for processing, but providing fine-grained annotations for these patches is very expensive
(a WSI can typically produce tens of thousands of patches), making patch-based supervised methods infeasible <cit.>. Therefore, weakly supervised learning approaches based on
Multiple Instance Learning (MIL) have become the mainstream in this field. In the MIL setting, each WSI is regarded as a bag, and the small patches cut out of it are regarded as instances
of the bag. For a positive bag, there is at least one positive instance, while for a negative bag, all instances are negative. In clinical applications, there are two main tasks for WSI
classification: bag-level classification, which accurately predicts the class of a whole slide, and instance-level classification, which accurately identifies positive instances <cit.>.
Currently, MIL methods for WSI classification can be divided into instance-based methods <cit.> and bag-based methods <cit.>. Instance-based methods typically
train an instance classifier with pseudo labels and then aggregate the predictions of instances to obtain the bag-level prediction. The main problem of this
approach is that instance pseudo labels contain a lot of noise. Since the true label of each instance is unknown, the quality of the pseudo labels assigned to all instances is a key factor that
determines the performance of this type of methods. However, existing studies usually assign instance pseudo labels by inheriting the label of the bag they belong to, which leads to a large amount
of noise in the pseudo labels and greatly limits their performance. Figure <ref> A shows the typical training process and issues of instance-based methods.
Bag-based methods first use an instance-level feature extractor to extract features for each instance in a bag, and then aggregate these features to obtain a bag-level feature,
which is used to train a bag classifier. Most recent bag-based methods utilize attention mechanisms to aggregate instance features and they introduce an independent scoring module
to generate learnable attention weights for each instance feature, which can be used to realize instance-level classification. Although this type of method overcomes the problem of
noisy labels in instance-based methods, it also has the following issues: 1) Low performance in instance-level classification. We find that the difficulty of identifying different
positive instances is different in the same positive bag (e.g., instances with larger tumor areas are easier to be identified than those with smaller tumor areas). Attention-based
methods define losses at the bag level, which often leads to the result that only the most easily identifiable positive instances are found while other more difficult ones are
missed <cit.>. In other words, the network assigns higher attention weights to the most easily recognizable positive instances to achieve correct bag classification, without the
need to find all positive instances, which greatly limits its instance-level classification performance. 2) Bag-level classification performance is not robust. As mentioned earlier,
bag-level classification relies heavily on the attention scores assigned by the scoring network to each instance. When these attention scores are inaccurate, the performance of the
bag classifier will also be affected. A typical example is the bias that occurs in classifying bags with a large number of difficult positive instances while very few easy positive
instances. Figure <ref> B shows the typical training process and issues of bag-based methods.
In the history of deep learning-based MIL research, although instance-based methods were first proposed, their reliance on instance pseudo labels, which are difficult to obtain,
has led to a bottleneck in their performance. In contrast, an increasing number of researchers <cit.> have focused on using stronger attention-based methods at the bag level to
provide more accurate attention scores. However, intuitively, as long as the loss function is defined at the bag level, attention-based scoring methods will inevitably exhibit
"laziness" in finding more difficult positive instances <cit.>. Different from the above-mentioned studies, we propose an instance-based MIL framework based on contrastive
learning and prototype learning, called INS. Figure <ref> C illustrates the main idea and basic components of INS. Our main objective is to directly train an efficient instance
classifier at the fine-grained instance level, which needs to fulfill two requirements: first, obtaining a good instance-level feature representation, and second, assigning
an accurate pseudo label to each instance. To this end, we propose instance-level weakly supervised contrastive learning (IWSCL) for the first time in the MIL setting to learn
good instance representations, better separating negative and positive instances in the feature space. We also propose the Prototype-based Pseudo Label Generation (PPLG) strategy,
which generates high-quality pseudo labels for each instance by maintaining two representative feature vectors as prototypes, one for negative instances and the other for positive
instances. We further develop a joint training strategy for IWSCL, PPLG and the instance classifier. Overall, IWSCL and PPLG are completed under the guidance of the instance
classifier's predictions. At the same time, the good feature representations from IWSCL and the accurate pseudo labels from PPLG can then further improve the instance classifier.
More importantly, we efficiently utilize the true negative instances from negative bags in the training set to guide the instance classifier, the IWSCL, and the PPLG, ensuring
that INS iterates in the right direction. After training the instance classifier, owing to its strong instance classification performance, we can complete accurate bag
classification using simple mean pooling.
We comprehensively evaluated the performance of INS on six tasks, using a simulated CIFAR10 dataset and three real-world datasets containing breast cancer,
lung cancer, and cervical cancer. Extensive experimental results demonstrate that INS achieved better performance in both instance and bag
classification than state-of-the-art methods. More importantly, our experiments not only include the tasks that human doctors can directly judge from H&E-stained
slides, like tumor diagnosis and tumor subtyping, but also tasks for which human doctors cannot make decisions directly from H&E slides, including predicting lymph
node metastasis from the primary lesion, patient prognosis, and prediction of immunohistochemical markers. Given the strong instance-level classification ability of INS,
we can use it for explainable research and new knowledge discovery in these difficult clinical tasks. In the task of predicting lymph node metastasis from primary lesion
of cervical cancer, we use INS to classify high- and low-risk instances, thereby identifying the "Micropapillae" pathological pattern that indicates high risk of lymph node metastasis.
The main contributions of this paper are as follows:
∙ We propose INS, an instance-based MIL framework that combines contrastive learning and prototype learning. This framework serves as an efficient instance classifier, capable of effectively addressing instance-level and bag-level classification tasks at the finest-grained instance level.
∙ We propose instance-level weakly supervised contrastive learning (IWSCL) for the first time in the MIL setting to learn good feature representations for each instance. We also propose the Prototype-based Pseudo Label Generation (PPLG) strategy, which generates high-quality pseudo labels for each instance through prototype learning. We further propose a joint training strategy for IWSCL, PPLG, and the instance classifier.
∙ We comprehensively evaluated the performance of INS on six tasks of four datasets. Extensive experiments and visualization results demonstrate that INS achieves the best performance of instance and bag classification.
§ RELATED WORK
§.§ Instance-based MIL Methods
Instance-based methods train an instance classifier by assigning pseudo labels to each instance, and bag classification is achieved by aggregating the prediction of all instances in a bag.
Early methods <cit.> typically assign a bag's label to all its instances, leading to a large number of noisy labels in positive bags. Some recent methods <cit.> select key instances and only use them
for training, thus reducing the impact of noise to some extent. In this paper, we present a strong instance classifier and we believe that a good instance classifier requires both good instance-level
representation learning and accurate pseudo instance labels. To fulfill this goal, we propose for the first time the instance-level weakly supervised contrastive learning under the MIL setting,
which achieves efficient instance feature representation. We also propose a prototype learning-based strategy to generate high-quality pseudo labels.
§.§ Bag-based MIL Methods
Bag-based methods first extract instance features and then aggregate these features in a bag to generate bag features for training a bag classifier.
Attention-based methods <cit.> are the mainstream of this paradigm, which typically use an independent scoring network for each instance feature to produce
learnable attention weights, which can also be used to generate instance predictions. The main problem of these methods is that they cannot accurately identify difficult
positive instances, resulting in limited instance and bag classification performance. Recently, some studies <cit.> have added instance-level classification loss to
bag-level losses, but the pseudo labels assigned to instances are still noisy.
Some methods have also been proposed to accomplish WSI classification using reinforcement learning <cit.>, graph learning <cit.>, and bayesian learning <cit.>.
However, none of them can effectively fulfill the instance classification task. In contrast, we directly start from the finest instance-level and use weakly supervised contrastive
learning and prototype learning to complete instance feature learning and pseudo-label updating, thereby addressing both instance and bag classification.
§.§ Contrastive Learning for WSI Classification
Existing methods <cit.> usually first use WSI patches to pretrain an instance-level feature extractor through self-supervised learning and then perform model training
using the extracted features. Most of them <cit.> use contrastive self-supervised learning methods <cit.> to extract instance features, but this process is completely
unsupervised and it can only attempt to separate all instances as much as possible instead of effectively separating positive and negative instances. In the latest research, Wang et al. <cit.>
proposed a feature-based contrastive learning method at the bag-level, but it still cannot perform effective instance classification. In contrast, we for the first time propose instance-level
weakly supervised contrastive learning (IWSCL) under the MIL setting, which effectively separates negative and positive instance features. Figure <ref> shows the differences between existing
contrastive learning methods and our proposed IWSCL.
§.§ Prototype Learning for WSI Classification
Prototype learning, derived from Nearest Mean Classifiers, aims to provide a concise representation for instances. Recent studies demonstrate the potential of using representations or prototypes
for classification, with variations in construction and utilization. PMIL <cit.> employs unsupervised clustering to construct prototypes, enhancing bag features by assessing instance similarity within bags.
TPMIL <cit.> creates learnable prototype vectors, utilizing attention scores as soft pseudo-labels to assign instances. However, PMIL's non-learnable prototypes focus on improving bag features, posing
challenges for fine-grained instance classification. TPMIL heavily relies on attention scores, which often fail to accurately identify challenging positive instances. Additionally, their prototype learning
lacks effective integration of feature-level contrastive learning, resulting in limited performance. In contrast, our proposed PPLG strategy generates high-quality pseudo-labels, utilizing joint training
with instance contrastive representation learning, prototype learning, and the instance classifier. Our prototype learning also incorporates guidance from the instance classifier and includes true negative
instances from negative bags in the training set.
§ METHOD
§.§ Problem Formulation
Given a dataset X={X_1,X_2,…,X_N} containing N WSIs, each WSI X_i is divided into non-overlapping patches {x_i,j,j=1,2,… n_i}, where n_i denotes the number of patches
obtained from X_i. All the patches from X_i constitute a bag, where each patch is an instance of this bag. The label of the bag Y_i∈{0,1}, i={1,2,...N}, and the labels of
each instance {y_i,j,j=1,2,… n_i} have the following relationship:
Y_i = 0 if ∑_j y_i, j = 0, and Y_i = 1 otherwise.
This indicates that all instances in negative bags are negative, while in positive bags, there exists at least one positive instance. In the setting of weakly supervised MIL, only the labels of
bags in the training set are available, while the labels of instances in positive bags are unknown. Our goal is to accurately predict the label of each bag (bag classification) and the label of each
instance (instance classification) in the test set.
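This bag-instance label relationship amounts to a max-pooling of the instance labels; a trivial sketch is shown below for concreteness.

```python
import torch

def bag_label_from_instances(instance_labels):
    """A bag is positive iff it contains at least one positive instance."""
    return int(torch.as_tensor(instance_labels).max().item() > 0)

assert bag_label_from_instances([0, 0, 0]) == 0   # negative bag
assert bag_label_from_instances([0, 1, 0]) == 1   # positive bag
```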
§.§ Framework Overview
Figure <ref> presents the overall framework of the proposed INS, which aims to train an efficient instance classifier using instance-level weakly supervised contrastive learning (IWSCL) and
Prototype-based Pseudo Label Generation (PPLG). We use the true negative instances from negative bags in the training set to guide all the instance classifier, the IWSCL and the
PPLG, ensuring that INS iterates towards the right direction. IWSCL and PPLG are also guided by the instance classifier, and they also help improve the instance classifier through iterative optimization.
Specifically, in one iteration, we first randomly select an instance x_i,j from all instances in the training set, and generate a query view and a key view using two different augmentations.
In the Query View branch, we input the query view into an Encoder and then feed its output to both the instance classifier and the MLP-based projector to obtain the
predicted class ŷ_i,j∈ℝ^2 (a one-hot vector indicating negative or positive) and the feature embedding q_i,j∈ℝ^d, respectively.
In the Key View branch, we input the key view into an Encoder and then feed its output to a Projector to obtain the feature embedding k_i,j∈ℝ^d, where
both Encoder and Projector are updated through momentum-based methods from the Query View branch. Inspired by MOCO <cit.> and Pico <cit.>, we maintain a large Embedding Queue
to store the feature embeddings of the Key View branch together with the predicted class labels of the corresponding instances. Then, we use the
current instance's ŷ_i,j, q_i,j, k_i,j, and the Embedding Queue from the previous iteration to perform instance-level weakly supervised contrastive
learning (IWSCL). For q_i,j, we pull closer the instance embeddings in the Embedding Queue that have the same predicted class and push away those with different
predicted class. In the PPLG module, we maintain two representative feature vectors as prototypes for positive and negative classes during training. We use q_i,j and
the prototype vectors from the previous iteration to generate pseudo labels for x_i,j. Finally, we update the Embedding Queue using ŷ_i,j and k_i,j, update
the prototype vectors using ŷ_i,j and q_i,j, and train the instance classifier with the generated pseudo label, which completes the current iteration.
In addition, to prevent bag-level degradation during training, we add a bag-level constraint loss function. The IWSCL module and the PPLG module are presented in Section <ref> and Section <ref>, respectively. The bag constraint and total loss are given in Section <ref>.
§.§ Instance-level Weakly Supervised Contrastive Learning
In contrastive learning, the most important step is to construct positive and negative sample sets, and then learn robust feature representations by pulling positive samples closer and pushing
negative samples farther apart in the feature space <cit.>. To avoid confusion with the positive and negative instances in the MIL setting,
we use family/non-family sample sets to represent the positive/negative sample sets in contrastive learning, respectively.
In traditional self-supervised contrastive learning, the standard method for constructing family and non-family sets is to use two augmented views of the same sample as family samples,
while all other samples are considered as non-family sets. This can only force all samples to be as far away from each other as possible in the feature space, but cannot separate
positive and negative instances in the MIL setting. In contrast, in the MIL setting, all instances from negative bags in the training set have true negative labels, and they
naturally belong to the same set. This weak label information can effectively guide the instance-level contrastive learning, which is neglected in existing studies. We
maintain a large Embedding Queue during training to store the feature embeddings k_i,j of a large number of instances and their predicted classes ŷ_i,j by the
instance classifier. Note that for true negative instances, we no longer save their predicted classes, but directly assign them a definite negative class, i.e., ŷ_i,j=0.
Family and Non-family Sample Selection. For the instance x_i,j and its embedding q_i,j, we use the instance classifier's predicted class ŷ_i,j and the
Embedding Queue to construct its family set F(q_i,j) and non-family set F'(q_i,j), and then perform contrastive learning based on q_i,j. Specifically, F(q_i,j) comes
from two parts, of which the first part consists of the embeddings q_i,j and k_i,j and the second part consists of all embeddings in the Embedding
Queue whose class label equals ŷ_i,j. Embeddings with the other class label in the Embedding Queue form the non-family set F'(q_i,j).
Mathematically, for a given mini-batch, let all query and key embeddings be denoted as B_q and B_k, and the Embedding Queue as Q.
For an instance (x_i,j,q_i,j,ŷ_i,j), its contrastive embedding pool is defined as:
P(q_i, j)=(B_q ∪ B_k ∪ Q) \{q_i, j}
In P(q_i,j), its family set F(q_i,j) and non-family set F'(q_i,j) are defined as:
F(q_i, j)={m | m ∈ P(q_i, j), ŷ_m=ŷ_i, j}
F'(q_i, j)=P(q_i, j) \ F(q_i, j)
Contrastive Loss. We construct a contrastive learning loss based on the embedding q_i,j:
ℒ_IWSCL(q_i, j) =
-1/|F(q_i, j)| ∑_k_+ ∈ F(q_i, j) log exp(q_i, j^⊤ k_+ / τ) / ∑_k_- ∈ F'(q_i, j) exp(q_i, j^⊤ k_- / τ),
where k_+ denotes the family sample of the current q_i,j, k_- denotes the non-family sample of the current q_i,j, and τ≥0 is the temperature coefficient.
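A PyTorch sketch of this loss for a single query embedding is given below; it follows the equation above, with the family set assembled from the momentum embedding of the same instance and the queue entries sharing the predicted label. Batch handling and the inclusion of the other in-batch embeddings B_q and B_k are omitted for brevity, and the tensor shapes are assumptions.

```python
import torch

def iwscl_loss(q, k_pos, queue_emb, queue_lab, pred_label, tau=0.07):
    """Weakly supervised contrastive loss for one query embedding (sketch).

    q:          (d,) normalized query embedding q_ij
    k_pos:      (d,) momentum (key-view) embedding of the same instance
    queue_emb:  (M, d) embeddings stored in the contrastive pool
    queue_lab:  (M,) predicted / true-negative labels of the pool entries
    pred_label: int, predicted class of the current instance
    """
    family = torch.cat([k_pos.unsqueeze(0), queue_emb[queue_lab == pred_label]], dim=0)
    nonfamily = queue_emb[queue_lab != pred_label]
    pos_logits = family @ q / tau                          # similarities to family samples
    denom = torch.logsumexp(nonfamily @ q / tau, dim=0)    # log-sum over non-family samples
    return -(pos_logits - denom).mean()
```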
Embedding Queue Updating. At the end of each iteration, the current instance's momentum embedding k_i,j and its predicted label ŷ_i,j or true negative label are added to the Embedding Queue,
and the oldest embedding and its label are dequeued.
§.§ Prototype-based Pseudo Label Generation
On the basis of obtaining a meaningful feature representation, we assign more accurate pseudo labels to instances by prototype learning. To this end, we maintain
two representative feature vectors, one for negative instances and the other for positive instances, as prototype vectors μ_r∈ℝ^d,r=0,1.
The generation of pseudo labels and the updating process of prototypes are also guided by true negative instances and the instance classifier. If the
current instance x_i,j comes from a positive bag, we use its embedding q_i,j and the prototype vectors μ_r to generate its pseudo label s_i,j∈ℝ^2.
At the same time, we update the prototype vector of the corresponding class using its predicted label ŷ_i,j and embedding q_i,j. If the current instance x_i,j
comes from a negative bag, we directly assign it a negative label and update the negative prototype vector using its embedding q_i,j. Then, we use the generated pseudo labels
to train the instance classifier and complete this iteration.
Pseudo Label Generation. If the current instance x_i,j comes from a positive bag, we calculate the inner product between its embedding q_i,j and the two
prototype vectors μ_r, and select the prototype label with the smaller feature distance as the update direction z_i,j∈ℝ^2 for the pseudo
label of x_i,j. Then, we use a moving updating strategy to update the pseudo label of the instance, defined as follows:
s_i, j=α s_i, j+(1-α) z_i, j, z_i, j=onehot(argmax q_i, j^⊤μ_r),
where α is a coefficient for moving updating, and onehot(·) is a function that converts a value to a two-dimensional one-hot vector.
The moving updating strategy can make the process of updating pseudo labels smoother and more stable.
Prototype Updating. If the current instance x_i,j comes from a positive bag, we update the corresponding prototype vector μ_c according to its
predicted category ŷ_i,j and embedding q_i,j using a moving updating strategy as follows:
μ_c=𝑁𝑜𝑟𝑚(βμ_c+(1-β) q_i, j), c=argmaxŷ_i, j,
where β is a coefficient for moving updating and 𝑁𝑜𝑟𝑚(·) is the normalization function.
If the current instance x_i,j comes from a negative bag, i.e., x_i,j is a true negative instance, we update the negative prototype vector μ_0 using its embedding q_i,j as follows:
μ_0=𝑁𝑜𝑟𝑚(βμ_0+(1-β) q_i, j)
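The following PyTorch sketch summarizes one PPLG step for a single instance, covering the moving update of the pseudo label toward the nearest prototype and the momentum update of the prototypes; the tensor layout (prototype index 0 for the negative class) and handling outside the autograd graph are assumptions.

```python
import torch
import torch.nn.functional as F

def pplg_update(q, s, prototypes, pred_class, from_negative_bag,
                alpha=0.9, beta=0.99):
    """One PPLG step for a single instance (sketch).

    q:          (d,) query embedding of the instance
    s:          (2,) current soft pseudo label
    prototypes: (2, d) negative (index 0) and positive (index 1) prototypes
    pred_class: int, argmax of the instance classifier output
    """
    if from_negative_bag:
        # True negative instance: hard negative label, update the negative prototype.
        s = torch.tensor([1.0, 0.0])
        prototypes[0] = F.normalize(beta * prototypes[0] + (1 - beta) * q, dim=0)
    else:
        # Instance from a positive bag: move the pseudo label toward the prototype
        # with the larger inner product and update the predicted class's prototype.
        z = F.one_hot(torch.argmax(prototypes @ q), num_classes=2).float()
        s = alpha * s + (1 - alpha) * z
        prototypes[pred_class] = F.normalize(
            beta * prototypes[pred_class] + (1 - beta) * q, dim=0)
    return s, prototypes
```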
Instance Classification Loss. We use the cross-entropy loss between the predicted value p_i,j∈ℝ^2 of the instance classifier and the pseudo label s_i,j to train the instance classifier.
ℒ_c l s=𝐶 𝐸(p_i, j, s_i, j),
where 𝐶𝐸(·) represents the cross-entropy loss function.
§.§ Bag Constraint and Total Loss
Bag Constraint. To further utilize the bag labels, we record the bag index of each instance and apply the following bag constraint loss:
ℒ_b c=𝐶 𝐸(𝑀 𝐿 𝑃(Mean(q_i, j, j=1,2, … n_i)), Y_i),
where Mean(q_i,j,j=1,2,… n_i) represents the mean pooling of all instance embeddings in a bag to obtain a bag embedding.
Total Loss. The total loss ℒ is composed of the contrastive loss ℒ_𝐼𝑊𝑆𝐶𝐿, instance classification loss ℒ_cls, and bag constraint loss ℒ_bc, defined as follows:
ℒ=ℒ_𝐼 𝑊 𝑆 𝐶 𝐿+λ_1 ℒ_c l s+λ_2 ℒ_b c,
where λ_1 and λ_2 are weight coefficients used for balancing.
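A sketch of how the three terms are combined for a training batch is shown below; the soft cross-entropy against the pseudo labels, the per-bag mean pooling with a small MLP head, and the per-bag loop are implementation assumptions consistent with the equations above.

```python
import torch
import torch.nn.functional as F

def ins_total_loss(loss_iwscl, inst_logits, pseudo_labels, inst_emb,
                   bag_index, bag_labels, bag_mlp, lambda1=1.0, lambda2=1.0):
    """Combine the IWSCL, instance classification, and bag constraint terms (sketch).

    inst_logits:   (B, 2) instance classifier outputs for the batch
    pseudo_labels: (B, 2) soft pseudo labels from PPLG
    inst_emb:      (B, d) query embeddings of the instances
    bag_index:     (B,) bag id of each instance
    bag_labels:    (num_bags,) long tensor of bag labels Y_i
    """
    # Instance classification loss: cross-entropy against soft pseudo labels.
    loss_cls = -(pseudo_labels * F.log_softmax(inst_logits, dim=1)).sum(dim=1).mean()
    # Bag constraint: mean-pool instance embeddings per bag and classify with an MLP.
    bag_ids = torch.unique(bag_index)
    loss_bc = 0.0
    for b in bag_ids:
        bag_emb = inst_emb[bag_index == b].mean(dim=0, keepdim=True)
        loss_bc = loss_bc + F.cross_entropy(bag_mlp(bag_emb), bag_labels[b].view(1))
    loss_bc = loss_bc / len(bag_ids)
    return loss_iwscl + lambda1 * loss_cls + lambda2 * loss_bc
```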
§ EXPERIMENTAL RESULTS
§.§ Datasets
We used four datasets to comprehensively evaluate the instance and bag classification performance of INS, including a simulated dataset called CIFAR-MIL, constructed using CIFAR 10 <cit.>, as well as three real WSI datasets: the Camelyon 16 Dataset <cit.> for breast cancer, the TCGA-Lung Cancer Dataset, and an in-house Cervical Cancer Dataset. More importantly, our experiments not only include tasks that doctors can directly judge from H&E-stained slides, including tumor diagnosis (on the Camelyon 16 Dataset) and tumor subtyping (on the TCGA-Lung Cancer Dataset), but also tasks for which doctors cannot make decisions directly from H&E slides, including lymph node metastasis from the primary lesion, patient prognosis, and prediction of immunohistochemical markers (all on the Cervical Cancer Dataset).
§.§.§ Simulated CIFAR-MIL Dataset
To evaluate the performance of INS under different positive ratios and compare it with the comparison methods, following WENO <cit.>, we used the 10-class natural image dataset CIFAR-10 <cit.> to construct a simulated WSI dataset called CIFAR-MIL with different positive ratios.
The CIFAR-10 dataset consists of 60,000 32×32-pixel color images divided into 10 categories (airplane, automobile, bird, cat, deer, dog, frog, horse, ship, truck), with each category containing 6,000 images. Of these images, 50,000 are used for training and 10,000 for testing. To simulate pathological Whole Slide Images (WSIs), we combined a set of random images from each category of the CIFAR-10 dataset to construct each simulated WSI. Specifically, we treated each image from each category of the CIFAR-10 dataset as an instance, and only the instances of the "truck" category were labeled as positive while the other instances were labeled as negative (the truck category was chosen at random). Then, we randomly selected a positive bag consisting of a positive instances and 100-a negative instances (without repetition) from all instances, with a positive ratio of a/100. Similarly, we randomly selected a negative bag consisting of 100 negative instances (without repetition). We repeated this process until all positive or negative instances in the CIFAR-10 dataset were used up. By adjusting the value of a, we constructed 5 subsets of the CIFAR-MIL dataset with positive ratios of 5%, 10%, 20%, 50%, and 70%, respectively.
§.§.§ Camelyon16 Public Dataset
The Camelyon16 dataset is a publicly available dataset of histopathology images used for detecting breast cancer metastasis in lymph nodes <cit.>. WSIs containing metastasis are labeled positive, while those without metastasis are labeled negative. In addition to slide-level labels indicating whether a WSI is positive or negative, the dataset also provides pixel-level labels for metastasis areas. To satisfy weakly supervised scenarios, we used only slide-level labels for training and evaluated the instance classification performance of each algorithm using the pixel-level labels of cancer areas. Prior to training, we divided each WSI into non-overlapping 512×512 image patches under 10× magnification. Patches with entropy less than 5 were removed as background, and a patch was labeled positive if it contained 25% or more cancer areas, otherwise, it was labeled negative. A total of 186,604 instances were obtained for training and testing, with 243 slides used for training and 111 for testing.
§.§.§ TCGA Lung Cancer Dataset
The TCGA Lung Cancer dataset comprises 1054 WSIs obtained from the Cancer Genome Atlas (TCGA) Data Portal, which consists of two lung cancer subtypes, namely Lung Adenocarcinoma and Lung Squamous Cell Carcinoma. Our objective is to accurately diagnose both subtypes, with WSIs of Lung Adenocarcinoma labeled as negative and WSIs of Lung Squamous Cell Carcinoma labeled as positive. This dataset provides only slide-level labels and patch-level labels are unavailable. The dataset contains about 5.2 million patches at 20× magnification, with an average of approximately 5,000 patches per slide. These WSIs were randomly partitioned into 840 training slides and 210 test slides (4 low-quality corrupted slides were discarded).
§.§.§ Cervical Cancer Dataset
The Cervical Cancer dataset is an in-house clinical pathology dataset that includes 374 H&E-stained WSIs of primary lesions of cervical cancer from different patients, after slide selection.
We conducted all experiments at 5× magnification, and we divided each WSI into non-overlapping patches of size 224×224 to form a bag. Background patches with entropy values less than 5 were discarded from the original WSI.
For prediction of lymph node metastasis in primary tumors, we labeled the corresponding slides of patients who developed pelvic lymph node metastasis as positive (209 cases) and those who did not develop pelvic lymph node metastasis as negative (165 cases). We randomly divided the WSIs into a training set (300 cases) and a test set (74 cases).
For prediction of patient survival prognosis, following Skrede et al. <cit.>, we grouped all patients based on detailed follow-up records using the median as a cutoff, where those who did not experience cancer-related death within three years were labeled as negative (good prognosis) and those who did were labeled as positive (poor prognosis). Then, we randomly divided the WSIs into a training set (294 cases) and a test set (80 cases) according to the labels.
For prediction of the immunohistochemical marker KI-67, following Liu et al. <cit.> and Feng et al. <cit.>, we grouped all patients based on detailed KI-67 immunohistochemistry reports using the median as a cutoff, where KI-67 levels below 75 were labeled as negative and those above 75 were labeled as positive. Then, we randomly divided the WSIs into a training set (294 cases) and a test set (80 cases) according to the labels.
§.§ Evaluation Metrics and Comparison Methods
For both instance and bag classification, we used Area Under Curve (AUC) and Accuracy as evaluation metrics. Bag classification performance is evaluated on all datasets but instance-level classification performance is only evaluated on the CIFAR-MIL Dataset and the Camelyon 16 Dataset, since only these two datasets have instance labels.
We compared our INS to 11 competitors, including three instance-based methods: MILRNN<cit.>, Chi-MIL<cit.>, and DGMIL<cit.>, and eight bag-based methods: ABMIL<cit.>, Loss-ABMIL<cit.>, CLAM <cit.>, DSMIL<cit.>, TransMIL<cit.>, DTFD-MIL<cit.>, TPMIL <cit.> and WENO<cit.>. In accordance with DSMIL <cit.>, we employed SimCLR <cit.> as the self-supervised approach to pre-extract patch features for all techniques. Regarding all comparative approaches, we replicated them using the published codes and conducted a grid search on the crucial hyperparameters across all methods. We cited the reported results from their papers under the same experimental settings.
§.§ Implementation Details
In line with DSMIL <cit.>, we conducted pre-processing on WSI datasets such as patch cropping and background removal.
For all datasets, the encoders are implemented using ResNet18. The instance classifier is implemented using an MLP. The Projector is a 2-layer MLP and the prototype vectors have 128 dimensions. No pre-training of the network parameters is performed. The SGD optimizer is used to optimize the network parameters with a learning rate of 0.01, a momentum of 0.9, and a batch size of 64. The length of the Embedding Queue is 8192. In order to smooth the training process, we empirically set warm-up epochs. After warm-up, we updated the pseudo labels and assigned true negative labels. The hyperparameter thresholds vary for each dataset, and we used grid search on the validation set to determine the optimal values.
For more details, please refer to our codes, which will be available soon.
§.§ Results on the Synthetic Dataset CIFAR-MIL
We constructed the synthetic WSI datasets CIFAR-MIL with varying positive instance ratios to investigate the instance and bag classification performance of each method at different positive instance ratios. The results are shown in Table <ref> and Table <ref>. It can be seen that INS achieved the best performance in both instance and bag classification
tasks at all positive instance ratios. Most methods do not work well at low positive instance ratios of 5% and 10%, for which the AUC of INS exceeds 0.94. At positive instance ratios
above 20%, the performance of INS is comparable to that of fully supervised methods.
Another interesting phenomenon is that most methods have higher bag classification performance when the positive ratio is higher than 20%, but the instance classification
performance is still fairly poor. This suggests that accurately identifying all positive instances is not necessary to complete correct bag classification. As the
positive ratio increases, there are many positive instances in positive bags, which makes the bag classification task easier. The network often only needs to
identify the simplest positive instances to complete bag classification, losing the motivation to accurately classify all positive instances. In contrast, INS
maintains high instance and bag classification performance at all positive instance ratios.
§.§ Results on Real-World Datasets
§.§.§ Camelyon 16 Dataset
Table <ref> shows the classification performance of INS and other methods on the Camelyon 16 Dataset. The low positive ratio of Camelyon 16 Dataset (about 10%-20%) makes
the classification task quite difficult. INS outperforms all the compared methods with a large margin, exceeding the second-best method by 2.1% and 3.8% in AUC for instance and bag
classification, respectively. Figure <ref> shows typical visualization results on this dataset. It can be seen that INS accurately localizes almost all positive instances in the positive bags.
§.§.§ TCGA-Lung Cancer Dataset
Table <ref> shows the results on the TCGA-Lung Cancer Dataset. Unlike the Camelyon 16 Dataset, this dataset has a high positive instance ratio
(more than 80%), so the performance of all methods is fairly good, while INS still achieves the best performance.
§.§.§ Cervical Cancer Dataset
Table <ref> shows the experimental results of three tasks on the in-house Cervical Cancer Dataset, including lymph node metastasis prediction from primary lesion,
patient prognosis, and immunohistochemical marker KI-67 prediction from H&E stained WSIs. Unlike the previous two real-world datasets, these three tasks are very tough
even for human doctors. It can be seen that INS significantly outperforms other methods in all three tasks, demonstrating the strong performance of INS.
§.§ Interpretability Study of the Lymph Node Metastasis Task
As can be seen from Table <ref>, INS can predict the preoperative lymph node status of cervical cancer with high performance using HE-stained slides of the primary tumor. However, this does not offer pathologists additional interpretable pathological patterns. While cancer diagnosis is possible through visual inspection of HE-stained pathological slides, the same cannot be said for accurately determining lymph node metastasis status from the primary lesion. Given INS’s powerful instance classification ability, we hope to find explicit pathological patterns that can suggest the lymph node metastasis status and perform interpretable analysis as well as new knowledge discovery.
Specifically, we used INS to predict the probability of each instance being positive within the positive bags, and visualized the top 0.1% instances with the highest and lowest probabilities separately. These instances with the highest predicted positive probabilities are likely to contain pathological patterns that identify high-risk lymph node metastasis, as shown in Figure <ref>.
As can be seen, in images suggesting lymph node metastasis, structures resembling "micropapillae" are more prevalent, indicating a high-risk pathological pattern. Micropapillary structures are characterized by small clusters of infiltrating cancer cells forming hollow or mulberry-like nests without a central fibrovascular axis, surrounded by blank lacunae or lacunae between interstitial components. Conversely, negative lymph node images more commonly exhibit a "sheet-like" pattern, in which cells form tightly connected nests of varying sizes within the tumor interstitium, with fissures or lacunae rarely observed.
This conclusion highlights the interpretability of using INS to assess lymph node metastasis and its significant guiding implications for future clinical research.
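For concreteness, the ranking step described above can be sketched as follows; the function and variable names are illustrative and do not correspond to the actual INS implementation.

import numpy as np

def select_extreme_instances(probs, patch_ids, fraction=0.001):
    # Rank all instances from positive bags by predicted positive probability
    # and return the top and bottom `fraction` of patch identifiers.
    probs = np.asarray(probs, dtype=float)
    k = max(1, int(round(fraction * len(probs))))
    order = np.argsort(probs)                      # ascending
    lowest = [patch_ids[i] for i in order[:k]]     # most confidently negative
    highest = [patch_ids[i] for i in order[-k:]]   # most confidently positive
    return highest, lowest

# Toy usage with random scores for 10,000 hypothetical patches
rng = np.random.default_rng(0)
scores = rng.random(10_000)
ids = np.arange(10_000)
top, bottom = select_extreme_instances(scores, ids)
print(len(top), len(bottom))   # 10 patches each, i.e. 0.1%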
§ ABLATION STUDY AND FURTHER ANALYSIS
§.§ Ablation Study on Key Components
We conducted comprehensive ablation experiments on the components of INS using the Camelyon 16 Dataset, and the results are shown in Table <ref>.
Here, w/o ℒ_𝐼𝑊𝑆𝐶𝐿 means that weakly supervised contrastive learning is not used, w/o ℒ_bc means that the bag-level
constraint is not used, and w/o MU means that α=0 in formula <ref>, i.e., the moving-average pseudo label updating strategy is not used.
It can be seen that IWSCL is the crucial component of INS, and without contrastive learning, the performance of INS declines significantly.
Both the bag constraint and the pseudo label updating strategy with moving updating can effectively improve the performance of INS.
It is worth noting that INS still outperforms all compared methods even without either one of these two components.
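A minimal sketch of the ablated moving-update component is given below, assuming a simple momentum rule with coefficient α; the exact update rule is the one given in formula <ref> and may differ in detail.

import torch

def update_pseudo_labels(old_pseudo, new_scores, alpha=0.9):
    # Moving-average (momentum) update of per-instance pseudo labels.
    # With alpha = 0 the history is discarded (the "w/o MU" ablation):
    # the pseudo labels are then simply the current predictions.
    return alpha * old_pseudo + (1.0 - alpha) * new_scores

old = torch.full((5,), 0.5)                       # previous pseudo labels
new = torch.tensor([0.9, 0.1, 0.8, 0.2, 0.7])     # current classifier scores
print(update_pseudo_labels(old, new))             # smoothed pseudo labels
print(update_pseudo_labels(old, new, alpha=0.0))  # equals the new scores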
§.§ Further Analysis
We conducted more detailed evaluation and interpretive experiments of INS.
Effective Representation Learning of INS. Currently, many studies <cit.> use pre-trained feature extractors to extract instance features for subsequent training,
among which ImageNet pretrained method <cit.> and contrastive self-supervised methods <cit.> are most commonly used. We compared these feature extraction methods with
INS on the Camelyon16 Dataset. Specifically, we first used feature extractors pre-trained by ImageNet <cit.>, SimCLR <cit.>, DGMIL <cit.>, and our INS, respectively, to extract all instance features.
All methods used ResNet-18 as the backbone. Then, using only these pre-extracted features together with the true label of each instance, we trained a simple SVM classifier and a linear classifier, and tested both classifiers on the test set to evaluate the features. Please note that the pre-training process of
feature extractors is unsupervised (not using labels) or weakly supervised (using only bag labels), while the process of training SVM and Linear with the pre-extracted features is
fully supervised using instance labels, enabling an effective evaluation of the quality of the pre-extracted features. We also trained the network in an end-to-end way as an upper
bound. The results are shown in Figure <ref> A. The features extracted by INS achieve the best results, indicating the effectiveness of our method.
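A minimal sketch of this probing protocol, assuming pre-extracted feature matrices and scikit-learn classifiers (the actual training details may differ), is:

import numpy as np
from sklearn.svm import LinearSVC
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

def probe_features(train_feats, train_labels, test_feats, test_labels):
    # Evaluate frozen instance features with an SVM and a linear classifier;
    # only this probing step uses the true instance labels.
    svm = LinearSVC(C=1.0).fit(train_feats, train_labels)
    lin = LogisticRegression(max_iter=1000).fit(train_feats, train_labels)
    svm_auc = roc_auc_score(test_labels, svm.decision_function(test_feats))
    lin_auc = roc_auc_score(test_labels, lin.predict_proba(test_feats)[:, 1])
    return svm_auc, lin_auc

# Toy example with random 512-d features standing in for ResNet-18 embeddings
rng = np.random.default_rng(0)
Xtr, ytr = rng.normal(size=(200, 512)), rng.integers(0, 2, 200)
Xte, yte = rng.normal(size=(100, 512)), rng.integers(0, 2, 100)
print(probe_features(Xtr, ytr, Xte, yte))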
In addition, we use SimCLR and our INS to extract features of all instances from a typical slide in the Camelyon16 Dataset and visualize the feature distribution using
t-SNE, as shown in Figure <ref>. It can be visually observed that the positive and negative instances are separated in the features extracted by INS. Surprisingly,
we can even manually draw a clear boundary between them on the two-dimensional plane, indicating the powerful feature representation ability of INS. In contrast,
although the features of SimCLR are relatively scattered in the feature space, there is significant overlap between the positive and negative instances, which cannot be easily separated.
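A corresponding visualization sketch, with randomly generated embeddings standing in for the real slide features, could look like:

import numpy as np
from sklearn.manifold import TSNE
import matplotlib.pyplot as plt

# feats: (num_instances, dim) frozen embeddings of one slide;
# labels: 0/1 instance labels, used only to color the plot
rng = np.random.default_rng(0)
feats = rng.normal(size=(500, 128))
labels = rng.integers(0, 2, 500)

emb = TSNE(n_components=2, perplexity=30, init="pca",
           random_state=0).fit_transform(feats)
plt.scatter(emb[labels == 0, 0], emb[labels == 0, 1], s=5, label="negative")
plt.scatter(emb[labels == 1, 0], emb[labels == 1, 1], s=5, label="positive")
plt.legend()
plt.savefig("tsne_instances.png", dpi=200)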
Evaluation of Pseudo Labels. To intuitively demonstrate the quality of the instance pseudo labels, we plotted the AUC curve of the pseudo labels for all instances
in the training set of the CIFAR-MIL Dataset (with a 0.2 positive instance ratio) against the training epochs, as shown in Figure <ref> B. It can be clearly seen that the quality
of the pseudo labels continues to improve, providing more and more effective guidance to the instance classifier.
§ CONCLUSION
In this paper, we propose INS, an instance-level MIL framework based on contrastive learning and prototype learning, which effectively addresses both instance and bag classification tasks.
Guided by true negative instances, we propose a weakly-supervised contrastive learning method for effective instance-level feature representation under the MIL setting.
In addition, we propose a prototype-based pseudo label generation method that generates high-quality pseudo labels for instances from positive bags.
We further propose a joint training strategy for weakly-supervised contrastive learning, prototype learning, and instance classification. Extensive experiments on one synthetic
dataset and five tasks on three real datasets demonstrate the strong performance of INS. With only slide labels, INS has the ability to accurately locate positive instances and
has the potential to discover new knowledge or perform interpretability research on tough clinical tasks.
|
http://arxiv.org/abs/2307.01931v1
|
20230704213626
|
Assessing the impact of Higher Order Network Structure on Tightness of OPF Relaxation
|
[
"Nafis Sadik",
"Mohammad Rasoul Narimani"
] |
math.OC
|
[
"math.OC"
] |
AC optimal power flow (AC OPF) is a fundamental problem in power system operation and control. Accurately modeling the network physics via the AC power flow equations makes AC OPF a challenging nonconvex problem that results in significant computational challenges. To search for global optima, recent research has developed a variety of convex relaxations to bound the optimal objective values of AC OPF problems. However, the quality of these bounds varies for different test cases, suggesting that OPF problems exhibit a range of difficulties. Understanding this range of difficulty is helpful for improving relaxation algorithms.
Power grids are naturally represented as graphs, with buses as nodes and power lines as edges. Graph theory offers various methods to measure power grid graphs, enabling researchers to characterize system structure and optimize algorithms. Leveraging graph theory-based algorithms, this paper presents an empirical study aiming to find correlations between optimality gaps and local structures in the underlying test case's graph. Network graphlets, which are induced subgraphs of a network, are used to investigate the correlation between power system topology and OPF relaxation tightness. Specifically, this paper examines how the existence of particular graphlets that are either too frequent or infrequent in the power system graph affects the tightness of the OPF convex relaxation.
Numerous test cases are analyzed from a local structural perspective to establish a correlation between their topology and their OPF convex relaxation tightness.
Optimal power flow, Convex relaxation, Network graphlets
§ INTRODUCTION
The optimal power flow (OPF) problem seeks an operating point that optimizes a specified objective function (often generation cost minimization) subject to constraints from the network physics and engineering limits. Using the nonlinear AC power flow model to accurately represent the power flow physics results in the AC OPF problem, which is non-convex,
generally NP-hard <cit.>, and may have local optima <cit.>. The inclusion of the AC power flow equations gives the OPF problem a non-convex feasible space, leading to considerable computational complexity <cit.>. Many convex relaxation techniques have been used to solve OPF problems <cit.>. These relaxation techniques yield a lower bound on the optimal objective value of the OPF problem. In some cases, if the relaxation solution satisfies specific conditions, the lower bound computed by the relaxation can be certified as the global solution of the problem. The optimality gap refers to the percent difference between the objective value of a local solution and the lower bound obtained by a relaxation technique. A small optimality gap certifies that the corresponding local solution is close to the global optimum <cit.>.
Many research efforts have been devoted in the past decades to developing convex relaxation algorithms for NP-hard, non-convex problems <cit.>. Numerous relaxation techniques have been proposed, including semi-definite programming (SDP) <cit.>, second-order cone programming (SOCP) <cit.>, and quadratic convex (QC) relaxation <cit.>. The QC relaxation encloses the trigonometric, squared, and product terms in a polar representation of the power flow equations within convex envelopes <cit.>. We use the QC relaxation in this paper to investigate the correlation between the relaxation optimality gap and the topology of the test cases.
The connection between power systems and graph theory is strong, as graph theory provides a powerful tool for analyzing and optimizing complex power systems. This connection was first explored in <cit.>. Graph-theoretical analysis has been used in many power system applications such as system vulnerability <cit.>, detecting structural anomalies <cit.>, identifying critical components in power systems <cit.>, and generating authentic synthetic grids <cit.>. Meanwhile, some research has approached the optimal power flow problem from a network science perspective. According to <cit.>, power flow in power networks can be traced by using graph-theoretical algorithms such as breadth-first search and depth-first search. The connection between cliques and semi-definite solver performance is discussed in <cit.>, where it is suggested that semi-definite constraints can be decomposed into smaller constraints according to the maximal cliques of the power network.
In addition, to make the SDP relaxation problem easier to solve, identifying the problematic lines that contribute to its computational complexity is crucial. This can be accomplished by using graph theory techniques, such as tree decomposition, to analyze the structure of the graph and identify those lines, as described in <cit.>.
Though not focused on optimal power flow, the relationship between the network topology, as characterized by the maximal cliques, and the number of power flow solutions has been explored in <cit.>. More recently, graph neural networks have been gaining attention for solving OPF problems. In particular, graph neural networks can be used to approximate the optimal interior-point solution of the OPF problem <cit.>. However, to the best of our knowledge, no studies have investigated the correlation between the optimality gap of OPF relaxations and the local structure of the underlying power grid graph in power system test cases.
Graphlets and motifs are essential tools in complex network analysis. They are subgraphs that occur frequently in a given network and can provide insights into the network's structure and function. Graphlets are small, connected subgraphs that can be used to characterize the local structure of a network. Motifs are larger subgraphs that occur more frequently than expected by chance and can represent functional units in the network. By identifying and analyzing graphlets and motifs, researchers can gain a deeper understanding of the network's organization and dynamics, and identify key nodes or pathways that are critical to the network's function. Graphlet and motif analysis can be used to identify similarities and differences, and classify networks based on their structure and function.
The statistical use of network motifs to extract information about the local structure of a network was first proposed in <cit.>. At first, motif detection analysis was restricted exclusively to the field of bioinformatics <cit.> <cit.>. Using network motifs, comparative grid vulnerability analysis under contingencies has been discussed in <cit.>, where it was found that vulnerable and robust grids have different motif patterns, and that the decay of motifs also reveals a different pattern when comparing robust and fragile grids. What these local structures mean for a power grid under a contingency has also been discussed <cit.>, and it has been found that certain motifs play a determining role in the robustness of a network.
This paper first explores network graphlet patterns across various test cases and next determines the optimality gaps for QC relaxation in those OPF test cases. In particular, we attempt to ascertain if there is any connection between the local structure of the network and the subsequent OPF optimality gap for that specific network. To be more specific, this paper delves deeper to discover if any graphlets contribute to a larger optimality gap in OPF test cases. By identifying graphlets that contribute to a larger optimality gap in test cases, we can identify important nodes in test systems, wherein enforcing redundant constraints on them reduces the optimality gap for those test cases. Moreover, such analysis can lead to the development of more effective optimization algorithms tailored to specific network topologies and requirements. Thus, identifying significant graphlets that contribute to larger optimality gaps can ultimately lead to the development of more efficient and reliable network infrastructures.
This paper is organized as follows. Sections <ref> and <ref> review the OPF formulation and the QC relaxation, respectively. Section <ref> describes the network graphlet theory. Section <ref> then presents an algorithm by which graphlets will be detected. Section <ref> discusses the numerical result and Section <ref> concludes the paper.
§ OVERVIEW OF THE OPTIMAL POWER FLOW PROBLEM
This section formulates the OPF problem using a polar
voltage phasor representation.
Let the voltages at buses i and j be V_i and V_j, respectively. The current flowing from bus i to bus j is I_ij, the complex power generation at bus i is S_i^g, the complex power demand at bus i is S_i^d, and the complex power flowing from bus i to bus j is S_ij. E and F are the sets of sending and receiving ends of the lines, i.e., the edges, respectively. Y_ij is the admittance of line i to j. The power balance equation for all buses can be written as follows.
S_i^g-S_i^d = ∑_(i,j)∈ E ∪ FS_ij ∀ i ∈𝒩
S_i j=Y_i j^* V_i V_i^*-Y_i j^* V_i V_j^* (i, j) ∈ E ∪ F
The OPF problem includes various engineering constraints that should be enforced along with the power flow equations. Generators in the system should produce active and reactive power within their limits, which can be enforced by the following constraint.
S_i^gl≤ S_i^g ≤ S_i^gu ∀ i ∈𝒩
Line thermal limit is another constraint that enforces an upper bound on apparent power flow in lines.
|S_ij| ≤ s_ij^u ∀ i ∈𝒩
Bus voltage limits are defined by the national grid code of each country and are typically within ±10% of the nominal grid voltage <cit.>.
v_i^l≤ |V_i| ≤ v_i^u ∀ i ∈𝒩
For ease of formulation, squaring this constraint gives,
v_i^l^2 ≤|V_i|^2 ≤v_i^u^2 ∀ i ∈𝒩
For power to flow between buses, the voltage angle difference between them must be bounded. Accordingly, the upper and lower limits on the voltage angle difference can be written as,
-θ_ij^Δ≤∠(V_i V_j^*) ≤θ_ij^Δ ∀(i,j)∈ℰ
To convexify the OPF problem, phase angle difference should be limited within [0,π/2] <cit.>.
0 ≤θ_ij^Δ≤π/2 ∀(i,j)∈𝒩
If we express the constraint (<ref>) on ∠(V_i V_j^*) in terms of the real and imaginary parts of V_i V_j^*, then it can be shown that,
tan(-θ_i j^Δ) ℜ(V_i V_j^*) ⩽ ℑ(V_i V_j^*) ⩽ tan(θ_i j^Δ) ℜ(V_i V_j^*)
Objective of the OPF problem is to minimize generator fuel costs and that can be defined as,
min∑_i ∈𝒩 c_2, i(P_i^g)^2+c_1, i P_i^g+c_0, i
Overall, the OPF problem can be written as,
variables: S_i^g(∀ i ∈ N), V_i (∀ i ∈ N)
min∑_i ∈𝒩 c_2, i(P_i^g)^2+c_1, i P_i^g+c_0, i
subject to:v_i^l⩽|V_i| ⩽ v_i^u ∀ i ∈ N
S_i^g l⩽ S_i^g⩽ S_i^g u ∀ i ∈ N
|S_i j| ⩽ s_i j^u ∀(i, j) ∈ E ∪ F
S_i^g-S_i^d=∑_(i, j) ∈ E ∪ F S_i j ∀ i ∈ N
S_i j=Y_i j^* V_i V_i^*-Y_i j^* V_i V_j^* (i, j) ∈ E ∪ F
-θ_i j^Δ⩽∠(V_i V_j^*) ⩽θ_i j^Δ ∀(i, j) ∈ E
tan(-θ_i j^Δ) ℜ(V_i V_j^*) ⩽ ℑ(V_i V_j^*) ⩽ tan(θ_i j^Δ) ℜ(V_i V_j^*)
From the above equations, it can be seen that the non-convex nature of the OPF problem arises from the product of the voltage variables V_i V_j^*. We introduce a new lifted variable W_ij in place of V_i V_j^* in order to apply convex relaxation methods.
W_i j=V_i V_j^*
Equations (<ref>), (<ref>), and (<ref>) can be written as,
S_i j=Y_i j^* W_i i-Y_i j^* W_i j (i, j) ∈ E ∪ F
-θ_i j^Δ⩽∠(W_i j) ⩽θ_i j^Δ ∀(i, j) ∈ E
tan(-θ_i j^Δ) ℜ(W_i j) ⩽ ℑ(W_i j) ⩽ tan(θ_i j^Δ) ℜ(W_i j)
Next we explain how defining lifted variables can be leveraged to convexify the OPF problem.
§ CONVEX RELAXATIONS IN OPTIMAL POWER FLOW
Traditional OPF solution methods may find the global optimum, but they might get stuck in local optima <cit.>. Conversely, convex relaxation techniques can obtain bounds on the optimal objective value, certify infeasibility, and, in some cases, achieve globally optimal solutions. Many relaxation techniques have been applied to the OPF problem, e.g., second-order cone relaxations, quadratic convex relaxations, and semi-definite relaxations <cit.>, <cit.>. Each relaxation method follows a separate methodology to solve the non-convex OPF problem; specifically, they take different approaches to convexify the source of non-convexity, V_i V_j^*. In this study we focus only on the quadratic convex (QC) relaxation.
Quadratic Convex Relaxations: The quadratic convex (QC) relaxation is an approach that encloses the trigonometric and product terms in the polar representation of the power flow equations within convex envelopes <cit.>. It represents the voltage variables in polar coordinates and expands equation (<ref>) in the following way,
W_i i = v_i^2 ∀ i ∈ N
ℜ(W_ij) = v_i v_j cos(θ_i-θ_j) ∀(i, j) ∈ E
ℑ(W_ij) = v_i v_j sin(θ_i-θ_j) ∀(i, j) ∈ E
The QC relaxation relaxes these constraints by drawing tight envelopes around the nonconvex terms. For instance, the convex envelope for square terms can be defined as <cit.>,
⟨ x^2 ⟩^T ≡ { x̂ : x̂ ⩾ x^2, x̂ ⩽ (x^u+x^l) x - x^u x^l }.
Here, x̂, x^u and x^l denote the relaxation variable representing x^2, and the upper and lower bounds of x, respectively.
Additionally, the convex envelope for bilinear terms (the McCormick envelope) can be defined as,
⟨ x y ⟩^M ≡ { ŵ : ŵ ⩾ x^l y + y^l x - x^l y^l, ŵ ⩾ x^u y + y^u x - x^u y^u, ŵ ⩽ x^l y + y^u x - x^l y^u, ŵ ⩽ x^u y + y^l x - x^u y^l },
where ŵ is the relaxation variable representing the product x y.
The convex envelopes of the sine and cosine functions for x ∈ [0,π/2] can be given as <cit.>,
⟨ sin(x) ⟩^S ≡ { S : S ⩽ cos(x^u/2)(x - x^u/2) + sin(x^u/2), S ⩾ cos(x^u/2)(x + x^u/2) - sin(x^u/2) },
⟨ cos(x) ⟩^C ≡ { C : C ⩽ 1 - ((1-cos(x^u))/(x^u)^2) x^2, C ⩾ cos(x^u) }.
Now, using equations (<ref>)–(<ref>), convex relaxations of the product terms in the power flow equations can be obtained as follows.
W_ii = ⟨ v_i^2 ⟩^T i ∈ N
ℜ(W_ij) = ⟨⟨ v_i v_j ⟩^M ⟨cos(θ_i-θ_j)⟩^C⟩^M ∀(i, j) ∈ E
ℑ(W_ij) = ⟨⟨ v_i v_j ⟩^M ⟨sin(θ_i-θ_j)⟩^S⟩^M ∀(i, j) ∈ E
Incorporating equations (<ref>)–(<ref>) into equation (<ref>) convexifies the nonconvex terms and results in the QC relaxation of the OPF problem. The solution of the QC relaxation is a lower bound for the original nonconvex OPF problem. The tighter the relaxation, the better the lower bound that can be computed for the OPF problem. Next, we leverage the graphlet analysis to understand the correlation between power system topology and the optimality gap of the QC relaxation for different test cases.
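As an illustration, the square and bilinear (McCormick) envelopes above can be expressed as convex constraints in a modeling tool such as cvxpy. This is a minimal sketch of the building blocks only, not the full QC relaxation; the variable names and voltage bounds are illustrative.

import cvxpy as cp

def square_envelope(x, xhat, xl, xu):
    # <x^2>^T: convex envelope of xhat = x^2 on the interval [xl, xu].
    return [xhat >= cp.square(x), xhat <= (xu + xl) * x - xu * xl]

def mccormick(x, y, w, xl, xu, yl, yu):
    # <xy>^M: McCormick envelope of w = x * y on the box [xl, xu] x [yl, yu].
    return [w >= xl * y + yl * x - xl * yl,
            w >= xu * y + yu * x - xu * yu,
            w <= xl * y + yu * x - xl * yu,
            w <= xu * y + yl * x - xu * yl]

# Toy usage: bound the voltage product v_i * v_j with limits [0.9, 1.1] p.u.
vi, vj, w = cp.Variable(), cp.Variable(), cp.Variable()
constraints = mccormick(vi, vj, w, 0.9, 1.1, 0.9, 1.1)
constraints += [vi >= 0.9, vi <= 1.1, vj >= 0.9, vj <= 1.1]
prob = cp.Problem(cp.Maximize(w), constraints)
prob.solve()
print(round(float(w.value), 4))   # 1.21, i.e. the envelope is tight at the corner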
§ NETWORK GRAPHLET
To observe the local structure of a power grid, we consider the grid as an undirected graph G(V,E), where the nodes V are the buses and the edges E are the lines of the grid. The order and the size of the graph G are defined as the total number of nodes and the total number of edges, respectively. A graph G'(V',E') is a subgraph of G if V'⊆V and E'⊆E. The subgraph G' is called an induced subgraph of G if E' contains all edges e_uv∈ E such that u,v∈ V'. Graphs G' and G” are called isomorphic if there exists a bijection h:V'→ V” such that any two adjacent nodes u,v ∈ V' of G' are also adjacent in G” after the mapping. If G_k = (V_k,E_k) is a k-node subgraph of G and there exists an isomorphism between G_k and an induced subgraph G' of G, then there is an occurrence of G_k in G.

A motif is a multi-node subgraph pattern that occurs more frequently in a graph than in comparable random graphs. These recurrent patterns are considered building blocks of networks, and different combinations of a small number of motifs can generate enormously diverse forms. Although the notion of motif originated in biological networks, motifs can be found in a variety of complex networks. Traditional network attributes, which characterize nodes, connections, or the entire network, do not allow for the profiling of local structural characteristics. As is well known, motif patterns differ substantially among networks <cit.>. In this paper, we use 4-node connected undirected subgraphs, which are called graphlets. Accordingly, the 6 types of graphlets that can be found in a power network are used to assess the local structure. These graphlets are shown in figure <ref>. In this paper, we first explore the existence of different graphlet types among the buses of the network and then classify those subgraph patterns.
§ DETECTION OF GRAPHLETS
In this section, we discuss how graphlets can be identified in different networks.
This paper uses an algorithm that follows the approach of <cit.><cit.>.
Given a graph G = (V, E), the following algorithm enumerates all of its 4-node subgraphs. It maintains two sets: a subgraph set and an extension set. The extension set consists of neighboring nodes that are not in the subgraph set. When the number of nodes in the subgraph set together with the chosen extension nodes reaches the desired subgraph size of four, the algorithm returns a subgraph.
* In the first stage, the subgraph level is one, which is simply a single node. In figure <ref>, at this stage we take node 1 as the subgraph set. The extension set then consists of {2, 4, 5}. For node 2 as the subgraph set, the extension set consists of node 3. Node 1 is a neighboring node of node 2, but it is not included in the extension set because it has already been considered as a subgraph set. Following this rule, the other combinations of subgraph and extension sets are [{2},{3}], [{3},{4}], [{4},{5}]. This stage returns only one 4-node subgraph, {1,2,4,5}.
* In the second stage, the subgraph level is two, which is an edge. At this stage, for nodes {1, 2} as the subgraph set, the extension set consists of the neighboring nodes of edge 1-2, which are {3, 4, 5}. For edge {1,4} as the subgraph set, the extension set consists of {3,5}. Here node 2 is not in the extension set because edge {1,2} has already been considered as a subgraph set. The only other possible combination is [{2,3},{4,5}]. This stage returns two 4-node subgraphs:
{1,4,3,5} and {2,3,4,5}.
* Following this rule for the third and fourth stages, the algorithm yields two more subgraphs: {1,2,3,4} and {1,2,3,5}.
* After that, the algorithm classifies those subgraphs, i.e., graphlets. For example, {2, 3, 4, 5} is a type 2 graphlet, as previously mentioned. Following the classification, the occurrences of each graphlet type are counted; for instance, the type 2 graphlet occurs 4 times out of the 5 graphlets in this graph. Furthermore, for every graphlet type, its occurrence count divided by the total graphlet count gives a percentage (a code sketch of this enumeration and classification is given below). For different test cases in Matpower <cit.> and the PGLib-OPF v18.08 benchmark library <cit.>, these percentages for all graphlet types are given in Table <ref>.
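A brute-force sketch of this enumeration and classification, using networkx and identifying each connected 4-node induced subgraph by its degree sequence, is shown below. The toy graph and the mapping of names to the figure's type numbers (other than the star and complete graphlets) are illustrative assumptions, and the subgraph/extension-set enumeration described above is more efficient than scanning all node quadruples.

from collections import Counter
from itertools import combinations
import networkx as nx

# Each connected 4-node graphlet is identified by its sorted degree sequence.
# The star and complete graphlets correspond to type 1 and type 6 in the paper;
# the numbering of the remaining types follows the figure and is not fixed here.
GRAPHLET_NAMES = {
    (1, 1, 1, 3): "star (type 1)",
    (1, 1, 2, 2): "path",
    (1, 2, 2, 3): "paw",        # triangle with a pendant edge
    (2, 2, 2, 2): "cycle",
    (2, 2, 3, 3): "diamond",    # complete 4-node graph minus one edge
    (3, 3, 3, 3): "complete (type 6)",
}

def graphlet_profile(G):
    # Count connected induced 4-node subgraphs of G and return percentages.
    # Brute-force enumeration over all node quadruples, for illustration only.
    counts = Counter()
    for quad in combinations(G.nodes, 4):
        sub = G.subgraph(quad)
        if nx.is_connected(sub):
            degseq = tuple(sorted(d for _, d in sub.degree()))
            counts[GRAPHLET_NAMES[degseq]] += 1
    total = sum(counts.values())
    return {k: 100.0 * v / total for k, v in counts.items()} if total else {}

# Small 5-bus toy graph (illustrative, not one of the benchmark cases)
G = nx.Graph([(1, 2), (1, 4), (1, 5), (2, 3), (3, 4), (4, 5)])
print(graphlet_profile(G))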
§ RESULTS
We apply the QC relaxation and the graphlet analysis to various test cases from the PGLib-OPF v18.08 benchmark library <cit.> and Matpower to find the correlation between the local structure of the power system graph and the OPF optimality gap.
Table <ref> tabulates the optimality gap from the QC relaxation and the corresponding graphlet type percentages for different test cases. The table is sorted by ascending "QC gap" values. Clearly, some patterns are visible in this table. The most noticeable trend is that, of the 33 cases investigated, the 12 cases with the lowest optimality gaps have one characteristic in common: they have no type 6 graphlets. From figure <ref>, it is apparent that the type 6 graphlet is the complete 4-node subgraph. Complete 4-node subgraphs are not very frequent in any kind of network, which is why such a low percentage of type 6 graphlets is observed in the other cases. For instance, "pglib_opf_case162_ieee_dtc" has 2891 graphlets in total, yet only 11 of them are type 6 graphlets. This trend shows that the existence of type 6 graphlets, even at such a low percentage, can affect the QC relaxation optimality gap. Additionally, there is another pattern showing a lower percentage of type 1 graphlets in low optimality gap networks compared with networks with high optimality gaps. In particular, networks with a low optimality gap have 15-25% type 1 graphlets, whereas most networks with a high optimality gap have 25-40% type 1 graphlets. Furthermore, although the pattern is not as significant as for type 6, type 5 graphlets are also absent in some of the networks with the lowest optimality gaps. In conclusion, Table <ref> shows that the existence of type 6, type 1 and type 5 graphlets can significantly impact the OPF optimality gap in different test cases.
The graphlet structure can be leveraged to introduce constraints or relationships between variables in the convex relaxation problems. For instance, in a graph with specific graphlets, the variables within the graphlets may have strong dependencies or correlations. By incorporating these graph constraints into the relaxation formulation, the relaxation can better capture the underlying structure and improve its tightness. Moreover, the graph structure can provide insights into the connectivity of variables. If certain subsets of variables are strongly connected in the graph, it suggests that they should have similar values or follow certain patterns. By enforcing such connectivity constraints in the relaxation, the relaxation can better capture the relationships between variables and improve its accuracy. Overall, by incorporating and leveraging the graph structure in the formulation of the relaxation, it is possible to exploit the inherent properties of the problem and improve the tightness of the relaxation. The proposed approach results in more accurate solutions and better bounds for convex relaxation.
§ CONCLUSION
This paper investigates the correlation between the optimality gap of the OPF convex relaxation and the local structure of power system graphs addressed by the distribution of graphlet types. In this context, the paper calculates the percentage of different 4-node graphlets in various networks and examines the relationship between graphlet counts and optimality gaps for the QC relaxation of the OPF problem. Consequently, the results clearly indicate that networks with high QC optimality gaps share a common characteristic of having type 6 graphlets or complete 4-node graphlets. Additionally, it is noticeable that networks with high QC optimality gaps tend to have a high percentage of type 1 or 4-node star-shaped graphlets. Conversely, most networks with very low optimality gaps do not have type 5 graphlets. In conclusion, this study suggests that the existence of certain graphlets can cause a high optimality gap in the QC relaxation. In the future, this research can be expanded to associate electrical parameters, such as branch admittance, with graphlet-level analysis. This research is currently studying the identification of nodes in the power system where enforcing redundant constraints can tighten the QC relaxation of the OPF problem for those test cases.
|
http://arxiv.org/abs/2307.00857v1
|
20230703085611
|
Minimal-time nonlinear control via semi-infinite programming
|
[
"Antoine Oustry",
"Matteo Tacchi"
] |
math.OC
|
[
"math.OC"
] |
Ecole des Ponts, Marne-la-Vallée, France.
Laboratoire d'informatique de l'École polytechnique, Institut Polytechnique de Paris, Palaiseau, France.
Univ. Grenoble Alpes, CNRS, Grenoble INP (Institute of Engineering Univ. Grenoble Alpes), GIPSA-lab, 38000 Grenoble, France.
Corresponding author.
We address the problem of computing a control for a time-dependent nonlinear system to reach a target set in a minimal time. To solve this minimal time control problem, we introduce a hierarchy of linear semi-infinite programs, the values of which converge to the value of the control problem. These semi-infinite programs are increasing restrictions of the dual of the nonlinear control problem, which is a maximization problem over the subsolutions of the Hamilton-Jacobi-Bellman (HJB) equation. Our approach is compatible with generic dynamical systems and state constraints. Specifically, we use an oracle that, for a given differentiable function, returns a point at which the function violates the HJB inequality. We solve the semi-infinite programs using a classical convex optimization algorithm with a convergence rate of O(1/k), where k is the number of calls to the oracle. This algorithm yields subsolutions of the HJB equation that approximate the value function and provide a lower bound on the optimal time. We study the closed-loop control built on the obtained approximate value functions, and we give theoretical guarantees on its performance depending on the approximation error for the value function. We show promising numerical results for three non-polynomial systems with up to 6 state variables and 5 control variables.
Nonlinear control, Minimal time control, Weak formulation, Semi-infinite programming.
§ INTRODUCTION
§.§ Motivation and related works
This paper deals with the control of a deterministic
dynamical system to reach a target set in a minimal time. We consider a general case of a time-dependent nonlinear system under nonlinear state constraints. Several applications in various fields, such as robotics <cit.>, aerospace <cit.>, maritime routing <cit.> or medicine <cit.>, can be formulated as minimal time control problems. Minimal time control, also known as time optimal control, can be seen as a special case of the general framework of Optimal Control Problems (OCP). Solving an OCP for such generic dynamics and constraints is a difficult challenge, although deep theoretical tools are available such as the Pontryagin Maximum Principle (PMP) <cit.> and the Hamilton-Jacobi-Bellman (HJB) equation <cit.>. Those theoretical tools, initially developed in the unconstrained setting, have been extended to the case of state constraints <cit.>. From a numerical point of view, the multiple shooting techniques <cit.> are based on the PMP, and reduce to the solution of a two-point boundary value problem. The direct methods reduce to the solution of a nonlinear programming problem after discretizing the time domain, or parameterizing the control u(t) in a finite dimensional subspace <cit.>. The celebrated Model Predictive Control (MPC) approach belongs to the category of direct methods <cit.>. Another approach is to compute the value function of the problem as a maximal subsolution of the HJB equation <cit.>. This approach is related to the weak formulation of the OCP, which is an infinite dimensional linear program (LP) involving occupation measures. The dual problem of this LP is exactly the problem of finding a maximal subsolution of the HJB equation <cit.>. In <cit.>, the Moment Sum-of-Squares (SoS) hierarchy is used to approximate the solution of the resulting infinite dimensional LPs, in the case where the dynamics and the constraints of the OCP are defined by polynomials. The convergence rate of this numerical scheme is studied in <cit.> for infinite-time discounted polynomial control problems. Still in the context of polynomial control problems, a work <cit.> based on the dual LP and the SoS hierarchy also studies the design of a closed-loop controller based on the approximate value function that is computed. In <cit.>, an extension of the SoS hierarchy based on kernel methods is employed to extend this computation to general nonlinear systems. Regarding the methods specifically dedicated to time-optimal control, we find the same categories: direct methods such as MPC <cit.>, indirect methods based on the PMP and the bang-bang property <cit.> or methods based on convex optimization <cit.>.
§.§ Contribution
In this paper, we focus on the problem of computing a control to reach a target set in a minimal time. We follow the line of works that use convex optimization to solve the dual problem of the nonlinear control problem, over the subsolutions of the HJB equation <cit.>. In contrast to several works using the Moment-SoS hierarchy <cit.>, the dynamical system and the state constraints considered here are generic and, in particular, are not assumed to be defined by polynomials. Instead of using polynomial optimization theory and the associated positivity certificates, our approach relies on the existence of a separation oracle capable of returning, for a given differentiable function V, a point (t, x) where the function V does not satisfy the HJB inequality. Such an oracle can be provided by a global optimization solver or by a sampling scheme in a black-box optimization approach. In particular, our approach is compatible with the sampled-data control paradigm <cit.>. Our contribution is manifold
* We introduce a hierarchy of linear semi-infinite programs, the values of which converge to the value of the control problem. After regularization, we solve these semi-infinite programs using a classical algorithm with a convergence rate in O(1/k), where k is the number of calls to the oracle. This yields subsolutions of the HJB equation that lower-approximate the value function and provide a certified lower bound on the minimum time.
* It is known that one can leverage any function V(t,x) approximating the value function, to design a closed-loop, i.e., feedback controller <cit.>. In this paper, we study the existence of trajectories generated by such a controller.
* We study the performance of such a closed-loop controller, depending on how well V(t,x) approximates the value function, in a way distinct from the analysis in <cit.>. In particular, this novel analysis enables us to give a sufficient condition for the closed-loop controller to effectively generate a trajectory reaching the target set within the considered time horizon.
* We perform numerical experiments on three non-polynomial controlled systems and compute lower and upper bounds on the minimum time.
§.§ Mathematical notation
For any p ∈ℕ^*, and k∈ℕ∪{∞}, we denote by C^k(ℝ^p) = C^k(ℝ^p,ℝ) the vector space of real-valued functions with k continuous derivatives over ℝ^p. For a given set A ⊂ℝ^p, for any function f ∈ C^k(ℝ^p), we denote by f_|A the restriction of f on A; moreover, we define the vector space C^k(ℝ^p | A) = { f_|A f ∈ C^k(ℝ^p) } of restrictions on A of C^k functions. For any locally Lipschitz function f, we denote by ∂^c f its Clarke subdifferential <cit.>, to be distinguished from ∂_x_i g, the partial derivative of a differentiable function g with respect to x_i.
For any two Lebesgue integrable functions f,g ∈ L^1(ℝ^p), we define the convolution product f ⋆ g = g ⋆ f as f ⋆ g(x) = ∫_ℝ^p f(h) g(x-h) dh. We emphasize that this convolution product is also well defined if f is supported on a compact set, and g is locally integrable. We denote by ℝ[x_1, … x_p] the vector space of real multivariate polynomials with variables x_1, …, x_p, and ℝ_d[x_1, … x_p] the vector space of such real multivariate polynomials with degree at most d.
For any set A ⊂ℝ^p, we write 𝖼𝗈𝗇𝗏(A) for the convex hull of the set A. For any nonempty set A, and any x ∈ℝ^p, we denote d(x,A) =inf_a ∈ A‖ x - a ‖_2 the distance between the set A and the point x. We also define the contingent cone to A at x ∈ A, denoted T_A(x) as the set of directions d ∈ℝ^p, such that there exist a sequence (t_k) ∈ℝ_++^ℕ, and a sequence (d_k) ∈ (ℝ^p)^ℕ, satisfying t_k → 0, d_k → d, and x + t_k d_k ∈ A, for all k ∈ℕ. Finally, we say that a property P holds “almost everywhere” (a.e.) on A, or equivalently “for almost all x ∈ A”, to denote that there exists a set N of Lebesgue measure zero such that the property P holds for all x ∈ A ∖ N.
§ PROBLEM STATEMENT AND LINEAR PROGRAMMING FORMULATIONS
§.§ Definition of the minimal time control problem
Let n and m be nonzero integers. We consider on ℝ^n the control system
ẋ(t) = f(t,x(t), u(t)),
where f : ℝ×ℝ^n×ℝ^m →ℝ^n is Lipschitz continuous, and where the controls are bounded measurable functions, defined on intervals [t_0, t_1] ⊂ [0, T], and taking their values in a compact set U of ℝ^m. Let X and K⊂ X be compact subsets of ℝ^n, and let x_0 ∈ℝ^n. For t_0, t_1 ≥ 0, a control u is said to be admissible on [t_0, t_1] whenever the solution x(.) of (<ref>), such that x(t_0) = x_0, is well defined on [t_0, t_1] and satisfies the constraints
(x(t),u(t)) ∈ X × U, a. e. on [t_0, t_1],
and satisfies the terminal state constraint
x(t_1) ∈ K.
We denote by 𝒰(t_0,t_1,x_0) the set of admissible controls on [t_0, t_1]. We consider the question of the minimal time problem from x_0 to K,
V^*(t_0,x_0) = inf { t_1 - t_0 : t_1 ∈ [t_0, T], u(·) ∈ 𝒰(t_0,t_1,x_0) }.
This is a particular case of the OCP with free final time <cit.>, associated with the cost ∫_t_0^t_1ℓ(t,x(t),u(t)) dt for ℓ(t,x(t),u(t)) = 1. The function V^* is called the value function of this minimal time control problem: this describes the smallest time to reach the target set K, starting from x_0 at time t_0.
For any (t,x) ∈ [0,T] × X, the set f(t,x,U) is convex.
We underline that we do not have any convexity assumption on the constraint set X and on the target set K.
Even if the dynamical system of interest does not satisfy Assumption <ref>, we can apply the present analysis to the convexified inclusion ẋ(t) ∈𝖼𝗈𝗇𝗏 f(t,x(t), U). According to the Filippov-Ważewski relaxation Theorem <cit.>, the trajectories of the original control problem are dense in the set of trajectories of the convexified inclusion. The trajectories of the convexified inclusion may be seen as the limit of chattering trajectories, i.e., when the control oscillates infinitely fast and where the constraint set is infinitesimally dilated.
Under Assumption <ref>, the minimal time control problem (<ref>)-(<ref>) associated with a starting point (t_0, x_0) ∈ [0, T] × X is either infeasible or admits an optimal trajectory.
We consider the case where a feasible trajectory exists. This is a direct application of <cit.>, which, among others, characterizes the existence of an optimal trajectory for a control problem over a differential inclusion. To emphasize the correspondence with the notation of <cit.>, we highlight that we apply the theorem with: the running cost function ℓ(t,x,p) = 1, the terminal cost function g(t,x) = 0, the set-valued map F(t,x) = f(t,x, U), the constraint set A = [0,T] × X and the target set C = [0,T] × K. We underline that the assumptions (H1)-(H5) in <cit.> are satisfied here; more precisely, we highlight that our Assumption <ref> enforces (H2) and the hypothesis that a feasible trajectory exists enforces (H4).
§.§ Hamilton-Jacobi-Bellman equation and subsolutions
In optimal control theory, a well-known sufficient condition for a function V to be the value function V^* is to satisfy the Hamilton-Jacobi-Bellman (HJB) Partial Differential Equation (PDE). This PDE may be seen as a continuous time generalization of Bellman's dynamic programming optimality principle in discrete time <cit.>. In our minimal time control setting, the HJB PDE reads
∂_t V (t,x) + min_u ∈ U{ 1 + ∇_x V(t,x)^⊤ f(t,x,u) } = 0, ∀ (t,x) ∈ [0,T] × X
V(t,x) = 0, ∀ (t,x) ∈ [0,T] × K.
In general, differentiable solutions of this PDE may not exist, so the concept of viscosity solutions is typically used <cit.>. Another approach to get around the lack of a differentiable solution to the HJB PDE consists in leveraging the concept of subsolutions <cit.>, i.e., functions V ∈ C^1(ℝ^n+1) satisfying the following inequalities:
∂_t V (t,x) + min_u ∈ U{ 1 + ∇_x V(t,x)^⊤ f(t,x,u) }≥ 0, ∀ (t,x) ∈ [0,T] × X
V(t,x) ≤ 0, ∀ (t,x) ∈ [0,T] × K.
The following lemma states that any subsolution of the HJB PDE is an under-approximation of the value function.
For any V ∈ C^1(ℝ^n+1) satisfying Eqs. (<ref>)-(<ref>), the following holds:
V(t,x) ≤ V^*(t,x), ∀ (t,x) ∈ [0,T] × X.
We take any (t,x) ∈ [0,T] × X and we consider that V^*(t,x)< ∞, since the case V^*(t,x) = ∞ is trivial. Hence, according to Th. <ref>, there exists an admissible control u(·) ∈𝒰(t,t_1, x) for t_1 ∈ [t, T] such that V^*(t,x) = t_1 - t, and an associate trajectory x(t) such that x(t) = x and x(t_1) ∈ K. We observe that d/dt [V(t,x(t))] = ∂_t V(t,x(t)) + ∇_x V (t, x(t))^⊤ f(t,x(t),u(t)) ≥ -1 a.e. on [t,t_1], the inequality holding since V satisfies Eq. (<ref>). By integration, we observe that V(t_1,x(t_1)) - V(t,x) ≥ t - t_1 = - V^*(t,x), i.e., V(t_1,x(t_1)) + V^*(t,x) ≥ V(t,x). As V satisfies Eq. (<ref>) and as x(t_1) ∈ K, we observe that 0 ≥ V(t_1,x(t_1)), and therefore V^*(t,x) ≥ V(t,x).
§.§ Infinite dimensional Linear Programming formulations
In the rest of the paper, we consider a given point x_0 ∈ X, and we raise the issue of computing the minimal time from x_0 to K and the associated control. We make the following assumption:
There exists an admissible control u ∈𝒰(0,t_1,x_0) associated with t_1 ∈ [0, T]. In other words, V^*(0,x_0) ≤ t_1 < ∞.
We consider the optimization problem of finding the subsolution of the HJB PDE that maximizes the evaluation in (0, x_0). This problem may be cast as an infinite dimensional linear program:
sup_{V ∈ ℱ} V(0,x_0)
s.t. ∂_t V (t,x) + 1 + ∇_x V(t,x)^⊤ f(t,x,u) ≥ 0 ∀ (t,x,u) ∈ [0,T] × X × U,
V(t,x) ≤ 0 ∀ (t,x) ∈ [0,T] × K,
with ℱ∈{ C^1(ℝ^n+1), C^∞(ℝ^n+1),ℝ[t,x_1,…, x_n] }. For a given V ∈ℱ, the feasibility in (<ref>) is clearly equivalent to the satisfaction of Eqs. (<ref>)-(<ref>). We also note that this infinite dimensional LP formulation corresponds to the dual LP formulation in <cit.>; in fact, this is the dual problem of an infinite dimensional LP formulation of the control problem based on occupation measures. According to the next theorem, the problem (<ref>) on C^1 functions has the same value as the minimal time control problem.
Under Assumption <ref> and Assumption <ref>, and for ℱ = C^1(ℝ^n+1), the value of the LP formulation (<ref>) equals V^*(0,x_0).
As for the proof of Th. <ref>, this is a direct application of <cit.>, which also states the absence of duality gap between a control problem over a differential inclusion and a maximization problem over subsolutions of the HJB equation. We underline that the assumptions (H1)-(H5) in <cit.> are satisfied here; more precisely, our Assumption <ref> enforces (H2) and our Assumption <ref> enforces (H4).
Theorem <ref> extends this result by stating that we can require the subsolutions of the HJB equation to be in C^∞(ℝ^n+1), while preserving the value of (<ref>). Before stating this theorem, we introduce an auxiliary lemma.
For any V ∈ C^1(ℝ^n+1) satisfying Eqs. (<ref>)-(<ref>) with feasibility error less than or equal to η≥ 0, V(t,x) + η(t-1-T) satisfies Eqs. (<ref>)-(<ref>).
We introduce Ṽ(t,x) = V(t,x) + η(t-1-T). By assumption on V(t,x), we have ∂_t V (t,x) + 1 + ∇_x V(t,x)^⊤ f(t,x,u) ≥ -η, for all (t,x,u) ∈ [0,T] × X × U. By linearity, and since ∂_t (t-1-T) = 1 and ∇_x (t-1-T) = 0, ∂_t Ṽ (t,x) + 1 + ∇_x Ṽ(t,x)^⊤ f(t,x,u) ≥ 0. By assumption on V(t,x), we have V(t,x) ≤η, for all (t,x) ∈ [0,T] × K. Hence, Ṽ(t,x) ≤η + η(t-1-T) ≤η + η(T-1-T) = 0 for all (t,x) ∈ [0,T] × K.
Under Assumption <ref> and Assumption <ref>, and for ℱ = C^∞(ℝ^n+1), the value of the LP formulation (<ref>) equals V^*(0,x_0).
We consider ℱ = C^∞(ℝ^n+1) and we use the notation Y to denote the compact set [0, T] × X. We fix ϵ > 0, and we will prove that there exists V ∈ C^∞(ℝ^n+1) that is feasible in (<ref>) and such that V(0,x_0) ≥ V^*(0,x_0) - ϵ. According to Th. <ref>, there exists V_1 ∈ C^1(ℝ^n+1) that is feasible in (<ref>) and such that V_1(0,x_0) ≥ V^*(0,x_0) - ϵ/2. For any σ∈ (0,1], we introduce the mollified function V_1σ = V_1 ∗ϕ_σ∈ C^∞(ℝ^n+1), where ϕ_σ is the standard mollifier defined as ϕ_σ(y) = 1/σ^n+1 ϕ(y/σ), with ϕ(y) = ξ e^-1/(1-‖ y ‖^2) if ‖ y ‖ < 1 and ϕ(y) = 0 if ‖ y ‖≥ 1, for a given constant ξ > 0 such that ∫_ℝ^n+1ϕ(y)dy = 1. Hence, a simple change of variable shows that ∫_ℝ^n+1ϕ_σ(y)dy = 1. We also underline that ϕ_σ is non-negative and supported on the ball B(0,σ). For any y = (t,x) ∈ Y and any σ∈ (0,1], we have that |V_1(y) - V_1σ(y)| = |V_1(y) - ∫_B(0,σ) V_1(y - h) ϕ_σ(h) dh | = |∫_B(0,σ) (V_1(y) - V_1(y - h)) ϕ_σ(h) dh | as ∫_B(0,σ)ϕ_σ(h)dh = 1. We denote by L_V an upper bound for the continuous function ‖∇ V_1(y) ‖_2 over the compact set Ŷ = { y ∈ℝ^n+1 : d(y,Y) ≤ 1 }, which is a Lipschitz constant for the function V_1. We deduce, by the triangular inequality and non-negativity of ϕ_σ, that for any y ∈ Y,
|V_1(y) - V_1σ(y)| ≤∫_B(0,σ) |V_1(y) - V_1(y - h)| ϕ_σ(h) dh
≤∫_B(0,σ) L_V ‖ h ‖ϕ_σ(h) dh
≤ L_V σ∫_B(0,σ)‖ h/σ‖ϕ(h/σ) 1/σ^n+1 dh
≤ L_V σ∫_B(0,1)‖h̃‖ϕ(h̃) dh̃, the last integral being a constant that we denote ℐ.
By property of the mollifiers <cit.>, we have ∂_i V_1σ(y) = ∂_i ( V_1 ∗ϕ_σ) = (∂_i V_1 ∗ϕ_σ) for any i ∈{t,x_1, …, x_n}. Therefore, ∂_t V_1σ (y) = ∫_B(0,σ)∂_t V_1 (y-h)ϕ_σ(h)dh and ∇_x V_1σ(y) = ∫_B(0,σ)∇_x V_1 (y-h)ϕ_σ(h)dh. Using the equality ∫_B(0,σ)ϕ_σ(h)dh = 1, we deduce that for any y ∈ Y,
∂_t V_1σ (y) + 1 + (∇_x V_1σ(y))^⊤ f(y,u) = ∫_B(0,σ) (∂_t V_1 (y-h) + 1 + (∇_x V_1(y-h))^⊤ f(y,u))ϕ_σ(h)dh
= ∫_B(0,σ) (∂_t V_1 (y-h) + 1 + (∇_x V_1(y-h))^⊤ f(y-h,u))ϕ_σ(h)dh
+ ∫_B(0,σ)∇_x V_1(y-h)^⊤(f(y,u) - f(y-h,u))ϕ_σ(h)dh.
We compute lower bounds for the two terms of the sum. We start with the second term: using the Cauchy-Schwarz inequality, we notice that ∫_B(0,σ)∇_x V_1(y-h)^⊤(f(y,u) - f(y-h,u))ϕ_σ(h)dh ≥ - ∫_B(0,σ)‖∇_x V_1(y-h) ‖‖ f(y,u) - f(y-h,u) ‖ϕ_σ(h)dh. Noticing that ‖∇_x V_1(y-h) ‖≤ L_V, since y-h ∈Ŷ for any h ∈ B(0,σ) ⊂ B(0,1), and introducing the Lipschitz constant L_f for f, we have
∫_B(0,σ)∇_x V_1(y-h)^⊤(f(y,u) - f(y-h,u))ϕ_σ(h)dh ≥ - L_V L_f ∫_B(0,σ)‖ h ‖ϕ_σ(h)dh
= - L_V L_f σℐ.
We define η = ϵ/2(T+2). We introduce the compact set Z = [0,T] × X × U and the family of compact sets Z_δ = { z ∈ℝ^N d(z,Z) ≤δ} for δ∈ (0, 1]. For any z = (y,u) ∈ Z_1, we introduce ψ(z) = ∂_t V_1 (y) + 1 + (∇_x V_1(y))^⊤ f(y,u). The function ψ(z) is continuous and according to Lemma <ref>, there exists σ_1 > 0 such that min_z ∈ Z_σψ(z) ≥min_z ∈ Zψ(z) - η/2 for any σ∈ (0, σ_1]. By feasibility of V_1 in (<ref>), we know that min_z ∈ Zψ(z) ≥ 0, which yields that ψ(z) ≥ - η/2 for any z ∈ Z_σ_1. We deduce that
∫_B(0,σ) (∂_t V_1 (y-h) + 1 + (∇_x V_1(y-h))^⊤ f(y-h,u))ϕ_σ(h)dh ≥ - ∫_B(0,σ)η/2 ϕ_σ(h)dh = - η/2,
since (y-h,u) ∈ Z_σ for any h ∈ B(0,σ). Combining the decomposition of Eqs. (<ref>)-(<ref>), with the lower bounds of Eq. (<ref>) and (<ref>), we deduce that
∂_t V_1σ (y) + 1 + (∇_x V_1σ(y))^⊤ f(y,u) ≥ - (L_V L_f σℐ + η/2),
for any (y,u) = (t,x,u) ∈ [0,T] × X × U and σ∈ (0,σ_1]. We define σ̃ = min{σ_1, η/2 L_V L_f ℐ, η/L_V ℐ}. From Eq. (<ref>) and Eq. (<ref>), we deduce that
V_1 σ̃(0,x_0) ≥ V_1(0,x_0) - η≥ V^*(0,x_0) - ϵ/2 - η
V_1 σ̃(t,x) ≤ V_1(t,x) + η≤η , ∀ (t,x) ∈ [0,T] × K
∂_t V_1σ̃ (t,x) + 1 + ∇_x V_1σ̃(t,x)^⊤ f(t,x,u) ≥ -η , ∀ (t,x,u) ∈ [0,T] × X × U.
From Lemma <ref>, we deduce that V(t,x) = V_1 σ̃(t,x) + η(t-1-T) ∈ C^∞(ℝ^n+1) is feasible in (<ref>). From Eq. (<ref>), we deduce that V(0,x_0) ≥ V^*(0,x_0) - ϵ/2 - η - (1+T)η, and by definition of η, V(0,x_0) ≥ V^*(0,x_0) - ϵ.
The next theorem underlies the convergence proof of the hierarchy of semi-infinite problems in Sect. <ref>: if we restrict to polynomials HJB subsolutions, the value of the problem (<ref>) remains unchanged.
Under Assumption <ref> and Assumption <ref>, and for ℱ = ℝ[t,x_1, …, x_n], the value of the LP formulation (<ref>) equals V^*(0,x_0).
We consider ℱ = ℝ[t,x_1, …, x_n]. For a given ϵ > 0, we will prove that there exists V ∈ℝ[t,x_1, …, x_n] that is feasible in (<ref>) and such that V(0,x_0) ≥ V^*(0,x_0) - ϵ. According to Th. <ref>, there exists a function Q ∈ C^∞(ℝ^n+1) which is a subsolution of the HJB equation and such that Q(0,x_0) ≥ V^*(0,x_0) - ϵ/2. We notice that Q has a locally Lipschitz gradient. Therefore, we can apply Lemma <ref>. This yields, in particular, that for any ν > 0, there exists a polynomial w ∈ℝ[t,x_1, …, x_n] such that for all (t,x) ∈ [0,T] × X, | w (t,x) - Q (t,x) | ≤ν and |∂_i w (t,x) - ∂_i Q (t,x) | ≤ν, i ∈{t,x_1, …, x_n }. We deduce that |∂_t Q (t,x) + ∇_x Q(t,x)^⊤ f(t,x,u) - (∂_t w (t,x) + ∇_x w(t,x)^⊤ f(t,x,u)) | ≤ |∂_t Q (t,x) - ∂_t w (t,x) | + ∑_i=1^n |∂_x_i Q (t,x) - ∂_x_i w (t,x) | M_i ≤ν (1 + ∑_i=1^n M_i), where M_i = max_(t,x,u) ∈ [0,T] × X × U |f_i(t,x,u)|. Therefore, we observe that for all (t,x,u) ∈ [0,T] × X × U,
∂_t w (t,x) + 1 + ∇_x w (t,x)^⊤ f(t,x,u) ≥∂_t Q (t,x) + 1 + ∇_x Q (t,x)^⊤ f(t,x,u) - ν (1 + ∑_i=1^n M_i)
≥ - ν (1 + ∑_i=1^n M_i),
as Q is a subsolution of the HJB equation. In summary, for ν = η(1 + ∑_i=1^n M_i)^-1≤η,
w(0,x_0) ≥ Q(0,x_0) - η≥ V^*(0,x_0) - ϵ/2 - η
w(t,x) ≤ Q(t,x) + η≤η , ∀ (t,x) ∈ [0,T] × K
∂_t w (t,x) + 1 + ∇_x w (t,x)^⊤ f(t,x,u) ≥ -η , ∀ (t,x,u) ∈ [0,T] × X × U ,
the last inequality following from Eq. (<ref>). Based on Eqs. (<ref>)-(<ref>) and Lemma <ref>, we notice that the polynomial V(t,x) = w(t,x) + η(t-T-1)∈ℝ[t,x_1, …, x_n] is feasible in the problem (<ref>). Having defined η = ϵ/2(T+2), we see, based on Eq. (<ref>), that it satisfies V(0,x_0) ≥ V^*(0,x_0) - ϵ/2 - η - η(T+1) = V^*(0,x_0) - ϵ.
§ CONVEX SEMI-INFINITE PROGRAMMING TO COMPUTE NEAR-OPTIMAL SUBSOLUTIONS
For ℱ being either C^1(ℝ^n+1), C^∞(ℝ^n+1), or ℝ[t,x_1,…, x_n], the linear program (<ref>) is infinite dimensional, and thus, not tractable as it stands. Therefore, we next present a hierarchy of convex SIP problems that are solvable with a dedicated algorithm, to compute subsolutions to the HJB equation that are near optimal in the problem (<ref>).
§.§ A hierarchy of linear semi-infinite programs
Instead of having an optimization space ℱ that is infinite dimensional, we suggest to restrict to the finite dimensional subspaces ℝ_d[t,x_1,…, x_n] of polynomials of degree bounded by d. This restricted dual problem is:
sup_{V ∈ ℝ_d[t,x_1,…, x_n]} V(0,x_0)
s.t. ∂_t V (t,x) + 1 + ∇_x V(t,x)^⊤ f(t,x,u) ≥ 0 ∀ (t,x,u) ∈ [0,T] × X × U,
V(t,x) ≤ 0 ∀ (t,x) ∈ [0,T] × K.
In the rest of the paper, we will denote by N the dimension of the vector space ℝ_d[t,x_1,…, x_n] and by Φ(t,x) ∈ℝ^N a basis of this space. Both objects depend on d; this dependence is left implicit for readability. For any V ∈ℝ_d[t,x_1,…, x_n], we introduce the vector θ of the coordinates of V in the basis Φ. Hence, we have the relation
V(t,x) = θ^⊤Φ(t,x) ∈ℝ_d[t,x_1,…, x_n].
Expressing problem (<ref>) as an optimization problem over the vector of coefficients, it appears clearly that this is a linear semi-infinite program.
For d ∈ℕ^*, problem (<ref>)
is a linear semi-infinite program, i.e., a linear program with a finite number of variables and an infinite number of constraints. More precisely, there exist a vector c ∈ℝ^N, and a compact set 𝒴⊂ℝ^N+1 such that (<ref>) reads
sup_{θ ∈ ℝ^N} c^⊤θ
s.t. a^⊤θ + b ≤ 0 ∀ (a,b) ∈𝒴. (SIP)
We define the vector c = Φ(0,x_0), and the compact sets
𝒴_1 = { (-∂_tΦ(t,x) - ∇_xΦ(t,x)^⊤ f(t,x,u), -1), (t,x,u) ∈ [0,T] × X × U }
𝒴_2 = { (Φ(t,x),0), (t,x) ∈ [0,T] × K }
𝒴 = 𝒴_1 ∪𝒴_2.
We see that for any V_θ(t,x) = θ^⊤Φ(t,x) ∈ℝ_d[t,x_1,…, x_n], V_θ(0,x_0) = c^⊤θ, and V_θ(t,x) is feasible in (<ref>) if and only if a^⊤θ + b ≤ 0, for all (a,b) ∈𝒴.
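As an illustration, a monomial basis Φ(t,x) of ℝ_d[t,x_1,…, x_n] and its evaluation at a point can be sketched as follows; the basis ordering is an arbitrary choice, and the partial derivatives of the monomials, needed for the set 𝒴_1, follow analogously.

import itertools
import numpy as np

def monomial_basis(n_vars, d):
    # Exponent tuples of all monomials in (t, x_1, ..., x_n) with total degree <= d.
    return [e for e in itertools.product(range(d + 1), repeat=n_vars)
            if sum(e) <= d]

def eval_basis(exponents, y):
    # Phi(y): value of every basis monomial at the point y = (t, x_1, ..., x_n).
    y = np.asarray(y, dtype=float)
    return np.array([np.prod(y ** np.asarray(e)) for e in exponents])

# V_theta(t, x) = theta^T Phi(t, x) for n = 2 state variables and degree d = 2
exps = monomial_basis(3, 2)
N = len(exps)                          # dimension of R_d[t, x_1, x_2], here 10
theta = np.zeros(N)                    # coefficient vector of V_theta
phi = eval_basis(exps, [0.5, 1.0, -1.0])
print(N, float(theta @ phi))           # 10 0.0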
We will see in the next section how to efficiently solve those semi-infinite programs. Prior to that, we state the convergence of this hierarchy of semi-infinite programs.
The sequence 𝗏𝖺𝗅(<ref>) converges to V^*(0,x_0) when d →∞.
On the one hand, we introduce the notation v_d = 𝗏𝖺𝗅(<ref>). This sequence is obviously increasing and bounded above by V^*(0,x_0). Hence, it converges to a value ℓ, and any subsequence converges to ℓ≤ V^*(0,x_0). On the other hand, Th. <ref> guarantees that there exists a sequence of polynomials w_k ∈ℝ[t,x_1,…,x_n] that are feasible in (<ref>) and such that w_k(0,x_0) →_k V^*(0,x_0). By definition, we have v_d_k≥ w_k(0,x_0), where d_k = 𝖽𝖾𝗀(w_k). Up to the extraction of a subsequence of (w_k), we can assume that the sequence d_k is increasing, therefore (v_d_k)_k∈ℕ is a subsequence of (v_d)_d∈ℕ. As v_d_k→ℓ and w_k(0,x_0) →_k V^*(0,x_0), we deduce that ℓ≥ V^*(0,x_0), which yields the equality ℓ = V^*(0,x_0).
§.§ Regularization and solution of the semi-infinite programs
We introduce a quadratic regularization in the semi-infinite program (<ref>), yielding the following formulation depending on μ∈ℝ_++:
max_{θ ∈ ℝ^N} c^⊤θ - μ/2 ‖θ‖^2
s.t. a^⊤θ + b ≤ 0 ∀ (a,b) ∈𝒴.
For any μ > 0, the semi-infinite program (<ref>) has a unique optimal solution with value 𝗏𝖺𝗅(<ref>)≤ V^*(0,x_0). Moreover, 𝗏𝖺𝗅(<ref>) → 𝗏𝖺𝗅(<ref>) as μ→ 0.
The feasible set of (<ref>) being convex, and the objective function being strongly concave, this optimization problem admits a unique maximum θ. By definition, 𝗏𝖺𝗅(<ref>) = c^⊤θ - μ/2‖θ‖^2 ≤ c^⊤θ≤𝗏𝖺𝗅(<ref>), since θ is also feasible in the maximization problem (<ref>). Additionally, 𝗏𝖺𝗅(<ref>)≤ V^*(0,x_0), since any function V feasible in (<ref>) satisfies V(0,x_0) ≤ V^*(0,x_0). We also notice that the function μ↦𝗏𝖺𝗅(<ref>) is decreasing, so it admits a limit ℓ at 0^+, due to the aforementioned inequalities, ℓ≤𝗏𝖺𝗅(<ref>). For any μ, ϵ > 0, if we take θ_ϵ an ϵ-optimal solution in the problem (<ref>), we see that 𝗏𝖺𝗅(<ref>) - ϵ - μ/2‖θ_ϵ‖^2 ≤ c^⊤θ_ϵ - μ/2‖θ_ϵ‖^2 ≤𝗏𝖺𝗅(<ref>). For a fixed ϵ, and taking μ→ 0^+, we obtain 𝗏𝖺𝗅(<ref>) - ϵ≤ℓ. This being true for any ϵ > 0, we deduce that 𝗏𝖺𝗅(<ref>)≤ℓ, which proves the equality.
Setting the regularization parameter μ in practice implies a trade-off between the computational tractability of the semi-infinite program (<ref>) and the accuracy of the approximation of the original problem (<ref>). To solve the formulation (<ref>), we propose to use a standard algorithm for convex semi-infinite programming, called the cutting-plane (CP) algorithm <cit.>. To that end, we assume to have a separation oracle computing, for any θ∈ℝ^N,
ϕ(θ) = max_(a,b) ∈𝒴 a^⊤θ + b,
and an associate argmaximum. Solving the optimization problem in Eq. (<ref>) may be computationally intensive, since the compact set 𝒴 may not be convex. Therefore, we only assume to have an oracle with relative optimality gap δ∈ [0,1) computing (a,b) ∈𝒴, such that ϕ(θ)- (a^⊤θ + b) ≤δ |ϕ(θ)|. We treat this oracle as a black box, regardless of its implementation, via global optimization, gridding, interval arithmetics or sampling for instance.
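A minimal sketch of the cutting-plane loop is given below on a toy two-dimensional linear SIP whose constraint set is sampled on a grid, standing in for the HJB separation oracle; the solver, tolerances and toy instance are illustrative and do not reproduce the paper's implementation.

import numpy as np
import cvxpy as cp

def cutting_plane(c, mu, oracle, eps=1e-4, max_iter=100):
    # Cutting-plane loop for  max  c^T th - mu/2 ||th||^2
    #                         s.t. a^T th + b <= 0 for all (a, b) in Y.
    # `oracle(th)` returns an (approximately) most violated pair (a, b);
    # the loop stops once the returned cut is violated by less than eps.
    cuts = []
    th = cp.Variable(len(c))
    theta = np.zeros(len(c))
    for _ in range(max_iter):
        prob = cp.Problem(cp.Maximize(c @ th - mu / 2 * cp.sum_squares(th)),
                          [a @ th + b <= 0 for a, b in cuts])
        prob.solve()
        theta = th.value
        a, b = oracle(theta)
        if a @ theta + b <= eps:       # near-feasible: return the iterate
            return theta
        cuts.append((a, b))
    return theta

# Toy instance: constraints  theta_1 cos(s) + theta_2 sin(s) <= 1, s in [0, 2*pi),
# sampled on a grid, play the role of the separation oracle.
def oracle(theta, samples=np.linspace(0.0, 2.0 * np.pi, 400)):
    A = np.stack([np.cos(samples), np.sin(samples)], axis=1)
    k = int(np.argmax(A @ theta - 1.0))
    return A[k], -1.0

theta = cutting_plane(c=np.array([1.0, 1.0]), mu=1e-3, oracle=oracle)
print(theta)   # close to (1/sqrt(2), 1/sqrt(2)), the maximizer of c^T theta on the unit disk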
Before stating the termination and the convergence of Algorithm <ref>, we introduce the vector θ̂∈ℝ^N of coordinates of the polynomial v̂(t,x) = t - 1 - T in the basis Φ(t,x), and we notice that this element helps obtain feasible solutions since ϕ(θ̂) = -1: due to Lemma <ref>, we observe that if θ has a feasibility error less than or equal to η≥ 0 in (<ref>) and (<ref>), then θ + ηθ̂ is feasible in (<ref>) and (<ref>). For any μ > 0, we define the convex and compact set 𝒳_μ = {θ∈ℝ^N : c^⊤θ - μ/2‖θ‖^2 ≥ c^⊤θ̂ - μ/2‖θ̂‖^2 }, and we define R_μ = sup_θ∈𝒳_μ‖θ‖. Finally, we define the function r_μ(e) = e (1+T + μ R_μ^2 (1+e/2)). Note that r_μ(e) → 0 as e → 0.
If ϵ > 0, Algorithm <ref> stops after a finite number K of iterations, and θ^K + ϵ/(1-δ) θ̂ is feasible and r_μ(ϵ/(1-δ))-optimal in (<ref>). If ϵ = 0, the following alternative holds: (a) either Algorithm <ref> stops after a finite number of iterations, and the last iterate is the optimal solution of (<ref>), or (b) it generates an infinite sequence, and the optimality gap and the feasibility error converge towards zero with an asymptotic rate in O(1/k).
First of all, we notice that during the execution of Algorithm <ref>, we necessarily have θ^k ∈𝒳_μ, since θ̂ is a feasible solution in (<ref>) with value c^⊤θ̂ - μ/2‖θ̂‖^2, therefore by optimality of θ^k in (<ref>), c^⊤θ^k - μ/2‖θ^k ‖^2 ≥ c^⊤θ̂ - μ/2‖θ̂‖^2. The finite convergence of Algorithm <ref> if ϵ >0, and the convergence rate in the case ϵ = 0 (if no finite convergence) follows from <cit.> (which is an extension of <cit.>) : we apply these theorems to the problem (<ref>) with the additional constraint θ∈𝒳_μ. As previously explained, this additional constraint does not change the execution of the algorithm, but it enables us to satisfy the compactness assumption of <cit.>. We also note that the objective function is μ-strongly concave, and that θ̂∈𝒳_μ is a strictly feasible point with respect to the semi-infinite constraints.
We finish the proof by showing that if Algorithm <ref> stops at iteration K, then θ̃ = θ^K + ϵ/1-δθ̂ is feasible and r_μ(ϵ/1-δ)-optimal in (<ref>). If Algorithm <ref> stops at iteration K, this means that (a^K)^⊤θ^K + b^K ≤ϵ. If ϕ(θ^K)≤ 0, then θ^K is feasible in (<ref>), and so is θ̃ due to Lemma <ref>. If ϕ(θ^K) > 0, then by property of the δ-oracle,
(1-δ) ϕ(θ^K) ≤ (a^K)^⊤θ^K + b^K ≤ϵ, and we deduce that the feasibility error is ϕ(θ^K) ≤ϵ/1-δ. With Lemma <ref>, we deduce that θ̃ is feasible in (<ref>). We also note that
c^⊤θ̃ - μ/2‖θ̃‖^2 = c^⊤θ^K + ϵ/1-δ c^⊤θ̂ - μ/2‖θ^K + ϵ/1-δθ̂‖^2
≥ c^⊤θ^K - ϵ/1-δ (1+T) - μ/2( ‖θ^K ‖^2 + 2 ϵ/1-δ‖θ^K ‖ ‖θ̂‖ + ϵ^2/(1-δ)^2‖θ̂‖^2 ),
since c^⊤θ̂ = V_θ̂(0,x_0) = -(1+T), and due to the Cauchy-Schwartz inequality. By optimality of θ^K in (<ref>), which is a relaxation of (<ref>), we know that 𝗏𝖺𝗅(<ref>)≤ c^⊤θ^K - μ/2‖θ^K ‖^2. Applying this, we deduce that
c^⊤θ̃ - μ/2‖θ̃‖^2
≥𝗏𝖺𝗅(<ref>)- ϵ/1-δ (1+T) - μ/2( 2 ϵ/1-δ‖θ^K ‖ ‖θ̂‖ + ϵ^2/(1-δ)^2‖θ̂‖^2 )
≥𝗏𝖺𝗅(<ref>)- ϵ/1-δ (1+T) - μ R_μ^2 ( ϵ/1-δ + ϵ^2/2(1-δ)^2)
≥𝗏𝖺𝗅(<ref>)- r_μ(ϵ/1-δ),
the second inequality following from the fact that ‖θ̂‖≤ R_μ and ‖θ^K ‖≤ R_μ, as θ̂, θ^K ∈𝒳_μ.
§ FEEDBACK CONTROL BASED ON APPROXIMATE VALUE FUNCTIONS
In the previous section, we have seen how to compute subsolutions of the HJB equation based on convex semi-infinite programming, and how to deduce a lower bound on the minimal travel time. In this section, we focus on how subsolutions of the HJB equation, which approximate the value function V^*, enable one to recover a near-optimal control for the minimal time control problem (<ref>)-(<ref>).
§.§ Controller design and existence of trajectories
For a given continuously differentiable function V ∈ C^1(ℝ^n+1), we define the set-valued maps
𝒰_V(t,x) = 𝖺𝗋𝗀𝗆𝗂𝗇_u ∈ U ∇_x V (t,x)^⊤ f(t,x,u)
ℐ_V(t,x) = { u ∈𝒰_V(t,x) f(t,x,u) ∈ T_X(x) },
where T_X(x) is the contingent cone to X at point x (see Introduction). In line with previous works designing feedback controllers based on approximate value functions <cit.>, we are interested in the trajectories satisfying the following differential inclusion depending on the function V ∈ C^1(ℝ^n+1):
ẋ_V(t) = f(t,x_V(t),u_V(t)) with u_V(t) ∈𝒰_V(t,x_V(t)).
Intuitively, such a feedback control pushes the system towards the descent direction of the function V. The following proposition confirms that, should the function V ∈ C^1(ℝ^n+1) be optimal in problem (<ref>), then any minimal time trajectory satisfies the differential inclusion (<ref>) with respect to V.
Under Assumptions <ref>-<ref>, we consider an optimal trajectory (x^*(·),u^*(·)) of the minimal time control problem (<ref>)-(<ref>) starting from (0,x_0), with hitting time τ^* = V^*(0,x_0). If the linear program (<ref>), for ℱ = C^1(ℝ^n+1), admits an optimal solution V, then, for almost every t ∈ [0, τ^*],
u^*(t) ∈ℐ_V(t,x^*(t)) ⊂𝒰_V(t,x^*(t)).
In particular, the trajectory (x^*(·),u^*(·)) satisfies the differential inclusion (<ref>).
We define the function α(t) = V (t,x^*(t)) + t, which is differentiable. We have that α'(t) = ∂_t V (t,x^*(t)) + 1 + ∇_x V (t,x^*(t))^⊤ f(t,x^*(t),u^*(t)), for almost all t ∈ [0,τ^*]. Since V is feasible in (<ref>), therefore satisfies Eq. (<ref>), and since (x^*(t),u^*(t)) ∈ X × U a. e. on [0,τ^*], we know that α'(t) ≥ 0 a. e. on [0,τ^*]. This proves that the differentiable function α(t) is non-decreasing function over [0,τ^*]. By optimality of V in (<ref>), and due to Th. <ref> (Assumptions <ref>-<ref> are satisfied), α(0) = V(0,x_0) = 𝗏𝖺𝗅(<ref>) = τ^*. Moreover, α(τ^*)= τ^* +V(τ^*,x^*(τ^*)) ≤τ^*, since V satisfies Eq. (<ref>) and x^*(τ^*) ∈ K. From α(τ^*) ≤α(0), we obtain that α(t) is constant. Hence, ∂_t V(t,x^*(t)) + 1 + ∇_x V (t,x^*(t))^⊤ f(t,x^*(t),u^*(t)) = 0, meaning
∇_x V (t,x^*(t))^⊤ f(t,x^*(t),u^*(t)) = -(∂_t V (t,x^*(t)) + 1), a.e. on [0,τ^*].
As V satisfies Eq. (<ref>), we have that ∇_x V (t,x^*(t))^⊤ f(t,x^*(t),u) ≥ -(∂_t V (t,x^*(t)) + 1) for all t ∈ [0,τ^*] and for all u∈ U. Together with Eq. (<ref>), we deduce that u^*(t) ∈𝒰_V(t,x^*(t)) for almost all t ∈ [0, τ^*]. Based on this fact, Lemma <ref> yields that for almost all t ∈ [0, τ^*], f(t,x^*(t),u^*(t)) ∈ T_X(x^*(t)). Therefore, for almost all t ∈ [0, τ^*], u^*(t) ∈ℐ_V(t,x^*(t)).
We just saw that whenever V ∈ C^1(ℝ^n+1) is optimal in the linear program (<ref>), any minimal time trajectory is a solution of the differential inclusion (<ref>) associated with the function V. However, we may not be able to compute exactly such an optimal function in practice, especially because it may not exist. The next theorem states the existence of closed-loop trajectories following (<ref>), for any function V ∈ C^1(ℝ^n+1).
Under Assumptions <ref>-<ref>, if V ∈ C^1(ℝ^n+1) is such that for any (t,x) ∈ℝ_+ × X, ℐ_V(t,x) ≠∅, then there exists a trajectory (x_V(·),u_V(·)) starting at (0,x_0), satisfying the differential inclusion (<ref>) over [0,∞) and such that x_V(t) ∈ X for almost all t ∈ [0,∞).
We introduce an auxiliary control system to reduce to a time-invariant system with a convex control set, so as to fit in the setting of <cit.>. In what follows, we use the notation y = (t,x) again. We introduce two objects: the set-valued map Û(y) = { f(y,u), u ∈ U } and the function f̂(y,v) = [ 1; v ] for y ∈ℝ^n+1 and v ∈ℝ^n. According to the terminology introduced in <cit.>, (Û,f̂) is a Marchaud control system, as (i) {(y,v) ∈ℝ^2n+1 v ∈Û(y)} is closed, (ii) f̂ is continuous, (iii) the velocity set {1 }× f(y,U) is convex according to Assumption <ref> and (iv) f̂ has a linear growth, and so has Û due to the fact that f is Lipschitz continuous and 𝒰 is bounded. We introduce 𝒞 = ℝ_+ × X and define the regulation map REG(y) = { v ∈Û(y) f̂(y,v) ∈ T_𝒞(y) }. We also introduce the set-valued map SEL(y) = 𝖺𝗋𝗀𝗆𝗂𝗇_v ∈Û(y) ∇_x V(y)^⊤ v. We prove now that the graph of SEL is closed. For any converging sequence (y_k,v_k) → (y̅,v̅) with v_k ∈ SEL(y_k), we see that for all k ∈ℕ, v_k = f(y_k,u_k) for a given u_k ∈ U and ∇_x V(y_k)^⊤ f(y_k,u_k) = h(y_k), where h(y_k) = min_u ∈ U∇_x V(y_k)^⊤ f(y_k,u). Up to extracting a subsequence of u_k, we can assume that u_k →u̅, as U is compact. Note that h is continuous, by application of the Maximum Theorem <cit.>, in so far as (i) (y,u) ↦∇_x V(y)^⊤ f(y,u) is continuous, therefore lower and upper semicontinuous, (ii) the set-valued map M(y)= U is compact-valued, and lower and upper semicontinuous since it is constant. By continuity of h, ∇ V and f, we conclude that ∇_x V(y̅)^⊤v̅ = ∇_x V(y̅)^⊤ f(y̅,u̅) = h(y̅) = min_v ∈Û(y̅) ∇_x V(y̅)^⊤ v, meaning that v̅ = f(y̅,u̅) ∈ SEL(y̅).
We notice that if u ∈ℐ_V(y), then v = f(y,u) ∈ REG(y) ∩ SEL(y). As ℐ_V(y) ≠∅ for all y ∈𝒞 (by assumption), REG(y) ∩ SEL(y) ≠∅. Together with the closedness of the graph of SEL, this means SEL is a selection procedure of REG, according to the terminology of <cit.>, and has convex values. We underline that REG(y) ≠∅, for all y ∈𝒞, i.e., 𝒞 is a viability domain for (Û,f̂). As (0, x_0) ∈𝒞, <cit.> yields the existence of a solution (y(·), v(·)) such that y(t) ∈𝒞, v(t) ∈ REG(y(t)) and
v(t) ∈ SEL(y(t)) ∩ REG(y(t)),
for almost all t ∈ [0, ∞). We notice first that y_1(0) = 0 and ẏ_1(t) = 1 for almost all t ≥ 0, thus y_1(t) = t. Hence, we can indeed see y(t) as (t,x(t)), with x(0) = x_0 and ẋ(t) = v(t). Moreover, v(t) = f(t,x(t),u(t)) for a given u(t) ∈ U, since v(t) ∈Û(y(t)) = f(t,x(t),U) a.e. on [0,∞). We deduce from v(t) ∈ SEL(y(t)), which comes from (<ref>), that u(t) ∈𝒰_V(t,x(t)). Moreover, we deduce from y(t) ∈𝒞 that x(t) ∈ X a.e. on [0,∞).
The condition ℐ_V(t,x) ≠∅ in Th. <ref> may appear restrictive, because it is not evident why a vector f(t,x,u_V) minimizing ∇_x V (t,x)^⊤ f(t,x,u) over u ∈ U would belong to T_X(x). However, we have seen that under the hypotheses of Prop. <ref>, Eq. (<ref>) yields ℐ_V(t,x) ≠∅. Moreover, should the condition ℐ_V(t,x) ≠∅ not be satisfied, we could enlarge the definition of 𝒰_V(t,x) into 𝒰_V,ϵ(t,x) = 𝖺𝗋𝗀𝗆𝗂𝗇^ϵ_u ∈ U ∇_x V (t,x)^⊤ f(t,x,u), the set of ϵ-approximate minimizers, so that for ϵ > 0 large enough, ℐ_V, ϵ(t,x) = { u ∈𝒰_V,ϵ(t,x) f(t,x,u) ∈ T_X(x) }≠∅.
§.§ Performance of the feedback controller depending on the value function approximation error
Previously, we introduced closed-loop trajectories satisfying the differential inclusion (<ref>) with respect to a function V ∈ C^1(ℝ^n+1). In this section, we state some performance guarantees on those trajectories, depending on some properties of the function V. In the following, we assume that, up to an enlargement of the time horizon, the system can reach the target set starting from any initial condition (t,x) ∈ [0, T] × X, and that the associated value function is Lipschitz.
There exists a time T^♯≥ T such that the minimal time control problem (<ref>)-(<ref>) defined over [0, T^♯] has a value function V^♯ which takes finite values over Y = [0,T] × X, and is Lipschitz continuous.
We emphasize that, under Assumption <ref>, V^*(t,x) < ∞ implies V^*(t,x) = V^♯(t,x) for any (t,x) ∈ Y. Since V^♯ is Lipschitz continuous over Y ⊂ℝ^n+1, it admits a Lipschitz continuous extension over ℝ^n+1 <cit.>. We identify the value function with its extension on ℝ^n+1, so that we can speak of the Clarke generalized derivative ∂^c V^♯(y) of V^♯ at y ∈ Y. For any V∈ C^1(ℝ^n+1), we introduce the notation
‖∇ V - ∇ V^♯‖_∞ = sup_y ∈ Y sup_g∈∂^c V^♯(y)‖∇ V(y) - g ‖_2.
We also define the constant C_f = sup_(t,x,u) ∈ Y × U‖ f(t,x,u) ‖ <∞.
Let V∈ C^1(ℝ^n+1) be a continuously differentiable function, and let (x_V(·),u_V(·)) be a closed-loop trajectory starting at (0,x_0) satisfying the differential inclusion (<ref>) and the state constraints over [0, T]. We define t_V = sup{t ∈ [0,T] x_V([0,t])⊂ X ∖ K }. Then, under Assumptions <ref>-<ref>,
V^♯(t, x_V(t)) ≤ (τ^* - t) + t 2 (1 + C_f) ‖∇ V - ∇ V^♯‖_∞ ∀ t ∈[0, t_V],
where τ^* = V^*(0,x_0) = V^♯(0,x_0) ≤ t_V. In particular, we notice that
V^♯(τ^*, x_V(τ^*)) ≤ 2 τ^* (1 + C_f) ‖∇ V - ∇ V^♯‖_∞.
In Eq. (<ref>), V^♯(τ^*, x_V(τ^*)) measures how far the closed-loop trajectory (x_V(·),u_V(·)) is from the target set K at the moment when the time-optimal trajectory reaches K. As a corollary, we give a condition for the closed-loop trajectory (x_V(·),u_V(·)) to effectively reach the target set K, with a bounded delay compared to the time-optimal trajectory.
Under the same hypotheses as Th. <ref>, if ‖∇ V - ∇ V^♯‖_∞≤1 - τ^*/T/2 (1 + C_f), then
x_V(t_V) ∈ K with t_V ∈ [τ^*, 1/1 - 2 (1 + C_f) ‖∇ V - ∇ V^♯‖_∞τ^*].
We underline that the hitting time t_V ≥τ^* converges to the minimal time τ^*, when the approximation error ‖∇ V - ∇ V^♯‖_∞ vanishes.
For any Lipschitz continuous function F: ℝ^n+1→ℝ, we recall that ∂^c F(y) denote the Clarke's generalized derivative at y, and we define H_F as
H_F(y) = 1 + min_u ∈ U, g ∈∂^c F(y) { g^⊤[ 1; f(y,u) ] }.
The minimum is attained by continuity of the objective, and by compactness of U and ∂^c F(y) (see <cit.>). Note also that for any V ∈ C^1(ℝ^n+1), for any y = (t,x) ∈ℝ^n+1, H_V(t,x) = 1 + ∂_t V(t,x) + min_u ∈ U∇_x V(t,x)^⊤ f(t,x,u), and the argmin is 𝒰_V(t,x). By application of the Maximum Theorem <cit.>, we know that H_F is lower semi-continuous, since (i) ∂^c F(y) is a compact-valued and upper semi-continuous set-valued map <cit.>, therefore so is y ↦ U ×∂^c F(y), and (ii) (y,u,g) ↦ g^⊤[ 1; f(y,u) ] is continuous.
First, we take any y_1 = (t_1,x_1) ∈ [0, T] × X ∖ K, and we prove that H_V^♯(t_1,x_1) ≤ 0. According to Assumption <ref>, V^♯(t_1,x_1) < ∞, and according to Th. <ref> applied to the control system (<ref>)-(<ref>) on the interval [0 , T^♯], there exists an optimal trajectory (x(·), u(·)) over [t_1, t_2] (with t_2 > t_1 since x_1 ∉ K) starting from (t_1, x_1). By definition, V^♯(t_1,x_1) = t_2 - t_1. We can also prove that for all t ∈ [t_1, t_2], V^♯(t,x(t)) = t_2 - t: (i) the trajectory restricted to [t, t_2] yields an admissible trajectory starting from (t,x(t)), therefore V^♯(t,x(t)) ≤ t_2 - t, and (ii) for an optimal trajectory (x̃(·), ũ(·)) starting from (t,x(t)) over [t, t_3], the trajectory following (x(·), u(·)) over [t_1, t] and (x̃(·), ũ(·)) over [t, t_3] is admissible and starting from (t_1, x_1), therefore, V^♯(t,x(t)) + (t-t_1) ≥ V^♯(t_1,x_1) = t_2 - t_1, giving V^♯(t,x(t)) ≥ t_2 - t. As α(t) = V^♯(t,x(t)) = t_2 - t for all t ∈ [t_1, t_2], we deduce that
α'(t) = -1 a. e. on [t_1, t_2].
Moreover, since V^♯ is Lipschitz continuous by assumption, and t ↦ (t, x(t)) is Lipschitz continuous as x(t) is differentiable a.e. with a bounded derivative, Lemma <ref> gives:
α'(t) = d (V^♯(t,x(t)))/dt≥min_g ∈∂^c V^♯(t,x(t)) g^⊤[ 1; ẋ(t) ] a.e. on [t_1, t_2]. Using that ẋ(t) = f(t,x(t),u(t)) a.e. on [t_1, t_2], and the definition of H_V^♯:
α'(t) ≥min_g ∈∂^c V^♯(t,x(t)) g^⊤[ 1; f(t,x(t),u(t)) ]≥ H_V^♯(t,x(t)) - 1,
a.e. on [t_1, t_2]. Combining this with Eq. (<ref>), we deduce that for almost all t ∈ [t_1, t_2], H_V^♯(t,x(t)) ≤ 0. By lower semi-continuity of H_V^♯ (see above), and by continuity of x(·)
H_V^♯(t_1,x_1) ≤ 0.
Second, still for any (t_1,x_1) ∈ [0, T] × X ∖ K, we observe that there exists (g,u_1) ∈∂^c V^♯(t_1,x_1) × U such that H_V^♯(t_1,x_1) = 1 + g^⊤[ 1; f(t_1,x_1,u_1) ]; indeed, we already mentioned that the minimum in (<ref>) is attained. Therefore, for any V ∈ C^1(ℝ^n+1)
1 + ∂_t V(t_1,x_1) + ∇_x V (t_1,x_1)^⊤ f(t_1,x_1,u_1) = H_V^♯(t_1,x_1) + (∇ V(t_1,x_1) - g)^⊤[ 1; f(t_1,x_1,u_1) ]
≤ H_V^♯(t_1,x_1) + ‖∇ V - ∇ V^♯‖ (1+C_f),
the inequality being due to Cauchy-Schwartz inequality, and the definition of ‖∇ V - ∇ V^♯‖. We know that H_V(t_1,x_1) ≤∂_t V(t_1,x_1) + 1 + ∇_x V (t_1,x_1)^⊤ f(t_1,x_1,u_1) by definition of H_V(t_1,x_1) (as u_1 ∈ U), therefore Eq. (<ref>) gives H_V(t_1,x_1) ≤ H_V^♯(t_1,x_1) + (1+C_f) ‖∇ V - ∇ V^♯‖. Using this inequality and Eq. (<ref>), we deduce that for all (t_1,x_1) ∈ [0, T] × X ∖ K,
H_V(t_1,x_1) ≤ (1+C_f) ‖∇ V - ∇ V^♯‖.
Third, according to the hypotheses of the theorem, we take any V∈ C^1(ℝ^n+1), and any closed-loop trajectory (x_V(·),u_V(·)) starting at (0,x_0) satisfying the differential inclusion (<ref>) and the state constraints over [0, T]. We, then, study the evolution of V^♯ over this trajectory. As x_V(t) is Lipschitz continuous, Lemma <ref> yields the existence of g(t) ∈∂^c V^♯(t,x_V(t)) for almost all t ∈ [0, T], such that
d/dt(V^♯(t,x_V(t))) ≤ g(t)^⊤[ 1; f(t,x_V(t),u_V(t)) ] a.e. on [0, T].
As u_V(t) ∈𝒰_V(t,x_V(t)), we know that H_V(t,x_V(t)) = 1 + ∇ V(t,x_V(t))^⊤[ 1; f(t,x_V(t),u_V(t)), ], and therefore,
d/dt(V^♯(t,x_V(t))) ≤ -1 + H_V(t,x_V(t)) + (g(t) - ∇ V(t,x_V(t)))^⊤[ 1; f(t,x_V(t),u_V(t)), ]
We deduce, using Cauchy-Schwartz inequality and the definition of ‖∇ V - ∇ V^♯‖,
d/dt(V^♯(t,x_V(t))) ≤ -1 + H_V(t,x_V(t)) + (1+C_f) ‖∇ V - ∇ V^♯‖,
for almost all [0, T]. Moreover, for all t ∈ [0, t_V), x_V(t) ∉ K. Therefore, we can apply Eq. (<ref>) to deduce, in combination with Eq. (<ref>), that for almost all [0, t_V],d/dt(V^♯(t,x_V(t))) ≤ -1 + 2 (1+C_f) ‖∇ V - ∇ V^♯‖. By integration, we deduce that for all t ∈ [0, t_V], V^♯(t, x_V(t)) - τ^* ≤ - t + 2 t (1+C_f) ‖∇ V - ∇ V^♯‖, as V^♯(0,x_V(0)) = V^♯(0,x_0) = τ^*. This proves Eq. (<ref>).
Fourth and finally, we prove the corollary. Due to the definition of t_V, the following (non-exclusive) alternative holds: either x_V(t_V) ∈ K or t_V = T. Moreover, if ‖∇ V - ∇ V^♯‖_∞≤1 - τ^*/T/2 (1 + C_f), then V^♯(t, x_V(t)) - τ^* ≤ - t + t (1 - τ^*/T) and V^♯(t, x_V(t)) ≤τ^* (1 - t/T) for all t ∈ [0, t_V]. We notice that if t_V= T, then V^♯(t_V, x_V(t_V)) ≤ 0, i.e., x_V(t_V) ∈ K. Coming to the aforementioned alternative, we deduce that x_V(t_V) ∈ K. Moreover, this fact combined with Eq. (<ref>) gives us that 0 ≤ (τ^* - t_V) + t_V 2 (1 + C_f) ‖∇ V - ∇ V^♯‖_∞, hence
t_V (1 - 2 (1 + C_f) ‖∇ V - ∇ V^♯‖_∞) ≤τ^*.
By assumption, 1 - 2 (1 + C_f) ‖∇ V - ∇ V^♯‖_∞≥τ^*/T >0, we can thus divide Eq. (<ref>) by this quantity to obtain the result of the corollary: t_V ≤τ^*/( 1 - 2 (1 + C_f) ‖∇ V - ∇ V^♯‖_∞).
In the previous theorem and the corollary, we saw that the suboptimality, in terms of hitting time, of a closed-loop trajectory (x_V(·),u_V(·)) satisfying the differential inclusion (<ref>) decreases as the approximation error ‖∇ V - ∇ V^♯‖_∞ decreases. Furthermore, we see that the closed-loop trajectory comes close to optimality when the approximation error vanishes. We now study a sufficient condition under which the approximation ‖∇ V_d - ∇ V^♯‖_∞ can be made arbitrarily small, using a polynomial V_d(t,x) of sufficiently large degree d ∈ℕ.
§.§ A sufficient regularity condition for the existence of near-optimal controllers based on polynomials
In the case where the value function is twice differentiable, there exist polynomials V_d with such a vanishing approximation error ‖∇ V_d - ∇ V^♯‖_∞, and that are near optimal solutions in the hierarchy of semi-infinite programs (<ref>).
Under Assumptions <ref>-<ref>, if the value function V^♯ belongs to C^2(ℝ^n+1 | Y), and is a subsolution to the HJB equation, then there exist a sequence of polynomials (V_d(t,x))_d∈ℕ^*, with V_d(t,x) ∈ℝ_d[t,x_1,…, x_n], and two constants c_1, c_2 >0, such that for all d ∈ℕ^*,
* The polynomial V_d(t,x) is feasible, and c_1/d-optimal in the problems (<ref>) and (<ref>),
* The following inequality holds: ‖∇ V_d - ∇ V^♯‖_∞≤c_2/d.
Under these hypotheses, the polynomials V_d(t,x) are subsolutions to the HJB equation, and form a maximizing sequence of the problem (<ref>); we also notice that the hierarchy of semi-infinite programs (<ref>) converges in O(1/d) in terms of objective value. Moreover, according to Cor. <ref>, for any sequence of closed-loop trajectories (x_V_d(·), u_V_d(·)), the associated hitting times converge to the minimal time τ^*: this is a minimizing sequence of trajectories for the optimal time control problem (<ref>)-(<ref>).
By definition of C^2(ℝ^n+1 |Y), there exists a function Q ∈ C^2(ℝ^n+1) such that V^♯(y) = Q(y) and ∇ V^♯(y) = ∇ Q (y) for all y ∈ Y. In application of Lemma <ref>, as Q has a locally Lipschitz gradient since it is twice differentiable, there exists a constant A > 0, and a sequence of polynomials (w_d(t,x))_d ∈ℕ^* with w_d(t,x) ∈ℝ_d[t,x_1,…,x_n] and such that for all (t,x) ∈ Y, | w_d(t,x) - Q(t,x) | ≤A/d and ‖∇ w_d(t,x) - ∇ Q(t,x) ‖_2 ≤A/d. With α_d = A(1+C_f)/d, and β_d = A/d(1 + T + T C_f), we define the polynomial V_d(t,x) = w_d(t,x) + α_d t - β_d∈ℝ_d[t,x_1,…,x_n]. First, we notice that ‖∇ V_d - ∇ V^♯‖_∞≤‖∇ w_d - ∇ V^♯‖_∞ + α_d ≤A ( 2 + C_f)/d for all d≥ 1. This proves the second point of the theorem, having defined the constant c_2 = A(2+C_f), which is independent from d. We prove now the first point. For all d ≥ 1, and (t,x,u) ∈ [0, T] × X × U,
∂_t V_d(t,x) + 1 + ∇_x V_d(t,x)^⊤ f(t,x,u) = α_d + ∂_t V^♯(t,x) + 1 + ∇_x V^♯(t,x)^⊤ f(t,x,u)
+ (∇ w_d(t,x) - ∇ V^♯(t,x))^⊤[ 1; f(t,x,u) ]
≥α_d + (∇ w_d(t,x) - ∇ V^♯(t,x))^⊤[ 1; f(t,x,u) ],
as V^♯ is a subsolution to the HJB equation, hence satisfies Eq. (<ref>). Using the Cauchy-Schwartz inequality, we obtain ∂_t V_d(t,x) + 1 + ∇_x V_d(t,x)^⊤ f(t,x,u) ≥α_d - ‖∇ w_d(y) - ∇ V^♯(y) ‖_2 (1 + C_f) ≥α_d - A/d (1 + C_f) = 0. This proves that V_d satisfies Eq. (<ref>). It also satisfies Eq. (<ref>), because for any (t,x) ∈ [0, T] × K,
V_d(t, x) = w_d(t,x) + α_d t - β_d
≤ V^♯(t,x) + A/d + α_d t - β_d
≤ V^♯(t,x) + A/d + α_d T - β_d
≤ V^♯(t,x) = 0.
since A/d + α_d T - β_d = 0 by definition of β_d, and since x ∈ K. We deduce that V_d is feasible in (<ref>). Its objective value is V_d(0,x_0) ≥ w_d(0, x_0) - β_d ≥ V^♯(0, x_0) - A/d - β_d = V^♯(0, x_0) - c_1/d, where c_1 = A(2 + T + T C_f). As V^♯(0, x_0) = V^*(0,x_0) due to Assumption <ref>, V_d(0,x_0) ≥ V^*(0,x_0) - c_1/d≥𝗏𝖺𝗅(<ref>) - c_1/d≥𝗏𝖺𝗅(<ref>) - c_1/d, and we therefore conclude that V_d is c_1/d-optimal in (<ref>) and (<ref>).
Admittedly, the hypothesis in Th. <ref> that the value function V^♯ belongs to C^2(ℝ^n+1 | Y) is stringent. It is worth noting, however, that there exist systems that satisfy this hypothesis. Here is an example: ẋ(t) = u(t), x(t) ∈ X = [0,1]^2, ‖ u(t) ‖≤ 1 and K = {0 }× [0,1]. The value function associated with the horizon T = ∞ is V^♯(t,x) = x_1.
§ ILLUSTRATIVE EXAMPLES
We implemented and tested the proposed methodology on three Minimal Time Control Problems: a generalization of the Zermelo problem, a regatta problem and a generalization of the Brockett integrator. The numerical examples in this section were processed with our package [This package is available at <github.com/aoustry/MinTimeControl.jl>]. In this implementation of Algorithm <ref>, the master problem (<ref>) is solved with the simplex algorithm of the commercial solver <cit.>. At each iteration, we add a maximum of 100 points to the set 𝒴^k. The separation oracle (<ref>) is implemented with a random sampling scheme (with 500,000 samples at each iteration to detect violated constraints), and with the global optimization solver <cit.>, for the certification at the last iterate. This solver is used with a relative tolerance δ = 10^-4, and with a time limit of 10,000s. We also note that we compute a heuristic trajectory based on the particularities of each problem; this heuristic is not optimal, but provides an upper bound T on the minimum time, and therefore, a relevant time horizon [0,T]. The trajectory resulting from the heuristic is used to initialize the set 𝒴^0, in the sense that we enforce the HJB inequality for some points of this trajectory. During the iterations of the algorithm, we obtain functions V_θ^k(t,x) and we simulate the associated feedback trajectory defined by the differential inclusion (<ref>); if the obtained trajectory reaches the target set, it gives us an upper bound. Those trajectories are also used to enrich the set 𝒴^k. For all the numerical experiments, the regularization parameter is μ = 10^-5, and we use the tolerance ϵ = 10^-3.
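For illustration, the following minimal sketch shows how such a closed-loop trajectory following the differential inclusion (<ref>) can be simulated once an approximate value function is available: at each time step, a control minimizing ∇_x V(t,x)^⊤ f(t,x,u) is selected over a finite sample of U and the dynamics are integrated with a forward Euler step. The helper names (grad_x_V, sample_U, in_target) and the discretization choices are illustrative assumptions, not the exact implementation of the package.

import numpy as np

def simulate_feedback(grad_x_V, f, sample_U, x0, in_target, dt=1e-3, t_max=2.0):
    """Integrate x' = f(t, x, u) with u chosen greedily in argmin_u grad_x V . f(t, x, u)."""
    t, x = 0.0, np.asarray(x0, dtype=float)
    while t < t_max and not in_target(x):
        g = grad_x_V(t, x)                       # spatial gradient of the approximate value function
        candidates = sample_U(200)               # finite sample of the control set U
        u = min(candidates, key=lambda u: float(g @ f(t, x, u)))
        x = x + dt * np.asarray(f(t, x, u))      # forward Euler step
        t += dt
    return t, x                                  # estimated hitting time and final state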
Table <ref>, Table <ref> and Table <ref> present the numerical results for three different applications. The different columns of these tables are the following:
* “d ∈ℕ” is the degree of the polynomial basis used.
* “Estimated value of (<ref>)” stands for the value V_θ(0,x_0), where θ is the output of Algorithm <ref>, using the sampling oracle. This estimated value of (<ref>) is not an exact lower bound, since this sampling oracle does not provide the guarantee that θ is indeed feasible in (<ref>).
* “Certified lower bound for (<ref>)” stands for V_θ(0,x_0) - ϕ̂(θ)(1+T), where θ is as defined above and ϕ̂(θ) is a guaranteed upper bound on ϕ(θ), the feasibility error of θ in (<ref>), computed by the global optimization solver . As V_θ(t,x) + ϕ̂(θ)(t-1-T) is therefore feasible in (<ref>), the value V_θ(0,x_0) - ϕ̂(θ)(1+T) is a guaranteed lower bound on 𝗏𝖺𝗅(<ref>), and, therefore, on V^*(0,x_0).
* “Value feedback control (<ref>)” is the hitting time of the best feasible control generated along the iterations: either with the heuristic control at the first iteration, or the closed-loop controlled trajectory defined by (<ref>) associated with V = V_θ^k at iteration k of Algorithm <ref>.
* “Solution time (in s)” is the total computational time of the heuristic control, of the iterations of Algorithm <ref> including the sampling oracle, and of the closed-loop trajectory simulation. Therefore, this is the computational time needed to obtain the estimated value of (<ref>) (second column), and the best feasible control (fourth column).
* “Iterations number” is the total number of iterations of Algorithm <ref>.
* “Certification time (in s)” is the computational time of the global optimization solver , playing the role of δ-oracle, to compute the aforementioned bound ϕ̂(θ), and deduce the certified lower bound (third column).
§.§ A time-dependent Zermelo problem
We consider a time-dependent nonlinear system with n = 2 and m = 2, defined by
ẋ_1(t) = u_1(t) + 1/2 (1+ t) sin(π x_2 (t))
ẋ_2(t) = u_2(t),
with the state constraint set X = [-1,1] × [-1,0] and the control set U = B(0,1). This is the celebrated Zermelo problem, but with a river flow gaining in intensity over time. Fig. <ref> gives a representation of this flow. The initial condition is x(0) = (0,-1), and the target set is K = B(0,r), for r = 0.05. The travel time associated with the heuristic control, which consists in following a straight trajectory, is 1.261 (see Fig. <ref>). Table <ref> presents the numerical results for different values of d. We see that the value of the linear semi-infinite program (<ref>) quickly converges as d increases: starting from d = 6, the first 4 digits of the estimated value (second column) reach a plateau which corresponds to the value (1.100) of the best feasible trajectory we generate with our feedback control. As regards the certified lower bound, the best value (1.092) is obtained for d=5. For greater d, we see that increasing d deteriorates the tightness of the best certified bound. This is due to the fact that the separation problem becomes more difficult, with two consequences: (i) the sampling fails to detect unsatisfied constraints, so Algorithm <ref> stops with a solution that has a real infeasibility ϕ(θ) larger than ϵ (targeted tolerance), and (ii) the global optimization solver called afterwards does not manage to solve the separation problem to global optimality within the time limit (case d ∈{7,8}), giving only a large upper bound ϕ̂(θ) on the true infeasibility ϕ(θ). We notice that as soon as d ≥ 3, the feedback control defined by (<ref>) (see Sect. <ref>) yields a trajectory that is 13% faster than the heuristic trajectory. In summary, we obtain a certified optimization gap of 0.7% for this minimal time control problem.
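As an illustration, the following sketch encodes the time-dependent Zermelo dynamics above together with one plausible reading of the straight-trajectory heuristic: the control cancels the river flow in x_1 and spends the remaining control budget moving along x_2 towards the target. This is a simplified reconstruction for exposition; the authors' heuristic implementation may differ.

import numpy as np

def f(t, x, u):
    """Time-dependent Zermelo dynamics: the control plus a river flow acting on x_1."""
    drift = 0.5 * (1.0 + t) * np.sin(np.pi * x[1])
    return np.array([u[0] + drift, u[1]])

def straight_line_heuristic(x0=(0.0, -1.0), r=0.05, dt=1e-4, t_max=5.0):
    """Cancel the flow in x_1 and use the remaining control budget to move up along x_2."""
    t, x = 0.0, np.array(x0, dtype=float)
    while np.linalg.norm(x) > r and t < t_max:
        drift = 0.5 * (1.0 + t) * np.sin(np.pi * x[1])
        u1 = float(np.clip(-drift, -1.0, 1.0))   # compensate the flow
        u2 = np.sqrt(max(0.0, 1.0 - u1 ** 2))    # remaining budget pushes towards x_2 = 0
        x = x + dt * f(t, x, (u1, u2))
        t += dt
    return t

if __name__ == "__main__":
    print(f"straight-trajectory heuristic travel time: {straight_line_heuristic():.3f}")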
In the special case of this non-polynomial controlled system, a polynomial reformulation exists, at the price of increasing the dimension of the system to n = 4:
ẋ_1(t) = u_1(t) + 1/2 (1+ t) sin(π x_2 (t))
ẋ_2(t) = u_2(t)
ẋ_3(t) = -π x_4(t)u_2(t)
ẋ_4(t) = π x_3(t)u_2(t),
with the state constraint set X̂ = [-1,1] × [-1,0] × [-1,1] × [-1,0], the control set Û = B(0,1), the terminal set K̂ = B(0,r) × [-1,1] × [-1,0], and the initial condition x_0 = (0,-1,-1,0). The dynamics maintain the equalities x_3(t) = cos(π x_2(t)) and x_4(t) = sin(π x_2(t)). We are therefore able to compare our approach with the sum-of-squares (SOS) hierarchy, which consists of replacing SIP inequalities in (<ref>) with SOS positivity certificates. For each order k of the hierarchy, i.e., for a maximal degree d = 2k of the polynomial basis, this yields a semi-definite programming problem that we solve with the solver , used with the package . We obtain a polynomial V(t,x_1,x_2,x_3,x_4) that is solution of the corresponding relaxation. Based on this polynomial, we can also generate a feedback controlled trajectory solution of the differential inclusion (<ref>).
Table <ref> compares the performance of the SIP and the SOS approaches. We see that for low-degree polynomials (d ≤ 4), the semi-infinite hierarchy gives better lower bounds than the SOS hierarchy, although at a higher computational time in the case d=4. For d ∈{6,8 }, the lower bound of the SOS hierarchy is tight, while only the estimated lower-bound of the SIP hierarchy is tight: to obtain a certified lower bound, the SOS hierarchy performs better. For these values of the degree d, this optional certification step (calling to the global optimization solver) is costly in the proposed approach. For this first example, where the SOS hierarchy is applicable since a polynomial reformulation of the dynamical system exists, the SIP approach is slower than the SOS hierarchy.
§.§ A regatta toy-model
We consider a time-dependent nonlinear (and non-polynomial) system with n = 2 and m = 1, defined by
ẋ_1(t) = 𝗐𝗂𝗇𝖽𝗌𝗉𝖾𝖾𝖽(t) 𝗉𝗈𝗅𝖺𝗋[ u(t)] cos(u(t) + 𝗐𝗂𝗇𝖽𝖺𝗇𝗀𝗅𝖾(t))
ẋ_2(t) = 𝗐𝗂𝗇𝖽𝗌𝗉𝖾𝖾𝖽(t) 𝗉𝗈𝗅𝖺𝗋[ u(t)] sin(u(t)+𝗐𝗂𝗇𝖽𝖺𝗇𝗀𝗅𝖾(t)),
where 𝗐𝗂𝗇𝖽𝗌𝗉𝖾𝖾𝖽(t) = 2 + t, 𝗐𝗂𝗇𝖽𝖺𝗇𝗀𝗅𝖾(t) = π/2(1-0.4 t) and 𝗉𝗈𝗅𝖺𝗋[u] = |sin(2u/3)|. In this model, the control u(t) represents the relative angle between the heading of the boat and the (origin) direction of the wind. The evolution of the wind direction over time is depicted in Fig. <ref>. The polar curve of this toy model of a sailing boat is represented in Fig <ref>; this figure clearly shows that this model does not satisfy Assumption <ref>. Although the absence of duality gap between the control problem and the LP problem (<ref>) is, therefore, not guaranteed, we see in Table <ref> that if this gap exists in this case, it is low (below 1.6 %). The state constraint set is X = [-1,1]^2, and the control set U = [-π, π]. The initial condition is x(0) = (0,-1), and the target set is K = B(0,r), for r = 0.05. The travel time associated with the heuristic control, consisting in following a straight trajectory, is 1.278 (see Fig. <ref>).
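For illustration, the following sketch encodes the regatta dynamics, with the wind model and the polar coefficient |sin(2u/3)| as given above, together with a hypothetical myopic helper that picks, among sampled headings, the one maximizing instantaneous progress towards the origin. The greedy controller is an assumption for exposition only; it is neither the authors' heuristic nor an optimal strategy.

import numpy as np

def wind_speed(t):
    return 2.0 + t

def wind_angle(t):
    return 0.5 * np.pi * (1.0 - 0.4 * t)

def polar(u):
    return abs(np.sin(2.0 * u / 3.0))           # boat speed coefficient for relative heading u

def f(t, x, u):
    """Regatta dynamics: speed = wind_speed * polar(u), direction = u + wind_angle."""
    s = wind_speed(t) * polar(u)
    a = u + wind_angle(t)
    return np.array([s * np.cos(a), s * np.sin(a)])

def greedy_heading(t, x, n_headings=721):
    """Hypothetical myopic rule: heading maximizing instantaneous progress towards the origin."""
    x = np.asarray(x, dtype=float)
    direction = -x / np.linalg.norm(x)           # unit vector pointing at the target
    headings = np.linspace(-np.pi, np.pi, n_headings)
    return max(headings, key=lambda u: float(direction @ f(t, x, u)))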
We see that the highest estimated value of (<ref>), for d=7 and d=8, is 0.5% lower than the value (0.913) of the best feasible trajectory obtained with the feedback controller for d=6. This feedback controller yields a trajectory which is 29% faster than the heuristic trajectory. As regards the certified lower bound, d = 4 yields the best result (0.896), at a price of a running time of 498s for the exact oracle (). For the same reasons as in the previous application, a larger d does not necessarily mean a better certified lower bound obtained within the time limit. In summary, we obtain a certified optimization gap of 1.6% for this minimal time control problem.
§.§ A generalized Brockett integrator
For n ∈ℕ^* and m = n-1 and given a continuous mapping qℝ^n →ℝ^m, we consider the following generalization of the Brockett integrator <cit.>,
ẋ_i(t) = u_i(t) ∀ i ∈{1, …, n-1}
ẋ_n(t) = q(x(t))^⊤ u(t).
In particular, we study this system for n = 6, and q(x) = ( 2/(2+x_4),-x_1,-cos(x_1 x_3),exp(x_2),x_1 x_2 x_6). The state constraint set is X = [-1,1]^5, and the control set is U = B(0,1). The initial condition is x_0 = (1/2) 𝟙, the vector with all coordinates equal to 1/2, and the target set is K = B(0,r), for r = 0.05. The travel time associated with the heuristic control is 1.377.
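The following sketch encodes this generalized Brockett integrator for n = 6 with the mapping q above, reading the initial condition as x_0 = (1/2) 𝟙; the Euler stepping helper is an illustrative assumption rather than the authors' code.

import numpy as np

def q(x):
    """The mapping q used in the generalized integrator (indices shifted to 0-based)."""
    return np.array([
        2.0 / (2.0 + x[3]),
        -x[0],
        -np.cos(x[0] * x[2]),
        np.exp(x[1]),
        x[0] * x[1] * x[5],
    ])

def f(x, u):
    """Generalized Brockett integrator: x' = (u, q(x)^T u) with x in R^6, u in R^5."""
    u = np.asarray(u, dtype=float)
    return np.concatenate([u, [float(q(x) @ u)]])

def euler_step(x, u, dt=1e-3):
    return x + dt * f(x, u)

# One step from the initial condition x_0 = (1/2) * ones(6) with a unit-norm control.
x0 = np.full(6, 0.5)
u = np.ones(5) / np.sqrt(5.0)
print(euler_step(x0, u))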
Since this system has a larger dimension than the other two examples, we see that the computation times are longer for the same degree d. Already for d=2, we obtain an estimated value of (<ref>) that is within 0.1% of the value of the feedback control (1.070). This feedback control yields an improvement of 22% over the heuristic trajectory. Note that the estimated values of (<ref>) computed by Algorithm <ref> with the (inexact) sampling oracle are slightly larger than the value of the best trajectory we computed: thus, these estimates are not valid lower bounds, but only estimates of the value of the minimum time control problem. Regarding the certification of lower bounds, the global optimization solver fails to produce tight upper and lower bounds on ϕ(θ), the infeasibility of the solution θ returned by Algorithm <ref>. Therefore, the resulting certified lower bounds are not tight either. In summary, we obtain a certified optimization gap of 29% for this minimal time control problem.
§ DISCUSSION
We apply the dual approach in minimal time control, which consists in searching for maximal subsolutions of the HJB equation, to generic nonlinear, even non-polynomial, controlled systems. The basis functions used to generate these subsolutions are polynomials, which are subject to semi-infinite constraints. We prove the theoretical convergence of the resulting hierarchy of semi-infinite linear programs, and our numerical tests on three different systems show good convergence properties in practice. These results show that the use of a random sampling oracle allows a good approximation of the value of the control problem. For small systems, it is even possible to obtain tight and certified lower bounds, based on a global optimization solver. Finally, the numerical experiments also show that the computed subsolutions of the HJB equation help to recover near-optimal controls in a closed-loop form. As illustrated in these numerical experiments, the advantage of our approach based on semi-infinite programming, compared to the sum-of-squares approach, is the ability to handle non-polynomial systems. In the numerical example where a polynomial reformulation of the system was possible, the sum-of-squares approach was, however, faster.
A promising avenue for continuing this work is to investigate the use of other bases of functions to search for an approximate value function, resulting in other semi-infinite programming hierarchies with convergence guarantees. In particular, it would be relevant to use non-differentiable functions in the basis to improve approximation capabilities for non-differentiable value functions. Another avenue of research is to extend the approach and theoretical results to a generic optimal control problem.
§ ACKNOWLEDGMENTS
The authors would like to thank Maxime Dupuy, Leo Liberti and Claudia D'Ambrosio for fruitful discussions and advice.
plain
§ TECHNICAL LEMMATA
We consider a compact set Z ⊂ℝ^p, and the family of compact sets Z_δ = { z ∈ℝ^p d(z,Z) ≤δ} for any δ≥ 0 and a continuous function ψ∈ C(ℝ^p). Then, the function Ψ(δ) = min_z ∈ Z_δψ(z) is continuous at 0.
First of all, we notice that the function δ↦min_z ∈ Z_δψ(z) is well-defined, since ψ is continuous and Z_δ is compact. As Z_δ_1⊂ Z_δ_2 for any δ_1 ≤δ_2, the function Ψ is non-increasing, which proves that the following limit exists:
lim_δ→ 0^+ Ψ(δ) = Ψ(0^+) ≤Ψ(0).
We take a positive sequence (δ_k) ∈ℝ_++^ℕ such that δ_k → 0. Hence, Ψ(δ_k) →Ψ(0^+) by definition of the right limit. For any k ∈ℕ, we define z_k ∈ Z_δ_k such that ψ(z_k) = Ψ(δ_k). The sequence (δ_k) being bounded, we can introduce an upper bound δ. Hence, any element of the sequence (z_k) belongs to the compact set Z_δ, and, up to the extraction of a subsequence, (z_k) converges to a point z such that ψ(z) = Ψ(0^+), by continuity of ψ and uniqueness of the limit. As d(z_k,Z), the distance between z_k and the compact set Z, is bounded above by δ_k and is non-negative, it converges to 0. By continuity of the distance, we know that d(z,Z) = 0 and, thus, Ψ(0^+) = ψ(z) ≥Ψ(0). Together with Eq. (<ref>), this yields Ψ(0^+) = Ψ(0).
Let Q ∈ C^1(ℝ^p) be a continuously differentiable function, with a locally Lipschitz gradient. Let Z ⊂ℝ^p be a compact set. Then, there exists a constant A > 0, and a sequence of polynomials (w_d(x))_d ∈ℕ^* such that for all d ∈ℕ^*, w_d ∈ℝ_d[x_1,…,x_p] and
sup_x ∈ Z | w_d (x) - Q (x) | ≤A/d
sup_x ∈ Z‖∇ w_d (x) - ∇ Q (x) ‖≤A/d.
We underline that the constant A implicitly depends on p, Q and Z, but not on the polynomial w_d(x), nor on its degree d.
We introduce a constant R > 0 such that Z ⊂ B(0,R), and the function = ∗1_B(0,R+1), where is the mollifier introduced in the proof of Th. <ref>. We notice that ∈ C^∞(ℝ^p) is supported on Z̃ = B(0, R +2) and constant equal to 1 over B(0,R). We define Q̃(x) = Q(x) (x), and we notice that (i) Q̃ is supported on the compact set Z̃, which contains Z, (ii) for all x ∈ Z, Q̃(x) = Q(x) and ∇Q̃(x) = ∇ Q(x). Applying <cit.> to the function Q̃, that has a compact support, we know that there exists a constant C such that for any d ≥ 1, there exists a polynomial w_d(x) of degree at most d such that
sup_x ∈ Z |w_d(x) - Q̃(x) | ≤C/dκ(1/d) ≤ C κ(1/d),
sup_x ∈ Z |∂_i(w_d - Q̃)(x) | ≤ C κ(1/d)
where κ(δ) = sup_1 ≤ i ≤ p sup_(x,y) ∈ℝ^p ×ℝ^p, |x-y|≤δ |∂_i Q̃(x) - ∂_i Q̃(y)|. We define Z̃ = { x ∈ℝ^p d(x,Z) ≤ 1 }. Since ∂Q̃ is uniformly null outside Z̃, and assuming that δ≤ 1, we notice that
κ(δ) = sup_1 ≤ i ≤ p sup_(x,y) ∈Z̃×ℝ^p, |x-y|≤δ |∂_i Q̃(x) - ∂_i Q̃(y)| = sup_1 ≤ i ≤ p sup_(x,y) ∈Z̃×Z̃, |x-y|≤δ |∂_i Q̃(x) - ∂_i Q̃(y)|.
We note that ∂_i Q̃(x) =(x) ∂_i Q (x) + Q(x) ∂_i (x), and therefore, |∂_i Q̃(x) - ∂_i Q̃(y)| = | (x) (∂_i Q (x) - ∂_i Q (y)) + ∂_i Q (y)((x) - (y)) + Q(x) (∂_i (x) - ∂_i (y)) + ∂_i (y)(Q(x) - Q(y))|. We use, then, the triangle inequality and the facts that (i) is C^∞, therefore bounded, Lipschitz continuous, and with a Lipschitz-continuous gradient over Z̃ and (ii) Q is continuously differentiable, therefore bounded, and Lipschitz continuous over Z̃; by assumption it has a Lipschitz continuous gradient over the compact set Z̃. We deduce that ∂_i Q̃ is Lipschitz continuous over Z̃: there exists L_i>0 such that sup_(x,y) ∈Z̃×Z̃, |x-y|≤δ |∂_i Q̃(x) - ∂_i Q̃(y)| ≤ L_i δ, for all δ∈ [0, 1]. Defining L = max_i L_i, we deduce κ(δ) ≤ L δ. Then Eq. (<ref>) reads sup_x ∈ Z |w_d(x) - Q̃(x) | ≤CL/d, and Eq. (<ref>) reads sup_x ∈ Z |∂_i (w_d - Q̃)(x) | ≤CL/d for all i ∈{1, …, p }. We also deduce that sup_x ∈ Z‖∇ w_d (x) - ∇Q̃ (x) ‖≤C L p/d. Defining A = C L p, and noticing that
for all x ∈ Z, Q̃(x) = Q(x) and ∇Q̃(x) = ∇ Q(x), one obtains the claimed statement.
Under Assumption <ref> and Assumption <ref>, we consider an admissible trajectory (x(·),u(·)) over [0, t_1] of the minimal time control problem (<ref>)-(<ref>) starting from (0,x_0). Then, for almost all t ∈ [0, t_1], f(t,x(t),u(t)) ∈ T_X(x(t)).
We reduce to a time-invariant controlled system: we define, for any y = (t,x) ∈ℝ^n+1 and u ∈ℝ^m, f̃(y,u) = [ 1; f(y,u) ], and the constant set-valued map Ũ(y) = U. The control system (f̃,Ũ) is a Marchaud control system <cit.>, since: (i) the graph of Ũ is closed (ii) f̃ is continuous (iii) the velocity subsets {f̃(y,u) u ∈Ũ(y) } are convex due to Assumption <ref>, and (iv) the function f has a linear growth since it is Lipschitz continuous, and the set-valued map are bounded, thus also has a linear growth. We define the set 𝒞 = ℝ_+ × X and notice that the control u(·) regulates a trajectory y(t) = [ t; x(t) ] that remains in 𝒞, therefore according to <cit.>, for all most all t ∈ [0, t_1], u(t) ∈{ u ∈Ũ(y(t)) f̃(y(t),u(t)) ∈ T_𝒞(y(t)) }. We notice that T_𝒞(y(t)) ⊂ℝ× T_X(x(t)), implying that f(y(t),u(t)) ∈ T_X(x(t)).
For any locally Lipschitz continuous function F ℝ^p →ℝ, and for any Lipschitz continuous curve y [0, T] →ℝ^p, the function t ↦ F(y(t)) is differentiable a.e. and satisfies
min_g ∈∂^c F(y(t)) g^⊤ẏ(t) ≤d/dt(F(y(t))) ≤max_g ∈∂^c F(y(t)) g^⊤ẏ(t),
for almost all t ∈ [0, T].
The particular functions F for which these three quantities are equal are called path-differentiable in <cit.>.
First, we notice that the functions t ↦ y(t) and t ↦ F(y(t)) are Lipschitz continuous, therefore differentiable a.e. on [0, T] due to the Rademacher theorem <cit.>. Hence, for almost all t ∈ [0, T], both y(t) and F(y(t)) are differentiable at t. We consider such a t, and we show that (<ref>) holds for this particular t, which we prove the Lemma. Since y is differentiable at t, r(h) = y(t+h) - y(t) - h ẏ(t) is in o_h→ 0(h) .
Since s ↦ F(y(s)) is differentiable at t, the following holds
d/dt(F(y(t))) = lim_h → 0, h >0 (F(y(t+h)) - F(y(t)))/h
= lim_h → 0, h >0 (F(y(t) + h ẏ(t) + r(h)) - F(y(t)))/h
Since r(h) = o_h→ 0(h) and F is locally Lipschitz, we know that lim_h → 0, h >0 (F(y(t)) - F(y(t) + r(h)))/h = 0. Summing this with Eq. (<ref>), we deduce that
d/dt(F(y(t))) = lim_h → 0, h >0 (F(y(t) + r(h) + h ẏ(t)) - F(y(t) + r(h)))/h
≤ lim sup_y' → y(t), h → 0, h> 0 (F(y' + h ẏ(t)) - F(y'))/h = F^∘(y(t), ẏ(t)),
where F^∘(y;v) = lim sup_y' → y, h → 0, h> 0 (F(y' + h v) - F(y'))/h is the Clarke directional derivative of F at y ∈ℝ^p in the direction v ∈ℝ^p. The inequality follows from the fact that y(t) + r(h) → y(t).
d/dt(F(y(t))) ≤max_g ∈∂^c F(y(t)) g^⊤ẏ(t).
The reasoning that proved Eq. (<ref>) is also applicable to -F, that is also locally Lipschitz, and such that s ↦ (-F)(s) is differentiable at t. Therefore,
d/dt(-F(y(t))) ≤max_g ∈∂^c (-F)(y(t)) g^⊤ẏ(t).
As ∂^c (-F)(y(t)) = - ∂^c F(y(t)) by property of the Clarke subdifferential, we deduce that
-d/dt(F(y(t))) ≤max_g ∈∂^c F(y(t)) - g^⊤ẏ(t) = - min_g ∈∂^c F(y(t)) g^⊤ẏ(t),
and therefore d/dt(F(y(t))) ≥min_g ∈∂^c F(y(t)) g^⊤ẏ(t).
|
http://arxiv.org/abs/2307.02850v1
|
20230706083238
|
Resist the Hype! Practical Recommendations to Cope With Résumé-Driven Development
|
[
"Jonas Fritzsch",
"Marvin Wyrich",
"Justus Bogner",
"Stefan Wagner"
] |
cs.SE
|
[
"cs.SE",
"cs.CY"
] |
University of Stuttgart, Germany
Technology trends play an important role in the hiring process for software and IT professionals. In a recent study of 591 software professionals in both hiring (130) and technical (558) roles, we found empirical support for a tendency to overemphasize technology trends in résumés and the application process. 60% of the hiring professionals agreed that such trends would influence their job advertisements. Among the software professionals, 82% believed that using trending technologies in their daily work would make them more attractive for potential future employers. This phenomenon has previously been reported anecdotally and somewhat humorously under the label Résumé-Driven Development (RDD). Our article seeks to initiate a more serious debate about the consequences of RDD on software development practice. We explain how the phenomenon may constitute a harmful self-sustaining dynamic, and provide practical recommendations for both the hiring and applicant perspectives to change the current situation for the better.
Resist the Hype! Practical Recommendations to Cope With Résumé-Driven Development
Jonas Fritzsch, Marvin Wyrich, Justus Bogner, Stefan Wagner
August 1, 2023
=================================================================================
The future of software development would have to cope with serious challenges if we adhered to the satirical manifesto of Résumé-Driven Development (RDD) <cit.>:
Specific technologies over working solutions, hiring buzzwords over proven track records, creative job titles over technical experience, and reacting to trends over more pragmatic options.
While this is obviously a humorous play on the Agile Manifesto,[<https://agilemanifesto.org>] it contains at least a grain of truth.
In times of social networks, communities, job portals, and especially career platforms, a software developer's public profile, which reflects their professional résumé, has become more of a calling card than ever.
An up-to-date profile on LinkedIn[<https://www.linkedin.com>] covering one's professional career, degrees, certificates obtained, as well as knowledge and skills confirmed by colleagues (endorsements), is nowadays the rule rather than the exception.
Thanks to sophisticated search capabilities, those who present themselves appropriately and comprehensively on such platforms will be found first — a
treasure trove for headhunters and companies.
What at first glance looks like an extremely useful tool for applicants and hiring professionals has its downsides as well.
Apart from general concerns about digital abstinence or data protection, the great importance of profiles and résumés being constantly available to
everyone also leads to an increasing urge to perfect one's own appearance.
A recent survey among 65,000 software developers <cit.> showed that knowledge and skills regarding various technologies play an important role in their application process.
Since hiring professionals spend on average just 7 seconds looking at an applicant's résumé <cit.>, it may be tempting to try to impress their critical eyes with the breadth and trendiness of the technologies listed, often referred to as buzzwords by more cynical contemporaries.
§ THE PHENOMENON OF RÉSUMÉ-DRIVEN DEVELOPMENT
It appears that one possible strategy for building an impressive personal profile in the current field of activity is to work with a wide selection of recent and popular technologies.
Moreover, future employers and jobs may specifically be chosen according to whether they contribute to that goal.
The essence of a phenomenon termed Résumé-Driven Development lies in a focus on current technology trends (some of which we summarized visually in Fig. <ref>) that fill supposed gaps in the applicant's profile, thereby extending it and making it appear more impressive. Focusing on CV attractiveness then inevitably supersedes or replaces project-specific requirements, which should actually be the primary driver to select technologies.
Following RDD, a frontend developer would prioritize, e.g., Angular or React in the current project if they lack practice with one of these frameworks. If a developer's CV features too little microservices experience, they will perhaps implement a simple web app using Spring Boot and deploy it as 15 containers in a Kubernetes cluster. The preference is for current trends or hypes that look exciting in the résumé. However, such technologies may, first, not have been understood in depth and, second, often disappear from the market after a short time.
Here, the regularly published Gartner Hype Cycle <cit.>, complemented by ThoughtWorks' quarterly Technology Radar <cit.>, is a revealing indicator of such technology movements.
Developers in their role as applicants represent one side of RDD. On the other side, we find companies represented by their human resources (HR) departments.
While developers are focused on their own profiles, the company aims to create the best possible product using the right tools. These diverging interests can easily lead to conflicts in the choice of technologies for a specific project. The expertise they require may often involve technologies that developers are not particularly keen to work with. Companies find themselves in a quandary here.
On the side of companies and hiring professionals, RDD therefore implies that they benefit from using, and specifically advertising, technologies in their job postings that are popular among developers.
That way, a larger number of applicants can be addressed, which may yield a better-suited candidate in the end. Applicants, however, have no way of telling that such job advertisements may not reflect a company's actual technological needs, and hence feel encouraged to optimize their profiles even further with a wider range of current technologies.
It is very likely that such behavior will eventually have negative side effects. First, it can affect applicants whose expectations are not met.
Second, the long-term consequences for companies can be even more serious if technological heterogeneity and poor maintainability of created software result in high costs.
The legitimate question arises if there is more to RDD than anecdotal evidence, and whether these outlined interconnections actually exist in industrial practice.
We explored this question in a study published in 2021, which is briefly summarized in the following. Afterwards, we discuss potential consequences and provide practical recommendations for both perspectives.
§ DOES RDD REALLY EXIST? THE NEED FOR EVIDENCE
Since there was no prior scientific research on this subject, we lacked a clear definition of the term Résumé-Driven Development.
It can be sporadically found in blogs and discussion forums, where it often leads to controversial debates or even polemics <cit.>.
Our wide-ranging online survey, initiated in 2020, invited both software and IT professionals as well as hiring professionals to share their experiences with us. Developers in their various roles, including students from computer science degree programs, were surveyed as the applicant group. The hiring group consisted of employees from HR departments, team managers, and specialized headhunters in the IT industry.
The survey aimed to explain the phenomenon with scientific methods and determine its influencing factors.
With the support of a prominent German IT magazine, we collected a total of 591 responses, with the majority of respondents located in Germany. Divided into the two perspectives of hiring and applicant, the answers resulted in a ratio of 130 to 558 (answering from both perspectives was possible). Demographic data showed a realistic distribution in terms of professional experience and company size.
The dominant job role was software engineer, followed by manager, student, and architect.
§ A SURVEY: HIRING AND APPLICANTS
The 130 hiring participants were asked about their preferences regarding applicants' knowledge and skills, specifically the technology-related orientation of breadth vs. depth. Around two thirds of the respondents considered both characteristics important. However, given a choice, 22% of the participants would prefer a candidate with in-depth knowledge of specific technologies, while 42% would find a candidate with broad knowledge of a variety of technologies more attractive (see Fig. <ref>). This revealed a slight tendency towards a generalist profile in candidates.
When asked about the technology-related characteristics established vs. latest/trending, a clear preference for knowledge of established technologies (39%) over knowledge of current technology trends (20%) became apparent. 41% could not, or did not want to, make a general statement on that (see Fig. <ref>).
We conclude that broad knowledge in established technologies tends to more likely meet the needs of hiring professionals and thus companies.
The expectations towards, and the assessment of, the other side constitute an important aspect in the interaction of both groups (see Fig. <ref>). 71% of the hiring participants agreed that software developers would generally enjoy working with the latest/trending technologies. In addition, about half of the hiring respondents assumed that applicants would even be discouraged by the prospect of working with established (legacy) technologies. Interestingly, the majority (59%) of the hiring respondents also agreed that technology trends have an impact on the contents of their own job advertisements. When asked directly, a large fraction of 46% admitted that the technologies they advertise are influenced by the expectations of potential applicants.
The 558 applicant participants were asked, first, about the role that technology trends play for them in general and, second, about the role they play when creating their profile or CV (see Fig. <ref>). A large majority of 73% stated that they enjoy using the latest and trending technologies in their daily work. This matches the perception of the hiring side, with an almost equal percentage.
In addition to this intrinsic motivation for using trending technologies, 82% of the applicants were convinced that such knowledge and skills would make them more attractive for potential employers. Moreover, 63% confirmed that an even higher variety of technologies would further increase their own attractiveness. However, only 42% of the respondents believed that they would become better software developers by using such technologies.
The applicant participants furthermore reported predominantly positive experiences with using trending technologies.
Finally, one fifth of the respondents admitted that they have already used hyped technologies, although it was not the most appropriate choice for the concrete project or application.
These results suggest the existence of the RDD phenomenon on both sides. Technology trends do not always prove beneficial in practice, but are considered significantly more important when attracting applicants (hiring perspective) and likewise increasing one's own attractiveness (applicant perspective) in the hiring process.
The exploratory study culminated in the development of a theoretical construct that covers characteristics of both interacting groups, as well as strengthening predictors for its existence (see Fig. <ref>).
The hiring perspective is characterized by two facets: the degree to which 1) technology trends and 2) the expectations of applicants influence their job offerings. The applicant perspective, in turn, results from 1) the degree to which applicants are convinced that knowledge of trending technologies makes them more attractive to companies and 2) the importance of trends/hypes in their choice of technology. The RDD construct shaped in this way is reinforced by the strengthening predictors on the right side of the illustration in Fig. <ref>.
Based on these findings, our empirical study proposes the following definition of RDD <cit.>:
Résumé-Driven Development (RDD) is an interaction between human resource and software professionals in the software development recruiting process. It is characterized by overemphasizing numerous trending or hyped technologies in both job advertisements and CVs, although experience with these technologies is actually perceived as less valuable on both sides. RDD has the potential to develop a self-sustaining dynamic.
The definition and construct of Résumé-Driven Development reflect this dynamic, based on a substantial part of the sample in the survey study.
Further analysis found a positive correlation with the company size. However, the data did not show any significant correlation with the labor market situation, as collected from hiring participants via their perceived difficulty in finding appropriate candidates.
§ CONSEQUENCES FOR SOFTWARE DEVELOPMENT PRACTICE
The question arises what potential effects Résumé-Driven Development has on the practice of software development. Here, we can distinguish two main areas.
First, long-term effects on software quality are very likely. The majority of participants on the applicant side agreed that constantly emerging technology trends increase the diversity of languages, frameworks, and tools used in their company. This boosts the complexity <cit.> and hence affects the maintainability of the developed software, either through the management of dependencies and updates or the necessary knowledge transfer within the team. In addition, a lack of reliability is often attributed to immature technologies.
The consequences may not be immediately apparent, but manifest in the mid- to long-term. A recent article in IEEE Software on the subject of The future of software development points to already visible consequences of overwhelming complexity, combined with insufficient development competences, which lead to poor software quality and therefore, e.g., to the public's decreasing acceptance of […] self-driving cars <cit.>.
The second area of impact concerns the recruiting process itself. Here, RDD can arouse false expectations on both sides. Applicants generally dislike it if their future role in the company is not clearly defined.
Inadequately communicated hiring criteria were identified as one of the main deficiencies in the analysis of over 10,000 job applicants' reviews on the Glassdoor career portal <cit.>. A subsequent high turnover is costly for both sides, but especially for the company: the cost-intensive training was a false investment and, in the worst case, the newcomer leaves behind code in a cutting-edge technology that none of the other team members can maintain. Existing studies directly connect a high turnover with increased knowledge loss <cit.>, and reduced development productivity and software quality in general <cit.>.
Another point that was revealed by our study is the associated neglect of soft skills. These include social skills such as communication, self-motivation, and the ability to learn, as well as a basic understanding of the principles behind the various technologies.
§ RECOMMENDATIONS FOR COMPANIES
The consequences of RDD can manifest in various ways.
To avoid ending up with the above outlined threats and hence risking severe long-term damage, we encourage hiring professionals and companies to consider the following points:
* Restrict the diversity of languages, frameworks, and tools used to an adequate and manageable extent. Introducing a new technology should be well motivated and agreed on by lead developers or architects. A technology-related decision should always be justified by a concrete business need. Letting every developer select the tool of their choice for a given task is rarely the most sustainable approach.
* The used technologies should always be familiar to several team members. Just like for data and servers, it is highly recommended to have a backup for expertise too. It can happen unexpectedly fast that people with important knowledge are unavailable or change companies, leaving behind an expensive legacy. The truck factor <cit.>, i.e., the smallest number of people who need to get hit by a truck for the team to descend into knowledge and maintenance problems, describes this risk and suggests ways of mitigation.
Well-documented decisions, tools, and processes can also help to prevent such knowledge loss.
* Communicate your needs and expectations early and clearly. Enrich job advertisements only with technologies that are essential for the given position, and mark nice-to-have requirements as such. This avoids frustration for the team and applicant that could otherwise lead to a quick turnover. A recent study on the technical interview process identifies such inadequately communicated criteria as a main deficiency expressed by applicants <cit.>.
* Probe applicants critically about their motivations and intentions. In the hiring process, be careful when assessing applicant profiles. Résumé-driven developers may not aim to stay in a company for long, but rather watch out for new opportunities that allow them to enrich their profile.
* Put equal emphasis on applicants' soft skills, and do not overemphasize technology-related knowledge. While the latter can be beneficial to get up to speed quickly, it is eventually outweighed by more fundamental soft skills, e.g., when coping with difficult tasks or when collaborating in a cross-functional team. Hire for the long term.
* Organize company-internal hackathons or coding challenges to allow developers to acquire and demonstrate new skills <cit.>. These offer room for experimenting with new technologies and thereby increase motivation. They can also be an opportunity to try out new approaches in a sandbox manner before risking false investments.
§ RECOMMENDATIONS FOR APPLICANTS
Our survey confirmed that the vast majority of software professionals enjoy working with the latest / trending technologies. Indeed, it is a great way to keep yourself up-to-date, broaden your technical domain, and obtain a well-paid, fulfilling job. However, as you learn, do not get trapped in the RDD spiral.
Consider the following food for thought to resist the mindset that primarily the latest technology on your resume is making you attractive to potential employers.
* You can hardly be an expert in all technologies, and this is perfectly fine. Having a rich repertoire of skills to offer to employers is a great asset. But if we are honest, few of us are experts in all the technologies that we list in our resume. We all have personal preferences and expertise — may it be that you are a Python guru or C++ enthusiast, you may probably not be a Java expert on top. Therefore, clearly state your skill level and experience with every listed technology to avoid misunderstandings in the hiring process. Our findings further suggest that there is no need to emphasize a variety of trending technologies, since companies are predominantly in demand for skills in established technologies. Promote these skills adequately.
* Structure your resume by projects and professional experience rather than technology skills. By adding the used technologies for each project and phase, you may demonstrate that you can easily learn new technologies in a professional context. This way, i.e., when seeing used technologies in context, hiring professionals can assess and compare your skill level more easily.
* Read job advertisements with a little (healthy) skepticism. Not all required skills for a particular position are really required. They are often comprehensive to address a large number of applicants. Figure out the main focus of this position and which other skills may have been merely added for decoration purposes. The latter can indeed be such latest / trending technologies that are actually not really needed.
* Pay attention to and strengthen your soft skills, just as you do for technologies you aim to master. Software development is becoming less and less a discipline that builds on individualists. Agile development practices are characterized by a high level of interaction that pushes social skills to the foreground. In the long term, these are at least equally important and even essential to pave the way when aiming to change your technical focus towards a management career path.
* Create high-quality products by choosing the right technologies. Companies pay considerable salaries for good employees, but we need to understand that this comes from creating value for their customers. Customers expect products of high quality and efficiently working organizations when signing long-term contracts. Hence, the adequacy of chosen tools and methods should always be checked first in terms of this aspect. Keep in mind that it can be equally fulfilling (and less frustrating) for every developer on the team if the product is of high quality and thus easy to maintain.
* After all, keep learning and improving. There are great ways to stay ahead of the latest technological developments, even if they are not adequate choices for your current work project. If you are eager to learn technologies that are not currently used in your organization, there are often other ways to learn them than imposing them on your current project. Open-source projects, e.g., offer a great way to indulge in trying a new framework or language. It is also a great way to socialize and make new contacts with like-minded people. For those who prefer to develop their professional skills during working hours, encourage your company to think about a hackathon or offering further training opportunities.
§ CONCLUSION
There are always two sides to a coin.
Hence, it is possible to derive both negative and positive aspects from the RDD phenomenon.
The renowned computer scientist Donald Knuth may speak rather in favor of giving software developers the freedom to choose their preferred tools and technologies: computer programming is an art […] A programmer who subconsciously views himself as an artist will enjoy what he does and will do it better. <cit.>
The history of software development has always been characterized by permanent change, and the ever faster expanding technology landscape requires a constantly high level of willingness to learn.
For both hiring professionals and applicants, it is important to understand that such a thirst for knowledge is not demonstrated by a long list of trending technologies in one's resume. There are better ways of demonstrating profound knowledge and interest in the latest developments. Putting more emphasis on accomplished projects and phases enriched with the used technologies conveys a more professional approach.
Companies need to understand that this dynamic has potentially negative consequences for them as well.
Clearly communicated needs and expectations in job advertisements that demand only those technologies being essential for the given position are an easy step towards avoiding false expectations and frustration later on. Appropriately screening applicant profiles also regarding their soft skills can help to avoid an overemphasis of technology-related knowledge.
We derive from the analyzed data that there is more than anecdotal evidence to the phenomenon of Résumé-Driven Development, and believe that RDD has the potential for severe negative consequences.
Future research might draw a more precise picture by showing, e.g., how different industries and domains are affected by RDD. Long-term studies could yield more precise interconnections and insights, and hence result in specific guidance for practitioners.
For more details about the described survey, we refer the interested reader to our paper Résumé-Driven Development: A Definition and Empirical Characterization presented at the International Conference on Software Engineering (ICSE 2021) <cit.>.
Jonas Fritzsch is a researcher in the field of software engineering and architecture at the University of Stuttgart, Germany. He benefits from over fifteen years of experience in enterprise software development at his previous employer HPE (formerly HP). As a university lecturer, he teaches programming and algorithms in computer science courses. Contact him at <[email protected]>
Marvin Wyrich is a researcher at the University of Stuttgart, where he has been part of the empirical software engineering research group since 2018, and at Saarland University as part of the software engineering research group since 2023. His research interests include empirical and behavioral software engineering, with a focus on developing sound research methodologies. Contact him at <[email protected]>
Justus Bogner is a Postdoctoral Researcher at the University of Stuttgart, Germany.
Within the Empirical Software Engineering group, he leads the division Software Engineering for AI- and Microservice-based Systems.
He has worked as a software engineer in industry for over 9 years (HP, HPE, DXC Technology), building mostly Web- and service-based enterprise applications. Contact him at <[email protected]>
Stefan Wagner is a Full Professor of empirical software engineering and director at the Institute of Software Engineering at the University of Stuttgart. His research interests are human aspects, software quality, automotive software, AI-based systems, and empirical studies. He studied computer science in Augsburg and Edinburgh and received a doctoral degree from the Technical University of Munich. He is a senior member of IEEE and ACM. Contact him at <[email protected]>.
|
http://arxiv.org/abs/2307.05389v1
|
20230705075004
|
tsdownsample: high-performance time series downsampling for scalable visualization
|
["Jeroen Van Der Donckt", "Jonas Van Der Donckt", "Sofie Van Hoecke"]
|
eess.SP
|
["eess.SP", "cs.GR"]
|
IDLab, Ghent University - imec, Technologiepark Zwijnaarde 126, 9052 Zwijnaarde, Belgium
Interactive line chart visualizations greatly enhance the effective exploration of large time series.
Although downsampling has emerged as a well-established approach to enable efficient interactive visualization of large datasets, it is
not an inherent feature in most visualization tools. Furthermore, there is no library offering a convenient interface for high-performance implementations of prominent downsampling algorithms.
To address these shortcomings, we present tsdownsample, an open-source Python package specifically designed for CPU-based, in-memory time series downsampling. Our library focuses on performance and convenient integration, offering optimized implementations of leading downsampling algorithms. We achieve this optimization by leveraging low-level SIMD instructions and multithreading capabilities in Rust. In particular, SIMD instructions were employed to optimize the argmin and argmax operations. This SIMD optimization, along with some algorithmic tricks, proved crucial in enhancing the performance of various downsampling algorithms.
We evaluate the performance of tsdownsample and demonstrate its interoperability with an established visualization framework. Our performance benchmarks indicate that the algorithmic runtime of tsdownsample approximates the CPU's memory bandwidth.
This work marks a significant advancement in bringing high-performance time series downsampling to the Python ecosystem, enabling scalable visualization. The open-source code can be found at <https://github.com/predict-idlab/tsdownsample>
time series; visualization; downsampling; Rust; Python; SIMD
§ INTRODUCTION
Time series are ubiquitous in many domains, such as healthcare, finance, and manufacturing.
This complex data modality can be challenging to comprehend through summary statistics alone, making visualizations a crucial tool for gaining insights, with line charts proving particularly effective for most tasks <cit.>.
By following the “overview first, zoom and filter, then details on demand” paradigm <cit.>, interactive line chart visualizations allow users to quickly and easily understand the data and identify patterns and trends <cit.>.
Many real-world time series datasets are extremely large, encompassing millions or even billions of data points. As a result, there is a pressing need for scalable visualization techniques that are capable of effectively handling such datasets <cit.>. One approach to realize scalable visualization is through utilizing data aggregation techniques such as downsampling, which reduces the number of data points in a time series while preserving its overall shape <cit.>. Downsampling enables faster rendering and more responsive interactions, allowing users to explore large datasets more effectively <cit.>.
Downsampling algorithms find wide adoption in the time series database domain, with Uber integrating a downsampling function in their M3 metrics platform <cit.>, and TimeScaleDB offering downsampling as a server-side hyperfunction <cit.>.
However, when dealing with very large datasets (billions of data points), the downsampling process itself can become a bottleneck <cit.>. This is especially the case in the context of interactive visualizations, which require fast downsampling to minimize latency when interacting with the graph, such as zooming and panning <cit.>. Moreover, the authors observe that, at the time of writing, there is no Python library offering high-performance implementations of multiple time series downsampling algorithms.
Recognizing this challenge, we introduce tsdownsample, an open-source Python toolkit designed for in-memory, CPU-based time series downsampling, focusing on performance and integrability.
tsdownsample provides optimized CPU implementations of the most prominent downsampling algorithms, i.e., EveryNth, MinMax, M4 <cit.>, LTTB (Largest-Triangle-Three-Buckets) <cit.>, and MinMaxLTTB <cit.>.
The algorithms are implemented in Rust, a system programming language known for its performance and memory safety. The Rust code leverages SIMD (Single Instruction Multiple Data) instructions together with some algorithmic tricks and (optionally) multithreading to achieve exceptional performance and scalability. A core component of tsdownsample is our ArgMinMax Rust library, which provides SIMD-accelerated argmin and argmax functionality.
Optimizing these operations for various CPU architectures proved to be crucial, as they form the inner loop of most downsampling algorithms <cit.>.
tsdownsample is distributed as a Python toolkit on PyPI (https://pypi.org/project/tsdownsample/) by publishing the cross-compiled Python bindings for the underlying Rust code for a wide range of operating systems and CPU architectures.
In summary, this paper contributes tsdownsample, a high-performance library optimized for CPU that provides downsampling for scalable time series visualization. This library's integrability is demonstrated through its adoption as the downsampling solution in a time series visualization library, which has over 1.5 million installations at the time of writing.
The remainder of this paper is structured as follows. In Section <ref>, we present a description of the software. We then provide an illustrative example in Section <ref>, where we also highlight its integration in an existing toolkit. Finally, we evaluate the performance of tsdownsample in Section <ref>.
§ SOFTWARE DESCRIPTION
tsdownsample is a Python package that utilizes Rust to provide CPU-optimized implementations of downsampling algorithms for time series visualization. To facilitate seamless installation and usage, Python bindings for the underlying Rust code are cross-compiled for various operating systems and CPU architectures, which are distributed as a PyPI package. Installing tsdownsample is simple and can be done via pip by running pip install tsdownsample (https://pypi.org/project/tsdownsample/).
The following subsections detail the performance and user-interface aspects of tsdownsample.
We will first describe how optimizing argmin and argmax operations in Rust proved crucial for achieving high-performance downsampling. Then, we will discuss the convenient interface that this library provides for all implemented downsampling algorithms.
§.§ ArgMinMax
Given the significance of vertical extrema for ensuring the visual representativeness of time series downsampling <cit.>, it was imperative to optimize the argmin and argmax operations. These operations play a vital role in the inner loop
of the MinMax, M4 <cit.>, and MinMaxLTTB <cit.> algorithms. As such, we developed the ArgMinMax Rust library (also referred to as a crate), which provides a highly efficient and overflow-free implementation of the argmin and argmax operations. These operations return the indices of the minimum and maximum values of an array. Note that the argmin and argmax values are extracted simultaneously within a single pass over the data, as this is mainly a memory-bound task.
The ArgMinMax crate (https://crates.io/crates/argminmax) includes SIMD-optimized implementations of argmin and argmax for SSE, AVX(2), AVX512, and NEON, and includes runtime CPU feature detection to select the optimal (supported) SIMD implementation for the current CPU. SIMD instructions allow the CPU to perform the same operation on multiple data points simultaneously, providing a significant boost in performance for certain types of operations. The library is SIMD-optimized for a wide range of CPU architectures: x86, x86_64, arm(v7), and aarch64.
In addition, the ArgMinMax crate supports a wide range of data types (f16, f32, f64, i8, i16, i32, i64, u8, u16, u32, and u64). We further guarantee the library to be memory-efficient, as it operates on a memory view (i.e., a slice) of the data rather than copying it. The SIMD algorithm is also branchless, ensuring that the runtime is independent of the quality of the branch predictor, making the best-case runtime the same as the worst-case runtime.
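As a concrete illustration, a minimal usage sketch of the crate is given below. The trait-based interface (importing the ArgMinMax trait and calling argminmax() on a slice, which returns the pair of indices) follows the crate's documentation, but the exact names should be read as an illustrative assumption rather than a normative API reference.

use argminmax::ArgMinMax; // trait that adds `.argminmax()` to slices

fn main() {
    let data: Vec<f32> = (0..1_000_000).map(|i| (i as f32 * 0.001).sin()).collect();
    // A single pass over the data returns the indices of the minimum and maximum values.
    let (min_idx, max_idx) = data.argminmax();
    println!("argmin = {min_idx}, argmax = {max_idx}");
}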
§.§.§ argminmax SIMD algorithm
In code snippet <ref>, we present the inner loop of the SIMD argmin and argmax algorithm. This algorithm extracts both the argmin and argmax value in a single pass over the data. To do so, we utilize four accumulating SIMD vectors (also referred to as registers).
Two of these registers maintain the lowest and highest values encountered while iterating over the data in chunks of the SIMD lane size; the other two track the indices at which these extrema occur. At the end of the iteration, the value registers contain the maximum and minimum values at each lane position across all seen chunks.
As a final step, after iterating over all the chunks, the algorithm extracts the minimum and maximum values along with their respective indices from the SIMD vectors (i.e., the horizontal operations).
Listing: The core (inner loop) of the SIMD argminmax algorithm.
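A scalar Rust sketch that emulates the four-accumulator loop described above is shown below; the lane count and helper structure are illustrative only, and the actual implementation replaces the inner branches with branchless SIMD compare-and-blend instructions and defers index-overflow handling to an outer loop.

// Scalar emulation of the four-accumulator inner loop; LANE stands in for the SIMD lane count.
const LANE: usize = 8;

fn argminmax(data: &[f32]) -> (usize, usize) {
    assert!(!data.is_empty() && data.len() % LANE == 0);
    // Four accumulators: running min/max values and the indices at which they occurred.
    let mut low_vals = [f32::INFINITY; LANE];
    let mut high_vals = [f32::NEG_INFINITY; LANE];
    let mut low_idxs = [0usize; LANE];
    let mut high_idxs = [0usize; LANE];

    for (chunk_nr, chunk) in data.chunks_exact(LANE).enumerate() {
        for lane in 0..LANE {
            let idx = chunk_nr * LANE + lane;
            let v = chunk[lane];
            // The SIMD version performs these updates on all lanes at once (compare + blend).
            if v < low_vals[lane] { low_vals[lane] = v; low_idxs[lane] = idx; }
            if v > high_vals[lane] { high_vals[lane] = v; high_idxs[lane] = idx; }
        }
    }
    // Horizontal operations: reduce the per-lane extrema to the global argmin/argmax.
    let (mut min_idx, mut max_idx) = (low_idxs[0], high_idxs[0]);
    for lane in 1..LANE {
        if low_vals[lane] < data[min_idx] { min_idx = low_idxs[lane]; }
        if high_vals[lane] > data[max_idx] { max_idx = high_idxs[lane]; }
    }
    (min_idx, max_idx)
}

fn main() {
    let data: Vec<f32> = (0..64).map(|i| ((i * 37) % 64) as f32).collect();
    let (min_idx, max_idx) = argminmax(&data);
    println!("min at {min_idx}, max at {max_idx}");
}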
The pseudocode in snippet <ref> closely resembles the Rust code of the ArgMinMax package. It is worth noting that the SIMD instructions used there are generic function names that need to be associated with the corresponding CPU instructions of the various architectures. To achieve this in Rust, we utilized a trait that defines these generic SIMD instructions as functions, similar to C++ templates. For each supported CPU architecture and data type combination in the ArgMinMax package, we have implemented a concrete version of this trait[Since the lane size is also influenced by this combination, this trait includes the lane size parameter as well.].
When the length of the array exceeds the maximum value that the index vector's underlying data type can represent, an (index) overflow will occur. It is this overflow challenge that makes the argmin and argmax operations a much harder problem to SIMD-optimize compared to the min and max operations. As a result, compiling a scalar implementation to vectorized instructions is not trivial, or even impossible. As such, it was necessary to manually write the algorithm using SIMD instructions, rather than relying on the compiler for optimized compilation. Notably, the Polars library, an exceptionally fast DataFrame library and in-memory query engine that currently exceeds 0.7M monthly installations, adopted our ArgMinMax crate to provide more optimized argmin and argmax operations <cit.>.
For more details on how to implement an overflow-free solution (which requires an additional outer loop), please refer to the open-source code repository available at https://github.com/jvdd/argminmax.
§.§.§ Optimized implementation for f16 and uints
In contrast to other data types, most modern CPUs (x86) do not have hardware support for the float16 (f16) data type[With float16 we refer to IEEE 754-2008 standard binary16, also known as half floating point type <cit.>.]. As a result, programming languages typically support f16 either by upcasting to f32 or by using a software implementation. Both approaches come at the cost of considerable overhead.
Instead of applying one of these two approaches, we convert f16 to an ordinal mapping of i16 (which we refer to as i16ord). This allows us to efficiently support f16 data types in the ArgMinMax crate and in the tsdownsample package as a whole.
The mapping preserves the ordinality of the f16 data, as illustrated in Figure <ref>, allowing the use of fast built-in i16 (SIMD) instructions for comparison[Note that this mapping only makes sense when you are solely interested in comparing values.]. Moreover, the transformation is symmetric, meaning that we can transform the outcome back to f16 (by using the same mapping function) without needing a lookup table, as illustrated in Figure <ref>.
Furthermore, as the transformation only performs binary (bitwise) operations, the overhead is limited, thus making it convenient to implement using SIMD instructions.
The ordinal transformation is performed as follows[To apply this transformation to a f16 value, we first transmute the f16 value to i16.]:
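A minimal sketch of one self-inverse, order-preserving bitwise mapping consistent with the stated properties is shown below; the exact expression used by the crate may differ, so this should be read as an assumption rather than the reference formula.

// Hypothetical sketch of the f16 -> i16ord mapping: `bits` is the f16 value transmuted to i16.
// For negative values (sign bit set) the 15 magnitude bits are flipped; positives stay untouched.
fn f16_to_i16ord(bits: i16) -> i16 {
    bits ^ ((bits >> 15) & 0x7FFF)
}

fn main() {
    // IEEE 754 binary16 bit patterns: 1.0 = 0x3C00, -1.0 = 0xBC00, -2.0 = 0xC000.
    let (one, neg_one, neg_two) = (0x3C00u16 as i16, 0xBC00u16 as i16, 0xC000u16 as i16);
    // Ordinality is preserved: -2.0 < -1.0 < 1.0 also holds in i16ord space.
    assert!(f16_to_i16ord(neg_two) < f16_to_i16ord(neg_one));
    assert!(f16_to_i16ord(neg_one) < f16_to_i16ord(one));
    // The mapping is symmetric (self-inverse): applying it twice recovers the original bits.
    assert_eq!(f16_to_i16ord(f16_to_i16ord(neg_one)), neg_one);
}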
A final implementation remark is that uint data types (u8, u16, u32, u64) lack SSE, AVX(2), and AVX512 SIMD instructions. These data types are supported by performing a similar, albeit much simpler, ordinal mapping to the corresponding signed integer data type. This mapping leverages the two's complement representation of signed integers. In particular, the transformation first transmutes the uint as int and then XORs the value with the smallest (i.e., the largest negative) int value.
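A minimal sketch of this unsigned-to-signed ordinal mapping for u16 is given below (the same pattern applies to the other widths); the function name is illustrative.

// Transmute the unsigned value to its signed counterpart and flip the sign bit (XOR with i16::MIN).
fn u16_to_i16ord(x: u16) -> i16 {
    (x as i16) ^ i16::MIN
}

fn main() {
    // The mapping preserves the unsigned ordering across the signed wrap-around point.
    assert!(u16_to_i16ord(0) < u16_to_i16ord(1));
    assert!(u16_to_i16ord(32767) < u16_to_i16ord(32768));
    assert!(u16_to_i16ord(65534) < u16_to_i16ord(65535));
}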
§.§ tsdownsample
tsdownsample further builds upon the optimizations of the ArgMinMax crate. In particular, the MinMax, M4, and MinMaxLTTB algorithms directly rely on the argminmax algorithm in their inner loop (i.e., for each bin). Since these algorithms operate on local heuristics within each bin, they can be easily parallelized in Rust to leverage the processing power of modern multicore CPUs <cit.>. To achieve this, we implemented a multithreaded bin index generator using a search sorted approach, thereby enhancing cache hits through chunked execution. Remark that multithreading is not possible in Python due to the Global Interpreter Lock (GIL), which prevents multiple threads from executing Python bytecodes simultaneously <cit.>.
Unit testing is conducted to ensure the correctness of the supported downsampling algorithms. Specifically, we verify the consistency of the downsampled (i.e., selected) data points across various data types, downsamplers, and multithreading configurations. Additionally, we compare the Rust implementations to a reference Python implementation, which (although being considerably slower) serves as a benchmark for correctness. Furthermore, we ensure that passing an equally sampled index yields the same output as not specifying the index.
Noteworthy, to capture potential performance regressions when updating the code base, we added performance monitoring to the CI/CD workflow.
§.§ Downsampling interface
tsdownsample aims to provide a convenient interface for the supported downsampling algorithms. Users can interact with these algorithms through Python classes, which act as a thin wrapper around the underlying Rust bindings. These classes abstract the dispatching of data type specific function calls, making the interface easy to use. All classes implement the downsample method, which has the following signature:
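A sketch of such a signature, consistent with the description that follows, is given below; the base-class name and the parameter names x and y are illustrative assumptions rather than the package's exact definitions.

import numpy as np

class AbstractDownsampler:  # illustrative base-class name
    def downsample(self, *args, n_out: int, **kwargs) -> np.ndarray:
        """Downsample a series to (at most) n_out points.

        *args is either (y,) or (x, y): an optional index x and the mandatory values y.
        Additional options (e.g. parallel=True) are passed via **kwargs.
        Returns the indices of the selected data points as unsigned 64-bit integers.
        """
        raise NotImplementedError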
This signature first accepts two positional arguments that represent the input values[This signature design aligns with the convention used in <cit.>.]. The first positional argument is optional and represents the index of the time series values. If not provided, it is assumed that the time series values are equally sampled without any gaps. The second positional argument is mandatory and corresponds to the input time series values.
The n_out argument is a mandatory keyword argument that defines the number of output values[It is important to note that if there are gaps in the time series (index), fewer than n_out indices may be returned, as no data points can be selected for empty bins].
In addition to these arguments, optional keyword arguments can be passed via **kwargs. These additional arguments provide increased flexibility, including options such as the parallel argument, a boolean that enables multi-threading when set to True. By default, the parallel option is set to False.
Lastly, the downsample method returns a numpy array containing unsigned 64-bit integers, representing the indices of the downsampled (i.e., selected) values.
§ ILLUSTRATIVE EXAMPLE
Listing <ref> provides an illustration of how tsdownsample can be utilized.
The code snippet begins by importing the MinMaxLTTBDownsampler class from the tsdownsample package, along with the numpy library.
numpy is utilized to generate a random time series dataset consisting of 10 million points.
Next, an instance of the MinMaxLTTBDownsampler class is constructed, which is used to downsample the aforementioned time series data to 1000 points. This is accomplished by utilizing the downsample method, whose interface is detailed in Section <ref>. The resulting indices of the downsampled time series data are stored in a variable.
Subsequently, these selected indices can be used to retrieve a representative subset of the original time series data, facilitating efficient visualization.
Listing: Downsampling a random array with MinMaxLTTB.
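A sketch reconstructing the described example follows; the class and method names follow the package's documentation, but should be read as assumptions.

import numpy as np
from tsdownsample import MinMaxLTTBDownsampler

# Generate a random time series of 10 million data points.
y = np.random.randn(10_000_000)

# Downsample to 1,000 points; the result contains the indices of the selected data points.
s_ds = MinMaxLTTBDownsampler().downsample(y, n_out=1_000)

# Use the selected indices to retrieve a representative subset for efficient visualization.
downsampled_y = y[s_ds]
print(downsampled_y.shape)  # (1000,)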
§.§ Integration in plotly-resampler
tsdownsample has been integrated as the downsampling back end in the plotly-resampler visualization tool since version v0.9 <cit.>. At the time of writing, plotly-resampler has over 1.5 million installations.
One longstanding challenge with the plotly-resampler library was the need for users to compile downsampling C code locally during the installation process. This requirement often led to complications, as it required users to have the correct Python headers, ensure compatibility of the version with the utilized C API, and have an appropriate C compiler installed. However, by adopting tsdownsample, these issues have been effectively addressed. tsdownsample has precompiled binaries available for multiple platforms, eliminating the need for users to compile the underlying code themselves, while ensuring optimal performance. This integration has not only resolved these compilation-related obstacles but has also resulted in remarkable speed improvements, ranging from 3 to 30 times faster performance.
§ PERFORMANCE
We analyzed the performance of tsdownsample for a range of data types and algorithms, as presented in Table <ref>. To do so, we created an array of random values for the data types under consideration and measured the time required to downsample the respective array to 2,000 values (i.e., n_out=2,000) using Python's built-in timing module. A reproducible notebook containing the benchmark code can be found here: <https://github.com/predict-idlab/tsdownsample/blob/main/notebooks/benches.ipynb>.
The benchmarks were executed on a server with an Intel Xeon E5-2650 v2 (32) @ 3.40GHz CPU and SAMSUNG M393B1G73QH0-CMA DDR3 1600MT/s RAM, running on the Ubuntu 18.04.6 LTS x86_64 operating system. Other running processes were limited to a minimum.
Table <ref> displays the median time measurements for the five downsampling algorithms available in tsdownsample. These measurements are provided for all supported data types and varying numbers of data points.
Among the algorithms, EveryNth exhibits a constant execution time of approximately 0.02 ms, regardless of the length of the input data.
For the M4, MinMax, and MinMaxLTTB algorithms, we observe two main trends. Firstly, there is a linear or sublinear increase (approximately 10x or less) in runtime when transitioning from 10 million to 100 million and then to 1 billion data points. Secondly, these three algorithms have similar runtimes for the same data type, which is most noticeable in the 1 billion data point rows. Both these trends hold true for the sequential and parallel executions.
In the case of the LTTB algorithm, we again notice a linear scaling pattern as the data length increases. It is important to note that the runtime of LTTB is significantly slower, up to two orders of magnitude, compared to MinMaxLTTB, especially when dealing with larger datasets (e.g., 1 billion uint8 data points). This slower runtime can be attributed to LTTB requiring much more computationally expensive calculations <cit.>, which is largely mitigated in MinMaxLTTB <cit.>.
Figure <ref> provides additional insights to complement the findings of Table <ref>. It illustrates the relationship between downsampling time (y-axis) and the number of data points (x-axis) for the three algorithms that utilize the optimizations of ArgMinMax. Given that the y-axis is on a logarithmic scale, the logarithmic trend that we observe for all integer data types in every subplot confirms the earlier mentioned observation that the implementation scales linearly with the number of data points in the array.
Our primary finding is that the implementation exhibits faster performance for lower bitsize variants of the same data type. For instance, int32 demonstrates a roughly 2x speed improvement compared to int64 for the same number of data points, while int16 is 2x faster than int32. Remark that these 2x performance differences are even more pronounced (i.e., a clear 2x) for the parallel execution.
This discrepancy in performance can be attributed to the fact that reducing the bit-representation by 2x (e.g., int32 vs. int64) allows for a 2x increase in the number of values that fit in the CPU's SIMD registers. This utilization of SIMD registers is an essential part of the ArgMinMax code base and results in fewer read () instructions, which impacts performance since ArgMinMax is primarily bound by memory access.
Our second key finding emphasizes the benefits of implementing multithreading. On average, multithreading leads to an impressive 7x performance improvement on the benchmarking computer. Notably, when extrapolating the linear trend of the int64 data type, tsdownsample demonstrates the capability to downsample data at a rate of 45 GB/s (i.e., 8 GB / 0.177 s).
§ CONCLUSION
Time series visualization plays a crucial role in exploratory data analysis, particularly as datasets continue to grow in size. To enable scalable line-chart visualization, downsampling has emerged as a well-established technique. However, we have observed a need for a convenient and high-performance time series downsampling solution within the Python landscape, facilitating the integration of downsampling capabilities into widely used Python visualization packages.
To address this need, we introduce tsdownsample, a Python library that offers a convenient interface to leading precompiled downsampling algorithms, harnessing highly optimized underlying Rust code. A key aspect of achieving high performance in tsdownsample involved optimizing argmin and argmax operations using SIMD instructions and leveraging multithreading. The runtime feature set detection enables the selection of the most optimal implementation based on the CPU, allowing for the distribution of a single binary that can cater to multiple CPU feature sets within the same architecture.
Benchmark results confirm the critical role played by both SIMD optimizations and multithreading in achieving the impressive performance of tsdownsample. We firmly believe that tsdownsample represents a significant advancement in delivering high-performance time series downsampling capabilities to the Python ecosystem. This advancement is further illustrated by the adoption of tsdownsample in the plotly-resampler tool, solidifying its position within the Python community.
§ ACKNOWLEDGEMENTS
The authors thank Martijn Courteaux and Tom Windels for having fruitful discussions on writing efficient low-level code.
§ CURRENT CODE VERSION
§ CURRENT EXECUTABLE SOFTWARE VERSION
|
http://arxiv.org/abs/2307.01023v1
|
20230703135450
|
Neural Chronos ODE: Unveiling Temporal Patterns and Forecasting Future and Past Trends in Time Series Data
|
["C. Coelho", "M. Fernanda P. Costa", "L. L. Ferrás"]
|
cs.LG
|
["cs.LG", "I.5.1; G.1.7"]
|
Neural Chronos ODE: Unveiling Temporal Patterns and Forecasting Future and Past Trends in Time Series Data
Cecília Coelho [email protected]
M. Fernanda P. Costa [email protected]
Centre of Mathematics, University of Minho
Braga, 4710-057, Portugal
Luís L. Ferrás [email protected]
Centre of Mathematics, University of Minho
Braga, 4710-057, Portugal
Department of Mechanical Engineering (Section of Mathematics) - FEUP
University of Porto
Porto, 4200-465, Portugal
Received July 3, 2023, Accepted xx
This work introduces Neural Chronos Ordinary Differential Equations (Neural CODE), a deep neural network architecture that fits a continuous-time ODE dynamics for predicting the chronology of a system both forward and backward in time. To train the model, we solve the ODE as an initial value problem and a final value problem, similar to Neural ODEs. We also explore two approaches to combining Neural CODE with Recurrent Neural Networks by replacing Neural ODE with Neural CODE (CODE-RNN), and incorporating a bidirectional RNN for full information flow in both time directions (CODE-BiRNN), and variants with other update cells namely GRU and LSTM: CODE-GRU, CODE-BiGRU, CODE-LSTM, CODE-BiLSTM.
Experimental results demonstrate that Neural CODE outperforms Neural ODE in learning the dynamics of a spiral forward and backward in time, even with sparser data. We also compare the performance of CODE-RNN/-GRU/-LSTM and CODE-BiRNN/-BiGRU/-BiLSTM against ODE-RNN/-GRU/-LSTM on three real-life time series data tasks: imputation of missing data for lower and higher dimensional data, and forward and backward extrapolation with shorter and longer time horizons. Our findings show that the proposed architectures converge faster, with CODE-BiRNN/-BiGRU/-BiLSTM consistently outperforming the other architectures on all tasks.
Machine Learning; Neural ODEs; Forecasting Future; Unveiling Past; Neural Chronos ODE.
§ INTRODUCTION
Neural Ordinary Differential Equations (Neural ODEs) are a type of Neural Networks (NNs) that fit an ODE to model the continuous dynamics of time-series data. Unlike traditional NNs, ODEs are continuous-time functions, allowing predictions to be made at any point in time and handling data taken at arbitrary times <cit.>.
Time-series data is commonly used to describe phenomena that change over time and have a fixed order, with consecutive values conveying dependence. However, traditional NNs treat each input as independent, making it difficult to perceive the time relation between consecutive inputs <cit.>. Recurrent Neural Networks (RNNs) were introduced to handle this issue by adding a recurrence mechanism, allowing for the learning of dependencies between current and previous values and handling arbitrary length input and output sequences <cit.>.
However, in certain tasks such as natural language processing (NLP) and speech recognition, the current value depends on both previous and future values too, making it important to use future information when making predictions <cit.>.
Bidirectional Recurrent Neural Networks (BiRNNs) were introduced as an extension to traditional RNNs, allowing for processing of data sequences in both forward and backward directions. This enables combining information from past and future values to make predictions <cit.>.
Real-world time-series data may have missing values or be sampled irregularly, creating challenges for RNNs that are designed to recognise order but not time intervals. Preprocessing is often required to convert the data into regularly sampled intervals, which can lead to errors and loss of information due to the frequency of sampling <cit.>.
To address these challenges, ODE-RNNs were proposed. These are RNNs where state transitions are governed by a continuous-time model adjusted by a Neural ODE, eliminating the need for preprocessing by capturing the time dependence of sequential data <cit.>.
Neural ODEs solve Initial Value Problems (IVPs) by computing the solution curve forward in time from t_0 to t_f. However, in practice, predicting a system's behaviour requires leveraging data both forwards and backwards in time. This motivates the development of Neural Chronos[Greek word for time.] ODE (Neural CODE), an NN architecture that adjusts an ODE using information from both initial and final value problems. Neural CODE can predict the parameters of a previous condition of a system, unlike traditional Neural ODEs.
We compare Neural CODE to Neural ODE in predicting spiral dynamics in both time directions and with varying amounts of training data. Our results show that Neural CODE achieves better generalisation and robustness when predicting forward in time, thanks to the higher amount of information retrieved from the data. Neural CODE also improves backward predictive performance, as expected.
To leverage the advantages of Neural CODE, we refined the ODE-RNN model of <cit.> by replacing the Neural ODE by a Neural CODE into ODE-RNN (CODE-RNN). Two variants of the architecture were created by changing the update cell type to GRU and LSTM denoted by CODE-GRU and CODE-LSTM, respectively. However, Neural CODE requires both an initial and final condition to solve forward and backward, respectively. This is problematic for the CODE-RNN architecture, which has a single RNN cell that receives an intermediate hidden state h_i' from a Neural ODE and outputs a single hidden state h_i. We address this issue by adding a second RNN cell to the CODE-RNN architecture, given rise to a new architecture CODE-BiRNN. This architecture uses two separate RNN cells to process data forwards and backwards in time, allowing for the incorporation of both initial and final conditions. CODE-BiRNN is further refined by creating two variants, CODE-BiGRU and CODE-BiLSTM, which use GRU and LSTM update cells, respectively. Table <ref> shows the state transitions and the recurrence mechanism used in each of the continuous-times architectures studied and proposed in this work.
This paper is organised as follows. Section <ref> presents a brief review of essential concepts such as Neural ODE, RNN, ODE-RNN and BiRNN.
Section <ref> is devoted to the three novel deep neural network architectures. First, the Neural CODE architecture is presented along with the mathematical formulation of the initial and final value problems, followed by the proof of existence and uniqueness of solution.
Then, the two recurrent architectures CODE-RNN and CODE-BiRNN, are explained in detail. Followed by the presentation of their respective variants, CODE-GRU, CODE-LSTM, CODE-BiGRU, CODE-BiLSTM.
In Section <ref> we evaluate and compare the performance of the newly proposed NNs namely, Neural CODE, CODE-RNN, CODE-GRU, CODE-LSTM, CODE-BiRNN, CODE-BiGRU and CODE-BiLSTM with the baseline NNs Neural ODE, ODE-RNN, ODE-GRU and ODE-LSTM.
The paper ends with the conclusions and future work in Section <ref>.
§ BACKGROUND
This section aims to provide some background information on the developments discussed in Section 3. We will introduce a brief description of Neural ODE, RNN, ODE-RNN, and BiRNN.
Let X=(x_1, x_2, …, x_N) be a time-series of length N, with x_i ∈ℝ^d the data vector at time step i (i=1,…,N). Let Y=(y_1, y_2, …, y_N) be the ground-truth output time-series, with y_i ∈ℝ^d^* the response vector at time step i, and, let Ŷ=(ŷ_1, ŷ_2, …, ŷ_N) be the output response sequential data produced by a NN, with ŷ_i ∈ℝ^d* the output response vector at time step i.
§.§ Neural ODE
In traditional NNs, a sequential arrangement of hidden layers h_i is employed to transform an input vector x_i ∈ℝ^d into the desired predicted output ŷ_i ∈ℝ^d^* <cit.>. This transformation is achieved by propagating the output of each layer to the next one, following equation (<ref>) until reaching the output layer and obtaining the desired predicted output as indicated in equation (<ref>),
h_i = σ(W^[i]h_i-1+ b^[i]),
ŷ_i = σ(W^[out]h_i+ b^[out]).
Here, h_0 is the input layer, σ an activation function, W^[i]∈ℝ^n × d the weights matrix of layer i with n neurons, b^[i]∈ℝ^n the biases of layer i, W^[out]∈ℝ^d^* × n the weights matrix of the output layer and b^[out]∈ℝ^d^* the biases of the output layer.
In <cit.>, the authors introduced Neural ODE, a NN architecture that models hidden states using a continuous-time function. Unlike traditional NNs that rely on discrete layers, Neural ODE employs an infinite number of hidden layers, approaching a continuum. The output of these layers is defined as the solution of an IVP,
d h(t)/dt = f_θ(h(t),t) with h(t_0)=h_0
where f_θ is given by a NN, parameterised by a set of weights and biases, θ, t is the time step of the solution and (h_0, t_0) is the initial condition of the IVP.
One significant advantage of training a Neural ODE is that it produces a continuous-time function. Consequently, predictions can be made at any desired time point by discretizing the function using a numerical ODE solver:
h(t) = ODESolve(f_θ, h_0, (t_0,t_f)),
with h(t) the solutions computed by solving the IVP (<ref>) in the time interval (t_0, t_f).
Taking f_θ(h(t), t) to be a NN, which builds an ODE dynamics with learnable parameters θ and the input vector (x,t)∈ℝ^d+1 that corresponds to the first time step t_0 of the time-series, the input layer h_0 is given by (<ref>):
h(t_0) = h_0 = σ( W^[in] (x,t) + b^[in]),
where W^[in]∈ℝ^n × d+1 and b^[in]∈ℝ^n are the weight matrix and the bias vector of the input layer, respectively.
Consider a Neural ODE whose dynamics f_θ is given by a NN with a single hidden layer h∈ℝ^n of n neurons and a batch size of 1 (see Figure <ref>).
The dynamics built by such a NN has the form:
f_θ(h(t),t) = σ( W^[out] (σ( W^[1]h(t_0)+ b^[1] )) + b^[out])
where σ represents an activation function, h(t_0) ∈ℝ^n is the output of the input layer, (<ref>), W^[1]∈ℝ^n × n, W^[out]∈ℝ^d^* × n are the matrix of the weights of the hidden and output layers and b^[1]∈ℝ^n, b^[out]∈ℝ^d^* are the bias of the hidden and output layers, respectively.
Upon examining (<ref>) and (<ref>), it becomes apparent that a NN is a composition of linear transformations with the application of various activation functions σ. These activation functions can either be linear (such as identity or binary step functions) or non-linear (including sigmoid, hyperbolic tangent (tanh), rectified linear unit (ReLU), softmax, etc.).
When all activation functions in the NN that constructs f_θ are linear, it results in a linear ODE. However, linear activation functions are not commonly used as they cannot effectively capture complex relationships within the data. Consequently, they limit the representational power of the NN and hinder its performance in the given task <cit.>. On the other hand, by incorporating one or more non-linear activation functions in the dynamics builder NN, it becomes a non-linear ODE.
In addition to the NN that models the function dynamics f_θ and by optimising its parameters θ, a Neural ODE also incorporates an ODE solver. The ultimate outcome of training a Neural ODE is the learned function f_θ. To make predictions ŷ_i, i ∈ 1,…,N at specific time points t_i within the prediction time interval (t_1, t_N), the IVP is solved. The solutions obtained from solving the IVP, h(t_i) ≡h_i, represent the predictions at each respective time step t_i, i ∈ 1,…,N.
Remark: In some cases the solutions h(t_i) obtained from the Neural ODE do not directly correspond to the desired predictions ŷ_i. In such cases, an additional NN can be employed to process the intermediate solutions h(t_i) and produce the final predictions ŷ_i.
§.§ RNNs
RNNs <cit.> are a type of artificial NN that are specifically designed to process sequential data. Unlike feed-forward NNs, which process inputs in a single pass and do not have memory of previous inputs, RNNs have a form of memory that allows them to retain and utilise information from previous steps or time points in the sequence.
The key characteristic of an RNN is its recurrent connection, which forms a loop that allows information to be passed from one step to the next (from one hidden state at time step t_i-1, h_i-1, into the next one, h_i). This loop enables the network to maintain an internal state or hidden state that captures the context and dependencies of the previous steps. The hidden state serves as a memory that encodes information about the sequence up to the current step, allowing the network to make decisions or predictions based on the entire previous observations rather than just the current input. Mathematically, a RNN cell which forms the layers of an RNN, can be defined for i=1,…,N:
h_i = σ(W^[feedback]h_i-1 + W^[in]x_i + b^[i])
where h_i ∈ℝ^n is the hidden state vector (the initial hidden state vector h_0 is initialised at the start of the recurrent computation), W^[feedback]∈ℝ^n × n is the recurrent matrix that contains the weights linking the hidden state vectors at time steps t_i-1 and t_i, W^[in]∈ℝ^n × d is the matrix of the weights between the input x_i ∈ℝ^d and the hidden state vector, b^[i]∈ℝ^n is the bias vector of the current hidden state, and σ is an arbitrary activation function.
At each step t_i in the sequence, an RNN takes an input, processes it along with the current hidden state, and produces an output and an updated hidden state. The input can be any form of sequential data, such as a time-series, a sentence, or a sequence of images. The output can be a prediction, a classification, or another sequence of values.
The update of an RNN cell at a time step t_i is usually represented concisely as:
h_i = RNNCell(h_i-1, x_i).
The predicted output at time step t_i, ŷ_i, is computed by applying an output layer to the hidden state h_i <cit.>:
ŷ_i = σ(W^[out]h_i + b^[out])
where W^[out]∈ℝ^d^* × n is the matrix of the weights between the hidden state and the output vectors, b^[out]∈ℝ^d^* is the bias vector of the output layer and ŷ_i ∈ℝ^d^* is the prediction at time step i.
§.§ ODE-RNN
RNNs adjust a discrete-time dynamics with states defined at each time step, denoted as t_i. However, the states between data observations, occurring in the interval [t_i, t_i+1], remain undefined. Moreover, RNNs face challenges when handling irregularly sampled data due to the inability to perceive the sampling intervals between observations. Consequently, preprocessing of such data becomes essential to achieve regular sampling intervals, although this process introduces potential errors <cit.>.
To address these limitations, <cit.> introduced the ODE-RNN architecture, which defines the states between observations using an ODE.
Instead of using the previous hidden state h_i-1, as in (<ref>), the RNN update employs an intermediate hidden state h'_i:
h_i = RNNCell(h'_i, x_i).
The intermediate hidden states are obtained as the solution of an IVP:
d h'(t)/dt = f_θ (h'(t),t) with h'(t_i-1) = h_i-1
Thus, the solution is given by:
h'_i = ODESolve(f_θ, h_i-1,(t_i-1, t_i))
where h'_i ∈ℝ^n, f_θ is the ODE dynamics given by the NN with parameters θ, h_i-1 is the previous hidden state, and t_i-1 and t_i are the time steps of the previous and current hidden state, respectively.
ODE-RNNs are well-suited for handling irregularly sampled data due to their ability to leverage the sampling intervals and adapt to time-dependent dynamics. This advantage eliminates the necessity for preprocessing steps, which are typically required when using traditional RNNs.
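For concreteness, a minimal sketch of a single ODE-RNN state update is given below; it assumes torchdiffeq's odeint as the numerical solver and uses placeholder names (f_theta, rnn_cell), so it illustrates the state-transition logic rather than the reference implementation of <cit.>.

import torch
from torchdiffeq import odeint

def ode_rnn_step(f_theta, rnn_cell, x_i, h_prev, t_prev, t_cur):
    # t_prev, t_cur: Python floats, the previous and current observation times.
    # Evolve the previous hidden state from t_{i-1} to t_i to obtain the intermediate state h'_i.
    h_prime = odeint(f_theta, h_prev, torch.tensor([t_prev, t_cur]))[-1]
    # Standard RNN update combining the intermediate state with the current observation x_i.
    return rnn_cell(x_i, h_prime)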
§.§ BiRNNs
RNNs have been widely used, within encoder-decoder architectures, in various applications such as machine translation <cit.>, NLP <cit.>, and time-series prediction <cit.>.
As previously mentioned, RNNs enable the prediction of the output at a specific time step t_i, ŷ_i, by using past information stored in the hidden state from the previous time step, h_i-1. This captures the dependency between the current value and the preceding one <cit.>. However, certain tasks, such as NLP and machine translation, require predictions that depend not only on the past but also on the future. Therefore, incorporating future information can lead to improved predictive performance.
To address this, <cit.> introduced BiRNNs, an architecture composed of two stacked RNNs. The first RNN receives the input sequence in its original order, from x_1 to x_N, while the second RNN receives the same sequence but in reverse order, from x_N to x_1. Consequently, one row of the unfolded RNNs captures the dependence between the previous value and the current one through forward hidden states, denoted as h⃗_i. Simultaneously, the other row captures the dependence between the next value and the current one through backward hidden states, denoted as h⃖_i. This is represented in the two following equations for i=1,…,N:
h⃗_i = σ (W^[feedback]h⃗_i-1 + W^[f]x_i + b^[f]),
h⃖_i = σ (W^[feedback]h⃖_i-1 + W^[b]x_i + b^[b]),
where W^[f]∈ℝ^n × d is the weight matrix between the input and forward hidden state vectors, b^[f]∈ℝ^n is the bias vector of the forward hidden states, W^[b]∈ℝ^n × d is the weight matrix between the input and backward hidden states and b^[b]∈ℝ^n the bias vector of the backward hidden states.
To get the predicted output at a given time step, ŷ_i, the full hidden state, h_i, is computed by performing an aggregation/merging operation (concatenate, sum, mean, etc.) of the forward, h⃗_i, and backward, h⃖_i, hidden states of the same time step i <cit.> as follows, for i=1,…,N,
ŷ_i = σ(W^[out]h_i + b^[out]) with h_i = h⃗_i ⊕h⃖_i,
where ⊕ denotes the aggregation/merging operation used. In our study and experiments, we chose concatenation. Figure <ref> depicts the architecture of a BiRNN.
If multiple layers of BiRNNs are used, the complete hidden state, h_i, is the input for the subsequent BiRNN <cit.>.
§ METHOD
In this section, we introduce the innovative architecture called Neural CODE, providing the mathematical details that underlie its workings. Additionally, we demonstrate the versatility of Neural CODE by applying it to enhance the ODE-RNN family through the introduction of two new distinct architectures: CODE-RNN and CODE-BiRNN.
§.§ Neural CODE
In this work, we propose a novel continuous-time model called Neural CODE. It builds upon the concepts of Neural ODEs but introduces a new strategy to adjust the ODE dynamics by leveraging possible relationships between past and current as well as future and current values.
This idea was directly inspired by BiRNNs and, by allowing both past and future information to influence the current prediction, Neural CODE aims to enhance the model's ability to handle long-range dependencies and capture subtle patterns in the data.
Thus, the ODE dynamics f_θ is adjusted by optimising the parameters θ using the information given by capturing the dependence between the previous and the current values through forward hidden states h⃗_i, as well as the dependence between the next and the current values through backward hidden states h⃖_i.
In this context, the IVP (<ref>) is solved in a forward way within the time interval (t_0, t_f), using a numerical ODE solver.
h⃗_i = ODESolve(f_θ, h_0, (t_0, t_f)).
For considering possible relationships between next and current values, it is defined a Final Value Problem (FVP) as follows:
d h(t)/dt = f_θ(h(t),t) with h(t_f)=h_f,
where the final condition is given by the value at the last time step, h_f. Furthermore, the FVP is solved in a backward way within the reverse time interval (t_f, t_0):
h⃖_i = ODESolve(f_θ, h_f, (t_f, t_0)).
Note that, by solving the IVP (<ref>) and FVP (<ref>), two sets of solutions are obtained. The first set, denoted as h⃗_i, represents the solutions computed in the positive direction of time. The second set, h⃖_i, corresponds to the solutions computed in the negative direction of time. For a visual representation, see Figure <ref>.
To optimise the parameters θ of the NN we designed a loss function given by the sum of two terms, namely the mean squared error (MSE) of the forward and backward predictions, ℒ(h⃗_i) and ℒ(h⃖_i) respectively:
ℒ_total(θ) = ℒ( h(t_0) + ∫_t_0^t_f f_θ(h(t), t) dt ) + ℒ( h(t_f) + ∫_t_f^t_0 f_θ(h(t), t) dt )
= ℒ( ODESolve(f_θ, h_0, (t_0, t_f)) ) + ℒ( ODESolve(f_θ, h_f, (t_f, t_0)) )
= ℒ(h⃗_i) + ℒ(h⃖_i)
where ℒ(h⃗_i) and ℒ(h⃖_i) are the MSE between the predictions and the ground-truth in the forward and backward directions, respectively.
By minimising the loss function ℒ_total, the optimisation process in Neural CODE aims to find the parameters θ that can produce solutions to the IVP that closely match the true values and, simultaneously, solutions to the FVP that also closely match the true values.
Similarly to Neural ODEs, the adjoint sensitivity method is used to compute the gradients of the loss function <cit.>.
After the optimisation process has been completed in training a Neural CODE, the ultimate outcome is the ODE dynamics f_θ, adjusted by a NN. To make predictions, a numerical ODE solver is used, which can solve an IVP by starting at the initial condition and solving forward in time (Algorithm <ref>), or a FVP by starting at the final condition and solving backward in time (Algorithm <ref>).
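A compact sketch of the forward/backward training objective is given below, assuming torchdiffeq's odeint as the ODE solver; the network architecture and hyperparameters are illustrative and do not reproduce the exact algorithms referenced above.

import torch
from torchdiffeq import odeint

class ODEFunc(torch.nn.Module):
    """Dynamics builder f_theta: a small fully connected NN."""
    def __init__(self, dim, hidden=50):
        super().__init__()
        self.net = torch.nn.Sequential(
            torch.nn.Linear(dim, hidden), torch.nn.Tanh(), torch.nn.Linear(hidden, dim))
    def forward(self, t, h):
        return self.net(h)

def neural_code_loss(f_theta, y, t):
    """y: (N, dim) ground-truth states at increasing times t. Returns L(IVP) + L(FVP)."""
    h0, hf = y[0], y[-1]
    y_fwd = odeint(f_theta, h0, t)          # IVP: integrate forward from the initial condition
    y_bwd = odeint(f_theta, hf, t.flip(0))  # FVP: integrate backward from the final condition
    mse = torch.nn.functional.mse_loss
    return mse(y_fwd, y) + mse(y_bwd, y.flip(0))

# Usage sketch on a toy periodic series
f_theta = ODEFunc(dim=2)
t = torch.linspace(0.0, 5.0, 100)
y_true = torch.stack([torch.sin(t), torch.cos(t)], dim=-1)
optimizer = torch.optim.Adam(f_theta.parameters(), lr=1e-3)
optimizer.zero_grad()
loss = neural_code_loss(f_theta, y_true, t)
loss.backward()
optimizer.step()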
One advantage of Neural CODE is the ability to model the ODE dynamics to effectively capture the data dynamics when solved in both forward (IVP (<ref>)) and backward (FVP (<ref>)) time directions. Therefore, the proposed approach allows for the computation of valid predictions within or outside the training time interval. Within the training time interval, t ∈ (t_0, t_f), the method is capable of performing data completion tasks, where missing data points are estimated. Moreover, the approach also facilitates prediction of past data, t ∈ (t_m, t_f), where t_m represents a time preceding time t_0, and prediction of future data, denoted as t ∈ (t_0, t_M), where t_M represents a time beyond time t_f.
Remark: Note that, we consider that the solution h_i outputted by the ODE solver is the prediction ŷ_i. However, in some cases the solutions h(t_i) obtained from solving the IVP or FVP may not directly correspond to the desired predictions ŷ_i. In such cases, an additional NN can be employed to process the intermediate solutions h(t_i) to produce the final predictions ŷ_i.
§.§.§ Existence and uniqueness of the solution
Neural CODE aims to adjust the dynamics of an ODE that makes valid predictions when solved both IVP (<ref>) and FVP (<ref>), originating an Endpoint Value Problem (EVP) <cit.>.
To prove the existence and uniqueness of a solution to the EVP we use the Picard-Lindelöf theorem for both the initial and final value problems, (<ref>) and (<ref>) respectively.
Theorem 1 Let f_θ(h, t) : ℝ^d × [t_0,t_f] →ℝ^d, be a continuous vector-valued function that is Lipschitz continuous, i.e., there exists a constant K such that for all h_1, h_2 ∈ℝ^d and t ∈ [t_0,t_f],
||f_θ(h_1, t) - f_θ(h_2, t)|| ≤ K ‖h_1 - h_2 ‖.
Then, for any initial value h_0 ∈ℝ^d, the IVP (<ref>) has a unique solution on the interval [t_0,t_f].
Proof. The theorem can be proved by Picard's Existence and Uniqueness Theorem.
Remark It should be noted that:
*
By constructing f_θ, given by a NN, we ensure the continuity of the function in both variables since it is a composition of continuous functions. For any pair (h_i, t_i) ∈ℝ^d × [t_0, t_f], there is a neighbourhood such that f_θ(h,t) is continuous within it.
*
Since the function f_θ(h, t), given by a NN, is a composition of Lipschitz functions of all layers, each with an associated Lipschitz constant, and the weights are finite, the function is Lipschitz continuous in h with K the product of the Lipschitz constants of its layers, i.e.,
||f_θ(h_1,t) - f_θ(h_2,t)|| ≤ K||h_1 - h_2||
holds for all (h_1, t), (h_2, t) ∈ℝ^d × [t_0, t_f]
<cit.>.
Theorem 2 Let f_θ(h, t) : ℝ^d × [t_0,t_f] →ℝ^d, be a continuous vector-valued function that is Lipschitz continuous, i.e., there exists a constant K such that for all h_1, h_2 ∈ℝ^d and t ∈ [t_0,t_f],
||f_θ(h_1,t) - f_θ(h_2,t)|| ≤ K ‖h_1 - h_2 ‖.
Then, for any final value h_f ∈ℝ^d, the FVP (<ref>) has a unique solution on the interval [t_0,t_f].
Proof.
Since the form of an IVP (<ref>) and a FVP (<ref>) is similar <cit.>, Theorem 2 can also be proved using Picard's Existence and Uniqueness Theorem. We use time reversibility and make the change of variable H(s) = h(t_f - s), which results in the IVP dH/ds = -f_θ(H, t_f - s), H(0) = h(t_f) = h_f.
Since all the conditions needed for the Picard-Lindelöf theorem are satisfied by both IVP (<ref>) and FVP (<ref>), then it is guaranteed the existence and uniqueness of the solutions.
§.§ Recurrent architectures based on Neural CODE
We now demonstrate the versatility of Neural CODE by applying it to enhance the ODE-RNN family through the introduction of two new distinct recurrent architectures: CODE-RNN and CODE-BiRNN. The reason to introduce recurrent architectures based on Neural CODE is that this architecture adjusts an ODE dynamics which is determined by the initial (IVP (<ref>)) and final (FVP (<ref>)) conditions. By employing a recurrence mechanism we aim to update the ODE dynamics at each observation, having as many initial and final conditions as time steps, similar to ODE-RNN.
§.§.§ CODE-RNN
As mentioned, with the aim of further improving the performance of Neural CODE predictions in both forward and backward directions, we propose the recurrent architecture CODE-RNN. This new architecture is built by refining and redesigning the ODE-RNN architecture of <cit.>, in which we replace the Neural ODE by a Neural CODE.
In CODE-RNN, the intermediate hidden state h'_i in the RNN cell update (<ref>) is obtained by combining the forward intermediate hidden state, h⃗'_i, and the backward intermediate hidden state, h⃖'_i.
The forward intermediate hidden state, h⃗'_i, is obtained as the solution of an IVP within the time interval between the previous and current time steps, (t_i-1, t_i):
dh⃗'(t)/dt = f_θ (h⃗'(t),t) with h⃗'(t_i-1) = h⃗'_i-1.
On the other hand, the backward intermediate hidden state, h⃖'_i, is obtained as the solution of a FVP within the time interval between the current and previous time steps, (t_i, t_i-1):
dh⃖'(t)/dt = f_θ (h⃖'(t),t) with h⃖'(t_i) = h⃖'_i-1.
Thus, the solutions are given by:
h⃗'_i = ODESolve(f_θ, h⃗'_i-1, (t_i-1,t_i)),
h⃖'_i = ODESolve(f_θ, h⃖'_i-1, (t_i,t_i-1)).
Then, the two intermediate hidden states h⃗'_i and h⃖'_i are merged to form the full intermediate state, h'_i = h⃗'_i ⊕ h⃖'_i. Hence, the RNN cell update is:
h_i = RNNCell(h'_i, x_i) with h'_i = h⃗'_i ⊕ h⃖'_i.
The training process of CODE-RNN is described in Algorithm <ref>.
It is important to note that in CODE-RNN, the initial and final conditions of the IVP and FVP are determined by the forward h⃗'_i and backward h⃖'_i intermediate hidden states. This is in contrast to BiRNNs, where the hidden states that propagate from one time step to another also incorporate the input values x_i. Figure <ref> depicts the architecture of CODE-RNN.
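For concreteness, a schematic sketch of a single CODE-RNN time step is shown below. This is not the implementation of the training algorithm referenced above: the merge operation ⊕ is assumed to be concatenation, func stands for f_θ, and rnn_cell for the RNN update cell (whose hidden size must then equal twice the dimension of the intermediate states).

    # Schematic sketch of one CODE-RNN time step (assumed merge: concatenation).
    import torch
    from torchdiffeq import odeint

    def code_rnn_step(func, rnn_cell, h_fwd_prev, h_bwd_prev, x_i, t_prev, t_i):
        ts = torch.tensor([t_prev, t_i])
        h_fwd = odeint(func, h_fwd_prev, ts)[-1]           # IVP on (t_{i-1}, t_i)
        h_bwd = odeint(func, h_bwd_prev, ts.flip(0))[-1]   # FVP on (t_i, t_{i-1})
        h_prime = torch.cat([h_fwd, h_bwd], dim=-1)        # merged intermediate state h'_i
        h_i = rnn_cell(x_i, h_prime)                       # RNN cell update; h_i is taken as the prediction
        return h_fwd, h_bwd, h_i                           # intermediate states feed the next step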
To optimise the parameters θ of CODE-RNN, the loss function is defined by MSE:
ℒ(θ) = MSE(Ŷ, Y).
This is a standard loss function commonly used in neural networks, as opposed to the loss function (<ref>) specifically designed for Neural CODE.
To make predictions in both forward and backward directions using CODE-RNN, the IVP and FVP are solved to construct the intermediate hidden state h'_i, which is used to compute the output of the RNN update cell. The process of making predictions forward and backward in time follows a similar procedure, with the only difference being the order of the time steps t_i.
When making predictions in the forward direction, the time steps are given starting at the pairs (t_i-1, t_i) for the IVP (<ref>) and (t_i, t_i-1) for the FVP (<ref>), for i=1,…,M. Algorithm <ref> describes the process of making future predictions with CODE-RNN. However, when making predictions in the backward direction, the time steps are given starting at the pairs (t_i-1, t_i) for the IVP (<ref>), and (t_i, t_i-1) for the FVP (<ref>), for i=N,…,m. Algorithm <ref> describes the process of making past predictions with CODE-RNN. Similar to Neural CODE, predictions can be computed within (t ∈ (t_0, t_f)) or outside the training time interval for the past (t ∈ (t_m, t_f)) or the future (t ∈ (t_0, t_M)).
Note that, once more, we consider the hidden state h_i output by the RNN cell to be the prediction ŷ_i.
In the literature, several variants of update cells have emerged to enhance the performance of the RNN cell. The recurrent architectures based on Neural CODE can incorporate any of these variants. In this study, we conducted experiments with two additional update cells: GRU (CODE-GRU) and LSTM (CODE-LSTM).
In contrast to ODE-RNNs, the hidden state h_i resulting from the RNN cell update is not transmitted to the next cell through the feedback loop. Our approach uses intermediate hidden states in the forward (h⃗'_i) and backward (h⃖'_i) directions. This is necessary because the initial and final conditions for solving the ODE cannot be the same, which would occur if we used the hidden state h_i in the feedback loop. This limitation reveals a significant drawback of CODE-RNN/-GRU/-LSTM, as the input information x_i is not considered or passed to the next time step iteration; it is only used to compute the output prediction. To address this issue, we propose an upgraded architecture called CODE-BiRNN.
§.§.§ CODE-BiRNN
We introduce CODE-BiRNN, an enhanced bidirectional recurrent architecture based on Neural CODE, which is an improvement over CODE-RNN. The aim of this enhancement is to incorporate the input information x_i into the hidden state of the feedback loop, thus updating the ODE dynamics at each observation.
To accomplish this, we made modifications to the architecture. Instead of having a single cell update that generates a single hidden state, h_i, we redesigned the structure to include two independent cell updates. The first update computes the forward hidden states, h⃗_i, while the second computes the backward hidden states, h⃖_i.
The forward intermediate hidden state, h⃗'_i, is given by solving the IVP (<ref>) within the time interval between the previous and current time steps, (t_i-1, t_i):
dh⃗'(t)/dt = f_θ (h⃗'(t),t) with h⃗'(t_i-1) = h⃗_i-1.
The backward intermediate hidden state, h⃖'_i, is obtained by solving the FVP (<ref>) within the time interval between the current and previous time steps, (t_i, t_i-1):
dh⃖'(t)/dt = f_θ (h⃖'(t),t) with h⃖'(t_i) = h⃖_i-1.
Thus, unlike CODE-RNN, the forward, h⃗'_i, and backward, h⃖'_i, intermediate hidden states are computed using the previous forward h⃗_i-1 (<ref>) and backward h⃖_i-1 (<ref>) hidden states as initial and final value, respectively:
h⃗'_i = ODESolve(f_θ, h⃗_i-1, (t_i-1,t_i)),
h⃖'_i = ODESolve(f_θ, h⃖_i-1, (t_i,t_i-1)).
The two intermediate hidden states, h⃗'_i and h⃖'_i, are then passed onto separate RNN cells, which output the forward h⃗_i (<ref>) and backward h⃖_i (<ref>) hidden states:
h⃗_i = RNNCell(h⃗'_i, x_i),
h⃖_i = RNNCell(h⃖'_i, x_i).
To get the predicted output at a given time step, ŷ_i, the full hidden state, h_i, is computed by performing an aggregation/merging operation of the forward h⃗_i and backward h⃖_i hidden states of the same time step t_i:
ŷ_i = σ(W^[out] h_i + b^[out]) with h_i = h⃗_i ⊕ h⃖_i.
By incorporating two separate hidden layers within the RNN, processing the data in opposite temporal directions, our network inherits the name BiRNN. Algorithm <ref> outlines the implementation details.
Replacing the RNN by a BiRNN makes it possible to have a fully independent recurrent process for each time direction, as shown in Figure <ref>.
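A corresponding schematic sketch of one CODE-BiRNN time step is shown below. Unlike CODE-RNN, the states fed back to the next IVP/FVP now come from the two RNN cells and therefore carry the input information x_i; the readout layer, the concatenation-based merge and all function names are illustrative assumptions.

    # Schematic sketch of one CODE-BiRNN time step.
    import torch
    from torchdiffeq import odeint

    def code_birnn_step(func, fwd_cell, bwd_cell, readout,
                        h_fwd_prev, h_bwd_prev, x_i, t_prev, t_i):
        ts = torch.tensor([t_prev, t_i])
        h_fwd_prime = odeint(func, h_fwd_prev, ts)[-1]          # IVP on (t_{i-1}, t_i)
        h_bwd_prime = odeint(func, h_bwd_prev, ts.flip(0))[-1]  # FVP on (t_i, t_{i-1})
        h_fwd = fwd_cell(x_i, h_fwd_prime)                      # forward hidden state (sees x_i)
        h_bwd = bwd_cell(x_i, h_bwd_prime)                      # backward hidden state (sees x_i)
        y_hat = readout(torch.cat([h_fwd, h_bwd], dim=-1))      # prediction from the merged state
        return h_fwd, h_bwd, y_hat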
Similar to CODE-RNN, in CODE-BiRNN, the optimisation process aims to find the optimal parameters θ by minimising the MSE between the predicted values and the true values.
To make predictions in both forward and backward directions using CODE-BiRNN, a similar approach is followed as in CODE-RNN. The IVP (<ref>) and FVP (<ref>) need to be solved to construct the forward and backward outputs of the RNN cells, respectively, which are then used to generate the final hidden state by combining the contributions from both directions.
The process of making predictions in forward and backward directions with CODE-BiRNN is analogous to that of CODE-RNN, with the directional difference lying in the order of the time steps. When making predictions in the forward direction, the time steps are given in the pairs (t_i-1, t_i) for the IVP and (t_i, t_i-1) for the FVP, for i = 1,…,M. Algorithm <ref> describes the process of making future predictions with CODE-BiRNN. On the other hand, when making predictions in the backward direction, the time steps are given in the pairs (t_i-1, t_i) for the IVP, and (t_i, t_i-1) for the FVP, for i = N,…,m. Algorithm <ref> describes the process of making past predictions with CODE-BiRNN. Predictions can be computed within (t ∈ (t_0, t_f)) or outside the training time interval, for the past (t ∈ (t_m, t_f)) or the future (t ∈ (t_0, t_M)).
Note that, once more, we consider the hidden state h_i that results from aggregating/merging the forward and backward outputs of the RNN cells to be the prediction ŷ_i.
The bidirectional recurrent architecture based on Neural CODE can use any update cell. In this work we present results with RNN (CODE-BiRNN), GRU (CODE-BiGRU) and LSTM (CODE-BiLSTM) cells.
§ NUMERICAL EXPERIMENTS
In order to evaluate the performance of the architectures presented in this work, a set of experiments were conducted using various datasets and tasks. Specifically, we compared Neural CODE to Neural ODE by reconstructing spiral ODE dynamics, both forward and backward in time, using two synthetic datasets that differ in the number of training and testing points. By varying the number of data points, we aimed to investigate how Neural CODE performs in scenarios where data availability is limited, which is often encountered in real-world applications.
Then, CODE-RNN/-GRU/-LSTM and CODE-BiRNN/-BiGRU/-BiLSTM were tested, with ODE-RNN/-GRU/-LSTM used as baselines. These comparisons were carried out using three real-world time-series datasets: regularly-sampled data from the climate domain, sparsely regularly-sampled data from the hydrological domain, and irregularly-sampled data from the stock market domain. The evaluation covered three distinct tasks: missing data imputation, future extrapolation, and backward extrapolation (past discovery).
The first task, missing data imputation, aimed to assess the models' ability to fill in missing values in the time-series data, being crucial when dealing with incomplete datasets where the missing information needs to be estimated. The second task, future extrapolation, focused on predicting future values beyond the observed time range, being essential for forecasting and decision-making in many domains. The third task, past discovery, aimed to evaluate the models' capacity to uncover and reconstruct past patterns in the time-series data. This task is particularly relevant for analysing historical trends and understanding the dynamics of the data over time.
Furthermore, in the context of the missing data imputation task, we conducted testing using two different settings: one involving a single input/output feature, and the other involving higher-dimensional input/output vectors. This allowed us to leverage the tests conducted on Neural CODE and examine its behaviour when learning from higher-dimensional data. By evaluating the model's performance in capturing the dynamics of the time-series in higher-dimensional settings, we gained insights into its ability to handle and learn from datasets with increased complexity. In the context of future and backward extrapolation, the testing was conducted using sequences of 7 or 15 observations to predict 7 or 15 observations. This allowed us to study the performance of the proposed architectures when predicting for longer time-horizons.
§.§ Case Study 1: Synthetic Spiral ODE Dynamics
We compared the performance of Neural CODE and Neural ODE in restoring the dynamics of a spiral ODE both forward and backward in time. By comparing the performance of Neural CODE and Neural ODE on this specific task, we aimed to gain insights into the strengths and limitations of each model in terms of their ability to capture and reproduce complex dynamics.
To create the training and testing sets, we defined a simple ODE (<ref>) and solved it numerically to retrieve data points in the time interval [0, 25]. For each time step, the values of the two coordinates of the spiral, (x, y), are available.
We adapted the code from the Python Torchdiffeq <cit.> for our implementation:
dx/dt = -0.1x^3 + 2.0y^3
dy/dt = -2.0x^3 - 0.1y^3
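A sketch of how such data could be generated with torchdiffeq is given below; the initial condition (2, 0) and the chronological split into training and testing points are assumptions, as those details are not restated here.

    # Sketch: generate spiral trajectories by solving the true dynamics above.
    import torch
    from torchdiffeq import odeint

    def true_dynamics(t, state):
        x, y = state[..., 0], state[..., 1]
        dx = -0.1 * x ** 3 + 2.0 * y ** 3
        dy = -2.0 * x ** 3 - 0.1 * y ** 3
        return torch.stack([dx, dy], dim=-1)

    t = torch.linspace(0.0, 25.0, 3000)            # 2000 training + 1000 testing points
    y0 = torch.tensor([2.0, 0.0])                  # assumed initial condition
    with torch.no_grad():
        true_y = odeint(true_dynamics, y0, t)      # shape (3000, 2): one (x, y) pair per time step
    train_t, test_t = t[:2000], t[2000:]           # assumed chronological 2000/1000 split
    train_y, test_y = true_y[:2000], true_y[2000:]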
Two datasets were created: one with 2000 training points and 1000 testing points, denoted by 2000/1000 dataset, and another with 1000 training points and 2000 testing points, denoted by 1000/2000 dataset. We aimed to study the impact of dataset size on the models' performance.
For training, we used a batch size of 1, a sequence length of 10 time steps, 2000 iterations for MAXITER, and the Adam optimiser with a learning rate of 0.001. For the ODESolve, we employed the Runge-Kutta method of order 5 (Dormand-Prince-Shampine) with default configurations. The loss function of Neural CODE is based on the MSE (see (<ref>)).
The NN architecture used in this study consists of three layers. The input layer contains 2 neurons, representing the input features, (x_i,y_i). The hidden layer is composed of 50 neurons, each using the hyperbolic tangent activation function. Finally, the output layer consists of 2 neurons, representing the output or prediction of the model, (x̂_i,ŷ_i).
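A PyTorch sketch of this network is given below; the class and variable names are illustrative, and the wrapper simply exposes the f(t, h) signature expected by torchdiffeq's odeint.

    # Sketch of the 2-50-2 network used as f_theta in the spiral experiments (assumed PyTorch form).
    import torch

    class SpiralODEFunc(torch.nn.Module):
        def __init__(self):
            super().__init__()
            self.net = torch.nn.Sequential(
                torch.nn.Linear(2, 50),   # input layer: (x_i, y_i)
                torch.nn.Tanh(),          # hidden layer of 50 tanh units
                torch.nn.Linear(50, 2),   # output layer: the predicted derivatives
            )

        def forward(self, t, h):          # signature expected by torchdiffeq's odeint
            return self.net(h)

    func = SpiralODEFunc()
    optimizer = torch.optim.Adam(func.parameters(), lr=0.001)  # Adam with lr = 0.001, as stated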
To account for random weight initialisation, we trained and tested each model three times (R=3). To evaluate the performance of the models, we compute the average of the MSE (MSE_avg) and standard deviation (std_avg) values for the test sets, from the three runs. Additionally, to analyse the evolution of the adjusted ODE during training, we plotted the MSE for forward and backward prediction on the training set at every 20 iterations (Figures <ref> and <ref>, respectively).
Forward reconstruction
Table <ref> presents the numerical results of the Neural ODE and Neural CODE for the forward reconstruction of the spiral. The first column displays the number of points in the training and testing sets, the second displays the time interval considered for the reconstruction task. The next two columns display, for each model, the MSE_avg and std_avg values.
The results presented in Table <ref> demonstrate that Neural CODE outperforms Neural ODE in reconstructing the spiral through the solution of an IVP. It is worth noting that as the number of training points decreases, there is an expected increase in the prediction error.
In Figure <ref>, the plots illustrating the MSE evolution during training demonstrate that Neural CODE achieved lower MSE values in the 2000 iterations. Additionally, Neural CODE exhibited faster adaptation to spiral dynamics compared to Neural ODE. By examining the MSE curves for both datasets, depicted in Figures <ref> and <ref>, we can conclude that using a smaller training dataset leads to more unstable learning.
Backward reconstruction
Table <ref> presents the numerical results of the Neural ODE and Neural CODE for the backward reconstruction of the spiral. The first column displays the
number of points in the training and testing sets, the second displays the time interval considered for the reconstruction task. The next two columns display, for each model, the
MSE_avg and std_avg values.
The results presented in Table <ref> demonstrate that when using the fitted ODEs to predict backward in time by solving a FVP (<ref>), Neural CODE demonstrates superior performance compared to Neural ODE for both datasets: 2000/1000 and 1000/2000. In contrast to the forward reconstruction results presented in Table <ref>, Neural CODE performs better or at least similarly when trained on a smaller number of data points (1000/2000 dataset).
In Figure <ref>, the evolution of MSE during training provides evidence of the increased difficulty faced by Neural ODE when solving the FVP. However, it exhibits a similar behaviour to Neural CODE after completing half of the total iterations when trained with fewer points.
This discrepancy can be attributed to the fact that Neural CODE minimises a function that accounts for backward prediction, while Neural ODE does not. The improved prediction capabilities of Neural CODE with the 1000/2000 dataset can be attributed to achieving a better generalised representation due to the limited number of training points, reducing the likelihood of overfitting.
For doing predictions backward in time using a Neural ODE model, the time interval is given in the reverse order to the ODESolve.
In Figures <ref> and <ref>, the visualisation of spiral dynamics and the predicted values (x̂_i,ŷ_i) with the testing set, of the 2000/1000 dataset, clearly demonstrates that Neural CODE outperforms Neural ODE in terms of prediction accuracy in both forward and backward reconstruction tasks.
Furthermore, for the 1000/2000 dataset, the visual representations, in Figures <ref> and <ref>, confirm our earlier conclusions. In forward time predictions, Neural CODE exhibits superior modelling capabilities compared to Neural ODE. Additionally, the backward dynamics of Neural CODE better aligns with the true dynamics, albeit deviating from the periodic function that models the x and y values.
It's important to note that these visualisations were generated using the models at the final training iteration. Upon examining the evolution plots of MSE, it becomes evident that implementing an early stopping criterion would be beneficial.
§.§ Case Study 2: Real-world time-series
We evaluated and compared the recurrent models CODE-RNN/-GRU/-LSTM and CODE-BiRNN/-BiGRU/-BiLSTM, using ODE-RNN/-GRU/-LSTM baselines, when modelling three real-world time-series datasets with different characteristics (available in Kaggle):
* Daily Climate time-series Data (regularly sampled), denoted by DC, <cit.>;
* Hydropower modelling with hydrological data (regularly sampled with sparse data), denoted by HM, <cit.>;
* DJIA 30 Stock time-series (irregularly sampled), denoted by DJIA, <cit.>.
For each dataset, the performance of the models was analysed at three distinct tasks:
(i) Missing data imputation: predicting an observation ŷ_i+1 at t_i+1 between observations at t_i and t_i+2, for i = 1, 3, 5, ….
Remark: Recurrent architectures based on Neural ODE (ODE-RNN/-GRU/-LSTM) only receive the observation at t_i and predict the observation at t_i+1, while architectures based on Neural CODE (CODE-RNN/-GRU/-LSTM and CODE-BiRNN/-BiGRU/-BiLSTM) receive both the observation at t_i (for the IVP) and at t_i+2 (for the FVP) to predict the observation at t_i+1, being able to capture more information.
(ii) Forward extrapolation: given a sequence of length 7 or 15 predict the observations for the next 7 or 15 time-steps.
(iii) Backward extrapolation: given a sequence of length 7 or 15 predict the observations for the past 7 or 15 time-steps.
Remark: For doing predictions backward in time with ODE-RNN/-GRU/-LSTM, the ODESolve receives the time interval in the reverse order.
The same training conditions were applied to all models for all datasets and tasks. The datasets were divided into 75% training and 25% testing points. The training was conducted with a batch size of 1, 50 epochs, and the Adam optimiser with a learning rate of 0.0005. For the ODESolve we use the Runge-Kutta method of order 5 (Dormand-Prince-Shampine) with the default configurations.
The NN architecture that builds the ODE dynamics used in this study consists of three layers. The input layer contains 1 neuron (or 4 neurons when predicting 4 features or 39 neurons when predicting 39 features), representing the input features of the model. The hidden layer is composed of 256 neurons, each using the exponential linear unit activation function. Finally, the output layer consists of 1 neuron (or 4 neurons when predicting 4 features or 39 neurons when predicting 39 features), representing the output of the model.
All architectures use a single RNN, GRU or LSTM cell with an input layer containing 1 neuron (or 4 neurons when predicting 4 features or 39 neurons when predicting 39 features), representing the size of output given by the ODE solver. The hidden layer is composed of 256 neurons and the output layer contains 1 neuron (or 4 neurons when predicting 4 features or 39 neurons when predicting 39 features).
To account for random weight initialisation, we trained and tested each model three
times (R=3). To evaluate the performance of the models, we computed the average of the
MSE (MSE_avg) and standard deviation (std_avg) values for the test sets, from the three runs.
Furthermore, to analyse and compare the convergence of the proposed networks with the baselines, the training losses of CODE-RNN/-GRU/-LSTM and CODE-BiRNN/-BiGRU/-BiLSTM were plotted, along with ODE-RNN/-GRU/-LSTM baselines, for the missing data imputation and future extrapolation tasks.
Remark: For the backward extrapolation task, the training losses were not plotted since the Neural ODE-based architectures (ODE-RNN/-GRU/-LSTM) solely compute the losses in the forward direction of time.
§.§.§ Daily Climate time-series Data
This dataset consists of 1462 data points with 4 daily features: mean temperature, humidity, wind speed, and mean pressure. These features were experimentally measured in Delhi, India for weather forecasting <cit.>.
Missing data imputation
We performed the missing data imputation task to predict data points with 1 feature (mean temperature) as well as all 4 available features, denoted by 1/1 and 4/4 respectively. The numerical results are shown in Table <ref> for architectures ODE-RNN, CODE-RNN and CODE-BiRNN, Table <ref> for architectures ODE-GRU, CODE-GRU and CODE-BiGRU, and Table <ref> for architectures ODE-LSTM, CODE-LSTM and CODE-BiLSTM. The evolution of the loss during training for the predictions with 1/1 and 4/4 features are depicted in Figures <ref> and <ref>, respectively.
From Tables <ref>-<ref>, CODE-BiRNN/-BiGRU/-BiLSTM stand out from the others by offering the best performance. Among these three variants, the performance is similar.
They achieve MSE_avg values smaller by an order of magnitude for 1/1 and 4/4.
When comparing ODE-RNN/-GRU/-LSTM with CODE-RNN/-GRU/-LSTM, the performances are similar.
In general, all architectures exhibit higher MSE_avg values when the number of features is increased.
Figures <ref> and <ref> show that CODE-BiRNN/-BiGRU/-BiLSTM stand out, having faster convergence and achieving lower loss values. When analysing the loss values of ODE-RNN/-GRU/-LSTM and CODE-RNN/-GRU/-LSTM, they present similar behaviour. We note that analysing the loss values throughout the training process corroborates the test results (Tables <ref>-<ref>).
Future extrapolation
We performed the future extrapolation task to predict the mean temperature for the next 7 or 15 days after receiving the mean temperature for past 7 or 15 days, denoted by 7/7 and 15/15.
The numerical results are shown in Table <ref> for architectures ODE-RNN, CODE-RNN and CODE-BiRNN, Table <ref> for architectures ODE-GRU, CODE-GRU, CODE-BiGRU and Table <ref> for architectures ODE-LSTM, CODE-LSTM and CODE-BiLSTM.
The evolution of the losses during training for predictions with 7/7 and 15/15 are depicted in Figures <ref> and <ref>, respectively.
The results in Tables <ref>-<ref> show that, CODE-BiRNN/-BiGRU/-BiLSTM stand out from the others, consistently achieving the best predictive performance for 7/7 and 15/15. Among these variants, CODE-BiGRU has the best performance.
CODE-RNN and ODE-RNN have similar performance, while CODE-RNN presents slightly better performance than CODE-GRU and CODE-LSTM for 7/7.
CODE-RNN/-GRU/-LSTM outperform ODE-RNN/-GRU/-LSTM for 15/15.
Figures <ref> and <ref> show that CODE-BiRNN/-BiGRU/-BiLSTM, in general, exhibit the fastest convergence rates and the lowest loss values.
As the number of epochs increases, CODE-RNN/-GRU/-LSTM show a slightly rising trend in the loss values. This suggests that implementing an early stopping criterion could be beneficial for CODE-RNN/-GRU/-LSTM.
Backward extrapolation
We performed the backward extrapolation task to predict the mean temperature for the past 7 or 15 days after receiving the mean temperature for the next 7 or 15 days, denoted by 7/7 and 15/15.
The numerical results are shown in Table <ref> for architectures ODE-RNN, CODE-RNN and CODE-BiRNN, Table <ref> for architectures ODE-GRU, CODE-GRU, CODE-BiGRU and Table <ref> for architectures ODE-LSTM, CODE-LSTM and CODE-BiLSTM.
From Tables <ref>-<ref>, the performance of the models on the backward extrapolation task are similar to that observed for the forward extrapolation task.
Once again, it is worth emphasising that CODE-BiRNN/-BiGRU/-BiLSTM demonstrates the highest performance for 7/7 and 15/15. Among these variants, CODE-BiGRU performs best at 7/7, while CODE-BiRNN excels at 15/15.
Although predicting longer time horizons is more challenging, CODE-BiRNN/-BiGRU/-BiLSTM achieve lower MSE_avg values compared to smaller time horizons.
§.§.§ Hydropower Modelling with Hydrological Data
This dataset consists of weekly hydrological data from 27 European countries spanning the period 2015 to 2019, totalling 208 data points. The dataset includes hydropower inflows and average river discharges measured at national hydropower plants <cit.>. This dataset was specifically chosen for its limited amount of data, enabling the evaluation of how sparse training data affects the performance of the architectures.
Missing data imputation
We performed the missing data imputation task to predict data points with 1 feature (hydropower inflow) and 39 features (39 average river discharges), denoted by 1/1 and 39/39 respectively. The numerical results are shown in Table <ref> for architectures ODE-RNN, CODE-RNN and CODE-BiRNN, Table <ref> for architectures ODE-GRU, CODE-GRU and CODE-BiGRU, and Table <ref> for architectures ODE-LSTM, CODE-LSTM and CODE-BiLSTM. The evolution of the loss during training for the predictions with 1/1 and 39/39 features are depicted in Figures <ref> and <ref>, respectively.
From Tables <ref>-<ref>, CODE-BiRNN/-BiGRU/-BiLSTM stand out from the others by offering the best performance. Among these three variants, the performance is similar.
When comparing ODE-RNN/-GRU/-LSTM with CODE-RNN/-GRU/-LSTM, the performances are similar. When the number of features is increased, all architectures present higher MSE_avg values.
Figures <ref> and <ref> show that CODE-RNN/-GRU/-LSTM achieve the lowest loss values out of all architectures. CODE-BiRNN/-BiGRU/-BiLSTM have faster convergence, although ODE-RNN/-GRU/-LSTM and CODE-RNN/-GRU/-LSTM reach lower loss values.
When analysing and comparing the loss values during training for ODE-RNN/-GRU/-LSTM and CODE-RNN/-GRU/-LSTM, they present similar behaviour, corroborating the test results (Tables <ref>-<ref>).
From Figures <ref> and <ref>, CODE-BiRNN/-BiGRU/-BiLSTM present the highest training loss values; however, the test results show they have higher performance than the other architectures. Taking these conflicting pieces of information into account, we can conclude that the ODE-RNN and CODE-RNN models, along with their respective variants, exhibit poorer generalisation and may be prone to overfitting.
Future extrapolation
We performed the future extrapolation task to predict the hydropower inflow for the next 7 or 15 weeks after receiving the hydropower inflow for past 7 or 15 weeks, denoted by 7/7 and 15/15.
The numerical results are shown in Table <ref> for architectures ODE-RNN, CODE-RNN and CODE-BiRNN, Table <ref> for architectures ODE-GRU, CODE-GRU, CODE-BiGRU and Table <ref> for architectures ODE-LSTM, CODE-LSTM and CODE-BiLSTM.
The evolution of the losses during training for predictions with 7/7 and 15/15 are depicted in Figures <ref> and <ref>, respectively.
The results in Tables <ref>-<ref> show that CODE-BiRNN/-BiGRU/-BiLSTM stand out from the others, consistently achieving the best predictive performance for 7/7 and 15/15. Among these variants, CODE-BiRNN has the best performance.
When comparing ODE-RNN/-GRU/-LSTM and CODE-RNN/-GRU/-LSTM, the architectures present similar performance for 7/7 and 15/15.
Figures <ref> and <ref> show that CODE-BiRNN/-BiGRU/-BiLSTM, in general, exhibit the fastest convergence rates and the lowest loss values.
ODE-RNN/-GRU/-LSTM and CODE-RNN/-GRU/-LSTM achieve similar MSE_avg values at 50 epochs, although, in general, CODE-RNN/-GRU/-LSTM reach lower MSE_avg values faster.
Backward extrapolation
We performed the backward extrapolation task to predict the hydropower inflow for the past 7 or 15 weeks after receiving the hydropower inflow for the next 7 or 15 weeks, denoted by 7/7 and 15/15.
The numerical results are shown in Table <ref> for architectures ODE-RNN, CODE-RNN and CODE-BiRNN, Table <ref> for architectures ODE-GRU, CODE-GRU, CODE-BiGRU and Table <ref> for architectures ODE-LSTM, CODE-LSTM and CODE-BiLSTM.
From Tables <ref>-<ref>, the performance of the models on the backward extrapolation task are similar to that observed for the forward extrapolation task.
Once again, it is worth emphasising that CODE-BiRNN/-BiGRU/-BiLSTM demonstrates the highest performance for 7/7 and 15/15. Among these variants, CODE-BiRNN performs the best.
As expected, when predicting longer time horizons, all architectures show an increase in the MSE_avg values compared to smaller time horizons.
For doing predictions backward in time, the ODESolve in ODE-LSTM receives the time interval in the reverse order.
§.§.§ DJIA 30 Stock time-series
The architectures were tested on an irregularly sampled dataset called DJIA 30 Stock time-series. The dataset consisted of 3019 data points and included 6 daily features: the stock's price at open and close, the highest and lowest price, the number of shares traded, and the stock's ticker name. The data was collected between January 3, 2006, and December 29, 2017, for 29 DJIA companies <cit.>. However, for this particular study, data from a single company was utilised.
Missing data imputation
We performed the missing data imputation task to predict data points with 1 feature (stock's price at close) and 4 features (stock's price at open and close, and the highest and lowest price), denoted by 1/1 and 4/4 respectively. The numerical results are shown in Table <ref> for architectures ODE-RNN, CODE-RNN and CODE-BiRNN, Table <ref> for architectures ODE-GRU, CODE-GRU and CODE-BiGRU, and Table <ref> for architectures ODE-LSTM, CODE-LSTM and CODE-BiLSTM. The evolution of the loss during training for the predictions with 1/1 and 4/4 features are depicted in Figures <ref> and <ref>, respectively.
From Tables <ref>-<ref>, CODE-BiRNN/-BiGRU/-BiLSTM stand out from the others by offering the best performance. Among these three variants, CODE-BiRNN/-BiGRU offer the best performance.
When comparing ODE-RNN/-GRU/-LSTM with CODE-RNN/-GRU/-LSTM, the performances are similar. When the number of features is increased, the MSE_avg values do not suffer a significant increase.
Figures <ref> and <ref> show that CODE-BiRNN/-BiGRU/-BiLSTM achieve faster convergence and lower MSE_avg values out of all architectures. When comparing ODE-RNN/-GRU/-LSTM and CODE-RNN/-GRU/-LSTM, they present similar loss evolution, with ODE-GRU able to achieve slightly lower MSE_avg values than CODE-GRU.
Future extrapolation
We performed the future extrapolation task to predict the stock's price at close for the next 7 or 15 days after receiving the stock's price at close for past 7 or 15 days, denoted by 7/7 and 15/15.
The numerical results are shown in Table <ref> for architectures ODE-RNN, CODE-RNN and CODE-BiRNN, Table <ref> for architectures ODE-GRU, CODE-GRU, CODE-BiGRU and Table <ref> for architectures ODE-LSTM, CODE-LSTM and CODE-BiLSTM.
The evolution of the losses during training for predictions with 7/7 and 15/15 are depicted in Figures <ref> and <ref>, respectively.
The results in Tables <ref>-<ref> show that, CODE-BiRNN/-BiGRU/-BiLSTM stand out from the others, consistently achieving the best predictive performance, for 7/7 and 15/15. Among these variants, CODE-BiGRU offers the best performance for 7/7 while CODE-BiRNN presents the lowest MSE_avg for 15/15.
CODE-LSTM presents the lowest performance out of all networks for 7/7 and 15/15 and, in general, ODE-RNN/-GRU/-LSTM have lower or similar MSE_avg values when compared to CODE-RNN/-GRU/-LSTM. This was expected, since the architecture of these proposed networks does not consider the information given by the input to update the hidden state passed onto the next iteration.
From Figures <ref> and <ref>, CODE-BiRNN/-BiGRU/-BiLSTM exhibit the lowest loss values.
In general, the loss evolution is unstable for all architectures for 7/7, with CODE-BiRNN/-BiGRU/-BiLSTM being the most stable and providing the lowest MSE_avg values throughout the epochs. Furthermore, ODE-RNN/-GRU/-LSTM show lower MSE_avg values than CODE-RNN/-GRU/-LSTM, corroborating the test results in Tables <ref>-<ref>.
For 15/15, CODE-RNN/-GRU/-LSTM show fastest convergence. The loss value of all models increases around epoch 40, suggesting the implementation of an early stopping criterion could be beneficial.
Backward extrapolation
We performed the backward extrapolation task to predict the stock's price at close for the past 7 or 15 days after receiving the stock's price at close for the next 7 or 15 days, denoted by 7/7 and 15/15.
The numerical results are shown in Table <ref> for architectures ODE-RNN, CODE-RNN and CODE-BiRNN, Table <ref> for architectures ODE-GRU, CODE-GRU, CODE-BiGRU and Table <ref> for architectures ODE-LSTM, CODE-LSTM and CODE-BiLSTM.
From Tables <ref>-<ref>, the performance of the models on the backward extrapolation task are similar to that observed for the forward extrapolation task, with CODE-BiRNN/-BiGRU/-BiLSTM demonstrating the highest performance for 7/7 and 15/15. Among these variants, CODE-BiGRU performs best.
As expected, when predicting longer time horizons, all architectures show an increase in the MSE_avg values compared to smaller time horizons.
Furthermore, in general, CODE-RNN/-GRU/-LSTM perform less effectively than ODE-RNN/-GRU/-LSTM.
For doing predictions backward in time, the ODESolve in ODE-LSTM receives the time interval in the reverse order.
§ CONCLUSION
In this work, we have introduced Neural CODE, a novel NN architecture that adjusts an ODE dynamics such that it models the data both forward and backward in time.
To accomplish this, Neural CODE minimises a loss function based on predictions forward, through solving an IVP, and backward, through solving an FVP, in time. With this, Neural CODE effectively leverages the connections between data by considering how previous and future values influence the current value, effectively exploiting the inter-dependencies within the data.
Furthermore, we have proposed two innovative recurrent architectures, CODE-RNN and CODE-BiRNN, which use Neural CODE to model the states between observations. Additionally, GRU and LSTM update cell variants were introduced in these architectures, giving rise to the CODE-GRU/-LSTM and CODE-BiGRU/-BiLSTM architectures.
Our experimental evaluation clearly demonstrates that Neural CODE surpasses Neural ODE in terms of learning the dynamics of a spiral ODE when making predictions both forward and backward in time. This highlights the significant impact of considering data from both directions on the performance of the trained models.
Furthermore, when applied to modelling real-life time series using the recurrent architectures CODE-RNN/-GRU/-LSTM and CODE-BiRNN/-BiGRU/-BiLSTM, our results indicate that CODE-BiRNN/-BiGRU/-BiLSTM stand out as the architectures with the highest accuracy, faster convergence, and the ability to reach the lowest training loss values.
These experimental findings provide strong evidence that Neural CODE-based architectures effectively double the available information for fitting the ODE, enhancing the optimisation algorithm's capabilities and resulting in faster convergence and superior performance. Learning the context from both past and future observations enables the capture of patterns and trends, leading to improved generalisation and increased resilience against possible existing data errors.
Overall, leveraging both the forward and backward solutions of an ODE proves to be a powerful approach for data fitting, offering superior performance, generalisation, and robustness compared to solely relying on forward solutions. However, it is crucial to consider the trade-off between these benefits and the associated computational resources and complexity.
While this work sheds light on the advantages of Neural CODE, there remains an open question regarding the impact of different merging operations on the performance of recurrent Neural CODE architectures, including CODE-RNN, CODE-GRU, CODE-LSTM, CODE-BiRNN, CODE-BiGRU, and CODE-BiLSTM. Further exploration in this area could provide valuable insights into optimising the performance of these architectures.
The authors acknowledge the funding by Fundação para a Ciência e Tecnologia (Portuguese Foundation for Science
and Technology) through CMAT projects UIDB/00013/2020 and UIDP/00013/2020.
C. Coelho would like to thank FCT for the funding through the scholarship with reference 2021.05201.BD.
|
http://arxiv.org/abs/2307.01701v1
|
20230704131603
|
Synthetic is all you need: removing the auxiliary data assumption for membership inference attacks against synthetic data
|
[
"Florent Guépin",
"Matthieu Meeus",
"Ana-Maria Cretu",
"Yves-Alexandre de Montjoye"
] |
cs.CR
|
[
"cs.CR",
"cs.AI"
] |
Synthetic is all you need: removing the auxiliary data assumption for membership inference attacks against synthetic data
Florent Guépin, Matthieu Meeus, Ana-Maria Cretu, Yves-Alexandre de Montjoye
August 1, 2023
===================================================================================
Synthetic data is emerging as the most promising solution to share individual-level data while safeguarding privacy. Membership inference attacks (MIAs), based on shadow modeling, have become the standard to evaluate the privacy of synthetic data. These attacks, however, currently assume the attacker to have access to an auxiliary dataset sampled from a similar distribution as the training dataset. This often is a very strong assumption that would make an attack unlikely to happen in practice. We here show how this assumption can be removed and how MIAs can be performed using only the synthetic data. More specifically, in three different attack scenarios using only synthetic data, our results demonstrate that MIAs are still successful, across two real-world datasets and two synthetic data generators. These results show how the strong hypothesis made when auditing synthetic data releases – access to an auxiliary dataset – can be relaxed to perform an actual attack.
§ INTRODUCTION
Data is crucial in statistical modeling, machine learning systems, and decision-making processes, driving research and innovation. However, data often pertains directly or indirectly to individuals and may contain sensitive information, such as medical records and financial transactions, raising privacy concerns.
Synthetic tabular data emerges as a promising solution to sharing data while limiting the risk of re-identification <cit.>.
A synthetic data generator is a statistical model fitted to the original, private data and used to generate synthetic records that are not traceable to any specific individual while retaining most of the statistical utility.
Extensive research has been dedicated to exploring a wide range of techniques for generating synthetic data <cit.>.
Since, if truly anonymous, synthetic data would fall outside the scope of data protection legislation such as the European Union's General Data Protection Regulation (EU GDPR) <cit.> or California Consumer Privacy Act <cit.>, various sectors including finance <cit.>, healthcare <cit.>, and research <cit.> are starting to explore its use.
However, synthetic data alone does not necessarily preserve privacy.
First, it is long known that aggregation alone does not effectively safeguard privacy <cit.> and second, achieving formal privacy guarantees for synthetic data generation models poses implementation challenges and comes at a cost in utility <cit.>.
Membership inference attacks (MIAs) have thus been used to evaluate the privacy-preservation of synthetic data in practice.
An MIA aims to infer if a specific target record is part of the generative model's training set. Recent work has shown that synthetic data is vulnerable to MIAs, with state-of-the-art attacks relying on the shadow modeling approach <cit.>.
This approach involves training a membership classifier to distinguish between synthetic datasets generated from so-called shadow datasets with or without a particular target record.
As such, these attacks require the attacker to have access to an auxiliary dataset that follows the same distribution as the original, private dataset, from which the attacker will sample their shadow datasets.
We here argue that this is often a strong assumption in practice.
While general datasets of images are widely available, medical datasets or datasets of financial transactions – some of the main use cases for synthetic tabular data – are not only not widely available but also very specific, e.g. to specific geographies, types of diseases, etc.
The practical feasibility of an attack is also an important criterion from a legal perspective when assessing what constitutes anonymous data.
Recital 26 of the EU GDPR <cit.> indeed states that “account should be taken of all the means reasonably likely to be used, such as singling out, either by the controller or by another person to identify the natural person directly or indirectly.”
Contribution.
In this work, we empirically show how synthetic data can effectively replace the auxiliary dataset when running MIAs, removing the strong assumption made so far by attacks and making the attack –in our opinion– reasonably likely from a legal perspective.
First, we consider an attacker with black-box access to the synthetic data generator, which we use to generate shadow datasets for running shadow modeling attacks.
We evaluate the shadow modeling attacks of Houssiau et al. <cit.> and Meeus et al. <cit.> on two real-world datasets and two synthetic data generators.
Our results show that MIAs based on synthetic data alone leak the membership of their most vulnerable records 65.5% of the time on average across datasets and generators.
This is 15.5 percentage points (p.p.) better than the random guess baseline.
We then compare the MIA performance to the previous setting that assumes access to an auxiliary dataset from the same distribution.
We find that our attacker only loses 11.6 p.p. when compared to this much stronger setting.
Second, we consider an even weaker attacker that only uses the released synthetic data to run shadow modeling attacks.
This attacker obtains an average accuracy of 62.8% or a 2.7 p.p. drop when compared to the black box access scenario. This result is especially meaningful as having access to the released synthetic dataset is an assumption almost always met in practice. Even here, we show the attack to still work 12.8 p.p. better than the random baseline of 50%.
Third, we identify a potential double counting issue when using synthetic data as auxiliary dataset which might lower the accuracy of an attack. We then formalize it and propose an empirical setup to evaluate an upper-bound on the accuracy of an attack solving the double counting issue. We show the upper-bound to reach 85.8%, higher by 8.7 p.p. than the auxiliary data scenario, emphasizing how synthetic only attacks might in the future outperform what is today considered the risk posed by a strong attacker.
MIAs are the main tool to evaluate the privacy-preserving capabilities of synthetic data. The strong auxiliary data assumption they currently rely on might however lead some to question the practical risk posed by these attacks and whether they are 'reasonably likely'. We here show how this assumption can be relaxed, as attackers having solely access to the synthetic data generator or even released synthetic data are still able to develop accurate attacks. We then find that attacks based on synthetic data only might actually outperform current attacks in the future if the double counting issue can be solved.
§ BACKGROUND AND RELATED WORK
§.§ Synthetic data generation
Suppose that an entity (e.g. governmental institution, company) seeks to grant a third party access to a private, tabular dataset D for analysis. This dataset consists of a collection of records, each corresponding to an individual, which we denote by D = {x_1, …, x_n}.
Each record consists of F features, where a feature is the value for a given attribute.
To address privacy risks, realizing that anonymizing record-level data often fails <cit.>, an increasingly popular approach involves generating and publishing a synthetic dataset <cit.>. Synthetic data is created by (1) fitting a statistical model to the original data, and (2) using this model to generate artificial (“synthetic”) records by sampling new values.
Ideally, the synthetic data should preserve key statistical properties of the original dataset D without disclosing private information of the individuals in D.
The statistical model employed for generating synthetic data is referred to as the synthetic data generator ϕ, and we write D^s ∼Φ, |D^s|=m to denote that a synthetic dataset of m records is sampled i.i.d. from the generator Φ.
The generator is fitted on a dataset D.
We write Φ = 𝒢(D) to say that a certain fitting procedure 𝒢 (e.g., parameter fitting of a Bayesian network) was applied to the original dataset D to obtain the generator Φ.
The generator can take various forms, such as a probabilistic model like Bayesian networks (BayNet) <cit.> and Synthpop <cit.> or a generative adversarial network like CTGAN <cit.>.
§.§ Membership inference attacks against synthetic tabular data
Membership inference attacks (MIAs) have become the standard to evaluate the privacy of synthetic data, machine learning (ML) models, and aggregation mechanisms more broadly.
Given the output of an aggregation mechanism, e.g., a synthetic dataset or a set of aggregate statistics computed on a private dataset D, the aim of an MIA is to infer whether a given target record x_T was part of D or not. Successful MIAs have been developed against aggregate statistics of e.g. location data <cit.>, genomic data <cit.>, and against ML models <cit.>.
For MIAs against synthetic tabular data, a first class of methods directly compares the synthetic records to the original records, searching for exact or near-matches <cit.>.
Stadler et al. <cit.> argue, however, that studies relying on similarity testing severely underestimate the risk, and instead propose an attack using the shadow modeling approach.
First introduced to evaluate the privacy of ML models <cit.>, the shadow modeling approach is now the state-of-the-art in synthetic data <cit.>.
Shadow modeling typically assumes that the attacker has knowledge of the model Φ_T used to generate the synthetic data and has access to an auxiliary dataset D_aux, where the auxiliary dataset comes from the same distribution as the original dataset, i.e. D_aux∼𝒟. The attacker then constructs multiple shadow datasets D_shadow from D_aux, such that |D_shadow| = |D|, with 50% of the shadow datasets containing the target record x_T and the other 50% containing a random record x_R instead. Next, by using the knowledge of the model Φ_T, the attacker trains multiple shadow generators Φ_shadow, which in turn produce synthetic shadow datasets D_shadow^s. As such, the attacker knows which D_shadow^s have been derived from a shadow dataset containing the target record x_T and which have not. This enables the attacker to train a binary meta-classifier on features extracted from the synthetic shadow datasets to predict membership. Figure <ref> illustrates how the shadow modeling technique is used to train the meta-classifier. Lastly, the meta-classifier is evaluated on similarly constructed synthetic shadow datasets for testing.
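To make the shadow-modeling procedure above concrete, the sketch below outlines one possible way of training the meta-classifier. The helper names fit_generator, generator.sample and extract_features are placeholders (standing in for the fitting procedure 𝒢, synthetic sampling, and a feature extractor such as count queries), and the default sizes are illustrative rather than the values used in the experiments reported later.

    # Schematic sketch of training the shadow-modeling meta-classifier.
    import numpy as np
    from sklearn.ensemble import RandomForestClassifier

    def train_meta_classifier(aux_data, target_record, fit_generator, extract_features,
                              n_shadow=400, shadow_size=1000, synth_size=1000, rng=None):
        rng = rng or np.random.default_rng(0)
        X, y = [], []
        for _ in range(n_shadow):
            idx = rng.choice(len(aux_data), size=shadow_size - 1, replace=False)
            shadow = [aux_data[i] for i in idx]
            member = rng.random() < 0.5                      # include the target in 50% of shadows
            shadow = shadow + ([target_record] if member else
                               [aux_data[rng.integers(len(aux_data))]])
            generator = fit_generator(shadow)                # Phi_shadow = G(D_shadow)
            synth = generator.sample(synth_size)             # D^s_shadow ~ Phi_shadow
            X.append(extract_features(synth, target_record)) # features of the synthetic shadow dataset
            y.append(int(member))
        meta = RandomForestClassifier(n_estimators=100, max_depth=10)
        meta.fit(np.array(X), np.array(y))
        return meta   # predicts membership from features of a released synthetic dataset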
Different techniques have been proposed to extract meaningful features from the synthetic shadow datasets to predict membership. Stadler et al. <cit.> proposed to extract aggregate statistics, specifically the mean and standard deviation of the attributes, and correlation matrices and histograms.
Houssiau et al. <cit.> extended this work with a query-based feature extractor, using k-way marginal statistics computed over the values of the target record for randomly selected subsets of attributes.
Lastly, Meeus et al. <cit.> developed the first trainable feature extractor, which uses (part of) the synthetic dataset directly as input to an attention-based classifier. The authors compared the two approaches, showing that the query-based method is the state-of-the-art attack on tabular records.
Synthetically generated data has been previously used to run attacks against machine learning models <cit.>, but these works require specific assumptions to be met.
In one experiment, Shokri et al. <cit.> assumed knowledge of the dataset marginals in order to generate synthetic data.
In another experiment, the same authors generated this data using local search techniques but the method is only effective when applied to binary records <cit.>.
Finally, Cretu et al. <cit.> generated synthetic datasets using the copula generative model that satisfy a subset of the correlations present in the private training dataset D, which are assumed to be known to the attacker.
Differently from these approaches targeting ML models, our work targets synthetic data and assumes zero additional attacker knowledge about the original dataset compared to what is provided by the data release.
State-of-the-art MIAs against synthetic data rely on the shadow modeling technique, which traditionally assumes that an attacker has access to an auxiliary dataset. However, this assumption is rarely met in practice. We here propose, to our knowledge, the first MIA against synthetic data that exclusively utilizes synthetic data to run the attack.
§ ATTACK SCENARIOS
We exclusively consider state-of-the-art MIAs, which are based on the shadow modeling technique.
We assume that the attacker has access to the synthetic dataset D^s ∼Φ_T, Φ_T = 𝒢(D), where Φ_T is referred to as the target generator.
The attacker aims to infer whether a particular record, referred to as the target record x_T, was part of the original dataset, i.e., whether x_T∈ D or x_T ∉ D.
We consider the standard setting under which the attacker knows the generative algorithm 𝒢 used to fit the statistical model on the original data.
To model the uncertainty of the attacker about the dataset, we consider three attack scenarios, in order of decreasing attacker strength: (S0) Auxiliary, where the attacker has access to an auxiliary dataset sampled from the same distribution, (S1) Black-box, where the attacker has access to the target generator Φ_T and can query it arbitrarily many times to sample new records, and (S2) Published, where the attacker only has access to the released synthetic dataset D^s.
Shadow modeling attacks, which achieve state-of-the-art performance, have so far only been studied in the first scenario.
§.§ (S0) Auxiliary
As a baseline, we consider the attack scenario as typically considered in MIAs against synthetic data <cit.>, where the attacker has access to an auxiliary dataset D_aux sampled from the same distribution 𝒟 as the private dataset D, i.e., D_aux∼𝒟. D_aux is then used to construct the n_shadow shadow datasets for training the meta-classifier, by sampling records of D_aux uniformly without replacement.
The meta-classifier is evaluated on n_test synthetic datasets.
The binary membership prediction is then aggregated across all n_test synthetic datasets to a final accuracy that is used as the MIA performance metric.
§.§ (S1) Black box
Next, we remove the access to auxiliary dataset assumption.
We consider an attacker who is able to query the target generator Φ_T for synthetic records, i.e. has black-box access to the generator.
This scenario could for instance arise in practice when the end user of the synthetic data benefits from having more synthetic records than there were present in the original dataset, e.g. to train ML models.
The attacker can then use the generator to sample new records that can be used to construct the shadow datasets. Specifically, an attacker will generate m synthetic records and use this as auxiliary information instead.
Note that unlike the standard setting (S0) Auxiliary, the shadow datasets and consequently the meta-classifier are now specific to the target generator on which it is evaluated.
This means that for each of the n_test target generators, the attacker needs to train n_shadow shadow datasets and a meta-classifier.
While in the standard setting n_shadow + n_test generators and one meta-classifier are trained to evaluate the attack against a target record, in this setting and the next, n_shadow× n_test generators and n_test meta-classifiers need to be trained.
For computational reasons, in our experiments for every target generator Φ_T we sample one dataset of m records, which we use to sample the shadow datasets.
§.§ (S2) Published
We further remove the access to the target generator Φ_T assumption.
The only knowledge about the original data available to the attacker is that of the released synthetic dataset D^s.
We denote its size by n_synthetic=|D^s| and assume that it is equal
to the size of the original, private dataset, n_synthetic = |D|.
In this scenario, the attacker can still train another generator Φ_S, using the synthetic dataset as training points,
i.e., Φ_S = 𝒢(D^s). With this new generator, the attacker is able to generate
new synthetic records to be used to construct the shadow datasets. We evaluate the MIA performance for this scenario in the same way as in scenario (S1) Black box above.
§.§ (S3) Upper bound
In the scenarios where we use synthetic data to construct the shadow datasets (S1 and S2), the case where x_T ∈ D leads to a double counting phenomenon which could hurt the attack performance.
Indeed, when x_T ∈ D, the synthetic data generator Φ_T = 𝒢(D) is influenced by the presence of the target record.
Recall that to run the shadow modeling approach using synthetic data, we first generate shadow datasets D_shadow, then append x_T to them 50% of the time, and finally train a generator on each of the datasets.
The effect of x_T on generators trained with or without it and hence the two “worlds” (presence or absence of x_T) are likely to be less distinguishable overall to the meta-classifier, compared to scenario S0, where the shadow datasets are sampled from clean i.i.d. data of the same distribution as D.
We call the influence of the presence of x_T on the generated synthetic data the trace of x_T.
We formalize this effect by first defining the concept of synthetic neighbours to then define the trace.
Let D = (x_1,⋯,x_n) be a dataset, then a neighbour dataset with respect to x_T will be such that ∃ k | D^T = (x_1,⋯, x_k,x_T,x_k+2,⋯,x_n) and x_k+1≠ x_T. We call synthetic neighbours the resulting synthetic datasets generated by the same generator model 𝒢 trained on the respective datasets. Namely, D^s,T∼Φ = 𝒢(D^T) and D^s ∼Φ = 𝒢(D) are called two synthetic neighbours.
Let 𝒟^s and 𝒟^s,T be two synthetic neighbours with respect to x_T. Then, the trace of x_T is defined as the impact of excluding (respectively including) the target record in the training dataset D (D∪{x_T} = D^T) of a synthetic data generator Φ = 𝒢(D)(Φ = 𝒢(D^T)) on the generated synthetic data D^s ∼Φ (D^s,T∼Φ), written |.|_Φ.
Formally, trace(x_T) = |𝒟^s - 𝒟^s,T|_Φ.
When using a synthetic dataset 𝒟^s,T to construct shadow datasets as part of the MIA, the shadow datasets will in turn contain x_T with 50% probability as well. In that situation, the meta-classifier will therefore be trained to recognize the trace of the trace of x_T, i.e.
|𝒟^s_2 - 𝒟^s,T_2|_Φ where 𝒟^s_2 ∼Φ = 𝒢(𝒟^s,T∪{x_random}) and 𝒟^s,T_2 ∼Φ = 𝒢(𝒟^s,T∪{x_T}). However, at inference time the meta-classifier is expected to recognize the single trace of x_T, i.e. |𝒟^s - 𝒟^s,T|_Φ. We call this the double counting phenomenon and intuitively expect it to affect the performance of the meta-classifier.
Note that this only happens when the target generator has seen the target during training (to generate 𝒟^s,T). When this is not the case, the resulting dataset 𝒟^s does not contain any trace of x_T, which enables the attacker to use 𝒟^s to construct the shadow datasets without encountering the double counting issue.
To avoid the double counting phenomenon, we here design a hypothetical attack, as a slight modification from scenario S1, where we artificially ensure that the target x_T is never seen during the training of the generator. Specifically, when the target is not seen during training, nothing changes, and the
attacker has black-box access to Φ, just as in scenario (S1: Black box). In contrast, for a target generator that has seen the target during training (the target generator will generate 𝒟^s,T), we ensure that the attacker has access to a neighbour synthetic dataset 𝒟^s.
This scenario serves as an upper bound for an MIA with access only to synthetic data, since now we artificially avoid the double-counting issue.
§ EXPERIMENTAL SETUP
In this section, we describe the synthetic data generative models and datasets against which we evaluate the attacks, the meta-classifier methods used, and the parameters of the attacks.
§.§ Synthetic data generators
Synthpop has been introduced by Nowok et al. <cit.> as an R package for synthetic data generation. It uses classification and regression trees to estimate conditional probabilities from the training dataset, then used to generate synthetic data. In our work, we utilize the Python re-implementation of Synthpop <cit.> from the reprosyn repository <cit.>.
BayNet utilizes a Bayesian Network to generate synthetic data. It represents the attributes of the training data as a Directed Acyclic Graph, capturing causal relationships. Each node in the graph has a conditional distribution ℙ[X|Parents(X)] estimated from the available data. Synthetic data is generated by sampling from the joint distribution obtained by multiplying the computed conditionals. We also use the implementation from the reprosyn repository <cit.>.
§.§ Real world datasets
UK Census, or the 2011 Census Microdata Teaching File <cit.>, was published by the Office for National Statistics, consists of a random sample representing 1% of the 2011 Census output database for England and Wales. This dataset comprises a total of n=569,741 records and includes F=17 categorical attributes.
Adult <cit.> is extracted from the 1994 US Census database. The dataset comprises n=45,222 records with F=15 attributes, 9 of which are categorical and 6 continuous.
§.§ Meta-classifier methods
When training a meta-classifier to perform MIAs against synthetic data, the feature extraction from the synthetic dataset is key. We consider two previously proposed methods:
Query based. Introduced by Houssiau et al. <cit.>, this state-of-the-art attack uses k-way marginal statistics, or count queries, computed over subsets of the attribute values of the target record from the synthetic dataset. We use 100,000 randomly sampled count queries of the 2^F possibilities and use a random forest classifier with 100 trees and maximum depth 10 to predict membership.
Target Attention. As introduced by Meeus et al. <cit.>, this method takes as input (part of) the synthetic dataset and is, in essence, the first trainable feature extractor. In a single neural network, the method computes record-level embeddings that, through a custom attention mechanism, are aggregated into a dataset-level embedding used to predict binary membership. We use the exact implementation laid out in the paper <cit.>.
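As a rough illustration of the query-based attack above, the sketch below extracts count-query features from a synthetic dataset and trains the meta-classifier; the hyper-parameters (100 trees, maximum depth 10) come from the text, while the helper names and the restriction to exact-match count queries over attribute subsets are our own simplifications.

```python
import numpy as np
import pandas as pd
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)

def sample_queries(target: pd.Series, n_queries: int):
    """Each query is a random subset of the target record's attributes."""
    cols = list(target.index)
    return [list(rng.choice(cols, size=int(rng.integers(1, len(cols) + 1)), replace=False))
            for _ in range(n_queries)]

def featurize(synth: pd.DataFrame, target: pd.Series, queries) -> np.ndarray:
    """Answer each count query on the synthetic dataset: how many synthetic
    records match the target's values on the queried attributes."""
    return np.array([(synth[q] == target[q]).all(axis=1).sum() for q in queries])

def train_meta_classifier(shadow_sets, target, queries):
    """shadow_sets: list of (synthetic_dataset, membership_label) pairs."""
    X = np.stack([featurize(s, target, queries) for s, _ in shadow_sets])
    y = np.array([label for _, label in shadow_sets])
    clf = RandomForestClassifier(n_estimators=100, max_depth=10, random_state=0)
    return clf.fit(X, y)
```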
§.§ Parameters of the attack
Table <ref> lists the parameters used throughout our experiments. Here, D_aux represents the auxiliary dataset and D_test the dataset used to sample the test datasets. Both are random, disjoint subsets of the entire dataset. Further, m represents the number of synthetic records queried from the trained generator, n_shadow the number of shadow datasets used for training the meta-classifier, and n_test the number of datasets used for testing.
Additionally, the size of the released synthetic dataset is equal to the size of the private dataset D, i.e., n_synthetic = |D| = 1000. We use the same dataset size for the shadow models, i.e. |D_shadow| = |D_shadow^s| = 1000. In the scenarios S1 and S3 where m > n_synthetic, we train the meta-classifier using shadow datasets randomly sampled from the m synthetic records. At inference time, we use a random subset of n_synthetic = 1000 synthetic records as input for the trained meta-classifier.
Both for constructing the n_shadow shadow datasets for training and n_test datasets for testing, we ensure that the target record x_T is present with 50% probability. This ensures that for binary prediction of membership the evaluation is balanced, with a random guess baseline of 50% accuracy.
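For concreteness, a minimal sketch of this shadow-dataset construction is given below; the generator interface (fit/sample), the helper name, and the choice of what serves as the attacker's `source` data (auxiliary records in S0, synthetic records in S1/S2) are assumptions and simplifications of the setup described above.

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(1)

def build_shadow_sets(source: pd.DataFrame, target: pd.Series, make_generator,
                      n_shadow=2000, size=1000, n_synthetic=1000):
    """Build shadow datasets for the meta-classifier, ensuring the target
    record x_T is present in the generator's training data 50% of the time."""
    shadow_sets = []
    for _ in range(n_shadow):
        base = source.sample(n=size - 1, random_state=int(rng.integers(10**9)))
        member = bool(rng.integers(2))            # x_T present with 50% probability
        extra = target.to_frame().T if member else source.sample(n=1)
        gen = make_generator()                    # any object with fit() and sample()
        gen.fit(pd.concat([base, extra], ignore_index=True))
        shadow_sets.append((gen.sample(n_synthetic), member))
    return shadow_sets
```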
Lastly, for each dataset, we run the attack on 10 vulnerable target records selected by using the vulnerable record identification method proposed by Meeus et al. <cit.>. For each record in the original dataset, the method computes its vulnerability score as the mean cosine distance, generalized across attribute types, to its five closest neighbours. The records that are the most separated from their closest neighbours, i.e. have the largest mean distance, are then selected as vulnerable records.
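The record-selection step can be sketched as follows; we assume the records have already been numerically encoded (e.g., one-hot), whereas the original method generalizes the distance across attribute types.

```python
import numpy as np
from sklearn.metrics.pairwise import cosine_distances

def vulnerability_scores(X: np.ndarray, k: int = 5) -> np.ndarray:
    """Mean cosine distance of each record to its k closest neighbours;
    larger scores indicate more isolated, more vulnerable records."""
    D = cosine_distances(X)
    np.fill_diagonal(D, np.inf)                 # exclude the record itself
    nearest = np.sort(D, axis=1)[:, :k]
    return nearest.mean(axis=1)

# e.g., the 10 most isolated records:
# top10 = np.argsort(-vulnerability_scores(X_encoded))[:10]
```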
§ RESULTS
In this section, we evaluate how the performance of the MIA varies across the attack scenarios over two generators and two datasets. The results are displayed and discussed separately for the two attack methodologies as described in <ref>.
§.§ Query based attack
We first use the state-of-the-art, query based attack method as introduced by Houssiau et al. <cit.> to compare the MIA performance across different scenarios.
We start by evaluating our weak attackers (S1) Black box and (S2) Published, where the attacker has no access to any real records that could be used to train the shadow generators.
Instead, the attacker trains the shadow generators on synthetic records obtained using black-box access to the target generator Φ_T (S1) and the released synthetic records (S2), respectively.
Figure <ref> shows that across datasets and generators, an attacker in the (S1) Black box scenario achieves an average accuracy of 65.5%.
This is 15.5 p.p. better than the random guess baseline of 50%.
This shows that the traditional, core assumption of having access to an auxiliary dataset can be removed while still achieving a successful MIA.
Next, we aim to make the attack as realistic as possible. To achieve that goal, we remove the assumptions for the attacker even further and now only consider access to the published synthetic dataset (S2) Published. Remarkably, we find that the MIA performance remains fairly constant when compared to the (S1) Black box scenario. Figure <ref> shows that across datasets and generators, we achieve an average accuracy of 62.8%, which is only 2.7 p.p. lower than the (S1) Black box scenario. These results empirically prove that MIAs against synthetic data can still be successful, i.e. 12.8 p.p. better than the random baseline, when only the released dataset is used. Given that releasing synthetic data instead of the original, private dataset is often the ultimate goal of generating synthetic data, we argue that scenario (S2) Published represents a minimal assumption that is almost always met in practice. Our results show that even in this realistic case, although the double counting phenomenon is still present, records detected by the vulnerable record identification method of Meeus et al. <cit.> are at risk.
We then compare the performance of our weak attacker (S1) Black-box to the much stronger (S0) Auxiliary attack scenario.
Recall that in the latter scenario, the attacker has access to an auxiliary dataset D_aux of real records from the same distribution as the target dataset D.
Fig. <ref> shows that our (S1) Black-box attacker achieves an accuracy 11.6 p.p. lower compared to the baseline scenario (S0) Auxiliary on average across datasets and generators.
This is expected for two possible reasons.
First, the synthetic data might not be representative of the original distribution 𝒟.
Thus, the training distribution of the meta-classifier from scenario (S1) Black-box, consisting of features extracted from shadow generators trained on unrepresentative data, might be quite different from the one on which it is evaluated (features extracted from shadow generators trained on real data), leading to worse performance.
Scenario (S0) Auxiliary does not suffer from this issue, since the meta-classifier is trained on features extracted from shadow generators trained on subsets of D_aux, which was itself sampled from the same underlying distribution 𝒟 that the test datasets follow.
Second, there is potentially a double counting issue, which we investigate next.
Our results in the (S3) Upper bound scenario demonstrate that the double counting phenomenon, and not the use of synthetic data for the shadow modeling approach, is the main reason affecting the performance of the weaker attackers.
We find that on average, across datasets and generators, this attacker achieves an accuracy of 85.8%, which is 8.7 p.p. higher than for the (S0) Auxiliary scenario.
Since the attacker uses only synthetic data to generate the meta-classifier training distribution, if synthetic data were not representative enough, the performance of (S3) Upper bound would be lower than that of (S0) Auxiliary, whereas it is in fact higher.
We conclude that synthetic data is representative enough to allow performing an actual attack which can even outperform the one using an auxiliary dataset.
Finally, Figure <ref> shows that (S3) Upper bound achieves an MIA accuracy 20.3 p.p. higher than (S1) Black box.
By constructing the setup as such that the target generator has never seen the target record, we have thus eliminated the effect of double counting as raised by scenarios (S1) Black box and (S2) Published.
Our results thus suggest that fixing the double counting issue could, in the future, bridge the gap between our weak attackers and the (S3) Upper bound scenario.
Table <ref> summarizes all results of the experiments for the query based attack.
§.§ Target attention attack
In this section, we evaluate if the MIA performance against synthetic data across the different attack scenarios is consistent across attack methods. Specifically, we now use the target attention attack method as proposed by Meeus et al. <cit.>. The results in figure <ref> show similar trends as for the query-based attack.
First, we find that, also when using the target attention attack method, the weak attacker in scenario (S1) Black box achieves a successful MIA. Specifically, across datasets and generators, the average accuracy of the MIA lies at 63.3 %, which is 13.3 p.p. better than the random baseline of 50%. This confirms that after removing access to the auxiliary dataset, even when using a distinct attack method, records remain vulnerable against MIAs.
Second, when relaxing the black-box access assumption and only considering having access to the released synthetic dataset (S2) Published, the MIA using the target attention method achieves 60.2%. This comes down to a drop by 3.1 p.p. in accuracy compared to the baseline (S1) Black box. These results show that the most realistic scenario, across generators, datasets and attack methods, can be considered as a realistic threat with a performance significantly above the random guess baseline.
Next, we find that the difference between the baseline scenario (S0) Auxiliary and our default scenario, in which the attacker has black-box access to the target generator Φ_T ((S1) Black box), is on par with the results for the query based attack. Across datasets and generators, the average accuracy drops by 8.5 p.p. while still reaching an average score of 63.3%.
Finally, in the (S3) Upper bound scenario, we confirm our findings that the double counting phenomenon is the main reason affecting the performance of the weaker attackers, also when using the target attention method. Across generators and datasets, the MIAs in this scenario achieve an average of 81.2% accuracy, which is 9.4 p.p. higher than the (S0) Auxiliary and 17.9 p.p. higher than (S1) Black box.
In conclusion, the fact that our findings are consistent across two very distinct attack methods suggests that even when new attack methods are developed, MIAs against synthetic data using only synthetic data will be successful.
Table <ref> summarizes all results of the experiments for the target attention attack.
§ FUTURE WORK
§.§ Impact of the number of synthetic records
Intuitively, the more synthetic records are generated for a fixed number of training records, the more information is released, which could lead to more privacy leakage for each record within the training set. Throughout our experiments, we used two scenarios in which the attacker could generate an unlimited number of records, namely (S1) Black box and (S3) Upper bound. For computational reasons, we only considered m=20000.
In scenarios (S0) Auxiliary and (S2) Published, the attacker only has access to a limited number of synthetic records m=|𝒟|. As in practice, synthetic data is often used to replace the private, original dataset, we argue that it is reasonable to generate the same amount of synthetic records as the number of training records.
However, we hypothesize that releasing fewer synthetic records for a fixed size of the training dataset, namely m < |𝒟|, could reduce the privacy risk. We leave the evaluation of this potential trade-off between m and the privacy risk of the generator for future work.
§.§ Differentially private synthetic generation methods
In this work, we aim to show that it is possible to attack a synthetic data generator based only on the generated synthetic data and compare this attack performance to the baseline scenario (S0) Auxiliary.
However, we did not evaluate our attacks on differentially private generators <cit.>. We motivate this choice by the fact that attacks against differentially private generators, with low value of epsilon (e.g. ϵ=1), achieve formal privacy guarantees and thus by definition bound the maximum MIA performance, including for the (S0) Auxiliary scenario <cit.>. In order to properly evaluate the removal of the auxiliary data assumption, we instead wanted to have a successful MIA baseline and thus focused on generators without formal guarantees.
Hence, how our findings transfer to differentially private generators remains for future work.
§.§ Bridging the gap with the upper bound
Our results show that the (S3) Upper bound scenario achieves very high MIA performance, while still only using synthetic data. We leave for future work how the MIA performance for scenarios (S1) Black box and (S2) Published could be brought closer to this upper bound.
Potentially, an attacker could remove the synthetic records close to the target record, prior to using the synthetic data to construct the shadow models, to reduce the impact of the double counting phenomenon. Additionally, note that in the (S1) Black box scenario, we currently train the meta classifier using shadow datasets randomly sampled from m=20000 synthetic records, to then infer a prediction on a random subset of n_synthetic = 1000 synthetic records. An attacker could for instance infer the prediction on multiple subsets of the m synthetic records to potentially make a more optimal, ensemble prediction.
§ CONCLUSION
Sharing data plays a pivotal role in research and innovation across industries. Increasingly, synthetic data has been proposed to share privacy-preserving tabular data, by breaking the individual level information while retaining data utility.
Membership Inference Attacks (MIAs) are the standard to audit the privacy preservation of synthetic data, and recent work has shown that these attacks can successfully infer the membership of certain records in the original, private dataset. State-of-the-art MIAs rely on shadow modeling, which traditionally assumes an attacker to have access to an auxiliary dataset.
On the one hand, this auxiliary data assumption is often not met in practice. On the other hand, GDPR Recital 26 <cit.> states that, to meet anonymization standards on a legal level, all means reasonably likely to be used by an attacker should be considered.
We here bridge the gap to a more realistic attack by removing the auxiliary data assumption. Across two real world datasets and two synthetic data generators, we find that MIAs are still successful when using synthetic data only.
Specifically, we find that on average, an attacker with black box access to the generator achieves 65.5% accuracy, while an attacker with only access to the released synthetic dataset attains an accuracy of 62.8%. The latter result is particularly significant, as it demonstrates that an attacker can extract sensitive information from a released synthetic dataset without any additional information.
Moreover, by addressing the double counting issue, we establish the theoretical upper bound for MIA accuracy against synthetic data when only synthetic data is available, which stands at 85.8%. This finding highlights the potential for future researchers or practical attackers to bridge the existing gap and further improve MIA performance.
Our results provide compelling evidence that MIAs against synthetic data pose a realistic threat in practice. We hope this helps researchers and practitioners to better understand the realistic privacy risk associated with releasing synthetic data, while encouraging the development of methods to address these concerns.
Acknowledgements We acknowledge computational resources and support provided by the Imperial College Research Computing Service[<http://doi.org/10.14469/hpc/2232>.].
entry_id: http://arxiv.org/abs/2307.01417v1
published: 20230704004830
title: Free energy of Bayesian Convolutional Neural Network with Skip Connection
authors: ["Shuya Nagayasu", "Sumio Watanabe"]
primary_category: cs.LG
categories: ["cs.LG", "stat.ML", "62F15"]
Free energy of Bayesian Convolutional Neural Network with Skip Connection
Shuya Nagayasu and Sumio Watanabe
Department of Mathematical and Computing Science
Tokyo Institute of Technology,
Mail-Box W8-42, 2-12-1, Oookayama,
Meguro-ku, Tokyo,
152-8552, Japan
================================================================================================================================================================================================
Since the success of the Residual Network (ResNet), many architectures of Convolutional Neural Networks (CNNs) have adopted skip connections. While the generalization performance of CNNs with skip connections has been explained within the framework of ensemble learning, its dependency on the number of parameters has not been revealed. In this paper, we derive the Bayesian free energy of Convolutional Neural Networks both with and without skip connections. The upper bound of the free energy of a Bayesian CNN with skip connections does not depend on the overparametrization, and the generalization error of the Bayesian CNN has a similar property.
Learning theory; Convolutional Neural Network; Bayesian Learning; Free Energy
§ INTRODUCTION
Convolutional Neural Networks (CNNs) are a type of neural network mainly used for computer vision. CNNs have shown high performance with deep layers <cit.>. The Residual Network (ResNet) <cit.> adopted the skip connection to address the problem that the loss function of a CNN with deep layers does not decrease well during optimization. After the success of ResNet, CNNs with more than 100 layers became practical. The high performance of ResNet has been explained by its similarity to ensemble learning <cit.>. On the other hand, a common open issue in neural networks is that the reason why overparametrized deep neural networks generalize well remains unknown.
In conventional learning theory, if the Fisher information matrix of a learning machine is positive definite and the data size is sufficiently large, the generalization error of the maximum likelihood estimator is determined by the number of parameters <cit.>. A similar property holds for the free energy and generalization error in Bayesian learning <cit.>. From these characteristics of the generalization error and free energy, information criteria such as AIC, BIC, and MDL have been proposed. However, most hierarchical models, such as neural networks, have a degenerate Fisher information matrix. In such models, the Bayesian generalization error and free energy are determined by a rational number called the Real Log Canonical Threshold (RLCT), which is smaller than the number of parameters <cit.>. RLCTs have been revealed for several concrete models, such as three-layered neural networks <cit.>, normal mixtures <cit.>, Poisson mixtures <cit.>, Boltzmann machines <cit.>, reduced rank regression <cit.>, Latent Dirichlet Allocation <cit.>, matrix factorization, and Bayesian networks <cit.>. While the RLCTs of many hierarchical models have been clarified, those of neural networks with multiple layers of nonlinear transformations had not; the possibility of such an analysis was shown in <cit.>, and the RLCT of Deep Neural Networks was derived in <cit.>. On the other hand, the RLCTs of neural networks other than DNNs have not been explored.
In Bayesian learning for neural networks, how to realize the posterior is important. Two approaches exist for generating the posterior: variational approximation and Markov chain Monte Carlo (MCMC) methods. Among variational approaches for neural networks, the Variational Autoencoder <cit.> and Monte Carlo dropout <cit.> are used in practice. For CNNs, a variational approach to Bayesian inference was also proposed <cit.>. Among MCMC methods for neural networks, Hamiltonian Monte Carlo and Langevin dynamics are useful for sampling from the posterior. Stochastic Gradient Langevin Dynamics (SGLD) <cit.>, an MCMC method that applies stochastic gradient descent instead of gradient descent to Langevin dynamics, is a popular MCMC method for Bayesian neural networks. <cit.> used SGLD to generate the posterior of CNNs.
In this paper we clarify the free energy and generalization error of Bayesian CNNs with and without skip connections. In both cases the free energy and generalization error do not depend on the number of parameters in redundant filters. In the case without skip connections, the redundant layers affect the free energy and generalization error, whereas they do not in the case with skip connections. This paper consists of seven main sections and one appendix. In Section <ref>, we describe the setting of the Convolutional Neural Network analyzed in this paper. In Section <ref>, we explain the basic terms of Bayesian learning. In Section <ref>, we state the main theorems of this paper. In Section <ref>, we conduct experiments on synthetic data. In Sections <ref> and <ref>, we discuss the theorems and conclude. In Appendix <ref>, we prove the main theorems.
§ CONVOLUTIONAL NEURAL NETWORK
In this section we describe the function of Convolutional Neural Network. First, we explain CNN without skip connection. The kernel size is 3 × 3 with zero padding and 1-stride. The activation function is ReLU. The numbers of the layers of the CNN are K_1 (≥ 3) for Convolutional Layers and K_2 (≥ 3) for Fully Connected Layers.
Let x ∈^L_1 × L_2 × H_1 be an input vector generated from q(x) with bounded support and y ∈ be an output vector with q(y|x) whose support is {1, … ,H_K_1 + K_2}. We define w^(k)∈^3 × 3 × H_k - 1× H_k, b^(k)∈^H_k as weight and bias parameters in each Convolutional Layer (2 ≤ k ≤ K_1). f^(k)∈^L_1 × L_2 × H_k is output of each layer for 1 ≤ k ≤ K_1. Conv(f,w) is the convolution operation with zero padding and 1-stride:
Conv(f^(k-1), w^(k))_l_1,l_2,h_k = ∑_h_k-1∑_p=1^3∑_q=1^3 f_l_1 + p - 1, l_2 + q - 1, h_k - 1 w_p,q,h_k-1,h_k.
We define g(b^k): ^H_k→^L_1 × L_2 × H_k as
g(b^(k))_l_1,l_2 = b^(k)
for 1 ≤ l_1 ≤ L_1, 1 ≤ l_2 ≤ L_2. By using w^(k) , g(b^(k)), and f^(k-1), f^(k) is described by
f^(k)(w,b,x) = σ(Conv(f^(k-1)(w,b,x),w^(k)) + g(b^(k)))
where w, b are the set of all weight and bias parameters. σ() is a function that applies the ReLU to all the elements of the input tensor.
The output of k = K_1 + 1 layer is result of Global Average Pooling on k = K_1 layer:
f^(K_1 + 1)(w,b,x) = 1/L_1L_2∑_l_1= 1^l_1= L_1∑_l_2 = 1^l_2 = L_2 f^(K_1)(w,b,x)_l_1,l_2.
Let w^(k)∈^H_k×^H_k - 1, b^(k)∈^H_k be weight and bias parameters in each Fully Connected Layer (K_1 + 2 ≤ k ≤ K_1 + K_2).
For K_1 + 2 ≤ k ≤ K_1 + K_2 - 1, f^(k) is defined by
f^(k)(w,b,x) = σ(w^(k)f^(k - 1)(w,b,x) + b^(k)),
and for k = K_1 + K_2,
f^(K_1 + K_2)(w,b,x) = softmax(w^(k)f^(k - 1)(w,b,x) + b^(k)),
where softmax() is a softmax function
softmax(z)_i = e^z_i/∑_j = 1^Je^z_j.
The output of the model is represented stochastically
y ∼Categorical(f^(K_1 + K_2)(w,b,x))
where Categorical() is a categorical distribution.
Then we describe the CNN with skip connection. The number of layers within a skip connection is K_s and the number of skip connections is M. The output of a layer with a skip connection is described by
f^(mK_s + 2)(w,b,x) = σ(Conv( f^(mK_s + 1)(w,b,x), w^(mK_s + 2))
+ g(b^(mK_s + 2)) + f^((m - 1)K_s + 2)(w,b,x)).
In this case, CNN satisfies the following conditions
K_1 = MK_s + 2
H^mK_s + 2 = const (1 ≤ m ≤ M) .
The other conditions are the same as the case without skip connection.
Figure <ref> shows the configuration of the Convolutional Neural Network analyzed in this paper.
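For readers who prefer code, the following is a minimal sketch of the network just described: 3x3 convolutions with zero padding and stride 1, ReLU, an optional skip connection every K_s layers, global average pooling, and fully connected layers ending in a softmax. The channel widths, the use of PyTorch, and the exact placement of the residual additions are our own illustrative choices, not prescriptions of the paper.

```python
import torch
import torch.nn as nn

class SkipCNN(nn.Module):
    """Sketch of the CNN of this section, with an optional skip connection."""
    def __init__(self, in_ch=3, width=16, n_classes=10,
                 n_conv=8, n_fc=3, skip_every=2, use_skip=True):
        super().__init__()
        self.convs = nn.ModuleList(
            [nn.Conv2d(in_ch if k == 0 else width, width, 3, padding=1)
             for k in range(n_conv)])
        self.fcs = nn.ModuleList(
            [nn.Linear(width, width) for _ in range(n_fc - 1)] +
            [nn.Linear(width, n_classes)])
        self.skip_every, self.use_skip = skip_every, use_skip

    def forward(self, x):
        residual = None
        for k, conv in enumerate(self.convs):
            h = conv(x)
            if self.use_skip and residual is not None and k % self.skip_every == 0:
                h = h + residual              # skip connection every K_s layers
            x = torch.relu(h)
            if k % self.skip_every == 0:
                residual = x
        x = x.mean(dim=(2, 3))                # global average pooling
        for fc in self.fcs[:-1]:
            x = torch.relu(fc(x))
        return torch.softmax(self.fcs[-1](x), dim=1)   # categorical output
```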
§ FREE ENERGY IN BAYESIAN LEARNING
§.§ Bayesian Learning
Let X^n = (X_1, ⋯ X_n) and Y^n = (Y_1, ⋯ Y_n) be training data and labels. n is the number of the data. These data and labels are generated from a true distribution q(x,y) = q(y|x)q(x). The prior distribution φ(w), the learning model p(y|x,w) is given on the bounded parameter set W. Then the posterior distribution is defined by
p(w|X^n,Y^n) = 1/Z(Y^n|X^n)φ(w)∏_i=1^np(Y_i|X_i, w)
where Z_n = Z(Y^n|X^n) is normalizing constant denoted as marginal likelihood:
Z_n = ∫φ(w)∏_i=1^np(Y_i|X_i,w)dw.
The free energy is negative log value of marginal likelihood
F_n = - log Z_n.
Free energy is equivalent to evidence and stochastic complexity.
The posterior predictive distribution is defined as the average of the model by posterior:
p^*(y|x) = p(y|x , X^n,Y^n) = ∫ p(y|x,w) p(w|X^n,Y^n)dw.
Generalization error G_n is given by Kullback-Leibler divergence between the true distribution and posterior distribution as follows
G_n = ∫ q(y|x)q(x) logq(y|x)/p^*(y|x) dx dy.
The average generalization error equals the difference between the average free energy at n+1 and at n:
[G_n] - S = [F_n+1] - [F_n],
where [f(X^n,Y^n)] denotes the average over the generation of the n data, _X^n,Y^n[f(X^n,Y^n)].
§.§ Asymptotic property of Free energy and Generalization error
It is well known that if the average Kullback-Leibler divergence
K(w) = ∫ q(y|x)q(x) logq(y|x)/p(y|x,w)dx dy.
can be approximated by a quadratic form, in other words, if the Laplace approximation can be applied to the posterior distribution, then the average free energy has the following asymptotic expansion, with d the number of parameters of the learning model <cit.>
E[F_n] = n(S + Bias) + d/2log n + O(1)
where S is entropy of true distribution and Bias is the minimum value of K(w) for w ∈ W. The generalization error is calculated from Free energy by using equation(<ref>) <cit.>:
E[G_n] = Bias + d/2n + o(1/n).
The Laplace approximation cannot be applied to the average Kullback-Leibler divergence of hierarchical models such as Gaussian mixtures or neural networks because of the degeneration of the Fisher information matrix. In such models, the average free energy and generalization error have the following asymptotic expansions <cit.>
E[F_n] = n(S + Bias) + λlog n + o(log n),
E[G_n] = Bias + λ/n + o(1/n),
where λ is a rational number called the Real Log Canonical Threshold (RLCT). In particular, <cit.> showed that, in the case Bias = 0 with bounded x, when the Deep Neural Network is trained from data generated by a smaller network,
λ≤d^*/2
where d^* ≤ d is the number of parameters of the data generating network.
§ MAIN THEOREM
In this section, the main results of this paper are introduced. First, to state the theorems, we define the data generating network. Both with and without skip connection, the data generating network satisfies the following conditions on the numbers of layers and filters,
K^*_1 ≤ K_1, K^*_2 ≤ K_2, (H^*)^(1) = H^(1) (H^*)^(K_1) = H^(K_1 + K_2)
and
H^(k) ≥ (H^*)^(K^*_1) (K^*_1 + 1 ≤ k ≤ K_1)
H^(k) ≥ (H^*)^(K_1 + K^*_2) (K_1 + K^*_2 + 1 ≤ k ≤ K_1 + K_2 - 1)
H^(k) ≥ (H^*)^(k) (others) .
Then, we show the main theorem.
(No Skip connection)
Assume that the learning machine and the data generating distribution
are given by p(y|x,w,b) and q(y|x)=p(y|x,w^*,b^*) in case without skip connection which satisfy the conditions
(<ref>) and (<ref>), and that a training data
{(X_i,Y_i) i=1,2,...,n} is independently taken from q(x)q(y|x). Then
the average free energy satisfies the inequality,
[F_n]≤ nS+ λ_CNNlog n +C
where
λ_CNN=1/2(|w^*|_0 + |b^*|_0 + ∑_k = K^*_1 + 1^K_1(9H_K^*_1 + 1)H_K^*_1)
where |w^*|_0 , |b^*|_0 are the numbers of parameters of weights and biases in data generating network.
(Skip connection)
Assume that the learning machine and the data generating distribution
are given by p(y|x,w,b) and q(y|x)=p(y|x,w^*,b^*) in case with skip connection which satisfy the conditions (<ref>), (<ref>) and (<ref>), and that a training data
{(X_i,Y_i) i=1,2,...,n} is independently taken from q(x)q(y|x). Then
λ_CNN=1/2(|w^*|_0 + |b^*|_0)
Proofs of the main theorems are given in Appendix <ref>.
If an asymptotic expansion of the generalization error [G_n] exists in Theorem <ref> and Theorem <ref>, it satisfies the following inequality
[G_n] ≤λ_CNN/n + o(1/n),
where
G_n = ∫ q(x) ∑_i = 1^H_K_1 + K_2 f_i^(K^*_1 + K^*_2)(w^*,b^*,x) log( f_i^(K^*_1 + K^*_2)(w^*,b^*,x) / 𝔼_w,b[ f_i^(K_1 + K_2)(w,b,x) ] ) dx,
which corresponds to the categorical cross entropy.
§ EXPERIMENT
In this section, we show the result of experiment of synthetic data.
§.§ Methods
We prepared the 2-class labeled simple data shown in Fig. <ref>. The data are x ∈ℝ^4 × 4 × 1 and the values of the elements lie in (-1,1). The average of each element is 0.5 or -0.5, with additive noise drawn from a truncated normal distribution on the interval (-0.5,0.5). The probability of each label is 0.5. We trained a CNN with K_1 = 2 convolutional layers and K_2 = 2 fully connected layers using SGD. The number of filters is H_2 = 2 and the parameters are L_2 regularized. We use this trained CNN, named the "true model", as the data generating distribution. Note that the labels of the original data in Fig. <ref> are deterministic, but the labels of the true model are probabilistic. We prepare three learning CNN models with K_1 = 2, 3, 4 convolutional layers, respectively. Each model either has a skip connection every layer or has no skip connection. The number of filters in each layer is H^(k) = 4. The models have K_2 = 2 fully connected layers. The prior distribution is a Gaussian distribution whose covariance matrix is 10^4 I for the weight parameters and 10^2 I for the bias parameters.
We train the learning CNN models using Langevin dynamics. The learning rate is 10^-2 and the sampling interval is 100. We use the average of 1000 samples of the learning CNN models as the posterior mean. We estimate the generalization error by the test error on 10000 test data drawn from the true model. We trained each learning model 10 times and estimated [G_n] from the average of the test errors.
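A single SGLD update used to sample the posterior can be sketched as follows; the rescaling of the mini-batch likelihood by n/m and the Gaussian prior term mirror the setup above, while the function signature and step-size handling are our own simplifications.

```python
import torch

def sgld_step(params, minibatch_nll, n_data, batch_size, lr=1e-2, prior_var=1e4):
    """One Stochastic Gradient Langevin Dynamics update on the negative log
    posterior U: theta <- theta - (lr/2) * grad U(theta) + N(0, lr).
    `minibatch_nll` is the summed negative log-likelihood on the mini-batch,
    computed with the current `params` (leaf tensors with requires_grad=True)."""
    U = minibatch_nll * (n_data / batch_size)                        # rescale likelihood
    U = U + sum((p ** 2).sum() / (2.0 * prior_var) for p in params)  # Gaussian prior
    grads = torch.autograd.grad(U, params)
    with torch.no_grad():
        for p, g in zip(params, grads):
            p.add_(-0.5 * lr * g + lr ** 0.5 * torch.randn_like(p))  # drift + noise
    return params
```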
§.§ Result of experiments
Table <ref> shows the results of the experiment. Test Error shows n times the average of 10 test errors for each model, together with their standard error. d_model is the number of parameters of each model. All the learning CNN models include the true model, hence the bias is 0. Then, from equation (<ref>), the theoretical upper bound of the generalization error is λ_CNN / n. In Table <ref>, the experimental values of all models are smaller than d_model / 2.
Moreover, in the case with skip connection, the experimental value did not increase much with the number of layers. In the case K_1 = 4 without skip connection, the experimental value increased compared with the case K_1 = 2. In the case K_1 = 3 without skip connection, the experimental value is smaller than that of K_1 = 2. The behavior of MCMC is considered to be the cause of this result: since MCMC in a high-dimensional model generally needs a long chain for convergence, the result deviates from the theoretical prediction.
§ DISCUSSION
§.§ Difference with or without Skip Connection
In this paper, to analyze the overparametrized CNN, the data generating network is smaller than the learning network in both cases, with and without skip connection. Although the data generating networks of the two cases differ, if the learning network has twice as many filters H^(k) as the data generating network in each convolutional layer, it can represent the generating network of the other case. The output of each layer is nonnegative, hence the model can represent the skip connection or its negative. If the learning network does not have more layers than the data generating network, the free energy of the CNN with skip connection can be either larger or smaller than that without skip connection, depending on the data generating network. As the number of layers of the learning network grows, the free energy of the CNN with skip connection does not change, whereas that without skip connection grows, so the CNN with skip connection eventually has the smaller free energy for every data generating network.
§.§ Comparison to Deep Neural Network
First, we compare the result of this paper to that of DNNs in <cit.>. In the case of DNNs, the free energy does not depend on the number of layers of the learning model but only on that of the data generating network. This is because the linear transformation in a lower layer can be represented in a higher layer. The convolution operation does not have such a property; hence the free energy of the CNN without skip connection depends on the number of layers of the learning network. However, with skip connection, there exists an essential parameter set that does not depend on the overparametrized layers, and the free energy likewise does not depend on the number of layers of the learning network.
§ CONCLUSION
In this paper, we studied the free energy of the Bayesian Convolutional Neural Network with skip connection
and compared it to the case without skip connection. The free energy of the Bayesian CNN with skip connection does not depend on the number of layers of the model, unlike the case without skip connection. In Bayesian learning, the increase of the free energy is equivalent to the generalization error, hence the generalization error has the same property with respect to the skip connection. In particular, the free energy of the CNN with skip connection does not depend on the number of parameters in the learning network but only on that in the data generating network. This shows that the generalization ability of the CNN with skip connection does not decrease under any overparameterization in Bayesian learning.
§ PROOF OF MAIN THEOREM
In this Appendix, we show the proof of main theorem.
§.§ Inequalities
Note that we denote the Frobenius norm of a tensor of any order by ‖·‖.
We denote the Kullback-Leibler divergence of a data-generating distribution
q(y|x)=p(y|x,w^*,b^*) and
a model p(y|x) that
K(w,b)=∫ q(x)q(y|x)logq(y|x)/p(y|x,w,b)dxdy.
<cit.>
Assume that a set W is contained in the set determined by
the prior distribution {(w,b);φ(w,b)>0}. Then for an
arbitrary postive integer n,
[F_n]≤ nS -log∫_Wexp(-n K(w,b))φ(w,b) dw db.
<cit.>
For arbitrary vectors s,t,
σ(s)-σ(t)≤s-t.
<cit.>
For arbitrary w,w', b, b', and K_1 + 1 ≤ k ≤ K_1 + K_2, the following inequality holds,
f^(k)(w,b,x)-
f^(k)(w',b',x)
≤w^(k)-w'^(k)f^(k-1)(w,b,x)
+ b^(k)-b'^(k)
+ w^(k)f^(k-1)(w,b,x)-f^(k-1)(w',b',x).
For arbitrary w,w', b, b', and 1 ≤ k ≤ K_1, the following inequality holds,
f^(k)(w,b,x)-
f^(k)(w',b',x)
≤ 9w^(k)-w'^(k)f^(k-1)(w,b,x)
+ L_1 L_2 b^(k)-b'^(k)
+ 9w^(k)f^(k-1)(w,b,x)-f^(k-1)(w',b',x)
+ δ^(k)w^(k)f^(k - K_2 -1)(w,b,x)-f^(k - K_2 -1)(w',b',x).
where δ^(k) equals to 1 if the network has Skip connection and k = mK_2 + 2, otherwise it equals to 0
f^(k)(w,b,x)-
f^(k)(w',b',x)
=σ(Conv(f^(k-1)(w,b,x),w^(k))+ g(b^(k)))
-σ(Conv(f^(k-1)(w,b,x),w'^(k)))+ g(b'^(k)))
+σ(Conv(f^(k-1)(w,b,x),w'^(k)) + g(b'^(k)))
-σ(Conv(f^(k-1)(w',b',x),w'^(k)) + g(b'^(k))).
From definition of Conv(), the following equation holds.
Conv(f^(k-1)(w,b,x),w^(k))_j ≤∑_i = 1^H^k - 1f^(k-1)(w,b,x)_i|(∑_p = 1^3∑_q = 1^3 w^(k)_pqij)|_1
≤ 9f^(k-1)(w,b,x)w_:,:,:,j
By using lemma<ref>, (<ref>) and (<ref>), corollary<ref> is proved.
For arbitrary w,b,x,
f^(k)(w,b,x) ≤ D_kw^(k)w^(k-1)⋯w^(2)x
+ D_0b^(k)
+ ∑_j=1^k-2 D_jw^(k)w^(k-1)⋯w^(k-j)b^(k-j).
where D_j , 0≤ j ≤ k is constant.
By considering the case all the parameters of w' and b' are 0,
in Lemma <ref>, it follows that
f^(k)(w,b,x) ≤ 9w^(k)f^(k-1)(w,b,x) +L_1L_2b^(k)
+ δ^(k)w^(k)f^(k - K_2 -1)(w,b,x)-f^(k - K_2 -1)(w',b',x).
Then mathematical induction gives the Lemma.
§.§ Notations of parameters
In order to prove the main theorem, we need several notations. We divide the filters of learning model in each convolutional layer 1 ≤ h^(k)≤ H^(k) into the 1 ≤ h^(k)≤ (H^*)^(k) and (H^*)^(k) + 1 ≤ h^(k)≤ H^(k). The former is denoted as A and the later is denoted as B.
The convergent tensor E^(k)∈^3 × 3 × H^(k-1)× H^(k) and vector E_0^(k)^H^(k)
where the absolute value of all elements are smaller than 1 / √(n) are denoted by
E^(k)_pq =
([ E_pqAA^(k) E_pqAB^(k); E_pqBA^(k) E_pqBB^(k) ]), (1 ≤ p ≤ 3 , 1 ≤ q ≤ 3),
E_0^(k) =
([ E_A0^(k); E_B0^(k) ]).
The positive constant tensor M^(k) and vector M_0^(k) are defined by the condition that
all elements are in the inverval [A,B],
M^(k)_pq =
([ M_pqAA^(k) M_pqAB^(k); M_pqBA^(k) M_pqBB^(k) ]), (1 ≤ p ≤ 3 , 1 ≤ q ≤ 3),
M_0^(k) =
([ M_A0^(k); M_B0^(k) ]).
To prove Theorem <ref> <ref>, we show an upper bound of
[F_n] is given
by choosing a set W_E which consists of essential weight and bias parameters in Convlutional Layers and Fully connected layers.
§.§ No Skip Connection Case
Definition. (Essential parameter set W_E without Skip Connection).
A parameter (w,b) is said to be in an essential parameter set W_E
if it satisfies the following conditions (1),(2) for 2 ≤ k ≤ K_1,
(1) For 2 ≤ k ≤ K^*_1
w^(k)_pq =
([ (w^*)^(k)+ E_pqAA^(k) M_pqAB^(k); - M_pqBA^(k) - M_pqBB^(k) ]),
b^(k) =
([ (b^*)^(k)+ E_A0^(k); - M_B0^(k) ]),
for 1 ≤ p ≤ 3, 1 ≤ q ≤ 3
(2) For K^*_1 + 1 ≤ k ≤ K_1
w^(k)_pq =
([ Z_pqAA^(k) M_pqAB^(k); - M_pqBA^(k) - M_pqBB^(k) ]),
b^(k) =
([ (b^*)^(k)+ E_A0^(k); - M_B0^(k) ]),
where
Z_pqAA^(k)
=
{[ I_22AA + E^(k)_22AA (p = q = 2); E^(k)_pqAA (others) ]..
where I_22AA∈^(H^*)^(k)×^(H^*)^(k) is an identity matrix.
Assume that the weight and bias parameters of Convolutional layers are in
the essential set W_E in case without Skip Connection. Then
there exist constants c_1,c_2>0 such that
f_:,:,A^(K_1)(w,b,x)-f^(K_1^*)(w^*,b^*,x) ≤c_1/√(n)(x+1),
f_:,:,A^(K_1)(w,b,x) ≤ c_2(x+1).
Eq.(<ref>) is derived from Lemma <ref>.
By the definitions (<ref>), (<ref>),
for 2≤ k≤ K^*
f_A^(2)(w,b,x)
=σ(Conv(f_:,:,A^(1)(w,b,x), (w^*)^(2)+ E_:,:,AA^(2)) +
g((b^*)^(2)+ E_A0^(2))),
f_A^(k)(w,b,x)
=σ(Conv(f_:,:,A^(k-1)(w,b,x),(w^*)^(k)+ E_:,:,AA^(k))
+Conv(f_:,:,B^(k-1)(w,b,x), M_:,:,AB^(k))+
g((b^*)^(k)+ E_A0^(k))).
In k = 2, |x| is bounded and M_:,:,AB^(k) is constant tensor, M_B0^(k) is large sufficiently, f_:,:,B^(2)(w,b,x)=0 because all the elements of the output of ReLU function f^(2)(w,b,x) is nonnegative. For 3≤ k≤ K_1, f_:,:,B^(k)(w,b,x)=0, since all elements of
w_:,:,BA^(k), w_:,:,BB^(k), and w_B0^(k) are negative.
Hence by Lemma <ref>, for 2≤ k≤ K_1^*,
f_:,:,A^(k)(w,b,x) - f^(k)(w^*,b^*,x)
≤ 9 E_:,:,AA^(k)f^(k-1)(w,b,x) + L_1 L_2 E_A0^(k)
+ 9(w^*)^(k)f^(k-1)_:,:,A(w,b,x)-f^(k-1)(w^*,b^*,x).
and for K_1^* + 1 ≤ k ≤ K_1, by using f^(K_1^*)(w^*,b^*,x) as f^(k)(w^*,b^*,x),
f_:,:,A^(k)(w,b,x) - f^(K_1^*)(w^*,b^*,x)
≤ 9 E_:,:,AA^(k)f^(k-1)(w,b,x) + L_1 L_2 E_A0^(k)
+ 9(w^*)^(k)f^(k-1)_:,:,A(w,b,x)-f^(K_1^*)(w^*,b^*,x).
The elements of tensors and vectors in E_:,:,AA^(k-1) and E_:,:,A0^(k)
are bounded by 1/√(n) order term, hence
E_AA^(k-1) and E_A0^(k) are bounded by
1/√(n) order term. Moreover (w^*)^(k) is a constant term.
For k=2, f_:,:,A^(k-1)(w,b,x)-f^(k-1)(w^*,b^*,x)=x-x=0. Then, by using mathematical
induction for (<ref>) and (<ref>) , the all terms can be bounded by 1/√(n) terms, hence we obtained the Lemma.
From <cit.>, because the output at k = K_1 + 1 is nonnegative, there exist essential parameters for the fully connected layers such that the number of convergent parameters E equals that of the data generating network. From these lemmas, the main theorem can be proved.
(Proof of Theorem <ref>).
By Lemma <ref>, it is sufficient to prove
that there exists a constant C>0 such that
∫_W_Eexp(-nK(w,b))φ(w,b)dwdb
≥C/n^λ
From the property of KL-divergence, there exists the positive constant c_4
K(w,b) ≤c_4/2∫f^(K_1 + K_2)(w,b,x)-f^(K^*_1 + K^*_2)(w^*,b^*,x)^2 q(x) dx.
By using Lemma <ref>, if (w,b)∈ W_E,
K(w,b)≤c_4c_3^2/2n∫ (x+1)^2 q(x) dx =c_5/n<∞.
It follows that
∫_W_Eexp(-nK(w,b))φ(w,b)dwdb
≥exp(-c_5) (min_(w,b)∈ W_Eφ(w,b)) (W_E).
where c_5>0,
min_(w,b)∈ W_Eφ(w,b) >0, and (W_E) is the volume
of the set W_E by the Lebesgue measure. The convergent scale of (W_E) is determined from the number of convergent parameter E in W_E.
Then,
(W_E)≥C_1/n^λ,
where
λ = 1/2(
∑_k = 2^k = K_1 (9H^*_k - 1 + 1)H^*_k + ∑_k = K_1 + 1^k = K_1 + K_2 (H^*_k - 1 + 1)H^*_k)
= 1/2(|w^*|_0 + |b^*|_0 + ∑_k = K^*_1 + 1^K_1(9H_K^*_1 + 1)H_K^*_1).
We obtained theorem<ref>.
§.§ Skip Connection Case
Definition. (Essential parameter set W_E with Skip Connection).
An essential parameter set W_E with Skip Connection satisfies the following conditions (1),(2) for 2 ≤ k ≤ K_1,
(1) For 2 ≤ k ≤ K^*_1, the same conditions as (<ref>) and (<ref>).
(2) For K^*_1 + 1 ≤ k ≤ K_1
w^(k)_pq =
([ - M_pqAA^(k) - M_pqAB^(k); - M_pqBA^(k) - M_pqBB^(k) ]),
b^(k) =
([ - M_A0^(k); - M_B0^(k) ]),
Assume that the weight and bias parameters of Convolutional layers are in
the essential set W_E in case with Skip Connection. Then
there exist constants c_1,c_2>0 such that
f_:,:,A^(K_1)(w,b,x)-f^(K^*_1)(w^*,b^*,x) ≤c_1/√(n)(x+1),
f_:,:,A^(K_1)(w,b,x) ≤ c_2(x+1).
Because of similar reason to lemma<ref>, holds.
By Lemma <ref>, for k = mK_s + 1,
f_:,:,A^(k)(w,b,x) - f^(k)(w^*,b^*,x)
≤ 9 E_:,:,AA^(k)f^(k-1)(w,b,x) + L_1 L_2 E_A0^(k)
+ 9(w^*)^(k)f^(k-1)_:,:,A(w,b,x)-f^(k-1)(w^*,b^*,x)
+ w^(k)f^(k - K_2 -1)(w,b,x)-f^(k - K_2 -1)(w',b',x).
If k ≠ mK_s + 1 and 2 ≤ k ≤ K^*_1, inequality (<ref>) holds. Same as the lemma<ref>, from mathematical induction, f_:,:,A^(K^*_1)(w,b,x) - f^(K^*_1)(w^*,b^*,x) is bounded by 1/√(n) terms. For 2≤ k≤ K_1, f_:,:,B^(k)(w,b,x)=0 same reason as lemma<ref>. For K^*_1 + 1 ≤ k ≤ K_1, since all elements of w^(k) and b^(k) are negative, the following equations are given.
f_:,:,A^(k)(w,b,x) =
{[ f^(K^*_1)(w ,b ,x) (k = nK_s + 1); 0 (others) ]..
Hence, we obtained the Lemma.
As in the case without skip connection, by using the result of <cit.> for the fully connected layers and inequalities (<ref>), (<ref>),
we obtain Theorem <ref>.
entry_id: http://arxiv.org/abs/2307.02625v2
published: 20230705195650
title: Retinex-based Image Denoising / Contrast Enhancement using Gradient Graph Laplacian Regularizer
authors: ["Yeganeh Gharedaghi", "Gene Cheung", "Xianming Liu"]
primary_category: eess.IV
categories: ["eess.IV", "cs.CV", "eess.SP"]
Retinex-based Image Denoising / Contrast Enhancement using Gradient Graph Laplacian Regularizer
Yeganeh Gharedaghi, Gene Cheung, Xianming Liu
================================================================================================
Images captured in poorly lit conditions are often corrupted by acquisition noise.
Leveraging recent advances in graph-based regularization, we propose a fast Retinex-based restoration scheme that denoises and contrast-enhances an image.
Specifically, by Retinex theory we first assume that each image pixel is a multiplication of its reflectance and illumination components.
We next assume that the reflectance and illumination components are piecewise constant (PWC) and continuous piecewise planar (PWP) signals, which can be recovered via graph Laplacian regularizer (GLR) and gradient graph Laplacian regularizer (GGLR) respectively.
We formulate quadratic objectives regularized by GLR and GGLR, which are minimized alternately until convergence by solving linear systems—with improved condition numbers via proposed preconditioners—via conjugate gradient (CG) efficiently.
Experimental results show that our algorithm achieves competitive visual image quality while reducing computation complexity noticeably.
Image denoising, contrast enhancement, graph signal processing, numerical linear algebra
§ INTRODUCTION
Due to the relatively few photons collected per pixel area, a sensor capturing an image in poor lighting conditions suffers from non-negligible acquisition noise.
Thus, a contrast enhancement algorithm such as <cit.> that selectively brightens spatial areas to produce a visually pleasing image would also enhance the acquired noise, resulting in sub-par image quality.
We study the joint image denoising / contrast enhancement problem in this paper.
Since Land's seminal Retinex theory in human vision in 1977
<cit.>, researchers in imaging have since interpreted the theory to mean that a recorded pixel is a multiplication of illumination and reflectance components
<cit.>.
Because the two components have unique signal characteristics—e.g., reflectance is commonly assumed to be piecewise constant (PWC)—image restoration schemes can be designed to first recover these components with appropriate signal priors, before combining them to reconstruct the target image <cit.>.
However, computation of the illumination and/or reflectance components can be expensive; for example, <cit.> employed Bergman iteration to minimize an ℓ_1-norm objective, while <cit.> proposed a non-convex low-rank signal prior.
Deep learning based Retinex schemes are also possible <cit.>, but they require expensive data training for a large number of network parameters with a large memory footprint, and thus are not typically suitable for memory-constrained devices like mobile phones.
In an orthogonal development, graph signal processing (GSP) has been intensively investigated over the last decade to study discrete signals on irregular data kernels described by graphs <cit.>.
In restoration problems, graph-based regularization terms like graph Laplacian regularizers (GLR) <cit.> have been adopted for a wide range of applications, including joint image contrast enhancement / JPEG dequantization in <cit.>.
Like total variation (TV) <cit.>, signal-dependent GLR (SDGLR) has been shown to promote PWC signal reconstruction <cit.>, but unlike non-differentiable ℓ_1 norm, GLR is in differentiable quadratic form that is amenable to fast optimization.
<cit.> employed GLR to efficiently recover the reflectance component via proximal gradient descent <cit.>.
In this paper, leveraging <cit.> we employ a new variant of GLR called gradient graph Laplacian regularizer (GGLR) <cit.>—shown to promote continuous piecewise planar (PWP) signal reconstruction—to recover the illumination component known to be generally smooth.
Like GLR, GGLR is also in convenient quadratic form, leading to a system of linear equations for a solution computed efficiently using conjugate gradient (CG) <cit.>.
Moreover, we propose appropriate preconditioners <cit.> to improve the condition numbers of the coefficient matrices, speeding up CG execution.
We leave the unrolling of our iterative graph-based algorithm to neural layers for data-driven end-to-end parameter optimization <cit.> for future work.
After recovering the reflectance and illumination components, the contrast-enhanced image is reconstructed via gamma correction <cit.> on the illumination component.
Experimental results show that our method has comparable contrast-enhanced image quality as competing schemes with reduced computation costs.
§ PRELIMINARIES
§.§ GSP Basics
We first review GSP definitions <cit.>.
A graph (,,) is composed of N nodes = {1, …, N} and edges connecting them, where edge (i,j) ∈ has weight w_i,j = W_i,j.
Assuming that edges are undirected, the adjacency matrix is symmetric.
The combinatorial graph Laplacian matrix is defined as Ł≜diag() -, where is an all-one vector of suitable length, and diag() denotes a diagonal matrix with vector as diagonal terms.
Ł is provably positive semi-definite (PSD)—, ^⊤Ł≥ 0, ∀—if edges are non-negative, , w_i,j≥ 0, ∀ i,j <cit.>.
^⊤Ł is also called the graph Laplacian regularizer (GLR), and its signal-dependent variant—where each edge weight w_i,j is a function of sought signal samples x_i and x_j—has been shown to promote PWC signal reconstruction <cit.>.
It was used for regularization in different graph signal restoration problems, including image denoising <cit.>, JPEG dequantization <cit.>, point cloud denoising <cit.> and super-resolution <cit.>.
Other graph-based regularizations are possible, such as graph total variation (GTV) <cit.> and graph shift varation (GSV) <cit.>.
In this work, we focus on GLR and a recent variant called gradient graph Laplacian regularizer (GGLR), which was shown to promote PWP signal reconstruction <cit.>.
For images, GGLR means applying GLR to horizontal / vertical image gradients; we detail derivation of GGLR in Section <ref>.
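As a small illustration of these definitions, the snippet below builds the combinatorial Laplacian from an adjacency matrix and evaluates the GLR; the dense NumPy representation is used purely for clarity.

```python
import numpy as np

def graph_laplacian(W: np.ndarray) -> np.ndarray:
    """Combinatorial Laplacian L = diag(W 1) - W for a symmetric,
    non-negative adjacency matrix W."""
    return np.diag(W.sum(axis=1)) - W

def glr(x: np.ndarray, W: np.ndarray) -> float:
    """Graph Laplacian regularizer x^T L x = (1/2) sum_{i,j} w_ij (x_i - x_j)^2."""
    return float(x @ graph_laplacian(W) @ x)
```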
§.§ Interpretation of Retinex Theory
Similar to <cit.>, we mathematically interpret the known Retinex theory <cit.> to mean that a ground-truth N-by-N image patch (vectorized to strictly positive x ∈ℝ_+^N^2 by scanning pixels row-by-row) is a point-by-point multiplication of strictly positive illumination and reflectance components ł, r ∈ℝ_+^N^2, i.e., x = ł⊙ r, where operator ⊙ denotes point-by-point multiplication.
Specifically, the image formation model for observation y ∈ℝ_+^N^2 is
y = ł⊙ r + n ,
where n is zero-mean additive Gaussian noise.
Reflectance r depends only on surface properties of physical objects, and is known to be piecewise smooth or PWC.
In contrast, illumination ł varies less drastically than r; we model ł as continuous PWP in this work.
§ PROBLEM FORMULATION
§.§ Initialization of Illumination & Reflectance
Our method requires initialization of both the illumination and reflectance components, ł and r, of an N-by-N pixel patch, after which one component is optimized while the other is held fixed.
To initialize ł, we compute the blurred V component of the input image in the HSV color space using a Gaussian filter with a standard deviation of 5.
We then initialize r by dividing the image's intensity component y point-by-point by ł.
§.§ Computation of Reflectance
§.§.§ Graph Construction
To connect a group of N reflectance pixels in the k-th row (k-th column) of a target N-by-N pixel patch, we construct a graph 𝒢_r,k (𝒢_c,k) as follows.
We connect pixels i and j in row k (column k) with edge weight w_i,j^r,k (w_i,j^c,k) defined as
w_i,j^r,k = exp( - |r_i - r_j|^2/σ_r^2
- ‖c_i - c_j‖^2_2/σ_c^2 ) ,
where r_i and c_i are the reflectance intensity and 2D coordinate of pixel i, respectively, and σ_r and σ_c are two parameters.
(<ref>) is analogous to bilateral filter weights <cit.>, where the 2D coordinates compute the domain filter, and the reflectance values compute the range filter.
Note that edge weights in (<ref>) are signal-dependent—edge weights {w^r,k_i,j} used to compute r depend on r itself.
For a sparse graph, w_i,j^r,k exists iff j is in a local neighborhood 𝒩_i of pixel i.
See Fig. <ref>(a) for an example of a line graph for a row of three pixels.
The collection of edge weights {w_i,j^r,k} (<ref>) defines a symmetric adjacency matrix 𝐖_r,k ∈ℝ^N × N, and the corresponding graph Laplacian matrix is defined as Ł_r,k ≜ diag(𝐖_r,k 1) - 𝐖_r,k ∈ℝ^N × N.
As discussed, Ł_r,k is PSD given the non-negative edges in (<ref>).
We use notations 𝐖_c,k and Ł_c,k for the adjacency and graph Laplacian matrices of graph 𝒢_c,k for the k-th column of a target patch.
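A possible sketch of this row-graph construction is shown below; the neighborhood radius and the dense matrix representation are our own simplifications.

```python
import numpy as np

def row_adjacency(r_row: np.ndarray, coords: np.ndarray,
                  sigma_r: float = 1.0, sigma_c: float = 1.0,
                  radius: int = 2) -> np.ndarray:
    """Adjacency matrix for one pixel row, with bilateral-like weights
    exp(-|r_i - r_j|^2 / sigma_r^2 - ||c_i - c_j||^2 / sigma_c^2);
    edges exist only within a local neighborhood of `radius` pixels."""
    N = len(r_row)
    W = np.zeros((N, N))
    for i in range(N):
        for j in range(max(0, i - radius), min(N, i + radius + 1)):
            if i != j:
                W[i, j] = np.exp(-(r_row[i] - r_row[j]) ** 2 / sigma_r ** 2
                                 - np.sum((coords[i] - coords[j]) ** 2) / sigma_c ** 2)
    return W
```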
§.§.§ Optimizing Reflectance
Given illumination ł for an N^2-pixel patch, we compute reflectance r by minimizing an unconstrained convex quadratic objective:
min_r ‖ y - diag(ł) r ‖^2_2 + μ_r ∑_k=1^N r^⊤ H_k^⊤ Ł_r,k H_k r
+ μ_r ∑_k=1^N r^⊤ C_k^⊤ Ł_c,k C_k r
where H_k, C_k ∈{0,1}^N × N^2 are selection matrices that pick out N pixels from the k-th row / column of the target patch, respectively.
For example, H_1 and C_2 for a 2 × 2 patch are
H_1 = [ [ 1 0 0 0; 0 1 0 0 ] ], C_2 = [ [ 0 1 0 0; 0 0 0 1 ] ] .
In (<ref>), the first term is a fidelity term following the image formation model (<ref>), and the second and third terms are graph Laplacian regularizers (GLR) <cit.> for the rows and columns of r, respectively. μ_r is a parameter that trades off the fidelity term and the GLRs.
Selecting rows and columns from a two-dimensional pixel grid for regularization means smaller Laplacian matrices, and thus lower complexity.
Moreover, promoting a piecewise linear (constant) 1D signal across each dimension separately can combine to mean promotion of a piecewise planar (constant) signal on a 2D grid.
The solution r^* to (<ref>) is obtained by solving a linear system:
( diag^2(ł) + μ_r ∑_k=1^N ( H_k^⊤ Ł_r,k H_k + C_k^⊤ Ł_c,k C_k ) ) r^* = diag(ł) y .
(<ref>) guarantees a unique solution because the coefficient matrix A = diag^2(ł) + μ_r ∑_k ( H_k^⊤ Ł_r,k H_k + C_k^⊤ Ł_c,k C_k ) is provably positive definite (PD): diag^2(ł) is PD and {Ł_r,k} and {Ł_c,k} are PSD, and thus A is PD by Weyl's inequality <cit.>.
Given that coefficient matrix A is sparse, symmetric and PD, r^* in (<ref>) can be computed via conjugate gradient (CG) <cit.> in roughly linear time without matrix inversion.
We defer discussion of complexity to Section <ref>.
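Assuming the lifted row/column Laplacian sum has already been assembled as a sparse matrix, solving the above linear system with CG might look as follows; the variable names and the use of SciPy are our own choices.

```python
import numpy as np
from scipy.sparse import diags, csr_matrix
from scipy.sparse.linalg import cg

def solve_reflectance(y, ell, L_sum, mu_r=1.0):
    """Solve (diag(ell)^2 + mu_r * L_sum) r = diag(ell) y via conjugate gradient.
    `L_sum` is the (sparse, PSD) sum of the row/column graph Laplacians
    lifted to the N^2-pixel patch; its assembly is omitted here."""
    A = diags(ell ** 2) + mu_r * csr_matrix(L_sum)
    b = ell * y
    r, info = cg(A, b)
    assert info == 0, "CG did not converge"
    return r
```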
§.§ Computation of Illumination
Computation of the illumination component ł differs from that of the reflectance in that we assume ł is generally smooth instead of PWC; mathematically, we regularize ł using GGLR <cit.>.
This requires first the construction of a gradient graph for each row / column of pixels in a target image patch, on which we define a GLR.
Then we map the gradient GLR back to the pixel domain as GGLR for optimization.
§.§.§ Graph Construction
For the k-th row (column) of N pixels in an N^2-pixel patch, i.e., H_k ł (C_k ł), we first define a gradient operator F ∈ℝ^(N-1) × N as
F_i,j = 1 if i = j; -1 if j = i + 1; 0 otherwise.
See Fig. <ref>(b) for an example of gradient operator F for a row of three pixels.
Note that F 1 = 0, and F is full row-rank <cit.>.
Given H_k ł, we first compute horizontal gradient g = F H_k ł∈ℝ^N-1.
We then construct a gradient graph 𝒢̅_r,k to connect the N-1 gradients in g as follows.
We connect gradients i and j with edge weight w̅_i,j^r,k:
w̅_i,j^r,k = exp( - |g_i - g_j|^2/σ_l^2 - ‖c_i - c_j‖^2_2/σ_c^2 ) .
The collection of edge weights {w̅_i,j^r,k} (<ref>) thus defines adjacency matrix 𝐖̅_r,k and subsequent gradient graph Laplacian Ł̅_r,k ≜ diag(𝐖̅_r,k 1) - 𝐖̅_r,k.
Because w̅_i,j^r,k ≥ 0, ∀ i,j, Laplacian Ł̅_r,k is provably PSD <cit.>.
We use notations 𝐖̅_c,k and Ł̅_c,k for the adjacency and graph Laplacian matrices of gradient graph 𝒢̅_c,k for the k-th column of a target patch.
§.§.§ Optimizing Illumination
Given Ł̅_r,k, we define the GLR for gradient g:
g^⊤Ł̅_r,k g = ł^⊤ H_k^⊤ F^⊤Ł̅_r,k F H_k ł ,
where 𝓛_r,k ≜ F^⊤Ł̅_r,k F is the gradient-induced nodal graph (GNG) Laplacian. 𝓛_r,k corresponds to a graph 𝒢_r,k^g connecting N illumination pixels, which in general is a signed graph containing both positive and negative edges; see Fig. <ref>(c) and (d) for an example of a gradient graph 𝒢̅ and the resulting GNG 𝒢^g corresponding to a three-pixel row.
Though Laplacians of general signed graphs can be indefinite, because Ł̅_r,k is PSD, 𝓛_r,k = F^⊤Ł̅_r,k F is also PSD. ł^⊤ H_k^⊤𝓛_r,k H_k ł is the GGLR for the k-th pixel row of the target illumination patch.
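The gradient operator and the GNG Laplacian can be formed explicitly as below; the sign convention of F (forward differences) is one of the two equivalent choices, and dense matrices are used only for readability.

```python
import numpy as np

def gradient_operator(N: int) -> np.ndarray:
    """(N-1) x N forward-difference operator with F[i,i]=1, F[i,i+1]=-1."""
    F = np.zeros((N - 1, N))
    for i in range(N - 1):
        F[i, i], F[i, i + 1] = 1.0, -1.0
    return F

def gng_laplacian(L_grad: np.ndarray) -> np.ndarray:
    """Gradient-induced nodal graph Laplacian F^T L_grad F (PSD whenever
    L_grad is PSD); the GGLR of a pixel row l is then l^T (F^T L_grad F) l."""
    N = L_grad.shape[0] + 1
    F = gradient_operator(N)
    return F.T @ L_grad @ F
```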
Given reflectance r, we compute illumination ł by minimizing the following objective:
min_ł ‖ y - diag(r) ł ‖^2_2 + μ_l ∑_k=1^N ł^⊤ H_k^⊤𝓛_r,k H_k ł
+ μ_l ∑_k=1^N ł^⊤ C_k^⊤𝓛_c,k C_k ł ,
where the first term is a fidelity term given fixed r, and the second and third terms are GGLRs for the rows and columns of ł. μ_l is a tradeoff parameter like μ_r in (<ref>).
Similarly, the solution ł^* to (<ref>) can be obtained by solving a linear system:
( diag^2(r) + μ_l ∑_k=1^N ( H_k^⊤𝓛_r,k H_k + C_k^⊤𝓛_c,k C_k ) ) ł^* = diag(r) y . ł^* in (<ref>) again can be obtained via CG without matrix inversion.
§.§ Algorithm Summary
Having obtained ł^*, a new solution r^* to (<ref>) can be computed again using the updated ł^* and new graphs {𝒢_r,k} and {𝒢_c,k} with edges updated via (<ref>) using the most recently computed r.
This iterative update of edge weights means the GLRs are signal-dependent and thus promote PWC reflectance reconstruction.
Having obtained r^*, a new solution ł^* to (<ref>) can be sought using the recomputed r^* and new GNGs {𝒢_r,k^g} and {𝒢_c,k^g} with edges updated via (<ref>).
Similarly, this iterative edge weight update means the GGLRs are signal-dependent and promote PWP illumination reconstruction.
Having obtained solutions ł^* and r^*, we construct a contrast-enhanced pixel x_i,j via gamma correction <cit.>:
x_i,j = ( l_i,j) ^γ r_i,j ,
where 0 < γ <1 is a pre-chosen parameter.
The operation (l_i,j)^γ essentially boosts the illumination component—more enhancement when l_i,j is small, and less enhancement when l_i,j is large.
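The recombination step is essentially a one-liner; the clipping to [0,1] and the example value of γ below are our own additions, since the paper only requires 0 < γ < 1.

```python
import numpy as np

def enhance(ell_star: np.ndarray, r_star: np.ndarray, gamma: float = 0.5) -> np.ndarray:
    """Gamma-correct the illumination and recombine: x = ell^gamma * r."""
    return np.clip(ell_star ** gamma * r_star, 0.0, 1.0)
```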
§.§ Computation Considerations
The complexity of CG to solve for x in a linear system A x = b—assuming A is symmetric and PD—is O(√(κ(A)) nnz(A) / log(ϵ)), where κ(A) ≜λ_max(A)/λ_min(A) is the condition number of A, nnz(A) is the number of non-zero entries in A, and ϵ is a convergence parameter.
In linear system (<ref>), coefficient matrix A = diag^2(ł) + μ_r ∑_k ( H_k^⊤ Ł_r,k H_k + C_k^⊤ Ł_c,k C_k ) is sparse, symmetric and PD, but κ(A) can be large due to small illumination values in ł.
(Similarly, for linear system (<ref>), κ(A) can be large due to small reflectance values in r.)
To improve the computation speed of CG, we perform preconditioning <cit.> as follows.
Generally, in place of linear system $\mathbf{A}\mathbf{x} = \mathbf{b}$, we can consider an equivalent system $\tilde{\mathbf{A}}\tilde{\mathbf{x}} = \tilde{\mathbf{b}}$ instead for an invertible matrix $\mathbf{P}$:

\mathbf{P}\mathbf{A}\mathbf{P}^\top (\mathbf{P}^\top)^{-1} \mathbf{x} = \mathbf{P}\mathbf{b}
\tilde{\mathbf{A}}\, \tilde{\mathbf{x}} = \tilde{\mathbf{b}} ,

where $\tilde{\mathbf{A}} = \mathbf{P}\mathbf{A}\mathbf{P}^\top$, $\tilde{\mathbf{x}} = (\mathbf{P}^\top)^{-1}\mathbf{x}$, and $\tilde{\mathbf{b}} = \mathbf{P}\mathbf{b}$.
Note that coefficient matrix $\mathbf{P}\mathbf{A}\mathbf{P}^\top$ in (<ref>) is also symmetric and PD given $\mathbf{A}$ is symmetric and PD by assumption, and thus CG can solve (<ref>).
If $\kappa(\mathbf{P}\mathbf{A}\mathbf{P}^\top) \ll \kappa(\mathbf{A})$, then we have improved the conditioning of the linear system, and CG will run faster.
The challenge is to design an invertible $\mathbf{P}$ satisfying this condition.
One simple preconditioner that is easily invertible is a diagonal matrix (called the Jacobi preconditioner in the linear algebra literature <cit.>).
We propose one variant: $\mathbf{P} = \mathrm{diag}(\mathbf{p})$, where $p_i = A_{i,i}^{-1/2}$.
This means $\mathbf{P}\mathbf{A}\mathbf{P}^\top = \mathrm{diag}(\mathbf{p})\, \mathbf{A}\, \mathrm{diag}(\mathbf{p})$ has ones along its diagonal.
Note that $A_{i,i} > 0$ for $\mathbf{A}$ in (<ref>), since $\boldsymbol{\ell}$ is strictly positive and the diagonals of $\mathbf{H}_k^\top \mathbf{L}_{r,k} \mathbf{H}_k$ and $\mathbf{C}_k^\top \mathbf{L}_{c,k} \mathbf{C}_k$ are non-negative.
Note also that $\mathbf{A}$ in (<ref>) is diagonally dominant, i.e., $A_{i,i} > \sum_{j \neq i} |A_{i,j}|$, a matrix condition for which the Jacobi preconditioner is known to perform well.
Thus, when solving for $\mathbf{r}^*$ in (<ref>), we first solve for $\tilde{\mathbf{x}}^*$ via the linear system $\mathrm{diag}(\mathbf{p})\, \mathbf{A}\, \mathrm{diag}(\mathbf{p})\, \tilde{\mathbf{x}}^* = \mathrm{diag}(\mathbf{p})\, \mathbf{b}$.
We then obtain the solution $\mathbf{r}^* = \mathrm{diag}(\mathbf{p})\, \tilde{\mathbf{x}}^*$.
A similar procedure is employed when solving for $\boldsymbol{\ell}^*$ in (<ref>).
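A minimal numerical sketch of this diagonal preconditioning is given below; the 3×3 matrix is a stand-in for the actual coefficient matrix in (<ref>), chosen only to exhibit a widely varying diagonal.

```python
import numpy as np
from scipy.sparse import csr_matrix, diags
from scipy.sparse.linalg import cg

def jacobi_preconditioned_solve(A, b):
    """Solve A x = b by CG on the scaled system P A P x_tilde = P b,
    with P = diag(A_ii^{-1/2}), then map back x = P x_tilde."""
    A = csr_matrix(A)
    p = 1.0 / np.sqrt(A.diagonal())          # requires A_ii > 0
    P = diags(p)
    A_tilde = P @ A @ P                      # unit diagonal, better conditioned
    b_tilde = P @ b
    x_tilde, info = cg(A_tilde, b_tilde)
    assert info == 0, "CG did not converge"
    return P @ x_tilde

# toy usage: a small SPD system with a widely varying diagonal (illustrative)
A = np.array([[100.0, 1.0,   0.0],
              [1.0,   0.05,  0.001],
              [0.0,   0.001, 50.0]])
b = np.array([1.0, 2.0, 3.0])
x = jacobi_preconditioned_solve(A, b)
```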
§ EXPERIMENTS
§.§ Experimental Setup
We conducted experiments using MATLAB R2022b on an Apple M2 chip with 8GB RAM to evaluate the performance of our proposed method. We selected 12 images from the datasets provided in <cit.> and <cit.>.
To ensure compatibility with the chosen patch size, we adjusted the size of each image so that the height and width were divisible by 5.
We added zero-mean Gaussian noise with a standard deviation of 0.001 to every image pixel.
Four parameters in our method, μ_r in (<ref>), σ_r in (<ref>), μ_l in (<ref>), and σ_l in (<ref>), were empirically set to 1, 1, 0.1, and 0.2, respectively.
More generally, weight parameters μ_r and μ_l can be chosen to minimize mean squared error <cit.>.
We set the convergence tolerance for CG to ϵ = 10^-6.
§.§ Experimental Results
We compared our method against four competing schemes <cit.>.
The first two methods focused on contrast enhancement, while the latter two employed joint denoising and contrast enhancement.
Note that <cit.> incorporated BM3D <cit.> as a post-denoising step, where we chose σ = 10.
For a fair comparison, we adjusted the brightness parameter in all five methods so that they produced images at roughly the same brightness level.
Figs. <ref> and <ref> show visual comparisons of the different methods.
The first and third schemes noticeably amplified the noise when performing contrast enhancement, resulting in noisier outputs than our method.
Meanwhile, the second method employed denoising as a post-processing step, which blurred image details.
In comparison, our method produced results comparable in quality to the LR3M model at a significantly faster computation speed, as demonstrated in Table <ref>. Furthermore, as shown in Fig. <ref>, LR3M can lead to image over-smoothing.
In our objective evaluation, we examined contrast enhancement using two metrics: Lightness-Order-Error (LOE) and the Minkowski Distance based Metric (MDM). Table <ref> demonstrates the effectiveness of our method in achieving a visually pleasing and realistic output. We outperformed the other techniques in terms of LOE <cit.>, a no-reference image quality metric that assesses naturalness in enhanced images. LOE measures the preservation of lightness order by comparing the enhanced image with the original image, without requiring any additional reference images; a lower LOE score indicates better preservation of lightness order and, consequently, a more visually pleasing and realistic output. Additionally, we achieved comparable results in MDM, another no-reference quality assessment metric in which higher scores indicate higher image quality.
We observe that our preconditioner can greatly reduce the condition number of the coefficient matrices in (<ref>) and (<ref>). Specifically, in some cases our preconditioner reduced condition numbers exceeding 5200 by up to 80%.
These findings suggest that our preconditioners can improve the speed of CG execution.
§ CONCLUSION
Leveraging recent advances in GSP, we propose a Retinex-based image denoising / contrast enhancement scheme, where the reflectance and illumination components are optimized alternately using GLR and GGLR for regularization, respectively.
Both GLR and GGLR are in convenient quadratic form; solutions for reflectance and illumination can be computed as linear systems via conjugate gradient (CG) in roughly linear time.
We design preconditioners to improve the condition numbers of the coefficient matrices, speeding up CG.
Experiments show that our denoising / contrast enhancement scheme achieves comparable image quality at reduced computation time.
DBLP:conf/cvpr/FuZHZD16
X. Fu, D. Zeng, Y. Huang, X.-P. Zhang, and X. Ding,
“A weighted variational model for simultaneous reflectance and
illumination estimation,”
in 2016 IEEE Conference on Computer Vision and Pattern
Recognition, CVPR 2016, Las Vegas, NV, USA, June 27-30, 2016. 2016, pp.
2782–2790, IEEE Computer Society.
land77
E. Land,
“The Retinex theory of color vision,”
in Scientific American, 1977, vol. 61, no.1, pp. 1–11.
kimmel03
R. Kimmel, M. Elad, D. Shaked, R. Keshet, and I. Sobel,
“A variational framework for Retinex,”
Int. J. Comput. Vision, vol. 52, no. 1, pp. 7–23, apr 2003.
ma11
W. Ma, J.-M. Morel, S. Osher, and A. Chien,
“An l1-based variational model for Retinex theory and its
application to medical images,”
in CVPR 2011, 2011, pp. 153–160.
DBLP:journals/tip/GuoLL17
X. Guo, Y. Li, and H. Ling,
“LIME: low-light image enhancement via illumination map
estimation,”
IEEE Trans. Image Process., vol. 26, no. 2, pp. 982–993,
2017.
DBLP:journals/corr/abs-1804-08468
X. Ren, M. Li, W.-H. Cheng, and J. Liu,
“Joint enhancement and denoising method via sequential
decomposition,”
CoRR, vol. abs/1804.08468, 2018.
ren20
X. Ren, W. Yang, W.-H. Cheng, and J. Liu,
“LR3M: Robust low-light enhancement via low-rank regularized
Retinex model,”
IEEE Transactions on Image Processing, vol. 29, pp. 5862–5876,
2020.
yao21
L. Yao and Z. Pan,
“The Retinex-based image dehazing using a particle swarm
optimization method,”
Multimedia Tools and Applications, vol. 80, pp. 3425–3442,
2021.
lecert22
A. Lecert, R. Fraisse, A. Roumy, and C. Guillemot,
“A new regularization for Retinex decomposition of low-light
images,”
in 2022 IEEE International Conference on Image Processing
(ICIP), 2022, pp. 906–910.
chen18
C. Wei, W. Wang, W. Yang, and J. Liu,
“Deep Retinex decomposition for low-light enhancement,”
2018, arXiv:1808.04560.
li22
C. Li, C. Guo, L. Han, J. Jiang, M.-M. Cheng, J. Gu, and C. C. Loy,
“Low-light image and video enhancement using deep learning: A
survey,”
IEEE Transactions on Pattern Analysis and Machine Intelligence,
vol. 44, no. 12, pp. 9396–9416, 2022.
ortega18ieee
A. Ortega, P. Frossard, J. Kovacevic, J. M. F. Moura, and P. Vandergheynst,
“Graph signal processing: Overview, challenges, and applications,”
in Proceedings of the IEEE, May 2018, vol. 106, no.5, pp.
808–828.
cheung18
G. Cheung, E. Magli, Y. Tanaka, and M. Ng,
“Graph spectral image processing,”
in Proceedings of the IEEE, May 2018, vol. 106, no.5, pp.
907–930.
pang17
J. Pang and G. Cheung,
“Graph Laplacian regularization for inverse imaging: Analysis in
the continuous domain,”
in IEEE Transactions on Image Processing, April 2017, vol. 26,
no.4, pp. 1770–1785.
liu19
X. Liu, G. Cheung, X. Ji, D. Zhao, and W. Gao,
“Graph-based joint dequantization and contrast enhancement of poorly
lit JPEG images,”
IEEE Transactions on Image Processing, vol. 28, no.3, pp.
1205–1219, March 2019.
chambolle97
A. Chambolle and P.-L. Lions,
“Image recovery via total variation minimization and related
problems,”
Numerische Mathematik, vol. 76, no. 2, pp. 167–188, 1997.
liu17
X. Liu, G. Cheung, X. Wu, and D. Zhao,
“Random walk graph Laplacian based smoothness prior for soft
decoding of JPEG images,”
IEEE Transactions on Image Processing, vol. 26, no.2, pp.
509–524, February 2017.
parikh13
N. Parikh and S. Boyd,
“Proximal algorithms,”
in Foundations and Trends in Optimization, 2013, vol. 1, no.3,
pp. 123–231.
chen22
F. Chen, G. Cheung, and X. Zhang,
“Manifold graph signal restoration using gradient graph Laplacian
regularizer,”
2022, arXiv:2206.04245.
shewchuk94
J. Shewchuk,
“An introduction to the conjugate gradient method without the
agonizing pain,”
Tech. Rep., USA, 1994.
golub12
G. Golub and C. F. Van Loan,
Matrix Computations (Johns Hopkins Studies in the Mathematical
Sciences),
Johns Hopkins University Press, 2012.
vu21
H. Vu, G. Cheung, and Y. C. Eldar,
“Unrolling of deep graph total variation for image denoising,”
in IEEE International Conference on Acoustics, Speech and Signal
Processing, Toronto, Canada, June 2021.
gonzalez2008digital
R. C. Gonzalez and R. E. Woods,
Digital image processing,
Prentice Hall, Upper Saddle River, N.J., 2008.
zeng20
J. Zeng, G. Cheung, M. Ng, J. Pang, and C. Yang,
“3D point cloud denoising using graph Laplacian regularization
of a low dimensional manifold model,”
IEEE Transactions on Image Processing, vol. 29, pp. 3474–3489,
2020.
dinesh20
C. Dinesh, G. Cheung, and I. V. Bajić,
“Point cloud denoising via feature graph Laplacian
regularization,”
IEEE Transactions on Image Processing, vol. 29, pp. 4143–4158,
2020.
dinesh22
C. Dinesh, G. Cheung, and I. V. Bajić,
“Point cloud video super-resolution via partial point coupling and
graph smoothness,”
IEEE Transactions on Image Processing, vol. 31, pp. 4117–4132,
2022.
bai19
Y. Bai, G. Cheung, X. Liu, and W. Gao,
“Graph-based blind image deblurring from a single photograph,”
IEEE Transactions on Image Processing, vol. 28, no.3, pp.
1404–1418, 2019.
chen15
S. Chen, A. Sandryhaila, J. Moura, and J. Kovacevic,
“Signal recovery on graphs: Variation minimization,”
in IEEE Transactions on Signal Processing, September 2015, vol.
63, no.17, pp. 4609–4624.
tomasi98
C. Tomasi and R. Manduchi,
“Bilateral filtering for gray and color images,”
in Proceedings of the IEEE International Conference on Computer
Vision, Bombay, India, 1998.
DBLP:journals/spl/ChenL17
P.-Y. Chen and S. Liu,
“Bias-variance tradeoff of graph Laplacian regularizer,”
IEEE Signal Process. Lett., vol. 24, no. 8, pp. 1118–1122,
2017.
4271520
K. Dabov, A. Foi, V. Katkovnik, and K. Egiazarian,
“Image denoising by sparse 3-d transform-domain collaborative
filtering,”
IEEE Transactions on Image Processing, vol. 16, no. 8, pp.
2080–2095, 2007.
DBLP:journals/tip/WangZHL13
S. Wang, J. Zheng, H.-M. Hu, and B. Li,
“Naturalness preserved enhancement algorithm for non-uniform
illumination images,”
IEEE Trans. Image Process., vol. 22, no. 9, pp. 3538–3548,
2013.
DBLP:journals/tbc/NafchiC18
H. Z. Nafchi and M. Cheriet,
“Efficient no-reference quality assessment and classification model
for contrast distorted images,”
IEEE Trans. Broadcast., vol. 64, no. 2, pp. 518–523, 2018.
|
http://arxiv.org/abs/2307.01056v1
|
20230703143524
|
A 3 TOPS/W RISC-V Parallel Cluster for Inference of Fine-Grain Mixed-Precision Quantized Neural Networks
|
[
"Alessandro Nadalini",
"Georg Rutishauser",
"Alessio Burrello",
"Nazareno Bruschi",
"Angelo Garofalo",
"Luca Benini",
"Francesco Conti",
"Davide Rossi"
] |
cs.AR
|
[
"cs.AR"
] |
A 3 TOPS/W RISC-V Parallel Cluster for Inference of Fine-Grain Mixed-Precision Quantized Neural Networks
Alessandro Nadalini1,
Georg Rutishauser2,
Alessio Burrello1,
Nazareno Bruschi1,
Angelo Garofalo1,
Luca Benini12,
Francesco Conti1,
Davide Rossi1
1Department of Electrical, Electronic and Information Engineering (DEI), University of Bologna, Italy
2IIS Integrated Systems Laboratory, ETH Zurich, Switzerland
August 1, 2023
==================================================================================================================================================================================================================================================================================================================================================================================================
The emerging trend of deploying complex algorithms, such as Deep Neural networks (DNNs), increasingly poses strict memory and energy efficiency requirements on Internet-of-Things (IoT) end-nodes. Mixed-precision quantization has been proposed as a technique to minimize a DNN's memory footprint and maximize its execution efficiency, with negligible end-to-end precision degradation. In this work, we present a novel hardware and software stack for energy-efficient inference of mixed-precision Quantized Neural Networks (QNNs). We introduce Flex-V, a processor based on the RISC-V Instruction Set Architecture (ISA) that features fused Mac&Load mixed-precision dot product instructions; to avoid the exponential growth of the encoding space due to mixed-precision variants, we encode formats into the Control-Status Registers (CSRs). Flex-V core is integrated into a tightly-coupled cluster of eight processors; in addition, we provide a full framework for the end-to-end deployment of DNNs including a compiler, optimized libraries, and a memory-aware deployment flow. Our results show up to 91.5 MAC/cycle and 3.26 TOPS/W on the cluster, implemented in a commercial 22nm FDX technology, with up to 8.5× speed-up, and an area overhead of only 5.6% with respect to the baseline. To demonstrate the capabilities of the architecture, we benchmark it with end-to-end real-life QNNs, improving performance by 2× - 2.5× with respect to existing solutions using fully flexible programmable processors.
Embedded Systems, PULP Platform, Quantized Neural Networks, Mixed-precision, Microcontroller
§ INTRODUCTION AND RELATED WORK
Modern IoT applications require end-nodes to acquire raw data from sensors, extract “distilled” high-level features by applying near-sensor analytics, including state-of-the-art ML and DL algorithms, and transmit this semantically dense information to higher-level nodes through wireless channels.
However, running these models on embedded microcontroller systems poses severe challenges due to limited on-chip memory, power budget, and compute capabilities, requiring optimizations on both hardware and software.
A well-established solution to shrink full-precision DL models to fit the limited storage available on microcontrollers is the adoption of low-bitwidth (8-bit or less) integer arithmetic to represent their parameters, after post-training quantization <cit.> or quantization-aware training <cit.>. These techniques have been demonstrated on state-of-the-art DNN topologies, adopting uniform or mixed-precision quantization schemes, reducing the model footprint by 47% with a Top-1 accuracy drop of only 3.4%, without significant impact on the user experience of many IoT applications. Banner et al. <cit.> propose a post-training 4-bit quantization method with an accuracy drop of a few percent, while the authors of <cit.> presented further improvements, reducing the memory footprint of DNNs by up to 7× at the cost of an accuracy drop of only 4%.
If well supported by the hardware processing systems, reduced precision integer arithmetic offers a significant efficiency boost with respect to floating-point operations. Low-bitwidth integer formats are widely adopted in custom digital and analog accelerators such as UNPU <cit.>, supporting fully-variable 1 to 16 bit weight bit-precision and delivering a peak energy efficiency of 50.6 TOPS/W at a throughput of 184 GOPS.
Emerging Analog in-Memory Computing (AiMC) accelerators such as DIANA <cit.> also implicitly exploit quantization, delivering peak energy efficiency in the range of 100-1000 TOPS/W. However, the high performance and efficiency of hardwired accelerators are counterbalanced by their poor flexibility, which makes it hard to deploy real-sized end-to-end DNNs on these systems and to achieve actual efficiencies similar to the theoretical peak. Limited flexibility and high area cost per device make them hard to adopt in IoT applications.
A compromise solution between dedicated accelerators and fully programmable devices is represented by FPGAs, where embedded general-purpose processors are coupled to the DSP-capable hardware to accelerate DNNs <cit.>. Several works explore reduced-precision arithmetic <cit.>, but within a power envelope orders-of-magnitude larger than IoT nodes budget. Lattice proposed the SensAI stack <cit.>, which offers machine learning ultra-low-power (1 mW to 1 W) capabilities on FPGAs. However, these solutions have a limited number of LUTs and a non-negligible unit cost, not compatible with many IoT applications.
Hardware reconfigurability makes these platforms more flexible than ASICs, but they still fall short of what the average IoT programmer demands. Additionally, their efficiency is much lower than that of ASICs.
The highest flexibility for QNN inference is offered by commercial general-purpose processors coupled with optimized software libraries such as CMSIS-NN libraries <cit.> for ARM Cortex M4 and M7 processors.
A recent approach to enhance the computing capabilities of low-power MCU systems is through domain-specific Instruction Set Architecture (ISA) extensions. To address DNN computing at the extreme edge, ARM presented the Cortex M-55 core based on the ARMv8-1M ISA, including an M-Profile Vector Extension (MVE) called Helium <cit.> that also supports 8-bit MAC instructions. Unfortunately, microcontrollers implementing this ISA are not yet commercially available. Many solutions in the RISC-V ecosystem leverage this approach as well; for example, the authors of <cit.> propose the XpulpV2 custom RISC-V ISA extensions for DSP applications, including support for 16-/8-bit SIMD operations. However, this ISA incurs performance degradation on sub-byte or mixed-precision linear kernels, since additional instructions are required for data manipulation, introducing substantial overhead <cit.>.
To boost sub-byte uniform linear kernels, XpulpNN <cit.> extends XpulpV2 with 4- and 2-bit SIMD operations. Moreover, it introduces fused Mac&Load instructions that allow concurrent execution of SIMD dot-product operations with memory accesses, increasing the computation efficiency of the core up to 94%. XpulpNN outperforms the performance of commercially available Cortex-M cores by up to 20× on quantized DNN layers. However, when operating on mixed-precision inputs, the efficiency boost of XpulpNN narrows significantly because of the massive software overhead necessary for packing and unpacking data. To eliminate the performance degradation compared to uniform precision kernels, authors of <cit.> propose direct hardware support for mixed-precision operations with dedicated RISC-V ISA extensions. To reduce the number of mixed-precision instructions to be encoded into the ISA, they exploit the dynamic bit-scalable execution mode: the ISA instruction only encodes the type of the operation, while its format is specified by a Control Status Register (CSR) of the core.
In this work, we present a new hardware and software stack targeting energy-efficient inference of mixed-precision QNNs on a parallel cluster of RISC-V processors. The main contributions of this paper are the following:
* We extend the RISC-V ISA with fused mixed-precision Mac&Load instructions. The proposed instruction set extension allows us to achieve ASIC-like utilization of the MAC units in the cores (larger than 80%) while operating on all the mixed-precision variants. Considering its mixed-precision capabilities and the preserved flexibility for general-purpose applications, we name our processor Flex-V.
* We integrate the extended processor in an eight-cores parallel ultra-low-power (PULP) cluster implemented in a commercial 22nm FDX technology to evaluate accurately the impact of such extensions on the operating frequency, area, and power.
* We integrate the proposed hardware extensions in a software framework for the end-to-end deployment of DNNs including a compiler, optimized libraries, and a memory-aware deployment flow, and we compare the proposed solution with the state-of-the-art end-to-end DNNs.
We compare the extended processor with state-of-the-art architectures by running both single convolutional layers and full end-to-end QNNs; a summary is reported in Table <ref>. Our results show a performance improvement, with respect to execution with the extensions disabled, of up to 8.5× on a single layer and up to 2.5× on an end-to-end network, with negligible degradation of accuracy and a peak energy efficiency of 3.26 TOPS/W, approaching that of accelerators, at a low area cost of 5.6% with respect to the baseline processor cluster and without compromising flexibility. The hardware and software described in this work are open-source, to support and boost an innovation ecosystem focusing on ultra-low-power computing for the IoT landscape.
§ BACKGROUND
§.§ PULP cluster
Parallel Ultra-Low Power (PULP) is an open-source computing platform exploiting near-threshold computing to reach high energy efficiency, leveraging parallelism to compensate for the performance degradation at low voltage <cit.>.
The PULP cluster adopted as a reference, shown in Fig. <ref>, is composed of eight RI5CY cores <cit.>, each of them characterized by a 4-stage in-order single-issue pipeline and the RV32IMCXpulpV2 Instruction Set Architecture (ISA). XpulpV2 is a specialized extension to the RISC-V ISA <cit.> designed for efficient digital signal processing (DSP) computation. It features hardware loops, post-modified access load and store instructions, along with SIMD operations on 16-bit and 8-bit integer vector operands.
The cores in the cluster share data on a Tightly Coupled Data memory (TCDM) of 128 kB, divided into 16 banks. The memory is accessed through a one-cycle latency logarithmic interconnect. The PULP cluster accelerator and its host, i.e. the Fabric Controller (FC), communicate through an AXI interface. Data transfers between the TCDM and the second level of memory, hosted by the Fabric Controller, are managed by a dedicated DMA.
The cluster processors fetch instructions from a two-level hierarchical instruction cache (the first level private to each core, the second shared) to enhance the hit rate. The cluster is also provided with a Hardware Synchronization Unit which manages fine-grained parallel thread dispatching and clock-gating of idle cores waiting for synchronization, enabling low-overhead, fine-grained parallelism and thus high energy efficiency.
§.§ QNN execution model
The software stack we propose in this work extends the PULP-NN software library presented in <cit.>. It relies on the Height-Width-Channel (HWC) data layout and on an execution model optimized for resource-constrained microcontrollers. Convolution layers are implemented by combining three distinct phases:
im2col: for a given output pixel position, the 3D input activations (in HWC format) of the current convolution are re-arranged into a 1D vector along the filter and input channel dimensions. PULP-NN performs this operation simultaneously for 2 output pixels, producing two separate im2col buffers.
Matrix Multiplication: this step consists of a sum-of-dot-product operation between the current im2col buffer and the sets of filters to produce the intermediate outputs at the higher 32-bit precision.
The kernel leverages the XpulpV2 ISA and exploits data locality within the Register File (RF) of RI5CY to maximize the computation throughput. As a result of a design exploration of the register resources available in the RI5CY RF, it is possible to implement a MatMul with a “4×2" unrolling factor, fetching from memory the weights of two consecutive filters and the input activations from two different im2col buffers to produce two activation outputs related to four consecutive channels in the same inner loop of the MatMul.
Quantization: each intermediate 32-bit accumulator from the previous stage is represented back in low-bitwidth form by applying normalization and quantization functions, composed of one MAC, one shift, and one clip operation.
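As a functional reference for these three phases (not the optimized RISC-V implementation, which processes two output pixels per inner loop with the “4×2" unrolling), the following Python sketch mimics im2col, matrix multiplication, and requantization on a toy HWC tensor; the scale and shift values are illustrative.

```python
import numpy as np

def conv_layer_ref(x, w, scale=1, shift=8, out_bits=8):
    """Toy HWC convolution via im2col + matmul + requantization.

    x: (H, W, Cin) integer activations;  w: (Cout, K, K, Cin) integer weights.
    Requantization is one multiply, one shift, and one clip per output,
    mirroring the MAC/shift/clip quantization step described above.
    """
    H, W, Cin = x.shape
    Cout, K, _, _ = w.shape
    Ho, Wo = H - K + 1, W - K + 1
    w_mat = w.reshape(Cout, -1).astype(np.int32)          # (Cout, K*K*Cin)
    out = np.zeros((Ho, Wo, Cout), dtype=np.int32)
    for i in range(Ho):
        for j in range(Wo):
            im2col = x[i:i+K, j:j+K, :].reshape(-1).astype(np.int32)  # 1D buffer
            acc = w_mat @ im2col                           # 32-bit accumulators
            q = (acc * scale) >> shift                     # normalize
            out[i, j, :] = np.clip(q, 0, 2**out_bits - 1)  # clip to out_bits
    return out

# toy usage: 8-bit activations, 4-bit (signed) weights
x = np.random.randint(0, 256, (16, 16, 32))
w = np.random.randint(-8, 8, (64, 3, 3, 32))
y = conv_layer_ref(x, w)
```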
§ FLEX-V CORE ARCHITECTURE
We introduce Flex-V, our RISC-V ISA-extended processor that flexibly supports sub-byte and mixed-precision preserving the fully-programmable capabilities of a general-purpose processor. The three key concepts that guided the development of the core's micro-architecture are dynamic bit-scalable execution, fused Mac&Load, and fully-flexible mixed-precision. In dynamic bit-scalable execution, a particular op-code defines a whole family of virtual instructions; the choice of a particular one to execute depends on contextual bits stored in a status register of the core.
Fig. <ref> shows the decoding process when the Flex-V core is running in dynamic bit-scalable execution mode: in case of a Scalar instruction, the decoder extracts all necessary information from the encoding of the instruction and communicates it to the EX stage. Contrarily, if the received op-code corresponds to a Virtual SIMD instruction, e.g. a (ml)sdotp, the decoder enables the proper functional unit within the EX stage, but the precision of the operation's operands depends also on status bits stored in the CSRs, such as the SIMD format, and on signals coming from dedicated controllers.
The mixed-precision Dot Product (Dotp) Unit shown in Fig. <ref>a, exploiting the operand-precision information stored in the CSRs, implements the sub-byte and mixed-precision support. It integrates dedicated units for 4- and 2-bit operands together with a Slicer&Router responsible for their extraction from a 32-bit input word. Considering the sum-of-dot-product (sdotp) operation between an 8-bit operand A and a 4-bit operand B, only four elements within the 32-bit input word for B can be consumed by a single instruction: as shown in Fig. <ref>b, the Slicer selects either the first or the last four elements depending on the value of MPC_CNT signal, then the Router directs the selected elements to the Dotp sub-unit specified by the SIMD_FMT signal coming from the CSR, i.e. the DOTP-8 for this example. The overall process is governed by the Mixed-Precision Controller (MPC).
The fused Mac&Load (mlsdotp) instruction overlaps a SIMD dot-product-like operation with a Load performed during the writeback stage, typically to replace non-stationary data in a register feeding the following Mac&Load instruction. Finally, we define fully flexible mixed-precision as the hardware support for automatic management of instructions involving operators with different bit precision. An additional Neural Network Register File (NN-RF), with six 32-bit registers dedicated to values of activations and weights, has been added to enable Load operations during the Mac&Load write-back stage, which cannot be done in the general purpose register file (GP-RF). Finally, the core includes a Mac&Load Controller (MLC) that is in charge of the automatic address generation, described in Fig. <ref>.
Fig. <ref> shows an assembly snippet of a Matrix Multiplication kernel between 8-bit activations and 4-bit weights. The kernel starts by initializing the CSRs driving the inputs of the MLC (i.e. {w,a}_skip, {w,a}_stride, {w,a}_rollback) and the MPC parameters defining the encoded activation and weight precision (simd_format) and the weight reuse parameter (mix_skip). Once the CSRs needed to configure the MLC and MPC are set, the inner loop of the kernel starts the execution. After the initialization of the base memory addresses, four weights and one activation are loaded explicitly to fill the NN-RF. The innermost loop executes only one explicit load per iteration; all other updates of the NN-RF are performed in the writeback phase of the Mac&Load instruction. Strides, rollbacks, and thresholds are all stored in CSRs and depend only on static features of the MatMul, such as the number of input channels, the dimensions of the filter kernel, and the precision of the operands.
During the execution of the inner loop the MLC automatically generates the memory address for both operands: it navigates a two-dimensional strided pattern by updating a register-stored pointer {w,a}_addr with three static parameters, {w,a}_stride, {w,a}_rollback, and {w,a}_skip. The {w,a}_stride parameter corresponds to the stride in the direction of the innermost loop of the pattern, while {w,a}_rollback rolls back the pointer of all innermost loop iterations and adds the stride of a single outermost loop iteration.
{w,a}_skip is the number of innermost loop iterations. Fig. <ref> shows the pattern in the example of a MatMul with unrolling factor “4×2" in PULP-NN <cit.>. This kind of pattern would require substantial instruction overhead (∼30%) for pointer management; the MLC deals with this entirely in hardware.
We can also note that, in the case of mixed-precision inputs, there is an additional degree of unrolling with respect to uniform-precision execution: thanks to the hardware support for mixed precision, each 32-bit register dedicated to weights can be exploited 2 to 4 times to process different activations, and the registers are updated at the end of each innermost loop iteration. These features, together with the automatic update of the activation and weight pointers enabled by the MLC, increase the utilization of the MAC units by reducing the overall number of loads from memory. Furthermore, while the baseline core (i.e. RI5CY) is limited to a maximum unrolling factor of “4×2" that saturates the registers within the GP-RF, the introduction of the dedicated NN-RF in Flex-V extends it to “4×4", further improving data reuse and hence performance.
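A behavioral model of the address pattern walked by the MLC may clarify the roles of stride, rollback, and skip; the sketch below is a software analogue with parameter names mirroring the CSR fields, not the hardware implementation, and the exact point at which the rollback is applied is simplified.

```python
def mlc_addresses(base, stride, rollback, skip, n_outer):
    """Generate the address sequence of the Mac&Load controller (behavioral model).

    stride   : offset applied after each innermost-loop iteration
    rollback : offset applied after `skip` innermost iterations; it rolls back
               the innermost walk and advances by one outer-loop stride
    skip     : number of innermost-loop iterations
    """
    addr = base
    seq = []
    for _ in range(n_outer):
        for _ in range(skip):
            seq.append(addr)
            addr += stride
        addr += rollback          # net effect: back to the row start plus one outer stride
    return seq

# toy usage (illustrative values): 4 innermost steps of 4 bytes, outer stride of 64 bytes
print(mlc_addresses(base=0x1000, stride=4, rollback=-16 + 64, skip=4, n_outer=3))
```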
§ DEPLOYMENT FLOW
We develop an optimized software library to take advantage of the proposed ISA extensions, replacing software-based low-precision data unpacking with the hardware support for sub-byte and mixed-precision operands, and introducing the new unrolling degree for matrix multiplications between mixed-precision operands.
To deploy end-to-end real-sized QNN benchmarks, we extend the open-source DORY tool <cit.> to support low-precision data formats (< 8-bit).
The tool automatically produces template-based C code that wraps a target backend, managing different levels of memories (i.e., L1, L2, and the external RAM) and orchestrating the tensor movements.
In particular, DORY exploits a tiling approach to separate layers into small nodes whose tensors can fit the L1 memory of the system. Then, it produces C routines which i) execute these smaller nodes in L1, and ii) double-buffer the movements of tensors from L2 to L1.
Notice that since the DMA is not blocking, the calls to the kernels are always overlapped with the asynchronous DMA calls.
The existing tool only supported 8-bit integer tensors.
To plug in our new library, we modify the key elements of DORY to support 2-bit and 4-bit data formats.
First, we extend the tiling solver based on Constraint Programming to support different data formats: now it considers the new constraints associated with sub-byte data formats, i.e., that the convolutional loop's innermost dimensions should always be byte-aligned.
Then, we create a new set of templates to support the new ISA extensions. In the new templates, before the tiling loops, we insert the CSR setup that is common to every executed sub-node. Inside the tiling loops, we call the functions implementing the key kernels that exploit the newly proposed ISA extensions.
Finally, we adjust the DORY mapping tool to consider that layers' tensors can have different data formats, correctly sizing the data transfers between L3, L2, and L1.
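To illustrate the kind of byte-alignment constraint added to the tiling solver, a simplified feasibility check on candidate tile sizes is sketched below; the real constraint is expressed inside DORY's Constraint Programming model, and the L1 budget and tile parameters used here are illustrative.

```python
def tile_is_valid(c_in_tile, bits_activation, bits_weight, l1_budget_bytes,
                  h_tile, w_tile, c_out_tile, k=3):
    """Return True if a candidate tile respects byte alignment and a simple L1 budget.

    Byte alignment: the innermost (channel) dimension times the element
    bit-width must be a whole number of bytes, so sub-byte rows never
    straddle byte boundaries. Output buffers are ignored for simplicity.
    """
    aligned = (c_in_tile * bits_activation) % 8 == 0 and \
              (c_in_tile * bits_weight) % 8 == 0
    act_bytes = h_tile * w_tile * c_in_tile * bits_activation // 8
    wgt_bytes = c_out_tile * k * k * c_in_tile * bits_weight // 8
    return aligned and (act_bytes + wgt_bytes) <= l1_budget_bytes

# e.g., a 16x16x32 activation tile with 8-bit activations, 4-bit weights, 64 output channels
print(tile_is_valid(32, 8, 4, 64 * 1024, 16, 16, 64))
```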
§ RESULTS AND DISCUSSION
To evaluate the Flex-V core in terms of timing, power, and area overhead compared to other cores based on the RI5CY architecture, we integrate RI5CY, MPIC, XpulpNN and Flex-V cores into the PULP cluster and perform separate full implementations with the Global Foundries 22nm FDX technology node. To evaluate the proposed hardware-software stack, we benchmark the PULP cluster with the Flex-V cores on synthetic convolutional layers and on the full deployment of real-world end-to-end QNNs.
§.§ Physical Implementation
We synthesize the PULP clusters with Synopsys Design Compiler-2019.12 and perform full place&route flow with Cadence Innovus-17.11.000 using the worst-case corner (SSG 0.59V, -40C/125C). To perform power overhead evaluations between RI5CY and Flex-V with disabled extensions, we run timing-annotated post-layout simulations of 8-bit MatMuls in typical corners at 250 MHz.
The total area of the Flex-V core is 0.018 mm^2, with an overhead of 30% compared to RI5CY due to the additional logic to extend the Dotp Unit and implement the MLC and the MPC. We note that the impact is only 6% when we compare the area at the cluster level. The additional logic of Flex-V compared to RI5CY does not significantly impact the maximum operating frequency of the cluster (-2%), which peaks up to 463 MHz.
Note that, despite the additional logic introduced to implement the new ISA extensions, the power consumption overhead with respect to RI5CY when executing an 8-bit MatMul with only the XpulpV2 extensions is limited to 2.47% for the single processor and 2.04% for the whole cluster, thanks to clock-gated CSRs. Complete area and power results are reported in Table <ref>.
§.§ DNN Layers Benchmarking
To demonstrate the benefits of the proposed core, we benchmark the PULP cluster with Flex-V cores on a set of synthetic convolution kernels, in terms of performance and energy efficiency, and we compare it with RI5CY<cit.>, MPIC<cit.> and XpulpNN<cit.>. The layers operate on representative tiles used in such types of devices to deploy QNN inference, applying 64×3×3×32 filters on a 16×16×32 input tensor and featuring different bit-precision for activations and weights (including mixed-precision). The results are then compared against the cluster execution of the same kernels on similar RISC-V cores and reported in Fig. <ref>.
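For orientation, the nominal workload of this benchmark layer and the cycle counts implied by a given throughput can be worked out directly; the unit-stride, same-padding assumption below is ours, and the MAC/cycle values are simply throughput figures quoted elsewhere in this work.

```python
# 64 output channels, 3x3 kernels, 32 input channels, on a 16x16x32 input tile
# (assuming unit stride and 'same' padding, so the output is also 16x16).
H, W, Cin, Cout, K = 16, 16, 32, 64, 3
macs = H * W * Cout * K * K * Cin          # 4,718,592 MAC operations
for mac_per_cycle in (38.2, 91.5):         # illustrative throughput figures
    print(f"{mac_per_cycle:5.1f} MAC/cycle -> {macs / mac_per_cycle:,.0f} cycles")
```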
Although MPIC <cit.> supports mixed-precision operations in its ISA, our solution speeds up convolution kernels by 1.4× thanks to the Mac&Load mechanism available in the Flex-V core and the supported “4×4" MatMul format. Moreover, the performance boost of Flex-V grows up to 4.5× and 8.5× with respect to XpulpNN and XpulpV2, respectively, which show heavy performance degradation on mixed-precision and sub-byte QNN kernels due to the lack of ISA support for these operations, which requires extra instructions in the assembly for data manipulation.
Table <ref> shows that the proposed architecture reaches a peak of energy efficiency of 3.26 TOPS/W on the uniform 2-bit MatMul kernel and 870 GOPS/W on the 8-bit configuration, which is comparable to dedicated hardware acceleration units without giving away software flexibility. Flex-V outperforms all the other solutions for all the configurations.
§.§ End-to-end Networks
To further demonstrate the capabilities of the proposed architecture, we benchmarked it with end-to-end real-life QNNs exploiting the deployment flow described in Section <ref>. We considered three use cases: an 8-bit MobileNetV1, a fully mixed-precision 8b4b MobileNetV1, and an aggressively quantized 4b2b ResNet-20. The two MobileNetV1 networks were trained on ImageNet, while the 4b2b ResNet-20 targets CIFAR10. It is worth noting that the memory footprint of the 8b4b MobileNetV1 is reduced by 47% with respect to the 8-bit quantized model, while its accuracy reaches 66.0%, a degradation of only 3.3%. Performance and accuracy of all the tested networks are reported in Tab. <ref>: the experiments performed on the ResNet-20 featuring 4-bit activations and 2-bit weights show that the proposed architecture achieves 2.3× and 2.5× speedups with respect to XpulpV2 and XpulpNN. We also report the results of the execution of an end-to-end network on the STM32H7 presented by Capotondi et al. <cit.>: the speedup of the proposed architecture with respect to this commercial product reaches 19× thanks to the combination of the extended ISA and the optimized software executed on the eight-core cluster.
§ CONCLUSION
We presented a novel hardware and software stack that meets the challenge of energy-efficient mixed-precision QNN inference on MCU-class processors. We extended the RISC-V ISA with sub-byte and mixed-precision fused Mac&Load instructions, aiming to remove the overhead caused by loading and unpacking data before the actual computation. We integrated the Flex-V core, which implements the extended ISA, into a tightly-coupled PULP cluster of eight cores. Its implementation in GF22FDX technology shows an area overhead of only 5.6% with respect to the baseline cluster with RI5CY cores. Furthermore, we developed a software library leveraging the new ISA extensions to improve the performance of convolutional kernels, the key kernels for boosting the execution of end-to-end QNNs. The results on single convolutional layers show up to 38.2 MAC/cycle, boosting the execution by 8.5× and 4.5× over the RI5CY and XpulpNN cores, respectively. We also benchmarked the proposed architecture with three end-to-end real-life QNNs, obtaining a performance gain of 2× - 2.5× with respect to state-of-the-art solutions. From the physical implementation of the cluster in 22nm FDX technology, we observed a peak energy efficiency of 3.26 TOPS/W.
§ ACKNOWLEDGEMENT
This work was supported in part by NeuroSoC HORIZON EU Project (g.a. 101070634) and in part by TRISTAN HORIZON EU Project (g.a. 101095947).
|
http://arxiv.org/abs/2307.02821v1
|
20230706073319
|
Isotropic plasma-thermal atomic layer etching of superconducting TiN films using sequential exposures of molecular oxygen and SF$_6/$H$_2$ plasma
|
[
"Azmain A. Hossain",
"Haozhe Wang",
"David S. Catherall",
"Martin Leung",
"Harm C. M. Knoops",
"James R. Renzas",
"Austin J. Minnich"
] |
cond-mat.mes-hall
|
[
"cond-mat.mes-hall",
"cond-mat.mtrl-sci",
"cond-mat.supr-con"
] |
Division of Engineering and Applied Science, California Institute of Technology, Pasadena, CA 91125, USA
Division of Engineering and Applied Science, California Institute of Technology, Pasadena, CA 91125, USA
Division of Engineering and Applied Science, California Institute of Technology, Pasadena, CA 91125, USA
Division of Natural Sciences, Pasadena City College, Pasadena, CA 91106, USA
Oxford Instruments Plasma Technology, North End, Bristol BS49 4AP, U.K.
Department of Applied Physics, Eindhoven University of Technology, P.O. Box 513, 5600MB Eindhoven, The Netherlands
Oxford Instruments Plasma Technology, North End, Bristol BS49 4AP, U.K.
[email protected]
Division of Engineering and Applied Science, California Institute of Technology, Pasadena, CA 91125, USA
Microwave loss in superconducting TiN films is attributed to two-level systems in various interfaces arising in part from oxidation and microfabrication-induced damage. Atomic layer etching (ALE) is an emerging subtractive fabrication method which is capable of etching with Angstrom-scale etch depth control and potentially less damage. However, while ALE processes for TiN have been reported, they either employ HF vapor, incurring practical complications; or the etch rate lacks the desired control. Further, the superconducting characteristics of the etched films have not been characterized. Here, we report an isotropic plasma-thermal TiN ALE process consisting of sequential exposures to molecular oxygen and an SF6/H2 plasma. For certain ratios of SF6:H2 flow rates, we observe selective etching of TiO2 over TiN, enabling self-limiting etching within a cycle. Etch rates were measured to vary from 1.1 Å/cycle at 150 °C to 3.2 Å/cycle at 350 °C using ex-situ ellipsometry. We demonstrate that the superconducting critical temperature of the etched film does not decrease beyond that expected from the decrease in film thickness, highlighting the low-damage nature of the process. These findings have relevance for applications of TiN in microwave kinetic inductance detectors and superconducting qubits.
Isotropic plasma-thermal atomic layer etching of superconducting TiN films using sequential exposures of molecular oxygen and SF6/H2 plasma
Austin J. Minnich
August 1, 2023
===========================================================================================================================================
§ INTRODUCTION
Titanium nitride (TiN) is a superconducting metal of interest for microelectronics and superconducting quantum devices. Its high kinetic inductance, low microwave loss, and high absorption coefficient in the infrared and optical frequencies make it a promising material for single photon detectors <cit.>, ultra-sensitive current detectors <cit.>, quantum-limited parametric amplifiers <cit.>, and qubits <cit.>. Superconducting microwave resonators based on TiN routinely exhibit internal quality factors Qi > 10^6 <cit.>. TiN is also used for microelectronic applications in which it is used as a copper diffusion barrier and metal gate electrode <cit.>. In many of these applications, imperfections at film interfaces are the primary limitation to figures of merit for various devices. For instance, the quality factor of superconducting microresonators is presently thought to be limited by microwave surface loss associated with two-level systems (TLS) in various interfaces <cit.>. Subtractive nanofabrication methods based on typical wet or dry etching processes are unsuitable for mitigating TLS density in these devices due to the lack of Angstrom-scale precision in etching and the sub-surface damage they induce <cit.>.
Atomic layer etching (ALE) is an emerging subtractive nanofabrication process with potential to overcome these limitations <cit.>. Early forms of ALE focused on directional etching <cit.>. Directional ALE is based on surface modification by adsorption of a reactive species, and subsequent sputtering of the modified surface with ions or neutral atoms of low energy exceeding only the sputtering threshold of the modified surface <cit.>. Isotropic thermal ALE processes have also been developed recently using sequential, self-limiting surface chemical reactions <cit.>. In thermal ALE, the material surface is modified to form a stable layer that can then be removed by a selective mechanism, such as temperature cycling, ligand-exchange transmetalation reactions, or others <cit.>. Isotropic thermal and plasma ALE processes have now been reported for various dielectrics and semiconductors including Al2O3 <cit.>, SiO2 <cit.>, AlN <cit.>, InGaAs <cit.> and others <cit.>. Surface smoothing of etched surfaces using ALE has also been reported for various metals and semiconductors <cit.>.
For TiN, ALE processes based on fluorination and ligand exchange with Sn(acac)2, trimethylaluminum (TMA), dimethylaluminum chloride (DMAC), and SiCl4 did not lead to etching <cit.>. When fluorinated, TiN retains its 3+ oxidation state, yielding TiF3. TiF3 either formed non-volatile ligand-exchange products or did not react with the precursors, and hence no etching occurred. This difficulty was overcome by first converting the Ti to the 4+ oxidation state with exposure to ozone or H2O2, which upon fluorination using HF produced volatile TiF4 <cit.>. A conceptually similar process has also been reported using O2 plasma and CF4 plasma <cit.>.
Despite these advances, limitations remain. The use of HF vapor incurs practical complications. The process of Ref. <cit.> based on O2 plasma and CF4 requires a heating and cooling step per cycle which can lead to impractical time per cycle on most conventional plasma tools. Additionally, the recipe achieves nm/cycle etch rates, which lacks the desired Angstrom-scale control and low damage characteristics. Previous reports did not examine the effects of ALE on the superconducting properties of the samples. Identifying alternate reactants to HF vapor while maintaining Angstrom-level precision over the thickness, and ensuring that superconducting properties are not degraded, all remain topics of interest for TiN ALE.
Here, we report the isotropic atomic layer etching of TiN using sequential exposures of O2 gas and SF6/H2 plasma. The process is based on the selective etching of TiO2 over TiN for certain ratios of SF6:H2. The observed etch rates varied from 1.1 Å/cycle up to 3.2 Å/cycle for temperatures between 150 °C and 350 °C, respectively, as measured using ex-situ ellipsometry. The etched surface was found to exhibit a ∼ 40% decrease in surface roughness. The superconducting transition temperature was unaffected by ALE beyond the expected change due to the decrease in film thickness, highlighting the low-damage nature of the process. Our findings indicate the potential of ALE in the processing of TiN for superconducting quantum electronics and microelectronics applications.
§ EXPERIMENT
The plasma-thermal ALE process of this work is illustrated in <Ref>. An exposure of molecular oxygen was used to oxidize the surface of TiN to TiO2, followed by a purge. After, a mixture of SF6 and H2 gas was introduced into the chamber and ignited to form SF6/H2 plasma. After this exposure, the reactor was again purged to complete the cycle. The use of SF6/H2 plasma was motivated by noting that HF does not etch TiN, but fluorine radicals will spontaneously etch TiN <cit.>. Studies on SiN and Si etching using hydrogen and fluorine-containing plasma have shown that the plasma formed by the mixture yields different products at different plasma concentration ratios, including HF molecules at high hydrogen concentrations <cit.>. We therefore hypothesize that at high H2 concentrations, the SF6/H2 plasma forms molecular HF which then selectively etches the TiO2 over the TiN, with minimal spontaneous etching from F radicals. The formation of HF in the SF6/H2 plasma is referred to as “in-situ HF" throughout the paper.
We investigated this approach to ALE of TiN using an Oxford Instruments FlexAL atomic layer deposition (ALD) system with an inductively-coupled plasma source, as described in Refs. <cit.>. The substrate table temperature varied between 150 to 200, as measured by the FlexAL substrate table thermometer. Prior to introducing the sample into the chamber for etching, the chamber walls and carrier wafer were conditioned by coating with 50 nm of Al2O3 using 300 cycles of Al2O3 ALD <cit.>. Alumina was selected as it does not form volatile fluoride species on exposure to SF6 plasma. For TiN ALE, the sample was first exposed to 50 sccm O2 and 50 sccm Ar gas for 2 s at 100 mTorr pressure, followed by a 10 s purge. Next, a mixture of 20 sccm H2 and 4 sccm SF6 was stabilized at 100 mTorr for 5 s before striking the plasma at 100 W for 10 s. The excess reactants were purged for 10 s before repeating the cycle. The recipe resulted in a total time of ∼ 40s per cycle. Before the sample was moved to the loadlock, the chamber was pumped down for 60 s. The sample was additionally held in the loadlock for two hours to cool down before exposure to air, so as to reduce oxygen diffusion into the sample.
The film thickness before and after etching was measured by ex-situ spectroscopic ellipsometry (J.A. Woolam M2000) at 60^∘ and 70^∘ from 370 nm to 1000 nm. Thickness was determined using 5 points on a 5 × 5 mm^2 square array. Subsequently, the data were fit using a Lorentz model to obtain the thickness of the samples <cit.>. Reported thickness values are the average of the 5 points. XPS analysis was performed using a Kratos Axis Ultra x-ray photoelectron spectrometer using a monochromatic Al Kα source. Depth profiling was performed using an Ar ion beam with a 60 s interval for each cycle. The estimated milling depth was calculated based on initial and final film thickness measured by ex-situ ellipsometry and assuming a constant ion milling rate. The XPS data was analyzed in CASA-XPS from Casa Software Ltd. We adopt universal Tougaard background and sub-peak fitting routines from Refs. <cit.>.
The film surface topography was characterized using a Bruker Dimension Icon atomic force microscope (AFM) over a 0.25 × 0.25 μm^2 area. The raw height maps collected on the AFM were processed by removing tilt via linear plane-fit. The surface roughness and power spectral density (PSD) were computed from the plane-fit height maps using procedures outlined in previous literature <cit.>. The PSD provides a quantitative measure of the lateral distance over which the surface profile varies in terms of spatial frequencies <cit.>. The PSD was calculated by taking the absolute square of the normalized 1D-discrete Fourier transform of each row and column from the plane-fit AFM scan. The transformed data was then averaged to produce a single PSD curve. Reported roughness values and PSD curves were found to be consistent across 3 spots on each film.
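A compact sketch of this PSD computation on a plane-fit height map is given below; the grid size, pixel spacing, and normalization convention are illustrative choices rather than the exact settings used for our scans.

```python
import numpy as np

def afm_psd(height, pixel_size_nm):
    """1D power spectral density of an AFM scan, averaged over rows and columns.

    height        : (N, N) plane-fit height map (e.g., in Angstrom)
    pixel_size_nm : lateral sampling distance
    Returns spatial frequencies (1/nm) and the row/column-averaged PSD.
    """
    n = height.shape[0]
    rows = np.abs(np.fft.rfft(height, axis=1) / n) ** 2   # per-row spectra
    cols = np.abs(np.fft.rfft(height, axis=0) / n) ** 2   # per-column spectra
    psd = 0.5 * (rows.mean(axis=0) + cols.mean(axis=1))
    freqs = np.fft.rfftfreq(n, d=pixel_size_nm)
    return freqs, psd

# toy usage: a 0.25 x 0.25 um^2 scan sampled on a 256 x 256 grid (~1 nm/pixel)
rng = np.random.default_rng(1)
z = rng.normal(scale=4.4, size=(256, 256))   # synthetic roughness-like surface
freqs, psd = afm_psd(z, pixel_size_nm=250 / 256)
rms = np.sqrt(np.mean(z**2))                 # RMS roughness of the map
```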
Electrical resistivity measurements were performed on the Quantum Design DynaCool Physical Property Measurement System (PPMS). The TiN films were connected to the PPMS sample holder by four aluminum wires, wirebonded on the Westbond 7476D Wire Bonder. The film resistivity (ρ) was measured using a 4-point setup <cit.>. The resistivity was measured from 6 K to 1.7 K, and the data was used to calculate the superconducting critical temperature (T_c) of the films.
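As an illustration of how T_c can be extracted from such resistivity curves, the sketch below uses the common midpoint-of-transition convention; the criterion, the helper function, and the synthetic R(T) data are assumptions for demonstration only.

```python
import numpy as np

def critical_temperature(T, R, fraction=0.5):
    """Estimate Tc as the highest temperature at which R has dropped below
    `fraction` of its normal-state value (taken as R at the highest measured T)."""
    order = np.argsort(T)
    T, R = np.asarray(T)[order], np.asarray(R)[order]
    R_normal = R[-1]
    below = np.where(R <= fraction * R_normal)[0]
    return T[below[-1]] if below.size else None

# toy usage: a synthetic sharp transition near 3.2 K, normal-state ~222 uOhm-cm
T = np.linspace(1.7, 6.0, 200)
R = 222.0 / (1.0 + np.exp((3.2 - T) / 0.05))
print(critical_temperature(T, R))   # ~3.2
```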
The samples consisted of 50 – 60 nm thick TiN films on high resistivity Si (100) wafers (>20 kΩcm, UniversityWafer) prepared using ALD with the same FlexAL system. The ALD process consisted of sequential half-cycles of exposure to tetrakis(dimethylamino)titanium (TDMAT) and nitrogen plasma with a 20 W DC bias at 350 °C, similar to the procedure reported in Refs. <cit.>. The resistivity at 6 K and T_c of a 60 nm thick ALD TiN film were measured to be 210 μΩcm and 3.22 ± 0.06 K, respectively; these values are comparable to those reported for other TiN films made using TDMAT <cit.>. The chemical composition of the deposited films is described in <Ref>. The titania (TiO2) films used for demonstrating etch selectivity in <Ref> were made by oxidizing TiN samples under an oxygen plasma for 5 minutes at 300 °C, yielding a 5 nm thick TiO2 film on top of the TiN film. The thicknesses of the TiO2 films were measured using ex-situ ellipsometry.
§ RESULTS
§.§ Selective etching with SF6/H2 Plasma
We begin by examining the etch rate of TiO2 and TiN films for various SF6:H2 flow rate ratios, η. <Ref> shows the etch rates of TiN and TiO2 versus η at 300. For η≲ 0.05, negligible etching of either film is observed. Negative etch rates correspond to an increase in the thickness of the film, which we assume to be growth of non-volatile TiF3. For η≥ 0.1, we observe spontaneous etching of TiO2, with the etch rate monotonically increasing with η. For TiN, we observe no etching for η≤ 0.2, but for η≥ 0.25 etching occurs. We attribute these observations to the formation of in-situ HF along with negligible fluorine radical concentration for 0.05 < η≤ 0.2, similar to the results obtained in prior work <cit.>. For η≥ 0.25, the concentration of F radicals becomes sufficient to spontaneously etch the TiN, leading to increasing etch rates for both films. From our measurements, we find that 0.1 ≤η≤ 0.2 achieves selective etching of TiO2 over TiN. To obtain the highest etch selectivity of TiO2 over TiN, we select η = 0.2 for our experiments. This 1:5 ratio of SF6:H2 plasma is used throughout the rest of the paper.
§.§ TiN ALE using O2 and in-situ HF exposures
<Ref> shows the thickness change of TiN versus the number of cycles for each half-cycle alone and for the full ALE recipe at 200 °C and 300 °C. For the half-cycles, the thickness change was measured after exposure to only molecular oxygen or only in-situ HF. No etching was observed for either half-cycle, supporting the need for both steps. In contrast, we observe a decrease in the thickness with increasing number of cycles when using both steps. The etch rate is calculated by dividing the total thickness change by the number of cycles, giving values of 2.5 ± 0.16 Å/cycle at 200 °C and 3.2 ± 0.10 Å/cycle at 300 °C.
We further examine the effect of temperature on the etch rate. <Ref> shows the etch per cycle (EPC) versus table temperature ranging from 150 °C to 350 °C. The etch rates are calculated over 100 cycles using a linear fit. We find that the etch rate increases from 1.1 Å/cycle at 150 °C to 3.2 Å/cycle at 300 °C. The increase in EPC with temperature is similar to what has been observed in previous thermal ALE studies of various materials <cit.>. We also observe a constant etch rate from 300 °C to 350 °C, similar to what is reported in Figure 7 of Ref. <cit.>.
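The EPC quoted here is simply the magnitude of the slope of a linear fit of thickness versus cycle number; a minimal sketch with made-up ellipsometry readings (chosen to mimic the 300 °C etch rate) is:

```python
import numpy as np

cycles = np.array([0, 25, 50, 75, 100])
thickness_A = np.array([520.0, 441.0, 362.0, 283.0, 198.0])  # illustrative readings, in Angstrom

slope, intercept = np.polyfit(cycles, thickness_A, 1)
epc = -slope                     # etch per cycle, Angstrom/cycle
print(f"EPC = {epc:.2f} A/cycle")
```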
We also explored the self-limiting nature of the recipe by measuring the saturation curves of each half-cycle. For each saturation curve, the purge times and one half-cycle time are fixed while the other is varied. The reported etch rates are calculated from the thickness change over 50 cycles at 300 °C. In <Ref>, the in-situ HF step is fixed at 10 s, while the etch rate is measured versus the oxygen exposure time. The etch rate is observed to saturate at ∼ 3 Å/cycle above 2 s, which is consistent with the self-limiting nature of the oxidation step. In <Ref>, the oxidation step is fixed at 2 s, while the etch rate is measured versus the in-situ HF exposure time. The etch rate saturates at ∼ 3 Å/cycle above 10 s, which is consistent with the selectivity of the in-situ HF to etch TiO2 and terminate on the TiN.
§.§ Characterization of film composition
We next characterize the chemical composition of the TiN films before and after ALE using XPS. In <Ref>, we show the core levels of Ti2p, N1s, O1s, C1s and F1s. For the Ti2p XPS spectra in <Ref>, we observe five components. Each component is a doublet consisting of a 2p3/2 and a 2p1/2 subpeak. We observe subpeaks corresponding to Ti-C (454.9 eV and 460.4 eV) <cit.>, Ti-N (455.1 eV and 460.8 eV) <cit.>, Ti-ON (456.5 eV and 462.3 eV) <cit.>, Ti-O (458.5 eV and 464.2 eV) <cit.>, and Ti-F (459.4 eV and 465.6 eV) <cit.>. In <Ref>, we report the N1s spectra with two subpeaks at 397.1 eV and 398.9 eV, belonging to N-Ti and N-O bonds, respectively <cit.>. In <Ref>, we report the O1s spectra with two subpeaks at 530.4 eV and 532.2 eV, corresponding to O-Ti and O-N bonds, respectively <cit.>. In <Ref>, we report the F1s spectra with two subpeaks at 684.9 eV and 690.3 eV, corresponding to F-Ti and F-C bonds, respectively <cit.>.
We observe that the Ti2p spectrum is dominated by oxides and oxynitrides, consistent with the presence of a native oxide on TiN
<cit.>. After ALE (bottom panels of <Ref>), an increase in the magnitude of the Ti-N and N-Ti peaks is observed along with an overall decrease in the O1s peak magnitude. The decreased O1s signal implies a reduced native oxide concentration after ALE, as has been observed in other works <cit.>. The F1s spectra for the original sample may be attributed to contamination from using the same chamber for deposition and etching, which is consistent with the reduced magnitude of the F1s peak in the original sample compared to that in the ALE-treated sample (bottom panel of <Ref>).
We also performed depth-profiling XPS to determine the atomic concentrations on the surface and bulk. In <Ref>, we show the atomic concentrations of Ti, N, F, C and O as a function of sputtering time and estimated depth in the original and ALE-treated films. In the original sample (<Ref>), the atomic concentrations on the surface are 31.9% (Ti), 37.6% (N), 16.1% (O), 12.0% (C), and 2.4% (F). After 120 s Ar milling (∼ 3.5 nm), the atomic concentrations plateau to their bulk values of 48.6% (Ti), 42.3% (N), 6.1% (O), 1.9% (C), and 1.1% (F). The carbon and oxygen levels are consistent with other reported ALD TiN films made using TDMAT <cit.>. For the ALE-treated sample (<Ref>), the atomic concentrations on the surface are 34.2% (Ti), 39.5% (N), 7.9% (O), 11.9% (C), and 6.5% (F). After 120 s Ar milling (∼ 3.5 nm), the atomic concentrations plateau to their bulk values of 49.0% (Ti), 42.2% (N), 5.9% (O), 1.8% (C), and 1.1% (F). We observe a ∼ 49% decrease in the surface oxygen concentration in the ALE-treated film. An increase in the surface fluorine concentration of the ALE-treated film is also observed, consistent with other works involving the interactions of fluorine-containing plasma with dielectric films <cit.>. The atomic concentrations in the bulk of the ALE-treated film are within 95% of the values in the original film. Therefore, we conclude that the effect of ALE is confined to a few nanometers of the surface, with negligible effect on the bulk chemical composition.
§.§ Surface roughness characterization
We characterized the roughness of the TiN films before and after ALE using AFM. <Ref> shows the plane-fit height map of the film as deposited using ALD. <Ref> shows the plane-fit height map after 100 cycles of ALE at 300 °C. <Ref> shows the PSD curves for the original film, after 40 ALE cycles, and after 100 ALE cycles at 300 °C. We observe a decrease in the PSD intensity across all length scales as the number of ALE cycles is increased, indicating that features with length scales from ∼ 2 - 20 nm are smoothed by the ALE process. In <Ref>, the RMS roughness is plotted versus the number of ALE cycles at 300 °C. We observe a monotonic decrease in RMS roughness from 4.4 Å to 2.5 Å after 100 cycles. This 43% reduction in roughness was observed across 3 different positions on the sample.
§.§ Electrical and superconducting properties
We investigated the effect of ALE on the electrical and superconducting properties of the TiN films by measuring their resistivity from 6 K to 1.7 K. A 60 nm TiN film was deposited using ALD and etched to 50 nm using ALE. Another 50 nm TiN film was prepared using ALD for comparison with the ALE-treated 50 nm film. The measured resistivity versus temperature for the three films is shown in <Ref>. The resistivity at 6 K of the 60 nm ALD film is found to be 222 μΩcm, with a superconducting critical temperature T_c = 3.22 ± 0.06 K. The resistivity of the TiN film is consistent with those previously reported for ALD TiN films <cit.>, and the T_c reported is similar to the T_c of other TiN films grown with TDMAT <cit.>. After 40 cycles of ALE at 200 °C, the TiN thickness decreased to 50 nm, with a resistivity of 201 μΩcm at 6 K and T_c = 3.13 ± 0.04 K. For comparison, the 50 nm ALD film had a resistivity of 227 μΩcm at 6 K and T_c = 3.11 ± 0.05 K. We therefore find that the change in T_c of the TiN film after ALE is consistent with that expected from a decrease of 10 nm in thickness, without any additional decrease due to process-induced damage. This observation highlights the improved quality of the processed films compared to those obtained from processing methods which lack atomic control <cit.>. The reduced 6 K resistivity of the ALE-treated film is thought to arise from the removal of the native oxide. This result warrants further investigation and is a topic of future study.
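As an illustration of how T_c can be extracted from resistance-versus-temperature curves such as these, the sketch below estimates T_c as the temperature at which the resistance crosses half of its normal-state value. The 50% criterion, the plateau window, and the synthetic data are assumptions made for demonstration, not the exact fitting procedure used here.

import numpy as np

def critical_temperature(T, R, fraction=0.5):
    """Estimate Tc as the temperature where R crosses `fraction` of the
    normal-state resistance, using linear interpolation on sorted data."""
    T = np.asarray(T, dtype=float)
    R = np.asarray(R, dtype=float)
    order = np.argsort(T)
    T, R = T[order], R[order]
    r_normal = R[T > T.max() - 0.5].mean()      # normal-state plateau near the top of the sweep
    target = fraction * r_normal
    i = np.where(R >= target)[0][0]             # first point at/above the target
    if i == 0:
        return T[0]
    t0, t1, r0, r1 = T[i - 1], T[i], R[i - 1], R[i]
    return t0 + (target - r0) * (t1 - t0) / (r1 - r0)   # linear interpolation

# Made-up transition centred near 3.2 K, for illustration only.
T = np.linspace(1.7, 6.0, 200)
R = 222e-6 / (1 + np.exp(-(T - 3.2) / 0.05))    # placeholder resistance values
print("Estimated Tc (K):", round(critical_temperature(T, R), 2))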
§ DISCUSSION
We now discuss the characteristics of our plasma-thermal TiN ALE process in context with isotropic thermal ALE processes for TiN and related materials. Thermal ALE of TiN has been reported using molecular O3 or H2O2 and HF vapor <cit.>, and O2 plasma and CF4 plasma <cit.>. The first process leads to an etch per cycle (EPC) of 0.20 Å/cycle at 200 °C, achieving atomic-scale control of etching. However, the recipe requires the use of HF vapor, which incurs practical complications. The second process based on O2 plasma and CF4 plasma achieves an EPC of 17.1 Å/cycle at 200 °C, which is a larger EPC than is desired for manipulating the surface region of the films. The second process also requires an additional heating step, which can lead to impractical process times on conventional tools. The present recipe achieves an EPC of 2.6 Å/cycle at 200 °C and 3.1 Å/cycle at 250 °C, providing etch rates between the previously reported recipes. The present recipe also avoids the use of HF, requiring only an SF6/H2 plasma that additionally yields etching selectivity of TiO2 over TiN.
Our isotropic plasma-thermal ALE may find potential applications in the fabrication of TiN-based superconducting microresonators for microwave kinetic inductance detectors and qubits, where the native oxide hosts parasitic two-level systems (TLS) that presently limit the device performance. Based on our XPS and resistivity measurements, ALE-treated films have a reduced oxygen concentration while maintaining unaltered bulk chemistry and electrical properties. These properties make ALE promising for reducing the number of TLS at the metal-air interface and thereby improving the quality factor of superconducting microresonators. The smoothing effect and isotropic Angstrom-scale EPC of the present ALE recipe are also relevant for fabricating TiN-based nanoscale metal gate electrodes in CMOS devices and various transistor designs, where the metal layers are required to have thickness on the order of ∼ 10 nm with uniformity ≲ 4% <cit.>. The ALD system in our work (Oxford Instruments, FlexAL) has demonstrated high uniformity on 200 mm diameter substrates <cit.>, and therefore our process has the potential to extend to wafer-scale applications.
§ CONCLUSION
We have reported an isotropic plasma-thermal atomic layer etching process for TiN using sequential exposures of molecular oxygen and SF6/H2 plasma. The SF6/H2 plasma selectively etches TiO2 over TiN for SF6:H2 flow rate ratios between 0.1 and 0.2. The etch rate varies from 1.1 Å/cycle at 150 °C to 3.2 Å/cycle at 350 °C. We observe a smoothing effect from ALE, corresponding to a ∼ 43% reduction in RMS roughness after 100 cycles. The surface oxygen concentration is reduced by ∼ 49% after 100 cycles of ALE, indicating a decrease in the volume of surface oxide. We also find that ALE does not induce any change in T_c beyond that expected from the decrease in film thickness, highlighting the low-damage nature of the process. We anticipate that the ability to engineer the surface of TiN films on the Angstrom scale using isotropic ALE will facilitate applications of TiN in superconducting resonators and microelectronics.
§ ACKNOWLEDGEMENTS
This work was supported by NSF under Award #2234390. The authors thank Nicholas Chittock (Eindhoven University of Technology) for useful discussions, and Phillipe Pearson (California Institute of Technology) for assistance with the wirebonder. We gratefully acknowledge the critical support and infrastructure provided for this work by The Kavli Nanoscience Institute and the Molecular Materials Research Center of the Beckman Institute at the California Institute of Technology.
|
http://arxiv.org/abs/2307.00343v2
|
20230701134603
|
A note concerning polyhyperbolic and related splines
|
[
"Jeff Ledford",
"Ryan Urban",
"Alec Vidanes"
] |
math.NA
|
[
"math.NA",
"cs.NA",
"math.FA",
"41A05, 41A15, 41A25"
] |
A note concerning polyhyperbolic and related splines
Jeff Ledford
Ryan Urban
Alec Vidanes
Department of Mathematics & Computer Science, Longwood University
August 1, 2023
===================================================================================================================================================================================================================================================================================================
This note concerns the finite interpolation problem with two parametrized families of splines related to polynomial spline interpolation. We address the questions of uniqueness and establish basic convergence rates for splines of the form s_α = pcosh(α·)+qsinh(α·) and t_α = p+qtanh(α·) between the nodes where p,q∈Π_k-1.
§ INTRODUCTION
This note continues the study of splines, popularized by Schoenberg <cit.>. The goal of this paper is to develop basic properties of two parametrized families of splines. Specifically, we address questions of uniqueness and convergence rate similar to those found in <cit.>. The first family of splines satisfy s∈ C^2k-2[a,b] and
(D^2-α^2)^k s =0 on [a,b]∖ X,
where α>0 and X is a partition of [a,b]. Following <cit.>, we call these k-th order polyhyperbolic splines. These splines are examples of L-splines, <cit.>. The hyperbolic designation was chosen because the homogeneous solution of (<ref>) is given by
s_α(x)=p(x)cosh(α x)+q(x)sinh(α x),
where p and q are polynomials of degree at most k-1.
We note that, in the literature, splines corresponding to the differential equation D^2(D^2-α^2) u =0 are often called hyperbolic, but they are also referred to by the terms tension or exponential. These splines have been studied extensively; see <cit.> among many others. We also find a different notion of hyperbolic splines in <cit.>; these correspond to the differential equation (D^2-(2k-1)^2)⋯(D^2-3^2)(D^2-1)u=0 or (D^2-(2k)^2)⋯(D^2-2^2)Du=0.
Interpolation schemes which depend on parameters have appeared throughout the literature. Polyhyperbolic splines are similar to (but distinct from) splines in tension <cit.> and the GB splines of <cit.>, although they do share the property that, in the appropriate limiting sense, these schemes lead to polynomial spline interpolation. For the case studied here, we expect to recover cubic splines, since as α→ 0, the differential operator reduces to D^4.
The second family of splines is built out of tanh; specifically, these are piecewise combinations of t=p+q tanh(α·), where p and q are polynomials. We can characterize the order of these splines by the maximal degree of the polynomials; the k-th order splines have polynomials whose degree does not exceed k-1. Owing to the popularity of tanh as an activation function in neural networks <cit.> and the use of splines for the same <cit.>, it is possible that these splines have applications in their own right. However, in this note we will concern ourselves with basic approximation properties of these splines. In principle, interpolating with either spline of order k corresponds to polynomial spline interpolation of degree 2k-1. We will explore the problem for the values k=1 and k=2, which correspond to classical linear and cubic interpolation, respectively.
The rest of the paper is laid out as follows. Section 2 contains definitions and facts necessary to the sequel. Section 3 contains results for first order splines, while Section 4 contains those related to the second order splines. The main results are found in these sections as Proposition <ref> and Theorem <ref> and its corollary. General comments including those concerning the corresponding Hermite interpolation problem are made in Section 5, while in Section 6 we provide some Mathematica™ code to generate some of these splines.
§ DEFINITIONS AND BASIC FACTS
Throughout the sequel, we use standard notations for derivatives. For example, D^2u and u” both correspond to the second derivative of the function u. The set of polynomials of degree at most k is denoted Π_k. By a partition of the interval [a,b], we mean a sequence of increasing values X=(x_j: 0≤ j≤ N) such that a=x_0<⋯<x_N=b. Associated to a fixed X, we have h_j:=x_j-x_j-1 and h(X):=max_1≤ j≤ Nh_j.
We denote the set of all k-th order polyhyperbolic splines by S_α^k(X), that is
S_α^k(X):={ s∈ C^2k-2[a,b]: (D^2-α^2)^k s =0 on [a,b]∖ X }.
We denote by T_α^k(X), the collection
T_α^k(X) := { sech(α·) u: u∈ S_α^k(X) }.
Note that on each interval [x_j,x_j+1], t_α∈ T_α^k(X) take the form
t_α(x) = p(x)+ q(x)tanh(α x),
where p,q∈Π_k-1.
We use the notation ‖u‖_L^∞ to denote the usual L^∞ norm of a function u, while ‖M‖_∞ denotes the maximum row sum of the matrix M.
A diagonally dominant matrix is an N × N square matrix [a_ij] which satisfies |a_ii|-∑_j≠ i|a_ij|>0 for each 1≤ i ≤ N. The well known Levy-Desplanques theorem proves this condition implies the invertibility of [a_ij], see <cit.>.
Generally we will seek to interpolate the data Y=(y_j:0 ≤ j≤ N) on a fixed but arbitrary partition X using s_α∈ S_α^k(X) or t_α∈ T_α^k(X). When we want to emphasize the dependence on the data set Y, we write s_α[Y] or t_α[Y]. Finally, we note that the value of the constant C will depend on its occurrence and is typically independent of the listed parameters unless stated otherwise.
§ PROPERTIES OF FIRST ORDER SPLINES
As one may expect, things are easier when k=1. We include these results as more or less a warm up for the more involved k=2 case. When k=1, we have s_α=∑_j=1^Ns_j, where s_j:[x_j-1,x_j]→ℝ is given by s_j(x)=a_jcosh(α x)+b_jsinh(α x). Thus we need only solve the 2× 2 matrix equation
[ cosh(α x_j-1) sinh(α x_j-1); cosh(α x_j) sinh(α x_j) ][ a_j; b_j ] = [ y_j-1; y_j ].
The determinant of the matrix is
cosh(α x_j-1)sinh(α x_j) - cosh(α x_j)sinh(α x_j-1) = sinh(α h_j).
Since h_j>0, this system has a unique solution which after simplification yields
s_j(x)= sinh(α(x_j-x))/sinh(α h_j) y_j-1 + sinh(α(x- x_j-1))/sinh(α h_j) y_j.
A similar computation yields
t_j(x) = [tanh(α x_j)-tanh(α x)]/[tanh(α x_j)-tanh(α x_j-1)] y_j-1 + [tanh(α x)-tanh(α x_j-1)]/[tanh(α x_j)-tanh(α x_j-1)] y_j.
Expanding (<ref>) and (<ref>) in a Taylor series (in α) yields
s_α|_[x_j-1,x_j](x) =y_j-1+ (y_j-y_j-1)/h_j (x-x_j-1)+O(α^2) and
t_α|_[x_j-1,x_j](x) =y_j-1+ (y_j-y_j-1)/h_j (x-x_j-1)+O(α^2) .
We recognize the first two terms as the linear interpolant of (x_j-1,y_j-1) and (x_j,y_j). Thus we expect s_α to exhibit similar convergence properties to the linear spline l[Y]. The expansions above give us s_0[Y]=t_0[Y]=l[Y].
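The two closed forms above are straightforward to evaluate numerically. The following Python sketch (an illustration added here, not part of the original note) implements s_j and t_j on a single interval and checks that both approach the linear interpolant as α→0.

import numpy as np

def s_first_order(x, xj0, xj1, yj0, yj1, alpha):
    """First-order polyhyperbolic interpolant on [xj0, xj1] (closed form above)."""
    h = xj1 - xj0
    return (np.sinh(alpha * (xj1 - x)) * yj0
            + np.sinh(alpha * (x - xj0)) * yj1) / np.sinh(alpha * h)

def t_first_order(x, xj0, xj1, yj0, yj1, alpha):
    """First-order tanh interpolant on [xj0, xj1] (closed form above)."""
    den = np.tanh(alpha * xj1) - np.tanh(alpha * xj0)
    return ((np.tanh(alpha * xj1) - np.tanh(alpha * x)) * yj0
            + (np.tanh(alpha * x) - np.tanh(alpha * xj0)) * yj1) / den

# Both interpolants approach the linear spline as alpha -> 0.
x = np.linspace(1.0, 2.0, 5)
linear = 3.0 + (5.0 - 3.0) * (x - 1.0)
for alpha in (1.0, 0.1, 0.01):
    err_s = np.max(np.abs(s_first_order(x, 1.0, 2.0, 3.0, 5.0, alpha) - linear))
    err_t = np.max(np.abs(t_first_order(x, 1.0, 2.0, 3.0, 5.0, alpha) - linear))
    print(f"alpha={alpha}: |s - l| = {err_s:.2e}, |t - l| = {err_t:.2e}")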
We may also estimate the rate of convergence by expanding a sufficiently smooth function f in a Taylor series about x=x_j-1, then on (x_j-1,x_j) we have
f(x)-t_α(x) =
(f'(x_j-1) - α h_j(coth(α h_j)+tanh(α x_j))f[x_j-1,x_j] )(x-x_j-1)
+O(h^2)
= (f'(x_j-1)-f[x_j-1,x_j])(x-x_j-1)+O(h^2)
=O(h^2).
The last equation requires f∈ C^2[a,b], and follows from the Mean Value Theorem. A completely analogous result holds for s_α. We summarize these findings in the following propositions.
Let X be a partition of [a,b], for any α>0 and data set Y, the interpolants s_α[Y] and t_α[Y] are unique. Furthermore they satisfy
lim_α→ 0s_α[Y](x)=l[Y](x) and lim_α→ 0t_α[Y](x)=l[Y](x)
for all x∈ [a,b], where l[Y] is the linear spline interpolant.
Suppose that f∈ C^2[a,b] and X is a partition of [a,b]. If t_α∈ T_α^1(X) such that t_α|_X =f|_X, then for α sufficiently small, we have
‖f- t_α‖_L^∞[a,b]≤ C h^2,
where C>0 depends on f and α. The same is true if t_α is replaced by s_α∈ S_α^1(X).
Rather than expanding s_α in a Taylor series, we could use the fact that for any f∈ C^2[a,b], there exists g∈ C^2[a,b] with f(x)=cosh(α x)g(x). Thus
‖f - s_α‖_L^∞ [a,b] = ‖cosh(α·) ( g- t_α) ‖_L^∞[a,b]≤ Ch^2
for sufficiently small α.
§ PROPERTIES OF SECOND ORDER SPLINES
The k=2 problem is more involved. Just as in the case with cubic splines, we must impose endpoint conditions on our splines s_α∈ S_α^2(X) (or t_α∈ T_α^2(X)). For these, we use those found in <cit.>:
Type I: s_α '(a) = y'_0 , s_α '(b)=y'_N ,
Type II: s_α”(a)=s_α”(b)=0,
Type III: s_α”(a)=y”_0 , s_α”(b)=y”_N.
Assuming that we are sampling a function f:[a,b]→ℝ, so that y_j=f(x_j), conditions I and III become
Type I: s_α '(a) = f'(a) , s_α '(b)=f'(b) ,
Type III: s_α”(a)=f”(a) , s_α”(b)=f”(b).
In light of the remark that follows Proposition <ref>, we find it more convenient to work with t_α∈ T_α^2(X) in what follows, often suppressing the dependence on α. Note that since t_α|_[x_j-1,x_j](x)=p_j(x)+q_j(x)tanh(α x), where p_j,q_j∈Π_1, we have t”_α|_[x_j-1,x_j] = D^2[q_jtanh(α·)]. Specifying t_α”(x_j-1) and t_α”(x_j) allows us to solve for the coefficients of q_j; then integrating twice and using the interpolation conditions allows us to solve for the coefficients of p_j. One may then generate a tridiagonal system to enforce smoothness of the first derivative; setting t”_j:=t”_α(x_j), we have
b_0t”_0+c_0 t”_1 = d_0,
a_jt_j-1”+ b_jt”_j+c_jt”_j+1 = d_j; 1≤ j≤ N-1,
a_Nt”_N-1+b_Nt”_N = d_N.
One needs to evaluate t'|_[x_j-1,x_j] at both endpoints to generate the coefficients in (<ref>). Setting
E_j-1 = [α h_j cosh(α h_j)-sinh(α h_j)] / [2α^2 h_j (tanh(α x_j)-tanh(α x_j-1)-α h_j tanh(α x_j-1)tanh(α x_j))],
we have for 1≤ j ≤ N-1
a_j =[cosh(α x_j-1)/cosh(α x_j)] E_j-1=h_j/6+O((αh)^2); α→ 0,
c_j =[cosh(α x_j+1)/cosh(α x_j)] E_j=h_j+1/6+O((αh)^2); α→ 0, and
d_j = (y_j+1-y_j)/h_j+1 - (y_j-y_j-1)/h_j.
The term b_j is more complicated, setting
F_j-1 = 4α^2 h_j(tanh(α x_j)-tanh(α x_j-1)-α h_jtanh(α x_j-1)tanh(α x_j)),
which is twice the denominator of E_j-1, we can simplify the expression somewhat
b_j = ( sinh(2α x_j)-2α h_j - 2tanh(α x_j-1)(cosh^2(α x_j)-α^2 h_j^2) ) / F_j-1
-( sinh(2α x_j)+2α h_j+1 - 2tanh(α x_j+1)(cosh^2(α x_j)-α^2 h_j+1^2) ) / F_j
= (h_j+h_j+1)/3 +O((αh)^2); α→ 0.
For type I endpoint conditions, the coefficients in (<ref>) and (<ref>) are given by
b_0 = -( sinh(2α x_0) +2α h_1 - 2tanh(α x_1)(cosh^2(α x_0)-α^2 h_1^2) )/F_0
=h_1/3+O((αh)^2); α→ 0,
c_0 = [cosh(α x_1)/cosh(α x_0)] E_0 = h_1/6+O((αh)^2);α→ 0,
d_0 = (y_1-y_0)/h_1-y'_0,
a_N = [cosh(α x_N-1)/cosh(α x_N)] E_N-1= h_N/6+O((αh)^2);α→ 0,
b_N = ( sinh(2α x_N)-2α h_N - 2tanh(α x_N-1)(cosh^2(α x_N)-α^2 h_N^2) ) /F_N-1
= h_N/3+O((αh)^2);α→ 0, and
d_N = y'_N - (y_N-y_N-1)/h_N.
The adjustments for the other types are
II: b_0=b_N=1 , c_0=a_N=0, d_0=d_N=0
III: b_0=b_N=1 , c_0=a_N=0, d_0=y”_0 , d_N=y”_N.
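Once the coefficients and end conditions above are assembled, the moments t”_j follow from a single tridiagonal solve. The Python sketch below applies the standard Thomas algorithm; purely for illustration, the coefficient arrays are filled with the α→0 limiting (cubic-spline, type II) values, and the general-α expressions above would simply replace those few assignment lines.

import numpy as np

def solve_tridiagonal(a, b, c, d):
    """Thomas algorithm for a system with sub-, main- and super-diagonals a, b, c.
    a[0] and c[-1] are unused, matching the boundary rows of the system above."""
    n = len(d)
    cp, dp = np.zeros(n), np.zeros(n)
    cp[0], dp[0] = c[0] / b[0], d[0] / b[0]
    for j in range(1, n):
        m = b[j] - a[j] * cp[j - 1]
        cp[j] = c[j] / m if j < n - 1 else 0.0
        dp[j] = (d[j] - a[j] * dp[j - 1]) / m
    x = np.zeros(n)
    x[-1] = dp[-1]
    for j in range(n - 2, -1, -1):
        x[j] = dp[j] - cp[j] * x[j + 1]
    return x

# Illustration with the alpha -> 0 limiting coefficients (type II end conditions),
# which reduce to the classical cubic-spline moment equations.
xs = np.array([0.0, 0.4, 1.0, 1.5, 2.0])
ys = np.sin(xs)
h = np.diff(xs)
n = len(xs)
a = np.zeros(n); b = np.ones(n); c = np.zeros(n); d = np.zeros(n)
a[1:-1], c[1:-1] = h[:-1] / 6.0, h[1:] / 6.0
b[1:-1] = (h[:-1] + h[1:]) / 3.0
d[1:-1] = np.diff(ys[1:]) / h[1:] - np.diff(ys[:-1]) / h[:-1]
print(solve_tridiagonal(a, b, c, d))   # second-derivative values at the nodes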
Direct computation now provides the following analog of Proposition <ref>.
Let X be a partition of [a,b]. For sufficiently small α and data set Y with end condition type I,II, or III, the interpolants s_α[Y] ∈ S_α^2(X) and t_α[Y]∈ T_α^2(X) are unique. Furthermore,
lim_α→ 0s_α[Y](x)=σ[Y](x) and lim_α→ 0t_α[Y](x)=σ[Y](x)
for all x∈ [a,b], where σ[Y] is the cubic spline interpolant with the same type of endpoint condition.
The matrix (<ref>) for t_α∈ T_α^2(X) is diagonally dominant for sufficiently small α. Now consider Ỹ=(sech(α x_j)y_j:0≤ j≤ N); we have a unique t̃_α∈ T_α^2(X) which interpolates Ỹ, and s_α = cosh(α·)t̃_α∈ S_α^2 (X) interpolates Y.
We use an argument similar to the one given in <cit.> to prove a result similar to Proposition <ref>. We begin by setting δ=σ - t_α and δ_j” = δ”(x_j). We use (<ref>) and the tridiagonal system for σ to generate a system for δ”:
δ”_0+(c_0/b_0)δ”_1 = [(3b_0-h_1)/(3b_0)]σ”(x_0)+[(6c_0-h_1)/(6b_0)]σ”(x_1),
(a_j/b_j)δ”_j-1+δ”_j+(c_j/b_j)δ”_j+1
=[(6a_j-h_j)/(6b_j)]σ”(x_j-1)+[(3b_j-h_j-h_j+1)/(3b_j)]σ”(x_j)+ [(6c_j-h_j+1)/(6b_j)]σ”(x_j+1),
(a_N/b_N)δ_N-1”+δ_N” = [(6a_N-h_N)/(6b_N)]σ”(x_N-1)+[(3b_N-h_N)/(3b_N)]σ”(x_N).
Now we may write this as the matrix equation (I+M)δ”= b, where δ” represents the vector with components δ_j” and b is the vector version of the right hand side (<ref>). More computation with the coefficients in (<ref>) shows that ‖M‖_∞→ 1/2 as α→ 0, hence for sufficiently small α
‖δ”‖_∞= ‖(I+M)^-1b ‖_∞≤ 3 ‖b‖_∞.
To estimate ‖b‖_∞ we note that the Mean Value Theorem provides the estimate O((αh)^2)‖σ”'‖_L^∞([a,b]∖ X) for the first and last rows, while the triangle inequality provides the estimate O((αh)^2)‖σ”‖_L^∞(X) for the middle rows as α→ 0. Thus we have for sufficiently small α
‖δ”‖_∞≤ C(αh)^2 max{‖σ”'‖_L^∞([a,b]∖ X),‖σ”‖_L^∞(X)}.
Now the argument proceeds similarly to Proposition <ref>. We may use Rolle's theorem for the function δ, which allows us to find ξ_j∈(x_j-1,x_j) such that δ'(ξ_j)=0; this, together with the fact that δ(x_j)=0, allows us to write
δ|_[x_j-1,x_j](x)=∫_x_j^x δ'(u) du and δ'|_[x_j-1,x_j](x)=∫_ξ_j^x δ”(u) du.
We need only estimate ‖δ”‖_L^∞[a,b]. To this end, note that we can write things in terms of the values at the partition
δ”(x) = σ”(x)-t_α”(x)
=w_1(x)δ”_j-1 +w_2(x)δ”_j+z_1(x)σ”_j-1 +z_2(x)σ”_j,
where
w_1(x) = 4α^2 h_j^2 cosh(α x_j-1)((-1+α(x_j-x)tanh(α x_j))tanh(α x)+tanh(α x_j)) / [cosh^2(α x)F_j-1],
z_1(x) = (x_j-x)/h_j -w_1(x)= O( (αh )^2 ) ‖σ”'‖_L^∞([a,b]∖ X);α→ 0
w_2(x) =4α^2 h_j^2 cosh(α x_j)((1+α(x-x_j-1)tanh(α x_j-1))tanh(α x)-tanh(α x_j-1)) / [cosh^2(α x)F_j-1]
z_2(x) = (x-x_j-1)/h_j -w_2(x).
Thus
‖δ”‖_L^∞[a,b]= O( (αh )^2 ) max{‖σ”'‖_L^∞([a,b]∖ X),‖σ”‖_L^∞(X)};α→ 0.
Putting these estimates together yields the following theorem.
Let X be a partition for [a,b]. Given a data set Y, suppose that t_α[Y]∈ T_α^2(X) interpolates Y with type I (II, III) end conditions. For i=0,1, or 2 and sufficiently small α,
‖D^i(σ[Y]-t_α[Y])‖_L^∞[a,b]≤ C (αh)^4-i max{‖σ”'‖_L^∞([a,b]∖ X),‖σ”‖_L^∞(X)},
where σ[Y] is the cubic spline interpolant with type I (II, III) end conditions and C>0 is a constant independent of h.
Let X be a partition for [a,b] and f∈ C^4[a,b]. Suppose that t_α[f]:=t_α[f(X)]∈ T_α^2(X) interpolates f(X) with type I (II, III) end conditions. For i=0,1, or 2 and sufficiently small α,
‖D^i(t_α[f]-f)‖_L^∞[a,b]≤ C h^4-i,
where C:=C_α,f,X>0. The same is true for s_α[f]∈ S_α^2(X).
The result for t_α[f] follows from Theorem <ref>, the triangle inequality, and known convergence rates for σ[f]:=σ[f(X)]:
‖D^i(σ[f]-f)‖_L^∞[a,b] ≤ C_f,X h^4-i; i=0,1,2 and
‖D^3(σ[f]-f)‖_L^∞[a,b] ≤ C_f,X h
found in <cit.> and <cit.>, respectively. The constants depend on the norms of various derivatives of f and the mesh ratio |X|=h/min_j h_j.
To see the result for s_α[f]∈ S_α^2(X), note that any function f∈ C^4[a,b] may be written f(x)=cosh(α x)g(x), where g∈ C^4[a,b] as well. We have
‖D^i(s_α[f] - f)‖_L^∞[a,b] = ‖D^i(cosh(α·)(t_α[g] - g))‖_L^∞[a,b]
so the result follows from the Leibniz rule and the corresponding result for t_α∈ T_α^2(X).
§ FURTHER CONSIDERATIONS
One powerful aspect of modeling with cubic splines is the ability to preserve certain qualities of the underlying data set to be interpolated. Unfortunately, s_α[Y]∈ S_α^2(X) (or t_α[Y]∈ T_α^2(X)) need not share properties of Y such as positivity, monotonicity, or convexity. As seen with interpolation with cubic splines, the trade off for preserving the shape of the data is giving up some smoothness of the interpolant. If we choose to specify the first derivatives rather than enforcing continuity of the second derivative at each internal x_j∈ X, then we can solve (<ref>) in terms of the values Y=(y_j) and Y'=(y'_j) on the interval [x_j-1,x_j]:
s_α(x) = y_j+y'_j(x-x_j)
+( [3(y_j+1-y_j)-h_j+1(2y'_j+y'_j+1)]/h_j+1^2+O(α^2))(x-x_j)^2
+([h_j+1(y'_j+y'_j+1)-2(y_j+1-y_j)]/h_j+1^3+O(α^2))(x-x_j)^3+O(α^2)
=σ(x)+O(α^2 h_max^2).
Here σ is the cubic Hermite spline interpolant for the same data. In order for s_α to be approximately shape preserving, we may first generate a cubic interpolant with the property. Algorithms to preserve positivity, monotonicity, and convexity for cubic splines have been well established. See, for instance, <cit.> and the references therein.
Essentially, these algorithms provide an interval from which to choose each parameter y'_j∈ Y' in such a way that certain inequalities hold for σ. In order to ensure that s_α has the same property, we first choose the parameters of σ to satisfy the strict inequalities. This gives us room to make α small enough so that the error term does not affect the relevant inequality for σ.
§ ALGORITHM
We include Mathematica™ source code for s_α∈ S_α^2(X) for the interested reader.
(*Enter in x, y, and p values*)
x = (*Input list of values*);
y = (*Input list of values or functions of x*);
t = (*Input type of end condition*);
yend = (*Input end conditions for type 1 or 2*);
p = (*Input tension parameter*);
n = Length[x];
(*Constructs coefficient lists*)
acoe = Table[Symbol["a" <> ToString[i]], {i, n - 1}];
bcoe = Table[Symbol["b" <> ToString[i]], {i, n - 1}];
ccoe = Table[Symbol["c" <> ToString[i]], {i, n - 1}];
dcoe = Table[Symbol["d" <> ToString[i]], {i, n - 1}];
allcoe = {};
For[i = 1, i <= (n - 1), i++,
AppendTo[allcoe, Symbol["a" <> ToString[i]]];
AppendTo[allcoe, Symbol["b" <> ToString[i]]];
AppendTo[allcoe, Symbol["c" <> ToString[i]]];
AppendTo[allcoe, Symbol["d" <> ToString[i]]];
];
(*Creates spline equations and puts them in a list*)
s1 = Array[0 &, n - 1];
s2 = Array[0 &, n - 1];
For[i = 1, i < n, i++,
s1[[i]] = ((acoe[[i]] + bcoe[[i]]*x[[i]])*Exp[-p*x[[i]]])
+ ((ccoe[[i]] + dcoe[[i]]*x[[i]])*Exp[p*x[[i]]]);
];
For[i = 1, i < n, i++,
s2[[i]] = ((acoe[[i]] + bcoe[[i]]*x[[i + 1]])*Exp[-p*x[[i + 1]]])
+ ((ccoe[[i]] + dcoe[[i]]*x[[i + 1]])
*Exp[p*x[[i + 1]]]);
];
(*Creates the first derivative of spline equations and puts
them in a list*)
ds = Array[0 &, n - 2];
For[i = 1, i <= n - 2, i++,
ds[[i]] = (bcoe[[i]]*Exp[-p*x[[i + 1]]]
+ dcoe[[i]]*Exp[p*x[[i + 1]]] - Exp[-p*x[[i + 1]]]*p
*(acoe[[i]] + bcoe[[i]]*x[[i + 1]])
+ Exp[p*x[[i + 1]]]*p
*(ccoe[[i]] + dcoe[[i]]* x[[i + 1]]))
- (bcoe[[i + 1]]*Exp[-p*x[[i + 1]]]
+ dcoe[[i + 1]]*Exp[p*x[[i + 1]]] - Exp[-p*x[[i + 1]]]
*p*(acoe[[i + 1]] + bcoe[[i + 1]]*x[[i + 1]])
+ Exp[p*x[[i + 1]]]*p
*(ccoe[[i + 1]] + dcoe[[i + 1]]* x[[i + 1]]));
];
(*Creates the second derivative of spline equations and puts
them in a list*)
dds = Array[0 &, n - 2];
For[i = 1, i <= n - 2, i++,
dds[[i]] = (-2*bcoe[[i]]*Exp[-p*x[[i + 1]]]*p
+ 2*dcoe[[i]]*Exp[p*x[[i + 1]]]*p + Exp[-p*x[[i + 1]]]
*(acoe[[i]] + bcoe[[i]]*x[[i + 1]])*p^2
+ Exp[p*x[[i + 1]]]*(ccoe[[i]] + dcoe[[i]]*x[[i + 1]])
*p^2) - (-2*bcoe[[i + 1]]*Exp[-p*x[[i + 1]]]*p
+ 2*dcoe[[i + 1]]*Exp[p*x[[i + 1]]]*p
+ Exp[-p*x[[i + 1]]]
*(acoe[[i + 1]] + bcoe[[i + 1]]*x[[i + 1]])
*p^2 + Exp[p*x[[i + 1]]]
* (ccoe[[i + 1]] + dcoe[[i + 1]]*x[[i + 1]])*p^2);
];
(*Sets up invertible matrix by putting the lists s1, s2, ds,
and dds in an array*)
mat1 = {};
For[i = 1, i < n, i++,
AppendTo[mat1, Coefficient[s1[[i]], allcoe]];
];
For[i = 1, i < n, i++,
AppendTo[mat1, Coefficient[s2[[i]], allcoe]];
];
For[i = 1, i <= n - 2, i++,
AppendTo[mat1, Coefficient[ds[[i]], allcoe]];
];
For[i = 1, i <= n - 2, i++,
AppendTo[mat1, Coefficient[dds[[i]], allcoe]];
];
(*Adds end conditions to matrix*)
If[t == 1,
 AppendTo[mat1, Coefficient[(bcoe[[1]]*Exp[-p*x[[1]]]
     + dcoe[[1]]*Exp[p*x[[1]]]
     - Exp[-p*x[[1]]]*p*(acoe[[1]] + bcoe[[1]]*x[[1]])
     + Exp[p*x[[1]]]*p*(ccoe[[1]] + dcoe[[1]]*x[[1]])), allcoe]];
 AppendTo[mat1, Coefficient[(bcoe[[n - 1]]*Exp[-p*x[[n]]]
     + dcoe[[n - 1]]*Exp[p*x[[n]]]
     - Exp[-p*x[[n]]]*p*(acoe[[n - 1]] + bcoe[[n - 1]]*x[[n]])
     + Exp[p*x[[n]]]*p*(ccoe[[n - 1]] + dcoe[[n - 1]]*x[[n]])), allcoe]];
];
If[t == 2 || t == 3,
 AppendTo[mat1, Coefficient[(-2*bcoe[[1]]*Exp[-p*x[[1]]]*p
     + 2*dcoe[[1]]*Exp[p*x[[1]]]*p
     + Exp[-p*x[[1]]]*(acoe[[1]] + bcoe[[1]]*x[[1]])*p^2
     + Exp[p*x[[1]]]*(ccoe[[1]] + dcoe[[1]]*x[[1]])*p^2), allcoe]];
 AppendTo[mat1, Coefficient[(-2*bcoe[[n - 1]]*Exp[-p*x[[n]]]*p
     + 2*dcoe[[n - 1]]*Exp[p*x[[n]]]*p
     + Exp[-p*x[[n]]]*(acoe[[n - 1]] + bcoe[[n - 1]]*x[[n]])*p^2
     + Exp[p*x[[n]]]*(ccoe[[n - 1]] + dcoe[[n - 1]]*x[[n]])*p^2), allcoe]];
];
(*Creates solution matrix*)
mat2 = {};
For[i = 1, i < n, i++,
AppendTo[mat2, y[[i]]];
];
For[i = 2, i <= n, i++,
AppendTo[mat2, y[[i]]];
];
For[i = 2*(n - 1), i < (n - 1)*4, i++,
AppendTo[mat2, 0];
];
(*Redefines allcoe as a list for all the values of
the coefficients*)
allcoe = LinearSolve[mat1, mat2];
(*Final spline function*)
f[u_] = Sum[allcoe[[4*k - 3 ;; 4*k]].{Exp[-p*u],
     u*Exp[-p*u], Exp[p*u], u*Exp[p*u]}
    *Piecewise[{{1, x[[k]] <= u < x[[k + 1]]}}, 0],
   {k, 1, n - 1}];
Plot[f[u], {u, x[[1]], x[[n]]}];
Department of Mathematics & Computer Science, Longwood University, U.S.A.
E-mail address: [email protected]
E-mail address: [email protected]
E-mail address: [email protected]
|
http://arxiv.org/abs/2307.00533v2
|
20230702100208
|
Learning Robot Geometry as Distance Fields: Applications to Whole-body Manipulation
|
[
"Yiming Li",
"Yan Zhang",
"Amirreza Razmjoo",
"Sylvain Calinon"
] |
cs.RO
|
[
"cs.RO"
] |
Learning Robot Geometry as Distance Fields: Applications to Whole-body Manipulation
Yiming Li
Yan Zhang
Amirreza Razmjoo
Sylvain Calinon
August 1, 2023
=============================================================================================================================================
In this work, we propose to learn robot geometry as distance fields (RDF), which extend the signed distance field (SDF) of the robot with joint configurations. Unlike existing methods that learn an implicit representation encoding joint space and Euclidean space together, the proposed RDF approach leverages the kinematic chain of the robot, which reduces the dimensionality and complexity of the problem, resulting in more accurate and reliable SDFs. A simple and flexible approach that exploits basis functions to represent SDFs for individual robot links is presented, providing a smoother representation and improved efficiency compared to neural networks. RDF is naturally continuous and differentiable, enabling its direct integration as cost functions in robot tasks. It also allows us to obtain high-precision robot surface points with any desired spatial resolution, with the capability of whole-body manipulation. We verify the effectiveness of our RDF representation by conducting various experiments in both simulations and with the 7-axis Franka Emika robot. We compare our approach against baseline methods and demonstrate its efficiency in dual-arm settings for tasks involving collision avoidance and whole-body manipulation.
Project page: https://sites.google.com/view/lrdf/homehttps://sites.google.com/view/lrdf/home
§ INTRODUCTION
In the field of robotics, the representation of a robot commonly relies on low-dimensional states, like joint configurations, the pose of the end-effector, and force/torque data. However, this low-dimensional representation lacks internal structural details and is insensitive to external factors, limiting the ability to interact with the environment and respond to real-world dynamics. To handle this problem, some geometric representations have been proposed, like primitives and meshes, with various applications <cit.>. However, they either make simplified assumptions or require significant computational resources to obtain a detailed model.
A natural idea for handling this problem is to find a compact representation that can encode the geometry of the robot efficiently. Recent studies in computer vision and graphics have shown the advantages of representing scenes and objects using signed distance functions (SDFs) <cit.>. This representation not only offers continuous distance information but also exhibits query efficiency. Since the robot geometry usually involves high dimensionality, learning the SDF representation is still challenging.
We argue that representing the robot geometry as distance fields (RDF) has multiple advantages. First, it provides a continuous and smooth distance representation, granting easy access to derivatives. This characteristic is particularly well-suited for robot optimization problems, such as motion planning and collision avoidance. Further, RDF representation decouples the robot from spatial resolutions, enabling the acquisition of robot surface points at any desired scale, which is helpful in whole-body manipulation tasks. Finally, RDF allows efficient computation, which is crucial for real-time applications where robots need to quickly process sensory information and generate appropriate responses in dynamic environments.
In this work, we propose the use of a kinematics-aware distance fields that can represent the robot geometry with
arbitrary joint configurations. In contrast to existing methods that learn an encoded shape and configurations together for articulated objects or robots <cit.>, the proposed RDF adopts a configuration-agnostic approach during the learning phase and utilizes the kinematics chain of the robot during the inference phase. This approach simplifies the problem by learning the SDF for each robot link, reducing the dimensionality and making it mathematically explainable in joint space. During the inference phase, the kinematics information is used to retrieve the SDF values.
By leveraging the kinematics structure of the robot instead of attempting to learn it from scratch, our method captures the geometry with improved accuracy while being reliable. An approach based on basis functions and ridge regression is proposed to learn parameterized SDFs for the robot links, which not only has high efficiency in terms of storage and computation but also ensures continuity and smoothness of distance fields.
In experiments, we demonstrate the capabilities of our RDF in three aspects. First, we provide a quantitative comparison of produced distance fields against other representative methods, showing the advantage of our approach. Then, we tackle two different robot tasks to show the versatility of the RDF representation: a collision avoidance task in dynamic environments to show real-time control performance and a whole-body manipulation task with two arms to lift bulk objects to show how RDF works in gradient-based optimization problems. To summarize our main contributions, our RDF representation is
* based on the kinematics chain of the robot and can be extrapolated to any joint configurations reliably (extrapolation capability).
* learned through a combination of basis functions, having a compact and flexible structure, providing simple expressions for the analytic derivatives of the distance fields.
* useful in various optimization problems, including collision avoidance and whole-body manipulation, which are demonstrated in real-world experiments.
§ RELATED WORK
SDFs for scene/object representation.
Representing objects or scenes as SDFs is an active research topic in computer vision and graphics due to its query efficiency and the ability to describe complex shapes <cit.>. Typically, it is a continuous scalar field defined over a 3D space that assigns signed distance values to points, representing the distance to the surface. The capability of SDFs in modeling object and scenes have shown wide applications in robotics like mapping <cit.>, grasping <cit.> and rearrangement <cit.>. <cit.> has explored optimizing diverse grasping configurations based on the SDF of the object. <cit.> propose to learn a forward model that predicts rigid transformations of an observed point cloud for the given actions. <cit.> proposes to learn kinematics and dynamics models as SDFs for robot manipulation.
SDFs for motion planning.
SDFs can also be regarded as cost functions in optimization problems, which also fit well with robot motion planning and control <cit.>. <cit.> proposes to use SDF to represent the environment and achieve effective motion planning in trajectory optimization. <cit.> extends learning SDFs to approximate generic equality constraint manifolds. <cit.> presents a regularized SDF with neural networks to ensure the smoothness of SDFs at any scale, testing it in collision avoidance and reactive control tasks. <cit.> samples points on the robot surface and computes their SDF values with GPU acceleration for motion planning. Although representing scenes as SDFs has shown several advantages in robotics tasks, the environment is usually diverse and dynamic, and it is impossible to obtain SDFs for arbitrary scenes.
SDFs for robot geometry representation.
An intuitive idea to solve the problem is to represent the robot as an SDF (in addition to, or instead of, the scene). <cit.> proposes to learn SDFs expressed in joint space with neural networks, allowing distance values to be queried with points and joints as input. Similarly, <cit.> also trains an SDF model with joint angles as input to represent their mobile robot. <cit.> presents a reachability-based SDF representation that can compute the distance between the swept volume of a robot arm and obstacles. All of the aforementioned methods need to learn an SDF model coupled with joint angles, which is high-dimensional and nonlinear. The sampled points and joint configurations are very sparse in that space, making it difficult to train an accurate robot SDF model. Instead, our proposed SDF representation of the robot can be constructed based on the transformation of each link SDF, which simplifies the problem by avoiding modeling the joint configuration during the learning phase and achieves better generalization performance (including extrapolation).
§ LEARNING ROBOT GEOMETRY AS DISTANCE FIELDS
In this section, we present our approach to represent the robot geometry as distance fields. This method distinguishes itself from current techniques <cit.> by employing the robot kinematic chain to learn distance fields for individual robot links. Instead of relying on learning non-linear and high-dimensional functions to determine the distance between points p and the robot surface at any configuration q, our approach relies on the inherent kinematic structure of the robot. Specifically, we transform the points p to each link frame and estimate the distance value for each link separately. This framework allows us to reduce the amount of data required during the learning phase while simultaneously achieving more precise estimations during inference.
Furthermore, we demonstrate the use of compact 3D basis functions to learn the signed distance functions of each link. These basis functions exhibit smoothness properties that ensure the overall smoothness of the primary function. Consequently, the efficiency of the proposed algorithm is enhanced as we can reduce the number of required points for learning the functions.
§.§ Problem Notations
Let ℛ(q) ⊂ℝ^3 be a robot in the 3D Euclidean space at the configuration q, and let ∂ℛ(q) denote the surface of ℛ. The distance function f(p,q): ℝ^3 →ℝ is defined with f(p,q) = ± d(p, ∂ℛ(q)). Here, d(p, ∂ℛ(q)) = inf_p' ∈∂ℛ(q)‖p-p'‖_2 denotes the minimum distance between the point p∈ℝ^3 and the robot surface. Signs are assigned to points to guarantee negative values within the robot, positive values outside, and zero at the boundary. The gradient ∇ f_p points in the direction of maximum distance increase away from the robot surface. In consequence, the normal n ∈ℝ^3 with respect to ℛ can be defined as n = ∇ f_p.
§.§ Kinematic Transformation of SDFs
Consider a robot with C degrees of freedom and K links, characterized by joint angles q = {q_0,q_1,⋯,q_C-1} and link shapes ℬ = {ℬ_0,ℬ_1,⋯,ℬ_K-1}. The distance field to represent the robot geometry is the union of the link SDFs, which can be written as
f_ℛ = min{f_ℬ_0^b,f_ℬ_1^b,⋯,f_ℬ_K-1^b},
where f_ℬ_k^b is the SDF of link ℬ_k in the robot base frame. In Section <ref>, we will elaborate on our methodology, which queries RDF values through the SDF of each robot link. The SDF value f_ℬ_k^b for point p in the robot base frame can be computed through the rigid transformation of SDFs <cit.>, which involves transforming the input points of the SDF as
f_ℬ_k^b(p,q)=f_ℬ_k(^b 𝒯_k^-1( q) p),
where ^b 𝒯_k( q) ∈𝕊𝔼(3) denotes a matrix dependent on q that performs the transformation from the frame of the k-th link to the base frame of the robot. The computation of these transformation matrices can be achieved using the kinematics chain of the robot, typically represented by Denavit-Hartenberg parameters, see Appendix <ref>.
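A minimal sketch of how the two relations above can be evaluated is given below: the robot SDF at a set of query points is the minimum over links of each link SDF applied to points mapped into that link's frame. The per-link sphere SDFs, the transforms, and all names here are placeholders standing in for the learned link models and the forward-kinematics transforms ^bT_k(q), not the actual Franka model.

import numpy as np

def transform_points_to_link(points, T_base_link):
    """Map base-frame points into a link frame by applying T^{-1} of a rigid T."""
    R, t = T_base_link[:3, :3], T_base_link[:3, 3]
    return (points - t) @ R          # row-vector form of R^T (p - t)

def robot_sdf(points, link_sdfs, link_transforms):
    """Union of link SDFs: minimum over links of the per-link distances."""
    dists = [sdf(transform_points_to_link(points, T))
             for sdf, T in zip(link_sdfs, link_transforms)]
    return np.min(np.stack(dists, axis=0), axis=0)

# Placeholder link SDFs (spheres of radius 0.05 m) and identity transforms;
# in the paper these would be the learned per-link models and FK transforms.
def make_sphere_sdf(radius):
    return lambda p: np.linalg.norm(p, axis=-1) - radius

link_sdfs = [make_sphere_sdf(0.05) for _ in range(9)]
link_transforms = [np.eye(4) for _ in range(9)]
link_transforms[1][:3, 3] = [0.0, 0.0, 0.3]   # hypothetical second-link offset

pts = np.array([[0.0, 0.0, 0.0], [0.0, 0.0, 0.25], [0.2, 0.0, 0.0]])
print(robot_sdf(pts, link_sdfs, link_transforms))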
§.§ Learning SDFs using Basis Functions
Basis functions have been widely used in encoding trajectories in robotics, such as in dynamic motion primitive (DMP) <cit.> or probabilistic movement primitives (ProMP) <cit.>, see <cit.> for a review. They provide a continuous, differentiable, and smooth representation of the trajectory, ensuring the encoded motion appears natural without abrupt changes. This compact parameterization also enables efficient storage and computation while accurately capturing complex motions.
Drawing inspiration from these studies, we propose the adoption of geometric primitives, which serve as a three-dimensional extension of basis functions to represent the SDF of each robot link. By leveraging these basis functions, we aim to preserve the aforementioned advantages. In this work, we employ Bernstein polynomials as the chosen basis function. However, other types of basis functions, such as Radial Basis Functions (RBF) and Fourier basis functions can easily be substituted, depending on the specific requirements of the application. For a more comprehensive understanding of the utilization of basis functions in robotics, we refer the reader to <cit.>.
The SDF f__k for robot link _k can be represented as a weighted combination of N basis functions as
f__k = ∑_n=1^NΨ_n w_n,k = Ψ w_k,
where Ψ is a set of basis functions (see details in Appendix <ref>). The weights w_k can be learned through least square regression, given by w^*_k = (Ψ^T Ψ)^-1Ψ^T f__k. However, computing the inverse for large matrices can be computationally expensive and suffers from memory issues. Instead of a batch evaluation, a recursive formulation can be used, providing exactly the same result. To do so, we define a new parameter B = (Ψ^⊤Ψ) and process the data sequentially by sampling a small batch of points {t̃, f̃} and updating the learned weights when new data points become available. The whole process is depicted in Algorithm <ref>, and a 2D example is shown in Fig. <ref>.
After obtaining the optimal weights w^*, the distance can be decoded with f( t) = Ψ( t) w^* during inference. This representation also provides analytical gradients ∇ f( t) by analytically differentiating the basis functions, which enables a fast and precise computation, see Appendix <ref> for details.
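The Python sketch below illustrates the batch (non-recursive) version of this fit on a toy shape: tensor-product Bernstein features are built by a Kronecker-style product of the three univariate bases, and the weights are obtained by ridge regression. The sphere SDF, the number of basis functions N, and the regularization λ are illustrative assumptions rather than the settings used for the robot links.

import numpy as np
from math import comb

def bernstein_basis(t, N):
    """Bernstein polynomials of degree N-1 evaluated at t in [0, 1]."""
    t = np.atleast_1d(t)
    return np.stack([comb(N - 1, n) * t**n * (1 - t)**(N - 1 - n)
                     for n in range(N)], axis=-1)

def features(points, N):
    """Tensor-product (Kronecker) features Psi for 3D points scaled to [0, 1]^3."""
    phi = [bernstein_basis(points[:, e], N) for e in range(3)]
    out = np.einsum('hi,hj,hk->hijk', phi[0], phi[1], phi[2])
    return out.reshape(points.shape[0], -1)

def fit_weights(points, f, N, lam=1e-4):
    """Ridge-regression estimate w* = (Psi^T Psi + lam I)^{-1} Psi^T f."""
    Psi = features(points, N)
    A = Psi.T @ Psi + lam * np.eye(Psi.shape[1])
    return np.linalg.solve(A, Psi.T @ f)

# Toy example: fit the SDF of a sphere of radius 0.3 centred in the unit cube.
rng = np.random.default_rng(0)
pts = rng.uniform(0.0, 1.0, size=(4000, 3))
sdf = np.linalg.norm(pts - 0.5, axis=1) - 0.3
w = fit_weights(pts, sdf, N=8)
pred = features(pts[:5], 8) @ w
print(np.c_[sdf[:5], pred])           # fitted vs. ground-truth distances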
§ NUMERICAL EXPERIMENTS
To demonstrate the effectiveness of the proposed method in terms of quality and efficiency, we provide a number of numerical comparisons against baseline methods. Implementation details can be found in Appendix <ref>.
Effectiveness of basis functions.
We first compare the proposed basis function based method with two other representative state-of-the-art approaches: a volumetric-based method, TT-SVD <cit.>, and a neural network based method, DeepSDF <cit.>. TT-SVD utilizes tensor decomposition to obtain low-rank representations for volumetric SDFs. DeepSDF employs neural networks to represent continuous SDFs. We compared these methods based on their ability to model the data using the Chamfer distance (CD) <cit.>, the inference time required for obtaining SDF values, and the compactness of the learned models measured by the model size. For the TT-SVD method, we voxelized the SDF to a resolution of 256^3, and set the maximum rank R to 40. As for DeepSDF, we trained the network using both limited data (the same number of points used for the Bernstein polynomial) and augmented data (10 times the number of points).
We present the mean results aggregated across all links in Table <ref>. Our method demonstrates competitive results in the representation quality compared to the state-of-the-art approaches while having faster inference and smaller model sizes. Although TT-SVD shows a lower mean Chamfer distance (CD), it exhibits a higher max Chamfer distance, indicating a lack of smoothness and sensitivity to high-frequency data. Fig. <ref> presents a more detailed comparison between the proposed method and neural network. The plot highlights that our method shows faster convergence, requires less training data, and can generate smoother SDF compared to NNs. In addition, it is worth noting that for neural networks, we obtain analytical gradients via back-propagation, while our approach provides a simpler method by directly calculating the derivatives of the basis functions.
All parameters learned by our approach are also interpretable. Indeed, with Bernstein basis functions, the weights directly correspond to keypoints.
Quality of RDF. We compare the performance of different methods for modeling the distance field. The first method is Neural-JSDF, which does not leverage the kinematic information of the robot and learns the distance field from scratch. The second method is a sphere-based approach, where each link of the robot is represented using multiple spheres. While this method incorporates the kinematic chain, the accuracy of modeling each individual link is expected to be relatively low. We also evaluate two methods that utilize the kinematic structure of the robot. One method employs a neural network (NN) to learn the SDF of each link, while the other method utilizes basis functions (BP) to represent the SDF. We report the mean absolute error (MAE) and root mean square error (RMSE) for points near the robot surface (within 0.03 m) and points far away (over 0.03 m), following the baseline method Neural-JSDF <cit.>. For the sphere-based method, we use 55 spheres to represent the robot geometry. NN involves training the neural network until convergence, and BP denotes Bernstein polynomials with 24 basis functions.
Table <ref> demonstrates that the kinematics chain of the robot plays an important role in modeling RDF, contributing to a more than tenfold error reduction compared to the method that does not exploit kinematics. The sphere-based representation also exhibits superior performance compared to Neural-JSDF, since it requires transformation matrices to compute the center of each sphere. In terms of estimation quality, the combination of BP with the kinematics chain consistently outperforms other methods on average. The average MAE is about 1 mm, which is accurate enough for whole-body manipulation tasks. Despite the time taken for forward/inverse kinematics, the total computation time is still at the microsecond level, allowing real-time control with a high frequency.
§ ROBOT EXPERIMENTS
In this section, we illustrate the effectiveness of our RDF representation through two dual-arm robot tasks: 1) Collision Avoidance: While a robot arm tries to reach a target, it must avoid colliding with another. Notably, this experiment does not incorporate the use of any visual sensors. 2) Dual-arm Lifting: Two robot arms collaborate to lift a large box that cannot be grasped conventionally. The objective is to plan a pair of joint configurations for both arms such that they can establish contact with the box using their entire arm structures and lift the box.
§.§ Collision Avoidance
In this section, we integrate the learned distance fields for collision avoidance, which is crucial in motion planning tasks. Specifically, we exploit an augmented quadratic programming (QP) algorithm <cit.> to ensure self-collision avoidance between two robot arms during task execution, see Appendix <ref> for details.
We conduct dual-arm reaching and self-collision avoidance experiments in both simulation and real-world scenarios. In simulation, the goal is for both arms to reach their respective target positions (g_1 and g_2) while the right arm should actively avoid collision with the left arm. The real-world experiment is conducted with a reactive control, where the left arm is manually moved by a human operator in gravity-compensated mode, serving as a dynamic obstacle for the right arm. For both experiments, we randomly sampled 256 points on the surface of the left arm as the input of RDF for the right arm, and then used the produced minimal distance for self-collision avoidance.
Table: Experimental results for collision avoidance in simulation.
Methods Success rate Computation time (ms)
Sphere-based 71% 9.90 ± 1.45
Ours 78% 9.69 ± 0.87
We conducted simulation experiments 100 times with different initial states for both robots, comparing our method with the sphere-based representation. For Neural-JSDF, we find the distance errors are always large and it cannot finish the task under our experimental settings. Results are reported in Table <ref>. The accuracy of our RDF representation enables the QP solver to exploit more free space around the robot, leading to better collision avoidance.
In addition, although the sphere-based method took less time to query the distance, it might result in a more complex optimization problem for the QP solver due to the non-smoothness of its derivatives, thus taking a slightly longer time to perform the collision avoidance task compared to our method. Figures <ref> and <ref>
depict the collision avoidance process in simulation and the real world, showing that our method enables the robot arm to avoid collisions and successfully reach the desired target position.
§.§ Dual-arm Lifting
In this experimental study, our focus is on the manipulation of a large box using a dual manipulator, utilizing the entire body of the robot. Our underlying assumption is that the contact points on the object are already predetermined, and the robot has the freedom to establish contact with the object using any point on its surface. To control the variables, we restrict the surface area used for contact to either the last four links (in experiments 1-3) or one specific link (in experiments 4-5). This problem can be formulated as an optimization task encompassing multiple cost functions while utilizing the RDF model developed for the robot. The details of this optimization problem, including the specific cost functions and their formulations can be found at Appendix <ref>.
The experimental results of the real robot implementation are visualized in Fig. <ref>. In experiments 1-3, the robot exhibits the capability to select any point on its last four links as a contact point with the object. These experiments provide empirical evidence of the generalization capability of the method across various poses. In experiments 4 and 5, the robot is constrained to utilize specific links for contact. Specifically, the contact is limited to the sixth link in experiment 4, while in experiment 5, it is restricted to the seventh link. This restriction narrows down the options for contact points, requiring the robot to adapt its approach accordingly. The optimization problem is still able to find appropriate solutions. It can be attributed to the differentiable representation provided by the distance field, which enables the optimization algorithm to navigate the constrained search space more effectively, leading to successful solutions even in scenarios with limited contact possibilities. Appendix <ref> also illustrates different solutions in simulation, highlighting the
diversity of feasible configurations and trajectories that the optimization algorithm can explore.
§ CONCLUSION AND LIMITATIONS
In this paper, we proposed an approach to represent the robot geometry as distance fields, which leverage the kinematic structure of the robot to generalize configuration-agnostic signed distance functions to arbitrary robot configurations. This approach enables more efficient learning and more accurate inference of robot distance fields. To achieve simple and efficient representations of the robot geometry, we introduce a novel approach that learns SDFs of robot links using basis functions, which ensures the compactness and smoothness of the learned SDF functions and which is beneficial for Newton (second order) and gradient-based (first order) optimization techniques. We have demonstrated the effectiveness of our RDF representation in a dual-arm self-collision avoidance and a whole-body lifting task, showing our RDF representation is naturally suitable for optimal control.
There are certain limitations to our proposed RDF representation that should be acknowledged. First, the generalization capability of basis functions to highly complex shapes has not been thoroughly investigated. As the memory consumption of the RDF model scales with 𝒪(N^3) in relation to the number of basis functions, extending it to very complex shapes with a high number of basis functions becomes challenging and would require the use of hash functions to treat more complex objects. Secondly, while the use of SDF efficiently determines the distance between a point and an object, it would be advantageous to also provide the location of the closest point. This capability can enhance the applicability of our approach in tasks that require precise knowledge of contact points and their corresponding Jacobians. In this regard, our method proves to be well-suited as we have obtained the SDF values separately for each robot link, allowing us to identify the specific link in contact.
Additionally, in this paper, we have primarily focused on the SDF representation of the robot itself, while neglecting the SDF of the object or the environment. Exploring the integration of the object SDF with the robot SDF is also a promising direction. Similarly, in the presented dual-arm experiments, the two arms are treated separately and it can be further explored to handle both arms as a unified system. Finally, we envision extending the application of our RDF representation to other complex manipulation tasks, such as pushing and pivoting, by incorporating the dynamics of the manipulated objects. For example, by considering the amount of penetration of external objects into the robot, we can estimate the interaction forces using a linear spring-damper model. These interaction forces can then be optimized using Newton-based or gradient-based optimization techniques to achieve desired manipulation objectives. We believe that our RDF representation has the potential to enhance robot planning and control for manipulation in contact-rich scenarios. Exploring this research direction will be the focus of our future work.
This work was supported by the China Scholarship Council (No. 202204910113), the State Secretariat for Education, Research and Innovation in Switzerland for participation in the European Commission’s Horizon Europe Program through the INTELLIMAN project (https://intelliman-project.eu/, HORIZON-CL4-Digital-Emerging
Grant 101070136) and the SESTOSENSO project (http://sestosenso.eu/, HORIZON-
CL4-Digital-Emerging Grant 101070310).
§ KINEMATICS EQUATION
The transformation of k-th robotic frame w.r.t. the base frame ^b 𝒯_k( q) ∈𝕊𝔼(3) can be computed by the kinematics equation
^b 𝒯_k( q) = ^b 𝒯_0(q_0) ^0 𝒯_1(q_1) ⋯^k-1𝒯_k(q_k),
where ^k-1𝒯_k(q_k) is the transformation matrix from the frame of link k to link k-1. It is conventionally described by Denavit–Hartenberg parameters in robotics such that
^k-1 T_k=[[ cos q_k, -sin q_k cosα_k, sin q_k sinα_k, r_k cos q_k; sin q_k, cos q_k cosα_k, -cos q_k sinα_k, r_k sin q_k; 0, sinα_k, cosα_k, d_k; 0, 0, 0, 1 ]]=[[ R, T; 0, 1 ]],
where d_k, q_k, α_k, r_k are the D-H parameters, R is the 3×3 rotation block and T is the 3×1 translation block.
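The sketch below composes these transforms for a short kinematic chain; the D-H table used here is hypothetical (it is not the Franka table) and only illustrates the composition in the kinematics equation above.

import numpy as np

def dh_transform(q, d, r, alpha):
    """Homogeneous transform ^{k-1}T_k for one joint, following the matrix above."""
    cq, sq, ca, sa = np.cos(q), np.sin(q), np.cos(alpha), np.sin(alpha)
    return np.array([[cq, -sq * ca,  sq * sa, r * cq],
                     [sq,  cq * ca, -cq * sa, r * sq],
                     [0.0,      sa,       ca,      d],
                     [0.0,     0.0,      0.0,    1.0]])

def forward_kinematics(qs, dh_params):
    """Compose ^bT_k for every link from joint angles and (d, r, alpha) rows."""
    T = np.eye(4)
    transforms = []
    for q, (d, r, alpha) in zip(qs, dh_params):
        T = T @ dh_transform(q, d, r, alpha)
        transforms.append(T.copy())
    return transforms

# Hypothetical 3-joint chain used only to demonstrate the composition.
dh_params = [(0.33, 0.0, -np.pi / 2), (0.0, 0.3, 0.0), (0.0, 0.2, np.pi / 2)]
Ts = forward_kinematics([0.1, -0.4, 0.7], dh_params)
print(Ts[-1][:3, 3])     # position of the last link frame in the base frame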
§ LEARNING SDFS WITH BASIS FUNCTIONS
A univariate trajectory x^1D∈ℝ^T of T data points can be represented as a weighted sum of N basis functions with
x^1D = ∑_n=1^Nϕ_n w^1D_n = ϕ w^1D,
where ϕ can be any set of basis functions, including some common forms that are presented below (see also <cit.> for more details). The signed distance function can be viewed as a signal with multivariate inputs (instead of a single time input as for trajectories), and its representation follows a similar formulation as described above, but with an extended version of the basis function designed for 3D input variables. This extension allows for the representation of complex spatial relationships and enables accurate modeling of the robot geometry in three-dimensional space.
§.§ Representing SDFs with Bernstein Polynomials
The signed distance value f^h of the h-th point p^k_h = (x_1^h, x_2^h, x_3^h) for each robot link ℬ_k can be seen as a signal with multivariate inputs, represented as a weighted sum of N basis functions as
f^h_ℬ_k = ∑_n_1=1^N∑_n_2=1^N∑_n_3=1^NΨ^h_n_1,n_2,n_3 w_n_1,n_2,n_3,k =Ψ^h w_k,
Ψ^h_n_1,n_2,n_3 = ϕ_n_1(x_1^h) ϕ_n_2(x_2^h) ϕ_n_3(x_3^h),
where ϕ_n(·) is the n-th basis function. We define Ψ^h = ϕ(x_1^h) ⊗ϕ(x_2^h) ⊗ϕ(x_3^h) using the Kronecker product ⊗. For Bernstein polynomials, the basis functions are given by
ϕ_n(t) = \binom{N-1}{n} t^n (1-t)^N-1-n, ∀ n ∈{ 0,⋯,N-1},
where t ∈ [0,1] is a scalar parameter that indicates the normalized location of the point. For instance, we can define t as t = x_e/(x^max_e-x^min_e), where x^max_e and x^min_e represent the maximum and minimum range, respectively, in the e-th dimension. Here, e ∈{1,2,3}. Consequently, the derivative of the n-th basis function can be expressed as
∇_t ϕ_n(t)= \binom{N-1}{n} (1-t)^N-n-2 t^n-1( n(1-t)-(N-n-1)t),
and the derivatives of Ψ are
∇_x_1Ψ = ∇_x_1ϕ(x_1) ⊗ϕ(x_2) ⊗ϕ(x_3),
∇_x_2Ψ = ϕ(x_1) ⊗∇_x_2ϕ(x_2) ⊗ϕ(x_3),
∇_x_3Ψ = ϕ(x_1) ⊗ϕ(x_2) ⊗∇_x_3ϕ(x_3).
§.§ Recursive Updates
Imagine that we receive a new batch of data at iteration m as {t̃, f̃}. We define new parameters as Ψ_m^⊤ = [Ψ_m-1^⊤,Ψ̃^⊤]^⊤ and f_m = [ f_m-1^⊤,f̃]^⊤. By introducing the notation B = Ψ^T Ψ, we can update B as B_m → B_m-1 + Ψ̃^⊤Ψ̃. The inverse of B can also be computed iteratively using the Sherman-Morrison-Woodbury formula as
B^-1_m → B^-1_m-1 - B^-1_m-1Ψ̃^⊤(I +Ψ̃ B^-1_m-1Ψ̃^⊤)^-1Ψ̃ B^-1_m-1.
The superposition weight can also be updated as
w_m → w_m-1 + K_m (f̃- Ψ̃ w_m-1),
where K_m= B^-1_m-1Ψ̃^⊤(I +Ψ̃ B^-1_m-1Ψ̃^⊤)^-1 is the Kalman gain. The computation steps are outlined in Algorithm. <ref>. It is important to note that this algorithm utilizes B^-1 instead of B. For recursive ridge regression, we initialize B^-1_0 = 1/λ I where λ is a constant regularized parameter.
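A compact sketch of this recursive update is given below; it also checks numerically that, with the initialization B_0^{-1} = I/λ and w_0 = 0, the sequential updates reproduce the batch ridge-regression solution. The data are random placeholders.

import numpy as np

def recursive_ridge_update(B_inv, w, Psi_new, f_new):
    """One step of the recursive (Kalman-gain) update described above."""
    S = np.eye(Psi_new.shape[0]) + Psi_new @ B_inv @ Psi_new.T
    K = B_inv @ Psi_new.T @ np.linalg.inv(S)          # Kalman gain
    B_inv = B_inv - K @ Psi_new @ B_inv               # Sherman-Morrison-Woodbury
    w = w + K @ (f_new - Psi_new @ w)                 # weight update
    return B_inv, w

# Sanity check against the batch ridge solution on random data.
rng = np.random.default_rng(1)
D, lam = 20, 1e-3
Psi = rng.normal(size=(200, D))
f = rng.normal(size=200)
B_inv = np.eye(D) / lam                               # initialisation B_0^{-1} = I / lambda
w = np.zeros(D)
for start in range(0, 200, 25):                       # process mini-batches sequentially
    B_inv, w = recursive_ridge_update(B_inv, w, Psi[start:start + 25], f[start:start + 25])
w_batch = np.linalg.solve(Psi.T @ Psi + lam * np.eye(D), Psi.T @ f)
print(np.max(np.abs(w - w_batch)))                    # should be negligibly small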
§ IMPLEMENTATION DETAILS
We build the distance field for the Franka Emika robot with 7 articulations and 9 links (the fingertips of the gripper are ignored). The superposition weights of the Bernstein polynomials are trained separately for each robot link. Specifically, we learn the signed distance functions (SDFs) within a cubic volume surrounding each link, and the positions of points inside the volume are normalized to the range [0,1] to build the Bernstein polynomials. For points located outside the volume, we adopt a projection approach to ensure continuity of the distance function on the boundary. This involves projecting the points onto the boundary, which can be performed efficiently due to the cubic volume. The distance approximation for points outside the volume is obtained by summing the distances from the projected point to the boundary. During inference, both forward kinematics and basis functions are implemented with batch operations. All experiments are run on an Nvidia GeForce RTX 3060 GPU.
§.§ Quadratic Programming for Collision Avoidance
We augment the QP process by incorporating an additional inequality equation to consider the distance information
min_q̇, δ f_o(x) = 1/2q̇^⊤𝒬_1q̇ + 1/2δ^⊤𝒬_2δ,
s.t. J(q) q̇ = ν - δ,
q̇^- ≤q̇≤q̇^+,
q^- ≤q≤q^+,
∇ f_ qq̇ dt ≤ln ( f + s),
where q̇ represents the joint velocity, δ is the slack vector, providing additional flexibility for constraint satisfaction and local-minima avoidance, and ν represents the Cartesian velocity. q^- and q^+ represent the lower and upper limits of the joint positions; q̇^- and q̇^+ indicate the lower and upper limits of the joint velocities; 𝒬_1 and 𝒬_2 denote the weights adjusting the cost of the joint velocity and the slack vector in the optimizer. The last inequality is designed for collision avoidance based on ∇ f_ q, the gradient of the minimal distance with respect to q, and the minimal distance f. s ∈ [0, 1] represents the safety distance for collision avoidance. In general, this inequality constrains the minimal distance between the robot and the obstacle to remain larger than the safety distance.
We calculate the gradient ∇ f_ q by differentiating the minimal distance f with respect to the joint configuration q. Additionally, we set the safety factor s to 0.95 to ensure a 5 cm safety clearance.
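For illustration, a sketch of one control step of this QP using a generic convex solver is shown below (cvxpy is used only as an example; the joint-position limits are omitted for brevity, and J, ν, ∇f_q and f are assumed to come from the kinematics and the learned distance field).

import numpy as np
import cvxpy as cp

def qp_step(J, nu, grad_f_q, f_min, dt, qdot_lim, Q1, Q2, s=0.95):
    n = J.shape[1]
    q_dot = cp.Variable(n)
    delta = cp.Variable(J.shape[0])
    cost = 0.5 * cp.quad_form(q_dot, Q1) + 0.5 * cp.quad_form(delta, Q2)
    constraints = [
        J @ q_dot == nu - delta,                      # task-space tracking with slack
        q_dot >= -qdot_lim, q_dot <= qdot_lim,        # joint velocity limits
        grad_f_q @ q_dot * dt <= np.log(f_min + s),   # distance-based collision avoidance
    ]
    cp.Problem(cp.Minimize(cost), constraints).solve()
    return q_dot.value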
§.§ Dual-arm Lifting
To perform this task, we employ an optimization problem incorporating a quadratic cost function c(q)= r^⊤r, where r = [r_r,r_p,r_j^max,r_j^min,r_j^d]^⊤ is the residual vector consisting of several elements: a reaching residual r_r to facilitate the contact between the robot arm and the object, a penetration residual r_p for collision avoidance, a joint distance cost r_j^d to encourage the system to find a solution near the robot's initial configuration, and joint limit residuals r_j^max and r_j^min to bound the joint angles:
r_r = ∑_ p_c f( p_c, q),
r_p = ∑_ p_iReLU(- f( p_i, q)),
r_j^d = q - q_init,
r_j^max = ReLU( q - q_max),
r_j^min = ReLU(q_min- q ),
where f(p,q) represents the spatial distance between spatial points p and the robot surface at configuration q. In this context, p_c represents the points selected on the object as desired contact points with the robot (as indicated by the red point in Figure <ref>), while p_i denotes the points uniformly selected within the box for collision avoidance purposes (as represented by the green points in Figure <ref>). q_min,q_max are the physical joint limits and q_init is the robot initial joint configuration. The optimization is solved using the Gauss-Newton algorithm as
q = q - α J^† r = q - α( J^⊤ J)^-1 J^⊤ r,
where J = ∂r/∂ q is the Jacobian matrix and α is a line search parameter. The algorithm is terminated when satisfying different criteria such as
∑_ p_c| f( p_c, q)| < 0.01,
∑_ p_iReLU(- f( p_i, q)) < 0.01,
∑_ p_c (1-⟨norm(∂ f( p_c, q)/∂ p_c), n_c⟩) < 0.1,
q_min< q < q_max.
The first two constraints measure the distance and the penetration between the robot arm and the box. These constraints ensure that the robot reaches the target surface while preventing any potential collisions. The third constraint is designed to limit the angle formed by the normals of the robot and the box. By imposing this constraint, the orientation of the robot arm is constrained to align with the desired configuration and prevents excessive tilting or misalignment during the lifting operation. Within the third constraint, the predefined normal direction on the contact points of the box is denoted as n_c.
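A simplified Gauss-Newton loop for this optimization might look as follows (a sketch under the assumption that residual_fn and jacobian_fn, which stack the residuals above and their derivatives, are provided by the learned distance field; the small damping term and the single convergence check are simplifications added for numerical robustness).

import numpy as np

def gauss_newton(q0, residual_fn, jacobian_fn, q_min, q_max, alpha=0.5, iters=100, tol=1e-2):
    q = q0.copy()
    for _ in range(iters):
        r = residual_fn(q)                    # stacked residual vector r(q)
        J = jacobian_fn(q)                    # dr/dq
        step = np.linalg.solve(J.T @ J + 1e-6 * np.eye(q.size), J.T @ r)
        q = np.clip(q - alpha * step, q_min, q_max)   # keep joints within limits
        if np.linalg.norm(r) < tol:           # simplified stand-in for the termination criteria
            break
    return q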
To overcome the challenges associated with local optima, we adopt a batch-based approach during the problem-solving process. Multiple solutions are obtained by solving the problem from various random initial configurations. This strategy helps us explore a wider solution space and mitigate the risk of being trapped in local optima.
After acquiring multiple solutions, we estimate the robot's trajectory by linearly interpolating between the initial and desired configurations. This interpolation generates a continuous and viable trajectory that ensures smooth movement of the robot. To ensure the safety of the operation and prevent any collisions, we filter out trajectories that could potentially lead to contact with the object, thereby ensuring a collision-free execution.
To enhance the lifting capability of the robot, we adopted a joint impedance controller in conjunction with a smaller box size during the planning phase. This combination allowed us to generate sufficient force at the contact point, enabling the successful lifting of the object. Specifically, the lifting action is accomplished by elevating the fourth joint of the robot, which is positioned immediately before the potential contact links.
|
http://arxiv.org/abs/2307.02217v1
|
20230705114705
|
Paley inequality for the Weyl transform and its applications
|
[
"Ritika Singhal",
"N. Shravan Kumar"
] |
math.CA
|
[
"math.CA",
"math.FA",
"43A32, 43A15, 43A25"
] |
In this paper, we prove several versions of the classical Paley inequality for the Weyl transform. As an application, we discuss L^p-L^q boundedness of the Weyl multipliers and prove a version of the Hörmander's multiplier theorem. We also prove Hardy-Littlewood inequality. Finally, we study vector-valued versions of these inequalities. In particular, we consider the inequalities of Paley, Hausdorff-Young, and Hardy-Littlewood and their relations.
[2010]Primary 43A32, 43A15, 43A25; Secondary 43A40
Paley inequality for the Weyl transform and its applications
[
=======================================================================
§ INTRODUCTION
Let G be a locally compact abelian group. A classical result of the Fourier analysis is the Hausdorff-Young inequality which states the following: if 1 ≤ p ≤ 2, then the Fourier transform maps L^p(G) into L^p'(G), where G is the dual group of G and p' is the conjugate index of p. Later, Paley <cit.> extended this result to Lorentz spaces and showed that for G=𝕋, if f ∈ L^p(𝕋), then f̂∈ l^p',p(ℤ). The case G=ℝ is due to Hörmander <cit.>.
The Weyl transform, defined by Hermann Weyl in <cit.>, is a pseudo-differential operator associated to a measurable function
on ℝ^n ×ℝ^n. A large number of both mathematicians and physicists have studied the properties of the Weyl transform and its applications to quantum mechanics and partial differential equations. The Fourier transform and the Fourier inversion formula serve as the foundation for the construction of the Weyl transform. Therefore, all the classical properties of the Fourier transform, namely the Riemann-Lebesgue lemma, the Plancherel theorem, and the Hausdorff-Young inequality, hold for the Weyl transform as well. A natural question that arises here is whether Paley's extension of the Hausdorff-Young inequality is possible for the Weyl transform. This paper answers this question in the affirmative. In fact, the main aim of this paper is to study Paley's inequality and its variants for the Weyl transform associated to locally compact abelian groups. More precisely, we derive the Paley inequality for the Weyl transform and the inverse Weyl transform in Sections <ref> and <ref>, respectively, and prove the following version of the Paley inequality in Theorem <ref>.
Consider a positive function ϕ∈ l^1, ∞(ℕ). Then for 1 <p ≤ 2 and f∈ L^p(G×G), we have
( ∑ S_n(W(f))^pϕ(n)^2-p)^1/p≲ϕ_l^1,∞(ℕ)^2-p/pf_p.
In particular, for f ∈ L^p( G ×G), we have W(f) ∈ℬ_p',p(L^2(G)).
Interpolating the Hausdorff-Young inequality with Paley-inequality, one can obtain the Hausdorff-Young-Paley
inequality for the Weyl transform which is discussed in Theorem <ref>.
The study of Fourier multipliers on L^p spaces is one of the classical topics of harmonic analysis. One of the celebrated results of Hörmander gives sufficient conditions for a symbol to be a L^p-L^q Fourier multiplier. Very recently, Ruzhansky with his coauthors studied the Hörmander's Theorem for Lie groups<cit.>, homogeneous manifolds<cit.>, locally compact groups<cit.> and compact hypergroups<cit.>. In <cit.>, Mauceri introduced the concept of Weyl multipliers and proved a version of Hörmander's theorem for the Weyl transform on ℝ^2n. The assumptions include some regularity conditions like the commutator of the operator with annihilation and creation operator should be a bounded operator. In this paper, as an application of the Paley inequality, we also prove a version of the Hörmander's theorem for the Weyl transform on locally compact abelian groups without using any regularity assumptions. Thus, our conditions are in terms of the singular value sequence associated to the operator and we have proved the following result:
Let G be a locally compact abelian group and let 1<p≤ 2≤ q<∞. For M ∈ℬ(L^2(G)),
consider the operator C_M defined on L^1 ∩ L^2(G×G ) as W(C_Mf)=MW(f).
If {S_n(M)}∈ l^r,∞(ℕ) where 1/r=1/p-1/q then M ∈ℳ_p,q and
C_Mf_L^q(G×G)≲sup_s>0 [ s ∑_{n∈ℕ: S_n(M)>s} 1 ]^1/p-1/q f_p.
Yet another classical inequality of Fourier analysis is the Hardy-Littlewood inequality. Established by Hardy and Littlewood for the torus 𝕋 in (<cit.>), it was later extended by
Hewitt and Ross <cit.> for compact abelian groups. In 2016, this inequality was further extended to compact Lie groups in <cit.> and to locally compact separable unimodular groups by Kosaki <cit.>. In <cit.>, the same was proved as an application of Paley inequality. In Section <ref>, as another application of the Paley inequality, we prove an analogue of the Hardy-Littlewood inequality for the Weyl transform on locally compact abelian groups.
Throughout the past few decades, the study of vector-valued functions has become increasingly popular. A majority of the classical problems in the theory of functions may be investigated in a vector-valued environment. Such a study provides new ideas for understanding various mathematical problems. As a result, several researchers have emphasized its importance and extended the classical results to the vector-valued
setting: Hausdorff-Young inequality <cit.>, Paley inequality <cit.>, Hardy's inequality <cit.>, singular integrals <cit.>, Fourier multipliers <cit.> and the Weyl transform <cit.>.
The vector-valued notion of Paley inequalities was first studied by Garcia et. al in <cit.>. Recently, in <cit.>, the authors study vector-valued versions of the Hardy-Littlewood inequality and the Paley inequality. In Sections <ref> and <ref>, we introduce the concept of Weyl-Paley type/cotype and Weyl-HL type/cotype for a Banach space X and prove an analogue of Hausdorff-Young inequality. Finally, we examine the relationships between the Weyl type introduced in <cit.>, the Paley type, and the HL type (as well as, their cotype counterparts) with each other.
We shall begin with some preliminaries that are needed in the sequel.
§ PRELIMINARIES
Let G be a locally compact abelian group with G as its dual group. As usual, for 1≤ p ≤∞, we shall denote by L^p(G), the usual classical L^p-spaces on G w.r.t. the Haar measure on the group G. We shall denote by ℬ(L^2(G)), the Banach algebra of all bounded operators on L^2(G) and C_c(G) will denote all compactly supported functions on the group G.
The Weyl transform, denoted W, is defined as a ℬ(L^2(G))-valued integral on C_c(G ×G) given by
W(f) φ(y)=∫_G ×Gf(x, χ) ρ((x, χ))(φ)(y) dx dχ, f ∈ C_c(G ×G)
where ρ(x,χ) is the Schrödinger representation of G ×G on L^2(G), defined as
ρ_G((x, χ))(φ)(y)= χ(y) φ(x y), φ∈ L^2(G).
Let ℋ be a Hilbert space and 1≤ p<∞. If T:ℋ→ℋ is a compact operator, then it admits an orthonormal representation
T=∑_n∈ℕ S_n(T)⟨·,e_n⟩σ_n,
where {e_n} and {σ_n} are orthonormal sequences in ℋ and S_n(T) denotes the n^th singular value of T. The p^th-Schatten-von Neumann class, denoted ℬ_p(ℋ), consists of all compact operators, T:ℋ→ℋ with {S_n(T)}∈ℓ^p.
For T∈ℬ_p(ℋ), define
T_ℬ_p(ℋ):=(∑_n∈ℕ|S_n(T)|^p)^1/p= {S_n(T)}_l^p.
The space ℬ_p(ℋ) with the above norm becomes a Banach space. We shall denote by ℬ_∞(ℋ), the space of all compact operators on ℋ. By Riesz-Thorin theorem on complex interpolation, for 0<θ<1, we have, [ℬ_1(ℋ),ℬ(ℋ)]_θ≅ℬ_1/θ(ℋ). For 1 ≤ p ≤∞, p' denoted the conjugate index of p such that 1 / p+1 / p^'=1.
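As a quick illustration of these norms (a toy example, not part of the original text): for the rank-two operator T = 4⟨·,e_1⟩ e_1 + 3⟨·,e_2⟩ e_2 on a Hilbert space with orthonormal vectors e_1, e_2, the singular values are S_1(T)=4, S_2(T)=3 and S_n(T)=0 for n ≥ 3, so that T_ℬ_1(ℋ) = 4+3 = 7, T_ℬ_2(ℋ) = (4^2+3^2)^1/2 = 5, and the operator norm equals sup_n S_n(T) = 4.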
Let 1 ≤ p ≤ 2, then the Weyl transform is a continuous mapping of functions f ∈ L^p(G ×G ) to operators W(f) ∈ℬ_p'(L^2(G)), i.e., there exists a constant C>0 such that
W(f) _ℬ_p'(L^2(G))⩽ Cf_L^p(G ×G), f ∈ L^p(G ×G).
In fact for p=2, the Weyl transform is a unitary map between L^2(G×G) and ℬ_2(L^2(G)).
Also, by duality, we can conclude that for 1 ≤ p ≤ 2, if
W(f) belongs to ℬ_p(L^2(G )) for some measurable function f, then f belongs to L^p'(G ×G), and there exists C>0 such that
f_L^p'(G ×G)≤ C W(f) _ℬ_p(L^2(G)).
For more on Weyl transform and Schatten-class operators, see <cit.>.
Let 1 ≤ p ≤∞. A bounded operator M ∈ℬ(L^2(G)) is said to be a (left) Weyl multiplier of L^p(
G ×G) if the operator C_M defined on f ∈ L^1∩ L^2(G ×G) by W(C_Mf)=MW(f) extends to a bounded operator on L^p(G ×G).
Let 1 ≤ p,q ≤∞. If M ∈ℬ(L^2(G)) is a Weyl multiplier of L^p(
G ×G), we say that M ∈ℳ_p,q if the map C_M defined above extends to a bounded map from L^p(G ×G) to L^q(G ×G).
In the case, when p=q,
ℳ_p,p=ℳ_p. By the Plancheral formula, ℳ_2 is the algebra ℬ(L^2(G)) and ℳ_1 coincides with the algebra of Weyl transform of the space of finite Borel measures on G ×G. See <cit.>.
Let X be a Banach space and let (Ω, 𝒜) be a measure space with a σ-finite positive measure μ. Consider a weight function ω: Ω→ (0, ∞) which is integrable on sets of finite measure from 𝒜. For p ∈ [1,∞), we
define the Bochner spaces L^p(Ω,ω ; X) formed by all (equivalence classes of) strongly
μ-measurable functions f : Ω→ X having a finite norm
f_L^p(Ω, ω;X)=(∫_Ωf(x)^p_Xω(x) d μ(x))^1/p.
When p = ∞, L^∞(Ω, ω; X) = L^∞(Ω; X) denote the functions which are essentially bounded and
f_L^∞(Ω, ω, X)= ess.sup_x ∈Ωf(x)_X.
Also, L^p(Ω; X) denotes the special case when ω≡ 1 and also L^p(Ω, ω) = L ^p(Ω, ω; ℂ).
Let f: Ω→ X. The decreasing rearrangement of f is the function f^* defined on (0, μ(Ω)) by
f^*(t)= inf{s>0: d_f(s) ≤ t}
where d_f(s )= μ ({x ∈Ω: f(x)_X> s}), the distribution function of f.
(Lorentz spaces)
For 1 ≤ p,q ≤∞, the Lorentz space L^ p,q(Ω; X) is the class of all μ-strongly measurable
functions f : Ω→ X such that
f_L^p,q(Ω;X)= ( ∫_0 ^μ(Ω)(t^1/pf^*(t))^q dt/t)^1/q< ∞ if q< ∞,
sup_t>0 t^1/pf^*(t) < ∞ if q= ∞.
In the case when Ω is countable with discrete measure, we denote the above space by l^p,q(Ω;X). For X= ℂ, Lorentz spaces will be denoted by L^p,q(Ω).
For all 0 < p,q ≤∞, the
spaces L^p,q(Ω, X) are complete with respect to their quasinorm and they are therefore quasi-Banach spaces. For Schatten classes, by replacing l^p-norm of the singular values by the Lorentz spaces l^p,q quasi-norm, we get the non commutative Lorentz-spaces ℬ_p,q(ℋ) defined as the space of all compact operators T ∈ℬ(ℋ) such that
T_p,q={S_n(T)}_l^p,q= ( ∑_n (n^1/p-1/q S_n(T))^q )^1/q , 1 ≤ q < ∞,
sup_n n^1/p S_n(T), q= ∞
is finite. The operators that map L^p(Ω) to L^q,∞(Ω) are called weak type (p,q).
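For orientation (an illustrative example, not from the original text): on (0,∞) with Lebesgue measure, the function f(t) = t^-1/p satisfies f^*(t) = t^-1/p, so sup_t>0 t^1/p f^*(t) = 1 < ∞, while ∫_0^∞ |f(t)|^p dt = ∫_0^∞ t^-1 dt = ∞; hence f ∈ L^p,∞(0,∞) but f ∉ L^p(0,∞), showing that the Lorentz scale genuinely enlarges the L^p scale.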
Let X and Y be Banach spaces. Let X∨⊗Y and X∧⊗Y denote the injective and projective tensor product of X and Y respectively.
If ℋ is a Hilbert space, we shall denote by ℬ_1[ℋ;X] and ℬ_∞[ℋ;X] the spaces ℬ_1∧⊗X and ℬ_∞∨⊗X, respectively. For 1<p,r<∞, we define ℬ_p[ℋ;X] and ℬ_p,r[ℋ;X] as follows:
ℬ_p[ℋ;X]:= [ℬ_1[ℋ;X],ℬ_∞[ℋ;X]]_1/p
ℬ_p,r[ℋ;X]:= [ℬ_1[ℋ;X],ℬ_∞[ℋ;X]]_1/p,r.
A set Y ⊆ X^* is norming for X if sup_f ∈ Y ∖{0}|f(x)|/f_X^*=x_X.
Let (Ω, 𝒜, μ) be a σ-finite measure space and ω: Ω→ (0, ∞) be measurable. Let p ∈ (1, ∞) and let Y ⊆ X^* be a normed closed subspace of X^* which is norming for
X. Then L^p'(Ω,ω^1/p-1; Y ) is norming for L^p(Ω, ω; X) with respect to the duality pairing
⟨ f,g ⟩ =∫_ω⟨ f(x), g(x)⟩ dμ(x).
Moreover, the subspace of Y-valued simple functions in L^p'(Ω, ω^1/p-1; Y ) is also norming for L^p(Ω, ω; X).
The proof of the above lemma can be found in <cit.> for the unweighted case and the weighted case can then be generalised.
Let (Ω, 𝒜, μ) be a σ-finite non atomic measure space. Let p,q ∈ (1, ∞) and let Y ⊆ X^* be a normed closed subspace of X^* which is norming for X. Then
f_L^p,q(Ω ;X)≅sup{| ∫_Ω⟨ f(x),g(x) ⟩ d μ(x) |: g_L^p', q'(Ω;Y)≤ 1 }.
Let 1<p< ∞. Then, we have
ℬ_p[ℋ;X]^*≅ℬ_p'[ℋ;X^*].
We say that a Banach space X has Weyl type p if there exists a constant C>0 such that
W(f)_ℬ_p^'[L^2(G);X]≤ C f_L^p(G×G,X).
Similarly,
a Banach space X has Weyl cotype q if there exist a constant C>0 such that
f_L^q(G×G,X)≤ C W(f)_ℬ_q^'[L^2(G);X] .
For more on Lorentz spaces and interpolation theorems, one can refer to <cit.> or <cit.>. The following is an adaptation of the classical Marcinkiewicz interpolation theorem for Schatten class operators. Since we couldn't find the result anywhere, we are providing a brief proof of it. For more general versions see <cit.>.
Let (X,μ) be a measure space and let ℋ be a separable Hilbert space. For each 0<p_0<p_1<∞, let φ:ℬ_p_0(ℋ)+ℬ_p_1(ℋ)→ L^0(X,μ) be a sublinear operator. Suppose that there exist constants C_0 and C_1 such that
φ(T)_L^p_i,∞≤ C_i T_ℬ_p_i(ℋ) ∀ T∈ℬ_p_i(ℋ), i=0,1
Then ∀ p_0<p<p_1 and ∀ T∈ℬ_p(ℋ), there exists C>0 such that
φ(T)_L^p≤ CT_ℬ_p(ℋ).
Fix T∈ℬ_p(ℋ) and α>0. Since T is a compact operator, ∃ orthonormal sequences {σ_n} and {θ_n} for ℋ such that
T=∑_n∈ℕλ_n(σ_n⊗θ_n).
For each n∈ℕ, let
λ_0,n^α = λ_n if |λ_n| >δα, and 0 otherwise,
λ_1,n^α = λ_n if |λ_n| ≤δα, and 0 otherwise,
for some fixed δ and α >0. Let T_0^α=∑_n∈ℕλ_0,n^α (σ_n⊗θ_n) and T_1^α=∑_n∈ℕλ_1,n^α (σ_n⊗θ_n). Then T=T_0^α+T_1^α and |φ(T)|≤ |φ(T_0^α)| + |φ(T_1^α)|. Now we omit the remaining part of the proof as it goes exactly as in the classical case. See <cit.>.
§ HAUSDORFF-YOUNG PALEY INEQUALITY
In this section, we prove the Hausdorff-Young Paley inequality for the Weyl transform.
We shall begin this section by proving the Paley inequality. The corresponding analogue of this for the Fourier transform on ℝ can be found in <cit.>.
[Paley inequality]
Consider a positive function ϕ∈ l^1, ∞(ℕ). Then for 1 <p ≤ 2 and f∈ L^p(G×G), we have
( ∑ S_n(W(f))^pϕ(n)^2-p)^1/p≲ϕ_l^1,∞(ℕ)^2-p/pf_p.
Consider the measure ν on ℕ given by
ν({n}):=ϕ^2(n).
For 1 < p ≤∞, we let l^p(ℕ,ν) denote the space of all complex-valued sequences x=(x_n)_n∈ℕ such that x^p_p= ∑_n∈ℕ|x_n|^pϕ^2(n)<∞. We now claim that if f∈ L^p(G×G), then {S_n(W(f))/ϕ(n)}_n ∈ℕ∈ℓ^p (ℕ,ν). We will denote this correspondence by T and show that T is a bounded map. Our strategy here is to make use of the classical Marcinkiewicz interpolation theorem <cit.>. To do this, we first show that T is both weak type (2,2) and (1,1).
The distribution function, in this case, is given by
d_T(f)(y)=ν({ n∈ℕ:|T(f)(n)|>y }).
To show that T is of weak type (1,1), we prove that
T(f)_1,∞≲ϕ_l^1,∞(ℕ)f_1.
Observe that
S_n(W(f))≤sup_n∈ℕ S_n(W(f)) ≤W(f)≤f_1,
and therefore, for y<S_n(W(f))/ϕ(n)≤f_1/ϕ(n), we have
ν({n∈ℕ:S_n(W(f))/ϕ(n)> y }) ≤ν({n∈ℕ:f_1/ϕ(n)>y}).
Hence ∑_{n∈ℕ: y<|T(f)(n)|} ϕ^2(n) ≤∑_{n∈ℕ: y<f_1/ϕ(n)} ϕ^2(n).
Now, let w=f_1/y. Then
∑_{n∈ℕ: ϕ(n)<w}ϕ^2(n) = ∑_{n∈ℕ: ϕ(n)<w} ∫_0^ϕ^2(n) dτ = ∫_0^w^2 dτ ∑_{n∈ℕ: √(τ)<ϕ(n) < w} 1
= ∫_0^w 2s ds ∑_{n∈ℕ: s<ϕ(n)<w} 1 ≤∫_0^w 2( s ∑_{n∈ℕ: s<ϕ(n)} 1) ds
≤ ∫_0^w 2 ϕ_ l^1,∞(ℕ) ds = 2w ϕ_ l^1,∞(ℕ) = 2ϕ_ l^1,∞(ℕ)/yf_1.
Thus, for y>0, we have
yd_Tf(y) = y ∑_{n∈ℕ: y<|T(f)(n)|} ϕ^2(n) ≲ϕ_ l^1,∞(ℕ)f_1
Also, by using the Plancherel theorem for the Weyl transform, it can be seen that T maps L^2(G ×G) continuously to l^2(ℕ,ν), since
∑_n∈ℕ |T(f)(n)|^2ϕ^2(n) = ∑_n∈ℕ |S_n(W(f))|^2 = W(f)^2_ℬ_2(L^2(G)) = f^2_2.
This shows that T is weak type (2,2).
Finally, using the Marcinkiewicz interpolation theorem, it follows that T(f)_p≲ϕ_ l^1,∞(ℕ)^( 2-p/p)f_p, or
(∑_n∈ℕ S_n(W(f))^pϕ(n)^2-p)^1/p≲ϕ_ l^1,∞(ℕ)^2-p/pf_p.
Hence the proof.
For 1 <p ≤ 2 and f∈ L^p(G×G), we have W(f) ∈ℬ_p',p(L^2(G)) and there exists C>0 such that
W(f)_ℬ_p',p(L^2(G))≤ C f_p.
By interpolating the Paley inequality and the Hausdorff-Young inequality for the Weyl transform, we obtain the following Hausdorff-Young Paley inequality.
Let G be a locally compact abelian group and let 1<p≤ b≤ p^'<∞. If ϕ∈ l^1,∞(ℕ), then for all f∈ L^p(G×G), we have
( ∑_n∈ℕ(S_n(W(f)) ϕ(n)^1/b -1/p^')^b)^1/b≲ϕ_ l^1,∞(ℕ)^1/b - 1/p^'f_p.
We now prove, as an application, the Weyl transform analogue of the Hörmander's theorem.
Let G be a locally compact abelian group and let 1<p≤ 2≤ q<∞. For M ∈ℬ(L^2(G)),
consider the operator C_M defined on L^1 ∩ L^2(G×G ) as W(C_Mf)=MW(f).
If {S_n(M)}∈ l^r,∞(ℕ) where 1/r=1/p-1/q then M ∈ℳ_p,q and
C_Mf_L^q(G×G)≲sup_s>0 [ s ∑_{n∈ℕ: S_n(M)>s} 1 ]^1/p-1/q f_p.
Let p≤ q^'. Then for f∈ C_c(G×G),
C_Mf_q ≤W(C_Mf)_ℬ_q'(L^2(G)) = ( ∑_n∈ℕ S_n(W(C_Mf))^q^')^1/q^'
= ( ∑_n∈ℕ S_n(MW(f))^q^')^1/q^'.
Since S_n+m+1(T) ≤ S_n+1(T)+S_m+1(T) for any compact operator T, we have
( ∑_n∈ℕ S_n(MW(f))^q^')^1/q^'
≲( ∑_n∈ℕ(S_n(M)S_n(W(f)))^q^')^1/q^'.
Let b=q' and ϕ(n)=S_n(M)^r. Hence using Theorem <ref>, we have
( ∑_n∈ℕ(S_n(M)S_n(W(f)))^q^')^1/q^'≲( sup_s>0 s ∑_{n ∈ℕ: S_n(M)^r > s} 1)^1/r f_p.
Also, notice that
( sup_s>0 s ∑_{n ∈ℕ: S_n(M)^r > s} 1)^1/r =
( sup_s>0 s^r ∑_{n ∈ℕ: S_n(M) > s} 1)^1/r = sup_s>0 s ( ∑_{n ∈ℕ: S_n(M) > s} 1)^1/r
Hence, we get
C_Mf_q ≲sup_s>0 s ( ∑_{n ∈ℕ: S_n(M) > s} 1)^1/p-1/q f_p
as required.
Now, by using the duality , we know that
C_M_L^p(G ×G) → L^q(G ×G)=C_M^*_L^q'(G ×G) → L^p'(G ×G).
Hence for the case when q'<(p')'=p, one can work with C_M^* whose associated operator will be M^* and C_M^*=C_M.
§ HAUSDORFF-YOUNG INEQUALITY FOR THE INVERSE WEYL TRANSFORM
In this section, we prove the Paley inequality for the inverse Weyl transform. Our approach here is the same as what we did for the case of the Weyl transform. Finally, we also prove a version of the Hörmander's theorem for the inverse Weyl transform which will later lead to Hardy-Littlewood inequality.
Consider a positive function ψ∈ L^1, ∞(G×G). Let
M_ψ:= ψ_L^1, ∞(G×G)=sup_s>0 s ∫_{(x,χ)∈ G×G: |ψ(x,χ)|> s} dx dχ.
Then for 1<p ≤ 2 and T ∈ℬ_p(L^2(G)), we have
( ∫_G×G |W^-1(T)(x,χ)|^p ψ(x,χ)^2-pdx dχ)^1/p≲ M_ψ ^2-p/pT_ℬ_p(L^2(G)).
On G×G, define a measure μ as
μ(x,χ)/dx dχ:= ψ^2(x,χ).
For 1<p≤∞, we shall denote by L^p(G×G,μ), the usual L^p space on G×G with respect to the measure μ, i.e.,
L^p(G×G, μ)={f:G ×G→ℂ: f_p,μ:=(∫_G ×G |f|^p d μ)^1/p< ∞}.
For T∈ℬ_p(L^2(G)), define a function Φ_T:G×G→ℂ as
Φ(T)(x,χ):=Φ_T(x,χ):=|W^-1(T)(x,χ)|/ψ(x,χ).
We now claim that Φ is a well defined sublinear bounded map from ℬ_p(L^2(G)) to L^p(G×G,μ) for 1 ≤ p≤ 2. We will be using Marcinkiewicz's interpolation theorem (Theorem <ref>) to prove this. In particular, we claim that Φ satisfies equation (<ref>) for p_0=1 and p_1=2 which follows exactly as in Theorem <ref> and the following inequalities hold:
d_Φ_T(α) = μ{(x,χ) ∈ G×G: |Φ_T(x,χ)| > α}≤( T_ℬ_2(L^2(G))/α)^2,
μ{(x,χ) ∈ G×G: |Φ_T(x,χ)| > α}≲M_ψT_ℬ_1(L^2(G))/α.
Now by letting 1/p=1 - θ +θ/2, 0<θ<1, and applying Theorem <ref>, we get
(∫_G×G |Φ_T(x,χ)|^p d μ(x,χ))^1/p≲ M_ψ^2-p/pT_ℬ_p(L^2(G))
which in turn gives
(∫_G×G|W^-1(T)(x,χ)|^p |ψ(x,χ)|^2-p dxdχ)^1/p≲ M_ψ^2-p/pT_ℬ_p(L^2(G)).
As a consequence of the Paley inequality and the Hausdorff-Young inequality, we obtain the following Hausdorff-Young Paley inequality for the inverse Weyl transform.
Consider a positive function ψ∈ L^1, ∞(G×G) and let M_ψ be as in equation (<ref>).
Let 1<p ≤ 2, and 1<p≤ b ≤ p' ≤∞, then for T∈ℬ_p(L^2(G)), we have
( ∫_G×G(|W^-1(T)(x,χ)| ψ(x,χ)^1/b-1/p')^b dx dχ)^1/b≲ M_ψ^1/b-1/p'T_ℬ_p(L^2(G)).
Here is the analogue of the Hörmander's theorem for the inverse Weyl transform. Let ℱ(L^2(G)) denote the space of all finite rank operators on L^2(G).
Let 1<p ≤ 2 ≤ q < ∞ and g ∈ L^r,∞(G ×G) where 1/r=1/p-1/q.
Consider the operator ϕ_g defined on ℱ(L^2(G)) as W^-1(ϕ_g(T))=gW^-1(T). Then ϕ_g can be extended to a bounded operator from ℬ_p(L^2(G)) to ℬ_q(L^2(G)) and we have
ϕ_g(T)_ℬ_q(L^2(G))≲sup_s>0 s( ∫_{(x, χ) ∈ G ×G: |g(x, χ)| > s} dx d χ)^1/p-1/qT_ℬ_p(L^2(G)).
Let us first assume that p ≤ q'. Then for T ∈ℬ_p(L^2(G)),
ϕ_g(T)_ℬ_q(L^2(G))≤W^-1(ϕ_g(T))_q'=gW^-1(T)_q'.
Now we will apply Hausdorff-Young Paley inequality of Theorem <ref> by taking b=q' and ψ(x, χ )=(|g(x, χ )|)^r. Since 1/q'-1/p'=1/p-1/q=1/r, we obtain
( ∫_G ×G|W^-1(T)(x, χ)g(x, χ )|^q'dx d χ)^1/q'≲sup_s>0 s ( ∫_{(x, χ) ∈ G ×G: |g(x,χ)| > s} dx d χ)^1/p-1/qT_ℬ_p(L^2(G)).
Now by combining the above inequalities, we get the required inequality (<ref>).
For the case when q'<p, one can work with ϕ^* whose corresponding associated function will be g and |g|=|g|.
§ HARDY-LITTLEWOOD INEQUALITY
In this section, we study Hardy-Littlewood inequality, both scalar as well as the vector-valued case.
We shall begin this section by proving the scalar version of the Hardy-Littlewood inequality. This inequality is an application of the Paley inequality (Theorem <ref>).
Let 1<p ≤ 2. Assume that a positive function (x,χ) ↦μ_(x,χ) on G×G has sufficiently rapid growth, that is,
∫_G×G dx d χ/|μ_(x,χ)|^β < ∞ , β > 0.
Then the following inequality holds true.
( ∫_G×G|μ_(x,χ)|^-β(2-p)|W^-1(T)(x,χ)|^p dx d χ)^1/p≲T_ℬ_p(L^2(G)).
We will be using Paley's inequality to prove this result. We claim that the function ψ(x, χ)=|μ_(x, χ)|^-β satisfies equation (<ref>) required in Theorem <ref> . For s>0, it can be seen that
s ∫_{(x,χ): |μ_(x, χ)|^-β > s} dx d χ≤∫_{(x,χ): 1/s > |μ_(x, χ)|^β}dx d χ/|μ_(x, χ)|^β≤∫_G ×Gdx d χ/|μ_(x, χ)|^β .
By assumption,
C:=∫_G ×Gdx d χ/|μ_(x, χ)|^β
is known to be finite.
Hence M_ψ≤ C< ∞ and our claim is proved. Now by applying Theorem <ref> to the function ψ(x)=|μ_(x, χ)|^-β, we get the required estimate.
Let G be a locally compact abelian group. Let 1<p ≤ 2 ≤ q < ∞ and let ω: G ×G→ (0, ∞) be a weight. We say that a Banach space X is of Weyl HL-ω type p on G if there exists a constant C> 0 such that
f_L^p(G ×G,ω^-(2-p) ;X)≤ C W(f)_ℬ_p[L^2(G);X].
Similarly, we say that a Banach space X is of Weyl HL-ω cotype q on G if there exists a constant C> 0 such that
W(f)_ℬ_q[L^2(G);X]≤ C f_L^q(G ×G ,ω^(q-2) ;X) .
X has Weyl HL-ω type p on G if and only if X^* has Weyl HL-ω cotype p' on G.
Let X has Weyl HL-ω type p, i.e,
∃ C>0 such that
f_L^p(G ×G,ω^-(2-p) ;X)≤ C W(f)_ℬ_p[L^2(G);X] .
To show that X^* has Weyl HL-ω cotype p', we first show that given elementary tensor f ⊗ x^* ∈ L^p'(G ×G,ω^(p'-2) ) ⊗ X^*, we have, W(f) ⊗ x^* ∈ℬ_p'[L^2(G);X^*] so that W(L^p'(G ×G,ω^(p'-2) ; X^* ) ⊂ℬ_p'[L^2(G);X^*]. Since
ℬ_p'[L^2(G);X^*] =( ℬ_p[L^2(G);X])^*,
let T ∈ℬ_p[L^2(G);X]. For 1 ≤ i ≤ n, let S_i ∈ B_p(L^2(G)) and x_i ∈ X such ∑_i=1^n S_i ⊗ x_i is one of the representations of T.
Then,
|⟨ W(f⊗ x^*), ∑_i=1^n S_i ⊗ x_i ⟩ |
= | ⟨ f⊗ x^*,∑_i=1^n W^-1(S_i) ⊗ x_i⟩ |
= | ∑_i=1^n⟨ f, W^-1(S_i)⟩⟨ x^*,x_i⟩ |
≤ f_L^p'(G ×G,ω^(p'-2) )x^*(∑_i=1^nW^-1(S_i)_L^p(G ×G,ω^-(2-p) ) x_i).
Since this is true for any arbitrary representation, taking infimum over all such representations we get
|⟨ W(f⊗ x^*), T ⟩| ≤ f_L^p'(G ×G,ω^(p'-2) )x^*( W^-1T_L^p(G ×G,ω^-(2-p);X))
≤ C f ⊗ x^*_L^p'(G ×G,ω^(p'-2) ) ⊗ X^*T_ℬ_p[L^2(G);X].
where we have used inequality (<ref>) in the last equation. Now, by using duality, we get the desired containment.
Now, let f ∈ L^p'(G ×G,ω^(p'-2);X^* ) . Let ϵ>0. By duality , there exists T ∈ℬ_p[L^2(G);X] such that T_ℬ_p[L^2(G);X] =1 and
W(f)_ℬ_p'[L^2(G);X^*])≤ (1+ϵ)|⟨ W (f), T ⟩|.
Thus,
W(f)_ℬ_p'[L^2(G);X^*]) ≤ (1+ϵ) | ⟨ W (f), T ⟩|
= (1+ϵ) | ⟨ f, W^-1(T) ⟩|
≤ (1+ϵ) f_L^p'(G×G,ω^(p'-2),X^* )W^-1T_L^p(G ×G,ω^(p-2);X)
≤ (1+ϵ) Cf_L^p'(G×G,ω^(p'-2),X^* )T_ℬ_p[L^2(G);X]
= (1+ϵ) Cf_L^p'(G×G,ω^(p'-2),X^* ) .
As ϵ>0 is arbitrary, letting ϵ→ 0, we get the desired inequality.
Let G be an abelian group and 1<p ≤ 2. Assume that a positive function (x, χ) →μ_(x, χ) on G ×G satisfies equation (<ref>). Then
T_ℬ_p'(L^2(G))≲( ∫_G ×G|μ_(x, χ)|^β(p'-2)|W^-1(T)(x, χ)|^p'dx d χ)^1/p' .
§ PALEY-TYPE INEQUALITIES
If f ∈ L^p(ℝ^2n), then it is a well know fact that W(f) ∈ℬ_p'(L^2(ℝ^n)). Paley in <cit.> showed that there is an extension of Hausdorff-Young Theorem to Lorentz spaces as follows, i.e if f ∈ L^p(𝕋), then f̂∈ l^p^', p(ℤ). The Corollary <ref> gives such an extension for the Weyl transform. The following is another version of the same result.
If f ∈ L^p,p'(G ×G), then W(f) ∈ℬ_p'(L^2(G)) and there exists C>0 such that
W(f)_ℬ_p'(L^2(G))≤ C f_p,p' .
In view of the Marcinkiewicz interpolation theorem (<cit.>), define an operator T from L^1+L^2,1(G ×G) to ℳ_0(ℕ) by
T(f)=S_n(W(f)).
Since the Weyl transform maps L^1(ℝ^2n) to ℬ_∞(L^2(ℝ^n)) and L^2(ℝ^2n) to ℬ_2(L^2(ℝ^n)), the operator T satisfies the assumptions of the Marcinkiewicz interpolation theorem. Hence T maps L^p,p'(G ×G) to l^p'(ℕ) continuously and
{S_n(W(f))}_l^p'≤ C f_p,p' .
Since W(f)_ℬ_p'(L^2(ℝ^n)) ={S_n((W(f)))}_l^p', we get the desired result.
Let G be a locally compact abelian group. Let 1<p ≤ 2 ≤ q < ∞. We say that a Banach space X is of Weyl-Paley type p on G if there exist a constant C>0 such that
W(f)_B_p'[L^2(G);X]≤ C f_L^p,p'(G ×G ;X) .
Similarly, we say that a Banach space X is of Weyl-Paley cotype q on G if there exist a constant C>0 such that
f_L^q,q'(G ×G ;X)≤ C W(f)_B_q'[L^2(G);X] .
A Banach space X has Weyl-Paley type p on G if and only if X^* has Weyl-Paley cotype p' on G.
We simply show that if X has Weyl-Paley type p on G, then for W(f) ⊗ x^* ∈ℬ_p(L^2(G)) ⊗ X^*, we have f ⊗ x^* ∈ L^p',p(G ×G, X^* ) and the rest follows as in Theorem <ref>.
Let u ∈ L^p,p'(G ×G, X) and let g_i ∈ L^p,p'(G ×G) and x_i ∈ X, 1 ≤ i ≤ n be such that u=∑_i=1^n g_i ⊗ x_i is one of the representation of u.
Then,
|⟨ W^-1(W(f)⊗ x^*), ∑_i=1^n g_i ⊗ x_i ⟩|
= | ⟨ W(f)⊗ x^*,∑_i=1^n W(g_i) ⊗ x_i⟩ |
= | ∑_i=1^n⟨ W(f), W(g_i)⟩⟨ x^*,x_i ⟩ |
≤ W(f)_ℬ_p(L^2(G)x^*(∑_i=1^nW(g_i)_ℬ_p'(L^2(G)x_i).
Since this is true for any arbitrary representation, taking infimum over all such representations we get
|⟨ W^-1(W(f)⊗ x^*), u ⟩| ≤ W(f)_ℬ_p(L^2(G)x^*( Wu_ℬ_p'[L^2(G);X])
= W(f) ⊗ x^*_ℬ_p[L^2(G);X^*]( Wu_ℬ_p'[L^2(G);X])
≤ C W(f) ⊗ x^*_ℬ_p[L^2(G);X^*]u_L^p,p'(G ×G ;X)
where we have used X has Weyl-Paley type p on G in the last step.
Let 1 ≤ p ≤ 2. Let f be such that W(f) ∈ℬ_p(L^2(G)), then f ∈ L^p',p(G ×G) and there exists C>0 such that
f_ L^p',p(G ×G)≤ C W(f)_ℬ_p(L^2(G)) .
Let 1<p<2<q< ∞.
(i) If X has Weyl-Paley type p on G, then X has Weyl type p on G.
(ii) If X has Weyl-Paley cotype q on G, then X has Weyl cotype q on G.
(iii) If X has Weyl type p on G, then it has Weyl-Paley type p_0 for any p_0 ∈ (1, p) and Weyl-Paley cotype q_0 for any q_0 ∈ (p', ∞) on G.
The (i) and (ii) part of the result directly follows from the fact that L^p',p(G ×G;X ) ↪ L^p'(G ×G;X ) and L^q'(G ×G;X ) ↪ L^q',q(G ×G;X ) are continuous embeddings.
For the last part, if X has Weyl type p, then
W: L^p(G ×G;X) →ℬ_p'[L^2(G);X]
is bounded map.
For 1<p_0< p, there exists θ∈ (0,1) such that 1/p_0=1-θ+θ/p. Let 1 ≤ r ≤∞. Then by interpolation with parameters (θ, r), we have
W: L^p_0,r(G ×G;X) →ℬ_p_0', r[L^2(G);X]
is bounded map. In particular, choose r=p_0' to get the desired result. For the other part, one can work with Weyl inverse instead.
Let 1<p<2<q< ∞. Let ω: ℝ^2n→ [0, ∞) is defined as ω(·)=|·|^-2n.
(i) If X has Weyl-Paley type p on ℝ^n, then it has Weyl-HL-ω cotype p' on ℝ^n.
(ii) If X has Weyl-Paley cotype q on ℝ^n, then it has Weyl-HL-ω type q' on ℝ^n.
(iii) If X has Weyl type p on ℝ^n, then it has Weyl-HL-ω cotype q_0 for any q_0 ∈ (p, ∞) and Weyl-HL-ω type p_0 for any p_0 ∈ (1,p) on ℝ^n.
(i)
If X has Weyl-Paley type p, then there exists C>0 such that
W(f)_B_p'[L^2(ℝ^n);X]≤ C f_L^p,p'(ℝ^2n ;X) .
Since
∫_0^∞ f^*(t) 1/(1 / w)^*(t) d t ≤∫_ℝ^d f(x) w(x) d x .
by using some rearrangements properties we get
∫_0^∞ t^ p^'/p(f(·)_X)^*(t)^p'd t/t≤ C
∫_ℝ^2nω(ξ)^p'-2f(ξ)_X^p' d ξ .
The left-hand side of the above equation is nothing but f_L^p,p'(ℝ^2n;X)^p'.
Hence, using the above inequalities, we get
W(f)_B_p'[L^2(ℝ^n);X]≤ Cf_L^p'(ℝ^2n ,ω^ (p'-2) ; X),
for all f ∈ L^p'(ℝ^2n ,ω^(p'-2) ; X ) as required.
(ii)Now, by using duality in equation (<ref>), we get
f_L^q'(ℝ^2n ,ω^ (q'-2) ; X)≤f_L^q,q'(ℝ^2n ;X).
If X has Weyl-cotype q, then using the above equation, we get
a constant C> 0 such that
f_L^q'(ℝ^2n ,ω^(q'-2) ;X)≤ C W(f)_B_q'[L^2(ℝ^n );X] .
(iii) This is a direct consequence of the above results and Proposition <ref>.
§ COMPETING INTERESTS
The authors declare that they have no competing interests.
acm
|
http://arxiv.org/abs/2307.03249v2
|
20230706184600
|
Single field inflation in the light of NANOGrav 15-year Data: Quintessential interpretation of blue tilted tensor spectrum through Non-Bunch Davies initial condition
|
[
"Sayantan Choudhury"
] |
astro-ph.CO
|
[
"astro-ph.CO",
"gr-qc",
"hep-ph",
"hep-th"
] |
|
http://arxiv.org/abs/2307.01968v1
|
20230705004019
|
Muti-scale Graph Neural Network with Signed-attention for Social Bot Detection: A Frequency Perspective
|
[
"Shuhao Shi",
"Kai Qiao",
"Zhengyan Wang",
"Jie Yang",
"Baojie Song",
"Jian Chen",
"Bin Yan"
] |
cs.CV
|
[
"cs.CV"
] |
Muti-scale Graph Neural Network with
Signed-attention for Social Bot Detection:
A Frequency Perspective
Shuhao Shi, Kai Qiao, Zhengyan Wang, Jie Yang, Baojie Song, Jian Chen, Bin Yan
Manuscript received June 30, 2023; revised August 16, 2023.
August 1, 2023
=====================================================================================================================================================================================================================================================
The presence of a large number of bots on social media has adverse effects. The graph neural network (GNN) can effectively leverage the social relationships between users and achieve excellent results in detecting bots. Recently, more and more GNN-based methods have been proposed for bot detection. However, the existing GNN-based bot detection methods only focus on low-frequency information and seldom consider high-frequency information, which limits the representation ability of the model. To address this issue, this paper proposes a Multi-scale with Signed-attention Graph Filter for social bot detection called MSGS. MSGS could effectively utilize both high and low-frequency information in the social graph. Specifically, MSGS utilizes a multi-scale structure to produce representation vectors at different scales. These representations are then combined using a signed-attention mechanism. Finally, the aggregated multi-scale representations are fed into an MLP to produce the final result. We analyze the frequency response and demonstrate that MSGS is a more flexible and expressive adaptive graph filter. MSGS can effectively utilize high-frequency information to alleviate the over-smoothing problem of deep GNNs. Experimental results on real-world datasets demonstrate that our method achieves better performance compared with several state-of-the-art social bot detection methods.
Graph Neural Network, Graph filter, Muti-scale structure, Signed-attention mechanism, Social bot detection.
§ INTRODUCTION
Social media have become an indispensable part of people's daily lives. However, the existence of automated accounts, also known as social bots, has brought many problems to social media. These bots have been employed to disseminate false information, manipulate elections, and deceive users, resulting in negative societal consequences <cit.>. Effectively detecting bots on social media plays an essential role in protecting user interests and ensuring stable platform operation. Therefore, the accurate detection of bots on social media platforms is becoming increasingly crucial.
Graph neural networks (GNNs) have emerged as powerful tools for processing non-Euclidean data, where entities are represented as nodes and relationships as edges in a graph. Leveraging the inherent graph structure, GNNs enable convolutions on the graph data, facilitating effective utilization of the relationships between entities. GNNs have demonstrated impressive performance in the field of social account detection. Building upon GNN-based approaches <cit.>, researchers have formulated the social bot detection task as a node classification problem. Alhosseini et al. <cit.> were pioneers in utilizing graph convolutional neural networks (GCNs) <cit.> to detect bots, effectively leveraging the graph structure and relationships among Twitter accounts. Subsequent investigations have focused on exploring multiple relationships within social graphs. For instance, Feng et al. <cit.> introduced the Relational Graph Convolutional Network (RGCN) <cit.> for Twitter social bot detection, enabling the integration of multiple social relationships between accounts. Additionally, Shi et al. <cit.> proposed a graph learning data augmentation technique to address the challenge of class imbalance in social bot detection.
Existing GNNs mainly apply fixed filters for the convolution operation; these models assume that nodes tend to share common features with their neighbors (low-frequency information) <cit.>. However, this assumption may be weakened in networks containing anomalies, since anomalies tend to have features that differ from their neighbors (high-frequency signals) <cit.>. As shown in Fig. <ref>, using low-frequency information alone is insufficient for social bot detection. In view of this shortcoming, we design a more flexible GNN structure that can adaptively learn both low-frequency and high-frequency information.
Our proposed framework pioneers the exploration of high-frequency signals in social bot detection, harnessing the power of GNNs. We introduce a novel GNN framework called MSGS, which adeptly captures the varying significance of different frequency components for node representation learning. At the core of this framework lies a simple yet elegant trainable filter, constructed through a multi-scale architecture and a signed-attention mechanism operating across multiple layers. By employing multi-scale features, we train a graph filter that intelligently exploits low-frequency and high-frequency information. Our extensive experimental results demonstrate the remarkable performance enhancement of GNNs on various benchmark datasets for social bot detection achieved by our proposed framework. The main contributions of our work are as follows:
* We are the first to analyze the high-frequency information in social bot detection and highlight the shortcomings of traditional GNNs in effectively utilizing it.
* Our proposed MSGS combines multi-scale architecture and signed-attention mechanism, enabling adaptive learning of the frequency response of the graph filter, thereby effectively leveraging both low-frequency and high-frequency information in social bot detection.
* Extensive experiments on real-world social bot detection datasets establish that MSGS outperforms other leading methods, including multi-scale GNNs and spectral GNNs.
§ PRELIMINARIES
In this section, we define some notations and used them throughout this paper. Let 𝒢=(𝒱, ℰ) denote the user networks graph, where 𝒱={v_1, ⋯, v_N} is the set of vertices with |𝒱|=N and ℰ is the set of edges. The adjacency matrix is defined as 𝐀∈{0,1}^N × N, and 𝐀_i, j=1 if and only if there is a edge between v_i and v_j. 𝐃∈ℝ^N × N is the degree matrix of 𝐀. 𝐃=diag{d_1, d_2, …, d_N} and d_i=∑_j𝐀_i j. Let 𝒩_i represents the neighborhood of node v_i. The feature matrix is represent as 𝐗∈ℝ^N × M, where each node v is associated with a M dimensional feature vector 𝐗_v.
§.§ Graph Fourier Transform
Theorem 1 (Convolution theorem) The Fourier transform of the convolution of functions is the product of the Fourier transforms of functions. For functions f and g, ℱ{·} and ℱ^-1{·} represent Fourier transform and Inverse Fourier transform respectively, then f * g=ℱ^-1{ℱ{f}·ℱ{g}}. The proof of Theorem 1 is provided in Appendix.
The graph spectral analysis relies on the spectral decomposition of graph Laplacians. The ordinary form of the Laplacian matrix is defined as 𝐋=𝐃-𝐀, the normalized form is defined as 𝐋_sym=𝐈-𝐃^-1 / 2𝐀𝐃^-1 / 2, and the random walk normalized form is defined as 𝐋_r w=𝐃^-1𝐋=𝐈-𝐃^-1𝐀. In this paper, we only analyze the normalized graph Laplacian matrix 𝐋_sym; the analysis results can be easily extended to other Laplacian matrices. The purpose of defining the Laplacian operator is to find the basis for the graph Fourier transform. The Fourier basis on the graph is made up of the eigenvectors of 𝐋, 𝐔=[𝐮_1…𝐮_n]. The eigenvalue decomposition of the Laplacian matrix can be expressed as 𝐋=𝐔Λ𝐔^T, where Λ=diag([λ_1, λ_2, ⋯, λ_n]) is a diagonal matrix of 𝐋's eigenvalues, λ_l∈[0,2] and 1 ≤ l ≤ N. Assuming λ_1≤λ_2≤…≤λ_N, λ_1 and λ_N correspond to the lowest and the highest frequency of the graph.
§.§ Graph Spectral Filtering
Signal filtering is a crucial operation in signal processing. It extracts or enhances the required frequency components in the input signal and filters or attenuates some unwanted frequency components. According to Theorem 1, the signal is first transformed into the frequency domain, multiplied element-by-element in the frequency domain, and finally transformed back into the time domain. A graph signal 𝐱 with filter f of the eigenvalues can be defined as follows:
𝐇=f * 𝐱=𝐔((𝐔^T f) ⊙(𝐔^T𝐱)),
where 𝐱̂=𝐔^⊤𝐱 denotes the graph Fourier transform, 𝐱=𝐔𝐱̂ denotes the inverse graph Fourier transform, and ⊙ denotes element-wise multiplication. 𝐔^T f=[g(λ_1), g(λ_2), …, g(λ_n)]^T is called the convolution filter in the frequency domain. Define g_θ(Λ)=diag([g(λ_1), g(λ_2), …, g(λ_n)]), where θ is the learnable convolution kernel parameter; then:
𝐇=f * 𝐱=𝐔 g_θ𝐔^T𝐱.
The computational complexity of graph convolution is high because of the high cost of the eigenvalue decomposition of the graph Laplacian. To overcome the disadvantage of having a large convolution kernel, ChebNet approximates the parameterized frequency response function with a K-th order polynomial g_θ=∑_i=0^Kθ_iΛ^i, then:
𝐱 * 𝐠≈𝐔(∑_i=0^Kθ_iΛ^i) 𝐔^T𝐱=∑_i=0^Kθ_i𝐋_n^i𝐱.
Kipf et al. <cit.> proposed a simpler graph convolution which approximates first-order Chebyshev graph convolution. Specifically, let θ_0=2 θ, θ_1=-θ, θ_k>1=0:
𝐱 * 𝐠≈θ(2 𝐈-𝐋_n) 𝐱=θ(𝐈+𝐃^-1 / 2𝐀𝐃^-1 / 2) 𝐱.
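As an illustration of this spectral view (a minimal numerical sketch, not from the original paper; the dense eigendecomposition is used only for exposition and does not scale to large graphs):

import numpy as np

def normalized_laplacian(A):
    # L_sym = I - D^{-1/2} A D^{-1/2} for a symmetric adjacency matrix A
    d = A.sum(1)
    D_inv_sqrt = np.diag(1.0 / np.sqrt(np.maximum(d, 1e-12)))
    return np.eye(A.shape[0]) - D_inv_sqrt @ A @ D_inv_sqrt

def spectral_filter(x, A, g):
    # H = U g(Lambda) U^T x  (graph Fourier transform, filtering, inverse transform)
    lam, U = np.linalg.eigh(normalized_laplacian(A))
    return U @ (g(lam) * (U.T @ x))

# A K-layer GCN corresponds (up to the learnable weights) to the fixed low-pass response g(lam) = (2 - lam)^K:
# h = spectral_filter(x, A, lambda lam: (2.0 - lam) ** K)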
Theorem 2 (Over-smoothing) For any fixed low-pass graph filters defined over 𝐋^sym, given a graph signal 𝐱, suppose we convolve 𝐱 with the graph filter. If the number of layers in the GNN is large enough, the over-smoothing issue becomes inevitable. The proof of Theorem 2 is provided in Appendix.
§ THE PROPOSED METHOD
The use of fixed low-pass filters in GCN and other GNNs largely limits the expressive power of GNNs, thereby affecting their performance. The novelty of our method lies in the multi-scale and signed attention. Through the use of directional attention and coefficients γ^(0), γ^(1), …, γ^(K) of different scale channels, we learn the filtering function. MSGS works well universally by effectively utilizing low-frequency and high-frequency information through learning frequency hyperparameters to change the frequency spectrum of the graph filter.
§.§ Muti-scale Architecture
Proposition 1. Most existing GNN models, such as GCN, employ a fixed low-pass filter. As a result, after passing through a GNN, the node representations become similar.
Assume that (v_i, v_j) is a pair of connected nodes, 𝐱_i and 𝐱_j are the node features. 𝒟_i, j represents the distance between nodes v_i and v_j. The original distance of representations is 𝒟_i, j=𝐱_i-𝐱_j_2. The filter used in GCN is 𝐈+𝐃^-1 / 2𝐀𝐃^-1 / 2. Subject to d_i≈ d_j≈ d, the distance of representations learned after neighborhood aggregation is:
𝒟̃_i, j≈(𝐱_i+𝐱_j/d_j)-(𝐱_j+𝐱_i/d_i)_2≈1-1/d_2 𝒟_i, j<𝒟_i, j
After neighborhood aggregation by GNN, the distance between node representations decreases.
Although different GNN models use different f in Equ. (<ref>), GCN and many subsequent models use a fixed low-pass filter for graph convolution, leading to similar node representations. According to Theorem 2, when the number of model layers is too deep, it will lead to the over-smoothing issue in GNNs. When using multiple GNN layers for learning, the task performance declines significantly.
To improve the ability of GNN models to utilize the information at different frequencies, we propose a multi-scale graph learning framework. Specifically, the feature embedding of the l-th layer of the GCN model is defined as follows:
𝐇^(l)=σ(𝐀̂𝐇^(l-1)𝐖^(l)),
where 𝐖^(l) is a learnable parameter matrix and l ≥ 1, 𝐇^(0)=𝐗𝐖^(0). σ(·) is the activation function. 𝐇^(l) represents the feature embedding obtained after l layers of graph convolution. 𝐇^(0), 𝐇^(1), …, 𝐇^(K) are feature embeddings obtained at different scales. Let 𝐇̃^(l) denote the feature embedding obtained after neighborhood aggregation, 𝐇̃^(l)=𝐀̂𝐇^(l). We retain both the embeddings before and after feature propagation:
𝐙^(l)=(α^(l)-β^(l)) 𝐇^(l)+β^(l)𝐇̃^(l).
The calculation of α^(l) and β^(l) is detailed in Section <ref>. 𝐏 contains adaptive filters with K+1 different scales, shown in Equ. (<ref>). The coefficients Γ^(0), Γ^(1), …, Γ^(K) are calculated through scale-level attention mechanism, see Section <ref> for details.
𝐏=∑_k=0^KΓ^(k)·𝐙^(k).
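A condensed PyTorch-style sketch of this multi-scale propagation is given below (illustrative only; here α, β and Γ are passed in as precomputed coefficients, whereas in MSGS they are produced by the signed-attention mechanism of the next subsection, and the node-specific coefficients are reduced to scalars for brevity):

import torch
import torch.nn as nn

class MultiScalePropagation(nn.Module):
    def __init__(self, in_dim, hid_dim, K):
        super().__init__()
        self.K = K
        self.lin0 = nn.Linear(in_dim, hid_dim)                                    # H^(0) = X W^(0)
        self.lins = nn.ModuleList([nn.Linear(hid_dim, hid_dim) for _ in range(K)])

    def forward(self, X, A_hat, alpha, beta, gamma):
        H = self.lin0(X)
        Z = [(alpha[0] - beta[0]) * H + beta[0] * (A_hat @ H)]                     # Z^(0)
        for k in range(1, self.K + 1):
            H = torch.relu(A_hat @ self.lins[k - 1](H))                            # H^(k)
            Z.append((alpha[k] - beta[k]) * H + beta[k] * (A_hat @ H))             # Z^(k)
        return sum(g * z for g, z in zip(gamma, Z))                                # P = sum_k Gamma^(k) Z^(k)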
§.§ Signed-attention Mechanism
Node-level attention mechanism
In Equ. (<ref>), α^(l)∈(0,1] and β^(l)∈(-1,1). α^(l)-β^(l) controls the proportion of preserved original embedded features, while β^(l) is the coefficient of the aggregated neighborhood features.
Proposition 2. The graph filter g: 𝐙^(K)=(α^(K)-β^(K)) 𝐇^(K)+β^(K)𝐇̃^(K) is an adaptive filter that can be adjusted to a low-pass or high-pass filter depending on the changes of α^(K) and β^(K).
The filter used in g is α^(K)𝐈+β^(K)𝐃^-1 / 2𝐀𝐃^-1 / 2, then:
𝒟̃_i, j ≈(α^(K)𝐱_i+β^(K)𝐱_j/d_j)-(α^(K)𝐱_j+β^(K)𝐱_i/d_i)_2
≈α^(K)1-β^(K)/d_2𝒟_i, j( s.t. du≈dv≈d).
When α^(K)1-β^(K)/d_2<1, 𝒟̃_i, j<𝒟_i, j, and g is a low-pass filter. When α^(K)1-β^(K)/d_2>1, 𝒟̃_i, j>𝒟_i, j, and g becomes a high-pass filter. High-pass filtering makes the representations more discriminative.
The proper design of α^(K) and β^(K) requires knowing whether the information in the graph is high frequency or low frequency. However, we usually do not know the frequency distribution of the graph signal. Therefore, we propose a shared adaptive mechanism to calculate node-specific frequency coefficients α_i^(K) and β_i, j^(K):
α_i^(K)=σ(𝐠_α^(K)[𝐡̃_i^(K)-𝐡_i^(K)]),
β_i, j^(K)=σ((𝐠_β^(K))^T[𝐡_i^(K)𝐡_j^(K)]),
where 𝐠_α^(K) and 𝐠_β^(K) are shared attention vectors, the more similar 𝐡̃_i^(K) and 𝐡_i^(K) is, the smaller α_i^(K) tend to be.
Scale-level attention mechanism Calculate the attention coefficients (Γ^(0), Γ^(1), …, Γ^(K)) of the multi-scale feature embeddings through a signed-attention mechanism:
(Γ^(0), Γ^(1), …, Γ^(K))=att(𝐙^(0), 𝐙^(1), …, 𝐙^(K))
where Γ^(k)∈ ℝ^N × 1 represents the attention value vector of embeddings 𝐙^(k) for the N nodes, 0 ≤ k ≤ K. For node v_i, its feature embedding at the (k+1)-th scale is 𝐳_i^(k), which represents the i-th row of 𝐙^(k), (𝐙^(k))^T=(𝐳_1^(k), 𝐳_2^(k), …, 𝐳_N^(k)). The feature embedding is nonlinearly transformed, and attention values are then obtained through a shared attention vector 𝐪:
γ_k, i=𝐪^T·tanh (𝐖_k·(𝐳_i^(k))^T).
Γ^(k)=[γ_k, i], 0<i ≤ N. Once all the coefficients are computed, we can obtain the final embedding 𝐏 according to Equ. (<ref>). Then, we use the output embedding for semi-supervised node classification with a linear transformation and a softmax function:
𝐘̂_i=softmax(𝐖·𝐏_i+𝐛),
where 𝐖 and 𝐛 are learnable parameters, and softmax is a normalizer across all classes. Suppose the training set is V_L; for each v_n∈V_L the real label is 𝐲_n and the predicted label is 𝐲̃_n. In this paper, we employ the cross-entropy loss to measure the discrepancy between the real and predicted labels. The loss function is as follows:
ℒ=-∑_v_n∈V_Lloss(𝐲_n, 𝐲̃_n).
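A sketch of how the node-level and scale-level coefficients could be computed is shown below (a simplification for illustration: the shared vectors are implemented as single linear layers, tanh is used for β to match its (-1,1) range, and edge_index is assumed to hold the (source, target) node pairs):

import torch
import torch.nn as nn

class SignedAttention(nn.Module):
    def __init__(self, hid_dim, K):
        super().__init__()
        self.g_alpha = nn.Linear(hid_dim, 1)                      # shared vector g_alpha
        self.g_beta = nn.Linear(2 * hid_dim, 1)                   # shared vector g_beta
        self.W = nn.ModuleList([nn.Linear(hid_dim, hid_dim) for _ in range(K + 1)])
        self.q = nn.Parameter(torch.randn(hid_dim))               # shared scale-level vector q

    def node_coeffs(self, H, H_tilde, edge_index):
        alpha = torch.sigmoid(self.g_alpha(H_tilde - H)).squeeze(-1)                              # alpha_i in (0, 1]
        src, dst = edge_index
        beta = torch.tanh(self.g_beta(torch.cat([H[src], H[dst]], dim=-1))).squeeze(-1)           # beta_ij in (-1, 1)
        return alpha, beta

    def scale_coeffs(self, Z_list):
        # gamma_{k,i} = q^T tanh(W_k z_i^(k)) for every scale k and node i
        return torch.stack([torch.tanh(self.W[k](Z)) @ self.q for k, Z in enumerate(Z_list)], dim=0)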
§ THEORETICAL ANALYSIS
§.§ Spectral Analysis for GCN
According to Equ. (<ref>), the graph propagation of GCN can be formulated as follows:
𝐇_GCN=(2𝐈-𝐋)^K𝐗,
where K ∈ℤ^+ denotes the number of graph convolution layers. The graph filter can be formulated as g_G C N(λ)=(2-λ)^K, λ∈[0,2]. 0 indicates low frequency information and 2 indicates high frequency information. The formula of GCN neighborhood polymerization is:
𝐡̃_i^(l)=𝐡_i^(l)+∑_j ∈𝒩_i1/√(d_i d_j)𝐡_j^(l)
where d_i and d_j represent the degrees of nodes v_i and v_j, respectively. The frequency responses of the first to fourth order GCN filters are shown in Fig. <ref> (a)-(d). GCN amplifies low-frequency signals and restrains high-frequency signals. Essentially, the GCN filter is a fixed low-pass filter with a greater tendency to aggregate low-frequency information. As the number of GCN layers increases, the order of the filter increases, and the suppression of high-frequency information is enhanced. Therefore, deep GCN models can lead to over-smoothing.
§.§ Spectral Analysis for FAGCN
In order to extract low-frequency and high-frequency information separately, FAGCN incorporates two convolution kernels ℱ_L and ℱ_H to extract low-frequency and high-frequency information respectively:
ℱ_L=ε𝐈+𝐃^-1 / 2𝐀𝐃^-1 / 2=(ε+1) 𝐈-𝐋,
ℱ_H=ε𝐈-𝐃^-1 / 2𝐀𝐃^-1 / 2=(ε-1) 𝐈+𝐋.
For a K-layer FAGCN model, its spectral filter is the combination of g_FAGCN_L(λ) and g_FAGCN_H(λ):
g_FAGCN_L(λ)=(1-λ+ϵ)^K,
g_FAGCN_H(λ)=(λ-1+ϵ)^K,
where ϵ∈[0,1]. g_FAGCN_L(λ) and g_FAGCN_H(λ) denote low-frequency and high-frequency filters respectively. Fig. <ref> shows the frequency response of FAGCN_L and FAGCN_H. FAGCN use the attention mechanism to learn the coefficients for low-frequency and high-frequency graph signals.
𝐡̃_i^(l) =α_i j^L(ℱ_L·𝐇^(l))_i+α_i j^H(ℱ_H·𝐇^(l))_i
=ε𝐡_i^(l)+∑_j ∈𝒩_iα_i j^L-α_i j^H/√(d_i d_j)𝐡_j^(l).
Let α_i j^G=α_i j^L-α_i j^H. The coefficient α_i j^G is normalized by the tanh function, which ranges from -1 to 1, FAGCN can adaptively learn low-frequency and high-frequency information. The filters in FAGCN are essentially linear combinations of (1-λ+ϵ)^K and (λ-1+ϵ)^K. ϵ is actually a translation transformation of frequency response. Due to the limited range of values for ϵ, the space for the filter to adjust is limited.
§.§ Spectral Analysis for RFA-GNN
The RFA-GCN (frequency-adaptive graph convolutional network) is designed with a frequency-adaptive filter that includes a self-gating mechanism for adaptively selecting signals with different frequencies. RFA-GCN has a multi-hop relation-based frequency-adaptive architecture that considers both the graph properties of the data and high-order information between nodes. The convolution kernel of RFA-GNN is:
ℱ=α𝐈+β𝐃^-1 / 2𝐀 𝐃^-1 / 2=(α+β) 𝐈-β𝐋.
Its graph filter can be formulated as:
g_RFA-GCN(λ)=(α+β-βλ)^K,
where α∈(0, 1] and β∈(-1, 1). For the key parameter β in Equ. (<ref>), a shared adaptive mechanism was used to learn the frequency coefficient {β_i, j}_i, j=1^N for each node. The formula of GCN neighborhood polymerization is:
𝐡̃_i^(l)=α𝐡_i^(l)+∑_j ∈𝒩_iβ_i, j^(l)/√(d_i d_j)𝐡_j^(l),
and the frequency response of RFA-GCN of order K can be written as:
(α+β-βλ)^K=β^K(α+β/β-λ)^K.
The range of α+β/β is (-∞,+∞).
Fig. <ref> shows the frequency response of RFA-GNN with different values of α and β. Although RFA-GNN extends RAGCN to more generalized cases, the frequency response of RFA-GNN is still a shifted transformation of (-λ)^K.
§.§ Spectral Analysis for MSGS
The K-th order graph filter of MSGS can be formulated as follows:
g_MSGF(λ) =∑_k=0^Kγ_k(α^(k)+β^(k)-β^(k)λ)^k
=∑_k=0^Kγ_k(β^(k))^k(α^(k)+β^(k)/β^(k)-λ)^k,
where α^(k)∈(0,1], β^(k)∈(-1,1). The parameters α^(k) and β^(k) of MSGS can be adjusted to utilize different frequencies from the K-hop neighborhood.
𝐩_i=∑_k=0^Kγ_k, i𝐳_i^(k)=∑_k=0^Kγ_k, i[α_i^(k)𝐡_i^(k)+∑_j ∈𝒩_iβ_i, j^(k)/√(d_i d_j)𝐡_j^(k)].
As shown in Fig. <ref>, the frequency response of K-layer GCN, RAGCN, and RFA-GNN only considers the K-th power of λ, and the frequency response of graph filters is relatively fixed. Compared to the aforementioned methods, MSGS expands the frequency response to a K-order polynomial, allowing for more flexible adaptation of low and high-frequency information.
Proposition 3 For a single MSGS graph filter g, C*g can represent any K-th order polynomial, where C is any real number.
This proposition highlights that the frequency response of K-layer MSGS can represent any K-th order polynomial, which expands the space of graph filters. As a result, the model can be more flexible in preserving or filtering out low-frequency and high-frequency information.
C*g_MSGF(λ)=∑_k=0^K C γ_k(β^(k))^k(α^(k)+β^(k)/β^(k)-λ)^k
Let c_1=C γ_k(-β^(k))^k, c_2=α^(k)+β^(k)/β^(k)
C*g_MSGF(λ) =∑_k=0^K C γ_k(-β^(k))^k(λ-α^(k)+β^(k)/β^(k))^k
=∑_k=0^K c_1(λ-c_2)^k,
where c_1, c_2∈(-∞,+∞), therefore, C*g can represent any K-th order polynomial expression.
The frequency response of MSGS under different parameters is shown in Fig. <ref>. Compared with the previous GNNS, MSGS has a larger variation space and can learn a more accurate frequency response. MSGS can adaptively utilize the information of the K-hop neighborhood of the target node. By learning the weights of the edges during adaptive neighborhood aggregation, positive weights are assigned to edges with low-frequency information to enhance the information through addition. In contrast, negative weights are assigned to those with high-frequency information for enhancement through subtraction. This approach strengthens the low-frequency information and enhances the high-frequency information in the graph.
§ EXPERIMENT SETUP
§.§ Dataset
We evaluated MSGS and other bot detection models on three datasets: Cresci-15 <cit.>, Twibot-20 <cit.>, and MGTAB <cit.>. These datasets provide information on the follower and friend relationships between users. Cresci-15 is a dataset of 5,301 users labeled genuine or automated accounts. Twibot-20 is a dataset of 229,580 users and 227,979 edges, of which 11,826 accounts have been labeled genuine or automated. MGTAB is a dataset containing more than 1.5 million users and 130 million tweets. It provides information on seven types of relationships between these users and labels 10,199 accounts as either genuine or bots.
We constructed user social graphs by using all labeled users and follower and friend relationships between them. For MGTAB, we used the top 20 user attribute features with the highest information gain and 768-dimensional user tweet features extracted by BERT as user features. For Twibot-20, following <cit.>, we used 16 user attribute features, user description features, and user tweet features extracted by BERT. For Cresci-15, as described in <cit.>, we used 6 user attribute features, 768-dimensional user description features extracted by BERT, and user tweet features. Table <ref> provides a summary of the dataset statistics. We randomly partitioned all datasets using a 1:1:8 ratio.
§.§ Baseline Methods
To verify the effectiveness of our proposed MSGS, we compare it with various semi-supervised learning baselines. The details of these baselines are as follows:
* Node2Vec <cit.> is a weighted random walk algorithm that facilitates the creation of node vectors that satisfy both homophily and structural similarity assumptions.
* APPNP <cit.> combines GCN with PageRank to better propagate information from neighboring nodes, utilizing a large, adjustable neighborhood.
* GCN <cit.> is a spectral graph convolution method that generates node embedding vectors by truncating the Chebyshev polynomial to the first-order neighborhoods.
* SGC <cit.> is a simplified version of GCN that reduces excessive complexity by iteratively removing non-linearities between GCN layers and collapsing the resulting function into a single linear transformation.
* GAT <cit.> is a semi-supervised homogeneous graph model that employs the attention mechanism to determine the weights of node neighborhoods, thereby improving the performance of graph neural networks.
* Boosting-GNN <cit.> trains a series of GNN base classifiers by serializing them, and sets higher weights for training samples that are not correctly classified by previous classifiers, thus obtaining higher classification accuracy and better reliability.
* LA-GCN <cit.> improves the expressiveness of GNN by learning the conditional distribution of neighbor features to generate features.
* JK-Nets <cit.> is a kind of GNN that employs jump knowledge to obtain a more effective structure-aware representation by flexibly utilizing the distinct neighborhood ranges of each node.
* MSGCN <cit.> incorporates multi-scale information into the GCN design and fuses it with a self-attention mechanism. This enhances the neural network's expression ability and alleviates the over-smoothing phenomenon of GCNs.
* FAGCN <cit.> explored, for the first time, the role of low-frequency and high-frequency signals in GNNs. They then designed a novel frequency-adaptive GCN that combines low-frequency and high-frequency signals in an adaptive manner.
* RFA-GNN <cit.> designs a frequency-adaptive filter with a self-gating mechanism that picks signals with different frequencies adaptively, without knowing the heterophily levels.
* AdaGNN <cit.> is an adaptive frequency response filter that can learn to control information flow for different feature channels. It adjusts the importance of different frequency components for each input feature channel, which creates a learnable filter when multiple layers are stacked together.
§.§ Parameter Settings and Hardware Configuration
All baseline methods have been initialized using the recommended parameters from their official codes and have undergone meticulous fine-tuning. Additionally, we conducted training for 500 epochs and selected the model with the highest validation accuracy for testing. Our model was trained using the Adam optimizer for 500 epochs. We experimented with different learning rates, specifically {0.001, 0.005, 0.01}. The number of layers, K, was set to 10 for all datasets. The L2 weight decay factor of 5e-4 was applied across all datasets. The dropout rate ranged from 0 to 0.5. The model presented in this paper utilized hidden units of {16, 32, 64, 128}. We fine-tuned the remaining parameters until achieving optimal classification performance.
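A minimal sketch of this training and model-selection procedure is given below, assuming a model with the usual (x, edge_index) signature and the data object built earlier; it is illustrative rather than the exact MSGS implementation.

import copy
import torch
import torch.nn.functional as F

# Adam, 500 epochs, L2 weight decay of 5e-4; keep the weights that achieve
# the best validation accuracy (sketch only; `model` and `data` are assumed).
optimizer = torch.optim.Adam(model.parameters(), lr=0.01, weight_decay=5e-4)
best_val_acc, best_state = 0.0, None

for epoch in range(500):
    model.train()
    optimizer.zero_grad()
    out = model(data.x, data.edge_index)
    loss = F.cross_entropy(out[data.train_mask], data.y[data.train_mask])
    loss.backward()
    optimizer.step()

    model.eval()
    with torch.no_grad():
        pred = model(data.x, data.edge_index).argmax(dim=1)
        val_acc = (pred[data.val_mask] == data.y[data.val_mask]).float().mean().item()
    if val_acc > best_val_acc:
        best_val_acc, best_state = val_acc, copy.deepcopy(model.state_dict())

model.load_state_dict(best_state)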
We implemented MSGS using PyTorch 1.8.0 and Python 3.7.10, along with PyTorch Geometric <cit.> for efficient sparse matrix multiplication. All experiments were executed on a server equipped with 9 Titan RTX GPUs, an Intel Xeon Silver 4210 CPU running at 2.20GHz, and 512GB of RAM. The operating system employed was Linux bcm 3.10.0.
§.§ Evaluation Metrics
We employ both accuracy and F1-score to assess the overall performance of the classifier.
Accuracy = (TP+TN) / (TP+FP+FN+TN),
Precision = TP / (TP+FP),
Recall = TP / (TP+FN),
F1 = 2 × Precision × Recall / (Precision + Recall),
where TP is True Positive, TN is True Negative, FP is False Positive, FN is False Negative.
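For completeness, these metrics can be computed directly from the confusion-matrix counts; the snippet below is a small illustration with made-up counts.

def classification_metrics(tp, tn, fp, fn):
    """Accuracy, precision, recall and F1 from confusion-matrix counts."""
    accuracy = (tp + tn) / (tp + fp + fn + tn)
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    f1 = 2 * precision * recall / (precision + recall)
    return accuracy, precision, recall, f1

# Example with made-up counts:
print(classification_metrics(tp=80, tn=90, fp=10, fn=20))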
§ EXPERIMENT RESULTS
In this section, we perform experiments on real-world social bot detection benchmarks to evaluate MSGS. We aim to answer the following questions:
* Q1: How does MSGS perform compared to the state-of-the-art baselines in different scenarios? (Section <ref>).
* Q2: How does MSGS perform under different training set partitions? (Section <ref>).
* Q3: How does each individual module contributes to the performance of MSGS? (Section <ref>).
* Q4: Can MSGS alleviate the over-smoothing phenomenon prevalent in GNNs? (Section <ref>).
* Q5: Can MSGS effectively use high and low-frequency information? What are the differences in using high-frequency and low-frequency information across different datasets? (Section <ref>).
* Q6: What are the frequency responses learned by MSGS on different datasets? (Section <ref>).
§.§ Evaluation on the Real-World Dataset
In this section, we perform experimental analysis on publicly available social bot detection datasets, aimed at assessing the efficacy of our proposed method. The data was partitioned randomly into training, validation, and test sets, maintaining a ratio of 1:1:8. To ensure reliability and minimize the impact of randomness, we performed five evaluations of each method using different seeds. Our results are reported in Table <ref>, illustrating the average performance of the baselines, as well as our proposed method, MSGS, and its various adaptations. Notably, MSGS consistently outperforms both the baselines and the alternative variants across all scenarios.
MSGS demonstrates significantly superior performance compared to GCN across all datasets. Specifically, MSGS exhibits improvements of 4.13%, 14.57%, and 1.40% on the MGTAB, Twibot-20, and Cresci-15 datasets, respectively, when compared to the baseline model GCN. Notably, detecting bots on the Cresci-15 dataset proves to be relatively facile, as most detection methods achieve over 95% accuracy. Consequently, there is limited scope for enhancement on this dataset. Furthermore, in comparison to the best results among state-of-the-art methods, our approach enhances accuracy by 1.51%, 1.94%, and 0.19% on the MGTAB, Twibot-20, and Cresci-15 datasets, respectively. These outcomes effectively demonstrate the efficacy of MSGS.
Regarding the multi-scale GNN, JK-Net incorporates skip connections between different layers, enabling the collection and aggregation of feature representations from diverse hierarchical levels to form the final feature representation. This approach retains more information compared to GCN. MSGCN, on the other hand, leverages information from multi-order neighborhoods, leading to respective improvements of 2.59%, 8.03%, and 0.91% on the MGTAB, Twibot-20, and Cresci-15 datasets compared to GCN. Notably, unlike previous multi-scale GNNs such as MixHop, etc., which are linear combinations of different order GCNs, the linear combinations of fixed K low-pass filters do not effectively exploit high-frequency information.
Recently proposed methods such as FAGCN, RFA-GNN, and AdaGNN effectively utilize high-frequency information within the graph, exhibiting superior detection performance compared to previous GNN approaches. Our proposed MSGS, however, surpasses FAGCN, RFA-GNN, and AdaGNN in detection performance by flexibly adjusting frequency responses based on different datasets, thereby achieving the best results.
§.§ Different Training Set Partition
To further evaluate the performance enhancement of our approach, we conducted a comprehensive comparison between MSGS and other GNNs across various training sets. Specifically, we employed a validation set with a scale of 0.1 and a test set of 0.5. By varying the training set from 0.1 to 0.4, the results are presented in Table <ref>. Notably, MSGS surpasses the baseline models by a significant margin across all social bot detection datasets, regardless of the training set. On the MGTAB, Twibot-20, and Cresci-15 datasets, MSGS achieves an average accuracy improvement of 5.55%, 1.24%, and 0.78% over the best-performing baseline, respectively.
§.§ Ablation Analysis
In this section, we conduct a comparative analysis between MSGS and its three variants to assess the effectiveness of the designed modules. The following is a detailed description of these variations:
* MSGS w/o MS removes the multi-scale structure and solely utilizes the output from the final layer of the GNN model.
* MSGS w/o SAM (N) eliminates the node-level signed-attention mechanism, setting α=1 and β=0.
* MSGS w/o SAM (S) excludes the scale-level signed-attention mechanism.
* MSGS incorporates all modules within the multi-scale graph learning framework.
The second half of Table <ref> presents the performance of various variants, highlighting the roles of different modules within our proposed MSGS. Among all the variants, MSGS w/o SAM (N) exhibits the worst performance. This is because, without the node-level signed-attention mechanism, MSGS degenerates into a fixed low-pass filter, unable to effectively utilize high-frequency information. On the other hand, MSGS w/o MS removes the multi-scale structure, resulting in a significant decline in performance as it cannot leverage multi-scale representations. Conversely, MSGS w/o SAM (S), which excludes the scale-level signed-attention mechanism, demonstrates improved performance compared to MSGS w/o MS when able to utilize multi-scale features. MSGS w/o SAM (S), which averages the multi-scale features, is not as flexible as attention-based weighting. As a result, its performance is still inferior to MSGS.
§.§ Alleviating Over-Smoothing Problem
To verify the ability of MSGS to alleviate the over-smoothing problem, we compared the performance of MSGS with GCN, GAT, FAGCN, and RFA-GNN models at different depths. We varied the number of layers in the models to {2, 4, 6, 8, 10, 16, 32, 64}, and the results are shown in Fig. <ref>. GCN achieved the best performance at two layers, but its performance gradually decreased as the number of layers increased, demonstrating that a too-deep structure can cause severe over-smoothing in GCN models. FAGCN, RFA-GNN, and our proposed MSGS all achieved significantly higher accuracy than GCN, especially when the models had a deeper layer configuration.
GAT added an attention mechanism to the neighborhood aggregation process based on GCN, and performed better than GCN at different layer configurations. The over-smoothing problem can be slightly alleviated by the attention mechanism.
FAGCN significantly outperformed GCN at different layer configurations, indicating that utilizing high-frequency information can alleviate the negative impact of over-smoothing on the model. Compared to FAGCN, the RFA-GNN model increased the range of graph filter adjustment and consistently outperformed FAGCN. Although both FAGCN and RFA-GNN can utilize high-frequency information to alleviate the over-smoothing problem, their detection accuracy slightly decreases when the model's depth is continuously increased. Our proposed MSGS, on the other hand, not only avoids over-smoothing as the number of layers increases but also improves classification performance.
§.§ Visualization of Edge Coefficients
We visualize the coefficient β^(k), extracted from the last layer of MSGS to verify whether MSGS can learn different edge coefficients for different datasets. We categorize the edges in the social network graph into intra-class and inter-class based on the labels of the connected nodes. In terms of the spatial domain, low-frequency information in the graph originates from intra-class edges, while high-frequency information originates from inter-class edges.
In GCN, all edges are assigned positive weights, assuming that nodes share similar features with their normal neighbors. However, high-frequency information also plays an essential role in bot detection, and anomalous nodes may connect with normal nodes, forming inter-class edges. Aggregating the neighborhood through intra-class edges can enhance the original features of the nodes, while aggregation through inter-class edges may destroy them. Our proposed MSGS allows for adaptive learning of edge weights. As shown in Fig. <ref>, most inter-class edges have negative weights, while most intra-class edges have positive weights. This effectively utilizes high-frequency information. This allows MSGS to prioritize and leverage the important high-frequency components in the graph, enhancing its ability to capture fine-grained details and subtle patterns in the data. By incorporating this signed-attention mechanism, MSGS can effectively utilize low-frequency and high-frequency information for social bot detection.
§.§ Visualization of Graph Filters
We have generated an approximate filter for MSGS on various datasets to gain a more profound understanding of our model. Fig. <ref> illustrates that our approach can effectively learn appropriate filtering patterns from the data. In the cases of MGTAB and Twibot-20, MSGS pays attention to both low-frequency and high-frequency information. However, Twibot-20 exhibits more high-frequency information than MGTAB, resulting in stronger responses for the obtained graph filters in the high-frequency domain. Conversely, for Cresci-15, MSGS primarily focuses on utilizing low-frequency information for classification. Therefore, on Cresci-15, MSGS behaves similarly to previous low-frequency filtered GNNs. This explains why MSGS did not improve significantly on the Cresci-15 dataset.
§ RELATED WORK
§.§ Social Bot Detection
Social bot detection methods can be broadly categorized into feature-based and graph-based approaches. Feature-based methods <cit.> rely on feature engineering to design or extract effective detection features and then employ machine learning classifiers for classification. Early research <cit.> utilized features such as the number of followers and friends and the number of tweets for detection. Subsequent work incorporated account posting content features to improve detection effectiveness further <cit.>. However, feature-based methods fail to leverage the interaction relationships between users.
Graph neural networks have recently been applied to social bot detection with promising results. Compared to feature-based methods, graph neural networks effectively utilize user interaction features, such as follow and friend relationships <cit.>. Graph neural network-based account detection methods <cit.> first construct a social relationship graph and then transform the problem of detecting bot accounts into a node classification problem. Feng et al. <cit.> constructed a social relationship graph using friend and follower relationships, extracted tweet features, description features, and identity field features of the accounts, and then performed node classification using RGCN. OS3-GNN <cit.> is a graph neural network framework that addresses the issue of class imbalance in social bot detection by generating minority class nodes in the feature space, thereby alleviating the imbalance between human and bot accounts. Shi et al. <cit.> proposed a graph ensemble learning method that combines random forest <cit.> with GNN for social bot detection.
§.§ Graph Neural Networks
Graph Neural Networks are neural networks designed for processing graph data. Unlike traditional methods, GNNs enable information exchange and aggregation among nodes by defining message passing on nodes and edges. Compared to traditional graph embedding methods such as DeepWalk <cit.> and node2vec <cit.>, GNNs have the capability to learn richer and more advanced node representations through multi-layer stacking and information propagation mechanisms. GNNs effectively capture relationships and global structures among nodes in graphs, making them suitable for various domains such as social network analysis, recommendation systems, and molecular graph analysis <cit.>.
Inspired by graph spectral theory, a learnable graph convolution operation was introduced in the Fourier domain <cit.>. GCN <cit.> simplified the convolution operation using a linear filter, becoming the most prevalent approach. GAT <cit.> introduced an attention mechanism to weigh the feature sum of neighboring nodes based on GCN. APPNP <cit.> utilizes Personalized PageRank <cit.>, constructing a low-pass filter with distinct concentration properties compared to GCN. Several algorithms <cit.> have contributed to the improvement of GCN and enhanced the performance of GNNs.
Existing spectral GNNs primarily employ fixed filters for the convolution operation, which can lead to over-smoothing issues due to the lack of learnability <cit.>. Recently, the spectral analysis of GNNs has garnered significant interest for its valuable insights into the interpretability and expressive power of GNNs <cit.>. RFGCN <cit.> attempted to demonstrate that most GNNs are restricted to low-pass filters and argued for the necessity of high-pass and band-pass filters. RFA-GNN <cit.> further extends the adjustment scope of RFGCN <cit.>, enabling better utilization of high-frequency information. These models enhance the expressive capacity of GNNs and enable adaptive adjustments of the frequency response of graph filters. However, their frequency-domain adjustment space remains limited. In this regard, we propose MSGS, which further expands the frequency domain adjustment space.
§ CONCLUSION
This paper introduces a novel social bot detection method called Multi-scale Graph Neural Network with Signed-Attention (MSGS). By incorporating multi-scale architecture and the signed attention mechanism, we construct an adaptive graph filter that can adjust the frequency response of the detection model based on different data, effectively utilizing both low-frequency and high-frequency information. Through the theoretical analysis from the frequency domain perspective, we have proved that MSGS expands the frequency domain adjustment space compared to existing graph filters. Moreover, MSGS addresses the over-smoothing problem commonly observed in existing GNN models. It exhibits exceptional performance, even in deep structures. Extensive experiments demonstrate that MSGS consistently outperforms state-of-the-art GNN baselines on social bot detection benchmark datasets.
§ ACKNOWLEDGMENT
This work was supported by the National Key Research and Development Project of China (Grant No. 2020YFC1522002).
§ APPENDIX: PROOF OF THEOREMS IN PAPER
Proof of Theorem 1.
The Fourier transform of f can be expressed as: ℱ{f}(v)=∫_ℝ f(x) e^-2 π i x · v d x. The inverse transformation can be expressed as: ℱ^-1{f}(x)=∫_ℝ f(v) e^2 π i x · v d v. We define h to be the convolution of f and g, then h(z)=∫_ℝ f(x) g(z-x) d x. Taking the Fourier transform of h, we get:
ℱ{f * g}(v) = ℱ{h}(v)
= ∫_ℝ h(z) e^-2 π i z · v d z
= ∫_ℝ∫_ℝ f(x) g(z-x) e^-2 π i z · v dx dz
= ∫_ℝ f(x)(∫_ℝ g(z-x) e^-2 π i z · v dz) dx.
We substitute y=z-x and dy=dz into Equ. (<ref>):
ℱ{f*g}(v) = ∫_ℝ f(x)(∫_ℝ g(y) e^-2 π i(y+x) · v dy) dx
= ∫_ℝ f(x) e^-2 π i x · v(∫_ℝ g(y) e^-2 π i y · v dy) dx
= ∫_ℝ f(x) e^-2 π i x · v dx ∫_ℝ g(y) e^-2 π i y · v dy
= ℱ{f}(v) ·ℱ{g}(v)
Taking the inverse Fourier transform of both sides of Equ. (<ref>), we get: f * g=ℱ^-1{ℱ{f}·ℱ{g}}.
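Although the proof above concerns the continuous Fourier transform, the same identity holds for the discrete transform and can be checked numerically; the snippet below is only an illustrative sanity check with random signals and circular convolution.

import numpy as np

rng = np.random.default_rng(0)
f = rng.standard_normal(128)
g = rng.standard_normal(128)

# Circular convolution h[k] = sum_n f[n] * g[(k - n) mod N].
N = len(f)
h = np.array([np.sum(f * np.roll(g[::-1], k + 1)) for k in range(N)])

# Convolution theorem: FFT(f * g) equals FFT(f) * FFT(g) elementwise.
lhs = np.fft.fft(h)
rhs = np.fft.fft(f) * np.fft.fft(g)
print(np.allclose(lhs, rhs))  # True (up to numerical precision)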
Proof of Theorem 2. For GCN, the symmetric Laplacian matrix is:
𝐋_sym=𝐈_N-𝐃^-1/2𝐀𝐃^-1/2=𝐔Λ𝐔^T=∑_i=1^Nλ_i𝐮_i𝐮_i^T,
where λ_i represents the i-th eigenvalue, 1 ≤ i ≤ N, and 0=λ_1≤λ_2≤ ... ≤λ_N.
𝐃^1/2𝐋_sym𝐃^1/2 1=(𝐃-𝐀) 1=0,
where 1 is the vector with all 1 elements, and multiply both sides by the inverse of 𝐃^1/2 to get 𝐋_sym𝐃^1/21=0.
So 𝐋_sym has an eigenvalue of 0 and the corresponding eigenvector 𝐃^1/21, and the largest eigenvalue of 𝐋_sym is the upper bound of the Rayleigh quotient:
λ_N=sup_𝐠 (𝐠^T𝐋_sym𝐠)/(𝐠^T𝐠),
where 𝐠 is a nonzero vector. Let 𝐟=𝐃^-1/2𝐠, then we have
(𝐟^T𝐋𝐟)/((𝐃^1/2𝐟)^T(𝐃^1/2𝐟)) = (∑_(u, v) ∈ E(𝐟_u-𝐟_v)^2)/(∑_v ∈ V𝐟_v^2 d_v)
≤ (∑_(u, v) ∈ E(2𝐟_u^2+2𝐟_v^2))/(∑_v ∈ V𝐟_v^2 d_v) = 2.
When the graph is bipartite, the equality holds. Since a real-world graph of any reasonable size is almost never bipartite, we do not discuss the bipartite case. Therefore, under the assumption that the graph is not bipartite, the maximum eigenvalue is less than 2. Since 𝐋_sym and 𝐋̃_sym are both symmetric normalized Laplacian matrices of a graph, differing only in that the graph corresponding to the latter has self-loops added, the eigenvalues of 𝐋̃_sym also lie in the range [0, 2).
For the GCN layer update, ignoring the activation function we get: 𝐇^(l)=𝐀̂𝐇^(l-1)𝐖^(l). Since 𝐀̂=𝐈_N-𝐋̃_sym,
𝐀̂^K =(𝐈_N-𝐋̃_sym)^K=(𝐈_N-𝐔Λ𝐔^T)^K
=𝐔(𝐈_N-Λ)^K𝐔^T=∑_i=1^N(1-λ_i)^K𝐮_i𝐮_i^T.
According to the range of eigenvalues proved above, the convergence state of 𝐀̂^K can be obtained:
lim _K →+∞𝐀̂^K=𝐮_1𝐮_1^T, 𝐮_1=𝐃^1/21/√(M+N),
where M and N represent the number of edges and nodes, respectively,
lim_K →∞𝐀̂^K𝐱 = C × [√(d_1+1), √(d_2+1), ⋯, √(d_N+1)]^T,
where C is a constant, C = (1/(M+N)) ∑_j=1^N√(d_j+1) x_j. Therefore, when the number of layers K is large, the input graph signal is completely smoothed out: the only remaining information is the node degree, and the graph signal becomes hard to separate linearly in Euclidean space. This leads to over-smoothing. Since the filters of conventional GCN variants are mainly defined over 𝐋̃_sym and satisfy the above condition at extremely deep layers, they often suffer from the over-smoothing problem.
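The collapse of 𝐀̂^K 𝐱 onto a degree-dependent vector can also be observed numerically; the sketch below uses a small synthetic random graph (assumed connected) purely for illustration.

import numpy as np

rng = np.random.default_rng(0)
N = 50
A = (rng.random((N, N)) < 0.2).astype(float)
A = np.triu(A, 1)
A = A + A.T                                  # symmetric adjacency, no self-loops
d = A.sum(axis=1)

# A_hat = D_tilde^{-1/2} (A + I) D_tilde^{-1/2}, with D_tilde = D + I.
D_inv_sqrt = np.diag(1.0 / np.sqrt(d + 1.0))
A_hat = D_inv_sqrt @ (A + np.eye(N)) @ D_inv_sqrt

x = rng.standard_normal(N)
smoothed = np.linalg.matrix_power(A_hat, 200) @ x

# After many propagation steps the signal is (almost) a scalar multiple of sqrt(d + 1),
# so the elementwise ratio is nearly constant when the sampled graph is connected.
ratio = smoothed / np.sqrt(d + 1.0)
print(np.std(ratio))  # close to 0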
§ BIOGRAPHY SECTION
Shuhao Shi
received the B.S. degree in PLA strategy support force information engineering university, Zhengzhou, China, in 2019. He is currently pursuing his Ph.D. degree at China PLA strategy support force information engineering university. He is the author of 15 articles. His research interests include graph neural network and social media account detection.
Kai Qiao received the B.S., M.S. and Ph.D. degrees in PLA strategy support force information engineering university, Zhengzhou, China, in 2014, 2017 and 2020, respectively. He is the author of 55 articles. Since 2020, he has been an Assistant Professor. His research interests include image processing and social media account detection.
Zhengyan Wang received the B.E. degree in Central South University (CSU), Changsha, China, in 2018 and the M.E. degree in information and communication engineering from the National University of Defense Technology (NUDT), Changsha, China, in 2020. Her research interests include image processing and social media account detection.
Jian Chen received the B.S., M.S. and Ph.D. degrees in PLA strategy support force information engineering university, Henan, China, in 2003, 2007 and 2013. From 2001 to 2004, he was a Research Assistant at the National Digital Switching System Engineering & Technological R & D Centre. Since 2015, he has been an Assistant Professor. He is the author of three books, 37 articles and holds 3 patents. His research interests include graph data processing, bot detection and intelligent information processing.
Jie Yang
received the B.S. degree in PLA strategy support force information engineering university, Zhengzhou, China, in 2021. He is currently pursuing his M.S. degree at China PLA strategy support force information engineering university. His research interests include reinforcement learning and bots detection.
Baojie Song
received the B.S. degree in PLA strategy support force information engineering university, Zhengzhou, China, in 2021. He is currently pursuing his Ph.D. degree at China PLA strategy support force information engineering university. His research interests include semantic segmentation, object detection and natural language processing.
Bin Yan
received the B.S. degree in PLA strategy support force information engineering university, Zhengzhou, China, in 2002, and the Ph.D. degree in Institute of High Energy Physics Chinese Academy of Sciences, in 2006. From 2006 to 2009, he was a Research Assistant with the National Digital Switching System Engineering & Technological R & D Centre. From 2009 to 2015, he was an Assistant Professor with the National Digital Switching System Engineering & Technological R & D Centre and Henan Key Laboratory of Imaging and Intelligence Processing. Since 2015, he has been a Professor in Henan Key Laboratory of Imaging and Intelligence Processing. He is the author of three books, more than 200 articles and holds 5 patents. His current research is focused on intelligent information processing.
|
http://arxiv.org/abs/2307.03200v1
|
20230704092632
|
Transcribing Educational Videos Using Whisper: A preliminary study on using AI for transcribing educational videos
|
[
"Ashwin Rao"
] |
cs.CY
|
[
"cs.CY",
"cs.AI",
"cs.MM"
] |
Transcribing Educational Videos Using Whisper: A preliminary study on using AI for transcribing educational videos
Ashwin Rao
August 1, 2023
===================================================================================================================
Videos are increasingly being used for e-learning, and transcripts are vital to enhance the learning experience.
The costs and delays of generating transcripts can be alleviated by automatic speech recognition (ASR) systems.
In this article, we quantify the quality of the transcripts generated by whisper for 25 educational videos and identify some open avenues of research when leveraging ASR for transcribing educational videos.
§ INTRODUCTION
During the last decade, we have witnessed an increase in the volume of video content that is disseminated over the Internet.
The pandemic further exacerbated this trend as people started to consume a wide range of video categories from their homes <cit.>.
Along with lectures, we have also witnessed a rise in the conferences and talks that are being recorded and uploaded online on streaming sites.
These videos augment the material taught in the classrooms and are increasingly being leveraged for educational purposes <cit.>.
Educational videos, like entertainment videos, are consumed on a variety of personal devices such as laptops, tablets, and smartphones.
The capabilities of the audio systems on these devices vary significantly, and a given audio file may sound different on each of these devices <cit.>.
Words in an audio segment recorded by amateurs may sound clear and comprehensible on one device, and the same audio segment may be unintelligible on another device.
Furthermore, the educational videos might include the voices of people from a wide range of ethnicities, and the speakers might also not be native speakers of the language in which they are speaking.
Clearly, the audio quality of educational videos is vital, and addressing acoustic issues can result in drastic improvement in the quality of the material <cit.>.
However, the video and audio quality of educational videos might not be optimal for all devices because they may not be professionally created, edited, and processed.
Figure: Example closed caption. The metadata (the file format and language) is followed by the time stamps during which the text can be shown.
Audio transcripts and subcaptions help alleviate the issues in the audio quality and enable the viewers to receive a correct interpretation of the content.
For instance, Gernsbacher has shown that captions are particularly beneficial for persons watching videos in their non-native language <cit.>.
Although generating transcripts has been non-trivial, recent advances in speech-to-text generation have shown promising results in transcribing audio content.
In the context of videos, transcripts are different from subtitles: transcripts typically refer to a textual copy of the words someone has said in the video, while subtitles refer to the textual versions of the dialogues in the video <cit.>.
Subtitles can either be open or closed: open subtitles are embedded in the video frames, while closed subtitles are stored separately and can be overlayed over the video frames or can be displayed on a second screen.
A variant of closed subtitles is closed captions which contain an additional description of the audio-video content being shown, such as the sound made by animals, etc.
At times, a transcript can also include additional description; examples include laughter by students, audience clapping, etc.
A key difference between a transcript and the subtitles is that a transcript does not contain the time stamp at which the words in the transcript were said.
In this article, we do a preliminary evaluation of the quality of transcripts generated by whisper <cit.>.
We focus on the speech-to-text translation, and not on the time stamp at which the word was spoken.
Although there is a wide range of tools and models for generating transcripts, we focus our attention on whisper.
Our goal is to get an understanding of using whisper for academic videos and identify open avenues of research in the area of leveraging ASR for transcribing academic videos.
§ METHODOLOGY
Tools used and data processing pipeline.
For our analysis, we first collect a set of 25 YouTube videos that have closed captions that are not automatically generated; YouTube shows if the captions are auto-generated or provided by the content creator.
For each video, we use yt-dlp to download the best audio files corresponding to the video and the available captions (as transcripts).
The downloaded captions are the baseline for our evaluation.
We do this because YouTube keeps multiple versions of the same video, and dynamically adapts to the optimal audio/video quality depending on the network connectivity.
We then use whisper <cit.> to generate the transcripts, and run it in our cluster powered by NVidia V100 GPUs <cit.>.
The generated transcripts are then compared with our baseline transcripts downloaded from YouTube using jiwer.
We summarize the tools used in <Ref>.
Table: Software Tools
Tool     | Version    | Usage
whisper  | 20230314   | Speech to text conversion.
jiwer    | 3.0.1      | Compare the text in two files.
yt-dlp   | 2023.03.04 | Download audio files and transcripts.
opusinfo | 0.1.10     | Extract metadata from audio files.
Automatic Transcript Generation (Speech to Text).
In this article, we restrict ourselves to whisper <cit.>.
Whisper offers multiple models which can be used to transcribe the audio files, and in our evaluation we restrict ourselves to the following five models (number of parameters in parenthesis), of which large-v2 is a multi-lingual model: base.en (74 M), tiny.en (39 M), small.en (244 M), medium.en (769 M), and large-v2 (1550 M).
We acknowledge that there is a wide range of open-source tools and models including Kaldi <cit.>, Flashlight <cit.>, and Paddlespeech <cit.>.
We plan to analyze the efficiency of these tools in our subsequent works.
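As a concrete illustration, the snippet below loads one of these models and transcribes a pre-downloaded audio file; the file names are placeholders, the audio is assumed to have been obtained with yt-dlp beforehand, and this is not the exact evaluation harness.

import whisper

# Load one of the evaluated models and transcribe a pre-downloaded audio file.
model = whisper.load_model("base.en")
result = model.transcribe("lecture.opus")    # placeholder path

with open("lecture_whisper.txt", "w") as fh:
    fh.write(result["text"])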
Metrics for evaluating transcript quality.
The Word Error Rate (WER) is a commonly used metric for comparing texts <cit.> and it is computed as
WER = (S+D+I)/N, with N = H+S+D, where H is the number of hits (correct words), S is the number of substitutions, D is the number of deletions, I is the number of insertions, and N denotes the number of words in the reference (baseline) against which the hypothesis (the output of the transcribing tool) is being evaluated.
In contrast, the Match Error Rate (MER) is the probability of an incorrect match <cit.>, and is given by MER = (S+D+I)/(H+S+D+I).
The Word Information Lost (WIL) is an approximation for the Relative Information Lost (RIL) which is computed using the hits, substitutions, insertions, and deletions <cit.>; the RIL measures the statistical dependence between the reference and the hypothesis and is calculated using the Shannon entropy.
Our goal is not to compare the metrics, and instead we rely on the WER, MER, and WIL to evaluate the performance of the transcription.
We use jiwer to compute the WER, MER, and WIL.
It is known that jiwer can end up computing a higher WER without normalizing the text <cit.>, and the WER depends on the normalization technique used.
For this preliminary analysis we avoid using any custom normalizations, and we plan to explore the impact of normalization in a subsequent study.
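For illustration, the metric computation with jiwer can be written as below; the transcript file names are placeholders and, as in this preliminary analysis, no custom normalization is applied.

import jiwer

# Compare a generated transcript against the baseline YouTube transcript.
reference = open("lecture_baseline.txt").read()
hypothesis = open("lecture_whisper.txt").read()

print("WER:", jiwer.wer(reference, hypothesis))
print("MER:", jiwer.mer(reference, hypothesis))
print("WIL:", jiwer.wil(reference, hypothesis))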
Figure: Average bitrate of the audio files.
Dataset Description.
Of the 25 YouTube videos, 15 were from lectures on MIT OCW.
The remaining 10 included 5 talks at Google, one talk at MIT OCW, and four Turing Award lectures.[Availability:
The details of these videos are available with our code and datasets at:
<https://version.helsinki.fi/transcribe-educational-videos/preliminary-study-dai2023/>].
In <Ref>, we present the playback duration (size in seconds) of each of the videos and the average bitrate of the audio file.
The quality of the audio file is important because it can affect the quality of the transcripts being generated, and we observe that the audio files downloaded have an average bit rate of at least 92 kbps.
Note that the audio files were encoded in the Opus audio format, which supports variable bitrate and is optimized for speech <cit.>.
We also observe that the audio files were sampled at 48 kHz.
Whisper internally converts the audio file to 16 kHz, and we believe that the audio files in our dataset have a sufficiently higher frequency from which audio segments can be sampled at 16 kHz.
§ EVALUATION
In <Ref>, we present the time required to transcribe a video for a given playback time (see <Ref>), and also for a given word count in our baseline transcripts (see <Ref>).
We observe that the time to transcribe increases linearly with the playback duration and word count, and the larger models require more time.
We present these results to give a ballpark on what to expect, and we are aware that these times are heavily biased to the audio content, and the computational capabilities in our cluster.
Figure: Relative transcription time. If the playback time is 50 s and it takes 10 s to generate the transcript, then the fraction of playback time is 10/50 = 0.2, i.e., generating the transcript required 20% of the playback time. (Range = min, max)
In <Ref>, we plot the fraction of the playback time that a given model took to transcribe the video.
We observe that even the large-v2 model was able to complete the transcription process in less than 25% of the time required to playback the video.
For the videos in our dataset, and while running whisper on our servers, we observe that the base, tiny, and small models took less than 10% of the playback time to transcribe the video, and the larger models took less than 25% of the playback time.
A typical human transcriber would require at least the playback time to listen to the whole audio.
In <Ref>, we present a snippet of the transcripts generated using Whisper.
In this snippet, the speaker asks the audience member to repeat what they said because of audio issues.
We see that the original transcript marks the conversation as inaudible, while whisper tries to guess what is said, and the results vary with the model size.
Clearly, this speed-up when using smaller models is meaningless if the quality of the transcription is poor.
Figure: Transcript quality. The error bars represent the min and max across the files in the dataset.
In <Ref>, we present the WER, MER, and WIL when using the various models.
Across all the metrics, we observe that the WER, MER, and WIL decrease as the number of parameters in the models increases.
An exception is for the large-v2 model.
We believe that this is primarily due to the lack of using a normalizer <cit.>, and the audio segments that were marked in the original transcripts.
As shown in <Ref>, whisper transcribes the conversation marked by the human transcriber, and the volume of text generated (sans punctuations) by the large-v2 model is larger than the other models thus resulting in a higher error rate.
Figure: Fraction of hits, substitutions, deletions, and insertions. Error bars represent the min and max across files in our dataset. The cutout zooms into the deletions and insertions.
Along with the example provided in <Ref>, we also observe a high WER, a high WIL, and a high MER for other videos, as highlighted by the error bars in <Ref>.
To better understand this behavior, we present the fraction of hits, substitutions, deletions, and insertions in <Ref>.
Across all models, we observe that the hits are above 80% for the majority of videos, and the fraction of hits increases with the number of parameters.
However, for some videos, such as the one in <Ref>, we observe a large number of substitutions, insertions, and deletions.
One reason for the high error rates is that whisper does not provide an inaudible marker as output and tries to extract text even from audio which a human transcriber might mark as inaudible.
This is further exacerbated by not leveraging the context.
For instance, in the example shown in <Ref> the conversation was about domain-specific architecture, and the question being asked was on the same topic, and yet some of the models wrongly predicted the outcome to be Thomas version architecture or Thomas's certificate architecture.
These predictions are bullshit[We apologize for the use of profanity, and we rely on the following quote by Harry Frankfurt <cit.> for describing the term bullshit: “it is impossible for someone to lie unless he thinks he knows the truth. Producing bullshit requires no such conviction.”] because they (and the underlying models) are indifferent to truth.
Furthermore, although only two substitutions are needed to change thomas certificate architecture to domain specific architecture, incorrect predictions like these diminish the usefulness of the generated transcripts.
We believe that marking such audio segments as inaudible, or with an equivalent tag that indicates low confidence in the transcription result, would be more beneficial in such scenarios.
This is achievable by tweaking some thresholds in whisper's configurations, and we plan to explore their impact in subsequent works.
§ CONCLUDING REMARKS AND AVENUES FOR FUTURE WORK
We performed a preliminary analysis of the transcription capabilities of Whisper; however, we cannot draw any strong conclusions: our dataset is heavily biased towards the videos picked by the author, and the results cover only the models of one tool, whisper.
However, we gained some insights, such as the importance of marking audio segments as inaudible, and how such segments affect the quality of transcripts generated by ASR systems.
Some avenues for future work in this area include: a) metrics that account for the semantic information such as the importance of each word, and evaluate the quality of transcripts in end-user studies; b) comparing the transcription results from different models; c) evaluating transcription capabilities for languages other than English, and also for non-native speakers for these languages; d) quantifying the impact of multiple speakers from different ethical backgrounds in the same video/audio; e) approaches to identify the context of the lecture/talk, and leveraging it for better transcriptions; f) quantifying the costs for generating transcripts for different accelerators, and identifying effectiveness of accelerators for transcript generation on end-user devices; and g) quantifying the quality of subtitles including the timestamp of the words and descriptions of the sounds that are generated by the ASR system.
Acknowledgement.
The authors wish to thank the Finnish Computing Competence Infrastructure (FCCI) for supporting this project with computational and data storage resources
This work is licensed under a https://creativecommons.org/licenses/by-sa/4.0/Creative Commons Attribution-ShareAlike 4.0 International License.
|
http://arxiv.org/abs/2307.03293v1
|
20230706210803
|
CheXmask: a large-scale dataset of anatomical segmentation masks for multi-center chest x-ray images
|
[
"Nicolás Gaggion",
"Candelaria Mosquera",
"Lucas Mansilla",
"Martina Aineseder",
"Diego H. Milone",
"Enzo Ferrante"
] |
eess.IV
|
[
"eess.IV",
"cs.CV",
"physics.med-ph"
] |
CheXmask: a large-scale dataset of anatomical segmentation masks for multi-center chest x-ray images
Nicolás Gaggion, Candelaria Mosquera, Lucas Mansilla, Martina Aineseder, Diego H. Milone, Enzo Ferrante
=====================================================================================================
§ BACKGROUND & SUMMARY
Chest radiography is a pivotal imaging technique used to diagnose a variety of lung diseases, including pneumonia, tuberculosis, and lung cancer. The significant role of chest X-rays (CXR) in clinical practice is ascribed to their non-invasive nature, relatively low cost, and rapid diagnostic potential. However, the interpretation of these images poses a considerable challenge due to the intricate and overlapping structures within the thoracic cavity, and the subtle manifestations of certain pathological conditions. The high demand for chest radiography and the global shortage of radiologists accentuate the need for efficient and reliable automated analysis systems.
In recent years, methods based on deep learning (DL) have demonstrated exceptional prowess in interpreting medical images, rivaling and occasionally surpassing expert human performance <cit.>. Convolutional neural networks (CNN) have been particularly instrumental in facilitating such computer-aided diagnosis (CADx) systems <cit.>. Nonetheless, the success of these algorithms is closely tethered to the availability of accurately annotated data, with sufficient quantity and diversity, to train the models.
An essential task within this framework is segmentation – the delineation of specific anatomical structures or pathological lesions within an image. In the context of CXR, this might involve the demarcation of anatomical structures such as lungs or heart, or the location of disease abnormalities <cit.>. Accurate and robust segmentation can serve as a precursor to other downstream tasks, for example providing significant information about the location and size of specific organs or detected abnormalities. However, manual segmentation is a time-consuming process, demanding substantial expertise, and thus, does not scale well to the size of large databases required for DL model training <cit.>.
HybridGNet, a deep learning model for realistic organ contouring, offers a solution for the generation of anatomically plausible CXR segmentations <cit.>. Utilizing a hybrid approach, it combines conventional convolution operations for image encoding with graph generative models for the anatomically-guided delineation of organ contours. The HybridGNet model was initially introduced with a small CXR landmark dataset to demonstrate its efficacy. In this work, we leverage this model to accomplish our main objective: introducing a large-scale segmentation dataset, named CheXmask, which provides anatomical masks with their corresponding quality index, for 6 extensive chest X-ray databases: CANDID-PTX<cit.>, Chest x-ray8<cit.>, Chexpert<cit.>, MIMIC-CXR-JPG<cit.>, Padchest<cit.> and VinDr-CXR<cit.>. These databases collectively represent a wide variety of geographical locations, patient demographics, and disease spectra, enabling the development of a broad, diverse segmentation dataset.
As the original databases lack manually curated ground-truth segmentations, we perform quality control by implementing our own Reverse Classification Accuracy (RCA) framework <cit.>. RCA allows to estimate the accuracy of a segmentation method for an individual image with no ground-truth (GT) masks, which is particularly valuable for large-scale image analysis studies like ours. The fundamental concept behind RCA involves training an auxiliary model (known as the reverse classifier) solely on the individual image, using its predicted segmentation as pseudo-GT. This model is then evaluated on a reference database that contains GT data to obtain a performance metric, which is expected to correlate with the performance that would be measured for the individual image if its GT was available. We validated this method by comparing it to traditional performance evaluation on a subset of test images with masks manually segmented by an expert physician. Additionally, since large-public CXR databases built from automatic analysis of electronic health records (EHR) are subject to errors both in image selection and image annotation, we found that RCA is a useful tool to detect out-of-distribution samples (e.g. poor-quality images). Thus, the RCA metrics for HybridGNet segmentations stand out as a powerful quality metric to handle large databases for downstream tasks, by detecting not only low quality segmentation masks, but also images that should be filtered out.
Our comprehensive analysis underscores the capacity of the HybridGNet model to generate high-quality segmentations of lungs and heart structures in CXR, and presents the RCA method as a way to use these segmentations for detection of poor-quality images in large volumes of data. CheXmask dataset provides a key resource to the medical imaging community and represents a significant stride towards democratizing access to diverse, large-scale segmentation datasets, thereby propelling the advancement of automated CXR analysis research.
§ METHODS
§.§ Data Preparation
§.§.§ Image datasets
In this study, we utilized six extensive CXR datasets: CANDID-PTX, ChestX-ray8, CheXpert, MIMIC-CXR-JPG, Padchest, and VinDr-CXR. Figure <ref> provides an overview of the complete study, which was repeated six times, one for each dataset individually. The CANDID-PTX dataset <cit.>, which encompasses 19,237 anonymized X-ray images, was collected at Dunedin Hospital in New Zealand between 2010 and 2020. The ChestX-ray8 dataset <cit.> contains 112,120 frontal-view X-ray images from 30,805 unique patients, annotated with text-mined fourteen disease image labels. The CheXpert dataset <cit.> comprises 224,316 chest radiographs collected from Stanford Hospital between October 2002 and July 2017. The MIMIC-CXR-JPG dataset <cit.> includes 377,110 CXR images from 227,827 imaging studies involving 65,379 patients; these were collected at the Beth Israel Deaconess Medical Center Emergency Department in the United States from 2011 to 2016. The Padchest dataset <cit.> is composed of 160,861 images from over 67,000 patients, collected from Hospital San Juan in Spain from 2009 to 2017. Lastly, the VinDr-CXR dataset <cit.> consists of 18,000 manually annotated images gathered from two primary hospitals in Vietnam.
§.§.§ Ethical statement
The datasets used in this study uphold rigorous ethical standards as reported in the original publications associated to every database. The CANDID-PTX dataset, sourced from New Zealand, underwent a comprehensive anonymization process wherein all identifiable details were systematically expunged. Access to this dataset required a two-step ethics training process, involving an online course and signing a Data Use Agreement. While no explicit information about the ethical or anonymization processes was found in the original papers for the ChestX-ray8 and CheXpert datasets, their use in this study was in compliance with established ethical standards. For the MIMIC-CXR-JPG dataset from the United States, anonymization was accomplished according to the Health Insurance Portability and Accountability Act of 1996 (HIPAA) Safe Harbor provisions. Accessing this dataset required the completion of a dedicated human subjects research training course. The Padchest dataset, gathered from Spain, received approval from the Hospital San Juan's institutional research committee. This dataset was comprehensively anonymized and de-identified by the local health authorities. Lastly, the VinDr-CXR dataset from Vietnam ensured the privacy of subjects by eliminating or randomizing all personally identifiable information. A rigorous automated and manual verification process was implemented to ensure the thorough removal of all textual information containing identifiable data. Accessing this dataset also required the completion of a training course. To ensure the ethical use of all datasets, the authors of this study have fulfilled all necessary requirements, including completion of requisite ethics courses and signing relevant data use agreements.
§.§.§ Inclusion-exclusion criteria
Our study incorporated only frontal images, captured in posteroanterior (PA) or anteroposterior (AP) views (no lateral views were included). Specific selection criteria varied among the datasets due to their differing metadata, as summarized in Figure <ref>.
The CANDID-PTX, ChestX-ray8, and VinDr-CXR datasets contain only frontal images, so all images were included. We incorporated images from the Padchest dataset based on the “Projection” metadata field, including only those labeled as PA or AP, reducing the initial count of 160,861 images to 96,287. For the CheXpert dataset, images labeled as “Frontal” in the “Frontal/Lateral” metadata column were included, resulting in 187,825 out of the total 224,316 images. Lastly, images from the MIMIC-CXR-JPG dataset were included if the “ViewPosition” tag was listed as “PA” or “AP”, which resulted in 243,334 out of the initial 377,110 images.
§.§.§ Image preprocessing
For compatibility with the HybridGNet input format, we preprocessed images to attain a uniform size suitable for generating pseudo-landmark contours. The standard size of the training data was 1024x1024 pixels, obtained by padding the images to square dimensions and then resizing them to the required size, when needed.
Images from ChestX-ray8 dataset already satisfied the required dimension and format, thus requiring no pre-processing. The CANDID-PTX dataset met the required image dimension in DICOM format, so we converted images to PNG. The datasets of CheXpert, MIMIC-CXR-JPG, Padchest, and VinDr-CXR underwent a standard pre-processing pipeline, which included padding the images to a square shape and then resizing them to 1024x1024 pixels. Additionally, VinDr-CXR images needed extraction from DICOM files before the pre-processing. All these pre-processing procedures are reversible, which allows for the preservation of the original image shapes for releasing the generated annotations.
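As an illustration, the padding-and-resizing step can be written as follows; this is a sketch assuming OpenCV rather than the exact released preprocessing script, and it also returns the padding offsets and original shape so that the transformation can be inverted when mapping masks back.

import cv2
import numpy as np

def preprocess(img: np.ndarray, size: int = 1024):
    """Pad a grayscale CXR to a square and resize it to size x size."""
    h, w = img.shape
    side = max(h, w)
    pad_top = (side - h) // 2
    pad_left = (side - w) // 2
    padded = np.zeros((side, side), dtype=img.dtype)
    padded[pad_top:pad_top + h, pad_left:pad_left + w] = img
    resized = cv2.resize(padded, (size, size), interpolation=cv2.INTER_LINEAR)
    # (pad_top, pad_left, h, w) is enough to undo the transform for the masks.
    return resized, (pad_top, pad_left, h, w)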
§.§ Data processing
§.§.§ Anatomically plausible segmentation via HybridGNet
The DL model HybridGNet segments organ contours by detecting the coordinates of anatomical landmarks. Thus, it is trained to minimize the distance between the predicted landmark positions and their GT location. The model incorporates an encoder-decoder architecture that combines standard convolutions for image encoding and graph generative models to achieve anatomically plausible representations. Pixel-level masks are obtained by filling in the contours defined by the landmarks predicted by HybridGNet. A detailed description of the HybridGNet model can be found in the work of Gaggion et al. <cit.>. A modified training procedure was presented in a later work <cit.>, to avoid domain memorization issues which emerge when dealing with heterogeneous labels in multi-centric scenarios (i.e. images from some medical centers contain only lung annotations, while others include both lung and heart masks).
In this work, we used the modified multi-centric training procedure <cit.> to retrain the HybridGNet model with the complete dataset used in prior studies (i.e., both the train and test splits of previous works). This dataset, which will be referred to as Chest-Xray-Landmark dataset, was released previously and is available on GitHub[<https://github.com/ngaggion/Chest-xray-landmark-dataset>]. The Chest-Xray-Landmark dataset incorporates 911 annotated images from the JSRT <cit.>, Montgomery <cit.>, Shenzhen <cit.> datasets, and a subset of the Padchest dataset <cit.>. All images contain lung annotations and 383 images also include heart segmentations. The training process incorporated various data augmentation techniques, such as random rotations, scaling, and color shifting, to enhance robustness and generalizability.
§.§.§ Reverse classification accuracy to measure segmentation quality
We employed RCA to estimate the Dice Similarity Coefficient (DSC) <cit.> in the absence of ground truth. DSC is a widely used metric for assessing the similarity between two sets of binary labels. By using RCA, it is possible to produce accurate estimations of DSC, thereby making it easier to perform quality control in the absence of GT. The DSC quantifies the overlap between the predicted segmentation and the GT segmentation, ranging from 0 (no overlap) to 1 (perfect overlap). It is defined as DSC = 2TP/(2TP+FP+FN), where TP, FP, and FN denote the number of true positive, false positive, and false negative pixels, respectively.
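For reference, the DSC between two binary masks can be computed directly as below (an illustrative sketch, not the released evaluation code).

import numpy as np

def dice_coefficient(pred: np.ndarray, gt: np.ndarray) -> float:
    """Dice similarity coefficient 2|A∩B| / (|A|+|B|) between two binary masks."""
    pred = pred.astype(bool)
    gt = gt.astype(bool)
    intersection = np.logical_and(pred, gt).sum()
    denom = pred.sum() + gt.sum()
    return 2.0 * intersection / denom if denom > 0 else 1.0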
To obtain the estimated DSC for a segmentation mask, the RCA framework consists of training a new segmentation method using solely this image with the predicted mask as GT. This newly trained model (i.e., the reverse classifier) is then applied to segment a reference image set, with known GT masks. The hypothesis is that the segmentation accuracy of the reverse model on the reference set correlates with the segmentation quality of the original image: if the original segmentation of the evaluated image was good, the segmentation accuracy of the reverse model on this reference set should also be good.
In the original work, the authors experiment with three different methods for implementing the RCA classifier: Atlas Forests, Deep Learning, and Atlas-based Label Propagation. In this case, we used the last one, which employs a non-rigid registration technique for single-atlas label propagation, where the single atlas corresponds to the evaluated image together with its predicted segmentation. A summary value of the distribution of DSC across the reference set images is then used as performance metric. We refer to these values as the mean and max RCA-estimated DSC. The RCA-estimated DSC is expected to correlate well with the real DSC that one would obtain if GT data was available. Figure <ref> showcases the distribution of RCA-estimated DSC for the complete CheXmask dataset with examples sampled from each bin.
The use of DL in atlas registration has shown improved performance over classic multi-atlas approaches, while also offering a significant speed-up in computation. In order to speed up the evaluation process, we implemented a DL-based atlas registration procedure <cit.> as the label propagation method (which we refer to as Deep RCA), which is several orders of magnitude faster than traditional registration methods. Specifically, we divide the registration procedure into two stages: first performing rigid registration to globally align the corresponding images, and then proceeding with deformable registration to correct local misalignments.
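Conceptually, the RCA evaluation of one image then reduces to the following loop; register_and_warp is a placeholder for the rigid + deformable registration model used for label propagation, and dice_coefficient is the function sketched above.

import numpy as np

def rca_estimate(image, predicted_mask, reference_images, reference_masks,
                 register_and_warp):
    """Single-atlas RCA: propagate the predicted mask of `image` onto each
    reference image and score it against the known reference masks."""
    scores = []
    for ref_img, ref_mask in zip(reference_images, reference_masks):
        # register_and_warp(moving_img, fixed_img, moving_mask) is assumed to
        # return the predicted mask warped into the reference image space.
        warped_mask = register_and_warp(image, ref_img, predicted_mask)
        scores.append(dice_coefficient(warped_mask, ref_mask))
    return float(np.mean(scores)), float(np.max(scores))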
§ DATA RECORDS
The CheXmask dataset is structured as one file of comma-separated values (CSV) for each CXR dataset. The CSV content is explained in Table <ref>. We do not include images or metadata from the original datasets. We include an image ID in the first column of the CSV files, which has the same column name as the ID column of the respective dataset, and allows to match the rows in this CSV with the original dataset's instances. We also include the pre-processed version of the masks, allowing to use all datasets in the same image resolutions (please also note that CANDID-PTX and ChestX-ray8 masks are already in the desired resolution). This database <cit.> is available at PhysioNet <cit.>, in the following repository: <https://physionet.org/content/chexmask-cxr-segmentation-data/>.
§ TECHNICAL VALIDATION
The technical validation of the CheXmask dataset was conducted through a three-tiered approach: (1) evaluating the quality of HybridGNet segmentations by measuring the DSC on a gold-standard subset revised by an expert physician; (2) validating the relevance of the RCA-estimated DSC as performance metric; (3) evaluating the quality of the masks for the six source datasets separately, by exploring their suitability as a training GT set for the segmentation task.
§.§ 1) Validation via physician annotations as gold-standard
We built a gold-standard set of masks (for images sampled from the six original datasets) labeled by an experienced physician to evaluate the segmentation quality of CheXmask lungs and heart masks. The physician, with more than five years of experience, performed a manual revision of the right lung, left lung and heart landmark-based segmentations. We used an open-source labeling platform called LabelStudio to do so. The landmarks predicted by HybridGNet for a subset of images were uploaded as predictions to LabelStudio, where the annotator corrected these predicted landmarks by moving them to their correct specific contour positions. This guarantees that the GT masks are based on the same collection of nodes as the evaluated masks, and reduces the time needed for annotation. Figure <ref> shows the annotation interface. We include the source code to launch this interface in our repository, as an easy tool to perform further validations. We measured the DSC of HybridGNet's masks against the manually corrected landmark annotations, and report statistics on the DSC distribution for the six public datasets.
The sample size of the subset used for gold-standard evaluation was chosen based on the work by El Jurdi and Colliot <cit.>, which suggests sampling 100 images for a 4%-wide confidence interval in segmentation tasks. We sampled 30 images per dataset, obtaining a final gold-standard set of 180 images. We combined random sampling and histogram-based sampling to ensure the presence of high and low quality masks. For each dataset, ten images were sampled based on a 10-bin histogram of the RCA-estimated DSC of all the dataset's images, taking one random image per bin. Another twenty images were then randomly selected from the whole dataset.
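The histogram-based part of this sampling can be sketched as follows; rca_dsc stands for the array of mean RCA-estimated DSC values of one dataset, and the function is illustrative rather than the exact selection script.

import numpy as np

def sample_gold_standard(rca_dsc, n_random=20, n_bins=10, seed=0):
    """One random image per RCA-DSC histogram bin plus n_random extra random images."""
    rng = np.random.default_rng(seed)
    edges = np.linspace(rca_dsc.min(), rca_dsc.max(), n_bins + 1)
    chosen = []
    for lo, hi in zip(edges[:-1], edges[1:]):
        in_bin = np.where((rca_dsc >= lo) & (rca_dsc <= hi))[0]
        if len(in_bin) > 0:
            chosen.append(int(rng.choice(in_bin)))
    remaining = np.setdiff1d(np.arange(len(rca_dsc)), chosen)
    chosen.extend(rng.choice(remaining, size=n_random, replace=False).tolist())
    return np.array(sorted(set(chosen)))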
Table <ref> summarizes the results of physician validation for lung and heart segmentation across the six datasets on their 30-samples subsets. We report DSC, Hausdorff Distance (HD), and Hausdorff 95% Distance (HD95) as evaluation metrics. HD and HD95 are distance error metrics, providing insights into the difference between the contours of the GT masks and the predicted masks.
In terms of lung segmentation, the datasets generally exhibit high DSC values, ranging from 0.947 to 0.981. This indicates a strong overlap between the GT masks and the predicted lung masks. Among the datasets, CANDID-PTX exhibits the highest agreement, while CheXpert has the lowest mean DSC. CheXpert shows relatively higher HD and HD95 values compared to other datasets, suggesting a larger discrepancy between the contours.
For heart segmentation, the DSC values range from 0.950 to 0.980, indicating a strong overlap between the GT and predicted heart masks across all datasets. The HD and HD95 values for heart segmentation are generally lower compared to lung segmentation, indicating a smaller distance between the contours.
Overall, the results indicate that the segmentation models perform well in capturing the lung and heart structures. CANDID-PTX consistently demonstrates the highest agreement for both lungs and heart, while CheXpert exhibits relatively lower agreement. These findings highlight the variations in segmentation performance across different datasets and provide valuable insights for further improving the accuracy of the segmentation algorithms.
§.§.§ Per-landmark error analysis
In addition to the segmentation metrics, we also computed the mean squared error (MSE) per landmark across the entire set. This measure provided deeper insights into the positional accuracy of the landmarks predicted by HybridGNet. Interestingly, the results suggested that certain landmarks were more prone to prediction errors than others. Specifically, we observed higher MSE values for landmarks corresponding to the lower part of the lungs and the heart, indicating that these regions are more difficult for the network.
Figure <ref> illustrates the MSE per landmark across the whole dataset. Each landmark is color-coded based on its respective MSE value, using a logarithmic scale for better visualization. The size of each landmark also corresponds to the magnitude of the MSE, with larger landmarks indicating higher errors. As can be seen in this figure, the bottom part of the lungs and the heart landmarks tend to be larger, denoting higher prediction errors. This aligns with the observations made by the expert physician responsible for the manual annotations, who noted that the boundaries of the heart and the lower section of the lungs tend to exhibit more blurriness compared to the upper part.
§.§ 2) Validation via RCA-estimated DSC
As previously mentioned, computing the RCA-estimated DSC requires training a rigid and deformable registration model based on deep learning. To this end, we employed 80% of the ChestXray-Landmarks database for training. This model was used to propagate the predicted labels from the image atlas to the reference images (composed of the 5 most similar images to the atlas from the training set), and finally compute the RCA-estimated DSC (mean and max across the 5 values).
To validate that the RCA-estimated DSC accurately approximates the real DSC, we used simulated segmentation masks of varying quality for which the real coefficient can be computed. To this end, we generated a set of candidate segmentations of varying quality (i.e. with real DSC ranging from low to high values) by training 12 UNet models on the same dataset but halting at different epochs (models trained for a few epochs produced lower-quality segmentations, while those trained until convergence produced better ones). Then, we assessed the correlation of the RCA-estimated DSC on the remaining 20% test split of the ChestXray-Landmarks database. The Pearson correlation scores for the Max and Mean RCA-estimated DSC across the reference set were 0.93 and 0.94 respectively, demonstrating a strong correlation (Figure <ref>). We found that the RCA-estimated value tends to slightly underestimate the true DSC, suggesting that the performance measured with RCA is conservative (i.e., more pessimistic than the actual true performance). We selected the mean RCA-estimated DSC as the RCA estimation metric for the following experiments since it achieved better correlation.
§.§.§ Evaluation of segmentation quality through RCA
As part of the technical validation of the quality of the lung and heart masks, we report a statistical description of the distribution of the mean RCA-estimated DSC for all images in each dataset (Table <ref>). The full histograms are presented in Figure <ref>. For all datasets, the mean RCA-estimated DSC is higher than 0.88. As explained above, the true DSC distribution likely has a larger mean, since the RCA-estimated DSC was found to underestimate the true performance.
§.§.§ Demographic bias analysis through RCA
To explore potential biases <cit.> in mask quality, we conducted a detailed analysis incorporating the metadata associated with the MIMIC-CXR-JPG, Padchest, ChestX-ray8, and CheXpert datasets. We took into account parameters such as disease findings, the sex and age of the patients, and the X-ray capture view (PA or AP).
Figure <ref> presents histograms that clearly demonstrate a superior RCA-estimated DSC for PA images compared to AP images across all investigated datasets. This is consistent with the fact that AP images tend to come from hospitalized patients, who are more difficult to position in standard views, and usually include artifacts or cables that may act as confounders or occlude body parts, making anatomical segmentation more challenging. In contrast, discrepancies based on sex were less pronounced, with a slightly diminished segmentation quality observed in females. This is also consistent with the fact that, in X-ray imaging of the upper thorax, women's breasts tend to occlude the imaged organs, resulting in poorer image contrast for the relevant anatomy <cit.> and making the segmentation task more challenging. Furthermore, the influence of disease findings on segmentation quality varied amongst datasets, with significant disparities evident in the ChestX-ray8 and CheXpert datasets.
Upon evaluating the relationship between patient age and RCA-estimated DSC in the CheXpert dataset, no substantial patterns emerged. It is important to note, however, that all subjects are above the age of 18. On the contrary, when analyzing results for the ChestX-ray8 dataset which includes patients under 18, we observed lower RCA-estimated DSC for these cases. This is consistent with results reported in a previous analysis of age patterns for the HybridGNet model <cit.>, where subjects below the age of 18 demonstrated subpar performance due to their absence in the training set. We hypothesize a similar trend may be affecting the sparse set of ChestX-ray8 X-rays taken for individuals under the age of 18.
These findings underscore the importance of disaggregated analysis with respect to image acquisition protocol, disease presence and demographic attributes when assessing segmentation quality.
§.§.§ Out-of-distribution detection through RCA
In this section we focus on analyzing those segmentation masks which received lower RCA-estimated DSC values. Through empirical observation, we found that RCA-estimated DSC values lower than 0.7 tend to be indicative of out-of-distribution cases. To exemplify this, we randomly selected a subset of six images from each dataset with some of the lowest RCA values, which are showcased in Figure <ref>. A manual, qualitative analysis of the lowest RCA-estimated DSC masks for each dataset revealed a wide variety of challenging scenarios, ranging from images with high noise levels or mislabeled lateral CXRs to an X-ray image of a mobile phone from the Padchest dataset. That is why we recommend that users of the CheXmask dataset only use segmentations whose RCA-estimated DSC is higher than 0.7.
Table <ref> can be used to estimate the prevalence of such out-of-distribution images in each dataset. For instance, in the CANDID-PTX and VinDr-CXR datasets, the number of out-of-distribution images is relatively low. Notably, the VinDr-CXR dataset does not include any non-posteroanterior (PA) images, as all were manually annotated by physicians. However, the inclusion of zoomed-out images capturing both arms, which were absent from the HybridGNet training set, leads to lower-quality segmentations in those cases. In contrast, the larger datasets contain a higher number of cases mislabeled as PA or AP in their metadata. These mislabeled images can be easily identified and discarded by thresholding the RCA-estimated DSC, for example using the 1st or 5th percentile of a dataset as the threshold. This approach can help to improve the overall quality and reliability of downstream applications developed using these datasets.
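As a minimal sketch of this filtering step (the CSV file name and the name of the RCA column are illustrative and should be checked against Table <ref>), one could write:

import pandas as pd

df = pd.read_csv("ChestX-ray8.csv")   # illustrative file name
rca_col = "Dice RCA (Mean)"           # illustrative column name

# Keep only masks whose mean RCA-estimated DSC reaches the recommended 0.7,
# or, alternatively, discard everything below a dataset-specific percentile.
reliable = df[df[rca_col] >= 0.7]
strict = df[df[rca_col] >= df[rca_col].quantile(0.05)]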
§.§ 3) Validation via retraining segmentation models with CheXmask
We performed a final quality control at the dataset level to evaluate the quality of the HybridGNet segmentations per dataset (rather than individually, as we did with the RCA-estimated DSC). The hypothesis is that if the segmentations are good, they can be used as a training set for a segmentation model that will obtain high performance. Thus, we trained six new HybridGNet models, using the set of predicted masks of each of the six public datasets as training GT. We then evaluated model performance on the complete ChestXray-Landmarks database. All models were trained using the same configurations and hyperparameters as the original HybridGNet model <cit.>. As a comparison reference, we include the performance metrics reported in prior studies for the 20% test split of the ChestXray-Landmarks database, which can be seen as an upper bound on performance, as that model was trained with the training split of the ChestXray-Landmarks database.
Table <ref> shows the results for lung and heart segmentation. A slight decrease in DSC is observed, from 0.967 to 0.955, when the automatically segmented datasets are used as the training set, in comparison to the reference values. This indicates that the overall quality of the segmentations is good enough to train models that perform well when compared with existing expert annotations.
§ USAGE NOTES
The CheXmask dataset is intended to serve as a resource for researchers working in the field of medical imaging, particularly those interested in computational anatomy, shape understanding, CXR analysis and image segmentation. It may be useful for a wide range of downstream applications, including deep learning model development and evaluation, training of generative models, clinical decision support systems, disease detection and diagnosis, and anomaly detection.
For the sake of completeness, we release segmentation masks for all images included in our study. However, based on the qualitative and quantitative analyses presented in this paper (see section Technical Validation), our recommendation for downstream tasks is to only use segmentations whose RCA-estimated DSC is higher than 0.7, to avoid including out-of-distribution images as well as low quality masks.
A critical point to note is that this dataset does not include the original CXR images due to proprietary and privacy considerations. Hence, researchers intending to utilize this dataset are required to source the original images directly from the respective datasets cited in the manuscript. It is imperative that all conditions of usage and restrictions established by the providers of these datasets are fully respected and complied with.
The segmentation masks within this dataset are encoded using the Run-Length Encoding (RLE) technique, a standardized method for representing binary masks in a compact manner. In order to decode these masks back to their original binary format, researchers can use Python along with widely-used scientific computing libraries such as NumPy and Pandas. We provide the necessary scripts for encoding and decoding these masks to facilitate their use.
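As a minimal sketch of such a decoder (assuming the common "start length start length ..." convention with 1-based, column-major pixel ordering; the released scripts document the exact convention used in CheXmask), one could write:

import numpy as np

def rle_decode(rle, height, width):
    # Decode a run-length-encoded mask string into a binary array.
    # Assumes 1-based starts and Fortran (column-major) ordering; check the
    # released scripts for the exact CheXmask convention.
    mask = np.zeros(height * width, dtype=np.uint8)
    tokens = np.asarray(rle.split(), dtype=int)
    starts, lengths = tokens[0::2] - 1, tokens[1::2]
    for start, length in zip(starts, lengths):
        mask[start:start + length] = 1
    return mask.reshape((height, width), order="F")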
For researchers aiming to employ machine learning or deep learning techniques for analyzing this dataset, we recommend the use of well-established libraries. In particular, the HybridGNet model, used in this work, was developed in PyTorch. The source code for this model is made available alongside the dataset, allowing researchers to reproduce our results, conduct comparative studies, or further refine the methodology.
The segmentation masks are available in two formats. The first retains the original resolution as provided by the respective imaging source. The second consists of preprocessed masks with a uniform resolution across the different imaging sources. We also provide the corresponding image preprocessing scripts, which researchers can apply to the original images, enabling the use of all datasets at the same resolution. These scripts have been designed to replicate the preprocessing steps applied to the original datasets, thereby facilitating the integration of all these major CXR datasets.
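A minimal sketch of such a resolution-harmonization step, assuming OpenCV is available and an illustrative target size of 1024x1024 pixels (the released scripts define the actual per-dataset steps), is:

import cv2

def to_common_resolution(image, mask, size=(1024, 1024)):
    # cv2.resize expects the target size as (width, height); bilinear
    # interpolation for the image, nearest-neighbour for the binary mask
    # so that label values are preserved.
    image_r = cv2.resize(image, size, interpolation=cv2.INTER_LINEAR)
    mask_r = cv2.resize(mask, size, interpolation=cv2.INTER_NEAREST)
    return image_r, mask_r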
§ LIMITATIONS OF STUDY
There are several limitations to consider in this study. Firstly, the validation of the subset of images was performed by a single physician, which may introduce potential biases or subjective interpretations. While efforts were made to ensure accuracy and consistency, further validation by multiple radiologists or experts could enhance the reliability and generalizability of the findings.
The selection process focused on including only CXR images in the posteroanterior (PA) or anteroposterior (AP) view, aiming to ensure consistency and homogeneity. However, due to potential inaccuracies or inconsistencies in the metadata (particularly in large-scale databases built from automatic analysis of electronic health records), there is a possibility that other types of images may have been included in the PA and AP views.
Despite the utilization of the Reverse Classification Accuracy (RCA) framework, which provides a reliable metric to detect low-quality segmentations, it is important to acknowledge that some images that do not meet the specific PA or AP criteria may have inadvertently leaked into the dataset. While RCA helps identify such instances, there remains a potential for the inclusion of non-PA or non-AP images that could impact the segmentation quality assessment.
Therefore, when working with these large datasets, it is crucial to exercise caution and acknowledge the possibility of mislabeled or misclassified images. Further investigations and refinements in the dataset selection process, including more robust metadata validation techniques, would be beneficial to enhance the reliability and accuracy of the segmentation results.
Finally, it is important to highlight that, since the segmentation masks released in CheXmask are generated with HybridGNet, they tend to follow patterns of anatomical plausibility, even in the presence of complex conditions like tuberculosis, which generate different types of organ occlusion. This is of particular importance for lung segmentation, where there are usually two ways in which masks can be annotated, following either the 'air' or the 'anatomy' strategy <cit.>. While the 'air' strategy focuses on segmenting the air cavity of the lung, excluding areas of increased opacity due to infection, the 'anatomy' strategy provides a comprehensive view which follows the expected anatomy, including these opaque areas. Segmentation masks generated by HybridGNet (i.e. those in CheXmask) are more similar to those generated via the 'anatomy' strategy. For a complete discussion, see Section V.5) and Figure 6 in Gaggion et al. <cit.>.
§ CODE AVAILABILITY
The code associated with this study is available in our Github repository[<https://github.com/ngaggion/CheXmask-Database>]. The repository encompasses Python 3 code for various components, including data preparation, data post-processing, technical validation, and the deep learning models. The data preparation section includes scripts for preprocessing the images. The data post-processing section provides scripts for converting the segmentation masks from run-length encoding to a binary mask format, examples of how to read the model and the necessary code to revert the pre-processing steps for each dataset. The technical validation section includes the code for the individual RCA analysis and the processing of the physician results. Additionally, the repository includes the code for the deep learning models used for image segmentation, including the HybridGNet model architecture, weights, training and inference scripts. The software prerequisites for running the code are outlined in the repository's README file.
§ ACKNOWLEDGEMENTS
This work was supported by Argentina’s National Scientific and Technical Research Council (CONICET), which covered the salaries of E.F., D.M. and N.G. The authors gratefully acknowledge NVIDIA Corporation with the donation of the GPUs used for this research, and the support of UNL (CAID-PIC50420150100098LI, CAID-PIC-50220140100084LI) ANPCyT (PICT-PRH-2019-00009).
§ AUTHOR CONTRIBUTIONS STATEMENT
E.F., N.G. and C.M. conceptualized the idea of this paper. N.G. performed the experiments. C.M. designed the LabelStudio interface and analyzed the results. L.M. implemented the deformable registration module for the RCA estimation. M.A. manually segmented the gold-standard set. D.H.M. and E.F. analyzed the results. All authors reviewed the manuscript.
§ COMPETING INTERESTS
The authors declare no conflict of interest.
§ APPENDIX: STATISTICAL ANALYSIS FOR MAX RCA-ESTIMATED DSC SCORES
In the main manuscript we included the statistical analysis for the Mean RCA-estimated DSC scores, since this indicator showed better correlation with the real DSC. However, for the sake of completeness, here we include the same analysis for the alternative Max RCA-estimated DSC score.
|
http://arxiv.org/abs/2307.01176v1
|
20230703173919
|
Nonlinear Subharmonic Dynamics of Spectrally Stable Lugiato-Lefever Periodic Waves
|
[
"Mariana Haragus",
"Mathew A. Johnson",
"Wesley R. Perkins",
"Björn de Rijk"
] |
math.AP
|
[
"math.AP",
"math-ph",
"math.MP"
] |
Nonlinear Subharmonic Dynamics of Spectrally Stable Lugiato-Lefever Periodic Waves
Mariana Haragus, Mathew A. Johnson, Wesley R. Perkins, Björn de Rijk
August 1, 2023
===================================================================================
We study the nonlinear dynamics of perturbed, spectrally stable T-periodic stationary solutions of the Lugiato-Lefever equation (LLE), a damped nonlinear Schrödinger equation with forcing that arises in nonlinear optics. It is known that for each N∈, such a T-periodic wave train is asymptotically stable against NT-periodic, i.e. subharmonic, perturbations, in the sense that initially nearby data will converge at an exponential rate to a (small) spatial translation of the underlying wave. Unfortunately, in such results both the allowable size of initial perturbations as well as the exponential rates of decay depend on N and, in fact, tend to zero as N→∞, leading to a lack of uniformity in the period of the perturbation. In recent work, the authors performed a delicate decomposition of the associated linearized solution operator and obtained linear estimates which are uniform in N. The dynamical description suggested by this uniform linear theory indicates that the corresponding nonlinear iteration can only be closed if one allows for a spatio-temporal phase modulation of the underlying wave. However, such a modulated perturbation is readily seen to satisfy a quasilinear equation, yielding an inherent loss of regularity. We regain regularity by transferring a nonlinear damping estimate, which has recently been obtained for the LLE in the case of localized perturbations to the case of subharmonic perturbations. Thus, we obtain a nonlinear, subharmonic stability result for periodic stationary solutions of the LLE that is uniform in N. This in turn yields an improved nonuniform subharmonic stability result providing an N-independent ball of initial perturbations which eventually exhibit exponential decay at an N-dependent rate. Finally, we argue that our results connect in the limit N →∞ to previously established stability results against localized perturbations, thereby unifying existing theories.
Keywords: Nonlinear Stability; Periodic Waves; Subharmonic Perturbations; Lugiato-Lefever Equation.
Subject Class: 35B35; 35B10; 35Q60.
§ INTRODUCTION
In this paper, we consider the asymptotic behavior and nonlinear stability against subharmonic perturbations of periodic stationary solutions of the Lugiato-Lefever equation (LLE)
ψ_t = -iβψ_xx - (1+iα)ψ + i|ψ|^2ψ + F,
where ψ(x,t) is a complex-valued function depending on a temporal variable t ≥ 0 and a spatial variable x ∈, the parameters α,β are real, and F is a positive constant. The LLE was derived in 1987 from Maxwell's equations in <cit.> as a model to study pattern formation within the optical field in a dissipative and nonlinear cavity filled with a Kerr medium and subjected to a continuous laser pump. In this context, ψ(x,t) represents the field envelope, F>0 represents the normalized pump strength, |β|=1 is a dispersion parameter, and α>0 represents a detuning parameter. Note that the case β=1, corresponding to a defocusing nonlinearity, is referred to as the “normal" dispersion case, while the case β=-1, corresponding to a focusing nonlinearity, is referred to as the “anomalous" dispersion case. The LLE has recently become the subject of intense study in the physics literature, in part due to the fact that it has become a canonical model for high-frequency combs generated by microresonators in periodic optical waveguides; see, for example, <cit.> and references therein.
Several recent works have studied the existence of spatially periodic stationary solutions of (<ref>), as well as the nonlinear dynamics about them.
Such solutions ψ(x,t)=ϕ(x) correspond to T-periodic solutions of the profile equation
-iβϕ” - (1+iα)ϕ + i|ϕ|^2ϕ + F=0.
Smooth periodic solutions of (<ref>) have been constructed using perturbative arguments, as well as local and global bifurcation theory <cit.>.
It turns out that most of the constructed periodic waves are unstable under general bounded perturbations <cit.>. A class of periodic waves which are spectrally stable under general bounded perturbations has been identified in <cit.>. Nonlinear stability results have been obtained for co-periodic perturbations, i.e. T-periodic perturbations of the T-periodic wave ϕ, in <cit.> and for localized perturbations in the recent works <cit.>. The results from <cit.> can be extended to subharmonic perturbations, i.e. NT-periodic perturbations, provided spectral stability holds and the integer N is fixed. It turns out that both the allowable size of initial perturbations as well as the exponential rates of decay, which depend on N, tend to zero as N→∞, leading to a lack of uniformity in the period of the perturbation. A linear stability result which holds uniformly in N has been obtained in <cit.>. The goal of the present work is to upgrade this result to the nonlinear level.
§.§ Spectral Stability Assumptions
The local dynamics about a given T-periodic stationary solution ϕ of (<ref>) can be captured by considering the perturbed solution
ψ(x,t)=ϕ(x)+ṽ(x,t)
of (<ref>), where ṽ represents some admissible perturbation.
Decomposing the solution ϕ=ϕ_r+iϕ_i and the perturbation ṽ=ṽ_r+iṽ_i into their real and imaginary parts[Going forward, we will slightly abuse notation and write our complex functions f in the form f = ([ f_r; f_i ]).], we see that (<ref>) is a solution of (<ref>) provided that the real-valued functions ṽ_r and ṽ_i satisfy the system
∂_t([ ṽ_r; ṽ_i ])=𝒜[ϕ]([ ṽ_r; ṽ_i ])+𝒩[ϕ](ṽ),
where here 𝒜[ϕ] is the (real) matrix differential operator
𝒜[ϕ]=- I+𝒥ℒ[ϕ],
with
𝒥=([ 0 -1; 1 0 ]), ℒ[ϕ] = ([ -β∂_x^2 - α + 3ϕ_r^2 + ϕ_i^2 2ϕ_rϕ_i; 2ϕ_rϕ_i -β∂_x^2 - α + ϕ_r^2 + 3ϕ_i^2 ]),
and where the nonlinearity is given by
𝒩[ϕ](ṽ) = 𝒥([ 3ṽ_r^2 + ṽ_i^2 2ṽ_rṽ_i; 2ṽ_rṽ_i ṽ_r^2 + 3ṽ_i^2 ])([ ϕ_r; ϕ_i ]) +
𝒥|([ ṽ_r; ṽ_i ])|^2([ ṽ_r; ṽ_i ]).
The following notion of spectral stability served as the main hypothesis for the uniform linear stability result for subharmonic perturbations in <cit.> as well as for the nonlinear stability results for localized perturbations in <cit.>.
Let T>0. A smooth T-periodic stationary solution ϕ of (<ref>) is said to be diffusively spectrally stable provided the following conditions hold:
* the spectrum of the linear operator 𝒜[ϕ], given by (<ref>) and acting in L^2(ℝ), satisfies
σ_L^2(ℝ)(𝒜[ϕ])⊂{λ∈ℂ: Re(λ)<0}∪{0};
* there exists θ>0 such that for any ξ∈[-π/T,π/T) the real part of the spectrum of the Bloch operator
𝒜_ξ[ϕ]:=ℳ_ξ^-1𝒜[ϕ]ℳ_ξ, acting on L^2_ per(0,T), satisfies
Re(σ_L^2_ per(0,T)(𝒜_ξ[ϕ]))≤-θξ^2,
where here ℳ_ξ denotes the multiplication operator (ℳ_ξ f)(x)=e^iξ xf(x);
* λ=0 is a simple eigenvalue of the Bloch operator 𝒜_0[ϕ], and the derivative ϕ'∈ L^2_ per(0,T) of the periodic wave is an associated eigenfunction.
Since the pioneering work of Schneider <cit.>, the above notion of spectral stability (or extensions of it that account for more symmetries) has been standard in the linear and nonlinear stability analysis of periodic traveling or steady waves in dissipative systems. Indeed, it has been shown <cit.> to imply important properties regarding the nonlinear dynamics against localized, or general bounded, perturbations, including the long-time dynamics under (large) phase modulations. We emphasize that the spectrally stable periodic steady waves of the LLE constructed in <cit.> are diffusively spectrally stable in the sense of Definition <ref>.
Floquet-Bloch theory shows that the spectrum of 𝒜[ϕ] acting in L^2(ℝ) is equal to the union of the spectra of the Bloch operators 𝒜_ξ[ϕ] acting in L_ per^2(0,T) for ξ∈[-π/T,π/T). For subharmonic perturbations, the operator 𝒜[ϕ] acts in L_ per^2(0,NT), for some N∈ℕ, and its spectrum is the union of the spectra of the Bloch operators 𝒜_ξ[ϕ] acting in L_ per^2(0,T) for ξ in the discrete set {y ∈ [-π/T,π/T) : e^iyNT=1}. As a consequence, diffusively spectrally stable periodic waves are necessarily spectrally stable to all subharmonic perturbations and, further, the spectrum possesses an N-dependent gap δ_N > 0, i.e., we have
Re(σ_L^2_ per(0,NT)(𝒜[ϕ])∖{0})≤ -δ_N
for each N∈ℕ. We recall that the presence of the eigenvalue 0 is a well-known consequence of the invariance of the LLE under spatial translations. Differentiation of the profile equation (<ref>) with respect to x shows that 𝒜[ϕ](ϕ') = 0, and hence λ=0 is an eigenvalue with eigenfunction ϕ' of the operator 𝒜[ϕ] acting in L_ per^2(0,NT) for all N ∈ℕ.
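Indeed, this can be seen by differentiating the profile equation (<ref>) with respect to x, which gives
-iβ(ϕ')” - (1+iα)ϕ' + i(2|ϕ|^2ϕ' + ϕ^2ϕ̄') = 0,
and, once rewritten in terms of the real and imaginary parts of ϕ', this is precisely the identity 𝒜[ϕ]ϕ' = (-I+𝒥ℒ[ϕ])ϕ' = 0.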
§.§ Prior Subharmonic Stability Results
The presence of the spectral gap (<ref>) and the simplicity of the eigenvalue λ=0 allow us to obtain nonlinear stability against subharmonic perturbations for each arbitrary, but fixed, integer N ∈. This result follows as a straightforward extension of the nonlinear stability result for co-periodic perturbations in <cit.>.
Let ϕ be a smooth T-periodic stationary solution of (<ref>) that is diffusively spectrally stable in the sense of Definition <ref>. For each N∈ℕ, take δ_N>0 such that (<ref>) holds.
Then, for every δ∈ (0,δ_N) there exist ε_δ,C_δ>0 such that for each v_0∈ H^1_ per(0,NT) with v_0_H^1_ per(0,NT)<ε_δ there exist a constant γ∈ℝ and a global mild solution ψ∈ C([0,∞),H_ per^1(0,NT)) of (<ref>) with initial condition ψ(0)=ϕ+v_0 satisfying
|γ| ≤ C_δv_0_H^1_ per(0,NT), ψ(·,t)-ϕ(· + γ)_H_ per^1(0,NT)≤ C_δe^-δ tv_0_H_ per^1(0,NT),
for all t ≥ 0.
While Theorem <ref> establishes nonlinear stability against NT-periodic perturbations for each fixed N ∈ℕ, it lacks uniformity in N in two (related) aspects. Indeed, both the allowable size of initial perturbations ε_δ and the exponential rate of decay δ are controlled by the size of the spectral gap δ_N. Since δ_N→ 0 as N→∞, it follows that both ε_δ and δ chosen in Theorem <ref> necessarily tend to zero as N→∞, while the constant C_δ tends to infinity.
Addressing this issue requires us to develop a strategy to uniformly handle the accumulation of NT-periodic eigenvalues near λ=0 as N →∞. In the proof of Theorem <ref>, the eigenvalue λ=0 was enclosed in a small ball B(0,r_N), where the N-dependent radius r_N > 0 is chosen so that
σ_L^2_ per(0,NT)(𝒜[ϕ])∩ B(0,r_N)={0},
leading to the associated spectral projection
𝒫_0,N=1/2π i∫_∂ B(0,r_N) dz/(z-𝒜[ϕ]) = 1/Nϕ' ⟨Φ_0,·⟩_L^2_ per(0,NT),
onto the 1-dimensional NT-periodic kernel of 𝒜[ϕ] spanned by ϕ'. Here, Φ_0 is the function spanning the L^2_ per(0,T)-kernel of the adjoint 𝒜[ϕ]^* of the operator 𝒜[ϕ], normalized to satisfy
⟨Φ_0,ϕ'⟩_L^2_ per(0,T)=1. One then decomposes the semigroup, generated by 𝒜[ϕ], as
^𝒜[ϕ]t=𝒫_0,N+^𝒜[ϕ]t(1-𝒫_0,N),
and shows that for each δ∈(0,δ_N) there exists a constant C_δ>0 such that
^𝒜[ϕ]t(1-𝒫_0,N)f_H^1_ per(0,NT)≤ C_δe^-δ tf_H^1_ per(0,NT),
for all t≥ 0 and f∈ H^1_ per(0,NT). This allows one to establish the nonlinear stability result in Theorem <ref>; see <cit.> for more details.
The lack of uniformity thus stems from the fact that one must take the radius r_N→ 0 as N→∞. To establish uniform bounds one should instead work with a ball B(0,r) with an N-independent radius r>0 and define the spectral projection
𝒫:=1/2π i∫_∂ B(0,r) dz/(z-𝒜[ϕ]),
associated with the generalized eigenspace corresponding to all the eigenvalues in the interior of the ball. Naturally, the dimension of this generalized eigenspace tends to infinity as N→∞, a difficulty which must be handled to establish uniform in N decay estimates associated with the induced decomposition of the semigroup. This analysis of the semigroup was carried out in detail in <cit.>. We slightly reformulate the main result from <cit.>; see Section <ref>.
Suppose ϕ is a smooth T-periodic stationary solution of (<ref>) that is diffusively spectrally stable in the sense of Definition <ref>. Then, there exists a constant C>0 such that for every N∈ℕ and f∈ L^2_ per(0,NT) there exist a constant σ_ℓ∈ℝ and a function γ_ℓ∈ C([0,∞),L^2_ per(0,NT)) with the following properties:
^𝒜[ϕ]tf_L^2_ per(0,NT), |σ_ℓ| ≤ C f_L^1_ per(0,NT)∩ L^2_ per(0,NT),
^𝒜[ϕ]tf - 1/Nϕ'(·)σ_ℓ_L^2_ per(0,NT) ≤ C(1+t)^-1/4f_L^1_ per(0,NT)∩ L^2_ per(0,NT),
γ_ℓ(·,t)- 1/Nσ_ℓ_L^2_ per(0,NT) ≤ C(1+t)^-1/4f_L^1_ per(0,NT)∩ L^2_ per(0,NT),
^𝒜[ϕ]tf-ϕ'(·)γ_ℓ (·,t)_L^2_ per(0,NT) ≤ C (1+t)^-3/4f_L^1_ per(0,NT)∩ L^2_ per(0,NT),
for all t≥0.
We point out that the use of the L^1-norm is essential in obtaining these uniform estimates as can be seen from the proof of Theorem <ref> in <cit.>. In fact, the nonuniform embedding L^2_ per(0,NT)↪ L^1_ per(0,NT) for which
f_L^1_ per(0,NT)≤√(NT)f_L^2_ per(0,NT), f ∈ L^2_ per(0,NT),
yields that the estimates (<ref>) do hold in L^2_ per(0,NT), but with an N-dependent constant C√(NT). In addition, the uniform decay rates obtained in this result are exactly the ones found in the case of localized perturbations, the ideas of proof being also very similar, cf. <cit.>.
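We also remark that the nonuniform embedding bound (<ref>) is an immediate consequence of the Cauchy-Schwarz inequality applied over one periodicity cell,
f_L^1_ per(0,NT) = ∫_0^NT|f(x)|· 1 dx ≤(∫_0^NT 1 dx)^1/2(∫_0^NT|f(x)|^2 dx)^1/2 = √(NT)f_L^2_ per(0,NT),
so that the constant in this embedding necessarily grows like √(NT).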
§.§ Goal of Paper and Technical Challenge
The aim of this paper is to upgrade Theorem <ref> to the nonlinear level. We note that such a result is not at all straightforward. To gain insight into the main technical difficulties, we first provide an interpretation of the dynamics suggested by Theorem <ref>. Suppose ϕ is a T-periodic stationary solution of (<ref>), which is diffusively spectrally stable, and suppose that ψ(x,t) is a solution of (<ref>) with initial data ψ(x,0)=ϕ(x)+ v_0(x) with ||≪ 1 and v_0∈L^2_ per(0,NT). From (<ref>), it is natural to suspect that for t≫ 1 one should have
ψ(x,t) ≈ϕ(x) + ^𝒜[ϕ]tv_0(x)≈ϕ(x) + ϕ'(x) γ_ℓ(x,t)≈ϕ(x+γ_ℓ(x,t)),
which is a small phase modulation of the background wave ϕ. This suggests that, in order to capture the leading-order dynamics under perturbations (in a uniform way), one must incorporate into the nonlinear argument a phase modulation γ_nl(x,t) which depends on both space and time.[This stands in contrast to the classical approach of using a phase modulation that depends only on time.]
It turns out that the resulting inverse-modulated perturbation
v(x,t) = ψ(x - γ_nl(x,t),t) - ϕ(x),
necessarily satisfies a quasilinear equation, thus yielding an inherent loss of regularity. In earlier work <cit.>, considering subharmonic perturbations of periodic waves in reaction-diffusion systems, this obstacle was overcome using a nonlinear damping estimate, which is an energy estimate effectively controlling higher Sobolev norms of the modulated perturbation in terms of its L^2-norm. However, in contrast to the reaction-diffusion case, such a damping estimate cannot be obtained with standard methods in our case due to low-order damping effects of the LLE.
The same difficulty appears in the study of the nonlinear stability of periodic waves for localized perturbations. Two alternative approaches to control regularity in the LLE were recently presented in <cit.>. While the approach in <cit.> relies on tame estimates on the unmodulated perturbation
ṽ(t) := ψ(t) - ϕ.
the approach in <cit.> uses nonlinear damping estimates on the forward-modulated perturbation
v̊(x,t) = ψ(x,t) - ϕ(x + γ_nl(x,t)).
This second approach has the advantage that it requires less regularity on initial data. We refer to <cit.> for a comparison of these methods; see also Section <ref>.
In this paper, we present an N-uniform nonlinear iteration scheme for subharmonic perturbations of Lugiato-Lefever periodic waves, loosely following the modulational approach of <cit.> for subharmonic perturbations of periodic reaction-diffusion waves, and, in order to control regularity, we transfer the method of <cit.> to the subharmonic setting in an N-uniform way.
§.§ Main Result
We state our main result, which establishes N-uniform nonlinear stability of diffusively spectrally stable T-periodic steady waves in the LLE against subharmonic perturbations and gives a precise modulational description of the local dynamics about the wave.
Let T > 0 and suppose that ϕ is a smooth T-periodic stationary solution of (<ref>) that is diffusively spectrally stable in the sense of Definition <ref>.[These hypotheses on ϕ are made throughout the whole paper.] Then, there exist constants ε, M > 0 such that, for each N∈ℕ, whenever v_0∈ H^2_ per(0,NT) satisfies
E_0:=v_0_L^1_ per(0,NT) ∩ H^2_ per(0,NT)<ε,
there exist a constant σ_nl∈ℝ, a modulation function
γ_nl∈ C([0,∞),H_ per^4(0,NT)) ∩ C^1([0,∞),H^2_ per(0,NT)),
and a global classical solution
ψ∈ C([0,∞),H_ per^2(0,NT)) ∩ C^1([0,∞),L^2_ per(0,NT)),
of (<ref>) with initial condition ψ(0) = ϕ + v_0,
with the following properties:
ψ(·,t) - ϕ_H^2_ per(0,NT), |σ_nl|
≤ ME_0,
ψ(·,t) - ϕ(· + 1/Nσ_nl)_H^2_ per(0,NT) ≤ ME_0 (1+t)^-1/4,
γ_nl(·,t)-1/Nσ_nl_L^2_ per(0,NT) ≤ ME_0 (1+t)^-1/4,
ψ(·,t) - ϕ(· + γ_nl(·,t) )_H^2_ per(0,NT) ≤ ME_0 (1+t)^-3/4,
∂_x γ_nl(·,t)_H^3_ per(0,NT),
∂_t γ_nl(·,t)_H^2_ per(0,NT) ≤ ME_0 (1+t)^-3/4,
for all t≥ 0.
We note that the decay rates in Theorem <ref> are sharp in the sense that they coincide with the (optimal) decay rates in the corresponding N-uniform linear result, Theorem <ref>. They also agree with the decay rates obtained for localized perturbations in <cit.>. Although the regularity on the initial perturbation required in Theorem <ref> is higher than in the linear result, Theorem <ref>, the regularity requirement in Theorem <ref> is natural in the sense that it does reflect the amount of regularity needed to obtain a classical solution in L^2_ per(0,NT) of the semilinear LLE (<ref>) via standard semigroup theory. Indeed, the domain of the linear operator β∂_x^2 acting on L^2_ per(0,NT) is H^2_ per(0,NT). We emphasize that the use of nonlinear damping estimates on the forward-modulated perturbation as in <cit.> allows us to preserve the regularity on the initial perturbation from the local existence theory, v_0∈ H^2_ per(0,NT), whereas the use of mild estimates on the unmodulated perturbation as in <cit.> would require a higher regularity, v_0∈ H^6_ per(0,NT); see Section <ref>.
We point out that the spatial translates γ and σ_nl/N in Theorems <ref> and <ref> must be the same. To see this, fix N∈ℕ, take ε_δ as in Theorem <ref> and ε as in Theorem <ref>. If v_0∈ H^2_ per(0,NT) satisfies E_0 := v_0_L^1_ per(0,NT) ∩ H^2_ per(0,NT) < min{ε,ε_δ}, then Theorems <ref> and <ref> imply that the solution ψ of (<ref>) with initial condition ψ(0)=ϕ+v_0 satisfies
ψ(·,t)-ϕ(·+γ)_H^1_ per(0,NT)≤ C_δ E_0e^-δ t, ψ(·,t) - ϕ(· + 1/Nσ_nl)_H^2_ per(0,NT)≤ ME_0 (1+t)^-1/4,
for t ≥ 0. So, the triangle inequality yields
ϕ(·+γ) - ϕ(·+1/Nσ_ nl)_H^1_ per(0,NT)≤ C_δ E_0e^-δ t + ME_0 (1+t)^-1/4,
for all t≥0, and taking t→∞ justifies that γ=1/Nσ_ nl, as claimed.
In light of this remark, we note that, as in <cit.>, the results of Theorems <ref> and <ref> can be combined to yield an N-independent ball of initial perturbations which eventually exhibit exponential decay at an N-dependent rate to a translate of the periodic wave ϕ. That is, we improve Theorem <ref> by showing that, for all but an at most finite number of N∈ℕ, the allowable size of initial perturbations can be increased to a uniform size ε.
Let ϕ, ε and M be as in Theorem <ref>. Fix N∈ℕ, let δ_N be as in (<ref>), and, for δ∈(0,δ_N), let ε_δ be as in Theorem <ref>. Then, there exist constants T_δ≥ 0 and M_δ>0 such that, whenever v_0∈H^2_ per(0,NT) satisfies E_0 := v_0_L^1_ per(0,NT) ∩ H^2_ per(0,NT)< max{ε, ε_δ},[It is important to note that, since ε is independent of N while ε_δ→0 as N→∞, there is an at most finite number of positive integers N for which we might have ε_δ>ε.] there exist a constant σ_ nl∈ℝ and a global mild solution ψ∈ C([0,∞),H_ per^1(0,NT)) of (<ref>) with initial condition ψ(0)=ϕ+v_0 satisfying
ψ(·,t) - ϕ(· + 1/Nσ_nl)_H^1_ per(0,NT)≤
ME_0(1+t)^-1/4, for 0<t≤ T_δ,
M_δ E_0e^-δ t, for t>T_δ.
Together with the (formal) observation that, as N increases, functions in H^m_ per(0,NT) look more like functions in H^m(),
we see that Corollary <ref> also serves to formally connect the subharmonic result in the limit as N→∞ to the localized result established in <cit.>.
In order to take the limit N→∞, we (only here) fix E_0∈(0,ε) independent of N and choose a sequence v_0,N∈ L^1_ per(0,NT) ∩ H^2_ per(0,NT) such that v_0,N_L^1_ per(0,NT) ∩ H^2_ per(0,NT) = E_0 for each N∈ℕ.[This argument essentially mimics choosing w_0∈ L^1(ℝ)∩ H^2(ℝ) such that E_0:=w_0_L^1(ℝ)∩ H^2(ℝ)∈(0,ε) and a sequence v_0,N∈ L^1_ per(0,NT) ∩ H^2_ per(0,NT) such that v_0,N_L^1_ per(0,NT) ∩ H^2_ per(0,NT)→E_0 as N→∞.]
For N sufficiently large, T_δ is defined to be the minimal time such that ME_0(1+t)^-1/4≤ε_δ for all t≥ T_δ, see the proof in Section <ref> for motivation of the definition of T_δ.
Since M and E_0 are independent of N while _δ→ 0, we observe that T_δ→∞ as N→∞. Finally, noting that the spatial translate 1/Nσ_ nl converges to 0
and the time T_δ diverges to ∞ as N →∞, we see that the localized result is indeed recovered when we, again formally, take N→∞ and fix E_0 independent of N.
§.§ Outline of the Paper
In Section <ref>, we collect and extend the relevant linear results obtained in <cit.>. We decompose the semigroup ^𝒜[ϕ]t in a low- and high-frequency part, state associated N-uniform estimates on these parts and study the interaction of the low-frequency part with spatial and temporal derivatives. In Section <ref>, we construct our nonlinear iteration scheme and establish an N-uniform nonlinear damping estimate to compensate for the loss of regularity exhibited by the scheme. In Section <ref>, we apply the linear estimates to our nonlinear iteration scheme and prove our main result, Theorem <ref>, and Corollary <ref>. In Appendix <ref>, we provide a brief proof of the results presented in Section <ref>. Finally, Appendix <ref> is devoted to obtaining some local existence and regularity results necessary for our nonlinear analysis.
§.§ Notation
For convenience, for each N,m∈ we introduce the notations
L^1_N := L^1_per(0,NT),
L^2_N := L^2_per(0,NT),
H^m_N:=H^m_per(0,NT).
We equip L^1_N and L^2_N with the norm and the inner-product
f_L^1_N = ∫_0^NT|f(x)|dx ⟨ f,g⟩_L^2_N=∫_0^NTf(x)g(x)dx,
respectively, and equip H^m_N with the norm
f_H^m_N^2 = f_L^2_N^2+∑_k=1^mf^(k)_L^2_N^2,
with f^(k) representing the k-th order derivative of f. Notice that with these norms the Sobolev embedding H^1_N↪ L^∞(), holds uniformly in N.[This can be seen by taking a constant C > 0, which bounds the norm of the continuous embedding H^1() ↪ L^∞(), and a smooth, T-dependent, cut-off function ω→ with
ω_L^∞ = 1, ω(x) = 1 for x ≥ 0 and ω(x) = 0 for x ≤ -T. For N ∈ define ω_N → by ω_N(x) = 1 for x ∈ [0,NT], ω_N(x) = ω(x) for x ≤ 0 and ω_N(x) = ω(NT-x) for x ≥ NT. It follows f_L^∞ = fω_N_L^∞≤ Cfω_N_H^1≤ 3Cω_N_W^1,∞f_H^1_N = 3Cω_W^1,∞f_H^1_N for f ∈ H^1_N, where we use the NT-periodicity of f.]
Acknowledgments: MH was partially supported by French National Research Agency (ANR) through the project Optimal (grant number ANR-20-CE30-0004) and the EUR EIPHI program (grant number ANR-17-EURE-0002).
The work of MAJ was partially funded by the NSF under grant DMS-2108749, as well as the Simons Foundation Collaboration grant number 714021.
The work of BdR was funded by the Deutsche Forschungsgemeinschaft (DFG, German Research Foundation) – Project-ID 258734477 – SFB 1173.
§ SUBHARMONIC LINEAR ESTIMATES
The starting point for our nonlinear analysis is the recent linear analysis in <cit.>, where the authors established Theorem <ref> by showing that the diffusive spectral stability assumptions in Definition <ref> imply that, after splitting off the translational mode by applying the spectral projection 𝒫_0,N, defined in (<ref>), the action of the semigroup ^𝒜[ϕ]t can be decomposed into two components, one with (1+t)^-1/4-decay and one with (1+t)^-3/4-decay.
The nonlinear analysis presented here requires the following slight extension of these results showing in addition that the component with slowest decay at rate (1+t)^-1/4 is smoothing.
Let T>0 and suppose that ϕ is a smooth, T-periodic stationary solution of (<ref>) that is diffusively spectrally stable in the sense of Definition <ref>. Then, the action of the
linearized solution operator ^𝒜[ϕ]t acting on v∈ L^2_N can be decomposed as
^𝒜[ϕ]tv = 𝒫_0,N v + ϕ' s_p,N(t)v + S_N(t)v,
where 𝒫_0,N is the spectral projection (<ref>), and the operators s_p,N(t) and S_N(t) have the following properties.
* For all integers j,l,k≥ 0 there exists an N-independent constant C_j,l,k>0 such that
∂_x^l ∂_t^j s_p,N(t) ∂^k_x v_L_N^2≤ C_j,l,k(1+t)^-1/4-(l+j)/2v_L_N^1 , v∈ H^k_N,
∂_x^l ∂_t^j s_p,N(t) ∂^k_x v_L_N^2≤ C_j,l,k(1+t)^-(l+j)/2v_L_N^2, v∈ H^k_N,
for all t≥ 0.
* There exists an N-independent constant C>0 such that
S_N(t) v_L_N^2≤ C(1+t)^-3/4v_L_N^1∩ L_N^2, v∈ L_N^2,
for all t≥ 0.
For j=l=k=0 this result has been proved in <cit.>. We summarize the arguments leading to the additional estimates (<ref>) in Appendix <ref>.
Notice that the linear stability result in Theorem <ref> is an immediate consequence of the proposition above with the choice
σ_ℓ:=⟨Φ_0,f⟩_L^2_N, γ_ℓ(·,t) := 1/N⟨Φ_0,f⟩_L^2_N + s_p,N(t)f,
for f∈ L_N^2, where Φ_0 is the T-periodic function in the formula for the spectral projection (<ref>).
For our purposes, it is convenient to slightly modify the decomposition (<ref>) such that the first two terms on the right hand side of (<ref>), which have slowest decay, vanish at t=0.[This will facilitate the choice of the modulation functions in <ref>.] To this end, we introduce a smooth cutoff function χ:[0,∞)→[0,1], which vanishes on [0,1] and equals 1 on [2,∞), and write
^𝒜[ϕ]tv = χ(t)𝒫_0,N v + ϕ' s̃_p,N(t)v + S̃_N(t)v,
with
s̃_p,N(t) = χ(t) s_p,N(t), S̃_N(t) = (1-χ(t))(𝒫_0,N+ϕ' s_p,N(t)) + S_N(t).
Because χ equals 1 on [2,∞), the inequalities (<ref>) and (<ref>) hold for s̃_p,N(t) and S̃_N(t) as well.
§ NONLINEAR ITERATION SCHEME
The goal of this section is to introduce the nonlinear iteration scheme that will be employed in the next section to prove our nonlinear stability result, Theorem <ref>. To this end, let ϕ be a smooth, T-periodic stationary solution of the LLE (<ref>), which is diffusively spectrally stable in the sense of Definition <ref>. Fix N ∈ and consider the perturbed solution ψ(t) of (<ref>) with initial condition
ψ(0) = ϕ + v_0,
where v_0 ∈ H^2_N is sufficiently small. Noting that the linear operator -iβ∂_x^2 acting on the space L^2_N with domain H^2_N generates a C_0-semigroup, and the mapping ψ↦ -(1+iα)ψ + i|ψ|^2ψ + F is locally Lipschitz continuous on H^2_N, standard semigroup theory readily yields local existence and uniqueness of the perturbed solution ψ(t); see, for instance, <cit.>.
For any v_0 ∈ H^2_N, there exists a maximal time T_max∈ (0,∞] such that (<ref>) admits a unique classical solution
ψ∈ C([0,T_max),H^2_N) ∩ C^1([0,T_max),L^2_N),
with initial condition ψ(0) = ϕ + v_0. In addition, if T_max < ∞, then
lim_t ↑ T_maxψ(t)_H^2_N = ∞.
It is well-known that direct control on the unmodulated perturbation
ṽ(t) := ψ(t) - ϕ,
which satisfies the semilinear equation (<ref>), is not strong enough to close a nonlinear stability argument. Indeed, iterative estimates on the Duhamel formula associated with (<ref>) are too weak to close the nonlinear argument because of the presence of the constant, non-decaying, term 𝒫_0,Nv in the decomposition (<ref>) of the semigroup ^𝒜[ϕ] t. To overcome this lack of decay, a standard approach is to consider the temporally-modulated perturbation
v̆(x,t) := ψ(x,t) - ϕ(x+σ(t)),
which then leads to the result in Theorem <ref>.
However, this result is not uniform in N and the N-uniform decay rate (1+t)^-1/4 of the remaining terms in the semigroup ^𝒜[ϕ]t is too weak to close the nonlinear iteration in the presence of a quadratic nonlinearity. To overcome this obstacle, we introduce the inverse-modulated perturbation
v(x,t) := ψ(x - γ(x,t) - 1/Nσ(t),t) - ϕ(x),
where the temporal modulation function σ(t) is chosen to account for the nondecaying term 𝒫_0,Nv of the semigroup, whereas the spatio-temporal phase modulation γ(x,t) is chosen to account for the term with slowest algebraic decay rate (1+t)^-1/4 in the semigroup decomposition (<ref>).
The idea of a spatio-temporal phase modulation to capture the most critical diffusive dynamics stems from the nonlinear stability analysis of periodic waves in reaction-diffusion systems against localized and nonlocalized perturbations; see <cit.>. The approach was then later adapted to obtain N-uniform results in the case of subharmonic perturbations in <cit.> by including an additional nondecaying temporal modulation σ(t). Note that this methodology also
extends to systems with additional conservation laws, thus allowing for additional modulation functions <cit.>.
An issue is that the inverse-modulated perturbation v defined in (<ref>) satisfies a quasilinear equation, yielding an apparent loss of derivatives in the nonlinear iteration scheme. To regain regularity one often relies on nonlinear damping estimates, which are energy estimates, effectively providing control of higher Sobolev norms of the inverse-modulated perturbation in terms of its L^2-norm. If the underlying equation is parabolic, such nonlinear damping estimates can be obtained from smoothing properties of the (analytic) semigroup; see for instance <cit.> and <cit.> for the case of localized and subharmonic perturbations in reaction-diffusion systems, respectively. In general, however, the existence of nonlinear damping estimates is not guaranteed and their derivation can be tedious and lengthy; see, for instance, the delicate analyses <cit.>, <cit.> and <cit.> in the case of hyperbolic-parabolic systems.
In cases where nonlinear damping estimates are unavailable or difficult to obtain, there is an alternative approach to control regularity, which was introduced in <cit.>. The key idea in <cit.> is to incorporate tame estimates on the unmodulated perturbation ṽ(t), which satisfies a semilinear equation in which no derivatives are lost. This approach was applied in the stability analysis <cit.> of periodic waves in the LLE (<ref>) against localized perturbations and works as long as the underlying equation is semilinear. Moreover, it has the advantage that it does not rely on localization or periodicity properties of perturbations and, thus, can be applied in case of pure L^∞-perturbations, cf. <cit.>.
Recently, a nonlinear damping estimate was established for the LLE in <cit.> in the case of localized perturbations. The approach in <cit.> is to first derive a damping estimate for the forward-modulated perturbation
v̊(x,t) := ψ(x,t) - ϕ(x + γ(x,t) + 1/Nσ(t)),
and then exploit its equivalence to the inverse-modulated equation, modulo absorbable errors. Here, the modulation functions γ and σ are precisely those chosen from the inverse-modulated perturbation in (<ref>).
The advantage of using a nonlinear damping estimate over the approach in <cit.> is that it requires less regularity on the initial data, as can be seen by comparing <cit.> with <cit.>, see also Remark <ref>. In addition, as pointed out in <cit.>, it allows for sharp bounds in case of a nonlocalized initial phase modulation, and has the possibility (to be checked in individual cases) of extension to quasilinear equations. We refer to <cit.> for further discussion and comparison with the above method from <cit.>.
Thus, motivated by the possibility to allow for less regular initial data, we choose to control regularity in this work by transferring the nonlinear damping estimate in <cit.> to the case of subharmonic perturbations.
The rest of this section is structured as follows. First, we derive the (quasilinear) equation for the inverse-modulated perturbation v, and obtain N-uniform estimates on the nonlinearities. Next, we show that the critical terms in the Duhamel formula of the inverse-modulated perturbation can be compensated for by making a judicious choice for the phase modulation functions σ(t) and γ(x,t). Subsequently, we establish local well-posedness of the integral system consisting of v, σ and γ. Finally, in an effort to control regularity in this system, we consider the forward-modulated perturbation and derive a suitable N-uniform nonlinear damping estimate. The result in Theorem <ref> is proved in Section <ref> with γ_nl = γ +σ/N and a suitably chosen constant σ_nl.
We note that it is possible to adapt the method of <cit.> to the current setting in order to regain regularity and obtain a nonlinear subharmonic stability result that is uniform in N. This adaptation is complicated, however, by the fact that the temporally-modulated perturbation, v̆, and the inverse-modulated perturbation, v, want to naturally select different temporal modulations, σ(t), at least under the approach of <cit.>. Despite this complication, we were able to establish a result[We do not report the full result or details here.] similar to Theorem <ref>, albeit with the requirement that v_0∈ H^6_N, instead of v_0∈ H^2_N, where the higher regularity is necessary to guarantee optimal decay results in the absence of a nonlinear damping estimate.
§.§ The Inverse-Modulated Perturbation
Applying the differential operator ∂_t - 𝒜[ϕ] to the formula (<ref>) for the inverse-modulated perturbation v while using that ψ(t) and ϕ solve the Lugiato-Lefever equation (<ref>), we obtain the quasilinear equation
(∂_t-𝒜[ϕ])(v + γϕ' + 1/Nσϕ') = 𝒩(v,γ,∂_t γ,∂_t σ) + (∂_t - 𝒜[ϕ])(γ_x v),
where 𝒜[ϕ] is the linear operator defined by (<ref>) and the nonlinearity is given by
𝒩(v,γ,γ_t,σ_t) = 𝒬(v,γ) + ∂_x ℛ(v,γ,γ_t,σ_t) + ∂_x^2 𝒫(v,γ),
where
𝒬(v,γ) =
(1-γ_x)𝒥[([ 3v_r^2+v_i^2 2v_r v_i; 2v_r v_i v_r^2+3v_i^2 ])ϕ+|v|^2v],
is (at least) quadratic in v and where
ℛ(v,γ,γ_t,σ_t) = -γ_t v -1/Nσ_t v + β𝒥[γ_xx/(1-γ_x)^2 v - γ_x^2/(1-γ_x)ϕ'],
𝒫(v,γ) = -β𝒥[γ_x + γ_x/(1-γ_x)] v,
contain all terms which are linear in v.
Using the N-uniform embedding H^1_N↪ L^∞(),
the following estimate on the nonlinearity is straightforward to verify.
There exists an N-independent constant C > 0 such that the inequality
𝒩(v,γ,γ_t,σ_t)_L_N^1∩ L_N^2 ≤ C( v_L_N^2v_H_N^1 + (γ_x,γ_t)_H_N^2 × H_N^1(v_H_N^2 + γ_x_L_N^2) + |σ_t|v_H_N^1),
holds for all v ∈ H_N^2, (γ,γ_t) ∈ H_N^3 × H^1_N and (σ,σ_t) ∈× satisfying v_L^∞, γ_x_W^1,∞≤1/2.
Next, we introduce the modulation functions σ and γ.
The decomposition (<ref>) of the semigroup ^𝒜[ϕ]t in which the first two terms, with lower decay, vanish at t=0 allows us to consider modulation functions which vanish identically at t = 0, i.e. such that σ(0)=0 and γ(·,0)=0. Then, the Duhamel formulation associated with (<ref>) reads
v(t) + 1/Nσ(t)ϕ' + γ(t)ϕ' = ^𝒜[ϕ]tv_0 + ∫_0^t ^𝒜[ϕ](t-s)𝒩(v,γ,∂_s γ,∂_s σ)(s) ds + γ_x(t)v(t).
Together with the semigroup decomposition (<ref>), and the formula (<ref>) for the spectral projection 𝒫_0,N, this recommends the (implicit) choices
σ(t) = χ(t)⟨Φ_0,v_0⟩_L^2_N + ∫_0^t χ(t-s)⟨Φ_0,𝒩(v,γ,∂_s γ,∂_s σ)(s)⟩_L^2_N ds,
γ(t) = s_p,N(t)v_0 + ∫_0^t s_p,N(t-s)𝒩(v,γ,∂_s γ,∂_s σ)(s) ds,
so that σ(t) accounts for the non-decaying χ(t)𝒫_0,N-terms and γ(x,t) for the slowly decaying s_p,N(t)-terms on the right-hand side of (<ref>).
Subtracting (<ref>) and (<ref>)
from (<ref>) yields the equation for the inverse-modulated perturbation
v(t) = S_N(t)v_0 + ∫_0^t S_N(t-s)𝒩(v,γ,∂_s γ,∂_s σ)(s) ds + γ_x(t)v(t).
Recalling the definition (<ref>) of the inverse-modulated perturbation, one observes that (<ref>)-(<ref>) forms a closed integral system in terms of σ and γ. A standard contraction mapping argument then yields the local existence and uniqueness result for the modulation functions.
Taking ψ and T_max as in Proposition <ref>, there exists a maximal time τ_max∈ (0,T_max] such that the integral system (<ref>)-(<ref>), with v given by (<ref>), has a unique solution
(σ,γ) ∈ C([0,τ_max),ℝ× H^4_N) ∩ C^1([0,τ_max),ℝ× H^2_N),
with (σ,γ)(0)=0. In addition, if τ_max < T_max, then
lim_t ↑τ_max(σ(t),∂_t σ(t),γ(t),∂_t γ(t))_ℝ×ℝ× H^4_N × H^2_N = ∞.
We prove this proposition in Appendix <ref>. Then, for the inverse-modulated perturbation v we obtain the following local existence result.
Taking ψ and T_max as in Proposition <ref> and σ,γ and τ_max as in Proposition <ref>, the inverse-modulated perturbation v, defined by (<ref>), satisfies v ∈ C([0,τ_max),L^2_N). Moreover, for any t ∈ [0,τ_max) with γ_x(t)_L^∞≤1/2 it holds v(t) ∈ H^2_N.[We note that it is not clear that the inverse-modulated perturbation v [0,τ_max) → H^2_N is continuous. A standard approach to prove continuity of v would be to apply the mean value theorem to the perturbed solution ψ(t) and its derivatives. This would however require boundedness of the third derivative of ψ(t), which does not follow from Proposition <ref>.]
First, notice that v = V ∘ F where V is the continuous mapping defined in Lemma <ref> and F : [0,τ_max) → L^2_N ×ℝ× [0,T_max) is defined by F(t) = (γ(t),σ(t),t).
By Proposition <ref> the map F is continuous which together with the continuity of V implies that v ∈ C([0,τ_max),L^2_N).
Next, let t∈ [0,τ_max) be such that γ_x(t)_L^∞≤1/2. Then, the map A_t → given by
A_t(x) = x - γ(x,t) - 1/Nσ(t),
is invertible. Moreover, the NT-periodicity of γ(·,t) implies A_t(NT) - A_t(0) = NT. Hence, using Young's inequality, the N-uniform embedding H^1_N ↪ L^∞() and the substitution y = A_t(x), we establish a constant C > 0 such that for any f ∈ H^2_N it holds
f(A_t(·))_H^2_N^2
≤∫_0^NT|f(A_t(y))|^2 y
+ 2∫_0^NT|f”(A_t(y))|^2 A_t'(y)^4 y
+ ∫_0^NT|f'(A_t(y))|^2 (A_t'(y)^2 + 2A_t”(y)^2) y
= ∫_A_t(0)^A_t(NT)|f(x)|^2/1-γ_x(A_t^-1(x),t) x
+ 2∫_A_t(0)^A_t(NT)|f”(x)|^2(1-γ_x(A_t^-1(x),t))^3 x
+ ∫_A_t(0)^A_t(NT)|f'(x)|^2(1-γ_x(A_t^-1(x),t))^2 + 2γ_xx(A_t^-1(x),t)^2/1-γ_x(A_t^-1(x),t) x
≤ Cf_H^2_N^2(1+γ_xx(t)_H^1_N^2).
Therefore, we find v(t) = ψ(A_t(·),t) - ϕ∈ H^2_N by Propositions <ref> and <ref>.
We note that the primary success in the above nonlinear decomposition is that the only component of the linearized evolution ^𝒜[ϕ]t that remains in the equation (<ref>) for v(t) is the S_N(t)-component, which exhibits temporal decay at the rate (1+t)^-3/4, which is strictly faster than the (diffusive) decay rate (1+t)^-1/4 associated with the projected semigroup ^𝒜[ϕ]t(1-𝒫_0,N). Further, the nonlinear residual 𝒩 depends only on derivatives of γ and σ which, recalling that s_p,N(0)=0 and χ(0) = 0, satisfy
∂_t^j σ(t) = ∂_t^j χ(t)⟨Φ_0,v_0⟩_L^2_N + ∫_0^t ∂_t^j χ(t-s)⟨Φ_0,𝒩(v,γ,∂_s γ,∂_s σ)(s)⟩_L^2_N ds,
∂_x^ℓ∂_t^jγ(x,t) = ∂_x^ℓ∂_t^js_p,N(t)v_0 +∫_0^t∂_x^ℓ∂_t^j s_p,N(t-s)𝒩(v,γ,∂_sγ,∂_s σ)(s) ds,
for all ℓ,j∈ℕ_0 and t ∈ [0,τ_max). We observe that the derivative χ'(t) vanishes for t ≥ 2, whereas the temporal decay of s_p,N(t) improves to (1+t)^-3/4 upon taking derivatives, cf. Proposition <ref>. This suggests that the linear decay in an iteration scheme consisting of v(t) and derivatives of σ(t) and γ(t) is strong enough to close a nonlinear argument. Yet, the apparent loss of regularity needs to be addressed, which will be the purpose of the remainder of this section.
§.§ The Forward-Modulated Perturbation
First, the local existence and uniqueness of the forward-modulated perturbation, defined by (<ref>), readily follows from Propositions <ref> and <ref>.
Taking ψ as in Proposition <ref> and σ,γ and τ_max as in Proposition <ref>, the forward-modulated perturbation defined by (<ref>) satisfies v̊ ∈ C([0,τ_max),H^2_N) ∩ C^1([0,τ_max),L^2_N).
Let t,s ∈ [0,τ_max). With the aid of the mean value theorem and the embedding H^1_N ↪ L^∞() we establish a constant C > 0 such that
v̊(t) - v̊(s)_H^2_N ≤ Cϕ'_W^2,∞(1 + γ_x(t)_H^2_N + γ_x(s)_H^2_N)^2(γ(t) - γ(s)_H^2_N + |σ(t) - σ(s)|)
+ ψ(t) - ψ(s)_H^2_N,
and
∂_t v̊(t) - ∂_s v̊(s)_L^2_N ≤ϕ'_L^∞(∂_t γ(t) - ∂_s γ(s)_L^2_N + |σ'(t) - σ'(s)|) + ∂_t ψ(t) - ∂_s ψ(s)_L^2_N
+ ϕ”_L^∞(∂_t γ(t)_H^1_N + |σ'(t)|)(γ(t) - γ(s)_L^2_N + |σ(t) - σ(s)|),
which yields the proof by invoking Propositions <ref> and <ref>.
Applying the operator ∂_t - 𝒜[ϕ] to (<ref>) while using that ϕ and ψ(t) are solutions of (<ref>), one finds that the forward-modulated perturbation satisfies the equation
(∂_t-𝒜[ϕ]) = 𝒩[ϕ]() + ℛ(γ,∂_t γ,∂_t σ),
for t ∈ [0,τ_max), where ϕ denotes the modulated periodic wave
ϕ(x,t) = ϕ(x + γ(x,t) + 1/Nσ(t)),
with σ and γ chosen as in (<ref>)-(<ref>),
the linear operator 𝒜[ϕ] is defined by (<ref>), the nonlinearity 𝒩[ϕ]() is given by (<ref>), and the residual ℛ(γ,γ_t,σ_t) is defined by
ℛ(γ,γ_t,σ_t) = - β J(ϕ”(· + γ(·,t) + 1/Nσ(t))(2γ_x + γ_x^2) + ϕ'(· + γ(·,t) + 1/Nσ(t))γ_xx)
+ ϕ'(· + γ(·,t) + 1/Nσ(t)) (γ_t + 1/Nσ_t).
One observes that the equation (<ref>) for the forward-modulated perturbation arises from the equation (<ref>) for the unmodulated perturbation by replacing the periodic steady wave ϕ by the modulated wave ϕ and adding the residual term ℛ(γ,γ_t,σ_t), which does not depend on . In particular, equation (<ref>) is semilinear in , which simplifies acquiring a nonlinear damping estimate. In fact, for the case of localized perturbations, a nonlinear damping estimate has been obtained for equation (<ref>) in <cit.> and, using an analogous approach, for equation (<ref>) in <cit.>. The method transfers to the case of subharmonic perturbations in an N-uniform way, leading to the following result.
Let ψ be as in Proposition <ref> and σ,γ and τ_max as in Proposition <ref>. Then, there exist N-independent constants R_1, C > 0 such that the forward-modulated perturbation (t) given by (<ref>) obeys the estimate
(t)_H^2_N^2 ≤ C^-tv_0_H^2_N^2 + C(t)_L^2_N^2
+ C∫_0^t ^-(t-s)((s)_L^2_N^2 + γ_x(s)_H^3_N^2 + ∂_s γ(s)_H^2_N^2 + |∂_s σ(s)|^2) s,
for any t ∈ [0,τ_max) with
sup_0 ≤ s ≤ t((s)_H^2_N + γ_x(s)_H^3_N + ∂_s γ(s)_H^2_N + |∂_s σ(s)|) ≤ R_1.
We proceed as in <cit.> and define the energy
E(t) = _xx(t)_L^2_N^2 - 1/2β⟨𝒥 M[ϕ(t)] _x(t), _x(t)⟩_L^2_N,
for t ∈ [0,τ_max), with M[ϕ] given by
M[ϕ] = 2([ -2ϕ_rϕ_i ϕ_r^2 -ϕ_i^2; ϕ_r^2 - ϕ_i^2 2ϕ_r ϕ_i ]),
where we recall the notation ϕ = ϕ_r + iϕ_i.
First, using the basic Sobolev interpolation inequality
f'_L^2_N^2 ≤f”_L^2_Nf_L^2_N,
for f ∈ H^2_N, which follows from integration by parts, together with
Young's and Cauchy-Schwarz inequalities and boundedness of ϕ, we obtain an N-independent constant K > 0 such that
_xx(t)_L^2_N^2 ≤ 2E(t) + K(t)_L^2_N^2,
for t ∈ [0,τ_max). This shows that the second derivative of (t) is controlled by the energy E(t) and the L^2_N-norm of (t).
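For completeness, we recall that the interpolation inequality used above follows from integration by parts and the Cauchy-Schwarz inequality: for NT-periodic, real-valued f one has
f'_L^2_N^2 = ∫_0^NT f'(x)f'(x) x = -∫_0^NT f(x)f”(x) x ≤f_L^2_Nf”_L^2_N,
where the boundary term [f(x)f'(x)]_0^NT vanishes by periodicity; the vector-valued case is identical with pointwise dot products.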
Next, we use the density of the subspace H^4_N in H^2_N to derive an inequality for the energy. Thus, we take v_0 ∈ H^4_N, for which standard local existence theory (as in Proposition <ref>) implies that ψ∈ C([0,T_max),H^4_N) ∩ C^1([0,T_max),H^2_N). Combining this with Proposition <ref> yields ∈ C([0,τ_max),H^4_N) ∩ C^1([0,τ_max),H^2_N). We denote
B[ϕ] = ([ 3ϕ_r^2 + ϕ_i^2 2ϕ_rϕ_i; 2ϕ_rϕ_i ϕ_r^2 + 3ϕ_i^2 ]),
and differentiate the energy given by (<ref>) to obtain
∂_t E(t) = -2 E(t) + E_1(t) + E_2(t) + E_3(t),
for t ∈ [0,τ_max), where
E_1(t) = -1/β⟨𝒥 M[ϕ(t)] _x(t), _x(t)⟩_L^2_N - 1/2β⟨𝒥 M'[ϕ(t)] ϕ_t(t) _x(t), _x(t)⟩_L^2_N
+ 2⟨𝒥(∂_xx(B[ϕ(t)](t)) - B[ϕ(t)]_xx(t)),_xx(t) ⟩_L^2_N
- ⟨ M'[ϕ(t)] ϕ_x(t) _x(t), _xx(t)⟩_L^2_N
+ 1/β⟨𝒥M[ϕ(t)]∂_x ((I + 𝒥(α - B[ϕ(t)]))(t)),_x(t)⟩_L^2_N,
contains all irrelevant bilinear terms in ,
E_2(t) = 2⟨∂_x^2 𝒩((t)), _xx(t)⟩_L^2_N - 1/β⟨𝒥M[ϕ]∂_x 𝒩((t)),_x(t)⟩_L^2_N,
consists of all higher-order nonlinear terms in , and
E_3(t) = 2⟨∂_x^2 ℛ(γ(t),∂_t γ(t), ∂_t σ(t)), _xx(t)⟩_L^2_N
- 1/β⟨𝒥M[ϕ]∂_x ℛ(γ(t),∂_t γ(t), ∂_t σ(t)),_x(t)⟩_L^2_N,
contains all residual linear terms in .
We estimate the terms E_j(t) with the aid of estimates (<ref>) and (<ref>), the Cauchy-Schwarz and Young inequalities, boundedness of ∂_x^l ϕ for l∈_0, and the N-uniform embedding H^1_N() ↪ L^∞(). That is, we establish N-independent constants R_1,C_l > 0, l = 1,…,5 such that for each t ∈ [0,τ_max) satisfying (<ref>) we have
|E_1(t)| ≤ C_1 (t)_H^2_N(t)_H^1_N≤1/6_xx(t)_L^2_N^2 + C_2(t)_L^2_N^2
≤1/3E(t) + (C_2 + K)(t)_L^2_N^2,
|E_2(t)| ≤ C_3(t)_H^2_N^3 ≤1/6_xx(t)_L^2_N^2 + C_3(t)_L^2_N^2
≤1/3E(t) + (C_3 + K)(t)_L^2_N^2,
|E_3(t)| ≤ C_4 (γ_x(t)_H^3_N + ∂_t γ(t)_H^2_N + 1/N∂_t σ(t)_L^2_N)(t)_H^2_N
≤1/6_xx(t)_L^2_N^2 + C_5((t)_L^2_N^2 + γ_x(t)_H^3_N^2 + ∂_t γ(t)_H^2_N^2 + |∂_t σ(t)|^2)
≤1/3E(t) + (C_5+K)((t)_L^2_N^2 + γ_x(t)_H^3_N^2 + ∂_t γ(t)_H^2_N^2 + |∂_t σ(t)|^2)
for t ∈ [0,τ_max). Hence, we obtain an N-independent constant C_0 > 0 such that for t ∈ [0,τ_max) satisfying (<ref>) we have the energy estimate
∂_t E(t) ≤ -E(t) + C_0((t)_L^2_N^2 + γ_x(t)_H^3_N^2 + ∂_t γ(t)_H^2_N^2 + |∂_t σ(t)|^2).
Integrating the latter and using (<ref>), we arrive at
_xx(t)_L^2_N^2 ≤ 2^-tE(0) + K(t)_L^2_N^2
+ 2C_0∫_0^t ^-(t-s)((s)_L^2_N^2 + γ_x(s)_H^3_N^2 + ∂_s γ(s)_H^2_N^2 + |∂_s σ(s)|^2) s,
for t ∈ [0,τ_max) satisfying (<ref>), which, noting that 2E(0) ≤ C_*(0)_H^2_N = C_*v_0_H^2_N for some N-independent constant C_* > 0, yields (<ref>) for v_0 ∈ H^4_N.
Finally, for v_0 ∈ H^2_N, we approximate v_0 in H^2_N-norm by a sequence (v_0,n)_n ∈ in H^4_N and note that continuity with respect to initial data, cf. <cit.>, implies that the perturbed solution ψ of (<ref>) with initial condition ψ(0) = ϕ + v_0 is, for any T < T_max, approximated in C([0,T],H^2_N) by the sequence of solutions (ψ_n)_n ∈ of (<ref>) with initial data ψ_n(0) = ϕ + v_0,n. Observing that (<ref>) only depends on the H^2_N-norm of (t) = ψ(t) - ϕ(t), the desired result follows by approximation.
It was exploited in <cit.> for the case of localized perturbations that a nonlinear damping estimate on the forward-modulated perturbation yields nonlinear damping of the inverse-modulated perturbation by using that the H^k-norms of the inverse- and forward-modulated perturbations are equivalent (modulo absorbable errors) for any k ∈_0. In our nonlinear argument in the upcoming section we adopt a similar approach. To this end, we establish the norm equivalence (N-uniformly) in the current subharmonic setting following <cit.> and <cit.>.
Let ψ be as in Proposition <ref> and σ,γ and τ_max as in Proposition <ref>. Then, there exist N-independent constants R_2,C > 0 such that the inverse- and forward-modulated perturbations v and defined by (<ref>) and (<ref>), respectively, satisfy
v(t)_H^2_N≤ C((t)_H^2_N + γ_x(t)_H^1_N), (t)_L^2_N≤ C(v(t)_L^2_N + γ_x(t)_H^1_N),
for any t ∈ [0,τ_max) with γ(t)_H^3_N + |σ(t)| ≤ R_2.
Recalling the N-uniform embedding H^1_N ↪ L^∞(), we choose an N-independent constant R_2 > 0 such that γ(t)_H^3_N≤ R_2 implies
γ_x(t)_L^∞≤1/2.
Take t ∈ [0,τ_max) such that the inequality
γ(t)_H^3_N + |σ(t)| ≤ R_2,
holds. As in the proof of Proposition <ref> we consider the map A_t → given by
A_t(x) = x - γ(x,t) - 1/Nσ(t),
which is invertible by (<ref>) and satisfies
A_t(NT) - A_t(0) = NT,
by the NT-periodicity of γ(·,t). Using the equalities
x = A_t(A_t^-1(x)) = A_t^-1(x) - γ(A_t^-1(x),t) - 1/Nσ(t)
and the mean value theorem, we find that
|A_t^-1(x) - (x + γ(x,t) + 1/Nσ(t))| = |γ(A_t^-1(x),t) - γ(x,t)|
≤γ_x(t)_L^∞|A_t^-1(x) - x|
≤γ_x(t)_L^∞(|γ(A_t^-1(x),t)| + 1/N |σ(t)|),
for all x ∈. Moreover, the inverse function theorem implies
∂_x (A_t^-1(x)) = 1/1-γ_x(A_t^-1(x),t), ∂_x^2 (A_t^-1(x)) = γ_xx(A_t^-1(x),t)/(1-γ_x(A_t^-1(x),t))^3,
for all x ∈. Combing the latter with (<ref>) and (<ref>) we find[Throughout this proof the notation A≲ B means that there exists an N- and t-independent constant K>0 such that A≤ K B.]
|∂_x (A_t^-1(x)) - 1| ≲|γ_x(A_t^-1(x),t)|, |∂_xx(A_t^-1(x))| ≲|γ_xx(A_t^-1(x),t)|,
for all x ∈.
Using the properties (<ref>) and (<ref>) of the map A_t, the N-uniform embedding H^1_N ↪ L^∞(), Young's inequality, and the bounds (<ref>) and (<ref>) we find the inequalities
f(A_t^-1(·))_L^2_N^2
= ∫_A_t(0)^A_t(NT)|f(A_t^-1(y))|^2 y
= ∫_0^NT|f(x)|^2 A_t'(x) x
≲f_L^2_N^2,
f(A_t(·))_L^2_N^2
= ∫_0^NT|f(A_t(y))|^2 y
= ∫_A_t(0)^A_t(NT)|f(x)|^2/1-γ_x(A_t^-1(x),t) x ≲f_L^2_N^2,
for f ∈ L^2_N, and
f(A_t^-1(·))_H^2_N^2 ≲∫_A_t(0)^A_t(NT)|f(A_t^-1(y))|^2 y
+ ∫_A_t(0)^A_t(NT)|f”(A_t^-1(y))|^2 |∂_y(A_t^-1(y))|^4 y
+ ∫_A_t(0)^A_t(NT)|f'(A_t^-1(y))|^2 (|∂_y(A_t^-1(y))|^2 + |∂_y^2(A_t^-1(y))|^2) y
= ∫_0^NT|f(x)|^2 A_t'(x) x
+ ∫_0^NT|f”(x)|^2/(1-γ_x(x,t))^3 x
+ ∫_0^NT|f'(x)|^2/1-γ_x(x,t)(1 + γ_xx(x,t)^2/(1-γ_x(x,t))^4) x
≲f_H^2_N^2.
for f ∈ H^2_N.
Recalling the formulas (<ref>) and (<ref>) for the inverse- and forward-modulated perturbations and applying
the mean value theorem, Young's inequality, the N-uniform embedding H^1_N ↪ L^∞(), the bounds (<ref>) and (<ref>), and the estimates (<ref>), (<ref>) and (<ref>), we find
(·,t) - v(A_t^-1(·),t)_L^2_N ≲ϕ'_L^∞γ_x(t)_L^∞(γ(A_t^-1(·),t)_L^2_N + 1/Nσ(t)_L^2_N) ≲γ_x(t)_H^1_N,
(·,t) - v(A_t^-1(·),t)_H^2_N ≲ϕ'_W^2,∞(γ_x(t)_L^∞(γ(A_t^-1(·),t)_L^2_N + 1/Nσ(t)_L^2_N).
. 1/N + γ_x(t)_H^1_N + γ_x(A_t^-1(·),t)_L^2_N + γ_xx(A_t^-1(·),t)_L^2_N)
≲γ_x(t)_H^1_N.
Subsequently applying (<ref>) yields
(A_t(·),t) - v(·,t)_H^2_N≲(·,t) - v(A_t^-1(·),t)_H^2_N≲γ_x(t)_H_N^1.
Finally, combining the latter two estimates with (<ref>), (<ref>) and (<ref>), we obtain
v(t)_L^2_N ≲(A_t(·),t)_L^2_N + γ_x(t)_H^1_N≲(t)_L^2_N + γ_x(t)_H^1_N,
(t)_H^2_N ≲v(A_t^-1(·),t)_H^2_N + γ_x(t)_H^1_N≲v(t)_H^2_N + γ_x(t)_H^1_N,
which concludes the proof.
§ NONLINEAR STABILITY ANALYSIS
We prove our main result, Theorem <ref>, by taking
γ_nl(x,t) = γ(x,t) +1/Nσ(t),
and a suitably chosen constant σ_nl.
We apply the linear estimates, stated in Proposition <ref>, to the nonlinear iteration scheme, which was established in <ref> and consists of equations for the inverse-modulated perturbation v and the phase modulation functions σ and γ. We use the nonlinear damping estimate on the forward-modulated perturbation , obtained in Proposition <ref>, as well as the connection between the norms of the inverse- and forward-modulated perturbations established in Lemma <ref>, to control regularity in the iteration scheme.
We start by defining a template function, which controls the phase modulation functions γ [0,τ_max) → H^4_N, σ [0,τ_max) → and the forward-modulated perturbation [0,τ_max) → H^2_N in the nonlinear argument. By Proposition <ref> and Corollary <ref>, the template function η [0,τ_max) → given by
η(t) = sup_0≤ s≤ t[(1+s)^3/4((s)_H^2_N + ∂_x γ(s)_H^3_N + ∂_s γ(s)_H^2_N).
. + (1+s)^1/4γ(s)_L^2_N + (1+s)^3/2|∂_s σ(s)| + |σ(s)|],
is continuous, positive and monotonically increasing.
As usual, the key step of the approach is to prove that there exist N- and t-independent constants R > 0 and C ≥ 1 such that for all t ∈ [0,τ_max) with η(t) ≤ R we have the inequality
η(t) ≤ C(E_0 + η(t)^2).
To this end, we take R := min{R_1,R_2} with R_1 > 0 as in Proposition <ref> and R_2 > 0 as in Lemma <ref>, and assume t ∈ [0,τ_max) is such that
η(t) ≤ R.
First, we point out that by Proposition <ref> the inverse-modulated perturbation v(r), given by (<ref>), lies in H^2_N for all r ∈ [0,t]. In particular, Lemma <ref> and the inequality (<ref>) yield the bound[Throughout this proof the notation A≲ B means that there exists an N- and t-independent constant K>0 such that A≤ K B.]
v(r)_H^2_N≲η(r)/(1+r)^3/4,
for r ∈ [0,t]. Therefore, (<ref>), (<ref>) and Lemma <ref> afford the nonlinear estimate
𝒩(v,γ,∂_r γ,∂_r σ)(r)_L_N^2 ∩ L_N^1 ≲η(r)^2/(1+r)^3/2,
for r ∈ [0,t]. Applying the linear estimates in Proposition <ref> and the nonlinear bound (<ref>) to the Duhamel formulations (<ref>) and (<ref>), we arrive at
v(s)_L^2_N ≲E_0/(1+s)^3/4 + ∫_0^s η(r)^2/^μ(s-r)(1+r)^3/2 r + ∫_0^s η(r)^2/(1+s-r)^3/4(1+r)^3/2 r ≲E_0 + η(s)^2/(1+s)^3/4,
and
∂_x^ℓ∂_s^j γ(s)_L^2_N ≲E_0/(1+s)^3/4 + ∫_0^s η(r)^2/(1+s-r)^3/4(1+r)^3/2 r ≲E_0 + η(s)^2/(1+s)^3/4,
γ(s)_L^2_N ≲E_0/(1+s)^1/4 + ∫_0^s η(r)^2/(1+s-r)^1/4(1+r)^3/2 r ≲E_0 + η(s)^2/(1+s)^1/4,
for all s ∈ [0,t] and ℓ, j ∈_0 with 1 ≤ℓ + 2j ≤ 4. On the other hand, we use the Cauchy-Schwarz inequality and the properties χ_L^∞ = 1 and χ'(s) = 0 for s ∉ [1,2] and the estimates (<ref>) and (<ref>) to bound the right-hand side of (<ref>) as
|σ(s)| ≲ E_0 + ∫_0^s η(r)^2/(1+r)^3/2 r ≲ E_0 + η(s)^2,
|∂_s σ(s)| ≲ |χ'(s)| E_0 + ∫_0^s |χ'(s-r)|η(r)^2/(1+r)^3/2 r ≲E_0 + η(s)^2/(1+s)^3/2,
for all s ∈ [0,t]. Next, we combine Lemma <ref> with (<ref>) and (<ref>) to arrive at
(s)_L^2_N≲E_0 + η(s)^2/(1+s)^3/4,
for s ∈ [0,t]. Finally, by Proposition <ref> and the inequalities (<ref>), (<ref>) and (<ref>) we obtain
(t)_H^2_N^2 ≲^-t E_0^2 + (E_0 + η(t)^2)^2/(1+t)^3/2 + ∫_0^t ^-(t-s)(E_0 + η(s)^2)^2/(1+s)^3/2 s ≲(E_0 + η(t)^2)^2/(1+t)^3/2.
Hence, combining the inequalities (<ref>), (<ref>) and (<ref>) yields an N- and t-independent constant C ≥ 1 such that the key inequality (<ref>) is satisfied.
To end the proof of Theorem <ref>, we set = min{1/4C^2,R/2C} > 0 and take E_0 ∈ (0,). Then, as outlined in the proof of <cit.>, inequality (<ref>) yields η(t) ≤ 2CE_0 ≤ R for all t ∈ [0,τ_max). Consequently, (<ref>) cannot hold and we have τ_max = T_max by Proposition <ref>. Furthermore, the mean value theorem implies
ψ(t)_H^2_N ≲(t)_H^2_N + ϕ(· + γ(·,t) + 1/Nσ(t)) - ϕ(·)_H^2_N + ϕ_H^2_N
≲(t)_H^2_N + ϕ'_W^2,∞(1 + γ(t)_H^2_N + 1/√(N)|σ(t)|) + ϕ_H^2_N,
for t ∈ [0,τ_max). Hence, by (<ref>) and the fact that η(t) ≤ R for all t ∈ [0,τ_max), (<ref>) cannot hold and Proposition <ref> yields τ_max = T_max = ∞. We conclude that we have
η(t) ≤ 2CE_0 ≤ R,
for all t ≥ 0, which yields the last two estimates in (<ref>) with γ_nl as defined in (<ref>). In addition, the mean value theorem affords the inequality
ψ(t) - ϕ_H^2_N ≤(t)_H^2_N + ϕ'_W^2,∞(γ(t)_H^2_N + 1/√(N) |σ(t)|),
for t ≥ 0, where we use (<ref>). Combining the latter with (<ref>) proves the first estimate in (<ref>).
Finally, we set
σ_nl = Φ_0,v_0_L^2_N + ∫_0^∞Φ_0,𝒩(v,γ,∂_s γ,∂_s σ)(s)_L^2_N s,
which is well-defined and satisfies |σ_nl| ≲ E_0 by the Cauchy-Schwarz inequality and the estimates (<ref>) and (<ref>). Using the Cauchy-Schwarz inequality, the properties χ_L^∞ = 1 and χ(t) = 1 for t ∈ [2,∞), and estimates (<ref>) and (<ref>) we obtain
|σ(t) - σ_nl| ≲∫_t-2^∞η(s)^2/(1+s)^3/2 s ≲E_0/(1+t)^1/2,
for t ≥ 2. On the other hand, the mean value theorem and (<ref>) yield
ψ(t) - ϕ(· + 1/Nσ_nl)_H^2_N ≲ψ(t) - ϕ(· + 1/Nσ(t) )_H^2_N + 1/√(N)ϕ'_W^2,∞ |σ(t) - σ_nl|
≲(t)_H^2_N + ϕ'_W^2,∞γ(t)_H^2_N
+ 1/√(N)ϕ'_W^2,∞ |σ(t) - σ_nl|,
for t ≥ 0. The last two estimates and (<ref>) justify the remaining inequalities in (<ref>), and complete the proof.
Finally, it remains to prove Corollary <ref>.
Let ϕ, ε and M be as in Theorem <ref>. Fix N∈, let δ_N be as in (<ref>), and, for δ∈(0,δ_N), let _δ be as in Theorem <ref>. Let v_0∈ H^2_N satisfy E_0 := v_0_H^2_N ∩ L^1_N <max{,_δ}.
If _δ >, we take T_δ=0, and the proof is finished by Theorem <ref>. Otherwise, Theorem <ref> yields a constant σ_nl∈ and a global mild solution ψ∈ C([0,∞),H^1_N) of (<ref>) satisfying
ψ(·,t) - ϕ(· + 1/Nσ_nl)_H^1_N≤ ME_0 (1+t)^-1/4,
for all t>0. Thus, there exists a (minimal) time T_δ≥0 such that ME_0(1+t)^-1/4 < _δ for t ≥ T_δ so that
ψ(· - 1/Nσ_ nl,t) - ϕ(·)_H^1_N = ψ(·,t) - ϕ(· + 1/Nσ_nl)_H^1_N< _δ,
for all t≥ T_δ. If we take the initial datum ψ_0=ψ(· - 1/Nσ_ nl,T_δ)∈ H^2_N, then the perturbation v_0 := ψ_0 - ϕ∈ H^1_N satisfies v_0_H^1_N<_δ by (<ref>). By uniqueness of solutions, cf. Proposition <ref>, the solution ψ of (<ref>) with initial condition ψ(0) = ψ_0 satisfies ψ(x,t) = ψ(x - 1/Nσ_ nl,t + T_δ) for x ∈ and t ≥ 0. On the other hand, Theorem <ref> yields constants γ̃∈ and C_δ > 0 such that
ψ(·,t + T_δ) - ϕ(· + σ_ nl/N + γ̃)_H^1_N = ψ(· - σ_ nl/N,t + T_δ) - ϕ(· + γ̃)_H^1_N
= ψ(·,t) - ϕ(· + γ̃)_H^1_N
≤ C_δv_0_H^1_N^-δ t≤ C_δ M E_0 ^δ T_δ^-δ (t+T_δ),
for all t≥ 0, where we use that ṽ_0_H^1_N≤ ME_0(1+T_δ)^-1/4≤ ME_0 by (<ref>). Comparing estimates (<ref>) and (<ref>) and letting t →∞ implies γ̃ = 0. Finally, taking M_δ = C_δ M ^δ T_δ completes the proof.
§ PROOF OF PROPOSITION <REF>
Assume that ϕ is a smooth, T-periodic stationary solution of the LLE (<ref>), which is diffusively spectrally stable in the sense of Definition <ref>. We briefly recall below the main steps of the linear analysis from
<cit.> leading to the result in Proposition <ref>.
The starting point of this analysis is the Floquet-Bloch theory for NT-periodic functions developed in <cit.>. Setting[The set Ω_N is the analogue for NT-periodic functions of the interval [-π/T,π/T) in the better known Floquet-Bloch theory for functions in L^2().]
Ω_N={ξ∈[-π/T,π/T):^ξ NT=1},
a function g∈ L^2_N can be represented by the inverse Bloch formula
g(x)=1/NT∑_ξ∈Ω_N^ξ xℬ_T(g)(ξ,x),
in which ℬ_T is the T-periodic Bloch transform
defined as
ℬ_T(g)(ξ,x) =∑_ℓ∈^2πℓ x/Tĝ(ξ+2πℓ/T), ξ∈Ω_N, x∈,
where ĝ denotes the Fourier transform of g on the torus given by
ĝ(z):=∫_-NT/2^NT/2^- zyg(y) y.
Accordingly, for the operator 𝒜[ϕ], acting on L^2_N, we have the identity
ℬ_T(𝒜[ϕ]v)(ξ,x) = 𝒜_ξ[ϕ]ℬ_T(v)(ξ,x), v∈ L^2_N,
in which 𝒜_ξ[ϕ], acting on L^2_ per(0,T), are the associated Bloch operators introduced in Definition <ref>.
An important consequence of this Floquet-Bloch decomposition is the spectral decomposition
σ_L^2_N(𝒜[ϕ])=⋃_ξ∈Ω_Nσ_L^2_ per(0,T)(𝒜_ξ[ϕ]),
which characterizes the L_N^2-spectrum of 𝒜[ϕ] in terms of the union of the eigenvalues (including multiplicities) of the 1-parameter family of Bloch operators {𝒜_ξ[ϕ]}_ξ∈Ω_N.
Furthermore, a direct consequence of the diffusive spectral stability of ϕ is that there is an analytic curve λ_c(ξ) of simple eigenvalues of the Bloch operators 𝒜_ξ[ϕ], which expands as
λ_c(ξ)= aξ- d ξ^2+𝒪(|ξ|^3),
for some a∈ and d>0, while the rest of the spectrum is bounded away from the imaginary axis. The eigenfunction Φ_ξ associated with λ_c(ξ) is a smooth function, depending analytically on ξ, which expands as
Φ_ξ=ϕ'+𝒪(|ξ|).
As shown in <cit.>, both the operator 𝒜[ϕ], acting on L^2_N, and the associated Bloch operators 𝒜_ξ[ϕ], acting on L^2_ per(0,T), generate C^0-semigroups. Furthermore, we have the identity
ℬ_T(^𝒜[ϕ]tv)(ξ,x) =
(^𝒜_ξ[ϕ]tℬ_T(v)(ξ,·))(x), v∈ L^2_N,
and the representation formula
^𝒜[ϕ]tv(x) = 1/NT∑_ξ∈Ω_N^ξ x^𝒜_ξ[ϕ]tℬ_T(v)(ξ,x), v∈ L^2_N,
which is used to obtain the decomposition (<ref>) of the semigroup ^𝒜[ϕ]t. Without going into details, we only recall the formula of the principal part s_p,N(t) of the semigroup ^𝒜[ϕ] t, given by
s_p,N(t)v(x)=1/NT∑_ξ∈Ω_N ∖{0}ρ(ξ)^ξ x^λ_c(ξ)t⟨Φ_ξ,ℬ_T(v)(ξ,·)⟩_L_ per^2(0,T), v∈ L^2_N,
in which ρ is a smooth cutoff function satisfying ρ(ξ)=1 for |ξ|<ξ_1/2 and ρ(ξ)=0 for |ξ|>ξ_1, with ξ_1 ∈ [-π/T,π/T) suitably chosen, and Φ_ξ is the smooth eigenfunction of the adjoint operator 𝒜_ξ[ϕ]^* associated with the eigenvalue λ_c(ξ), normalized to satisfy ⟨Φ_ξ,Φ_ξ⟩_L_ per^2(0,T)=1. We refer to <cit.> for the formula for S_N(t) and the proof of its decay property (<ref>).
The decay properties of s_p,N(t) in (<ref>) are obtained by directly estimating the sum on the right hand side of (<ref>), using the expansion (<ref>) of the eigenvalue λ_c(ξ), noting that
|Φ_ξ, ℬ_T(v)(ξ,·)_L_ per^2(0,T)|
≲v_L_N^1, |ρ(ξ)^ξ xe^λ_c(ξ)t|≲^-dξ^2 t,
and
1/N∑_ξ∈Ω_Nξ^2n^-2dξ^2 t≲ (1+t)^-1/2-n.
for n∈_0; see <cit.> for more details.[See also <cit.> where similar estimates have been obtained in the case of localized perturbations for functions v∈ L^1()∩ L^2(), or see <cit.> for details on how to get uniform bounds on the Riemann sum.]
§ LOCAL THEORY
The goal of this appendix is to establish Proposition <ref>, which provides local existence for the phase modulation functions.
To this end, we first prove the following preliminary result.
Let ψ and T_max be as in Proposition <ref>. The mapping V L^2_N ×× [0,T_max) → L^2_N given by
V(γ,σ,t)[x] = ψ(x-γ(x)-1/Nσ,t) - ϕ(x),
is well-defined, continuous in t, and locally Lipschitz continuous in (γ,σ) (uniformly in N and t on compact subintervals of [0,T_max)).
We apply the mean value theorem to establish
V(γ_1,σ_1,t) - V(γ_2,σ_2,t)_L^2_N≤ψ(t)_W^1,∞(γ_1 - γ_2_L^2_N + 1/√(N) |σ_1 - σ_2|)
for γ_1,2∈ L_N^2, σ_1,2∈ and t ∈ [0,T_max). Hence, recalling the N-uniform embedding H^1_N ↪ L^∞() and using Proposition <ref>, it follows that V is locally Lipschitz continuous in (γ,σ) (uniformly in N and t on compact subintervals of [0,T_max)). In addition, taking γ_2,σ_2 = 0 in (<ref>) and recalling ϕ∈ L^2_N, proves that V is well-defined.
Similarly as in (<ref>), we employ the mean value theorem to obtain
V(γ,σ,t) - V(γ,σ,s)_L^2_N ≤(V(γ,σ,t) - V(γ,σ,s)) - (V(0,0,t) - V(0,0,s))_L^2_N
+ V(0,0,t) - V(0,0,s)_L^2_N
≤ψ(t) - ψ(s)_W^1,∞(γ_L^2_N + 1/√(N)|σ|) + ψ(t) - ψ(s)_L^2_N,
for γ∈ L^2_N, σ∈ and s,t ∈ [0,T_max). Continuity of V with respect to t now follows from Proposition <ref> and the embedding H^1_N ↪ L^∞().
Next, we establish the relevant local existence result.
Let ψ and T_max be as in Proposition <ref>. Moreover, let V L^2_N ×× [0,T_max) → L^2_N be the mapping defined in Lemma <ref>. Then, there exists a maximal time τ_max∈ (0,T_max] such that the integral system
γ(t) = s_p,N(t)v_0 + ∫_0^t s_p,N(t-s)(𝒬(V(γ,σ,·),γ)(s) + _̣x ℛ(V(γ,σ,·),γ,γ_t,σ_t)(s).
. + _̣x^2 𝒫(V(γ,σ,·),γ)(s)) s
γ_t(t) = ∂_t s_p,N(t)v_0 + ∫_0^t ∂_t s_p,N(t-s)(𝒬(V(γ,σ,·),γ)(s) + _̣x ℛ(V(γ,σ,·),γ,γ_t,σ_t)(s).
. + _̣x^2 𝒫(V(γ,σ,·),γ))(s) s
σ(t) = χ(t) Φ_0,v_0_L^2_N + ∫_0^t χ(t-s) (Φ_0,𝒬(V(γ,σ,·),γ)(s)_L^2_N.
. - Φ_0',ℛ(V(γ,σ,·),γ,γ_t,σ_t)(s)_L^2_N + Φ_0”,𝒫(V(γ,σ,·),γ)(s)_L^2_N) s,
σ_t(t) = χ'(t) Φ_0,v_0_L^2_N + ∫_0^t χ'(t-s) (Φ_0,𝒬(V(γ,σ,·),γ)(s)_L^2_N.
. - Φ_0',ℛ(V(γ,σ,·),γ,γ_t,σ_t)(s)_L^2_N + Φ_0”,𝒫(V(γ,σ,·),γ)(s)_L^2_N) s,
possesses a unique solution
(γ,γ_t,σ,σ_t) ∈ C([0,τ_max),H^4_N × H^2_N ××).
In addition, if τ_max < T_max, then it holds
lim_t ↑τ_max(γ,γ_t,σ,σ_t)_H^4_N × H^2_N ×× = ∞.
Finally, we have (γ,σ) ∈ C^1([0,τ_max),H^2_N ×) with ∂_t (γ,σ)(t) = (γ_t,σ_t)(t) for t ∈ [0,τ_max).
First, we note that, for any j,l ∈ℕ_0 the operators ∂_t^l s_p,N(t)∂_x^j L^2_N → H^4_N and L^2_N →, f ↦∂_t^l χ(t) ⟨∂_x^j Φ_0, f⟩_L^2_N are t- and N-uniformly bounded by Proposition <ref>. Second, it follows from Lemma <ref> that the nonlinear maps 𝒬,𝒫 H^4_N ×× [0,T_max) → L^2_N and ℛ H^4_N × H^2_N ××× [0,T_max) → L^2_N given by
𝒫(γ,σ,t) = 𝒫(V(γ,σ,t),γ), 𝒬(γ,σ,t) = 𝒬(V(γ,σ,t),γ),
and
ℛ(γ,γ_t,σ,σ_t,t) = ℛ(V(γ,σ,t),γ,γ_t,σ_t),
are continuous in t and locally Lipschitz continuous in (γ,σ) and (γ,γ_t,σ,σ_t), respectively (uniformly in N and t on compact subintervals of [0,T_max)).
Abbreviating A = (γ,γ_t,σ,σ_t), we thus find that the right-hand side of (<ref>) is of the abstract form
𝒮_1(t)A_0 + ∫_0^t(𝒮_1(t-s)𝒩̆_1(A(s),s) + 𝒮_2(t-s)𝒩̆_2(A(s),s) + 𝒮_3(t-s)𝒩̆_3(A(s),s)) s,
with A_0 = (v_0,v_0,v_0,v_0) and where 𝒮_i(t) are t- and N-uniformly bounded operators and the nonlinear maps 𝒩̆_i(A,t) are continuous in t and locally Lipschitz continuous in A (uniformly in N and t on compact subintervals of [0,T_max)). Thus, standard arguments, see for instance <cit.> or <cit.>, now yield a constant R > 0, independent of N, and a time τ > 0 such that Ψ C([0,τ],B_N(R)) → C([0,τ],B_N(R)), where Ψ(A)[t] is given by the right-hand side of (<ref>) and B_N(R) is the closed ball centered at the origin in H^4_N × H^2_N ×× of radius R, is a well-defined contraction mapping. Hence, by the Banach fixed point theorem, Ψ admits a unique fixed point, which yields a unique solution A ∈ C([0,τ],B_N(R)) of (<ref>). Letting τ_max∈ (0,T_max] be the supremum of all such τ, we obtain a maximally defined solution A ∈ C([0,τ_max),H^4_N × H^2_N ××) of (<ref>).
Next, assume by contradiction that τ_max < T_max and (<ref>) does not hold. Take t_0 ∈ [0,τ_max). Similarly as before, one proves that there exist constants δ > 0 and M > 0, which are independent of t_0, such that Ψ_t_0 C([t_0,min{t_0+δ,T_max}],B_N(M)) → C([t_0,min{t_0+δ,T_max}],B_N(M)) given by
Ψ_t_0(A) = 𝒮_1(t)A_0 + ∫_0^t_0(𝒮_1(t-s)𝒩̆_1(A(s),s) + 𝒮_2(t-s)𝒩̆_2(A(s),s) + 𝒮_3(t-s)𝒩̆_3(A(s),s) s
+ ∫_t_0^t(𝒮_1(t-s)
𝒩̆_1(A(s),s) + 𝒮_2(t-s)
𝒩̆_2(A(s),s) + 𝒮_3(t-s)𝒩̆_3(A(s),s)) s,
is a well-defined contraction mapping, which admits a unique fixed point A∈ C([t_0,min{t_0+δ,T_max}], B_N(M)). Now setting t_0 := τ_max - δ/2, it readily follows that the function Ǎ∈ C([0,min{τ_max + δ/2,T_max}],B_N(R)) given by
Ǎ(t) = A(t), t ∈ [0,τ_max - δ/2],
A(t), t ∈ [τ_max - δ/2,min{τ_max + δ/2,T_max}],
solves (<ref>), which contradicts the maximality of τ_max. We conclude that, if τ_max < T_max, then (<ref>) must hold.
Finally, using Proposition <ref>, we observe that both γ(t) and σ(t) are differentiable on [0,τ_max) with ∂_t (γ,σ)(t) = (γ_t,σ_t)(t), where we use s_p,N(0) = 0 and χ(0) = 0. This completes the proof.
|
http://arxiv.org/abs/2307.00185v1
|
20230701010720
|
An Interpretable Constructive Algorithm for Incremental Random Weight Neural Networks and Its Application
|
[
"Jing Nan",
"Wei Dai",
"Guan Yuan",
"Ping Zhou"
] |
cs.LG
|
[
"cs.LG",
"cs.AI"
] |
IEEE Transactions on Industrial Informatics
An Interpretable Constructive Algorithm for Incremental Random Weight Neural Networks and Its Application
Jing Nan, Wei Dai, Senior Member, IEEE, Guan Yuan, and Ping Zhou, Senior Member, IEEE,
This work was supported in part by the National Natural Science Foundation of China under Grant 61973306, in part by the Nature Science Foundation of Jiangsu Province under Grant BK20200086, in part by the Open Project Foundation of State Key Laboratory of Synthetical Automation for Process Industries under Grant 2020-KF-21-10. (Corresponding author: Wei Dai)
Jing Nan and Wei Dai are with the school of information and control engineering and artificial intelligence research institute, China University of Mining and Technology, Xuzhou 221116, China (e-mail: [email protected], [email protected]).
Guan Yuan is with the school of computer science and technology, China University of Mining and Technology, Xuzhou 221116, China (e-mail: [email protected]).
Ping Zhou is with the State Key Laboratory of Synthetical Automation for Process Industries, Northeastern University, Shenyang 110819, China (e-mail: [email protected]).
August 1, 2023
========================================================================================
Incremental random weight neural networks (IRWNNs) have gained attention in view of their easy implementation and fast learning. However, a significant drawback of IRWNNs is that the relationship between the hidden parameters (nodes) and the residual error (model performance) is difficult to interpret. To address this issue, this article proposes an interpretable constructive algorithm (ICA) with a geometric information constraint. First, based on the geometric relationship between the hidden parameters and the residual error, an interpretable geometric information constraint is proposed to randomly assign the hidden parameters. Meanwhile, a node pool strategy is employed to select, among the hidden parameters satisfying the proposed constraint, those that are more conducive to convergence. Furthermore, the universal approximation property of the ICA is proved. Finally, a lightweight version of ICA is presented for large-scale data modeling tasks. Experimental results on six benchmark datasets and a numerical simulation dataset demonstrate that the ICA outperforms other constructive algorithms in terms of modeling speed, model accuracy, and network structure. Besides, two practical industrial application cases are used to validate the effectiveness of ICA in practical applications.
Incremental random weight neural networks, interpretable constructive algorithm, geometric information constraint, universal approximation property, large-scale data modeling.
§ INTRODUCTION
Neural networks (NNs), a promising computing paradigm that thoroughly differs from traditional model-based computing, can learn patterns from complex data remarkably well. Therefore, it should not be surprising that NNs are applied in a variety of research fields<cit.>. The most popular NNs are deep neural networks (DNNs) and flat neural networks (FNNs). DNNs realize end-to-end learning by organically combining unsupervised layer-by-layer pre-training with supervised fine-tuning<cit.>. This learning approach gives DNNs great potential in expressive power and generalization ability, but it also leads to a time-consuming training process. Recently, due to their universal approximation ability, random weight neural networks (RWNNs), as a typical representative of FNNs, have been receiving increasing interest<cit.>. RWNNs are characterized by a two-step training paradigm, i.e., randomly assigning the hidden parameters, and evaluating the output weights by solving a system of linear equations.
Despite the aforementioned advantages, little is known about how to choose a network structure for RWNNs that fits a given modeling task. Too large a network structure will result in poor generalization, while too small a network structure will cause insufficient learning ability. Constructive algorithms always begin with a small network structure (usually one hidden node) and dynamically grow the network by adding new hidden nodes incrementally until the requirements are met<cit.>. This means that constructive algorithms are likely to offer smaller network structures for modeling tasks<cit.>. Therefore, constructive versions of RWNNs (incremental RWNNs, IRWNNs) have been successfully applied to data modeling tasks<cit.>.
From the perspective of probability theory, randomly generated hidden parameters are not necessarily suitable for IRWNNs. Therefore, a natural question arises: what hidden parameters are good for IRWNNs? Based on an algebraic study of multidimensional nonlinear functions, <cit.> proved that the relationship between input samples and hidden parameters can be expressed by nonlinear weight equations. <cit.> showed that there exists a supervisory mechanism between the hidden parameters and the input samples for better network performance. <cit.> proposed a constructive algorithm with a supervisory mechanism to randomly assign hidden parameters in a dynamical interval. <cit.> proposed a hidden parameter generation approach by analyzing the scope of the input samples and the activation function. Recently, <cit.> proposed RWNNs with compact incremental inequality constraints, i.e., CIRWN, to improve the quality of the hidden parameters. Although these approaches further improve the potential of IRWNNs, very little is known about how the hidden parameters accomplish their goals. That is, it is difficult to visualize the influence of each hidden parameter on the residual error (network performance). At present, how to interpret the predicted behavior of NNs to improve interpretability is a meaningful and important topic<cit.>. Thus, further research on interpretable constructive algorithms is important and necessary for RWNNs.
Motivated by the above analysis, this paper proposes an interpretable constructive algorithm (ICA) to help people understand the nature behind the predicted behavior of RWNNs. The main contributions are listed below:
1) The geometric relationship between the hidden parameters and the residual error is employed to build an interpretable geometric information constraint for assigning the randomized hidden parameters in the incremental construction process, and the theoretical analysis is fully discussed.
2) A node pool strategy is developed to further improve the quality of the hidden nodes by searching the hidden parameters that are more conducive to convergence.
3) Using different calculation methods of network output weights, two algorithm implementations, namely ICA and ICA+, are proposed.
The remainder of the article is organized as follows. Section 2 briefly reviews RWNNs and the constructive algorithms. Section 3 proposes an interpretable constructive algorithm and describes it in detail. In section 4, a numerical simulation dataset, six real-world datasets, an ore grinding semi-physical simulation platform, and a gesture recognition system are considered to evaluate the effectiveness and efficiency of the proposed ICA and ICA+. Finally, conclusions are drawn in section 5.
§ PRELIMINARIES
§.§ Random Weight Neural Networks
RWNNs can be regarded as a flatted network, where all the hidden parameters (the input weights and biases) are randomly assigned from a fixed interval and fixed during the training process. The output weights are evaluated by solving a system of linear equations. The theory of RWNNs is described as follows.
For a target function f:R^d→R^m, the RWNNs with L hidden nodes can be written as : f_L = Hβ, where H = [ g_1( ω _1^T· x + b_1), ⋯ ,g_L( ω _L^T· x + b_L)], T denotes matrix transpose, x is the input sample, ω _j and b_j are the input weights and biases of the j-th hidden node, respectively. j = 1, ⋯ ,L, g_j denotes the nonlinear activation function of the j-th hidden node. The output weights β are evaluated by β = H^†f_L, where β = [ β _1,β _2,...,β _L]_^T, H^† denotes the Moore-Penrose generalized inverse of H.
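To make this two-step paradigm concrete, the following minimal NumPy sketch (our own illustration, not code from the paper; the data X, targets F, hidden width L and interval bound lam are placeholders) trains and applies an RWNN with a sigmoid hidden layer:

import numpy as np

def train_rwnn(X, F, L, lam=1.0, seed=0):
    """Two-step RWNN training: random, fixed hidden layer + least-squares readout."""
    rng = np.random.default_rng(seed)
    d = X.shape[1]
    W = rng.uniform(-lam, lam, size=(d, L))      # random input weights, kept fixed
    b = rng.uniform(-lam, lam, size=L)           # random biases, kept fixed
    H = 1.0 / (1.0 + np.exp(-(X @ W + b)))       # hidden output matrix, shape (N, L)
    beta = np.linalg.pinv(H) @ F                 # output weights via Moore-Penrose inverse
    return W, b, beta

def predict_rwnn(X, W, b, beta):
    H = 1.0 / (1.0 + np.exp(-(X @ W + b)))
    return H @ beta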
§.§ Constructive Algorithms
Constructive algorithms are likely to find the minimal network structure due to their incremental construction nature. Therefore, constructive algorithms have been introduced into RWNNs, yielding IRWNNs. Specifically, assuming that an IRWNN with L - 1 hidden nodes does not reach the termination condition, a new hidden node will be generated by the following two steps:
1) The input weights ω _L and bias b_L are randomly generated from the fixed intervals [ - λ ,λ]^d and [ - λ ,λ], respectively. In particular, λ usually takes the value 1. Then, the output vector g_L of the L-th hidden node is determined by maximizing Δ = ⟨e_L - 1,g_L⟩^2/g_L^2, where e_L - 1 = f - f_L - 1 = [ e_L - 1,1,e_L - 1,2,...,e_L - 1,m] is the current network residual error and f_L - 1 is the output of the IRWNNs with L - 1 hidden nodes.
2) The output weights vector β _L of the L-th hidden node can be obtained by β _L = ⟨e_L - 1,g_L⟩/g_L^2.
If the new network residual error e_L = f - f_L does not reach the predefined residual error, further hidden nodes need to be added until the predefined residual error or the maximum number of hidden nodes is reached.
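For illustration, one such incremental step could be sketched as follows (our own NumPy example for a single-output target and a single random candidate per step; in practice g_L may be chosen among several random candidates by maximizing Δ, and all variable names are illustrative):

import numpy as np

def add_irwnn_node(X, e_prev, lam=1.0, rng=None):
    """One IRWNN step: draw a random hidden node and set its output weight
    beta_L = <e_{L-1}, g_L> / ||g_L||^2, which shrinks the residual error."""
    rng = np.random.default_rng() if rng is None else rng
    d = X.shape[1]
    w = rng.uniform(-lam, lam, size=d)               # random input weights
    b = rng.uniform(-lam, lam)                       # random bias
    g = 1.0 / (1.0 + np.exp(-(X @ w + b)))           # output vector of the new node
    beta = float(g @ e_prev) / float(g @ g)          # closed-form output weight
    e_new = e_prev - beta * g                        # updated residual error
    return w, b, beta, e_new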
§ INTERPRETABLE CONSTRUCTIVE ALGORITHM
In this section, the interpretable geometric information constraint is constructed based on the geometric relationship between the residual error and the hidden parameters, and the universal approximation property under this constraint is established by analyzing the residual error. In addition, a node pool strategy is employed to obtain hidden parameters that are more conducive to convergence. Finally, two different algorithm implementations are proposed, namely ICA and ICA+.
§.§ Interpretable Geometric Information Constraint
Theorem 1: Suppose that span(Γ) is dense in L^2 and ∀ g ∈Γ, 0 < g < v for some v ∈ R. Given 0 < σ < 1, σ = σ + rand( 1 - σ ,1), τ = 1 + σ L/1 + L and γ _L≥( 1 - τ). If g_L is randomly generated under interpretable geometric information constraint
cos ^2θ _L - 1≥γ _L⟨e_L - 1,e_L - 1⟩
The output weights β _L are evaluated by β _L = ⟨e_L - 1,g_L⟩/g_L^2. Then, we have lim _L → + ∞e_L = 0.
Proof:
Based on the above analysis, we have that
[ e_L^2 - e_L - 1^2; = e_L - 1 - β _Lg_L^2 - e_L - 1^2; = - 2⟨e_L - 1,β _Lg_L⟩ + ⟨β _Lg_L,β _Lg_L⟩; = - ⟨e_L - 1,β _Lg_L⟩^2/g_L^2; ≤ 0 ]
Then, it has been proved that the residual error e_L is monotonically decreasing as L →∞.
It follows from Eq. (1) and Eq. (2) that
[ e_L^2 - τe_L - 1^2; = ∑_q = 1^m ⟨e_L - 1,q - β _L,qg_L,e_L - 1,q - β _L,qg_L⟩; - ∑_q = 1^m τ⟨e_L - 1,q,e_L - 1,q⟩; = ( 1 - τ)∑_q = 1^m e_L - 1,q^2 - ∑_q = 1^m ⟨e_L - 1,q,g_L⟩^2/g_L^2; ≤γ _L∑_q = 1^m e_L - 1,q^2 - ∑_q = 1^m ⟨e_L - 1,q,g_L⟩^2/g_L^2; = γ _Le_L - 1^2 - ⟨e_L - 1,g_L⟩^2/g_L^2 ]
According to e_L = e_L - 1 - β _Lg_L and β _L = ⟨e_L - 1,g_L⟩/g_L^2, we have
[ ⟨e_L,g_L⟩; = ⟨e_L - 1 - β _Lg_L,g_L⟩; = ⟨e_L - 1,g_L⟩ - β _L⟨g_L,g_L⟩; = ⟨e_L - 1,g_L⟩ - ⟨e_L - 1,g_L⟩/g_L^2⟨g_L,g_L⟩; = 0 ]
Then, Eq. (4) means e_L ⊥ g_L. It can be easily observed that e_L, e_L - 1 and β _Lg_L satisfy the geometric relationship shown in Fig. 1.
In addition, based on f_L - 1 = ∑_j = 1^L - 1β _jg_j, we have
[ ∑_j = 1^L - 1β _j⟨e_L - 1,g_j⟩; = ⟨e_L - 1,∑_j = 1^L - 1β _jg_j⟩; = ⟨e_L - 1,e_L - 1 + f_L - 1⟩; = ⟨e_L - 1,e_L - 1⟩ + ⟨e_L - 1,f_L - 1⟩; = ⟨e_L - 1,e_L - 1⟩; = e_L - 1^2 ]
where f_L - 1 is orthogonal to e_L - 1. Thus, we have
[ ∃β _j⟨e_L - 1,g_j⟩≥e_L - 1^2/L - 1; < = > β _jg_j⟨e_L - 1,g_j⟩/g_j≥e_L - 1^2/L - 1; = > | ⟨e_L - 1,g_j⟩|/g_j≥e_L - 1^2/( L - 1)β _jg_j ]
where e_L - 1^2/( L - 1) is the average of e_L - 1^2.
According to the geometric relationship (Fig. 1) and Eq. (6), the following equation is obtained
[ | ⟨g_j,e_L - 1⟩|/g_j = ( e_L - 1cosθ _L - 1) ]
It follows from Eq. (6) and (7) that
cos ^2θ _L - 1≥φe_L - 1^2
where 0 < φ = 1/( ( L - 1)β _jg_j)^2 < 1. φ is sufficiently small when L - 1 is very large.
Based on the Eq. (3) and Eq. (8), the parameter γ _L is directly related to whether the residual error converges. As the modeling process proceeds, the residual error becomes smaller which makes the configuration task on w_L and b_L more challenging. Therefore, the parameter γ _L is designed as a dynamic value to ensure that Eq. (9) holds.
φ≥γ _L
It follows from Eq. (1), (3), (8), and (9) that
[ e_L^2 - τe_L - 1^2; ≤γ _Le_L - 1^2 - ⟨e_L - 1,g_L⟩^2/g_L^2; ≤γ _Le_L - 1^2 - cos ^2θ _L - 1; ≤ 0 ]
Then, we have that lim _L → + ∞e_L = 0.
Remark 1: According to Fig. 1, the complex black-box relation between e_L - 1 and g_L can be visualized using θ _L. Then, Eq. (1) has some interpretability.
§.§ Node Pool Strategy
Although the interpretable geometric information constraint ensures that the constructed network has the universal approximation property, the hidden parameters (g_L) are generated randomly in a single draw, which may not make the network residual error decrease quickly. As a result, Eq. (1) is optimized using the node pool strategy as
( cos^2θ _L)_max≥γ _L⟨e_L - 1,e_L - 1⟩
Remark 2: Eq. (11) directly selects, from many candidates (the node pool), the hidden parameters that can minimize the network residual error. However, the traditional output weight calculation method (β _L = ⟨e_L - 1,g_L⟩/g_L^2) may lead to slower convergence. Therefore, two effective methods are designed to evaluate the output weights, namely, 1) global optimization and 2) dynamic stepwise updating.
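A minimal sketch of how the constraint of Eq. (1) and the node pool selection of Eq. (11) could be implemented is given below (our own illustration for the single-output case; the choice γ_L = 1 - τ, the fixed sigma and all parameter values are assumptions rather than the paper's settings):

import numpy as np

def pick_node_from_pool(X, e_prev, L, sigma=0.9, T_max=20, lam=1.0, rng=None):
    """Draw T_max candidate nodes, keep those satisfying the geometric constraint
    cos^2(theta_{L-1}) >= gamma_L * <e_{L-1}, e_{L-1}>, and return the admissible
    candidate with the largest cos^2(theta_{L-1}), as in Eq. (11)."""
    rng = np.random.default_rng() if rng is None else rng
    d = X.shape[1]
    tau = (1.0 + sigma * L) / (1.0 + L)
    gamma_L = 1.0 - tau                              # smallest value allowed by Theorem 1
    e_norm2 = float(e_prev @ e_prev)
    best = None
    for _ in range(T_max):
        w = rng.uniform(-lam, lam, size=d)
        b = rng.uniform(-lam, lam)
        g = 1.0 / (1.0 + np.exp(-(X @ w + b)))
        cos2 = float(g @ e_prev) ** 2 / (float(g @ g) * e_norm2)   # cos^2 of the angle in Fig. 1
        if cos2 >= gamma_L * e_norm2 and (best is None or cos2 > best[0]):
            best = (cos2, w, b, g)
    return best                                       # None if no candidate is admissible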
§.§ Algorithm Implementations
In this section, two different algorithm implementations, termed ICA and ICA+, will be reported. The network structure and spatial geometry construction process of ICA are shown in Fig. 2 and Fig. 3.
§.§.§ ICA
For a target function f:R^d→R^m, assume that an ICA with L-1 hidden nodes has been constructed, i.e., f_L - 1 = ∑_j = 1^L - 1β _jg_j( ω _j^T· x + b_j). If the generated g_L makes the interpretable geometric information constraint Eq. (11) hold, and the output weights are given by
β = min_βf - ∑_j = 1^L β _jg_j
Then, we have that lim _L →∞f - f_L = 0, where f_L = f_L - 1 + β _Lg_L.
Rearranging Eq.(12), the following matrix form results
β = H^†f_L
where β = [ β _1,β _2,...,β _L]_^T, H = [ g_1,g_2,...,g_L], H^† denotes the Moore-Penrose generalized inverse of H.
Remark 3: The calculation of the Moore-Penrose generalized inverse involves the SVD, which will greatly increase computational cost. The problem is even more acute when dealing with large-scale data modeling tasks. To solve this problem, using the iteration theory of Greville<cit.>, the ICA is extended to a lightweight version, called ICA+.
§.§.§ ICA+
For a target function f:R^d→R^m, assume that the ICA with L-1 hidden nodes has been constructed, i.e., f_L - 1 = ∑_j = 1^L - 1β _jg_j( ω _j^T· x + b_j). If the generated g_L makes Eq. (11) hold, let H_L - 1= [ g_1,g_2,...,g_L-1] denote the output matrix of the hidden layer with L - 1 nodes and H_L = [ H_L - 1g_L] the output matrix of the hidden layer with L nodes. Based on the iteration theory of Greville, H_L^† can be obtained by
H_L^† = [ [ H_L - 1^† - d_Lb_L^T; b_L^T ]]
where d_L = H_L - 1^†g_L, c_L = g_L - H_L - 1d_L, b_L^T = {[ ( c_L)^† [ if c_L ≠ 0 ]; ( 1 + d_L^Td_L)^ - 1d_L^TH_L - 1^† [ if c_L = 0 ] ]..
Then, the output weights can be derived as
β = [ [ β ^previous - d_Lb_L^Tf; b_L^Tf ]]
where β ^previous denotes the output weights before a new hidden node is added. The specific implementation steps of ICA and ICA+ are shown in the pseudocode.
§ EXPERIMENTAL RESULTS
In this section, we present the performance of the proposed ICA and ICA+ as well as IRWNNs, and CIRWN on a function approximation dataset, six benchmark datasets, an ore grinding semi-physical simulation platform, and a gesture recognition system. The function approximation dataset was randomly generated by Eq. (16) defined on [0, 1]. The specifications of the seven datasets can be found in TABLE I. In addition, the experimental parameters of all algorithms are summarized in TABLE II. The above experimental parameter settings are the best solutions obtained from multiple experiments.
f( x ) = 1/((x - 0.3)^2 + 0.01) + 1/((x - 0.9)^2 + 0.04) - 6
where x ∈[ 0,1].
All the comparing experiments are implemented in MATLAB 2020a running on a PC with a 3.00 GHz Core i7 CPU and 8 GB RAM. Each experiment is repeated 30 times, and the average of the 30 experiments is reported as the final result. The sigmoid function g( u ) = 1/1 + exp( - u) is employed as the activation function of these four randomized algorithms. In addition, modeling accuracy, root mean square error (RMSE), and efficiency (the time spent on building the network) are employed to measure the performance of all randomized algorithms.
RMSE = √(1/N∑_i = 1^N ( f_i - f̃_i)^2)
where f_i denotes the real value of the output, f̃_i is the prediction, and N is the number of input samples.
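For reference, this metric can be computed directly as follows (a straightforward transcription of Eq. (17)):

import numpy as np

def rmse(f_true, f_pred):
    """Root mean square error between targets and predictions, Eq. (17)."""
    f_true, f_pred = np.asarray(f_true, float), np.asarray(f_pred, float)
    return float(np.sqrt(np.mean((f_true - f_pred) ** 2)))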
§.§ Results
§.§.§ Function Approximation Dataset
Fig. 4 shows the RMSE convergence performance of the ICA, ICA+, IRWNNs, and CIRWN on the function approximation dataset. From Fig. 4, it can be observed that the RMSE of ICA, ICA+ and CIRWN converges after dozens of hidden nodes. In addition, both ICA and ICA+ require fewer hidden nodes to achieve RMSE convergence. These results show that the two proposed randomized algorithms have an obvious advantage in structural compactness. Fig. 5 shows the kernel density function (KDF) of the estimated error when the models achieve the expected error tolerance. It can be seen from Fig. 5 that the ICA and the ICA+ perform better than the IRWNNs and the CIRWN because the KDF of ICA and ICA+ approximates the real data distribution. This means that the proposed ICA and ICA+ have better prediction ability. Fig. 6 describes the influence of different parameters λ on the KDF performance of ICA. It is evident that different λ have favorable or unfavorable effects on the KDF performance of ICA. This shows that λ is an important parameter for the KDF performance of ICA. Therefore, to achieve better KDF performance, λ should not remain fixed.
§.§.§ Benchmark Datasets
In this section, the performance of the ICA, ICA+, IRWNNs, and CIRWN is measured on six benchmark datasets. These benchmark datasets are mainly from KEEL and UCI, and their details can be found in TABLE I. The experimental parameter settings of the four randomized algorithms are given in TABLE II. TABLE III shows the time, training RMSE and testing RMSE results of these four randomized algorithms on the six benchmark datasets. As shown in TABLE III, the training RMSE of both ICA and ICA+ is lower than that of CIRWN on most datasets. This means that the interpretable geometric information constraint can help generate hidden parameters of better quality. Meanwhile, compared with the ICA, the ICA+ is slightly weaker in training and testing RMSE performance. This is because ICA+ uses an iterative update method to obtain the output weights, which depends strongly on the quality of the output weights of the first hidden node. In contrast, ICA uses the Moore-Penrose generalized inverse method to obtain the output weights, which enables ICA to obtain the globally optimal output weights after each node is added.
When the same hidden nodes are added, the training time of the proposed ICA and ICA+ is lower than the training time of CIRWN, especially ICA+. Compared with ICA, ICA+ reduced the training time by 14.29%, 50.56%, 67.58%, 43.31%, 67.02%, and 75.96% on Iris, Segment, HAR, Compactiv, Concrete, and Winequality, respectively. Therefore, the ICA+ is superior to other randomized algorithms in terms of lightweight. Fig. 7 depicts the effect of the hidden node pool size T_max on the RMSE of ICA. It shows that too large or too small T_max leads to an increase in RMSE. It is worth noting that we do not show the training time. In fact, T_max is related to the efficiency of the network because it controls the number of hidden parameters obtained from the random interval. In our experiments, this parameter was set with careful trade-offs.
§.§ Hand Gesture Recognition Case
Hand gesture recognition (HGR) is a hot topic in pattern recognition due to its wide range of applications, such as virtual reality, health monitoring and smart homes<cit.>. In this section, we evaluate the performance of ICA and ICA+ on our own developed HGR system. The HGR system framework is shown in Fig. 8 and includes a hardware module and a software module. The hardware module consists of gesture data acquisition and data transmission, as shown in Fig. 9. The software module includes feature extraction as well as modeling and recognition, as shown in Fig. 10. A gesture dataset with a total sample size of 5136, 64 features and 24 categories was obtained through the feature extraction of the software module<cit.>. The dataset has been divided into a training dataset and a testing dataset.
§.§.§ Parameter Configuration
For IRWNNs, the hidden parameters are randomly assigned from the fixed interval [-150,150]. The random parameters of the other three randomized algorithms are selected from a variable interval ζ = {150:10:200}. For ICA, ICA+, and CIRWN, the maximum number of iterations is set to L_max = 500, and the maximum number of random configurations is set to T_max = 20.
§.§.§ Comparison and Discussion
The average RMSE of the four algorithms over thirty repeated experiments on the HGR testing dataset is displayed in Fig. 11. It can be found that the ICA and ICA+ have good stability in terms of RMSE. As can be seen from Fig. 11, the difference between the maximum RMSE and the minimum RMSE for IRWNNs and CIRWN is 0.15 and 0.02, respectively. TABLE IV shows the experimental results of the IRWNNs, ICA, ICA+, and CIRWN on the HGR system. It can be seen from TABLE IV that, compared with IRWNNs and CIRWN, the ICA and ICA+ have a great advantage in terms of training time and classification accuracy. Based on the comparison and analysis of these results, we can conclude that the proposed ICA and ICA+ are more effective than IRWNNs and CIRWN for HGR tasks.
§.§ Ore grinding Case
Ore grinding achieves the monomer dissociation of useful minerals from lode minerals, and its process is illustrated in Fig. 12<cit.>. The mechanisms of ore grinding are complicated, which makes it hard to establish a mathematical model. Therefore, it is essential to establish a soft sensor for monitoring the ore grinding process. As TABLE V shows, five process variables are chosen to establish the ore grinding model. These process variables are collected from the ore grinding semi-physical simulation platform (see Fig. 13), and 20000 training samples and 5000 test samples have been obtained. The purpose of constructing the soft sensor model of the ore grinding process is to achieve the following nonlinear mapping:
PS = f( R_1,R_2,R_3,α _1,α _2)
§.§.§ Parameter Configuration
For IRWNNs, the hidden parameters are randomly assigned from the fixed interval [-150,150]. The random parameters of the other three randomized algorithms are selected from a variable interval ζ = {150:1:500}. For ICA, ICA+, and CIRWN, the maximum number of iterations is set to L_max = 100, and the maximum number of random configurations is set to T_max = 20.
§.§.§ Comparison and Discussion
Fig. 14 shows the probability density function (PDF) of the estimation error for the four models when they reach the expected error tolerance. In particular, ICA and ICA+ share a PDF curve due to their similar results. It can be seen from Fig. 14 that the ICA and ICA+ perform better than the other two models, because their PDF curve is approximately a normal distribution compared to those of IRWNNs and CIRWN. This means that the ICA and ICA+ achieve the best performance on the ore grinding system. Besides, TABLE VI shows the experimental results of the IRWNNs, ICA, ICA+, and CIRWN on the ore grinding semi-physical simulation platform. The modeling times of ICA, ICA+, CIRWN, and IRWNNs are 3.29s, 1.91s, 3.93s, and 0.57s, respectively. Compared with ICA and CIRWN, ICA+ achieves the minimum training time while maintaining the desired accuracy. Compared with the CIRWN, the ICA also has an advantage in training time. It follows from the above experimental results and analysis that the proposed ICA and ICA+ can obtain superior performance in terms of generalization and training time. Moreover, these remarkable merits make the ICA and ICA+ a good choice for ore grinding.
§ CONCLUSION
In this paper, an interpretable constructive algorithm (ICA) is proposed to visualize the contribution of each hidden parameter to the residual error and thereby improve the interpretability of the predicted behavior of RWNNs. In ICA, the hidden parameters are randomly assigned under the interpretable geometric information constraint with a node pool strategy. Further, ICA is extended to ICA+ in order to reduce the computational cost. In particular, the difference between ICA+ and ICA is that ICA+ uses a more lightweight and efficient iterative update method to evaluate the output weights, while ICA uses a globally optimal approach. Experimental results on seven datasets, a hand gesture recognition system and an ore grinding semi-physical simulation platform show that ICA and ICA+ can effectively reduce computational cost and achieve better network performance than other constructive algorithms.
1
IEEEtran
ref1
X. Meng, J. Tang and J.-F. Fei, “NOx emissions prediction with a brain-inspired modular neural network in municipal solid waste incineration processes,” in IEEE Transactions on Industrial Informatics., vol. 18, pp. 4622–4631, 2022.
ref2
Z.-Q. Geng, Z.-W. Chen, Q.-C. Meng and Y.-M. Han, “Novel transformer based on gated convolutional neural network for dynamic soft sensor modeling of industrial processes,” in IEEE Transactions on Industrial Informatics., vol. 18, pp. 1521–1529, 2022.
ref3
X.-F. Yuan, L. Li, Y.-L. Wang, “Nonlinear dynamic soft sensor modeling with supervised long short-term memory network,” in IEEE Transactions on Industrial Informatics., vol. 16, pp. 3168–3176, 2020.
ref4
H.-F. Zhang, Y. Dong, C.-X. Dou and G.-P. Hancke, “PBI based multi-objective optimization via deep reinforcement elite learning strategy for micro-grid dispatch with frequency dynamics,” in IEEE Transactions on Power Systems., vol. 38, pp. 488–498, 2023.
ref5
Y.-H. Jia, S. Kwong and R. Wang, “Applying exponential family distribution to generalized extreme learning machine,” in IEEE Transactions on Systems, Man, and Cybernetics: Systems., vol. 50, pp. 1794–1804, 2020.
ref6
Y.-H. Pao and Y. Takefuji, “Functional-link net computing: theory, system architecture, and functionalities,” in Computer., vol. 25, pp. 76–79, 1992.
ref7
Y.-H. Pao, G.-H. Park, D.-J. Sobajic, “Learning and generalization characteristics of the random vector Functional-link net,” in Neurocomputing., vol. 6, pp. 163–180, 1994.
ref8
B. Igelnik and Y.-H. Pao, “Stochastic choice of basis functions in adaptive function approximation and the functional-link net,” in IEEE Transactions on Neural Networks., vol. 6, pp. 1320–1329, 1995.
ref9
F. Han, J. Jiang, Q. H. Ling, and B. Y . Su, “Stochastic choice of basis functions in adaptive function approximation and the functional-link net,” in Neurocomputing., vol. 335, pp. 261–273, 2019.
ref10
X. Wu, P. Rozycki and B. M. Wilamowski, “A hybrid constructive algorithm for single-layer feedforward networks learning,” in IEEE Transactions on Neural Networks and Learning Systems., vol. 26, pp. 1659–1668, 2015.
ref11
L.-Y. Ma and K. Khorasani, “Insights into randomized algorithms for neural networks: Practical issues and common pitfalls,” in IEEE Transactions on Neural Networks and Learning Systems., vol. 16, pp. 821–833, 2005.
ref12
G. Feng, G.-B. Huang, Q. Lin and R. Gay, “Error minimized extreme learning machine with growth of hidden nodes and incremental learning,” in IEEE Transactions on Neural Networks., vol. 20, pp. 1352-1357, 2009.
ref13
Dudek G, “A constructive approach to data-driven randomized learning for feedforward neural networks,” in Applied Soft Computing., vol. 112, pp. 107797, 2021.
ref14
S. Ferrari and R.-F. Stengel, “Smooth function approximation using neural networks,” in IEEE Transactions on Neural Networks., vol. 16, pp. 24–38, 2005.
ref15
I.-Y. Tyukin and D.-V. Prokhorov, “Feasibility of random basis function approximators for modeling and control,” in 2009 IEEE Control Applications, (CCA) & Intelligent Control., 2009, pp. 1391–1396.
ref16
D.-H. Wang and M. Li, “Stochastic configuration networks: fundamentals and algorithms,” in IEEE Transactions on Cybernetics., vol. 47, pp. 3466–3479, 2017.
ref17
Dudek G, “Generating random weights and biases in feedforward neural networks with random hidden nodes,” in Information Sciences., vol. 481, pp. 33-56, 2019.
ref18
Q.-J. Wang, W. Dai, P. Lin and P. Zhou, “Compact incremental random weight network for estimating the underground airflow quantity,” in IEEE Transactions on Industrial Informatics., vol. 13, pp. 426–436, 2022.
ref19
M. Islam, D.-T. Anderson, A.-J. Pinar, T.-C. Havens, G. Scott and J.-M. Keller, “Enabling explainable fusion in deep learning with fuzzy integral neural networks,” in IEEE Transactions on Fuzzy Systems., vol. 28, pp. 1291–1300, 2020.
ref20
H. Sasaki, Y. Hidaka and H. Igarashi, “Explainable deep neural network for design of electric motors,” in IEEE Transactions on Magnetics., vol. 57, pp. 1–4, 2021.
ref21
C.-L.-P. Chen and Z.-L. Liu, “Broad learning system: An effective and efficient incremental learning system without the need for deep architecture,” in IEEE Transactions on Neural Networks and Learning Systems., vol. 29, pp. 10–24, 2018.
ref22
S.-L. Issa, Q.-M. Peng and X.-G. You, “Emotion classification using EEG brain signals and the broad learning system,” in IEEE Transactions on Systems, Man, and Cybernetics: Systems., vol. 51, pp. 7382–7391, 2021.
ref23
L. Cheng, Y. Liu, Z.-G. Hou, M. Tan, D. Du and M. Fei, “A rapid spiking neural network approach with an application on hand gesture recognition,” in IEEE Transactions on Cognitive and Developmental Systems., vol. 13, pp. 151–161, 2021.
ref24
H. Cheng, L. Yang and Z. Liu, “Survey on 3D hand gesture recognition,” in IEEE Transactions on Circuits and Systems for Video Technology., vol. 26, pp. 1659–1673, 2016.
ref25
G. Yuan, X. Liu, Q. Yan, S. Qiao, Z. Wang and L. Yuan, “Hand gesture recognition using deep feature fusion network based on wearable sensors,” in IEEE Sensors Journal., vol. 21, pp. 539–547, 2021.
ref26
W. Dai, X.-Y. Zhou, D.-P. Li, S. Zhu and X.-S. Wang, “Hybrid parallel stochastic configuration networks for industrial data analytics,” in IEEE Transactions on Industrial Informatics., vol. 18, pp. 2331–2341, 2022.
|
http://arxiv.org/abs/2307.02126v1
|
20230705090514
|
Robust Graph Structure Learning with the Alignment of Features and Adjacency Matrix
|
[
"Shaogao Lv",
"Gang Wen",
"Shiyu Liu",
"Linsen Wei",
"Ming Li"
] |
cs.LG
|
[
"cs.LG",
"stat.ML"
] |
Robust Graph Structure Learning with the Alignment of Features and Adjacency Matrix
====================================================================================
[1]Department of Statistics and Data Science, Nanjing Audit University, China.
[2]University of Electronic Science and Technology of China, China.
[3]School of Astronautics, Northwestern Polytechnical University, China.
[4]Key Laboratory of Intelligent Education Technology and Application of Zhejiang Province, Zhejiang Normal University, Jinhua, China; Key Laboratory of Scientific and Engineering Computing (Ministry of Education), Shanghai Jiao Tong University, China.
To improve the robustness of graph neural networks (GNN), graph structure learning (GSL) has attracted great interest due to the pervasiveness of noise in graph data. Many approaches have been proposed for GSL to jointly learn a clean graph structure and corresponding representations.
To extend the previous work, this paper proposes a novel regularized GSL approach, particularly with an alignment of feature information and graph information, which is motivated mainly by our derived lower bound of node-level Rademacher complexity for GNNs.
Additionally, our proposed approach incorporates sparse dimensional reduction to leverage low-dimensional node features that are relevant to the graph structure. To evaluate the effectiveness of our approach, we conduct experiments on real-world graphs. The results demonstrate that our proposed GSL method outperforms several competitive baselines, especially in scenarios where the graph structures are heavily affected by noise. Overall, our research highlights the importance of integrating feature and graph information alignment in GSL, as inspired by our derived theoretical result, and showcases the superiority of our approach in handling noisy graph structures through comprehensive experiments on real-world datasets.
§ INTRODUCTION
Graph neural networks (GNNs) have received increasing attention in recent years and have achieved remarkable performance across various tasks, including node classification <cit.>, recommendation systems <cit.>, and information retrieval <cit.>. In essence, GNNs employ a message-passing framework, wherein node embeddings are derived through the aggregation and transformation of neighboring embeddings.
The graph structure is what distinguishes GNNs from traditional neural network models. The success of vanilla GNNs on graph data relies heavily on one fundamental assumption, i.e., that the original graph structure is reliable <cit.>. In practice, however, the graph structure often contains noise due to measurement errors or adversarial attacks <cit.>. Many (graph) neural networks are susceptible to such noise <cit.>, and noise in the graph significantly diminishes the quality of the representations produced by deep GNN models. Consequently, GNNs trained on noisy graphs are hard to trust in risk-critical applications such as financial management and medical analysis. It is therefore important to develop robust GNN models that can effectively counter adversarial attacks and mitigate the impact of noise.
Graph structure learning (GSL), which aims to simultaneously learn an optimized graph structure and corresponding representations, has achieved considerable success in recent years. This is achieved by modifying the graph structure, particularly through operations such as adding, deleting, or rewiring edges <cit.>. In previous related literature, a fundamental concern in developing efficient GSL methods for GNN models is to generate a “clean” graph structure for learning representations and downstream tasks. However, one controversial and challenging issue for GSL lies in the criteria for a clean graph structure.
It is frequently observed that real-world graphs often share specific properties. For instance, many real-world clean graphs tend to possess characteristics such as low-rankness and sparsity <cit.>. However, previous studies have overlooked the inherent relationships between node features and the underlying graph structure. Taking social networks as an example, individuals with similar hobbies are more likely to be friends. To effectively integrate node features with the raw graph structure, a core idea for GSL is to learn an encoding function that assigns edge weights based on pairwise distances between node features or presentations. Subsequently, the final graph structure for GNNs is refined by incorporating both the original graph structure and the learned one. Existing studies mainly consist of various metric learning approaches for GSL <cit.>.
In this paper, our core idea for GSL is primarily inspired by a novel theoretical finding concerning the Rademacher complexity of GNNs, a widely used notion that characterizes the generalization capacity of a learning algorithm. Specifically, <ref> in <ref> demonstrates that the lower bound of the Rademacher complexity of GNNs depends on the alignment between the feature and graph information. Furthermore, practical problems often involve high-dimensional node features, while not all variables are necessarily associated with the graph structure. Taking this into account, the distance learning method for GSL proposed in this paper accounts for both the low dimensionality and the sparsity of node features. In summary, we design robust graph neural networks by embedding the alignment between feature and graph information, and by leveraging the sparsity and low dimensionality of the features.
Contributions.
In particular, this paper concentrates on semi-supervised classification models that utilize graph convolutions such as graph convolutional networks (GCNs, ), a class of popular graph models that bridge the gap between spectral and spatial domains. Our method is also applicable to both unsupervised and supervised graph models. For notational simplicity, we assume the node feature is noiseless. Overall, our contributions to GSL can be summarized as follows:
* We first establish a minimax lower bound of node-level Rademacher complexity of GCNs. This theoretical result provides valuable insights into the complexity of GCN models and serves as a foundation for our subsequent work. Building on this theoretical finding, we further propose a novel robust graph neural network for GSL. Our method takes into account the degree of alignment between the graph structure and node features, as well as the low dimensionality and sparsity of node features. By considering these factors, we aim to enhance the performance and robustness of GNN models in the context of GSL.
* A suitable graph structure is a fundamental prerequisite for the success of GNNs. To the best of our knowledge, we are the first to consider the alignment between the graph structure and node features in the GSL literature. By addressing this crucial aspect, we contribute to filling a gap in the existing literature and advancing the understanding of GSL methods.
* We conduct extensive experiments on three real-world datasets to evaluate the effectiveness of our proposed method. The results demonstrate the superiority of our method, further validating its effectiveness in GSL tasks against several competitive methods.
Organization.
The rest of the paper is organized as follows. <ref> reviews some of the related work. <ref> presents several basic notations used on GNNs and introduces the theoretical results concerning the Rademacher complexity of GCNs. We further outline the methodology for GSL. Additionally, we present an alternative optimization approach for our proposed method in <ref>. <ref> presents the experiment results of our proposed method. Finally, we conclude the work in <ref>.
Notation.
The notation used in the paper is as follows:
For a vector x, ‖x‖_2 refers to the standard norm in Euclidean space.
For a matrix A, ‖A‖_2 and ‖A‖_F denote the spectral norm and Frobenius norm of A, respectively.
These notations allow for a clear and concise representation of vector and matrix norms, which are essential in the theoretical analysis and formulation of the proposed methods in the paper.
§ RELATED WORK
Graph Neural Networks.
Common GNNs are divided into two categories: spectral GNNs and spatial GNNs (see <cit.> for a related survey). In graph spectral theory, spectral GNNs are a kind of GNN that designs graph signal filters in the spectral domain. For example, based on the graph Laplacian, <cit.> propose the graph convolution operation in the Fourier domain. <cit.> use Chebyshev polynomials as the convolution filter. <cit.> utilize the first-order approximation of Chebyshev that achieves a fast approximate convolution on graphs. <cit.> reduce the excess complexity of graph convolution into a single linear model, which achieves state-of-the-art performance in overall predictive accuracy.
On the other hand, using adjacency neighbors, spatial GNNs can intuitively define convolution operations on graphs. Specially, <cit.> utilize an attention mechanism to aggregate representations of neighbors. <cit.> first sample the neighbors and then aggregate the information to generalize the graph convolution. <cit.> implement importance sampling on each convolutional layer, which improves learning efficiency.
Graph Structure Learning.
To alleviate the dependence of learning GNNs on the graph structure, recent efforts have been made for GSL (see <cit.> for a related survey), which learn the graph structure and GNN parameters jointly.
In particular, <cit.> model each edge in the adjacency matrix as a parameter, and learned these together with the GNN parameters in a two-level way. <cit.> propose a similarity-based GSL with the help of node features. <cit.> learn the metrics to generate the graph structure between node features and GNN embeddings iteratively. <cit.> jointly learn a structural graph and a robust graph network model with graph properties. However, these aforementioned GSL methods do not fully consider the matching between node features and graph structure.
§ METHODOLOGY
§.§ Preliminaries
Consider an undirected network or graph 𝒢=(𝒱,ℰ), where 𝒱={ν_1,ν_2,...,ν_n } denotes the set of nodes with |𝒱|=n and ℰ represents the set of edges between nodes. The edges in the graph quantify certain relationships among the data, such as correlations, similarities, or causal dependencies. The graph can be equivalently described by a (weighted) adjacency matrix A ∈ℝ^n × n. In this paper, we focus on the standard binary setting where A_ij=1 indicates the existence of an edge between node i and node j, and A_ij=0 otherwise.
In addition to the edge information over the graph, the typical supervised setting assigns a pairwise feature (x, y) ∈ℝ^d×𝒞 at each node. The main task of such graph learning is to learn a function f_Θ: 𝒱→𝒞, parameterized by Θ, which fully exploits the underlying pattern based on useful information from the graph structure and the other regular samples. Consider an input feature matrix X = (x_1, x_2,..., x_n)^T ∈ℝ^n× d, where x_i represents the attribute feature of node i.
In a semi-supervised setting, only some labels of nodes can be observed, denoted by 𝒱_m={ν_1,ν_2,...,ν_m} with m≪ n, and the corresponding labels are given by Y_L={y_1,y_2,...,y_m}. The objective function for GNNs can be formulated as:
min_Θ ℒ_gnn(Θ, A, Y_L, X) = ∑_ν_i∈𝒱_m ℓ(f_Θ(X, A)_i, y_i),
where ℓ:ℝ×ℝ→ℝ^+ is a loss function, usually, the least square function is used for regression, while the cross entropy is used for classification.
For clarity, this paper focuses on GCNs originally introduced in <cit.>, although
our idea can be easily extended to other GNNs.
Specifically, a two-layer GCN with Θ = (W_1, W_2) implements f_Θ as
f_Θ(X, A) = σ_2(Â σ_1(Â X W_1) W_2),
where Â = D̂^-1/2(A+I)D̂^-1/2, D̂ is the diagonal degree matrix of A+I,
and σ_1,σ_2 are two activation functions such as ReLU and softmax.
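For concreteness, a minimal NumPy sketch of this two-layer forward pass is given below, assuming σ_1 is ReLU and σ_2 is a row-wise softmax; the helper name and the dense-matrix implementation are ours, not part of the paper.

import numpy as np

def gcn_forward(A, X, W1, W2):
    # Two-layer GCN: softmax(A_hat @ relu(A_hat @ X @ W1) @ W2).
    n = A.shape[0]
    A_self = A + np.eye(n)                                        # add self-loops
    d_inv_sqrt = 1.0 / np.sqrt(A_self.sum(axis=1))
    A_hat = A_self * d_inv_sqrt[:, None] * d_inv_sqrt[None, :]    # D^{-1/2}(A+I)D^{-1/2}
    H = np.maximum(A_hat @ X @ W1, 0.0)                           # sigma_1 = ReLU
    Z = A_hat @ H @ W2
    Z -= Z.max(axis=1, keepdims=True)                             # numerically stable softmax
    expZ = np.exp(Z)
    return expZ / expZ.sum(axis=1, keepdims=True)                 # sigma_2 = row-wise softmax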
In this paper, we are primarily interested in the case where the available graph matrix is full of noise, due to measurement error or adversarial attack. It is known that existing standard GNNs are sensitive to noise, and even a small amount of noise in the graph can propagate to neighboring nodes, impacting the embeddings of many nodes. To improve the robustness of GNN models, one key idea is to produce a denoised graph structure that can be used for learning representations.
As previously discussed, to deal with the GLS problem, most of the existing work imposed prior information for Θ as one regularization of <ref>, such as low rank assumption and sparse structure <cit.>.
Within the framework of empirical risk minimization augmented with a regularization term, we introduce a novel regularization term that captures the alignment between the graph structure and node features. This idea is motivated by both an upper bound and an additional lower bound on the generalization performance of GNNs, which will be discussed in detail in the subsequent subsection.
§.§ Motivation from Generalization Bounds
A predictor with a generalization guarantee is closely related to the complexity of its hypothesis space. We adopt (empirical) Rademacher complexity to measure the functional complexity, which can be used to directly obtain one generalization error. For a function set ℱ defined over graph 𝒢, the empirical Rademacher complexity is defined as
ℛ(ℱ):=𝔼_ϵ[1/m sup_f∈ℱ|∑_j=1^mϵ_j f(x_j)| | x_1, x_2, ..., x_N],
where {ϵ_i}_i=1^m is an i.i.d. family (independent of x_i) of Rademacher variables. Note that the conditional expectation here is taken with respect to {ϵ_i}_i=1^m given that {x_i}_i=1^n is fixed, and is not limited to the supervised input data over 𝒱_m.
Since the neighbor representation of graph shift operators is maintained, the t-th output of the first layer in <ref> can be written as a vector form
σ(∑_l=1^d w_l^(t)∑_j=1^n Â_ν j x_jl)=σ(∑_j∈ N(ν) Â_ν j⟨ x_j, w^(t)⟩),
where N(ν) denotes the set of neighbors of ν, and W_1=(w^(1),w^(2),...,w^(k))∈ℝ^d× k is represented in a column-wise manner. Note that we write σ=σ_1=σ_2 for notational simplicity.
Thus, the class of functions defined over the node set 𝒱_L with norm constraints coincides with
ℱ_D,R:={f(x_i)
=σ(∑_t=1^k w_2^(t)∑_v=1^n Â_iv×σ(∑_j∈ N(v) Â_vj⟨ x_j, w^(t)⟩) ),
i∈[L],
‖W_1‖_F≤ R, ‖w_2‖_2≤ D },
where the Frobenius norm of a matrix is given by ‖W_1‖_F^2:=∑_ij W_ij^2=∑_t=1^k‖w^(t)‖_2^2. Bounding the population Rademacher complexity over ℱ_D,R is quite challenging, mainly because each output h_i^(2) depends on all the input features that are connected to node ν_i, as shown in <ref>. Although some upper bounds of the Rademacher complexity for specific GNNs have been provided in <cit.>, these upper bounds are arguably suboptimal in some respects, and therefore lack explanatory power and persuasiveness.
As a necessary supplement, we now provide a minimax lower bound of Rademacher complexity over a two-layer GCN, to reveal some essential factors together with their tight upper bounds stated in <ref>.
Without loss of generality, we assume that the number of neighbors is equal for all nodes, denoted by q.
Let 𝐗_v=(𝐱̃_1^T, …, 𝐱̃_q^T)^T ∈ℝ^q × d be the feature matrix of the nodes in 𝒢_v, where the 𝐱̃_i's denote the input data reordered according to the neighbors of node ν. We now state our main result.
Let ℱ_D,R be a class of GCNs with one hidden layer, where the parameter matrix and the parameter vector satisfy
‖W_1‖_F≤ R and ‖w_2‖_2≤ D respectively. Then there exists a choice of l-Lipschitz activation function, data points {x_i}_i=1^n and a family of given graph convolutional filters, such that
ℛ(ℱ_D,R)≥ l^2BDR/√(m) min_k∈ [q]{‖Â_· k‖_2∑_t=1^q Â_kt},
where the constant B:=max_i∈ [N]‖x_i‖_2.
<ref> indicates that the lower bound of ℛ(ℱ_D,R) depends on the number of labels, the degree distribution of the graph, and the choice of the graph convolution filter. It is interesting to observe that, the above bound is independent of the graph size (n) in general.
It is also worth noting that, for the two-layer neural network with width k, our
lower bound only has an explicit dependence on the Frobenius norm of the parameter matrix,
while is independent of the network width. Importantly, the matching between the graph matrix and the features plays a crucial role in determining the best-case generalization performance of GCN.
§.§ Method Formulation
In this subsection, we use the metric learning approach <cit.> to update the graph structure based on the input features. This involves deriving edge weights through learning a metric function that measures the pairwise similarity of representations.
We define a nonnegative function ϕ: 𝒳×𝒳→ℝ^+ between data x_i and x_j by
ϕ(x_i,x_j) =√((a∘(x_i-x_j))^T W^T W (a∘(x_i-x_j))),
S_ij =exp(-ϕ(x_i,x_j)^2/2τ^2),
where the symbol ∘ denotes the element-wise multiplication, namely, a∘x:=(a_1 x^(1),...,a_d x^(d)). The vector a is a trainable parameter with a sparsity constraint, which allows us to select only a few relevant features for the graph structure. The matrix W ∈ℝ^p × d is also trainable and used for projecting node embeddings into a latent space, where p ≤ d for dimension reduction.
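As an illustration, S can be computed in a fully vectorized way (the same trick is spelled out in the appendix); the sketch below assumes a ∈ ℝ^d and W ∈ ℝ^{p×d}, with hypothetical helper names.

import numpy as np

def learned_similarity(X, a, W, tau):
    # S_ij = exp(-phi(x_i, x_j)^2 / (2 tau^2)) with phi the weighted, projected distance.
    Z = (X * a) @ W.T                                            # row i equals W (a o x_i)
    G = Z @ Z.T                                                  # Gram matrix
    sq = np.diag(G)
    D2 = np.maximum(sq[:, None] + sq[None, :] - 2.0 * G, 0.0)    # squared pairwise distances
    return np.exp(-D2 / (2.0 * tau ** 2))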
We introduce feature selection to the GSL process because not all features are necessarily related to the graph structure. In practical problems, such as chemical and molecular graphs, there can be strong heterophily, where certain features of connected nodes exhibit significant variation <cit.>. Our sparsity-based feature selection approach is different from previous sparse GSL methods <cit.>, where sparsity primarily refers to the adjacency matrix.
The learned feature-based matrix S is then combined with the original structure A to form a new adjacency matrix in an interpolation manner:
Ã=(1-α)A+α S,
where α∈[0,1] is a tuning hyperparameter that mediates the influence of the learned structure.
Based on the basic notation described above, we introduce a metric-based regularizer by
ℒ_ss(a,W): =1/2∑_i,j=1^n‖x_i-x_j‖_2^2 S_ij+λ‖a‖_1
=tr(X^T(D_S-S)X)+λ‖a‖_1,
where D_S is the diagonal degree matrix of S. The first term in <ref> enforces a certain smoothness of the learned graph structure <cit.>, while the second term is used to generate sparse parameters associated with the input features.
In addition to the graph update based on some of the input features, it is worth mentioning that the alignment of features and graph plays a significantly positive role in the generalization performance of general GNNs, as shown in our result in <ref>, as well as in an upper bound of the transductive Rademacher complexity of GNNs <cit.>. In the minimax sense, as ‖ÃX‖_2 becomes smaller, the corresponding minimax generalization error of various GNNs becomes sharper.
Inspired by this theoretical finding, we introduce another regularizer with respect to the graph constraint:
ℒ_align(Ã,X)=‖ÃX‖_2.
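The two regularizers above can be evaluated as follows; this is a sketch under our reading of the formulas, and in particular the spectral-norm form of ℒ_align is an assumption rather than a formula taken directly from the source.

import numpy as np

def smoothness_sparsity_loss(X, S, a, lam):
    # L_ss(a, W) = tr(X^T (D_S - S) X) + lam * ||a||_1, with D_S the degree matrix of S.
    D_S = np.diag(S.sum(axis=1))
    return np.trace(X.T @ (D_S - S) @ X) + lam * np.abs(a).sum()

def alignment_loss(A_tilde, X):
    # L_align = ||A_tilde X||_2 (spectral norm), our assumed form of the alignment penalty.
    return np.linalg.norm(A_tilde @ X, 2)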
Therefore, our final objective can be formulated by combining <ref>:
min_Θ,a,W ℒ_gnn(Θ,Ã,Y_L, X)
+γ_1ℒ_ss(a,W)
+γ_2ℒ_align(Ã,X).
Dense graphs not only lead to a heavy computational burden but may also contain noise. Hence, it is common to prune edges according to their weights as a post-processing step, which results in either a kNN graph (i.e., each node keeps at most k neighbors) or an ϵNN graph (i.e., edges whose weights are less than ϵ are discarded).
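For instance, the kNN variant can be realized with a few lines (a hypothetical helper, not the authors' code):

import numpy as np

def knn_prune(A_tilde, k):
    # Keep only the k largest-weight neighbors of each node, then symmetrize.
    pruned = np.zeros_like(A_tilde)
    for i in range(A_tilde.shape[0]):
        keep = np.argsort(A_tilde[i])[-k:]
        pruned[i, keep] = A_tilde[i, keep]
    return np.maximum(pruned, pruned.T)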
§.§ Numerical Algorithm
Since it is difficult to optimize <ref> directly, we consider an alternate optimization algorithm associated with GNN parameters Θ and parameters ,, respectively. Our proposed framework consists of the following two steps.
Step 1: update (a, W).
Note that the GNN parameters are independent of (a, W); the learned graph structure enters only through the loss. Fixing Θ, the objective function in <ref> to update (a, W) becomes:
min_a, W ℒ_gnn(Θ,Ã,Y_L, X) +γ_1ℒ_ss(a,W)+γ_2ℒ_align(Ã,X).
Step 2: update Θ.
Once the metric over the node features has been computed and the adjacency matrix has been updated to Ã, we can reduce <ref> to the empirical loss:
min_Θ ℒ_gnn(Θ,Ã,Y_L, X).
Before presenting our algorithm, we introduce the following basic notation. The identity matrix is denoted by 𝐄, and 𝐈 is the all-ones vector.
The function f denotes the metric-learning map with the Gaussian kernel.
The proximal algorithm <cit.> is a forward-backward method designed to handle points where the objective is non-differentiable.
It can be presented as follows:
prox_ηλ‖·‖_1 (a) = sgn(a) ⊙ max(|a| - ηλ, 0),
where η>0 is the learning rate, sgn is the sign function, λ is the weight of the ℓ_1-norm, and ⊙ is the Hadamard product.
In <ref>, we start by initializing W ←𝐄 and a ←𝐈. Following that, we calculate the new adjacency matrix Ã using <ref> (Lines 4-5). Subsequently, the GNN parameters Θ are updated by gradient descent through solving <ref> (Line 7). After that, both W and a are updated using gradient descent and the proximal operator, respectively, as described by <ref> (Lines 8-9). A sketch of this alternating scheme is given below.
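The following PyTorch-style sketch summarizes the alternating scheme. It is an illustration under our assumptions (the GNN module returns logits, p = d, and the spectral-norm alignment term from above), not the authors' implementation.

import torch
import torch.nn.functional as F

def soft_threshold(a, thresh):
    # Proximal operator of thresh * ||.||_1 (elementwise soft-thresholding).
    return torch.sign(a) * torch.clamp(a.abs() - thresh, min=0.0)

def build_graph(X, A, a, W, alpha, tau):
    # A_tilde = (1 - alpha) A + alpha S, with S from the learned metric.
    Z = (X * a) @ W.T
    S = torch.exp(-torch.cdist(Z, Z) ** 2 / (2 * tau ** 2))
    return (1 - alpha) * A + alpha * S, S

def train(gnn, X, A, y, train_mask, alpha, gamma1, gamma2, lam, tau, lr=1e-2, epochs=200):
    d = X.shape[1]
    a = torch.ones(d, requires_grad=True)      # sparse feature-selection vector
    W = torch.eye(d, requires_grad=True)       # projection matrix (p = d for simplicity)
    opt_theta = torch.optim.Adam(gnn.parameters(), lr=lr)
    opt_struct = torch.optim.Adam([a, W], lr=lr)
    for _ in range(epochs):
        # Step 1: update (a, W) with Theta fixed.
        A_tilde, S = build_graph(X, A, a, W, alpha, tau)
        D_S = torch.diag(S.sum(1))
        loss = F.cross_entropy(gnn(X, A_tilde)[train_mask], y[train_mask]) \
            + gamma1 * torch.trace(X.T @ (D_S - S) @ X) \
            + gamma2 * torch.linalg.matrix_norm(A_tilde @ X, ord=2)
        opt_struct.zero_grad(); loss.backward(); opt_struct.step()
        with torch.no_grad():                  # proximal step for the l1 penalty on a
            a.copy_(soft_threshold(a, lr * gamma1 * lam))
        # Step 2: update Theta with (a, W) fixed.
        A_tilde = build_graph(X, A, a, W, alpha, tau)[0].detach()
        loss_gnn = F.cross_entropy(gnn(X, A_tilde)[train_mask], y[train_mask])
        opt_theta.zero_grad(); loss_gnn.backward(); opt_theta.step()

In practice, the learned Ã can additionally be pruned with the kNN post-processing described earlier before being fed to the GNN.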
§ NUMERICAL EXPERIMENTS
In this section, we use metattack <cit.>, which is an attack model to deteriorate the performance of the graph model, to poison the graph. Note that the goal of metattack is to launch a global attack so that it can reduce the overall classification accuracy of a model.
According to <cit.>, metattack connects nodes with a significant feature difference. So we can use metattack to poison the graph, and then we can use our model to learn the original graph structure based on the raw graph structure and node features.
§.§ Experiment settings
We employ the following real-world datasets to evaluate our proposed model. The statistics of these datasets are shown in <ref>.
* Cora <cit.> and Citeseer <cit.> are citation network datasets in which Cora has a total of 2708 nodes, and each node contains 1433 features. The value of each feature is 0/1, to indicate whether it contains feature words. Cora has a total of 5429 edges and 7 classifications.
* Citeseer <cit.> has a total of 3312 nodes. Each node contains 3703 features. Similar to Cora, the value of each feature is also 0/1. Citeseer has a total of 4732 edges and 6 classifications.
* Polblogs <cit.> is a political dataset with 1490 blog pages. Each blog page can be regarded as a node of the graph, and the connections between nodes are the connections between blogs. There are a total of 19,090 edges. The node labels of the dataset are conservative or liberal.
To assess the proposed method, we conduct a comparative analysis against state-of-the-art GNN models using the DeepRobust adversarial attack repository. Among the various models of Graph Convolutional Networks (GCN), we specifically focus on the most prominent and representative one <cit.>. Graph Attention Network (GAT, ) is a network architecture that consists of attention layers, allowing it to learn different weights for different nodes in the neighborhood. GAT is commonly used as a baseline method for defending against adversarial attacks. We employ the metattack method, which is a representative non-targeted attack. This type of attack aims to diminish the overall performance of the GNN model on the entire graph, rather than targeting specific nodes or classes.
The perturbation method applied to the adjacency matrix of the graph is metattack <cit.>, and the interference rate ranges from 5% to 25%, in 5% increments. All the experiments are conducted 10 times with different initializations and random seeds.
§.§ Performance Comparison
In <ref>, we observe that our proposed method, RGSLA, outperforms both GCN and GAT, particularly when the interference rate is relatively high. Notably, when the interference rate exceeds 0.15, our model demonstrates superior robustness compared to the other models.
For the Citeseer dataset in <ref>, our model initially performs worse than GCN and GAT. However, as the interference rate of the adjacency matrix increases, both GCN and GAT aggregate the information of mislabeled nodes into the prediction nodes. This leads to a sharp drop in performance. In contrast, our proposed model can learn the connection relationships of the nodes through their features. Therefore, when the interference rate increases, our model can correct the adjacency matrix of the graph. Even when the interference rate is significant, our model's performance remains close to the uninterrupted state.
Moving on to the Polblogs dataset illustrated in <ref>, our proposed model consistently outperforms both GCN and GAT due to its superior ability to correct the graph structure.
§.§ Ablation Study
To gain a deeper understanding of the contributions of various components in defending against adversarial attacks, we conducted an ablation study. In our model, the parameter α plays a crucial role in mediating the influence of the learned structure on the model, transitioning from the input features of the original adjacency matrix. A higher value of α indicates a stronger inclination towards learning the structure directly from the input features to construct a new adjacency matrix.
To investigate the impact of the parameter α on the model, we compared the corresponding optimal values of α under different levels of noise disturbance, as depicted in <ref>. The results show that for different datasets, when the data is heavily disturbed, a larger value of α leads to improved accuracy of the model. This observation confirms that on datasets with significant levels of contamination, the structure learned by the model directly from its features tends to yield better performance.
§.§ Parameter Sensitivity
We considered four crucial parameters in our study: α, τ, γ_1, and γ_2. Each of these parameters plays a significant role in controlling specific aspects of our model, including the influence of the learned graph, the distance scale, the smoothness of features, and the sparsity and alignment of the model.
To evaluate the impact of each component, we conducted experiments where we systematically varied the value of one parameter while keeping the other parameters fixed at zero. By doing so, we were able to observe how changes in a particular parameter affected the overall performance of the model. Analyzing the performance variations resulting from these experiments allowed us to gain a better understanding of the individual effects of each component. This knowledge helped us assess the importance of different parameters and their contributions to the overall effectiveness of our model.
Weight Parameter α.
Parameter α in this experiment represents the relative weight assigned to the original adjacency matrix and the similarity matrix learned from the graph structure. A larger value of α indicates a higher degree of reliance on the node features of the graph to compensate for the noise present in the graph structure. Specifically, when α is set to 0, the model is solely updated based on the adjacency matrix of the graph. Conversely, when α is set to 1, the model exclusively utilizes the information from the similarity matrix derived from the nodes of the graph. A larger value of α indicates a greater utilization of the information captured in the learned similarity matrix.
<ref> demonstrates the effect of different values of α. When α=0, the model is directly updated from the adjacency matrix, resulting in performance consistent with GCN. As α gradually increases, the model begins to incorporate noise information from the nodes, which initially leads to poorer performance at lower α values. Due to the relatively high noise level in the data, smaller α values are insufficient for noise reduction. However, as observed in <ref>, the model's performance gradually improves as α increases. Notably, when α=0.9, the model significantly outperforms other models. This indicates that, when confronted with high levels of noise, the model effectively leverages information directly from the node features.
Window Width τ.
The size of the node distance is controlled by τ. Using a Gaussian kernel function, a larger value of τ increases the likelihood of connecting nodes with greater distances. At the extreme, when τ is very large, all node connections have a weight of 1. Conversely, when τ is very small, the calculated weights are 0. In essence, the value of τ determines the extent of node connectivity, and different datasets may require different values of τ. To illustrate the impact of variable τ, we use the Cora dataset as an example, although similar observations hold for other datasets.
<ref> demonstrates that when τ is either too small or too large, the model achieves an accuracy of approximately 0.6. In these extreme cases, the adjacency matrix is either all 0s or all 1s, disregarding the original adjacency matrix in the calculations, resulting in similar outcomes. An inappropriate τ value can lead to erroneous node connections, where nodes with significant dissimilarities become connected, consequently interfering with the model's performance. Conversely, an appropriate choice of τ can improve the model's performance. Therefore, caution must be exercised when selecting the hyperparameter τ.
The Regularized Parameters (γ_1,γ_2).
The two parameters, γ_1 and γ_2, control the regularized loss function. Specifically, γ_1 regulates the learned similarity between nodes. Nodes that are closer together are more likely to belong to the same class, while nodes that are farther apart are less likely to be connected. On the other hand, γ_2 governs the alignment between attribute features and the graph structure. When ‖ÃX‖_2 is small, the generalization error bound of the GNN becomes sharper.
From <ref>, it can be observed that both γ_1 and γ_2 significantly impact the experiment accuracy. Considering the influence of the distance factor, γ_1 has a more pronounced effect on accuracy, while γ_2 also plays a role, albeit to a lesser extent compared to γ_1.
The Item Parameters (θ_1, θ_2, θ_3).
Recall our loss function,
ℒ_gnn + γ_1ℒ_ss(a,W)+γ_2ℒ_align(Ã,X)
= ℒ_gnn + γ_1 tr(X^T(D_S-S)X)
+ γ_1 λ‖a‖_1
+ γ_2 ‖ÃX‖_2.
The parameters γ_1, γ_2, λ can be combined into new parameters θ_1, θ_2, θ_3. With this reparameterization, we can examine the contribution of the three regularization terms of our model. Specifically,
ℒ_gnn + θ_1 tr(X^T(D_S-S)X) + θ_2 ‖a‖_1 + θ_3 ‖ÃX‖_2.
We define θ_1 = γ_1, θ_2 = γ_1 λ, and θ_3 = γ_2. Subsequently, we set each of these parameters to zero individually and observe the influence of the remaining two parameters. As depicted in <ref>, we can observe the impact of the parameter θ on our model. <ref> demonstrate that parameter θ_1 has a significant effect on accuracy, as it controls the smoothness of the graph structure. On the other hand, θ_2 and θ_3 govern the sparsity of the input features and the relationship between the input feature and the adjacency matrix. Therefore, we can conclude that the smoothness of the graph structure is crucial for the GNN model, while the other two components also contribute to improved accuracy.
§.§ Visualization of Noise Improvement
This section investigates the robustness of the proposed model to noise in the adjacency matrix of graph-structured data. Such noise can arise from adversarial attacks or inherent imperfections in the graph itself. It is crucial to assess whether our proposed Graph Structure Learning (GSL) model remains effective in the presence of noise, and whether the estimated adjacency matrix accurately reflects the node connection information.
A ⟶ (noise) A_noise ⟶ (noise reduction) Ã
Our method aims to capture more aggregated information from nodes with similar features. However, it is important to verify whether nodes with similar features, as determined by our model, also tend to share the same labels. <ref> provides insights into this aspect.
For graph data, we calculate the ratio of nodes with the same label in the neighborhood connected to each node. A higher ratio indicates a larger presence of nodes with the same label in the neighborhood. Specifically, for a given node j, this ratio is defined as follows:
r_j = (number of neighbors of node j with the same label as j) / (number of all neighbors of node j).
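A small sketch of how this ratio can be computed from an adjacency matrix and a label vector (hypothetical helper):

import numpy as np

def same_label_ratio(A, labels):
    # For each node j, the fraction of its neighbors that share node j's label.
    n = A.shape[0]
    r = np.zeros(n)
    for j in range(n):
        nbrs = np.flatnonzero(A[j])
        if nbrs.size > 0:
            r[j] = np.mean(labels[nbrs] == labels[j])
    return r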
By calculating the ratio for each node in the adjacency matrix and plotting the corresponding frequency distribution histogram, we can observe the distribution of same-label ratios in the neighborhood. In this study, we utilize the Cora dataset with a pollution rate set at 0.25. The left side of <ref> shows the ratio distribution histogram for the original adjacency matrix of the graph data, while the right side displays the ratio distribution histogram for the adjacency matrix improved by our model.
As depicted in <ref>, the ratio of nodes with the same label in the neighborhood is low in the original adjacency matrix of the graph data. However, after applying our model, the ratio of nodes with the same label increases. This allows the GNN to aggregate more data with the same label during the aggregation process, thereby reducing errors and enhancing the overall performance of the model.
§ CONCLUSION
In this paper, we study GSL for GNNs and propose a novel robust GSL approach that simultaneously learns the graph structure and the GNN parameters. Remarkably, our proposed approach in the context of GSL is the first to consider the alignment of node features and graph structure. Such an idea is motivated by our derived lower bound of empirical Rademacher complexity on GCN. Our experiments indicate that our approach mostly outperforms several competitive baselines
and improves overall robustness under various amounts of noise pollution. In future research, several avenues can be explored to further advance the field of GSL and improve upon our proposed robust GSL approach: i) Enhancing model interpretability. It is meaningful to investigate advanced methods to enhance the interpretability of the learned graph structure and GNN parameters; ii) Handling dynamic graphs. It is strongly anticipated that we will further extend our proposed robust GSL approach to handle dynamic graphs where the graph structure evolves over time; iii) Transfer learning and generalization. The exploration of techniques to leverage pre-trained graph structure models or transfer knowledge from related tasks has good potential to enhance the performance and generalization capabilities of the robust GSL approach.
§ ACKNOWLEDGMENT
Shaogao’s work is partially supported by the National Natural Science Foundation of China (No.11871277), and Young and Middle-aged Academic Leaders in Jiangsu QingLan Project (2022). Ming Li acknowledged the support from the National Natural Science Foundation of China (No. 62172370, No. U21A20473), the support from Zhejiang Provincial Natural Science Foundation (No. LY22F020004), and the support from the Fundamental Research Funds for the Central Universities.
The appendices are structured as follows. In <ref>, we include additional derivation of the metric function between pairwise representations. In <ref>, we provide the necessary backgrounds for our theoretical results. The proof of <ref> is sketched in <ref>. Before delving into the appendices, we will first review the necessary notations used in the paper (refer to <ref>), which enable a clear and concise representation of the concepts and results discussed.
§ THE DERIVATION OF THE METRIC FUNCTION BETWEEN PAIRWISE REPRESENTATIONS
Consider <ref>; it can be transformed as follows:
ϕ(x_i,x_j) =√((a∘(x_i-x_j))^T W^T W (a∘(x_i-x_j)))
=√( (W(a∘(x_i-x_j)))^T W(a∘(x_i-x_j)) )
=√( (W(a∘x_i) - W(a∘x_j))^T (W(a∘x_i) - W(a∘x_j)) )
=√( (M x_i - M x_j)^T (M x_i - M x_j))
= ‖M x_i - M x_j‖_2,
where M := W diag(a) (so that M x = W(a∘x)) is a linear map.
From the expressions above, we can regard <ref> as a Euclidean distance between linearly transformed features. So, given the parameters W and a, we can transform the features X and then compute the Euclidean distances between nodes.
Given the metric D_i j = (z_i-z_j)^T(z_i-z_j) with z_i := M x_i, consider that
D_i j = (z_i-z_j)^T(z_i-z_j) = z_i^T z_i - 2z_i^T z_j + z_j^T z_j.
Let G_i j=z_i^T z_j, so that D_i j=G_i i - 2G_i j + G_j j, and define H_i j = G_i i, K_i j=G_j j, H=K^T in matrix form:
H =
[ G_1 1 G_1 1 ⋯ G_1 1; G_2 2 G_2 2 ⋯ G_2 2; ⋮ ⋮ ⋱ ⋮; G_n n G_n n ⋯ G_n n ],
K =
[ G_1 1 G_2 2 ⋯ G_n n; G_1 1 G_2 2 ⋯ G_n n; ⋮ ⋮ ⋱ ⋮; G_1 1 G_2 2 ⋯ G_n n ],
G =
[ G_1 1 G_1 2 ⋯ G_1 n; G_2 1 G_2 2 ⋯ G_2 n; ⋮ ⋮ ⋱ ⋮; G_n 1 G_n 2 ⋯ G_n n ].
So the metric in matrix form is D = H + K - 2G. Therefore, we can replace a double loop over node pairs by matrix multiplications using the identities above:
S = exp(-D/2τ^2),
where D is the matrix of squared distances ϕ^2(x_i, x_j). Given the node features X and the parameters a, W, we first calculate Z = f(X) = (a∘X)W^T, whose i-th row is z_i = W(a∘x_i);
then, letting G = Z Z^T, H_i j = G_i i, K_i j = G_j j, we get D = H + K - 2G. Finally, we calculate S = exp(-D/2τ^2).
§ THEORETICAL BACKGROUND AND TECHNICALITIES
In this section, we introduce some definitions and theoretical results necessary for our core idea's source.
We briefly recall the framework for transductive learning. Let 𝒳≜{x_i}_i=1^n be the domain with features x_i ∈ℝ^d and {y_i}∈{± 1} the known labels. The goal of learning is to find a predictor h which minimizes the generalization error ℒ_u(h) ≜1/(n-m)∑_i=m+1^n ℓ(h(x_i), y_i). In this way, the generalization error bound for graph-based transduction takes the form
ℒ_u(h) ≤ℒ_m(h)+ complexity term,
where ℒ_m(h) ≜1/m∑_i=1^m ℓ(h(x_i), y_i) is the empirical error of h and
ℒ_n(h) ≜1/n∑_i=1^n ℓ(h(x_i), y_i). The complexity term is typically characterized using learning-theoretic quantities such as the VC dimension and Rademacher complexity <cit.>.
The following definition of the Transductive Rademacher complexity avoids the triviality of VC Dimension based error bounds and extends inductive Rademacher complexity by considering the unobserved instances.
Let 𝒱⊆ℝ^n, p ∈ [0,1/2] and m labeled points. Let σ=(σ_1, …, σ_n)^T be a vector of i.i.d. random variables, where σ_i takes value +1 or -1, P(σ_i=1)=P(σ_i=-1)=p, and P(σ_i=0)=1-2p. The transductive Rademacher complexity of 𝒱 is
ℜ_m, n(𝒱) ≜(1/m+1/(n-m)) ·𝔼_σ[sup _𝐯∈𝒱σ^⊤𝐯] .
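For a finite set 𝒱 of vectors, this definition can be estimated by straightforward Monte Carlo; the helper below is a sketch with hypothetical names.

import numpy as np

def transductive_rademacher(V, m, n, p, n_draws=2000, seed=0):
    # Monte Carlo estimate of R_{m,n}(V); each row of V is one candidate vector in R^n.
    rng = np.random.default_rng(seed)
    V = np.asarray(V)
    total = 0.0
    for _ in range(n_draws):
        u = rng.random(n)
        sigma = np.where(u < p, 1.0, np.where(u < 2 * p, -1.0, 0.0))
        total += np.max(V @ sigma)
    return (1.0 / m + 1.0 / (n - m)) * total / n_draws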
Next, the following result derives a bound for the TRC of K-layer GNNs and states the corresponding generalization error bound.
Assume ℋ_𝒢^ϕ, β, ω⊆ℋ_𝒢^ϕ, where the trainable parameters satisfy ‖b_k‖_1 ≤β and ‖W_k‖_∞≤ω for every k ∈[K]. The transductive Rademacher complexity (TRC) of the restricted hypothesis class is bounded as
ℜ_m, n(ℋ_𝒢^ϕ, β, ω)
≤ c_1 n^2/(m(n-m))(∑_k=0^K-1 c_2^k ‖S‖_∞^k)
+c_3 c_2^K ‖S‖_∞^K ‖SX‖_2→∞√(log n),
where c_1 ≜ 2 L_ϕβ, c_2 ≜ 2 L_ϕω, c_3 ≜ L_ϕω√(2 / d) and L_ϕ is the Lipschitz constant of the activation ϕ.
Following <cit.>, the bound leads to a generalization error bound. For any δ∈(0,1) and h ∈ℋ_𝒢^ϕ, β, ω, the generalisation error satisfies
ℒ_u(h)-ℒ_m(h)
≤ ℜ_m, n(ℋ_𝒢^ϕ, β, ω)+c_4 n √(min{m, n-m})/m(n-m)
+c_5 √(n/m(n-m)ln(1/δ)),
with probability 1-δ, where c_4, c_5 are absolute constants such that c_4<5.05 and c_5<0.8.
Note that the bound in <ref> depends on the alignment between the feature and graph information.
§ PROOF OF <REF>: LOWER BOUND OF RADEMACHER COMPLEXITY
By definition of the Rademacher complexity, it is enough to lower bound the complexity of some subset of ℱ_D,R, denoted by ℱ'. In particular, we focus on the class ℱ_D,R' of graph neural networks over ℝ of the form
ℱ_D,R':={f(x_i)
=σ(∑_t=1^k w_t^(2)∑_v=1^n Â_iv×σ(∑_j∈ N(v) Â_vj⟨ x_j, w_t^(1)⟩) ), i∈[m],
W^(1)=(w^(1), 0,..., 0), w^(2)=(w^(2), 0, ..., 0); ‖w^(1)‖_2≤ R, |w^(2)|≤ D },
where we choose W^(1)=(w^(1), 0,..., 0) so that only the first column vector could be nonzero. In this case, ‖W^(1)‖_F=‖w^(1)‖_2≤ R holds. Similarly, we only allow w^(2) to vary in the first coordinate to simplify the proof. Furthermore, we take the linear activation σ(s)=ls as our choice.
In this setup, it holds that
ℛ(ℱ_D,R')
≥ l^2 𝔼_ϵ sup_‖w^(1)‖_2≤ R, |w^(2)|≤ D 1/m| ∑_i=1^mϵ_i( w^(2)∑_v=1^n Â_iv×(∑_j∈ N(v) Â_vj⟨ x_j,w^(1)⟩))|
=l^2D/m 𝔼_ϵ sup_‖w^(1)‖_2≤ R|⟨∑_i=1^mϵ_i( ∑_v=1^n Â_iv×(∑_j∈ N(v) Â_vj x_j)), w^(1)⟩|
=l^2RD/m 𝔼_ϵ‖∑_i=1^mϵ_i( ∑_v∈ N(i) Â_iv×(∑_j∈ N(v) Â_vj x_j))‖_2,
where the last step follows from the neighbor representation of graph shift operators, as well as the equivalent form of the L_2-norm, that is, ‖s‖_2=sup_‖u‖_2=1⟨ s,u⟩. Let e_1=(1,0,...,0) denote the standard unit vector in ℝ^d, and we assume that all the input data have the specific form x_j=B e_1 for all j∈ [n].
Then
‖∑_i=1^mϵ_i( ∑_v∈ N(i) Â_iv(∑_j∈ N(v) Â_vj x_j))‖_2
=B| ∑_i=1^mϵ_i( ∑_v∈ N(i) Â_iv(∑_j∈ N(v) Â_vj))|.
Note that the exchange of summation leads to the following equality,
∑_i=1^mϵ_i( ∑_v∈ N(i) Â_iv×(∑_j∈ N(v) Â_vj))=∑_k=1^q∑_t=1^q Â_kt(∑_i=1^mϵ_i Â_ik).
Suppose that the term ∑_t=1^q Â_kt is invariant with k, denoted by h_q(Â). Then
∑_i=1^mϵ_i(∑_v∈ N(i) Â_iv×(∑_j∈ N(v) Â_vj))=h_q(Â)∑_i=1^mϵ_i(∑_k=1^q Â_ik).
Hence, this together with <ref> yields that
𝔼_∑_i=1^mϵ_i( ∑_v∈ N(i)_iv(∑_j∈ N(v)_vj_j))_2
=h^2_q()𝔼_|∑_i=1^mϵ_i|=h^2_q√(m).
Moreover, by our choice of _j for all j as above, we can check that
|h_q()|=|⟨_· k, 1⟩|=_q_· k_2.
As a consequence, combining <ref>, we obtain
ℛ(ℱ_D,R')≥l^2BRD/√(m)min_k∈ [q]{_q_· k_2∑_t=1^q_kt}.
This completes the proof of <ref>.
|
http://arxiv.org/abs/2307.00733v2
|
20230703034205
|
Asymptotic properties of maximum likelihood estimators for determinantal point processes
|
[
"Yaozhong Hu",
"Haiyi Shi"
] |
math.ST
|
[
"math.ST",
"stat.TH"
] |
Asymptotic properties of maximum likelihood estimators for determinantal point processes
Yaozhong Hu and Haiyi Shi
August 1, 2023
=========================================================================================
We obtain the almost sure consistency and the Berry-Esseen type bound of the maximum likelihood estimator for determinantal point processes (DPPs), completing and extending previous work
initiated in Brunel, Moitra, Rigollet, and Urschel <cit.>.
We also give an explicit formula and a detailed discussion for the maximum likelihood estimator in the case of block determinantal matrices with two-by-two blocks, and compare it with the frequency method.
§ INTRODUCTION
Determinantal point processes (DPPs) arise from random matrix theory <cit.> and are first introduced to give the probability distribution of fermionic system in thermal equilibrium in quantum physics <cit.>.
Since then, DPPs have been found in various aspects of mathematics, including for example, loop-free Markov chains <cit.> and edges of uniformly spanning trees <cit.>.
In the seminal work <cit.>, Kulesza and Taskar show that DPPs demonstrate
the unique characteristics comparing to various other probabilistic models in the sense that they capture the
global repulsive behavior between items, give polynomial-time algorithms for statistical inference, and have geometrical intuition. Due to these advantages DPPs have played very important roles in machine learning, especially in subset selection problems, such as documentary summarization, image search, and pose determination <cit.>, and
so on. These real
world applications necessitate the estimation of parameters of determinantal
point process models. In this context, maximum likelihood estimation is a natural
choice, which in our situation generally leads to a non-convex optimization problem. Along this direction, Kulesza and Taskar split the DPP model into a diversity part and a quality part and learn only the quality part while the diversity part is fixed. They conjecture that the problem of learning the likelihood of DPPs is NP-hard, which was proven by <cit.> a decade later. Brunel, Moitra, Rigollet, and Urschel <cit.> first study the local geometry of the expected maximum likelihood estimation of DPPs, that is, the curvature of the likelihood function around its maximum. They then prove that the maximum likelihood estimator converges to the true values in probability and establish the corresponding central limit theorem. Motivated by this work, our first result in this paper is to prove that the convergence of the maximum likelihood estimator to the true value also holds almost surely. Our second result is more involved: we obtain a Berry-Esseen type theorem for the maximum likelihood estimator, that is, a quantitative rate in the central limit theorem. Lastly, we present some special
cases where all the parameters can be estimated analytically.
The paper is organized as follows. In Section 2 we introduce some basic definitions and properties of DPPs. In Section 3 we present our main results for the almost sure consistency and the Berry-Esseen type theorem. In Section 4, we
discuss the explicit MLE for the two by two ensembles. Some concluding remarks are given in Section 5.
§ PRELIMINARY
We first explain the notation that we are going to use in this work. Fix a positive integer N and denote [N] ={1,2,...,N}. For a set J ⊆ [N], |J|=#J denotes the number of elements in J. For a matrix A ∈ℝ^N × N and J ⊆ [N], denote by A_J the restriction of A to J × J, which is a |J|× |J| matrix. Sometimes A_J also refers to an N × N matrix whose restriction to J × J is A_J and which has zeros everywhere else.
Let 𝒮_[N], 𝒮^+_[N], 𝒮^++_[N], and 𝒮^(0,1)_[N] be the sets of all symmetric matrices, positive semi-definite matrices, (strictly) positive definite matrices, and symmetric matrices whose eigenvalues belong to the interval (0,1), respectively, on ℝ^N × N.
Let A and B be matrices in 𝒮_[N]. We say that B≼ A if A - B is positive semidefinite. Similarly, we say that B ≺ A if A-B is positive definite. By contrast, we say that B ≤ A if A_i,j - B_i,j≥ 0 for all i and j.
For a matrix A ∈ℝ^N × N, let ‖ A ‖_F, (A), and (A) denote its Frobenius norm (Hilbert-Schmidt norm), determinant and trace respectively. If A is vectorized as an N × N column vector then the Frobenius norm of A is ℒ^2 norm ‖ A ‖_2.
For A∈𝒮_[N], k ≥ 1 and a smooth function f: 𝒮_[N]→ℝ, we denote by ∇^kf(A) the k-th derivative of f evaluated at A ∈𝒮_[N]. This is a k-linear map defined on 𝒮_[N]; for k=1, ∇f(A) is the gradient of f, ∇^2 f(A) the Hessian, etc.
A matrix A ∈ is called block diagonal if there exists a partition {J_1, J_2, ..., J_k}, k ≥ 1, such that A_ij = 0 when i and j belong to different J_a and J_b. The largest k such that the partition exists is called the number of blocks of A and consequently J_1,...,J_k are called blocks of A.
For a subset A ⊆ [N], let A̅ denote the complement of A, that is, the set [N] ∖ A.
Let us recall that a point process on a ground set 𝒴 is a probability measure over the subsets
of 𝒴. Random subsets drawn from the point process can be any subset between the empty set and the full set 𝒴. In this paper, we focus on discrete and finite point processes, where the ground set, without loss of generality, is 𝒴 = {1,2, ⋯ ,N}.
The set of all subsets of 𝒴 is denoted by 2^𝒴.
A point process is called a determinantal point process if the random subset 𝐘 is a 2^𝒴-valued random variable such that for every fixed set A ⊆𝒴,
ℙ(A ⊆𝐘) = det(K_A) ,
where K_A is the restriction of an N × N symmetric matrix
K to
the subset A, that is, K_A ≜ [K_i,j]_i,j ∈ A.
If we think of each item in the ground set as a Boolean variable, the left-hand side of (<ref>) is a marginal probability in a certain sense, and hence K is called the marginal kernel. (<ref>) implies the following necessary conditions:
* Since the event {∅⊆𝐘} is the whole probability space, ℙ(Ω) = ℙ(∅⊆𝐘) =1. We set det(K_∅) =1.
* Since ℙ is a probability measure, all principal minors of K, i.e. det(K_A), must be nonnegative, and thus K itself must be positive semidefinite, that is, K ≽ 0.
* From ℙ(∅ = 𝐘) + ℙ(⋃_i=1^N{i ∈𝐘} ) = 1 and using the inclusion–exclusion principle we get
ℙ(⋃_i=1^N{i ∈𝐘} ) = ∑_i ∈ [N]ℙ(i∈𝐘) - ∑_{i,j}⊂ [N]ℙ({i,j}⊆𝐘) + …
… + (-1)^N-1ℙ( [N]⊆𝐘)
= ∑_|A| =1 det(K_A) - ∑_|A| =2 det(K_A) + …
… + (-1)^N-1 det(K)
= 1- det(I-K) .
The last equality above follows from the characteristic polynomial. Equation (<ref>) also means
ℙ(∅ = 𝐘) = det(I-K) ≥ 0.
Similarly, we are able to show that ℙ(∅ = 𝐘∩ A) = det(I_A-K_A) ≥ 0 for any subset A ⊆ [N] and hence K ≼ I. So the necessary condition for a symmetric matrix to give a determinantal process is 0 ≼ K ≼ I. In particular, all the diagonal elements K_i,i of the marginal kernel should be in the interval [0,1]. We can assume K_i,i is always greater than 0; otherwise the element i can be excluded from the model. This condition turns out to be sufficient: any 0 ≼ K ≼ I defines a DPP. To prove this, it is sufficient to show that for every A ⊆ [N], the atomic probability is well-defined, that is, 0 ≤ℙ(𝐘 = A) ≤ 1. The probability being less than or equal to 1 holds since K ≼ I. For the other inequality, we assume K_A is invertible.[if K_A is not invertible, we immediately get ℙ(𝐘 = A) = 0.] Then using the Schur complement and the characteristic polynomial, we have
ℙ(𝐘 = A) = ℙ(A ⊆𝐘) - ℙ(⋃_i∈A̅{A∪{i}⊆𝐘})
= det(K_A) - ∑_i ∈A̅ det(K_A∪{i}) + ∑_{i,j}⊆A̅ det(K_A∪{i,j})+
… + (-1)^|A̅| det(K)
= det(K_A) - ∑_i ∈A̅ det(K_A) det(K_ii - K_{i},A K^-1_A K_A,{i} )
+ ∑_{i,j}⊆A̅ det(K_A) det(K_{i,j} - K_{i,j},A K^-1_A K_A,{i,j}) +
… + (-1)^|A̅| det(K_A) det(K_A̅ - K_A̅,A K^-1_A K_A,A̅ )
= (-1)^|A̅| det(K_A) det( (K_A̅ - K_A̅,A K^-1_A K_A,A̅)-I_A̅)
= (-1)^|A̅| det( K- I_A̅) ,
where K_A,B denotes the matrix obtained from K by keeping only those entries whose rows belong to A and whose columns belong to B (if A = B we simply write K_A), |A| denotes the cardinality of the subset A, and A̅ the complement of the set A. Here we use a slight abuse of notation for I_A̅: we refer to the N × N matrix whose restriction to A̅ is I_A̅ and which has zeros everywhere else. Since 0 ≼ K ≼ I, ℙ(𝐘 = A) = |det(K-I_A̅)| ≥ 0.
Sometimes it is quite inconvenient to work with marginal kernels since their eigenvalues should be bounded by 0 and 1, and the marginal probability is not very appropriate to describe real world data. Here we introduce a slightly smaller class of DPPs called L-ensembles.
A point process is called an L-ensemble if it is defined through a real, symmetric matrix L:
ℙ_L(𝐘 = A) ∝ det(L_A),
where A ⊆𝒴 is a fixed subset.
By normalization, the proportionality coefficient is equal to
1/∑_A' ⊆𝒴 det(L_A').
Though this seems very cumbersome, the following theorem gives us the closed form of (<ref>).
For any A ⊆𝒴,
∑_A ⊆ Y ⊆𝒴 det(L_Y)
= det(L + I_A̅).
In particular, when A = ∅, we have ∑_Y ⊆𝒴 det(L_Y)= det(L +I).
Thus we have
ℙ_L(𝐘 = A) = det(L_A)/det(L+I) .
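As a quick numerical sanity check of the last display, the sketch below enumerates all subsets of a small ground set and verifies that the probabilities det(L_A)/det(L+I) sum to one (helper names are ours).

import numpy as np
from itertools import chain, combinations

def lensemble_prob(L, A):
    # P_L(Y = A) = det(L_A) / det(L + I); the empty principal minor has determinant 1.
    Z = np.linalg.det(L + np.eye(L.shape[0]))
    idx = list(A)
    if not idx:
        return 1.0 / Z
    return np.linalg.det(L[np.ix_(idx, idx)]) / Z

def check_normalization(L):
    # Brute-force check that the atomic probabilities sum to one (feasible for small N only).
    N = L.shape[0]
    subsets = chain.from_iterable(combinations(range(N), r) for r in range(N + 1))
    return sum(lensemble_prob(L, A) for A in subsets)

# Example with a random 4x4 positive definite L; the printed sum should be close to 1.
rng = np.random.default_rng(0)
B = rng.normal(size=(4, 4))
L = B @ B.T + 0.1 * np.eye(4)
print(check_normalization(L))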
Moreover, the following theorem proven by <cit.> shows that L-ensembles are indeed DPPs.
An L-ensemble is a DPP, and its marginal kernel is
K = L(L+I)^-1 = I - (L+I)^-1.
However, not all DPPs are L-ensembles. By inverting the (<ref>), we have
L = K(I-K)^-1.
We see that the equality fails when the eigenvalues of K achieve the upper bound 1. Also from (<ref>) we observe that the existence of L-ensembles is equivalent to the point processes giving non-zero probability to the empty set.
From Equation (<ref>), if A = {i}⊆𝒴 is a singleton, then we have
ℙ(i ∈𝐘) = K_ii .
So the diagonal of the marginal kernel gives the probabilities of inclusion of individual elements. If A = {i,j}⊆𝒴, then the probability is given by the two-by-two principal minor det( [ K_ii K_ij; K_ji K_jj ] ):
ℙ({i,j}⊆𝐘) = K_ii K_jj -K^2_ij
≤ K_ii K_jj
= ℙ(i ∈𝐘)ℙ(j ∈𝐘) .
Inequality (<ref>) implies that elements i and j tend not to co-occur, especially when K^2_ij is close to K_iiK_jj. This feature is called the repulsive behavior of determinantal point processes, and the off-diagonal elements characterize the degree of repulsion. Because of this major property, points tend to repel each other and hence induce point configurations that usually spread out evenly over the space. For example, let our ground set be a 2-dimensional grid: the set {(i,j) ∈ℤ^2 : 1≤ i,j ≤ 60}; then the kernel should be a 3600 by 3600 matrix. Let the matrix be a Gaussian kernel[the Gaussian kernel defines an L-ensemble instead of a marginal kernel.], where each entry is given by L_ij,kl = exp{-1/0.1^2((i-k)^2 + (j-l)^2) }. Using the sampling algorithm proposed by Hough et al. <cit.>, we draw samples from the DPP. See Figures <ref> and <ref>.
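To make the grid example concrete, here is a sketch of the spectral sampling algorithm of Hough et al.: eigendecompose L, keep each eigenvector with probability λ/(λ+1), then draw points one at a time. The grid size and length scale below are illustrative rather than the exact 60x60 configuration above.

import numpy as np

def sample_dpp(L, rng):
    # Spectral DPP sampler: select eigenvectors, then sample points sequentially.
    lam, V = np.linalg.eigh(L)
    keep = rng.random(len(lam)) < lam / (lam + 1.0)
    V = V[:, keep]
    Y = []
    while V.shape[1] > 0:
        p = (V ** 2).sum(axis=1)
        p /= p.sum()                              # P(i) proportional to squared row norms
        i = rng.choice(len(p), p=p)
        Y.append(i)
        j = np.argmax(np.abs(V[i]))               # a column with nonzero i-th entry
        col = V[:, j].copy()
        V = np.delete(V, j, axis=1)
        V = V - np.outer(col, V[i] / col[i])      # make row i of the remaining columns zero
        if V.shape[1] > 0:
            V, _ = np.linalg.qr(V)                # re-orthonormalize
    return sorted(Y)

rng = np.random.default_rng(1)
side = 20
coords = np.array([(i, j) for i in range(side) for j in range(side)], dtype=float) / side
sq_dist = ((coords[:, None, :] - coords[None, :, :]) ** 2).sum(-1)
L = np.exp(-sq_dist / 0.1 ** 2)                   # Gaussian L-ensemble (illustrative scale)
sample = sample_dpp(L, rng)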
§ MAXIMUM LIKELIHOOD ESTIMATOR OF DPPS
In the remaining part of this paper, we are only concerned with the estimation of the
L-ensemble from the data.
As we mentioned before, DPPs possess many nice properties, which make them very prevalent in mathematics. However, what makes DPPs more complicated is that they are not identifiable, that is, different ensembles could give the same DPP. Let DPP(L^*) denote the L-ensemble determined by the matrix L^*. The identifiability problem is precisely described by Theorem 4.1 in <cit.>.
Let be the collection of all diagonal matrices whose entry is either 1 or -1.
For L_1 and L_2∈, DPP(L_1) = DPP(L_2) if and only if there exists a D ∈ such that L_2 = D L_1 D.
We are interested in how many possible ensembles can a given DPP has, so <cit.> defines the degree of identifiability of DPP.
Let L ∈. The degree Deg(L) of identifiablity of L is the cardinality of the family { DLD: D ∈}. We say that L is irreducible if the cardinality is 2^N-1 and reducible otherwise. If ∼DPP(L), we also call is irreducible if L is irreducible and reducible otherwise.
The next proposition shows that the degree of identifiability turns out to be completely described by the block structure of the matrix.
Let L ∈, Z ∼DPP(L), and K be the corresponding marginal kernel. Let 1≤ k≤ N and {J_1, J_2,..., J_k} be a partition of [N]. The following statements are equivalent:
* 1. L is block diagonal with k blocks J_1, J_2,..., J_k,
* 2. K is block diagonal with k blocks J_1, J_2,..., J_k,
* 3. Z ∩ J_1, ..., Z ∩ J_k are mutually independent irreducible DPPs,
* 4. L = D_jLD_j for all j ∈ [k], where D_j∈ whose diagonal element is 1 on J_j and -1 otherwise.
From the above proposition we know that L has k blocks if and only if the degree of identifiability of L is 2^N-k. In particular, L is irreducible if and only if it only has one block.
Let Z_1,...,Z_n be n independent subsets drawn from DPP(L^⋆) for some unknown L^⋆∈𝒮_[N]^++. The scaled log-likelihood associated to this model for any L ∈𝒮_[N]^++ is
Φ̂(L) = 1/n∑_i=1^n log P_L(Z_i) =
∑_J ⊆ [N] p̂(J) log det(L_J) - log det(L+I),
where
p̂(J)= 1/n∑_i=1^n𝕀(Z_i = J).
𝕀(·) stands for the characteristic function.
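A direct numerical transcription of the scaled log-likelihood (a sketch; the observed subsets are assumed to be given as iterables of indices):

import numpy as np
from collections import Counter

def scaled_log_likelihood(L, samples):
    # Phi_hat(L) = sum_J phat(J) log det(L_J) - log det(L + I).
    N = L.shape[0]
    counts = Counter(frozenset(Z) for Z in samples)
    n = len(samples)
    log_norm = np.linalg.slogdet(L + np.eye(N))[1]
    total = 0.0
    for J, c in counts.items():
        idx = sorted(J)
        logdet = np.linalg.slogdet(L[np.ix_(idx, idx)])[1] if idx else 0.0
        total += (c / n) * logdet
    return total - log_norm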
It is also useful to define the expected log-maximum likelihood function given the real kernel L^⋆
Φ_L^⋆(L) = ∑_J ⊆ [N] p_L^⋆(J) log det(L_J) - log det(L+I) ,
where
p_L^⋆(J) = 𝔼(p̂(J)) = det(L^⋆_J)/det(L^⋆+I).
Basically, we take the expectation of p̂(J) with respect to the true probability measure DPP(L^⋆) and then get the expected maximum likelihood function. In the sequel let L^⋆ be fixed, let p̂_J denote p̂(J), p^*_J denote p_L^*(J) and Φ denote Φ_L^⋆.
Let KL(DPP(L^⋆), DPP(L)) be the Kullback-Leibler divergence, which measures the difference between distributions of DPP(L^⋆) and of DPP(L). Since Kullback-Leibler divergence is always non-negative, we have
KL(DPP(L^⋆), DPP(L)) = Φ(L^⋆) - Φ(L) ≥ 0, ∀ L ∈.
As a consequence L^⋆ is the global maxima of the expected maximum function Φ(L). Due to non-identifiability of DPPs illustrated in Theorem <ref>, Φ(L) achieves the maximum whenever L = D L^⋆ D for some D ∈ and hence the global maxima is the set {DL^⋆D: D ∈}. Now we introduce a useful lemma.
The gradient of log-likelihood function defined in (<ref>) is given by
∇Φ̂(L) = ∑_J ⊆ [N] p̂_J L_J^-1 - (L+I)^-1.
We regard the determinant as a multivariate function of N × N variables, and then the directional derivative of det(L+I) along direction H is given by
D det(L+I)(H) = lim_t → 0 [det(L+I+tH) - det(L+I)]/t
= lim_t → 0 det(L+I)[det(I+t(L+I)^-1H)-1]/t
= lim_t → 0 det(L+I) [1+t tr((L+I)^-1H)+O(t^2)-1]/t
= det(L+I) tr((L+I)^-1H),
where the third equality follows from the power series representation of det(I+tA). Then the directional derivative of Φ̂ along direction H is
DΦ̂(L)(H) = ∑_J ⊆ [N] p̂_J tr(L_J^-1H_J) - tr((L+I)^-1H).
In matrix form, the above equation becomes
∇Φ̂(L) = ∑_J ⊆ [N] p̂_J L_J^-1 - (L+I)^-1.
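A sketch of the corresponding computation, with each L_J^-1 zero-padded to an N x N matrix (consistent with the convention for A_J above); it can be checked against finite differences of the scaled log-likelihood.

import numpy as np

def grad_log_likelihood(L, phat):
    # grad Phi_hat(L) = sum_J phat(J) * (L_J)^{-1} (zero-padded) - (L + I)^{-1}.
    N = L.shape[0]
    grad = -np.linalg.inv(L + np.eye(N))
    for J, p in phat.items():          # phat: dict mapping frozenset(J) -> empirical frequency
        idx = sorted(J)
        if idx:
            grad[np.ix_(idx, idx)] += p * np.linalg.inv(L[np.ix_(idx, idx)])
    return grad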
§.§ Strong consistency
One critical issue for the maximum likelihood estimation is its consistency. Since determinantal point processes are not identifiable we measure the performance of maximum likelihood estimation by the distance between the likelihood maximizer L̂_n and the set of true values:
ℓ(L̂_n, L^⋆) = min_D∈𝒟‖L̂_n - DL^⋆D ‖_F.
<cit.> proves that this distance converges to zero in probability. We shall prove a stronger version: the convergence also holds almost surely. The proof is based on <cit.> and Wald's consistency theorem <cit.>. Even though the latter theorem originally requires the distribution to be identifiable, this is not a problem for this setting where we consider distance between L̂_n and the set of true values instead of one value.
We first show that ℓ(L̂_n, L^⋆) converges to zero almost surely when parameters of matrices are restricted on a compact set. For 0<α < β< 1, define a set E_α,β
E_α,β = { L ∈𝒮^++_[N] : K=L(I+L)^-1∈𝒮^[α,β]_[N]} .
Choose appropriate α, β such that L^⋆∈ E_α,β. E_α,β is compact since it's bounded and closed in ℝ^N × N.
Let Z_1,...,Z_n be n independent subsets of Z ∼DPP(L^⋆) for some unknown L^⋆∈ E_α,β. Let L̂_n be the maximum likelihood estimator of defined on E_α,β, then ℓ(L̂_n,L^⋆) converges to zero almost surely.
Let
ΔΦ̂(L) = Φ̂(L) - Φ̂(L^⋆) = 1/n∑_i=1^nlogP_L(Z_i)/P_L^⋆(Z_i)
and
ΔΦ(L) = Φ(L) - Φ(L^⋆)
= 𝔼_L^⋆(log P_L(Z)/P_L^⋆(Z)).
ΔΦ(L) is minus the Kullback-Leibler divergence between DPP(L^⋆) and DPP(L). By Jensen's inequality, ΔΦ(L) ≤ 0 for all L, and Φ(L) = Φ(L^⋆) if and only if P_L(Z) = P_L^⋆(Z) for all Z ⊆ [N], which means L = DL^⋆ D for some D ∈𝒟. In the sequel, let 𝔼 denote the expectation 𝔼_L^⋆.
For each L ∈ E_α,β, the strong law of large numbers implies
ΔΦ̂(L) →ΔΦ(L) almost surely.
However, the above convergence doesn't imply the convergence of maximum likelihood estimator to the true values. Thus the Wald's integrability condition is needed:
for every L ∈ E_α, β, there exists ϵ > 0 such that
𝔼 sup_N∈ E_α, β: ℓ(L,N) < ϵ log P_N(Z)/P_L^⋆(Z) < ∞.
Since L ↦log P_L(Z)/P_L^⋆(Z) is continuous (the determinant function is continuous), for any arbitrary δ >0 there exists ϵ >0 such that, whenever ℓ(L,N) < ϵ,
(1-δ)P_L(Z)/P_L^⋆(Z)< P_N(Z)/P_L^⋆(Z) <(1+δ) P_L(Z)/P_L^⋆(Z).
Then the Wald's integrability condition is satisfied.
Now for every sequence {L_n} converging to L, we show that ΔΦ(L_n) is upper semicontinuous:
lim sup_n→∞ΔΦ(L_n) = lim sup_n→∞𝔼 log P_L_n(Z)/P_L^⋆(Z)
≤ 𝔼 lim sup_n→∞ log P_L_n(Z)/P_L^⋆(Z)
= 𝔼 log P_L(Z)/P_L^⋆(Z)
= ΔΦ(L) .
The second inequality follows from the Fatou's lemma and the third identity is the consequence of continuity of the function logP_L_n(Z)/P_L^⋆(Z).
For every η > 0 we define the set K_η
K_η = { L ∈ E_α,β: ℓ(L,L^⋆) ≥η}
= ⋂_D∈𝒟{L ∈ E_α,β: ‖ L - DL^⋆D ‖_F≥η},
which is closed and hence compact.
Since ΔΦ(L) is an upper semicontinuous function, it achieves maximum over the compact set K_η. We denote the maximum by m(η). And we cannot have m(η) = 0 because that would imply there is a L ∈ K_η such that L = DL^⋆ D for some D ∈𝒟.
The strong law of large numbers implies
sup_N∈ E_α,β: ℓ(L,N) < ϵ ΔΦ̂(N) ≤ 1/n∑_i=1^n sup_N∈ E_α,β: ℓ(L,N) < ϵ logP_N(Z_i)/P_L^⋆(Z_i)
→ 𝔼[ sup_N∈ E_α,β: ℓ(L,N) < ϵ logP_N(Z)/P_L^⋆(Z) ]  almost surely.
By continuity,
lim_ϵ→ 0 sup_N∈ E_α,β: ℓ(L,N) < ϵ logP_N(Z)/P_L^⋆(Z) = logP_L(Z)/P_L^⋆(Z),
and ϵ ↦ sup_N∈ E_α,β: ℓ(L,N) < ϵ logP_N(Z)/P_L^⋆(Z)
is non-decreasing in ϵ, because a supremum over a smaller subset is no larger than one over a bigger subset; in particular it decreases monotonically as ϵ→ 0. By (<ref>) it is integrable for all small enough ϵ. Hence, by the dominated convergence theorem,
lim_ϵ→ 0 𝔼[ sup_N∈ E_α,β: ℓ(L,N) < ϵ logP_N(Z)/P_L^⋆(Z) ] = 𝔼 logP_L(Z)/P_L^⋆(Z) = ΔΦ(L) .
Thus for any L ∈ K_η and any γ > 0 there exists an ϵ_L > 0 such that
𝔼[ sup_N∈ E_α,β: ℓ(L,N) < ϵ_L logP_N(Z)/P_L^⋆(Z) ] < m(η)+γ .
For each L ∈ K_η, we define the open set
V_L = {N∈ E_α,β: ℓ(N,L) < ϵ_L},
and then the family {V_L: L ∈ K_η} is an open cover of K_η and hence has a finite subcover V_L_1, V_L_2, ..., V_L_d.
On every V_L_i we use the strong law of large numbers again to obtain
lim sup_n →∞ sup_N∈ V_L_iΔΦ̂(N) ≤ lim sup_n →∞1/n∑_m=1^n sup_N∈ V_L_ilogP_N(Z_m)/P_L^⋆(Z_m)
= 𝔼[ sup_N∈ V_L_ilogP_N(Z)/P_L^⋆(Z) ]  almost surely.
From (<ref>) we get
lim sup_n →∞ sup_N∈ V_L_iΔΦ̂(N) < m(η) +γ ,  i = 1,2,...,d.
Since {V_L_i: i=1,2,...,d } cover K_η, we have
lim sup_n →∞ sup_N∈ K_ηΔΦ̂(N) < m(η) +γ ,
which, since γ is arbitrary, implies
lim sup_n →∞ sup_L ∈ K_ηΔΦ̂(L) ≤ sup_L ∈ K_ηΔΦ(L) = m(η) .
Notice that m(η) < 0. From (<ref>) there exists a constant N_1 such that
sup_L ∈ K_ηΔΦ̂(L) < m(η)/2 ,  n > N_1.
But
ΔΦ̂(L̂_n) = sup_L ∈ E_α,βΔΦ̂(L) ≥ΔΦ̂(L^⋆) → ΔΦ(L^⋆) = 0  almost surely,
so there exists a constant N_2 such that
ΔΦ̂(L̂_n) ≥m(η)/2 ,  n>N_2,
which implies that L̂_n∉ K_η for n > max(N_1, N_2), that is,
ℓ(L̂_n, L^⋆) < η.
Since η > 0 is arbitrary, ℓ(L̂_n, L^⋆) → 0 almost surely.
Now we can remove the compactness condition.
Let Z_1,...,Z_n be n independent
sample subsets of Z ∼DPP(L^⋆).
Let L̂_n be the maximum likelihood estimator of L^⋆.
Then ℓ(L̂_n, L^⋆) converges to zero almost surely.
The first step is to show that the event {L̂_n∈ E_α,β} holds almost surely for all sufficiently large n. We adopt the proof from <cit.>.
Let δ = min_J⊂ [N]P_L^⋆(J). For simplicity, we denote P_L^⋆(J) by p^⋆_J. Since L^⋆ is positive definite, δ >0. Define the event 𝒜 by
𝒜 = ⋂_J ⊂[N]{ p^⋆_J≤ 2p̂_J≤ 3p^⋆_J}.
Observe that Φ(L^⋆)<0 and we can find α < exp(3Φ(L^⋆)/δ) and β >1- exp(3Φ(L^⋆)/δ) such that 0<α<β<1.
Then using <cit.> we know that on the event 𝒜, L̂∈ E_α, β, that is,
P(L̂∈ E_α, β) ≥ P(𝒜).
Because
p̂_J = 1/n∑_i=1^n𝕀(Z_i=J) → P_L^⋆(Z=J) = p^⋆_J  almost surely,
the event 𝒜 holds for all sufficiently large n almost surely, and hence {L̂_n∈ E_α,β} holds eventually almost surely.
Let 𝕀_E_n denote the characteristic function of the event {L̂_n∈ E_α,β}. Then
ℙ(lim_n →∞ℓ(L̂_n,L^⋆) = 0 ) = ℙ(lim_n →∞ℓ(L̂_n ,L^⋆) = 0 , lim_n →∞𝕀_E_n =1 )
+ ℙ(lim_n →∞ℓ(L̂_n ,L^⋆) = 0 , lim_n →∞𝕀_E_n≠ 1 )
= ℙ(lim_n →∞ℓ(L̂_n ,L^⋆) = 0 , lim_n →∞𝕀_E_n =1 )
= ℙ(lim_n →∞ℓ(L̂_n ,L^⋆) = 0 | lim_n →∞𝕀_E_n =1 ) ℙ( lim_n →∞𝕀_E_n = 1)
= ℙ(lim_n →∞ℓ(L̂_n ,L^⋆) = 0 | lim_n →∞𝕀_E_n =1 )
= 1 .
The last equality follows from the fact that L̂_n∈ E_α,β eventually almost surely and from Lemma <ref>.
§.§ Berry–Esseen theorem
We observe that an N by N matrix [A_ij]_N× N can also be viewed as an N × N dimensional column vector: (A_11, A_12, ..., A_1N, A_21, ..., A_N1, ... A_NN)^T. Then the Frobenius norm of the matrix is just the ℒ^2 norm for its corresponding column vector. In the following we shall regard the matrix as the corresponding column vector.
Because of the non-identifiability of DPPs, maximum likelihood estimators are not unique. We choose the estimator L̃ which is closest to the fixed true value L^*. In fact, let L̂ be one maximum likelihood estimator and let D̂∈𝒟 be such that
‖D̂L̂D̂ - L^⋆‖_F = min_D∈𝒟‖DL̂D - L^⋆‖_F
and set L̃ = D̂L̂D̂. Then the strong consistency of L̃ immediately follows from the Theorem <ref>.
Assume that L^⋆ is irreducible; then, according to <cit.>, d^2Φ(L^⋆) is negative definite and hence invertible. Let V(L^⋆) denote its inverse. Here, if we vectorize L, then d^2Φ(L^⋆) is an (N × N) × (N × N) Hessian matrix. By <cit.>,
√(n)(L̃ - L^⋆)
= -(𝔼(d^2logP_L^⋆(Z)))^-11/√(n)∑_i=1^nd( log P_L^⋆(Z_i)) + o_P(1)
= -V(L^⋆)1/√(n)∑_i=1^n((L^⋆_Z_i)^-1 - (I +L^⋆)^-1) + o_P(1).
In particular, <cit.> states that the sequence √(n)(L̃ - L^⋆) is asymptotically normal with mean 0 and covariance matrix -V(L^⋆). Hence we get the following theorem from <cit.>.
Let L^⋆ be irreducible. Then, L̃ is asymptotically normal:
√(n)(L̃ - L^⋆) → 𝒩 (0,-V(L^⋆)),
where the above convergence holds in distribution.
Next, let us take one step further. We want to find the rate of convergence in (<ref>).
Namely,
we want an upper bound on the rate at which the distribution of (-V(L^⋆))^-1/2√(n)(L̃ - L^⋆) converges to the standard multidimensional normal distribution Z ∼𝒩(0,I). We argue that when L̃ ∈ E_α,β, the bound on the maximal error is of order n^-1/4. The condition is not too restrictive. Indeed, since α and β can be taken arbitrarily close to 0 and 1 respectively, E_α,β exhausts 𝒮^++_[N]. Moreover, since by Theorem <ref> L̂∈ E_α,β almost surely, D̂L̂D̂ = L̃ ∈ E_α,β almost surely.
Let L̃ be defined as above and also belong to E_α,β, and let Z be an N × N standard Gaussian matrix. Then for every x ∈ℝ^N × N,
|ℙ((-V(L^⋆))^-1/2√(n) (L̃ - L^⋆) < x) - ℙ(Z <x ) | ≤ C n^-1/4,
where C is a sufficiently large constant, independent of x, depending on α, β and proportional to N^2.
We divide the proof into four steps.
Step 1.
According to (<ref>), (-V(L^⋆))^-1/2√(n)(L̃ - L^⋆) can be decomposed into a sum
X_n =∑_i=1^nξ_i := (-V(L^⋆))^1/21/√(n)∑_i=1^n((L^⋆_Z_i)^-1 - (I +L^⋆)^-1)
and a term ρ_n=(-V(L^⋆))^-1/2o_P(1) whose Frobenius norm converges to zero in probability.
|ℙ(X_n + ρ_n < x) - ℙ(Z < x) |
= |ℙ(X_n + ρ_n < x, ‖ρ_n‖_F≥ k_n)+ℙ(X_n + ρ_n < x, ‖ρ_n‖_F < k_n)
- ℙ(Z < x) |
≤ ℙ(‖ρ_n‖_F≥ k_n)
+ |ℙ(X_n + ρ_n < x, ‖ρ_n‖_F < k_n) - ℙ(Z < x)|
≤ ℙ(‖ρ_n‖_F≥ k_n)
+ |ℙ(X_n + k_n1 < x, ‖ρ_n‖_F < k_n) - ℙ(Z < x)|
+ |ℙ(X_n - k_n1 < x, ‖ρ_n‖_F < k_n) - ℙ(Z < x)|
=: I1+I2+I3 ,
where {k_n} is an arbitrary sequence of positive real numbers and 1 is the N × N matrix whose entries are all 1.
Step 2. The estimation of I1. We claim that
ℙ(‖ρ_n‖_F≥ k_n) ≤ C_4 n^-1/4,
where k_n = n^-1/4 and C_4 is a constant.
In fact,
from the proof of <cit.>, ρ_n has the following expression
ρ_n = √(n) (-V(L^⋆))^1/2( d^2 Φ̂_n(L^⋆) - 𝔼(d^2 Φ̂_n(L^⋆))
+ 1/2(L̃-L^⋆)^T d^3Φ̂_n(L_n))(L̃-L^⋆),
where L_n is a point on the line segment between L̃ and L^⋆. To simplify notation, let θ denote
( d^2 Φ̂_n(L^⋆) - 𝔼(d^2 Φ̂_n(L^⋆)) + 1/2(L̃-L^⋆)^T d^3Φ̂_n(L_n))(L̃-L^⋆).
Then
‖ρ_n‖_F = ‖√(n)(-V(L^⋆))^1/2θ‖_F
= √(n)‖(-V(L^⋆))^1/2θ‖_F
≤ √(n)‖ (-V(L^⋆))^1/2‖_op‖θ‖_2
= √(n ·Λ_max(-V))·‖θ‖_2.
‖·‖_op denotes the operator norm induced by ℒ^2 norm and Λ_max denotes the largest eigenvalue. For the first inequality, we regard θ as an N × N column vector and (-V(L^⋆))^1/2 is an (N × N) × (N × N) matrix.
𝔼‖θ‖_2 = 𝔼‖( d^2 Φ̂_n(L^⋆) - 𝔼(d^2 Φ̂_n(L^⋆)) + 1/2(L̃-L^⋆)^T d^3Φ̂_n(L_n))(L̃-L^⋆)‖_2
≤ 𝔼‖( d^2 Φ̂_n(L^⋆) - 𝔼(d^2 Φ̂_n(L^⋆)))(L̃-L^⋆)‖_2I1-1
+ 𝔼‖1/2(L̃-L^⋆)^T d^3Φ̂_n(L_n)(L̃-L^⋆)‖_2 . I1-2
Using the Cauchy–Schwarz inequality to estimate (<ref>) we see
I1-1 ≤ 𝔼^1/2‖ d^2 Φ̂_n(L^⋆) - 𝔼(d^2 Φ̂_n(L^⋆))‖^2_op 𝔼^1/2‖L̃-L^⋆‖^2_2
≤ N^2/√(n)max_i,j𝔼^1/2[((L^⋆_Z)^-1)_ij^2] 𝔼^1/2‖L̃-L^⋆‖^2_2.
Let h(x) be a multivariate function:
h: ℝ^N × N⟶ ℝ
(x_1, x_2,..., x_NN) ⟼ x_1^2 + x_2^2 + ⋯ + x_NN^2
Then h is a continuous function. Moreover, almost surely L̃ ∈ E_α,β, which is a compact and convex set. Using Theorem <ref> and the portmanteau lemma we have
𝔼(h(√(n)(L̃ -L^⋆)))= n 𝔼‖L̃ - L^⋆‖^2_F⟶𝔼‖Z‖^2_F ,
where Z∼𝒩(0, -V(L^⋆)). 𝔼‖Z‖^2_F is equal to 𝔼(Z_11^2 + ⋯ + Z_1N^2 + Z_21^2 + ⋯ + Z_NN^2) = Tr(-V(L^⋆)). Then there exists a constant C_1, depending on α and β, such that
𝔼^1/2‖L̃-L^⋆‖^2_2≤ C_1/√(n).
As a result,
(<ref>)≤ C_2 N^2/n ,
where C_2 is a suitable constant.
Next, we estimate the second part, that is (<ref>):
𝔼‖1/2(L̃-L^⋆)^T d^3Φ̂_n(L_n)(L̃-L^⋆)‖_2.
Here d^3Φ̂_n(L_n) is an N × N dimensional column vector whose entries are N × N matrices. Since Φ̂_n is infinitely differentiable, L_n is on the line segment between L̃ and L^⋆, and E_α,β is a convex and compact set, we conclude that every entry of d^3Φ̂_n(L_n) is bounded. Hence there exists a constant C_3≥ 0, depending on α and β, such that
𝔼‖1/2(L̃-L^⋆)^T d^3Φ̂_n(L_n)(L̃-L^⋆)‖_2≤ C_3𝔼‖L̃-L^⋆‖^2_2
≤ C_1^2C_3/n.
Now let k_n = n^-1/4. Using Markov's inequality we get
ℙ(‖ρ_n‖_F≥ k_n) ≤𝔼‖ρ_n‖_F/k_n ≤ C_4 n^-1/4
for a suitable constant C_4.
Step 3.
Our next goal is to estimate (I2) as follows.
Keeping k_n = n^-1/4 as in Step 2, we claim that
I2≤ C_7 n^-1/4
for some constant C_7.
Because
ℙ(X_n+k_n1 <x) -ℙ(Z<x)
≥ℙ(X_n + k_n1 < x, ‖ρ_n‖_F < k_n) - ℙ(Z < x)
= (ℙ(X_n+k_n1 < x)-ℙ(X_n + k_n1 < x, ‖ρ_n‖_F≥ k_n)) - ℙ(Z < x)
≥ℙ(X_n+k_n1 < x)- ℙ(‖ρ_n‖_F≥ k_n) - ℙ(Z < x) ,
we have
I2
≤|ℙ(X_n+k_n1 <x) -ℙ(Z<x)|
+ |ℙ(X_n+k_n1 < x)- ℙ(‖ρ_n‖_F≥ k_n) - ℙ(Z < x)|
≤ 2 |ℙ(X_n+k_n1 <x) -ℙ(Z<x)| + ℙ(‖ρ_n‖_F≥ k_n)
= 2 |ℙ(X_n+k_n1 <x) -ℙ(Z +k_n1 <x)
+ ℙ(Z +k_n1 <x) - ℙ(Z<x)| + ℙ(‖ρ_n‖_F≥ k_n)
≤ 2 |ℙ(X_n+k_n1 <x) -ℙ(Z +k_n1 <x) |I2-1
+ 2|ℙ(Z +k_n1 <x)- ℙ(Z<x) |I2-2
+ ℙ(‖ρ_n‖_F≥ k_n)I2-3.
By the multidimensional Berry–Esseen theorem in <cit.>,
(<ref>)≤ C_5·√(N)· n ·𝔼‖ξ_1‖_2^3 ,
where C_5 is a constant and ξ_1 is defined in (<ref>):
𝔼‖ξ_1‖_2^3 = 𝔼‖1/√(n)(-V(L^⋆))^1/2((L^⋆_Z_1)^-1 - (I +L^⋆)^-1)‖_2^3
≤ (1/√(n))^3𝔼‖(-V(L^⋆))^1/2((L^⋆_Z_1)^-1 - (I +L^⋆)^-1) ‖^3_2.
Since 𝔼‖(-V(L^⋆))^1/2((L^⋆_Z_1)^-1 - (I +L^⋆)^-1) ‖_2^3 is a constant, we get
(<ref>)≤ C_6√(N/n) .
For (<ref>), since Z can be viewed as a standard Gaussian random vector, we have
(<ref>) = 2|ℙ(x-k_n1 <Z <x) |
≤ 2∑_i,j = 1^Nℙ(x_ij-k_n≤ Z_ij≤ x_ij)
≤ 2N^2 k_n/√(2π) .
Combining (<ref>) and (<ref>) with the previous bound, with k_n = n^-1/4, we conclude that
I2 ≤ C_7 n^-1/4,
where C_7 is a constant.
Step 4.
As for I3 we can use the same argument as above and conclude that I3 is
bounded by C_8· n^-1/4 for some constant C_8.
This completes the proof of the theorem.
§ TWO-BY-TWO BLOCK KERNEL
In this section we show that if the kernel of a determinantal point process is a two-by-two symmetric positive semi-definite matrix, the maximum likelihood estimator can be found analytically. This result also extends immediately to block-diagonal kernels with two-by-two blocks. However, the method, while effective for two-by-two matrices, is difficult to apply to higher-dimensional kernels.
Let Z ∼DPP(L^⋆), where L^⋆ = ([ a^* b^*; b^* c^* ]), and let the ground set be [2]. For our purpose, we assume
a^*, c^* > 0
and
a^*c^* -b^*^2≥ 0.
We can always assume b is non-negative since, by the identifiability of DPPs, ([ a b; b c ]) and ([ a -b; -b c ]) give the same DPP.
For ease of notation, let p̂_0, p̂_1, p̂_2, p̂_3 denote the empirical probabilities of the subsets ∅, {1}, {2}, {1,2} respectively, and let p_0, p_1, p_2, p_3 denote the corresponding theoretical probabilities.
The relationship between (a,b,c) and (p_0, p_1, p_2, p_3) is given by
( a, b, c) = ( p_1/ p_0, √( p_1 p_2- p_0 p_3)/ p_0, p_2/ p_0) ,
and
p_0= 1/(a+1)(c+1)-b^2 ,
p_1= a/(a+1)(c+1)-b^2 ,
p_2= c/(a+1)(c+1)-b^2 ,
p_3= ac-b^2/(a+1)(c+1)-b^2 .
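The two displays above are inverses of each other. The short Python sketch below (purely illustrative, with arbitrary admissible values of a, b, c) evaluates the four subset probabilities from (a, b, c), checks that they sum to one, and recovers (a, b, c) from them:

import numpy as np

def probs_from_kernel(a, b, c):
    # (p0, p1, p2, p3) for the subsets {}, {1}, {2}, {1,2}
    d = (a + 1.0) * (c + 1.0) - b * b          # det(L + I)
    return np.array([1.0, a, c, a * c - b * b]) / d

def kernel_from_probs(p):
    p0, p1, p2, p3 = p
    return p1 / p0, np.sqrt(p1 * p2 - p0 * p3) / p0, p2 / p0

a, b, c = 1.3, 0.4, 0.8                         # arbitrary admissible kernel entries
p = probs_from_kernel(a, b, c)
print(p.sum())                                  # -> 1.0: the probabilities sum to one
print(kernel_from_probs(p))                     # -> (1.3, 0.4, 0.8): the map is inverted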
The likelihood function defined in (<ref>) now becomes
Φ̂(L) = ∑_J ⊆ [2]p̂_J log det(L_J) - log det(L+I)
= p̂_1log a +p̂_2log c +p̂_3log(ac-b^2) - log [(a+1)(c+1)-b^2] .
To find the critical points of (<ref>) we first set the partial derivative of Φ̂ with respect to b equal to zero and get
∂Φ̂/∂ b = -2p̂_3b/ac-b^2+ 2b/(a+1)(c+1)-b^2 =0 .
Then b is either equal to 0 or
b^2 = ac-(a+1)(c+1)p̂_3/1-p̂_3.
If b =0, then by setting the partial derivatives with respect to a and c to zero and noticing that p̂_0 + p̂_1 + p̂_2 + p̂_3 = 1 we get the first critical point
(â,b̂,ĉ) = (p̂_1 +p̂_3/p̂_0+p̂_2, 0, p̂_2+p̂_3/p̂_0+p̂_1).
This critical point exists only if p̂_0 + p̂_2 and p̂_0+p̂_1 are nonzero. Since each empirical probability converges to its corresponding theoretical probability almost surely and p_0 >0, the strong law of large numbers implies the critical point exists almost surely when n is sufficiently large.
If b≠0, then we can use (<ref>) to estimate b̂ once â, ĉ are obtained:
b̂ = √(âĉ-(â+1)(ĉ+1)p̂_3/1-p̂_3).
To find the maximum likelihood estimators â and ĉ of a and c
we plug (<ref>) into Φ̂ to obtain
Φ̂ = p̂_1log a +p̂_2log c + (p̂_3 -1)log (a+c+1) -(p̂_3-1)logp̂_3/1-p̂_3 + logp̂_3.
Letting ∂Φ̂/∂ a and ∂Φ̂/∂ c equal zero yields
∂Φ̂/∂ a =
p̂_1/a + p̂_3-1/a+c+1 = 0
∂Φ̂/∂ c = p̂_2/c+p̂_3-1/a+c+1 =0.
The above system of equations can be explicitly solved, and combining it with (<ref>) yields
(â,b̂,ĉ) = ( p̂_1/p̂_0,
√(p̂_1p̂_2-p̂_0p̂_3)/p̂_0,
p̂_2/p̂_0) ,
from which we see that this critical point exists only if p̂_0 > 0 and p̂_1p̂_2-p̂_0p̂_3≥ 0. Again by the strong law of large numbers, the second critical point also exists and converges to the true value almost surely. In fact, we have almost surely,
p̂_1/p̂_0→p_1/p_0 = a^*, √(p̂_1p̂_2-p̂_0p̂_3)/p̂_0→√(p_1p_2-p_0p_3)/p_0 = b^*, p̂_2/p̂_0→ c^* .
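The almost sure convergence can be visualized with a minimal simulation (illustrative only; the kernel entries and sample size are arbitrary): draw subsets from the exact four-outcome distribution and plug the empirical frequencies into the closed-form estimator above.

import numpy as np

def subset_probs(a, b, c):
    d = (a + 1.0) * (c + 1.0) - b * b
    return np.array([1.0, a, c, a * c - b * b]) / d   # P(empty), P({1}), P({2}), P({1,2})

def mle_2x2(counts):
    # closed-form critical point (a_hat, b_hat, c_hat) from empirical frequencies
    q = counts / counts.sum()
    a_hat = q[1] / q[0]
    c_hat = q[2] / q[0]
    # clip at 0 in case sampling noise makes the argument slightly negative
    b_hat = np.sqrt(max(q[1] * q[2] - q[0] * q[3], 0.0)) / q[0]
    return a_hat, b_hat, c_hat

rng = np.random.default_rng(1)
a_true, b_true, c_true = 1.3, 0.4, 0.8
p = subset_probs(a_true, b_true, c_true)
n = 200_000
samples = rng.choice(4, size=n, p=p)              # each draw is one observed subset
counts = np.bincount(samples, minlength=4).astype(float)
print(mle_2x2(counts))                            # close to (1.3, 0.4, 0.8) for large n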
Furthermore, we can establish the central limit theorem for the estimator (<ref>), which corresponds to the result in Theorem <ref>.
Assume b^* > 0. Then the estimator (â,b̂,ĉ) in (<ref>) is asymptotically normal,
√(n) ( (â,b̂,ĉ) -(a^*,b^*,c^*)) → 𝒩 (0,-V(a^*,b^*,c^*)),
where the convergence holds in distribution and V(a^*,b^*,c^*) is the inverse of the Hessian matrix of the expected maximum likelihood function Φ(a,b,c) = p_1log a +p_2log c +p_3log(ac-b^2) - log [(a+1)(c+1)-b^2].
Let Z_1,..., Z_n be n independent samples of Z ∼DPP(L^*), where L^*= ([ a^* b^*; b^* c^* ]). Let X_i be the random vector (𝕀_{Z_i = ∅},𝕀_{Z_i = {1}},𝕀_{Z_i = {2}},𝕀_{Z_i = {1,2}})^T, where 𝕀_{·} stands for the indicator random variable. Then X_i has mean μ = (p_0,p_1,p_2,p_3)^T and covariance matrix
Σ =
[ p_0 - p_0^2 -p_0p_1 -p_0p_2 -p_0p_3; -p_0p_1 p_1 - p_1^2 -p_1p_2 -p_1p_3; -p_0p_2 -p_1p_2 p_2 - p_2^2 -p_2p_3; -p_0p_3 -p_1p_3 -p_2p_3 p_3 - p_3^2 ] .
By the central limit theorem, √(n)(X̄_n - μ) converges to a multivariate normal distribution with mean 0 and covariance Σ, where X̄_n = 1/n∑_i=1^n X_i = (p̂_0,p̂_1,p̂_2,p̂_3)^T. Let a function g: ℝ^4 →ℝ^3 be defined by
g(x_1,x_2,x_3,x_4) = (x_2/x_1,√(x_2x_3-x_1x_4)/x_1,x_3/x_1).
Its Jacobian matrix ġ(x) = [∂ g_i/∂ x_j]_3×4 is given by
[ -x_2/x_1^2 1/x_1 0 0; -x_4/2x_1√(x_2x_3-x_1x_4) - √(x_2x_3-x_1x_4)/x_1^2 x_3/2x_1√(x_2x_3-x_1x_4) x_2/2x_1√(x_2x_3-x_1x_4) -1/2√(x_2x_3-x_1x_4); -x_3/x_1^2 0 1/x_1 0 ].
Now we are in a position to apply the delta method <cit.> to get
√(n)((â,b̂,ĉ)-(a^*,b^*,c^*)) = √(n)(g(X̄_n) - g(μ)) → 𝒩(0,ġ(μ)Σġ(μ)' ).
After tedious matrix computations, ġ(μ)Σġ(μ)' is found to be
D[ (a^*+a^*^2) σ_12 σ_13; σ_12 a^*c^*/b^*^2-1/4D + a^*+c^*+4a^*c^*/4 σ_23; σ_13 σ_23 c^*+c^*^2 ],
where
{
D = (a^*+1)(c^*+1) - b^*^2 ;
σ_12 = (a^*c^*/2b^*+a^*b^*+a^*/2b^*(a^*c^*-b^*^2)) ;
σ_13 = a^*c^* ;
σ_23= a^*c^*/2b^* +b^*c^* + c^*/2b^*(a^*c^*-b^*^2) .
.
It is straightforward to verify that the above matrix is the negative of the inverse of the Hessian matrix of the expected maximum likelihood function Φ(L), that is, -V(a^*,b^*,c^*), which in turn verifies Theorem <ref> in this special case. However, in this two-by-two case, our maximum likelihood estimator is unique (given the convention b̂ ≥ 0) without invoking the identifiability transformations of Theorem <ref>.
This idea can be extended to block-diagonal ensembles whose blocks are two-by-two matrices.
If L^⋆ is a matrix with k two-by-two blocks J_1, ..., J_k
[ a_1 b_1 ; b_1 c_1 ; a_2 b_2 ; b_2 c_2 ; ⋱; a_k b_k; b_k c_k ],
where for each 1≤ i ≤ k, a_i, b_i, c_i > 0 and a_ic_i -b_i^2≥ 0. Let ground set of this DPP be {J_1^1, J_1^2,J_2^1,J_2^2,...,J_k^1,J_k^2} and for each 1≤ i ≤ k,
p̂_J_i^0 = 1/n∑_m=1^n𝕀{ J_i^1∉ Z_m, J_i^2∉ Z_m}
p̂_J_i^1 = 1/n∑_m=1^n𝕀{ J_i^1∈ Z_m, J_i^2∉ Z_m}
p̂_J_i^2 = 1/n∑_m=1^n𝕀{ J_i^1∉ Z_m, J_i^2∈ Z_m}
p̂_J_i^3 = 1/n∑_m=1^n𝕀{ J_i^1∈ Z_m, J_i^2∈ Z_m} ,
where Z_1, ..., Z_n are n independent subsets drawn from DPP(L^⋆).
By Proposition <ref>, Z∩ J_1, ..., Z ∩ J_k are mutually independent.
Then the critical-point result for two-by-two matrices can be applied:
(â_i, b̂_i,ĉ_i) =
(p̂_J_i^1/p̂_J_i^0,
√(p̂_J_i^1p̂_J_i^2 -p̂_J_i^0p̂_J_i^3)/p̂_J_i^0
, p̂_J_i^2/p̂_J_i^0),
for every 1 ≤ i≤ k.
However the above method is fraught with difficulties when the kernel has dimension higher than 2. For example, if the kernel is a 3 × 3 matrix
[ a d e; d b f; e f c ],
then setting the gradient of the likelihood function equal to zero yields
∇Φ̂ = ∑_J ⊆ [3]p̂_J L_J^-1 - (L+I)^-1 = 0 .
Computing L^-1 and (L+I)^-1 could be troublesome. For example, L^-1 is:
1/a(bc-f^2) -d(cd-ef)+e(df-be)[ bc-f^2 -cd+ef -be+df; -cd+ef ac-e^2 de-af; -be+df de-af ab-d^2 ]
which is difficult to use to obtain an explicit
maximum likelihood estimator.
§ CONCLUSION
In this paper, we study maximum likelihood estimation of the ensemble matrix of a determinantal point process. Brunel et al. show that the expected likelihood function Φ(L) is locally strongly concave around the true value L^⋆ if and only if L^⋆ is irreducible, since the Hessian matrix of Φ(L) at L^⋆ is then negative definite. They further prove that the maximum likelihood estimator (MLE) is consistent in the sense of convergence in probability and, when L^⋆ is irreducible, they also obtain a central limit theorem for the MLE. Motivated by their results, we show that the MLE is also strongly consistent, i.e., consistent in the sense of almost sure convergence. Moreover, we obtain a Berry–Esseen type result for the central limit theorem and find an n^-1/4 rate of convergence of the MLE to normality. Last, we obtain the explicit form of the MLE when L^⋆ is a two-by-two matrix or a block-diagonal matrix whose blocks are two
by two matrices. The strong consistency and the central limit theorem follow from these explicit forms, which illustrates the general strong consistency and central limit theorem proved earlier. It would be interesting to find the explicit form of the MLE for some particular higher-dimensional DPPs. However, as maximum likelihood learning of DPPs is proven to be NP-hard, an explicit form for general ensembles, even if it were found, would be very difficult to compute.
In addition to the maximum likelihood estimator there are also other approaches. Let us mention one alternative.
For all J such that | J |≤ 1, we let
det(L_J)/det(L+I) = p̂_J,
where the left-hand side is the theoretical probability of J and the right-hand side
is the empirical probability of J. Taking J={i} suggests the following estimator for L_ii:
L̂_ii = p̂_i/p̂_∅.
Using equations (<ref>) for | J | = 2 again, we are able to determine the off-diagonal elements up to sign:
L̂^2_ij = (p̂_ip̂_j -p̂_∅p̂_{i,j})/p̂^2_∅,
where i ≠ j. Notice that this is the maximum likelihood estimator when L is two-dimensional. There remains the question of how to choose the sign of L̂_ij in
(<ref>), which has been resolved by <cit.> with graph theory.
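As a sketch of this alternative estimator (a hypothetical 3 × 3 example of ours; the sign-selection step of <cit.> is not implemented), the diagonal entries and the magnitudes of the off-diagonal entries can be read off directly. For clarity, the exact subset probabilities are used below in place of the empirical ones:

import numpy as np

L_true = np.array([[1.0, 0.3, 0.2],
                   [0.3, 0.8, 0.1],
                   [0.2, 0.1, 0.6]])
Z = np.linalg.det(L_true + np.eye(3))

def p(J):
    # exact DPP probability of subset J under L_true
    J = list(J)
    return np.linalg.det(L_true[np.ix_(J, J)]) / Z if J else 1.0 / Z

p0 = p(())
L_est = np.zeros((3, 3))
for i in range(3):
    L_est[i, i] = p((i,)) / p0                                    # L_ii = p_i / p_empty
for i in range(3):
    for j in range(i + 1, 3):
        num = p((i,)) * p((j,)) - p0 * p((i, j))
        L_est[i, j] = L_est[j, i] = np.sqrt(max(num, 0.0)) / p0   # |L_ij|; sign left open
print(np.round(L_est, 6))   # recovers L_true up to the signs of the off-diagonal entries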
|
http://arxiv.org/abs/2307.04768v1
|
20230706025646
|
A short proof of Seymour's 6-flow theorem
|
[
"Matt DeVos",
"Kathryn Nurse"
] |
math.CO
|
[
"math.CO"
] |
A short proof of Seymour's 6-flow theorem
Matt DeVos
Email: [email protected].
Supported by an NSERC Discovery Grant (Canada)
Kathryn Nurse
Email: [email protected]
August 1, 2023
===========================================================================================================================================
We give a compact variation of Seymour's proof that every 2-edge-connected graph has a nowhere-zero ℤ_2 ×ℤ_3-flow.
All graphs are finite; loops and multiple edges are allowed. For notation not defined here we use <cit.>. Let G=(V,E) be a directed graph, A an additively-written abelian group, and f: E → A a function. We say f is an A-flow whenever ∑_e ∈δ^+(v)f(e) = ∑_e ∈δ^-(v)f(e) holds for every v ∈ V, where δ^+(v) (δ^-(v)) is the set of edges whose initial (terminal) vertex is v. If 0 ∉f(E), then we say f is nowhere-zero. If f is a ℤ-flow with f(E) ⊆{0, ±1, ±2, …, ±(k-1)}, then we say f is a k-flow. Note that reversing an edge e and replacing f(e) with its negation preserves all of the aforementioned properties; accordingly the existence of a nowhere-zero A-flow or k-flow depends only on the underlying graph. A famous conjecture of Tutte <cit.> asserts that every 2-edge-connected graph has a nowhere-zero 5-flow. This conjecture remains open with the best result due to Seymour <cit.> who proved that such graphs have nowhere-zero 6-flows. His argument involves a standard reduction due to Tutte equating the existence of a nowhere-zero k-flow and a nowhere-zero A-flow whenever |A|=k, together with the following central result.
Every 2-edge-connected digraph has a nowhere-zero ℤ_2 ×ℤ_3-flow.
We give a compact version of Seymour's proof. We prove a slightly stronger statement by induction, using simple contraction-based arguments
(and in particular, we don't require any special reductions).
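To make the definitions concrete, the following small Python sketch (our own illustration, unrelated to the proof) checks whether a given ℤ_2 × ℤ_3-labelling of a directed multigraph is a nowhere-zero flow:

def is_nowhere_zero_flow(vertices, edges, f):
    # edges: list of (tail, head); f: list of (a, b) with a in Z_2, b in Z_3
    net = {v: [0, 0] for v in vertices}
    for (u, w), (a, b) in zip(edges, f):
        net[u][0] = (net[u][0] + a) % 2; net[u][1] = (net[u][1] + b) % 3   # outflow at tail
        net[w][0] = (net[w][0] - a) % 2; net[w][1] = (net[w][1] - b) % 3   # inflow at head
    conserved = all(n == [0, 0] for n in net.values())
    nowhere_zero = all((a % 2, b % 3) != (0, 0) for a, b in f)
    return conserved and nowhere_zero

# a directed triangle in which every edge carries the value (1, 1)
vertices = [0, 1, 2]
edges = [(0, 1), (1, 2), (2, 0)]
print(is_nowhere_zero_flow(vertices, edges, [(1, 1)] * 3))   # -> True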
Our proof of Theorem <ref> relies on contracting a set of edges S, finding a flow f in the smaller graph, then uncontracting S and extending the domain of f to include S. Observe that 1. it is always possible to extend the domain while maintaining that f is a flow, and 2. if S is a set of at least two parallel edges, and the abelian group has size at least 3, then it is possible to extend the domain so that additionally 0 ∉f(S). In the following we use G/S to denote the graph obtained from G by contracting the set of edges S ⊆ E, and δ(u) is the set of edges incident to vertex u.
If G = (V,E) is a 2-edge-connected digraph, and u ∈ V, then G has a nowhere-zero flow f_2 × f_3 : E →ℤ_2 ×ℤ_3 so that δ(u) ∩ supp(f_2) = ∅.
We proceed by induction on |V|, with the base case |V| = 1 holding trivially.
First, suppose G-u has a 1-edge-cut E(V_1,V_2) = {e}. Choose a partition {E_1,E_2} of E∖{e} so that for i = 1,2 the edges in E_i have ends in V_i ∪{u}. Let G_i = G/E_i. By induction, G_i has a nowhere-zero flow f^i_2 × f^i_3 : E(G_i) →ℤ_2 ×ℤ_3 so the edges incident to the contracted vertex are not in the support of f_2^i. By possibly replacing f^1_3 with its negation, we may assume that f^1_3 (e) = f^2_3(e) and then the ℤ_2 ×ℤ_3-flows in each G_i combine to give the desired flow in G. Thus we may assume G-u has no cut-edge.
Choose distinct edges ux and ux' so that x and x' are in the same component of G-u (possibly x = x'). By our assumptions, we may choose two edge-disjoint paths P_1, P_2 ⊆ G-u from x to x'.
Set H = P_1 ∪ P_2, S = E(u,V(H)), G_1 = G/E(H) with contracted vertex u_1, and G_2 = G_1/ S with contracted vertex u_2. By induction, G_2 has a flow f_2 × f_3 : E(G_2) →ℤ_2 ×ℤ_3 so that δ(u_2) ∩ supp(f_2) = ∅. Because S is a set of at least two parallel edges, we may extend f_3 to E(G_1) so that it remains a flow and f_3(e) ≠ 0 for all e ∈ S. Because δ(u_2) ∩ supp(f_2) = ∅, setting f_2(e) = 0 for all e ∈ S extends f_2 to E(G_1) keeping it a flow. Note that δ_G_1(u) ∩ supp(f_2) = ∅ = δ_G_1(u_1) ∩ supp(f_2). Now, further extend f_3 to E(G) so that it remains a flow. Because δ(u_1) ∩ supp(f_2) = ∅, and every vertex of H has even degree, we may extend f_2 to E(G) by setting f_2(e) = 1 for all e ∈ E(H), keeping it a flow. Now S ⊆ supp(f_3), E(H) ⊆ supp(f_2) and so f_2 × f_3 is as desired.
|
http://arxiv.org/abs/2307.02401v1
|
20230705162112
|
Exciton transport in a germanium quantum dot ladder
|
[
"T. -K. Hsiao",
"P. Cova Fariña",
"S. D. Oosterhout",
"D. Jirovec",
"X. Zhang",
"C. J. van Diepen",
"W. I. L. Lawrie",
"C. -A. Wang",
"A. Sammak",
"G. Scappucci",
"M. Veldhorst",
"E. Demler",
"L. M. K. Vandersypen"
] |
cond-mat.mes-hall
|
[
"cond-mat.mes-hall"
] |
[Current address: ]Department of Physics, National Tsing Hua University, Hsinchu 30013, Taiwan
These authors contributed equally to this work
QuTech and Kavli Institute of Nanoscience, Delft University of Technology, 2600 GA Delft, The Netherlands
These authors contributed equally to this work
QuTech and Kavli Institute of Nanoscience, Delft University of Technology, 2600 GA Delft, The Netherlands
QuTech and Kavli Institute of Nanoscience, Delft University of Technology, 2600 GA Delft, The Netherlands
Netherlands Organisation for Applied Scientific Research (TNO), 2628 CK Delft, The Netherlands
QuTech and Kavli Institute of Nanoscience, Delft University of Technology, 2600 GA Delft, The Netherlands
QuTech and Kavli Institute of Nanoscience, Delft University of Technology, 2600 GA Delft, The Netherlands
[Current address: ]Niels Bohr Institute, University of Copenhagen, Copenhagen, Denmark
QuTech and Kavli Institute of Nanoscience, Delft University of Technology, 2600 GA Delft, The Netherlands
[Current address: ]Niels Bohr Institute, University of Copenhagen, Copenhagen, Denmark
QuTech and Kavli Institute of Nanoscience, Delft University of Technology, 2600 GA Delft, The Netherlands
QuTech and Kavli Institute of Nanoscience, Delft University of Technology, 2600 GA Delft, The Netherlands
QuTech and Kavli Institute of Nanoscience, Delft University of Technology, 2600 GA Delft, The Netherlands
Netherlands Organisation for Applied Scientific Research (TNO), 2628 CK Delft, The Netherlands
QuTech and Kavli Institute of Nanoscience, Delft University of Technology, 2600 GA Delft, The Netherlands
QuTech and Kavli Institute of Nanoscience, Delft University of Technology, 2600 GA Delft, The Netherlands
Institute for Theoretical Physics, Wolfgang Pauli Str. 27, ETH Zurich, 8093 Zurich, Switzerland
[email protected]
QuTech and Kavli Institute of Nanoscience, Delft University of Technology, 2600 GA Delft, The Netherlands
Quantum systems with engineered Hamiltonians can be used as simulators of many-body physics problems to provide insights beyond the capabilities of classical computers. Semiconductor gate-defined quantum dot arrays have emerged as a versatile platform for quantum simulation of generalized Fermi-Hubbard physics, one of the richest playgrounds in condensed matter physics. In this work, we employ a germanium 4×2 quantum dot array and show that the naturally occurring long-range Coulomb interaction can lead to exciton formation and transport. We tune the quantum dot ladder into two capacitively-coupled channels and exploit Coulomb drag to probe the binding of electrons and holes. Specifically, we shuttle an electron through one leg of the ladder and observe that a hole is dragged along in the second leg under the right conditions. This corresponds to a transition from single-electron transport in one leg to exciton transport along the ladder. Our work paves the way for the study of excitonic states of matter in quantum dot arrays.
Exciton transport in a germanium quantum dot ladder
L. M. K. Vandersypen
August 1, 2023
===================================================
§ INTRODUCTION
Analog quantum simulators with well-controlled interaction parameters can shed light on the physics of strongly-correlated many-body quantum systems <cit.>. Electrostatically-defined semiconductor quantum dot arrays, owing to their in-situ tunability of electrochemical potentials and relevant energy scales which can far exceed the thermal energy, have become an attractive platform for simulating fermionic systems <cit.>. Over the past few years, the techniques for control and probing of quantum dot simulators has progressed significantly. This platform and closely related donor arrays have been used as a small-scale simulator of Mott-Hubbard physics <cit.>, Nagaoka ferromagnetism <cit.>, Heisenberg antiferromagnetic spin chains <cit.>, resonating valence bond states <cit.>, and the Su–Schrieffer–Heeger model <cit.>.
The charge carriers confined in quantum dot arrays exhibit an intrinsic long-range Coulomb interaction, which plays an important role in fundamental physics phenomena like exciton formation <cit.> and chemical bonding <cit.>, as well as in exotic phases such as Wigner crystals <cit.>, excitonic insulators <cit.> and exciton condensates <cit.>. In contrast, simulating these phenomena is challenging for the highly successful quantum simulation platform based on ultra-cold atoms, where the inter-particle interaction is largely limited to on-site <cit.>, or non-local dipole-dipole interaction <cit.>.
One manifestation of the long-range Coulomb interaction in low-dimensional systems is Coulomb-drag. In a two-channel system, a current imposed by a voltage bias across one channel (the drive channel) leads to a current or voltage across a second channel (the drag channel) <cit.>. Coulomb drag can take two forms. “Positive" Coulomb drag occurs when an electron in the drive channel pushes electrons in the drag channel forward due to Coulomb-mediated momentum transfer <cit.>. “Negative" Coulomb drag can result from Wigner-crystal physics <cit.> or from exciton formation <cit.>, in which the motion of a charge carrier in the drive channel pulls along a charge carrier of opposite sign in the drag channel. The negative Coulomb drag effect from exciton formation has been observed in double quantum wells in the quantum hall regime <cit.>, double quantum wires <cit.> and 2D materials <cit.>. In these works, the negative Coulomb drag is interpreted as resulting from inter-channel exciton transport, which serves as a precursor for exciton condensation and excitonic insulator phases.
Excitonic states can be described theoretically using a two-channel Hubbard model with N× 2 sites <cit.>
H = -t∑_⟨ i,j ⟩, σ c^†_iσc_jσ + U∑_in_i(n_i-1)/2
+ U^'∑_⟨ i,j ⟩n_in_j + V∑_i∈α,j∈β nearest n_in_j
+ V^'∑_i∈α,j∈β diagonal n_in_j
where c_iσ denotes the annihilation operator of a spin-1/2 fermion with spin σ ∈{↑, ↓} at site i of a two-channel system where site 1 to N are located in channel α and site (N+1) to 2N are part of channel β, and ⟨ i,j ⟩ sums over neighbouring sites in the same channel. The number operator is given by n_i=c^†_i↑c_i↑+c^†_i↓c_i↓,
t is the tunnel coupling within the same channel, U the on-site Coulomb interaction, U^' is the nearest-neighbor Coulomb interaction within the same channel, V is the nearest-neighbor inter-channel Coulomb interaction, and V^' is the diagonal inter-channel Coulomb interaction. When the two channels are occupied by charge carriers of opposite sign, the inter-channel interactions are attractive. Note that we consider systems without hopping between the two channels and interaction terms beyond nearest-neighbor or diagonal sites are neglected. Furthermore, in Eq. <ref> we assume homogeneous tunnel couplings and Coulomb interactions. To describe systems with inhomogeneous couplings, we will use t_ij and V_ij to denote the tunnel coupling and inter-channel Coulomb interaction between site i and j.
This model can describe the conduction band and valence band in a material, and also two capacitively-coupled channels. Earlier works have reported on arrays of metallic or superconducting tunnel junctions <cit.>, and small quantum-dot arrays <cit.>. However, these systems lack the control knobs for individual interaction parameters and the probes for the quantum state at each site. In comparison, when a N× 2 quantum dot ladder is tuned to host electrons in one channel and holes in the other channel, thanks to the advanced control and probing capabilities, it can be used as a versatile analog quantum simulator for excitonic physics.
Many years of work on quantum dot systems have led to steady scaling of linear arrays <cit.>. Furthermore, several reports on two-dimensional quantum dot arrays have appeared using GaAs <cit.>, silicon <cit.> and germanium <cit.> as the host material. Among the various host materials, germanium is particularly promising to scale to large arrays thanks to the low disorder and light effective mass <cit.>. Even a 4 × 4 Ge quantum dot array has been realized <cit.>, albeit with shared-controlled electrochemical potentials and tunnel couplings.
In this work we use a 4×2 Ge quantum dot ladder as an excitonic simulator, doubling the size of fully controlled Ge quantum dot arrays <cit.>. We activate hopping along the legs of the ladder but suppress hopping between the legs. In this way, two capacitively-coupled channels of quantum dots are formed. The charge carriers in this platform are holes arising from the valence band. A missing hole on top of a singly-occupied background of holes effectively defines an electron. We control the electrochemical potentials of the array such that the top channel hosts an electron and the bottom channel can host a hole. To explore the formation of excitons, we use real-time charge sensing to study under what conditions the imposed motion of an electron through the top channel drags along a hole in the bottom channel through the long-range Coulomb-interaction.
§ DEVICE AND EXPERIMENTAL APPROACH
The experiment is carried out in an electrostatically-defined 4×2 hole quantum dot array, which is fabricated in a Ge/SiGe quantum well heterostructure <cit.>. Fig. <ref>(a) shows a device image, with the positions of the dots and charge sensors as indicated by the labeled circles. Fig. <ref>(b) shows a schematic gate stack of the device. Screening gates, plunger gates and barrier gates were fabricated in successive lithography steps (see the Appendix for details). We refer to the path from dot 1 to dot 4 as the top channel (drive channel) and to the path from dot 5 to dot 8 as the bottom channel (drag channel). Quantum dots are formed by applying negative DC voltages on a set of plunger gates, P, and barrier gates, B, to accumulate and confine holes in the quantum well in the area between the screening gates. The charge occupation of the 4×2 array is denoted ( [ O_1 O_2 O_3 O_4; O_5 O_6 O_7 O_8 ]), where O_i represents the number of holes in dot i. The structure allows for individual control of all ten nearest-neighbor tunnel couplings. Plunger gates and barrier gates are additionally connected to high-frequency lines via bias tees to allow fast control of electrochemical potentials and tunnel couplings.
In this experiment the plunger and barrier gates are virtualized such that changing a virtual plunger P^'_i independently controls the electrochemical potential, ϵ_i, of dot i and changing a virtual barrier B^'_ij mainly modulates the tunnel coupling, t_ij, between neighboring dots i and j without influencing the dot potentials. In this device four charge sensors (BL, BR, TL and TR) can be formed at the four corners of the array. They serve both as detectors for the charge occupation and as reservoirs. In this experiment we use only the BL and BR sensors for charge sensing, with multiplexed RF reflectometry (TL and TR are used as reservoirs). The plunger gates for the BL and BR sensors are also included in the gate virtualization, such that sweeping a plunger gate in the array does not shift the sensor peak position. Therefore, the sensors are mostly sensitive to changes of the charge occupation in the array.
To study exciton formation via the Coulomb drag effect, we will aim to initialize the device in the ( [ 1 1 1 1; 0 0 0 0 ]) charge state, where each top-channel dot is occupied by one hole and the bottom channel is empty. Because the charge carriers in the array are holes originating from the valence band, removing a hole in the top channel amounts to adding an electron relative to the singly-filled background of holes (see Fig. <ref>). We can thus load an electron to the top channel by emptying a dot (e.g. pulsing to the (0111) charge state in the top channel). The electrochemical potentials of the bottom dots in the ( [ 1 1 1 1; 0 0 0 0 ]) configuration are aligned with each other, such that loading a hole from the reservoir to the bottom channel costs the same energy regardless of its position. We label this energy cost E (Fig. <ref>). When E is lower than the nearest-neighbor inter-channel Coulomb interaction V_ij, a hole will be attracted in the bottom channel by the top-channel electron, reaching e.g. the charge state ( [ 0 1 1 1; 1 0 0 0 ]). An electron-hole pair is thus formed bound by V_ij, which constitutes an inter-channel exciton (strictly speaking, V_ij must here be corrected by intra-channel and diagonal Coulomb interactions; we will neglect these corrections to simplify the discussion but they are included when aligning the bottom dot potentials). Furthermore, if the system Hamiltonian favors an exciton ground state, pushing the electron (the missing hole) through the top channel will cause the hole in the bottom to move together with the electron (Fig. <ref>(c) and Fig. <ref>).
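The loading condition can be made concrete with a back-of-the-envelope comparison. The sketch below (a simplified classical estimate of ours, ignoring the intra-channel and diagonal corrections mentioned above) compares the energy cost E of adding a hole to the bottom channel with the nearest-neighbor inter-channel couplings V_ij reported in the following section:

# measured nearest-neighbor inter-channel couplings (in µeV), from the next section
V = {"15": 220, "26": 260, "37": 315, "48": 213}
V_avg = sum(V.values()) / len(V)
V_min = min(V.values())

def hole_is_dragged(E_ueV):
    # a hole is loaded under the electron (an exciton forms) when E < V_ij;
    # for drag along the whole ladder the smallest V_ij sets the limit
    return E_ueV < V_min

print(V_avg)                 # ~252 µeV: expected transition scale E ~ V_avg
print(hole_is_dragged(150))  # True: exciton (Coulomb drag) regime
print(hole_is_dragged(300))  # False: single-electron regime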
§ QUANTUM DOT LADDER FORMATION AND TUNE-UP
Figure <ref>(a) shows charge stability diagrams for the inter-channel dot pair 1-5 near the ( [ 1 1 1 1; 0 0 0 0 ]) charge configuration (see the Appendix for the other inter-channel pairs). The virtualized sensors result in a gradient-free signal within each charge state region. The inter-channel Coulomb interactions V_ij between dot i and j are extracted from the size of the anti-crossing for an inter-dot transition. The obtained inter-channel Coulomb interaction strengths are V_15 = 220 µeV, V_26 = 260 µeV, V_37 = 315 µeV, and V_48 = 213 µeV. The diagonal Coulomb interactions V^' are smaller than 100 µeV.
Figure <ref>(b) shows the sensor signal as a function of the detuning of dots 1 and 2, δ(P^'_1-P^'_2), and the detuning of dots 5 and 6, δ(P^'_5-P^'_6), near their respective inter-dot transitions. If we sweep δ(P^'_1-P^'_2) and keep δ(P^'_5-P^'_6) fixed near the 5-6 transition, as indicated by the black arrow in Fig. <ref>(b), a transition is made from ( [ 0 1 1 1; 1 0 0 0 ]) to ( [ 1 0 1 1; 0 1 0 0 ]) whereby a charge tunnels from dot 2 to dot 1 and simultaneously a charge moves from dot 5 to dot 6, thanks to the inter-channel Coulomb interactions V_15 and V_26. This co-tunneling process <cit.> results in an exciton moving in the ladder array, and is the dominant exciton transport process since it happens before sequential tunneling is energetically allowed (see the path along the black line in Fig. <ref>).
Efficient exciton transport requires strong intra-channel tunnel couplings in order to obtain large intra-channel co-tunneling couplings, and weak inter-channel tunnel couplings. Strong inter-channel tunneling exceeding the channel detuning would allow the charge carriers to hybridize between the two channels, in which case we can no longer speak of a distinct electron and hole which are bound by long-range Coulomb interaction.
Using the gate voltages, we can control both the inter-channel and intra-channel tunnel couplings. The tunnel couplings are characterized by fitting inter-dot transition sensor signals to a model described in <cit.>. Figures <ref>(a-b) show the control of t_12 and t_15. Due to fabrication procedure, some barrier gates exhibit a weaker response than others, meaning that larger voltage swings are required for modulating the corresponding tunnel couplings (see appendix for details). Note that in the virtualized B^' we do not compensate for tunnel coupling crosstalk <cit.> since the present experiment only requires setting the tunnel couplings once and furthermore is robust to small variations in tunnel couplings.
We here set all intra-channel tunnel couplings to 30–40 µeV. For the inter-channel tunnel couplings we target values ideally below 1 µeV. However, it is challenging to quantify such small tunnel couplings by fitting the inter-dot sensor signal, given that the thermal energy based on the effective electron temperature is about 20 µeV in this experiment. Instead of the tunnel couplings, we measure the inter-dot tunnel rates by abruptly aligning the dot potentials using a gate voltage pulse. The relation between tunnel coupling and tunnel rate can be expressed as <cit.>
Γ_ij = 2T_2t^2_ij
where Γ_ij and t_ij are the tunnel rate and tunnel coupling between dot i and j, and T_2 is the charge dephasing time (T_2 ≥ 0.3 ns extracted from photon-assisted-tunneling measurement <cit.>, see appendix for details). Figure <ref>(c) shows the tunnel rate measurement between dot 2 and dot 6. The fit yields Γ_26 = 40 kHz. Using Eq. (<ref>) we obtain t_26≤ 0.03 µeV. For comparison, Fig. <ref>(d) shows the measurement of Γ_56 when t_56 =46 µeV. In this case the decay appears instantaneous owing to the fast tunneling between the dots. Using the inter-channel barrier voltages, all inter-channel tunnel couplings can be suppressed below 0.1 µeV (see appendix), with all inter-channel Coulomb interactions > 150 µeV. However, we ideally want homogeneous inter-channel Coulomb interactions of about 200-300 µeV, in order to have a large window for Coulomb drag. Since V_15 is only 166 µeV when t_15=0.07 µeV, we bring dot 1 and dot 5 closer together to increase V_15 to 220 µeV, at the expense of a higher t_15∼25 µeV [We note that although t_15 is higher than other inter-channel tunnel couplings, since electron-hole pair transport is a co-tunneling process and since t_26 remains below 1 µeV, the correlated hopping of an electron-hole pair across the channels is still three orders of magnitude smaller than the hopping along the channel direction.].
§ COULOMB DRAG AND EXCITON FORMATION
The experiment scheme for measuring exciton formation and transport is illustrated in Fig. <ref>. In phase I, the 4×2 dot array is set to the ( [ 1 1 1 1; 0 0 0 0 ]) charge occupation in which the dot potentials in the top channel (drive channel) are aligned and are placed ∼200 µeV below the Fermi level. The potentials in the bottom channel (drag channel) are aligned as well, and positioned above the Fermi level by an energy offset, E. From phase II to V, the respective top-channel dot potentials are consecutively raised and then lowered by 6 mV (∼670 µeV) to load and shuttle an electron from left to right. If E<V_ij, the top-channel electron capacitively lowers the bottom-channel potential on the opposite site below the Fermi level. As a consequence a hole is loaded in the bottom channel. Due to the inter-channel Coulomb interaction, the hole is dragged along with the electron, i.e. the electron and hole move together as an exciton along the channel throughout the pulse sequence. In contrast, if E>V_ij, the top-channel electron moves alone without dragging along a hole. Therefore, a transition between exciton transport and single electron transport is expected to occur at E ∼ V_avg = ⟨ V_ij⟩. In this work, the average inter-channel Coulomb interaction V_avg is 252 µeV. We note that for a system with inhomogeneous V_ij, the range of E where Coulomb drag can occur is limited by the smallest V_ij.
In the measurements shown in Fig. <ref>, the top-channel dot potentials are pulsed from phase I to V in the time domain while the bottom-channel potentials are fixed at E [In the experiment we apply a global virtual gate voltage on the bottom channel and convert the global voltage to a global energy offset using an averaged bottom-channel lever arm 112 µeV/mV]. Figs. <ref>(a) and (b) show the BL and BR sensor signals as a function of time and E. The sensor signals corresponding to the ( [ 1 1 1 1; 0 0 0 0 ]) charge state (phase I when E>0) are assigned a reference value of 0. An increasing (decreasing) signal indicates a positive (negative) charge moves closer to the corresponding sensor. In the region enclosed by the blue dashed rectangle, from phase II to V, the BL (BR) sensor signal is increasingly (less and less) negative. As E is reduced, the sensor signals first pass through a transition region around E∼200 µeV and then reach a region enclosed by the orange rectangle, where the BL (BR) sensor signal is less and less (increasingly) positive from phase II to V.
The data in Fig. <ref>(a) and (b) can be understood as follows. In the blue-dashed region, the system is in the single-electron regime in which a top-channel electron is moving away from BL and towards BR. Hence, the magnitude of the negative signal decreases (increases) over time for BL (BR). In contrast, in the orange-dashed region, the system enters the exciton transport regime in which an inter-channel exciton moves to the right. Because the BL and BR sensors are more sensitive to the bottom-channel hole than to the top-channel electron, the net signal induced by the exciton is positive and the magnitude of this positive signal decreases (increases) over time for BL (BR). See Fig. <ref>(c) and (d) for a further comparison between the signals in the single-electron-transport regime and the exciton transport regime. In Fig. <ref>(a) and (b) the transition between the single-electron regime and the exciton transport regime occurs around E∼200 µeV, which is consistent with the predicted transition point E∼ V_avg=252 µeV. The width of the transition regime depends on the level of disorder in the dot potentials (δϵ≤ 50 µeV, which is the accuracy of the automated calibration) and variations in inter-channel V_ij (standard deviation in V_ij of ∼ 40 µeV). Note that when E<0, the signals in phase I increase because the bottom channel starts loading holes from the reservoirs, even if no electron is loaded in the top channel.
Finally, since the transport of an inter-channel exciton involves a co-tunneling process, it is possible in principle that either the electron or the hole or the entire exciton are not successfully transferred from one site to another. In the data of Fig. <ref>, no such failed charge transfers are observed. This is expected since the 50 duration of the pulse segments by far exceeds both the single-particle tunneling rates and the co-tunneling rates (in the Appendix, we estimate the probability of successful adiabatic charge transfer to be about 99.2%).
§ CONCLUSION AND OUTLOOK
In summary, we have fabricated a germanium 4×2 quantum dot ladder and use it as a quantum simulator for exciton formation. To engineer the simulator Hamiltonian, we tune the full array into the single-hole regime and independently control all the on-site potentials and interdot tunnel couplings. We find strong inter-channel Coulomb interaction while the tunneling between channels is suppressed, which is essential for simulating excitonic physics. To probe exciton formation by means of Coulomb drag, we drive an electron through the top channel and measure the charge sensor signals as a function of the bottom channel potential. The measured signals are in good agreement with the picture of a transition from single-electron transport to exciton transport resulting from the inter-channel Coulomb interaction. An interesting next step possible with the present sample is to create and study an engineered excitonic insulator <cit.>.
In the future, we envision that with sufficiently homogeneous interaction energies and co-tunnel couplings in longer ladders, excitons can delocalize over the array, show coherent dynamics in the time domain, and form an exciton quasi-condensate [Strictly speaking, exciton condensation does not occur in 1D or 2D at finite temperature. However, for real experimental systems we can have quasi-condensation when the correlation length exceeds the system size <cit.>].
It is useful to point out an enhanced symmetry in bilinear quantum dot arrays as described by Eq. <ref>, which should play an important role in the nature of the ground state in the thermodynamic limit. As there is no tunnelling between the channels, one can define separate SU(2) symmetries for each channel [Holes in strained germanium have spin-3/2, but the large heavy-hole light-hole splitting leads to an effective two-level system.]. The full Hamiltonian is symmetric with respect to both of them, and the full symmetry of the system is SO(4) ≃ SU(2) ⊗ SU(2) <cit.>. Excitonic condensation in this system would require spontaneous symmetry breaking of the SO(4) symmetry. For non-Abelian symmetries such as SO(4), the Hohenberg-Mermin-Wagner theorem shows that only exponentially decaying correlations are allowed even at zero temperature, due to the abundance of possible fluctuations of the order parameter. Interestingly, two excitons can together form an SO(4) singlet. Such singlets can exhibit quasi-long-range order at zero temperature in one-dimensional systems, analogously to spinless bosons. This suggests our system can exhibit unusual types of ground states in the thermodynamic limit, such as quasi-condensates of composite particles or states with broken translational symmetry. Analogous phenomena have been discussed in the context of spinor condensates of cold atoms in one-dimensional systems <cit.>.
One can also break the SO(4) symmetry by introducing extra terms to the Hamiltonian. When breaking SO(4) symmetry with a magnetic field, S_z=1 excitons are favored and can form a (quasi-)condensate, which is not usually seen in optical spectroscopy since these excitons are dark. Finally, the spin-orbit coupling present in germanium quantum wells, while not breaking time reversal symmetry <cit.>, can also hybridize singlet and triplet states, lifting their degeneracy <cit.>, which may lead to condensation at zero magnetic field.
We acknowledge useful discussions with members of the Vandersypen group, and with D. Sels, S. Gopalakrishnan, A. Bohrdt, F. Grusdt, I. Morera, H. Lange. We thank software development by S. L. de Snoo. We also acknowledge technical support by O. Benningshof, J. D. Mensingh, R. Schouten, E. Van der Wiel and N. P. Alberts. L.M.K.V. acknowledges support from an Advanced Grant of the European Research Council (ERC) under the European Union’s Horizon 2020 research and innovation programme (grant agreement No 882848) and by a Vici grant of the Dutch Research Council (NWO). E.D. acknowledges support from the ARO grant number W911NF-20-1-0163, and the SNSF project 200021-212899.
§ DATA AVAILABILITY
The data reported in this paper are archived on a Zenodo data repository at https://doi.org/10.5281/zenodo.8105397
§.§ Device fabrication and experiment setup
The device was fabricated on a Ge/SiGe heterostructure featuring a strained Ge quantum well positioned 55 nm below the semiconductor-dielectric interface, as described in <cit.>. The fabrication started by defining ohmic contacts, which were made by electron beam lithography, etching of the native oxide with buffered HF, and electron beam deposition of 30 nm of Al. An insulating layer of 7 nm Al2O3 was grown with atomic layer deposition, also annealing the device and diffusing the aluminum into the heterostructure during the process. Subsequently, the screening gates (3/17 nm Ti/Pd), plunger gates (3/27 nm Ti/Pd), and barrier gates (3/37 nm Ti/Pd) were made in three metalization layers, which are all separated by 5 nm thick layers of Al2O3. Note that for easing the lift-off of the compact barrier gates, we made the barrier gates in two steps, in which the barrier gates were distributed in two lithography/evaporation/lift-off processes without a Al2O3 layer in between.
The measurement was performed in a Oxford Instruments Triton dilution refrigerator with a nominal base temperature of 6 mK. The device was mounted on a custom-made sample PCB. DC voltages from homebuilt SPI DAC modules and pulses from a Keysight M3202A AWG are combined using on-PCB bias-tees. RF reflectometry for charge sensing was done using SPI IQ-demodulation modules and on-PCB LC tank circuits. The demodulated signals were recorded by a Keysight M3102A digitizer.
§.§ Single-hole regime of the 4×2 array
The charge state tunability of the 4×2 ladder is displayed in Fig. <ref>, where we show charge stability diagrams for all dot pairs down to the single-hole regime. The area on the top right corner of the plots corresponds to the zero-charge state. The effect of gate voltage crosstalk is compensated using virtual gates P^'. All ten plots are obtained using charge sensing using the bottom right and bottom left sensors.
Additionally, in Fig. <ref>, we show global charge state control of the full 4×2 array by sweeping two virtual gates, corresponding to the top and bottom channel energies (P_T and P_B, respectively). Every vertical or horizontal addition line reflects a single charge being added to either the top or the bottom channel. Lines are spaced apart by the long-range Coulomb interaction. The starting charge occupation for the Coulomb drag experiment corresponds to the top left of this plot, with 4 charges in the top channel and none in the bottom.
§.§ Lever arm measurement
The conversion between a virtual gate voltage P^'_i and electrochemical potential ϵ_i is described by δϵ_i=L_iδ P^'_i, where L_i is the lever arm for dot i. The lever arms can be characterized using photon-assisted tunneling (PAT) <cit.>. In Fig. <ref>(a), the signal is fitted to hf = √(δϵ_3^2+4t_37^2). From the fit a lever arm L_3 = 117 µeV/mV is extracted. In addition, the ratio between two lever arms can be determined from the slope, S, of an inter-dot charge transition line based on the fact that V_ij=V_ji. For instance, in Fig. <ref>(b), V_34=L_3 H = V_43 = L_4 W. Therefore, S = H/W = L_4/L_3. So, L_4 can be estimated from L_3 and S. We obtain L_4=117 µeV/mV with L_3=117 µeV/mV and S=1.0. Similarly, based on PAT measurements and inter-dot slopes, all lever arms are estimated. The results are summarized in table <ref>. All lever arms have similar value ∼ 110 µeV/mV with a standard deviation of 4 µeV/mV.
§.§ Inter-channel Coulomb interaction measurement
Figure <ref>(a)-(d) shows the measurements of inter-channel Coulomb interactions, which are responsible for the excitonic Coulomb drag effect. As in Figure <ref>(a), the Coulomb interactions are characterized by finding the sizes of the anti-crossings and converting them into energies through lever arms. From Fig. <ref>(a)-(d) we obtain V_15=220 µeV, V_26=260 µeV, V_37=315 µeV, and V_48=213 µeV.
§.§ Tunnel coupling control
Figure <ref> shows control of all nearest-neighbour tunnel couplings t_ij using the corresponding virtual barrier gates B^'_ij. The tunnel coupling dependency is fitted by an exponential function A exp(-γ_ijB^'_ij) + C, from which the barrier lever arm γ_ij is extracted. The γ_ij are summarized in table <ref>. Roughly, the barrier lever arms can be separated into two groups, corresponding to the two steps in which the barriers were fabricated. Notably, the barrier gates patterned in the first fabrication step display a stronger lever arm than those patterned in the second step, despite the absence of an ALD layer between the two barrier metalization layers. The reason for this discrepancy requires further investigation; it might be caused by the device design or by residual resist under the second barrier gate layer. Nonetheless, all barriers display a reasonable level of tunnel coupling control, which allows us to tune the tunnel couplings to the values required to perform the excitonic Coulomb drag experiment.
§.§ Tunnel rate measurement
Tunnel coupling extraction via fitting of the inter-dot transition signals allows us to reliably obtain tunnel coupling values of the order of tens of µeV, larger than or comparable to the electron temperature. As t_ij becomes much smaller than the electron temperature, this fit becomes unreliable. When the hopping between channels is suppressed, we estimate the inter-channel t_ij from the inter-channel tunnel rates Γ_ij as described in the main text. Figure <ref> (a)-(d) show the tunnel rate measurements, from which we obtain Γ_15=208 kHz, Γ_26=40 kHz, Γ_37=118 kHz, and Γ_48=81 kHz. Since we estimate T_2 ≥ 0.3 ns (lower limit) from the linewidth of the PAT in Fig. <ref>(a), by using Eq. <ref> we can then estimate t_15≤0.07 µeV, t_26≤0.03 µeV, t_37≤0.06 µeV, and t_48≤0.05 µeV in the target regime where the inter-channel hopping is suppressed.
§.§ Automated calibration routine
Slow changes in the electrostatic environment of the device lead to inevitable drift of dot electrochemical potentials. To compensate for this low frequency drift, we implement a fast automated calibration routine to keep the electrochemical potentials fixed relative to the Fermi level. Our target is to set the level of dot i with an offset P^'_i, target from the Fermi level. In this experiment P^'_1, target to P^'_8, target are initially [2, 2, 2, 2, -4, -4, -4, -4] mV, which places the device in the ( [ 1 1 1 1; 0 0 0 0 ]) charge state. For the first instance of the calibration, we manually tune the device to a baseline DC voltage V_base close to the target condition (within a tolerance of a few mV). The calibration routine starts with optimizing the sensor signals, which is done by scanning sensor plunger gates and locating the optimal sensing positions, as shown in Fig. <ref>(a) and (f). The voltage drift of dot i is measured by scanning P^'_i centered at V_base + P^'_i, target and fitting the signal to a charge addition line to locate the Fermi level, as shown in Fig. <ref> (b)-(e) and (g)-(j). V_base is subsequently shifted by the deviation of the addition lines from the centers of the scans to compensate for the voltage drift. The entire automated calibration routine takes about 10 seconds and offers a valuable tool for the efficient adjustment of dot potentials in multi-dot devices.
§.§ Exciton tunnel coupling
The tunneling of excitons entails a co-tunneling process of two charges in the ladder array. Here we take the tunneling between ( [ 1 0 1 1; 0 1 0 0 ]) and ( [ 1 1 0 1; 0 0 1 0 ]) as an example. The relevant charge states are 0 =( [ 1 0 1 1; 0 1 0 0 ]), 1 =( [ 1 1 0 1; 0 0 1 0 ]), 2 =( [ 1 1 0 1; 0 1 0 0 ]), 3 =( [ 1 0 1 1; 0 0 1 0 ]), 4 =( [ 1 1 1 1; 0 0 0 0 ]), and 5 =( [ 1 0 0 1; 0 1 1 0 ]). The Hamiltonian in this basis is
H=
[ E_0 0 -t_23 -t_67 -t_26 -t_37; 0 E_1 -t_67 -t_23 t_37 t_26; -t_23 -t_67 E_2 0 0 0; -t_67 -t_23 0 E_3 0 0; -t_26 t_37 0 0 E_4 0; -t_37 t_26 0 0 0 E_5; ]
where
E_0 = -ϵ_3-ϵ_6+V+2V^'-ϵ_1-ϵ_4
E_1 = -ϵ_2-ϵ_7+V+2V^'-ϵ_1-ϵ_4
E_2 = -ϵ_2-ϵ_6+2V+V^'-ϵ_1-ϵ_4
E_3 = -ϵ_3-ϵ_7+2V+V^'-ϵ_1-ϵ_4
E_4 = -ϵ_2-ϵ_3+3V-ϵ_1-ϵ_4
E_5 = -ϵ_6-ϵ_7+V+2V^'-ϵ_1-ϵ_4
Here V is the nearest-neighbor Coulomb interaction and V^' is the diagonal Coulomb interaction (for simplicity we assume homogeneous V and V^' in the ladder array). Near a symmetric exciton tunneling condition in which ϵ_2≈ϵ_3=ϵ+Δ, ϵ_6≈ϵ_7=ϵ, and (V-V^'), Δ≫ t_23, t_67, t_26, t_37, Eq. <ref> becomes
E_0 = -2ϵ-Δ+V+2V^'+δ E_0
E_1 = -2ϵ-Δ+V+2V^'+δ E_1
E_2 = -2ϵ-Δ+2V+V^'+δ E_2
E_3 = -2ϵ-Δ+2V+V^'+δ E_3
E_4 = -2ϵ-2Δ+3V+δ E_4
E_5 = -2ϵ+V+2V^'+δ E_5
where δ E_i is a small perturbation of E_i near the symmetric exciton tunneling condition. We then express Eq. <ref> in the eigenbasis of the first-order perturbation H^'≃ U^†HU in which
U=
[ 1 0 -t_23/V-V^' -t_67/V-V^' -t_26/2V-2V^'-Δ -t_37/Δ; 0 1 -t_67/V-V^' -t_23/V-V^' t_37/2V-2V^'-Δ t_26/Δ; t_23/V-V^' t_67/V-V^' 1 0 0 0; t_67/V-V^' t_23/V-V^' 0 1 0 0; t_26/2V-2V^'-Δ -t_37/2V-2V^'-Δ 0 0 1 0; t_37/Δ -t_26/Δ 0 0 0 1; ]
Neglecting terms of more than second order in t_ij/(V-V^'), t_ij/(2V-2V^'-Δ) or t_ij/Δ, the effective Hamiltonian H^' for the perturbed states 0' and 1' becomes
H^'=
[ E_0' -t_co; -t_co E_1'; ]
where E_0' = E_0-t_23^2/(V-V^')-t_67^2/(V-V^')-t_26^2/(2V-2V^'-Δ)-t_37^2/Δ, E_1' = E_1-t_23^2/(V-V^')-t_67^2/(V-V^')-t_37^2/(2V-2V^'-Δ)-t_26^2/Δ, and t_co = 2t_23t_67/(V-V^')-t_26t_37/(2V-2V^'-Δ)-t_26t_37/Δ <cit.>. From Eq. <ref> we see that the tunneling of exciton states is determined by t_co, which has a term proportional to the product of intra-channel tunnel couplings and a term proportional to the product of inter-channel tunnel couplings. In the present experiment, the former is larger than the latter by at least three orders of magnitude. Therefore, t_co is predominantly caused by the co-tunneling of charges in the intra-channel direction.
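The perturbative expression for t_co can be checked numerically by diagonalizing the 6x6 Hamiltonian above at the symmetric point and comparing the splitting of the two lowest (exciton-like) states with 2|t_co|. The parameter values in this sketch are illustrative only and are not the device values.

import numpy as np

# Illustrative parameters (same energy units throughout), chosen such that V - V', Delta >> t_ij
V_minus_Vp, Delta = 150.0, 40.0
t23, t67, t26, t37 = 10.0, 10.0, 0.05, 0.05

# Diagonal energies measured from E_0 at the symmetric exciton tunneling point
diag = np.array([0.0, 0.0, V_minus_Vp, V_minus_Vp, 2.0 * V_minus_Vp - Delta, Delta])
H = np.diag(diag)
H[0, 2:] = H[2:, 0] = [-t23, -t67, -t26, -t37]
H[1, 2:] = H[2:, 1] = [-t67, -t23, t37, t26]

t_co = 2.0 * t23 * t67 / V_minus_Vp - t26 * t37 / (2.0 * V_minus_Vp - Delta) - t26 * t37 / Delta

evals = np.linalg.eigvalsh(H)
print("splitting of the lowest doublet:", evals[1] - evals[0])
print("2*|t_co| from perturbation theory:", 2.0 * abs(t_co))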
§.§ Adiabatic exciton transfer probability
We estimate the probability that an exciton adiabatically transitions between neighbouring sites in the quantum dot ladder. When this transition does not occur adiabatically, the exciton initially stays where it was. Afterwards, either the electron or the hole may tunnel, leaving the other particle behind, and eventually the entire exciton may still transition, but at least for a brief moment in time the intended exciton transport does not take place. For instance, the transition from ( [ 1 0 1 1; 0 1 0 0 ]) to ( [ 1 1 0 1; 0 0 1 0 ]) might instead end with ( [ 1 0 1 1; 0 1 0 0 ]) (the pair is not transferred) or ( [ 1 1 0 1; 0 1 0 0 ]) (a hole lags behind). Using the Landau-Zener formula <cit.>, we obtain
P_dia = exp(-2πt_co^2/(ħ V_E))
t_co = 2t^2/(V-V^')
V_E = Δ E/Δ T_r
where P_dia is the diabatic transition probability for the transition from ( [ 1 0 1 1; 0 1 0 0 ]) to ( [ 1 1 0 1; 0 0 1 0 ]), t_co is the intra-channel co-tunneling of the electron-hole pair, V_E is the energy level velocity, t is the intra-channel tunnel coupling, V is the inter-channel Coulomb interaction, V^' is the diagonal Coulomb interaction, Δ E is the energy difference between the ( [ 1 0 1 1; 0 1 0 0 ]) and ( [ 1 1 0 1; 0 0 1 0 ]) charge states, and Δ T_r is the rise time of the pulse. Note that we do not include the inter-channel co-tunneling processes in the analysis because they are at least three orders of magnitude smaller than that of the intra-channel co-tunneling process, as discussed before. Entering the experimental parameters, we obtain P_dia≃0.8%. Therefore, during Coulomb drag the inter-channel exciton is transported adiabatically with an estimated fidelity of 99.2%.
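A small numerical sketch of this estimate is given below; the input values are placeholders chosen to land in the sub-percent regime and are not the experimental parameters.

import numpy as np

HBAR = 0.658  # reduced Planck constant in ueV * ns

def diabatic_probability(t, V, Vp, dE, rise_time):
    # Landau-Zener probability that the electron-hole pair fails to follow the ramp adiabatically
    t_co = 2.0 * t**2 / (V - Vp)   # intra-channel co-tunneling amplitude of the pair
    v_E = dE / rise_time           # energy-level velocity of the detuning ramp
    return np.exp(-2.0 * np.pi * t_co**2 / (HBAR * v_E))

# Placeholder numbers (ueV and ns), for illustration only
P_dia = diabatic_probability(t=10.0, V=200.0, Vp=50.0, dE=670.0, rise_time=200.0)
print(f"P_dia = {P_dia:.2%}, adiabatic transfer fidelity = {1.0 - P_dia:.2%}")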
§.§ Coulomb drag data processing
In Fig. <ref> the raw data of the BR sensor signal is inverted such that an increasing (decreasing) signal corresponds to a positive (negative) charge. In addition, residual crosstalk from the bottom virtual gates to the sensor signals leads to a small gradient along E (y axis) in phase I of Fig. <ref>(a) and (b). We remove this residual crosstalk by fitting the signals in phase I to a linear background signal and subtracting this background from the data of the entire panel. The scripts for data processing can be found in the data repository.
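The crosstalk removal amounts to a few lines of array manipulation; the sketch below assumes the sensor map is stored as signal[i, j] with the detuning E along the first axis and the pulse step along the second, and the choice of phase-I columns is left to the caller.

import numpy as np

def remove_residual_crosstalk(E, signal, phase1_cols):
    # signal[i, j]: sensor value at detuning E[i] and pulse step j.
    # Fit the phase-I part of the map to a linear function of E and subtract it everywhere.
    phase1_avg = signal[:, phase1_cols].mean(axis=1)
    slope, offset = np.polyfit(E, phase1_avg, deg=1)
    return signal - (slope * E + offset)[:, None]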
§.§ Numerical simulation of exciton transport
We perform numerical simulations to compare with the measured exciton transport data in Fig. <ref>(a) and (b). To this end, we compute the ground state charge configuration of a classical Fermi-Hubbard Hamiltonian:
H = ∑_iϵ_in_i + U∑_in_i(n_i-1)/2 + U^'∑_⟨ i,j ⟩n_in_j
+ ∑_i∈α,j∈βV_ijn_in_j + V^'∑_i∈α,j∈βn_in_j
Compared to Eq. <ref>, we have set t = 0 to facilitate the computation. We further include the electrochemical potentials {ϵ_i} and account for the experimentally observed differences in inter-channel Coulomb repulsion V_ij.
Due to the absence of tunnel coupling terms, this simple Hamiltonian is already diagonal. Finding its ground state charge configuration becomes therefore a straight-forward energy minimization problem. Since U ≫ V_ij, U^', double occupations are always high in energy and it suffices to input homogeneous charging energies U ∼ 2 meV as extracted from charge stability diagrams in Fig. <ref>. To capture the observed variations of the exciton transport windows, it is necessary to input the measured inter-channel Coulomb interaction parameters V_ij as specified in the main text (see section IV, neglecting diagonal interactions). Furthermore, for the intra-channel Coulomb interaction, we assume homogeneous interaction terms U^'∼ 400 µeV.
Figure <ref> shows the simulated charge ground state variation as a function of the electrochemical potentials {ϵ_i}. These are varied in the same way as in the experiment: The bottom (drag) channel detuning E is swept from 500 µeV to past the Fermi energy, while the individual top channel potentials are raised and lowered by 670 µeV, corresponding to the charge shuttling sequence specified in section IV. The charge states are converted to charge sensor signal by inputting the sensor-to-dot distances {r_i} and assuming 1/r^2 decay of Coulomb interactions and a linear response of the sensors. The numerical simulations show good agreement with the measured data. We point out that for each dot pair, the exciton transport window is equal to the inter-channel Coulomb interaction V_ij, as highlighted in the main text. The faster vanishing response of the measured data as opposed to the numerical simulations can be explained by a decay of Coulomb interactions faster than 1/r^2, as previously observed in other work <cit.>.
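A compact sketch of such a simulation for the 4x2 ladder is given below (t = 0, double occupancy excluded). The interaction and potential values are placeholders; in the actual comparison the measured V_ij quoted in the main text are used.

import itertools
import numpy as np

def ground_state(eps_top, eps_bot, U_intra, V_vert):
    # Brute-force minimization of the classical (t = 0) Hamiltonian over all 2^8 charge
    # configurations of the 4x2 ladder; double occupancy is excluded since U dominates.
    best_energy, best_occ = np.inf, None
    for occ in itertools.product((0, 1), repeat=8):
        top, bot = occ[:4], occ[4:]
        E = np.dot(eps_top, top) + np.dot(eps_bot, bot)
        E += U_intra * sum(top[i] * top[i + 1] + bot[i] * bot[i + 1] for i in range(3))
        E += sum(V_vert[i] * top[i] * bot[i] for i in range(4))
        if E < best_energy:
            best_energy, best_occ = E, occ
    return best_energy, best_occ

# Placeholder potentials and interactions (all in the same energy units)
energy, occ = ground_state(
    eps_top=[-2000.0] * 4,               # deep top-channel levels -> dots occupied
    eps_bot=[300.0] * 4,                 # bottom (drag) channel above the Fermi level -> empty
    U_intra=400.0,
    V_vert=[120.0, 100.0, 110.0, 130.0],
)
print("ground state (top | bottom):", occ[:4], "|", occ[4:])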
[Cirac and Zoller(2012)] J. I. Cirac and P. Zoller, Goals and opportunities in quantum simulation, Nature Physics 8, 264 (2012). https://doi.org/10.1038/nphys2275
[Georgescu et al.(2014)] I. M. Georgescu, S. Ashhab, and F. Nori, Quantum simulation, Reviews of Modern Physics 86, 153 (2014). https://doi.org/10.1103/RevModPhys.86.153
[Barthelemy and Vandersypen(2013)] P. Barthelemy and L. M. K. Vandersypen, Quantum Dot Systems: A versatile platform for quantum simulations, Annalen der Physik 525, 808 (2013). https://doi.org/10.1002/andp.201300124
[Manousakis(2002)] E. Manousakis, A quantum-dot array as model for copper-oxide superconductors: A dedicated quantum simulator for the many-fermion problem, Journal of Low Temperature Physics 126, 1501 (2002). https://doi.org/10.1023/A:1014295416763
[Byrnes et al.(2008)] T. Byrnes, N. Y. Kim, K. Kusudo, and Y. Yamamoto, Quantum simulation of Fermi-Hubbard models in semiconductor quantum-dot arrays, Physical Review B 78, 075320 (2008). https://doi.org/10.1103/PhysRevB.78.075320
[Yang et al.(2011)] S. Yang, X. Wang, and S. Das Sarma, Generic Hubbard model description of semiconductor quantum-dot spin qubits, Physical Review B 83, 161301(R) (2011). https://doi.org/10.1103/PhysRevB.83.161301
[Hensgens et al.(2017)] T. Hensgens, T. Fujita, L. Janssen, X. Li, C. J. Van Diepen, C. Reichl, W. Wegscheider, S. Das Sarma, and L. M. K. Vandersypen, Quantum simulation of a Fermi-Hubbard model using a semiconductor quantum dot array, Nature 548, 70 (2017). https://doi.org/10.1038/nature23022
[Singha et al.(2011)] A. Singha, M. Gibertini, B. Karmakar, S. Yuan, M. Polini, G. Vignale, M. I. Katsnelson, A. Pinczuk, L. N. Pfeiffer, K. W. West, and V. Pellegrini, Two-dimensional Mott-Hubbard electrons in an artificial honeycomb lattice, Science 332, 1176 (2011). https://doi.org/10.1126/science.1204333
[Salfi et al.(2016)] J. Salfi, J. A. Mol, R. Rahman, G. Klimeck, M. Y. Simmons, L. C. L. Hollenberg, and S. Rogge, Quantum simulation of the Hubbard model with dopant atoms in silicon, Nature Communications 7, 11342 (2016). https://doi.org/10.1038/ncomms11342
[Dehollain et al.(2020)] J. P. Dehollain, U. Mukhopadhyay, V. P. Michal, Y. Wang, B. Wunsch, C. Reichl, W. Wegscheider, M. S. Rudner, E. Demler, and L. M. K. Vandersypen, Nagaoka ferromagnetism observed in a quantum dot plaquette, Nature 579, 528 (2020). https://doi.org/10.1038/s41586-020-2051-0
[van Diepen et al.(2021)] C. J. van Diepen, T. K. Hsiao, U. Mukhopadhyay, C. Reichl, W. Wegscheider, and L. M. K. Vandersypen, Quantum Simulation of Antiferromagnetic Heisenberg Chain with Gate-Defined Quantum Dots, Physical Review X 11, 041025 (2021). https://doi.org/10.1103/PhysRevX.11.041025
[Wang et al.(2023)] C.-A. Wang, C. Déprez, H. Tidjani, W. I. L. Lawrie, N. W. Hendrickx, A. Sammak, G. Scappucci, and M. Veldhorst, Probing resonating valence bonds on a programmable germanium quantum simulator, npj Quantum Information 9, 58 (2023). https://doi.org/10.1038/s41534-023-00727-3
[Kiczynski et al.(2022)] M. Kiczynski, S. K. Gorman, H. Geng, M. B. Donnelly, Y. Chung, Y. He, J. G. Keizer, and M. Y. Simmons, Engineering topological states in atom-based semiconductor quantum dots, Nature 606, 694 (2022). https://doi.org/10.1038/s41586-022-04706-0
[Frenkel(1931)] J. Frenkel, On the Transformation of light into Heat in Solids. I, Physical Review 37, 17 (1931). https://doi.org/10.1103/PhysRev.37.17
[French et al.(2010)] R. H. French, V. A. Parsegian, R. Podgornik, R. F. Rajter, A. Jagota, J. Luo, D. Asthagiri, M. K. Chaudhury, Y.-m. Chiang, S. Granick, S. Kalinin, M. Kardar, R. Kjellander, D. C. Langreth, J. Lewis, S. Lustig, D. Wesolowski, J. S. Wettlaufer, W.-Y. Ching, M. Finnis, F. Houlihan, O. A. von Lilienfeld, C. J. van Oss, and T. Zemb, Long range interactions in nanoscale science, Reviews of Modern Physics 82, 1887 (2010). https://doi.org/10.1103/RevModPhys.82.1887
[Knörzer et al.(2022)] J. Knörzer, C. J. van Diepen, T.-K. Hsiao, G. Giedke, U. Mukhopadhyay, C. Reichl, W. Wegscheider, J. I. Cirac, and L. M. K. Vandersypen, Long-range electron-electron interactions in quantum dot systems and applications in quantum chemistry, Physical Review Research 4, 033043 (2022). https://doi.org/10.1103/PhysRevResearch.4.033043
[Wigner(1934)] E. Wigner, On the Interaction of Electrons in Metals, Physical Review 46, 1002 (1934). https://doi.org/10.1103/PhysRev.46.1002
[Vu and Das Sarma(2020)] D. Vu and S. Das Sarma, Collective ground states in small lattices of coupled quantum dots, Physical Review Research 2, 023060 (2020). https://doi.org/10.1103/PhysRevResearch.2.023060
[Jérome et al.(1967)] D. Jérome, T. M. Rice, and W. Kohn, Excitonic Insulator, Physical Review 158, 462 (1967). https://doi.org/10.1103/PhysRev.158.462
[Kohn and Sherrington(1970)] W. Kohn and D. Sherrington, Two Kinds of Bosons and Bose Condensates, Reviews of Modern Physics 42, 1 (1970). https://doi.org/10.1103/RevModPhys.42.1
[Bloch et al.(2008)] I. Bloch, J. Dalibard, and W. Zwerger, Many-body physics with ultracold gases, Reviews of Modern Physics 80, 885 (2008). https://doi.org/10.1103/RevModPhys.80.885
[Argüello-Luengo et al.(2019)] J. Argüello-Luengo, A. González-Tudela, T. Shi, P. Zoller, and J. I. Cirac, Analogue quantum chemistry simulation, Nature 574, 215 (2019). https://doi.org/10.1038/s41586-019-1614-4
[Chomaz et al.(2023)] L. Chomaz, I. Ferrier-Barbut, F. Ferlaino, B. Laburthe-Tolra, B. L. Lev, and T. Pfau, Dipolar physics: a review of experiments with magnetic quantum gases, Reports on Progress in Physics 86, 026401 (2023). https://doi.org/10.1088/1361-6633/aca814
[Browaeys and Lahaye(2020)] A. Browaeys and T. Lahaye, Many-body physics with individually controlled Rydberg atoms, Nature Physics 16, 132 (2020). https://doi.org/10.1038/s41567-019-0733-z
[Bohn et al.(2017)] J. L. Bohn, A. M. Rey, and J. Ye, Cold molecules: Progress in quantum engineering of chemistry and quantum matter, Science 357, 1002 (2017). https://doi.org/10.1126/science.aam6299
[Nandi et al.(2012)] D. Nandi, A. D. K. Finck, J. P. Eisenstein, L. N. Pfeiffer, and K. W. West, Exciton condensation and perfect Coulomb drag, Nature 488, 481 (2012). https://doi.org/10.1038/nature11302
[Gramila et al.(1991)Gramila, Eisenstein, MacDonald,
Pfeiffer, and West]Gramila1991
author author T. J. Gramila, author J. P. Eisenstein, author A. H. MacDonald, author L. N. Pfeiffer, and author K. W. West, title title Mutual friction between
parallel two-dimensional electron systems, https://doi.org/10.1103/PhysRevLett.66.1216 journal journal Physical Review Letters volume 66, pages 1216 (year 1991)NoStop
[Yamamoto et al.(2006)Yamamoto, Stopa, Tokura, Hirayama, and Tarucha]Yamamoto2006
author author M. Yamamoto, author M. Stopa,
author Y. Tokura, author Y. Hirayama, and author S. Tarucha, title
title Negative Coulomb Drag in a One-Dimensional Wire, https://doi.org/10.1126/science.1126601 journal
journal Science volume 313, pages 204 (year 2006)NoStop
[Narozhny and Levchenko(2016)]Narozhny2016
author author B. Narozhny and author A. Levchenko, title title Coulomb drag, https://doi.org/10.1103/RevModPhys.88.025003 journal
journal Reviews of Modern Physics volume
88, pages 025003 (year 2016)NoStop
[Kellogg et al.(2002)Kellogg, Spielman, Eisenstein,
Pfeiffer, and West]Kellogg2002
author author M. Kellogg, author I. B. Spielman, author J. P. Eisenstein, author L. N. Pfeiffer, and author K. W. West, title title Observation of Quantized
Hall Drag in a Strongly Correlated Bilayer Electron System, https://doi.org/10.1103/PhysRevLett.88.126804 journal
journal Physical Review Letters volume
88, pages 126804 (year 2002)NoStop
[Tutuc et al.(2004)Tutuc,
Shayegan, and Huse]Tutuc2004
author author E. Tutuc, author M. Shayegan, and author D. A. Huse, title title Counterflow Measurements in Strongly
Correlated GaAs Hole Bilayers: Evidence for Electron-Hole Pairing, https://doi.org/10.1103/PhysRevLett.93.036802 journal
journal Physical Review Letters volume
93, pages 036802 (year 2004)NoStop
[Laroche et al.(2011)Laroche, Gervais, Lilly, and Reno]Laroche2011
author author D. Laroche, author G. Gervais,
author M. P. Lilly, and author J. L. Reno, title title Positive and negative Coulomb drag in vertically
integrated one-dimensional quantum wires, https://doi.org/10.1038/nnano.2011.182 journal journal Nature Nanotechnology volume 6, pages 793 (year 2011)NoStop
[Gorbachev et al.(2012)Gorbachev, Geim, Katsnelson, Novoselov, Tudorovskiy, Grigorieva,
MacDonald, Morozov, Watanabe,
Taniguchi, and Ponomarenko]Gorbachev2012
author author R. V. Gorbachev, author A. K. Geim,
author M. I. Katsnelson,
author K. S. Novoselov, author T. Tudorovskiy, author
I. V. Grigorieva, author
A. H. MacDonald, author
S. V. Morozov, author
K. Watanabe, author
T. Taniguchi, and author
L. A. Ponomarenko, title
title Strong Coulomb drag and broken symmetry in double-layer
graphene, https://doi.org/10.1038/nphys2441 journal
journal Nature Physics volume 8, pages 896 (year 2012)NoStop
[Li et al.(2017)Li,
Taniguchi, Watanabe, Hone, and Dean]Li2017
author author J. I. A. Li, author T. Taniguchi, author K. Watanabe,
author J. Hone, and author C. R. Dean, title
title Excitonic superfluid phase in double
bilayer graphene, https://doi.org/10.1038/nphys4140 journal journal Nature Physics volume
13, pages 751 (year 2017)NoStop
[Liu et al.(2017)Liu,
Watanabe, Taniguchi, Halperin, and Kim]Liu2017
author author X. Liu, author K. Watanabe,
author T. Taniguchi, author B. I. Halperin, and author P. Kim, title
title Quantum Hall drag of exciton condensate in graphene, https://doi.org/10.1038/nphys4116 journal journal Nature Physics volume 13, pages 746 (year 2017)NoStop
[Pandey et al.(2021)Pandey,
Alvarez, and Dagotto]Pandey2021
author author B. Pandey, author G. Alvarez, and author E. Dagotto, title title Excitonic wave-packet evolution in a
two-orbital Hubbard model chain: A real-time real-space study, https://doi.org/10.1103/PhysRevB.104.L220302 journal
journal Physical Review B volume
104, pages L220302 (year 2021)NoStop
[Kaneko et al.(2012)Kaneko,
Seki, and Ohta]Kaneko2012
author author T. Kaneko, author K. Seki, and author Y. Ohta, title title Excitonic insulator state in the two-orbital
Hubbard model: Variational cluster approach, https://doi.org/10.1103/PhysRevB.85.165135 journal journal Physical Review B volume 85, pages 165135 (year 2012)NoStop
[Vu and Sarma(2023)]vu2023excitonic
author author D. Vu and author S. D. Sarma, @noop title Excitonic phases in a spatially separated
electron-hole ladder model (year 2023), https://arxiv.org/abs/2305.16305 arXiv:2305.16305 [cond-mat.mes-hall]
NoStop
[Averin et al.(1991)Averin,
Korotkov, and Nazarov]Averin1991
author author D. V. Averin, author A. N. Korotkov, and author Y. V. Nazarov, title title Transport of
electron-hole pairs in arrays of small tunnel junctions, https://doi.org/10.1103/PhysRevLett.66.2818 journal journal Physical Review Letters volume 66, pages 2818 (year 1991)NoStop
[Matters et al.(1997)Matters, Versluys, and Mooij]Matters1997
author author M. Matters, author J. J. Versluys, and author J. E. Mooij, title title Electron-Hole Transport in
Capacitively Coupled 1D Arrays of Small Tunnel Junctions, https://doi.org/10.1103/PhysRevLett.78.2469 journal journal Physical Review Letters volume 78, pages 2469 (year 1997)NoStop
[Shimada and Delsing(2000)]Shimada2000
author author H. Shimada and author P. Delsing, title title Current Mirror Effect
and Correlated Cooper-Pair Transport in Coupled Arrays of Small Josephson
Junctions, https://doi.org/10.1103/PhysRevLett.85.3253 journal journal Physical Review Letters volume 85, pages 3253 (year
2000)NoStop
[Shinkai et al.(2009)Shinkai, Hayashi, Ota, Muraki, and Fujisawa]Shinkai2009
author author G. Shinkai, author T. Hayashi,
author T. Ota, author
K. Muraki, and author
T. Fujisawa, title title Bidirectional Current Drag Induced by Two-Electron Cotunneling in
Coupled Double Quantum Dots, https://doi.org/10.1143/APEX.2.081101 journal journal Applied Physics Express volume 2, pages 081101 (year 2009)NoStop
[Zajac et al.(2016)Zajac,
Hazard, Mi, Nielsen, and Petta]Zajac2016
author author D. M. Zajac, author T. M. Hazard,
author X. Mi, author
E. Nielsen, and author
J. R. Petta, title title Scalable Gate Architecture for a One-Dimensional Array of
Semiconductor Spin Qubits, https://doi.org/10.1103/PhysRevApplied.6.054013 journal
journal Physical Review Applied volume
6, pages 054013 (year 2016)NoStop
[Philips et al.(2022)Philips, Ma̧dzik, Amitonov,
de Snoo, Russ, Kalhor,
Volk, Lawrie, Brousse,
Tryputen, Wuetz, Sammak,
Veldhorst, Scappucci, and Vandersypen]Philips2022
author author S. G. J. Philips, author M. T. Ma̧dzik, author S. V. Amitonov, author S. L. de Snoo, author M. Russ, author N. Kalhor,
author C. Volk, author
W. I. L. Lawrie, author
D. Brousse, author L. Tryputen, author B. P. Wuetz, author A. Sammak, author M. Veldhorst,
author G. Scappucci, and author L. M. K. Vandersypen, title title Universal control of a six-qubit
quantum processor in silicon, https://doi.org/10.1038/s41586-022-05117-x journal journal Nature volume 609, pages
919 (year 2022)NoStop
[Ha et al.(2022)Ha,
Ha, Choi, Tang, Schmitz, Levendorf, Lee, Chappell, Adams, Hulbert, Acuna, Noah, Matten, Jura,
Wright, Rakher, and Borselli]Ha2022
author author W. Ha, author S. D. Ha, author M. D. Choi, author
Y. Tang, author A. E. Schmitz, author M. P. Levendorf, author K. Lee, author J. M. Chappell,
author T. S. Adams, author D. R. Hulbert, author
E. Acuna, author R. S. Noah, author J. W. Matten, author M. P. Jura, author J. A. Wright, author M. T. Rakher, and author M. G. Borselli, title title A Flexible Design Platform for
Si/SiGe Exchange-Only Qubits with Low Disorder, https://doi.org/10.1021/acs.nanolett.1c03026 journal
journal Nano Letters volume 22, pages 1443 (year 2022)NoStop
[Mortemousque et al.(2021)Mortemousque, Jadot, Chanrion,
Thiney, Bäuerle, Ludwig, Wieck, Urdampilleta, and Meunier]Mortemousque2021
author author P.-A. Mortemousque, author B. Jadot,
author E. Chanrion, author V. Thiney, author
C. Bäuerle, author
A. Ludwig, author A. D. Wieck, author M. Urdampilleta, and author T. Meunier, title title
Enhanced Spin Coherence while Displacing Electron in a Two-Dimensional
Array of Quantum Dots, https://doi.org/10.1103/PRXQuantum.2.030331 journal journal PRX Quantum volume 2, pages
030331 (year 2021)NoStop
[Chanrion et al.(2020)Chanrion, Niegemann, Bertrand,
Spence, Jadot, Li,
Mortemousque, Hutin, Maurand,
Jehl, Sanquer, De
Franceschi, Bäuerle, Balestro,
Niquet, Vinet, Meunier, and Urdampilleta]Chanrion2020
author author E. Chanrion, author D. J. Niegemann, author B. Bertrand,
author C. Spence, author B. Jadot, author
J. Li, author P.-A. Mortemousque, author L. Hutin, author R. Maurand, author X. Jehl, author M. Sanquer, author S. De
Franceschi, author C. Bäuerle, author F. Balestro, author Y.-M. Niquet, author M. Vinet,
author T. Meunier, and author M. Urdampilleta, title title Charge Detection in an Array of CMOS Quantum
Dots, https://doi.org/10.1103/PhysRevApplied.14.024066 journal journal Physical Review Applied volume 14, pages 024066 (year
2020)NoStop
[Unseld et al.(2023)Unseld,
Meyer, Ma̧dzik, Borsoi,
de Snoo, Amitonov, Sammak,
Scappucci, Veldhorst, and Vandersypen]Unseld2023
author author F. K. Unseld, author M. Meyer,
author M. T. Ma̧dzik,
author F. Borsoi, author S. L. de Snoo, author
S. V. Amitonov, author
A. Sammak, author G. Scappucci, author M. Veldhorst, and author L. M. K. Vandersypen, title title A 2D quantum dot array in planar 28Si/SiGe, https://arxiv.org/abs/2305.19681 (year 2023), https://arxiv.org/abs/2305.19681 arXiv:2305.19681 NoStop
[Hendrickx et al.(2021)] N. W. Hendrickx, W. I. L. Lawrie, M. Russ, F. van Riggelen, S. L. de Snoo, R. N. Schouten, A. Sammak, G. Scappucci, and M. Veldhorst, A four-qubit germanium quantum processor, Nature 591, 580 (2021). https://doi.org/10.1038/s41586-021-03332-6
[Lodari et al.(2019)] M. Lodari, A. Tosato, D. Sabbagh, M. A. Schubert, G. Capellini, A. Sammak, M. Veldhorst, and G. Scappucci, Light effective hole mass in undoped Ge/SiGe quantum wells, Physical Review B 100, 041304(R) (2019). https://doi.org/10.1103/PhysRevB.100.041304
[Scappucci et al.(2021)] G. Scappucci, C. Kloeffel, F. A. Zwanenburg, D. Loss, M. Myronov, J. J. Zhang, S. De Franceschi, G. Katsaros, and M. Veldhorst, The germanium quantum information route, Nature Reviews Materials 6, 926 (2021). https://doi.org/10.1038/s41578-020-00262-z
[Borsoi et al.(2022)] F. Borsoi, N. W. Hendrickx, V. John, S. Motz, F. van Riggelen, A. Sammak, S. L. de Snoo, G. Scappucci, and M. Veldhorst, Shared control of a 16 semiconductor quantum dot crossbar array (2022), arXiv:2209.06609.
[Lodari et al.(2021)] M. Lodari, N. W. Hendrickx, W. I. L. Lawrie, T.-K. Hsiao, L. M. K. Vandersypen, A. Sammak, M. Veldhorst, and G. Scappucci, Low percolation density and charge noise with holes in germanium, Materials for Quantum Technology 1, 011002 (2021). https://doi.org/10.1088/2633-4356/abcd82
[van Diepen et al.(2018)] C. J. van Diepen, P. T. Eendebak, B. T. Buijtendorp, U. Mukhopadhyay, T. Fujita, C. Reichl, W. Wegscheider, and L. M. K. Vandersypen, Automated tuning of inter-dot tunnel coupling in double quantum dots, Applied Physics Letters 113, 033101 (2018). https://doi.org/10.1063/1.5031034
[Hsiao et al.(2020)] T.-K. Hsiao, C. J. van Diepen, U. Mukhopadhyay, C. Reichl, W. Wegscheider, and L. M. K. Vandersypen, Efficient Orthogonal Control of Tunnel Couplings in a Quantum Dot Array, Physical Review Applied 13, 054018 (2020). https://doi.org/10.1103/PhysRevApplied.13.054018
[Qiao et al.(2020)] H. Qiao, Y. P. Kandel, K. Deng, S. Fallahi, G. C. Gardner, M. J. Manfra, E. Barnes, and J. M. Nichol, Coherent Multispin Exchange Coupling in a Quantum-Dot Spin Chain, Physical Review X 10, 031006 (2020). https://doi.org/10.1103/PHYSREVX.10.031006
[Braakman et al.(2013)] F. R. Braakman, P. Barthelemy, C. Reichl, W. Wegscheider, and L. M. K. Vandersypen, Long-distance coherent coupling in a quantum dot array, Nature Nanotechnology 8, 432 (2013). https://doi.org/10.1038/nnano.2013.67
[Oosterkamp et al.(1998)] T. H. Oosterkamp, T. Fujisawa, W. G. Van Der Wiel, K. Ishibashi, R. V. Hijman, S. Tarucha, and L. P. Kouwenhoven, Microwave spectroscopy of a quantum-dot molecule, Nature 395, 873 (1998). https://doi.org/10.1038/27617
[Note1] We note that although t_15 is higher than the other inter-channel tunnel couplings, since electron-hole pair transport is a co-tunneling process and since t_26 remains below 1 µeV, the correlated hopping of an electron-hole pair across the channels is still three orders of magnitude smaller than the hopping along the channel direction.
[Note2] In the experiment we apply a global virtual gate voltage on the bottom channel and convert the global voltage to a global energy offset using an averaged bottom-channel lever arm of 112 µeV/mV.
[Note3] Strictly speaking, exciton condensation does not occur in 1D or 2D at finite temperature. However, for real experimental systems we can have quasi-condensation when the correlation length exceeds the system size <cit.>.
[Note4] Holes in strained germanium have spin-3/2, but the large heavy-hole light-hole splitting leads to an effective two-level system.
[Yang and Zhang(1990)] C. N. Yang and S. C. Zhang, SO4 Symmetry in a Hubbard model, Modern Physics Letters B 04, 759 (1990). https://doi.org/10.1142/S0217984990000933
[Rizzi et al.(2005)] M. Rizzi, D. Rossini, G. De Chiara, S. Montangero, and R. Fazio, Phase Diagram of Spin-1 Bosons on One-Dimensional Lattices, Physical Review Letters 95, 240404 (2005). https://doi.org/10.1103/PhysRevLett.95.240404
[Shlyapnikov and Tsvelik(2011)] G. V. Shlyapnikov and A. M. Tsvelik, Polar phase of one-dimensional bosons with large spin, New Journal of Physics 13, 065012 (2011). https://doi.org/10.1088/1367-2630/13/6/065012
[Winkler(2003)] R. Winkler, Spin-orbit Coupling Effects in Two-Dimensional Electron and Hole Systems (Springer, 2003).
[Gor'kov and Rashba(2001)] L. P. Gor'kov and E. I. Rashba, Superconducting 2D System with Lifted Spin Degeneracy: Mixed Singlet-Triplet State, Physical Review Letters 87, 037004 (2001). https://doi.org/10.1103/PhysRevLett.87.037004
[Golovach et al.(2008)] V. N. Golovach, A. Khaetskii, and D. Loss, Spin relaxation at the singlet-triplet crossing in a quantum dot, Physical Review B 77, 045328 (2008). https://doi.org/10.1103/PhysRevB.77.045328
[L. D. Landau(1932)] L. D. Landau, Zur Theorie der Energieübertragung. II, Physics of the Soviet Union 2, 46 (1932). https://doi.org/10.1016/B978-0-08-010586-4.50014-6
[Zener(1932)] C. Zener, Non-Adiabatic Crossing of Energy Levels, Proceedings of the Royal Society of London Series A 137, 696 (1932). https://doi.org/10.1098/rspa.1932.0165
[Petrov et al.(2000)] D. S. Petrov, G. V. Shlyapnikov, and J. T. M. Walraven, Regimes of Quantum Degeneracy in Trapped 1D Gases, Physical Review Letters 85, 3745 (2000). https://doi.org/10.1103/PhysRevLett.85.3745
|
http://arxiv.org/abs/2307.00982v1
|
20230703125733
|
The Fyodorov-Hiary-Keating Conjecture. II
|
["Louis-Pierre Arguin", "Paul Bourgade", "Maksym Radziwiłł"]
|
math.NT
|
["math.NT", "math.PR", "11M06, 11M50, 60G70, 60B20"]
|
We prove a lower bound on the maximum of the Riemann zeta function in a typical short interval on the critical line. Together with the upper bound from <cit.>, this implies
tightness of
max_|h|≤ 1|ζ(1/2+ iτ+ i h)|·(loglog T)^3/4/log T,
for large T, where τ is uniformly distributed on [T,2T].
The techniques are also applied to bound the right tail of the maximum, proving the distributional decay ≍ y e^-2y for y positive. This confirms the Fyodorov-Hiary-Keating conjecture, which states that the maximum of ζ in short intervals lies in the universality class of logarithmically correlated fields.
Department of Mathematics, Baruch College and Graduate Center, City University of New York, USA
[email protected]
Courant Institute, New York University, USA
[email protected]
Department of Mathematics, Caltech, Department of Mathematics, U.T. Austin, USA
[email protected]
The Fyodorov-Hiary-Keating Conjecture. II.
Louis-Pierre Arguin, Paul Bourgade, Maksym Radziwiłł
August 1, 2023
==========================================
§ INTRODUCTION
The distribution of the Riemann zeta function on the critical line is conjecturally related to random matrices, a fact discovered by Montgomery <cit.> for local statistics of the zeros.
It was extended in many directions including to distributions for families of L-functions <cit.> and their moments <cit.>.
Fyodorov, Hiary & Keating <cit.> and Fyodorov & Keating <cit.> proposed to further expand the scope of this analogy at the level of extreme values. Based on a similar conjecture for random unitary matrices, they
put forward the very precise asymptotics
1/T· meas{ T ≤ t ≤ 2T : max_|h| ≤ 1 |ζ(1/2 + i t + i h)| > e^y ·log T/(loglog T)^3/4}→ F(y),
as T →∞, where the limiting distribution function F satisfies F(y)∼ C ye^-2y for large y.
While the explicit form of F is not expected to be universal, the exponent 3/4 and the tail asymptotics ye^-2y characterize the universality class of logarithmically correlated fields.
In the first part of this series <cit.>, we showed the upper bound of this conjecture, F(y) ≪ y e^-2y.
The main goal of this paper is to complete this work and establish tightness in the Fyodorov-Hiary-Keating conjecture,
by showing F(y) → 1 as y → - ∞. The following is the main result.
There exists c>0 such that for any T≥ 100 and 0≤ y≤ (loglog T)^1/10 we have
1/T· meas{ T ≤ t ≤ 2T : max_|h|≤ 1 |ζ(1/2+ i t+ i h)|< e^-ylog T/(loglog T)^3/4}≤ c^-1 y^-c.
A direct consequence of the above result and <cit.> is the expected tightness of maxima on short intervals, and existence of subsequential limits.
For every ε > 0 there exists C > 0 such that for any T≥ 100, for t ∈ [T, 2T] in a set of measure larger than (1 - ε) T we have
|max_|h| ≤ 1log |ζ(1/2 + i t + i h)| - (loglog T - 3/4logloglog T)| ≤ C.
In particular, there exists a subsequence T_ℓ→∞ and a distribution function F such that
1/T_ℓ· meas{ t ∈ [T_ℓ, 2 T_ℓ]: max_|h| ≤ 1 |ζ(1/2 + i t + i h)| > e^y ·log T_ℓ/(loglog T_ℓ)^3/4}→ F(y),
uniformly in y ∈ℝ outside of a countable set.
Previous results in the direction of Theorem <ref> were limited to the first order log T, conditionally on the Riemann Hypothesis by Najnudel <cit.> and unconditionally by the authors with Belius and Soundararajan <cit.>. This contrasts with the developments on the upper bound, starting with the first order log T proved in <cit.>, then the second order by Harper <cit.>, and finally the optimal upper bound with the tail distribution <cit.>. In fact, the present work builds on many techniques developed for the upper bound in <cit.>, as well as new inputs as we now explain.
Progress towards the Fyodorov-Hiary-Keating conjecture has relied on the observation that the maxima of |ζ| on a short interval are related to extremes of branching processes. Indeed,
the emergence of large values of |ζ| follows a scenario first identified by Bramson <cit.> in the setting of branching Brownian motion. As explained in the introduction of <cit.>,
the explicit branching structure behind ζ comes from the Dirichlet polynomials (S_k(h), k≥ 1), |h|≤ 1, defined in (<ref>).
These polynomials behave similarly to correlated random walks, the time index k corresponding to primes in the loglog scale.
Bramson's scenario translates into the ballistic behavior of
(S_k(h), k≤ n_ℒ) conditioned not to cross an upper barrier.
Estimating the maximum of ζ with a precision of order one is a delicate task because the final index n_ℒ needs to be y-dependent and very large, i.e., the sum must include primes very close to T.
The proof of Theorem <ref> is decomposed into two parts. First, it is shown that large values of S_n_ℒ indeed imply large values of log|ζ|, cf. Proposition <ref>. Second, we prove that large values of S_n_ℒ of the claimed size are achieved, see Proposition <ref>.
Proposition <ref> builds on two techniques from <cit.>, namely the introduction of a lower barrier ensuring that large deviations of the increments of S_k can be obtained even for large primes, and the precise encoding through Dirichlet sums of the event that S remains in the corridor defined by an upper barrier and lower barrier.
To justify that large values of S_n_ℒ imply large values of log|ζ|, the first order asymptotics from <cit.> relied on working on the right of the critical line. However, implementing this method for the much finer tightness would be considerably more involved. Instead, Proposition <ref> uses a new, simpler argument allowing to work directly on the critical line, through an integral approximation of ζ by a finite Euler product (Lemma <ref>), and a control of the regularity in h of S_n_ℒ on high points (Proposition <ref>).
With these methods developed for Theorem <ref>, we can also complement the upper bound F(y) ≪ y e^-2 y from <cit.>, and show that F(y) ≍ y e^-2y for positive y.
For any C>0 there exists c>0 such that
for any 10≤ y≤ Cloglog T/logloglog T, we have
ℙ(max_|h|≤ 1 |ζ(1/2+ iτ+ i h)|>e^ylog T/(loglog T)^3/4)≥ c y e^-2y e^-y^2/loglog T.
This proves the matching lower bound of the upper tail not only in the exponential regime y≤√(loglog T) but also in the Gaussian regime √(loglog T)≤ y≤ Cloglog T/logloglog T, because the proof of <cit.> implies the Gaussian decay in this range.
The estimate (<ref>) essentially
matches the range y=O(t) proved by Bramson <cit.> for the branching Brownian motion up to time t.
(The time t corresponds to loglog T in our problem.)
It is weaker by a logarithmic factor as it would correspond to y≤ C t/log t in the branching Brownian motion case.
We are not aware of other examples of log-correlated processes where the order of the right tail of the maximum is known to this level of precision. In fact, any form of decay has only been proved for a few models in this universality class.
Notably
for the branching random walk, the best known range is y=O(√(t)) <cit.>, which matches the known precision for the two-dimensional discrete Gaussian free field on the N× N square grid, y=O(√(log N)) <cit.>.
A finer control of the contributions from small primes in the random walk would improve this range of y in Theorem <ref> to match Bramson's.
The distributional limit obtained in Corollary <ref> is presumably unique but we believe this is out of reach with current number theory techniques. Moreover, no explicit formula for F was conjectured.
Indeed, denoting U_n a Haar-distributed n× n unitary matrix, <cit.> proposed a very precise limiting distribution for
sup_|z|=1(log| det(z-U_n)|-log n + 3/4loglog n),
but as explained in <cit.> this limit is not expected to coincide with F: It
primarily suggested
the characteristic exponent 3/4 and the tail distribution y e^-2y for ζ, which are the prominent signatures of extremal statistics in log-correlated fields <cit.>.
Progress on a limit for (<ref>) culminated in the breakthrough proofs of
tightness <cit.> and uniqueness <cit.> of a limiting distribution for the more general circular beta ensembles, after initial steps verifying the first <cit.> and second order terms <cit.>.
The exact form of the limiting distribution of (<ref>), and universality of its right tail, remain open.
Acknowledgment. L.-P. A. is supported by the grants NSF CAREER 1653602 and NSF DMS 2153803, P. B. is supported by the NSF grant DMS 2054851, and M. R. is supported by the NSF grant DMS 1902063.
Notation. Throughout the paper, τ will denote a random variable uniformly distributed in [T, 2T], and T will be some large parameter that is usually taken to go to infinity.
With this notation, for any measurable function f on [T,2T] and event A, we have
ℙ( f(τ) ∈ A) := 1/T· meas{ T ≤ t ≤ 2T : f(t) ∈ A }.
§ PROOF OF THEOREM <REF>
Let
n_0 := ⌊ y ⌋ and n := ⌊loglog T ⌋ and n_ℒ := n - n_0.
For n_0 ≤ k ≤ n_ℒ and |h|≤ 1, we consider the partial sums
S_k(h) = Re∑_n_0<loglog p ≤ k( p^-(1/2 + iτ + i h) + 1/2· p^- 2(1/2 + iτ + i h) ).
Essentially one can think of S_k(h) as an approximation to
∫_ℝlog |ζ(1/2 + iτ + i h + i x)| f ( e^k x) e^k dx.
for some choice of smoothing with f compactly supported.
We will show that with high probability the local maxima of S_n_ℒ(h) arise at those h at which the partial sums S_k(h) evolve in a predictable manner as k runs from n_0 to n_ℒ. More precisely, the partial sums S_k(h) of maximizing h's are constrained between L_k and U_k (defined below) for all n_0 ≤ k ≤ n_ℒ. Once k reaches n_ℒ there are only O_y(1) well-spaced (i.e., 1/log T spaced) values of h that can satisfy all those constraints, thus identifying the maximum almost uniquely.
In order to define L_k and U_k we introduce the slope,
α=1-3/4·log n/n.
Furthermore given a function f, we define a symmetrized version,
𝒮_ℒ(f)(k) :=
f(k - n_0) for n_0 < k ≤n/2,
f(n_ℒ - k) for n/2 < k < n_ℒ,
0 for k ≥ n_ℒ or k ≤ n_0.
Then, the so-called barriers (i.e., values L_k and U_k) are defined as
U_k = y/10 +α (k-n_0) - 10 𝒮_ℒ(x ↦log(x))(k),
L_k = - 10 y +α (k-n_0) - 𝒮_ℒ(x ↦ x^3/4)(k).
We now introduce the set of good points G_ℒ, defined more generally for
n_0 ≤ℓ≤ n_ℒ as
G_0 =[- 1/2, 1/2]∩ e^-(n_ℒ - n_0)ℤ,
G_ℓ = { h∈ G_0 : S_k(h) ∈[L_k, U_k] for all k ≤ℓ}.
We will show that with high probability the local maximum belongs to G_ℒ.
We first comment on the above choices of barriers and discrete sets.
The interval [- 1/2, 1/2] defining G_0 needs to be strictly included in the original interval [-1,1], as will be apparent in the proof of Proposition <ref>. Moreover, the discretization step e^-(n_ℒ - n_0) will be convenient for the proof of Proposition <ref> as it corresponds to the number of steps of the random walk (<ref>), but it is not essential and any step in [e^-n_ℒ,e^-(n_ℒ - n_0)] would work. However, contrary to <cit.>, it is essential that the upper barrier is convex and not concave, as we will see in the proof of Proposition <ref>.
The proof of the main theorem reduces now to two main propositions. In the first proposition, we show how the local maxima of the zeta function arise from the good points h ∈ G_ℒ.
There exists an absolute constant C>0 such that uniformly in T≥ 100 and 0≤ y≤ (loglog T)^1/10 we have
ℙ ( max_|h| ≤ 1 log |ζ(1/2 + iτ + i h)| ≥ n - 3/4log n - 100 y - C ) ≥ℙ ( ∃ h ∈ G_ℒ ) + O(e^-y).
In the second proposition, we then show that good points exist with high probability.
There exists c>0 such that uniformly in T≥ 100 and 0≤ y≤ (loglog T)^1/10 we have
ℙ ( ∃ h ∈ G_ℒ ) = 1 + O(y^-c).
Combining Proposition <ref> and Proposition <ref> yields Theorem <ref>. We now describe the proofs of Proposition <ref> and Proposition <ref>.
§.§ Proof of Proposition <ref>
The proof of Proposition <ref> breaks down into two propositions.
There exists C>0 such that for any 1000 < y < n^1/10
ℙ ( max_|h| ≤ 1log |ζ(1/2 + iτ + i h)| ≥max_h ∈ G_0min_|u| ≤ 1 (S_n_ℒ(h + u) + √(|u| e^n_ℒ)) - 2 C - 20 y ) ≥ 1 - O(e^-y).
We then show that with high probability for all h ∈ G_ℒ and all |u| ≤ 1,
|S_n_ℒ(h + u) - S_n_ℒ(h)| ≤ 20y + √(|u| e^n_ℒ).
For any 1000 < y < n^1/10 we have
ℙ (∀ h ∈ G_ℒ ∀ |u| ≤ 1 : |S_n_ℒ(h + u) - S_n_ℒ(h)| ≤ 20 y + √(|u| e^n_ℒ) ) = 1 - O(e^-y).
On the event that there exists a h ∈ G_ℒ, Proposition <ref> now implies
max_v ∈ G_0min_|u| ≤ 1 (S_n_ℒ(v + u) + √(|u|e^n_ℒ)) ≥min_|u| ≤ 1 (S_n_ℒ(h + u) + √(|u| e^n_ℒ))
≥ S_n_ℒ(h) - 20y ≥ n - 3/4log n - 50 y
outside of a set of probability O(e^-y). Proposition <ref> then yields that outside of a set of τ of probability ≪ e^-y,
max_|h| ≤ 1log |ζ(1/2 + iτ + i h)| > n - 3/4log n - 100 y - 2C.
In other words,
ℙ (∃ h ∈ G_ℒ ) ≤ℙ (max_|h| ≤ 1log |ζ(1/2 + iτ + i h)| > n - 3/4log n - 100 y - 2C ) + O(e^-y)
and Proposition <ref> follows.
§.§ Proof of Proposition <ref>
Cauchy-Schwarz inequality readily implies
ℙ ( ∃ h ∈ G_ℒ ) ≥𝔼[#G_ℒ]^2/𝔼[(# G_ℒ)^2].
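For completeness, this is the standard second moment bound: since the event {∃ h ∈ G_ℒ} coincides with {# G_ℒ≥ 1}, the Cauchy-Schwarz inequality gives
𝔼[# G_ℒ] = 𝔼[# G_ℒ· 1(# G_ℒ≥ 1)] ≤𝔼[(# G_ℒ)^2]^1/2·ℙ(# G_ℒ≥ 1)^1/2,
and squaring and rearranging yields (<ref>).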
For fixed k and h,h'∈ G_0, we posit that the random variables (S_k(h), S_k(h')) can be well approximated by two correlated Gaussian random variables (𝒢_k(h), 𝒢_k(h')) with,
𝒢_k(h) := ∑_n_0 ≤ j ≤ k𝒩_j and 𝒢_k(h') := ∑_n_0 ≤ j ≤ k𝒩_j',
where the increments 𝒩_j and 𝒩_j' are Gaussian random variables with mean 0, equal variance
𝔼[𝒩_k^2]=𝔼[𝒩'_k^2]= 𝔰_k^2 := ∑_e^k-1 < log p ≤ e^ k ( 1/(2 p) + 1/(8 p^2) ),
and covariance
𝔼[𝒩_k𝒩'_k]=ρ_k:= ∑_e^ k-1 < log p ≤ e^ k ( cos(|h - h'| log p)/(2 p) + cos(2 |h - h'| log p)/(8 p^2) ).
The analog of the good sets (<ref>) for the Gaussian random variables is
𝔊_ℒ^± := { h ∈ G_0 : 𝒢_k(h) ∈ [L_k ∓ 1, U_k ± 1] for all n_0 ≤ k ≤ n_ℒ}.
We then show that in (<ref>) we can replace the arithmetic good set G_ℒ by the purely probabilistic good sets 𝔊_ℒ^±.
Uniformly in T≥ 100 and 100≤ y≤ n^1/10, we have
𝔼[# G_ℒ]^2/𝔼[(# G_ℒ)^2]≥ (1 + O(y^-10) ) 𝔼[#𝔊_ℒ^+]^2/𝔼[(#𝔊_ℒ^-)^2].
This result is an immediate consequence of the comparison with Gaussian random walks as stated in Propositions <ref> and <ref> in the next Section <ref>.
The problem is now reduced to a purely probabilistic computation.
The proof of Proposition <ref> is concluded by the next proposition building on ideas of Bramson.
There is an absolute constant c>0 such that for any T≥ 100 and c^-1≤ y≤ n^1/10,
𝔼[#𝔊_ℒ^+]^2/𝔼[(#𝔊_ℒ^-)^2]≥ 1 - y^- c.
Combining the two above propositions with the lower bound from (<ref>) yields Proposition <ref>.
§ APPROXIMATIONS BY GAUSSIAN RANDOM WALKS
The proof of Proposition <ref> relies on approximating one-point and two-point correlations in terms of correlations of Gaussian random variables, see Propositions <ref> and <ref> below.
Note that the one-point estimate contains an additional twist by a Dirichlet polynomial. This will be needed in the proof of Proposition <ref>.
The proofs of Propositions <ref> and <ref> are independent of the rest of the paper and can be skipped on a first reading.
Let h∈[-1,1].
Let n_0 ≤ℓ≤ n_ℒ.
Let (S_k(h), n_0 ≤ k ≤ n_ℒ) and (𝒢_k(h),n_0 ≤ k ≤ n_ℒ)
be as in Equations (<ref>) and (<ref>).
Let 𝒬 be a Dirichlet polynomial of length ≤exp(1/100· e^n) and supported on integers such that
all their prime factors are greater than exp(e^ℓ).
Then, we have for n_0 large enough,
𝔼 [ |𝒬(1/2 + iτ + i h)|^2 1 ( S_k(h) ∈ [L_k, U_k], k ≤ℓ ) ]
≥ (1 + n_0^-10) 𝔼 [ |𝒬(1/2 + iτ + i h)|^2 ] ·ℙ (𝒢_k(h) ∈ [L_k + 1, U_k - 1] , n_0 ≤ k ≤ℓ )
and
𝔼 [ |𝒬(1/2 + iτ + i h)|^2 1 ( S_k(h)∈ [L_k, U_k], k ≤ℓ ) ]
≤ (1+n_0^-10) 𝔼 [ |𝒬(1/2 + iτ + i h)|^2 ] ·ℙ(𝒢_k(h)∈ [L_k-1, U_k+1], n_0 < k ≤ℓ).
Let h,h'∈ [-1,1].
Consider (S_k(h), S_k(h')) and (𝒢_k(h), 𝒢_k(h')) for n_0<k≤ n_ℒ as defined in Equations (<ref>) and (<ref>).
We have for n_0 large enough
ℙ((S_k(h),S_k(h'))∈ [L_k, U_k]^2, n_0< k≤ n_ℒ)
≤ (1+n_0^-10) ·ℙ((𝒢_k(h), 𝒢_k(h'))∈ [L_k-1, U_k+1]^2, n_0< k≤ n_ℒ).
A similar lower bound can be proved, but is actually not needed in the proofs of Theorems <ref> and <ref>.
The proof of both propositions relies on an extension of the techniques of <cit.> to estimate the probability of events involving the partial sums (<ref>) in terms of random walk estimates.
The first step is to approximate indicator functions in terms of explicit polynomials in Section <ref>.
The relations between the partial sums and the random walks are then established in Section <ref> via Dirichlet polynomials.
§.§ Approximation of Indicator Functions by Polynomials
First, we state a slight modification of <cit.> that is more convenient when working with lower bounds. Throughout the paper, the normalization for the Fourier transform is
f̂(u)=∫_ℝ e^- 2π i u x f(x) dx.
There exists an absolute constant C > 0 such that for any Δ, A ≥ 3, there exist entire functions G^-_Δ,A and G^+_Δ,A(x) ∈ L^2(ℝ) such that:
* The Fourier transforms G^±_Δ,A are supported on [-Δ^2A, Δ^2A].
* We have,
0 ≤ G^-_Δ, A(x)≤ G^+_Δ, A(x)≤ 1
for all x ∈ℝ.
* We have
1(x ∈ [0, Δ^-1]) ≤ G^+_Δ, A(x) · (1 + C e^-Δ^A - 1),
1(x ∈ [0, Δ^-1]) ≥ G^-_Δ, A(x)-C e^-Δ^A - 1.
* We have
G^+_Δ, A(x) ≤1(x ∈ [-Δ^-A/2 , Δ^-1 + Δ^-A/2]) + C e^-Δ^A - 1,
G^-_Δ, A(x) ≥1(x ∈ [Δ^-A/2 , Δ^-1 - Δ^-A/2]) · (1 - C e^-Δ^A - 1).
* We have
∫_ℝ |Ĝ^±_Δ, A(x)| dx ≤ 2Δ^2A.
This is proved the same way as <cit.> with
G^-_Δ,A(x)=∫_Δ^- A/2-Δ^-A^Δ^-1-Δ^-A/2+Δ^-AΔ^2AF(Δ^2A(x-t)) t
and
G^+_Δ,A(x)=∫_-Δ^-A^Δ^-1+Δ^-AΔ^2AF(Δ^2A(x-t)) t
with the approximate identity F=F_0/F_0_1, where the existence of F_0 is given by the following lemma.
<cit.>
There exists a smooth function F_0 such that
* For all x ∈ℝ, we have 0 ≤ F_0(x) ≤ 1 and F̂_0(x) ≥ 0.
* F̂_0 is compactly supported on [-1,1].
* Uniformly in x ∈ℝ, we have
F_0(x) ≪ e^-|x| / log^2 (|x| + 10).
With Lemma <ref>, we get the following estimate of indicator functions expressed in terms of polynomials.
Let A≥ 3 and Δ large enough. There exist polynomials 𝒟^-_Δ, A(x) and 𝒟^+_Δ, A(x) of degree at most Δ^10A with ℓ-th coefficient bounded by 2Δ^2A(ℓ+1)
such that for all |x|≤Δ^6A
(x ∈ [0,Δ^-1]) ≤ (1 +C e^-Δ^A - 1) |𝒟^+_Δ, A(x)|^2
|𝒟^+_Δ, A(x)|^2 ≤1(x ∈ [-Δ^-A/2, Δ^-1+Δ^-A/2]) +Ce^-Δ^A-1,
and
(x ∈ [0,Δ^-1]) ≥ |𝒟^-_Δ, A(x)|^2-Ce^-Δ^A-1
|𝒟^-_Δ, A(x)|^2 ≥ (1 -C e^-Δ^A - 1) 1(x ∈ [Δ^-A/2, Δ^-1-Δ^-A/2]),
for some absolute constant C>0.
We prove the inequalities (<ref>) for 𝒟^-_Δ, A. The ones for 𝒟^+_Δ, A(x) were proved in <cit.> using the function G^+_Δ, A, cf. Equations (32), (33) and (41), (42) there.
The treatment is very similar to the one below.
For the first inequality in (<ref>), item (3) of Lemma <ref> ensures the existence of a function G^-_Δ, A(x) in L^2 such that
(x∈ [0,Δ^-1])≥ G^-_Δ, A(x)-Ce^-Δ^A-1.
For ν=Δ^10A, we write G^-_Δ,A(x) as
G^-_Δ,A(x)= ∫_ℝ e^2π iξ xĜ^-_Δ,A(ξ) dξ= 𝒟^-_Δ, A(x) + ∑_ℓ >ν(2π i x)^ℓ/ℓ!∫_ℝξ^ℓĜ^-_Δ, A(ξ) dξ,
where
𝒟^-_Δ, A(x)= ∑_ℓ≤ν(2π i x)^ℓ/ℓ!∫_ℝξ^ℓĜ^-_Δ, A(ξ) dξ .
Clearly, the degree of 𝒟^-_Δ, A is ν=Δ^10A, and
∫_ℝ |ξ|^ℓ |G^-_Δ,A(ξ)| dξ≤Δ^2A ℓ∫_ℝ |G^-_Δ,A(ξ)| dξ≤ 2 Δ^2A (ℓ+1),
by properties (1) and (5) of Lemma <ref>. Thus, the coefficients of 𝒟^-_Δ, A(x) are bounded by ≪Δ^2A(ℓ+1).
Assuming that |x|≤Δ^6A, then the error term in Equation (<ref>) is smaller than
(2π)^ν/ν! |x|^ν∫_ℝ |ξ^ν| |G^-_Δ, A(ξ)| dξ≤10^ν/ν !Δ^6Aν Δ^2A(ν + 1)≤10^ν/ν !Δ^9Aν.
This is ≤ e^-Δ^A for the choice ν=Δ^10A.
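In more detail, the last bound follows from the elementary estimate ν !≥ (ν/e)^ν: for ν=Δ^10A,
10^ν/ν !Δ^9Aν≤ (10 eΔ^9A/Δ^10A)^ν = (10 e Δ^-A)^ν≤ e^-ν≤ e^-Δ^A,
as soon as Δ^A≥ 10 e^2, which holds for Δ large enough.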
This shows that whenever |x|≤Δ^6A
G^-_Δ, A(x)=𝒟^-_Δ, A(x)+O^*(e^-Δ^A),
where the O^⋆ means that the implicit constant is smaller than 1.
If x∉ [0,Δ^-1], then Equations (<ref>) and (<ref>) with the fact that G^-_Δ, A≥ 0 imply
-e^-Δ^A≤𝒟^-_Δ, A(x)≤ 2 C e^-Δ^A-1,
for Δ large enough (depending on C).
Therefore, in this case, the following holds
(x∈ [0,Δ^-1]) ≥ |𝒟^-_Δ, A(x)|^2+(𝒟^-_Δ, A(x)-|𝒟^-_Δ, A(x)|^2)-2Ce^-Δ^A-1
≥ |𝒟^-_Δ, A(x)|^2-2|𝒟^-_Δ, A(x)|-2Ce^-Δ^A-1
≥ |𝒟^-_Δ, A(x)|^2-6Ce^-Δ^A-1.
If x∈ [0,Δ^-1], then the fact that G^-_Δ, A≤ 1 implies instead.
-e^-Δ^A≤𝒟^-_Δ, A(x)≤ 1+2 C e^-Δ^A-1.
We deduce that:
(x∈ [0,Δ^-1]) ≥ |𝒟^-_Δ, A(x)|^2+(𝒟^-_Δ, A(x)-|𝒟^-_Δ, A(x)|^2)-2Ce^-Δ^A-1
= |𝒟^-_Δ, A(x)|^2+|𝒟^-_Δ, A(x)|(sgn𝒟^-_Δ, A(x) -|𝒟^-_Δ, A(x)|)-2Ce^-Δ^A-1
≥ |𝒟^-_Δ, A(x)|^2-6Ce^-Δ^A-1.
This establishes the first inequality in (<ref>) by redefining C.
For the second inequality in (<ref>), item (4) of Lemma <ref> and Equation (<ref>) give
|𝒟^-_Δ, A(x)+O^*(e^-Δ^A-1)|^2≥ (1 -C e^-Δ^A - 1)·1(x ∈ [Δ^-A/2, Δ^-1-Δ^-A/2]).
Since the constant in the O^* is ≤ 1, the dominant term on the left-hand side is 𝒟^-_Δ, A(x), and we can absorb the additive error in a multiplicative factor to get
the second inequality in (<ref>).
§.§ Proof of Propositions <ref> and <ref>.
For these proofs, we need two preliminary steps. First, the constraints for the random walk (S_k)_k (<ref>) are re-expressed in terms of its increments.
Second, this allows us to write the probabilities for the Dirichlet sums S_k in terms of a probabilistic model.
Constraints and increments. First, the polynomial approximation of indicator functions from Lemma <ref> will be related to events involving the partial sums S_k, n_0<k≤ n_ℒ. Fix h∈ [-1,1].
Consider the increments
Y_j(h)=S_j(h)-S_j-1(h), n_0< j≤ n_ℒ.
To shorten the notation, we consider the set of times
𝒥_ℓ={n_0+1, n_0+2, …, ℓ -1 , ℓ}.
with n_0 ≤ℓ≤ n_ℒ.
We will partition the intervals of values taken by Y_j, j∈𝒥_ℓ into sub-intervals of length Δ_j^-1 where
Δ_j=(j ∧ (n-j))^4.
The exponent 4 is chosen to ensure summability. In particular we will simply use that for y chosen large enough we have
∑_j ∈𝒥_ℓΔ_j^-1≤∑_j≥ n_0Δ_j^-1≤ 1.
We consider events for the partial sums of the form
{S_j(h)∈ [L_j, U_j], j∈𝒥_ℓ}, h∈ [-1,1].
We would like to decompose the above in terms of events for the increments
{Y_j(h)∈ [u_j,u_j + Δ_j^-1],j∈𝒥_ℓ}, h∈ [-1,1],
for a given tuple (u_j, j∈𝒥_ℓ). Note that such events are disjoint for two distinct tuples.
On an event of the form (<ref>), from (<ref>) we have
∑_i≤ j u_i ≤ S_j(h) ≤∑_i≤ j (u_i +Δ_i^-1) ≤∑_i≤ j u_i +1, for all j∈𝒥_ℓ,
This means that we have the following inclusions
{S_j(h)∈ [L_j, U_j+1], j∈𝒥_ℓ} ⊃⋃_𝐮∈ℐ{Y_j(h)∈ [u_j,u_j + Δ_j^-1], j∈𝒥_ℓ},
{S_j(h)∈ [L_j+1, U_j], j∈𝒥_ℓ} ⊂⋃_𝐮∈ℐ{Y_j(h)∈ [u_j,u_j + Δ_j^-1], j∈𝒥_ℓ},
where ℐ is the set of tuples 𝐮=(u_j, j∈𝒥_ℓ), u_j∈Δ_j^-1ℤ, such that ∑_i≤ ju_i∈ [L_j, U_j] for all j∈𝒥_ℓ.
The definition of ℐ imposes restrictions on the u_j's.
Indeed, we must have
u_j≤ U_j-L_j-1≤ 10Δ_j^1/4 and u_j ≥ L_j-U_j-1≥ -10Δ_j^1/4.
In all cases, we have the following bound which will be repeatedly used:
|u_j|≤ 100 Δ_j^1/4, j∈𝒥_ℓ.
Probabilistic model for the increments. Additionally to the original random walk (<ref>) and its Gaussian counterpart (<ref>), as an intermediate we now consider another probabilistic model needed for the proofs of Propositions <ref> and <ref>.
For h∈[-1,1], let
𝒮_k(h) = ∑_n_0≤loglog p≤ k Re( e^iθ_p p^-(1/2 + ih) + 1/2· e^2iθ_p p^-(1 + 2ih) ), k≤ n_ℒ,
where (θ_p, p prime) are i.i.d. random variables distributed uniformly on [0,2π], and define the corresponding increments
𝒴_k(h)=𝒮_k(h)-𝒮_k-1(h), k≤ n_ℒ.
It is easy to see that 𝒮_k and 𝒴_k have mean 0. The variance of the increments 𝒴_k coincides with (<ref>) and by a quantitative version of the Prime Number Theorem (see <cit.>) they satisfy
𝔰_j^2=1/2+ O(e^-c √(j)).
for some universal c>0. These precise asymptotics are not used in the comparison with the Gaussian model, i.e. in the proof of Proposition <ref> below, and they will be used only for convenience in the first and second moment for the Gaussian model, Proposition <ref>. In fact to apply the Ballot theorem from Proposition <ref> we will only rely on 𝔰_j^2∈[κ,κ^-1] for some fixed κ>0.
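As a purely illustrative numerical sketch (not used anywhere in the arguments; the prime block, sample size and random seed are arbitrary choices, and the numpy and sympy packages are assumed to be available), one can simulate one increment of this random model at h=0 and check that its mean is close to 0 and its variance close to 1/2, in line with the discussion above.

import numpy as np
from sympy import primerange

rng = np.random.default_rng(0)
# primes p in one block of the model: e^e < p <= e^{e^2}, i.e. loglog p in (1, 2]
lo, hi = np.exp(np.e), np.exp(np.exp(2.0))
block = np.array([p for p in primerange(2, int(hi) + 1) if p > lo], dtype=float)

def increment(rng):
    theta = rng.uniform(0.0, 2.0 * np.pi, size=block.size)
    # Re( e^{i theta_p} p^{-1/2} ) + Re( (1/2) e^{2 i theta_p} p^{-1} ) at h = 0
    return np.sum(np.cos(theta) / np.sqrt(block) + 0.5 * np.cos(2.0 * theta) / block)

samples = np.array([increment(rng) for _ in range(20000)])
exact_var = np.sum(1.0 / (2.0 * block) + 1.0 / (8.0 * block ** 2))
print(samples.mean(), samples.var(), exact_var)  # mean ~ 0, both variances roughly 1/2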
We prove (<ref>). The upper bound (<ref>) is proved in a similar way, see Proposition <ref>. We define the weighted expectation,
𝔼_𝒬[X] := 𝔼 [ |𝒬( 12 + τ)|^2 · X(τ) ] ·𝔼 [ |𝒬( 12 + τ)|^2 ]^-1,
and the corresponding measure ℙ_𝒬(A) := 𝔼_𝒬[1(τ∈ A)].
In what follows, we drop the dependence on h as it plays no role.
Equation (<ref>) directly implies (by taking U_k instead of U_k+):
ℙ_𝒬(S_k∈ [L_k, U_k],k∈𝒥_ℓ)≥∑_𝐮∈ℐℙ_𝒬(Y_k-u_k∈ [0,Δ_k^-1], k∈𝒥_ℓ),
where ℐ is now the set of tuples 𝐮=(u_j, j∈𝒥_ℓ), u_j∈Δ_j^-1ℤ, such that ∑_i≤ ju_i∈ [L_j, U_j-1] for all j∈𝒥_ℓ.
By introducing the indicator functions ∏_k (|Y_k-u_k|≤Δ_k^6A), Equation (<ref>) of Lemma <ref> can be applied with A=10 (say), thanks to the bound (<ref>).
This yields
ℙ_𝒬(Y_k-u_k∈ [0,Δ_k^-1],k∈𝒥_ℓ)≥𝔼_𝒬[∏_k(|𝒟^-_Δ_k,A(Y_k-u_k)|^2-Ce^-Δ_k^A-1)(|Y_k-u_k|≤Δ_k^6A)].
The tricky part is to get rid of the indicator function.
For simplicity, let's write 𝒟_k for |𝒟^-_Δ_k,A(Y_k-u_k)|^2-Ce^-Δ_k^A-1.
Since 1(|Y_k-u_k|≤Δ_k^6A)=1-1(|Y_k-u_k|> Δ_k^6A), we can rewrite the above as
𝔼_𝒬[∏_k∈𝒥_ℓ𝒟_k]
+∑_J⊆𝒥_ℓ, J≠∅(-1)^|J|𝔼_𝒬[∏_k∈𝒥_ℓ𝒟_k∏_j∈ J(|Y_j-u_j|>Δ_j^6A)].
We start with the first term, which will be dominant.
Each Y_j is a Dirichlet polynomial of length at most exp(2e^j). Therefore,
from Lemma <ref>, for any subset ℳ⊂𝒥_ℓ the Dirichlet polynomial ∏_j∈ℳ𝒟^-_Δ_j,A(Y_j-u_j) is of length at most
exp(2e^n_ℒΔ_n_ℒ^100) ≤exp ( 1/100 e^n )
for y large enough.
Therefore, Lemma <ref> applies to compare with the random model with increments 𝒴_k given in (<ref>):
𝔼_𝒬[∏_k∈ℳ |𝒟^-_Δ_k, A(Y_k-u_k)|^2]
= (1+(T^-99/100))∏_k∈ℳ𝔼 [ |𝒟^-_Δ_k,A(𝒴_k - u_k)|^2 ],
where we have split the expectation _𝒬 and used =_𝒬 for the probabilistic model, thanks to the independence of the 𝒴_k's.
Moreover, for each k, we have
[ |𝒟^-_Δ_k,A(𝒴_k-u_k)|^2] ≥[ |𝒟^-_Δ_k,A(𝒴_k-u_k)|^2(|𝒴_k-u_k|≤Δ_k^6A)]
≥ (1-Ce^-Δ_k^A-1)·(𝒴_k-u_k ∈ [Δ_k^-A/2, Δ_k^-1-Δ_k^-A/2]),
where the second inequality follows from (<ref>),
noting that the condition |𝒴_k-u_k|≤Δ_k^6A is implied by 𝒴_k-u_k ∈ [Δ_k^-A/2, Δ_k^-1-Δ_k^-A/2], and thus can be dropped.
We now rewrite this probability in terms of Gaussian increments.
Lemma <ref> in Appendix <ref> gives
(𝒴_k-u_k ∈ [Δ_k^-A/2, Δ_k^-1-Δ_k^-A/2])=(𝒩_k-u_k∈ [Δ_k^-A/2,Δ_k^-1-Δ_k^-A/2])+(e^-ce^k/2).
The overspill Δ_k^-A/2 can be removed at no cost: from (<ref>) and 𝔰_k≍ 1, uniformly in x,y∈ u_k+[Δ_k^-A/2,Δ_k^-1-Δ_k^-A/2] the density f_k of 𝒩_k satisfies f_k(x)≍ f_k(y), so
(𝒩_k -u_k∈ [Δ_k^-A/2,Δ_k^-1-Δ_k^-A/2]) =(1+(Δ_k^-A/2))·(𝒩_k-u_k∈ [0,Δ_k^-1]).
Moreover,
(𝒩_k-u_k∈ [0,Δ_k^-1])≫Δ_k^-1e^-2 u_k^2≫Δ_k^-1e^-100^2Δ_k^1/2.
This is much larger than the additive error term (e^-ce^k/2) in (<ref>), which can therefore be replaced by a multiplicative error. Both multiplicative errors together give
for k≤ n_ℒ
(𝒴_k-u_k ∈ [Δ_k^-A/2, Δ_k^-1-Δ_k^-A/2])=(1+ ((k∧ (n-k))^-2A )) ·(𝒩_k-u_k∈ [0,Δ_k^-1]).
The product over k∈𝒥_ℓ of the error terms above is (1+(n_0^-A)).
Going back to Equations (<ref>) and (<ref>), we have established that
_𝒬[∏_k∈ℳ|𝒟_Δ_k, A^-(Y_k-u_k)|^2]≥ (1+(n_0^-A))∏_k∈ℳ(𝒩_k-u_k∈ [0,Δ_k^-1]).
Recall that we aim at a similar estimate for 𝒟_k=|𝒟^-_Δ_k, A(Y_k-u_k)|^2-Ce^-Δ_k^A-1. From (<ref>), (𝒩_k-u_k∈ [0,Δ_k^-1])≫ e^-Δ_k and (<ref>) holds for arbitrary ℳ⊂𝒥_ℓ, so that by a simple expansion we have
_𝒬[∏_k∈𝒥_ℓ𝒟_k]≥ (1+(n_0^-A))∏_k∈𝒥_ℓ(𝒩_k-u_k∈ [0,Δ_k^-1]).
We now bound the second term in (<ref>). Let's fix the non-empty subset J⊆𝒥_ℓ in the sum.
Since
(|X|>λ)≤|X|^2q/λ^2q,
we have
𝔼_𝒬[∏_k∈𝒥_ℓ𝒟_k∏_j∈ J(|Y_j-u_j|>Δ_j^6A)]≤𝔼_𝒬[∏_k∈𝒥_ℓ𝒟_k∏_j∈ J|Y_j-u_j|^2q_j/Δ_j^12Aq_j],
where we pick q_j=⌊Δ_j^6A⌋, A=10.
As for the first term, we need to handle the error Ce^-Δ_k^A-1 in 𝒟_k.
For this we abbreviate d_k(x)= D^-_Δ_k,A(x-u_k), ε_k=Ce^-Δ_k^A-1, and expand
𝔼_𝒬[∏_k∈𝒥_ℓ𝒟_k∏_j∈ J|Y_j-u_j|^2q_j/Δ_j^12Aq_j]≤∑_B⊂𝒥_ℓ𝔼_𝒬[∏_k∈ B|d_k(Y_k)|^2∏_k∈𝒥_ℓ∖ Bε_k∏_j∈ J|Y_j-u_j|^2q_j/Δ_j^12Aq_j].
From Lemma <ref>, the Dirichlet polynomial d_j is of length at most exp(2e^jΔ_j^100).
The choice of q_j implies that the Dirichlet polynomial
∏_k∈ B d_k ∏_j∈ J (Y_j-u_j)^q_j
has length at most exp(2e^n_ℒΔ_n_ℒ^100)≤exp( 1/100 e^n ) as in (<ref>).
Therefore, we can use Lemma <ref> again, and work with the random model term by term.
Again, since 𝒬 is supported on integers all of whose prime factors p satisfy log p > e^ℓ, the expectation with respect to 𝔼_𝒬 coincides, for the random model, with the expectation with respect to 𝔼.
We start with the case j∈ B∩ J. We have
[|d_j(𝒴_j)|^2|𝒴_j-u_j|^2q_j]≪ [|d_j(𝒴_j)|^4 ]^1/2·[|𝒴_j-u_j|^4q_j]^1/2.
The definition of 𝒟^-_Δ_j, A in Equations (<ref>) and (<ref>) implies the following bound on all 2k-moments, k∈ℕ,
[|d_j(𝒴_j)|^2k]
≤[ ( ∑_ℓ≤Δ_j^10 A(2π )^ℓ/ℓ! 2Δ_j^2A(ℓ+1) (|𝒴_j|+100Δ_j^1/4)^ℓ )^2k ]
≪Δ_j^4k A [exp( 4π k Δ_j^2A (| 𝒴_j|+100 Δ_j^1/4))] ≪_k e^Δ_j^5A,
where the third inequality follows from Lemma <ref>. By Lemma <ref> and the inequality x^4q/q^4q≤(4q)!/(λ q)^4q· (e^λ x+e^-λ x)
with the choice q=q_j=⌊Δ_j^6A⌋, λ=10, we have for any j≤ n_ℒ
[|𝒴_j-u_j|^4q_j/Δ_j^24Aq_j]
≪ e^-2Δ_j^6A,
by Stirling's formula and the fact that |u_j|≤ 100 Δ_j^1/4.
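For the reader's convenience, here is a quick numerical sanity check (illustrative only; the values of λ, q and the grid of x are arbitrary choices) of the elementary inequality x^4q ≤ (4q)!·λ^-4q(e^λ x+e^-λ x) invoked above, which is just the Taylor expansion of e^λ|x|.

import numpy as np
from math import factorial, exp

lam, q = 10.0, 3
for x in np.linspace(-5.0, 5.0, 21):
    lhs = x ** (4 * q)
    rhs = factorial(4 * q) / lam ** (4 * q) * (exp(lam * x) + exp(-lam * x))
    assert lhs <= rhs + 1e-9  # the inequality holds at every sampled point
print("inequality verified on the sample grid")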
From equations (<ref>), (<ref>), (<ref>), (<ref>) and (<ref>) we have proved
𝔼_𝒬[∏_k∈𝒥_ℓ𝒟_k∏_j∈ J(|Y_j-u_j|>Δ_j^6A)]≪∑_B⊂𝒥_ℓ∏_J e^-Δ_j^6A∏_B\ J[|d_j(𝒴_j)|^2]∏_𝒥_ℓ\ Bε_j
=∏_j∈𝒥_ℓ[|d_j(𝒴_j)|^2] ∑_B⊂𝒥_ℓ∏_J e^-Δ_j^6A/[|d_j(𝒴_j)|^2] ∏_𝒥_ℓ\ (B∪ J)ε_j/[|d_j(𝒴_j)|^2] ∏_J\ Bε_j.
Moreover, from (<ref>) with the estimates (<ref>), (<ref>), we have
[|d_j(𝒴_j)|^2] ≫Δ_j^-1e^-100^2Δ_j^1/2. We have obtained
|∑_J⊆𝒥_ℓ, J≠∅(-1)^|J|𝔼_𝒬[∏_k∈𝒥_ℓ𝒟_k∏_j∈ J(|Y_j-u_j|>Δ_j^6A)]|
≪∏_j∈𝒥_ℓ[|d_j(𝒴_j)|^2]∑_J⊂𝒥_ℓ,J≠∅∑_B⊂𝒥_ℓ∏_J e^-12Δ_j^6A∏_𝒥_ℓ\ Bε_j^1/2
=∏_j∈𝒥_ℓ[|d_j(𝒴_j)|^2](∏_𝒥_ℓ(1+e^-12Δ_j^6A)-1)∏_𝒥_ℓ(1+√(ε_j))≪ e^-n_0^100∏_j∈𝒥_ℓ[|d_j(𝒴_j)|^2].
The above product is ≪∏_j∈𝒥_ℓ(𝒩_j-u_j∈ [0,Δ_j^-1]) as easily proved by combining (<ref>) and (<ref>). (A similar bound in the more general case of joint increments is detailed in (<ref>).)
Equations (<ref>),(<ref>) and (<ref>) with the above finally yield
ℙ_𝒬(Y_j-u_j ∈ [0,Δ_j^-1], j∈𝒥_ℓ)
≥ (1+(n_0^-10)) ∏_j∈𝒥_ℓ(𝒩_j-u_j∈ [0,Δ_j^-1]).
The claim (<ref>) follows by summing over 𝐮∈ℐ as in Equation (<ref>), and by applying the inclusion (<ref>) for the Gaussian random walk with increments 𝒩_j.
For the proof Proposition <ref> below, we will also consider the partial sums at h and h' jointly, i.e., S_k(h) and S_k(h'), n_0<k≤ n_ℒ, as well as the joint increments 𝒴_j(h) and 𝒴_j(h'). These increments have
covariance and correlations identical to those of 𝒩_j and 𝒩_j', i.e., they are given by (<ref>), which satisfies the asymptotics
ρ_j =
𝔰_j^2+O((e^j|h-h'|)^2) if j≤log |h-h'|^-1,
O((e^j|h-h'|)^-1) if j≥log |h-h'|^-1,
as is easily proved using the Prime Number Theorem as in <cit.>.
We also define ε_j=ε_j(h,h') by
ρ_j = 𝔰_j^2 - ε_j if j ≤log |h - h'|^-1,
ε_j if j > log |h - h'|^-1.
The precise asymptotics of the covariances in <ref> will not play a role in the proof of Proposition <ref> below.
However, it will be crucial in the proof of Proposition <ref>.
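As an illustration of the covariance asymptotics displayed above (not needed for the proofs): up to the small p^-2 corrections, the covariance of the j-th increments of the random model at h and h' is the sum over the j-th prime block of cos((h-h')log p)/(2p). The short Python sketch below (with an arbitrary block index and shift) compares this sum with 𝔰_j^2 in the coupled regime e^j|h-h'|<1; the numpy and sympy packages are assumed available.

import numpy as np
from sympy import primerange

j, delta = 2, 0.05                     # illustrative block index and shift |h - h'|
lo, hi = np.exp(np.exp(j - 1.0)), np.exp(np.exp(float(j)))
block = np.array([p for p in primerange(2, int(hi) + 1) if p > lo], dtype=float)
rho_j = np.sum(np.cos(delta * np.log(block)) / (2.0 * block))
s2_j = np.sum(1.0 / (2.0 * block))
print(rho_j, s2_j, np.exp(j) * delta)  # rho_j is close to s2_j since e^j*|h-h'| < 1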
We write (S_k,S_k') for (S_k(h),S_k(h')) for conciseness, and similarly for the increments.
The event on the left-hand side of (<ref>) is decomposed using the increments as in Equation (<ref>).
Then, Equation (<ref>) can be used to bound the indicator functions for both points. We take A=10 (say). This gives that the left-hand side of (<ref>) is
≤ (1+(e^-n_0^10))∑_𝐮, 𝐮'∈ℐ[∏_j∈𝒥_ℒ |𝒟^+_Δ_j, A(Y_j-u_j) 𝒟^+_Δ_j, A(Y'_j-u'_j)|^2],
where we write 𝒥_ℓ as in (<ref>).
We proceed as in Equation (<ref>).
From Lemma <ref>, the Dirichlet polynomial ∏_j𝒟^+_Δ_j,A(Y_j-u_j) is of length at most exp(2e^n_ℒΔ_n_ℒ^100). So the product of the polynomials for h and h' has length smaller than exp(4e^n_ℒΔ_n_ℒ^100)≤ T^1/100, as in (<ref>).
Lemma <ref> then implies
[ ∏_j∈𝒥_ℒ|𝒟^+_Δ_j, A(Y_j-u_j)𝒟^+_Δ_j, A( Y'_j-u'_j)|^2]
=(1+(T^-99/100))∏_j∈𝒥_ℒ[|𝒟^+_Δ_j, A(𝒴_j-u_j)𝒟^+_Δ_j, A( 𝒴'_j-u'_j)|^2].
We estimate the expectation for each j.
Write for short 𝒟^+_Δ_j, A(𝒴_j-u_j)=𝒟_j and similarly for 𝒟_j'.
We would like to introduce the indicator functions (|𝒴_j-u_j|≤Δ_j^6A) and (|𝒴'_j-u'_j|≤Δ_j^6A).
For this, note first that
[|𝒟_j 𝒟'_j |^2(|𝒴_j-u_j|> Δ_j^6A)]
≤ [|𝒟_j|^6 ]^1/3· [|𝒟'_j|^6 ]^1/3· (|𝒴_j-u_j|> Δ_j^6A )^1/3≪ e^-Δ_j^6A,
by Equation (<ref>) (with a=3) and Markov's inequality using (<ref>).
This observation implies that
[|𝒟_j 𝒟'_j |^2 ] =[|𝒟_j 𝒟'_j |^2(|𝒴_j-u_j|≤Δ_j^6A, |𝒴'_j-u'_j| ≤Δ_j^6A) ] + (e^-Δ_j^6A )
≤ ((𝒴_j-u_j, 𝒴_j'-u_j')∈ [-Δ_j^-A/2,Δ_j^-1+Δ_j^-A/2]^2 ) + (e^-Δ_j^A-1 ),
by Equation (<ref>) applied to both 𝒟_j and 𝒟_j'.
The Berry-Esseen approximation of Lemma <ref> can now be applied:
[|𝒟_j 𝒟'_j |^2 ]
≤ (1+Δ_j^-A/2) ((𝒩_j-u_j, 𝒩_j'-u_j')∈ [-Δ_j^-A/2,Δ_j^-1+Δ_j^-A/2]^2 ) + (e^-Δ_j^A-1 ).
The overspill Δ_j^-A/2 can be also removed as in (<ref>).
We conclude that the above is
= (1+ O(Δ_j^-A/2)) ((𝒩_j-u_j, 𝒩_j'-u_j')∈ [0,Δ_j^-1]^2 ) + (e^-Δ_j^A-1 )
=(1+ O(Δ_j^-A/2)) ((𝒩_j-u_j, 𝒩_j'-u_j')∈ [0,Δ_j^-1]^2 ),
since ((𝒩_j-u_j, 𝒩_j'-u_j')∈ [0,Δ_j^-1]^2)≫ e^-cu_j^2 -c u_j'^2≫ e^-2 cΔ_j^1/2 by the bound on u_j and u_j'.
It remains to use the above bound in (<ref>) and then (<ref>).
The claim then follows from Equation (<ref>) for the Gaussian random walks.
§ PROOF OF PROPOSITION <REF>
We first need preliminary bounds on the size of ζ and Dirichlet sums. We will use the notation
P_n_0(h) =∑_loglog p≤ n_0( p^-(1/2 + τ + h) + 1/2· p^- 2(1/2 + τ + h) ).
We have, for 1000 < y < n / 10,
ℙ (∀ m ≥ 1: max_|u| ≤ 2^m |ζ( 1/2 + τ + u)| ≤ 2^2m e^2n_ℒ ) = 1 - (e^-n),
ℙ (∀ m ≥ 1: max_|u| ≤ 2^m |S_n_ℒ(u)| ≤ 2^m/100 e^n_ℒ / 100 ) = 1 - (e^-n),
ℙ ( ∀ m ≥ 1: max_|u| ≤ 2^m | P_n_0(u)| ≤ 2^m / 100· 10 y ) = 1 - (e^-y).
By a union bound, the probability of the complement of the first event is
∑_m ≥ 1 2^- 4 m e^-4n_ℒ𝔼 [ max_|u| ≤ 2^m |ζ( 12 + τ + u)|^2 ] ≪∑_m ≥ 1 2^-4m e^-4n_ℒ· 2^m e^2n≪ e^-n,
as claimed, where the first inequality above relies on the same subharmonicity argument as <cit.>. For the second claim, we similarly have that the probability of the complement is bounded by,
∑_m ≥ 1 2^-4m e^-4n_ℒ·𝔼 [ max_|u| ≤ 2^m |S_n_ℒ(u)|^400 ] ≪∑_m ≥ 1 2^-4m e^-4n_ℒ· 2^m e^n n^200≪ e^-2n,
where we used (<ref>) in Lemma <ref>.
Finally, the last bound is proved in exactly the same way, using that, for v = ⌊ 100 y ⌋,
∑_m ≥ 1 2^-2v m / 100 (10 y)^-2 v·𝔼 [ max_|u| ≤ 2^m | P_n_0(u) |^2v ]
≪∑_m ≥ 1 2^-2v m / 100 (10 y)^-2v· 2^m e^n_0· v^1/2(2v)!/2^v v!· (C y)^v≪ e^-90 y,
where the moments calculation is now based on (<ref>) in Lemma <ref>.
The main analytic input is the next lemma.
Let 100 ≤ T ≤ t ≤ 2T and |h| ≤ 1.
Let f be a smooth function, compactly supported in [-1/(2π),1/(2π)] and such that f(0) = 1. Then,
log X ∫_ℝζ( 1/2 + it + ih + ix) ∏_p ≤ X (1 - 1/p^1/2 + it + ih + ix ) f̂ (x log X) dx = 1 + O(T^-1).
For z∈ℝ, we have f̂(z)=∫_ℝ f(u)e^ 2π i z u du. As f is compactly supported, by Paley-Wiener this defines for z∈ℂ an entire function of rapid (faster than polynomial) decay as |Re z| →∞ inside any fixed strip. We can therefore shift the contour of integration in (<ref>) and see that it is equal to
log X ∫_2 - i ∞^2 + i ∞ζ(s + it + ih) ∏_p ≤ X ( 1 - 1/p^s + it + ih ) f̂ ( (s - 1/2) log X/i ) ds/i + O(T^-1)
where the O(T^-1) term is the contribution of the pole at s = 1 - it - ih of ζ. On the line Re s = 2 we can write pointwise
ζ(s + it + ih) ∏_p ≤ X ( 1 - 1/p^s + it + ih ) = 1 + ∑_n > 1, p | n ⇒ p > X 1/n^s + it + ih.
After interchanging the sum and the integral, the task reduces to estimating
log X/i∫_2 - i∞^2 + i∞ n^-s - it - ih f̂ ( (s - 1/2) log X/i ) ds.
Shifting the contour back to the line Re s = 1/2, this is equal to
log X ∫_ℝ1/n^1/2 + it + ih + ix· f̂(x log X) dx = f ( -log n/2πlog X ) ·1/n^1/2 + it + ih.
If n = 1, then this is equal to f(0) = 1. On the other hand if n ≠ 1 then n > X and then by assumption f(-log n / (2πlog X)) = 0. This gives the claim.
We are now ready to prove Proposition <ref>.
From Lemma <ref>, there exists a smooth function f ≥ 0 such that f(0) = 1, f is compactly supported in [-1/2π,1/2π] and
|f̂(x)| ≪ e^- 2 √(|x|).
Applying Lemma <ref> with this choice for f and X=exp(e^n_ℒ), we find by the mean-value theorem that for every τ and h ∈ G_0, there exists a k ≥ 0 and 1/4· (2^k - 1) ≤ |u| ≤ 1/4· (2^{k+1} - 1) such that
log |ζ( 1/2 + τ + h + u)| - S_n_ℒ(h + u) - P_n_0(h + u) - √(|u| e^n_ℒ)≥ -C
with C > 0 an absolute constant and where we remind the definition (<ref>).
By (<ref>) and (<ref>) in Lemma <ref>, the probability (in τ) that there exists an |h| ≤ 1 and k ≥ 1 for which (<ref>) holds is ≪ e^-n. Moreover, by (<ref>) in Lemma <ref>, we also know that
max_|h| ≤ 1
|u| ≤ 1/4 |P_n_0(h + u)| ≤ 20 y
for all τ outside of a set of probability ≪ e^-y. Therefore, for all τ outside of a set of probability ≪ e^-y we find that for all h ∈ G_0 there exists a |u| ≤ 1/4 such that
log |ζ( 1/2 + τ + h + u)| - S_n_ℒ(h + u) - √(|u| e^n_ℒ)≥ -C - 20 y.
Since G_0 ⊂ [- 12, 12], it follows that for all τ outside of a set of measure ≪ e^-y, for all h ∈ G_0, there exists an |u| ≤ 1/4 such that
max_|v| ≤ 1 log |ζ( 1/2 + τ + v)| > S_n_ℒ(h + u) + √(|u| e^n_ℒ) - 2C - 20 y
≥min_|u| ≤ 1 (S_n_ℒ(h + u) + √(|u| e^n_ℒ)) - 2C - 20 y.
We now take an h ∈ G_0 that maximizes the right-hand side, and the claim follows.
§ PROOF OF PROPOSITION <REF>
The following lemma will be important.
Let n_0 ≤ℓ≤ n_ℒ.
Let v ≥ 1 and 0 ≤ k ≤ n be given.
Let 𝒬 be a Dirichlet polynomial supported on primes p or their squares p^2, such that e^ℓ≤log p ≤ e^n_ℒ and of length ≤exp(e^n/(200 v)):
𝒬(s)=∑_e^ℓ≤log p ≤ e^n_ℒ(a(p)/p^s+b(p)/p^2s),
where we also assume |b(p)|≤ 1.
Then
𝔼 [ sup_|h| ≤ 1
|u| ≤ e^-k + 1 |𝒬( 12 + τ + h + u) - 𝒬( 12 + τ + h)|^2v·1_h ∈ G_ℓ ]
≪ e^n_ℒ - n_0 - ℓ + 10 ((ℓ-n_0) ∧ (n_ℒ - ℓ))^3/4 + 20 y
× 100^v v! · ( ( e^-2k + 4∑_e^ℓ≤log p ≤ e^k|a(p)|^2 log^2 p/p )^v + ( 16 ∑_e^k ≤log p|a(p)|^2/p )^v· e^n_ℒ - k +1 ).
To simplify the exposition we first assume that b(p)= 0 for all p.
Since G_ℓ⊂ G_0 = e^-(n_ℒ - n_0)ℤ∩ [-1,1] we have,
sup_|h| ≤ 1
|u| ≤ e^-k + 1 | 𝒬( 12 + τ + h + u) - 𝒬( 12 + τ + h) |^2v·1_h ∈ G_ℓ
≤∑_h ∈ G_0sup_|u| ≤ e^-k + 1 | 𝒬( 12 + τ + h + u) - 𝒬( 12 + τ + h) |^2v·1_h ∈ G_ℓ.
Taking the expectation we find that (<ref>) is
≤ e^n_ℒ - n_0·𝔼 [ sup_|u| ≤ e^-k + 1 |𝒬( 12 + τ + u) - 𝒬( 12 + τ)|^2v·1_0 ∈ G_ℓ ].
We now split the Dirichlet polynomial 𝒬( 12 + τ + u) - 𝒬( 12 + τ) into two parts. One part 𝒬_≤ k( 12 + τ + u) - 𝒬_≤ k( 12 + τ) composed of primes p with log p ≤ e^k and another part supported on primes p with log p > e^k, denoted 𝒬_> k( 12 + i τ + i u) - 𝒬_> k( 12 + i τ). For the first part, for |u| ≤ e^-k + 1,
|𝒬_≤ k( 12 + τ + u) - 𝒬_≤ k( 12 + τ)|^2v ≤ (∫_0^e^-k + 1 |𝒬_≤ k'( 12 + τ + x)| x )^2v
≤ e^- (2v - 1) (k - 1)∫_0^e^-k + 1 |𝒬_≤ k'( 12 + τ + x)|^2v x.
Then
𝔼 [ |𝒬_≤ k'( 12 + τ + x)|^2v·1_0 ∈ G_ℓ ] ≪ v! · ( ∑_log p ≤ e^k|a(p)|^2 log^2 p/p )^v· e^-ℓ + 20y + 10 ((ℓ-n_0) ∧ (n_ℒ - ℓ))^3/4),
using Proposition <ref>, Lemma <ref> and the Ballot theorem from Proposition <ref>. Therefore
e^n_ℒ - n_0·𝔼 [ sup_|u| ≤ e^-k + 1 | 𝒬_≤ k( 12 + τ + u) - 𝒬_≤ k( 12 + τ)|^2v·1_0 ∈ G_ℓ ]
≪ e^n_ℒ - n_0 - ℓ + 20y + 10 ((ℓ-n_0) ∧ (n_ℒ - ℓ))^3/4· v! · ( e^-2k + 2∑_log p ≤ e^k|a(p)|^2 log^2 p/p )^v.
For the second part, we bound the contribution of 𝒬_≥ k( 12 + τ + u) - 𝒬_≥ k( 12 + τ) simply by the triangle inequality and the discretization Lemma (<ref>) applied to D=𝒬_≥ k^v, followed by Proposition <ref>. This gives
e^n_ℒ - n_0·𝔼 [ sup_|u| ≤ e^-k + 1 | 𝒬_≥ k( 12 + τ + u) - 𝒬_≥ k( 12 + τ)|^2v·1_0 ∈ G_ℓ ]
≪ e^n_ℒ - n_0 - ℓ + 20 y + 10 ((ℓ-n_0) ∧ (n_ℒ - ℓ))^3/4· e^n_ℒ - k· 2^2v v! · ( 4 ∑_log p > e^k|a(p)|^2/p )^v.
Combining everything we obtain the claim when b(p)=0. When b is non-trivial, the only difference is that we cannot directly apply Lemma <ref> to bound the moments of 𝒬: instead, we just use |X+Y|^2v≤ 2^2v(|X|^2v+|Y|^2v)
for X=∑a(p)/p^s, Y=∑b(p)/p^2s, and apply Lemma <ref> separately to each term. The assumption |b(p)|≤ 1 allows us to absorb the contribution of |Y|^2v into the +1 in (<ref>).
We are now ready to prove Proposition <ref>.
If there exists an h ∈ G_ℒ and |u| ≤ 1 such that
|S_n_ℒ(h + u) - S_n_ℒ(h)| > 20 y + √(|u| e^n_ℒ),
then there exists a 0 ≤ k < n_ℒ' := n_ℒ - ⌊ 2 log y ⌋ such that
sup_|h| ≤ 1
|u| ≤ e^-k + 1 |S_n_ℒ(h + u) - S_n_ℒ(h)| ·1_h ∈ G_ℒ≥ e^(n_ℒ - k) / 2.
Notice that we can stop at k = n_ℒ' := n_ℒ - ⌊ 2 log y ⌋ thanks to the term 20 y.
Therefore it suffices to bound
∑_0 ≤ k < n'_ℒℙ ( sup_|h| ≤ 1
e^-k≤ |u| ≤ e^-k + 1 |S_n_ℒ(h + u) - S_n_ℒ(h)| ·1_h ∈ G_ℒ≥ e^(n_ℒ - k) / 2 ).
Suppose now that |u| ≤ e^-k + 1 for some 0 ≤ k < n'_ℒ.
Notice that
|S_n_ℒ(h + u) - S_n_ℒ(h)| 1_h ∈ G_ℒ≤ ∑_n_0≤ j < k |(S_j + 1 - S_j)(h + u) - (S_j + 1 - S_j)(h)|1_h ∈ G_j
+ |(S_n_ℒ - S_k)(h + u) - (S_n_ℒ - S_k)(h)| 1_h ∈ G_k,
because S_n_0=0.
Therefore, by the union bound, for each 0 ≤ k ≤ n_ℒ',
ℙ ( sup_|h| ≤ 1
e^-k≤ |u| ≤ e^-k + 1 |S_n_ℒ(h + u) - S_n_ℒ(h)| ·1_h ∈ G_ℒ≥ e^(n_ℒ - k) / 2 )
≤∑_0 ≤ j < kℙ ( sup_|h| ≤ 1
|u| ≤ e^- k + 1 |(S_j + 1 - S_j)(h + u) - (S_j + 1 - S_j)(h)|1_h ∈ G_j≥e^(n_ℒ - k) / 2/4 (k - j)^2 )
+ ℙ ( sup_|h| ≤ 1
|u| ≤ e^- k + 1 |(S_n_ℒ - S_k)(h + u) - (S_n_ℒ - S_k)(h)| 1_h ∈ G_k≥e^(n_ℒ - k) / 2/4 ).
We now estimate each of the above probabilities using Chernoff's bound.
According to Lemma <ref> for 0 ≤ j < k, for v≥ 1, we have
ℙ ( sup_|h| ≤ 1
|u| ≤ e^- k + 1 |(S_j + 1 - S_j)(h + u) - (S_j + 1 - S_j)(h)|1_h ∈ G_j≥e^(n_ℒ - k) / 2/4 (k - j)^2 )
≪ (4 (k - j))^4v·𝔼 [ sup_|h| ≤ 1
|u| ≤ e^-k + 1|(S_j + 1 - S_j)(h + u) - (S_j + 1 - S_j)(h)|^2v/e^v (n_ℒ - k)·1_h ∈ G_j ]
≪(k - j)^4v· e^n_ℒ - n_0 - j + 20 y + 10 ((j-n_0) ∧ (n_ℒ - j))^3/4· e^- v (n_ℒ - k)· v! · e^C̃v· e^2v(j - k).
The above e^2v(j-k) factor is due to the contribution of (e^-2k + 4∑_e^j≤log p ≤ e^j+1|a(p)|^2 log^2 p/p )^v in Lemma <ref>.
We choose v=⌊ e^n_ℒ-j-C⌋ for fixed C>0. Then the Dirichlet sum S_j+1^v has length exp(e^j· e^n_ℒ-j-C)≤exp(e^n/200) for large enough C, so Lemma <ref> can be applied. The above bound becomes, for some absolute positive constant C̃,
≪ e^n_ℒ - n_0 - j + 20 y + 10 ((j-n_0) ∧ (n_ℒ - j))^3/4exp (vlog v- (n_ℒ+k-2j-4log (k-j)-C̃) v )
≪ e^n_ℒ - n_0 - j + 20 y + 10 ((j-n_0) ∧ (n_ℒ - j))^3/4exp (-v(k-j-4log(k-j)+C-C̃) )
≪ e^n_ℒ - n_0 - j + 20 y + 10 ((j-n_0) ∧ (n_ℒ - j))^3/4exp (-c e^n_ℒ-j(k-j) ),
for some small constant c>0, by choosing C large enough.
Summing over n_0 ≤ j < k we see that the sum is dominated by the contribution of the last term j = k - 1. The full sum (over j and k) is therefore bounded with
∑_0≤ k<n'_ℒe^n_ℒ - n_0 - k + 20 y + 10 ((k-n_0) ∧ (n_ℒ - k))^3/4exp( - ce^n_ℒ - k),
which is dominated by k=n'_ℒ-1 and gives a global bound e^c_1 y-c_2 y^2 for some absolute c_1, c_2>0.
The second probability in (<ref>) is again by a Chernoff bound,
ℙ ( sup_|h| ≤ 1
|u| ≤ e^- k + 1 |(S_n_ℒ - S_k)(h + u) - (S_n_ℒ - S_k)(h)|1_h ∈ G_k≥e^(n_ℒ - k) / 2/4 )
≪ 4^4v·𝔼 [ sup_|h| ≤ 1
|u| ≤ e^-k + 1|(S_n_ℒ - S_k)(h + u) - (S_n_ℒ - S_k)(h)|^2v/e^v (n_ℒ - k)·1_h ∈ G_k ]
≪ e^n_ℒ - n_0 - k + 10 ((k-n_0) ∧ (n_ℒ - k))^3/4· e^- v (n_ℒ - k)· v! · e^C̃v· (n_ℒ - k)^v· e^n_ℒ-k,
for some absolute C̃.
Choosing v = ⌊ e^n_ℒ - k-C/(n_ℒ-k)^4⌋, we see that this is also ≪ e^v(C̃-C). Therefore, for a large enough absolute constant C > 0, the full contribution of
(<ref>) after summation over k is
≪∑_0 ≤ k < n_ℒ' e^n_ℒ - n_0 - k + 10 ((k-n_0) ∧ (n_ℒ - k))^3/4·exp(-e^n_ℒ - k-C/(n_ℒ-k)^4)≪ e^c̃_1 y-c̃_2y^2/(log y)^4,
for some absolute c̃_1,c̃_2>0,
where we used that the main contribution comes from k=n'_ℒ. This concludes the proof.
§ PROOF OF PROPOSITION <REF>
We first need a lemma which precisely captures the coupling/decoupling of the Gaussian walks 𝒢_k(h) defined in (<ref>) as a function of the distance |h-h'|. For this, the following elementary lemma will be key in the decoupling regime |h-h'|>e^-j.
Let |ρ|<𝔰^2.
Consider the centered Gaussian vectors (𝒩_1,𝒩_1') and (𝒩_2,𝒩_2') whose covariance matrices are, respectively,
𝒞_1 with diagonal entries 𝔰^2 and off-diagonal entries ρ, and 𝒞_2=(𝔰^2+|ρ|)·Id.
Then for any measurable set A⊂ℝ^2 we have
ℙ((𝒩_1,𝒩'_1)∈ A)≤√(𝔰^2+|ρ| /𝔰^2-|ρ|)·ℙ((𝒩_2,𝒩'_2)∈ A).
The proof is simply by expanding the density of (𝒩_1, 𝒩'_1), which is
1/2π√(𝔰^4-ρ^2)exp(-𝔰^2 w^2+𝔰^2 z^2-2ρ w z/2(𝔰^4-ρ^2)).
If ρ≥ 0 then for any w,z∈ℝ we have 𝔰^2 w^2+𝔰^2 z^2-2ρ w z≥ (𝔰^2-ρ)(w^2+z^2) so that
𝔰^2 w^2+𝔰^2 z^2-2ρ w z/2(𝔰^4-ρ^2)≥w^2+z^2/2(𝔰^2+ρ),
and the conclusion follows.
If ρ≤ 0 then from the previous case for any B⊂ℝ^2
ℙ((𝒩_1,-𝒩'_1)∈ B)≤√(𝔰^2-ρ/𝔰^2+ρ)·ℙ((𝒩_2,-𝒩'_2)∈ B),
which concludes the proof by choosing B={(x,-y):(x,y)∈ A}.
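The following small Monte Carlo sketch (illustrative only; the variance, correlation, rectangle A and sample size are arbitrary choices) can be used to visualise the inequality of the lemma for a product set A.

import numpy as np

rng = np.random.default_rng(1)
s2, rho, n = 0.5, 0.2, 10 ** 6                 # requires |rho| < s2
cov = np.array([[s2, rho], [rho, s2]])
x1 = rng.multivariate_normal([0.0, 0.0], cov, size=n)        # law of (N_1, N_1')
x2 = rng.normal(scale=np.sqrt(s2 + abs(rho)), size=(n, 2))   # law of (N_2, N_2')
in_A = lambda z: (z[:, 0] > 0.3) & (z[:, 0] < 1.2) & (z[:, 1] > 0.3) & (z[:, 1] < 1.2)
p1, p2 = in_A(x1).mean(), in_A(x2).mean()
print(p1, np.sqrt((s2 + abs(rho)) / (s2 - abs(rho))) * p2)   # first value <= second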
We have
[(#𝔊^+_ℒ)^2]=
∑_h,h'(𝔖(h)∩𝔖(h'))
where
𝔖(h)={𝒢_k(h)∈ [L_k-1,U_k+1], ∀ n_0<k≤ n_ℒ}, h∈ [-1,1].
In what follows, we fix h, h' and simply write (𝒢_k,𝒢_k') for (𝒢_k(h),𝒢_k(h')).
We divide the above sum over pairs in three ranges of |h-h'|; this is necessary to achieve the precision 1+(1) required by Proposition <ref>.
§.§ Case |h-h'|>e^-n_0/2
This is the dominant term.
We can express the events 𝔖(h) in terms of the increments using <ref>, and then in terms of independent increments using Lemma <ref>.
Under the product over j, the multiplicative error from Lemma <ref> is
∏_n_0<j≤ n_ℒ√((𝔰_j^2+|ρ_j|)/(𝔰_j^2-|ρ_j|))=exp(O(∑_n_0≤ j≤ n_ℒρ_j))=exp(O(∑_n_0≤ j≤ n_ℒ1/(e^j|h-h'|)))≤ 1+ O(e^-n_0/2),
therefore we obtain
∑_|h-h'|>e^-n_0/2(𝔖(h)∩𝔖(h'))≤ (1+(n_0^-10))·((𝒢_j∈ [L_j-1, U_j+2], n_0<j≤ n_ℒ))^2,
where 𝒢_j=∑_i≤ j𝒩_i and the independent centered Gaussian 𝒩_i's have variance 𝔰_i^2+|ρ_i|. Moreover, the change from the original interval [L_j-1, U_j+1] to [L_j-1, U_j+2] is due to (<ref>) when transferring the constraint on the increments back to the random walk itself.
From the Ballot theorem in Proposition <ref> the barrier can be changed into [L_j+1,U_j-1], and the walk with the inflated increment variances can be replaced by the original walk 𝒢_j at a combined multiplicative cost of 1+O(y^-c), so that in particular
∑_|h-h'|>e^-n_0/2(𝔖(h)∩𝔖(h'))≤ (1+(y^-c))([#𝔊_ℒ^-])^2.
All the other cases will be much smaller than ([#𝔊_ℒ^-])^2.
§.§ Case e^-n_0< |h-h'|≤ e^-n_0/2
The same reasoning as above applies in this case. The multiplicative error term analogue to (<ref>) is now (1), and the precise estimate of this error is not necessary since there are only ≪ e^2(n_ℒ-n_0)e^-n_0/2 pairs (h,h') to consider. Therefore, we obtain
∑_e^-n_0< |h-h'|≤ e^-n_0/2(𝔖(h)∩𝔖(h'))≪ e^-n_0/2 ([#𝔊_ℒ^-])^2.
§.§ Case e^-(n_ℒ-n_0)≤ |h-h'| ≤ e^-n_0
We start by writing,
∑_h, h' ∈𝔊^+_ℒ
e^-(n_ℒ - n_0) < |h -h'| ≤ e^-n_0ℙ ( 𝔖(h) ∩𝔖(h') ) =
∑_n_0 ≤ j^⋆≤ n_ℒ∑_h, h' ∈𝔊^+_ℒ
j^⋆ = ⌊log |h - h'|^-1⌋ℙ ( 𝔖(h) ∩𝔖(h') ).
In order to evaluate ℙ(𝔖(h) ∩𝔖(h')), we apply the Gaussian decorrelation Lemma <ref> for the increments j≥ j^*.
For the increments before j^*, it will be useful to consider the
random variables
𝒢_j=𝒢_j+𝒢'_j/2, 𝒢^⊥_j=𝒢_j-𝒢'_j/2, n_0< j≤ n_ℒ.
Note that (𝒢_j)_j and (𝒢_j^⊥)_j are independent and 𝒢_j = 𝒢_j + 𝒢_j^⊥, 𝒢_j' = 𝒢_j - 𝒢_j^⊥.
As before we can express the events 𝔖(h) in terms of the increments using (<ref>); here we only use such a decomposition for the process 𝒢_j^⋆,j := 𝒢_j - 𝒢_j^⋆, approximating its increments with independent ones through Lemma <ref>, up to a multiplicative error equal to
∏_j^*<j≤ n_ℒ√(𝔰_j^2+|ρ_j| /𝔰_j^2-|ρ_j|)=(1).
For h, h' such that ⌊log |h - h'|^-1⌋ = j^⋆, this gives
ℙ(𝔖(h) ∩𝔖(h'))≪∑_L_j^⋆ - 1 ≤ v-q, v+q ≤ U_j^⋆C_j^⋆(h, h', v, q) D_j^⋆(h, v - q)D_j^⋆(h', v + q),
where
C_j^⋆(h, h', v, q) := ℙ ( 𝒢_j, 𝒢'_j ∈ [L_j - 1, U_j + 1] for all j < j^⋆, 𝒢_j^⋆∈ [v, v + 1], 𝒢_j^⋆^⊥∈ [q, q + 1] ),
D_j^⋆(h, v) := ℙ ( 𝒢_j^⋆, j(h) + v ∈ [L_j - 2, U_j + 2] for all j > j^⋆),
and 𝒢_j^⋆,j=𝒢_j-𝒢_j^*.
The proof now reduces to bounding the correlated (C) and decorrelated (D) terms.
§.§.§ The Correlated term
Note that if 𝒢_j, 𝒢_j' ∈ [L_j - 1, U_j + 1] for all j < j^⋆ then also
𝒢_j∈ [L_j - 1, U_j + 1]
for all j < j^⋆. Moreover, 𝒢_j^*^⊥ is independent of (𝒢_j)_j≤ j^*. We can therefore bound
C_j^⋆(h,h', v, q) ≤ℙ ( 𝒢_j∈ [L_j - 1, U_j + 1] for all j < j^⋆, 𝒢_j^⋆∈ [v, v + 1])·ℙ(𝒢_j^⋆^⊥∈ [q, q + 1) ).
The Gaussian 𝒢^⊥_j^⋆ is centered with variance ≪∑_j≤ j^⋆_j^2≪ 1 from (<ref>) and (<ref>). We thus have
( 𝒢^⊥_j∈ [q, q + 1))≪ e^-cq^2, for some c>0.
Moreover, (𝒢_j)_j≤ j^* satisfies the assumptions of Proposition <ref>, and 𝒢_j^* has variance 1/2∑_j≤ j^*(𝔰_j^2+ρ_j)=j^*-n_0/2+(1) from (<ref>) and (<ref>). Thus, uniformly in |v|≤ 100(j^*-n_0), we have
C_j^⋆(h,h', v, q) ≪U_n_0· (U_j^⋆ - v + 1)/(j^⋆ - n_0)^3/2· e^- v^2/j^⋆ - n_0 - c q^2 .
§.§.§ The Decorrelated term
We condition on 𝒢_j, j^⋆∈ [v_2, v_2 + 1] which implies that v_1 + v_2 ∈ [U_n_ℒ, L_n_ℒ]. Then by the Ballot theorem stated in Proposition <ref>, D_j^⋆(h, v_1 - q) is
≪∑_L_n_ℒ - 2 ≤ v_1 + v_2 ≤ U_n_ℒ + 2(U_j^⋆ - v_1 + q + 1) (U_n_ℒ - v_1 - v_2 + q + 1)/(n_ℒ - j^⋆)^3/2· e^ - (v_2 - q)^2/n_ℒ - j^⋆,
where we have used from (<ref>) and (<ref>) to obtain [(𝒢_j^⋆,n_ℒ)^2]=∑_j^*<j≤ n_ℒ(𝔰_j^2+|ρ_j|)=n_ℒ-j^*+(1), and |v_2-q|≤ 100(n_ℒ-j^*).
Likewise, D_j^⋆(h, v_1 + q) is
≪∑_L_n_ℒ - 2 ≤ v_1 + v_3 ≤ U_n_ℒ + 2(U_j^⋆ - v_1 - q + 1) (U_n_ℒ - v_1 - v_3 - q + 1)/(n_ℒ - j^⋆)^3/2· e^- (v_3 + q)^2/n_ℒ - j^⋆
§.§.§ Putting it together
The above estimates give, after summing over q ∈ℤ,
ℙ ( 𝔖(h) ∩𝔖(h') )≪∑_L_j^⋆ - 1 ≤ v_1 ≤ U_j^⋆ + 1
L_n_ℒ - 2 ≤ v_1+v_2, v_1+v_3 ≤ U_n_ℒ + 2 e^- v_1^2/j^⋆ - n_0 - v_2^2/n_ℒ - j^⋆ - v_3^2/n_ℒ - j^⋆
×U_n_0 (U_j^⋆ - v_1 + 1)^3 (U_n_ℒ - v_1 - v_2 + 1) (U_n_ℒ - v_1 - v_3 + 1)/(n_ℒ - j^⋆)^3 · (j^⋆ - n_0)^3/2.
We change the variables to v_1=v_1-α(j^⋆-n_0), v_2=v_2-α(n_ℒ-j^⋆) and v_3 = v_3 - α(n_ℒ - j^⋆) so that v_1 + v_2∈ [L_n_0, U_n_0] and v_1 + v_3∈ [L_n_0, U_n_0], giving
e^ - v_1^2/j^⋆ - n_0 - v_2^2/n_ℒ - j^⋆ - v_3^2/n_ℒ - j^⋆/(n_ℒ - j^⋆)^3 (j^⋆ - n_0)^3/2≪ e^-2(n_ℒ-j^⋆)-(j^⋆-n_0) + 2 v_1 - 2 (v_1 + v_2) - 2 (v_1 + v_3) ·n^3/2j^⋆/n n^3(1-j^⋆/n)/(j^⋆-n_0)^3/2(n_ℒ-j^⋆)^3.
The contribution of the integral over v_1+v_2∈ [L_n_0,U_n_0] is
∫_[L_n_0,U_n_0] (U_n_0- z+1) e^-2z z≪ |L_n_0| e^2|L_n_0|.
The same bound holds for the integral over v_1+v_3. The integral over v_1 is for ℬ_j^⋆=U_n_0-10log (( j^⋆-n_0)∧ (n_ℒ-j^⋆))
≪∫_-∞^ℬ_j^⋆ ( ℬ_j^⋆-v_1+1)^3 e^2v_1v_1≪ e^2ℬ_j^⋆.
Combining these estimates for the (e^2(n_ℒ-n_0)-j^⋆) pairs with log |h-h'|^-1≥ j^⋆, we obtain
∑_e^-(n_ℒ - n_0)≤ |h-h'|≤ e^-n_0(𝔖(h)∩𝔖(h'))≪
e^-n_0U_n_0 L_n_0^2 e^4|L_n_0|∑_j^⋆ e^2ℬ_j^⋆n^3/2j^⋆/n n^3(1-j^⋆/n)/(j^⋆-n_0)^3/2(n_ℒ-j^⋆)^3
≪ e^-n_0U_n_0 L_n_0^2 e^4|L_n_0|· e^2U_n_0.
On the other hand, from Proposition <ref> we have a simple lower bound for 𝔼[#𝔊^-_ℒ]:
[#𝔊^-_ℒ]=e^n_ℒ-n_0·ℙ(𝒢_k∈ [L_k + 1,U_k - 1], n_0<k≤ n_ℒ) ≫ U_n_0 |L_n_0| e^2 |L_n_0|.
We conclude from (<ref>) and (<ref>) that
∑_e^-(n_ℒ - n_0)≤ |h-h'|≤ e^-n_0(𝔖(h)∩𝔖(h')) ≪ U_n_0^-1e^2U_n_0-n_0([#𝔊^-_ℒ])^2≪ e^-y/10([#𝔊^-_ℒ])^2
by the choice of n_0 and U_n_0.
§.§ Conclusion.
When |h - h'| ≤ e^-(n_ℒ - n_0),
because of the spacing constraint we necessarily have h = h', and the contribution from such trivial pairs admits the same upper bound as for j^*=n_ℒ above. All together, we have obtained
[(#𝔊^+_ℒ)^2]≤ (1+(y^-c))([#𝔊_ℒ^-])^2,
which concludes the proof of Proposition <ref>.
§ PROOF OF THEOREM <REF>
The proof of the theorem follows the same structure as the one of Theorem 1. The parameters need to be picked differently.
We take for the times
n_0=⌊ y/100⌋, n_ℒ=loglog (T^1/100)=n-log 100.
The partial sums over primes now start from p=2 rather than from exp(e^n_0):
S_j(h) = ∑_loglog p ≤ j( p^-(1/2 + τ + h) + 1/2 p^- 2(1/2 + τ + h) ), j∈ℕ.
The set of good points are
G_0=[- 12, 12]∩ e^-n_ℒℤ, G_j={h∈ G_0: S_j∈ [L_j,U_j], n_0≤ j≤ n_ℒ},
where the barriers are now for j≥ n_0
U_j =y+α j -10log (j∧ (n-j)),
L_j =-10+(α +y/n_ℒ)j -(j∧ (n-j))^3/4.
The slope α is 1-3/4log n/n as before. Both barriers are convex, which is crucial. Note that the final interval for S_n_ℒ is [L_n_ℒ, U_n_ℒ] where
U_n_ℒ=n-3/4log n +y and L_n_ℒ=n-3/4log n +y-10.
The reason for the slightly larger slope in L_j, i.e., (α +y/n_ℒ) instead of α, is to ensure that the width of the final interval is order one. The factor y/n_ℒ will not affect the proof.
It is necessary to take U_n_0=y+α n_0, as this is the origin of the factor y in front of the exponential decay in Theorem <ref>.
For y of order one, it would be possible to take n_0=(1). However, for larger y, the spread U_j-L_j could be quite large for small j. This prevents a Gaussian comparison for small primes.
For this reason, the barrier starts at n_0, a multiple of y. For these times, the spread is proportional to variance and the Gaussian comparison goes through.
Unlike the left tail, we do need to include the small primes in the partial sums. Dropping the first exp e^n_0 primes would give a lower bound ye^-2ye^-n_0e^-y^2/n, which is suboptimal for n_0≍ y.
A more involved analysis of the small primes would probably allow one to improve the range of validity of Theorem <ref> to y=o(n), matching the branching Brownian motion estimate.
For the proof of Theorem <ref>, we first need the analogue of Proposition <ref>.
We have, for any fixed C > 10, uniformly in 1 ≤ y = o(n)
ℙ ( max_|h| ≤ 1log |ζ( 12 + τ + h)| > n - 3/4log n + y - 10 C ) ≥ℙ ( ∃ h ∈ G_ℒ ) - (e^-50 C y e^- 2y e^-y^2 / n).
Therefore, upon taking C large enough but fixed, the estimate
ℙ(#G_n_ℒ≥ 1 )≫ y e^-2y e^-y^2/n
will imply Theorem <ref>.
Equation (<ref>) follows directly from the Paley-Zygmund inequality from the propositions below.
Uniformly in 10≤ y≤ Cloglog T/logloglog T,
𝔼[#G_n_ℒ]≫ y e^-2ye^-y^2/n.
Uniformly in 10≤ y≤ Cloglog T/logloglog T,
𝔼[(#G_n_ℒ)^2]≪ y e^-2ye^-y^2/n.
Unlike the left tail, the dominant term in the second moment will come from the pairs h,h' that are very close, i.e., |h-h'|≪ e^-n_ℒ.
§.§ Proof of Proposition <ref>
First we have the following easy variant of Proposition <ref>
We have, for 1000 < y < n^1/10,
ℙ ( max_|h| ≤ 1log |ζ( 12 + τ + h)| ≥max_h ∈ G_0min_|u| ≤ 1 (S_n_ℒ(h + u) + √(|u| e^n_ℒ)) - 2 C ) ≥ 1 - (e^-n),
with C > 0 an absolute constant.
This is the same proof as Proposition <ref>, the only difference is that this time we do not need to bound the contribution of the primes p with log p ≤ e^n_0 and therefore there is no additional term -20 y. Because of this, the exceptional set is also better, i.e., e^-n instead of e^-y.
We highlight the changes needed in Lemma <ref> and Proposition <ref>, with the following two variants.
Let 1 ≤ℓ≤ n_ℒ.
Let v ≥ 1 and 0 ≤ k ≤ n be given.
Let 𝒬 be a Dirichlet polynomial as defined in (<ref>), such that e^ℓ≤log p ≤ e^n_ℒ and of length ≤exp(e^n/(200 v)). Denote by a(p) the coefficients of 𝒬. Then
𝔼 [ sup_|h| ≤ 1
|u| ≤ e^-k + 1 |𝒬( 12 + τ + h + u) - 𝒬( 12 + τ + h)|^2v·1_h ∈ G_ℓ ]
≪ e^n_ℒ ℙ(𝒢_ℓ) · 2^2v v! · ( ( e^-2k + 4∑_e^ℓ≤log p ≤ e^k|a(p)|^2 log^2 p/p )^v + ( 16 ∑_e^k ≤log p|a(p)|^2/p )^v· e^n_ℒ - k ).
Moreover, we simply bound ℙ(𝒢_ℓ)≤ 1 if ℓ<n_0, and otherwise
ℙ(𝒢_ℓ) ≪y/ℓ^3/2exp ( - ℓ + 3/2·ℓlog n_ℒ/n_ℒ - 2 y ℓ/n_ℒ - y^2 ℓ/n_ℒ^2 + 10 (ℓ∧ (n - ℓ))^3/4 ).
The proof is the same as for Lemma <ref>, the only differences being the different bound for ℙ(𝒢_ℓ) (when ℓ≥ n_0) that arises from a different barrier.
Note that for 1≤ℓ≤ n_0 there is no barrier so the proof does not rely on Proposition <ref>, which requires ℓ≥ n_0.
We have, for any C > 10, and for y = o(n),
ℙ (∀ h ∈ G_ℒ ∀ |u| ≤ 1 : |S_n_ℒ(h + u) - S_n_ℒ(h)| ≤ C + √(|u| e^n_ℒ) ) = 1 + ( e^-50 C y e^-2 y e^-y^2 / n ).
The proof is very similar to Proposition <ref> but we still find it worthwhile to include the details.
If there exists an h ∈ G_ℒ and |u| ≤ 1 such that
|S_n_ℒ(h + u) - S_n_ℒ(h)| > C + √(|u| e^n_ℒ)
then there exists a 0 ≤ k < n_ℒ' := n_ℒ - ⌊ 2 log C ⌋ such that,
sup_|h| ≤ 1
|u| ≤ e^-k + 1 |S_n_ℒ(h + u) - S_n_ℒ(h)| ·1_h ∈ G_ℒ≥ e^(n_ℒ - k) / 2,
where considering the case k ≤ n_ℒ' is enough thanks to the term C in (<ref>).
It now suffices to bound (<ref>)
through a bound for the right-hand side of (<ref>), but with our new definitions for (S_j)_j≥ 1, n_ℒ, n'_ℒ and G_k.
For any 0 ≤ j < k, we have the following analogue of <ref>, which is also obtained by Lemma <ref>:
ℙ ( sup_|h| ≤ 1
|u| ≤ e^- k + 1 |(S_j + 1 - S_j)(h + u) - (S_j + 1 - S_j)(h)|1_h ∈ G_j≥e^(n_ℒ - k) / 2/4 (k - j)^2 )
≪ (k - j)^4v· e^n_ℒ - j + 10 ((j ∧ (n - j))^3/4· e^- v (n_ℒ - k)· v! · C^v· e^2v(j - k)·y/j^3/2· e^3/2j log n/n - 2 y j/n - y^2 j/n^2.
Pick v = 100.
Summing over 0 ≤ j < k, we see that the sum is dominated by the contribution of the last term j = k - 1; indeed, the sum is
≪ e^- (v - 1) (n_ℒ - k) + 10 (k ∧ (n - k))^3/4·y/k^3/2exp ( 3/2·k log n/n - 2 y k/n - y^2 k/n^2 ).
The contribution of the second term in (<ref>) is bounded similarly to (<ref>), and we obtain
ℙ ( sup_|h| ≤ 1
|u| ≤ e^- k + 1 |(S_n_ℒ - S_k)(h + u) - (S_n_ℒ - S_k)(h)|1_h ∈ G_k≥e^(n_ℒ - k) / 2/4 )
≪ 4^4v· e^n_ℒ - k + 10 (k ∧ (n - k))^3/4· e^- v (n_ℒ - k)· v! · C^v· (n_ℒ - k)^v·y/j^3/2·exp ( 3/2j log n/n - 2 y j/n - y^2 j/n^2 ).
Choosing v = 100 we see that this is also
≪ e^- (v - 2) (n_ℒ - k) + 10 (k ∧ (n - k))^3/4·y/k^3/2exp ( 3/2·k log n/n - 2 y k/n - y^2 k/n^2 ).
Therefore with the new definitions for (S_j)_j≥ 1, n_ℒ, n'_ℒ and G_k, (<ref>) is bounded with
≪∑_0 ≤ k ≤ n_ℒ' e^- 100 (n_ℒ - k)·y/k^3/2exp ( 3/2·k log n/n - 2 y k/n - y^2 k/n^2 ) ≪ e^-50 C· y e^-2 y e^-y^2 / n
as needed, and where the final gain e^-50 C comes from k ≤ n_ℒ' = n_ℒ - ⌊ 2 log C ⌋.
If there exists an h ∈ G_ℒ, then from Proposition <ref>
max_v ∈ G_0min_|u| ≤ 1 (S_n_ℒ(v + u) + √(|u|e^n_ℒ)) ≥min_|u| ≤ 1 (S_n_ℒ(h + u) + √(|u| e^n_ℒ))
≥ S_n_ℒ(h) - 1 ≥ n - 3/4log n - 10C,
outside of a set of probability ≪ e^-50 C y e^-2 y e^-y^2 / n.
Proposition <ref> then implies that outside of a set of τ of probability ≪ e^-n,
max_|h| ≤ 1log |ζ( 12 + τ + h)| > n - 3/4log n + y - 10 C.
In other words,
ℙ (∃ h ∈ G_ℒ ) - ( e^-50 C y e^-2y e^-y^2 / n ) ≤ℙ (max_|h| ≤ 1log |ζ( 12 + τ + h)| > n - 3/4log n + y - 10 C ),
and Proposition <ref> follows.
§.§ Proof of Propositions <ref> and <ref>
Clearly, we have
[#G_n_ℒ]≫ e^n_ℒ·(S_j∈ [L_j,U_j], n_0≤ j≤ n_ℒ),
where we write S_j(0)=S_j for simplicity.
By the definition of U_j,L_j, we have for j≥ n_0
U_j-L_j≪(y-10)-y/nj +(j∧ (n-j))^3/4≪Δ_j^1/4.
Therefore, the proof of Proposition <ref> applies verbatim for all increments j≥ n_0.
For the first n_0 increments, the approximation in terms of Dirichlet polynomials still holds up to a multiplicative constant (as in <cit.>, for example).
These considerations yield
(S_j∈ [L_j,U_j], n_0≤ j≤ n_ℒ)
≫(𝒮_n_0∈ [L_n_0+1,U_n_0-1],𝒮_n_0+𝒢_j ∈ [L_j+1, U_j-1], n_0<j≤ n_ℒ),
where (𝒢_j)_j is defined in (<ref>) and is independent of 𝒮_n_0, now defined as
𝒮_n_0(h) = ∑_loglog p≤ n_0 Re( e^iθ_p p^-(1/2 + ih) + 1/2· e^2iθ_p p^-(1 + 2ih) ).
Note that it differs from (<ref>) as it consists in the first n_0 increments.
The ± 1 in the barriers will not contribute to the estimate, we henceforth drop them to lighten the notations.
We now write f(z) for the density of 𝒮_n_0,
we condition on 𝒢_n_ℒ and apply the Ballot theorem from Proposition <ref>: The right-hand side of (<ref>) is lower bounded with
∫_L_n_0^U_n_0(𝒢_j ∈ [L_j-z, U_j-z], n_0<j≤ n_ℒ) f(z) z
≫∫_L_n_0^U_n_0∫_L_n_ℒ-z^U_n_ℒ-z(U_n_0-z)(U_n_ℒ-z-w)/(n_ℒ-n_0)^3/2 e^-w^2/n_ℒ-n_0 f(z) w z.
Writing w̅=w-α(n_ℒ-n_0) and z̅=z-α n_0, this becomes
≫∫_L̅_n_0^U̅_n_0∫_y-10-z̅^y-z̅(U̅_n_0-z̅)(y-z̅-w̅)/(n_ℒ-n_0)^3/2 e^-(w̅+α(n_ℒ-n_0))^2/(n_ℒ-n_0) f(z̅+α n_0)w̅z̅.
for L̅_n_0=L_n_0+y/n_ℒn_0-n_0^3/4 and U̅_n_0=U_n_0-10log n_0^3/4.
Expanding the square gives
(w̅+α (n_ℒ-n_0))^2/n_ℒ-n_0
=α^2(n_ℒ-n_0) +2αw̅ +w̅^2/n_ℒ-n_0
=(n_ℒ-n_0)-3/2log t +2αw̅ +w̅^2/n_ℒ-n_0+(1).
The integral in w̅ becomes
∫_y-10-z̅^y-z̅ (y-z̅-w̅)e^-2αw̅e^-w̅^2/n_ℒ-n_0w̅≫ e^-2α y e^2αz̅e^-y^2/n≫ e^-2ye^-y^2/n e^2αz̅,
by the assumption on y. So far, we have shown
[#G_n_ℒ]≫ e^n_0· e^-2ye^-y^2/n·∫_L̅_n_0^U̅_n_0 (U̅_n_0-z̅) e^2αz̅ f(z̅+α n_0)z̅.
From the proof of <cit.>, we have f(u)≪ e^-u^2/n_0/√(n_0) uniformly in |u|<100 n_0. This implies
[#G_n_ℒ]≫ e^n_0· e^-2ye^-y^2/n·∫_L̅_n_0^U̅_n_0 (U̅_n_0-z̅) e^2αz̅e^(-z̅+α n_0)^2/n_0/√(n_0)z̅
≫ e^-2ye^-y^2/n∫_L̅_n_0^U̅_n_0 (U̅_n_0-z̅) e^-z̅^2/n_0/√(n_0)z̅≫ ye^-2ye^-y^2/n
since the standard deviation of z̅ is √(n_0)≍√(y) and U̅_n_0=y-10log n_0^3/4.
Proceeding as in Proposition <ref>, the estimate is reduced to
[(#G_n_ℒ)^2]≪∑_h,h'∈ G_0(𝔖(h)∩𝔖(h')),
where
𝔖(h)={𝒮_n_0(h)∈ [L_n_0-1,U_n_0+1],𝒮_n_0(h)+𝒢_j(h)∈ [L_j-1, U_j+1], n_0<j≤ n_ℒ}.
Again, since the ± 1 will not contribute to the estimates, we omit them from the notations.
We write 𝒮_n_0(h)=𝒮_n_0, 𝒮_n_0(h')=𝒮_n_0' and similarly for 𝒢.
We condition on the pair (𝒮_n_0,𝒮_n_0') to get
(𝔖(h)∩𝔖(h'))=
∫_[L_n_0,U_n_0]^2(𝒢_j≤ U_j-z, 𝒢_j'≤ U_j-z', n_0≤ j≤ n_ℒ) f(z,z') z z',
where f now stands for the density of (𝒮_n_0,𝒮_n_0').
The estimate depends on the branching time j^⋆=j^⋆(h,h')=⌊log |h-h'|^-1⌋. We split into two cases (j^⋆≤ n_0 and j^⋆> n_0),
contrary to the proof of Proposition (<ref>) which needs three cases as it requires matching of first and second moments up to 1+(1) precision.
Case j^⋆≤ n_0.
In this case, the decoupling Lemma <ref> can be applied to all increments. The probability in the integral is then
≪∫_[L_n_0,U_n_0]^2( 𝒢_j≤ U_j-z, n_0≤ j≤ n_ℒ) (𝒢'_j≤ U_j-z', n_0≤ j≤ n_ℒ) f(z,z') z z'
where we recall that 𝒢_j=∑_i≤ j𝒩_i and the independent centered Gaussian 𝒩_i's have variance 𝔰_i^2+|ρ_i|.
After conditioning on (𝒢_n_ℒ,𝒢'_n_ℒ), the Ballot theorem from Proposition <ref> can be applied to each term. The above becomes
≪∫_[L_n_0,U_n_0]^2∫_[L_n_ℒ,U_n_ℒ]^2(U_n_0-z)(U_n_ℒ-z-w)(U_n_0-z')(U_n_ℒ-z'-w')(n_ℒ-n_0)^3 e^-w^2+w'^2/n_ℒ-n_0f(z,z') w w' z z'.
The Gaussian density can be expanded as in (<ref>). The integral in w, w' gives a contribution
(e^-2n_ℒe^2n_0e^-4ye^-2y^2/n).
There are (e^2n_ℒ) pairs h,h' with j^⋆≤ n_0, so
∑_h,h': j^⋆≤ n_0(𝔖(h)∩𝔖(h))
≪ e^2n_0 e^-4y∫_[L̅_n_0,U̅_n_0]^2(U̅_n_0-z̅)(U̅_n_0-z̅')e^2α(z̅+z̅')f(z̅+α n_0,z̅'+α n_0)z̅z̅'
≪ y^2e^2n_0 e^-4y∫_[L̅_n_0,U̅_n_0]^2e^2α(z̅+z̅')f(z̅+α n_0,z̅'+α n_0)z̅z̅',
where we used the barrier range to bound |U̅_n_0-z̅|≤ y. Using the Cauchy-Schwarz inequality and recalling that f(u)≪ e^-u^2/n_0/√(n_0), as n_0=y/10, this is
≪ y^2e^2n_0 e^-4y·∫_-∞^∞ e^4 z̅e^-z̅^2/n_0/√(n_0)≪ y^2e^2n_0 e^-4y· e^8 n_0≪ y e^-2y-y^2/n.
Case j^⋆>n_0.
We proceed similarly to the proof of the left tail and consider the center of mass and the difference between the two Gaussian walks as in (<ref>).
We index the value of 𝒢_j^⋆ by v_1, the values 𝒢_j^⋆^⊥ by q, and the values of
two independent copies of
𝒢_j^⋆,n_ℒ
by v_2 and v_3.
Proceeding exactly as for Equation (<ref>), i.e., using Lemma <ref> for the increments after j^*, we obtain
that (𝔖(h)∩𝔖(h')) is
∑_q∈ℤ∫_[L_n_0,U_n_0]^2∫_L_j^⋆^U_j^⋆(U_n_0-z_m)(U_j^⋆-z_m-v_1)/(j^⋆-n_0)^3/2· e^-cq^2f(z,z')e^-v_1^2/j^⋆-n_0
×(𝒢_j^⋆,j+v_1+q≤ U_j-z, j>j^⋆)·(𝒢_j^⋆, j+v_1-q≤ U_j-z', j>j^⋆) v_1 z z'
where we applied Proposition <ref> for 𝒢̅ between n_0 and j^⋆ and where we denoted z_m=z+z'/2=𝒢̅_n_0.
This Ballot theorem can also be applied to the two probabilities in the integral giving, after summing over q,
≪(U_j^⋆-z_m-v_1)^2(U_n_ℒ-z-v_1-v_2)(U_n_ℒ-z'-v_1-v_3)/(n_ℒ-j^⋆)^3 e^-v_2^2+v_3^2/n_ℒ -j^⋆.
After expanding the squares, the densities of v_1, v_2, v_3 become
e^-2n_ℒ+j^⋆ e^n_0n^3/2j^⋆/tn^3(1-j^⋆/n)/(j^⋆-n_0)^3/2(n_ℒ-j^⋆)^3 e^2αv_1-v̅_1^2/j^⋆-n_0-2α(v_1+v_2)-2α(v_1+v_3),
for v_1=v_1-α(j^⋆-n_0), v_i=v_i-α(n_ℒ-j^⋆), i=2,3.
The integral over v_1+v_2∈ [y-10-z̅, y-z̅] is
∫_y-10-z̅^ y-z̅(y-z̅-v_1-v_2)e^-2α(v_1+v_2)v_1v_2≪ e^2αz̅-2y.
The integral over v_1+v_3∈ [y-10-z̅', y-z̅'] is the same and contributes ≪ e^2αz̅'e^-2y.
The integral over v_1 is, for z̅_m=z̅+z̅'/2,
≪∫_L̅_j^⋆^U̅_j^⋆-z̅_m ( U̅_j^⋆-z̅_m -v_1)^2 e^2αv_1 e^-v̅_1^2/j^⋆-n_0v_1≪e^2y-2αz̅_m-y^2/n/(j^⋆∧ (n-j^⋆))^10,
where U̅_j^⋆=y-10 log (j^⋆∧ (n-j^⋆)) and L̅_j^⋆=-10- (j^⋆∧ (n-j^⋆))^3/4.
We now sum over all j^*>n_0 and the (e^-2n_ℒ+j^⋆) pairs with a given j^⋆, so that
∑_h,h': n_0<j^⋆<n_ℒ(𝔖(h)∩𝔖(h))
≪ e^-2y-y^2/n+n_0∫_L̅_n_0^U̅_n_0 (U̅_n_0-z̅_m) e^2αz̅_mf(z̅+α n_0, z̅'+α n_0) z z'
×∑_ n_0<j^⋆<n_ℒ1/(j^⋆∧ (n-j^⋆))^10n^3/2j^⋆/nn^3(1-j^⋆/n)/(j^⋆-n_0)^3/2(n_ℒ-j^⋆)^3.
The integral over z,z' is over a function of z̅_m only, which has density ≪ e^-u^2/n_0/√(n_0) uniformly in |u|<100 n_0.
Moreover, we can simply bound |U̅_n_0-z̅_m|≤ y, hence
∑_h,h': n_0<j^⋆<n_ℒ(𝔖(h)∩𝔖(h))≪ ye^-2y-y^2/n+n_0∫_L̅_n_0^U̅_n_0 e^2z̅_me^-(z̅_m+α n_0)^2/n_0/√(n_0)z̅_m≪ y e^-2ye^-y^2/n,
which concludes the proof of Proposition <ref>.
§ SOME AUXILIARY RESULTS
Let (Z_p, p prime) be a sequence of independent and identically distributed random variables, uniformly distributed on the unit circle |z| = 1. For an integer n with prime factorization n = p_1^α_1… p_k^α_k with p_1, …, p_k all distinct, consider
Z_n := ∏_i = 1^k Z_p_i^α_i.
Then we have 𝔼[Z_n Z̄_m] = 1_n = m, and therefore, for an arbitrary sequence a(n) of complex numbers, the following holds
∑_n ≤ N |a(n)|^2 = 𝔼 [ | ∑_n ≤ N a(n) Z_n |^2 ].
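As a sanity check of this orthogonality (illustrative only; the cutoff N, the coefficients a(n) and the number of samples are arbitrary choices), one can verify numerically that the mean square of such a randomized sum matches the sum of the squared coefficients; the numpy and sympy packages are assumed available.

import numpy as np
from sympy import factorint

rng = np.random.default_rng(4)
N, trials = 30, 20000
a = rng.normal(size=N + 1)                              # arbitrary coefficients a(1..N)
primes = [p for p in range(2, N + 1) if all(p % q for q in range(2, p))]
facts = [factorint(m) for m in range(1, N + 1)]         # prime factorizations of 1..N
emp = 0.0
for _ in range(trials):
    zp = {p: np.exp(2j * np.pi * rng.random()) for p in primes}
    zn = np.array([np.prod([zp[p] ** e for p, e in f.items()]) for f in facts])
    emp += abs(np.sum(a[1:] * zn)) ** 2 / trials
print(emp, np.sum(a[1:] ** 2))                          # the two values should be close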
The next lemma shows that the mean value of Dirichlet polynomial is close to the one of the above random model. It follows directly from <cit.>.
We have,
𝔼 [ | ∑_n ≤ N a(n) n^iτ |^2 ] = (1 + ( N/T ) ) ∑_n ≤ N |a(n)|^2 = ( 1 + ( N/T ) ) 𝔼 [ | ∑_n ≤ N a(n) Z_n |^2 ].
Remember the definition (<ref>).
There exists an absolute C>0 such that for any λ∈ℝ and n_0≤ j≤ k we have
[exp(λ(𝒮_k(h)-𝒮_j(h) ))]≤exp((k-j+C)λ^2/4).
For any h ∈ [-1,1] and integers k,j,q satisfying n_0≤ j≤ k, 2q ≤ e^n-k, we have
𝔼 [ |S_k(h) - S_j(h)|^2q ] ≪(2q)!/2^q q! (k - j/2)^q.
Moreover, there exists C>0 such that for any 0 ≤ j ≤ k, 2q ≤ e^n-k, we have
𝔼 [ |S_k(h) - S_j(h)|^2q ] ≪ q^1/2(2q)!/2^q q! (k - j+C/2)^q.
Let 2≤ x≤ T and q∈ℕ with x^q≤ T/log T. For any complex numbers a(p), we have
[| ∑_p≤ xa(p)/p^1/2+τ|^2q]≪ q! (∑_p≤ x|a(p)|^2/p)^q.
Let h,h'∈[-1,1]. Consider the increments (𝒴_k(h), 𝒴_k(h')) for 1≤ k≤ n_ℒ, and the corresponding Gaussian vector (𝒩_k(h), 𝒩_k(h')),
of mean 0 and with the covariance given by (<ref>), (<ref>).
There exists a constant c>0 such that, for any intervals A, B and k≥ 1,
ℙ((𝒴_k(h), 𝒴_k(h')) ∈ A× B)
=ℙ((𝒩_k(h), 𝒩_k(h'))∈ A× B)+(e^-c e^k/2).
This follows similarly to <cit.>, based on the Berry-Esseen estimate as stated in <cit.>. The proof is actually more immediate because the covariances of (𝒴,𝒴') and (𝒩,𝒩') exactly coincide.
Let D be a Dirichlet polynomial of length ≤ N.
Then, for any 1≤ k≤loglog N, we have
max_|h| ≤ e^-k |D( 1/2 + t + h)|^2≪ ∑_|j| ≤ 16 e^-klog N | D (1/2 + t + 2π j/8 log N ) |^2
+ ∑_|j| > 16 e^-klog N1/1 + |j|^100 |D ( 1/2 + t + 2π j/8 log N ) |^2.
We proceed similarly to the proof of <cit.>, but now with maxima on intervals of general length e^-k. With the notations from <cit.>, we have
D( 1/2 + t + h_0)^2=1/(2+ε)∑_h∈2πℤ/((2+ε)log N)D( 1/2 + t+ h)^2 V((h-h_0)log N/2π).
Using the triangle inequality and the decay V(x)≪_A(1+|x|)^-A we obtain the result.
§ BALLOT THEOREM
§.§ Results
Most ideas for the results in this section are due to Bramson. As we could not find the exact barrier estimates needed in our setting, this section gives self-contained and quantitative analogues of some technical results in <cit.> in the setting of a Gaussian random walk with arbitrary, comparable, variances of the increments.
Let κ>0 be fixed in all this section, and let (X_i)_i≥ 1 be independent, real, centered Gaussian random variables such that κ<𝔼[|X_i|^2]<κ^-1 for all i. For k∈ℕ we denote S_k=∑_i≤ k X_i.
We denote by ℙ_(s,x) the distribution of the process (S_k)_k starting at time s from x, write ℙ_x=ℙ_(0,x), ℙ=ℙ_0, and denote by ℙ_(s,x)^(t,y) the distribution of (S_k)_k starting at time s from x and conditioned to end at time t at the point y.
Let δ>1/2>α>0. Then there exists c=c(α,δ,κ) such that uniformly in the time
t≥ 1, 10≤ y≤ t^1/10,
a,b∈ [1,y-1] and uniformly in the functions
v_s≥ y+min(s,t-s)^δ, |u_s|≤min(s,t-s)^α,
we have
_(0,a)^(t,b)( ∩_0≤ k≤ t{u_k≤ S_k≤ v_k})=
2ab/σ·(1+O_α,δ,κ(d^-c))
where d=min(|y-a|,|y-b|,|a|,|b|) and σ=∑_k≤ t𝔼[X_k^2].
§.§ Preliminaries on Brownian motion
We denote by ℙ_(s,x) the distribution of the Brownian motion starting at time s from x, write ℙ_x=ℙ_(0,x), ℙ=ℙ_0, and denote by ℙ_(s,x)^(t,y) the distribution of the Brownian bridge starting at time s from x and ending at time t at the point y. Context will avoid confusion with the notation from Proposition <ref>, as the Gaussian random walk will always be denoted by S and the Brownian motion by B.
For such a trajectory B, let M_t=max_0≤ s≤ tB_s, m_t=min_0≤ s≤ tB_s.
Let x,y>0. Then
_(0,x)^(t,y)(m_t≥ 0)=1-e^-2xy/t.
From the reflection principle, for any measurable A⊂(0,∞),
_x(m_t≥ 0,B_t∈ A)=_x(B_t∈ A)-_x(B_t∈ -A).
This implies
_(0,x)^(t,y)(m_t≥ 0)=1-lim_ε→ 0_x(B_t∈ -[y,y+])/_x(B_t∈ [y,y+])=1-e^-2xy/t,
concluding the proof.
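A short Monte Carlo sketch comparing a simulated Brownian bridge with the exact formula 1-e^-2xy/t is given below (illustrative only; the endpoints x, y, the horizon t, the grid size and the number of paths are arbitrary choices, and the time discretization slightly overestimates the probability that the minimum stays nonnegative).

import numpy as np

rng = np.random.default_rng(2)
x, y, t, steps, n = 1.0, 1.5, 4.0, 800, 10000
dt = t / steps
s = np.linspace(dt, t, steps)
W = np.cumsum(rng.normal(scale=np.sqrt(dt), size=(n, steps)), axis=1)
# Brownian bridge from x (time 0) to y (time t): B_s = x + W_s - (s/t)(W_t - (y - x))
bridge = x + W - (s / t) * (W[:, -1:] - (y - x))
estimate = (bridge.min(axis=1) >= 0.0).mean()
print(estimate, 1.0 - np.exp(-2.0 * x * y / t))   # discretized estimate vs exact value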
Let a,c>0 and A⊂ [-c,a] be measurable. Then
(M_t≤ a,m_t≥ -c,B_t∈ A)≥(m_t≥ -c,B_t∈ A)-(B_t∈ A-2a).
The above left-hand side is
(m_t≥ -c,B_t∈ A)-(M_t≥ a,m_t≥ -c,B_t∈ A)
≥(m_t≥ -c,B_t∈ A)-(M_t≥ a,B_t∈ A).
From the reflection principle, this last probability is (B_t∈ 2a-A).
Let δ>1/2, v_s≥ y+min(s,t-s)^δ. Let c∈(0,2-1/δ). Then uniformly in
t≥ 0, 2≤ y≤ t^1/10, a,b∈ [1,y-1] we have
_(0,a)^(t,b)( ∩_0≤ s≤ t{0≤ B_s≤ v_s})=_(0,a)^(t,b)(m_t≥ 0)·(1+(e^-min(|y-a|,|y-b|)^c)).
In this proof we abbreviate B≥ 0 for m_t≥ 0 and start with
_(0,a)^(t,b)(B≥ 0,∃ s:B_s>v_s)≤
∑_k=0^t/2(_(0,a)^(t,b)(B≥ 0,∃ s∈[k,k+1]:B_s>v_k)
+_(0,b)^(t,a)(B≥ 0,∃ s∈[k,k+1]:B_s>v_k).
)
The first probability above is smaller than
_(0,a)^(t,b)(B≥ 0,∃ s∈[0,k+1]:B_s>v_k)
=_(0,a)^(t,b)(B≥ 0)-_(0,a)^(t,b)(B≥ 0,max_[0,k+1]B<v_k).
We write
_(0,a)^(t,b)(B≥ 0,max_[0,k+1]B<v_k)
=
∫_0^v_k_(0,a)^(k+1,x)(B≥ 0,max_[0,k+1]B<v_k)
_(k+1,x)^(t,b)(B≥ 0) _(0,a)^(t,b)(B_k+1∈ x).
The first probability in this integral is estimated with Lemma <ref>:
_(0,a)^(k+1,x)(B≥ 0,max_[0,k+1]B<v_k)≥_(0,a)^(k+1,x)(B≥ 0)-e^-2(v_k-a)(v_k-x)/k+1.
This gives
_(0,a)^(t,b)(B≥ 0,∃ s∈[k,k+1]:B_s>v_k)
≤∫_0^v_ke^-2(v_k-a)(v_k-x)/k+1_(k+1,x)^(t,b)(B≥ 0) _(0,a)^(t,b)(B_k+1∈ x).
From Lemma <ref>, we have _(k+1,x)^(t,b)(B≥ 0)≤ 5xb/t.
Moreover, _(0,a)^(t,b)(B_s∈ x)=_(0,0)^(t,0)(B_s∈ x+x_s)=e^-(x-x_s)^2/(2w_s)/√(2π w_s) x where w_s=s(t-s)/t, x_s=(1-s/t)a+s/tb.
This gives
_(0,a)^(t,b)(B≥ 0,∃ s∈[k,k+1]:B_s>v_k)≤ C
b/t∫_0^v_ke^-2(v_k-a)(v_k-x)/k+1
x e^-(x-x_k+1)^2/2w_k+1/√(w_k+1) x.
In the above integral, the contribution from |x-x_k+1|>v_k/3 is bounded with
∫_|x-x_k+1|>v_k/3(x_k+1+|x-x_k+1|)e^-(x-x_k+1)^2/2w_k+1/√(w_k+1) x≤ C x_k+1 e^-v_k^2/100w_k+1.
The regime |x-x_k+1|<v_k/3 gives the error
∫_|x-x_k+1|<v_k/3e^-(k^δ+|y-a|)k^δ/k+1xe^-(x-x_k+1)^2/2w_k+1/√(w_k+1) x≤ C x_k+1 e^-(k^δ+|y-a|)k^δ/k+1.
We first bound the sum of the error terms from (<ref>) as 0≤ k≤ t/2.
For k^δ<|y-a|, from the hypothesis y<t^1/10 we have x_k+1<a+1<2a, so that
∑_k^δ<|y-a| x_k+1 e^-(k^δ+|y-a|)k^δ/k+1≤ 2a ∑_k^δ<|y-a| e^-|y-a|k^δ/k+1≤ 2a |y-a|^1/δe^-|y-a|^2-1/δ=a (e^-|y-a|^c).
For k^δ>|y-a|, we obtain
∑_k≥ |y-a|^1/δx_k e^-k^2δ-1≤∑_k≥ |y-a|^1/δ(a+kb/t)e^-k^2δ-1≤ (a+1)∑_k≥ |y-a|^1/δk e^-k^2δ-1
≤ C_δ(a+1)∫_v>|y-a|^2-1/δv^3-2δ/2δ-1 e^-v v=a (e^-|y-a|^c).
The same estimate can be obtained for the sum over k from (<ref>). We have thus obtained
_(0,a)^(t,b)( ∩_0≤ s≤ t{0≤ B_s≤ v_s})=_(0,a)^(t,b)(m_t≥ 0)+(ab/te^-min(|y-a|,|y-b|)^c).
The result follows from the above estimate and Lemma <ref>.
Let δ>1/2>α>0. Then there exists c=c(α,δ), such that, uniformly in
t≥ 0, y≥ 10, a,b∈ [1,y-1], and uniformly in the functions v_s≥ y+min(s,t-s)^δ, |u_s|≤min(s,t-s)^α, we have
_(0,a)^(t,b)( ∩_0≤ s≤ t{u_s≤ B_s≤ v_s})≥_(0,a)^(t,b)(m_t≥ 0)·(1+(d^-c)),
where d=min(|y-a|,|y-b|,|a|,|b|).
Without loss of generality we can assume α+δ<1. We also pick ε∈(0,1) and write d_0=d^1-ε. Let s_1,s_2 be the solutions of s_1^α=d_0, (t-s_2)^α=d_0.
Let
u̅_s=(d_0+(s-s_1)α s_1^α-1)1_s^α<d_0+(d_0-(s-s_2)α (t-s_2)^α-1)1_(t-s)^α<d_0+min(s,t-s)^α1_s_1<s<s_2.
In other words, u̅ is the function coinciding with min(s,t-s)^α on (s_0,s_1) and linearly expanded on the complement, with continuous derivative. Note that u̅_0=u̅_t=(1-α) d_0. We also denote
v̅_s=(1-s/t)a+s/tb+min(s,t-s)^δ+d. We have
_(0,a)^(t,b)( ∩_0≤ s≤ t{u_s≤ B_s≤ v_s})≥_(0,a)^(t,b)( ∩_0≤ s≤ t{u̅_s≤ B_s≤v̅_s}).
The Cameron-Martin formula gives
_(0,a)^(t,b)( ∩_0≤ s≤ t{u̅_s≤ B_s≤v̅_s})
=
_(0,a)^(t,b)[
e^-∫_0^t u̇̅̇_s B_s-1/2∫_0^tu̇̅̇_s^2 s1_∩_0≤ s≤ t{d^1-ε≤ B_s≤v̅_s-∫_0^u u̇̅̇_u u}],
where we denote the derivative in s of f by ḟ.
We now bound both terms in the measure bias, deterministically. First,
∫_0^t u̇̅̇_s^2 s≤ C ∫_s>s_1 s^2α-2 s+C∫_s<s_1s_1^2α-2 s≤ C d^-(1/α-2)(1-ε).
Moreover, by integration by parts we have (using the fact that u̅ has continuous derivative)
-∫_0^t u̇̅̇_s B_s=∫_0^t B_s ü̅̈_s s-B_t u̇̅̇_t+B_0 u̇̅̇_0=∫_0^t (B_s-((1-s/t)B_0+s/tB_t)) ü̅̈_s s.
On the set ∩_0≤ s≤ t{B_s≤v̅_s}, we therefore have the deterministic bound
-∫_0^t u̇̅̇_s B_s≥- ∫_s^α>d^1-ε(d^1-ε+s^δ)s^α-2 s≥ -C (d^-(1/α-2)(1-ε)+d^(δ/α+1-1/α)(1-ε)).
We have therefore proved
_(0,a)^(t,b)( ∩_0≤ s≤ t{u̅_s≤ B_s≤v̅_s})
≥exp((d^-(1/α-2)(1-ε)+d^-1-α-δ/α(1-ε)))
_(0,a)^(t,b)( ∩_0≤ s≤ t{d^1-ε≤ B_s≤ v_s-∫_0^su̇̅̇_u u}).
The desired lower bound follows by using Lemma <ref>, noting that (a-d^1-ε)(b-d^1-ε)/t=ab/t(1+(d^-ε)).
§.§ Proofs of barrier estimates for the random walk, Proposition <ref>.
The lower bound is a direct consequence of Lemma <ref> and Lemma <ref>.
For the upper bound, we only need to bound
_(0,a)^(t,b)( ∩_0≤ k≤ t{-u̅_k≤ S_k}),
where u̅ is defined in (<ref>).
By the change of variables S_k=S̃_k-∑_0≤ i≤ k-1(u̅_i+1-u̅_i), and denoting d_1=(1-α)d^1-ε, we obtain
_(0,a)^(t,b)( ∩_0≤ k≤ t{-u̅_k≤ S_k})
=
_(0,a)^(t,b)[ e^-1/2∑_k(u̅_k+1-u̅_k))^2+∑_k( S_k+1- S_k)(u̅_k+1-u̅_k)1_∩_0≤ k≤ t{-d_1≤ S_k}]
≤_(0,a)^(t,b)[ e^∑_k( S_k+1- S_k)(u̅_k+1-u̅_k)1_∩_0≤ k≤ t{-d_1≤ S_k}].
Let S̅_s=S_s-(a(t-s)/t+b s/t). We have
∑_k( S_k+1- S_k)(u̅_k+1-u̅_k)=
∑_k(S̅_k+1-S̅_k)(u̅_k+1-u̅_k)=∑_k a_k S̅_k,
where
the constants a_k satisfy 0≤ a_k≤ 10 min(k,t-k+1)^α-2 and vanish outside [d_2,t-d_2] where we define d_2=d^1-ε/α.
As ab≤max(a^2,b^2), we have obtained
_(0,a)^(t,b)( ∩_0≤ k≤ t{-u̅_k≤ S_k})≤_(0,a)^(t,b)[e^2∑_k≤ t/2 a_kS̅_k1_∩_0≤ k≤ t{-d_1≤ S_k}].
Let ε_0∈(0,1/2-α) and define κ=1-ε/2α(1/2-α-ε_0). Note that for any integer v≥ 1, ∑_k≤ t/2 a_kS̅_k>vd^-κ implies that there exists k such that d_2≤ k≤ t/2 such that
S̅_k> v k^1/2+ε_0. This observation together with the union bound gives
_(0,a)^(t,b)[e^2∑_k≤ t/2 a_kS̅_k1_∩_0≤ k≤ t{-d_1≤ S_k}]-_(0,a)^(t,b)( ∩_0≤ k≤ t{-d_1≤ S_k})(1+d^-κ)
≤ ∑_v≥ 1,d_0≤ k≤ t/2,w≥ v k^1/2+ε_0e^v d^-κ _(0,a)^(t,b)({S̅_k∈[w,w+1]}∩_j≤ k{S_j ≥ -d_1)})
≪ ∑_v≥ 1,d_0≤ k≤ t/2,w≥ v k^1/2+ε_0e^v d^-κ _(0,a)^(t,b)(S̅_k∈[w,w+1])×
(sup_c∈[w,w+1]+at-k/t+bk/t_(0,c)^(t-k,b)(∩_1≤ j≤ t-k{S_j ≥ -d_1}))
where we used the Markov property for the second inequality. To bound the first probability above, note that under _(0,a)^(t,b), the random variable S̅_k is centered, Gaussian with variance k-k^2/t≍ k. For the second probability,
from Lemma <ref>, we have uniformly in all parameters
_(0,c)^(t-k,b)(∩_1≤ j≤ t-k{S_j ≥ -d_1})≪(w+at-k/t+bk/t)b/t.
This allows to bound the left-hand side of (<ref>) with
∑_v≥ 1,d_0≤ k≤ t/2,w≥ v k^1/2+ε_0
e^v d^-κ-cw^2/k·(w+at-k/t+bk/t)b/t
for some absolute c>0.
The above sum over w and then v is ≪ e^-c'k^2ε_0 for some absolute c'>0.
We conclude that uniformly in our parameters, the left-hand side of (<ref>) is bounded with
b/t∑_d_0≤ k≤ t/2
e^-c'k^2ε_0(1+at-k/t+bk/t)≪ab/te^-c' d_0^2ε_0≪ d^-κab/t.
Finally, Lemma <ref> yields
_(0,a)^(t,b)( ∩_0≤ k≤ t{-d_1≤ S_k})=2ab/σ(1+(d^-c)).
This concludes the proof.
There exists C=C(κ)>0 such that for any t≥ 10, a≥ 1, we have
_(0,a)(∩_0≤ k≤ t{S_k≥ 0})≤ Ca/√(t).
By monotonicity in a, we can consider a∈ℕ without loss of generality. Moreover, we have
_(0,a)(∩_0≤ k≤ t{S_k≥ 0})≤_(0,a)(∩_0≤ k≤t/a^2{S_k a^2/a≥ 0})=_(0,1)(∩_0≤ k≤t/a^2{S̃_k≥ 0}),
where S̃_k:=S_ka^2/a has independent Gaussian increments with variance in [κ,κ^-1], like S. This proves that (<ref>) only needs to be proved for a=1.
Consider T=min(t,min{k≤ t| S_k≤ 0}). By the stopping time theorem,
[S_T]=1,
so that
[|S_t|1_T≥ t]=1+[-S_T1_T<t].
Moreover, we have the correlation inequality
[|S_t|1_T>t]≥[max(0,S_t)][T>t],
which is a simple consequence of the Harris inequality. Indeed, consider the random walk Z^(n)_k=1+1/√(n)∑_1≤ j≤ knε_j with independent Bernoulli random variables ε_k, and I^(n)={⌊ n Var S_k⌋,k≥ 1}; denote U^(n)=min{k∈ I^(n):Z^(n)_k≤ 0}. Then the functions
1_U^(n)>t and max(0,Z^(n)_t) are non-decreasing functions of (ε_k)_k≤ nt, so that [max(0,Z^(n)_t)1_U^(n)>t]≥[max(0,Z^(n)_t)]·[U^(n)>t]. This implies (<ref>)
by taking n→∞.
We have obtained
[T>t]≤ C1+[-S_T1_T<t]/√(t),
and we will now prove that
[-S_T1_T<t]≤ C_1,
for some C_1>0 uniform in t, which together with (<ref>) will conclude the proof of (<ref>).
To prove (<ref>), first note that [-S_T1_T<t]≤[-S_T_0] where T_0=min{k≥ 0| S_k≤ 0}. We now consider
Z_n=∑_k≥ 01_S_k∈[n,n+1),k<T_0, n≥ 0,
the time spent by S in [n,n+1) before it hits 0. Define U_0=0 and U_i+1=min{u≥ U_i+n^2:S_u∈[n,n+1)}. For any λ≥ n^2 we have the inclusion
{Z_n≥λ}⊂∩_i≤λ/n^2{S_U_i+n^2-S_U_i≥ -(n+1)}.
By the strong Markov property the events on the right-hand side are independent, and there exists α=α(κ) such that each such event has probability at most 1-α, uniformly in n.
This implies
(Z_n)≤ C n^2
for some C=C(κ), a key estimate in the last inequality below: for any ℓ≥ 1 we have
[|S_T_0|≥ℓ]≤∑_k,n≥ 0[S_k∈[n,n+1),|S_k+1-S_k|≥ℓ+(n+1),k<T_0]
=∑_k,n≥ 0[S_k∈[n,n+1),k<T_0]·[|S_k+1-S_k|≥ℓ+(n+1)]
≤∑_n≥ 0e^-c(ℓ+n)^2[Z_n]≤ Ce^-cℓ^2,
which immediately implies [-S_T_0]≤ C_1 and concludes the proof.
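The a/√t decay in the lemma can also be observed numerically; the sketch below (illustrative only, with arbitrary starting point, horizon, increment variances in [1/2,2] and sample size) estimates the probability of staying nonnegative and rescales it by √t/a.

import numpy as np

rng = np.random.default_rng(3)
a, t, n = 2.0, 400, 20000
var = rng.uniform(0.5, 2.0, size=t)                 # fixed increment variances in [1/2, 2]
steps = rng.normal(size=(n, t)) * np.sqrt(var)
walks = a + np.cumsum(steps, axis=1)
p = (walks.min(axis=1) >= 0.0).mean()
print(p, p * np.sqrt(t) / a)                        # the rescaled value stays of order one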
With the same notations as Proposition <ref>, there exists C=C(κ)>0 such that for any t≥ 10, a,b≥ 1 we have
_(0,a)^(t,b)( ∩_0≤ k≤ t{S_k≥ 0})≤ Cab/t.
We first assume that a,b≤√(t). Let n_1=⌊ t/3⌋ and n_2=⌊ 2t/3⌋, and p_t(x)=e^-x^2/(2t)/√(2π t), and abbreviate _(0,a)^(t,b)=_(0,a)^(t,b)(S>0). Then _(0,a)^(t,b)(S>0) is equal to
1/p_t(a-b)∬_x_1,x_2>0p_n_1(x_1-a)_(0,a)^(n_1,x_1)p_n_2-n_1(x_2-x_1)_(n_1,x_1)^(n_2,x_2)p_t-n_2(b-x_2)_(n_2,x_2)^(t,b) x_1 x_2
≤ C∫_x_1p_n_1(x_1-a)_(0,a)^(n_1,x_1) x_1∫_x_2>0p_t-n_2(b-x_2)_(n_2,x_2)^(t,b) x_2≤ Cab/t,
where we have used the trivial bounds _(n_1,x_1)^(n_2,x_2)≤ 1, p_n_2-n_1(x_2-x_1)≤ Ct^-1/2, the estimate (valid for a,b≤√(t)) (p_t(a-b))^-1≤ C√(t), and Lemma <ref>.
For the general case, we can assume a<√(t)<b and ab<t. Let B be a Brownian bridge from a (s=0) to b (s=σ). There exists s_1<…<s_t=σ such that (S_k)_k≤ t and (B_s_k)_k≤ t have the same distribution.
Moreover, from <cit.>, by a simple coupling argument the function
b↦ℙ_(a,0)^(b,t)(B_s>0,0≤ s≤σ| B_s_i>0,0≤ i≤ t) is non-decreasing in b.
This implies
_(a,0)^(b,t)(B_s_i≥ 0,1≤ i≤ t)=_(a,0)^(b,t)(B_s_i≥ 0,1≤ i≤ t)/_(a,0)^(b,t)(B_s≥ 0,0≤ s≤σ)_(a,0)^(b,t)(B_s≥ 0,0≤ s≤σ)
≤_(a,0)^(√(t),t)(B_s_i≥ 0,1≤ i≤ t)/_(a,0)^(√(t),t)(B_s≥ 0,0≤ s≤σ)_(a,0)^(b,t)(B_s≥ 0,0≤ s≤σ).
From Lemma <ref>, the denominator is lower-bounded with ca√(t)/t and the last probability is upper-bounded with ab/t. And from the previously discussed case, the numerator is at most a√(t)/t. This concludes the proof.
With the same notations as Proposition <ref>, there exists C=C(κ)>0 such that for any t≥ 10, y≤ t^1/10, 1≤ a,b≤ y-1 we have
_(0,a)^(t,b)( ∩_0≤ k≤ t{S_k≥ 0})=
2ab/σ·(1+_α,δ,κ(d^-c)).
The lower bound follows directly from Lemma <ref>.
For the upper bound, consider first the case a=b=d.
Note that for any k≤ t-1 and u,v>0, under _(k,u)^(k+1,v) we can decompose
B_s=(k+1-s)u+(s-k)v+B̃_s-k-(s-k)B̃_1,
where B̃ is a standard Brownian motion.
Therefore, if there exists s∈[k,k+1] such that B_s<0, we have max_0≤ u≤ 1|B̃_u|+|B̃_1|>min(u,v).
As |B̃_1|+max_0≤ u≤ 1|B̃_u| is clearly dominated by |𝒩| with 𝒩 a Gaussian random variable with variance (1),
by a union bound, we obtain
_(0,d)^(t,d)( ∩_0≤ k≤ t{S_k≥ 0})-_(0,d)^(t,d)( ∩_0≤ s≤ t{B_s≥ 0})
≤ ∑_u,v≥ 0,k≤ t-1max_x∈[u,u+1]_(0,d)^(k,x)(S_i>0,1≤ i≤ k)·(|𝒩|>min(u,v))
·max_y∈[v,v+1]_(k+1,y)^(t,d)(S_i>0,k+1≤ i≤ t)·(B_k∈[u,u+1],B_k+1∈[v,v+1]).
All terms above can be bounded, giving the estimate
∑_0≤ k≤ t/2,u,v≥ 0du/k+1dv/t-ke^-cmin(u,v)^2e^-c(u-d)^2/k+1/√(k+1)e^-c(v-u)^2
≪d^2/t∑_0≤ k≤ t/2,u≥ 0u^2/(k+1)^3/2e^-cu^2-c(u-d)^2/k+1
≪d^2/t∑_0≤ k≤ t/21/(k+1)^3/2e^-cd^2/(k+1)≤ Cd^2/t· d^-1/4+ε,
for an arbitrary ε>0, concluding the proof in the case a=b=d.
For the general case, from (<ref>) assuming b>a without loss of generality, we have
_(0,a)^(t,b)( ∩_0≤ k≤ t{S_k≥ 0}) ≤_(0,a)^(t,b)( ∩_0≤ s≤σ{B_s≥ 0})·_(0,a)^(t,a)( ∩_0≤ k≤ t{S_k≥ 0})/_(0,a)^(t,a)( ∩_0≤ s≤σ{B_s≥ 0})
≤2ab/σ(1+(d^-c))
from the previous discussion, concluding the proof.
[ArgBelBou2017] Arguin, L.-P., Belius, D., and Bourgade, P., Maximum of the characteristic polynomial of random unitary matrices, Comm. Math. Phys. 349 (2017), no. 2, 703–751.
[ArgBelHar2017] Arguin, L.-P., Belius, D., and Harper, A. J., Maxima of a randomized Riemann zeta function, and branching random walks, Ann. Appl. Probab. 27 (2017), no. 1, 178–215.
[ArgBelBouRadSou2019] Arguin, L.-P., Belius, D., Bourgade, P., Radziwiłł, M., and Soundararajan, K., Maximum of the Riemann zeta function on a short interval of the critical line, Comm. Pure Appl. Math. 72 (2019), no. 3, 500–535.
[ArgBouRad2020] Arguin, L.-P., Bourgade, P., and Radziwiłł, M., The Fyodorov–Hiary–Keating Conjecture. I, preprint arXiv:2007.00988 (2020).
[Bra1983] Bramson, M., Convergence of solutions of the Kolmogorov equation to travelling waves, Mem. Amer. Math. Soc. 44 (1983), no. 285, iv+190 pp.
[Bra1978] Bramson, M., Maximal displacement of branching Brownian motion, Comm. Pure Appl. Math. 31 (1978), no. 5, 531–581.
[BraDinZei2016] Bramson, M., Ding, J., and Zeitouni, O., Convergence in law of the maximum of the two-dimensional discrete Gaussian free field, Comm. Pure Appl. Math. 69 (2016), no. 1, 62–123.
[BraDinZei2016bis] Bramson, M., Ding, J., and Zeitouni, O., Convergence in law of the maximum of nonlattice branching random walk, Ann. Inst. Henri Poincaré Probab. Stat. 52 (2016), no. 4, 1897–1924.
[CarLed01] Carpentier, L., and Le Doussal, P., Glass transition of a particle in a random potential, front selection in nonlinear renormalization group, and entropic phenomena in Liouville and sinh-Gordon models, Phys. Rev. E 63 (2001).
[ChhMadNaj2018] Chhaibi, R., Madaule, T., and Najnudel, J., On the maximum of the CβE field, Duke Math. J. 167 (2018), no. 12, 2243–2345.
[Din2013] Ding, J., Exponential and double exponential tails for maximum of two-dimensional discrete Gaussian free field, Probab. Theory Related Fields 157 (2013), no. 1-2, 285–299.
[DinZei2014] Ding, J., and Zeitouni, O., Extreme values for two-dimensional discrete Gaussian free field, Ann. Probab. 42 (2014), no. 4, 1480–1515.
[FyoHiaKea2012] Fyodorov, Y., Hiary, G., and Keating, J., Freezing transition, characteristic polynomials of random matrices, and the Riemann zeta function, Phys. Rev. Lett. 108 (2012).
[FyoKea2014] Fyodorov, Y., and Keating, J., Freezing transitions and extreme values: random matrix theory, and disordered landscapes, Philos. Trans. R. Soc. A 372 (2014), no. 2007, 20120503.
[Har2019] Harper, A. J., On the partition function of the Riemann zeta function, and the Fyodorov–Hiary–Keating conjecture, preprint arXiv:1906.05783 (2019).
[KatSar1999] Katz, N. M., and Sarnak, P., Random matrices, Frobenius eigenvalues, and monodromy, American Mathematical Society Colloquium Publications 45, Amer. Math. Soc., Providence, RI, 1999, xii+419 pp.
[KeaSna2000] Keating, J. P., and Snaith, N. C., Random matrix theory and ζ(1/2+it), Comm. Math. Phys. 214 (2000), no. 1, 57–89.
[Mon1972] Montgomery, H. L., The pair correlation of zeros of the zeta function, in: Analytic number theory (Proc. Sympos. Pure Math., Vol. XXIV, St. Louis Univ., St. Louis, Mo., 1972), Amer. Math. Soc., Providence, RI, 1973, pp. 181–193.
[MonVau1974] Montgomery, H. L., and Vaughan, R. C., Hilbert's inequality, J. London Math. Soc. (2) 8 (1974), 73–82.
[Naj2018] Najnudel, J., On the extreme values of the Riemann zeta function on random intervals of the critical line, Probab. Theory Related Fields 172 (2018), no. 1-2, 387–452.
[PaqZei2018] Paquette, E., and Zeitouni, O., The maximum of the CUE field, Int. Math. Res. Not. IMRN (2018), no. 16, 5028–5119.
[PaqZei2022] Paquette, E., and Zeitouni, O., The extremal landscape for the CβE ensemble, preprint arXiv:2209.06743 (2022).
[Sou2009] Soundararajan, K., Moments of the Riemann zeta function, Ann. of Math. (2) 170 (2009), no. 2, 981–993.
|
http://arxiv.org/abs/2307.01489v1
|
20230704054413
|
Semantic Segmentation on 3D Point Clouds with High Density Variations
|
[
"Ryan Faulkner",
"Luke Haub",
"Simon Ratcliffe",
"Ian Reid",
"Tat-Jun Chin"
] |
cs.CV
|
[
"cs.CV",
"I.4.6"
] |
inst1]Ryan Faulkner
[inst1]organization=Australian Institute for Machine Learning - University of Adelaide,
addressline=Cnr North Terrace & Frome Road,
city=Adelaide,
postcode=5000,
state=SA,
country=Australia
[inst2]organization=Maptek,
addressline=31 Flemington St,
city=Glenside SA,
postcode=5065,
state=SA,
country=Australia
inst2]Luke Haub
inst2]Simon Ratcliffe
inst1]Ian Reid
inst1]Tat-Jun Chin
LiDAR scanning for surveying applications acquires measurements over wide areas and long distances, which produces large-scale 3D point clouds with significant local density variations. While existing 3D semantic segmentation models conduct downsampling and upsampling to build robustness against varying point densities, they are less effective under the large local density variations characteristic of point clouds from surveying applications. To alleviate this weakness, we propose a novel architecture called HDVNet that contains a nested set of encoder-decoder pathways, each handling a specific point density range. Limiting the interconnections between the feature maps enables HDVNet to gauge the reliability of each feature based on the density of a point, e.g., downweighting high density features not existing in low density objects. By effectively handling input density variations, HDVNet outperforms state-of-the-art models in segmentation accuracy on real point clouds with inconsistent density, using just over half the weights.
Semantic segmentation, 3D point clouds, Density variation, Large scale point clouds, Multi-resolution
§ INTRODUCTION
Light Detection and Ranging (LiDAR) devices generate accurate 3D measurements of their surroundings. While the generated point clouds have useful geometric information, practical application often requires semantic labels to be applied to the points. Recent progress of deep models in processing 3D point clouds <cit.> has opened up many applications of LiDAR. In this paper, we focus on semantic segmentation of LiDAR scans <cit.>, i.e., assigning each point a semantic label.
Many advances in point cloud semantic segmentation relate to autonomous driving, where the aim is the perception of the immediate surrounds of the vehicle <cit.>. Typically, automotive scans <cit.> do not extend much further than 100 m; indeed, the hardware limitations of automotive LiDAR devices are such that scans reaching 250 m can be considered long range <cit.>. The low resolution scans (approximately 10^5 points) have a fast collection rate, making them useful for time-sensitive problems such as obstacle avoidance. On the other hand, terrestrial LiDAR scans of surveying grade are slower, but of higher resolution, benefiting problems which require very high precision but not real-time solutions.
One of the largest public datasets using a surveying-grade scanner, Semantic3D <cit.>, has high resolution scans of up to 10^8 points, but only reaches physical dimensions as large as 240 m horizontally, and 30 m vertically. In comparison, terrestrial LiDAR scans such as those acquired in mining sites often have dimensions over a kilometre in the horizontal axes, and over 100 m vertically, covering a significantly larger area.
LiDAR scans of a physically larger scale tend to suffer from high density variations; see <ref>. Fundamentally, fewer nearby occlusions yield more scan points further from the scanner, where density is lower. While not to the same extent as surveying-grade scans, the inherently lower resolution and distance limitations of automotive LiDAR cause it to also have density variation even in urban environments.
State-of-the-art 3D semantic segmentation methods <cit.> struggle on large-scale surveying point clouds, due to the higher density variation. In particular, while the methods which operate directly on point clouds <cit.> extract local features in a multi-scale manner through down- and up-sampling, details of how to best propagate and utilise features of different scales are left to the neural network to learn. Some methods do combat density variation; however, they only target variation within the scope of individual feature extraction steps and not across the entire network architecture.
Density variation vs class imbalance
It is vital to contrast density variation and class imbalance, both related factors that influence segmentation accuracy. Classes with fewer point samples tend to be smaller objects with lower point density. While this is an important challenge to tackle, our focus in this paper is the effects of density variation independent of the population size of the class. A single class can appear in a point cloud with each instance having vastly different densities. A wall close to the scanner for example, will have a higher density of points than one far away; see <ref> for density distributions of different LiDAR types.
Contributions
We highlight the importance of effectively accounting for local density variations in semantic segmentation on 3D point clouds, particularly those acquired from real-world surveying tasks. To this end:
* We propose HDVNet (high density variation network), a point cloud segmentation model that contains a nested set of feature extraction pipelines, each handling a specific input local density; see <ref>. Interactions between the pipelines are tightly controlled to exploit potential correlations between density levels. An aggregation layer applies attention scores to the features accordingly, such that low density objects are not classified based on (potentially non-existent) high resolution features, while higher density points remain able to take advantage of their fine features.
* We collected a new dataset, named HDVMine, that consists of LiDAR scans from open-cut mines to evaluate our ideas. Our point cloud scans cover geographic areas which are kilometres in scale, making them larger than existing terrestrial LiDAR datasets <cit.>. A single scan is comparable in scale to an automotive LiDAR drive's frames combined. In addition, existing datasets consist of “above-ground” scenes where there is a single and consistent ground plane. In contrast, an open-cut mine can have multiple physical tiers, with complex structures embedded therein.
As we will show in Sec. <ref>, HDVNet yields up to 6.7 percentage points higher accuracy in semantic segmentation on our dataset, compared to state-of-the-art point-cloud models <cit.>, despite HDVNet using almost half as many weights.
§ RELATED WORK
Point clouds have useful geometric information for each point, but the lack of any inherent structure to the data makes local context difficult to determine. We first survey existing methods for point cloud segmentation, from those that preprocess the point cloud to alternative representations, to those which directly take the raw point cloud as input.
§.§ Grid-based methods
Many point cloud networks take inspiration from image-processing techniques. Unlike a pixel image however, a point cloud has no inherent grid structure. For the purpose of using convolutions and similar techniques on the point cloud, a common step is first converting from points to a grid-based representation. These representations include two-dimensional pixel images <cit.>, a birds eye view of the scene <cit.>, or a three dimensional voxel grid <cit.>.
Large sections of empty space in the scene lead to poor memory scaling in grid representations. Data structures such as octrees <cit.> avoid wasting memory on empty space, but information is still lost where multiple points are combined into a single voxel. These grid structure representations have demonstrated particular success for low-resolution LiDAR scans where there are fewer fine details to be lost. State-of-the-art methods for such scans range from modified forms of three-dimensional voxel structures <cit.> to representing the scan in two dimensions such as with a Range Image <cit.>.
§.§ Point Based Methods
Convolutions are performed on grid structures, which makes operating directly on the raw point cloud data difficult. A raw point cloud is simply a set of points, with no consistent ordering. PointNet <cit.> is a pioneering work in directly processing point clouds, which demonstrated the success of using network layers with Multi-Layer Perceptrons (MLPs). Each MLP is limited to operate only on individual points (with shared weights), with any operations performed on the entire point cloud being order-invariant and low-cost, such as max-pooling. More research rapidly followed, extending it directly such as PointNetLK <cit.> and PointNet++ <cit.>, or developing new algorithms using MLPs as a base.
These alternative point processing methods are designed to better utilise the local relationship between points in the scene. RandLA-Net by Hu <cit.> does this using K-Nearest Neighbours and MLPs to aggregate features for each point which represent the local neighbourhood. Like other MLP based methods, it is very efficient, scales well to large point clouds, and uses an encoder-decoder structure to get features from multiple scales.
An alternative approach is to apply convolutions to the raw point cloud as if it had a more grid-like structure. This requires modifying the implementation of a convolution <cit.> to apply to unordered points. One example of this is assigning coordinates to the convolution kernel, and using a MLP to determine how much each kernel weight affects a point based on the point's relative position to the kernel <cit.>. This contrasts to a traditional grid-structure kernel where each weight fully affects the value in one specific pixel or voxel co-ordinate and no others.
§.§ Coarse, then fine processing
Raw point clouds have limited features for each point (x,y,z,r,g,b), lacking any local context. To account for this, some networks generate useful features first. Taeo <cit.> have a network identify which points belong to distinct objects, before then classifying each point with semantic labels. Multi-pass approaches to first identify edges <cit.> or narrow down areas of interest before more fine processing <cit.> are also common. Others such as Varney <cit.> and Li <cit.> first extract fine features before downsampling to a sparser point cloud as usual, but then go back and do so a second time after the point cloud's coarse features have been extracted. These methods all assume that fine features exist when extracting and propagating them, which does not hold when the scan's density is inhomogeneous.
An alternative approach is to perform coarse segmentation into “superpixels” or “simple objects”, followed by a graph-based approach <cit.>. Such graph-based networks do not scale well to large and complicated scenes. In addition, coarse segmentation which quickly identifies the ground points <cit.> or the edges of the road <cit.>, relies on assumptions such as “the lowest points detected are the ground” which do not hold in contexts such as mining.
§.§ Dealing with density variation
Consider a point cloud of N points ={_i }^N_i=1. The density of the point cloud as a whole is the ratio of points N to the volume occupied by the point cloud. For each given point _i, we define its local density ρ_i using the density of its immediate neighbourhood of K nearby points _i, where _i = [_i,1, _i,2, ... _i,K]. Many existing methods inherently assume a homogeneous density, such that the local density ρ_i of any point is roughly the same as the average density of the entire point cloud. As shown previously in <ref>, this is not always the case: the local density of points can vary greatly.
In both our method, and many existing pointcloud networks, the local neighbourhood of a point _i is determined using the K-Nearest neighbours, with K a fixed hyperparameter, K ∈ℤ. This creates a receptive field around each point, the K points within making up the local neighbourhood. Each time the point cloud is downsampled, it becomes more sparse, enabling the receptive field to grow in physical size. This enables early network blocks with small receptive fields to extract fine object features, while later blocks extract sparser features using larger receptive fields. These receptive fields encounter issues when density throughout the point cloud is inhomogeneous.
Objects which exist far away from the scanner or near-parallel to the laser will appear in the initial scan with a low point density. Early layers cannot extract useful high-density features when the point's local neighbourhood is sparse to begin with. This causes one of two density-variation issues, depending on whether the receptive field uses a fixed number of neighbours K, or a fixed radius. If K is fixed, the network layers are required to learn how to extract useful information from a wide variety of receptive field sizes, all using the same shared weights. Alternatively, if the physical size of the receptive field is fixed, then the neighbourhood feature will sometimes be generated from no neighbouring points at all. <ref> visualises this issue. In HDVNet, we fix the number of neighbours K, and then take further steps to counter the issue of inconsistent receptive field size, detailed in <ref>.
Existing methods do not address density variation across the entire network like our HDVNet does, but they do take steps to limit the effect on individual network layers <cit.>. Alternative point cloud representations such as voxels tackle density by either weighting each voxel based on how many points it has <cit.>, or implement a minimum density floor, ignoring sparse sections entirely <cit.>.
The reliance on high density features can be addressed by first aggregating high density points together to represent the scene in a more homogeneous, coarse manner <cit.>. Such approaches inevitably result in information loss as higher-density sections are downsampled to achieve consistent density, although performance on low density objects does improve.
§ HIGH DENSITY VARIATION NETWORK - HDVNET
HDVNet is an architecture which processes a point cloud of N points ={_i }^N_i=1. The raw point values _i initially passed to the network are [x_i,y_i,z_i,r_i,g_i,b_i], where x,y,z are the point's spatial co-ordinates, and r,g,b are the colour values.
The number of points N varies throughout the network as shown in <ref>. Each Downsampling Block DS removes points, subsampling the point cloud from one density state d to a sparser density d+1, where d ∈{1,2,3,4,5}; the density state of the initial point cloud is d=1. Formally, DS_d takes N_d points as input, and returns the smaller subset of N_d+1 points, such that {_j }^N_d+1_j=1 = DS_d({_i }^N_d_i=1). We index the pointcloud based on how downsampled it is, with the initial point cloud being ^(1)={_i }^N_1_i=1, and the most downsampled being ^(5) ={_j }^N_5_j=1 such that ^(d+1)⊆^(d). The number of points at each state ={ N_d }^5_d=1 is set as a hyperparameter. We index upsampling and downsampling blocks using their input pointcloud's density state d, for example the upsampling block US_5 upsamples the pointcloud from N_5 to N_4 points (from ^(5) to ^(4)).
Each point _i has a corresponding feature vector _i, containing a total of T elements such that _i∈ℝ^T. Unique to HDVNet, each feature vector _i can be separated into assigned subsections ^(d)_i⊆_i, where the vector elements of each subsection are “assigned” to a corresponding density state d. Each of these subfeature vectors contains E_d elements, where E_d ∈ℤ^+. The set of integers {E_d}^5_d=1 is defined as a hyperparameter, and is constant throughout the network. In <ref>, we visualise each feature vector subsection {^(d)_i}^N_d_i=1 as different shades of blue.
Each Density Assigned Encoder Block (DB) adds a new subsection ^(d) of E_d elements to each point's corresponding feature vector. We use the number of assigned subsections a to index each feature vector ^(a)_i, though the network, initialising as ^(0)_i with no assignments and no elements. For example, ^(3)_i has the subsections ^(1)_i, ^(2)_i, ^(3)_i and a total of E_1+E_2+E_3 elements. The total number of elements in a given ^(a)_i is therefore T_a, such that T_a = ∑^a_d=1E_d.
We also index DB_a blocks using the number of assigned subsections they involve. Each DB_a takes as input both the existing feature vectors {^(a-1)_i}^N_d_i=1 and the original point values, with a formal definition of
{^(a)_i}^N_d_i=1 = DB_a({^(a-1)_i}^N_d_i=1, {_i }^N_d_i=1)
The first DB block DB_1 takes only the original point values as input, with every ^(0)_i being empty. Details on density states d are given in <ref>, while the effect of subsection “assignments” S^(d)_i are provided in <ref> and <ref>.
As shown in <ref>, HDVNet maps the the initial point cloud ^(1) to four final feature vectors {^(a)}^4_a=1. During training each is passed to one of four classifiers { g_a(^(a))}^4_a=1. Each classifier maps the feature vectors to four sets of probability distributions {^(d)}^4_a=1. Each ^(d) has N_d distributions, and is created using the corresponding classifier g_a, such that a=d. We define ^(d) = {_i }^N_d_i=1 where _i = [q̃_i,1, q̃_i,2, …, q̃_i,k], such that q̃_i,k is the estimated confidence that the corresponding point _i is of semantic class k. After a fine tuning step this is simplified to a single classifier ^(final)={_i }^N_1_i=1 = g_final({^(a)}^4_a=1), used in inference and shown in <ref> as the “Final Classifier”.
§.§ Density Groups and Density States
Our solution to high density variation is to make point density a central part of the network architecture. To do so, HDVNet relies on three different measures of a single point's local density: the continuous density estimate ρ_i, which is quantised evenly into discrete groups δ, and unevenly into the density states d.
Density is not a native measure from LiDAR, so the estimate comes from the K-nearest neighbours. A sphere around each point is made, with the radius r being the Euclidean distance from the point to the most distant of its K neighbours. Dividing K by the volume creates a density estimate ρ_i in points per m^3
ρ_i = K/4/3π r^3
This was chosen as a computationally efficient way of estimating a point's local density, as the K nearest neighbours are already found in order to generate a point's local features. The point clouds in <ref> were coloured using ρ_i, so they can be referred to for a visualisation of the density estimate.
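As a concrete illustration, the sketch below estimates ρ_i with a k-d tree. The function name, the choice of K=16, and the use of SciPy are assumptions of this sketch rather than the paper's implementation.

import numpy as np
from scipy.spatial import cKDTree

def estimate_local_density(points, k=16):
    # rho_i = K / (4/3 * pi * r^3), with r the Euclidean distance from the
    # point to its k-th nearest neighbour, as in the equation above.
    tree = cKDTree(points)
    dists, _ = tree.query(points, k=k + 1)  # k+1: the closest "neighbour" is the point itself
    r = dists[:, -1]
    volume = (4.0 / 3.0) * np.pi * r ** 3
    return k / volume

# Toy usage: 100,000 random points inside a 10 m cube.
pts = np.random.rand(100_000, 3) * 10.0
rho = estimate_local_density(pts, k=16)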
While {ρ_i }^N_d_i=1 is a useful estimate of every point's density, it is often too continuous in nature for steps in HDVNet which expect a more discrete input. For this purpose, the points are quantised into discrete grouping buckets δ∈ℤ, with δ=0 being the first group, δ=1 the second, and so forth. The distributions in <ref> were created using these groups δ.
In our experimental setup the initial grouping δ=0 was set to the very high density threshold of ρ_i > 2×10^6 points per m^3. This value was chosen to ensure that very few points fall into the first grouping, with even high-density small-scale urban scans such as those in Semantic3D having very few points over this threshold. The lower threshold t of the density grouping δ=0 is thus t_0=2×10^6, and subsequent thresholds t_δ are calculated to ensure that the minimum density (in m^-3) is consistently a quarter that of the prior grouping. Each threshold is thus calculated as:
t_δ = t_δ-1/4
For better reflection of the point cloud's density throughout the network, these quantised groups δ are then combined into larger density states d ∈{0,1,2,3,4,5}. Each d is the point cloud's estimated density state at a given point in the network. There are more density groups δ than there are downsampling blocks DS, so each DS reduces the point cloud's maximum density by multiple groups δ at a time.
We set which contiguous groups δ make up each density state d by analysing the density distribution across all the points in the training dataset. With this average distribution we estimate how many points N will remain after downsampling to each density group δ. As the target number of points for each density state { N_d }^5_d=1 is a known hyperparameter, we use the density group which results in the closest number of remaining points to the target.
Downsampling to a threshold t_d therefore results in approximately N_d points remaining; t_d is set to the group threshold t_δ for which roughly N_d points satisfy ρ_i ≥ t_δ. An example of how contiguous density groups δ are combined into a state d is shown in <ref>.
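A minimal sketch of this quantisation follows, assuming an illustrative number of groups and a hypothetical group-to-state mapping; the real mapping is derived from the training-set density distribution and the targets N_d.

import numpy as np

T0 = 2e6                 # lower threshold of group delta = 0 (points per m^3)
N_GROUPS = 12            # illustrative number of groups (assumption)
thresholds = T0 / 4.0 ** np.arange(N_GROUPS)   # t_delta = t_{delta-1} / 4

def density_group(rho):
    # delta = number of thresholds >= rho: rho > T0 gives delta = 0,
    # T0/4 < rho <= T0 gives delta = 1, and so on.
    rho = np.atleast_1d(np.asarray(rho, dtype=float))
    return (thresholds[None, :] >= rho[:, None]).sum(axis=1)

# Hypothetical mapping from groups delta to density states d.
STATE_OF_GROUP = np.array([1, 1, 1, 2, 2, 3, 3, 4, 4, 5, 5, 5])

def density_state(rho):
    delta = np.clip(density_group(rho), 0, len(STATE_OF_GROUP) - 1)
    return STATE_OF_GROUP[delta]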
A key point to clarify is the distinction between a point belonging to a specific density state of the overall point cloud, p_i ∈ P^(d), and a point's inherent density state. The initial point cloud P^(1), for example, contains all the input points, regardless of how sparse they are. When we refer exclusively to the subset of points with an inherent density ρ_i between a density state's lower threshold t_d and the prior state's threshold t_d-1, we define the subset as
I^(d) := {p_i : t_d < ρ_i <= t_d-1}.
A visualisation of P^(d) and I^(d) is shown in <ref>.
§.§ Density assigned encoder block (DB)
Our method's key architecture modification is the assignment of feature elements to density states, as visualised in <ref> as shades of blue for the E_d elements of each subsection ^(d). For our feature extraction blocks, we use our novel density assigned encoder block (DB). Many of our multilayer perceptrons (MLP) and fully connected layers (FC) are also replaced with our density aware MLP (DMLP) and density connected layer (DC).
Some of the operations performed in the density assigned encoder block's hidden layers have higher memory requirements. As shown in <ref> we reduce the number of elements in each subsection ^(d)_i from E_d to H_d, with H_d also set as a hyperparameter. We visualise feature vector subsections with H_d elements as shades of maroon instead of blue. As the total number of elements normally is T_a = ∑^a_d=1E_d, we define the total number of elements in these hidden layers as U_a = ∑^a_d=1H_d.
§.§.§ Continued input of original scene
One addition is the reintroduction of the original raw point values after every downsampling as shown in the bottom left corner of <ref>. Each DB block requires not only the input point feature vector ^(a-1), but also the raw point values of _i.
{^(a)_i}^N_d_i=1 = DB_a({^(a-1)_i}^N_d_i=1, {_i }^N_d_i=1)
This enables sparser feature extraction using original point data instead of relying on unreliable propagated features from higher densities. The features in HDVNet assigned to lower density states are in this way made robust to the original density of the object.
§.§.§ Density assigned multi-layer perceptrons
To prevent reliance on higher density features, and enforce the “assigned subsections” ^(d)_i of a feature vector, element separation is applied within the encoder blocks. A normal MLP or FC would treat all feature elements the same, so we instead use our density assigned MLP (DMLP) or Density Connected Layer (DC). They both operate on feature vector elements without mixing information across density assigned subsections. For reference, we define a standard multi-layer perceptron (MLP) or fully connected layer (FC) as a mapping from ^in_i features to ^out_i. The difference is that an MLP also includes layernorm (LN) and activation (AVN) layers:
^out_i = FC(^in_i)
^out_i = MLP(^in_i) = AVN(LN(FC(^in_i)))
In HDVNet, feature elements are each assigned either to the current point cloud's density state d or a previous one (d-1, d-2, etc). During feature extraction and processing, such as DMLPs, we allow higher density feature elements to use elements assigned to lower densities as input, but not vice-versa. This rule comes from the fact that an object with high density information will always have low density information once it is downsampled, but the same does not necessarily hold true in reverse. An object which is sparse to begin with will not have any useful high density information to be considered.
The number of feature vector elements assigned to a specific density is E_d. The elements of a feature vector ^(a_in)_i assigned to any of a contiguous set of subsections _start to _end are referred to as _i,start:end. Multiple MLPs each viewing different subsections of a point's features ^(a_in)_i are thus combined into a Density Assigned MLP (DMLP), along with the number of element assignments to include in the output feature vector a_out:
DMLP(^in_i, a_out) = MLP(_i,1:a_in,E_1)
⊕ MLP(_i,2:a_in,E_2)
⋮
⊕ MLP(_i,a_out:a_in,E_a_out),
Where ⊕ is the concatenation of feature vectors, and a_out <= a_in. This creates a feature extractor which is robust to density variation, yet still extracts fine features. A pink square is drawn around this step in <ref>. As sparse features are generated without using fine ones, they can be trusted to be robust to density variation. Our density connected layers (DC) follow the same method, stacking fully connected layers (FC) to preserve density assignments. <ref> is an example of this for d=3, and similar to a DMLP, a DC can be defined as:
DC(^in_i, a_out) = FC(_i,1:a_in,E_1)
⊕ FC(_i,2:a_in,E_2)
⋮
⊕ FC(_i,a_out:a_in,E_a_out),
Our local testing confirmed that, as found in other works <cit.>, low-level features are sufficient for the majority of the analysis, with the benefit of additional features continually decreasing as they become finer. For this reason all new feature vector elements added to a point cloud of density state d are assigned to the new subsection ^(d)_i, maximising the number of features assigned to lower densities. This is visualised in <ref> and <ref> by the width of subsections remaining constant throughout.
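The following PyTorch sketch illustrates the DMLP construction described above. The module name, hidden sizes, and ReLU activation are assumptions; a DC layer would be the same construction without the LayerNorm and activation.

import torch
import torch.nn as nn

class DMLP(nn.Module):
    # The input feature vector is split into subsections assigned to density
    # states 1..a_in (sizes e_in). The output subsection for state j is built
    # only from input subsections j..a_in, so features assigned to sparse
    # states never depend on fine, high-density features.
    def __init__(self, e_in, e_out):
        super().__init__()
        self.e_in, self.e_out = list(e_in), list(e_out)
        assert len(self.e_out) <= len(self.e_in)
        self.branches = nn.ModuleList()
        for j in range(len(self.e_out)):
            in_dim = sum(self.e_in[j:])          # subsections j..a_in only
            self.branches.append(nn.Sequential(
                nn.Linear(in_dim, self.e_out[j]),
                nn.LayerNorm(self.e_out[j]),
                nn.ReLU(),
            ))

    def forward(self, f):
        parts = torch.split(f, self.e_in, dim=-1)   # per-state subsections
        outs = []
        for j, branch in enumerate(self.branches):
            x = torch.cat(parts[j:], dim=-1)        # state j and all sparser states
            outs.append(branch(x))
        return torch.cat(outs, dim=-1)

# Example: 3 input subsections of 16 features each, 3 output subsections of 32.
dmlp = DMLP(e_in=[16, 16, 16], e_out=[32, 32, 32])
y = dmlp(torch.randn(4, 1024, 48))                  # (batch, points, features)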
§.§ LiDAR-grid subsampling
Like existing networks, HDVNet subsamples the point cloud to smaller subsets of points. This allows both for more features to be encoded into each point without running into hardware limitations, as well as obtaining features of lower density resolutions. In <ref> this is represented by the downsampling step DS.
Downsampling methods in previous models vary from random sampling <cit.> and farthest point sampling <cit.>, to having the network itself choose which points to keep <cit.>. Such downsampling methods do not retain the inherent scan ordering of terrestrial LiDAR. We use a pseudo-LiDAR downsampling method similar to that of other works <cit.> to preserve scan lines. With the goal of making the scan more homogeneous in density with each downsampling step, objects with higher density in the scan have more points removed, while the lowest density sections of the scan are left untouched.
Such terrestrial LiDAR scanners output not only the 3D coordinates (x,y,z) of each scan point, but often also the (spherical) row and column coordinates (r,c) of the corresponding scan direction. While they can be estimated when the scanner's co-ordinates are known (usually the point of origin for the scan), it is preferable to use the original scanner's row and column values if they are available for better LiDAR scan metadata. We propose that respecting the scan structure of the LiDAR while downsampling enables high-density sections of a scene to better resemble their low-density counterparts after downsampling; see <ref>. When downsampled sections of the point cloud do not resemble naturally sparse objects in the LiDAR scan, the network is less effective at extracting coarse features.
For each point _i, the original metadata _i = [r_i,c_i] is also input to the network if available. The metadata is used exclusively for downsampling, and not directly used in mapping from _i to the class confidence distribution _i. Instead, it enables a more accurate downsampling of high-density points, removing the disparities and differences between different density groupings. We define the difference between the downsampling's target grouping δ_t and the point's inherent density grouping δ_i as Δδ = δ_t - δ_i.
DS_lidar(c_i,r_i, Δδ) =
1 if c_i % 2^Δδ = 0 and r_i % 2^Δδ = 0,
0 otherwise,
We keep _i if DS_lidar(c_i,r_i, Δδ) = 1, and discard it in the downsampling otherwise. Downsampling based on the difference between a point's original density and the target density creates a more homogeneous result, while using rows and columns allows the LiDAR scanline structure to be retained, as shown in the bottom row of <ref>.
One notable downside of this subsampling approach is that the number of points removed is inconsistent, as the number of points in each density grouping varies scan by scan. As a simple solution, LiDAR-grid subsampling is used for successive density groups until a new target density would result in fewer points than desired. The points which would be removed when downsampling to the next target density δ_t are then randomly removed to achieve the desired number of points N_t, as shown in <ref>.
We use random subsampling because, if we were to select the points based on their local density, we would likely remove a small object or section of the scan. Randomly subsampling is both computationally efficient and spreads the sampling throughout the scan. By randomly selecting the points which would have been removed if the LiDAR-grid subsampling was used once more, we also retain scan line structure as much as possible while avoiding subsampling of low-density areas of the scan.
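A simplified NumPy sketch of this subsampling logic follows; the function names and bookkeeping are illustrative, and edge cases (e.g., a scan whose grid-kept points already number fewer than the target) are ignored.

import numpy as np

def lidar_grid_keep(cols, rows, delta_diff):
    # Eq. DS_lidar: keep a point when both its scan column and row are
    # multiples of 2**delta_diff; points already at or below the target
    # density have delta_diff <= 0 and are always kept.
    step = 2 ** np.maximum(delta_diff, 0)
    return (cols % step == 0) & (rows % step == 0)

def downsample(points, cols, rows, delta, delta_target, n_target, rng=None):
    # Grid-subsample towards the target group, then randomly drop points among
    # those that the *next* grid step would remove, until n_target remain.
    rng = np.random.default_rng() if rng is None else rng
    keep = lidar_grid_keep(cols, rows, delta_target - delta)
    idx = np.flatnonzero(keep)
    if len(idx) > n_target:
        next_keep = lidar_grid_keep(cols, rows, (delta_target + 1) - delta)
        survivors = idx[next_keep[idx]]
        candidates = idx[~next_keep[idx]]
        n_extra = max(n_target - len(survivors), 0)
        extra = rng.choice(candidates, size=min(n_extra, len(candidates)), replace=False)
        idx = np.sort(np.concatenate([survivors, extra]))
    return points[idx], idx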
§.§ Existential local feature aggregator (ELFA)
In HDVNet, local features are extracted and aggregated via a nearest neighbours approach. As shown in <ref>, we have two alternative feature aggregation blocks. LFA is similar to that used in existing networks, specifically using the LFA from RandLA-Net <cit.> as a base. The point co-ordinates and features for the K neighbours of each point p_i are found, and both are used to generate a local feature vector for that point. The only modification of note to our LFA implementation is that the MLPs which involve point features are replaced with DMLPs, and fully connected layers (FC) are similarly replaced with density connected layers (DC). The density assignment of the point features is thus preserved. ELFA is a more modified, optional variant, which further counters density variation.
As the feature elements which are assigned to higher densities are calculated for sparse points, there will be “unreliable” or “junk” features, such as the dense features of the bottom right point in <ref> with a larger receptive field. While the network can be designed not to use them at all, that removes both the ability of sparse points to utilise fine features of their higher-density neighbours and the ability to take unreliable (due to varying receptive field size) but still potentially useful fine features into consideration.
In ELFA, two neighbourhood features are created. All K neighbours are used to generate 𝐍𝐅_original as usual, while 𝐍𝐅_exists is created from what remains after masking out points which exist at a sparser density than the point cloud's current density state d.
Mask(p_i,d) =
1 if p_i ∈{I^(j)}^d_j=1,
0 otherwise,
This masking ensures only points with the expected receptive field size contribute to NF_exists. Both neighbourhood features are concatenated, multiplied by an attention score used to determine which features are most reliable, and then finally passed to a DMLP. This is visualised in <ref>. Through ELFA, the network has a local neighbourhood feature 𝐍𝐅_exists which it can learn whether or not to trust. Without it, there is a higher risk of the network taking and using “junk” dense features from neighbouring points which have a sparser inherent density.
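A compact sketch of the two ELFA neighbourhood summaries, using max-pooling as an assumed order-invariant aggregation (the actual LFA aggregation may differ); the subsequent concatenation, attention scoring, and DMLP are omitted here.

import torch

def elfa_neighbourhood_features(neigh_feats, neigh_exists):
    # neigh_feats:  (N, K, F) features of the K neighbours of each point.
    # neigh_exists: (N, K) boolean mask from Eq. Mask -- True where a neighbour's
    #               inherent density state is d or earlier, i.e. its receptive
    #               field has the expected size.
    nf_original = neigh_feats.max(dim=1).values
    masked = neigh_feats.masked_fill(~neigh_exists.unsqueeze(-1), float("-inf"))
    nf_exists = masked.max(dim=1).values
    empty = ~neigh_exists.any(dim=1)          # no "existing" neighbours at all
    nf_exists[empty] = nf_original[empty]     # fall back to the unmasked summary
    return nf_original, nf_exists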
§.§ Initial Training Classifiers
As the network architecture is assigned by density throughout, we are able to utilise multiple classifiers g_1, …, g_4 at the end, each predicting a class confidence distribution _i for each point. Each g_a takes the features from a different density state of the decoder as its input, g_4 using ^(4), g_3 the features from ^(3) and so forth.
The class-weighted cross-entropy loss is calculated for each separate {_̃ĩ}^N_d_i=1 produced by g_a, masked to include only points with inherent densities belonging to that density state or a prior one, _i ∈{I^(j)}^d_j=1. This specialises each classifier for its intended density, preventing g_1 from being expected to classify sparse points of I^(2),I^(3),I^(4) or I^(5) (each density is visualised in <ref>). We include earlier density states due to the LiDAR-grid subsampling making high density objects resemble sparse ones, making them suitable as extra training data for sparser densities.
I^(5) makes up a negligible proportion of any point cloud P^(1), so it does not have a corresponding classifier and cross-entropy loss is not calculated for it. Any points which belong to I^(5) are treated as I^(4) when masking the output {_̃ĩ}^N_d_i=1 and calculating the loss.
To combine them, the loss L_d for each density state d is multiplied by the square of the density state number itself, so that the network can be trained simultaneously for all densities.
L_total = 1^2 L_1 + 2^2 L_2 + … + d^2 L_d
The lower density weights are thus prevented from being dominated by the higher density outputs, which also use coarse features in their calculations and therefore affect the coarse features through their backpropagation.
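In code, the combined loss can be sketched as follows, with the per-state masks implementing the "state d or earlier" rule; the helper name and the use of PyTorch's cross-entropy with class weights are assumptions of this sketch.

import torch
import torch.nn.functional as F

def combined_training_loss(logits_per_state, labels_per_state, masks, class_weights=None):
    # logits_per_state[d-1]: (N_d, k) output of classifier g_d.
    # labels_per_state[d-1]: (N_d,) ground-truth labels at that state.
    # masks[d-1]:            (N_d,) bool, True for points whose inherent
    #                        density belongs to state d or an earlier one.
    total = 0.0
    for d, (logits, labels, mask) in enumerate(
            zip(logits_per_state, labels_per_state, masks), start=1):
        if mask.any():
            loss_d = F.cross_entropy(logits[mask], labels[mask], weight=class_weights)
            total = total + (d ** 2) * loss_d   # L_total = sum_d d^2 * L_d
    return total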
§.§ Fine tuning for final prediction
While the loss in the section above is used during initial training, there is a final fine-tuning step afterwards. Simply predicting the class label probability {_̃ĩ}^N_d_i=1 using the output from the classifier g_a corresponding to the point's inherent density _i ∈ I^(d) is sufficient. However, a benefit can be gained by locking the weights previously trained and fine-tuning new ones which take all the extracted features as input into a singular g_final shared by all the points.
As shown in <ref>, the features at each density are first up-sampled to cover all the original input points, before being attention scored for each point. This attention score α_i is created based on which density states d the point _i “exists” in as well as its specific density estimate ρ_i. A boolean value B_i^(d) is used, with the value being true using the same “existence” definition as in ELFA - whether the point belongs to I^(d) or that of a prior density state (<ref>).
As the network is initially trained for the classifiers g_1, …, g_4, there are no features assigned to d=5 to be attention scored. Therefore no boolean is made for d=5. At d=4 all points other than the negligible number existing in I^(5) would be given a value of 1 according to <ref>, so B_i^4 is not calculated or included either.
α_i = MLP(B_i^1, B_i^2, B_i^3, ρ_i)
As the point's density is known, and each feature is assigned to a designated density state, the network is able to learn which features to rely on for the final prediction of k classes, and apply the attention score α_i accordingly. The loss for this final step of the training is simply a class-weighted cross entropy loss using the {_̃ĩ}^N_1_i=1 output by g_final.
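A sketch of the attention scoring follows, assuming a small two-layer MLP with a sigmoid squashing and a log-scaled density input; the hidden size and both of those choices are assumptions of this sketch, not the paper's implementation.

import torch
import torch.nn as nn

class FinalAttention(nn.Module):
    # alpha_i = MLP(B_i^1, B_i^2, B_i^3, rho_i), applied to the concatenated,
    # upsampled per-state features before the shared final classifier g_final.
    def __init__(self, n_feats, hidden=32):
        super().__init__()
        self.score = nn.Sequential(
            nn.Linear(4, hidden), nn.ReLU(),
            nn.Linear(hidden, n_feats), nn.Sigmoid(),   # assumed squashing
        )

    def forward(self, feats, exists, rho):
        # feats:  (N, n_feats) concatenated features; exists: (N, 3) booleans
        # B^1..B^3; rho: (N,) density estimates.
        x = torch.cat([exists.float(), torch.log1p(rho).unsqueeze(-1)], dim=-1)
        return feats * self.score(x)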
§ DATASET - HDVMINE
With the assistance of an industry partner, we collected 53 individual terrestrial LiDAR scans across five different mine locations; <ref> shows the point cloud from an individual scan. The extent of the individual point clouds ranges from 183 m in one direction to 8.4 km, with an average of 577 m. <ref> displays one of the scans from above.
We manually labelled the point clouds into three semantic classes: , and . The classes chosen reflect the aim to understand the overall scene structure for surveying. Unlike in urban environments, and in a mining environment vary significantly in smoothness and orientation. The boundaries between and also defy simple geometric definitions, e.g., the surfaces are not cleanly at right angles. <ref> illustrates these challenging features. Class subsumes a variety of elements such as vegetation, rock piles, and man-made objects, where the latter encompass less than 1% of the points; see <ref>. In total, 353 million points have been labelled. <ref> shows the population size of the classes.
While the LiDAR scans in HDVMine can be combined into contiguous scenes, in our experiments in Sec. <ref>, each scan was treated as an individual input point cloud. Even within a single point cloud however, the local density variation is high (see <ref>), which in turn leads to significant intra-class density variation (see <ref> for and examples).
§ EXPERIMENTS
Experiments were run using three different datasets, HDVMine (high-resolution, large-scale terrestrial LiDAR), Semantic3D (high-resolution terrestrial LiDAR), and HelixNet (low-resolution automotive LiDAR). We ran ablation tests with multiple variations of our architecture:
* HDVNet: The default network, using all methods as outlined in <ref>
* DTC (Density aware Training Classifier): The training classifiers are modified to use DMLP and DC layers as the rest of HDVNet does.
* FCO (Fine Classifier Only): Immediately train using the fine classifier, instead of using the training classifier from <ref> and locking the network weights prior to the classifier.
* TCO (Training Classifier Only): Inference is run using the training classifiers from <ref>. Each point p_i uses either g_1, g_2, g_3 or g_4 according to which density state I^(d) it belongs to.
* No FA (No Feature Allocation): All DC and DMLP layers take features of every available density as input. Such DC layers have no practical difference from an FC layer, while each DMLP retains separate layernorm (LN) and activation (AVN) for each of the small MLPs from which it is constructed.
* No FA (small): As feature allocation reduces the number of weights used by almost half, this variant also uses fewer features per point throughout the network, for an equivalent number of weights.
* No ELFA: The Existential Local Feature Aggregator from <ref> is not applied
§.§ Results on HDVMine
As there are multiple key differences between the implemented architecture and RandLA-Net, additional ablation tests were run on HDVMine. All tests were run on a single 8GB Nvidia RTX 3070 for 50 Epochs (with each epoch being 1000 batches of batch size 4). To fit the graph on the smaller GPU all networks were trained with the same reduced number of features per point (maxing out at 256 features per point at the end of the decoder). For all tests points were passed in with x,y,z,r,g,b, as well as the density estimate ρ_i.
While the network takes an already-downsampled point cloud ^(1) as input, we upsample the labels and test on the original point cloud ^(0). For analysis we identify the accuracy both on points with an inherent density I^(1) and those with the extremely high density of I^(0).
In addition, as our terrestrial LiDAR scans are too large to pass as input to a standard GPU, we used the same method as RandLA-Net to break them down. Points were randomly chosen from those not yet given a label, combined with a set number of their nearest neighbours, and then passed into the network as the input point cloud ^(1). This process was repeated until every point had been processed at least once. Points processed in more than one of these “spheres” had their label chosen by weighting the different class distributions using the point's distance from the centre of each respective sphere, and then using the summed probability distribution.
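A sketch of this sphere-based inference; the 1/(1+distance) weighting, the sphere size, and the function names are assumptions, since the text only states that distributions are distance-weighted and summed.

import numpy as np
from scipy.spatial import cKDTree

def sphere_inference(points, predict_fn, num_classes, n_per_sphere=40960, rng=None):
    # predict_fn(sub_points) is assumed to return an (n, num_classes) array of
    # class probabilities for the given subset of points.
    rng = np.random.default_rng() if rng is None else rng
    tree = cKDTree(points[:, :3])
    summed = np.zeros((len(points), num_classes))
    visits = np.zeros(len(points), dtype=int)
    k = min(n_per_sphere, len(points))
    while (visits == 0).any():
        centre = rng.choice(np.flatnonzero(visits == 0))   # an unlabelled point
        dists, idx = tree.query(points[centre, :3], k=k)
        probs = predict_fn(points[idx])
        w = 1.0 / (1.0 + dists)                            # assumed distance weighting
        summed[idx] += w[:, None] * probs
        visits[idx] += 1
    return summed.argmax(axis=1)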
Points are compared at different density groupings. I^(5) is the coarsest, including all points where ρ_i <= 0.12 points per m^3. In comparison, I^(0) is the finest; in HDVMine this is all the points where ρ_i > 30,558 points per m^3. The specific t_d thresholds for d=0,1,2,3,4,5 are (30558, 1739, 31, 1.9, 0.12, 0) respectively, based on the known distribution of the training data.
As shown in <ref>, RandLA-Net's use of batchnorm makes it difficult for the network to stabilise when limited GPU memory requirements require a small batch size of 4. Simply swapping it for layernorm (LN) enabled RandLA-Net to train effectively. Replacing random subsampling with our Lidar-grid subsampling (LGS) improved results again. Even with the point's density ρ_i directly passed in alongside rgb as a raw point value, it was unable to learn to combat the same level of density variation as our HDVNet. Finally, we ran RandLA-Net after downsampling the data heavily in pre-processing to obtain homogeneity in the dataset (if all points are sparse, there is no dense-to-sparse variation). This merely results in high accuracy on sparse points coupled with poor results on high-density ones. Unlike HDVNet these higher results on sparse objects come at too high a cost, reducing overall performance as fine features are completely abandoned.
Restricting the network from using the features in ^(a) assigned to higher densities when predicting for a coarse point was shown by “HDVNet: DTC” to reduce performance. As each feature up to this final step is extracted using only information from a specific density and lower, and each prediction _i with corresponding loss L_d is for points of a specific density, it is better for the network to learn to ignore an unreliable feature than to ignore such features completely in the final class probability calculations.
Training with the final classifier from the start, as in “HDVNet: FCO”, put the majority of HDVNet's architecture to waste. High density points make up the majority of the scene, so all else being equal their gradients will overwhelm those of low-density ones, making it difficult for the network to learn robust coarse features. In contrast, the training classifier from <ref> enables the network to learn how to reliably extract coarse features.
The fine tuning step described in <ref> causes a minor improvement compared to “HDVNet: TCO” which does not use it. Applying each point's corresponding label provided by each of the four initial outputs remains sufficient if a faster training time is desired however.
One point of interest in the ablation results is density assigned feature vector subsections (^(d)), a fundamental aspect of HDVNet. As expected, removing them in “HDVNet: No FA” resulted in worse results on all but the (most common) highest-density category I^(0). Without any forced allocation of features, the network prioritised the more frequent I^(0) and I^(1) points during training.
It was confirmed with “HDVNet: No FA (small)” that it is the explicit assignment of features to density states d that improves the results on sparser objects, and not a result of being a simplified network with almost half the weights to learn. This smaller version performed worse than both the full-size “No FA” and standard HDVNet, as expected.
The existential local neighbourhood feature extraction step (ELFA) can be considered optional, and to be included if the goal is a network which performs especially well on sparse objects in a high density scene. Unlike the other measures taken in HDVNet, the ablation shows that the benefit to sparse objects is outweighed by the cost to dense ones. Even for the high-variation dataset HDVMine, “HDVNet: No ELFA” performs the best overall.
Ultimately HDVNet (No ELFA) achieved a MIoU 6.7 points above that of a RandLA-Net with minimal modifications, outperforming across all densities as well as against further simple RandLA-Net modifications.
Tests were also run using DGCNN for further comparison to existing models. The standard hyperparameter used by DGCNN for indoor scenes is 1.5 metre cubic blocks, with DGCNN taking approximately 8000 points from each block. On the HDVMine dataset, the average block has 8000 points only at 5 metres, so we made this minor change to better accommodate the network. Even at 5 metres, this merely reflects the number of points in an "average" block, with many of the blocks created having fewer points, some substantially so. As shown in <ref>, models such as DGCNN, which split the scene into geometric sections (in this case, five-metre cubes), perform poorly on high density variation data such as HDVMine, as they struggle to train with so many low-point blocks. In inference, DGCNN shows a further decrease in performance at lower densities, as those are the blocks which do not have sufficient points for the network to effectively extract features. Further modifications, such as reducing the number of points expected from each block or increasing the block size further, would throw away the fine features within the many 5-metre blocks which do have 8000 or more points.
§.§ Results on Semantic3D
HDVNet was also applied to the task of Semantic3D <cit.>. The original metadata _i is not publicly available, so angles were estimated using x,y,z, and from these angles rows and columns were roughly approximated. Three of the fifteen scans typically used as part of the training set were instead put aside to use for testing. This was done as the Semantic3D test dataset does not have a public ground-truth point annotation, so detailed analysis across densities required sectioning off some of the publicly labelled training data.
As shown in <ref> smaller scale terrestrial LiDAR such as Semantic3D is significantly more homogeneous than HDVMine. The majority of points belong to the density state I^(0), which for Semantic3D is a threshold of ρ_i > 141,471 points per m^3. <ref> confirms that the improved performance seen on the HDVMine dataset does not carry over to datasets with a more homogeneous density, although it continues to perform adequately. In contrast to existing networks HDVNet is designed with the inherent assumption of density variation in the data, instead of homogeneity.
It should be noted that “HDVNet: Everything Implemented” performing better on “All” densities than at any individual one is not a calculation error but a natural result of how the MIoU is calculated. As a general trend, individual classes get the highest IoU for the density at which they most commonly occur, as this density state is also how they commonly appeared in the training data. In Semantic3D this is I^(0) for all classes except “High Vegetation”, which has 44% of its testing points at I^(1), despite that density only including 4.9% of the testing dataset's points. The MIoU at I^(0) averages the per-class IoU across every class, and so is affected by the (relatively) poorer performance of “High Vegetation”. Similarly the MIoU at I^(1) is negatively affected by the IoU of classes which are most populous at I^(0). When calculated for “all” densities, each class IoU is affected primarily by the density where it has the majority of points (each of those points being either a true or false positive in the IoU calculation). This is what results in the “All” point MIoU of 67.9% being higher than for any of its density subsets I^(d). The tables with the IoU of every class, at every density, for every network architecture, are not included in this paper for brevity.
§.§ Results on HelixNet
Analysis was also performed using the automotive LiDAR dataset HelixNet <cit.>. Automotive LiDAR datasets are typically much lower resolution, however also have a higher variance in density than public terrestrial datasets such as Semantic3D. Once again we show improved performance compared to the similarly point-based network RandLA-Net, with performance especially improved on lower resolutions. Similarly ELFA once more improves performance on coarse points, but is detrimental to the overall performance.
HDVNet is built with the assumption that the point cloud still has useful features after downsampling steps. We found that due to the resolution being low to begin with, this assumption no longer holds. Assigning features to the densities I^(5) and I^(4) was counterproductive, with a point cloud downsampled more than three times becoming too sparse to still have useful features to extract from the raw point data. Restricting the density assignment of features to the first three density states resulted in a small increase in performance.
While the assumption of downsampled density states still having features worth extracting is an important weakness of our method to note, it is ultimately intended for high-resolution scenes such as our HDVMine. For low resolution LiDAR scans, state of the art voxel networks have demonstrated great success compared to direct point cloud processing. For HelixNet, as well as other automotive datasets, grid-based networks significantly outperform our method, RandLA-Net, and other methods which directly process raw point clouds. Whether converting to a cylindrical representation, voxels, pillars, etc., a low-resolution point cloud does not have as much information and detail to potentially be lost in the conversion, reducing the need for direct point processing.
§.§ Qualitative Results
In addition to the tables <ref>, we have produced qualitative results for all architectures on all datasets. We visualise both the class predictions, as well as the point accuracy.
§ CONCLUSIONS
In this paper we introduced the novel network architecture HDVNet for direct point cloud segmentation. We demonstrated consistently improved performance across all densities on data with high density variation, such as that from large-scale land-surveying or mining. The measures ingrained into the architecture were each tested separately in an ablation study to confirm their individual contributions to the final results.
We confirmed that this performance benefit does not translate to more homogeneous terrestrial LiDAR data such as Semantic3D, and while performance in inhomogeneous low-resolution LiDAR scenes improves, grid-based methods remain the state of the art option for low-resolution LiDAR. Further research is required to determine if the “Existential” local neighbourhood feature extraction step could be beneficial on data with more variance than HDVMine, or if its improved performance on sparse objects in the scene is always outweighed by the detriment to the higher density objects which make up the majority of a scan.
§ ACKNOWLEDGEMENTS
This research was carried out with support from the company Maptek, from which data was used to create the dataset HDVMine, and software was used to both label and visualise point clouds.
Funding: Ryan Faulkner was supported by an Australian Government Research Training Program (RTP) Scholarship as well as a supplementary University of Adelaide Industry PhD (UAiPhD) Scholarship funded by Maptek; Tat-Jun Chin is SmartSat CRC Professorial Chair of Sentient Satellites.
elsarticle-num
|
http://arxiv.org/abs/2307.01106v2
|
20230703153311
|
The truncation of the disk of NGC 4565: Detected up to z=4 kpc, with star formation, and affected by the warp
|
[
"Cristina Martinez-Lombilla",
"Raul Infante-Sainz",
"Felipe Jimenez-Ibarra",
"Johan H. Knapen",
"Ignacio Trujillo",
"Sebastien Comeron",
"Alejandro S. Borlaff",
"Javier Roman"
] |
astro-ph.GA
|
[
"astro-ph.GA"
] |
Detected up to z=4 kpc, with star formation, and affected by the warp
Instituto de Astrofísica de Canarias (IAC), La Laguna, 38205, Spain
[email protected]
Departamento de Astrofísica, Universidad de La Laguna (ULL), E-38200, La Laguna, Spain
School of Physics, University of New South Wales, Sydney, NSW 2052, Australia
Australian Research Council Centre of Excellence for All-Sky Astrophysics in 3 Dimensions (ASTRO 3D), Stromlo, ACT 2611, Australia
Centro de Estudios de Física del Cosmos de Aragón (CEFCA), Plaza San Juan 1, 44001, Teruel, Spain
School of Physics & Astronomy, Monash University, Clayton, VIC 3800, Australia
NASA Ames Research Center, Moffett Field, CA 94035, USA
Bay Area Environmental Research Institute, Moffett Field, California 94035, USA
Kavli Institute for Particle Astrophysics & Cosmology (KIPAC), Stanford University, Stanford, CA 94305, USA
Kapteyn Astronomical Institute, University of Groningen, Landleven 12, 9747 AD Groningen, The Netherlands
The hierarchical model of galaxy formation suggests that galaxies are continuously growing. However, our position inside the Milky Way prevents us from studying the disk edge. Truncations are low surface brightness features located in the disk outskirts of external galaxies. They indicate where the disk brightness abruptly drops and their location is thought to change dynamically. In previous analyses of Milky Way-like galaxies, truncations were detected up to 3 kpc above the mid-plane but whether they remain present beyond that height remains unclear.
Our goal is to determine whether truncations can be detected above 3 kpc height in the Milky Way-like galaxy NGC 4565, thus establishing the actual disk thickness. We also aim to study how the truncation relates to disk properties such as star formation activity or the warp.
We perform a vertical study of the edge of the disk of NGC 4565 in unprecedented detail. We explore the truncation radius at different heights above/below the disk mid-plane (0<z<8 kpc) and at different wavelengths. We use new ultra-deep optical data (μ_g,lim=30.5 mag arcsec^-2; 3 σ within 10 × 10 arcsec^2 boxes) in the g, r and i broad bands, along with near- and far-ultraviolet, Hα, and Hi observations.
We detect the truncation up to 4 kpc in the g, r and i ultra-deep bands which is 1 kpc higher than in any previous study for any galaxy. The radial position of the truncation remains constant up to 3 kpc while higher up it is located at a smaller radius. This result is independent of the wavelength but is affected by the presence of the warp.
We propose an inside-out growth scenario for the formation of the disk of NGC 4565. Our results point towards the truncation feature being linked to a star-forming threshold and to the onset of the disk warp.
The truncation of the disk of NGC 4565:
Cristina Martínez-Lombilla1,2,3,4
Raúl Infante-Sainz1,2,5
Felipe Jiménez-Ibarra6,3
Johan H. Knapen1,2
Ignacio Trujillo1,2
Sébastien Comerón2,1
Alejandro S. Borlaff7,8,9
Javier Román1,2,10
Received XXX; accepted XX
================================================================================================================================================================================================================================================================================
§ INTRODUCTION
Over several decades, the analysis of internal secular processes, star formation history, and the interstellar medium <cit.> has enabled significant advances in our understanding of the evolution of galaxies within their optical radius. In contrast, our knowledge of the outskirts of galaxies —the so-called low surface brightness (LSB) science— has come from a recent set of observational studies and theories that have provided precise details on the hierarchical nature of cosmological structure formation <cit.>.
One aspect of this advance relates to galaxy disk truncations, which constitute a well-defined and abrupt drop in the surface brightness typically observed at radial distances of ∼4 times the exponential scale length of the inner disk <cit.>. These truncations are found in three quarters of the thin disks in spiral galaxies <cit.>. However, the nature of disk truncations remains unclear and different scenarios have been proposed to explain their origin. Some researchers explain the truncation as the location of those stars having the largest angular momentum at the moment of the protogalaxy collapse <cit.>, while others have suggested that truncations are associated with a threshold in star formation <cit.>.
The location of the truncations at the edge of disks and therefore within the LSB regime (i.e. μ_V > 24 mag arcsec^-2) makes these features difficult to detect at large heights above the mid-plane. In consequence, the truncations have historically been considered a property of the galactic mid-plane. However, <cit.> studied the truncations of two edge-on nearby galaxies, NGC 4565 and NGC 5907, in a wide wavelength range from the near ultraviolet (NUV) to the near-infrared (NIR) at different altitudes above/below the galaxies' disk mid-planes. They found that the radial location of the truncation is independent of both wavelength and altitude above the galaxies' mid-plane. The truncation was detected as high as 3 kpc above the galaxies' mid-planes. In addition, they associated the truncation location with a star formation threshold. At the location of the truncation, they obtained a face-on deprojected stellar mass density of 1–2 M_⊙ pc^-2, very close to the critical gas density necessary to transform gas into stars (∼ 3-10 M_⊙ pc^-2), indicating an efficiency of ∼30 percent in transforming gas into stars. Independently, <cit.> confirmed the existence of such a star formation threshold in NGC 4565, although they proposed an accretion-based build-up of the outer disk. Recently, <cit.> analysed the disk of UGC 7321, a well-studied edge-on, low-mass, diffuse, isolated, bulgeless, and ultra-thin galaxy. They reported the discovery of a truncation at and above the mid-plane (∼ 0.5 kpc height, i.e., where 90% of the light comes from the thick disk) at all the probed wavelength ranges, from FUV to NIR. This truncation was also found to be linked to a star formation threshold.
Based on the above results, <cit.> proposed a physically motivated galaxy size indicator based on the location of the gas density threshold for star formation. As a proxy to identify this position, <cit.> suggested using 1 M_⊙ pc^-2. This stellar mass density value refers to the region where the gas density threshold for star formation in galaxies is found theoretically <cit.>. <cit.> explored at which stellar mass density value the truncation is found for a large number of galaxies. They found that 1 M_⊙ pc^-2 is a good approximation for galaxies spanning several orders of magnitude in mass.
In this work, we present a follow-up study to <cit.> in which we analyse the vertical structure of the disk of the edge-on Milky Way-like galaxy NGC 4565. Due to the limited depth of the data and image quality limitations, <cit.> did not find the altitude above the galactic mid-plane at which the truncation disappears. Thus, an evaluation of the consequences that the vertical extent of the truncation could have on the structure and origin of the galaxy disk components is missing. In this study, we aim to address the issue using new ultra-deep optical data from the 4.2 m William Herschel Telescope (WHT, μ_g,lim=30.5 mag arcsec^-2; 3 σ within 10 × 10 arcsec^2 boxes) in the g, r and i broad bands. We perform an unprecedentedly deep vertical analysis of the light and colour distribution of its disk. Our goal is to firmly establish the location of the truncation at any altitude and to give the vertical extent of the thin and thick disks of NGC 4565. Based on those results we discuss the origin of the two disk components. In addition, we broaden the study with Hα and Hi data, which allows us to analyse the stellar population and star formation rate (SFR) at the truncation and beyond.
NGC 4565 is a barred edge-on spiral galaxy with a similar mass to that of the Milky Way (88.5 ± 0.5 deg of inclination to the line of sight according to <cit.>, and v_rot = 243.6 ± 4.7 km/s from <cit.>). In terms of its mass assembly history, recent studies suggest that NGC 4565 has not experienced a strong interaction <cit.> that disturbs the disk structure. <cit.> proposed a tidal ribbon event to explain the fan-like structure in the northwest part of the outer disk. The physical properties of NGC 4565 and the absence of evidence of a major disruption event guarantee a reliable comparison of the vertical properties of the disk of NGC 4565 to those of the Milky Way. In addition, the combination of its nearby location in the Coma I Group (<cit.>; ∼ 13 Mpc distance) and the size of its disk (R ∼ 30 kpc; e.g., <cit.>) makes NGC 4565 the galaxy of this kind with the largest apparent size on the sky. This means that NGC 4565 provides the best spatial resolution (∼ 65 pc/arcsec) for a detailed study of a galaxy's vertical disk structure.
We have detected the highest truncation in a galaxy disk up to date. We measured the disk truncation up to 4 kpc above/below the NGC 4565 mid-plane, with a mean radial truncation position of 25.9 ± 0.4 kpc for the three g, r and i bands. This truncation feature is also found in the Hα image at the galaxy mid-plane and in Hi data up to ∼ 1.5 - 2 kpc height. Also, a colour analysis allowed us to measure the threshold of star formation associated with the truncation and the blue stellar populations of the disk warp. The results from the analysis of the truncation and the vertical disk structure of the edge-on Milky Way-like galaxy NGC 4565 are presented in this paper as follows. In Section <ref>, we describe the data in each wavelength range; we then explain the methods used to extract the information from that data in Section <ref>, giving details of key processes such as the mask construction, sky subtraction, PSF modelling, profile extraction, how we estimated the truncation radius and the star formation rates. The main results on the truncation radius and how they compare with previous works are presented in Section <ref>. We then discuss the implications of these results regarding the star formation activity at the edge of the disk in Section <ref>, the actual vertical extent of the disk in Section <ref>, and what is the role of the warp in the truncation location in this Milky Way-like galaxy in Section <ref>. Finally, we draw our conclusions in Section <ref>. All magnitudes are provided in the AB system.
§ DATA
We present an ultra-deep photometric study of the vertical structure of the disk of the edge-on galaxy NGC 4565 (see Fig. <ref>). NGC 4565 is a well-known nearby galaxy whose structural component parameters make it a suitable analogue of our Milky Way <cit.>.
In this work, we used the same physical parameters for NGC 4565 as in <cit.> (see their Table 1). The analysis was mainly performed in the optical wavelength range, including ultra-deep broad-band data in g, r, and i filters, and an extra image in a narrow Hα filter. We also used far- and near-ultraviolet (FUV and NUV, respectively) as well as Hi data to extract additional and complementary properties. In the following sections, we explain how each type of data was obtained.
§.§ New WHT ultra-deep broad-band optical data
We obtained the ultra-deep imaging data at the 4.2 m William Herschel Telescope (WHT; La Palma, Spain). The images were obtained with the Physics of the Accelerating Universe Camera <cit.> in three broad-band filters, g, r, and i, with total observing times of 4.8, 3.1, and 4.4 hours, respectively. The large field of view of PAUCam (∼1 × 1 deg^2) allows us to observe not only the target but also its surroundings, making it possible to perform an exquisite data reduction and sky background treatment. The images have a pixel size of 0.253 arcsec, that is, a resolution of 16 pc pix^-1 considering that the distance to the galaxy is 13.14 Mpc (obtained as the average of all distances to NGC 4565 calculated since 2010 in the NASA/IPAC Extragalactic Database[<http://ned.ipac.caltech.edu/>], NED). We also observed very bright stars to construct an extended point spread function (PSF) and correct the scattered light field (see Sect. <ref>).
We designed an observational strategy and data reduction optimised to detect and preserve the low surface brightness structures around our target. When observing, we followed a dithering pattern with large steps (of the order of the size of the galaxy) combined with different rotation positions of the camera. This allows us to correct systematic effects introduced by the instrument/telescope system and to perform a night-sky flat-fielding correction procedure <cit.>.
We briefly summarise the processing of the raw data. We obtained individual images with an exposure time of 200 seconds each. The PAUCam consists of 14 CCDs, with 4 channels for each CCD. Due to the illumination effects on the more external CCDs, we decided to use only the 8 central CCDs with the best illumination quality. Each channel was bias-corrected and flat-fielded. The master flat field images were obtained from the science images themselves. The methodology was to mask all sources on each individual image using <cit.>, normalise those images, and combine them to obtain the master flat field image. To subtract the sky background we masked all the signal pixels detected by <cit.> and fitted a very smooth surface model to the remaining background pixels. We then subtracted a 3-σ-clipped median value of the sky model from the science images. This procedure ensures the correction of systematics and the sky background while preserving the low surface brightness structures. Further details will be provided in Infante-Sainz et al. (in prep.).
After the correction of the systematic effects, we used the Astrometry[<http://Astrometry.net>] software <cit.> to produce a first-order astrometric solution that was later improved with <cit.>. Relative astrometry techniques are very difficult to implement in images with large dithering steps and, at the same time, obtaining an accurate astrometric solution is key in low surface brightness studies as small offsets in the alignment of the images will introduce a loss of signal-to-noise ratio (S/N) in the final stacked image.
The next step is the co-addition of all the aligned images to increase the S/N of the final very deep images of NGC 4565. We used a 3σ-clipped median to generate a final image per filter. Finally, the photometric calibration was done using SDSS DR12 <cit.>.
The final images of NGC 4565 have a limiting surface brightness of 30.5, 29.9, and 29.3 mag arcsec^-2 (3 σ; 10 × 10 arcsec^2) for the g, r, and i filters, respectively. They were calculated using <cit.> and following the procedure in <cit.> and in <cit.>. That is, we obtain the standard deviation of the images with all the sources masked (see Sect. <ref>) and apply this method in each band. These surface brightness limits are ∼3 mag arcsec^-2 deeper than in <cit.>. The zero point used to convert between instrumental and calibrated magnitudes is fixed to 22.5 mag for the three bands.
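For reference, this depth estimate can be reproduced with a few lines of Python (a minimal sketch, assuming a source-masked image in counts with masked pixels set to NaN; the function name and keyword defaults are ours, not those of the actual pipeline):

import numpy as np

def sb_limit(masked_image, zeropoint=22.5, pix_scale=0.253,
             nsigma=3.0, box_arcsec=10.0):
    """Surface brightness limit (mag arcsec^-2) at nsigma measured
    within a box_arcsec x box_arcsec area."""
    sigma_pix = np.nanstd(masked_image)              # pixel-to-pixel rms of the background
    # rms of the mean inside the box, expressed per arcsec^2
    sigma_area = nsigma * sigma_pix / (pix_scale * box_arcsec)
    return zeropoint - 2.5 * np.log10(sigma_area)

With the zero point of 22.5 mag and the PAUCam pixel size of 0.253 arcsec, this should return values close to the limits quoted above for each band.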
§.§ INT narrow-band Hα data
We took narrow-band images of the galaxy NGC 4565 using the Wide Field Camera (WFC), an optical mosaic camera located at the prime focus of the 2.5 m Isaac Newton Telescope (INT; La Palma, Spain). We used the narrow filter Hα 6568/95 centred at 6567.98 Å with a FWHM 95.63 Å (and a transmission >80%). We also got images in the R broad-band filter to subtract the contribution of the continuum in the narrow-band images. The WFC has a pixel size of 0.33 arcsec and a field of view of 34 × 34 arcmin^2, wide enough to cover our target diameter of 16.6 ± 0.1 arcmin at the isophotal level 25 mag arcsec^-2 <cit.>.
We performed a basic data reduction of both the Hα and R images using [<https://www.astro.uni-bonn.de/theli>] <cit.>, a software to automatically reduce astronomical images. Once the images were completely reduced, we subtracted the sky background from all the images in both filters. To do that, we adjusted a 2D polynomial function to a version of the images where all the sources were completely masked.
Then, we aligned and combined all the Hα images weighted by their exposure time into a final mosaic with the median value of each pixel obtained after applying a 3σ-clipping rejection algorithm. The total observing time on the source with the narrow-band filter was 16991.86 s. In the same way, we got the R broad-band mosaic, with a total exposure time of 3010.88 s.
Finally, we subtracted the continuum contribution to the Hα flux. The R-band continuum images were scaled and subtracted from the Hα filter image using a scaling factor of R/Hα = 0.0675. We refer the reader to <cit.> for further details about the method used to calculate this scale factor of the continuum image. There are strong light gradients in the observed data that could not be properly subtracted, particularly in the innermost ∼ 4 kpc.
The continuum-subtracted image was photometrically calibrated using publicly available SDSS DR12 data <cit.>; the calibrated zero point of the continuum-subtracted Hα mosaic is 5.9 × 10^-16 erg s^-1 cm^-2. The final image is shown in the second-to-last panel of Fig. <ref>.
§.§ VLA Hi data
Hi integrated intensity maps of NGC 4565 were provided by <cit.>. Yim's team obtained the raw data from the Very Large Array (VLA) archive, available at the National Radio Astronomy Observatory[<https://science.nrao.edu/facilities/vla/archive/index>] (NRAO). Then, they reduced them using the CASA (Common Astronomy Software Applications) software package <cit.> which was built for reprocessing the whole data set in a uniform procedure. We refer the reader to <cit.> for a more detailed description of the data acquisition and reduction process.
In Table <ref> we list the observing parameters and in the bottom panel of Fig. <ref> we show the Hi integrated intensity maps of NGC 4565 in units of Jy beam^-1 km s^-1. The one-sided warp of NGC 4565, noted in previous studies <cit.> and in our optical data (see Sect. <ref>) is evident. We visually identified the Hi warp onset radius at ∼ 26-27 kpc.
§.§ GALEX ultraviolet data
We use the deepest available data from the Galaxy Evolution Explorer <cit.>. GALEX images have a circular field of view (FOV) of 1.2 degrees, a pixel size of 1.5 arcsec, and a spatial resolution (FWHM) of 4.2 arcsec and 5.3 arcsec in the FUV and NUV channels, respectively. The effective wavelengths are 1516 Å (FUV) and 2267 Å (NUV). The FUV image has an exposure time of 1693.05 seconds and a GALEX zero point magnitude m_0, FUV = 18.82 mag. In the case of the NUV data, we use the same image as in <cit.>, with a long exposure time of 12050.15 seconds, which allows for a surface brightness limit of μ _AB∼30.5 mag arcsec^-2 (1σ), and m_0, NUV = 20.08 mag. The second and third panels in Fig. <ref> show the NUV and FUV images, respectively.
§ METHODS
As mentioned in the Introduction (Sect. <ref>), this work is a follow-up study of <cit.>. Here we use deeper images of NGC 4565 and a wider wavelength range to answer the open questions from our previous work. These are: 1) what is the altitude above the galaxy mid-plane where the truncation disappears?; and 2) how many stars are being formed at the very edge of the disk of NGC 4565, that is, what is the star formation rate (SFR)?
Thus, most of the techniques applied in this study were previously developed as a semi-automatic code optimised for LSB data, as described in <cit.> (see their Sect. 3). The following sections only detail the changes with respect to that previous work.
§.§ Mask
The aim of the mask is to cover the flux from all the sources other than our target. However, this is a complicated task when working with data that reach LSB levels such as ours, where we detect structures at μ _g≳30 mag arcsec^-2. At these depths, even the light from very faint objects in the surroundings of NGC 4565 could contribute to the measured galaxy flux. Thus, a dedicated mask is key to obtaining reliable surface brightness measurements.
There are two main differences relative to the masking procedure of <cit.>. The first is that here we used Python-based routines only, as they are built from modular, object-oriented scripts that allow for easy implementation of modifications and improvements. In particular, we used the tools provided by <cit.> to detect astronomical sources using image segmentation. The second difference is the two-step approach using a “hot+cold” combined mask <cit.>, as in e.g., <cit.>, <cit.>, or <cit.>. We first obtain a “cold” mask optimised to detect bright and extended sources. Then, over an image masked with the “cold” mask, we get a “hot” mask that accounts for faint and small objects. In order to smooth the noise and maximise the sensitivity of the algorithm, the images were filtered with 2D circular Gaussian kernels (FWHM sizes of ∼1.3 arcsec and ∼0.8 arcsec for the “cold” and “hot” masks, respectively) prior to thresholding. The detection threshold is 1.1σ above the background.
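A schematic version of this two-step segmentation is sketched below in Python with photutils; the kernel sizes and threshold follow the values quoted above, while the helper name and the minimum source size (npixels) are illustrative assumptions rather than the exact settings of our code:

import numpy as np
from astropy.convolution import Gaussian2DKernel, convolve
from photutils.segmentation import detect_threshold, detect_sources

def segmentation_mask(data, fwhm_arcsec, pix_scale=0.253, nsigma=1.1, npixels=10):
    """Boolean mask of detected sources after Gaussian smoothing."""
    kernel = Gaussian2DKernel(fwhm_arcsec / pix_scale / 2.355)   # FWHM -> sigma, in pixels
    smoothed = convolve(data, kernel)
    threshold = detect_threshold(data, nsigma=nsigma)            # 1.1 sigma above background
    segm = detect_sources(smoothed, threshold, npixels=npixels)
    return segm.data > 0 if segm is not None else np.zeros(data.shape, bool)

# "cold" pass: bright, extended sources on the original image
cold = segmentation_mask(image, fwhm_arcsec=1.3)
# "hot" pass: faint, small sources, with cold-masked pixels replaced by the background level
bkg = np.nanmedian(image)
hot = segmentation_mask(np.where(cold, bkg, image), fwhm_arcsec=0.8)
mask = cold | hot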
Once the “hot+cold” combined masks were finished, the extent of the masks was checked visually following <cit.> (see their Sect. 3.1). The aim of this last step is to make sure the masks cover all undesirable flux. In the case of the NUV data, we directly used the mask obtained in <cit.>.
§.§ Sky background treatment
We used co-added, sky-rectified versions of all our images in which, in general, a non-aggressive sky subtraction strategy had been applied (see Sect. <ref>). In addition, we performed a second-order (also non-aggressive) local background subtraction. That is, we evaluated the sky covering only our region of interest in the outermost parts of the disk of NGC 4565. Following the procedures in <cit.> and in <cit.> (see details in their Sect. 3.2), we measured the mean background levels well beyond the disk edges through very extended radial surface brightness profiles. We consider as the sky the flattest part of the outermost region of the extended profiles (R ≳ 700 arcsec). We measured the mean values in counts of that sky region using a 3σ-clipping rejection algorithm and then subtracted the mean sky from the corresponding image. This local second-order background evaluation allows us to identify clearly where the surface brightness profiles of the disk of NGC 4565 reach the sky limit and, in consequence, the location where the sky background starts to dominate the profiles. An example of the extracted profiles for our g broad-band optical data is shown in Fig. <ref>.
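In practice, this local second-order correction amounts to subtracting a single 3σ-clipped mean measured beyond R ≳ 700 arcsec. A minimal sketch follows (assuming a precomputed source mask and a per-pixel map of radial distances along the major axis; the function name is ours):

import numpy as np
from astropy.stats import sigma_clipped_stats

def subtract_local_sky(image, source_mask, radii_arcsec, r_sky=700.0):
    """Subtract a constant local sky measured beyond r_sky arcsec.

    image        : sky-rectified image in counts (masked pixels may be NaN).
    source_mask  : boolean array, True where sources are masked.
    radii_arcsec : per-pixel radial distance from the galaxy centre, in arcsec.
    """
    sky_pixels = image[(radii_arcsec > r_sky) & ~source_mask]
    mean, median, std = sigma_clipped_stats(sky_pixels, sigma=3.0)
    return image - mean, std   # sky-subtracted image and the residual rms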
§.§ PSF correction in WHT broad-band optical data
The PSF characterises how the light coming from a point source is affected by the combination of the telescope, the instrument, the detector, and the atmosphere, and measures the extent of the scattered light <cit.>. Moreover, the outer wings of the PSF of bright stars can add flux to the outer parts of extended objects or to low surface brightness objects around them. Thus, properties of thick disks, stellar haloes, and faint outskirts of galactic disks can be severely influenced by the wings of the PSF <cit.>. For these reasons, it is crucial not only to model the PSF of relatively faint and well-resolved stars but also to construct our own very extended PSF that reaches the whole extent of those wings.
We built three extended (R ∼ 20 arcmin) PSF models for each of the g, r, and i filters of the PAUCam at the WHT telescope. Following the methodology outlined in <cit.> and <cit.>, we combined three very bright stars (saturated) with magnitudes 2.57, 4.82 and 5.84 mag <cit.>, and 50 fainter stars of 11 and 13 mag. The resulting PSFs have full-width-at-half-maximum (FWHM) values of 0.8 arcsec in each of the g, r, and i bands. We fit and subtracted all stars in the field-of-view of the camera brighter than G=16 mag, according to the Gaia EDR3 catalogue, in order to minimise contamination in our photometry. To do so, we followed the steps in Sect. 3.2 of <cit.>. These scattered light-subtracted images were the ones used in the following analysis in this work.
The extraordinary depth of the images as well as the extended PSFs characterisation allowed us to perform a detailed and reliable study of the outermost parts of the disk of NGC 4565, including colour and stellar population analyses above/below the mid-plane.
As explained above, the physical properties derived from the outskirts of a galactic disk can be strongly affected by scattered light. Besides this, that PSF effect has been proven to be more dramatic in edge-on than in face-on galaxies, also affecting their outer colours <cit.>. The strength of the PSF effect is also highly correlated with the depth of the images <cit.>. In consequence, it is not only necessary to correct the scattered light field due to the bright stars, but also to the scattered light from the galaxy itself. Thus, NGC 4565 has also to be modelled and corrected.
Interestingly, <cit.> found that the effect of the PSF only slightly influenced the radial flux distribution and the slope of the surface brightness profiles of the edge-on galaxies NGC 4565 and NGC 5907. Also, in the same work they verified that the radial position of the galaxy truncations was not affected by the PSF (see their Sect. 4.3 and Figures 1 and 4). <cit.> reported a similar result in a study of the disk breaks in a sample of Type-III S0 and E/S0 galaxies at 0.2 < z < 0.6. They found that the PSF tends to increase the scale lengths of the inner and outer disk profiles, but it does not significantly affect either the central surface brightness values of the inner and outer disks or the break location. However, <cit.> showed that a careful PSF treatment is absolutely indispensable for deep imaging of extended objects because if the PSF effect is not accounted for, flux and mass measurements of the outskirts of disks of edge-on galaxies can be overestimated when reaching surface brightness values of ∼28 mag arcsec^-2 or deeper. In particular, they found that the mass of a thick disk can be overestimated by a factor of 1.5–2 in low-mass sources (∼ 10^9 M_⊙) and of 2.5–4 in intermediate- to high-mass galaxies (> 10^10 M_⊙). Previous works such as <cit.> did not find this mass excess because they only reached surface brightness levels of ∼26 mag arcsec^-2. Although the sample of five galaxies of <cit.> is small, it is worth checking whether the PSF is now affecting the location of the truncation in our ultra-deep WHT optical images, especially in the highest (and faintest) parts of the disk of NGC 4565.
We had the scattered light-subtracted images of WHT optical data in the g, r, and i broad bands in which the light of the foreground stars had been corrected. However, it is still necessary to model the effect of the PSF over the galaxy itself. To do that, we use imfit[Precompiled binaries, documentation, and full source code (released under the GNU Public License) are available at the following website: <https://www.mpe.mpg.de/ erwin/code/imfit/> ] <cit.>, which allows us to model the intrinsic light distribution of NGC 4565 and convolve it with the image of the extended PSF. Then, the PSF-convolved model of the galaxy is fitted to the observed data. In this way, a new image of the source is built as the result of the PSF-deconvolved model of NGC 4565 plus the PSF-convolved fitting residuals. These residuals consider all the non-symmetric features, such as the spiral arms, that the model cannot properly fit. A comprehensive overview of the steps, assumptions, and optimization algorithms used in this 2D deconvolution process have already been addressed in previous works <cit.>.
To model NGC 4565 we had to use four galaxy components, as indicated by its morphology <cit.>: a bulge using an elliptical 2D Sérsic function with generalised ellipses (“boxy” to “disky” shapes) for the isophotes instead of pure ellipses <cit.>; a bar represented by a 2D Sérsic function <cit.>; a disk truncated at the same distance as the data represented with a 3D broken exponential disk function; and finally, a halo component with a purely exponential function <cit.>.
An example of the final 2D deconvolved model of NGC 4565 and its components is shown in Fig. <ref> (Appendix <ref>). We see that the PSF is clearly affecting the outermost regions of the disk of NGC 4565 as the width of the galaxy changes after the deconvolution process. This is an effect clearly visible due to the extraordinary depth of our images. By using the scattered light-corrected images and these 2D PSF-deconvolved models of the galaxy, we are in the position to obtain reliable photometry which impacts the subsequent analysis of the physical properties of the outskirts regions in NGC 4565. We, therefore, use the images of the 2D PSF-deconvolved models of the galaxy for further analysis in this work. Hereafter, we refer to the 2D PSF-deconvolved model of the galaxy in each of the bands as “the galaxy model”.
§.§ Profile extraction
§.§.§ Broad-band optical profiles
Our goal is to measure the height above the mid-plane where the disk truncation disappears. For that, we need to extract radial surface brightness profiles (RSBP) at different altitudes with the best possible vertical resolution and up to the highest regions that the S/N of our images allows. A galaxy RSBP can be extracted by calculating the fluxes through a slit with a given width along the radial axis. In general, we followed the method explained in <cit.> (see details in their Sect. 3.3) over the galaxy models (details in Sect. <ref>). However, the depth of our new data allowed us to implement some statistical improvements as well as a slightly different slit configuration. These changes are described below:
* For each bin, we calculated the surface brightness by applying the Galactic absorption coefficient correction at each wavelength from the NED <cit.>. For simplicity, we applied the same correction coefficient throughout the whole galaxy (A_g =0.051, A_r =0.035, and A_i =0.026 mag). Each resulting surface brightness value was obtained by calculating the 3σ-clipping median of the given set of pixels.
* We increased the number of bins in each RSBP to 180 owing to the higher S/N of the images (in <cit.> there were 150 bins per RSBP). The bins remain evenly spaced on a radial logarithmic scale.
* The uncertainties for each bin were defined as in <cit.>, but this time the RMS quantity was determined over 25000 different, well-distributed background regions of ∼ 10 × 10 arcsec^2 in the masked image.
* The new ultra-deep optical data from the WHT allowed us to reach higher altitudes above and below the NGC 4565 mid-plane. Thus, the widths and vertical locations of the shifted RSBP are slightly different from the configuration in <cit.>. Now, the boxes of 0.5 kpc height reach up to 4 kpc. Then we have two steps of 1.0 kpc at 5.0 and 6.0 kpc in height, and one last step of 1.5 kpc at an altitude of 8.0 kpc. In this way, we reach the same external regions as in <cit.> but with the better spatial resolution and higher S/N allowed by our data.
* Our technique provides reliable RSBPs down to μ _g= 30.5 mag arcsec^-2 (3 σ; 10 × 10 arcsec^2).
The size and location of all the apertures and their corresponding bins for the broad-band optical profiles are shown in the first panel of Fig. <ref>.
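For illustration, the slit-based extraction described above can be sketched as follows (a simplified Python version assuming the image has been rotated so that the major axis lies along the x-axis and that masked pixels are NaN; the function name, argument names, and radial range are illustrative, not the exact pipeline settings):

import numpy as np
from astropy.stats import sigma_clipped_stats

def radial_sb_profile(image, x0, y0, z_lo, z_hi, pix_scale=0.253,
                      n_bins=180, r_min=1.0, r_max=1100.0,
                      zeropoint=22.5, a_lambda=0.0):
    """Radial surface brightness profile (mag arcsec^-2) in a horizontal slit
    spanning heights z_lo..z_hi (arcsec) above the mid-plane, using a
    3-sigma-clipped median per logarithmically spaced radial bin and a
    constant Galactic extinction correction a_lambda."""
    ny, nx = image.shape
    x = (np.arange(nx) - x0) * pix_scale          # radial offset along the major axis
    z = (np.arange(ny) - y0) * pix_scale          # height above the mid-plane
    xx, zz = np.meshgrid(x, z)
    in_slit = (zz >= z_lo) & (zz < z_hi)

    edges = np.logspace(np.log10(r_min), np.log10(r_max), n_bins + 1)
    r_mid, sb = [], []
    for lo, hi in zip(edges[:-1], edges[1:]):
        sel = in_slit & (np.abs(xx) >= lo) & (np.abs(xx) < hi)
        vals = image[sel]
        vals = vals[np.isfinite(vals)]            # drop masked pixels
        if vals.size == 0:
            continue
        _, med, _ = sigma_clipped_stats(vals, sigma=3.0)
        if med > 0:
            r_mid.append(0.5 * (lo + hi))
            sb.append(zeropoint - 2.5 * np.log10(med / pix_scale**2) - a_lambda)
    return np.array(r_mid), np.array(sb)

This sketch folds the two sides of the disk together; the per-quadrant profiles discussed later follow by restricting the selection to one sign of the radial offset and of the height.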
§.§.§ Hα profiles
The Hα stacked and continuum-subtracted image needs a different treatment in terms of galactic extinction correction. For this narrow filter, it is also required to remove possible contributions to the total flux from other emission lines. In order to extract RSBPs in Hα we undertook the following steps:
* To correct for total absorption for the galaxy, we needed to determine the contribution from both, the foreground Galactic absorption in the R band, A(R), and the internal absorption of NGC 4565 in the Hα narrow filter, A(Hα). Thus, the total absorption in magnitude units can be obtained as the sum of them: A_T = A(R) + A(Hα). For A(R), we used the value given by the NED <cit.>. In the case of A(Hα), several solutions have been proposed in the literature <cit.>. In this work we took the approach of <cit.> where A(Hα) = 0.3 when M_B ≤ -16 <cit.>.
* As our narrow-band filter is wider than 35-40 Å, we also applied the statistical correction to the Hα fluxes for [Nii] forbidden lines contamination at λ6548, 6584 Å. We estimated this contribution using the expression from <cit.>: log([Nii]/Hα) = 0.54 if M_B≤-21. As the FWHM of the Hα 6568/95 filter is 95.63 Å there is no need to correct the [Nii]/Hα ratio for the transmission profile of the filter <cit.>.
* After all the above corrections, we extracted one RSBP along the galaxy mid-plane in a region of 1 kpc width in physical flux units (see the second to last panel in Fig. <ref>). This was the best spatial resolution and the higher height we could reach with the current data. However, the measured flux in the inner galaxy disk (≲ 4 kpc) is slightly overestimated as we could not properly subtract the sky emission and the continuum due to strong light gradients in the observed data.
§.§.§ Hi profiles
We extracted RSBPs from the Hi data as shown in the bottom panel of Fig. <ref>. One profile was extracted along the galaxy mid-plane and the rest at four altitudes above/below it: at 0.5, 1.5, 3, and 5 kpc. The profiles have an increasing width from 0.5 kpc to 2 kpc the higher the position above/below the galaxy mid-plane. However, due to the nature of these data, the procedure is slightly different from the one explained above for the data in other wavelength ranges.
Our VLA zeroth-moment intensity maps are in Jy beam^-1 km s^-1 units but we need to extract a surface mass density profile, this is, in units of M_⊙ pc^-2. To do that, we first converted the value of each bin of the RSBP from flux density per area in Jy beam^-1 to brightness temperature in K, by specifying the beam area. We used the Rayleigh-Jeans equivalent temperature as it shows a linear relation between flux and temperature. This equivalence is usually known as “Antenna Gain” as the flux density sensitivity at a given frequency is related to the aperture size, while the telescope brightness sensitivity is not. Thus, the Rayleigh-Jeans relation is only dependent on the aperture size. For a more comprehensive explanation, see equations 8.16 and 8.19 in <cit.>.
The derived values in K km s^-1 were converted to Hi surface mass densities using the optically thin approximation from <cit.>:
N (HI) [atoms cm^-2] = 1.823 × 10^18 I _HI [K km s^-1] .
This, together with the mass of the hydrogen atom and the corresponding unit conversions, returns the surface mass density values in M_⊙ pc^-2. The uncertainties for each bin of the profiles reflect the standard deviation of the fluxes within each bin. These surface mass density measurements are not corrected for the inclination of the galaxy. The Hi profiles are shown in Fig. <ref>.
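The full conversion chain, from Jy beam^-1 km s^-1 through brightness temperature and column density to M_⊙ pc^-2, can be summarised in a short sketch (the Rayleigh–Jeans beam conversion constant is the standard one for a Gaussian beam; the beam sizes should be taken from Table <ref>, and the function name is ours):

def hi_surface_density(I_jybeam_kms, bmaj_arcsec, bmin_arcsec, freq_ghz=1.4204):
    """Convert an Hi zeroth-moment value in Jy beam^-1 km s^-1 into a
    surface mass density in M_sun pc^-2 (optically thin approximation,
    not corrected for inclination)."""
    # Rayleigh-Jeans brightness temperature for a Gaussian beam:
    # T_B[K] = 1.222e3 * S[mJy/beam] / (nu[GHz]^2 * theta_maj * theta_min [arcsec^2])
    I_K_kms = 1.222e3 * (I_jybeam_kms * 1e3) / (freq_ghz**2 * bmaj_arcsec * bmin_arcsec)
    # Column density (Eq. above): N(HI) = 1.823e18 * I[K km/s] atoms cm^-2
    N_HI = 1.823e18 * I_K_kms
    # 1 M_sun pc^-2 corresponds to ~1.25e20 H atoms cm^-2
    return N_HI / 1.249e20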
§.§.§ UV profiles
The NUV data has poorer spatial resolution and S/N than the optical data. For this band we enlarged the height of the uppermost bins to increase the S/N (see the second panel in Fig. <ref>). We extracted only 3 profiles: at the mid-plane (of 1 kpc width), and at 1.5 and 5 kpc above and below it (the latter two of 2 kpc width). There are two main reasons why we adopted this criterion: first, the NUV data has already been studied in <cit.>; and second, we only use the NUV image to compare the profiles at three particular vertical locations in Sect. <ref>.
The FUV data is used for qualitative comparison purposes on the disk light emission in different regions. Therefore, we only extracted one RSBP along the galaxy mid-plane in a region of 1 kpc width.
§.§ Truncation position
Once we extracted all the RSBPs at all the different heights above and below the galaxy mid-plane and in all the wavelengths, the last step of the process was to determine the position of the truncation along the radial axis. We define the truncation as the sharp edge in the disk of a highly inclined galaxy <cit.>. This sharp edge is seen in an RSBP as a change in the slope, mimicking a type II break but at the disk edge. In consequence, the regions before and after the truncation can be modelled with straight lines of different slopes.
We developed a routine to detect the truncation. This routine fits two exponential functions to the g, r, i and Hi RSBPs to fit the regions of the disk inside and outside the truncation. When working in logarithmic scales of flux units, these functions are seen as lines that intersect at a given point. That point is considered as the truncation radial location. The fitting of exponential functions is restricted to either the inner or the outer part of the disk. Those boundaries are estimated by a visual inspection using an interactive interface of the routine. However, with the aim of reaching unbiased results, we carried out a fitting procedure afterwards based on Sequential Least Squares Programming (SLSQP) optimisation algorithm and least squares statistic. These algorithms perform a broken linear fit of the two lines and find the intersection point if any. This fitting procedure is combined with an iterative 3σ-clipping outlier removal technique in which, given a maximum number of iterations (3 in our case), outliers are removed and the fitting is performed for each iteration until no new outliers are found or the maximum number of iterations is reached. In addition, the routine performs a Bootstrap re-sampling over 100 fitting points subsets of each of the two lines in each RSBP. Bootstrap re-sampling is a method that estimates the variability of our results by making a more detailed analysis of the parameter distributions. The combined set of bootstrapped parameter values can be used to estimate confidence intervals, and consequently, to discern between the multiple good solutions.
The derived truncation radii, obtained as the location where both lines intersect, are almost always in agreement with those derived by eye. We computed the errors as the standard deviation of the determination of the truncation position distribution for a given band and height.
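A simplified version of this fitting scheme, combining an SLSQP broken-line fit, iterative 3σ clipping, and bootstrap re-sampling, is sketched below. It works on a profile in magnitude units and assumes the two lines are continuous at their intersection; the function names, initial guesses, and bounds are illustrative choices rather than the exact settings of our routine:

import numpy as np
from scipy.optimize import minimize

def broken_line(r, mu0, s_in, s_out, r_break):
    """Two straight lines (inner/outer slopes) meeting at r_break."""
    inner = mu0 + s_in * r
    outer = mu0 + s_in * r_break + s_out * (r - r_break)
    return np.where(r < r_break, inner, outer)

def fit_truncation(r, mu, r_guess, n_boot=100, n_clip_iter=3, seed=0):
    """Least-squares broken-line fit with 3-sigma clipping and a bootstrap
    estimate of the truncation-radius uncertainty."""
    rng = np.random.default_rng(seed)

    def solve(ri, mi):
        cost = lambda p: np.sum((mi - broken_line(ri, *p)) ** 2)
        s0 = (mi[-1] - mi[0]) / (ri[-1] - ri[0])
        p0 = [mi[0] - s0 * ri[0], s0, 2.0 * s0, r_guess]
        res = minimize(cost, p0, method="SLSQP",
                       bounds=[(None, None), (None, None), (None, None),
                               (ri.min(), ri.max())])
        return res.x

    # iterative 3-sigma clipping of outliers around the current model
    keep = np.ones(r.size, bool)
    for _ in range(n_clip_iter):
        pars = solve(r[keep], mu[keep])
        resid = mu - broken_line(r, *pars)
        new_keep = np.abs(resid) < 3.0 * np.std(resid[keep])
        if new_keep.sum() == keep.sum():
            break
        keep = new_keep

    # bootstrap re-sampling of the retained points
    idx = np.flatnonzero(keep)
    boots = [solve(r[s], mu[s])[3]
             for s in (rng.choice(idx, size=idx.size, replace=True)
                       for _ in range(n_boot))]
    return pars[3], np.std(boots)   # truncation radius and its uncertainty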
§.§ Star formation rate (SFR)
Hα is one of the preferred tracers of star formation as it is a strong emission line with a short timescale (≲ 10 Myr, ), relatively easy to observe, and with a good spatial resolution. However, this narrow wavelength range also presents many uncertainties and difficulties due to extinction, [Nii] contamination, diffuse/absorbed fractions, or sensitivity to the upper initial mass function (IMF) slope in the very upper mass range. So, in order for Hα to be a precise tracer of the SFR one must take into account extinction. Moreover, when using narrow-band filters around Hα to measure the luminosity of the line, it is required to consider the contamination of the Hα flux by the neighbouring [Nii] λ 6548 and 6584 lines. These two corrections were addressed in this work (see Sect. <ref>).
The SFR can be obtained from the measured Hα fluxes. We applied the following relation between the Hα line intensity and the SFR from <cit.>:
SFR[M_⊙ yr^-1] = 5.5 × 10^-42 L(Hα) ,
with L(Hα) being the luminosity, determined as follows:
L(Hα)[erg s^-1] = 4π [D (3.086 × 10^24)]^2 F_Hα ,
where D is the distance to the galaxy in Mpc and F_Hα is the Hα flux corrected for absorption and [Nii] contamination. For the estimation of this SFR conversion factor, we assumed a “Kroupa” IMF <cit.>.
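The two expressions above translate directly into a short helper (a minimal sketch; the constant 3.086 × 10^24 cm Mpc^-1 and the distance of 13.14 Mpc are those quoted in Sect. <ref>, and the function name is ours):

import numpy as np

MPC_CM = 3.086e24   # cm per Mpc, as in the luminosity equation above

def halpha_sfr(flux_cgs, distance_mpc=13.14):
    """SFR in M_sun/yr from an extinction- and [NII]-corrected Halpha flux
    in erg s^-1 cm^-2, using the Kroupa-IMF calibration quoted above."""
    lum = 4.0 * np.pi * (distance_mpc * MPC_CM) ** 2 * flux_cgs   # erg/s
    return 5.5e-42 * lum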
§ DISK TRUNCATION RADIUS RESULTS AND COMPARISON WITH PREVIOUS WORKS
In the following sections, we highlight the main outcomes derived after applying the methods described in Sect. <ref> and how these findings fit within the current context of the field. We have used the deepest available data of the edge-on Milky Way-like galaxy NGC 4565 in a broad wavelength range, to measure the disk truncation radius at different heights above/below the mid-plane. To do that, we have extracted multiple radial surface brightness profiles of the disk of NGC 4565 as shown in Fig. <ref>. Then, we modelled those profiles and fitted the truncation location (see Sect. <ref>). In the following sections, we detail the results derived in this work and compare them with previous relevant studies.
§.§ Truncation detected up to 4 kpc above/below the NGC 4565 mid-plane
We obtained a mean radial truncation position of 25.9 ± 0.4 kpc for the three g, r, and i bands and for all the altitudes above/below the galaxy mid-plane, up to 4 kpc. Beyond this height, the truncation is not detected. These findings are shown in the top panel of Fig. <ref>, where we put together the values of the radial truncation positions obtained from the RSBPs of the galaxy models and those of the optical broad-bands (see Sect. <ref>). The full set of RSBPs above/below the galaxy mid-plane are shown in the Appendix <ref> while in Table <ref> are the values of the corresponding scale lengths before and after the truncation obtained in the fittings of the profile. In Figs. <ref>, <ref>, <ref>, and <ref> we have the RSBPs extracted from the observed data, while Figs. <ref>, <ref> and <ref> show the ones extracted from the models of NGC 4565. As mentioned in Sect. <ref>, we are going to use the optical data from the latter for further results and discussion. The error bars in the profiles account for the uncertainties in the source detection, the detector noise, and the sky fluctuations.
The lower S/N of the data in <cit.> prevented any reliable measurement of the radial position of the disk truncation above 3 kpc in height. However, in this work we confidently detect the truncation up to 4 kpc in the ultra-deep optical data, that is, 1 kpc higher than in any previous study of any galaxy. These high-altitude off-plane truncation detections are shown in detail in Fig. <ref>. Beyond 4 kpc the truncation is not detected, while the uncertainties of the surface brightness measurements are smaller than the fluctuations of the profiles. This guarantees that the data provide enough S/N at high altitudes above the galaxy mid-plane, ensuring the non-detection of the truncation beyond 4 kpc.
The radial position of the truncation remains constant up to 3 kpc, in agreement with <cit.>. Further up, we have two more detections at 3.5 and 4 kpc, but they are located at a smaller radius (see Fig. <ref>). In other words, we see a shift in the radial location of the truncation towards the centre of the galaxy above 3 kpc: the higher the altitude, the closer to the galactic centre. This shift contributes to the differences with previous studies when obtaining the mean radial position of the truncation. In this work, the mean truncation radius in the optical broad bands is 25.9 ± 0.4 kpc, a slightly smaller distance than the 26.4 ± 0.4 kpc obtained by <cit.> (see their Sect. 4.4), but in agreement within the errors. However, if we do not take into account the two higher detections at 3.5 and 4 kpc in our estimation of the mean location of the truncation, we obtain a value of 26.1 ± 0.2 kpc, similar to what was found before by <cit.>.
Other works have also found this independence of the location of the truncation with height above/below the galaxy mid-plane in edge-on disk galaxies <cit.>. However, none of them have reached our combination of high spatial resolution, high altitude in the disk, and ultra-deep imaging. In the case of <cit.>, they also studied the vertical light distribution of the disk of NGC 4565, between ∼2.7–7 kpc above the galaxy mid-plane. Using lower spatial resolution data, they found the truncation radius located at 27.5 ± 0.8 kpc up to a height of 3 kpc. They did not detect any evidence of a truncation at large heights.
We also explored whether there is any significant difference in the position of the truncation depending on the disk quadrant. To do that, we extracted RSBPs from each disk quadrant at five different heights above/below the mid-plane as shown in Fig. <ref>. The truncation is systematically sharper in the right side of the disk (NW; red profiles in Fig. <ref>). Also, the surface brightness of the truncation is slightly brighter (∼0.3 mag arcsec^-2) in the right disk, although this could be expected from the fact that NGC 4565 is not perfectly edge-on. The light profiles show different features depending on the disk side. On the right side (NW), the profiles are composed of two exponential light distributions that intersect at the truncation radius. On the left side of the disk (SE), the light shows an additional break located at ∼ 21 kpc. This inner break is clearly visible in the mid-plane profile but it gradually vanishes as height increases. The asymmetry in the disk of NGC 4565 has been previously reported by <cit.> and <cit.>. The latter authors associated the disk asymmetry with a tidal ribbon produced by the fan-like feature located around the northwest disk side.
From the aforementioned profiles, we measured the truncation radius for each disk quadrant separately. Despite the disk light distribution asymmetry, the truncation radius remains at the same location and almost constant within the uncertainties for all the quadrants and heights above/below the mid-plane up to where we have enough S/N, with the only exception of the quadrant with a clear warp feature (north quadrant). This is shown in the central and bottom panels in Fig. <ref>. When there is no warp, the truncation radius remains constant. However, the truncation radius systematically increases with height above the galaxy mid-plane in the presence of the warp. It seems as if the truncation traces the warp feature, at least, up to 3 kpc height. The possible role of the warp on the origin of the disk of NGC 4565 will be further discussed in Sect. <ref>.
In a previous work, <cit.> reported the truncation location at a smaller radius on the left (SE) side of the disk. However, we consider that the truncation at a smaller radius in the SE side of the disk is an inner break. Then, at 26 kpc, we measure a second decline in the light distribution that coincides with the truncation radius on the right side of the disk. The higher spatial resolution and depth of our data in comparison with that in <cit.> allowed us to distinguish both disk features.
§.§ Truncation radius remains independent of the wavelength
The radial position of the NGC 4565 truncation is the same, within errors, over our full wavelength range, independently of the height above/below the NGC 4565 mid-plane. We find the same results when measuring the truncation radius in each disk quadrant separately. This is illustrated in Fig. <ref> and also in Fig. <ref> for the optical g, r, and i bands only. For any given altitude, the radial location of the truncation is not systematically either closer to or further from the galaxy centre in any of the filters. This result confirms our expectations, as previous studies of NGC 4565 found this independence of the radial location of the truncation. <cit.> extracted RSBPs in the g and r bands with the same outcome, as did <cit.> in a wide wavelength range, from the NUV to the near-infrared (3.6 μm).
There is an evident sharp decline in the brightness of the Hα emission around the truncation region in the mid-plane. We also detect such a decline in the surface mass density of the Hi gas component around the truncation, which gradually weakens with height and then completely disappears beyond 3 kpc above/below the galaxy mid-plane. We detect the truncation feature in the Hi gas at radii of 24.0 ± 1.2, 24.5 ± 1.2, and 27.4 ± 1.6 kpc for the mid-plane, 0.5, and 1.5 kpc heights, respectively (see the whole set of Hi RSBPs in Fig. <ref>). A similar trend is seen in both the FUV and NUV data (see Fig. <ref>). It is clear that the disk truncation is also detected in the FUV, Hα, and Hi data, although it is affected by the lower spatial resolution and/or depth (as previously explained in Sects. <ref>, <ref> and <ref>). These findings extend the independence of the location of the truncation in the Milky Way-like galaxy NGC 4565 to a wider wavelength range, from the FUV to Hi, in all measured wavelengths (i.e., FUV, NUV, g, r, i, Hα, 3.6 μm, and Hi).
Our findings are in good agreement with <cit.>. They extracted light profiles of the disk of UGC 7321 from FUV, optical (grz combined mean), and NIR (3.6 μm) images. They reported the discovery of a truncation at and above the mid-plane at all the probed wavelength ranges. This is particularly interesting due to the pristine nature of UGC 7321, a well-studied edge-on, low-mass, diffuse, isolated, bulgeless, and ultra-thin galaxy. As suggested by <cit.>, this supports that disks and truncations can form via internal mechanisms alone.
§.§ PSF affects the outskirts of the disk of NGC 4565 but not the truncation radius
To ensure that our truncation measurements were not affected by the PSF, we fitted analytical 2D models convolved with the PSF to the ultra-deep optical broad-band data (see Sect. <ref>). From these models, we built the galaxy PSF deconvolved models, which are an approximation to the observed images, but without the PSF effects and with the noise and intrinsic asymmetries of the galaxy such as the warp. We refer to those models as the “galaxy models”.
The PSF affects the light distribution in the outermost regions of the disk of NGC 4565, as there is more flux in the observed data than in the galaxy models, as also previously reported in <cit.>. Despite that, the truncation radius remains the same within the uncertainties when accounting for the PSF.
To prove the above, we extracted a full set of RSBPs from the final models of NGC 4565 in the optical broad-bands (Figs. <ref>, <ref> and <ref>). From these profiles, we determined the radial positions of the truncation at each height above/below the NGC 4565 mid-plane as described in Sect. <ref>. We did the same for the corresponding RSBPs extracted from the observed data from Figs. <ref>, <ref> and <ref>. We show all those radial positions of the truncation in the top panel of Fig. <ref>. As previously stated, there are no systematic trends or behaviour between both sets of data.
Our galaxy model, shown in Fig. <ref>, is able to properly reproduce and recover the light in the outskirts of the galaxy up to ∼ 1000 arcsec (i.e. ∼ 60 kpc). This is shown in the profiles and also in the images below, where the combination of the four galaxy components (i.e. bulge, bar, disk and halo) successfully mimics the shape of NGC 4565. The residuals basically account for the dusty regions surrounding the galaxy mid-plane and some asymmetries such as the disk warp, both impossible to reproduce with the available analytical functions in the software (see details in Sect. <ref>). <cit.> also obtained a model of NGC 4565 but using just two components: a 3D broken exponential disk and a 2D Sérsic function. By comparing both approaches, their model underestimates the light in the outer parts of the disk by up to a 21 % in g-band <cit.>. Nevertheless, both works agree in the fact that there is no significant variation in the radial location of the truncation between the observed data and the models <cit.>.
Regarding the shape of the RSBPs in the truncation region, the truncation always shows the sharpest decline in the mid-plane of the galaxy and becomes less prominent at large heights in both the galaxy models and the observed data. However, the truncation feature remains clear at higher altitudes above/below the mid-plane in the models than in the observed data, for the three bands. This is shown in detail in the figures of Appendix <ref> (Figs. <ref>, <ref>, <ref>, <ref>, <ref> and <ref>).
§ STAR FORMATION ACTIVITY AT THE EDGE OF THE DISK
We found, for the first time, that quiescent NGC 4565 galaxy is forming 0.011 ± 0.002 solar masses of stars per year at the truncation radius, at a rate of 3.5 × 10^-6± 5 × 10^-6 M_⊙/yr/pc. The SFR values per parsec at the galaxy mid-plane as a function of the radial distance are shown in Fig. <ref> as a radial SFR profile extracted from the continuum-subtracted Hα imaging (see Sect. <ref>). This allows us to investigate the star formation activity in the vicinity and beyond the truncation radius of NGC 4565.
We verified the drop-off in SFR at the disk truncation. Right before the truncation and three more times throughout the galaxy disk, there are peaks of SFR with values of ∼ 1 × 10^-5 M_⊙/yr/pc, located at radii 10.4, 16.6, 19.3 and 24.4 kpc. After the truncation, the SFR decreases until it is completely quenched at radius of ∼ 28 kpc.
The total SFR in the disk is 0.63 ± 0.01 M_⊙ yr^-1 while the median SFR is 0.02 ± 0.01 M_⊙ yr^-1. To obtain these values we avoided the more active inner 4 kpc in radius, which is also the region where we could not properly subtract the sky emission and the continuum due to light gradients in the observed data, so the flux there is overestimated. NGC 4565 is a widely studied, quiescent, nearby late-type spiral galaxy with both a low SFR and a low SFR surface density. In comparison with previous works, <cit.> determined a total SFR of 0.67 ± 0.10 M_⊙ yr^-1 (among the lowest in their HALOGAS sample), while <cit.> obtained a value of 0.73 ± 0.02 M_⊙ yr^-1. These total SFR values are smaller than ours if we consider the whole disk. However, our measurements in the innermost disk region are probably overestimated and, on the other hand, <cit.> and <cit.> considered smaller distances to NGC 4565 than in our work.
We detect a clear characteristic U-shaped age gradient up to 2 kpc in height around the location of the truncation in all the optical and NUV colour profiles allowed by our data. The g-r colour radial profiles (Fig. <ref>) show the most prominent U-shape feature in optical wavelength allowing also for high S/N beyond the truncation. The U-shape is clearly detected up to 1 kpc in height and then slowly disappears when moving to larger heights. In Fig. <ref>, we show a colour profile for the region of the disk with a warp (the upper-right disk; red region in the inset of the top panel in Fig. <ref>) and a region less affected by the warp (the lower-right disk; red region in the inset of the bottom panel in Fig. <ref>). The potential role of the warp in the truncation properties will be further discussed in Sect. <ref>.
The bluest colour (i.e., the youngest stellar population) is 0.23 ± 0.02 at a radius of 27 kpc in the g-r colour radial profiles. Beyond the truncation, the colour reddens again. In the case of the g-i colour profile, a less prominent minimum of age is located at a radius of 27 kpc with a colour value of 0.43 ± 0.02. Finally, in the r-i colour, the U-shape feature shows just a hint of its presence in the profiles extracted in the galactic plane and at 0.5 kpc high. Again, we found the minimum colour value (i.e., age) of 0.20 ± 0.02 at a radius of 27 kpc.
It is important to note that the mid-plane regions of the disk of NGC 4565 are strongly affected by dust, which could influence our measured colours. However, by comparing the light distribution in the NUV, optical, and NIR regimes, <cit.> found that extinction is mostly limited to the inner areas and that there is almost no dust extinction at the location of the truncation and beyond.
This U-shape feature in the colour profiles was originally found in observations of type II breaks <cit.> at smaller radial distances. However, recent works have related the U-shape to the truncation location. The first detection of this feature was by <cit.>, who found it in the Milky Way-like galaxies NGC 5907 and NGC 4565 <cit.>. <cit.> observed the U-shape at the truncation location of the low-mass ultra-thin galaxy UGC 7321 for both the thin and thick disc. In addition, <cit.> measured a characteristic U-shape from g-r colour profiles in a broad variety of galaxy types (dwarfs to ellipticals) and stellar masses (10^7 M_⊙ < M_* < 10^12 M_⊙).
Theoretical studies on the origin of type II breaks in galaxy disks by <cit.> predict that if breaks and truncations share the same formation process, this would be the result of a radially declining SFR. In that scenario, the minimum stellar age value would be at the location of the truncation or break, combined with radial migration further out, producing the U-shape in the radial colour profiles. As reported above, we find a U-shape in our colour profiles (see Fig. <ref>). Also, these theoretical predictions are in good agreement with our results shown in Fig. <ref>, where the SFR decreases just before reaching the truncation radius.
The star formation threshold linked to the U-shape in the colour profiles (Fig. <ref>) traces, on the one hand, the youngest stellar population at the truncation location and, on the other hand, a likely radial migration of stars towards the outer regions associated with the redder colours beyond the truncation radius <cit.>. The lack of Hα emission beyond the truncation (see Fig. <ref>) supports this scenario, as it is indicative of an absence of recent star formation (< 10 Myr). However, there is FUV emission beyond the truncation, tracing past star formation from several tens to ∼100 Myr ago <cit.>. The NUV flux traces a slightly older population of stars formed ≲ 300 Myr ago <cit.> and reaches higher altitudes above/below the galaxy mid-plane than both Hα and FUV. The NUV emission extends to similar radial distances beyond the truncation as the FUV. We suggest the outward radial migration of stars from the star-forming threshold at the truncation radius to the outer disk as the main mechanism populating the external disk. The young stars then become visible at FUV and NUV wavelengths. Thus, the further from the truncation radius, the older the stellar population. In summary, our observational measurements are consistent with a common origin for breaks and truncations.
§ VERTICAL EXTENT OF THE DISK AND ITS ORIGIN
In our previous work, we found that the truncation could be observed up to a height of ∼ 2.9 kpc. This agrees with our result of a coherent disk structure up to 3 kpc, as the truncation is located at a constant radius of 26.1 ± 0.2 kpc. However, the additional truncation detections in this work, which extend this feature up to 4 kpc from the mid-plane, suggest different possible scenarios for the vertical structure of the disk.
If the truncation in the mid-plane is a phenomenon associated with the end of star formation in the disk and the truncation moves outwards with time, then one would expect the truncations above/below the mid-plane to be located at smaller radial distances. This is supported by the fact that stars in galactic disks are subject to migrations in both the radial and vertical directions <cit.>. Our highest truncation detections, at 3.5 and 4 kpc height, which lie at smaller radial distances from the centre of NGC 4565 (see top panel in Fig. <ref>), support this inside-out disk growth scenario.
Assuming a similar physical origin for the truncation in both the galactic plane and at higher altitudes <cit.>, the brightness of a thick disk component is still not enough to outshine the truncation feature from the thin disk at ∼ 3-4 kpc height. There is also the possibility of a flared thin disk that dominates at large heights in the outskirts of the galaxy. The (g-r) colour map in Fig. <ref> shows evidence of a flare of the younger (bluer) stellar populations in the outer part of the thin disk (R > 15 kpc). This flaring is clearer on the SE side of the disk (left), where the warp is less prominent. Also, as NGC 4565 is not perfectly edge-on (i = 88.5 ± 0.5 deg), the flare could be partially blurred by the addition of light along the line of sight.
Numerical simulations suggest that flaring cannot be avoided due to a range of different dynamical effects. The main disk flaring mechanism is probably satellite–disk interactions <cit.>, but mergers <cit.>, misaligned gas infall <cit.>, and reorientation of the disk rotation axis <cit.> can produce disk flaring too. However, simulations by <cit.> led them to propose that purely secular evolution, based on inside-out growing disks, also causes flared disks. The latter scenario satisfies both our observational constraints and the environmental conditions of NGC 4565.
From observations, <cit.> found flared thin disks in three S0 edge-on galaxies within a dense environment in the Fornax cluster, based on isophotal and colour analyses. <cit.> supported these findings with high-quality MUSE stellar kinematic data. <cit.>, using deep long-slit spectroscopic data across the massive edge-on galaxy NGC 7572, measured a flare in both the thin and thick disks with similar radial disk scales. However, it is important to consider that all these galaxies are located in fairly dense environments, unlike the isolated Milky Way-like galaxy NGC 4565, which potentially affects the formation process of the flare. In the Milky Way, several authors have also reported flaring of the younger stellar populations <cit.>.
In terms of the stellar population ages in the NGC 4565 disk, we see that the (g-r) colour is clearly below 0.8 at the truncation/warp, while (g-r) ∼ 1 in the innermost parts (Fig. <ref>), reflecting a young population in the outer disk. The individual light distribution of the galaxy at each wavelength (Fig. <ref>) highlights a thin Hα disk of stars born ∼ 10 Myr ago. The FUV data exhibit a somewhat thicker thin disk of ∼ 2 kpc (accounting for inclination effects), tracing star formation that occurred from several tens to ∼ 100 Myr ago. The thin disk is even thicker in the NUV data, which trace <300 Myr populations <cit.>. However, we do not see star formation in the thick disk. The Hi gas is also concentrated within the thin disk area.
All these findings suggest that the disk of the Milky Way-like galaxy NGC 4565 has been formed mainly through internal processes. In particular, the thin disk stars would be born in the galaxy mid-plane (up to ∼ 1 kpc) and then migrate along the vertical axis, supporting an inside-out and down-up growth of the disk with an outer flare feature.
§ THE ROLE OF THE DISK WARP OF NGC 4565 IN THE TRUNCATION RADIUS
We clearly see a warp in the disk of NGC 4565 (Fig. <ref>), and we report its detection in FUV and NUV data for the first time. This warp in NGC 4565 has previously been reported in optical and Hi data <cit.>. However, our ultra-deep data indicate a very prominent warp on the north side (upper-right disk quadrant) with a length of ∼ 6-7 kpc, while it is fainter and shorter at the southern edge. As we do not see the warp in Hα, there is recent (≳ 100 Myr ago) but not ongoing star formation in this disk structure. The warp also shows clear Hi gas content. These measurements are in agreement with the stellar populations of less than 600 Myr old reported for the warp by <cit.>.
The presence of the warp affects the truncation feature (see Sect. <ref>), contrary to our previous findings. The extraordinary depth of the new optical data allows us to measure a truncation radius that is systematically larger the higher the detection above the NGC 4565 mid-plane (see central panel in Fig. <ref>). Both the (g-r) colour profiles in Fig. <ref> and the maps in Fig. <ref> highlight younger (bluer) stellar populations at the truncation radius in the presence of the prominent warp. The Hi map also reveals that the truncation radius is related to the warp location. From Fig. <ref> we can estimate the onset of the warp in Hi at ∼ 26-27 kpc, coinciding in radius with the mean radial truncation value in the optical bands (25.9 ± 0.4 kpc).
We measure two U-shapes in the colour profiles (see Fig. <ref>) of the disk quadrant with the warp. Such U-shapes are indicative of a threshold in star formation. We find a first U-shape tracing the truncation at R ∼ 26 kpc (see Sect. <ref>) and a second one further out and at higher altitudes off the galaxy mid-plane, following the warp beyond R ∼ 38 kpc and above 1.5 kpc height. To our knowledge, this is the first time such a feature has been detected in a disk warp.
Truncations have previously been linked to the maximum angular momentum of the protogalactic cloud <cit.>, but also to the presence of disk warps <cit.> and to star forming thresholds <cit.>. We find that the truncation of the Milky Way-like galaxy NGC 4565 is linked to two of the above-mentioned mechanisms: a threshold in star formation and the warp.
§ CONCLUSIONS
We perform a multi-wavelength vertical analysis of the disk of the edge-on Milky Way-like galaxy NGC 4565 to establish the height up to which the truncation is still present at the edge of the disk and how it is related to disk properties such as the warp or the disk thickness. This is a follow-up of our previous work, now including a wider wavelength range and ultra-deep optical data from the 4.2 m WHT. Our findings can be summarised as follows:
* We obtain a mean radial truncation position of 25.9 ± 0.4 kpc for the g, r, and i bands up to a height of 4 kpc above/below the galaxy mid-plane.
* We confidently detect the truncation up to 4 kpc in the new ultra-deep optical images, that is, at locations 1 kpc higher than in any previous work for any studied galaxy. The location of the truncation radius remains constant up to a height of 3 kpc, in agreement with our previous results. Higher up, we report two more detections, at 3.5 and 4 kpc, but these are located at a smaller radius.
* The truncation is systematically sharper on the right side of the disk (NW). Despite this, the truncation radius remains almost constant within the uncertainties for all the disk quadrants and heights above/below the mid-plane with the exception of the north quadrant, that is, the one with a clear warp.
* The radial position of the truncation is the same, within uncertainties, across our full wavelength range, independently of the height above/below the NGC 4565 mid-plane. Combining these results with our previous ones, we confirm that the truncation radius is similar at all measured wavelengths from the FUV to Hi.
* Despite the effect of PSF scattering in the light distribution of NGC 4565, the radial position of the truncation remains unaffected (within uncertainties).
* Our observational measurements on the colour profiles (U-shape feature), SFR profile, and UV emission in the outer disk are in agreement with the predicted common origin for breaks and truncations.
* We propose that the disk of NGC 4565 has been mainly formed through internal processes. The stars were born in the galaxy mid-plane (up to ∼ 1 kpc) and then migrated vertically forming an outer flare. This supports an inside-out growth mechanism of the disk.
* We report the first detection of the warp of NGC 4565 in FUV and NUV data.
* The onset radius of the disk warp in Hi is located at ∼ 26-27 kpc, coinciding in radius with the truncation location in the optical bands.
* Our findings suggest that the truncation of the Milky Way-like galaxy NGC 4565 is linked to both a threshold in star formation and the presence of a warp.
This work highlights the connection between the truncation of the disk, the star formation activity, and the onset of the warp of the nearby galaxy NGC 4565 using an unprecedentedly wide wavelength range and ultra-deep optical data. Further analyses of large samples of edge-on galaxies, including studies of the dependence of these connections with the redshift, are required to understand the role of the truncations in galaxy disks.
We acknowledge constructive remarks by an anonymous referee that helped to improve this paper. We thank Kijeong Yim for kindly sharing the VLA Hi reduced integrated maps with us. CML thanks Sarah Brough and Simón Díaz-García for useful discussions and support during the project.
CML acknowledges the support of the Australian Research Council Discovery Project DP190101943. Parts of this research were supported by the Australian Research Council Centre of Excellence for All Sky Astrophysics in 3 Dimensions (ASTRO 3D), through project number CE170100013.
RIS acknowledges the funding by the Governments of Spain and Aragón through the Fondo de Inversiones de Teruel; and the Spanish Ministry of Science, Innovation and Universities (MCIU/AEI/FEDER, UE) with grant PGC2018-097585-B-C21.
We acknowledge financial support from the State Research Agency (AEI-MCINN) of the Spanish Ministry of Science and Innovation under the grants PID2019-107427GB-C32 and "The structure and evolution of galaxies and their central regions" with reference PID2019-105602GB-I00/10.13039/501100011033, from the ACIISI, Consejería de Economía, Conocimiento y Empleo del Gobierno de Canarias and the European Regional Development Fund (ERDF) under a grant with reference PROID2021010044, and from IAC projects P/300624 and P/300724, financed by the Ministry of Science and Innovation, through the State Budget and by the Canary Islands Department of Economy, Knowledge and Employment, through the Regional Budget of the Autonomous Community.
SC acknowledges support from the Ramón y Cajal programme funded by the Spanish Government (references RYC2020-030480-I), and from the State Research Agency (AEI-MCINN) of the Spanish Ministry of Science and Innovation under the grant “Thick discs, relics of the infancy of galaxies" with reference PID2020-113213GA-I00.
This research includes computations using the computational cluster Katana supported by Research Technology Services at UNSW Sydney. This research has made use of the SVO Filter Profile Service (<http://svo2.cab.inta-csic.es/theory/fps/>) supported by the Spanish MINECO through grant AYA2017-84089. This article is based on observations made in the Observatorios de Canarias del IAC with the WHT and INT operated on the island of La Palma by the Isaac Newton Group in the Observatorio del Roque de los Muchachos.
A.B. was supported by an appointment to the NASA Postdoctoral Program at the NASA Ames Research Center, administered by Oak Ridge Associated Universities under contract with NASA. A.B. is supported by a NASA Astrophysics Data Analysis grant (22-ADAP22-0118), program Hubble Archival Research project AR 17041, and Chandra Archival Research project ID #24610329, provided by NASA through a grant from the Space Telescope Science Institute and the Center for Astrophysics Harvard & Smithsonian, operated by the Association of Universities for Research in Astronomy, Inc., under NASA contract NAS 5-03127.
Facilities: William Herschel Telescope (WHT), Isaac Newton Telescope (INT), Very Large Array (VLA), and Galaxy Evolution Explorer (GALEX), UNSW Katana cluster (<https://doi.org/10.26190/669x-a286>).
Software: (The Astropy Collaboration et al. 2018), v0.7.2 <cit.>, <cit.>, <cit.>, v2.19.5 <cit.>, v2.38.0 <cit.>, v2.0.4 <cit.>, imfit <cit.>, <cit.>.
[Abadi et al.(2003)Abadi, Navarro, Steinmetz, & Eke]Abadi2003
Abadi, M. G., Navarro, J. F., Steinmetz, M., & Eke, V. R. 2003, , 597, 21
[Akhlaghi(2019)]Akhlaghi2019
Akhlaghi, M. 2019, arXiv e-prints, arXiv:1909.11230
[Akhlaghi & Ichikawa(2015)]Akhlaghi2015
Akhlaghi, M. & Ichikawa, T. 2015, , 220, 1
[Alam et al.(2015)Alam, Albareti, Prieto, Anders, Anderson,
Anderton, Andrews, Armengaud, Aubourg, Bailey, Basu, Bautista, Beaton, Beers,
Bender, Berlind, Beutler, Bhardwaj, Bird, Bizyaev, Blake, Blanton, Blomqvist,
Bochanski, Bolton, Bovy, Bradley, Brandt, Brauer, Brinkmann, Brown,
Brownstein, Burden, Burtin, Busca, & Cai]Alam2015
Alam, S., Albareti, F. D., Prieto, C. A., et al. 2015, , 219, 12
[Athanassoula et al.(1990)Athanassoula, Morin, Wozniak, Puy, Pierce,
Lombard, & Bosma]Athanassoula1990
Athanassoula, E., Morin, S., Wozniak, H., et al. 1990, , 245, 130
[Aumer & White(2013)]Aumer2013
Aumer, M. & White, S. D. M. 2013, , 428, 1055
[Azzollini et al.(2008)Azzollini, Trujillo, &
Beckman]Azzollini2008a
Azzollini, R., Trujillo, I., & Beckman, J. E. 2008, , 679, L69
[Bakos et al.(2008)Bakos, Trujillo, & Pohlen]Bakos2008
Bakos, J., Trujillo, I., & Pohlen, M. 2008, , 683, L103
[Bertin(2006)]Bertin2006
Bertin, E. 2006, in Astronomical Society of the Pacific Conference Series, Vol.
351, Astronomical Data Analysis Software and Systems XV, ed. C. Gabriel,
C. Arviset, D. Ponz, & S. Enrique, 112
[Bertin & Arnouts(1996)]Bertin1996
Bertin, E. & Arnouts, S. 1996, , 117, 393
[Bertin et al.(2002)Bertin, Mellier, Radovich, Missonnier, Didelon,
& Morin]Bertin2002
Bertin, E., Mellier, Y., Radovich, M., et al. 2002, in Astronomical Society
of the Pacific Conference Series, Vol. 281, Astronomical Data Analysis
Software and Systems XI, ed. D. A. Bohlender, D. Durand, & T. H.
Handley, 228
[Bland-Hawthorn & Gerhard(2016)]BlandHawthorn2016
Bland-Hawthorn, J. & Gerhard, O. 2016, , 54, 529
[Borlaff et al.(2017)Borlaff, Eliche-Moral, Beckman, Ciambur,
Pérez-González, Barro, Cava, & Cardiel]Borlaff2017
Borlaff, A., Eliche-Moral, M. C., Beckman, J. E., et al. 2017, , 604,
A119
[Borlaff et al.(2019)Borlaff, Trujillo, Román,
Beckman, Eliche-Moral, Infante-Sáinz, Lumbreras-Calle, de
Almagro, Gómez-Guijarro, Cebrián, Dorta, Cardiel,
Akhlaghi, & Martínez-Lombilla]Borlaff2019
Borlaff, A., Trujillo, I., Román, J., et al. 2019, , 621, A133
[Bournaud et al.(2009)Bournaud, Elmegreen, & Martig]Bournaud2009
Bournaud, F., Elmegreen, B. G., & Martig, M. 2009, , 707, L1
[Bradley et al.(2020)Bradley, Sipőcz, Robitaille, Tollerud,
Vinícius, Deil, Barbary, Wilson, Busko, Günther, Cara, Conseil, Bostroem,
Droettboom, Bray, Bratholm, Lim, Barentsen, Craig, Pascual, Perren, Greco,
Donath, de Val-Borro, Kerzendorf, Bach, Weaver, D'Eugenio, Souchereau, &
Ferreira]Bradley2020
Bradley, L., Sipőcz, B., Robitaille, T., et al. 2020, astropy/photutils:
1.0.1
[Buta et al.(2015)Buta, Sheth, Athanassoula, Bosma, Knapen,
Laurikainen, Salo, Elmegreen, Ho, Zaritsky, Courtois, Hinz, Muñoz-Mateos,
Kim, Regan, Gadotti, Gil de Paz, Laine, Menéndez-Delmestre, Comerón,
Erroz Ferrer, Seibert, Mizusawa, Holwerda, & Madore]Buta2015
Buta, R. J., Sheth, K., Athanassoula, E., et al. 2015, , 217, 32
[Carraro et al.(2015)Carraro, Vázquez, Costa, Ahumada, &
Giorgi]Carraro2015a
Carraro, G., Vázquez, R. A., Costa, E., Ahumada, J. A., & Giorgi, E. E.
2015, , 149, 12
[Chamba et al.(2022)Chamba, Trujillo, & Knapen]Chamba2022
Chamba, N., Trujillo, I., & Knapen, J. H. 2022, , 667, A87
[Chrobáková et al.(2022)Chrobáková, Nagy, &
López-Corredoira]Chrobakova2022
Chrobáková, Ž., Nagy, R., & López-Corredoira, M. 2022, ,
664, A58
[Comerón et al.(2012)Comerón, Elmegreen, Salo,
Laurikainen, Athanassoula, Bosma, Knapen, Gadotti, Sheth, Hinz, Regan, Gil de
Paz, Muñoz-Mateos, Menéndez-Delmestre, Seibert, Kim, Mizusawa, Laine,
Ho, & Holwerda]Comeron2012
Comerón, S., Elmegreen, B. G., Salo, H., et al. 2012, , 759, 98
[Comerón et al.(2018)Comerón, Salo, &
Knapen]Comeron2017
Comerón, S., Salo, H., & Knapen, J. H. 2018, , 610, A5
[Das et al.(2020)Das, Sardone, Leroy, Mathur, Gallagher, Pingel,
Pisano, & Heald]Das2020
Das, S., Sardone, A., Leroy, A. K., et al. 2020, , 898, 15
[de Jong(2008)]deJong2008
de Jong, R. S. 2008, , 388, 1521
[de Jong et al.(2007)de Jong, Seth, Radburn-Smith, Bell, Brown,
Bullock, Courteau, Dalcanton, Ferguson, Goudfrooij, Holfeltz, Holwerda,
Purcell, Sick, & Zucker]deJong2007
de Jong, R. S., Seth, A. C., Radburn-Smith, D. J., et al. 2007, , 667,
L49
[Debattista et al.(2017)Debattista, Roškar, &
Loebman]Debattista2017
Debattista, V. P., Roškar, R., & Loebman, S. R. 2017, The Impact of
Stellar Migration on Disk Outskirts, ed. G. d. P. Knapen, Lee (Astrophysics
and Space Science Library, Springer International Publishing, Cham,
Switzerland), 77–114
[Díaz-García et al.(2022)Díaz-García,
Comerón, Courteau, Watkins, Knapen, & Román]DiazGarcia2022
Díaz-García, S., Comerón, S., Courteau, S., et al. 2022, ,
667, A109
[Ding et al.(2021)Ding, Xue, Yang, Zhao, Zhang, & Zhu]Ding2021
Ding, P.-J., Xue, X.-X., Yang, C., et al. 2021, , 162, 112
[Duc et al.(2015)Duc, Cuillandre, Karabal, Cappellari, Alatalo,
Blitz, Bournaud, Bureau, Crocker, Davies, Davis, de Zeeuw, Emsellem,
Khochfar, Krajnović, Kuntschner, McDermid, Michel-Dansac, Morganti,
Naab, Oosterloo, Paudel, Sarzi, Scott, Serra, Weijmans, & Young]Duc2015
Duc, P.-A., Cuillandre, J.-C., Karabal, E., et al. 2015, , 446, 120
[Erben et al.(2005)Erben, Schirmer, Dietrich, Cordes, Haberzettl,
Hetterscheidt, Hildebrandt, Schmithuesen, Schneider, Simon, Deul, Hook,
Kaiser, Radovich, Benoist, Nonino, Olsen, Prandoni, Wichmann, Zaggia, Bomans,
Dettmar, & Miralles]Erben2005
Erben, T., Schirmer, M., Dietrich, J. P., et al. 2005, Astronomische
Nachrichten, 326, 432
[Erwin(2015)]Erwin2015
Erwin, P. 2015, , 799, 226
[Feast et al.(2014)Feast, Menzies, Matsunaga, &
Whitelock]Feast2014
Feast, M. W., Menzies, J. W., Matsunaga, N., & Whitelock, P. A. 2014, ,
509, 342
[Gaia Collaboration et al.(2021)Gaia Collaboration, Brown,
Vallenari, Prusti, de Bruijne, Babusiaux, Biermann, Creevey, Evans, Eyer,
Hutton, Jansen, Jordi, Klioner, Lammers, Lindegren, Luri, Mignard, Panem,
Pourbaix, Randich, Sartoretti, Soubiran, Walton, Arenou, Bailer-Jones,
Bastian, Cropper, Drimmel, Katz, Lattanzi, van Leeuwen, Bakker, Cacciari,
Castañeda, De Angeli, Ducourant, Fabricius, Fouesneau, Frémat,
Guerra, Guerrier, Guiraud, Jean-Antoine Piccolo, Masana, Messineo, Mowlavi,
Nicolas, Nienartowicz, Pailler, Panuzzo, Riclet, Roux, Seabroke, Sordo,
Tanga, Thévenin, Gracia-Abril, Portell, Teyssier, Altmann, Andrae,
Bellas-Velidis, Benson, Berthier, Blomme, Brugaletta, Burgess, Busso, Carry,
Cellino, Cheek, Clementini, Damerdji, Davidson, Delchambre, Dell'Oro,
Fernández-Hernández, Galluccio, García-Lario, Garcia-Reinaldos,
González-Núñez, Gosset, Haigron, Halbwachs, Hambly, Harrison,
Hatzidimitriou, Heiter, Hernández, Hestroffer, Hodgkin, Holl, Janßen,
Jevardat de Fombelle, Jordan, Krone-Martins, Lanzafame, Löffler, Lorca,
Manteiga, Marchal, Marrese, Moitinho, Mora, Muinonen, Osborne, Pancino,
Pauwels, Petit, Recio-Blanco, Richards, Riello, Rimoldini, Robin, Roegiers,
Rybizki, Sarro, Siopis, Smith, Sozzetti, Ulla, Utrilla, van Leeuwen, van
Reeven, Abbas, Abreu Aramburu, Accart, Aerts, Aguado, Ajaj, Altavilla,
Álvarez, Álvarez Cid-Fuentes, Alves, Anderson, Anglada Varela,
Antoja, Audard, Baines, Baker, Balaguer-Núñez, Balbinot, Balog,
Barache, Barbato, Barros, Barstow, Bartolomé, Bassilana, Bauchet,
Baudesson-Stella, Becciani, Bellazzini, Bernet, Bertone, Bianchi,
Blanco-Cuaresma, Boch, Bombrun, Bossini, Bouquillon, Bragaglia, Bramante,
Breedt, Bressan, Brouillet, Bucciarelli, Burlacu, Busonero, Butkevich, Buzzi,
Caffau, Cancelliere, Cánovas, Cantat-Gaudin, Carballo, Carlucci,
Carnerero, Carrasco, Casamiquela, Castellani, Castro-Ginard, Castro Sampol,
Chaoul, Charlot, Chemin, Chiavassa, Cioni, Comoretto, Cooper, Cornez, Cowell,
Crifo, Crosta, Crowley, Dafonte, Dapergolas, David, David, de Laverny,
De Luise, De March, De Ridder, de Souza, de Teodoro, de Torres, del Peloso,
del Pozo, Delbo, Delgado, Delgado, Delisle, Di Matteo, Diakite, Diener,
Distefano, Dolding, Eappachen, Edvardsson, Enke, Esquej, Fabre, Fabrizio,
Faigler, Fedorets, Fernique, Fienga, Figueras, Fouron, Fragkoudi, Fraile,
Franke, Gai, Garabato, Garcia-Gutierrez, García-Torres, Garofalo,
Gavras, Gerlach, Geyer, Giacobbe, Gilmore, Girona, Giuffrida, Gomel, Gomez,
Gonzalez-Santamaria, González-Vidal, Granvik, Gutiérrez-Sánchez,
Guy, Hauser, Haywood, Helmi, Hidalgo, Hilger, Hładczuk, Hobbs, Holland,
Huckle, Jasniewicz, Jonker, Juaristi Campillo, Julbe, Karbevska, Kervella,
Khanna, Kochoska, Kontizas, Kordopatis, Korn, Kostrzewa-Rutkowska,
Kruszyńska, Lambert, Lanza, Lasne, Le Campion, Le Fustec, Lebreton,
Lebzelter, Leccia, Leclerc, Lecoeur-Taibi, Liao, Licata, Lindstrøm,
Lister, Livanou, Lobel, Madrero Pardo, Managau, Mann, Marchant, Marconi,
Marcos Santos, Marinoni, Marocco, Marshall, Martin Polo, Martín-Fleitas,
Masip, Massari, Mastrobuono-Battisti, Mazeh, McMillan, Messina, Michalik,
Millar, Mints, Molina, Molinaro, Molnár, Montegriffo, Mor, Morbidelli,
Morel, Morris, Mulone, Munoz, Muraveva, Murphy, Musella, Noval,
Ordénovic, Orrù, Osinde, Pagani, Pagano, Palaversa, Palicio, Panahi,
Pawlak, Peñalosa Esteller, Penttilä, Piersimoni, Pineau, Plachy,
Plum, Poggio, Poretti, Poujoulet, Prša, Pulone, Racero, Ragaini,
Rainer, Raiteri, Rambaux, Ramos, Ramos-Lerate, Re Fiorentin, Regibo,
Reylé, Ripepi, Riva, Rixon, Robichon, Robin, Roelens, Rohrbasser,
Romero-Gómez, Rowell, Royer, Rybicki, Sadowski, Sagristà Sellés,
Sahlmann, Salgado, Salguero, Samaras, Sanchez Gimenez, Sanna, Santoveña,
Sarasso, Schultheis, Sciacca, Segol, Segovia, Ségransan, Semeux, Shahaf,
Siddiqui, Siebert, Siltala, Slezak, Smart, Solano, Solitro, Souami, Souchay,
Spagna, Spoto, Steele, Steidelmüller, Stephenson, Süveges, Szabados,
Szegedi-Elek, Taris, Tauran, Taylor, Teixeira, Thuillot, Tonello, Torra,
Torra, Turon, Unger, Vaillant, van Dillen, Vanel, Vecchiato, Viala, Vicente,
Voutsinas, Weiler, Wevers, Wyrzykowski, Yoldas, Yvard, Zhao, Zorec, Zucker,
Zurbach, & Zwitter]Collaboration2021
Gaia Collaboration, Brown, A. G. A., Vallenari, A., et al. 2021, , 649,
A1
[Gilhuly et al.(2020)Gilhuly, Hendel, Merritt, Abraham, Danieli,
Lokhorst, Liu, van Dokkum, Conroy, & Greco]Gilhuly2020
Gilhuly, C., Hendel, D., Merritt, A., et al. 2020, , 897, 108
[Gregory & Thompson(1977)]Gregory1977
Gregory, S. A. & Thompson, L. A. 1977, , 213, 345
[Hao et al.(2011)Hao, Kennicutt, Johnson, Calzetti, Dale, &
Moustakas]Hao2011
Hao, C.-N., Kennicutt, R. C., Johnson, B. D., et al. 2011, , 741, 124
[Heald et al.(2012)Heald, Józsa, Serra, Zschaechner, Rand,
Fraternali, Oosterloo, Walterbos, Jütte, & Gentile]Heald2012
Heald, G., Józsa, G., Serra, P., et al. 2012, , 544, C1
[Helmboldt et al.(2004)Helmboldt, Walterbos, Bothun, O'Neil, &
de Blok]Helmboldt2004
Helmboldt, J. F., Walterbos, R. A. M., Bothun, G. D., O'Neil, K., & de Blok,
W. J. G. 2004, , 613, 914
[Hunter et al.(2012)Hunter, Ficut-Vicas, Ashley, Brinks, Cigan,
Elmegreen, Heesen, Herrmann, Johnson, Oh, Rupen, Schruba, Simpson, Walter,
Westpfahl, Young, & Zhang]Hunter2012
Hunter, D. A., Ficut-Vicas, D., Ashley, T., et al. 2012, , 144, 134
[Infante-Sainz et al.(2020)Infante-Sainz, Trujillo, &
Román]InfanteSainz2020
Infante-Sainz, R., Trujillo, I., & Román, J. 2020, , 491, 5317
[Iodice et al.(2019a)Iodice, Sarzi, Bittner, Coccato,
Costantin, Corsini, van de Ven, de Zeeuw, Falcón-Barroso, Gadotti,
Lyubenova, Martín-Navarro, McDermid, Nedelchev, Pinna, Pizzella,
Spavone, & Viaene]Iodice2019
Iodice, E., Sarzi, M., Bittner, A., et al. 2019a, , 627,
A136
[Iodice et al.(2019b)Iodice, Spavone, Capaccioli,
Peletier, van de Ven, Napolitano, Hilker, Mieske, Smith, Pasquali, Limatola,
Grado, Venhola, Cantiello, Paolillo, Falcon-Barroso, D'Abrusco, &
Schipani]Iodice2019b
Iodice, E., Spavone, M., Capaccioli, M., et al. 2019b, ,
623, A1
[James et al.(2005)James, Shane, Knapen, Etherton, &
Percival]James2005
James, P. A., Shane, N. S., Knapen, J. H., Etherton, J., & Percival, S. M.
2005, , 429, 851
[Joye & Mandel(2003)]Joye2003
Joye, W. A. & Mandel, E. 2003, in Astronomical Society of the Pacific
Conference Series, Vol. 295, Astronomical Data Analysis Software and Systems
XII, ed. H. E. Payne, R. I. Jedrzejewski, & R. N. Hook, 489
[Kado-Fong et al.(2018)Kado-Fong, Greene, Hendel, Price-Whelan,
Greco, Goulding, Huang, Johnston, Komiyama, Lee, Lust, Strauss, &
Tanaka]KadoFong2018
Kado-Fong, E., Greene, J. E., Hendel, D., et al. 2018, , 866, 103
[Kalberla et al.(2014)Kalberla, Kerp, Dedes, & Haud]Kalberla2014
Kalberla, P. M. W., Kerp, J., Dedes, L., & Haud, U. 2014, , 794, 90
[Kasparova et al.(2020)Kasparova, Katkov, &
Chilingarian]Kasparova2020
Kasparova, A. V., Katkov, I. Y., & Chilingarian, I. V. 2020, , 493, 5464
[Kaviraj et al.(2017)Kaviraj, Laigle, Kimm, Devriendt, Dubois,
Pichon, Slyz, Chisari, & Peirani]Kaviraj2017
Kaviraj, S., Laigle, C., Kimm, T., et al. 2017, , 467, 4739
[Kazantzidis et al.(2008)Kazantzidis, Bullock, Zentner, Kravtsov, &
Moustakas]Kazantzidis2008
Kazantzidis, S., Bullock, J. S., Zentner, A. R., Kravtsov, A. V., & Moustakas,
L. A. 2008, , 688, 254
[Kennicutt & Kent(1983)]Kennicutt1983
Kennicutt, R. C., J. & Kent, S. M. 1983, , 88, 1094
[Kennicutt(1998)]Kennicutt1998
Kennicutt, Robert C., J. 1998, , 36, 189
[Kennicutt et al.(2009)Kennicutt, Hao, Calzetti, Moustakas, Dale,
Bendo, Engelbracht, Johnson, & Lee]Kennicutt2009
Kennicutt, Robert C., J., Hao, C.-N., Calzetti, D., et al. 2009, , 703,
1672
[Kennicutt et al.(2008)Kennicutt, Lee, Funes, S., Sakai, &
Akiyama]Kennicutt2008
Kennicutt, Robert C., J., Lee, J. C., Funes, J. G., et al. 2008, , 178,
247
[Kennicutt(1989)]Kennicutt1989
Kennicutt, Jr., R. C. 1989, , 344, 685
[Kim et al.(2012)Kim, Sheth, Hinz, Lee, Zaritsky, Gadotti, Knapen,
Schinnerer, Ho, Laurikainen, Salo, Athanassoula, Bosma, de Swardt,
Muñoz-Mateos, Madore, Comerón, Regan, Menéndez-Delmestre, Gil de
Paz, Seibert, Laine, Erroz-Ferrer, & Mizusawa]Kim2012
Kim, T., Sheth, K., Hinz, J. L., et al. 2012, , 753, 43
[Knapen et al.(2004)Knapen, Stedman, Bramich, Folkes, &
Bradley]Knapen2004
Knapen, J. H., Stedman, S., Bramich, D. M., Folkes, S. L., &
Bradley, T. R. 2004, , 426, 1135
[Kormendy & Bender(2019)]Kormendy2019
Kormendy, J. & Bender, R. 2019, , 872, 106
[Kormendy & Kennicutt(2004)]Kormendy2004
Kormendy, J. & Kennicutt, Jr., R. C. 2004, , 42, 603
[Kroupa & Weidner(2003)]Kroupa2003
Kroupa, P. & Weidner, C. 2003, , 598, 1076
[Lang et al.(2010)Lang, Hogg, Mierle, Blanton, & Roweis]Lang2010
Lang, D., Hogg, D. W., Mierle, K., Blanton, M., & Roweis, S. 2010, , 139,
1782
[Makarov et al.(2014)Makarov, Prugniel, Terekhova, Courtois, &
Vauglin]Makarov2014
Makarov, D., Prugniel, P., Terekhova, N., Courtois, H., & Vauglin, I. 2014,
, 570, A13
[Martin et al.(2005)Martin, Fanson, Schiminovich, Morrissey,
Friedman, Barlow, Conrow, Grange, Jelinsky, Milliard, Siegmund, Bianchi,
Byun, Donas, Forster, Heckman, Lee, Madore, Malina, Neff, Rich, Small,
Surber, Szalay, Welsh, & Wyder]Martin2005
Martin, D. C., Fanson, J., Schiminovich, D., et al. 2005, , 619, L1
[Martín-Navarro et al.(2012)Martín-Navarro, Bakos,
Trujillo, Knapen, Athanassoula, Bosma, Comerón, Elmegreen, Erroz-Ferrer,
Gadotti, Gil de Paz, Hinz, Ho, Holwerda, Kim, Laine, Laurikainen,
Menéndez-Delmestre, Mizusawa, Muñoz-Mateos, Regan, Salo, Seibert, &
Sheth]Martin-Navarro2012
Martín-Navarro, I., Bakos, J., Trujillo, I., et al. 2012, , 427,
1102
[Martínez-Delgado et al.(2009)Martínez-Delgado,
Pohlen, Gabany, Majewski, Peñarrubia, & Palma]Martinez-Delgado2009
Martínez-Delgado, D., Pohlen, M., Gabany, R. J., et al. 2009, ,
692, 955
[Martínez-Lombilla et al.(2023)Martínez-Lombilla, Brough,
Montes, Baena-Gallé, Akhlaghi, Infante-Sainz, Driver, Holwerda, Pimbblet,
& Robotham]MartinezLombilla2023
Martínez-Lombilla, C., Brough, S., Montes, M., et al. 2023, , 518,
1195
[Martínez-Lombilla & Knapen(2019)]MartinezLombilla2019b
Martínez-Lombilla, C. & Knapen, J. H. 2019, , 629, A12
[Martínez-Lombilla et al.(2019)Martínez-Lombilla,
Trujillo, & Knapen]MartinezLombilla2019a
Martínez-Lombilla, C., Trujillo, I., & Knapen, J. H. 2019, ,
483, 664
[McMullin et al.(2007)McMullin, Waters, Schiebel, Young, &
Golap]McMullin2007
McMullin, J. P., Waters, B., Schiebel, D., Young, W., & Golap, K. 2007, in
Astronomical Society of the Pacific Conference Series, Vol. 376, Astronomical
Data Analysis Software and Systems XVI, ed. R. A. Shaw, F. Hill, & D. J.
Bell, 127
[Michard(2002)]Michard2002
Michard, R. 2002, , 384, 763
[Mihos et al.(2017)Mihos, Harding, Feldmeier, Rudick, Janowiecki,
Morrison, Slater, & Watkins]Mihos2017
Mihos, J. C., Harding, P., Feldmeier, J. J., et al. 2017, , 834, 16
[Minchev et al.(2015)Minchev, Martig, Streich, Scannapieco, de Jong,
& Steinmetz]Minchev2015
Minchev, I., Martig, M., Streich, D., et al. 2015, , 804, L9
[Montes(2022)]Montes2022
Montes, M. 2022, Nature Astronomy, 6, 308
[Montes et al.(2020)Montes, Infante-Sainz, Madrigal-Aguado,
Román, Monelli, Borlaff, & Trujillo]Montes2020
Montes, M., Infante-Sainz, R., Madrigal-Aguado, A., et al. 2020, , 904,
114
[Montes & Trujillo(2018)]Montes2018
Montes, M. & Trujillo, I. 2018, , 474, 917
[Montes & Trujillo(2019)]Montes2019
Montes, M. & Trujillo, I. 2019, , 482, 2838
[Morrissey et al.(2007)Morrissey, Conrow, Barlow, Small, Seibert,
Wyder, Budavári, Arnouts, Friedman, Forster, Martin, Neff, Schiminovich,
Bianchi, Donas, Heckman, Lee, Madore, Milliard, Rich, Szalay, Welsh, &
Yi]Morrissey2007
Morrissey, P., Conrow, T., Barlow, T. A., et al. 2007, , 173, 682
[Mosenkov et al.(2020)Mosenkov, Rich, Koch, Brosch, Thilker,
Román, Müller, Smirnov, & Usachev]Mosenkov2020
Mosenkov, A., Rich, R. M., Koch, A., et al. 2020, , 494, 1751
[Naeslund & Joersaeter(1997)]Naeslund1997
Naeslund, M. & Joersaeter, S. 1997, , 325, 915
[Niklas et al.(1997)Niklas, Klein, & Wielebinski]Niklas1997
Niklas, S., Klein, U., & Wielebinski, R. 1997, , 322, 19
[Oliphant(2006)]Oliphant2006
Oliphant, T. E. 2006, All Faculty Publications, 278
[Oosterloo et al.(2007)Oosterloo, Fraternali, &
Sancisi]Oosterloo2007
Oosterloo, T., Fraternali, F., & Sancisi, R. 2007, , 134, 1019
[Padilla et al.(2019)Padilla, Castander, Alarcón, Aleksic,
Ballester, Cabayol, Cardiel-Sas, Carretero, Casas, Castilla, Crocce, Delfino,
Díaz, Eriksen, Fernández, Fosalba, García-Bellido,
Gaztañaga, Gaweda, Grañena, María Ílla, Jiménez,
López, Martí, Miquel, Neissner, Pío, Sánchez, Serrano,
Sevilla-Noarbe, Tallada, Tonello, & de Vicente]Padilla2019
Padilla, C., Castander, F. J., Alarcón, A., et al. 2019, , 157, 246
[Peters et al.(2017)Peters, van der Kruit, Knapen, Trujillo, Fliri,
Cisternas, & Kelvin]Peters2017
Peters, S. P. C., van der Kruit, P. C., Knapen, J. H., et al. 2017, ,
470, 427
[Pinna et al.(2019a)Pinna, Falcón-Barroso, Martig,
Coccato, Corsini, de Zeeuw, Gadotti, Iodice, Leaman, Lyubenova,
Martín-Navarro, Morelli, Sarzi, van de Ven, Viaene, &
McDermid]Pinna2019b
Pinna, F., Falcón-Barroso, J., Martig, M., et al. 2019a,
, 625, A95
[Pinna et al.(2019b)Pinna, Falcón-Barroso, Martig,
Sarzi, Coccato, Iodice, Corsini, de Zeeuw, Gadotti, Leaman, Lyubenova,
McDermid, Minchev, Morelli, van de Ven, & Viaene]Pinna2019a
Pinna, F., Falcón-Barroso, J., Martig, M., et al. 2019b,
, 623, A19
[Pohlen & Trujillo(2006)]PohlenTrujillo2006
Pohlen, M. & Trujillo, I. 2006, , 454, 759
[Qu et al.(2011)Qu, Matteo, Lehnert, & van Driel]Qu2011
Qu, Y., Matteo, P. D., Lehnert, M. D., & van Driel, W. 2011, , 530, A10
[Radburn-Smith et al.(2014)Radburn-Smith, de Jong, Streich, Bell,
Dalcanton, Dolphin, Stilp, Monachesi, Holwerda, &
Bailin]Radburn-Smith2014
Radburn-Smith, D. J., de Jong, R. S., Streich, D., et al. 2014, , 780,
105
[Radburn-Smith et al.(2012)Radburn-Smith, Roškar, Debattista,
Dalcanton, Streich, de Jong, Vlajić, Holwerda, Purcell, Dolphin, &
Zucker]RadburnSmith2012
Radburn-Smith, D. J., Roškar, R., Debattista, V. P., et al. 2012, ,
753, 138
[Rix et al.(2004)Rix, Barden, Beckwith, Bell, Borch, Caldwell,
Häussler, Jahnke, Jogee, McIntosh, Meisenheimer, Peng, Sanchez,
Somerville, Wisotzki, & Wolf]Rix2004
Rix, H.-W., Barden, M., Beckwith, S. V. W., et al. 2004, , 152, 163
[Román et al.(2020)Román, Trujillo, & Montes]Roman2020
Román, J., Trujillo, I., & Montes, M. 2020, , 644, A42
[Roškar et al.(2010)Roškar, Debattista, Brooks, Quinn,
Brook, Governato, Dalcanton, & Wadsley]Roskar2010
Roškar, R., Debattista, V. P., Brooks, A. M., et al. 2010, , 408,
783
[Roškar et al.(2008a)Roškar, Debattista,
Quinn, Stinson, & Wadsley]Rovskar2008b
Roškar, R., Debattista, V. P., Quinn, T. R., Stinson, G. S., & Wadsley,
J. 2008a, , 684, L79
[Roškar et al.(2008b)Roškar, Debattista,
Stinson, Quinn, Kaufmann, & Wadsley]Rovskar2008a
Roškar, R., Debattista, V. P., Stinson, G. S., et al.
2008b, , 675, L65
[Rupen(1991)]Rupen1991
Rupen, M. P. 1991, , 102, 48
[Saifollahi et al.(2022)Saifollahi, Zaritsky, Trujillo, Peletier,
Knapen, Amorisco, Beasley, & Donnerstein]Saifollahi2022
Saifollahi, T., Zaritsky, D., Trujillo, I., et al. 2022, , 511, 4633
[Sánchez-Gallego et al.(2012)Sánchez-Gallego, Knapen,
Wilson, Barmby, Azimlu, & Courteau]Sanchez-Gallego2012
Sánchez-Gallego, J. R., Knapen, J. H., Wilson, C. D., et al. 2012,
, 422, 3208
[Sandin(2014)]Sandin2014
Sandin, C. 2014, , 567, A97
[Sandin(2015)]Sandin2015
Sandin, C. 2015, , 567, A106
[Scannapieco et al.(2009)Scannapieco, White, Springel, &
Tissera]Scannapieco2009
Scannapieco, C., White, S. D. M., Springel, V., & Tissera, P. B. 2009, ,
396, 696
[Schaye(2004)]Schaye2004
Schaye, J. 2004, , 609, 667
[Schirmer(2013)]Schirmer2013
Schirmer, M. 2013, , 209, 21
[Schlafly & Finkbeiner(2011)]Schlafly2011
Schlafly, E. F. & Finkbeiner, D. P. 2011, , 737, 103
[Sellwood(2014)]Sellwood2014
Sellwood, J. A. 2014, Reviews of Modern Physics, 86, 1
[Sersic(1968)]Sersic1968
Sersic, J. L. 1968, Atlas de Galaxias Australes (Córdoba: Observatorio Astronómico)
[Slater et al.(2009)Slater, Harding, & Mihos]Slater2009
Slater, C. T., Harding, P., & Mihos, J. C. 2009, , 121, 1267
[Trujillo et al.(2020)Trujillo, Chamba, & Knapen]Trujillo2020
Trujillo, I., Chamba, N., & Knapen, J. H. 2020, , 493, 87
[Trujillo et al.(2021)Trujillo, D'Onofrio, Zaritsky,
Madrigal-Aguado, Chamba, Golini, Akhlaghi, Sharbaf, Infante-Sainz, Román,
Morales-Socorro, Sand, & Martin]Trujillo2021
Trujillo, I., D'Onofrio, M., Zaritsky, D., et al. 2021, , 654, A40
[Trujillo & Fliri(2016)]TrujilloFliri2016
Trujillo, I. & Fliri, J. 2016, , 823, 123
[van der Kruit(1979)]vanderKruit1979
van der Kruit, P. C. 1979, , 38, 15
[van der Kruit(1987)]vanderkruit1987a
van der Kruit, P. C. 1987, , 173, 59
[van der Kruit & Freeman(2011)]KruitFreeman2011
van der Kruit, P. C. & Freeman, K. C. 2011, , 49, 301
[van der Kruit & Searle(1981a)]vanderKruit1981a
van der Kruit, P. C. & Searle, L. 1981a, , 95, 105
[van der Kruit & Searle(1981b)]vanderKruit1981b
van der Kruit, P. C. & Searle, L. 1981b, , 95, 116
[Villalobos & Helmi(2008)]Villalobos2008
Villalobos, Á. & Helmi, A. 2008, , 391, 1806
[Wiegert et al.(2015)Wiegert, Irwin, Miskolczi, Schmidt, Mora,
Damas-Segovia, Stein, English, Rand, Santistevan, Walterbos, Krause, Beck,
Dettmar, Kepley, Wezgowiec, Wang, Heald, Li, MacGregor, Johnson, Strong,
DeSouza, & Porter]Wiegert2015
Wiegert, T., Irwin, J., Miskolczi, A., et al. 2015, , 150, 81
[Wilson et al.(2009)Wilson, Warren, Israel, Serjeant, Bendo, Brinks,
Clements, Courteau, Irwin, Knapen, Leech, Matthews, Mühle, Mortier,
Petitpas, Sinukoff, Spekkens, Tan, Tilanus, Usero, van der Werf, Wiegert, &
Zhu]Wilson2009
Wilson, C. D., Warren, B. E., Israel, F. P., et al. 2009, , 693, 1736
[Wu et al.(2002)Wu, Burstein, Deng, Zhou, Shang, Zheng, Chen, Su,
Windhorst, ping Chen, Zou, Xia, Jiang, Ma, Xue, Zhu, Cheng, Byun, Chen, Deng,
Fan, Fang, Kong, Li, Lin, Lu, hsin Sun, shun Tsay, Xu, Yan, Zhao, &
Zheng]Wu2002
Wu, H., Burstein, D., Deng, Z., et al. 2002, , 123, 1364
[Yim et al.(2014)Yim, Wong, Xue, Rand, Rosolowsky, van der Hulst,
Benjamin, & Murphy]Yim2014
Yim, K., Wong, T., Xue, R., et al. 2014, , 148, 127
[Zaritsky et al.(2019)Zaritsky, Behroozi, Peeples, Tuttle, Werk, &
Zhang]Zaritsky2019
Zaritsky, D., Behroozi, P., Peeples, M. S., et al. 2019, , 51, 127
[Zibetti et al.(2004)Zibetti, White, & Brinkmann]Zibetti2004a
Zibetti, S., White, S. D. M., & Brinkmann, J. 2004, , 347, 556
[Zschaechner et al.(2012)Zschaechner, Rand, Heald, Gentile, &
Józsa]Zschaechner2012
Zschaechner, L. K., Rand, R. J., Heald, G. H., Gentile, G., & Józsa, G.
2012, , 760, 37
§ RADIAL SURFACE BRIGHTNESS PROFILES (RSBP)
In Figs. <ref>, <ref>, and <ref>, we plot all the RSBPs of the observed data at different heights above/below the NGC 4565 mid-plane for the three ultra-deep g, r, and i bands, respectively. Then, we show the same but for the Hi surface mass density profiles in Fig. <ref>. The RSBPs of the corresponding models in the g, r, and i bands are shown in Figs. <ref>, <ref>, and <ref> respectively.
Each figure shows the RSBP at all the different heights above/below the galaxy mid-plane in a given band. In this way, we explicitly illustrate the variations of the radial position of the truncation along the vertical axis as well as the profiles where the truncation is not detected. The black line always shows the mid-plane surface brightness profile. From top to bottom and from left to right, the panels show in colour the surface brightness profile at a given altitude above the galaxy mid-plane (indicated in the legend) and, in light grey, the surface brightness profiles plotted in the previous panels. When the truncation is detected by our algorithm, its radial position is also indicated with a value in kpc in the bottom left corner of each panel with a typical uncertainty of 2% of the truncation radius value.
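For readers who wish to reproduce a similar measurement, the sketch below illustrates one simple way to locate a truncation in a radial surface brightness profile, namely as the radius where the profile slope steepens most sharply beyond an inner cut. This is only a hedged illustration: the function name, the r_min cut, and the slope-based criterion are our own choices and do not reproduce the actual detection algorithm used in this work.

import numpy as np

def find_truncation_radius(radius_kpc, sb_mag_arcsec2, r_min=15.0):
    """Locate a candidate truncation radius in a surface brightness profile.

    Simplified illustration (not the paper's algorithm): the truncation is
    taken as the radius, beyond r_min, where the local logarithmic slope of
    the profile steepens the most.
    """
    slope = np.gradient(sb_mag_arcsec2, radius_kpc)   # d(mag)/d(radius)
    dslope = np.gradient(slope, radius_kpc)           # change of slope
    outer = radius_kpc > r_min
    idx = np.argmax(dslope[outer])
    return radius_kpc[outer][idx]

# Toy broken-exponential profile with a break at 26 kpc
r = np.linspace(1, 40, 400)
mu = np.where(r < 26, 20 + 0.25 * r, 20 + 0.25 * 26 + 0.9 * (r - 26))
print(find_truncation_radius(r, mu))                  # ~26 kpc

A real analysis would also need to account for the noise floor and the PSF scattering effects discussed in the main text before quoting a truncation radius and its uncertainty.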
§ NGC 4565 MODEL
Figure <ref> shows the details of the analytical 2D model of NGC 4565 and its components in comparison to the observed data in the r band. There is good agreement between the observed data and the model in both the 1D profiles and the 2D images. The results for the g and i bands are similar. The full details on how this model is obtained are given in Sect. <ref>.
|
http://arxiv.org/abs/2307.01293v1
|
20230703185915
|
Hidden Multiscale Organization and Cascading Dynamics in Real Multiplex Networks
|
[
"Gangmin Son",
"Meesoon Ha",
"Hawoong Jeong"
] |
physics.soc-ph
|
[
"physics.soc-ph",
"cond-mat.dis-nn",
"cond-mat.stat-mech"
] |
Department of Physics, Korea Advanced Institute of Science and Technology, Daejeon 34141, Korea
Corresponding author: [email protected]
Department of Physics Education, Chosun University, Gwangju 61452, Korea
Department of Physics, Korea Advanced Institute of Science and Technology, Daejeon 34141, Korea
Center of Complex Systems, Korea Advanced Institute of Science and Technology, Daejeon 34141, Korea
Hidden geometry underlying complex networks reveals their multiscale nature. We extend the framework to multiplex networks for the investigation of the multiscale organization and its role in cascading dynamics. We find that real multiplexes exhibit an anomalous evolution of interlayer geometric correlations as coarse-graining. This originates from clans, groups of nodes that preserve their local geometric arrangement across layers. We uncover the intimate relationship between clan unfolding and mutual percolation, leading to an ambivalent role of clans: making a system fragile yet less prone to complete shattering. Finally, we confirm the correlation between the multiscale nature of geometric organization and the overall robustness. Our findings expand the significance of hidden geometry in interdependent systems.
Hidden Multiscale Organization and Cascading Dynamics in Real Multiplex Networks
Hawoong Jeong
August 1, 2023
================================================================================
Complex systems generally possess an intricate architecture that spans multiple scales.
The network geometry paradigm paves the way for exploring the multiscale organization of complex networks <cit.>. In particular, the concept of hidden metric spaces with hyperbolic geometry gives natural explanations for the common properties of real networks, such as degree heterogeneity, strong clustering, and small-worldness <cit.>. Coarse-graining of nodes based on their distances in a hidden metric space enriches zooming-in/out processes on networks <cit.>. This approach has been shown to be useful in studying self-similarity, or scale-invariance of real complex systems, such as the human brain <cit.>.
The investigation of multiscale properties, however, has still been limited to single-layer networks. Indeed, many real networked systems consist of multiple interdependent systems represented by multilayer or multiplex networks, which are of theoretical and practical significance due to the intriguing phenomena not seen in single-layer networks <cit.>. In multiplex networks, all nodes exist in every layer, and each layer corresponds to a different interaction type. If a node in one layer is attacked, its dependent nodes in the other layers also break down. This interdependent nature can yield a catastrophic cascade of failures, which makes understanding the robustness of multiplexes fascinating <cit.>.
In this context, hidden geometry is essential for studying structural properties and cascading dynamics in real multiplexes, as evidenced by recent publications <cit.>: real multiplexes have geometric organization correlated across layers, which mitigates their vulnerability to targeted attacks. As such, exploring the multiscale nature of multiplexes would give novel insights into the interplay between their geometry and function.
In this Letter, we reveal the crucial role of multiscale organization in the robustness of real multiplexes by employing the hidden geometry framework <cit.>. Notably, we find that real multiplexes have nontrivial multiscale organization characterized by geometric correlations <cit.> of their downscaled replicas. The anomalous behaviors originate from the presence of groups of nodes preserving their local geometric arrangement, which we refer to as clans. Furthermore, we uncover the effects of clan structure on cascading dynamics in multiplexes: the rapid spread of inter-clan cascades makes a system fragile at the early stage whereas the intra-clan organization constrains complete shattering at the end. The intimate relationship between clan unfolding and mutual percolation elucidates the ambivalent role of clan structure both in real and synthetic multiplexes. Finally, we confirm that the multiscale properties of real systems can enhance their overall robustness.
We begin with our extension of multiscale unfolding to multiplexes (see Fig. <ref>). This approach relies on the basic assumption that each node in a network has radial and angular coordinates, r_i and θ_i, of a two-dimensional hyperbolic space <cit.>. Since the radial coordinate r_i basically provides information about the expected degree of the node κ_i, we only focus on angular coordinates {θ_i} by adopting a simple method <cit.> for coarse-graining of nodes in single-layer networks, instead of a more sophisticated technique <cit.> (see Supplemental Material (SM), Sec. I <cit.>). Given a network where the angular coordinates of nodes are known and a block size λ, consecutive λ nodes along the circle are grouped into a supernode whose angular coordinate ϕ is defined by
ξ e^iϕ = 1/λ∑_j=1^λ e^iθ_j
where θ_j is the angular coordinate of node j, and ξ is the absolute value of the right-hand side <cit.>. In order to extend this to multiplexes, the same mapping from nodes to supernodes must be applied to every layer. Therefore, one chooses a standard layer to define the mapping. Iteration of this process yields a sequence of downscaled replicas per multiplex, as shown in Fig. <ref>. Our extension allows us to measure the geometric correlations of the downscaled replicas, or the geometric correlation spectrum. For specificity, we define the geometric correlation as the normalized mutual information (NMI) <cit.> between the sequences of angular coordinates in different layers; thus, the geometric correlation spectrum is represented by the NMI as a function of the zooming-out level l (see SM, Sec. I <cit.>).
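As a concrete illustration of this coarse-graining step, the following Python sketch implements the supernode angular coordinate defined above through a circular (complex-phase) mean. It is a minimal sketch under our own assumptions about ordering and boundary handling (nodes are assumed to be already sorted by angle in the standard layer, and remainder nodes that do not fill a complete block are dropped); it is not the authors' code.

import numpy as np

def coarse_grain_angles(theta, block_size):
    """One zooming-out step: consecutive block_size nodes are merged into a
    supernode whose angle is the circular mean of its members' angles.
    The same node-to-supernode mapping must then be applied to every layer."""
    theta = np.asarray(theta)
    n_blocks = len(theta) // block_size              # leftover nodes are dropped
    blocks = theta[: n_blocks * block_size].reshape(n_blocks, block_size)
    z = np.exp(1j * blocks).mean(axis=1)             # (1/lambda) * sum_j exp(i*theta_j)
    return np.angle(z) % (2 * np.pi)                 # supernode angle phi; |z| is xi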
To investigate the geometric correlation spectra of real multiplexes, we use an empirical dataset (see SM, Sec. III and Table S1 <cit.>). Since the single-scale geometric correlation should be considered as a control variable, we compare each case with its null counterpart that gives the same values of NMI for the raw instance (l=0) but can exhibit different behaviors for the downscaled replicas (l≥1). To generate the corresponding null counterparts, we employ the interlayer connection process in the geometric multiplex model (GMM) for generating synthetic multiplexes with geometric correlations <cit.>. In this model, if the angular coordinates in a layer, Layer 1, are assigned, those in the other layer, Layer 2, are determined by shifting the angular coordinate of each node in Layer 1 based on independent noise. Therefore, for a given multiplex, we can obtain a multiplex where the topologies of layers are the same as the original instance, but the geometric correlation is established by independent local shifts as in the GMM (see SM, Sec. II <cit.>).
Figure <ref>(a) shows geometric correlation spectra for the arXiv collaboration (ArXiv, A48) and the Internet (Internet, I12) multiplexes as well as their null counterparts with similar NMI values at l=0. Strikingly, we observe a significant discrepancy between the original and the null. In the null, the spectra tend to increase monotonically, indicating that local noise is washed out as zooming-out, whereas the real systems exhibit decreases at the early stage of their spectra. This discrepancy can be quantified by the maximum difference between the spectra of the original and the null as
m=max_l[NMI_null(l) - NMI_org(l)].
We also find that all the cases in the dataset exhibit large values of m (see SM Table S1 <cit.>).
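To make the comparison concrete, the following sketch computes a proxy for the geometric correlation spectrum and the discrepancy m. Note that the NMI between continuous angular coordinates requires a discretization; the equal-width binning and the number of bins used here are our own illustrative choices and may differ from the estimator used in the paper.

import numpy as np
from sklearn.metrics import normalized_mutual_info_score

def nmi_between_layers(theta1, theta2, n_bins=32):
    """NMI between the angular coordinates of the same nodes in two layers,
    after binning the circle into n_bins equal sectors (illustrative choice)."""
    edges = np.linspace(0.0, 2.0 * np.pi, n_bins + 1)
    return normalized_mutual_info_score(np.digitize(theta1, edges),
                                        np.digitize(theta2, edges))

def spectrum_discrepancy(nmi_org, nmi_null):
    """Discrepancy m: the maximum over zoom levels l of NMI_null(l) - NMI_org(l)."""
    return float(np.max(np.asarray(nmi_null) - np.asarray(nmi_org)))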
To explain the anomalous interlayer correlations in real multiplexes, we propose a synthetic multiplex model, named the multiscale GMM (MGMM). To cause the loss of correlations between the layers as zooming out on a multiplex, the interlayer connections should not be governed by totally independent noise. Therefore, we introduce groups of nodes that preserve the local intra-group organization in both layers, named clans, to the model. Specifically, if angular coordinates in a layer, Layer 1, are assigned, consecutive Λ nodes in Layer 1 belong to the same clan; in the other layer, their closeness is preserved but the inter-clan arrangement is totally randomized (see SM, Sec. II.B <cit.>). Figure <ref>(b) schematically illustrates our model, the MGMM, and its null counterpart. The clan size of the MGMM is set to Λ=2^2, so that the zooming-out level l=2 leads to completely uncorrelated angular coordinates, whereas the null counterpart shows more correlated angular arrangements after the coarse-graining. Figure <ref>(a) shows NMI_org(l=2)=0 in the MGMM (Model). Finally, the model reproduces the nontrivial decreases observed in real multiplexes, implying the presence of the clan structure in real multiplexes.
To actually reveal the clan structure in a given multiplex, we develop a method for identifying clans, or groups of nodes that preserve their closeness across layers. If the angular distance d_ij between two nodes i and j is less than a certain angular window, θ_w, in both layers, they have the same clan membership. Therefore, clans can be obtained from the connected components of a proximity network with connection probability p_ij=Θ(θ_w - d_ij^(1))Θ(θ_w - d_ij^(2)), where Θ denotes the Heaviside step function and the superscripts of d indicate layers. Given that the expected largest gap θ_c≈ 2πlogN / N among N points distributed randomly on [0,2π) serves as a characteristic scale for θ_w, we define a resolution factor z as follows:
z=1/1+θ_w/θ_c.
For z=0, or θ_w=∞, all nodes belong to a single clan, and for z=1, or θ_w=0, all clans just correspond to isolated nodes.
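The clan-identification procedure just described can be summarised in a few lines of Python. The sketch below builds the overlap-proximity graph by brute force (O(N^2)) and returns its connected components as clans; the use of the natural logarithm in θ_c and the networkx dependency are our own assumptions for illustration.

import numpy as np
import networkx as nx

def angular_distance(a, b):
    d = np.abs(a - b) % (2.0 * np.pi)
    return np.minimum(d, 2.0 * np.pi - d)

def identify_clans(theta1, theta2, z):
    """Clans as connected components of the overlap-proximity graph.

    Two nodes are linked iff their angular distance is below theta_w in BOTH
    layers; theta_w follows from the resolution factor z (0 < z <= 1) via
    theta_w = theta_c * (1/z - 1), with theta_c ~ 2*pi*log(N)/N.
    """
    n = len(theta1)
    theta_c = 2.0 * np.pi * np.log(n) / n
    theta_w = theta_c * (1.0 / z - 1.0)
    g = nx.Graph()
    g.add_nodes_from(range(n))
    for i in range(n):
        for j in range(i + 1, n):
            if (angular_distance(theta1[i], theta1[j]) < theta_w and
                    angular_distance(theta2[i], theta2[j]) < theta_w):
                g.add_edge(i, j)
    return [set(c) for c in nx.connected_components(g)]

Sweeping z from 0 to 1 and recording the resulting component sizes reproduces the clan-unfolding behaviour analysed below.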
The clans identified in the Internet multiplex and in its null counterpart are shown in Fig. <ref> as a function of z. As aforementioned, in both the original and the null, every node belongs to a giant clan for z=0 and every clan contains only one node for z=1. However, for 0<z<1, they exhibit different behaviors in the clan unfolding. For z=1/6, plenty of mesoscopic clans appear in the original, whereas in the null most nodes still belong to a giant clan. For z=2/3, the null has more clans than the original, but most clans merely correspond to isolated nodes or pairs of nodes. Thus, we conclude that the decrease of interlayer correlations upon zooming out on real multiplexes in Fig. <ref>(a) suggests the presence of mesoscopic clans that tend to be preserved as z increases.
The unfolding of clans is reminiscent of certain percolation dynamics. As aforementioned, clans are equivalent to the connected components of a network in which edges correspond to angular proximity in both layers. In addition, the resolution factor z sets the threshold of the overlap proximity, so that increasing z corresponds to the gradual removal of the longest edges from a network that is fully connected at z=0. Hence, the qualitative description of the clan unfolding in Fig. <ref> can be made quantitative through several standard percolation quantities: the normalized number of clusters 𝒩, the relative size of the largest cluster 𝒮, and the average size of clusters, often called the susceptibility, ⟨ s ⟩ = ∑^'_s s^2 n_s / ∑^'_s s n_s, where n_s is the number of clusters of size s and the primed sum excludes the largest cluster.
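Given a list of clan (or cluster) sizes, these three quantities are straightforward to evaluate. In the sketch below we normalize the number of clusters by the total number of nodes, which reproduces the 𝒩 = 1/Λ plateaus discussed later for planted clans of size Λ; this normalization is our reading of the text rather than a stated definition.

import numpy as np

def percolation_quantities(cluster_sizes, n_nodes):
    """Return (N, S, <s>): normalized number of clusters, relative size of the
    largest cluster, and the susceptibility-like average cluster size
    <s> = sum' s^2 n_s / sum' s n_s, with the largest cluster excluded."""
    sizes = np.sort(np.asarray(cluster_sizes, dtype=float))[::-1]
    n_norm = len(sizes) / n_nodes
    s_largest = sizes[0] / n_nodes
    rest = sizes[1:]
    # Convention: <s> = 1 when no finite (non-largest) clusters exist
    avg_s = 1.0 if rest.sum() == 0 else float((rest ** 2).sum() / rest.sum())
    return n_norm, s_largest, avg_s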
In Figs. <ref>(a) and <ref>(c), we present these quantities as a function of z for the Internet multiplex, where the gray dotted lines mark the cases of z=1/6 and z=2/3 shown in Fig. <ref>. Our scenario derived from Fig. <ref> is that the clan structure in the original leads to an earlier appearance of mesoscopic clans that remain longer as z increases. As expected, the number of clans 𝒩 of the original is larger than that of the null for z=1/6, but a reversal occurs between z=1/6 and z=2/3 [see Fig. <ref>(a)]. The opposite behaviors in the largest clan size also support our conjecture [see the inset in Fig. <ref>(c)]. Moreover, as shown in the main panel of Fig. <ref>(c), the average clan size ⟨ s ⟩ of the original exhibits an earlier jump, indicating the earlier breakdown of the giant clan (-1 is added to emphasize the initial jumps from ⟨ s ⟩ = 1), whereas the slower decay suggests that the mesoscale clans remain longer due to the strong intra-clan organization. The other real multiplexes in our dataset typically exhibit behaviors similar to those observed for the Internet (see SM Figs. S1–4 <cit.>).
We extend the notion that clan unfolding can be regarded as percolation in a hypothetical proximity network to a correspondence with mutual percolation <cit.> in an actual multiplex. First, the connection probability p in the actual network is a function of the angular distance, p ∼ d^-1/T, where the temperature T controls the interaction range <cit.>. Although the power-law form implies long-range connections, the limit T → 0 makes the connection probability similar to that in the proximity network. Second, mutual percolation concerns mutually connected components (MCCs), which are defined by a constraint that is similar to, but less stringent than, the one defining components derived from overlapped edges. Third, the targeted attack strategy, the removal of the highest-degree nodes, closely resembles the removal of the longest edges. Specifically, the expected value of the average angular length of edges incident to a node with expected degree κ is given by
∫ d(θ, θ') p(θ, κ, θ', κ') dθdθ'dκ' ∼logκ.
Therefore, we speculate that clan structure also plays an analogous role in mutual percolation against targeted attacks. In the sense that our analysis controls for the single-scale geometric correlation, this direction highlights the underlying mechanisms of the robustness of real multiplexes beyond Ref. <cit.>.
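This correspondence can be probed numerically. The sketch below computes all mutually connected components of a two-layer multiplex by repeatedly refining a partition with the connected components of each layer, and applies a simple targeted attack. Ranking nodes by the sum of their degrees over the two layers is our own assumption for illustration; the exact attack ranking used in the paper may differ, and both layers are assumed to share the same node set.

import networkx as nx

def mutually_connected_components(g1, g2, nodes=None):
    """All MCCs of a two-layer multiplex: node sets whose induced subgraphs are
    connected in BOTH layers, obtained by refining a partition with each
    layer's connected components until it stabilizes."""
    if nodes is None:
        nodes = set(g1.nodes) & set(g2.nodes)
    parts = [set(nodes)] if nodes else []
    stable = False
    while not stable:
        stable = True
        refined = []
        for part in parts:
            pieces = []
            for c1 in nx.connected_components(g1.subgraph(part)):
                for c2 in nx.connected_components(g2.subgraph(c1)):
                    pieces.append(set(c2))
            if len(pieces) != 1:
                stable = False
            refined.extend(pieces)
        parts = refined
    return parts

def targeted_attack(g1, g2, f):
    """Remove a fraction f of nodes ranked by total degree over both layers
    (an assumption) and return the surviving MCC sizes, largest first."""
    n = g1.number_of_nodes()
    rank = sorted(g1.nodes, key=lambda v: g1.degree(v) + g2.degree(v), reverse=True)
    survivors = set(rank[int(f * n):])
    sizes = [len(c) for c in mutually_connected_components(g1, g2, survivors)]
    return sorted(sizes, reverse=True)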
Figures <ref>(b) and <ref>(d) show the percolation quantities considered in Figs. <ref>(a) and <ref>(c), but for MCCs instead of clans, as a function of the fraction f of removed nodes. Remarkably, similarly to the results of clan unfolding shown in Figs. <ref>(a) and <ref>(c), crossing patterns between the original and the null appear in 𝒩 and 𝒮 [see Fig. <ref>(b) and the inset of Fig. <ref>(d)]. Considering the other cases in our dataset, such clear reversals in mutual percolation are not common (see SM Figs. S5–7 <cit.>). However, the smaller 𝒩 and the greater 𝒮 are quite common in the real multiplexes, implying that clan structure impedes complete breakdown against targeted attacks in multiplexes. Moreover, the earlier jump and the slower decay in ⟨ s ⟩ are also in line with the analogy we proposed [see the main panel of Fig. <ref>(d)]. The prominent difference with Fig. <ref>(c) is that the original has a much higher peak than that of the null. This originates from the fact that the actual network has long-range connections, unlike the proximity network. Since the proximity network forms a ring along the angular axis, its initial giant cluster of size N breaks down via two angular gaps, thus leading to a trivial second largest cluster with expected size N/4 and a peak at ⟨ s ⟩∼𝒪(N/4). The number of nodes of the Internet (N=4710) yields the peaks near 10^3 shown in Fig. <ref>(c). On the other hand, in the actual multiplex, these trivial phenomena are absent, so that the clan structure clearly affects the peaks in Fig. <ref>(d). The higher ⟨ s ⟩ peak of the original is typically observed in mutual percolation in the other real multiplexes (see SM Fig. S8 <cit.>).
We further investigate the role of clans in cascading dynamics via synthetic networks generated by the MGMM as the planted clan size Λ varies. In Figs. <ref>(a) and <ref>(b), we present the results of 𝒩 for clan unfolding and mutual percolation in the synthetic networks, respectively. Given that the geometric correlations are similar, with high values of NMI≈ 0.9 for all Λ, we take a single null counterpart for them. Notably, the sign reversals in the difference of 𝒩 between the original and the null [main panel of Fig. <ref>] indicate fragility at the early stage and mitigation of complete shattering at the final stage, just as in Figs. <ref>(a) and <ref>(b). In contrast to real multiplexes, the synthetic networks possess planted clans with the same size Λ. Accordingly, for clan unfolding, the planted clans are revealed and then preserved over a wide range of z, thus leading to plateaus with 𝒩=1/Λ [see the inset of Fig. <ref>(a)]. Therefore, as the planted clans become larger, the early vulnerability tends to disappear, while the mitigation of complete shattering is enhanced [see the main panel of Fig. <ref>(a)].
Because, as mentioned above, the actual multiplexes contain long-range links, the distinct clan partitions are blurred and the plateaus are absent [see the inset of Fig. <ref>(b)]. However, despite such differences, we observe qualitatively similar behaviors for mutual percolation with increasing Λ [see the main panel of Fig. <ref>(b)]. The results for the average cluster size ⟨ s ⟩ and the largest cluster size 𝒮 also clarify the ambivalent role of clans in synthetic networks (see SM Fig. S9 <cit.>).
Finally, our model results raise a question regarding real multiplexes: Does the strength of the multiscale nature correspond to the overall robustness of real systems? The multiscale nature of a multiplex can be quantified by the discrepancy in the geometric correlation spectrum with its null counterpart, m defined in Eq. (<ref>). In the context of our analogy, we define the overall robustness ℛ by the suppression of complete shattering, as follows:
ℛ = max_f [ 𝒩_null(f) - 𝒩_org(f) ].
In other words, ℛ quantifies how durable the mesoscale clusters remaining after the removal of hubs are. As shown in Fig. <ref>, we observe a strong positive correlation between the multiscale nature in geometric correlations m and the overall robustness ℛ (Pearson correlation coefficient ρ≈ 0.72 with p-value ≈ 0.0002). This supports our conjecture based on the model results and emphasizes the significance of multiscale organization in cascading dynamics in real multiplexes.
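For concreteness, ℛ can be read off directly from the two percolation curves. The following minimal sketch (Python, with placeholder curves standing in for the measured 𝒩 of the original multiplex and of its null counterpart; the values are illustrative assumptions, not our data) shows the computation.

import numpy as np

# Placeholder curves: N_org[i] and N_null[i] stand in for the measured percolation
# quantity of the original multiplex and of its null counterpart at removal fraction f_grid[i].
f_grid = np.linspace(0.0, 0.5, 51)
N_org = np.exp(-8.0 * f_grid)   # illustrative values only
N_null = np.exp(-5.0 * f_grid)  # illustrative values only

# Overall robustness: the largest margin by which the null shatters more than the original.
R = float(np.max(N_null - N_org))
print(R)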
In summary, we have demonstrated the importance of multiscale organization in the cascading dynamics of real multiplex networks by employing the hidden geometry framework. In particular, we have shown that real multiplexes exhibit nontrivial geometric correlation spectra and that their origin is the presence of clans, groups of nodes preserving their local geometric arrangement across layers. The new concept of clan has an intimate relationship with mutual percolation against targeted attacks, implying that clan structure promotes the rapid spread of inter-clan cascades while constraining intra-clan propagation. Our findings offer novel insights into the interplay between geometry and function in real interdependent systems.
This research was supported by Basic Science Research Program through the National Research Foundation of Korea (NRF) (KR) [NRF-2020R1A2C1007703 (G.S., M.H.) and NRF-2022R1A2B5B02001752 (G.S., H.J.)].
|
http://arxiv.org/abs/2307.00298v1
|
20230701105037
|
Effects of a Late Gravitational Transition on Gravitational Waves and Anticipated Constraints
|
[
"Evangelos Achilleas Paraskevas",
"Leandros Perivolaropoulos"
] |
astro-ph.CO
|
[
"astro-ph.CO",
"gr-qc"
] |
Academic Editor: Lorenzo Iorio. Received: 10 June 2023; Revised: 24 June 2023; Accepted: 27 June 2023; Published: 30 June 2023.
https://doi.org/10.3390/universe9070317
Effects of a Late Gravitational Transition on Gravitational Waves and Anticipated Constraints
Evangelos Achilleas Paraskevas and Leandros Perivolaropoulos
Department of Physics, University of Ioannina, 45110 Ioannina, Greece; [email protected]; Correspondence: [email protected]
|
http://arxiv.org/abs/2307.01861v1
|
20230704181831
|
The $K$-distribution of random graph $\mathrm{C}^*$-algebras
|
[
"Bhishan Jacelon",
"Igor Khavkine"
] |
math.OA
|
[
"math.OA",
"46L35, 46L80, 15B52, 60B20"
] |
We appeal to results from combinatorial random matrix theory to deduce that various random graph -algebras are asymptotically almost surely Kirchberg algebras with trivial K_1. This in particular implies that, with high probability, the stable isomorphism classes of such algebras are exhausted by variations of Cuntz algebras that we term Cuntz polygons. We also use computer simulations to experimentally verify the behaviour predicted by theory and to estimate the asymptotic probabilities of obtaining stable isomorphism classes represented by actual Cuntz algebras. These probabilities depend on the frequencies with which the Sylow p-subgroups of K_0 are cyclic. For random symmetric r-regular multigraphs, current theory can describe these frequencies for finite sets of odd primes p not dividing r-1. A novel aspect of the collected data is the observation of new heuristics outside of this case, leading to a conjecture for the asymptotic probability of these graphs yielding -algebras stably isomorphic to Cuntz algebras.
The K-distribution of random graph C*-algebras
Bhishan Jacelon and Igor Khavkine
August 1, 2023
================================================================================
§ INTRODUCTION
This article is a continuation of <cit.>, which initiated the study of random constructions of several classes of -algebras of interest to the Elliott classification programme. Here, we focus on the -algebras associated to directed multigraphs that are created by randomly adding edges to a large number of vertices (see Section <ref>). With high probability, the graphs we consider are strongly connected and consequently describe purely infinite simple -algebras. This means that our graph algebras fall under the remit of the Kirchberg–Phillips classification theorem <cit.>, that is, their (stable) isomorphism classes are determined by K-theory. Actually, our algebras are specifically Cuntz–Krieger algebras (see Section <ref>), so we could alternatively appeal to Rørdam's earlier classification <cit.>.
In <cit.>, it was explained how some of the powerful machinery developed by Wood <cit.> could be adapted to the setting of graph -algebras. We considered multigraphs E built from unions of perfect matchings, and observed that the distributions of Sylow subgroups of random cokernels established in <cit.> were also present for K_0((E)), which happens to be given by the cokernel of the transpose of the multigraph's adjacency matrix shifted by the identity. The original motivation for the current work was to experimentally verify the K_0-behaviour predicted by this theory. We were indeed able to do this (see Section <ref>), but what immediately became apparent from the data was something overlooked in <cit.>, namely, asymptotically almost-sure triviality of the K_1-group, which happens to be given by the kernel of the shifted, transposed adjacency matrix.
The singularity problem for random matrices has a storied history in combinatorics (see the discussion and references in <cit.>). It posits that sufficiently random square matrices are nonsingular with probability approaching 1 as the matrix size grows to infinity. Our working hypothesis is that this should be applicable not just to the adjacency matrices A_E of appropriate random graphs E, but equally well to M_E=A_E^t-I. As alluded to earlier, this matrix determines K_0((E))= M_E and K_1((E))= M_E (see the discussion surrounding (<ref>)), and its nonsingularity exactly means triviality of K_1. We will see in Section <ref> that this applies to Bernoulli digraphs _n,q in which each possible edge (allowing loops but not multiple edges) occurs independently with fixed probability q, and also to random r-regular multigraphs _n,r built from perfect matchings.
Because of this K_1-triviality, and in light of K-theoretic classification, the stable isomorphism classes of our random graph algebras (and, given the scope of the singularity problem, likely more generally) can with high probability be described by rather simple directed multigraphs. These representatives, which we call Cuntz polygons (see Figure <ref>), are generalisations of Cuntz algebras in the sense that K_0 need not be cyclic but can be any finite abelian group. Stable isomorphism to an actual Cuntz algebra is detected by cyclicity of K_0. As it turns out, for large Bernoulli digraphs this happens about 85% of the time, a probability which can be computed asymptotically from existing theory (see Theorem <ref>) and which is supported by the data (Table <ref>). For random r-regular multigraphs, there are currently three obstacles to computing the asymptotic probability of the graph algebra being stably isomorphic to a Cuntz algebra. They all stem from the fact that cyclicity of K_0 is equivalent to cyclicity of the Sylow p-subgroups of K_0 for all primes p. Firstly and secondly, although the probability of cyclicity of Sylow p-subgroups is known for odd primes p that do not divide r-1, it is unknown for p=2 or p| r-1. Thirdly, the known theory covers the case of only finitely many primes, not all primes simultaneously. See the discussion around Theorem <ref>, which is adapted from <cit.>.
However, we conjecture from the data that the asymptotic probability of the graph algebra being stably isomorphic to a Cuntz algebra still converges to the product over all primes p of the asymptotic probability of cyclicity of the Sylow p-subgroup of K_0 (see Table <ref>). When the regularity degree is one more than a power of two, we estimate this product to be about 40% (see Table <ref>). As for cyclicity of the Sylow p-subgroup of K_0(_n,r) for large n when p| 2(r-1) (that is, p=2 or p is an odd prime dividing r-1), the data reveal new heuristics (see Table <ref>, Table <ref> and Conjecture <ref>). In particular, the percentage probability of cyclicity for p=2 is about 42 (thus providing a question to an answer of D. Adams).
On the other hand, the symmetric versions _n,q of _n,q (that is, Erdős–Rényi graphs allowing loops) lead to -algebras (_n,q) for which all Sylow p-subgroups of K_0((_n,q)), including p=2, follow the limiting behaviour (<ref>), (<ref>) and (<ref>). This is proved in Theorem 3.12 of Wood's article <cit.>, where it is also stated that forthcoming work will extend this result to infinitely many primes. Moreover, asymptotic triviality of K_1 also holds for these algebras, for the same reason as for the algebras (_n,r) (namely, <cit.>). One would therefore expect about 79% of the algebras (_n,q) for a given large n to be stably isomorphic to Cuntz algebras.
We have gathered the data reported in Section <ref> by running computer simulations. Our code generates samples of random graphs _n,r, _n,q or _n,q (our typical sample size being m=10^5) and collects K-theoretic data.
The primary tool for computing cokernels of integer matrices, and hence the K-theory of graph algebras, is the Smith normal form (SNF) algorithm. It has been used extensively in the literature (see, for example, <cit.>) and is an integral piece of our analysis of the graphs. The SNF algorithm provides, for a given M_E∈ M_n(ℤ), a diagonalisation UM_EV=D=diag(d_1,…,d_n), where U,V∈ GL_n(ℤ) and the d_i are integers with d_i | d_i+1 for 1≤ i≤ n-1. Then,
coker M_E ≅ coker D ≅ ℤ/d_1⊕…⊕ℤ/d_n.
From this, we can also determine the structure of the p-Sylow subgroup of coker M_E ≅ K_0((E)) for any prime p. We also record:
* whether or not each graph is strongly connected;
* whether K_0 is cyclic, by removing any 0 entries from the list L=[d_1,…,d_n] (storing these as K_1), and also any 1s (as these do not contribute to the cokernel), then checking whether there remains at most one entry in L;
* the rank of K_1 (in other words, the number of 0 entries removed from L);
* tallies of cyclicity of Sylow p-subgroups for specified primes p, as well as instances where these subgroups are of the form /(p^N) or (/p)^N, for integers N that are small enough for these events to be statistically observable (see (<ref>) and (<ref>));
* 99% confidence intervals for the various proportions above, using the normal approximation to the binomial distribution.
The simulation and analysis code is written in Python. For efficiency reasons, the SNF is computed by calling the PARI library <cit.>, which is written in C.
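A minimal sketch of the per-sample computation is given below. This is an illustration only, not the code used for the reported data: it substitutes SymPy's smith_normal_form for the PARI call and uses a Bernoulli digraph as the sampled graph, and the parameters n and q are chosen arbitrarily.

import numpy as np
from sympy import Matrix, ZZ
from sympy.matrices.normalforms import smith_normal_form

n, q = 50, 0.5
rng = np.random.default_rng(0)

# Adjacency matrix of a Bernoulli digraph: each possible edge (loops allowed,
# no multiple edges) is present independently with probability q.
A = (rng.random((n, n)) < q).astype(int)

# The K-theory of the graph algebra is read off from M = A^t - I via the Smith normal form.
M = Matrix((A.T).tolist()) - Matrix.eye(n)
D = smith_normal_form(M, domain=ZZ)
d = [abs(D[i, i]) for i in range(n)]

K1_rank = d.count(0)                         # number of zero invariant factors
torsion = [x for x in d if x not in (0, 1)]  # nontrivial invariant factors of K_0
K0_cyclic = len(torsion) <= 1                # the cyclicity test described above
print(K1_rank, torsion, K0_cyclic)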
We have organised this article as follows. Section <ref> provides some background on the -algebras associated to finite graphs, as well as their K-theoretic classification. Section <ref> introduces the relevant random graph models and describes some asymptotic behaviour guaranteed by random matrix theory. Finally, in Section <ref> we present empirical data that either experimentally verify the theory or provide estimates for probabilities that are not yet theoretically computable.
§.§ Acknowledgements
BJ is supported by the GAČR project 22-07833K and partially supported by the Simons Foundation Award No 663281 granted to the Institute of Mathematics of the Polish Academy of Sciences for the years 2021–2023. IK is partially supported by the Praemium Academiae of M. Markl and by Czech science foundation (GAČR) under the grant GA22-00091S. This collaboration would not have been possible without the excellent working environment at the Institute of Mathematics of the Czech Academy of Sciences (RVO: 67985840). The computer code was written with some facilitation by ChatGPT <cit.>.
§ GRAPH -ALGEBRAS
§.§ Graph algebras
A directed graph E consists of a vertex set E^0, an edge set E^1, and range and source maps r,s E^1→ E^0. A path in E is a (possibly finite) sequence of edges (α_i)_i≥1 such that r(α_i)=s(α_i+1) for every i, and a cycle is a finite path (α_i)_i=1^n whose initial and final vertices coincide and with no other repeated vertices. A vertex v∈ E^0 is called a sink if s^-1(v)=∅. Every graph considered in this article will be finite and (except for some of the graphs mentioned in Remark <ref> and Table <ref>) will with high probability have no sinks.
We recall the definition of the graph algebra associated to E. (Note that a different convention is used in <cit.>. What follows is the definition that appears, for example, in <cit.>.) A Cuntz–Krieger E-family associated to a directed graph E=(E^0,E^1,s,r) is a set
{p_v | v∈ E^0}∪{s_e | e∈ E^1},
where the p_v are mutually orthogonal projections and the s_e are partial isometries satisfying:
s_e^*s_f = 0 ∀ e,f∈ E^1 with e ≠ f
s_e^*s_e = p_r(e) ∀ e∈ E^1
s_es_e^* ≤ p_s(e) ∀ e∈ E^1
p_v = ∑_e∈ s^-1(v) s_es_e^* ∀ v∈ E^0 that is not a sink.
The graph algebra (E) is the universal -algebra with generators (<ref>) satisfying the relations (<ref>). Every graph algebra (E) is separable, nuclear and satisfies the universal coefficient theorem (UCT) (see, for example, <cit.> and <cit.>), and (E) is unital if and only if E^0 is finite, in which case 1_(E)=∑_v∈ E^0p_v.
§.§ Cuntz–Krieger algebras
By a Cuntz–Krieger algebra we mean the graph -algebra of a finite graph without sinks. This is not the original definition that appears in <cit.>, but is equivalent to it (see <cit.> and the ensuing discussion, and also <cit.>).
If E is such a graph, and A_E is its adjacency matrix
A_E(v,w) = |{e∈ E^1 | s(e)=v, r(e)=w}|,
then the K-theory of (E) can be computed as follows. From the proof of <cit.>, the map ℤ^E^0→ K_0((E)) that sends the standard basis vector e_v to [p_v] is surjective with kernel isomorphic to (A_E^t-I)ℤ^E^0. In particular,
(K_0((E)),[1_(E)]_0,K_1((E))) ≅ (coker(A_E^t-I),[(1,…,1)], ker(A_E^t-I)).
A UCT Kirchberg algebra is by definition a purely infinite, simple, nuclear, separable -algebra that satisfies the universal coefficient theorem. By the Kirchberg–Phillips theorem (see <cit.>, and also <cit.> and <cit.>), these algebras are classified up to stable isomorphism by K-theory. Further, specifying the class [1]_0 of the unit determines the isomorphism class of the algebra (see <cit.>).
The following is a consequence of <cit.>.
If E is strongly connected and does not consist of a single cycle (so in particular, every cycle has an exit), then the Cuntz–Krieger algebra (E) is purely infinite and simple, so is a UCT Kirchberg algebra.
* We do not actually need the full strength of the Kirchberg–Phillips theorem in this article, and could instead appeal to Rørdam's earlier classification <cit.> of simple Cuntz–Krieger algebras.
* While the graph algebras of primary interest to us will be covered by Proposition <ref>, it should be noted that the full class of unital graph algebras is classifiable by an invariant called ordered reduced filtered K-theory that accounts for the ideal structure of the algebras (see <cit.>).
§.§ Cuntz polygons
For n∈ and m̅ = (m_1,…,m_n)∈^n, let E_m̅ be the graph with vertices
E_m̅^0={v_1,…,v_n}
and edges
E_m̅^1 = {e_i,j| 1≤ i ≤ n, 1≤ j ≤ m_i}∪{l_1,…,l_n}
where, for 1≤ i≤ n and 1≤ j≤ m_i,
r(l_i)=s(l_i)=v_i, r(e_i,j)=v_i, s(e_i,j)=v_i-1 (mod n).
We call the graph algebra 𝒫_m̅:=(E_m̅) a Cuntz n-gon. A Cuntz polygon is a Cuntz n-gon for some n.
The reason for the terminology, apart from the shapes of their associated graphs (see Figure <ref>), is that Cuntz polygons are generalisations of Cuntz algebras. Note indeed that a Cuntz 1-gon 𝒫_(m) is isomorphic to 𝒪_m+1. Moreover, we have the following.
* If A is a Cuntz polygon (E_m̅), then A is a UCT Kirchberg algebra with
(K_0(A),[1_A]_0,K_1(A)) ≅ (⊕_i=1^n ℤ/m_i, (1,…,1), 0).
* A Cuntz polygon 𝒫_m̅ is stably isomorphic to a Cuntz algebra if and only if K_0(𝒫_m̅) is cyclic, which happens if and only if the components m_1,…,m_n of m̅ are pairwise coprime.
* If E is a strongly connected finite graph whose adjacency matrix A_E is not a permutation matrix and is such that A_E^t-I is nonsingular, then (E) is stably isomorphic to 𝒫_m̅ for some m̅.
* Since E_m̅ is strongly connected and does not consist of a single cycle, A is a UCT Kirchberg algebra by Proposition <ref>. The adjacency matrix of E_m̅ is A_E_m̅=I+PD, where D is the diagonal matrix diag(m_1,…,m_n) and P is the permutation matrix whose action on the standard basis of ℤ^E^0 is given by Pe_v_i=e_v_i-1 (mod n). So, A^t_E_m̅-I=DP^t, which implies that ker(A^t_E_m̅-I)=0 and coker(A^t_E_m̅-I) ≅ coker D ≅ ⊕_i=1^n ℤ/m_i. Therefore, (<ref>) follows from (<ref>).
* The second assertion follows from the Kirchberg–Phillips (or Rørdam) classification theorem together with the fact that the Cuntz algebra 𝒪_n is a UCT Kirchberg algebra with K-theory
(K_0(𝒪_n),[1_𝒪_n]_0,K_1(𝒪_n)) ≅ (ℤ/(n-1),1,0)
(see <cit.> and <cit.>). Note also that ⊕_i=1^n ℤ/m_i is cyclic if and only if the m_i are pairwise coprime, and in this case, 𝒫_m̅ is in fact isomorphic to 𝒪_1+∏_i=1^n m_i.
* Finally, if A_E is not a permutation matrix, then E does not consist of a single cycle. By Proposition <ref>, (E) is therefore a UCT Kirchberg algebra. By (<ref>) and (<ref>), there exists some Cuntz polygon that has the same K-theory as (E), which is then in the same stable isomorphism class by the Kirchberg–Phillips (or Rørdam) classification theorem.
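As a concrete illustration of the proposition (an added example, not drawn from our dataset): take m̅=(2,3). The graph E_(2,3) has two vertices, a loop at each, two edges from v_2 to v_1 and three edges from v_1 to v_2, so that A_E_(2,3) has rows (1,3) and (2,1) and A^t_E_(2,3)-I has rows (0,2) and (3,0). This matrix has trivial kernel and cokernel ℤ/2⊕ℤ/3≅ℤ/6, so K_1 vanishes and K_0 is cyclic of order 6; the Cuntz 2-gon 𝒫_(2,3) is therefore stably isomorphic to the Cuntz algebra 𝒪_7, in line with the formula 1+∏_i m_i above.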
* While Cuntz polygons cover all stable isomorphism classes of purely infinite simple Cuntz–Krieger algebras with trivial K_1, they are far from being able to capture all isomorphism classes. The reason of course is the position of the unit in K_0: this is determined by the matrix U in the SNF decomposition A_E^t-I=UDV, and this information is encoded by the potentially complicated structure of the graph. In fact, the class of the unit can be arbitrary: by <cit.>, for any finitely generated abelian group G, distinguished element g∈ G and free abelian group F with F = G (for example, F=0 and G any finite abelian group), there exists a finite graph E such that (E) is a Kirchberg algebra with
(K_0((E)), [1_(E)]_0, K_1((E))) ≅ (G,g,F).
* If A is a unital UCT Kirchberg algebra that is stably isomorphic to a Cuntz algebra _n, then in fact there exists k∈ such that A≅ M_k(_n) (namely, k=[1_A]_0∈ K_0(A) ≅/(n-1)). This is in general not the case for Cuntz polygons. In fact, by (<ref>) and the Chinese remainder theorem, if A is stably isomorphic to then A≅ M_k() for some k∈ if and only if there are representatives g_1,…,g_n of the residue classes of [1_A]_0∈ K_0(A) ≅⊕_i=1^n/m_i such that g_i ≡ g_j (m_i,m_j) for every i and j. So, one really must pass to the compact operators 𝕂 to obtain the isomorphism A⊗𝕂≅⊗𝕂.
§ RANDOM GRAPH MODELS
We will consider three methods of constructing directed multigraphs on n labelled vertices by randomly adding edges. Equivalently, we create adjacency matrices A(n) whose entries are random variables taking values in the nonnegative integers.
* First, we insist that these entries be independent, leading in particular to Bernoulli digraphs _n,q in which each possible edge (allowing loops but not multiple edges) occurs independently with fixed probability q. That is, each entry of A(n) is either 1 (with probability q) or 0 (with probability 1-q).
* By contrast, there are two sources of dependence among the entries of our second model of random adjacency matrix, which is now required to be symmetric with each row summing to a fixed integer r≥3. That is, we build random r-regular undirected multigraphs and convert them into directed ones by replacing each edge (between given vertices i,j) by two (one from i to j and one from j to i). Specifically, we use the perfect matchings model
_n,r = _n,1 + … + _n,1 (r summands),
that is, the union of r independent, uniformly random perfect matchings on the n ∈ 2ℕ vertices (a sampling sketch is given after this list).
* The third model _n,q is in a sense intermediate between the first two. These Erdős–Rényi graphs are constructed in the same manner as _n,q, but symmetrically. They exhibit a pattern of asymptotic connectivity similar to _n,q, and asymptotic K-theoretic behaviour similar to _n,r.
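As referenced above, the perfect matchings model is straightforward to sample. The sketch below (illustrative Python/NumPy code, not the simulation code itself) returns the symmetric adjacency matrix whose entries count parallel edges; the parameters in the example call are arbitrary.

import numpy as np

def sample_regular_multigraph(n, r, rng):
    # Union of r independent, uniformly random perfect matchings on n (even) vertices;
    # each matching is obtained by pairing consecutive entries of a random permutation.
    assert n % 2 == 0
    A = np.zeros((n, n), dtype=int)
    for _ in range(r):
        perm = rng.permutation(n)
        for k in range(0, n, 2):
            i, j = perm[k], perm[k + 1]
            A[i, j] += 1
            A[j, i] += 1
    return A

rng = np.random.default_rng(1)
A = sample_regular_multigraph(100, 5, rng)
print(A.sum(axis=1)[:5])   # every row sums to r = 5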
In each case, we make statements about the asymptotic probabilities of:
* strong connectedness of the graph (entailing simplicity and pure infiniteness of the graph algebra);
* nonsingularity of M(n)=A(n)^t-I (leading to K_1=0 for the graph algebra);
* occurrences of specific Sylow p-subgroups of the K_0-group of the graph algebra, for fixed primes p;
* cyclicity of (Sylow subgroups of) the K_0-group (allowing us to compute or estimate the probability that the graph algebra is stably isomorphic to a Cuntz algebra).
If G is a finite group and p a prime, then G_p denotes the Sylow p-subgroup of G. If P is a finite set of primes, then a P-group is a group whose order is a product of powers of primes in P, and we write G_P for ⊕_p∈ PG_p. A sequence of events is said to occur asymptotically almost surely, or with high probability, if the probability of occurrence converges to 1.
§.§ Bernoulli digraphs
The following in particular applies to Bernoulli digraphs _n,q. Note that, since the entries of the adjacency matrices of these graphs are all either 0 or 1, if q∈(0,1) then the condition (<ref>) holds for all primes p.
Fix a constant ε∈(0,1) and a finite set S of nonnegative integers. For each n∈ℕ, let A(n) be an n× n matrix with independent random entries taking values in S such that, for every i,j∈{1,…,n} and every prime p,
max_s∈ S ℙ(A(n)_ij ≡ s (mod p)) ≤ 1-ε.
Then:
* the graph _n associated with A(n) is asymptotically almost surely strongly connected;
* M(n)=A(n)^t-I is asymptotically almost surely nonsingular (and so is A(n) itself);
* (_n) is asymptotically almost surely stably isomorphic to a Cuntz polygon;
* for any finite set P of primes and any finite abelian P-group G,
lim_n→∞(K_0((_n))_P ≅ G) = 1/|(G)|∏_p∈ P∏_k=1^∞(1-p^-k);
* the probability c_n that (_n) is stably isomorphic to a Cuntz algebra satisfies
lim_n→∞ c_n = ∏_p prime(1+1/(p^2-p)) ∏_k=2^∞ζ(k)^-1 ≈ 0.84694,
where ζ is the Riemann zeta function.
* For the first assertion, see <cit.>. While this theorem is specifically about loopless Bernoulli digraphs, adding loops or multiple edges does not affect connectivity.
* The second assertion is a consequence of <cit.>; there, S can be any finite set of complex numbers (with (<ref>) replaced by max_s∈ S(A(n)_ij = s)≤ 1-ε), so in particular, the entries of M(n) also satisfy the necessary hypotheses. (In fact, it is shown there that the asymptotic probability of singularity is (√(1-ε)+o(1))^n.)
* Since A(n) with high probability is not a permutation matrix, and we have just established strong connectivity of _n and nonsingularity of M(n), the third claim follows from Proposition <ref>.<ref>.
* For the computation of the distribution of Sylow p-subgroups, see <cit.>.
* For the final equation, note that by definition and Theorem <ref>.<ref>, and in light of Theorem <ref>.<ref>, c_n is asymptotically equal to the probability that M(n) ≅ K_0((_n)) is cyclic. Lest the appearance of the zeta function in the limiting value of c_n seem overly mysterious, consider the following derivation, which is heuristic only in skipping the justification of exchanging the limits n→∞ and |P|→∞. Let us define the related quantity
c^P_n = ℙ(K_0((_n))_P is cyclic) ,
so that lim_n→∞ c_n = lim_n→∞lim_|P|→∞ c^P_n. If we could exchange the two limits, then we could compute this by taking the |P|→∞ limit of (<ref>). First, recall that for any prime p and any integer N≥1
|Aut(ℤ/p^N)| = p^N(1-p^-1) ,
as well as |Aut(G_1 ⊕ G_2)| = |Aut(G_1)| · |Aut(G_2)| when |G_1| and |G_2| are coprime. Let N_P = (N_p)_p∈ P denote a sequence of non-negative integers, and recall also that a cyclic group whose order is divisible only by primes p∈ P must be of the form ⊕_p∈ P ℤ/p^N_p, so that
lim_n→∞ c_n^P
= lim_n→∞ ℙ(K_0((_n))_P is cyclic)
= lim_n→∞ ∑_N_P ℙ(K_0((_n))_P ≅ ⊕_p∈ P ℤ/p^N_p)
“=” ∑_N_P lim_n→∞ ℙ(K_0((_n))_P ≅ ⊕_p∈ P ℤ/p^N_p)
= ∑_N_P 1/|Aut(⊕_p∈ P ℤ/p^N_p)| ∏_p∈ P∏_k=1^∞(1-p^-k)
= ∑_N_P ∏_p∈ P 1/|Aut(ℤ/p^N_p)| ∏_k=1^∞(1-p^-k)
= ∏_p∈ P(1 + ∑_N=1^∞ p^-N/(1-p^-1)) ∏_k=1^∞(1-p^-k)
= ∏_p∈ P(1-1/p+1/(p-1)) ∏_k=2^∞(1-p^-k)
= ∏_p∈ P(1+1/(p^2-p)) ∏_k=2^∞(1-p^-k) .
By taking the limit |P|→∞ and using Euler's product formula
ζ(k)^-1 = ∏_p prime(1-p^-k),
we recover (<ref>).
Of course, in this heuristic derivation we have not justified the exchanges of the sum over N_P and the n→∞, |P|→∞ limits and that is part of what is accomplished in <cit.> (with u=0 and α_n=ε). On the other hand, while this theorem essentially does establish (<ref>), we cannot apply it automatically because the entries of M(n)=A(n)^t-I are not identically distributed (the probability distribution for the diagonal entries has been shifted by 1). However, a close inspection of its proof reveals that the conclusion remains valid. We now offer a brief justification of this claim.
One must show that, for every prime p, (M(n) mod p) ≥ n-1. `Small', `medium' and `large' primes are considered separately. The case of small primes <cit.> is already observed for random matrices whose entries are not necessarily identical. The medium primes case <cit.> depends on Odlyzko's lemma <cit.> and its `union bound' corollary <cit.> (whose proofs are easily seen to cover random vectors with non-identical entries that share the same bound, which is certainly the case for us), <cit.> (which does not assume identical distributions), and <cit.> (the u=0 case of which is <cit.>, discussed below). The large primes case <cit.>, our use of which falls under <cit.>, depends on the Erdős–Littlewood–Offord result <cit.> (which by convexity, that is, conditioning over the possible values taken by the first entry of the random vector X and using the triangle inequality, implies the version in which this entry is not identical to the others), <cit.> (which follows from Odlyzko's lemma), <cit.> (which is deterministic) and <cit.> (which follows from the previous three results).
It remains to carefully inspect the proof of <cit.>, which is split into many lemmas, propositions and theorems. The key ones for us to examine are <cit.> (which is the aforementioned Erdős–Littlewood–Offord result), <cit.>, <cit.> and <cit.>. The other results are either deterministic or are proved from these key ones (together with Odlyzko's lemma) without further appeal to the assumption of identically distributed entries.
For each of these key results, the only change is that instead of one measure μ specifying the distribution of the entries of the random vector X, there are two, μ and μ_1 say (to account for the shifted first entry), and products of the form ∏_l=1^nμ(t_l) should be replaced by μ_1(t_1)∏_l=2^nμ(t_l). Here, μ denotes the Fourier transform defined on <cit.>, the absolute value of which is not affected by the shift μ_1(t)=μ(t+1). So, the proof of <cit.> carries over unchanged. For <cit.> (which is used to prove <cit.>), the measure ν_1 associated to μ_1 (obtained from <cit.>) has the same Fourier transform as ν associated to μ (since by construction, ν(ξ)=1-γ+γ|μ(ξ)|^2 for a suitable γ∈(0,1)), so again the same proof works (in fact with ν_1=ν). The rest of the proof of <cit.> also needs no change.
Apart from expanding the analysis from finitely many primes to all primes simultaneously, a substantial portion of Nguyen and Wood's work in <cit.> goes towards proving that, for random matrices with identically distributed entries, the probability bound 1-ε can be weakened to 1-α_n with α_n≥ n^-1+ε. The argument we have described in the proof of Theorem <ref> works for α_n≥ n^-1/6+ε (which is the case considered in <cit.>), but a great deal more care is needed to extend to the general case. On the other hand, as pointed out in <cit.>, the bound α_n≥ n^-1+ε is asymptotically best possible: if the matrix entries can take the value 0 with probability at least 1-log n/n, then with nonzero probability the matrix has a row of zeros, so is singular and in particular has infinite cokernel. The statistics of these sparser graphs are thus rather different, a fact that is supported by the data (see Table <ref>).
§.§ Random regular multigraphs
For a finite abelian group G, a symmetric, ℤ-bilinear map φ: G× G→ℂ^* is a perfect pairing if the only g∈ G with φ(g,G)=1 is g=0. (Equivalently, φ induces an isomorphism G→Ĝ = Hom(G,ℂ^*) via g↦φ(g,·).) We define
N(G) = |{symmetric, bilinear, perfect φ: G× G→ℂ^*}| / (|G|·|Aut(G)|).
If G is a finite abelian p-group
G=⊕_i=1^M ℤ/p^λ_i
with λ_1≥λ_2≥…≥λ_M, then
N(G) = p^-∑_i μ_i(μ_i+1)/2 ∏_i=1^λ_1 ∏_j=1^⌊(μ_i-μ_i+1)/2⌋ (1-p^-2j)^-1,
where μ_i=|{j|λ_j≥ i}| (see <cit.>). It is worth noticing that, if the λ_i are interpreted as the lengths of the rows of a Young diagram, the μ_j are the lengths of its columns.
The expression (<ref>) makes use of the following observation. If G_1 is a finite abelian p_1-group and G_2 is a finite abelian p_2-group with p_2 p_1, then N(G) = N(G_1)· N(G_2) for G=G_1⊕ G_2. After all, if we count the perfect pairings in the definition (<ref>) of N(G) as group homomorphisms G→G, any such homomorphism comes from an independent pair of homomorphisms G_1 →G_1 and G_2 →G_2, which are themselves perfect pairings. The factorisation of N(G) then follows by noting also the obvious factorisations |G|=|G_1| · |G_2| and |(G)| = |(G_1)| · |(G_2)|.
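For reference, (<ref>) is easy to evaluate. The following sketch (an illustration in Python, with the partition λ of the p-group supplied as a list; the specific arguments in the checks are arbitrary) computes N(G) and reproduces the special cases quoted below.

def N_p_group(p, lam):
    # N(G) for the finite abelian p-group G = ⊕_i Z/p^{λ_i}, with lam = [λ_1 ≥ λ_2 ≥ ...];
    # mu_i = #{j : λ_j ≥ i} is the length of the i-th column of the Young diagram of λ.
    lam = sorted(lam, reverse=True)
    top = lam[0] if lam else 0
    mu = [sum(1 for l in lam if l >= i) for i in range(1, top + 2)]   # mu_1, ..., mu_{top+1} = 0
    value = float(p) ** (-sum(m * (m + 1) // 2 for m in mu[:top]))
    for i in range(top):
        for j in range(1, (mu[i] - mu[i + 1]) // 2 + 1):
            value /= 1.0 - float(p) ** (-2 * j)
    return value

# Sanity checks against the special cases in the text: N(Z/p^N) = p^{-N}, and
# N((Z/p)^N) = p^{-N(N+1)/2} * prod_{j <= N/2} (1 - p^{-2j})^{-1}.
print(N_p_group(3, [2]))      # 1/9
print(N_p_group(3, [1, 1]))   # 3^{-3} / (1 - 3^{-2}) = 1/24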
The following adaptation of <cit.> was mostly observed in <cit.>.
Let n∈2 and 3≤ r∈, and recall that _n,r denotes a random r-regular multigraph on n vertices. Then:
* _n,r is asymptotically almost surely strongly connected;
* M(n)=A(n)^t-I is asymptotically almost surely nonsingular (and so is A(n));
* (_n,r) is asymptotically almost surely stably isomorphic to a Cuntz polygon;
* for any finite set P of odd primes not dividing r-1, and any finite abelian P-group G,
lim_n∈2(K_0((_n,r))_P≅ G) = ∏_p∈ P N(G_p)∏_k=1^∞(1-p^-2k+1).
The first and last assertions appear in <cit.>. The second then follows from <cit.> (see also the rest of the proof of <cit.>), and the third then follows from Proposition <ref>.<ref>, since A(n) is not a permutation matrix (because each of its rows sums to r) and we have already established asymptotic strong connectivity of E as well as asymptotic nonsingularity of A(n)^t-I.
In particular, for any odd prime p coprime to r-1 and any integer N≥0,
lim_n∈2ℕ ℙ(K_0((_n,r))_p ≅ ℤ/p^N) = p^-N ∏_k=1^∞(1-p^-2k+1) ≈ p^-N,
while
lim_n∈2ℕ ℙ(K_0((_n,r))_p ≅ (ℤ/p)^N) = p^-N(N+1)/2 ∏_j=1^⌊ N/2 ⌋(1-p^-2j)^-1 ∏_k=1^∞(1-p^-2k+1) ≈ p^-N(N+1)/2,
the approximations holding for sufficiently large primes p.
Using (<ref>) and (<ref>), we can compute the asymptotic probability that (K_0((_n,r))_P is cyclic, for any finite set P of odd primes not dividing r-1. The following also applies to Erdős–Rényi graphs _n,q for P={p}, where p can be any prime (see Theorem <ref>).
Let (_n)_n∈ be a family of random graphs and P a finite set of primes such that
lim_n∈(K_0((_n))_P≅ G) = ∏_p∈ P N(G_p)∏_k=1^∞(1-p^-2k+1)
for every finite abelian P-group G. Then,
lim_n∈(K_0((_n))_P is cyclic) = ∏_p∈ P∏_k=2^∞(1-p^-2k+1).
As in the proof of Theorem <ref>, let us write c_n^P:=(K_0((_n))_P is cyclic), and for N_P = (N_p)_p∈ P let c_n^N_P = (K_0((_n))_P ≅⊕_p∈ P/p^N_p), so that c_n^P = ∑_N_P c_n^N_P.
To show the equality (<ref>) we essentially want to interchange the limit with the summation to get
lim_n∈ c_n^P
= lim_n∈∑_N_p c_n^N_P
= ∑_N_Plim_n∈ c_n^N_P
and sum the right-hand side, where by hypothesis (<ref>) and (<ref>) we have
∑_N_Plim_n∈ c_n^N_P = ∑_N_P∏_p∈ P p^-N_p∏_k=1^∞(1-p^-2k+1)
= ∏_p∈ P∑_N_p=0^∞ p^-N_p∏_k=1^∞(1-p^-2k+1)
= ∏_p∈ P∏_k=2^∞(1-p^-2k+1) .
The interchange in (<ref>) is justified by the Vitali convergence theorem <cit.>, which in general applies to the interchange of a limit with integration of absolutely summable functions over a measure space. In our case, with the pointwise convergence of c_n^N_P over the discrete measure space of all N_P's, the remaining hypothesis to check is equismallness at infinity (a.k.a equisummability at infinity; for series this is spelled out in <cit.>). Namely, for every ε>0 we need to find a finite set X_ε such that for all n
∑_N_P ∉X_ε c_n^N_P < ε .
By virtue of being probability distributions (hence absolutely summable, with no need to take absolute values because of pointwise positivity) there exist finite sets Z_ε and Z_n,ε of abelian P-groups such that
∑_G∉Z_εlim_m∈(K_0((_m))_P ≅ G) < ε/2 and ∑_G∉Z_n,ε(K_0((_n))_P ≅ G) < ε .
By pointwise convergence of the probability distributions in (<ref>), there also exists an M∈ such that for all n≥ M and G∈ Z_ε
|(K_0((_n))_P ≅ G) - lim_m∈(K_0((_m))_P ≅ G) | < ε/2|Z_ε| .
Hence, we can set Z_n,ε = Z_ε for all n≥ M. Noting that c^N_P and c_n^N_P are restrictions of the above probability distributions to cyclic groups and setting X_ε to be the intersection of the set Z_ε∪⋃_n<M Z_n,ε with cyclic groups shows that c_n^N_P is equismall at infinity, allowing us to apply Vitali's convergence theorem in (<ref>).
* By definition, asymptotically almost-sure events in the perfect matchings model are also asymptotically almost sure for any contiguous model (see <cit.>). This is in particular the case for the uniform model 𝔾'_n,r, that is, a random element of the set of r-regular multigraphs with the uniform distribution, conditioned on there being no loops. Had we adopted this model instead, the resultant graph algebras would therefore still be stably isomorphic to Cuntz polygons with high probability.
* If r=2^j+1 for some j, then (<ref>), (<ref>) and (<ref>) hold for _n,r for any finite set P of odd primes (as all odd primes are coprime to r-1=2^j). This is why we pay special attention to these values of r in Section <ref>. But we do not have any theoretical distribution for p=2. In addition, because of the subtleties arising from the two limits, n→∞ and enlarging P to the set of all primes, one cannot immediately extend (<ref>) to infinitely many primes. For Bernoulli digraphs, this more delicate analysis is carried out in <cit.>, but there is as yet no such result for r-regular multigraphs. These are exactly the obstacles to computing the asymptotic probability that (_n,r) is stably isomorphic to a Cuntz algebra, which we mentioned in the Introduction.
§.§ Erdős–Rényi graphs
The following in particular applies to Erdős–Rényi graphs _n,q, the symmetric versions of _n,q, provided that q∈(0,1).
Fix a constant ε∈(0,1) and a finite set S of nonnegative integers. For each n∈ℕ, let A(n) be a symmetric n× n matrix such that, for every i≤ j∈{1,…,n}, the entries A(n)_ij are independent random variables taking values in S that satisfy
max_s∈ S ℙ(A(n)_ij ≡ s (mod p)) ≤ 1-ε
for every prime p. Then:
* the graph _n associated with A(n) is asymptotically almost surely strongly connected;
* M(n)=A(n)^t-I is asymptotically almost surely nonsingular (and so is A(n) itself);
* (_n) is asymptotically almost surely stably isomorphic to a Cuntz polygon;
* for any prime p and any finite abelian p-group G,
lim_n∈(K_0((_n))_p≅ G) = N(G)∏_k=1^∞(1-p^-2k+1),
where N(G) is as in (<ref>), (<ref>).
The first assertion follows from <cit.> (see <cit.>) and the last from <cit.>. The second then follows from <cit.>, and the third then follows from Proposition <ref>, exactly as in the proof of Theorem <ref>.<ref>.
By Theorem <ref>.<ref>, all Sylow p-subgroups of K_0((_n,q)), including p=2, follow the limiting behaviour (<ref>), (<ref>) and (<ref>) with P={p}. The data we have collected for these graphs are consistent with this (see Figure <ref> and Table <ref>).
§ EMPIRICAL K-DATA
In this section, we present some of the data produced by our computer code. We generated samples of size m=10^5 for various random Bernoulli digraphs _n,q, Erdős–Rényi graphs _n,q and regular multigraphs _n,r, and collected the K-theoretic data outlined in Section <ref> for several small primes (up to p=37) and a couple larger ones (p=137 and p=277). Typical sampled graphs are illustrated in Figure <ref> and Figure <ref>.
All collected data were consistent with available theory, as long as the number of vertices was sufficiently large. In practice, n=50 vertices seemed to be large enough to produce expected results (see Table <ref>), though most of the time we opted for n=100. Note that, since our sample sizes are very large, the margin of error for the data is small. Using the normal approximation to the binomial distribution, at the 99% level of confidence this margin is at most
z_0.005 · √(x(1-x)/m) |_x=1/2 ≈ 2.576/(2√(10^5)) ≈ 0.004
for any of our proportions of interest. This means that if our sampling procedure were to be repeated many times, and γ denotes our empirical estimate of some true probability γ, then the interval [γ-0.004, γ+0.004] should contain γ approximately 99% of the time. In practice, the margin of error depends on the variability of the data characteristic being measured, possibly resulting in a narrower confidence interval. See, for example, Table <ref>, and notice also that the error bars in graphs such as Figure <ref> vary from bar to bar.
§.§ Bernoulli graphs _n,q
Here, our emphasis was on (<ref>) for small primes p and (<ref>) for small p-groups G=/(p^N) and G=(/p)^N. See Figure <ref> and Table <ref>, respectively.
As discussed in Remark <ref>, sparser Bernoulli digraphs (those for which q=log n/n) exhibit rather different statistics. This expectation was consistent with the data, some of which is presented in Table <ref>. As for connectivity, note that if q=(log n + ω)/n for some function ω=ω(n), then
lim_n→∞ ℙ(_n,q is strongly connected) =
0 if ω→-∞
e^{-2e^{-c}} if ω→ c constant
1 if ω→∞
(see <cit.>, which is derived from the similar behaviour <cit.> exhibited by the symmetric versions _n,q). If n is large enough and q=log n/n (that is, ω=0), we would expect strong connectivity about e^-2≈13.5% of the time. Our observation of 15.6% for n=100 (see Table <ref>) is not inconsistent with this.
§.§ Erdős–Rényi graphs _n,q
From (<ref>) and (<ref>), we can compute the asymptotic cyclicity probabilities for K_0((_n,q))_p, for any prime p. The recorded data closely agree with these probabilities (see Figure <ref>). It is expected, though not yet proved, that the asymptotic probability that K_0((_n,q)) is cyclic is equal to the product of the respective probabilities over all primes p, that is, to
∏_p prime∏_k=2^∞(1-p^-2k+1) ≈ 0.79352.
The data in Table <ref> are in accordance with this expectation.
§.§ Regular multigraphs _n,r
With n=100, the data collected for K_0((_n,r))_p for all primes p between 3 and 37 (and also p=137 and p=277) were remarkably in line with the limiting distributions (<ref>) and (<ref>) provided that p∤ r-1. See, for example, Table <ref>, Figure <ref> and Figure <ref>.
For p=2, the event “K_0((_n,r))_2 is cyclic” tended to be concentrated on a single outcome K_0((_n,r))_2=/2^N, with N=0 if 2∤ r-1. (On the other hand, note that, in general, K_0((_n,r))_p cannot be trivial if p| r-1.) We did however observe some interesting deviations from this pattern (see Figure <ref> and Figure <ref>), including variable behaviour for a fixed value of r and different n (compare Figure <ref> with Figure <ref>).
In Table <ref>, γ̂_n,r and π̂_p,r denote our estimators of the probabilities
γ_n,r := ℙ(K_0((_n,r)) is cyclic)
and
π_p,r := lim_n∈2ℕ ℙ(K_0((_n,r))_p is cyclic).
While we are somewhat begging the question here, as the definition of π_p,r does assume that the limit exists, note that, from Proposition <ref>,
π_p,r=∏_k=2^∞(1-p^-2k+1)
if p∤ 2(r-1). Convergence outside of this setting, that is, for p=2 or p| r-1, does seem to be supported by the data. In fact, if we rely on the heuristic principle that n=100 is large enough to indicate asymptotic behaviour, then from Table <ref> we see that π_2,r is consistently about 0.42, independently of r. As for the provenance of this number, we suspect it to be (<ref>) adjusted to also include the k=1 term. Fascinatingly, this adjustment also appears to govern the asymptotic probability of cyclicity of K_0((_n,r))_p for primes p dividing r-1 (see Table <ref>). Arguing heuristically as in the proof of Theorem <ref>.<ref>, we would also expect γ_n,r to converge as n→∞ to ∏_p primeπ_p,r. This expectation is indeed supported by the data (see Table <ref> and Table <ref>). Putting this together, we are led to make the following.
For an integer r and prime p, let π_p,r := lim_n∈2ℕ ℙ(K_0((_n,r))_p is cyclic). Then, for any r≥3:
* π_p,r = ∏_k=1^∞(1-p^-2k+1) if p=2 (in which case, π_p,r≈ 0.419) or if p is odd and p| r-1;
* the asymptotic probability that (_n,r) is stably isomorphic to a Cuntz algebra is
∏_p prime π_p,r = ∏_p prime, p | 2(r-1) (1-p^-1) ∏_p prime ∏_k=2^∞(1-p^-2k+1) (≈ 0.397 if r=2^j+1).
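The limiting constants quoted in this article (0.84694 for Bernoulli digraphs, 0.79352 for the symmetric models, and the conjectured 0.397 for r=2^j+1) are easy to approximate numerically. The sketch below is illustrative Python only, with the products truncated at primes below 20000 and at k<60; these truncation levels are assumptions chosen for convenience.

from sympy import primerange

primes = list(primerange(2, 20000))
K = 60   # truncation depth in k; the inner products converge very quickly

# prod_p (1 + 1/(p^2 - p)) * prod_{k>=2} zeta(k)^{-1}, via Euler's product formula.
c_bernoulli = 1.0
for p in primes:
    factor = 1.0 + 1.0 / (p * p - p)
    for k in range(2, K):
        factor *= 1.0 - float(p) ** (-k)
    c_bernoulli *= factor

# prod_p prod_{k>=2} (1 - p^{-(2k-1)}).
c_symmetric = 1.0
for p in primes:
    for k in range(2, K):
        c_symmetric *= 1.0 - float(p) ** (-(2 * k - 1))

print(c_bernoulli)         # ≈ 0.8469
print(c_symmetric)         # ≈ 0.7935
print(0.5 * c_symmetric)   # ≈ 0.397, the conjectured value for r = 2^j + 1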
10
Arklint:2015tu
S. E. Arklint and E. Ruiz.
Corners of Cuntz-Krieger algebras.
Trans. Amer. Math. Soc., 367(11):7595–7612, 2015.
Bates:2000fk
T. Bates, D. Pask, I. Raeburn, and W. Szymański.
The C^*-algebras of row-finite graphs.
New York J. Math., 6:307–324 (electronic), 2000.
Bourgain:2010aa
J. Bourgain, V. H. Vu, and P. M. Wood.
On the singularity probability of discrete random matrices.
J. Funct. Anal., 258(2):559–603, 2010.
Cornelissen:2008aa
G. Cornelissen, O. Lorscheid, and M. Marcolli.
On the K-theory of graph C^*-algebras.
Acta Appl. Math., 102(1):57–69, 2008.
Cuntz:1977qy
J. Cuntz.
Simple C^*-algebras generated by isometries.
Comm. Math. Phys., 57(2):173–185, 1977.
Cuntz:1981kq
J. Cuntz.
K-theory for certain C^*-algebras.
Ann. of Math. (2), 113(1):181–197, 1981.
Cuntz:1980hl
J. Cuntz and W. Krieger.
A class of -algebras and topological Markov chains.
Invent. Math., 56(3):251–268, 1980.
Dunford:1958
N. Dunford and J. T. Schwartz.
Linear Operators. I. General Theory.
Interscience Publishers, New York-London, 1958.
ISBN 0470226056
Eilers:2016wz
S. Eilers, T. Katsura, M. Tomforde, and J. West.
The ranges of K-theoretic invariants for nonsimple graph
algebras.
Trans. Amer. Math. Soc., 368(6):3811–3847, 2016.
Eilers:2021aa
S. Eilers, G. Restorff, E. Ruiz, and A. P. W. Sørensen.
The complete classification of unital graph C^*-algebras:
geometric and strong.
Duke Math. J., 170(11):2421–2517, 2021.
Erdos:1959aa
P. Erdős and A. Rényi.
On random graphs. I.
Publ. Math. Debrecen, 6:290–297, 1959.
Frieze:2016tw
A. Frieze and M. Karoński.
Introduction to random graphs.
Cambridge University Press, Cambridge, 2016.
Jacelon:2023aa
B. Jacelon.
Random amenable C^*-algebras.
Math. Proc. Cambridge Philos. Soc., First View:1–22, 2023.
Kirchberg:2000kq
E. Kirchberg and N. C. Phillips.
Embedding of exact C^*-algebras in the Cuntz algebra
𝒪_2.
J. Reine Angew. Math., 525:17–53, 2000.
Maples:2013aa
K. Maples.
Singularity of random matrices over finite fields.
arXiv:1012.2372 [math.CO], 2013.
Nguyen:2020aa
H. H. Nguyen and E. Paquette.
Surjectivity of near-square random matrices.
Combin. Probab. Comput., 29(2):267–292, 2020.
Nguyen:2018vh
H. H. Nguyen and M. M. Wood.
Cokernels of adjacency matrices of random r-regular graphs.
arXiv:1806.10068 [math.PR], 2018.
Nguyen:2022uf
H. H. Nguyen and M. M. Wood.
Random integral matrices: universality of surjectivity and the
cokernel.
Invent. Math., 228(1):1–76, 2022.
ChatGPT
OpenAI.
ChatGPT [Large language model], June 2023.
<https://chat.openai.com/chat>
PARI2
The PARI Group.
PARI/GP version , Univ. Bordeaux, 2022.
Available from <http://pari.math.u-bordeaux.fr/>.
Phillips:2000fj
N. C. Phillips.
A classification theorem for nuclear purely infinite simple C^*-algebras.
Doc. Math., 5:49–114 (electronic), 2000.
Raeburn:2005aa
I. Raeburn.
Graph algebras, volume 103 of CBMS Regional Conference
Series in Mathematics.
Published for the Conference Board of the Mathematical Sciences,
Washington, DC; by the American Mathematical Society, Providence, RI, 2005.
Raeburn:2004tg
I. Raeburn and W. Szymański.
Cuntz-Krieger algebras of infinite graphs and matrices.
Trans. Amer. Math. Soc., 356(1):39–59, 2004.
Rordam:1995aa
M. Rørdam.
Classification of Cuntz-Krieger algebras.
K-Theory, 9(1):31–58, 1995.
Rordam:2002yu
M. Rørdam and E. Størmer.
Classification of nuclear C^*-algebras. Entropy in
operator algebras, volume 126 of Encyclopaedia of Mathematical
Sciences.
Springer-Verlag, Berlin, 2002.
Operator Algebras and Non-commutative Geometry, 7.
Treves:1967
F. Trèves.
Topological vector spaces, distributions and kernels.
Academic Press, New York-London, 1967.
Wood:2017tk
M. M. Wood.
The distribution of sandpile groups of random graphs.
J. Amer. Math. Soc., 30(4):915–958, 2017.
Wood:2019wn
M. M. Wood.
Random integral matrices and the Cohen-Lenstra heuristics.
Amer. J. Math., 141(2):383–398, 2019.
Wood:2023aa
M. M. Wood.
Probability theory for random groups arising in number theory.
arXiv:2301.09687 [math.NT], 2023.
|
http://arxiv.org/abs/2307.00576v1
|
20230702140146
|
Using Cascade in Quantum Key Distribution
|
[
"Devashish Tupkary",
"Norbert Lütkenhaus"
] |
quant-ph
|
[
"quant-ph"
] |
[email protected]
Institute for Quantum Computing and Department of Physics and Astronomy, University of Waterloo, Waterloo, Ontario, Canada, N2L 3G1
[email protected]
Institute for Quantum Computing and Department of Physics and Astronomy, University of Waterloo, Waterloo, Ontario, Canada, N2L 3G1
We point out a critical flaw in the analysis of Quantum Key Distribution (QKD) protocols that employ the two-way error correction protocol Cascade. Specifically, this flaw stems from an incomplete consideration of all two-way communication that occurs during the Cascade protocol. We present a straightforward and elegant alternative approach that addresses this flaw and produces valid key rates. We exemplify our new approach by comparing its key rates with those generated using older, incorrect approaches, for Qubit BB84 and Decoy-State BB84 protocols. We show that in many practically relevant situations, our rectified approach produces the same key rate as older, incorrect approaches. However, in other scenarios, our approach produces valid key rates that are lower, highlighting the importance of properly accounting for all two-way communication during Cascade.
Using Cascade in Quantum Key Distribution
Norbert Lütkenhaus
August 1, 2023
=========================================
§ INTRODUCTION
Quantum Key Distribution (QKD) <cit.> can provide information-theoretic security of secret keys between two communicating parties, Alice and Bob. Since the quantum channel connecting Alice and Bob is not perfect in any practical realization, QKD protocols implement an error-correction step to correct errors in the measurement data collected by Alice and Bob. This involves classical communication between the two parties, and leaks additional information to the eavesdropper Eve, which must be accounted for when calculating the achievable secret key rate. Cascade <cit.> is one of the most widely used two-way error-correction protocols for QKD. A lot of work has been done on optimizing various parameters of the Cascade protocol, such as its block sizes, number of rounds, and efficiency <cit.>. Cascade has also been used in a large number of QKD experiments <cit.>.
Our main result is to rectify a flaw in the analysis of QKD protocols using Cascade, which stems from an incomplete consideration of the two-way classical communication during Cascade. We observe that in past literature, only the communication from Alice to Bob has been accounted for when considering information leakage about the key to Eve. For a rigorous security proof, the communication from Bob to Alice must also be included when bounding the information leaked to Eve.
One valid approach is to simply add the number of bits leaked during the Bob-to-Alice communication to the information leakage term. However, we show that such an approach results in low key rates, since it almost doubles the information leaked to Eve during error correction. A better approach is required to make Cascade fit for use in QKD protocols.
We propose a straightforward and elegant alternative approach that produces valid key rates. The main idea is to compute key rates for a protocol that, in the information reconciliation step, leaks to Eve all the communication from Alice to Bob during Cascade, along with the locations of all errors in Alice's and Bob's raw data. We show that this leaks more information to Eve than Cascade does, and thus any key rate for such a protocol is a valid key rate for the original protocol that uses Cascade.
We apply our solution to the qubit based BB84 protocol, and the polarization encoded weak coherent pulse (WCP) BB84 with decoy intensities, for a variety of channel models and constraints. We use the numerical framework from <cit.> for our calculations. In this work, we restrict our attention to the asymptotic regime for simplicity, where one can assume an IID collective attack without loss of generality <cit.>. However, our solution can be directly adapted to the analysis of finite size protocols. This is because many such analyses ultimately involve the optimization of the same objective function <cit.> (with different constraints), and our approach only modifies the objective function.
This paper is organized as follows. In Sec. <ref> we explain the steps in a generic QKD protocol and explain the Cascade protocol briefly. In Sec. <ref> we explain the problem with past analysis of QKD protocols using Cascade, and present our arguments for correcting it. We also review the numerical framework that we used to compute key rates in this work. In Sec. <ref> and Sec. <ref> we apply our solution to the BB84 protocol implemented using qubits, and WCP states with decoy intensities respectively. In Sec. <ref> we present concluding remarks.
§ BACKGROUND
§.§ Protocol Description
In this subsection, we give a description of the asymptotic formulation of a typical QKD protocol that can use Cascade in the information reconciliation step.
* Quantum Phase: In an entanglement-based protocol, Alice and Bob receive states from a source and perform measurements on them. In a prepare-and-measure protocol, Alice prepares and sends signals to Bob, who measures them. The security analysis of a prepare-and-measure scheme can be reduced to that of an entanglement-based scheme using the source replacement scheme <cit.>.
* Acceptance Test (Parameter Estimation): Alice and Bob announce the measurements obtained, and signals sent, for a small fraction of signals. They then perform a test to decide whether to abort or continue the protocol. This step is modelled as Alice and Bob performing some measurements given by POVMs {Γ_k}, obtaining expectation values {γ_k }. The POVMs and expectation values depend on whether the protocol implements “fine-graining” or “coarse-graining” during the acceptance test <cit.>, and on the exact nature of the coarse-graining.
* Classical processing: For the remaining signals, Alice and Bob perform some blockwise processing of data. This involves operations such as public announcements and sifting to remove unwanted signals. Alice then implements a key map that maps her local data and the information exchanged in the blockwise processing, to her raw key.
* Error correction and verification: Alice and Bob implement error correction by exchanging classical information. Cascade can be used in this step. Alice and Bob then compare a randomly chosen hash of their raw keys for error verification, and abort the protocol if the hashes do not match.
* Privacy Amplification : Alice and Bob choose a common two-universal hash function and apply it to their raw keys to generate their final secret key.
If Ã, B̃ denote announcements made by Alice and Bob in the blockwise processing step, and Z denotes the result of the key map implemented by Alice, and E is Eve's quantum system, then the key rate is given by <cit.>
R = min_ρ∈ S(γ⃗) S(Z| E ÃB̃ ) - p_pass×δ_leak,
where the minimization is over all states ρ belonging to S(γ⃗) = {ρ∈ H_+ | Tr(Γ_k ρ)= γ_k } and H_+ denotes positive semidefinite operators, p_pass denotes the probability of the signal to pass sifting, and δ_leak is the number of bits used during error correction, per bit of raw key.
§.§ Cascade
In this subsection we briefly describe the error-correction protocol Cascade <cit.>. Cascade is a simple and efficient error-correction protocol, and its principal limitation is the requirement for highly interactive communication, as compared to approaches such as LDPC codes (which instead suffer from a high computational cost in iterative decoding) <cit.>. To understand Cascade, we first look at a subprotocol called BINARY, which corrects a single error in bit strings that contain an odd number of errors.
* BINARY: If bit strings X and Y have odd number of errors, then Alice divides her string into halves and sends the parity of the first half to Bob. Bob divides his string the same way, and announces whether the parity of the first half is wrong, or the parity of the second half is wrong. Alice and Bob repeat the operation on the half whose parity was wrong.
* The process terminates when Alice reveals the single bit which contains an error, and Bob corrects that error.
* This process involves sending ≈log(k) bits from Alice to Bob, and ≈log(k) bits from Bob to Alice, where k is the length of the strings X and Y. The process corrects one error. (A minimal sketch of BINARY is given after this list.)
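As referenced above, BINARY is a parity-guided bisection. The sketch below is illustrative Python only: the two-way exchange is collapsed into one function, Bob's which-half reply is modelled as his parity of the same half (which carries the same one bit of information), and the example strings are arbitrary.

def binary(x, y):
    # Locate and correct one error, assuming x and y differ in an odd number of positions.
    # Each halving step costs one parity bit from Alice and one from Bob; in the final step
    # Alice reveals the erroneous bit and Bob simply corrects it without replying.
    lo, hi = 0, len(x)
    alice_bits = bob_bits = 0
    while hi - lo > 1:
        mid = (lo + hi + 1) // 2
        alice_parity = sum(x[lo:mid]) % 2   # Alice announces the parity of the first half
        bob_parity = sum(y[lo:mid]) % 2     # Bob announces his parity of the same half
        alice_bits += 1
        bob_bits += 1
        if alice_parity != bob_parity:
            hi = mid                        # the first half contains an odd number of errors
        else:
            lo = mid                        # otherwise recurse on the second half
    alice_bits += 1                         # Alice reveals the single erroneous bit
    y = y[:lo] + [1 - y[lo]] + y[lo + 1:]   # Bob corrects it
    return y, alice_bits, bob_bits

x = [0, 1, 1, 0, 1, 0, 0, 1]
y = [0, 1, 1, 0, 0, 0, 0, 1]                # one error, at position 4
y_new, a_bits, b_bits = binary(x, y)
print(y_new == x, a_bits, b_bits)           # True 4 3 (Bob sends one bit fewer than Alice)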
Cascade: The Cascade protocol consists of several passes and proceeds as follows.
* Alice and Bob divide their bit strings X_1...X_N and Y_1...Y_N, where N is the total number of sifted bits, into blocks of size k_1. In pass 1, Alice and Bob reveal the parity of each block to determine the blocks with an odd number of errors. For each block with an odd number of errors, Alice and Bob run BINARY to correct one error. At the end of pass 1, all blocks have an even number of errors.
* In any pass i ≥ 1, Alice and Bob choose a blocksize k_i and random function f_i : [1...N] → [1 ... N/k_i], which assigns each bit to a block in round i. The bits whose position belongs to K^i_j = { l | f_i (l)=j} form the jth block in the ith round.
* Alice sends the parity of each block P_(A,i,j) = ⊕_l ∈ K^i_j X_l to Bob, who computes his parity for the same block and announces it. For each block where P_(A,i,j)≠ P_(B,i,j), Alice and Bob perform the following operations.
* Alice and Bob perform BINARY on the block defined by K^i_j and correct one error, say at position l. Now, all blocks in previous rounds which contained l have an odd number of errors. In this way, a single error corrected in each block in later rounds leads to the identification of several error-containing blocks in earlier rounds. Let the set of such blocks be 𝒦.
* Alice and Bob choose the smallest block from 𝒦 and run BINARY to correct one error. They again compute the set 𝒦 of blocks containing an odd number of errors. This process is repeated until 𝒦 contains no blocks.
* At the end of pass i, all blocks generated in all rounds contain an even number of errors. Alice and Bob then move to the next pass.
The main ingredient of Cascade that we will use is the fact that for every parity bit Alice sends to Bob, Bob sends the corresponding parity bit to Alice (except for the last bit of the BINARY protocol, where Bob simply corrects his bit). There are several variants of the Cascade protocol, which vary in the manner in which blocks are created, the blocksizes used, and the number of passes. Such variations do not change the fact that Bob announces the same set of parities as Alice, and thus our claims hold for all such variants.
We note that the details of the blocks generated in a given pass have to be communicated between Alice and Bob. However, the blocks are generated randomly and independent of the QKD protocol. Therefore, the act of communicating the details of these blocks does not provide any additional information to Eve about the key <cit.>.
§ USING CASCADE IN QKD PROTOCOLS
§.§ The Problem
In the original proposal for Cascade <cit.>, an analytical upper bound δ^A_leak on the number of bits sent from Alice to Bob per bit of raw key is obtained. In an actual experiment, an upper bound δ^A_leak can also be chosen empirically by running multiple iterations of Cascade for the expected error rate.
For the purposes of this work, it does not matter how δ^A_leak is obtained. For convenience, we will denote the upper bound as δ^A_leak = f h(e), where e is the error-rate in the raw key, h is the binary entropy function, and f is a number that denotes the efficiency. Typical values of f for Cascade are between 1 and 1.5, and can be found in <cit.>.
The original Cascade paper <cit.> only provides an upper bound on the number of bits sent from Alice to Bob, i.e., δ^A_leak, and defines the `efficiency' of Cascade as the ratio of the actual number of bits per signal sent from Alice to Bob to h(e), where e is the error rate and h is the binary entropy function.
Therefore, it has been erroneously assumed that δ^A_leak is the true value of δ_leak in Eq. (<ref>) when Cascade is used in QKD. It is assumed incorrectly that
R_incorrect = min_ρ∈ S(γ⃗) S(Z| E ÃB̃ ) - p_pass×δ^A_leak
is the expression for the key rate. However, all classical communication must be assumed to be known to Eve, and the above equation does not account for the communication from Bob to Alice during Cascade. In fact, since Bob's data is correlated with that of Alice, it is entirely possible for Bob's communication to leak additional information about Alice's raw key to Eve.
§.§ A Naive Approach
One naive approach to fix Eq. (<ref>) is to include δ^B_leak, an upper bound on the number of bits leaked during the Bob to Alice communication, in δ_leak in Eq. (<ref>). Therefore, a naive, but correct expression for key rate would be
R_naive = min_ρ∈ S(γ⃗) S(Z| E ÃB̃ ) - p_pass× (δ^A_leak + δ^B_leak).
Here δ^A_leak can be replaced with f h(e). Cascade requires Bob to send a bit to Alice for every bit Alice sends to Bob, unless the bit corresponds to the last step of the BINARY protocol. The last step of the BINARY protocol corrects a single bit, and every bit corrected is the last step of some BINARY protocol. Therefore δ^B_leak=δ^A_leak - e, since every BINARY protocol in the last step corrects exactly one error.
Therefore, (δ^A_leak + δ^B_leak) = 2 f h(e) - e, which almost doubles the cost of error correction. Using this value in δ_leak in Eq. (<ref>) will yield valid key rates. However, the values obtained will be far worse than the ones obtained for any one-way error correction protocol, therefore making Cascade typically unsuitable for QKD.
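As a rough numerical illustration (the efficiency f and error rate e below are arbitrary placeholder values), the naive accounting nearly doubles the error-correction term:

from math import log2

def h(e):
    # Binary entropy function.
    return 0.0 if e in (0.0, 1.0) else -e * log2(e) - (1 - e) * log2(1 - e)

f, e = 1.16, 0.05
delta_A = f * h(e)                  # Alice -> Bob leakage per raw-key bit, f*h(e)
delta_B = delta_A - e               # Bob -> Alice: one fewer bit per corrected error
print(delta_A, delta_A + delta_B)   # roughly 0.33 vs 0.61 bits per raw-key bit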
§.§ Our Solution
We show that one can do better than Eq. (<ref>). Recalling Remark <ref>, we note that the communication from Bob to Alice can be computed from two pieces of information: (1) the communication from Alice to Bob, and (2) the knowledge of the location of errors W_i=X_i⊕ Y_i for each bit of Alice and Bob's data. This is because for any parity bit P_(A,i,j) sent by Alice during Cascade, Bob sends a bit P_(B,i,j) that is the parity of the same set of bits of his data. Therefore P_(A,i,j)⊕ P_(B,i,j) = ∑_l ∈ K^i_j X_l ⊕ Y_l = ∑_l ∈ K^i_j W_l, where we recall that K^i_j is the set of positions of the bits that made up the jth block in the ith pass of Cascade.
This property implies that a modified protocol that leaks all communication from Alice to Bob, and additionally leaks W, can only leak more (or equal) information to Eve than Cascade. Thus, to lower bound the key rate for a QKD protocol using Cascade, we can lower bound the key rate for the QKD protocol that announces W, and only involves Alice to Bob part of the communication from Cascade. Therefore, we can compute
R = min_ρ∈ S(γ⃗) S(Z| E W ÃB̃ ) - p_pass×δ^A_leak
as a valid key rate for any QKD protocol using Cascade.
We note that Eq. (<ref>) will always produce a key rate that is greater than or equal to the one produced from Eq. (<ref>), since S(Z|EÃB̃ ) ≥ S(Z|EWÃB̃ ) from subadditivity. We show that both the equality and the strict inequality can occur, therefore proving that using Eq. (<ref>) can produce key rates that are not justified. In some cases, we obtain equality, which indicates that announcing W gives Eve no new information. In such cases, although the valid key rate does not change, the argument that properly addresses the communication from Bob to Alice is lacking in the literature and is provided by this work.
§.§ Computing Key rates
We are interested in the difference between the incorrect key rate from Eq. (<ref>) and the key rate from our proposed approach from Eq. (<ref>). Therefore we compute two quantities, F =min_ρ∈ S(γ⃗) S(Z| E ÃB̃ ) and F^' = min_ρ∈ S(γ⃗) S(Z| E W ÃB̃ ).
We use the numerical framework from <cit.> to perform these optimizations. This framework equivalently describes the steps in the QKD protocol via Kraus operators {K_i}, which represent measurements, announcements and sifting done by Alice and Bob, and {Z_j} which implement a pinching channel on the key register. Our solution, which requires the computation of F^' instead of F can be easily implemented by a suitable change in the Kraus operators for the optimization problem for F.
The numerical framework equivalently describes the optimization problem for F as
F = min_ρ∈ S(γ⃗) f(ρ),
where
f(ρ) = D (𝒢 (ρ) || 𝒵 (𝒢(ρ) ) ),
𝒢(ρ) = ∑_i K_i ρ K_i^†,
𝒵 (𝒢(ρ) ) = ∑_j Z_j 𝒢 (ρ) Z_j^†,
and where D(X || Y) = Tr (X log (X) ) - Tr (X log (Y)) is the quantum relative entropy with log as the matrix logarithm.
Let α, β be announcements (such as basis choice) made by Alice and Bob, and let Alice and Bob's POVMs be given by P^A = {P^A_(α,x)}, and P^B = {P^B_(β,y)}, where x,y denote bits that represent measurement outcomes. Let 𝐀 be the set of announcements (α,β) that are kept after sifting. Furthermore, let r(α,β,x) be the keymap Alice implements. Then, the Kraus operators for Eq. (<ref>) are given by
K_α,β =∑_x,y|r(α,β,x) ⟩_Z ⊗√(P^A_(α,x)⊗ P^B_(β,y))
⊗|x⟩_X⊗|y⟩_Y⊗|α,β⟩_ÃB̃,
and the set of operators generating the 𝒢 map is given by {K_i} = { K_α,β | (α,β) ∈𝐀} <cit.>.
The 𝒵 map is implemented by operators { Z_i} given by Z_i = |i⟩⟨i|_Z ⊗ I_ABX Y ÃB̃. Notice that the output state 𝒢(ρ) is classical in α,β, which reflects the fact that the basis choices are announced and known to Eve.
To compute F^', we must include an additional announcement that announces w=x ⊕ y. This is implemented by
K^'_α,β, w =∑_x,y : x⊕ y = w|r(α,β,x) ⟩_Z ⊗√(P^A_(α,x)⊗ P^B_(β,y))⊗|x⟩_X ⊗|y⟩_Y ⊗|α,β⟩_ÃB̃⊗|w⟩_W,
where the set of operators generating the new 𝒢^' map can be given by {K^'_i} = { K^'_α,β, w | (α,β) ∈𝐀}. The 𝒵^' map is implemented by { Z^'_i} given by Z^'_i = |i⟩⟨i|_Z ⊗ I_ABX Y ÃB̃W.
In the remainder of this paper, we compute both F= min_ρ∈ S(γ⃗) f(ρ) and F^' = min_ρ∈ S(γ⃗) f^'(ρ) for the various implementations of the BB84 protocol. If we find that F=F^', then this indicates that the previous analysis of Cascade is wrong but gives correct answers. In this case, Eqs. (<ref>) and (<ref>) will give identical key rates.
If we find that F > F^', then this indicates that the previous analysis was wrong and gave incorrect answers. The difference between the key rates obtained from Eqs. (<ref>) and (<ref>) is equal to F-F^'.
Note that the above formulation applies to situations where Alice and Bob generate a bit string from their measurements (x,y,x ⊕ y are bits). Events such as no-detection either need to be discarded during sifting, or should be mapped to bits. This assumption is necessary to use Cascade, since it is a protocol that corrects errors in bit strings. We also remark that since many finite-size key rate analyses involve the optimization of the same objective function (F) over different constraints, our solution can be easily applied to such finite-size analyses as well, by simply changing F to F^'.
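For reference, a minimal numpy sketch of how the objective function can be evaluated for a fixed ρ is given below (an illustration only; the constrained minimization over S(γ⃗) requires the full numerical framework of <cit.>). The same function computes F^' when the primed Kraus and pinching operators are supplied.

import numpy as np
from scipy.linalg import logm

def apply_map(kraus, rho):
    # CP map rho -> sum_i K_i rho K_i^dagger.
    return sum(K @ rho @ K.conj().T for K in kraus)

def rel_entropy(X, Y, eps=1e-12):
    # D(X||Y) = Tr[X log X] - Tr[X log Y]; a small regularizer avoids log(0).
    Xr, Yr = X + eps * np.eye(len(X)), Y + eps * np.eye(len(Y))
    return float(np.real(np.trace(X @ logm(Xr)) - np.trace(X @ logm(Yr))))

def objective(rho, kraus, pinching):
    # f(rho) = D( G(rho) || Z(G(rho)) ), in nats (divide by ln 2 for bits).
    g = apply_map(kraus, rho)
    return rel_entropy(g, apply_map(pinching, g))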
§ QUBIT BB84
§.§ Protocol Description
In this section, we present our results for the standard qubit-based BB84 protocol, where Alice prepares each of the four signal states {|0⟩, |1⟩, |+⟩, |-⟩} with equal probability, and Bob chooses the Z or X basis with equal probability.
Alice and Bob then implement an acceptance test on their observed statistics.
If the protocol accepts, Alice and Bob announce their bases, and throw away signals where they measured in different bases. Alice maps her measurement outcome to the raw key, and then proceeds to perform error correction (Cascade) and privacy amplification. For the descriptions of the exact Kraus operators of the protocol, we refer the reader to Appendix <ref>.
Alice and Bob obtain statistics shown in Table <ref> during the acceptance test. There are a variety of ways they can use these statistics to perform the acceptance test. We use the phrase “fine-grained constraints” to refer to the case where all the entries in Table <ref> are used for the acceptance test, and therefore in the constraints for S(γ⃗). We use “sifted fine-grained” to refer to the case where only the entries marked in red are used. We use “coarse-grained" constraints to refer to the case where only the (unnormalized) QBER and Gain constraints for each basis (given by Q_Z = γ_HV+ γ_VH, Q_X = γ_+- + γ_-+, gain_Z = γ_HH+ γ_HV+ γ_VH+ γ_VV, and gain_X = γ_+++ γ_+-+ γ_-++ γ_–) are used. The gain sets the constraints on the probability of choosing each basis for measurement, while the QBER in each basis sets the constraints on the observed error-rate. We note that this is a departure from the nomenclature of <cit.>, where the “coarse-grained” case refers to the “sifted fine-grained case” as defined above. Additionally, we use the constraints from source-replacement that characterize Alice's system for prepare and measure protocols.
We consider a channel with misalignment and depolarization to compute statistics in Table <ref>, which is described in Appendix <ref>. We also consider a channel model that includes a “replacement” channel Φ_replace, that replaces the state leaving Alice's lab with the fixed signal state |0⟩ (|H⟩) with probability λ=0.2. The output of the replacement channel is then sent through a channel with misalignment and depolarization. The replacement channel is interesting because it breaks symmetries in the observed statistics.
If the replacement channel is also included in the channel model, then the new statistics can be obtained by replacing each row γ⃗_i of Table <ref> by (1-λ)γ⃗_i + λγ⃗_H (since Alice sends each state with equal probability).
§.§ Reduction to Bell-diagonal states
In certain cases, the optimization of f(ρ) over all states ρ in S(γ⃗) can be reduced to that over all Bell-diagonal states in S(γ⃗), denoted by S_bell (γ⃗). That is, it can be shown that
min_ρ∈ S_bell(γ⃗) f(ρ) = min_ρ∈ S (γ⃗) f(ρ).
For Bell-diagonal states shared between Alice and Bob, Eve's state E is always block-diagonal in the parities W (see Appendix <ref>), and therefore the additional announcement W gives Eve no new information. In such cases, F=F^'.
There are several such arguments in the literature, which are identical at their core, but differ only in details of the protocols (such as type of constraints, number of bases used for key generation, and type of classical processing). Ref. <cit.> proves Eq. (<ref>) for the case where f(ρ) is the total key rate including the information leakage term, and the states are constrained in S only by the average QBER over all bases. The analysis is done for d-dimensional systems in general. Ref. <cit.> proves Eq. (<ref>) for BB84 and six-state protocols, where f(ρ) is the uncertainty of Eve about the raw key (with a modified classical processing), and the states are constrained in S by each individual QBER, but only the Z basis is used for key generation. Ref. <cit.> generalizes this to a wider variety of classical processing in f(ρ), while still constraining S with separate QBERs, but using only the Z basis for key generation. In this work, we will attempt to present a coherent unified picture of all such arguments for the convenience of the reader. We also point out how symmetry in observed values can help in proving the reduction to Bell-diagonal states.
We start by defining the “twirling map" <cit.> as
𝒯(ρ)= 1/4∑_i=1^4 ρ_i = 1/4∑_i=1^4 (σ_i ⊗σ_i) ρ (σ_i ⊗σ_i)^†.
where σ_1,…,σ_4 denote the identity and the Pauli X, Y, and Z operators, respectively. 𝒯(ρ) is often referred to as the “twirled” state, and can be shown to be always Bell-diagonal <cit.>. The proof of Eq. (<ref>) now proceeds in two steps.
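This property is easy to verify numerically; the short sketch below (illustrative only) twirls a randomly generated two-qubit state and checks that the result is diagonal in the Bell basis.

import numpy as np

I2 = np.eye(2)
X = np.array([[0, 1], [1, 0]])
Y = np.array([[0, -1j], [1j, 0]])
Z = np.array([[1, 0], [0, -1]])

def twirl(rho):
    # T(rho) = (1/4) sum_i (sigma_i x sigma_i) rho (sigma_i x sigma_i)^dagger
    return sum(np.kron(s, s) @ rho @ np.kron(s, s).conj().T for s in (I2, X, Y, Z)) / 4

rng = np.random.default_rng(0)
M = rng.normal(size=(4, 4)) + 1j * rng.normal(size=(4, 4))
rho = M @ M.conj().T
rho /= np.trace(rho)                        # random two-qubit density matrix

# Columns: |phi+>, |phi->, |psi+>, |psi-> in the computational basis.
bell = np.array([[1, 0, 0, 1], [1, 0, 0, -1], [0, 1, 1, 0], [0, 1, -1, 0]]).T / np.sqrt(2)
rho_bell = bell.conj().T @ twirl(rho) @ bell
print(np.allclose(rho_bell, np.diag(np.diag(rho_bell))))   # True: Bell-diagonal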
Step 1 : It is shown that
f(𝒯(ρ)) ≤ f(ρ) ∀ρ.
Step 2 : It is shown that
ρ∈ S 𝒯(ρ) ∈ S_bell.
The proof of Eq. (<ref>) is then as follows: Clearly min_ρ∈ S_bell (γ⃗) f(ρ) ≥min_ρ∈ S(γ⃗) f(ρ), since S_bell (γ⃗) ⊆ S (γ⃗). To show the other direction of the inequality, let ρ^* be the state that achieves the minimization on the right-hand side of Eq. (<ref>). Then, from Eq. (<ref>), we obtain f(𝒯(ρ^*)) ≤ f(ρ^*). From Eq. (<ref>), we know that 𝒯(ρ^*) ∈ S_bell (γ⃗). Therefore, min_ρ∈ S_bell (γ⃗) f(ρ) ≤ f(𝒯(ρ^*)) ≤min_ρ∈ S (γ⃗) f(ρ).
Thus, to obtain Eq. (<ref>) for a protocol of interest, one must show the validity of Eqs. (<ref>) and (<ref>). We prove that Eq. (<ref>) holds for qubit protocols where key generation is done in all the X, Z (and if applicable Y) basis in Appendix <ref>. Thus, to reduce the optimization to Bell-diagonal states and obtain F=F^', we now only need to check the validity of Eq. (<ref>). This has to be considered separately for every choice of constraints, and observed values, and is done in the next section.
§.§ Results
We numerically check the difference between F and F^'. The results are summarized in Table <ref>. Since the numerical method is capable of producing both an upper bound and lower bound for F and F^', it is straightforward to determine when F>F^'. We claim F=F^' when the bounds for F and F^' overlap. In some cases, F=F^' can be analytically argued, by proving the validity of Eq. (<ref>) (see Sec. <ref>), as we do below.
To check the validity of Eq. (<ref>), we need to look at the constraints that define S(γ⃗). There are two types of constraints. First, we have the constraint obtained from the source-replacement scheme, which is of the form Tr_B(ρ) = σ_A and represents the fact that in prepare and measure protocols, Alice's state is known and never leaves her lab. For the qubit BB84 protocol, one can take σ_A = I_A /2 (see Appendix <ref>). Since Tr_B(ρ_AB) = I_A / 2 is always true for a Bell-diagonal state, this constraint is always satisfied by 𝒯(ρ).
The remaining constraints are obtained from the acceptance test, and are of the form Tr(Γ_k ρ ) = γ_k. Thus, to check the validity of Eq. (<ref>), we check whether Tr(Γ_k 𝒯(ρ)) = Tr(𝒯^†(Γ_k) ρ ) =γ_k ∀ρ.
* Coarse-grained statistics : In this case, Γ_k is the POVM element corresponding to QBER and gain in each basis. It is therefore easy to check with a simple calculation that 𝒯^† (Γ_k) = Γ_k which implies Tr(Γ_k 𝒯(ρ)) = Tr(Γ_k ρ). Thus, for coarse-grained constraints the state shared between Alice and Bob can be assumed to be Bell-diagonal, and F=F^'. This explains the coarse-grained row in Table <ref>.
* Sifted Fine-grained statistics: Let us turn to the case of “sifted fine-grained” constraints. For the Z basis, let the POVMs that make up the constraints be given by Γ_HH,Γ_HV, Γ_VH, Γ_VV (with similar expressions for the X basis). In general, these POVMs are not invariant under 𝒯^†, and thus Tr(Γ_k 𝒯(ρ)) ≠Tr(Γ_k ρ). However, one can find that
(σ_i ⊗σ_i)^†{Γ_HH, Γ_VV} (σ_i ⊗σ_i) ∈{Γ_HH, Γ_VV}
(σ_i ⊗σ_i)^†{Γ_HV, Γ_VH} (σ_i ⊗σ_i) ∈{Γ_VH, Γ_HV}
(σ_i ⊗σ_i)^†{Γ_++, Γ_–} (σ_i ⊗σ_i) ∈{Γ_++, Γ_–}
(σ_i ⊗σ_i)^†{Γ_+-, Γ_-+} (σ_i ⊗σ_i) ∈{Γ_+-, Γ_-+}
𝒯^† ( Γ_HH) = 𝒯^†( Γ_VV ) = 1/2 (Γ_HH+Γ_VV),
𝒯^†( Γ_HV) = 𝒯^† ( Γ_VH ) = 1/2 ( Γ_VH+ Γ_HV),
𝒯^† ( Γ_++ ) = 𝒯^†( Γ_– ) = 1/2( Γ_++ + Γ_– ),
𝒯^† ( Γ_+- ) = 𝒯^†( Γ_-+ ) = 1/2( Γ_+-+ Γ_-+ ).
Therefore, in this case, one can claim a reduction to Bell-diagonal as long as the statistics obey certain symmetries. That is, if one obtains statistics satisfying γ_HH=γ_VV, γ_HV=γ_VH, γ_++=γ_–, γ_+-=γ_-+, then
Tr(Γ_k ρ) = γ_k Tr(Γ_k 𝒯(ρ) ) = γ_k,
even when 𝒯^† (Γ_k) ≠Γ_k.
The statistics obey this symmetry when the channel consists of any combination of loss and misalignment, and therefore for these channel models we again obtain F=F^' due to the reduction to Bell-diagonal states. Introducing the additional replacement channel Φ_replace destroys this symmetry, and we obtain F≠ F^'. This explains the sifted fine-grained row in Table <ref>.
* Fine-grained statistics : In this case, in addition to Eq. (<ref>), it is possible to show that each POVM in the off-diagonal block of Table <ref> is mapped to the same off-diagonal block by 𝒯^†. That is,
𝒯^†( Γ_H+) = 𝒯^†(Γ_H-) = 𝒯^†(Γ_V+) = 𝒯^†(Γ_V-)
= 1/4 (Γ_H++Γ_H-+Γ_V++Γ_V-)
𝒯^†( Γ_+H) = 𝒯^†(Γ_+V) = 𝒯^†(Γ_-H) = 𝒯^†(Γ_-V)
= 1/4 (Γ_+H+Γ_+V+Γ_-H+Γ_-V)
That is, if one obtains statistics satisfying γ_HH=γ_VV, γ_HV=γ_VH, γ_++=γ_–, γ_+-=γ_-+ along with γ_H+ = γ_H- = γ_V+ = γ_V-, and γ_+H = γ_+V = γ_-H = γ_-V, then we again can claim that
Tr(Γ_k ρ) = γ_k Tr(Γ_k 𝒯(ρ) ) = γ_k,
even when 𝒯^† (Γ_k) ≠Γ_k.
This is the case when the channel only contains depolarization.
For misalignment only, it has already been shown that fine-grained constraints allow us to show that Eve factors off and holds a state that is independent of the Alice-Bob state <cit.>. Since Eve's quantum system factors off, F=F^' follows from the fact that W and Z are independent random variables for each basis, i.e., S(Z | ÃB̃ W) = S(Z | ÃB̃ ). For the remaining two cases, we find that the optimal values of the two objective functions are unequal, and no reduction to the Bell-diagonal case is possible. This explains the fine-grained row of Table <ref>.
We plot F, F^' corresponding to the last two columns of Table <ref> in Figs. <ref>, <ref>.
§ WCP DECOY STATE BB84
In this section, we present results for the WCP decoy-state BB84 protocol <cit.> in the same manner. The key rate calculations, protocol description, channel simulation, and decoy analysis are identical to those from <cit.>. Therefore, these aspects are only briefly described in this work. The only difference lies in the modification of the Kraus operators according to Eqs. (<ref>), and the inclusion of the replacement channel Φ_replace in the channel simulation.
§.§ Protocol Specification
Alice prepares and sends a phase-randomized weak coherent pulse (WCP) in one of the polarization modes H,V,A,D with equal probability, choosing the “signal intensity” with probability close to one, and some “decoy intensities”. Bob implements a passive basis choice with equal probability. We use a squashing model on Bob's side <cit.> to describe his measurements, and Bob's squashed POVMs can be found in Appendix <ref>. Alice and Bob announce a small fraction of their data, and perform the acceptance test. If the protocol accepts, Alice and Bob announce their bases, and throw away the signals where they measured in different bases, or where Bob got a no-detection event. Alice then maps her measurement outcomes to the raw key, followed by error correction (Cascade) and privacy amplification. For the descriptions of the exact Kraus operators of the protocol, we refer the reader to Appendix <ref>.
The fine-grained statistics obtained by Alice and Bob are given by the Table <ref>. Again, as in Sec. <ref>, we use the phrase “fine-grained constraints” to refer to the case where all the entries in Table <ref> are used for the acceptance test, “sifted fine-grained” when only the entries marked in red are used, and “coarse-grained" constraints when only the (unnormalized) QBER and Gain constraints for each basis are used. Additionally, we use the constraints from source-replacement that characterize Alice's system for prepare and measure protocols.
The exact manner in which statistics from a channel consisting of misalignment and loss are computed is identical to the procedure described in Ref. <cit.>. We will not repeat those calculations here. We include an additional “replacement channel” Φ_replace which replaces the state leaving Alice's lab with the signal state corresponding to H with probability λ = 0.2. This is interesting since it breaks symmetries in observed statistics. If we include the replacement channel, then each row γ⃗^μ_i_j of Table <ref> (computed for loss and misalignment), is replaced by (1-λ)γ⃗^μ_i_j + λγ⃗^μ_i_H (since Alice sends each state with equal probability).
§.§ Results
The optimization problem for decoy protocols is solved by obtaining bounds on the single photon yields as
γ^1,L_y | x≤γ^1_y|x≤γ^1,U_y|x, ∀ x,y
where γ^1_y|x denotes the probability of Bob obtaining outcome y, given Alice sent signal x and 1 photon. One can then compute lower and upper bounds on γ^1_x,y, by using γ^1_x,y = (x) γ^1_y|x, where (x) denotes the probability of Alice sending signal x.
The optimization problem <cit.> is then given by (see Appendix <ref>)
F = min_ρ∈ S_1^' f(ρ),
S_1^' = {ρ∈ H_+ | γ^1,L_k ≤Tr(Γ_k ρ) ≤γ^1,U_k, ∀ k}
where H_+ denotes positive semidefinite operators, and S^'_1 is the set of density operators compatible with observed statistics, and k depends on the exact nature of coarse-graining.
We numerically compute the difference between F= min_ρ∈ S_1^' f(ρ) and F^' = min_ρ∈ S_1^' f^'(ρ) for all our channel models, and various types of constraints. The results are summarized in Table <ref>.
Since after squashing, the single photon contribution to the objective function involves both Alice and Bob having qubits (or vacuum), our intuition from the qubit BB84 picture can be used to understand the results in Table <ref>. We believe a more rigorous justification can be made along the same lines as for the qubit case, however that is not a contribution of this work. For coarse-grained constraints, we expect symmetry arguments to allow us to restrict to Bell-diagonal states, in which case announcing W provides no new information to Eve. For the “sifted fine-grained” and “fine-grained” case, we expect symmetry arguments to not work in general, but to allow a restriction to Bell-diagonal states if observations are also symmetric, as seen in Sec. <ref>. We plot F, F^' corresponding to the last two columns of the Table <ref> in Figs. <ref> and <ref>.
Effect of zero-photon contribution: The above analysis is done for the case where we only keep the single-photon contribution to the key in Eq. (<ref>) (Eq. (<ref>)). Let us consider the zero-photon contribution to the key. In this case, note that since no signal left Alice's lab, Eve cannot know anything about Alice's key bit. Therefore, the zero-photon contribution to F is equal to p^(0)_pass, where p^(0)_pass is the probability of zero-photon event leading to detection and passing sifting.
Moreover, if Alice sends zero photons, the state giving rise to Bob's detection must be assumed to be known to Eve. Therefore Eve has perfect knowledge of Bob's data. In this case, if W is announced, Eve has perfect knowledge of Alice's data as well. Therefore, the zero-photon contribution to F^' is always zero.
Therefore, in this case F > F^' always, regardless of the type of constraints used.
§ CONCLUSION
In this work, we pointed out a critical flaw in the analysis of QKD protocols using Cascade, that stems from an improper consideration of the classical communication during Cascade. This leads to the computation of secret key rates that are not justified. We proposed a simple and elegant fix, involving the construction of a convenient virtual protocol that cannot leak less information to Eve than the one using Cascade. Therefore, its key rate can be safely used in any protocol using Cascade. Our approach is easy and straightforward to implement in the numerical framework of <cit.>. We applied our solution to various implementations of the BB84 protocol, and compared our results with those of earlier, incorrect approaches. In many cases, we found that the numerical value of the key rate does not change, indicating that the communication from Bob to Alice does not leak additional information to Eve. A number of such cases were shown to arise due to symmetries in the protocol, and in the observed statistics.
All code used in this work will be made available soon.
We would like to thank Lars Kamin and Shlok Nahar for giving useful feedback on early drafts of this work. We would like to thank John Burniston and Scott Johnstun for help with debugging code. We would like to thank Louis Salvail for helpful discussions on the Cascade protocol, and for sending us his PhD thesis. This work was funded by the NSERC Discovery Grant, and was conducted at the Institute for Quantum Computing, University of Waterloo, which is funded by Government of Canada through ISED.
§ PROTOCOL DESCRIPTIONS
§.§ Qubit BB84
Using the source-replacement scheme <cit.>, the protocol can be equivalently described as Alice creating the Bell state |ψ⟩_AA^' = |ϕ_+⟩ = (|00⟩+|11⟩)/√(2), and sending A^' to Bob. We model misalignment as a rotation by an angle θ about the Y axis on A^', with
U(θ) = I_A ⊗[ cos(θ) -sin(θ); sin(θ) cos(θ) ],
ℰ_misalign (ρ) = U(θ) ρ U(θ)^†.
Depolarization is modelled as a map
ℰ_depol (ρ)= (1-q) ρ + q Tr_A^' (ρ) ⊗ I_B/2 .
The state on which statistics are computed is given by ρ_AB = ℰ_depol ( ℰ_misalign(|ϕ_+⟩⟨ϕ_+|) ). The entries in Table <ref> can be computed via γ_i = Tr (Γ_i ρ_AB).
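A minimal numpy sketch of this channel simulation and of the computation of the Z-basis entries of Table <ref> reads as follows (the values θ=0.1 and q=0.05 are arbitrary illustrative choices):

import numpy as np

def misalign(rho, theta):
    # Rotation by theta about the Y axis on the A' (Bob) system.
    R = np.array([[np.cos(theta), -np.sin(theta)], [np.sin(theta), np.cos(theta)]])
    U = np.kron(np.eye(2), R)
    return U @ rho @ U.conj().T

def depolarize(rho, q):
    # (1-q) rho + q Tr_A'(rho) x I/2.
    rho_A = np.trace(rho.reshape(2, 2, 2, 2), axis1=1, axis2=3)
    return (1 - q) * rho + q * np.kron(rho_A, np.eye(2) / 2)

phi = np.array([1, 0, 0, 1]) / np.sqrt(2)            # |phi_+>
rho_AB = depolarize(misalign(np.outer(phi, phi), theta=0.1), q=0.05)

kets = {'H': np.array([1, 0]), 'V': np.array([0, 1])}
p_z = 0.5
for a in 'HV':
    for b in 'HV':
        Gamma = np.kron(p_z * np.outer(kets[a], kets[a]), p_z * np.outer(kets[b], kets[b]))
        print(a, b, np.real(np.trace(Gamma @ rho_AB)))   # gamma_ab = Tr(Gamma_ab rho_AB)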
Both Alice and Bob perform measurements on qubit systems, and their POVMs are given by { P_(Z,0) = p_z |0⟩⟨0|, P_(Z,1) = p_z |1⟩⟨1|, P_(X,0) = p_x |+⟩⟨+| , P_(X,1) = p_x |-⟩⟨-|}, with p_z=p_x=1/2 . In addition, Alice implements the keymap by simply copying the measurement outcome to the key register.
From the discussion in Appendix A of <cit.>, we can remove certain registers created by the generic form of the Kraus operators in Eq. (<ref>). In particular, we do not need to consider the registers that store Alice and Bob's outcome, and we only need one copy of the announcement register.
In this case, the general form for the Kraus operators from Eq. (<ref>) now becomes
K_α = ∑_x |r(α,α,x) ⟩_Z ⊗√(∑_y P^A_(α,x)⊗ P^B_(α,y))⊗|α⟩_Ã,
while Eq. (<ref>) becomes
K^'_α,w = ∑_x |r(α,α,x) ⟩_Z ⊗√(∑_y : x⊕ y = w P^A_(α,x)⊗ P^B_(α,y))⊗|α⟩_Ã⊗|w⟩_W,
where α,β denotes basis choice, and x,y denotes measurement outcomes. Alice and Bob's POVMs are given by P^A = {P^A_(α,x)}, and P^B = {P^B_(α,y)}. Since Alice and Bob throw away all signals that have basis mismatch, the set of operators generating the 𝒢 map can be given by { K_α}, and the set of operators generating 𝒢^' is given by {K^'_α,w}.
The 𝒵 map has Kraus operators { Z_i} given by Z_i = |i⟩⟨i|_Z ⊗ I_AB Ã.
Therefore, the final Kraus operators for F are given by
K_Z = [ |0⟩_Z ⊗√(p_z) |0⟩⟨0|_A + |1⟩_Z ⊗√(p_z) |1⟩⟨1|_A ] ⊗√(p_z) I_B ⊗|Z⟩_Ã,
K_X = [ |0⟩_Z ⊗√(p_x) |+⟩⟨+|_A + |1⟩_Z ⊗√(p_x) |-⟩⟨-|_A ] ⊗√(p_x) I_B ⊗|X⟩_Ã,
and
Z_1 = |0⟩⟨0|_Z ⊗𝕀_ABÃ,
Z_2 = |1⟩⟨1|_Z ⊗𝕀_ABÃ.
The operators for F^' can be constructed from Eq. (<ref>).
§.§ WCP Decoy BB84
Along with source-replacement, we use the squashing model from <cit.> to squash Bob's system to three dimensions. Since we only generate key from the single-photon pulses, Alice's POVMs are given by { P_(Z,0) = p_z |0⟩⟨0|, P_(Z,1) = p_z |1⟩⟨1|, P_(X,0) = p_x |+⟩⟨+| , P_(X,1) = p_x |-⟩⟨-|}, with p_z=p_x=1/2. Bob's POVMs are given by
P^B_(Z,0) = p_z [ 0 0 0; 0 1 0; 0 0 0 ] , P^B_(Z,1) = p_z [ 0 0 0; 0 0 0; 0 0 1 ],
P^B_(X,0) = p_x/2[ 0 0 0; 0 1 1; 0 1 1 ] , P^B_(X,1) = p_x/2[ 0 0 0; 0 1 -1; 0 -1 1 ],
P^B_ = [ 1 0 0; 0 0 0; 0 0 0 ],
with p_x=p_z= 1/2.
Here, the first column corresponds to the vacuum subspace, while the second and third columns make up the qubit subspace.
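As a quick sanity check (illustrative only), the five squashed POVM elements, including the vacuum element, sum to the identity on the three-dimensional space:

import numpy as np

p_z = p_x = 0.5
PZ0 = p_z * np.diag([0.0, 1.0, 0.0]); PZ1 = p_z * np.diag([0.0, 0.0, 1.0])
PX0 = 0.5 * p_x * np.array([[0, 0, 0], [0, 1, 1], [0, 1, 1]], dtype=float)
PX1 = 0.5 * p_x * np.array([[0, 0, 0], [0, 1, -1], [0, -1, 1]], dtype=float)
Pvac = np.diag([1.0, 0.0, 0.0])
print(np.allclose(PZ0 + PZ1 + PX0 + PX1 + Pvac, np.eye(3)))   # True: valid POVM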
Again, from the discussion in <cit.>, we can remove certain registers created by the generic form of the Kraus operators in Eq. (<ref>). After removing these registers, the form of the Kraus operators is given by Eq. (<ref>).
Therefore, the final Kraus operators for F are given by
K_Z = [ [ 1; 0 ]_Z ⊗[ 1 ; 0 ; 0 ; 0 ]_A + [ 0; 1 ]_Z ⊗[ 0 ; 1 ; 0 ; 0 ]_A ]
⊗√(p_z)[ 0 ; 1 ; 1 ]_B ⊗[ 1; 0 ]_Ã,
K_X = [ [ 1; 0 ]_Z ⊗[ 0 ; 0 ; 1 ; 0 ]_A + [ 0; 1 ]_Z ⊗[ 0 ; 0 ; 0 ; 1 ]_A ]
⊗√(p_x)[ 0 ; 1 ; 1 ]_B ⊗[ 0; 1 ]_Ã,
K_Z = [ [ 1; 0 ]_Z ⊗√(p_z)[ 1 ; 0 ]_A + [ 0; 1 ]_Z ⊗√(p_z)[ 0 ; 1 ]_A]
⊗√(p_z)[ 0 ; 1 ; 1 ]_B ⊗[ 1; 0 ]_Ã,
K_X = [ [ 1; 0 ]_Z ⊗√(p_x/2)[ 1 1; 1 1 ]_A + [ 0; 1 ]_Z ⊗√(p_x/2)[ 1 -1; -1 1 ]_A ]
⊗√(p_x)[ 0 ; 1 ; 1 ]_B ⊗[ 0; 1 ]_Ã,
and
Z_1 = [ 1 ; 0 ]⊗𝕀__A ×_B × 2,
Z_2 = [ 0 ; 1 ]⊗𝕀__A ×_B × 2.
The operators for F^' can be constructed from Eq. (<ref>),
§ BELL-DIAGONAL STATES
For Bell-diagonal states, we can show that the announcement of the location of errors W leaks no new information to Eve, by showing that Eve's state is block-diagonal in W anyway.
In the Bell-diagonal case, the state shared between Alice and Bob can be written as
ρ_AB =λ_0 |ϕ_+⟩⟨ϕ_+| + λ_1 |ϕ_-⟩⟨ϕ_-|
+ λ_2 |ψ_+⟩⟨ψ_+|+λ_3 |ψ_-⟩⟨ψ_-|,
where |ϕ_±⟩, |ψ_±⟩ are the Bell states, and the λ_i are related to the quantum bit error rates (QBERs) via Q_Z=λ_2+λ_3, Q_X=λ_1+λ_3, Q_Y=λ_1+λ_2. We can assume Eve holds a purification of the form
|ψ⟩_ABE =√(λ_0)|ϕ_+⟩|e_0⟩ + √(λ_1)|ϕ_-⟩|e_1⟩
+ √(λ_2)|ψ_+⟩|e_2⟩+√(λ_3 )|ψ_-⟩|e_3⟩,
where |e_i⟩ are orthonormal basis vectors for Eve's system.
Let us suppose Alice and Bob measure in the basis α∈{X,Z}; the (unnormalized) state after the measurement is given by
ρ^(α)_XYE=∑_x,y ∈{0,1}|x⟩⟨x|⊗|y⟩⟨y|⊗ρ^(α),x,y_E ,
where ρ^(α),x,y_E=Tr [(P^A_(α,x)⊗ P^B_(α,y)⊗ I_E ) |ψ⟩⟨ψ|_ABE].
A simple calculation shows that the support of (ρ^(α),0,0_E, ρ^(α),1,1_E) is orthogonal to the support of (ρ^(α),1,0_E, ρ^(α),0,1_E). Thus, we can conclude that Eve can be assumed to always know the value of x ⊕ y for the entire raw key, if the state shared between Alice and Bob is Bell-diagonal. In fact, the above discussion is also true when Alice and Bob measure in the Y basis, and is therefore also applicable to the six-state protocol.
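This orthogonality is also straightforward to verify numerically; the sketch below (with an arbitrary Bell-diagonal spectrum) checks that Eve's conditional states for x = y and for x ≠ y have orthogonal supports in the Z basis.

import numpy as np

bells = [np.array([1, 0, 0, 1]), np.array([1, 0, 0, -1]),
         np.array([0, 1, 1, 0]), np.array([0, 1, -1, 0])]
bells = [b / np.sqrt(2) for b in bells]
lam = [0.5, 0.2, 0.2, 0.1]                          # arbitrary Bell-diagonal spectrum

# Purification |psi>_ABE = sum_i sqrt(lam_i) |Bell_i>_AB |e_i>_E.
psi = sum(np.sqrt(l) * np.kron(b, np.eye(4)[i]) for i, (l, b) in enumerate(zip(lam, bells)))

def eve_state(x, y):
    # Eve's unnormalized conditional state after Alice and Bob obtain Z outcomes x, y.
    proj = np.kron(np.kron(np.eye(2)[x], np.eye(2)[y]), np.eye(4))
    v = proj @ psi
    return np.outer(v, v.conj())

same = eve_state(0, 0) + eve_state(1, 1)
diff = eve_state(0, 1) + eve_state(1, 0)
print(np.allclose(same @ diff, 0))                  # True: orthogonal supports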
§ TWIRLING REDUCES KEY RATE
For our protocol, f(ρ) = S(Z|E ÃB̃)_ρ. One can always expand
S(Z|E ÃB̃) = ∑_α,βProb(α,β) S(Z|E,Ã=α, B̃=β)
= ∑_αProb(α) S(Z|E,Ã=α),
where we used the fact that Z is set to a discard symbol in the case of basis mismatch, and those signals are thrown away. Now, let
𝒯(ρ) = 1/4∑_i=1^4 ρ_i = 1/4∑_i=1^4 (σ_i ⊗σ_i) ρ (σ_i ⊗σ_i)^† .
Then,
S(Z|E,Ã=α)_𝒯 (ρ) ≤1/4∑_i=1^4 S(Z|E,Ã=α)_ρ_i
= 1/4∑_i=1^4 S(Z|E,Ã=α)_ρ
= S(Z|E,Ã=α)_ρ,
where we have used the linearity of 𝒯 and the concavity of the conditional entropy in the first inequality. The second line follows from the fact that the action of the Pauli operators on ρ either leaves the measurements performed (X, Y, or Z) unchanged, or flips the outcomes, neither of which can affect the entropy.
Combining Eqs. (<ref>),(<ref>), we obtain Eq. (<ref>), which is required for the reduction to Bell-diagonal states.
§ DECOY ANALYSIS
The decoy analysis in this work is similar to that from <cit.>, with small changes in notation, and is included here for the sake of completeness. For a phase-randomized weak coherent pulse (WCP), the state is diagonal in photon number and follows the Poissonian probability distribution
p_μ_i (n) = μ_i^n/n ! e^-μ_i.
For any statistic γ_y|x, one can then write
γ^μ_i_y|x = ∑ _n p_μ_i (n) γ^n_y|x ,
where γ^μ_i_y|x denotes the probability of Bob obtaining outcome y given Alice sent signal x and intensity μ_i.
If one uses multiple intensities, then one can use the following set of equations
γ^μ_i_y|x ≤∑_n ≤ N p_μ_i(n) γ^n_y|x + (1- ∑_n ≤ N p_μ_i(n) ),
γ^μ_i_y|x ≥∑_n ≤ N p_μ_i(n) γ^n_y|x,
to obtain upper bounds and lower bound on γ^1_y|x.
γ^1,L_y | x≤γ^1_y|x≤γ^1,U_y|x, ∀ x,y
Noting that we can now compute bounds on γ^1_x,y = (x) γ^1_y|x, we obtain bounds on the all single-photon statistics for any particular coarse-graining, which we refer to as
S_1^' = {ρ∈ H_+ | γ^1,L_k ≤Tr(Γ_k ρ) ≤γ^1,U_k, ∀ k}
where γ^1_k denotes the kth statistic obtained from single-photon signals, and the range of k depends on the exact nature of the coarse-graining.
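For each statistic (y|x), the two relations above define a small linear program in the unknown yields γ^n_y|x; a possible implementation sketch using scipy is shown below (the intensities, observed values, and photon-number cutoff N are placeholders for illustration).

import numpy as np
from math import exp, factorial
from scipy.optimize import linprog

def poisson(mu, n):
    return mu ** n / factorial(n) * exp(-mu)

def single_photon_bounds(intensities, observations, N=10):
    # Bounds on gamma^1_{y|x} from decoy data for a single (x, y) statistic.
    n_var = N + 1                                  # unknowns gamma^0 ... gamma^N
    A_ub, b_ub = [], []
    for mu, obs in zip(intensities, observations):
        p = [poisson(mu, n) for n in range(n_var)]
        tail = 1.0 - sum(p)
        A_ub.append(p);               b_ub.append(obs)         # sum_n p(n) g_n <= obs
        A_ub.append([-v for v in p]); b_ub.append(tail - obs)  # sum_n p(n) g_n >= obs - tail
    c = np.zeros(n_var); c[1] = 1.0                # objective picks out gamma^1
    bnds = [(0.0, 1.0)] * n_var                    # yields are probabilities
    lo = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bnds).fun
    hi = -linprog(-c, A_ub=A_ub, b_ub=b_ub, bounds=bnds).fun
    return lo, hi

# Example with made-up numbers: three intensities and their observed statistics.
print(single_photon_bounds([0.5, 0.1, 0.001], [0.03, 0.007, 0.0005]))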
Objective function: The state shared between Alice and Bob after source-replacement can be assumed to be block-diagonal in the photon number of Alice's signal, given by
ρ_AA_SB = ∑_n p_n |n⟩⟨n|_A_S⊗ρ^(n)_AB,
where A and B are Alice and Bob's systems, and A_S is a shield system.
In such cases, the objective function can be shown to satisfy <cit.>
min_ρ∈ S f(ρ) = ∑_n p_n min_ρ^(n)_AB∈ S^'_n f(ρ^(n)_AB).
For polarization encoded phase-randomized pulses, Eve can perform a photon-number-splitting attack <cit.>. This implies that no key can be generated for n > 1 in the above equation. Therefore, we have
min_ρ∈ S f(ρ) = p_0 min_ρ^(0)_AB∈ S^'_0 f(ρ^(0)_AB) + p_1 min_ρ^(1)_AB∈ S^'_1 f(ρ^(1)_AB)
≥ p_1 min_ρ^(1)_AB∈ S^'_1 f(ρ^(1)_AB).
In this work, we will use the second expression above.
Since one does not know the exact single-photon statistics, but rather knows bounds on them due to decoy analysis, the optimization problem is then given by
F = min_ρ∈ S_1^' (G) f(ρ),
S_1^' = {ρ∈ H_+ | γ^1,L_k ≤Tr(Γ_k ρ) ≤γ^1,U_k, ∀ k}
|
http://arxiv.org/abs/2307.03000v1
|
20230706140401
|
Holographic Weyl anomaly in string theory
|
[
"Lorenz Eberhardt",
"Sridip Pal"
] |
hep-th
|
[
"hep-th"
] |
|
http://arxiv.org/abs/2307.00753v1
|
20230703045731
|
KMT-2022-BLG-0475Lb and KMT-2022-BLG-1480Lb: Microlensing ice giants detected via non-caustic-crossing channel
|
[
"Cheongho Han",
"Chung-Uk Lee",
"Ian A. Bond",
"Weicheng Zang",
"Sun-Ju Chung",
"Michael D. Albrow",
"Andrew Gould",
"Kyu-Ha Hwang",
"Youn Kil Jung",
"Yoon-Hyun Ryu",
"In-Gu Shin",
"Yossi Shvartzvald",
"Hongjing Yang",
"Jennifer C. Yee",
"Sang-Mok Cha",
"Doeon Kim",
"Dong-Jin Kim",
"Seung-Lee Kim",
"Dong-Joo Lee",
"Yongseok Lee",
"Byeong-Gon Park",
"Richard W. Pogge",
"Shude Mao",
"Wei Zhu",
"Fumio Abe",
"Richard Barry",
"David P. Bennett",
"Aparna Bhattacharya",
"Hirosame Fujii",
"Akihiko Fukui",
"Ryusei Hamada",
"Yuki Hirao",
"Stela Ishitani Silva",
"Yoshitaka Itow",
"Rintaro Kirikawa",
"Iona Kondo",
"Naoki Koshimoto",
"Yutaka Matsubara",
"Shota Miyazaki",
"Yasushi Muraki",
"Greg Olmschenk",
"Clément Ranc",
"Nicholas J. Rattenbury",
"Yuki Satoh",
"Takahiro Sumi",
"Daisuke Suzuki",
"Taiga Toda",
"Mio Tomoyoshi",
"Paul J. Tristram",
"Aikaterini Vandorou",
"Hibiki Yama",
"Kansuke Yamashita"
] |
astro-ph.EP
|
[
"astro-ph.EP",
"astro-ph.GA"
] |
KMT-2022-BLG-0475Lb and KMT-2022-BLG-1480Lb: Microlensing ice giants
Department of Physics, Chungbuk National University, Cheongju 28644, Republic of Korea,
Korea Astronomy and Space Science Institute, Daejon 34055, Republic of Korea
Institute of Natural and Mathematical Science, Massey University, Auckland 0745, New Zealand
Center for Astrophysics | Harvard & Smithsonian, 60 Garden St., Cambridge, MA 02138, USA
Department of Astronomy, Tsinghua University, Beijing 100084, China
University of Canterbury, Department of Physics and Astronomy, Private Bag 4800, Christchurch 8020, New Zealand
Max-Planck-Institute for Astronomy, Königstuhl 17, 69117 Heidelberg, Germany
Department of Astronomy, Ohio State University, 140 W. 18th Ave., Columbus, OH 43210, USA
Korea University of Science and Technology, Korea, (UST), 217 Gajeong-ro, Yuseong-gu, Daejeon, 34113, Republic of Korea
Department of Particle Physics and Astrophysics, Weizmann Institute of Science, Rehovot 76100, Israel
School of Space Research, Kyung Hee University, Yongin, Kyeonggi 17104, Republic of Korea
Institute for Space-Earth Environmental Research, Nagoya University, Nagoya 464-8601, Japan
Code 667, NASA Goddard Space Flight Center, Greenbelt, MD 20771, USA
Department of Astronomy, University of Maryland, College Park, MD 20742, USA
Komaba Institute for Science, The University of Tokyo, 3-8-1 Komaba, Meguro, Tokyo 153-8902, Japan
Instituto de Astrofísica de Canarias, Vía Láctea s/n, E-38205 La Laguna, Tenerife, Spain
Department of Earth and Space Science, Graduate School of Science, Osaka University, Toyonaka, Osaka 560-0043, Japan
Department of Physics, The Catholic University of America, Washington, DC 20064, USA
Department of Astronomy, Graduate School of Science, The University of Tokyo, 7-3-1 Hongo, Bunkyo-ku, Tokyo 113-0033, Japan
Sorbonne Université, CNRS, UMR 7095, Institut d'Astrophysique de Paris, 98 bis bd Arago, 75014 Paris, France
Department of Physics, University of Auckland, Private Bag 92019, Auckland, New Zealand
University of Canterbury Mt. John Observatory, P.O. Box 56, Lake Tekapo 8770, New Zealand
We investigate the microlensing data collected in the 2022 season from the high-cadence
microlensing surveys in order to find weak signals produced by planetary companions to lenses.
From these searches, we find that two lensing events KMT-2022-BLG-0475 and KMT-2022-BLG-1480
exhibit weak short-term anomalies. From the detailed modeling of the lensing light curves, we
identify that the anomalies are produced by planetary companions with a mass ratio to the
primary of q∼ 1.8× 10^-4 for KMT-2022-BLG-0475L and a ratio q∼ 4.3×
10^-4 for KMT-2022-BLG-1480L.
It is estimated that the host and planet masses and the projected planet-host separation are
(M_ h/M_⊙, M_ p/M_ U, a_⊥/ au) = (0.43^+0.35_-0.23,
1.73^+1.42_-0.92, 2.03^+0.25_-0.38) for KMT-2022-BLG-0475L, and
(0.18^+0.16_-0.09, 1.82^+1.60_-0.92, 1.22^+0.15_-0.14)
for KMT-2022-BLG-1480L, where M_ U denotes
the mass of Uranus. Both planetary systems share common characteristics that the primaries of
the lenses are early-mid M dwarfs lying in the Galactic bulge and the companions are ice giants
lying beyond the snow lines of the planetary systems.
KMT-2022-BLG-0475Lb and KMT-2022-BLG-1480Lb: Microlensing ice giants detected via non-caustic-crossing channel
Cheongho Han01
Chung-Uk Lee02
Ian A. Bond03
Weicheng Zang04,05
(Leading authors)
Sun-Ju Chung02, 04
Michael D. Albrow06
Andrew Gould07,08
Kyu-Ha Hwang02
Youn Kil Jung02,09
Yoon-Hyun Ryu02
In-Gu Shin04
Yossi Shvartzvald10
Hongjing Yang05
Jennifer C. Yee04
Sang-Mok Cha02,11
Doeon Kim01
Dong-Jin Kim02
Seung-Lee Kim02
Dong-Joo Lee02
Yongseok Lee02,11
Byeong-Gon Park02
Richard W. Pogge08
(The KMTNet collaboration)
Shude Mao 05
Wei Zhu 05
(Microlensing Astronomy Probe Collaboration)
Fumio Abe12
Richard Barry13
David P. Bennett13,14
Aparna Bhattacharya13,14
Hirosame Fujii12
Akihiko Fukui15,16
Ryusei Hamada17
Yuki Hirao17
Stela Ishitani Silva13,18
Yoshitaka Itow12
Rintaro Kirikawa17
Iona Kondo17
Naoki Koshimoto19
Yutaka Matsubara12
Shota Miyazaki17
Yasushi Muraki12
Greg Olmschenk13
Clément Ranc20
Nicholas J. Rattenbury21
Yuki Satoh17
Takahiro Sumi17
Daisuke Suzuki17
Taiga Toda17
Mio Tomoyoshi17
Paul J. Tristram22
Aikaterini Vandorou13,14
Hibiki Yama17
Kansuke Yamashita17
(The MOA Collaboration)
August 1, 2023
===================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================
§ INTRODUCTION
The microlensing signal of a planet usually appears as a short-term anomaly on the smooth and
symmetric lensing light curve generated by the host of the planet <cit.>.
The signal arises when a source approaches the perturbation region formed around the caustic
induced by the planet. Caustics represent the positions on the source plane at which the lensing
magnification of a point source is infinite, and thus source crossings over the caustic result
in strong signals with characteristic spike features.
The region of planetary deviations extends beyond caustics, and planetary signals can be produced
without the caustic crossing of a source. Planetary signals produced via the non-caustic-crossing
channel are weaker than those generated by caustic crossings, and the strength of the signal
diminishes as the separation of the source from the caustic increases. Furthermore, these signals
do not exhibit characteristic features such as the spikes produced by caustic crossings. Due to
the combination of these weak and featureless characteristics, the planetary signals generated via
the non-caustic channel are difficult to be noticed. If such signals are missed despite the fact
that they meet the criterion of detection, the statistical studies based on the incomplete planet
sample would lead to erroneous results on the demographics of planets. In order to prevent this,
the Korea Microlensing Telescope Network <cit.> group has regularly conducted
systematic inspection of the data collected by the survey experiments in search of weak planetary
signals and has published detected planets in a series of papers <cit.>.
In this work, we present the analyses of the two microlensing events KMT-2022-BLG-0475 and
KMT-2022-BLG-1480, for which weak short-term anomalies were found from the systematic investigation
of the data collected from the high-cadence microlensing surveys conducted in the 2022 season. We
investigate the nature of the anomalies by carrying out detailed analyses of the light curves.
The organization of the paper is as follows.
In Sect. <ref>, we describe the observations and data used in the analyses. In
Sect. <ref>, we begin by explaining the parameters used in modeling the lensing
light curves, and we then detail the analyses conducted for the individual events in the following
subsections: Sect. <ref> for KMT-2022-BLG-0475 and Sect. <ref>
for KMT-2022-BLG-1480. In Sect. <ref>, we explain the procedure of constraining the
source stars and estimating the angular Einstein radii of the events. In Sect. <ref>,
we explain the procedure of the Bayesian analyses conducted to determine the physical lens
parameters and present the estimated parameters. We summarize results and conclude in
Sect. <ref>.
§ OBSERVATIONS AND DATA
We inspected the microlensing data of the KMTNet survey collected from the observations
conducted in the 2022 season. The total number of KMTNet lensing events detected in the season
is 2803. For the individual events, we first fitted light curves with a single-lens single-source
(1L1S) model and then visually inspected residuals from the model. From this inspection, we
found that the lensing events KMT-2022-BLG-0475 and KMT-2022-BLG-1480 exhibited weak short-term
anomalies. We then cross-checked whether there were additional data from the surveys conducted
by other microlensing observation groups. We found that both events were additionally observed
by the Microlensing Observations in Astrophysics <cit.> group, who
referred to the events as MOA-2022-BLG-185 and MOA-2022-BLG-383, respectively. For
KMT-2022-BLG-1480, there were extra data acquired from the survey observations conducted by
the Microlensing Astronomy Probe (MAP) collaboration during the period from 2021 August to
2022 September, whose primary purpose was to verify short-term planetary signals found by the
KMTNet survey. In the analyses of the events, we used the combined data from the three survey
experiments.[
The Optical Gravitational Microlensing Experiment <cit.> is another major
microlensing survey, although the two events analyzed in this work were not detected by the survey
because the OGLE telescope was not operational in the first half of the 2022 season. Besides
these surveys dedicated to the microlensing program, lensing events are detected from other
surveys such as the Zwicky Transient Facility (ZTF) survey <cit.> and the Asteroid
Terrestrial-impact Last Alert System (ATLAS) survey <cit.>, or observed using
space-based instruments such as the Gaia survey <cit.> and the Hubble
Space Telescope <cit.>.
]
The observations of the events were carried out using the telescopes that are operated by the
individual survey groups. The three identical telescopes used by the KMTNet group have a 1.6 m
aperture equipped with a camera yielding 4 deg^2 field of view, and they are distributed in
the three continents of the Southern Hemisphere for the continuous coverage of lensing events.
The sites of the individual KMTNet telescopes are the Cerro Tololo Interamerican Observatory in
Chile (KMTC), the South African Astronomical Observatory in South Africa (KMTS), and the Siding
Spring Observatory in Australia (KMTA). The MOA group utilizes the 1.3 m telescope at the
Mt. John Observatory in New Zealand, and the camera mounted on the telescope has a 1.2 deg^2
field of view. The MAP collaboration uses the 3.6 m Canada-France-Hawaii Telescope (CFHT) in
Hawaii.
Observations by the KMTNet, MOA, and MAP groups were done mainly in the I, customized MOA-R,
and SDSS-i bands, respectively. A fraction of images taken by the KMTNet and MOA surveys
were acquired in the V band for the measurement of the source colors of the events. Reduction
of data and photometry of source stars were done using the pipelines of the individual survey
groups. For the data used in the analyses, we readjusted the error bars estimated from the
automated pipelines so that the error bars are consistent with the scatter of the data and χ^2
per degree of freedom becomes unity for each data set, following the method described in
<cit.>.
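A common implementation of this renormalization (sketched below with generic variable names; the error floor and the exact prescription may differ between the pipelines of the individual groups) rescales the error bars of each data set as σ → k√(σ^2+σ_min^2) so that χ^2 per degree of freedom becomes unity:

import numpy as np

def renormalize_errors(residuals, errors, err_floor=0.0):
    # residuals: data minus model; errors: pipeline error bars (same units).
    err = np.sqrt(errors ** 2 + err_floor ** 2)       # add an error floor in quadrature
    k = np.sqrt(np.mean((residuals / err) ** 2))      # rescaling factor so chi^2/dof = 1
    return k * err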
§ LIGHT CURVE ANALYSES
The analyses of the lensing events were carried out by searching for lensing solutions
specified by the sets of lensing parameters that best describe the observed light curves.
The lensing parameters vary depending on the interpretation of an event. It is known that
a short-term anomaly can be produced by two channels, in which the first is a binary-lens
single-source (2L1S) channel with a low-mass companion to the lens, and the other is a
single-lens binary-source (1L2S) channel with a faint companion to the source
<cit.>.
The basic lensing parameters used in common for the 2L1S and 1L2S models are (t_0, u_0, t_ E,
ρ). The first two parameters represent the time of the closest source approach to the
lens and the lens-source separation (impact parameter) scaled to the angular Einstein radius
θ_ E at t_0, respectively. The third parameter denotes the event time scale, which is
defined as the time for the source to transit θ_ E. The last parameter is the normalized
source radius, which is defined as the ratio of the angular source radius θ_* to θ_ E.
The normalized source radius is needed in modeling to describe the deformation of a lensing
light curve caused by finite-source effects <cit.>.
In addition to the basic parameters, the 2L1S and 1L2S models require additional parameters
for the description of the extra lens and source components. The extra parameters for the
2L1S model are (s, q, α), where the first two parameters denote the projected separation
scaled to θ_ E and the mass ratio between the lens components M_1 and M_2, and the
last parameter denotes the source trajectory angle defined as the angle between the direction
of the lens-source relative proper motion and the M_1–M_2 binary axis. The extra
parameters for the 1L2S model include (t_0,2, u_0,2, ρ_2, q_F), which refer to the
closest approach time, impact parameter, the normalized radius of the source companion S_2,
and the flux ratio between the source companion and primary (S_1), respectively. See Table 2
of <cit.> for the summary of lensing parameters that are required to be included
under various interpretations of the lens-system configuration.
For the individual events, we check whether higher-order effects improve the fits by conducting
additional modeling. The considered higher-order effects are the microlens-parallax effect
<cit.> and the lens-orbital effects <cit.>, which are caused by the
orbital motion of Earth and the lens, respectively. For the consideration of the microlens-parallax
effects, we included two extra parameters (π_ E,N, π_ E,E), which denote the north
and east components of the microlens-parallax vector
π_ E = (π_ rel/θ_ E)
(μ/μ),
respectively, where μ denotes the relative lens-source proper motion vector. Here π_ rel=π_ L-π_ S= au(1/D_ L - 1/D_ S) denotes
the relative lens-source parallax, while D_ L and D_ S denote the distances to the lens
and source, respectively. The lens-orbital effects were incorporated into modeling by
including two extra parameters (ds/dt, dα/dt), which represent the annual change
rates of the binary-lens separation and the source trajectory angle, respectively.
We searched for the solutions of the lensing parameters as follows. For the 2L1S modeling, we
found the binary-lens parameters s and q using a grid approach with multiple seed values
of α, and the other parameters were found using a downhill approach based on the MCMC
logic. For the local solutions identified from the Δχ^2 map on the s–q parameter
plane, we then refined the individual solutions by allowing all parameters to vary. We adopted
the grid approach to search for the binary parameters because it was known that the change of
the lensing magnification is discontinuous due to the formation of caustics, and this makes it
difficult to find a solution using a downhill approach with initial parameters (s, q) lying
away from the solution. In contrast, the magnification of a 1L2S event smoothly changes with
the variation of the lensing parameters, and thus we searched for the 1L2S parameters using a
downhill approach with initial values set by considering the magnitude and location of the
anomaly features. In the following subsections, we describe the detailed procedure of modeling
and present results found from the analyses of the individual events.
§.§ KMT-2022-BLG-0475
The source of the lensing event KMT-2022-BLG-0475 lies at the equatorial coordinates ( RA,
DEC)_ J2000 = (18:05:20.56, -27:02:15.61), which correspond to the Galactic coordinates
(l, b) = (3^∘.835, -2^∘.804). The KMTNet group first discovered
the event on 2022 April 19, which corresponds to the abridged Heliocentric Julian date HJD^'≡ HJD-2450000=9688, when the source was brighter than the baseline magnitude I_ base=18.78
by Δ I∼ 0.6 mag. Five days after the KMTNet discovery, the event was independently
found by the MOA group, who designated the event as MOA-2022-BLG-185. Hereafter we use the KMTNet
event notation following the convention of using the event ID reference of the first discovery
survey. The event was in the overlapping region of the two KMTNet prime fields BLG03 and BLG43,
toward which observations were conducted with a 0.5 hr cadence for each field and ∼ 0.25 hr
in combination. The MOA observations were done with a similar cadence.
Figure <ref> shows the light curve of KMT-2022-BLG-0475 constructed from the combination
of the KMTNet and MOA data. The anomaly occurred at around t_ anom=9698.4, which was ∼
0.85 day before the time of the peak. The zoom-in view of the region around the anomaly is shown in the
upper panel of Figure <ref>. The anomaly lasted for about 0.5 day, and the beginning part
was covered by the MOA data while the second half of the anomaly was covered by the KMTS data.
There is a gap between the MOA and KMTS data during 9698.30 ≤ HJD^'≤ 9698.42, and this
gap corresponds to the night time at the KMTA site, which was clouded out except for the very
beginning of the evening.
Figure <ref> shows the best-fit 2L1S and 1L2S models in the region around the anomaly. From
the 2L1S modeling, we identified a pair of 2L1S solutions resulting from the close–wide degeneracy.
In Table <ref>, we present the lensing parameters of the two 2L1S and the 1L2S solutions
together with the χ^2 values of the fits and degrees of freedom (dof). It was found that
the severity of the degeneracy between the close and wide 2L1S solutions is moderate, with the
close solution being preferred over the wide solution by Δχ^2=8.4.
For the best-fit solution, that is, the close 2L1S solution, we also list
the flux values of the source f_s and blend f_b, where the flux values are approximately
scaled by the relation I=18-2.5log f.
We find that the anomaly in the lensing light curve of KMT-2022-BLG-0475 is best explained by a
planetary 2L1S model. The planet parameters are (s, q)_ close∼ (0.94, 1.76× 10^-4)
for the close solution and (s, q)_ wide∼ (1.14, 1.77× 10^-4) for the wide solution.
The estimated planet-to-host mass ratio is an order of magnitude smaller than the ratio between
Jupiter and the Sun, q∼ 10^-3, indicating that the planet has a mass that is substantially
smaller than a typical gas giant. Although the 1L2S model approximately describes the anomaly, it
leaves residuals of 0.03 mag level in the beginning and ending parts of the anomaly, resulting in
a poorer fit than the 2L1S model by Δχ^2=27.3. It was found that the microlens-parallax
parameters could not be measured because of the short time scale of the event, t_ E∼ 16.8 days.
In the upper and lower panels of Figure <ref>, we present the lens-system configurations
of the close and wide 2L1S solutions, respectively. In each panel, the inset shows the whole view
of the lens system, and the main panel shows the enlarged view around the central caustic. A
planetary companion induces two sets of caustics, with the "central" caustic indicating the one
lying close to the primary lens, while the other caustic, lying away from the primary, is referred
to as the "planetary" caustic. The configuration shows that the anomaly was produced by
the passage of the source through the deviation region formed in front of the protruding cusp of the
central caustic. We found that finite-source effects were detected despite the fact that the source
did not cross the caustic. In order to show the deformation of the anomaly pattern by finite-source
effects, we plot the light curve and residual from the point-source model that has the same lensing
parameters as those of the finite-source model except for ρ, in Figure <ref>.
It is known that the planet separations of the pair of degenerate solutions resulting from a
close–wide degeneracy follow the relation √(s_ close× s_ wide)=1.0
<cit.>. For the close and wide solutions of KMT-2022-BLG-0475, this value is
√(s_ close× s_ wide)=1.032, which deviates from unity with a fractional
discrepancy (√(s_ close× s_ wide)-1.0)/ 1.0∼ 3.2%. We find that the
relation between the two planet separations is better described by the <cit.> relation
s^†_± = √(s_ in× s_ out) = [√(u_ anom^2 +4)± u_ anom]/2,
which was introduced to explain the relation between the planet separations s_ in and
s_ out of the two solutions that are subject to the inner–outer degeneracy <cit.>.
Here u_ anom^2=τ^2_ anom+u_0^2, τ_ anom=(t_ anom-t_0)/t_ E, t_ anom
is the time of the anomaly, and the sign in the left and right sides of Eq. (<ref>) is "+" for
a major image perturbation and "-" for a minor-image perturbation. The terms "inner" and "outer"
refer to the cases in which the source passes the inner and outer sides of the planetary caustic,
respectively. In the case of KMT-2022-BLG-0475 (and major-image perturbations in general), the
close and wide solutions correspond to the outer and inner solutions, respectively. From the
measured planet separations of s_ in=s_ wide=1.135 and s_ out=s_ close=
0.940, we found that
s^† = (s_ in× s_ out)^1/2 = 1.033.
From the lensing parameters (t_0, t_ anom, t_ E, u_0)=(9699.25, 9698.40, 16.8, 0.035), we found
that τ_ anom =(t_ anom-t_0)/t_ E =0.050, u_ anom=0.061, and
s^† =[√(u_ anom^2+4)+u_ anom]/2 = 1.038.
Then, the fractional deviation of the s^† values estimated from Eqs. (<ref>) and (<ref>)
is Δ s^†/s^†=0.5%, which is 6.4 times smaller than the 3.2% fractional discrepancy
of the √(s_ close× s_ wide)=1 relation.
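The comparison quoted above can be reproduced with a few lines of Python (a sketch using the rounded parameter values given in the text; small differences in the last digits may arise from this rounding):

import math

t0, t_anom, tE, u0 = 9699.25, 9698.40, 16.8, 0.035   # KMT-2022-BLG-0475
s_in, s_out = 1.135, 0.940

u_anom = math.hypot((t_anom - t0) / tE, u0)
s_dagger_geom = math.sqrt(s_in * s_out)                    # ~1.033
s_dagger_anom = (math.sqrt(u_anom ** 2 + 4) + u_anom) / 2  # "+" branch (major image)
print(s_dagger_geom, s_dagger_anom)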
Although the √(s_ close× s_ wide)=1.0 relation is approximately valid in the case
of KMT-2022-BLG-0475, the deviation from the relation can be substantial especially when the planetary
separation is very close to unity, and thus the relation in Eq. (<ref>) helps to identify correct
degenerate solutions.
§.§ KMT-2022-BLG-1480
The lensing event KMT-2022-BLG-1480 occurred on a source lying at ( RA, DEC)_ J2000
= (17:58:54.96, -29:28:23.99), which correspond to (l, b) = (1^∘.015, -2^∘.771).
The event was first found by the KMTNet group on 2022 July 11 ( HJD^'=9771), when the source
was brighter than the baseline magnitude, I_ base=18.11, by Δ I∼ 0.65 mag. The source
was in the KMTNet prime field BLG02, toward which observations were conducted with a 0.5 hr cadence.
This field overlaps with the BLG42 field over most of its area, but the source was in the offset region that
was not covered by the BLG42 field. The event was also observed by the MOA and MAP survey groups, who
observed the event with a 0.2 hr cadence and a 0.5–1.0 day cadence, respectively.
In Figure <ref>, we present the light curve of KMT-2022-BLG-1480. We found that a weak
anomaly occurred about 3 days after the peak, centered at t_anom ∼ 9793.2. The anomaly
is characterized by a negative deviation over most of its duration and a slight positive deviation
in the beginning part centered at ∼ 9792.2. The anomaly, which lasted about 2.7 days, was
covered by multiple data sets from KMTS, KMTA, and CFHT. The sky at the KMTC site was clouded out
for two consecutive nights from July 30 to August 1 (9791 ≤≤ 9793), and thus the
anomaly was not covered by the KMTC data.
In Table <ref>, we present the best-fit lensing parameters of the 2L1S solution. We did
not conduct 1L2S modeling because a negative deviation cannot be explained with a 1L2S interpretation.
We found a pair of 2L1S solutions, in which one solution has binary parameters (s, q)∼ (0.83,
4.9× 10^-4) and the other solution has parameters (s, q)∼ (1.03, 4.7× 10^-4).
Similar to the case of KMT-2022-BLG-0475L, the estimated mass ratio of order 10^-4 is much
smaller than the Jupiter/sun mass ratio. As we discuss below, the similarity between the model
curves of the two 2L1S solutions is caused by the inner–outer degeneracy, and thus we refer to the
solutions as "inner" and "outer" solutions, respectively. From the comparison of the inner and
outer solutions obtained under the assumption of a rectilinear relative lens-source motion, it
was found that the outer model yields a substantially better fit than the inner model
by Δχ^2=63.9, indicating that the degeneracy was resolved. In Figure <ref>, we
present the model curves of the two solutions in the region around the anomaly. From the comparison
of the models, it is found that the fit of the outer solution is better than the inner solution in
the region around ∼ 9792, at which the anomaly exhibits slight positive deviations
from the 1L1S model.
Figure <ref> shows the lens-system configurations of the inner and outer 2L1S solutions.
Although the fit is worse, we present the configuration of the inner solution in order to find
the origin of the fit difference between the two solutions. The configuration shows that the
outer solution results in a resonant caustic, in which the central and planetary caustics merge
and form a single caustic, while the central and planetary caustics are detached in the case of
the inner solution. According to the interpretations of both solutions, the source passed the
back-end side of the central caustic without caustic crossings. The configuration of the outer
solution results in strong cusps lying on the back-end side, and this caustic feature explains
the slight positive deviation appearing in the beginning part of the anomaly around ∼
9792.2. Similar to the case of KMT-2022-BLG-0475, finite-source effects were detected although
the source did not cross the caustic. We plot the point-source model in Figure <ref>
for the comparison with the finite-source model.
We find that the relation in Eq. (<ref>) is also applicable to the two local solutions of
KMT-2022-BLG-1480. With (s_ in, s_ out)∼ (0.82, 1.03), the fractional deviation
of the value √(s_ in× s_ out) from unity is (√(s_ in×
s_ out)- 1.0)/1.0∼ 8%. On the other hand, the fractional difference between s^†
=√(s_ in× s_ out)=0.919 and s^† =[(u_ anom^2+4)^1/2-u_ anom]
/2=0.934 is Δ s^† /s^†=1.6%, which is 5 times smaller than that of the
√(s_ close× s_ wide)=1 relation. This also indicates that the two local
solutions result from the inner–outer degeneracy rather than the close–wide degeneracy.
We also checked the <cit.> relation between the planet-to-host mass ratio and the
lensing parameters for the "dip-type" anomalies,
q = (Δt_dip/(4 t_E))^2 (s/|u_0|) |sin α|^3,
where Δt_dip ∼ 1.9 days is the duration of the dip in the anomaly. With the
lensing parameters (t_E, u_0, s, α) = (26, 0.069, 1.03, 59^∘), we found that the mass
ratio analytically estimated from Eq. (<ref>) is q∼ 5.9× 10^-4, which is close
to the value 4.6× 10^-4 found from the modeling.
We check whether the microlens-parallax vector π_E = (π_E,N, π_E,E) can be measured
by conducting extra modeling considering higher-order effects. We find that the inclusion of the
higher-order effects improves the fit by Δχ^2 =6.9 with respect to the model obtained
under the rectilinear lens-source motion (standard model).
In Table <ref>, we list the lensing parameters of the pair of higher-order models with
u_0 > 0 and u_0 < 0, which result from the mirror symmetry of the source trajectory
with respect to the binary-lens axis <cit.>.
The Δχ^2 maps of the models on the (π_E,E, π_E,N) parameter plane obtained from the
higher-order modeling are shown in Figure <ref>. It is found that the maps of the two
solutions result in a similar pattern of
a classical 1-dimensional parallax ellipse,
in which the east component is well
constrained and the north component has a fairly big uncertainty. <cit.>
pointed out that the constraints of the 1-dimensional parallax on the physical lens parameters
are significant, and <cit.> indicated that the parallax constraint should be incorporated
in the Bayesian analysis to estimate the physical lens parameters.
We describe the detailed procedure of imposing the parallax constraint in the second paragraph of
Sect. <ref>. While the parallax parameters (π_E,N, π_E,E) are constrained from the
overall pattern of the light curve, the orbital parameters (ds/dt, dα/dt) are constrained
from the anomaly induced by the lens companion. For KMT-2022-BLG-1480, the orbital parameters are
poorly constrained because the duration of the planet-induced anomaly is very short.
§ SOURCE STARS AND ANGULAR EINSTEIN RADII
In this section, we constrain the source stars of the events to estimate the angular Einstein
radii. Despite the non-caustic-crossing nature of the planetary signals, finite-source effects
were detected for both events, and thus the normalized source radii were measured. With the
measured value of ρ, we estimated the angular Einstein radius using the relation
θ_E = θ_*/ρ,
where the angular radius of the source was deduced from the reddening- and extinction-corrected
(de-reddened) color and magnitude.
The left and right panels of Figure <ref> show the source locations in the instrumental
color-magnitude diagrams (CMDs) of stars lying around the source stars of KMT-2022-BLG-0475 and
KMT-2022-BLG-1480, respectively.
For each event, the instrumental source color and magnitude, (V-I, I)_ S, were determined
by estimating the flux values of the source f_s and blend f_b from the linear fit to the
relation F_ obs = A(t)f_s + f_b, where the lensing magnification is obtained from the
model. We are able to constrain the blend for KMT-2022-BLG-1480 and mark its location on the
CMD, but the blend flux of KMT-2022-BLG-0475 resulted in a slightly negative value, making it
difficult to constrain the blend position.
By applying the method of <cit.>, we then estimated the de-reddened source
color and magnitude, (V-I, I)_0, S, using the centroid of the red giant clump (RGC), whose
de-reddened color and magnitude (V-I, I)_0, RGC are known <cit.>, as a reference, that is,
(V-I, I)_ 0,S = (V-I, I)_ 0,RGC + [(V-I, I)_ S - (V-I, I)_ RGC].
Here (V-I, I)_ RGC denotes the instrumental color and magnitude of the RGC centroid, and
thus the last term in the bracket represents the offset of the source from the RGC centroid in
the CMD.
In Table <ref>, we summarize the values of (V-I, I)_ S, (V-I, I)_ RGC,
(V-I, I)_ 0,RGC, and (V-I, I)_ 0,S for the individual events. From the estimated
de-reddened colors and magnitudes, it was found that the source star of KMT-2022-BLG-0475 is an
early K-type turnoff star, and that of KMT-2022-BLG-1480 is a late G-type subgiant. We estimated
the angular source radius by first converting the measured V-I color into V-K color using the
<cit.> color-color relation, and then deducing θ_* from the <cit.>
relation between (V-K, V) and θ_*. With the measured source radius, we then estimated the
angular Einstein radius using the relation in Eq. (<ref>) and the relative lens-source proper
motion using the relation μ = θ_E/t_E. The estimated θ_E and μ values of the
individual events are listed in Table <ref>.
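For readers who want to trace the chain of observables numerically, the following minimal sketch applies θ_E = θ_*/ρ and μ = θ_E/t_E; the inputs are placeholders rather than the measured values of these events (those are given in the tables referenced above).

```python
# Minimal sketch of the observable chain theta_* -> theta_E -> mu.
# The numbers below are placeholders, NOT the measured values for these events.
theta_star_uas = 0.9      # angular source radius in micro-arcsec (hypothetical)
rho = 1.5e-3              # normalized source radius from the light-curve fit (hypothetical)
tE_days = 16.8            # Einstein timescale

theta_E_mas = (theta_star_uas / 1.0e3) / rho        # theta_E = theta_* / rho, in mas
mu_mas_per_yr = theta_E_mas / (tE_days / 365.25)    # mu = theta_E / t_E
print(theta_E_mas, mu_mas_per_yr)
```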
We note that the uncertainties of the source colors and magnitudes presented in Table <ref>
are the values estimated from the model fitting, and those of θ_* and θ_E are estimated by
adding an additional 7% error to consider the uncertain de-reddened RGC color of <cit.>
and the uncertain position of the RGC centroid <cit.>.
§ PHYSICAL LENS PARAMETERS
We determined the physical parameters of the planetary systems using the lensing observables of
the individual events. For KMT-2022-BLG-0475, the measured observables are t_E and θ_E,
which are respectively related to the mass and distance to the planetary system by
t_E = θ_E/μ;  θ_E = (κ M π_rel)^1/2,
where κ = 4G/(c^2 au) = 8.14 mas/M_⊙. For KMT-2022-BLG-1480, we additionally
measured the observable π_E, with which the physical parameters can be uniquely determined by
M = θ_E/(κ π_E);  D_L = au/(π_E θ_E + π_S).
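These mass–distance relations can be evaluated directly; the sketch below does so with placeholder inputs (not the measured parameters of KMT-2022-BLG-1480), using κ = 8.14 mas/M_⊙ and the convention that parallaxes and θ_E in mas give distances in kpc.

```python
# Illustrative evaluation of M = theta_E / (kappa * pi_E) and
# D_L = au / (pi_E * theta_E + pi_S); all inputs are hypothetical.
kappa = 8.14          # mas / M_sun
theta_E = 0.35        # mas (hypothetical)
pi_E = 0.25           # microlens parallax (hypothetical)
pi_S = 0.12           # source parallax in mas, i.e. a source at ~8.3 kpc (hypothetical)

M_lens = theta_E / (kappa * pi_E)            # lens mass in M_sun
D_L_kpc = 1.0 / (pi_E * theta_E + pi_S)      # au / (parallax in mas) -> distance in kpc
print(M_lens, D_L_kpc)
```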
We estimated the physical parameters by conducting Bayesian analyses because the observable π_E
was not measured for KMT-2022-BLG-0475, and because the uncertainty of the north component of the
parallax vector, π_E,N, measured for KMT-2022-BLG-1480 was fairly big although the east component
was relatively well constrained.
The Bayesian analysis was done by first generating artificial lensing events from a Monte Carlo
simulation, in which a Galactic model was used to assign the locations of the lens and source
and their relative proper motion, and a mass-function model was used to assign the lens mass.
We adopted the <cit.> Galactic model and the <cit.> mass function. With
the assigned values of (M, D_L, D_S, μ), we computed the lensing observables (t_E,i,
θ_ E,i, π_ E,i) of each simulated event using the relations in Eqs. (<ref>)
and (<ref>).
Under the assumption that the physical parameters are independently and identically distributed,
we then constructed the Bayesian posteriors of M and D_L by imposing a weight
w_i = exp(-χ_i^2/2). Here the χ_i^2 value for each event was computed by
χ_i^2 = [(t_E,i - t_E)/σ(t_E)]^2 + [(θ_E,i - θ_E)/σ(θ_E)]^2
+ ∑_j=1^2 ∑_k=1^2 b_j,k (π_E,j,i - π_E,j)(π_E,k,i - π_E,k),
where (t_E, θ_E, π_E) represent the observed values of the lensing observables, [σ(t_E),
σ(θ_E)] denote the measurement uncertainties of t_E and θ_E, respectively, b_j,k
denotes the inverse covariance matrix of π_E, and (π_E,1, π_E,2)_i = (π_E,N, π_E,E)_i
denote the north and east components of the microlens-parallax vector of each
simulated event, respectively. We note that the last term in Eq. (<ref>) was not included for
KMT-2022-BLG-0475, for which the microlens-parallax was not measured.
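A schematic implementation of this weighting is sketched below; the "observed" values, uncertainties, and inverse covariance matrix are invented for illustration, and the parallax term is simply skipped when π_E is not measured, as for KMT-2022-BLG-0475.

```python
import numpy as np

def bayesian_weight(tE_i, thetaE_i, piE_i, tE_obs, sig_tE, thetaE_obs, sig_thetaE,
                    piE_obs=None, inv_cov=None):
    """Weight w_i = exp(-chi_i^2 / 2) for one simulated event, following the chi^2
    defined above; the parallax term is included only when piE_obs is provided."""
    chi2 = ((tE_i - tE_obs) / sig_tE) ** 2 + ((thetaE_i - thetaE_obs) / sig_thetaE) ** 2
    if piE_obs is not None and inv_cov is not None:
        d = np.asarray(piE_i) - np.asarray(piE_obs)   # (piE_N, piE_E) residual
        chi2 += d @ inv_cov @ d
    return np.exp(-0.5 * chi2)

# Hypothetical example: one simulated event compared with made-up "observed" constraints
w = bayesian_weight(tE_i=24.0, thetaE_i=0.30, piE_i=(0.05, 0.20),
                    tE_obs=26.0, sig_tE=1.0, thetaE_obs=0.33, sig_thetaE=0.03,
                    piE_obs=(0.04, 0.22), inv_cov=np.array([[50.0, 5.0], [5.0, 400.0]]))
print(w)
```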
In the case of the event KMT-2022-BLG-1480, for which the blending flux was measured, we additionally
imposed the blending constraint that was given by the fact that the lens could not be brighter than
the blend. In order to impose the blending constraint, we computed the lens magnitude as
I_L = M_I,L + 5 log(D_L/pc) - 5 + A_I,L,
where M_I, L represents the absolute I-band magnitude of a star corresponding to the lens
mass, and A_I, L represents the I-band extinction to the lens. For the computation of
A_I, L, we modeled the extinction to the lens as
A_I,L = A_I,tot [1 - exp(-|z|/h_z,dust)],
where A_I,tot = 1.53 is the total I-band extinction toward the field, h_z,dust =
100 pc is the vertical scale height of dust, z = D_L sin b + z_0, b is the Galactic latitude,
and z_0 = 15 pc is the vertical position of the Sun above the Galactic plane <cit.>. It
turned out that the blending constraint had little effect on the posteriors because the other constraints,
that is, those from (t_E, θ_E, π_E), already predicted that the planet hosts are remote faint stars
whose flux contribution to the blended flux is negligible.
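The blending constraint itself amounts to a few lines of arithmetic; the sketch below implements the lens-magnitude and extinction expressions above, with a hypothetical absolute I magnitude for a mid-M-dwarf lens (the value adopted in the actual analysis is not quoted here).

```python
import math

def lens_I_mag(M_I_lens, D_L_pc, gal_lat_deg, A_I_tot=1.53, h_z_dust_pc=100.0, z0_pc=15.0):
    """Apparent I magnitude of the lens with the simple dust model quoted above:
    A_I,L = A_I,tot * [1 - exp(-|z| / h_z,dust)], with z = D_L * sin(b) + z0."""
    z = D_L_pc * math.sin(math.radians(gal_lat_deg)) + z0_pc
    A_I_L = A_I_tot * (1.0 - math.exp(-abs(z) / h_z_dust_pc))
    return M_I_lens + 5.0 * math.log10(D_L_pc) - 5.0 + A_I_L

# Hypothetical mid-M-dwarf lens (M_I ~ 9.5) at 7.8 kpc toward b ~ -2.8 deg
print(lens_I_mag(M_I_lens=9.5, D_L_pc=7800.0, gal_lat_deg=-2.771))
```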
Figures <ref> and <ref> show the Bayesian posteriors of the lens mass and distances
to the lens and source for KMT-2022-BLG-0475 and KMT-2022-BLG-1480, respectively. In
Table <ref>, we summarize the estimated parameters of the host mass, M_ h, planet
mass, M_p, distance to the planetary system, D_L, projected separation between the planet and host,
a_⊥ = s θ_E D_L, and the snow-line distances estimated as a_snow ∼ 2.7 au (M/M_⊙)
<cit.>.
Here we estimated the representative values and uncertainties of the individual physical parameters
as the median values and the 16% and 84% range of the Bayesian posteriors, respectively.
We find that the two planetary systems KMT-2022-BLG-0475L and KMT-2022-BLG-1480L are similar to each
other in various aspects. According to the estimated physical lens parameters, the masses of
KMT-2022-BLG-0475Lb and KMT-2022-BLG-1480Lb are ∼ 1.7 and ∼ 1.8 times the mass of Uranus in
our solar system. The planets are separated in projection from their hosts by ∼ 2.0 au and ∼
1.2 au, respectively. The masses of planet hosts are ∼ 0.43 M_⊙ and ∼ 0.18 M_⊙,
which correspond to the masses of early and mid-M dwarfs, respectively. Considering that the estimated
separations are projected values and the snow-line distances of the planetary systems are a_ snow∼ 1.2 au for KMT-2022-BLG-0475L and ∼ 0.5 au for KMT-2022-BLG-1480L, the planets of both
systems are ice giants lying well beyond the snow lines of the systems. The planetary systems lie at
distances of ∼ 6.6 kpc and ∼ 7.8 kpc from the sun.
The planetary systems are likely to be in the bulge, with a probability of 70% for KMT-2022-BLG-0475L
and of 83% for KMT-2022-BLG-1480L.
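As a quick consistency check of the scalings quoted above (a_⊥ = s θ_E D_L and a_snow ∼ 2.7 au (M/M_⊙)), the snippet below reproduces the quoted snow-line distances; the θ_E value used in the a_⊥ example is a placeholder, since the measured values appear only in the referenced tables.

```python
def a_perp_au(s, theta_E_mas, D_L_kpc):
    """Projected planet-host separation, a_perp = s * theta_E * D_L (mas * kpc = au)."""
    return s * theta_E_mas * D_L_kpc

def a_snow_au(M_host_msun):
    """Snow-line distance scaling, a_snow ~ 2.7 au * (M / M_sun)."""
    return 2.7 * M_host_msun

print(a_snow_au(0.43))   # ~1.2 au, KMT-2022-BLG-0475L host
print(a_snow_au(0.18))   # ~0.5 au, KMT-2022-BLG-1480L host
print(a_perp_au(s=0.94, theta_E_mas=0.30, D_L_kpc=6.6))   # hypothetical theta_E, for illustration
```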
§ SUMMARY AND CONCLUSION
We analyzed the light curves of the microlensing events KMT-2022-BLG-0475 and KMT-2022-BLG-1480,
for which weak short-term anomalies were found from the systematic investigation of the 2022 season
data collected by high-cadence microlensing surveys. We tested various models that could produce the
observed anomalies and found that the anomalies were generated by planetary companions to the lenses
with a planet-to-host mass ratio q∼ 1.8× 10^-4 for KMT-2022-BLG-0475L and a ratio q∼
4.3× 10^-4 for KMT-2022-BLG-1480L. From the physical parameters estimated from the Bayesian
analyses using the observables of the events, it was found that the planets KMT-2022-BLG-0475Lb and
KMT-2022-BLG-1480Lb have masses ∼ 1.7 and ∼ 1.8 times the mass of Uranus in our solar system,
respectively, and they lie well beyond the snow lines of their hosts of early and mid-M dwarfs,
indicating that the planets are ice giants.
Ice giants around M dwarf stars are difficult to detect with other surveys using the transit and
radial-velocity (RV) methods, not only because of the long orbital periods of the planets but also
because of the faintness of the host stars. The number of detected low-mass planets increases with
the observational cadence of microlensing surveys, as shown in the histogram of detected
microlensing planets as a function of the planet-to-host mass ratio presented in Fig. 1 of
<cit.>. Being able to complement the transit and RV surveys, high-cadence lensing surveys
will play an important role in the construction of a more complete planet sample, and thus for better
understanding the demographics of extrasolar planets.
The two events are also similar in that they have ρ measurements (and therefore also θ_E
measurements) despite the fact that the source does not cross any caustics.
<cit.> predicted that about half of KMT planets would not have caustic crossings, and
<cit.> confirmed this for a statistical sample of 58 planetary events detected during the
2018–2019 period. However, <cit.> showed that about 1/3 of non-caustic-crossing events
nevertheless yield θ_E measurements.
Measurements of θ_E are important, not only because they improve the Bayesian estimates
(see Sect. <ref>), but also because they allow accurate prediction of when high-resolution
adaptive-optics (AO) imaging can resolve the lens separately from the source, which will then yield
mass measurements of both the host and the planet <cit.>. For KMT-2022-BLG-0475,
with proper motion μ=6.9 mas/yr, the separation in 2030 (approximate first AO light on
30 m class telescopes), will be Δθ∼ 55 mas, which should be adequate to resolve the
lens and source.
Resolving the lens of this event would also be important to confirm the planetary interpretation of the
event because it is difficult to completely rule out
the 1L2S interpretation.
By contrast, for KMT-2022-BLG-1480, with μ=1.9 mas/yr, the separation
will be only Δθ∼ 15 mas, which almost certainly means that AO observations
should be delayed for many additional years. In particular, if the Bayesian mass and distance
estimates are approximately correct, then the expected contrast ratio between the source and lens
is Δ K∼ 7 mag, which will likely require separations of at least 4 FWHM, that is,
55 mas even on the 39 m European Extremely Large Telescope. Hence, the contrast between the
two planets presented in this paper underlines the importance of θ_E measurements.
Work by C.H. was supported by the grants of National Research Foundation of Korea
(2019R1A2C2085965).
This research has made use of the KMTNet system operated by the Korea Astronomy and Space
Science Institute (KASI) at three host sites of CTIO in Chile, SAAO in South Africa, and
SSO in Australia. Data transfer from the host site to KASI was supported by the Korea Research
Environment Open NETwork (KREONET).
This research was supported by the Korea Astronomy and Space Science Institute under the R&D
program (Project No. 2023-1-832-03) supervised by the Ministry of Science and ICT.
The MOA project is supported by JSPS KAKENHI
Grant Number JSPS24253004, JSPS26247023, JSPS23340064, JSPS15H00781,
JP16H06287, and JP17H02871.
J.C.Y., I.G.S., and S.J.C. acknowledge support from NSF Grant No. AST-2108414.
Y.S. acknowledges support from BSF Grant No 2020740.
This research uses data obtained through the Telescope Access Program (TAP), which has been
funded by the TAP member institutes. W.Zang, H.Y., S.M., and W.Zhu acknowledge support by the
National Science Foundation of China (Grant No. 12133005). W.Zang acknowledges the support
from the Harvard-Smithsonian Center for Astrophysics through the CfA Fellowship.
C.R. was supported by the Research fellowship of the Alexander von Humboldt Foundation.
[Albrow et al.(2000)]Albrow2000 Albrow, M. D., Beaulieu, J.-P., Caldwell, J. A. R., et al. 2000, , 534, 894
[Bennett & Rhie(1996)]Bennett1996 Bennett, D. P., & Rhie, S. H. 1996, , 472, 660
[Bensby et al.(2013)]Bensby2013 Bensby, T. Yee, J.C., Feltzing, S. et al. 2013, , 549, A147
[Bessell & Brett(1988)]Bessell1988 Bessell, M. S., & Brett, J. M. 1988, , 100, 1134
[Bond et al.(2001)]Bond2001 Bond, I. A., Abe, F., Dodd, R. J., et al. 2001, , 327, 868
[Gaudi & Gould(1997)]Gaudi1997 Gaudi, B. S., & Gould, A. 1997, , 486, 85
[Gaudi(1998)]Gaudi1998 Gaudi, B. S., 1998, , 506, 533
[Gould(1992)]Gould1992a Gould, A. 1992, , 392, 442
[Gould & Loeb(1992)]Gould1992b Gould, A. & Loeb, A. 1992, , 396, 104
[Gould et al.(1994)]Gould1994 Gould, A., Miralda-Escudé, J., & Bahcall, J. N. 1994, , 423, L105
[Gould(2014)]Gould2014 Gould, A. 2014, Journal of the Korean Astronomy Society, 47, 153
[Gould(2022a)]Gould2022a Gould, A. 2022a, arXiv:2209.12051
[Gould et al.(2022b)]Gould2022b Gould, A., Han, C., Weicheng, Z., et al. 2022b, , 664, A13
[Griest & Safizadeh(1998)]Griest1998 Griest, K., & Safizadeh, N. 1998, , 500, 37
[Han et al.(2016)]Han2016 Han, C., Udalski, A., Gould, A., et al. 2016, , 828, 53
[Han et al.(2022a)]Han2022a Han, C., Gould, A., Albrow, M. D., et al. 2022a, , 658, A62
[Han et al.(2022b)]Han2022b Han, C. Bond, I. A., Yee, J., et al. 2022b, , 658, A94
[Han et al.(2022c)]Han2022c Han, C. Kim, D., Gould, A, et al. 2022c, , 664, A33
[Han et al.(2022d)]Han2022d Han, C. Kim, D., Yang, H., et al. 2022d, , 664, A114
[Han et al.(2022e)]Han2022e Han, C. Lee, C.-U., Gould, A., et al. 2022e, , 666, A132
[Han et al.(2023a)]Han2023a Han, C., Udalski, A., Jung, Y. K., et al. 2023a, , 670, A172
[Han et al.(2023b)]Han2023b Han, C., Gould, A., Jung, Y. K., et al. 2023b, , in press
[Hwang et al.(2021)]Hwang2021 Hwang, K.-H., Zang, W., Gould, A., et al. 2022, , 163, 43
[Hwang et al.(2022)]Hwang2022 Hwang, K.-H., Zang, W., Gould, A., et al. 2022, , 163, 43
[Jung et al.(2018)]Jung2018 Jung, Y. K., Udalski, A., Gould, A., et al. 2018, , 155, 219
[Jung et al.(2021)]Jung2021 Jung, Y. K., Han, C., Udalski, A., et al. 2021, , 161, 293
[Jung et al.(2022)]Jung2022 Jung, Y. K., Zang, W., Han, C., et al. 2022, , 164, 262
[Jung et al.(2023)]Jung2023 Jung, Y. K., Zang, W., Wang, H., et al. 2023, arXiv:2302.13544
[Kennedy & Kenyon(2008)]Kennedy2008 Kennedy, G. M. & Kenyon, S. J. 2008, , 673, 502
[Kervella et al.(2004)]Kervella2004 Kervella, P., Thévenin, F., Di Folco, E., & Ségransan, D. 2004, , 426, 29
[Kim et al.(2016)]Kim2016 Kim, S.-L., Lee, C.-U., Park, B.-G., et al. 2016, JKAS, 49, 37
[Kruszyńska et al.(2022)]Kruszynska2022 Kruszyńska, K., Wyrzykowski, Ł., Rybicki, K. A., et al. 2022, , 662, A59
[Luberto et al.(2022)]Luberto2022 Luberto, J., Martin, E. C., McGill, P., Leauthaud, A., Skemer, A. J., & Lu, J. R. 2022, , 164, 253
[Medford et al.(2023)]Medford2023 Medford, M. S., Abrams, N. S., Lu, J. R., Nugent, P., & Lam, C. Y. 2023, , 947, 24
[Mao & Paczyński(1991)]Mao1991 Mao, S., & Paczyński, B. 1991, , 374, 37
[Nataf et al.(2013)]Nataf2013 Nataf, D. M., Gould, A., Fouqué, P. et al. 2013, , 769, 88
[Sahu et al.(2022)]Sahu2022 Sahu, K. C., Anderson, J., Casertano, S., et al. 2022, , 933, 83
[Shin et al.(2023)]Shin2023 Shin, I.-G., Yee, J., Zang, W., et al. 2023, arXiv:2303.16881
[Siegert(2019)]Siegert2019 Siegert, T. 2019, , 632, L1
[Skowron et al.(2011)]Skowron2011 Skowron, J., Udalski, A., Gould, A., et al. 2011, , 738, 87
[Tonry et al.(2018)]Tonry2018 Tonry, J. L., Denneau, L., Heinze, A. N., et al. 2018, , 130:064505
[Udalski et al.(1994)]Udalski1994 Udalski, A., Kubiak, M., Szymański, M., Kałużny, J., Mateo, M., Krzemiński, W. 1994, Acta Astron., 44, 317
[Wang et al.(2022)]Wang2022 Wang, H., Zang, W., Zhu, W., et al. 2022, , 510, 1778
[Yee et al.(2012)]Yee2012 Yee, J. C., Shvartzvald, Y., Gal-Yam, A., et al. 2012, , 755, 102
[Yoo et al.(2004)]Yoo2004 Yoo, J., DePoy, D.L., Gal-Yam, A. et al. 2004, , 603, 139
[Zang et al.(2021a)]Zang2021a Zang, W., Han, C., Kondo, I., et al. 2021a, Research in Astron. and Astroph., 21, 239
[Zang et al.(2021b)]Zang2021b Zang, W., Hwang, K.-H., Udalski, A., et al. 2021b, , 162, 163
[Zang et al.(2022)]Zang2022 Zang, W., Yang, H., Han, C., et al. 2022, , 515, 928
[Zang et al.(2023)]Zang2023 Zang, W., Jung, Y.K., Yang, H., et al. 2023, , 165, 103
[Zhu et al.(2014)]Zhu2014 Zhu, W., Penny, M., Mao, S., Gould, A., & Gendron, R. 2014,
|
http://arxiv.org/abs/2307.03165v1
|
20230706174701
|
From Discovery to the First Month of the Type II Supernova 2023ixf: High and Variable Mass Loss in the Final Year Before Explosion
|
[
"Daichi Hiramatsu",
"Daichi Tsuna",
"Edo Berger",
"Koichi Itagaki",
"Jared A. Goldberg",
"Sebastian Gomez",
"Kishalay De",
"Griffin Hosseinzadeh",
"K. Azalee Bostroem",
"Peter J. Brown",
"Iair Arcavi",
"Allyson Bieryla",
"Peter K. Blanchard",
"Gilbert A. Esquerdo",
"Joseph Farah",
"D. Andrew Howell",
"Tatsuya Matsumoto",
"Curtis McCully",
"Megan Newsome",
"Estefania Padilla Gonzalez",
"Craig Pellegrino",
"Jaehyon Rhee",
"Giacomo Terreran",
"József Vinkó",
"J. Craig Wheeler"
] |
astro-ph.HE
|
[
"astro-ph.HE",
"astro-ph.SR"
] |
From Discovery to the First Month of Type II Supernova 2023ixf
Hiramatsu et al.
Daichi Hiramatsu, Daichi Tsuna, Edo Berger, Koichi Itagaki, Jared A. Goldberg, Sebastian Gomez, Kishalay De, Griffin Hosseinzadeh, K. Azalee Bostroem, Peter J. Brown, Iair Arcavi, Allyson Bieryla, Peter K. Blanchard, Gilbert A. Esquerdo, Joseph Farah, D. Andrew Howell, Tatsuya Matsumoto, Curtis McCully, Megan Newsome, Estefania Padilla Gonzalez, Craig Pellegrino, Jaehyon Rhee, Giacomo Terreran, József Vinkó, J. Craig Wheeler
Corresponding author: Daichi Hiramatsu ([email protected])
We present the discovery of Type II supernova (SN) 2023ixf in M101, among the closest core-collapse SNe in the last several decades, and follow-up photometric and spectroscopic observations in the first month of its evolution.
The light curve is characterized by a rapid rise (≈5 days) to a luminous peak (M_V≈-18 mag) and plateau (M_V≈-17.6 mag) extending to 30 days with a smooth decline rate of ≈0.03 mag day^-1. During the rising phase, U-V color shows blueward evolution, followed by redward evolution in the plateau phase.
Prominent flash features of hydrogen, helium, carbon, and nitrogen dominate the spectra up to ≈5 days after first light, with a transition to a higher ionization state in the first ≈2 days.
Both the U-V color and flash ionization states suggest a rise in the temperature, indicative of a delayed shock-breakout inside dense circumstellar material (CSM). From the timescales of CSM interaction, we estimate its compact radial extent of ∼(3-7)×10^14 cm.
We then construct numerical light-curve models based on both continuous and eruptive mass-loss scenarios shortly before explosion. For the continuous mass-loss scenario, we infer a range of mass-loss history with 0.1-1.0 M_⊙ yr^-1 in the final 2-1 years before explosion, with a potentially decreasing mass loss of 0.01-0.1 M_⊙ yr^-1 in ∼0.7-0.4 years towards the explosion.
For the eruptive mass-loss scenario, we favor eruptions releasing 0.3-1 M_⊙ of the envelope at about a year before explosion, which result in CSM with mass and extent similar to the continuous scenario.
We discuss the implications of the available multi-wavelength constraints obtained thus far on the progenitor candidate and SN 2023ixf to our variable CSM models.
§ INTRODUCTION
The majority of massive stars (zero-age main-sequence masses of M_ZAMS≈ 8-25 M_⊙) end their lives when their iron cores collapse, leading to explosions as hydrogen-rich (H-rich) Type II SNe (SNe II; see e.g., for reviews).
While there is consensus that the progenitors of SNe II are red supergiants (RSGs) from direct progenitor identifications (e.g., ), evidence is mounting that they experience elevated mass loss in the final months to decades before explosion, which is not predicted from standard stellar-evolution theory. The presence of circumstellar material (CSM) from such mass loss can be inferred from early light curve excess above shock-cooling emission (e.g., ), narrow high-ionization emission lines, so-called flash features, excited by the radiation from shock-breakout or SN-CSM shock interface (e.g., ), and/or bremsstrahlung X-ray and synchrotron radio/millimeter emission from SN-CSM shock interaction (e.g., ).
The recent discovery of SN II 2023ixf in M101 (<ref>), among the closest core-collapse SNe in the last several decades, has allowed intensive multi-wavelength observations. A dust-obscured RSG progenitor candidate with a possible periodic variability (≈1000-day period over the last ≈13 years) has been identified in pre-explosion Hubble Space Telescope, Spitzer Space Telescope, and ground-based optical to infrared images <cit.>.
Post-explosion observations have revealed: early light curve excess in the optical bands <cit.>; flash features in low- to high-resolution optical spectra <cit.>; early X-ray detections <cit.>; and millimeter non-detections <cit.>.
Here, we report our discovery of SN 2023ixf in <ref>, and follow-up photometric and spectroscopic observations in the first month of its evolution in <ref>. In <ref>, we analyze the light curves and spectra. We then construct numerical light curve models based on both continuous and eruptive mass-loss scenarios to extract the CSM formation history shortly before explosion in <ref>. Using this information we discuss a consistent picture among the available observational constraints. Finally, we summarize our findings in <ref>.
§ DISCOVERY AND CLASSIFICATION
We <cit.> discovered SN 2023ixf in M101 (Figure <ref>) on 2023 May 19.727 (UT dates are used throughout; MJD = 60083.727) at an unfiltered magnitude of 14.9±0.1 at R.A.=14^h03^m38^s.580 and decl.=+54^∘18'42".10 with the Itagaki Astronomical Observatory 0.35 m telescope (Okayama, Japan) + KAF-1001E CCD. The last non-detection was obtained with the same setup at 17.5 mag on 2023 May 17.749 (MJD = 60081.749).
Subsequently, several other observers have reported limits and detections in the time window between our last non-detection and discovery <cit.>. By taking the midpoint of the ≳ 20.4 mag limit on 2023 May 18.660 (MJD=60082.660) from <cit.> and the detection with 18.76±0.25 mag on 2023 May 18.826 (MJD=60082.826) from <cit.>, we estimate the epoch of first light to be MJD = 60082.743 ± 0.083, corresponding to 0.98 days before our discovery.
<cit.> obtained an optical spectrum of SN 2023ixf on 2023 May 19.93 (MJD = 60083.93; 4.9 hours after discovery) with the SPectrograph for the Rapid Acquisition of Transients (SPRAT; ) on the Liverpool Telescope (LT; ), classifying it as an SN II with flash ionization features.
Subsequent spectra have been obtained and reported by: <cit.> with the Himalaya Faint Object Spectrograph (HFOSC) on the 2-m Himalayan Chandra Telescope (HCT)[<http://www.iiap.res.in/iao/hfosc.html>]; <cit.> with the Alhambra Faint Object
Spectrograph and Camera (ALFOSC) on the Nordic Optical Telescope (NOT); <cit.> with the Beijing Faint Object Spectrograph and Camera (BFOSC) on the Xinglong 2.16-m Telescope <cit.>; and <cit.> with the Spectral Energy Distribution Machine (SEDM; ) on the Palomar 60-inch telescope, confirming the SN II classification.
In this work, we adopt the redshift of z=0.000804±0.000007 <cit.>[Via the NASA/IPAC Extragalactic Database: <http://ned.ipac.caltech.edu/>] and luminosity distance of d_L=6.9±0.1 Mpc (distance modulus of μ=29.194±0.039 mag; ) to M101. We use the estimated first light (MJD = 60082.743 ± 0.083) as the zeropoint reference for all phases unless otherwise specified.
§ OBSERVATIONS AND DATA REDUCTION
§.§ Photometry
Following the discovery, we continued monitoring SN 2023ixf with the Itagaki Astronomical Observatory 0.35 m and 0.5 m telescopes (Kochi, Okayama, and Yamagata, Japan) + unfiltered KAF-1001E CCD. Using our custom software, the aperture photometry was extracted and calibrated to Vega magnitudes from the Fourth US Naval Observatory CCD Astrograph Catalog <cit.>.
Additionally, we include the pre-discovery amateur points listed in <ref> in the subsequent analysis.
Through the Global Supernova Project <cit.>, we obtained Las Cumbres Observatory (LCO; ) UBVgriz-band imaging with the SBIG and Sinistro cameras on the network of 0.4 and 1.0 m telescopes at the Haleakala̅ Observatory (Hawaii, USA), McDonald Observatory (Texas, USA), and Teide Observatory (Canary Islands, Spain) starting on 2023 May 19.97 (MJD=60083.97; 5.8 hours after discovery). The initial photometry up to 2023 May 30 (MJD=60094) has been presented in <cit.>, and we extend it to 2023 June 18 (MJD=60113; 30 days after discovery) following the same reduction procedures; UBV and griz point-spread function (PSF) photometry calibrated to Vega <cit.> and AB <cit.> magnitudes, respectively, using [<https://github.com/LCOGT/lcogtsnpipe>] <cit.>.
In addition, we triggered the Neil Gehrels Swift Observatory Ultraviolet/Optical Telescope (UVOT) follow-up imaging starting on 2023 May 20.27 (MJD=60084.27; 13 hours after discovery).
The aperture photometry was conducted and calibrated to Vega magnitudes using the pipeline for the Swift Optical Ultraviolet Supernova Archive (SOUSA; ), including the zeropoints from <cit.> and the sensitivity correction from September 2020. The initial photometry up to 2023 May 21 (MJD=60085) has been presented in <cit.>, and we extend it to 2023 June 9 (MJD=60104). Unfortunately, the majority of the Swift UVOT observations were saturated, and we only include the unsaturated observations analysed by the standard SOUSA procedures.
Through our FLEET program <cit.>, we also obtained griz-band imaging with KeplerCam <cit.> on the 1.2 m Telescope at the Fred Lawrence Whipple Observatory (FLWO; Arizona, USA) starting on 2023 May 21.32 (MJD=60085.32; 1.59 days after discovery). The PSF photometry was extracted and calibrated to AB magnitudes from the Pan-STARRS1 <cit.> Data Release 2 <cit.>.
To explore possible pre-explosion variability of SN 2023ixf, we also processed and examined the Zwicky Transient Facility (ZTF; ), Asteroid Terrestrial-impact Last Alert System (ATLAS; ), and Wide-field Infrared Survey Explorer (WISE; ) survey data.
ZTF and ATLAS photometry were retrieved respectively from the ZTF forced-photometry service[<https://ztfweb.ipac.caltech.edu/cgi-bin/requestForcedPhotometry.cgi>] <cit.> in the gri bands (from 2018 March 21; MJD=58198) and the ATLAS forced photometry server[<https://fallingstar-data.com/forcedphot/>] <cit.> in the c and o bands (from 2015 December 30; MJD=57386). Then we stacked them with a time-window of every 10 days (≲1% of the observed progenitor period; ) for each filter to obtain deeper measurements.
For the ongoing NEOWISE all-sky survey in the W1 (3.4 μm) and W2 (4.5 μm) bands, we retrieved time-resolved coadded images of the field created as part of the unWISE project <cit.>. With the custom code <cit.> based on the ZOGY algorithm <cit.>, we performed image subtraction on the NEOWISE images using the full-depth coadds of the WISE and NEOWISE mission (obtained during 2010–2014) as reference images. Photometric measurements were obtained by performing forced PSF photometry at the transient position on the subtracted WISE images until the epoch of the unWISE data release (data acquired until 2022 May).
§.§ Spectroscopy
Through our FLEET program, we obtained optical spectra of SN 2023ixf with the FAST spectrograph <cit.> on the FLWO 1.5 m Telescope starting on 2023 May 21.33 (MJD=60085.33; 1.60 days after discovery). The combination of the 300 grating with a 3"-wide slit was used for dispersion, resulting in a wavelength coverage of 4000 with a resolution R≈1000. One-dimensional spectra were extracted, reduced, and calibrated following standard procedures using PyRAF, and flux calibrated using a standard taken during the same night. Additionally, we retrieved the public LT/SPRAT, HCT/HFOSC, NOT/ALFOSC, Xinglong/BFOSC, and P60/SEDM spectra (listed in <ref>) via the Transient Name Server (TNS)[<https://www.wis-tns.org/>] and include them in the subsequent analysis.
We also obtained high-resolution optical spectra on 2023 May 25 and 26 (MJD=60089 and 60090; 5.6 and 6.5 days after discovery) with the Tillinghast Reflector Echelle
Spectrograph (TRES; ) on the FLWO 1.5 m Telescope (3850-9100 coverage with R≈44,000), and on 2023 May 27, 28, and 30 (MJD=60091, 60092, and 60093; 7.4, 8.4, and 9.4 days after discovery) with Hectoechelle <cit.> on the 6.5 m MMT Observatory (6460-6660 coverage centered on Hα with R≈34,000). The reduced data products were provided by the Smithsonian Astrophysical Observatory Optical/Infrared Telescope Data Center <cit.>. Given the lack of narrow flash features in the TRES and Hectoechelle spectra,[Due to the combination of a low signal-to-noise ratio (the first TRES spectrum) and a late phase coverage (the later TRES and Hectoechelle spectra).] we only used them to measure the equivalent widths (EWs) of Na i D lines at the M101 redshift, yielding EW(D1)=0.121 and EW(D2)=0.180. Using the <cit.> calibration, these EWs translate to an M101 extinction of E(B-V)=0.032, in agreement with <cit.>.
§ ANALYSIS
All photometry and spectroscopy of SN 2023ixf are presented in Figures <ref> and <ref>, respectively.
We correct the photometry and spectroscopy for the Milky Way (MW) extinction of E(B-V)=0.0089 <cit.>,[Via the NASA/IPAC Infrared Science Archive: <https://irsa.ipac.caltech.edu/applications/DUST/>] as well as the M101 extinction of E(B-V)=0.032 (measured in <ref>), assuming the <cit.> reddening law with R_V=3.1 and extended to the WISE bands with the relative optical to infrared extinction values from <cit.>.
§.§ Light-Curve Evolution
We determine the detection significance (σ) of the ZTF, ATLAS, and WISE forced photometry from the ratio of measured flux (f) to its uncertainty (f_ err), i.e., σ=|f|/f_ err where the absolute value of f is taken to account for both positive and negative variability. As shown in Figure <ref>, no significant pre-explosion variability (>5σ) is seen at the SN 2023ixf position in ZTF (m_g,r,i≳21.8,22.0,21.3) and ATLAS (m_c,o≳20.2,20.3) within ≈2700 days before first light (except the two possible ZTF i-band detections at 22.5±0.2 mag (≈6.1σ) and 22.6±0.2 mag (≈5.3σ) from ≈1205-1195 days before first light, which may allow short-term variability). This is consistent with the lack of long-term pre-explosion variability reported by <cit.> in the deep Large Binocular Telescope (LBT) UBVR-band observations (m≳25) from ≈5600 to 400 days before first light, and extends such coverage to 0.32 days before first light, albeit with shallower limits.
The few sporadic (non-periodic) detections in the WISE W1 band (∼±25 uJy) are much higher than the variability (∼±8 uJy) seen in the Spitzer/IRAC Chanel 1 (3.6 μ m) observations with a similar temporal coverage (∼11.3–3.6 years before explosion; ), and visual inspections of the WISE difference images indicate imperfect image subtraction of the nearby H ii region (Figure <ref>) due to its relatively large PSF size (full widths at half-maximum ∼6"). Thus we do not consider these as significant detections.
The light curve of SN 2023ixf is characterized by a rapid rise to a bright peak, from M_V≈-10 to -18 mag within ≈5 days of first light (Figure <ref>), with a possible deviation from a single power-law rise in the first ≈ 1 day as noted by <cit.>. During this rising phase, the U-V color shows blueward evolution[Similar blueward color evolution is also seen in the other bands (e.g., g-r), indicating a change in the continuum rather than in specific lines.] from -0.86±0.03 mag (1.26 days) to -1.01±0.04 mag (3.39 days). Assuming a blackbody spectral energy distribution (SED), this color evolution corresponds to a temperature rise from ≈12000 K to 16000 K. Such color/temperature evolution is not expected from pure shock-cooling, and instead suggests a possible delayed shock-breakout through dense CSM, in which a photosphere initially forms inside the unshocked optically-thick CSM (e.g., ), as similarly seen in SN II 2018zd <cit.>.
Following the bright peak, the light curve of SN 2023ixf settles on a bright plateau (M_V≈-17.6 mag) extending to 30 days with a smooth decline rate of ≈0.03 mag day^-1 from the peak (Figure <ref>). This plateau is on the bright and fast-declining ends of SN II population (e.g., ; see also ). Coinciding with the peak, the U-V color shows a transition to redward evolution, reaching ≈ 0.5 mag (≈6000 K) at 30 days. In contrast to the early blueward evolution, this redward evolution is typical of SN II population (e.g., ), likely suggesting a switch in the location of the photosphere to the SN ejecta as it overruns the dense CSM. If we assume a typical SN shock velocity of ∼10000-15000 km s^-1 (e.g., ), the U-V color transition at 3.4-4.4 days corresponds to a CSM radial extent of ∼(3-6)×10^14 cm.
To extract CSM properties from the light curve modeling in <ref>, we construct a pseudobolometric light curve of SN 2023ixf by fitting a blackbody SED to every epoch of photometry containing at least three filters obtained within 30 minutes of each other, and then integrating the fitted blackbody SED over the optical coverage (UBVRI: 3250-8900). For the unfiltered Itagaki photometry with a similar wavelength coverage (≈3200-9900), we estimate its pseudobolometric correction at each epoch by linearly extrapolating or interpolating the blackbody temperatures from the multi-band fits and taking the ratio of the integrated blackbody to observed flux convolved with the CCD response function. For the pre-discovery amateur points, we crudely estimate their pseudobolometric luminosity assuming the same correction as the Itagaki discovery point, albeit with uncharacterized CCD/filter response functions.
We note that the observed SED peaks are bluer than the U band in the first ≈10 days (≳10000 K), and without reliable Swift UVOT coverage (<ref>) during this phase, the fitted blackbody temperatures may be underestimated by up to ∼10000 K (e.g., ), which translates to a factor of ∼ two in the pseudobolometric luminosity.
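A simplified version of this blackbody-based procedure is sketched below. It fits the Planck function at filter effective wavelengths rather than performing full synthetic photometry through the actual response curves, and the demonstration fluxes are synthetic, so it illustrates the method rather than reproducing the published light curve.

```python
import numpy as np
from scipy.optimize import curve_fit

H, C, KB = 6.626e-27, 2.998e10, 1.381e-16   # Planck, speed of light, Boltzmann (cgs)

def planck_lam(lam_cm, T):
    """Blackbody B_lambda(T) in erg s^-1 cm^-2 cm^-1 sr^-1."""
    return (2.0 * H * C**2 / lam_cm**5) / np.expm1(H * C / (lam_cm * KB * T))

def bb_flux(lam_cm, T, R_cm, d_cm):
    """Observed flux density of a blackbody photosphere of radius R at distance d."""
    return np.pi * planck_lam(lam_cm, T) * (R_cm / d_cm)**2

def pseudo_bol_lum(lam_eff_ang, f_lam, d_cm, band=(3250e-8, 8900e-8)):
    """Fit a blackbody to broadband fluxes (evaluated at filter effective wavelengths,
    a simplification of a full synthetic-photometry fit) and integrate over 3250-8900 A."""
    lam = np.asarray(lam_eff_ang) * 1e-8
    popt, _ = curve_fit(lambda l, T, R14: bb_flux(l, T, R14 * 1e14, d_cm),
                        lam, f_lam, p0=(1.0e4, 1.0), maxfev=10000)
    T, R = popt[0], popt[1] * 1e14
    grid = np.linspace(band[0], band[1], 2000)
    y = bb_flux(grid, T, R, d_cm)
    L = 4.0 * np.pi * d_cm**2 * np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(grid))
    return L, T, R

# Round-trip check with synthetic UBVRI-like fluxes from a 12,000 K, 3x10^14 cm blackbody
d_cm = 6.9 * 3.086e24                                   # 6.9 Mpc
lam_eff = [3650.0, 4450.0, 5510.0, 6580.0, 8060.0]      # rough effective wavelengths (Angstrom)
f_syn = bb_flux(np.array(lam_eff) * 1e-8, 1.2e4, 3e14, d_cm)
print(pseudo_bol_lum(lam_eff, f_syn, d_cm))             # pseudobolometric L (erg/s), T (K), R (cm)
```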
§.§ Spectral Evolution
As shown in Figure <ref>, the early spectral evolution of SN 2023ixf is characterized by the flash features of H, He i, He ii, C iii, C iv, N iii, and N iv on top of a blue continuum (see also ). The N iii λλ 4634,4641 and C iii λλ 4647,4650 complex is comparable in line strength to He ii λ 4686 in the first spectrum at 1.19 days, but then quickly fades in the second spectrum at 2.13 days. Similarly, C iii λ 5696 is comparable to C iv λλ 5801,5812 at 1.19 days, but fades at 2.13 days. This transition to a higher ionization state (C iii → C iv) likely indicates a rise in temperature if we assume a smooth CSM density profile with constant elemental abundance <cit.>, as similarly seen in SN II 2018zd <cit.>. This is consistent with the possible delayed shock-breakout seen in the U-V
color evolution (Figure <ref>).
The prominent flash features of He ii λ 4686 and C iv λλ 5801,5812 persist until ≈ 5.6 and 4.5 days, respectively (Figure <ref>). Using these timescales as an alternate proxy to the color evolution[These two estimates do not agree perfectly likely due to the difference in the optical depths they probe (i.e., optically-thick photosphere and optically-thin line emission; e.g., ).] for the SN ejecta overrunning the dense CSM and assuming the same SN shock velocity as in <ref>, a radial CSM extent can be estimated as ∼(4-7)×10^14 cm. Together with the U-V color evolution, the flash spectral series indicates a confined CSM, broadly in agreement with the estimates from <cit.>.
§ LIGHT CURVE MODELING WITH CSM INTERACTION
Motivated by the early light curve (<ref>) and spectral evolution (<ref>), we consider two different mass-loss scenarios for producing the confined CSM: continuous and eruptive.
§.§ Continuous Mass Loss
As a first scenario, we explore “superwind” mass loss (e.g., ) in which the continuous mass-loss rate is enhanced in the final years to decades before explosion by a few orders of magnitude (≳10^-3 M_⊙ yr^-1) compared to a typical RSG mass-loss rate (∼10^-5 M_⊙ yr^-1; e.g., ). We assume a constant wind velocity of v_ wind=115 km s^-1 inferred by <cit.> from the narrow Hα line profile in a high-resolution spectrum taken at 2.62 days. As noted by <cit.>, this velocity is higher than a typical RSG wind velocity of ∼ 20 km s^-1, and likely requires radiative acceleration <cit.> or eruptive mass loss (<ref>).
With the estimated CSM radial extent of ∼(3-7)×10^14 cm from the U-V color and flash spectral evolution (<ref>), the velocity gives a mass-loss duration of ∼0.8-1.9 years before explosion, which we explore here.
A broad range of possible masses, 8-20 M_⊙, has been reported for the progenitor of SN 2023ixf from pre-explosion imaging <cit.>. As a base model, we use a RSG model with a low mass (M_ ZAMS=12 M_⊙ and M_ final=11 M_⊙) and large radius (907 R_⊙) from <cit.> given the observed bright light-curve plateau (<ref>). We note that according to the scaling relations for SN II light-curve plateaus (e.g., ), these progenitor properties are degenerate with the explosion energy, which we vary to account for the uncertainties in the progenitor properties. A better characterization of the progenitor properties from light-curve modeling should be explored when the light curve is sampled to the radioactive tail.
We also note that CSM estimates are less sensitive to a particular choice of degenerate parameters that result in a comparable CSM-free light curve (e.g., ).
We use a combination of <cit.> and <cit.> for SN explosion and light curve calculations, respectively.
For the explosion models, we vary the explosion energy (E_ exp=(1.0-2.0) ×10^51 erg with 0.2×10^51 erg increments), with a single value for the synthesized ^56Ni mass (M_ Ni=0.1 M_⊙) as it has no significant effect on the early light curve evolution (e.g., ). We then use the output of these explosion models as an input to by adding a wind density profile, ρ_wind(r) = Ṁ_wind/(4π r^2 v_wind), with varying mass-loss rates (Ṁ_wind=10^-4-1 M_⊙ yr^-1 with 0.5 dex increments) and durations (t_wind=0.5-2.0 yr with 0.5 yr increments); the CSM mass is given by M_ CSM = Ṁ_wind t_wind. We use 700 and 100 spatial zones for the SN ejecta and CSM, respectively, and 100 frequency bins. A more detailed description of the + workflow can be found in previous works (e.g., ).
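For reference, the wind density profile and the associated CSM mass used in this grid can be written as a short helper; this is only an illustrative sketch in cgs units, using the 115 km s^-1 wind velocity adopted above and mass-loss rates from the grid.

```python
import numpy as np

MSUN = 1.989e33   # g
YR = 3.156e7      # s
KM = 1.0e5        # cm

def rho_wind(r_cm, mdot_msun_yr, v_wind_kms):
    """Steady-wind density profile, rho = Mdot / (4 pi r^2 v_wind), in g cm^-3."""
    mdot = mdot_msun_yr * MSUN / YR
    return mdot / (4.0 * np.pi * r_cm**2 * v_wind_kms * KM)

def csm_mass_msun(mdot_msun_yr, t_wind_yr):
    """CSM mass accumulated by a constant wind over t_wind, M_CSM = Mdot * t_wind."""
    return mdot_msun_yr * t_wind_yr

print(rho_wind(3e14, 1.0, 115.0))                         # density at 3x10^14 cm for 1 Msun/yr
print(csm_mass_msun(0.1, 1.0), csm_mass_msun(1.0, 1.0))   # 0.1-1 Msun for the quoted grid corner
```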
In Figure <ref>, the pseudobolometric light curves of both CSM-free models and a subset of CSM models are compared with that of SN 2023ixf. In these model comparisons, we reference the phase with respect to the Itagaki discovery (the first point with the known CCD characteristic; <ref>) and shift the model phase to match the discovery point since the “first light” is not well defined in the models.
The CSM-free models fail to reproduce the first ≈20 days of light-curve evolution (Figure <ref>, top left), which highlights the need for CSM interaction.
In the following CSM model comparison, we choose an explosion energy of E_ exp=1.0×10^51 erg as its CSM-free model provides the best fit at 30 days after discovery when the effect of CSM interaction is expected to become less significant (but we do not intend to constrain E_ exp given its degeneracy with the progenitor properties). For a given mass loss rate of Ṁ_wind=1.0 M_⊙ yr^-1, the peak luminosity is reasonably well reproduced by a mass-loss duration of t_ wind=1.0 yr (Figure <ref>, top right). A longer (shorter) t_ wind results in a peak that is too wide (narrow). For Ṁ_wind=0.3 M_⊙ yr^-1, t_ wind=2.0 yr better reproduces the peak luminosity. With a higher explosion energy of E_ exp=2.0×10^51 erg, these mass-loss estimates are reduced by ≈0.5 dex (i.e., Ṁ_wind≈0.1-0.3 M_⊙ yr^-1 for t_ wind=2.0-1.0 yr), albeit with an overshoot of the plateau luminosity at >20 days (as already seen in Figure <ref>, top left).
Below Ṁ_wind=0.1 M_⊙ yr^-1, we do not find any models with a peak luminosity ≳ 3×10^42 erg s^-1 (even with E_ exp=2.0×10^51 erg) in this confined CSM configuration with v_ wind=115 km s^-1. Thus we estimate a range of mass-loss history to be Ṁ_ wind=0.1-1.0 M_⊙ yr^-1 for t_ wind=2.0-1.0 yr to reproduce the peak luminosity.
For t_ wind=1.0 and 2.0 yr, the effect of varying Ṁ_wind is shown in Figure <ref> (bottom left). As discussed, Ṁ_wind≥0.3 M_⊙ yr^-1 is required to fit the peak luminosity well, and Ṁ_wind≲10^-3 M_⊙ yr^-1 has negligible effects on the peak luminosity. However, one difficulty with Ṁ_wind=0.3-1.0 M_⊙ yr^-1 for t_ wind=2.0-1.0 yr is that it overproduces the luminosity in pre-discovery phase. The zoomed-in version to this phase is shown in Figure <ref> (bottom right). For t_ wind=1.0 yr, Ṁ_wind=0.1 M_⊙ yr^-1 captures the early excess (whose presence is first noted by ) down to ≈0.6 days before discovery and the following rise up to ≈1 day after discovery, albeit with an overshoot of the deep upper limit at ≈8×10^38 erg s^-1 by ≈0.3 days. Therefore, we speculate a time-variable mass-loss rate in the final 2-1 years before explosion in which a lower mass-loss rate (≈0.01-0.1 M_⊙ yr^-1) follows a higher one (≈0.1-1 M_⊙ yr^-1) within the explosion properties explored here.[This may also be possible with an increasing v_ wind, as ρ_wind∝Ṁ_wind/v_wind. But it would create wind-to-wind collision, which should be explored in future work.] Assuming that the lower mass-loss rate is responsible for the first ∼2 days of light-curve evolution after first light as the SN-CSM shock layer propagates through, this transition might have happened at ∼0.7-0.4 years before explosion (with a SN shock velocity of ∼10000-15000 km s^-1). Then the total CSM mass could be estimated as ∼0.1-0.7 M_⊙.
§.§ Eruptive Mass Loss
Another scenario usually considered for a dense CSM is eruptive mass loss months to years before core-collapse. Such outbursts are observed for a significant fraction of SNe IIn (strongly interacting SNe with narrow H emission lines throughout its evolution; ). While such detections are limited for more standard SNe II (; but see ), this scenario is recently found to be favored over the superwind model from modeling of SN II progenitors <cit.>.
For this scenario, we follow and extend the methods of <cit.>, which used the open-source code <cit.> to simulate mass eruption, followed by <cit.> for calculating the bolometric light curve. The mass loss is assumed to be triggered by energy injection at the base of the envelope, which forms a shock that propagates to the surface and expels a part of the envelope. The model is characterized by two parameters: the injected energy and the amount of time from injection to core-collapse. These control the mass, velocity, and extent of the CSM at core-collapse.
We extend the model grid from <cit.> while using the same RSG progenitor as the earlier work evolved by of 15 M_⊙ at ZAMS, which is within the mass range inferred for the SN 2023ixf progenitor. The progenitor has a mass of 12.8 M_⊙ and radius of 670 R_⊙ at the time of energy injection, with modeling of mixing length and stellar wind outlined in <cit.>. This progenitor choice is different from that in <ref>; however, as noted, a variation in explosion energies should account for the difference given the light-curve scaling relations. For the injected energy, the code uses the injected energy scaled to the binding energy of the envelope (≈ 4.86× 10^47 erg for this progenitor star), defined in the code as f_ inj. A fraction of the envelope is unbound for f_ inj≳ 0.2, and we thus consider a range of values f_ inj=[0.2, 0.25, 0.3, 0.4, 0.5, 0.6, 0.7].
We first simulate explosions with CSM models of 1 and 2 years from energy injection, motivated by the deep LBT limits on apparent pre-SN outbursts of the progenitor up to about 400 days prior to the SN <cit.>.[We note that it takes several months from energy injection for the shock to propagate to the surface and be seen as an outburst.] For the range of f_ inj, the dense CSM expands in these 2 years to radii of ≈ (2-7)× 10^14 cm, with respective masses of ≈ 0.04-1.7 M_⊙. As in <cit.>, we then attach the evolved envelope to the core of the model, and simulate the explosion using . We excise the inner 1.8 M_⊙ of the star, and inject energy at the inner 0.1 M_⊙ as a thermal bomb. We consider the final explosion energy of E_ exp=10^51 and 2× 10^51 erg.
Using the bolometric luminosity output by , we assume that the emission is thermalized at the photosphere and calculate the luminosities in the UBVRI bands (3250-8900 Å).
The resulting light curves for injections 1 and 2 years before the SN are shown as dashed and dotted lines in Figure <ref>, respectively; a reference model with no CSM is shown as a dash-dotted line. As in <ref>, the model with no CSM has a monotonic rise and thus significantly underestimates the luminosity around the peak. The bright plateau of SN 2023ixf after the peak is well reproduced by the cooling emission of the shocked CSM <cit.>, for a range of f_ inj≈ 0.25–0.4.
While no perfect match to observations is found, the 1 year model with E_ exp=10^51 erg appears to best reproduce both the rise timescale and the plateau luminosity.[For a fixed f_ inj, the CSM extent generally scales with the escape velocity of the RSG progenitor. Thus the required timescale can change for different progenitor models, but this change is expected to be of order unity.] The mass of the CSM in these models are M_ CSM≈ 0.3–1 M_⊙, which agrees with the masses estimated in <ref>.
A notable problem for the models is that they generally over-predict the luminosity around the peak. One possible reason for this is a departure from thermal equilibrium in the post-shock region around breakout.
In our CSM model, the density sharply drops around the breakout radius, which can also accelerate the shock. A low CSM density and, more importantly, a large shock velocity (≳ 10^4 km s^-1) can both lead to deviation from thermal equilibrium <cit.>, which reduces the optical luminosity.
Another possibility for the eruptive case is a much earlier mass ejection, where breakout from a less dense CSM powers the peak and continued interaction powers the plateau. We explored models with longer intervals between the eruption and SN, and find that two models, with energy injection 10 (5) years before the SN with f_ inj=0.3 (0.25), reproduce the light curves better around the peak than the 1–2 yr models for E_ exp=10^51 erg. These light curves are shown as solid lines in Figure <ref>. In these models, the CSM extends to ≈ 10^15 (5× 10^14) cm with mass of ≈ 0.06 (0.04) M_⊙, which is consistent with the results of <cit.>.
However, as we explain in the next section, this model of early eruption is less preferred as it may face two challenges: the nearly periodic pre-explosion variability <cit.> and the early X-ray detection <cit.>.
§.§ Comparison to Other Observational Constraints
In both the continuous and eruptive mass-loss cases, we find that the dense CSM (0.1-1 M_⊙) confined within (3-7)×10^14 cm is able to reproduce the early light-curve evolution. Here we discuss other observational constraints obtained thus far, and their implications to our mass-loss models.
The infrared light curve of the progenitor from 2012 is consistent with a nearly regular periodic variability, with no evidence of an additional precursor emission <cit.>. The absence of change in dust optical depth between 5600-400 days before explosion has independently constrained the potential outburst to be ≲ 3-5 times brighter than the progenitor's original luminosity of ∼10^5L_⊙=4×10^38 erg s^-1 <cit.>.
The continuous mass-loss scenario may result in precursor emission due to the sustained energy injection that generates the outflow. With the inferred CSM properties, we estimate the luminosity of a possible precursor to be ≲ 2×10^39 erg s^-1 from <cit.> in a dust-free environment. This is within the allowed luminosity range without altering the dust optical depth too much (τ_V=10-13; ), and such an obscured precursor is consistent with the pre-explosion limits presented in Figure <ref>, as well as the deeper LBT limits <cit.> if the mass loss happens within ≈400 days before explosion (i.e., outside the LBT coverage).
If the mass loss is instead driven by pulsation (e.g., ), a less luminous precursor is likely expected, which may provide a more natural explanation for the periodic progenitor variability.
For the eruptive mass-loss scenario,
the precursor models of <cit.> are dust-free and cannot be directly constrained from the pre-explosion observations of SN 2023ixf. However, the precursor emission for models with f_ inj=0.25 and 0.3, which lasts for ≲ 1 year from eruption, are only ≲ 1 mag brighter <cit.> and is thus within the allowed luminosity range to avoid significant change of the dust optical depth <cit.>. A caveat is that for eruptions much earlier than 2 years before the SN, this scenario does not explain the observed periodic variability after the eruption. An eruption perturbs and oscillates the leftover envelope which may cause the variability, but there is no evident reason that both its timescale and amplitude would be similar to the observed one. It is worthwhile to study the precursor emission in dusty environments, as well as its later oscillation, to quantitatively constrain this scenario.
The X-ray data obtained 4 days after explosion suggest a rather tenuous CSM, with a mass-loss rate of ≈ 3× 10^-4 M_⊙ yr^-1 for an assumed CSM velocity of 50 km s^-1 <cit.>. Since the radius of the CSM probed is ≳(4-6)×10^14 cm for a SN shock velocity of ∼10000-15000 km s^-1, this does not contradict with our preferred models (both continuous and eruptive) of the compact, dense CSM. However, this is in tension with an eruption 5-10 years before explosion, whose density is higher than this by about an order of magnitude. An asymmetric CSM may alleviate this problem, if the line of sight is not subtended by the dense CSM inferred from the optical light curve. An asymmetric CSM is also suggested from early spectra <cit.> and millimeter non-detections (≲ mJy at 230 GHz; ), although the latter require the line of sight to be in the dense part of the CSM assuming a single wind density profile. Alternatively, the millimeter non-detections may be explained with our confined CSM models in that the early non-detections (≲4 days) are due to free-free absorption by the confined CSM, while the later non-detections are due to weaker synchrotron emission from the underlying extended CSM (∼10^-4-10^-5 M_⊙ yr^-1). This merits further investigations especially with a recent fainter radio detection (≈40 μ Jy at 10 GHz) at 29 days after first light <cit.>.
Estimates based on spectra <cit.> find a CSM of M_ CSM≈10^-2–0.1M_⊙, much smaller than our results of 0.1–1M_⊙. This may be due to the light curve and spectra probing different parts of the CSM (see e.g. for a similar discussion on SN II 2013fs). Indeed, the light-curve model from <cit.> with such a lower M_ CSM under-produces the luminosity in the first week of its evolution (see also for a similar discrepancy even with extreme large explosion energies of (2-5)×10^51 erg).
Furthermore, the density profiles of our preferred CSM models are different from a wind profile assumed in spectral modelling. As in <ref>, a single wind profile has difficulties explaining both the light-curve rise and plateau. The CSM from mass eruption is also expected to have a profile much different from a wind <cit.>.
§ SUMMARY
We have presented our discovery of SN 2023ixf in M101 at 6.9 Mpc, among the closest core-collapse SNe in the last several decades, and follow-up photometric and spectroscopic observations in the first month of its evolution. Our observations have revealed:
* A rapid rise (≈5 days) to a luminous light-curve peak (M_V≈-18) and plateau (M_V≈-17.6) extending to 30 days after first light with a smooth decline rate of ≈0.03 mag day^-1.
* Blueward U-V color evolution in the first 4.4 days towards the light-curve peak followed by redward color evolution during the light-curve plateau.
* Prominent flash spectral features of H, He i, He ii, C iii, C iv, N iii, and N iv persisting up to 5.6 days after first light.
* A transition to a higher spectral ionization state (i.e., C iii → C iv) in the first 2.1 days after first light.
These observed properties are not expected from pure shock-cooling emission, and instead suggest a delayed shock-breakout from a dense CSM confined within ∼(3-7)×10^14 cm.
Motivated by this, we have constructed numerical light-curve models based on both continuous and eruptive mass-loss scenarios shortly before explosion. Our key findings are as follows:
* For the continuous mass-loss scenario, we infer a range of mass-loss history with 0.1-1.0 M_⊙ yr^-1 in the final 2.0-1.0 years before explosion, with a potentially decreasing mass loss of 0.01-0.1 M_⊙ yr^-1 towards the explosion (≲0.7-0.4 years) to reproduce the rapid rise and luminous peak.
* For the eruptive mass-loss scenario, we favor energy injections of ∼ 10^47 erg at the base of the envelope in the final year before explosion, which releases 0.3–1 M_⊙ of the envelope as CSM. The mass and extent of the CSM broadly agree with the continuous scenario.
These confined, dense CSM models are compatible with the early X-ray detections, millimeter non-detections, and possibly the progenitor variability without clear outbursts before the final ∼1 year of explosion.
Given its proximity, SN 2023ixf will remain a primary SN target for multi-wavelength follow-up observations in the coming years. From the ultraviolet to the infrared, a fully sampled light curve up to a radioactive tail will allow better characterizations of the progenitor and explosion properties. Nebular-phase spectra have the potential to constrain the progenitor mass, possibly better than the current pre-explosion estimates. Deviations from a smooth light-curve plateau and/or tail, or continued X-ray and/or radio/millimeter emission, will provide unique opportunities to probe the mass-loss history in the final years to decades of its progenitor evolution.
§ ACKNOWLEDGMENTS
We are grateful to Anthony Piro, Jim Fuller, and Charles Kilpatrick for useful discussions, to Warren Brown, Pascal Fortin, Murdock Hart, David Latham, and Jessica Mink for scheduling, performing, and reducing FLWO KeplerCam, FAST, and TRES observations, and to Nelson Caldwell, Daniel Fabricant, and Sean Moran for scheduling and reducing the MMT Hectochelle observations.
The Berger Time-Domain research group at Harvard is supported by the NSF and NASA.
The LCO supernova group is supported by NSF grants AST-1911151 and AST-1911225.
D.T. is supported by the Sherman Fairchild Postdoctoral Fellowship at the California Institute of Technology.
The Flatiron Institute is supported by the Simons Foundation.
This publication was made possible through the support of an LSSTC Catalyst Fellowship to K.A.B., funded through Grant 62192 from the John Templeton Foundation to LSST Corporation. The opinions expressed in this publication are those of the authors and do not necessarily reflect the views of LSSTC or the John Templeton Foundation.
J.C.W. and J.V. are supported by NSF grant AST1813825. J.V. is also supported by OTKA grant K-142534 of the National Research, Development and Innovation Office, Hungary.
This work makes use of observations from KeplerCam on the 1.2 m telescope and FAST and TRES on the 1.5 m telescope at the Fred Lawrence Whipple Observatory.
Observations reported here were obtained at the MMT Observatory, a joint facility of the Smithsonian Institution and the University of Arizona.
This paper uses data products produced by the OIR Telescope Data Center, supported by the Smithsonian Astrophysical Observatory.
This work makes use of observations from the Las Cumbres Observatory global telescope network. The authors wish to recognize and acknowledge the very significant cultural role and reverence that the summit of Haleakala̅ has always had within the indigenous Hawaiian community. We are most fortunate to have the opportunity to conduct observations from the mountain.
We thank the support of the staffs at the Neil Gehrels Swift Observatory.
This work has made use of data from the Zwicky Transient Facility (ZTF). ZTF is supported by NSF grant No. AST-1440341 and a collaboration including Caltech, IPAC, the Weizmann Institute for Science, the Oskar Klein Center at Stockholm University, the University of Maryland, the University of Washington, Deutsches Elektronen-Synchrotron and Humboldt University, Los Alamos National Laboratories, the TANGO Consortium of Taiwan, the University of Wisconsin at Milwaukee, and Lawrence Berkeley National Laboratories. Operations are conducted by COO, IPAC, and UW. The ZTF forced-photometry service was funded under the Heising-Simons Foundation grant
No. 12540303 (PI: Graham).
This work has made use of data from the Asteroid Terrestrial-impact Last Alert System (ATLAS) project. ATLAS is primarily funded to search for near-Earth asteroids through NASA grant Nos. NN12AR55G, 80NSSC18K0284, and 80NSSC18K1575; byproducts of the NEO search include images and catalogs from the survey area. This work was partially funded by Kepler/K2 grant No. J1944/80NSSC19K0112 and HST grant No. GO-15889, and STFC grant Nos. ST/T000198/1 and ST/S006109/1. The ATLAS science products have been made possible through the contributions of the University of Hawaii Institute for Astronomy, the Queen’s University Belfast, the Space Telescope Science Institute, the South African Astronomical Observatory, and The Millennium Institute of Astrophysics (MAS), Chile.
The PS1 and the PS1 public science archives have been made possible through contributions by the Institute for Astronomy, the University of Hawaii, the Pan-STARRS Project Office, the Max-Planck Society and its participating institutes, the Max Planck Institute for Astronomy, Heidelberg and the Max Planck Institute for Extraterrestrial Physics, Garching, The Johns Hopkins University, Durham University, the University of Edinburgh, the Queen's University Belfast, the Harvard-Smithsonian Center for Astrophysics, the Las Cumbres Observatory Global Telescope Network Incorporated, the National Central University of Taiwan, the Space Telescope Science Institute, NASA under grant No. NNX08AR22G issued through the Planetary Science Division of the NASA Science Mission Directorate, NSF grant No. AST-1238877, the University of Maryland, Eotvos Lorand University, the Los Alamos National Laboratory, and the Gordon and Betty Moore Foundation.
This publication makes use of data products from the Wide-field Infrared Survey Explorer, which is a joint project of the University of California, Los Angeles, and the Jet Propulsion Laboratory/California Institute of Technology, funded by the National Aeronautics and Space Administration.
This research has made use of the NASA Astrophysics Data System (ADS), the NASA/IPAC Extragalactic Database (NED), and NASA/IPAC Infrared Science Archive (IRSA, which is funded by NASA and operated by the California Institute of Technology), and IRAF (which is distributed by the National Optical Astronomy Observatory, NOAO, operated by the Association of Universities for Research in Astronomy, AURA, Inc., under cooperative agreement with the NSF).
TNS is supported by funding from the Weizmann Institute of Science, as well as grants from the Israeli Institute for Advanced Studies and the European Union via ERC grant No. 725161.
ADS, ATLAS, FLWO (FAST, KeplerCam, TRES), IRSA, LCO (SBIG, Sinistro), MMT (Hectoechelle), NED, PS1, Swift (UVOT), WISE, ZTF.
Astropy, (<https://gist.github.com/thespacedoctor/86777fa5a9567b7939e8d84fd8cf6a76>), BANZAI <cit.>, Matplotlib <cit.>, NumPy <cit.>, PyRAF <cit.>, SciPy <cit.>, seaborn <cit.>, and other software packages <cit.>.
|
http://arxiv.org/abs/2307.03340v1
|
20230707004819
|
Calibrating Car-Following Models via Bayesian Dynamic Regression
|
[
"Chengyuan Zhang",
"Wenshuo Wang",
"Lijun Sun"
] |
stat.AP
|
[
"stat.AP"
] |
Calibrating Car-Following Models via Bayesian Dynamic Regression
Chengyuan Zhang [email protected]
Department of Civil Engineering
McGill University
Montreal, QC H3A 0C3, Canada
Wenshuo Wang [email protected]
Department of Civil Engineering
McGill University
Montreal, QC H3A 0C3, Canada
Lijun SunCorresponding author. [email protected]
Department of Civil Engineering
McGill University
Montreal, QC H3A 0C3, Canada
August 1, 2023
===============================================================================================================================================================================================================================================================================================================================================================================================================================================================
Car-following behavior modeling is critical for understanding traffic flow dynamics and developing high-fidelity microscopic simulation models. Most existing impulse-response car-following models prioritize computational efficiency and interpretability by using a parsimonious nonlinear function based on immediate preceding state observations. However, this approach disregards historical information, limiting its ability to explain real-world driving data. Consequently, serially correlated residuals are commonly observed when calibrating these models with actual trajectory data, hindering their ability to capture complex and stochastic phenomena. To address this limitation, we propose a dynamic regression framework incorporating time series models, such as autoregressive processes, to capture error dynamics. This statistically rigorous calibration outperforms the simple assumption of independent errors and enables more accurate simulation and prediction by leveraging higher-order historical information. We validate the effectiveness of our framework using HighD and OpenACC data, demonstrating improved probabilistic simulations. In summary, our framework preserves the parsimonious nature of traditional car-following models while offering enhanced probabilistic simulations. The code of this work is open-sourced at <https://github.com/Chengyuan-Zhang/IDM_Bayesian_Calibration>.
Car-Following Models, Dynamic Regression, Bayesian Inference, Microscopic Traffic Simulation
§ INTRODUCTION
Car-following behavior plays a critical role in understanding and predicting traffic flow dynamics. Various car-following models have been developed in the literature, including the optimal velocity model (OVM) <cit.>, the Gipps model <cit.>, and the intelligent driver model (IDM) <cit.> along with its variants <cit.>. These models typically utilize recent observations of the ego vehicle's speed, relative speed to the leading vehicle, and spacing/gap as inputs to an explicitly predefined nonlinear function. This function computes the acceleration or speed as the driver's decision at the current time step. The parsimonious structure of these models offers computational efficiency, interpretability, and analytical connections with macroscopic relations <cit.>. Consequently, car-following model-based simulations have been widely used to gain insights into complex traffic flow dynamics.
Despite the fact that car-following models can reproduce important physical phenomena such as shockwaves, it has been highlighted in many recent studies that their parsimonious structure limits the effectiveness of reproducing real-world driving behaviors with high fidelity <cit.>. The simplicity of these parsimonious models overlooks high-order historical information, resulting in inaccurate predictions. Recent studies have shown that incorporating observations from the past 1∼4 seconds improves the modeling of car-following decisions <cit.>. However, calibrating parsimonious car-following models based on single-timestamp observations from real-world trajectories leads to temporally correlated errors[Note that “errors” pertain to the true data generating process, whereas “residuals” are what is left over after having an estimated model. Assumptions such as normality, homoscedasticity, and independence purely apply to the errors of the data generating process, but not the model's residuals. Readers should distinguish these two terms in the following.]. Assuming independent errors in the calibration process introduces bias <cit.>. To address this challenge, <cit.> developed a Bayesian calibration framework by modeling errors using Gaussian processes (GPs). Although this approach offers improved and consistent calibration, it is not a flexible solution for simulation due to the computational complexity of the predictive mechanism of GPs.
Although these models are valuable in understanding traffic flow dynamics, the limitations suggest a need for a more nuanced approach to address the biased calibration of existing car-following models. In this paper, we utilize a dynamic regression framework to model car-following sequence data, incorporating a generative time series model to capture errors. The proposed framework builds upon the classic car-following model, i.e., IDM, retaining the parsimonious features of traditional car-following models while significantly improving prediction accuracy and enabling probabilistic simulations by incorporating higher-order historical information. Here we emphasize that our framework does not change the parsimonious structure of the existing models and thus can be applied to various car-following models. The experiments demonstrate the effectiveness of this framework in achieving enhanced predictions and a more accurate calibration of the car-following models. Our results suggest that the driving actions within the past 10 seconds should be considered when modeling human car-following behaviors. This aligns with the literature <cit.>, where 10-s historical information has been identified as the best input.
The contributions of this work are threefold:
* We present a novel calibration method for car-following models based on a dynamic regression framework. This method enhances existing parsimonious car-following models by incorporating higher-order historical information without changing the prevailing models. The inclusion of this flexible form enables unbiased calibration.
* The framework integrates autoregressive (AR) processes within time series models to handle errors, representing an advancement from the conventional assumption of independent and identically distributed (i.i.d.) errors. This enhancement introduces a statistically rigorous approach, offering improved modeling capabilities.
* The data generative processes of our framework offer an efficient probabilistic simulation method for car-following models, which reasonably involves the stochastic nature of human driving behaviors and accurately replicates real-world traffic phenomena.
This paper is organized as follows. Section <ref> overviews the existing car-following models and simulation methods. Section <ref> introduces our proposed dynamic regression framework to address the limitations of existing models. Section <ref> demonstrates the effectiveness of our model in calibration and simulation, followed by conclusions in Section <ref>.
§ RELATED WORKS
Car-following models have been extensively used to understand and predict drivers' behaviors in traffic, and model calibration is crucial. The traditional calibration methods, such as genetic algorithm-based calibration <cit.>, provide only point estimation for model parameters, lacking the ability to capture driving behavior uncertainty. In contrast, probabilistic calibration and modeling approaches are commonly considered effective in addressing both epistemic uncertainties related to unmodeled details and aleatory uncertainty resulting from the model prediction failures <cit.>.
A well-designed car-following model should capture both the inter-driver heterogeneity (diverse driving behaviors of different drivers <cit.>) and intra-driver heterogeneity (the varying driving styles of the same driver <cit.>). Probabilistic calibration collects significant data from various drivers and conditions to fit the model parameters using statistical distributions instead of fixed values. For instance, the IDM parameters like comfortable deceleration and maximum acceleration can be modeled as random variables with specific distributions <cit.>, reflecting the range of driving styles. A typical probabilistic calibration method is maximum likelihood estimation (MLE) <cit.>. Bayesian inference, as another probabilistic approach, combines prior knowledge and data to estimate the model parameter distribution, allowing for capturing inter-driver heterogeneity. By representing behaviors probabilistically, a spectrum of plausible behaviors is obtained instead of a single deterministic response. Bayesian calibration approaches are discussed in detail in <cit.>. To address intra-driver heterogeneity, stochastic car-following models are developed to account for a driver's behavior's dynamic, time-varying nature. These models introduce a random component that captures moment-to-moment behavior changes, such as using a stochastic process like Markov Chain to model a driver's reaction time or attention level. For instance, <cit.> modeled each IDM parameter as a stochastic process and calibrated them in a time-varying manner.
In addition to traditional models, machine learning techniques have gained popularity in driving behavior modeling and prediction. These techniques leverage large datasets to capture complex, non-linear relationships and account for inter-driver and intra-driver heterogeneity. Data-driven models can be trained on a wealth of driving data to predict individual driver behavior and traffic flow, such as K-nearest neighbor algorithm (KNN) <cit.>, Gaussian mixture model (GMM) <cit.>, hidden Markov model (HMM) <cit.>, and long short-term memory (LSTM) networks <cit.>. These models can complement traditional car-following models and enhance their predictive capabilities. Furthermore, the real-time adaptation of model parameters based on the current driving context and state of the driver and the vehicle can capture intra-driver heterogeneity. This involves continuous parameter updates using sensory information to reflect changes in driving conditions <cit.>.
Overall, probabilistic car-following models, enhanced by machine learning and real-time adaptation, offer a robust framework for capturing the observed diverse and complex driving behaviors. These models effectively handle the heterogeneity and uncertainty inherent in driving behavior by leveraging various methodologies. However, each model has its limitations. While data-driven models can capture complex patterns, they may lack interpretability and be data-hungry. They can also be prone to the out-of-distribution problem. Traditional car-following models, though transparent, may not be flexible enough to encompass the full complexity of human driving behaviors. This work aims to leverage the prior knowledge of traditional car-following models while incorporating the flexibility needed to represent diverse human driving behaviors accurately. The efficacy of this approach is explicitly demonstrated through calibration and simulations of the IDM.
§ METHODOLOGY
§.§ Preliminaries: IDM and its Variants
In this part, we assume that the internal states of each driver remain stationary, implying that their driving styles do not change within the region of interest (ROI). Thus we ignore the discussion of intra-driver heterogeneity. This assumption allows us to use a single model with time-invariant parameters to learn observed car-following behaviors. As a starting point, we introduce several models: the basic IDM with a deterministic formulation <cit.>, the probabilistic IDM with action uncertainty <cit.>, the Bayesian IDM with prior knowledge, and the memory-augmented IDM with a temporal structure <cit.>.
§.§.§ Intelligent Driver Model
IDM <cit.> is a continuous nonlinear function f: ℝ^3 ↦ℝ which maps the gap, the speed, and the speed difference (approach rate) to acceleration at a certain timestamp. Let s represent the gap between the following vehicle and the leading vehicle, v denote the following vehicle's speed, and Δ v=-ds/dt indicate the speed difference. The physical meanings of these notations are illustrated in <Ref>, where v_l denotes the speed of the leading vehicle. IDM computes vehicle acceleration using the following nonlinear function f:
f(s,v,Δ v) ≜α (1-(v/v_0)^4-(s^∗(v,Δ v)/s)^2),
s^∗(v,Δ v) = s_0+ s_1 √(v/v_0) + v T+v Δ v/2 √(α β),
where v_0, s_0, T, α, and β, are model parameters with specific physical meanings. The desired speed v_0 is the free-flow speed. The jam spacing s_0 denotes a minimum gap distance from the leading vehicle. The safe time headway T represents the minimum interval between the following and leading vehicles. The acceleration α and the comfortable braking deceleration β are the maximum vehicle acceleration and the desired deceleration to keep safe, respectively. The deceleration is controlled by the desired minimum gap s^∗, and we set s_1=0 following <cit.> to obtain a model with interpretable and easily measurable parameters.
To make the notations concise and compact, we define a vector θ=[v_0,s_0,T,α,β] ∈ℝ^5 as the IDM parameters. For a certain driver d, we formulate the IDM acceleration term as f(s_d^(t),v_d^(t),Δ v_d^(t);θ_d). Compactly, we write the inputs at time t as a vector h_d^(t)=[s_d^(t),v_d^(t),Δ v_d^(t)]. Then, we denote the IDM term as IDM^(t)(h_d^(t);θ_d), further abbreviated as IDM_d^(t), where the subscript d represents the index for each driver and the superscript (t) indicates the timestamp. Given IDM_d^(t), we can update the vehicle speed v^(t+1) and position x^(t+1) following the ballistic integration scheme as in <cit.> with a step of Δ t:
v^(t+1) = v^(t) + a^(t)Δ t,
x^(t+1) = x^(t) + v^(t)Δ t + 1/2a^(t)Δ t^2.
Specifically, three steps are involved in the discrete decision-making processes simulation shown in <Ref>. (i) Initialization: the available information at time t includes a^(t-1), x^(t), and v^(t); (ii) Decision-making: we estimate the possible action IDM^(t) based on IDM; (iii) Action execution and state updates: We take a specific action a^(t) according to the decisions, resulting in the updated motion states v^(t+1) (<Ref>) and x^(t+1) (<Ref>). The calibration of IDM can be performed using different data, such as spacing, speed, and acceleration, as outlined in <cit.>. In this section, we focus on introducing the generative processes of acceleration data and provide detailed calibration methods on the speed or/and spacing in Section <ref>.
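To make the decision-and-update cycle above concrete, the following minimal Python sketch implements the IDM acceleration f(s, v, Δ v) with s_1 = 0 and the two-step ballistic update. It is an illustration rather than released code: the function names and example states are ours, and the parameter defaults reuse the recommended values θ_rec that we adopt later for the IDM priors.

    import numpy as np

    def idm_acceleration(s, v, dv, v0=33.3, s0=2.0, T=1.6, alpha=1.5, beta=1.67):
        # IDM acceleration f(s, v, dv) with s1 = 0
        s_star = s0 + v * T + v * dv / (2.0 * np.sqrt(alpha * beta))
        return alpha * (1.0 - (v / v0) ** 4 - (s_star / s) ** 2)

    def ballistic_step(x, v, a, dt=0.2):
        # update position and speed with the ballistic integration scheme
        return x + v * dt + 0.5 * a * dt ** 2, v + a * dt

    # one decision/execution cycle of a follower behind a leader
    x_f, v_f = 0.0, 20.0        # follower position [m] and speed [m/s]
    x_l, v_l = 30.0, 18.0       # leader position and speed
    gap = x_l - x_f             # spacing s (vehicle length ignored in this sketch)
    a_f = idm_acceleration(gap, v_f, v_f - v_l)   # dv = v - v_l = -ds/dt
    x_f, v_f = ballistic_step(x_f, v_f, a_f)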
§.§.§ Probabilistic IDM with i.i.d. Errors
Imperfect and irregular driving behaviors result in erratic components of the driver's action <cit.>. We can introduce some action noises with standard deviations σ_η that are tolerated in the IDM framework to model such behaviors. Here, a_IDM represents the rational behavior model, while the random term accounts for the imperfect driving behaviors that IDM cannot capture. Considering the random term with the assumption of i.i.d. noise, a probabilistic IDM <cit.> can be developed as
â_d^(t)|h_d^(t),θ_d ∼i.i.d. 𝒩(IDM_d^(t),σ_η^2),
where IDM and σ^2_η represent the mean and variance and â is the observed data of the true acceleration a.
The car-following model calibration is to estimate the model parameters θ by seeking the best mapping from h_d^(t) to a_d^(t), v_d^(t+1), and s_d^(t+1) <cit.>. Various approaches, such as utility-based optimization and maximum-likelihood techniques, are usually adopted (see <cit.>).
§.§.§ Bayesian IDM with i.i.d. Errors
We introduce a hierarchical Bayesian IDM, as proposed by <cit.>, that explicitly captures general driving behaviors at the population level while representing individual-level heterogeneity. This model is described as follows: For any pairs of time step and vehicle (t,d) in the set {(t,d)}_t=t_0, d=1^t_0+TΔ t,D, we have
σ_0 ∼Exp(λ_0),
Σ ∼LKJCholeskyCov(η, σ_0),
ln(θ) ∼𝒩(0, Σ_0),
ln(θ_d) ∼𝒩(ln(θ), Σ),
σ_η ∼Exp(λ_η),
â_d^(t)|h_d^(t),θ_d ∼i.i.d. 𝒩(IDM_d^(t), σ_η^2),
where LKJ represents the LKJ distribution <cit.>, λ_0, η, Σ_0, and λ_η are the manually set hyperparameters, and the other variables are inferred from the data.
However, note that <Ref> assumes i.i.d. errors, but in practice, the residuals are temporally correlated, i.e., non-i.i.d., as illustrated in <Ref>.
§.§.§ Memory-Augmented IDM with Gaussian Processes and i.i.d. Errors
Maintaining temporal consistency in actions in daily driving tasks is crucial for human drivers, commonly referred to as driving persistence <cit.>. However, most stochastic models, including the Bayesian IDM, overlook the persistence of acceleration noise. Instead, they incorrectly model its time dependence as white noise, assuming i.i.d. errors, as shown in <Ref>, which can be reformulated as
a_d^(t) = IDM_d^(t) + ϵ_d^(t), ϵ_d^(t) ∼i.i.d. 𝒩(0,σ_η^2).
Actually, these models and calibrated parameters are only valid when the errors adhere to the i.i.d. assumption, and the temporal correlation in residuals leads to biased estimation of model parameters.
To address this limitation, <cit.> proposed a Bayesian calibration method that involves Gaussian processes (GPs) to depict the “memory effect” <cit.>, named as memory-augmented IDM (MA-IDM), which can capture the temporally correlated errors, a_d^(t) = IDM_d^(t) + a_GP d^(t). Stacking the scalar a_d^(t) along the time horizon ∀ t∈ [t_0+Δ t, t_0+TΔ t] results in a vector a_d∈ℝ^T. Such that we can derive a vector form a_d = IDM_d + a_GP d, with a_GP d∼𝒩(0, K), where K represents the kernel matrix constructed by radial basis function (RBF) kernel with σ_k and ℓ. With this setting, the MA-IDM is written as
σ_0 ∼Exp(λ_0),
Σ ∼LKJCholeskyCov(η, σ_0),
ln(θ) ∼𝒩(0, Σ_0),
ln(θ_d) ∼𝒩(ln(θ), Σ),
σ_k ∼Exp(λ_k),
ln(ℓ) ∼𝒩(μ_ℓ,σ_ℓ),
a_d|h_d,θ_d ∼𝒩(IDM_d,K).
§.§ Calibration of the Dynamic IDM with Autoregressive Processes and i.i.d. Errors
The kernel function selection restricts the flexibility of modeling the temporally correlated errors with GPs. To address this limitation, we propose a novel unbiased hierarchical Bayesian model called the dynamic IDM, which incorporates AR processes to represent the time-dependent stochastic error term within a dynamic regression framework. By absorbing AR processes into the calibration framework, we adopt a simple yet effective approach of accumulating actions along the history steps. This approach has been tested and proven effective in modeling car-following behaviors <cit.>.
Here we distinguish the concepts of “states” and “observations”, where states (i.e., the real acceleration) representing the underlying dynamics of the vehicle, and observations are the measured values (e.g., speed and spacing). In <Ref>, we first build the equation of states, which describes how the system evolves along the time horizon. Then in the rest of this subsection, we develop the observation equation, which describes how the underlying state is transformed (with noise added) into something that we directly measure.
§.§.§ Modeling the temporal correlations with AR processes
Specifically, we assume the error of the state follows a p-order AR process denoted by AR(p):
a_d^(t) = IDM_d^(t) + ϵ^(t)_d,
ϵ^(t)_d = ρ_1ϵ_d^(t-1) + ρ_2ϵ_d^(t-2)+… + ρ_pϵ_d^(t-p) + η_d^(t),
where η^(t) ∼i.i.d.𝒩(0,σ_η^2) represents a white noise series. In the following discussions, we refer to IDM_d^(t) as the mean component and ϵ^(t)_d as the stochastic component.
We estimate the model (<Ref>) by constructing the likelihood on the white noise process η_d^(t). Given the model with estimated parameters, probabilistic prediction can be achieved by first sampling η_d^(t) and then sequentially feeding it into the recursion
a_d^(t) = IDM_d^(t)+ ρ_1(a_d^(t-1) - IDM_d^(t-1)) + ρ_2(a_d^(t-2) - IDM_d^(t-2))+ ⋯
+ ρ_p(a_d^(t-p) - IDM_d^(t-p)) +η_d^(t).
The above equation explicitly demonstrates the advantages of our method compared with traditional models — it involves rich information from several historical steps to make decisions for the current step instead of using only one historical step.
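For illustration, the recursion above can be simulated directly once a mean model and AR coefficients are available. The short Python sketch below is our own illustrative code: the IDM mean component is passed in as a precomputed array, and the AR(4) coefficients and noise scale are placeholder values, not estimates from the data.

    import numpy as np

    def simulate_ar_accelerations(idm_mean, rho, sigma_eta, rng=None):
        # sample one trajectory a^(t) = IDM^(t) + eps^(t), with eps an AR(p) process
        rng = np.random.default_rng() if rng is None else rng
        p = len(rho)
        eps = np.zeros(len(idm_mean))
        a = np.zeros(len(idm_mean))
        for t in range(len(idm_mean)):
            ar_part = sum(rho[k] * eps[t - 1 - k] for k in range(p) if t - 1 - k >= 0)
            eps[t] = ar_part + rng.normal(0.0, sigma_eta)   # white noise eta^(t)
            a[t] = idm_mean[t] + eps[t]
        return a

    # e.g., AR(4) with illustrative coefficients around a zero mean model
    a_sample = simulate_ar_accelerations(np.zeros(100), rho=[0.5, 0.2, -0.1, -0.05], sigma_eta=0.1)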
Here, we introduce the form of the hierarchical dynamic IDM as ∀ (t,d)∈{(t,d)}_t=t_0, d=1^t_0+TΔ t,D
σ_0 ∼Exp(λ_0),
Σ ∼LKJCholeskyCov(η, σ_0),
ln(θ) ∼𝒩(0, Σ_0),
ln(θ_d) ∼𝒩(ln(θ),Σ),
σ_η ∼Exp(λ_η),
ρ ∼𝒩(0, diag(σ_ρ)),
a_d^(t)|h_d^(t),θ_d ∼i.i.d. 𝒩(IDM_d^(t)+∑_k=1^pρ_k (a_d^(t-k)-IDM_d^(t-k)) , σ_η^2 ),
where ρ=[ρ_1,ρ_2,…,ρ_p] is estimated during model training. Given hyperparameters λ_0 and η, <Ref> and <Ref> set the prior for the covariance matrix Σ∈ℝ^5 of the IDM parameters, which control the balance between the pooled model and the unpooled model, as highlighted by <cit.>. Then <Ref> sets the prior for the population-level IDM parameters θ, and based on which <Ref> describes the prior of the individual-level IDM parameters θ_d for driver d. By providing the hyperparameter λ_η, we can specify the prior for the variance of the random noise in <Ref>. Further, the priors of the AR coefficients are normally distributed with zero means, as shown in <Ref>. Finally, we can derive the likelihood as a normal distribution shown in <Ref>.
The Bayesian IDM is a special case of the dynamic IDM. If we set the AR order equal to zero (i.e., p=0), this model is exactly equivalent to the Bayesian IDM. We compare the probabilistic graphical models of the Bayesian IDM, MA-IDM, and dynamic IDM in <Ref>.
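As a rough sketch of how the likelihood above can be coded, the PyMC fragment below calibrates a single driver with AR(p) errors. It deliberately omits the population level of the hierarchy (the LKJ prior and the driver-level parameters), and the prior widths and the Exponential rate are illustrative assumptions rather than the hyperparameters used in our experiments.

    import numpy as np
    import pymc as pm

    def fit_dynamic_idm(s, v, dv, a_obs, p=4):
        # single-driver dynamic IDM with AR(p) errors (simplified sketch);
        # s, v, dv, a_obs are 1-D arrays of gap, speed, speed difference,
        # and observed acceleration on the same time grid
        n = len(a_obs)
        with pm.Model():
            s_d = pm.ConstantData("s_data", s)
            v_d = pm.ConstantData("v_data", v)
            dv_d = pm.ConstantData("dv_data", dv)
            a_d = pm.ConstantData("a_data", a_obs)

            # log-normal priors centred on the recommended IDM values
            v0 = pm.LogNormal("v0", mu=np.log(33.3), sigma=0.2)
            s0 = pm.LogNormal("s0", mu=np.log(2.0), sigma=0.2)
            T = pm.LogNormal("T", mu=np.log(1.6), sigma=0.2)
            alpha = pm.LogNormal("alpha", mu=np.log(1.5), sigma=0.2)
            beta = pm.LogNormal("beta", mu=np.log(1.67), sigma=0.2)
            rho = pm.Normal("rho", mu=0.0, sigma=0.5, shape=p)
            sigma_eta = pm.Exponential("sigma_eta", 10.0)

            # IDM mean component over the whole series (s1 = 0)
            s_star = s0 + v_d * T + v_d * dv_d / (2.0 * pm.math.sqrt(alpha * beta))
            idm_all = alpha * (1.0 - (v_d / v0) ** 4 - (s_star / s_d) ** 2)

            # dynamic-regression mean: IDM^(t) + sum_k rho_k (a^(t-k) - IDM^(t-k))
            mu = idm_all[p:]
            for k in range(1, p + 1):
                mu = mu + rho[k - 1] * (a_d[p - k:n - k] - idm_all[p - k:n - k])

            pm.Normal("a_lik", mu=mu, sigma=sigma_eta, observed=a_obs[p:])
            idata = pm.sample(draws=1000, tune=3000, target_accept=0.9)
        return idata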
§.§.§ Calibration on speed data with observation noise
Our proposed method can be used for the calibration based on the acceleration, speed, and spacing data. Here, we provide a brief introduction to the calibration method based on speed data. More details could be found in the appendix.
According to <Ref>, the likelihood of a noisy speed observation is written as
v̂_d^(t+1)∼𝒩(v̅_d^(t+1), (σ_ηΔ t)^2+σ_v^2),
where v̅_d^(t+1)=v_d^(t)+ IDM_d^(t)Δ t+ ∑_k=1^p ρ_k (v_d^(t-(k-1)) - v_d^(t-k)) - ∑_k=1^p ρ_k IDM_d^(t-k)Δ t, v̂_d^(t+1) is the observed data of the true speed v_d^(t+1), and σ_v^2 is the variance of the observation noise.
§.§.§ Calibration on position/spacing data with observation noise
According to <Ref>, the likelihood of a noisy positional observation is written as
x̂_d^(t+1)∼𝒩(x̅_d^(t+1), (1/2σ_ηΔ t^2)^2+σ_x^2),
where the mean x̅_d^(t+1)=x_d^(t) + v_d^(t)Δ t + 1/2IDM_d^(t)Δ t^2 + 1/2∑_k=1^p ρ_k (v_d^(t-(k-1)) - v_d^(t-k))Δ t - 1/2∑_k=1^p ρ_k IDM_d^(t-k)Δ t^2, x̂_d^(t+1) is the observed data of the true position x_d^(t+1), and σ_x^2 is the variance of the observation noise of position data. Note that this is a position-based form, but one can easily adapt it into a gap-based form.
§.§.§ Calibration on both speed and position data with joint likelihood
By jointly considering the information and observation noise from the speed and position data, we can derive the likelihood using a bi-variate normal distribution, written as
[[ x̂_d^(t+1); v̂_d^(t+1) ]] ∼𝒩([[ x̅_d^(t+1); v̅_d^(t+1) ]], [[ (1/2σ_ηΔ t^2)^2 1/2σ_η^2Δ t^3; 1/2σ_η^2Δ t^3 (σ_ηΔ t)^2 ]] + [[ σ_x^2 0; 0 σ_v^2 ]] ).
The variance values (σ_x and σ_v) in the observation noise determine the reliability of the position and speed data. When σ_x is set to a large value, this form tends to be equal to pure calibration on the speed data. Similarly, when σ_v is large, it equals calibration only using the position data.
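To make the covariance structure of this joint likelihood explicit, a small helper (our own illustration, not part of the released code) can assemble the 2×2 matrix from the process-noise scale σ_η and the two measurement-noise scales:

    import numpy as np

    def joint_obs_covariance(sigma_eta, dt, sigma_x, sigma_v):
        # process-noise part of the (position, speed) observation pair plus
        # independent measurement noise on each channel
        process = np.array([
            [(0.5 * sigma_eta * dt ** 2) ** 2, 0.5 * sigma_eta ** 2 * dt ** 3],
            [0.5 * sigma_eta ** 2 * dt ** 3, (sigma_eta * dt) ** 2],
        ])
        measurement = np.diag([sigma_x ** 2, sigma_v ** 2])
        return process + measurement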
§ EXPERIMENTS AND SIMULATIONS
We begin by evaluating the calibration results on the regression task and analyzing the identified parameters of IDM and the learned AR processes. Next, we demonstrate the simulation capability of our proposed method through short-term and long-term simulations. The short-term simulations quantitatively assess the replication of human behaviors, while the long-term simulations validate the ability of the identified parameters to capture critical car-following dynamics.
§.§ Experimental Settings
§.§.§ Dataset
The car-following model performance can be affected by noise in empirical data <cit.>. To mitigate the impact of data quality and avoid the need for excessive data filtering, selecting an appropriate dataset is crucial. Different datasets can be utilized to verify various aspects of model capability. For instance, studying general car-following behaviors based on deep learning models requires informative data with sufficiently long trajectories. Exploring both population-level and individual-level driving behaviors based on hierarchical models necessitates multi-user driving data. Similarly, analyzing the driving style shifts in a single driver based on behavior models relies on the daily trajectories of that specific driver.
To assess hierarchical car-following models for multiple drivers, we evaluate our model using the HighD dataset <cit.>, which offers high-resolution trajectory data collected with drones. The HighD dataset provides several advantages over the NGSIM dataset <cit.>, with advanced computer vision techniques and more reliable data-capture methods. It consists of 60 video recordings from several German highway sections, each spanning a length of 420 m. The original dataset is downsampled to a smaller set with a sampling frequency of 5 Hz, achieved by uniformly selecting every 5-th sample. The recorded trajectories, speeds, and accelerations of two types of vehicles (car and truck) are measured and estimated. We follow the same data processing procedures as in <cit.> to transform the data into a new coordinate system. We retain trajectories with a car-following duration longer than a threshold t_0 = 50 s for robust estimation of IDM parameters <cit.>. Then, we randomly select several leader-follower pairs from these data for each type of vehicle.
Since the HighD dataset only provides short trajectories with duration up to about 60 s, we also evaluate our method on OpenACC[<https://data.jrc.ec.europa.eu/dataset/9702c950-c80f-4d2f-982f-44d06ea0009f>.] trajectories with a duration of about 1000 s. OpenACC dataset provides car-following records of a platoon, where a human driver operates the first vehicle, and four following vehicles in the platoon are autonomously controlled by an adaptive cruise control (ACC) module, as shown in <Ref>. The data in OpenACC is also downsampled to 5 Hz.
§.§.§ Model Training
We develop our model using PyMC 4.0 <cit.>. We adopt the No-U-Turn Sampler, a variant of Hamiltonian Monte Carlo <cit.>, and set the burn-in steps to 3000 to ensure the detailed balance condition holds for the Markov chains. We adopt the values θ_rec=[33.3, 2.0, 1.6, 1.5, 1.67] recommended in <cit.> for the IDM priors. In addition, we set λ_0=100 and λ=2× 10^6 for the exponential distribution and η=2 for the LKJCholeskyCov distribution. Codes for all experimental results reported in this paper are released at <https://github.com/Chengyuan-Zhang/IDM_Bayesian_Calibration>.
§.§ Calibration Results and Analysis
We compare the parameters of the Bayesian IDM with the dynamic IDM in <Ref>. The parameter σ_η represents the noise level of the i.i.d. errors, which decreases as the AR order increases. This indicates that the residual of the Bayesian IDM (when p=0) retains valuable information, as reflected by a higher value of σ_η. With higher AR order, more information is captured by the AR processes, reducing the contribution of the noise term. It is worth noting that the model with AR (p=0) is equivalent to the Bayesian IDM.
To compare the roles of GP and AR processes, we examine the RBF kernel matrix alongside the empirical covariance matrix estimated from real data and the covariance matrix generated from AR(4) (see <Ref>). The horizontal axes are time steps, where each time step represents 0.2 s. The RBF kernel captures short-term positive correlations and fails to capture the long-term negative correlations observed in the residuals extracted from real driving data. To address this limitation, the dynamic IDM involves AR processes to capture both short-term positive and long-term negative correlations more realistically. This finding suggests that human drivers' immediate stochastic component can be significantly impacted by their short-term ϵ_d^(t) values (up to 5 s) <cit.>, potentially due to driving persistence <cit.>. Moreover, we uncover a notable observation: the stochastic components are also negatively correlated with the previous ϵ_d^(t) values in a long-term period (5∼10 s). For example, if the acceleration value of the human driver's current action is greater than the expected behavior, i.e., the mean component in <Ref>, then in the following 5∼10 s, the driver is very likely to take an action whose value is lower than the expected behaviors.
§.§ Simulations
One of the highlights of the dynamic IDM is its simulation scheme, leveraging the explicitly defined generative processes of the error term presented in <Ref>. This feature allows for the simulation and updating of future motions based on these generative processes. In what follows, we will introduce the parameter generation method for IDM and develop a stochastic simulator, which has been successfully tested in multiple cases.
§.§.§ Probabilistic Simulations
In <cit.>, the authors emphasize a key advantage of the Bayesian method over point estimator-based approaches. The Bayesian method provides joint distributions of parameters instead of only a single value. Consequently, instead of relying solely on the mean values from <Ref>, we can draw a large number of samples from the posterior joint distributions to generate anthropomorphic IDM parameters effectively.
Here we select one driver as an illustrative example to show the calibration results. We generate N=1000 sets of IDM parameters from the posteriors of the calibrated model. Then, given the leading vehicle's full trajectories and the initial states of the follower, we simulate the follower's full trajectories. The follower's behavior is controlled by the calibrated IDM, with the addition of random noise sampled from the generative processes. The simulation step is set to 0.2 s. According to <Ref>, the simulation process involves three steps: (1) generating the mean model by sampling a set of IDM parameters from θ_d, (2) computing the serial correlation term according to the historical information, and (3) sampling white noise randomly.
We compare our model with the baseline MA-IDM <cit.>. Simulated trajectories are generated using parameter samples from θ_#211 (<Ref>). In the comparison (<Ref>), our method shows tighter containment of the ground truth within the envelope of posterior motion states, while MA-IDM requires a wider range. We evaluate the model performance by comparing the root mean square errors (RMSE) of motion states (acceleration, speed, and gap) between the ground truth and fully simulated trajectories. Additionally, we use the continuous ranked probability score (CRPS) <cit.> to evaluate the performance of stochastic simulations and quantify the uncertainty of posterior motion states, which can be written as
CRPS(y_t) = ∫_-∞^+∞(F(y)-1{y>y_t})^2 dy,
where y_t is the observation at time t, F is the forecast cumulative distribution function, and 1 is the indicator function. We use RMSE/CRPS of many fraction samples with 5 s to evaluate the effectiveness of the short-term simulation, as shown in <Ref>. Results indicate that our model outperforms Bayesian IDM and MA-IDM in 5-second simulations, especially with the AR order p≥4. In what follows, we mainly demonstrate the simulation performance using the model with AR(4). Recall that the length scale of the RBF kernel in MA-IDM is ℓ≈ 1.33 s <cit.>, implying that MA-IDM generally performs well within 4 s (i.e., 3ℓ) simulations. To evaluate this, we also conducted simulations with fraction length varying from 2 s to 8 s, as shown in <Ref>, and the results indicate that the MA-IDM could perform reasonably well within 4 s. For those longer than 4 s, the results of MA-IDM would then be more dominated by random noise rather than IDM, and its performance tend to be similar to the Bayesian IDM.
§.§.§ Multi-Vehicle Simulations: Ring-Road and Platoon
Long-term simulations enable the analysis of dynamic traffic behaviors at the macroscopic level by simultaneously simulating a group of vehicles. To validate the capability for large-scale traffic simulations, we conduct multi-vehicle simulation experiments in a ring-road scenario <cit.> and a platoon.
Ring Road. Key elements of the ring road include a ring radius of 128 m (resulting in a circumference of approximately 804 m), initial speeds set at 11.6 m/s, and 32 vehicles for light traffic or 37 vehicles for dense traffic, simulated for 15000 steps with a simulation step of 0.2 s. The multi-vehicle simulation is conducted with two different models, as shown in <Ref>. The top is simulated with the Bayesian IDM and the others are simulated with the dynamic IDM (p=4). <Ref> (a) presents a recurring pattern from the simulations with IDM parameters, although a stochastic term is introduced. In contrast, different random car-following behaviors in the heterogeneous setting (see <Ref> (b)) are obtained, indicating the drivers' dynamic and diverse driving styles. The dynamic IDM simulation can reproduce various real-world traffic phenomena, including shock-wave dissipation.
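A self-contained Python sketch of such a ring-road run is given below; it uses the geometry and settings quoted above (804 m circumference, 11.6 m/s initial speed, 0.2 s step) but illustrative AR(4) coefficients and a homogeneous driver population, so it is a conceptual stand-in for the simulations reported here rather than a reproduction of them.

    import numpy as np

    def idm_acc(s, v, dv, v0=33.3, s0=2.0, T=1.6, alpha=1.5, beta=1.67):
        s_star = s0 + v * T + v * dv / (2.0 * np.sqrt(alpha * beta))
        return alpha * (1.0 - (v / v0) ** 4 - (s_star / s) ** 2)

    def ring_road_sim(n_veh=32, circumference=804.0, v_init=11.6, n_steps=15000,
                      dt=0.2, rho=(0.5, 0.2, -0.1, -0.05), sigma_eta=0.1, seed=0):
        # dynamic-IDM followers on a ring; vehicle length is ignored for simplicity
        rng = np.random.default_rng(seed)
        x = np.linspace(0.0, circumference, n_veh, endpoint=False)  # evenly spaced start
        v = np.full(n_veh, v_init)
        eps = np.zeros((len(rho), n_veh))            # buffer of the last p residuals
        speeds = np.empty((n_steps, n_veh))
        for t in range(n_steps):
            gap = (np.roll(x, -1) - x) % circumference   # each vehicle follows the next one
            gap = np.maximum(gap, 0.1)                   # numerical floor on the spacing
            dv = v - np.roll(v, -1)
            a = idm_acc(gap, v, dv)
            e = (np.asarray(rho)[:, None] * eps).sum(axis=0) + rng.normal(0.0, sigma_eta, n_veh)
            a = a + e                                    # add the AR(p) stochastic component
            x = (x + v * dt + 0.5 * a * dt ** 2) % circumference
            v = np.maximum(v + a * dt, 0.0)
            eps = np.vstack([e, eps[:-1]])               # shift the residual buffer
            speeds[t] = v
        return speeds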
Platoon. In the platoon simulation, the first vehicle is the leader with its entire trajectory available. We simultaneously simulate four successive followers' trajectories based on the dynamic IDM with AR(4). <Ref> shows the four followers' speed profiles, where the black lines represent the actual driving speed. We can see that the dynamic IDM still can accurately capture and replicate the real car-following behaviors even in long-range simulations.
§ CONCLUSION
This paper presents a dynamic regression framework for calibrating and simulating car-following models, which plays a critical role in understanding traffic flow dynamics. Our approach addresses a key limitation of existing car-following models by incorporating historical information, resulting in a more accurate reproduction of real-world driving behavior. By integrating AR processes into the error modeling, we offer a statistically rigorous alternative to the assumption of independent errors commonly found in current models. This enables the consideration of higher-order historical information, leading to improved simulation and prediction accuracy. Experiment results indicate that modeling human car-following behavior should incorporate actions from the past 10 s, capturing short-term positive correlations (0∼5 s) and long-term negative correlations (5∼10 s) in real driving data. The framework's effectiveness is demonstrated through its application to the HighD data. The proposed framework inherits the parsimonious feature of traditional car-following models and provides more realistic simulations. In conclusion, our research sheds light on a potential direction for the development of high-fidelity microscopic traffic simulation models. By integrating historical data, our dynamic regression framework enables more accurate predictions and simulations of car-following behavior, which is crucial for traffic management and planning. We hope our findings could stimulate further research in this direction, fostering a deep understanding of traffic flow dynamics and the development of advanced and efficient microscopic traffic models.
However, as with all research, there are avenues for further development. Future work could explore incorporating a mean model as a car-following model with time-varying parameters to enhance adaptability to diverse scenarios and driving modes. Additionally, adding a moving average component might capture the temporal dependence and irregular patterns, and reduce the noise level through smoothing.
C. Zhang would like to thank the McGill Engineering Doctoral Awards (MEDA), the Interuniversity Research Centre on Enterprise Networks, Logistics and Transportation (CIRRELT), Fonds de recherche du Québec – Nature et technologies (FRQNT), and the Natural Sciences and Engineering Research Council (NSERC) of Canada for providing scholarships and funding to support this study.
§ APPENDIX
§.§ (1) Calibration on speed data with observation noise
Recall that our model follows
a^(t) = IDM^(t)+ ϵ^(t),
ϵ^(t) = ρ_1ϵ^(t-1) + ρ_2ϵ^(t-2)+… + ρ_pϵ^(t-p) + η,
η ∼i.i.d. 𝒩(0,σ^2_η),
and given the dynamic updating in <Ref> v^(t+1) = v^(t)+ a^(t)Δ t, then we have
a^(t) = IDM^(t)+ ρ_1(a^(t-1) - IDM^(t-1)) + ρ_2(a^(t-2) - IDM^(t-2))
+
⋯+ ρ_p(a^(t-p) - IDM^(t-p)) +η,
v^(t+1) = v^(t)+ IDM^(t)Δ t+ ρ_1(a^(t-1)Δ t - IDM^(t-1)Δ t) + ⋯
+ ρ_p(a^(t-p)Δ t - IDM^(t-p)Δ t) +ηΔ t
= v^(t)+ IDM^(t)Δ t+ ρ_1((v^(t-1) + a^(t-1)Δ t) - (v^(t-1) + IDM^(t-1)Δ t))+
⋯
+ρ_p((v^(t-p) + a^(t-p)Δ t) - (v^(t-p) + IDM^(t-p)Δ t)) +ηΔ t
= v^(t)+ IDM^(t)Δ t+ ρ_1(v^(t) - (v^(t-1) + IDM^(t-1)Δ t)) +
⋯
+ ρ_p(v^(t-(p-1)) - (v^(t-p) + IDM^(t-p)Δ t)) +ηΔ t
= v^(t)+ IDM^(t)Δ t+ ∑_k=1^p ρ_k (v^(t-(k-1)) - v^(t-k)) - ∑_k=1^p ρ_k IDM^(t-k)Δ t +ηΔ t.
Such that we can give the likelihood of a noisy observation as
v̂^(t+1)∼𝒩(v̅^(t+1), (σ_ηΔ t)^2+σ_v^2),
where v̅^(t+1)=v^(t)+ IDM^(t)Δ t+ ∑_k=1^p ρ_k (v^(t-(k-1)) - v^(t-k)) - ∑_k=1^p ρ_k IDM^(t-k)Δ t, v̂^(t+1) is the observed data of the true speed v^(t+1), and σ_v^2 is the variance of the observation noise.
§.§ (2) Calibration on position/spacing data with observation noise
Similar to the previous approach, from <Ref> we have x^(t+1) = x^(t) + v^(t)Δ t+ 1/2a^(t)Δ t^2; then according to <Ref>, we can reformat x^(t+1) as
x^(t+1) = x^(t) + 1/2v^(t)Δ t + 1/2(v^(t)+ IDM^(t)Δ t+ ∑_k=1^p ρ_k (v^(t-(k-1)) - v^(t-k))
- ∑_k=1^p ρ_k IDM^(t-k)Δ t +ηΔ t)Δ t
= x^(t) + v^(t)Δ t + 1/2IDM^(t)Δ t^2 + 1/2∑_k=1^p ρ_k (v^(t-(k-1)) - v^(t-k))Δ t
- 1/2∑_k=1^p ρ_k IDM^(t-k)Δ t^2 +1/2ηΔ t^2.
Accordingly, the likelihood is written as
x̂^(t+1)∼𝒩(x̅^(t+1), (1/2σ_ηΔ t^2)^2+σ_x^2),
where the mean x̅^(t+1)=x^(t) + v^(t)Δ t + 1/2IDM^(t)Δ t^2 + 1/2∑_k=1^p ρ_k (v^(t-(k-1)) - v^(t-k))Δ t - 1/2∑_k=1^p ρ_k IDM^(t-k)Δ t^2, x̂^(t+1) is the observed data of the true position x^(t+1), and σ_x^2 is the variance of the observation noise of position data. Note that this is a position-based form, but one can easily adapt it into a gap-based form.
0.2in
|
http://arxiv.org/abs/2307.00253v1
|
20230701072004
|
Causal Functional Connectivity in Alzheimer's Disease Computed from Time Series fMRI data
|
[
"Rahul Biswas",
"SuryaNarayana Sripada"
] |
q-bio.NC
|
[
"q-bio.NC",
"q-bio.QM",
"stat.AP"
] |
Causal Functional Connectivity in Alzheimer's Disease Computed from Time Series fMRI data
August 1, 2023
=====================================================
Functional Connectivity between brain regions is known to be altered in Alzheimer's disease, and promises to be a biomarker for early diagnosis of the disease. While several approaches for functional connectivity obtain an un-directed network representing stochastic associations (correlations) between brain regions, association does not necessarily imply causation. In contrast, Causal Functional Connectivity is more informative, providing a directed network representing causal relationships between brain regions. In this paper, we obtained the causal functional connectome for the whole brain from recordings of resting-state functional magnetic resonance imaging (rs-fMRI) for subjects from three clinical groups: cognitively normal, mild cognitive impairment, and Alzheimer's disease. We applied the recently developed Time-aware PC (TPC) algorithm to infer the causal functional connectome for the whole brain. TPC supports model-free estimation of whole brain causal functional connectivity based on directed graphical modeling in a time series setting. We then identified the causal brain connections between brain regions which have significantly different strengths between pairs of subject groups, and over the three subject groups. We used the significant causal brain connections thus obtained to compile a comprehensive list of brain regions impacted by Alzheimer's disease according to the current data set. The obtained brain regions are in agreement with existing literature published by researchers from clinical/medical institutions thus validating the approach. We then discuss the soundness and completeness of the results and the potential for using causal functional connectivity obtained using this methodology as a basis for the prognosis and diagnosis of Alzheimer's disease.
Keywords: Causal inference, functional connectivity, brain mapping, directed graphical modeling, Alzheimer's disease, functional magnetic resonance imaging.
§ INTRODUCTION
Alzheimer's disease (AD) is the most common age-related progressive neurodegenerative disorder. It typically begins with a preclinical phase and advances through mild cognitive impairment (MCI) to clinically significant AD, which is a form of dementia <cit.>. Despite significant efforts to identify biomarkers for AD, it still relies on clinical diagnosis, and early and accurate prediction of the disease remains limited <cit.>. Abnormal resting-state functional connectivity (FC) between brain regions has been observed as early as two decades before brain atrophy and the emergence of AD symptoms <cit.>. Therefore, resting-state FC can potentially determine the relative risk of developing AD <cit.>.
Resting-state functional magnetic resonance imaging (rs-fMRI) records the blood-oxygen-level-dependent (BOLD) signal from different brain regions while individuals are awake and not engaged in any specific task. The BOLD signal is popularly used to infer functional connectivity between brain regions partly due to the advantage that BOLD signal provides high spatial resolution <cit.>.
Functional connectivity refers to the stochastic relationship between brain regions with respect to their activity over time. Popularly, functional connectivity involves measuring statistical association between signals from different brain regions. The statistical association measures are either pairwise associations between pairs of brain regions such as Pearson's correlation, or multivariate i.e. incorporating multi-regional interactions such as undirected graphical models <cit.>. Detailed technical explanations of functional connectivity in fMRI can be found in <cit.>. The findings from studies using FC <cit.>, and meta-analyses <cit.> indicate a decrease in connectivity in several brain regions in relation to Alzheimer's disease (AD), such as the hippocampus and posterior cingulate cortex. These regions play a role in memory and attentional processing. On the other hand, some studies have found an increase in connectivity within brain regions in early stages of AD and MCI <cit.>. This is a well known phenomenon, where increase in FC between certain brain regions occurs when the communication between other brain regions is impaired. Such hyperconnectivity has been interpreted as a compensatory mechanism where alternative paths within the brain's network are recruited <cit.>.
In contrast to associative FC, causal FC represents functional connectivity between brain regions more informatively by a directed graph, with nodes as the brain regions, directed edges between nodes indicating causal relationships between the brain regions, and weights of the directed edges quantifying the strength of the corresponding causal relationship <cit.>. However, functional connectomics studies in general, and in relation to fMRI from Alzheimer's disease in particular, have predominantly used associative measures of FC <cit.>. There are a few studies focusing on alterations in CFC in relation to Alzheimer's disease <cit.>, however this area is largely unexplored. This is partly due to the lack of methods that can infer the CFC in a desirable manner as explained next.
Several properties are desirable in the context of causal modeling of functional connectivity <cit.>. Specifically, the CFC should represent causality while being free of limiting assumptions such as linearity of interactions. In addition, since the activities of brain regions are related over time, such temporal relationships should be incorporated in defining causal relationships in neural activity. The estimation of CFC should be computationally feasible for whole-brain functional connectivity, instead of being limited to a smaller brain network. It is also desirable to capture beyond-pairwise, multi-regional cause-and-effect interactions between brain regions. Furthermore, since the BOLD signal occurs and is sampled at a temporal resolution that is far slower than the neuronal activity, causal effects often appear as contemporaneous <cit.>. Therefore, the causal model for fMRI data should support contemporaneous interactions between brain regions.
Among the methods for finding CFC, Dynamic Causal Model (DCM) requires a mechanistic biological model and compares different model hypotheses based on evidence from data, and is unsuitable for estimating the CFC of the whole brain <cit.>. On the other hand, Granger Causality typically assumes a vector auto-regressive linear model for the activity of brain regions over time, and it tells whether a region's past is predictive of another's future <cit.>. Furthermore, GC does not include contemporaneous interactions. This is a drawback since fMRI data often consist of contemporaneous interactions <cit.>. In contrast, Directed Graphical Modeling (DGM) has the advantage that it does not require the specification of a parametric equation of the neural activity over time, it is predictive of the consequence of interventions, and it supports estimation of whole brain CFC. Furthermore, the approach inherently goes beyond pairwise interactions to include multiregional interactions between brain regions, along with estimation of the cause and effect of such interactions. The Time-aware PC (TPC) algorithm is a recent method for computing the CFC based on DGM in a time series setting <cit.>. In addition, TPC also incorporates contemporaneous interactions among brain regions. A detailed comparative analysis of approaches to find causal functional connectivity is provided in <cit.>. With the development of methodologies such as Time-aware PC, it would be possible to infer the whole brain CFC with the aforementioned desirable properties.
In this paper, we apply the TPC algorithm to infer the causal functional connectivity between brain regions from resting-state fMRI data. By applying the algorithm to the fMRI of subjects, we estimate the subject-specific CFC for all subjects in the dataset. It is noteworthy that different subjects are in different clinical categories: Cognitively Normal (CN), Mild Cognitive Impairment (MCI), and Alzheimer's Disease (AD). To identify which causal connections differ between brain regions across pairs of clinical categories, we utilize Welch's t-test, comparing the weights of causal functional connections of subjects for a pair of clinical categories. This analysis reveals the causal links between brain regions that exhibit significant differences between subjects in distinct clinical categories, such as with cognitively normal vs. Alzheimer's disease. Additionally, we employ a Kruskal-Wallis H-test, which is a non-parametric version of ANOVA test, to identify causal links that show significant differences across subjects of the three clinical categories. These links provide insights into the causal connectivity connections that are relevant in exhibiting differences among the three clinical categories. We then compile a comprehensive list of brain regions impacted by Alzheimer's disease based on the significant causal links obtained from the current dataset. Notably, the obtained brain regions are consistent with existing literature, with each such publication being a report from a team involving a clinical setting and at least one medical expert, thereby validating the approach. We then discuss the soundness and completeness of the results and the potential for using causal functional connectivity obtained using this methodology as a basis for the prognosis and diagnosis of Alzheimer's disease.
§ MATERIALS AND METHODS
§.§ Participants
The resting fMRI and demographic data were downloaded from the Harvard Dataverse (https://doi.org/10.7910/DVN/29352https://doi.org/10.7910/DVN/29352) <cit.>. A total of 30 subjects were included in the study: 10 subjects who are cognitively normal (CN), 10 subjects with mild cognitive impairment (MCI), and 10 subjects with Alzheimer's disease (AD).
In the experiments, general cognitive evaluation of subjects was obtained using the Mini-Mental State Examination (MMSE) <cit.>. The subjects were age-matched (ANOVA test: F = 1.5, p > 0.2) and gender-matched (chi-square test: χ^2 = 1.9, p > 0.3), although subjects with MCI or AD were less educated than subjects in CN group (t-tests: AD vs CN, t = -4.0, p < 0.001; MCI vs CN, t = -2.3, p < 0.05). As expected, MMSE scores had a significant difference between all pairs of groups (t-tests: AD vs CN, t = -6.5, p < 0.001; MCI vs CN, t = -4.6, p < 0.001; MCI vs AD: t = 3.1, p < 0.05).
Table <ref> includes a summary of the participants' demographic and medical information.
§.§ Image Acquisition
The acquisition of fMRI images was performed using a Siemens Magnetom Allegra scanner. The fMRI images were obtained using an echo planar imaging (EPI) sequence at a field strength of 3.0 Tesla, with a repetition time (TR) of 2.08 seconds, an echo time (TE) of 30 milliseconds, and a flip angle of 70 degrees. The matrix size was 64 × 64 pixels, with 32 axial slices parallel to the AC-PC plane, an in-plane resolution of 3 × 3 mm^2, and a slice thickness of 2.5 mm. Resting scans lasted 7 minutes and 20 seconds, for a total of 220 volumes, during which subjects were instructed to keep their eyes closed, not to think of anything in particular, and to refrain from falling asleep.
§.§ fMRI Preprocessing
The fMRI pre-processing steps were carried out using the CONN toolbox version 21a, which utilizes Statistical Parametric Mapping (SPM12); both are MATLAB-based cross-platform software packages <cit.>. We used the default preprocessing pipeline in CONN, consisting of the following steps in order: functional realignment and unwarping (subject motion estimation and correction), functional centering to (0,0,0) coordinates (translation), slice-time correction with interleaved slice order, outlier identification using the Artifact Detection and Removal Tool (ART), segmentation into gray matter, white matter, and cerebrospinal fluid tissue, direct normalization into standard Montreal Neurological Institute (MNI) brain space, and lastly, smoothing using spatial convolution with a Gaussian kernel of 8 mm full width at half maximum. This pipeline was followed by detrending and bandpass filtering (0.001-0.1 Hz) to remove low-frequency scanner drift and physiological noise from the fMRI images. The first four time points were discarded to remove any initial artifacts.
For the extraction of Regions-Of-Interest (ROIs), the automated anatomical labeling (AAL) atlas was utilized on the preprocessed rs-fMRI dataset <cit.>. The list of all regions in AAL atlas is provided in Appendix <ref> along with their abbreviated, short, and full region names. This specific parcellation method has been demonstrated to be optimal for studying the functional connectivity between brain regions <cit.>. The voxels within each ROI were averaged, resulting in a time series for each ROI.
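For concreteness, the voxel-averaging step can be sketched as follows (a minimal illustration, not the CONN/AAL implementation; the array shapes and variable names are assumptions):

```python
# Minimal sketch of ROI time-series extraction by voxel averaging.
# Assumes `bold` is a preprocessed 4D array (X, Y, Z, T) and `labels` is a 3D
# AAL label volume with ROI ids 1..116 in the same space; both are hypothetical here.
import numpy as np

def roi_time_series(bold, labels, n_rois=116):
    T = bold.shape[-1]
    ts = np.zeros((T, n_rois))
    for r in range(1, n_rois + 1):
        mask = labels == r                      # voxels belonging to ROI r
        ts[:, r - 1] = bold[mask].mean(axis=0)  # average over voxels -> one series per ROI
    return ts                                   # shape (T, 116)
```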
§.§ Inference of causal functional connectivity: Time-aware PC algorithm
The Time-aware PC (TPC) Algorithm finds causal functional connectivity between brain regions from time series based on Directed Graphical Models (DGM) <cit.>. While traditional DGM is applicable to static data, TPC extends the applicability of DGM to CFC inference in time series by firstly implementing the Directed Markov Property (DMP) to model causal spatial and temporal interactions in the time series by an unrolled Directed Acyclic Graph (DAG) of the time series. The unrolled DAG consists of nodes (v,t), for region of interest v and time t, and edge (v_1,t_1)→ (v_2,t_2) reflecting causal interaction from the BOLD signal in region v_1 at time t_1 to the BOLD signal in region v_2 at time t_2. The estimation of the unrolled DAG is carried out by first transforming the time series into sequential variables with a maximum time delay of interaction τ, and then applying the Peter-Clark (PC) algorithm to infer the unrolled DAG based on the sequential variables <cit.>. TPC then rolls the DAG back to obtain the CFC graph between the regions of interest (see Figure <ref>) <cit.>. We consider τ = 1 for our analyses, which would include interactions of the BOLD signal between regions of interest with a maximum time delay of 2.08 s, the TR of the fMRI acquisition.
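As an illustration of the rolling step only, the sketch below collapses an already-estimated unrolled DAG into a rolled CFC graph (a hypothetical sketch, not the authors' implementation; the unrolled edges are assumed to have been obtained, e.g., by the PC algorithm on the lagged variables):

```python
# Sketch: collapse an unrolled DAG over (region, time) nodes into a rolled CFC graph.
import numpy as np

def roll_cfc(unrolled_edges, n_regions):
    """unrolled_edges: iterable of ((v1, t1), (v2, t2)) meaning region v1 at time t1
    causally influences region v2 at time t2 (t1 <= t2, contemporaneous allowed)."""
    A = np.zeros((n_regions, n_regions), dtype=int)
    for (v1, t1), (v2, t2) in unrolled_edges:
        if t1 <= t2:
            A[v1, v2] = 1          # rolled edge v1 -> v2
    return A

# toy example with 3 regions and tau = 1
edges = [((0, 0), (1, 0)),         # contemporaneous: region 0 -> region 1
         ((1, 0), (2, 1))]         # lag-1: region 1 -> region 2
print(roll_cfc(edges, n_regions=3))
```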
The CFC outcome of this methodology is interpretable in the following manner: an edge from region i→ j in the CFC estimate represents a significant causal interaction from brain region i at preceding times to region j at following times. The model and the approach are non-parametric, meaning that they do not require the specification of a parametric dynamical equation for neural activity. The method captures beyond-pairwise multivariate interactions between brain regions and supports estimation of the CFC for the whole brain in a computationally feasible manner. It also allows for time delays in interactions between the brain units as well as the presence of feedback loops. Furthermore, it has been shown that if the neural activity obeys an arbitrary dynamical process, the model outcome of TPC is consistent with the causal relationships implied by the dynamical process and is predictive of counterfactual queries such as ablation or modulation <cit.>.
It is noteworthy that implementing the DMP on the unrolled DAG to model causal relationships over time enables contemporaneous interactions e.g. from region u to region v at time t <cit.>. Such contemporaneous interactions are represented by the edge (u,t)→ (v,t) in the unrolled DAG, and presence of such an edge in the unrolled DAG would be reflected as an edge u→ v in the Rolled CFC outcome. Such contemporaneous interactions are especially relevant in fMRI due to the relatively slow temporal resolution of the BOLD signal compared to the underlying neural activity <cit.>.
§.§ Group-wise Comparisons of Estimated CFC
Using the subject-specific CFC estimated by TPC algorithm, we perform further statistical tests. We use Welch's t-test to obtain those connections which have significantly greater or less weight in one clinical category compared to another category <cit.>. To obtain those connections which have significantly unequal weight in either of the three clinical categories of CN, MCI and AD, we use the Kruskal-Wallis H-test, which is a non-parametric version of the ANOVA test <cit.>. The tests are conducted at levels of significance of 0.05 as well as 0.1.
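A minimal sketch of these tests for a single connection is shown below (the weight arrays are synthetic placeholders, not the study's data):

```python
# Group-wise comparison of one causal connection's weights across CN, MCI, AD.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
w_cn, w_mci, w_ad = (rng.normal(m, 0.1, 10) for m in (0.3, 0.2, 0.1))  # 10 subjects per group

t_stat, p_pair = stats.ttest_ind(w_ad, w_cn, equal_var=False)  # Welch's t-test, AD vs CN
h_stat, p_all = stats.kruskal(w_cn, w_mci, w_ad)               # Kruskal-Wallis across all groups

print(f"AD vs CN: t = {t_stat:.2f}, p = {p_pair:.3g}")
print(f"CN/MCI/AD: H = {h_stat:.2f}, p = {p_all:.3g}")
```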
§ RESULTS
§.§ Subject-specific Causal Functional Connectivity
Figure <ref> shows the causal functional connectivity (CFC) estimated using TPC algorithm for an example subject in the cognitively normal group. In Figure <ref>-a, the CFC is represented in the form of a matrix, whose entry (i,j) indicates the presence of connectivity from region index i→ j, and the value at entry (i,j) represents the weight of that causal connection. A positive value (blue) of the weight is indicative of excitatory influence whereas a negative value (red) is indicative of inhibitory influence. The diagonal of the matrix representing self-connections for regions has been filtered out. In Figure <ref>-b, the CFC is represented by a directed graph overlayed on schematics of the brain. The schematics of the brain comprise 2-dimensional brain projections in the Frontal, Axial and Lateral planes. The nodes of the CFC graph correspond to centers of brain regions in the AAL atlas. The nodes are colored light to dark gray according to their depth in the brain, with light gray representing superficial and dark gray representing deeper brain regions. The causal functional connectivity graph provides a highly informative map of causal interactions between brain regions.
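A matrix view like panel (a) can be produced with a diverging colormap, for example (a hypothetical plotting sketch with placeholder weights):

```python
# Plot a CFC weight matrix: blue for excitatory (positive), red for inhibitory (negative).
import numpy as np
import matplotlib.pyplot as plt

W = np.random.default_rng(1).normal(0.0, 0.2, (116, 116))  # placeholder 116x116 CFC weights
np.fill_diagonal(W, 0.0)                                    # self-connections filtered out

plt.imshow(W, cmap="RdBu", vmin=-0.6, vmax=0.6)             # RdBu: low=red, high=blue
plt.colorbar(label="causal connection weight")
plt.xlabel("target region j")
plt.ylabel("source region i")
plt.show()
```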
§.§ Comparisons of causal functional connectivity over pairwise clinical categories
Figure <ref> shows the CFC edges obtained by the TPC algorithm that are significantly greater or lower in weight between subjects from pairs of clinical categories, where the significance is assessed by Welch's t-test. This provides insights into which brain connections are impacted across the different categories: healthy, mild cognitive impairment, and Alzheimer's disease.
In Figure <ref>, the top 5 significant connections identified from AD>CN and AD<CN comparisons are: Lobule IV, V of cerebellar hemisphere Left → Lobule IV, V of cerebellar hemisphere Right; Superior frontal gyrus, dorsolateral Left → Superior frontal gyrus, medial Left; Middle occipital gyrus Left → Middle occipital gyrus Right; Gyrus rectus Right → Gyrus rectus Left, Heschl's gyrus Right → Heschl's gyrus Left.
§.§ Multi-group comparison of causal functional connectivity
Figure <ref> shows the causal functional connections obtained by TPC that are significantly different overall across the three clinical categories. Here, the significance is obtained by the Kruskal-Wallis H-test. This sheds light on the connections that are impacted during the overall progression of the disease. Here, the top 5 connections with the highest significance are: Lobule VIIB of cerebellar hemisphere Right → Crus II of cerebellar hemisphere Right; Lobule IV, V of cerebellar hemisphere Left → Lobule IV, V of cerebellar hemisphere Right; Inferior frontal gyrus, pars orbitalis, Right → Middle frontal gyrus, pars orbitalis Right; Amygdala Right → Hippocampus Right; Lobule VI of cerebellar hemisphere Right → Lobule VI of vermis.
§.§ Brain Regions significant for Alzheimer's disease
In Table <ref>, we list 12 brain regions that have significantly different (at level 0.05) causal functional connections to or from them among subjects of the CN, MCI and AD groups, and 5 additional regions at level 0.1. The subject-specific causal functional connectomes have been estimated using the TPC algorithm (see Section <ref>). The brain regions identified are in agreement with existing publications cited in the right column of Table <ref>.
§ DISCUSSION
In this study, we have obtained the causal functional connectivity of the whole brain from resting state fMRI time series. We used the recently developed Time-aware PC (TPC) algorithm based on directed graphical modeling in time series, to compute the causal functional connectivity. In the dataset, the subjects belonged to three clinical categories: cognitively normal (CN), mild cognitive impairment (MCI) and Alzheimer's disease (AD). We performed group-wise comparisons of the subject-specific causal functional connectivity to identify which causal functional connections are significantly different between pairs of subject groups. The significantly different causal functional connections between CN and AD were used to obtain brain regions that are significantly impacted by AD. This resulted in the identification of 12 brain regions where causal functional connections to or from those regions are significantly impacted by the development of Alzheimer's disease at level 0.05, and 5 additional regions at level 0.1.
It is noteworthy that while several studies have concluded decreased connectivity in MCI and AD compared to CN <cit.>, prominent researchers have highlighted that MCI and early stages of AD can involve an increase in functional connectivity between brain regions. This increase occurs when the communication between specific brain regions is impaired, and it has been interpreted as a compensatory mechanism in which alternative paths within the brain's network are recruited <cit.>. This explains why the TPC algorithm finds causal functional connections whose weight in AD is greater than in CN, in addition to connections whose weight in AD is less than in CN (see Figure <ref>). The following are the top 5 causal functional connections found by TPC whose strength in AD is significantly greater than in CN at level 0.05: Lobule IV, V of cerebellar hemisphere Left → Lobule IV, V of cerebellar hemisphere Right; Superior frontal gyrus, dorsolateral Left → Superior frontal gyrus, medial Left; Middle occipital gyrus Left → Middle occipital gyrus Right; Middle temporal gyrus Left → Middle temporal gyrus Right; Heschl's gyrus Right → Superior temporal gyrus Right.
In the short term, the augmentation of functional connectivity along alternative pathways exhibits efficiency and adaptability of the brain. However, it is imperative to acknowledge the susceptibility of these densely interconnected hubs to beta-amyloid deposition, which can elicit secondary damage through metabolic stress, ultimately culminating in system breakdown <cit.>. Consequently, the initial state of hyperconnectivity observed in neurodegenerative disorders may gradually transition into hypoconnectivity among the engaged pathways, thereby contributing to cognitive decline as the disease advances <cit.>.
We found the 12 brain regions outlined in Table <ref>-left column to have a significant difference in causal functional connectivity strength at level 0.05 among CN, MCI and AD subject categories. These indicate regions whose impact is of strong significance in Alzheimer's disease. In addition to these regions, Table <ref>-middle column outlines the 5 additional regions that are found to have significant difference in connectivity strength among the groups at the significance level of 0.1. All the regions that we found to be significantly affected in AD are in agreement with published literature (see Table <ref>).
Based on the causal functional connectome outcome alone, this study has been able to identify many significant regions related to Alzheimer's disease, which have been reported across more than 30 different studies, using different feature extraction methods and advanced imaging technologies. Therefore, this study demonstrates the promise of a causal functional connectivity approach based on directed graphical models in time series and estimated by TPC algorithm.
§.§ The TPC-method
This paper described a method and an application of that method to the analysis of Alzheimer's data.
The method, which we call the TPC-method, comprises three parts:
(a) application of the TPC algorithm to compute the whole-brain causal functional connectivity (CFC),
(b) interpretation of the CFC in the context of Alzheimer's disease using domain (neuropathological) knowledge, and
(c) identification of brain regions impacted by Alzheimer's disease using statistical tests.
§.§ Soundness of the TPC-method
Soundness refers to the correctness of the inferences made. In this case, it refers to the brain regions identified
as impacted by the disease.
All the regions identified in this paper as significantly impacted by AD are in agreement with peer reviewed
published literature by teams including one or more medical experts attached to clinics or hospitals. This is true for
all the regions that were identified at level 0.05 (primary impact), as well as those that were identified at level
0.1 (secondary impact).
It would be useful to verify soundness on larger datasets, in order to gain more confidence in this regard.
§.§ Completeness of the TPC-method
Completeness refers to the identification of all the brain regions that are likely to be impacted by AD.
While all the identified brain regions in this paper are known to be typically impacted by AD (soundness), this list is by no means
complete. For instance, it is known that the Thalamus is impacted by AD in due course of time <cit.>. However, the Thalamus
has not appeared as a "region of significant impact" in our study. This could be because, in the
small dataset (n=20 MCI+AD), that pathology may not have significantly manifested among the subjects.
A larger dataset might identify Thalamus also among the impacted regions. This should be verified by using larger datasets.
Given the nature of AD, progressively more and more regions of the brain get impacted.
Therefore, it would be of immense benefit to have very large datasets to create more accurate CFCs for Alzheimer's data.
§.§ Conclusion
In summary, our results show the promise of the TPC-method, and make the case for the creation of large datasets to
establish soundness and completeness with higher levels of confidence. That would enable the maturation and the use
of the TPC-method (and other approaches) for prognostic and diagnostic purposes for Alzheimer's disease.
§ AUTOMATED ANATOMICAL LABELING (AAL) ATLAS
The regions in the AAL atlas along with their abbreviated, short and full names are listed in Table <ref>.
Names of regions in the AAL Atlas
No Abbr. Name Short Name Full Region Name
1 PreCG_L Precentral_L Precentral gyrus Left
2 PreCG_R Precentral_R Precentral gyrus Right
3 SFG_L Frontal_Sup_L Superior frontal gyrus, dorsolateral Left
4 SFG_R Frontal_Sup_R Superior frontal gyrus, dorsolateral Right
5 SFGorb_L Frontal_Sup_Orb_L Superior frontal gyrus, pars orbitalis Left
6 SFGorb_R Frontal_Sup_Orb_R Superior frontal gyrus, pars orbitalis Right
7 MFG_L Frontal_Mid_L Middle frontal gyrus Left
8 MFG_R Frontal_Mid_R Middle frontal gyrus Right
9 MFGorb_L Frontal_Mid_Orb_L Middle frontal gyrus, pars orbitalis Left
10 MFGorb_R Frontal_Mid_Orb_R Middle frontal gyrus, pars orbitalis Right
11 IFGoperc_L Frontal_Inf_Oper_L Inferior frontal gyrus, opercular part Left
12 IFGoperc_R Frontal_Inf_Oper_R Inferior frontal gyrus, opercular part Right
13 IFGtriang_L Frontal_Inf_Tri_L Inferior frontal gyrus, triangular part Left
14 IFGtriang_R Frontal_Inf_Tri_R Inferior frontal gyrus, triangular part Right
15 IFGorb_L Frontal_Inf_Orb_L Inferior frontal gyrus, pars orbitalis, Left
16 IFGorb_R Frontal_Inf_Orb_R Inferior frontal gyrus, pars orbitalis, Right
17 ROL_L Rolandic_Oper_L Rolandic operculum Left
18 ROL_R Rolandic_Oper_R Rolandic operculum Right
19 SMA_L Supp_Motor_Area_L Supplementary motor area Left
20 SMA_R Supp_Motor_Area_R Supplementary motor area Right
21 OLF_L Olfactory_L Olfactory cortex Left
22 OLF_R Olfactory_R Olfactory cortex Right
23 SFGmedial_L Frontal_Sup_Medial_L Superior frontal gyrus, medial Left
24 SFGmedial_R Frontal_Sup_Medial_R Superior frontal gyrus, medial Right
25 SFGmedorb_L Frontal_Med_Orb_L Superior frontal gyrus, medial orbital Left
26 SFGmedorb_R Frontal_Med_Orb_R Superior frontal gyrus, medial orbital Right
27 REC_L Rectus_L Gyrus rectus Left
28 REC_R Rectus_R Gyrus rectus Right
29 INS_L Insula_L Insula Left
30 INS_R Insula_R Insula Right
31 ACC_L Cingulum_Ant_L Anterior cingulate & paracingulate gyri Left
32 ACC_R Cingulum_Ant_R Anterior cingulate & paracingulate gyri Right
33 MCC_L Cingulum_Mid_L Middle cingulate & paracingulate gyri Left
34 MCC_R Cingulum_Mid_R Middle cingulate & paracingulate gyri Right
35 PCC_L Cingulum_Post_L Posterior cingulate gyrus Left
36 PCC_R Cingulum_Post_R Posterior cingulate gyrus Right
37 HIP_L Hippocampus_L Hippocampus Left
38 HIP_R Hippocampus_R Hippocampus Right
39 PHG_L ParaHippocampal_L Parahippocampal gyrus Left
40 PHG_R ParaHippocampal_R Parahippocampal gyrus Right
41 AMYG_L Amygdala_L Amygdala Left
42 AMYG_R Amygdala_R Amygdala Right
43 CAL_L Calcarine_L Calcarine fissure and surrounding cortex Left
44 CAL_R Calcarine_R Calcarine fissure and surrounding cortex Right
45 CUN_L Cuneus_L Cuneus Left
46 CUN_R Cuneus_R Cuneus Right
47 LING_L Lingual_L Lingual gyrus Left
48 LING_R Lingual_R Lingual gyrus Right
49 SOG_L Occipital_Sup_L Superior occipital gyrus Left
50 SOG_R Occipital_Sup_R Superior occipital gyrus Right
51 MOG_L Occipital_Mid_L Middle occipital gyrus Left
52 MOG_R Occipital_Mid_R Middle occipital gyrus Right
53 IOG_L Occipital_Inf_L Inferior occipital gyrus Left
54 IOG_R Occipital_Inf_R Inferior occipital gyrus Right
55 FFG_L Fusiform_L Fusiform gyrus Left
56 FFG_R Fusiform_R Fusiform gyrus Right
57 PoCG_L Postcentral_L Postcentral gyrus Left
58 PoCG_R Postcentral_R Postcentral gyrus Right
59 SPG_L Parietal_Sup_L Superior parietal gyrus Left
60 SPG_R Parietal_Sup_R Superior parietal gyrus Right
61 IPG_L Parietal_Inf_L Inferior parietal gyrus, excluding supramargina...
62 IPG_R Parietal_Inf_R Inferior parietal gyrus, excluding supramargina...
63 SMG_L SupraMarginal_L SupraMarginal gyrus Left
64 SMG_R SupraMarginal_R SupraMarginal gyrus Right
65 ANG_L Angular_L Angular gyrus Left
66 ANG_R Angular_R Angular gyrus Right
67 PCUN_L Precuneus_L Precuneus Left
68 PCUN_R Precuneus_R Precuneus Right
69 PCL_L Paracentral_Lobule_L Paracentral lobule Left
70 PCL_R Paracentral_Lobule_R Paracentral lobule Right
71 CAU_L Caudate_L Caudate nucleus Left
72 CAU_R Caudate_R Caudate nucleus Right
73 PUT_L Putamen_L Lenticular nucleus, Putamen Left
74 PUT_R Putamen_R Lenticular nucleus, Putamen Right
75 PAL_L Pallidum_L Lenticular nucleus, Pallidum Left
76 PAL_R Pallidum_R Lenticular nucleus, Pallidum Right
77 THA_L Thalamus_L Thalamus Left
78 THA_R Thalamus_R Thalamus Right
79 HES_L Heschl_L Heschl's gyrus Left
80 HES_R Heschl_R Heschl's gyrus Right
81 STG_L Temporal_Sup_L Superior temporal gyrus Left
82 STG_R Temporal_Sup_R Superior temporal gyrus Right
83 TPOsup_L Temporal_Pole_Sup_L Temporal pole: superior temporal gyrus Left
84 TPOsup_R Temporal_Pole_Sup_R Temporal pole: superior temporal gyrus Right
85 MTG_L Temporal_Mid_L Middle temporal gyrus Left
86 MTG_R Temporal_Mid_R Middle temporal gyrus Right
87 TPOmid_L Temporal_Pole_Mid_L Temporal pole: middle temporal gyrus Left
88 TPOmid_R Temporal_Pole_Mid_R Temporal pole: middle temporal gyrus Right
89 ITG_L Temporal_Inf_L Inferior temporal gyrus Left
90 ITG_R Temporal_Inf_R Inferior temporal gyrus Right
91 CERCRU1_L Cerebellum_Crus1_L Crus I of cerebellar hemisphere Left
92 CERCRU1_R Cerebellum_Crus1_R Crus I of cerebellar hemisphere Right
93 CERCRU2_L Cerebellum_Crus2_L Crus II of cerebellar hemisphere Left
94 CERCRU2_R Cerebellum_Crus2_R Crus II of cerebellar hemisphere Right
95 CER3_L Cerebellum_3_L Lobule III of cerebellar hemisphere Left
96 CER3_R Cerebellum_3_R Lobule III of cerebellar hemisphere Right
97 CER4_5_L Cerebellum_4_5_L Lobule IV, V of cerebellar hemisphere Left
98 CER4_5_R Cerebellum_4_5_R Lobule IV, V of cerebellar hemisphere Right
99 CER6_L Cerebellum_6_L Lobule VI of cerebellar hemisphere Left
100 CER6_R Cerebellum_6_R Lobule VI of cerebellar hemisphere Right
101 CER7b_L Cerebellum_7b_L Lobule VIIB of cerebellar hemisphere Left
102 CER7b_R Cerebellum_7b_R Lobule VIIB of cerebellar hemisphere Right
103 CER8_L Cerebellum_8_L Lobule VIII of cerebellar hemisphere Left
104 CER8_R Cerebellum_8_R Lobule VIII of cerebellar hemisphere Right
105 CER9_L Cerebellum_9_L Lobule IX of cerebellar hemisphere Left
106 CER9_R Cerebellum_9_R Lobule IX of cerebellar hemisphere Right
107 CER10_L Cerebellum_10_L Lobule X of cerebellar hemisphere Left
108 CER10_R Cerebellum_10_R Lobule X of cerebellar hemisphere Right
109 VER1_2 Vermis_1_2 Lobule I, II of vermis
110 VER3 Vermis_3 Lobule III of vermis
111 VER4_5 Vermis_4_5 Lobule IV, V of vermis
112 VER6 Vermis_6 Lobule VI of vermis
113 VER7 Vermis_7 Lobule VII of vermis
114 VER8 Vermis_8 Lobule VIII of vermis
115 VER9 Vermis_9 Lobule IX of vermis
116 VER10 Vermis_10 Lobule X of vermis
entry_id: http://arxiv.org/abs/2307.01679v1 | published: 20230704121622 | title: An integrable bound for rough stochastic partial differential equations | authors: Mazyar Ghani Varzaneh, Sebastian Riedel | primary_category: math.PR | categories: [math.PR]
An integrable bound for rough stochastic partial differential equations
Mazyar Ghani Varzaneh
Fakultät für Mathematik und Informatik, FernUniversität in Hagen, Hagen, Germany
[email protected]
Sebastian Riedel
Fakultät für Mathematik und Informatik, FernUniversität in Hagen, Hagen, Germany
[email protected]
Primary 60L50; Secondary 60G15.
We study semilinear rough stochastic partial differential equations as introduced in [Gerasimovičs, Hairer; EJP 2019] and provide an a priori bound. When replacing the driving signal by a suitable Gaussian process, this leads to an integrable bound and implies ℒ^p(Ω)-integrability of the solution.
S. Riedel
August 1, 2023
==================
§ INTRODUCTION
In <cit.>, Gerasimovičs and Hairer introduced a new solution concept to study parabolic semilinear stochastic partial differential equations (SPDEs) driven by a finite-dimensional noise. One important property of this theory is that it is completely pathwise since no stochastic integration theory is used to define the solution to the equation. Instead, the paper uses key ideas of Lyons' rough paths theory <cit.>, meaning that the stochastic integral is replaced by a pathwise defined rough integral. A nice feature of this concept of a rough partial differential equation (RPDE) is that it is fully compatible with classical rough path theory. In particular, one can easily study RPDEs driven by Gaussian noises, e.g. fractional Brownian motions, as introduced in <cit.>. Let us mention that this framework of RPDEs was later generalized to study non-autonomous SPDEs, too <cit.>. The theory was particularly applied to study dynamical properties of solutions to SPDEs using the concept of random dynamical systems <cit.> where a pathwise theory is crucial, cf. <cit.>.
Avoiding stochastic integration when defining solutions to SPDEs has many advantages, but also leads to new challenges since probabilistic properties of the solution are often not so easy to obtain. This is particularly true when the driving signal is not a Brownian motion, but a general Gaussian process. One of these problems concerns the ℒ^p(Ω)-integrability of the solution to a random RPDE. For rough differential equations (RDEs), this problem was a major obstacle when generalizing Hörmander theory to rough differential equations driven by Gaussian processes where obtaining the ℒ^p(Ω)-integrability of the Jacobian of an RDE is a crucial step <cit.>. The confusing problem was that the known a priori bounds for solutions to deterministic RDEs, formulated in terms of the rough path norm, were optimal <cit.>, but not integrable if the noise was replaced by a Gaussian process. This problem was solved in the seminal paper <cit.> where the known a priori bounds were modified in such a way that probabilistic properties of the Gaussian rough paths could be applied. Later, the results in <cit.> were slightly simplified and extended to study a larger class of rough differential equations, too <cit.>. For RPDEs in the sense of <cit.>, the problem of finding an integrable a priori bound remained unsolved up to now. In fact, it is stated in <cit.> that The [integrable] moment bounds for the rough path norms of solution and Jacobian (...) [for the RPDE] might not be easy to obtain in general and require a closer look as a separate problem on its own. We decide to postpone the study of such moments but refer the reader to <cit.> where this question was answered for the rough SDE case. In the present paper, we provide exactly these bounds, cf. Theorem <ref> which is our main result. We believe that our conclusions will have many implications and important consequences e.g. when extending non-Markovian Hörmander theory to RPDEs as initiated in <cit.> or in the study of random invariant manifolds as it was started in <cit.>. We will leave these important questions for future work.
§.§ Review on Rough stochastic partial differential equations
We are interested in the solution of the following form of rough SPDEs
dZ_t = AZ_t dt+F(Z_t) dt+G(Z_t)∘d𝐗_t, Z_0=z_0∈ℬ_α
where 𝐗 is a rough path. This family of SPDEs is studied in <cit.> and <cit.>. Let us first quickly review some basic definitions and notations for setting up the solution for this family of equations. For more details, the reader can refer to <cit.>. The following definition is taken from <cit.>.
We call a family of indexed separable Banach spaces { (ℬ_β,| |_β)}_β∈ℝ a monotone
family of interpolation spaces if
(i) For every α≤β: ℬ_β is a dense subset of ℬ_α and the identity map id:ℬ_β→ℬ_α is continuous.
(ii) For every α≤β≤θ and x∈ℬ_α∩ℬ_θ: | x|_β≲| x|_α^θ-β/θ-α| x|_θ^β-α/θ-α.
We also assume :
Let 1/3<γ≤1/2 and let 𝐗=(X,𝕏)∈𝒞^γ((0,∞),ℝ^n) be a γ-rough path (cf. <cit.> for the precise definition). We will further assume that 0≤σ <1, 0≤δ <γ and θ∈{ 0,γ,γ+δ,2γ}. Also
* F: ℬ_α→ℬ_α-σ is a Lipschitz continuous function.
* G:ℬ_α-θ→ℬ_α-θ-δ is a bounded Fréchet differentiable function up to three times with bounded derivatives.
* A generates a continuous semigroup (S_t)_t≥ 0 such that for every β∈ [min{α-2γ-δ, α-σ},α], S_t∈ℒ(ℬ_β). Also for every σ_1∈ [0,1) with β+σ_1≤α,
| S_tx|_β+σ_1 ≲ t^-σ_1| x|_β,
|(I-S_t)x|_β ≲ t^σ_1| x|_β+σ_1.
Note that as a direct result of (<ref>) for β∈ [min{α-2γ-δ, α-σ},α] and σ_1,σ_2∈ [0,1) such that β-σ_1+σ_2∈ [min{α-2γ-δ, α-σ},α], we have
| S_t-u(I-S_u-v)x|_β≲ (t-u)^-σ_1(u-v)^σ_2| x|_β-σ_1+σ_2.
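To spell this out, one may chain the two bounds above (a sketch, assuming the intermediate level β-σ_1 also lies in the admissible range):
| S_t-u(I-S_u-v)x|_β ≲ (t-u)^-σ_1| (I-S_u-v)x|_β-σ_1 ≲ (t-u)^-σ_1(u-v)^σ_2| x|_β-σ_1+σ_2.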
Let us quickly review the required framework for solving this family of equations and also some preliminary definitions mostly taken from <cit.>. For the interval I⊂ℝ and α̃∈{α-σ,α}, set
ℰ^0,γ_α̃-γ;I:=C(I;ℬ_α̃-γ)∩ C^γ(I;ℬ_α̃-2γ), ℰ^γ,2γ_α̃;I:=C^γ_2(I;ℬ_α̃-γ)∩ C^2γ_2(I;ℬ_α̃-2γ).
We write Z∈𝒟_𝐗,α̃^γ(I) if for s,t∈ I, Z_s,t=Z^'_s(δ X)_s,t+Z^#_s,t such that
‖ Z‖_𝒟_𝐗,α̃^γ(I):=‖ Z‖_C(I;ℬ_α̃)+‖ Z^'‖_ℰ^0,γ_α̃-γ;I+‖ Z^#‖_ℰ^γ,2γ_α̃;I<∞
where
‖ Z‖_C(I;ℬ_α̃):=sup_τ∈ I| Z_τ|_α̃, ‖ Z^'‖_ℰ^0,γ_α̃-γ;I:= max{sup_τ∈ I| Z^'_τ|_α̃-γ, sup_τ<ν∈ I| (δ Z^')_τ,ν|_α̃-2γ/(ν-τ)^γ} and
‖ Z^#‖_ℰ^γ,2γ_α̃;I:=max{sup_τ<ν∈ I| Z^#_τ,ν|_α̃-γ/(ν-τ)^γ, sup_τ<ν∈ I| Z^#_τ,ν|_α̃-2γ/(ν-τ)^2γ} .
We recall that for Y∈𝒟_𝐗,α̃^γ(I), the following limit exists:
∫_s^tS_t-τY_τ∘d𝐗_τ:=lim_|π|→ 0,
π={τ_0=s,τ_1,...τ_m=t }∑_0≤ j<m[S_t-τ_jY_τ_j∘ (δ X)_τ_j,τ_j+1+S_t-τ_jY^'_τ_j∘𝕏_τ_j,τ_j+1].
Let Y∈𝒟_𝐗,α̃^γ(I). Then, it can easily be shown that G(Y) ∈𝒟_𝐗,α̃^γ(I). Finally, we can define the mild solution to <ref>:
We say that Z∈𝒟_𝐗,α^γ(I) solves equation (<ref>) if and only if Z satisfies the identity
(δ Z)_s,t=(S_t-s-I)Z_s+∫_s^tS_t-τF(Z_τ)dτ+∫_s^tS_t-τG(Z_τ)∘d𝐗_τ, Z_0=z_0, s,t∈ℝ,
where the second integral is understood as (<ref>).
Existence and uniqueness of the solution for this type of equation are discussed in several articles. For example, in <cit.>, the authors prove that the mild solution of the equation under Assumption <ref> exists, is unique, and is globally defined.
§ MAIN RESULT
We aim to prove that the solution to (<ref>) has an integrable bound. The main obstacle we face here is the presence of the linear part: if we just apply Grönwall's lemma naively, we will only get an exponential bound in terms of the noise which is not integrable. To overcome this problem, we employ a modified version of the greedy points technique and modify the Sewing lemma.
As we stated earlier, the well-posedness and global existence of the solution for this family of equations is well understood. However, the a priori bound that is provided is not optimal and fails to be integrable for Gaussian noises. Therefore, we need a more careful bound for the solution.
Let us first start with the following lemma:
For 0≤δ_1<γ, set
W_𝐗,γ,δ_1: Δ_T={ (s,t): 0≤ s≤ t≤ T}→ℝ,
W_𝐗,γ,δ_1(s,t):=sup_π,
π={κ_0=s,κ_1,...κ_m=t }{∑_j(κ_j+1-κ_j)^-δ_1/γ-δ_1[|(δ X)_κ_j,κ_j+1|^1/γ-δ_1+‖𝕏_κ_j,κ_j+1‖^1/2(γ-δ_1)]}.
Then W_𝐗,γ,δ_1 is a continuous control function, i.e.
W_𝐗,γ,δ_1(s,u)+W_𝐗,γ,δ_1(u,t)≤ W_𝐗,γ,δ_1(s,t), s≤ u≤ t.
Follows from our assumption on 𝐗.
The next Lemma is basic.
Assume Z∈𝒟_𝐗,α^γ(I) is a mild solution of (<ref>). Then
Z^'_s=G(Z_s) and Z^#_s,t=Z_s,t-G(Z_s)∘ (δ X)_s,t.
Moreover, [G(Z)]^'_s=D_Z_sG [G(Z_s)] and
[G(Z)]^#_s,t = G(Z_t)-G(Z_s)-D_Z_sG [G(Z_s)∘ (δ X)_s,t]
= ∫_0^1∫_0^1σ D^2_Z_s+σ u(δ Z)_s,tG [G(Z_s)∘ (δ X)_s,t,G(Z_s)∘ (δ X)_s,t+Z_s,t^#]du dσ
+∫_0^1D_Z_s+σ (δ Z)_s,tG [Z^#_s,t]dσ .
Let s≤ u≤ v≤ w≤ t and
Ξ̃^u,v_s,t := S_t-uG(Z_u)∘ (δ X)_u,v+S_t-uD_Z_uG [G(Z_u)]∘𝕏_u,v.
Then
Ξ̃^u,v_s,t+Ξ̃^v,w_s,t-Ξ̃^u,w_s,t
=S_t-u([G(Z)]^#_u,v)∘ (δ X)_v,w-S_t-v(S_v-u-I)G(Z_v)∘ (δ X)_v,w - S_t-v(S_v-u-I)D_Z_uG [G(Z_u)]∘𝕏_v,w
+ S_t-v(∫_0^1D^2_Z_u+σ (δ z)_u,vG[G(Z_u)∘ (δ X)_u,v+Z^#_u,v,G(Z_u)]dσ)∘𝕏_v,w
+ S_t-v(D_Z_vG[∫_0^1D_Z_u+σ(δ Z)_u,vG[G(Z_u)∘ (δ X)_u,v+Z^#_u,v]dσ])∘𝕏_v,w.
Follows from definition of the mild solutions and our assumption.
Let us fix s<t and set τ^n_m := s+n/2^m(t-s) where 0≤ n<2^m-1. We also define
Ξ^n,m_s,t := S_t-τ^n_mG(Z_τ^n_m)∘ (δ X)_τ^n_m,τ^n+1_m+S_t-τ^n_mD_Z_τ^n_mG[G(Z_τ^n_m)]∘𝕏_τ^n_m,τ^n+1_m.
Then for A^n_m:=G(Z_τ^2n_m+1)∘ (δ X)_τ^2n_m+1,τ^2n+1_m+1 we have
Ξ^2n,m+1_s,t+Ξ^2n+1,m+1_s,t-Ξ^2n+2,m+1_s,t
= S_t-τ^2n_m+1(∫_0^1∫_0^1σ D^2_Z_τ^2n_m+1+σ u (δ Z)_τ^2n_m+1,τ^2n+1_m+1G[A^n_m;A^n_m+Z^#_τ^2n_m+1,τ^2n+1_m+1 ]du dσ)∘ (δ X)_τ^2n+1_m+1,τ^2n+2_m+1
+ S_t-τ^2n_m+1(∫_0^1D_Z_τ^2n_m+1+σ (δ Z)_τ^2n_m+1,τ^2n+1_m+1G[Z^#_τ^2n_m+1,τ^2n+1_m+1]dσ)∘ (δ X)_τ^2n+1_m+1,τ^2n+2_m+1
+ S_t-τ^2n_m+1(∫_0^1D^2_Z_τ^2n_m+1+σ (δ Z)_τ^2n_m+1,τ^2n+1_m+1G[A^n_m+Z^#_τ^2n_m+1,τ^2n+1_m+1;G(Z_τ^2n_m+1)]dσ)∘𝕏_τ^2n+1_m+1,τ^2n+2_m+1
+ S_t-τ^2n+1_m+1(D_Z_τ^2n+1_m+1G[∫_0^1D_Z_τ^2n_m+1+σ (δ Z)_τ^2n_m+1,τ^2n+1_m+1G [A^n_m+Z^#_τ^2n_m+1,τ^2n+1_m+1]dσ])∘𝕏_τ^2n+1_m+1,τ^2n+2_m+1
- S_t-τ^2n+1_m+1(S_τ^2n+1_m+1-τ^2n_m+1-I)(G(Z_τ^2n+1_m+1)∘ (δ X)_τ^2n+1_m+1,τ^2n+2_m+1+D_Z_τ^2n_m+1G[G(Z_τ^2n_m+1)]∘𝕏_τ^2n+1_m+1,τ^2n+2_m+1).
Follows from (<ref>) and (<ref>).
In the next proposition, we obtain an upper bound over the latter formula in terms of our control function defined in Lemma <ref>.
For i∈{ 0,1,2} and ϵ>0 chosen such that δ+ϵ<γ, there exists a constant M_ϵ depending on ϵ and G such that
∑_m≥ 1∑_0≤ n<2^m| S_t-τ^2n_m+1(∫_0^1∫_0^1σ D^2_Z_τ^2n_m+1+σ u (δ Z)_τ^2n_m+1,τ^2n+1_m+1G[A^n_m;Z^#_τ^2n_m+1,τ^2n+1_m+1 ]du dσ)∘ (δ X)_τ^2n+1_m+1,τ^2n+2_m+1|_α-iγ
+ | S_t-τ^2n_m+1(∫_0^1D_Z_τ^2n_m+1+σ (δ Z)_τ^2n_m+1,τ^2n+1_m+1G[Z^#_τ^2n_m+1,τ^2n+1_m+1]dσ)∘ (δ X)_τ^2n+1_m+1,τ^2n+2_m+1|_α-iγ
+ | S_t-τ^2n_m+1(∫_0^1D^2_Z_τ^2n_m+1+σ (δ Z)_τ^2n_m+1,τ^2n+1_m+1G[Z^#_τ^2n_m+1,τ^2n+1_m+1;G(Z_τ^2n_m+1)])∘𝕏_τ^2n+1_m+1,τ^2n+2_m+1|_α-iγ
+ | S_t-τ^2n+1_m+1(D_Z_τ^2n+1_m+1G[∫_0^1D_Z_τ^2n_m+1+σ (δ Z)_τ^2n_m+1,τ^2n+1_m+1G [Z^#_τ^2n_m+1,τ^2n+1_m+1]dσ])∘𝕏_τ^2n+1_m+1,τ^2n+2_m+1|_α-iγ
≤ M_ϵ(t-s)^iγmax{(t-s)^ϵ(W_𝐗,γ,δ+ϵ(s,t))^γ-δ-ϵ,(t-s)^2ϵ(W_𝐗,γ,δ+ϵ(s,t))^2(γ-δ-ϵ)}| Z^#|_ℰ^γ,2γ_α;I.
We have to show that each term on the left hand side of (<ref>) can be bounded up to a constant depending on ϵ and G by (t-s)^iγ+ϵW_𝐗,γ,δ+ϵ(s,t)^γ-δ-ϵ. We will show this bound for the second and fourth term whose proofs have some distinctions. For the remaining terms, our claim can be confirmed by a similar technique. Remember that ‖ Z^#‖_ℰ^γ,2γ_α;I<∞ and also that (<ref>) holds true. Then
∑_m≥ 1∑_0≤ n<2^m| S_t-τ^2n_m+1(∫_0^1D_σ Z_τ^2n_m+1,τ^2n+1_m+1+Z_τ^2n_m+1G[Z^#_τ^2n_m+1,Zτ^2n+1_m+1]dσ)∘ (δ X)_τ^2n+1_m+1,τ^2n+2_m+1|_α-iγ
≲ | Z^#|_ℰ^γ,2γ_α;I∑_m≥ 1∑_0≤ n<2^m (t-τ^2n_m+1)^γ(i-2)-δ(t-s)^2γ(1/2^m+1)^2γ|X_τ^2n+1_m+1,τ^2n+2_m+1|
≤ (t-s)^γ i+ϵ| Z^#|_ℰ^γ,2γ_α;I∑_m≥ 1∑_0≤ n<2^m (1-2n/2^m+1)^γ(i-2)-δ(1/2^m+1)^(2γ+δ+ϵ)(W_𝐗,γ,δ+ϵ(τ^2n+1_m+1,τ^2n+2_m+1))^(γ-δ-ϵ).
From the Hölder inequality and (<ref>),
∑_0≤ n<2^m (1-2n/2^m+1)^γ(i-2)-δ(1/2^m+1)^(2γ+δ+ϵ)(W_𝐗,γ,δ+ϵ(τ^2n+1_m+1,τ^2n+2_m+1))^(γ-δ-ϵ)
≤ (∑_0≤ n<2^m (1-2n/2^m+1)^γ(i-2)-δ/1-γ+δ+ϵ(1/2^m+1)^2γ+δ+ϵ)/1-γ+δ+ϵ)^1-γ+δ+ϵ(∑_0≤ n<2^mW_𝐗,γ,δ+ϵ(τ^2n+1_m+1,τ^2n+2_m+1))^γ-δ-ϵ
≤ (W_𝐗,γ,δ+ϵ(s,t))^γ-δ-ϵ(∑_0≤ n<2^m (1-2n/2^m+1)^γ(i-2)-δ/1-γ+δ+ϵ(1/2^m+1)^2γ+δ+ϵ)/1-γ+δ+ϵ)^1-γ+δ+ϵ.
Note that 1/2^m+1≤ 1-2n/2^m+1. Therefore,
∑_0≤ n<2^m (1-2n/2^m+1)^γ(i-2)-δ/1-γ+δ+ϵ(1/2^m+1)^2γ+δ+ϵ/1-γ+δ+ϵ≤ (1/2^m)^ϵ/2/1-γ+δ+ϵ∑_0≤ n<2^m(1-2n/2^m+1)^γ(i+1)-1-δ-ϵ/2/1-γ+δ+ϵ1/2^m+1.
Since
lim_m→∞∑_0≤ n<2^m(1-2n/2^m+1)^γ(i+1)-1-δ-ϵ/2/1-γ+δ+ϵ1/2^m+1=1/2∫_0^1(1-x)^γ(i+1)-δ-1-ϵ/2/1-γ+δ+ϵd x,
We can conclude that for some M̃_ϵ<∞,
∑_m≥ 1∑_0≤ n<2^m (1-2n/2^m+1)^γ(i-2)-δ(1/2^m+1)^(2γ+δ+ϵ)(W_𝐗,γ,δ+ϵ(τ^2n+1_m+1,τ^2n+2_m+1))^(γ-δ-ϵ)≤M̃_ϵ(W_𝐗,γ,δ+ϵ(s,t))^γ-δ-ϵ.
This proves our claim for the second term. For estimating the fourth term, we use the same idea with some modifications. First note that in (<ref>) from the interpolation property,
| Z^#_s,t|_α-2γ+δ≲ (t-s)^2γ-δ| Z^#|_ℰ^γ,2γ_α;I.
For
B^i,ϵ_𝐗(s,t) := (t-s)^γ i+2ϵW_𝐗,γ,δ+ϵ(s,t)^2(γ-δ-ϵ),
we have
∑_m≥ 1∑_0≤ n<2^m| S_t-τ^2n+1_m+1(D_Z_τ^2n+1_m+1G[∫_0^1D_σ Z_τ^2n_m+1,τ^2n+1_m+1+Z_τ^2n_m+1G [Z^#_τ^2n_m+1,τ^2n+1_m+1]dσ])∘𝕏_τ^2n+1_m+1,τ^2n+2_m+1|_α-iγ
≲ | Z^#|_ℰ^γ,2γ_α;I∑_m≥ 1∑_0≤ n<2^m(t-τ_m+1^2n+1)^γ(i-2)-δ(t-s)^2γ-δ(1/2^m+1)^2γ-δ‖𝕏_τ^2n+1_m+1,τ^2n+2_m+1‖
≤ | Z^#|_ℰ^γ,2γ_α;I∑_m≥ 1∑_0≤ n<2^m(t-τ^2n+1_m+1)^γ(i-2)-δ(t-s)^2γ+δ+2ϵ(1/2^m+1)^2γ+δ+2ϵ(W_𝐗,γ,δ+ϵ(τ^2n+1_m+1,τ^2n+2_m+1))^2(γ-δ-ϵ)
= (t-s)^γ i+2ϵ| Z^#|_ℰ^γ,2γ_α;I∑_m≥ 1∑_0≤ n<2^m(1-2n+1/2^m+1)^γ(i-2)-δ(1/2^m+1)^2γ+δ+2ϵ(W_𝐗,γ,δ+ϵ(τ^2n+1_m+1,τ^2n+2_m+1))^2(γ-δ-ϵ)
≤ B^i,ϵ_𝐗(s,t)| Z^#|_ℰ^γ,2γ_α;I∑_m≥ 1(∑_0≤ n<2^m(1-2n+1/2^m+1)^γ(i-2)-δ/1-2γ+2δ+2ϵ(1/2^m+1)^2γ+δ+2ϵ/1-2γ+2δ+2ϵ)^1-2γ+2δ+2ϵ
≤ B^i,ϵ_𝐗(s,t)| Z^#|_ℰ^γ,2γ_α;I∑_m≥ 1(1/2^m+1)^ϵ(∑_0≤ n<2^m(1-2n+1/2^m+1)^γ(i+2)-2δ-ϵ-1/1-2γ+2δ+2ϵ1/2^m+1)^1-2γ+2δ+2ϵ
≤ M̃̃̃_ϵ(t-s)^γ i+2ϵW_𝐗,γ,δ+ϵ(s,t)^2(γ-δ-ϵ)| Z^#|_ℰ^γ,2γ_α;I,
where M̃̃̃_ϵ<∞ only depends on ϵ.
Now we can prove the following lemma
Assume δ+ϵ<γ and Z∈ℰ^γ,2γ_α;I solves the equation (<ref>). Then for M_ϵ<∞ and i∈{ 0,1,2},
|∫_s^tS_t-τG(Z_τ)∘d𝐗_τ-S_t-sG(Z_s)∘ (δ X)_s,t-S_t-sD_Z_sG[G(Z_s)]∘𝕏_s,t|_α-iγ
≤ M_ϵ(t-s)^iγmax{(t-s)^ϵ(W_𝐗,γ,δ+ϵ(s,t))^γ-δ-ϵ,(t-s)^2ϵ(W_𝐗,γ,δ+ϵ(s,t))^2(γ-δ-ϵ)}| Z^#|_ℰ^γ,2γ_α;I
+ (t-s)^iγ+3(γ-δ)‖ X‖_γ[‖ X‖_γ^2+‖𝕏‖_2γ].
We use the same notation as in Lemma <ref>. Set Γ_s,t^m := ∑_0≤ n≤ 2^m Ξ^n,m_s,t. By the Sewing lemma,
|∫_s^tS_t-τG(Z_τ)∘d𝐗_τ-S_t-sG(Z_s)∘ (δ X)_s,t-S_t-sD_Z_sG[G(Z_s)]∘𝕏_s,t|_α-iγ
≤ ∑_m≥ 1|Γ_s,t^m+1-Γ_s,t^m|_α-iγ
≤ ∑_m≥ 1∑_0≤ n<2^m|Ξ^2n,m+1_s,t+Ξ^2n+1,m+1_s,t-Ξ^2n+2,m+1_s,t|_α-iγ.
If we plug (<ref>) into (<ref>), then our claim follows from Proposition <ref>.
The following result is well known:
Assume 𝐗=(X,𝕏)∈𝒞^γ([s,t],ℝ^n) be a γ-rough path as before. Let γ^'>0 with γ+γ^'>1, and also for a given path h:[s,t]→ℝ^d
sup_π,
π={κ_0=s,κ_1,...κ_m=t }{∑_j[|h_κ_j,κ_j+1|^1/γ^']}<∞.
Then this path can be enhanced to a rough path 𝐡=(h,∫ h dh) where ∫ h dh is defined by Young integration. In addition, ∫ Xdh and ∫ hdX can also again be defined as Young integrals.
The following estimate will be crucial:
Assume that I =[s,t] is a closed interval and for 1/3<γ<1/2, 𝐗=(X,𝕏) is a weakly geometric rough path such that for δ_1<γ, W_𝐗,γ,δ_1(s,t)<∞. In addition, for γ^'>0 with γ+γ^'>1 and for h: I→ℝ^d being a continuous path satisfying in (<ref>), we assume that W_𝐡,γ^',δ_1(s,t)<∞. If γ+γ^'-2δ_1>1, then
W_T_h(𝐗),γ,δ_1(s,t)≤ C_δ_1[W_𝐗,γ,δ_1(s,t)+ (W_𝐡,γ^',δ_1(s,t))^γ^'-δ_1/γ-δ_1],
where T_h(𝐗)=(h+X,∫ h⊗dh+∫ h⊗dX+∫ X⊗dh+𝕏).
Remember that by Young's integration theory,
∫_s^t (δ h)_s,τ⊗dX_τ:=lim_|π|→ 0∫_πh_s,τ∘dX_τ=lim_|π|→ 0∑_0≤ i<mh_s,κ_i⊗ (δ X)_κ_i,κ_i+1,
where π={κ_0=s,κ_1,...κ_m=t } is a partition of [s,t] and
∫_πh_s,τ∘dX_τ∑_0≤ i<mh_s,κ_i⊗ (δ X)_κ_i,κ_i+1.
Clearly,
‖∫_π h_s,τ⊗dX_τ- ∫_π∖{κ_j} h_s,τ⊗dX_τ‖≤| h_κ_j-1,κ_j|| (δ X)_κ_j,κ_j+1|
≤ [W_𝐡,γ^',δ_1^γ^'-δ_1/γ+γ^'-2δ_1(κ_j-1,κ_j)W_𝐗,γ,δ_1^γ-δ_1/γ+γ^'-2δ_1(κ_j,κ_j+1)]^γ+γ^'-2δ_1(κ_j-κ_j-1)^δ_1(κ_j+1-κ_j)^δ_1.
Note that since W_𝐗,γ,δ_1 and W_𝐡,γ^',δ_1 are control functions (c.f. (<ref>)), consequently we can find 1≤ j<m, such that
W_𝐡,γ^',δ_1^γ^'-δ_1/γ+γ^'-2δ_1(κ_j-1,κ_j)W_𝐗,γ,δ_1^γ-δ_1/γ+γ^'-2δ_1(κ_j,κ_j+1)<2/m-1 W_𝐡,γ^',δ_1^γ^'-δ_1/γ+γ^'-2δ_1(s,t)W_𝐗,γ,δ_1^γ-δ_1/γ+γ^'-2δ_1(s,t).
We can repeat this argument for the new partition π∖{κ_j}, so from (<ref>) and (<ref>) we conclude
‖∫_s^t h_s,τ⊗dX_τ‖≤∑_k≥ 12^γ+γ^'-2δ_1/k^γ+γ^'-2δ_1 (t-s)^2δ_1W_𝐡,γ^',δ_1^γ^'-δ_1(s,t)W_𝐗,γ,δ_1^γ-δ_1(s,t), and therefore
‖∫_s^t h_s,τ⊗dX_τ‖^1/2(γ-δ_1)/(t-s)^δ_1/γ-δ_1≤[∑_k≥ 12^γ+γ^'-2δ_1/k^γ+γ^'-2δ_1]^1/2(γ-δ_1)W_𝐡,γ^',δ_1^γ^'-δ_1/2(γ-δ_1)(s,t)W_𝐗,γ,δ_1^1/2(s,t).
An analogous argument can be run to obtain similar bounds for ∫ X⊗dh and ∫ h⊗dh. This finishes the proof.
We will assume up to the end of this part:
Let (𝒲,ℋ,μ) be an abstract Wiener space and assume X is a Gaussian process defined on it such that it can be enhanced to a weakly γ-Hölder geometric rough path 𝐗=(X,𝕏), 1/3 < γ≤1/2. For every h∈ℋ, let Condition (<ref>) be fulfilled. In this case, by <cit.>, on a measurable subset E⊂𝒲 with full measure,
T_h𝐗(ω)≡𝐗(ω +h), ω∈ E and ∀ h∈ℋ .
We assume that for every h∈ℋ,
W_𝐡,γ^',δ_1(0,1)≲| h|_ℋ^1/γ^'-δ_1.
In the following, we will show that the lift of a fractional Brownian motion satisfies Assumption <ref>.
Assume H∈ (1/4,1/2) and let B^H be a fractional Brownian motion with Hurst parameter H. Let ℋ^H denote the associated Cameron–Martin space for this process. Then Assumption <ref> holds for every 1/2<γ^'<H+1/2 and δ_1<γ^'-1/2.
For θ∈ (0,1) and q∈ (1,∞), for a measurable path g [0,1] →ℝ, define
| g|_W^θ,q := (∫_[0,1]^2|g(u)-g(v)|^q/|u-v|^1+θ qdu dv)^1/q.
Then W^θ,q is defined as the set of all measurable paths g such that
| g|_W^θ,q+(∫_0^1|g_u|^qd u)^1/q<∞
and we set
W^θ,q_0 := { g∈ W^θ,q:g(0)=0 }.
By <cit.>, for q=2 and 1/2<γ^'<H+1/2, ℋ^H is compactly embedded in W^γ^',2_0. Therefore, for a constant C(γ^')<∞,
(∫∫_[0,1]^2|h(u)-h(v)|^2/|u-v|^1+2γ^'dudv)^1/2≤ C_1(γ^')| h|_ℋ^H
where by | · |_ℋ^H, we mean the corresponding Hilbert norm in ℋ^H. Also by the Besov–Hölder embedding theorem (cf. <cit.>), for all 0≤ s<t≤ 1 and a constant C_2(γ^')<∞,
|h(t)-h(s)|^2≤ C_2(γ^')(t-s)^2γ^'-1 |h|_W^γ^',2^2 = C_2(γ^')(t-s)^2γ^'-1∫∫_[s,t]^2|h(u)-h(v)|^2/|u-v|^1+2γ^' du dv.
Now assume π={κ_0=0,κ_1,...κ_m=1 }. From (<ref>),
∑_0≤ i<m|h(κ_i+1)-h(κ_i)|^1/γ^'-δ_1/(κ_i+1-κ_i)^δ_1/γ^'-δ_1
≤ C_2(γ^')^1/2(γ^'-δ_1)∑_0≤ i<m(κ_i+1-κ_i)^2γ^'-2δ_1-1/2(γ^'-δ_1)(∫∫_[κ_i,κ_i+1]^2 |h(u)-h(v)|^2/| u-v|^1+2γ^'dudv)^1/2(γ^'-δ_1).
Now for 0<δ_1<γ^'-1/2, applying the Hölder inequality and (<ref>) yields
∑_0≤ i<m|h(κ_i+1)-h(κ_i)|^1/γ^'-δ_1/(κ_i+1-κ_i)^δ_1/γ^'-δ_1
≤ C_2(γ^')^1/2(γ^'-δ_1)(∑_0≤ i<m∫∫_[κ_i,κ_i+1]^2|h(u)-h(v)|^2/| u-v|^1+2γ^'dudv)^1/2(γ^'-δ_1)
≤ C_2(γ^')^1/2(γ^'-δ_1)(∫∫_[0,1]^2|h(u)-h(v)|^2/| u-v|^1+2γ^'dudv)^1/2(γ^'-δ_1)
≤ C_2(γ^')^1/2(γ^'-δ_1)C_1(γ^')^1/γ^'-δ_1| h|_ℋ^H^1/γ^'-δ_1.
For I=[a,b], the sequence of greedy points, denoted by {τ^I_n,δ_1,ω(χ)}_n≥ 0, is defined by setting τ^I_0,δ_1,ω(χ)=a and recursively
τ_n+1,ω^I(χ) := sup{τ: τ^I_n,δ_1,ω(χ)≤τ≤ b and W_𝐗(ω),γ,δ_1^γ-δ_1(τ_n,ω^I(χ),τ)≤χ}.
For 0<δ_1< γ and χ>0, we set
N(I,δ_1,χ,𝐗_ω) := inf{ n>0: τ_n,ω^I(χ)=b}.
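Numerically, the greedy points can be generated as follows (a hypothetical sketch; it assumes the control W is continuous, superadditive, and vanishes on the diagonal, supplied as a callable, and it uses bisection for the supremum):

```python
# Sketch of the greedy-point construction for a given control function W(s, t);
# exponent plays the role of gamma - delta_1 in the definition above.
def greedy_points(W, a, b, chi, exponent, bisect_steps=60, max_points=10_000):
    taus = [a]
    while taus[-1] < b and len(taus) < max_points:
        lo, hi = taus[-1], b
        if W(lo, b) ** exponent <= chi:      # the whole remaining interval already fits
            taus.append(b)
            break
        for _ in range(bisect_steps):        # largest tau with W(tau_n, tau)^exponent <= chi
            mid = 0.5 * (lo + hi)
            if W(taus[-1], mid) ** exponent <= chi:
                lo = mid
            else:
                hi = mid
        taus.append(lo)
    return taus, len(taus) - 1               # greedy points and N(I, delta_1, chi, X)

# toy example: W(s, t) = t - s is a control; chi = 0.3, exponent = 1/2
print(greedy_points(lambda s, t: t - s, 0.0, 1.0, 0.3, 0.5)[1])
```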
The following proposition is analogous to <cit.>.
Assume γ+γ^'-2δ>1. Choose 0<ϵ<γ+γ^'-2δ-1/2 and set δ_1=δ+ϵ. Then for a constant T(δ_1)<∞, we have
[N(I,δ_1,2^1/γ-δ_1C_δ_1W_𝐗(ω-h),γ,δ_1(a,b),𝐗(ω)) -1] W_𝐗(ω-h),γ,δ_1^γ-δ_1/γ^'-δ_1(a,b)≤ T(δ_1)| h|_ℋ^1/γ^'-δ_1,
where C_δ_1 is the constant in (<ref>). Also for χ>0, there exist M_1(δ_1,χ), M_2(δ_1,χ)<∞ such that
μ{ω: N(I,δ_1,χ,𝐗(ω))> n}≤ M_1(δ_1,χ)exp(-M_2(δ_1,χ)n^2(γ^'-δ_1)).
From (<ref>) and (<ref>)
W_𝐗(ω),γ,δ_1^γ-δ_1(τ_n,ω^I(χ),τ_n+1,ω^I(χ)) = W_T_h(𝐗(ω-h)),γ,δ_1^γ-δ_1(τ_n,ω^I(χ),τ_n+1,ω^I(χ))
≤ C_δ_1^γ-δ_1[W_𝐗(ω-h),γ,δ_1^γ-δ_1(τ_n,ω^I(χ),τ_n+1,ω^I(χ))+ W_𝐡,γ^',δ_1^γ^'-δ_1(τ^I_n,δ_1,ω(χ),τ^I_n+1,δ_1,ω(χ))].
Note that if τ_n+1,ω^I(χ)<b, then the continuity of W_𝐗(ω) yields
W_𝐗(ω),γ,δ_1(τ_n,ω^I(χ),τ_n+1,ω^I(χ))=χ .
Set χ := 2^1/γ-δ_1C_δ_1W_𝐗(ω-h),γ,δ_1(a,b). From (<ref>),
W_𝐗(ω-h),γ,δ_1^γ-δ_1/γ^'-δ_1(a,b)≤ W_𝐡,γ^',δ_1(τ^I_n,δ_1,ω(χ),τ^I_n+1,δ_1,ω(χ)) if τ_n+1^I(χ)<b .
Summing up yields
[N(I,δ_1,2^1/γ-δ_1C_δ_1W_𝐗(ω-h),γ,δ_1(a,b),𝐗(ω)) -1] W_𝐗(ω-h),γ,δ_1^γ-δ_1/γ^'-δ_1(a,b)≤ W_𝐡,γ^',δ_1(a,b).
Now it is enough to use (<ref>) to prove (<ref>). For the second claim, let χ>0. From (<ref>)
{ω: N(I,δ_1,χ,𝐗(ω))> n}∩ E⊂
Ω∖({ω∈Ω: χ/2^1/γ-δ_1+1C_δ_1≤ W_𝐗(ω),γ,δ_1(a,b) ≤χ/2^1/γ-δ_1C_δ_1}+r_n𝒦),
where r_n:=(n-1)^γ^'-δ_1χ^γ-δ_1/2^γ-δ_1+1T(δ_1)^γ^'-δ_1C_δ_1^γ-δ_1 and 𝒦 is the unit ball in ℋ. Indeed: Obviously, if χ_1<χ_2, one has N(I,δ_1,χ_2,𝐗(ω))≤ N(I,δ_1,χ_1,𝐗(ω)). Let
ω_1+h∈({ω∈Ω: χ/2^1/γ-δ_1+1C_δ_1≤ W_𝐗(ω),γ,δ_1(a,b) ≤χ/2^1/γ-δ_1C_δ_1}+r_n𝒦).
Then from (<ref>), setting ω=ω_1+h,
[N(I,δ_1,χ,𝐗(ω_1+h))-1](χ/2^1/γ-δ_1+1C_δ_1)^γ-δ_1/γ^'-δ_1≤ (n-1)(χ/2^1/γ-δ_1+1C_δ_1)^γ-δ_1/γ^'-δ_1.
Therefore, (<ref>) follows from (<ref>) and Borell's inequality (cf. <cit.>).
We come back now to the initial question of this article. Remember that the solution Z to (<ref>) satisfies
(δ Z)_s,t = (S_t-s-I)Z_s+∫_s^tS_t-τF(Z_τ)dτ+S_t-sG(Z_s)∘ (δ X(ω))_s,t+S_t-sD_Z_sG[G(Z_s)]∘𝕏_s,t(ω)
+ ∫_s^tS_t-τG(Z_τ)∘d𝐗_τ(ω)-S_t-sG(Z_s)∘ (δ X)_s,t(ω)-S_t-sD_Z_sG[G(Z_s)]∘𝕏_s,t(ω).
Set L(x):=max{ x,x^2} and assume for I=[s,t] that t-s≤ 1. From Lemma <ref> and Assumption <ref>, we can conclude that, for some M_ϵ>1 independent of 𝐗, it holds that
‖ Z‖_𝒟_𝐗,α^γ([u,v]) ≤ M_ϵ[| Z_u|_α+(v-u)^1-σ‖ Z‖_𝒟_𝐗(ω),α^γ([u,v])
+ ‖ X‖_γ;I(‖ X(ω)‖^2_γ;I+‖𝕏(ω)‖_2γ;I)+‖ X(ω)‖_γ;I+‖𝕏(ω)‖_2γ;I
+ L(W_𝐗(ω),γ,δ+ϵ(u,v)^γ-δ-ϵ)‖ Z‖_𝒟_𝐗(ω),α^γ([u,v])],
where [u,v]⊂ [s,t]. Let us choose 0<χ<1 such that M_ϵχ^γ-δ-ϵ≤1/4. We will assume further that M_ϵ(t-s)^1-σ≤1/4 and that {τ^I_n,δ_1,ω(χ)}_n≥ 0 are the greedy points that are defined in Definition <ref> with δ_1=δ+ϵ.
From (<ref>),
‖ Z‖_𝒟_𝐗(ω),α^γ([τ_n,τ_n+1])≤ 2M_ϵ|Z_τ_n|_α+2M_ϵ P(‖ X(ω)‖_γ;I,‖𝕏(ω)‖_2γ;I),
where P(‖ X(ω)‖_γ,‖𝕏(ω)‖_γ)=1+‖𝕏(ω)‖_γ+‖ X(ω)‖_γ(‖ X(ω)‖_γ^2+‖𝕏(ω)‖_2γ).
Therefore, for M̃_ϵ := log(2M_ϵ),
sup_τ∈ [s,t]| Z_τ|_α ≤exp(N(I,δ_1,χ,𝐗(ω))M̃_ϵ)| Z_s|_α
+ exp(N(I,δ_1,χ,𝐗(ω))M̃_ϵ+M̃_ϵ)-1/2M_ϵ-1P(‖ X(ω)‖_γ;I,‖𝕏(ω)‖_γ;I).
We can now summarize our main result in the following theorem:
Suppose that F: ℬ_α→ℬ_α-σ is a Lipschitz continuous function and G: ℬ_α-θ→ℬ_α-θ-δ is a bounded Fréchet differentiable function with 3 bounded derivatives. Assume that Z solves
dZ_t=AZ_tdt+F(Z_t)dt+G(Z_t)∘d𝐗_t, Z_0=ξ∈ℬ_α.
Then the following holds:
(i) (<ref>) admits a unique and global solution Z such that for δ_1=δ+ϵ, one has
sup_τ∈ [s,t]|Z_τ|_α ≤‖ Z‖_𝒟_𝐗,α^γ([s,t])
≤exp(N([s,t],δ_1,χ,𝐗)M̃_ϵ)|Z_s|_α
+ exp(N([s,t],δ_1,χ,𝐗)M̃_ϵ+M̃_ϵ)-1/2M_ϵ-1P(‖ X‖_γ,[s,t],‖𝕏‖_γ,[s,t]),
where M_ϵ>1, M_ϵχ^γ-δ-ϵ≤1/4, M_ϵ(t-s)^1-σ≤1/4 and M̃_ϵ:=log(2M_ϵ). In addition, P(x,y)=1+y+ x(x^2+y).
(ii) Let (𝒲,ℋ,μ) be an abstract Wiener space and assume that X is a Gaussian process defined on it that can be enhanced to a weakly γ-geometric rough path 𝐗=(X,𝕏), 1/3 < γ≤1/2. In addition, let γ^'>0 satisfy γ+γ^'-2δ>1, and for every h∈ℋ, for δ_1=δ+ϵ such that 0<ϵ<γ+γ^'-2δ-1/2, we assume that the Conditions (<ref>) and (<ref>) hold. Then
exp(N([s,t],δ_1,χ,𝐗)M̃_ϵ)P(‖ X‖_γ,[s,t],‖𝕏‖_γ,[s,t])∈ℒ^p(Ω)
for every p > 0.
The first item is proved in (<ref>). For the integrability claim, remember that from (<ref>), we know that
μ{ω: N(I,δ_1,χ,𝐗(ω))> n}≤ M_1(δ_1,χ)exp(-M_2(δ_1,χ)n^2(γ^'-δ_1)).
Since by assumption γ+γ^'-2δ_1>1, we can easily conclude 2(γ^'-δ_1)>1. This proves integrability of exp(N([s,t],δ_1,χ,𝐗)M̃_ϵ). Since P is a polynomial term and 𝐗 is a Gaussian process, the integrability for every moment is clear. Finally, the integrability of the product of these two terms is a straightforward consequence of Hölder's inequality.
entry_id: http://arxiv.org/abs/2307.02956v1 | published: 20230706124105 | title: Circular current in a one-dimensional open quantum ring in the presence of magnetic field and spin-orbit interaction | authors: Moumita Patra | primary_category: cond-mat.mes-hall | categories: [cond-mat.mes-hall]
School of Physical Sciences, Indian Association for the Cultivation of Science, 2A & 2B Raja S. C.
Mullick Road, Kolkata 700032, India
In an open quantum system having a channel in the form of loop geometry, the current inside the channel, namely
circular current, and overall junction current, namely transport current, can be different. A quantum ring has
doubly degenerate eigen energies due to periodic boundary condition that is broken in an asymmetric ring where
the ring is asymmetrically connected to the external electrodes. Kramers' degeneracy and spin degeneracy can
be lifted by considering non-zero magnetic field and spin-orbit interaction (SOI), respectively. Here, we find that
symmetry breaking impacts the circular current density vs energy (E) spectra in addition to lifting the degeneracy.
For charge and spin current densities, the corresponding effects are not the same. Under symmetry-breaking they
may remain symmetric or anti-symmetric or asymmetric around E = 0 whereas the transmission function (which is
proportional to the junction current density) vs energy characteristic remains symmetric around E = 0.
This study leads us to estimate the qualitative nature of the circular current and the choices of
Fermi-energy/chemical potential to have a net non-zero current. As a result, we may manipulate the system to
generate pure currents of charge, spin, or both, which is necessary for any spintronic and electronic applications.
Circular current in a one-dimensional open quantum ring in the presence
of magnetic field and spin-orbit interaction
Moumita Patra
====================================================================================================================
§ INTRODUCTION
The current inside the channel and the overall junction current have substantially distinct quantitative and
qualitative characteristics <cit.> for a quantum junction
with a channel shaped like a loop or ring <cit.>. Let us consider
the open quantum junction, having a ring channel as shown in Fig. <ref>. Here, the quantum ring is
connected to the incoming (namely, source) and outgoing (namely, drain) electrodes.
The zero-temperature bond current within the ring and overall junction current can be written by the
Landauer formula as:
I_n → n+1(V) = ∫_E_F - eV/2^E_F+eV/2J_n → n+1(E) dE
and
I_T(V) = 2e/h∫_E_F-eV/2^E_F+eV/2T(E) dE,
respectively <cit.>. Here e is the electronic charge and h is the
Planck’s constant. J_n → n + 1(E) and T(E) are the circular current density
and the transmission function, respectively. E_F is the equilibrium Fermi energy or the chemical potential.
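For illustration, the zero-temperature integrals above can be evaluated numerically as follows (a sketch with a hypothetical transmission function; units e = h = 1 as adopted in this paper):

```python
# Numerical evaluation of the junction current I_T(V): integrate 2*T(E) over the bias window.
import numpy as np

def junction_current(T, E_F, V, n=2001):
    E = np.linspace(E_F - V / 2.0, E_F + V / 2.0, n)
    dE = E[1] - E[0]
    return 2.0 * np.sum(T(E)) * dE            # simple Riemann sum of 2*T(E) dE

T = lambda E: 1.0 / (1.0 + (E / 0.1) ** 2)    # toy transmission: Lorentzian resonance at E = 0
print(junction_current(T, E_F=0.0, V=0.5))
```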
For a linear conductor (as shown in Fig. <ref>), these two currents should be equal due to the conservation of charge. Therefore,
J_i → i+1 (E) = 2T(E)
for a linear conductor. In this paper, we choose e = h =1.
In a quantum junction with ring geometry (as shown in Fig. <ref>)
the currents in the upper and lower arms of the ring are not the same <cit.>. But
since the charge current is a conserved quantity, each bond within a given arm carries the same current.
Let I_C^U and I_C^L be the current in the upper and the lower arms of the ring, respectively
such that each bond in the upper or lower arm carries I_C^U or I_C^L current, respectively.
Due to the current conservation, for a ring junction, we can write
I_T_C = I_C^U - I_C^L.
I_T_C is the junction charge current. Here we assign a positive sign to the current flowing in the
clockwise direction. From Eq. <ref>, we can see that the currents inside and outside a ring channel are not
the same. The junction transmission
function T_C(E) is always a positive quantity with upper bound 2 (=1+1 due to contribution of the
up and down spins). In contrast, J_i → i+1 (E) may be positive or negative, with no bound, for a ring channel.
The circular current is defined as the average of all the bond currents, i.e.,
I = ∑_n l_n→ n+1
I_n→ n+1/∑_n l_n→ n+1;
l_n→ n+1 is the bond-length. ∑_n l_n→ n+1 is the length of the ring L.
For a ring with fixed bond-length a and N number of atomic sites, L = Na. Thus we can rewrite Eq. <ref> as
I = 1/N∑_n I_n→ n+1.
We can determine the characteristics of the circular current inside the channel
(using Eq. <ref>) and the overall junction current (using Eq. <ref>)
by studying circular current density and transmission function as a function of energy, respectively.
A quantum ring has doubly degenerate energy states due to its periodic boundary condition N ≡ N+1,
which implies that the energy eigenstates with momenta +k and -k have the same energy eigenvalues.
This degeneracy can be broken if we connect the electrodes to the ring in such a way that the lengths of the upper
and lower arms are not the same. We call this an asymmetric ring. According to the Kramers' degeneracy
theorem <cit.>, a Fermionic system has at least a two-fold degeneracy if it remains unchanged under
the time-reversal (TR) transformation, which is defined as
𝒯: t ↦ -t,
such that [𝒯, ℋ] = 0, where ℋ is the Hamiltonian.
TR symmetry can be broken in the presence of magnetic flux ϕ. In this situation
E(k + ϕ, ↑) ≠ E(-k + ϕ, ↑). The spin-degeneracy can be lifted by adding spin-orbit
interaction <cit.>. In the presence of SOI, we can write E(k, ↑) ≠ E(k, ↓).
Whereas in the presence of both ϕ and SOI, we can write E(k + ϕ, ↑) ≠ E(-k + ϕ, ↓).
In this paper, we investigate the circular current density and transmission function as a function
of energy in the presence of an external
magnetic-field, spin-orbit interaction, and both. Here we find that while the transmission function
spectra remain symmetric around E = 0, the symmetry-breaking
has an impact on the circular current density vs energy characteristics. For a symmetric ring (where the
ring is symmetrically connected to the electrodes), we show that the charge current density J_C(E)
is zero without any external fields and in the presence of SOI. J_C(E)
is non-zero in the presence of magnetic-field and both SOI and magnetic flux ϕ.
The circular spin-current density is zero in the absence of SOI and non-zero in the other cases.
The non-zero J_C (E) and J_S (E) are anti-symmetric around E = 0. Thus following Eq. <ref>,
in a symmetric ring
the net circular charge and spin currents become 0 if we set the Fermi energy E_F at 0.
In an asymmetric ring, J_C (E) is always non-zero. It is symmetric around E = 0 in the absence
of ϕ, and asymmetric for a non-zero magnetic field. Thus the net circular charge current I_C is
always finite in an asymmetric ring for any E_F. In an asymmetric ring, J_S (E) is non-zero
in the presence of SOI (similar to the symmetric ring). In the presence of SOI only, J_S (E) is
anti-symmetric around E = 0, so the net I_S becomes zero for E_F = 0. In contrast, J_S (E) is
asymmetric around E = 0 in the presence of both ϕ and SOI, so it becomes finite for
any E_F. The overall junction charge transmission function
T_C (E) is non-zero in the presence and absence of the interactions. T_S (E) is non-zero in
the presence of both ϕ and SOI only.
As they are always symmetric around E = 0, the net I_T_C (and I_T_S in the presence
of both ϕ and SOI) are finite for any choice of E_F.
These findings lead us to manipulate the system to generate pure charge current or pure spin current
or both which is very important for designing electronic and spintronic
devices <cit.>. For example, in the presence of SOI,
when unpolarized electrons are injected in a symmetric-ring, for any non-zero E_F a pure spin current
(where I_C is zero) is generated within the ring. On the other hand, in an asymmetric-ring, for E_F = 0,
pure charge current (where I_S is zero) is produced and for E_F ≠ 0 both I_C and I_S become finite.
The outline of the paper is as follows. First, in section II, we define the general model and describe the
scattering formalism to estimate the current inside the scatterer and overall junction current.
In section III we calculate all the components of circular current densities and transmission functions
in the absence and presence of magnetic field, SOI, and both considering symmetric
and asymmetric ring. Finally we conclude with a discussion in section IV.
§ THEORY
We follow the scattering formalism <cit.>, where we can define the circular currents densities and transmission
function of an open quantum system by wave amplitudes. Let us consider Fig. <ref> where
a Bloch wave incidents with energy E from the left and transmitted to the right. In between, we consider
a tight-binding <cit.> scatterer (shown by blue color). The Hamiltonian for the entire system
is considered as:
ℋ = ℋ_S + ℋ_sc + ℋ_D + ℋ_C.
Here,
ℋ_S/D = ∑_n = -1/N+1^-∞/+∞(ĉ_n+1^†t̂_0^†ĉ_n
+ ĉ_n^†t̂_0ĉ_n+1),
ℋ_sc = ∑_n = 1^N(ĉ_n+1^†t̂_n→ n+1^†ĉ_n
+ ĉ_n^†t̂_n→ n+1ĉ_n+1),
ℋ_C = (ĉ_-1^†t̂_-1→ 1^†ĉ_1
+ ĉ_1^†t̂_-1→ 1ĉ_-1)
+ (ĉ_N^†t̂_N→ N+1^†ĉ_N+1
+ ĉ_N+1^†t̂_N→ N+1ĉ_N).
ℋ_S, ℋ_sc, ℋ_D and
ℋ_C are the sub-Hamiltonians representing the left side of the scatterer (i.e., the source), the scatterer,
the right side of the scatterer (i.e., the drain), and their couplings, respectively. Here
ĉ_n^†=([ c_↑,n^† c_↓,n^† ]), where c_σ,n^† (σ = ↑, ↓) is the creation operator;
ĉ_n=([ c_↑,n; c_↓,n ]), where c_σ,n (σ = ↑, ↓) is the annihilation operator;
t̂_n→ n+1=([ t_↑↑, n → n+1 t_↑↓, n → n+1; t_↓↑, n → n+1 t_↓↓, n→ n+1 ]), where t_σσ', n → n+1 is the nearest-neighbor coupling for an electron that is incident with spin σ and hops as spin σ' (σ, σ' = ↑, ↓). t̂_0
is the hopping matrix for the left and right sides of the scatterer, where
t̂_0=([ t_0 0; 0 t_0 ]).
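To illustrate how these 2×2 blocks assemble into the ring Hamiltonian, a minimal numerical sketch is given below (hypothetical, with spin-independent hoppings chosen only as an example):

```python
# Assemble the 2N x 2N spinful tight-binding Hamiltonian of a ring from 2x2 hopping blocks.
import numpy as np

def ring_hamiltonian(t_hats):
    """t_hats: list of N complex 2x2 hopping matrices t_{n -> n+1} around the ring
    (site N couples back to site 1). On-site energies are taken to be zero."""
    N = len(t_hats)
    H = np.zeros((2 * N, 2 * N), dtype=complex)
    for n, t in enumerate(t_hats):
        m = (n + 1) % N                                      # periodic boundary: N -> 1
        H[2 * n:2 * n + 2, 2 * m:2 * m + 2] += t
        H[2 * m:2 * m + 2, 2 * n:2 * n + 2] += t.conj().T    # Hermitian conjugate block
    return H

t0 = np.eye(2)                               # spin-independent hopping block (1 eV units assumed)
H_ring = ring_hamiltonian([t0] * 6)          # N = 6 ring, as in the results section
print(np.allclose(H_ring, H_ring.conj().T))  # True: the Hamiltonian is Hermitian
```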
When an up spin is incident with energy E = 2 t_0cos(ka), the solution to the left of the scattering region can be
written in terms of incoming and reflected waves as
|ψ_↑,L⟩ = ∑_n ≤ -1([ A_↑ e^i(n + 1)ka + B_↑↑e^-i(n + 1)ka; B_↑↓e^-i(n + 1)ka ])|n, ↑⟩.
If a down spin is incident from the left, the wave function reads
|ψ_↓,L⟩ = ∑_n ≤ -1([ B_↓↑e^-i(n + 1)ka; A_↓ e^i(n + 1)ka + B_↓↓e^-i(n + 1)ka ])|n, ↓⟩.
A_σ = incident amplitude of an electron with spin σ (σ = ↑, ↓).
B_σσ' = reflection amplitude of an electron which is incident with spin σ and is reflected as spin σ'
(σ,σ' = ↑, ↓).
The solution to the right is given by a transmitted wave; in the case of up-spin incidence,
|ψ_↑,R⟩ = ∑_n > N([ τ_↑↑e^inka; τ_↑↓ e^inka ])|n, ↓⟩
Similarly, for down-spin incidence, the solution is
|ψ_↓,R⟩ = ∑_n > N([ τ_↓↑e^inka; τ_↓↓ e^inka ])|n, ↓⟩.
τ_σσ' = transmission amplitude of an electron that is incident with spin σ and
transmitted as spin σ' (σ, σ' = ↑, ↓).
For the scattering region, the steady-state wave functions for up- and down-spin incidence have the form
|ψ_↑,S⟩ = ∑_n = 1^N([ C_↑↑,n; C_↑↓,n ])|n, ↓⟩,
|ψ_↓,S⟩ = ∑_n = 1^N([ C_↓↑,n; C_↓↓,n ])|n, ↓⟩,
respectively. C_σσ',n (σ, σ' = ↑,↓) are the
wave amplitudes. The current density between any two neighboring sites of the scatterer, that is,
n and n+1, can be evaluated as
J_σσ',n→ n+1 (E) = e/ħ(C_σσ',n^*
t_σσ',n→ n+1^* C_σσ',n+1
- C_σσ',n+1^* t_σσ',n → n+1 C_σσ',n),
where ħ is the reduced Planck constant. The net up- and down-spin current densities within a bond are defined as
J_↑, n→ n+1(E) = J_↑↑, n→ n+1(E) +
J_↓↑, n→ n+1(E)
J_↓, n→ n+1(E) = J_↑↓, n→ n+1(E) +
J_↓↓, n→ n+1(E),
respectively. The net charge and spin current densities are given by
J_C, n→ n+1(E) = J_↑, n→ n+1(E) +
J_↓, n→ n+1(E)
J_S, n→ n+1(E) = J_↑, n→ n+1(E) -
J_↓, n→ n+1(E),
respectively. The net current density on the left side of the scatterer is
J_σσ', -2→-1(E) = J_inc,σσ'(E) - J_ref,σσ'(E),
where J_inc,σσ'(E) and J_ref,σσ'(E) are the incident and reflected current
densities, respectively. The spin-dependent incident, reflected, and transmitted current densities are given by
J_inc,σ(E) = Γ/ħ|A_σ|^2,
J_ref,σσ'(E) = Γ/ħ|B_σσ'|^2,
J_tra,σσ'(E) = Γ/ħ|τ_σσ'|^2,
respectively, with Γ(E) = 2 |t_0 sin(ka)| and σ, σ' = ↑,↓.
The reflection and transmission functions are defined as
R_σσ'(E) = |B_σσ'|^2
T_σσ'(E) = |τ_σσ'|^2,
respectively. σ, σ' = ↑, ↓. Therefore,
R_↑(E) = R_↑↑(E) + R_↓↑(E);
R_↓(E) = R_↑↓(E) + R_↓↓(E);
T_↑(E) = T_↑↑(E) + T_↓↑(E);
T_↓(E) = T_↑↓(E) + T_↓↓(E).
Once the reflection and transmission functions are known, one can show the conservation condition
(R_↑(E) + R_↓(E)) + (T_↑(E) + T_↓(E)) = 2. The charge and
spin transmission functions are defined as
T_C(E) = T_↑(E) + T_↓(E);
T_S(E) = T_↑(E) - T_↓(E),
respectively.
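The bookkeeping above can be summarized in a short sketch (illustrative only; the sample amplitudes are hypothetical and simply chosen to respect unitarity):

import numpy as np

def junction_functions(B, tau):
    """B[s, s'], tau[s, s']: reflection/transmission amplitudes for an electron
    incident with spin s and leaving with spin s' (0 = up, 1 = down)."""
    R, T = np.abs(B) ** 2, np.abs(tau) ** 2
    R_up, R_dn = R[:, 0].sum(), R[:, 1].sum()   # R_sigma' sums over incident spins
    T_up, T_dn = T[:, 0].sum(), T[:, 1].sum()
    sum_rule = (R_up + R_dn) + (T_up + T_dn)    # should equal 2
    return sum_rule, T_up + T_dn, T_up - T_dn   # conservation check, T_C, T_S

# Toy spin-conserving amplitudes (no spin-flip scattering):
B = np.sqrt(0.3) * np.eye(2)
tau = np.sqrt(0.7) * np.eye(2)
print(junction_functions(B, tau))               # (2.0, 1.4, 0.0)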
§ RESULTS AND DISCUSSION
Now we discuss the results for a ring scatterer. All the results are computed for a ring of size N = 6, where
the source is always connected to the first site of the ring, i.e.,
N_S = 1. This choice gives a clear presentation of the results, although the qualitative analysis holds for any N.
Throughout the calculations, the hopping parameter in the electrodes is chosen as 2 eV and all other hoppings are 1 eV.
The inter-atomic spacing (a) is taken to be 10 nm.
Now we discuss how the current density and the transmission function versus energy
spectra are affected by the magnetic field, the SOI, and both.
§.§ Transport through a perfect ring without any external field
Let us first understand open-quantum transport for a perfect ring scatterer, which has
identical nearest-neighbor hopping integrals. The carriers have two possible
paths, namely the upper and lower arms of the ring, to travel from source to drain.
Let us consider Fig. <ref>. The path from site 1 (where the source is connected to the ring) to N_D
(where the drain is attached) is called the upper arm, and the path from N_D back to site 1 is called the
lower arm. In a symmetric-ring, the source and drain are connected to the ring in such a way that
the lengths of the upper and lower arms are the same; otherwise, it is an asymmetric-ring. If we connect the drain
to the N-th site of the ring, the path difference between the upper and lower arms becomes maximal. This is
called the most asymmetric-ring. Due to the conservation of charge, the bond charge current (or the bond
charge current density) in each bond of a particular arm (upper or lower) of the ring always remains the same.
In the absence of any spin-scattering interaction, the contributions of up and down spins
to the current are equal, and the current due to spin-flip scattering is zero. That is,
J_↑↑, n → n+1(E) = J_↓↓, n → n+1(E);
J_↑↓, n → n+1(E) = J_↓↑, n → n+1(E) = 0.
Hence,
J_↑, n → n+1(E) = J_↓, n → n+1(E).
Moreover, the current flowing through each bond of a given arm is the same. That is,
J_σ, n → n+1^A(E) = J_σ, n+1 → n+2^A(E),
A = U, L.
As all the bonds in an arm carry equal current, we simply denote the bond current densities in
the upper and lower arms as J_σ^U(E) and J_σ^L(E), respectively.
J_σ^U(E) (red curve) and J_σ^L(E) (blue curve)
for the symmetric and the most asymmetric ring are shown in Fig. <ref>(a) and (b),
respectively. The net circular current density (Fig. <ref>(c)) and the overall junction transmission
function (Fig. <ref>(d)) for both connections are also calculated.
The current densities are symmetric around E=0 (see Appendix <ref>).
The energies associated with the peaks and dips in each spectrum correspond to the
energy eigenvalues of the ring. The energy dispersion relation for the isolated ring is
E_σ = 2 t cos(ka) = 2 t cos[2π m/N],
where k = 2 π m/(N a). The integer m runs over -N/2 ≤ m < N/2, so for +k and -k
the system has the same energy. In general, for a quantum ring with any N,
the energy states are two-fold degenerate due to the periodic boundary condition, except at m=0 for odd N
and at m= -N/2, 0 for even N. For example, in this article, as we choose N = 6, the allowed values of
m are -3, -2, -1, 0, 1, 2. Therefore the eigenstates with m = ±2 (E = -1 eV) and m = ±1 (E = 1 eV) are doubly degenerate, whereas the eigenstates
with m = -3 (E = -2 eV) and m = 0 (E = 2 eV) are non-degenerate.
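A minimal numerical sketch of this spectrum (an illustration, not part of the original calculation) for the parameters used in the text:

import numpy as np

# E_m = 2 t cos(2*pi*m/N) for the isolated six-site ring with t = 1 eV.
t, N = 1.0, 6
m = np.arange(-N // 2, N // 2)                  # m = -3, -2, -1, 0, 1, 2
E = 2 * t * np.cos(2 * np.pi * m / N)
for mi, Ei in zip(m, np.round(E, 3)):
    print(mi, Ei)
# m = +-1 give E = 1 eV and m = +-2 give E = -1 eV (doubly degenerate),
# while m = -3 (E = -2 eV) and m = 0 (E = 2 eV) are non-degenerate.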
In a symmetric-ring, the currents in the upper and lower arms are equal and opposite
to each other (Fig. <ref>(a)). Therefore we have four peaks in J_σ^U-E (or J_σ^L-E) spectra.
The circular current density in terms of J^U_σ(E) and
J^L_σ(E) can be written as
J_σ(E) = f^U J^U_σ(E) +
f^L J^L_σ(E).
f^U = (N_D - 1)/N and f^L = (N - N_D + 1)/N are the weight
factors for the upper and lower arms, respectively.
As we have already discussed, in a symmetric-ring,
J_σ^U(E) = - J_σ^L(E), σ = ↑, ↓.
Hence the net circular current is zero.
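The role of the weight factors can be made concrete with a small sketch (illustrative values only):

def circular_density(J_upper, J_lower, N, N_D):
    """Net circular current density J(E) = f_U * J_U(E) + f_L * J_L(E)."""
    f_U = (N_D - 1) / N
    f_L = (N - N_D + 1) / N
    return f_U * J_upper + f_L * J_lower

# For N = 6, the symmetric connection corresponds to N_D = 4 (equal arms),
# so equal-and-opposite arm densities cancel; the most asymmetric connection
# (N_D = N) weights the two arms by 5/6 and 1/6 instead.
print(circular_density(+1.0, -1.0, N=6, N_D=4))   # 0.0
print(circular_density(+1.0, -1.0, N=6, N_D=6))   # 0.666...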
In Fig. <ref>(b), the degeneracy between the clockwise- and anti-clockwise-moving electronic wave functions
is lifted due to the asymmetric ring-to-lead connection. Hence the currents in the two arms become unequal.
The currents associated with the degenerate energy levels propagate in the same direction, which is quite unconventional,
as through one arm the current propagates against the bias. For non-degenerate
energy levels they move in opposite directions, similar to Fig. <ref>(a). A net circular current
is produced within the ring since
J_σ^U(E) ≠ J_σ^L(E), σ = ↑, ↓.
Since around E = ± 2 eV the current propagates in opposite directions through the two arms of the
ring, vanishingly small current densities are obtained according to Eq. <ref>.
At the degenerate energies (neglecting spin degeneracy), on the other hand,
the contributions from the two arms are additive, and hence a net circular current
density is obtained, as we can see in Fig. <ref>(c). As in this case J_↑(E) = J_↓(E),
we have J_C(E) = J_↑(E) + J_↓(E) = 2 J_↑(E) = 2 J_↓(E).
In the transmission spectra (Fig. <ref>(d)), we have four peaks in the symmetric case, whereas
there are six peaks in the asymmetric case, indicating the removal of degeneracies, consistent with the previous results.
We have anti-resonant states with T_σ(E) = 0 in the asymmetric connection at the degenerate energies E = ± 1 eV.
Note that T_σ(E) is positive with an upper bound of 1, whereas J_σ(E) can take positive and
negative values with no bound.
§.§ In the presence of magnetic-field
We consider a net magnetic field passing through the center of the ring, directed perpendicular
to the confining plane of the ring. In the presence of the magnetic field, the Hamiltonian
H_ring in Eq. <ref> is modified as
H_ring = ∑_n=1^N (e^iθ Ψ_n+1^† t Ψ_n +
e^-iθ Ψ_n^† t Ψ_n+1),
where θ=2 πϕ/N is the phase factor due to the flux ϕ,
which is measured in units of the elementary flux quantum ϕ_0 (=ch/e).
The circular current densities and the transmission function as functions of E
in the presence of the magnetic field are shown in Fig. <ref>.
Six peaks are visible in each spectrum, as the Kramers degeneracy is removed with the incorporation of the
magnetic field. The energy dispersion relation is <cit.>
E_↑ = E_↓ =
2 t cos[2π/N(m+ϕ/ϕ_0)],
with m = 0, ± 1, ±2, ….
With ϕ = 0.3, the eigenenergies of a six-site ring are E_σ = ± 1.90, ± 1.486, ± 0.4158 eV.
Therefore the energy levels are symmetric around E = 0, but the magnitudes of J_σ^U (E) or J_σ^L (E)
are asymmetric around E = 0. Unlike Fig. <ref>(a), in the presence of a non-zero magnetic field, in a symmetric-ring
J_σ^U (E) ≠ - J_σ^L (E) and the two arm currents propagate in the same direction. Therefore the net circular current
is non-zero in the symmetric-ring. In fact, a magnetic field can induce a net circulating current
within an isolated ring (without any electrodes), and a lot of research has already been done in this
direction <cit.>. So here, in a symmetric-ring, a
magnetic-field-driven circular current appears.
In this case (Fig. <ref>(a)), J_σ^U (E) = - J_σ^L (-E).
Therefore the net J_C (E) is anti-symmetric around E = 0, as we can see from the orange curve in Fig. <ref>(c).
Therefore Eq. <ref> implies that the current goes to zero if we set the equilibrium Fermi energy E_F to 0.
On the other hand, in an asymmetric-ring, J_C(E) is asymmetric around E = 0 (shown by
the black curve in Fig. <ref>(c)); therefore, in this case I_C(V) is finite for any
choice of E_F. The transmission spectra are symmetric around E = 0 (Fig. <ref>(d)).
Unlike Fig. <ref>(d), here the anti-resonances in this spectrum are removed in an asymmetric-ring (as shown by
the black curve).
§.§ In the presence of Rashba spin-orbit interaction
We now consider a ring scatterer with spin-orbit interaction; the rest of the circuit remains SOI free.
Here we take the Rashba SOI (RSOI), which originates from
structure inversion asymmetry, i.e., the inversion asymmetry of the confining
potential <cit.>. The RSOI <cit.> is an electrically tunable spin-orbit
interaction <cit.>.
In the presence of the RSOI, the Hamiltonian H_ring in Eq. <ref> is modified as
H_ring = ∑_n = 1^N(Ψ_n+1^† t Ψ_n + Ψ_n^† t Ψ_n+1)
+ ∑_n = 1^N(Ψ_n+1^†(iασ_x)
cosφ_n,n+1 Ψ_n + h.c.)
- ∑_n = 1^N(Ψ_n+1^†(iασ_y)
sinφ_n,n+1 Ψ_n + h.c.)
= ∑_n = 1^N(Ψ_n+1^† t^SO_n → n+1 Ψ_n + h.c.).
α is the strength of the SOI. φ_n → n+1 = (φ_n+φ_n+1)/2, with
φ_n=2π (n-1)/N, is the geometrical phase. σ_i (i=x, y, z) are the Pauli spin
matrices in the σ_z-diagonal representation. The hopping operator
t^SO_n→ n+1 has the form
t^SO_n→ n+1 = ([ t i α e^-iφ_n → n+1; iα e^i φ_n → n+1 t ]).
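As a quick check of the structure above, a short sketch that builds this 2x2 hopping matrix for one bond (parameters follow the text; the choice n = 1 is only an example):

import numpy as np

def rashba_hopping(n, N=6, t=1.0, alpha=0.2):
    """t^SO for the bond n -> n+1, with the geometrical phase
    phi_{n->n+1} = (phi_n + phi_{n+1}) / 2 and phi_n = 2*pi*(n-1)/N."""
    phi = 0.5 * (2 * np.pi * (n - 1) / N + 2 * np.pi * n / N)
    return np.array([[t, 1j * alpha * np.exp(-1j * phi)],
                     [1j * alpha * np.exp(1j * phi), t]])

print(np.round(rashba_hopping(1), 3))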
In terms of Green's functions, the current density between any two successive sites can be written as
J_σσ', n→ n+1 = 1/ħ(G^C_nσ, n+1σ' t^*_σσ',n → n+1
- h.c.),
where G^C_nσ, n+1σ' is the correlated Green's function <cit.>.
In the presence of the SOI, the expressions for the eigenvalues of a quantum ring are <cit.>:
E_↑ = -2 t cos(π/N) cos(ka+π/N)
+ 2 sin(ka+π/N)√(t^2 sin^2(π/N)+α^2)
and
E_↓ = -2 t cos(π/N) cos(ka+π/N)
- 2 sin(ka+π/N)√(t^2 sin^2(π/N)+α^2).
For example, with α = 0.2 eV, the eigenvalues of our present ring scatterer
are ± 2.03852, ± 2.03852, ± 1.07703, ± 1.07703, ± 0.961484, ± 0.961484 eV.
Therefore they are symmetric around E = 0.
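These numbers follow directly from the dispersion above; a minimal sketch that reproduces them (ka = 2*pi*m/N, m = 0, ..., N-1):

import numpy as np

t, alpha, N = 1.0, 0.2, 6
ka = 2 * np.pi * np.arange(N) / N
root = np.sqrt(t**2 * np.sin(np.pi / N)**2 + alpha**2)
base = -2 * t * np.cos(np.pi / N) * np.cos(ka + np.pi / N)
E_up = base + 2 * np.sin(ka + np.pi / N) * root
E_dn = base - 2 * np.sin(ka + np.pi / N) * root
print(np.round(np.sort(np.concatenate([E_up, E_dn])), 5))
# +-2.03852, +-1.07703, +-0.961484 eV, each doubly degenerate.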
The Rashba spin-orbit interaction causes a momentum-dependent spin splitting of electronic bands.
As time-reversal symmetry remains preserved in a two-terminal SOI device, we do not observe any
spin separation in the overall junction current for either symmetric or asymmetric-rings.
The transmission function is plotted in Fig. <ref>.
But within the channel a net spin current appears in both cases.
The circular current densities are plotted in Fig. <ref>.
Below, we summarize the relationship between spin-dependent current densities in the presence
of RSOI.
∙ Case I - Symmetric-ring with RSOI:
a. No spin-splitting is seen in the transmission function, i.e., T_↑↓(E) =
T_↓↑(E) = 0 and T_↑↑(E) = T_↓↓(E) or
T_↑(E) = T_↓(E). The transmission function is symmetric around E=0, i.e.,
T_σ(E) = T_σ(-E), σ = ↑, ↓, as shown in Fig. <ref>.
b. In the presence of spin-flip scattering, different
components of circular current do not remain conserved for each bond of the upper/lower arm. That is,
J_σσ, n → n+1^A(E) ≠ J_σσ, n+1 → n+2^A(E) and
J_σσ', n → n+1^A(E) ≠ J_σσ', n+1 → n+2^A(E).
σ = ↑, ↓, σ' = ↑, ↓ and A = U, L.
c. J_σσ, n → n+1(E)s are not symmetric around E=0 but
J_σσ', n → n+1(E)s are, i.e., J_σσ, n → n+1(E)
≠ J_σσ, n → n+1(-E) but, J_σσ', n → n+1(E)
= J_σ' σ, n → n+1(-E). σ = ↑, ↓, σ' = ↑, ↓.
d. For a particular bond, J_σσ, n → n+1(E) =
J_σ' σ', n → n+1(-E) and J_σσ', n → n+1(E) =
J_σ' σ, n → n+1(E). σ = ↑, ↓, σ' = ↑, ↓.
e. The net up and down circular bond currents remain conserved in the upper/lower arm, i.e.,
J_σ, n → n+1^A(E) = J_σ, n+1 → n+2^A(E), A = U, L. From now on we
simply denote them as J_σ^U(E) and J_σ^L(E) for the currents in the upper and lower arms,
respectively. They are asymmetric around E=0.
f. As shown in Fig. <ref> - (a), J_↑^U(E) = -J_↑^L(-E).
Similarly in Fig. <ref> - (b) we find that J_↓^U(E) = -J_↓^L(-E).
From both these figures i.e., Fig. <ref> - (a) - (b) we find that,
J_↑^U(E) = J_↓^U(-E) and J_↑^L(E) = J_↓^L(-E).
The net circular up and down spin current densities are plotted in Fig. <ref>(c). Here
J_↑ (E) = - J_↑ (-E) and J_↓ (E) = - J_↓ (-E).
And J_↑ (E) = J_↓ (-E). Both J_↑ (E) and J_↓ (E)
are anti-symmetric around E = 0.
g. The total charge current flowing through each bond remains conserved even in the presence of SOI i.e.,
J_C, n → n+1^A(E) = J_C, n+1 → n+2^A(E), A = U, L.
But they are equal and opposite in the upper and lower arms of the ring, such that
J_C^U(E) = - J_C^L(E) as we can see in Fig. <ref>(d).
h. The net circular charge current is zero (Fig. <ref>(f), orange line).
i. The circular spin bond current within the ring conductor remains conserved.
Moreover, the spin current in the upper arm of the ring is identical to that in the lower arm;
due to their overlap, we only see one curve in Fig. <ref>(e). The net spin current is anti-symmetric around
E = 0, i.e., J_S(E) = -J_S(-E) (Fig. <ref>(f)).
∙ Case II - Asymmetric ring with RSOI:
a. The transmission spectra have the same features as in case I-(a);
see the black curve in Fig. <ref>.
b. Same as case I-(b).
c. Same as case I-(c).
d. Same as case I-(d).
e. Same as case I-(e).
f. Unlike case I-(f), in an asymmetric-ring J_↑^U(E) ≠ -J_↑^L(-E)
(Fig. <ref>(g)) and J_↓^U(E) ≠ -J_↓^L(-E) (Fig. <ref>(h)).
However, as in case I-(f), here we have J_↑^U(E) = J_↓^U(-E) and
J_↑^L(E) = J_↓^L(-E) (as shown in Figs. <ref>(g) and (h)).
J_↑ (E) ≠ - J_↑ (-E) and J_↓ (E) ≠ - J_↓ (-E)
in this case (Fig. <ref> (i)). But J_↑ (E) = J_↓ (-E) and vice-versa.
J_↑ (E) and J_↓ (E) are asymmetric around E = 0.
g. As in the symmetric-ring, the total charge current is also conserved here. In an asymmetric-ring,
J_C^U(E) is not equal and opposite to J_C^L(E) (Fig. <ref>(j)).
h. Unlike the symmetric-ring, the net circular charge current is not zero here (Fig. <ref>(l), orange line). J_C(E)
is symmetric around E = 0.
i. The spin current densities in the upper and lower arms are plotted in Fig. <ref>(k), and the
net spin current density is shown in Fig. <ref>(l) by the black line. Their characteristic features are as in case I-(i).
Therefore, a pure spin current (with zero charge current) appears in a symmetric-ring.
As the spin current density is anti-symmetric around E=0, the net spin current I_S (given
by the area under the J_S - E curve, Eq. (<ref>)) is zero under the condition E_F = 0.
Therefore, to get a pure spin current we need to set the Fermi energy E_F away from zero.
In an asymmetric-ring, we have net charge as well as spin circular currents. As in this case
J_C(E) is symmetric around E = 0 and J_S(E) is anti-symmetric around E=0, at E_F = 0
the spin current I_S(V) vanishes and we obtain a pure charge current. Therefore we can achieve pure
spin-to-charge conversion and vice versa by selectively choosing the equilibrium Fermi energy and
the position of the drain. The phenomenon can be explained as follows.
In the presence of RSOI, the up and down spins move in opposite directions.
When an electron with charge e and momentum p moves in a magnetic field B,
the Lorentz force F=-e( p× B)/m acts on it in the direction perpendicular to its motion.
Similarly, when an electron moves in an electric field ξ = -∇V,
it experiences a magnetic field B_eff ∼ ξ× p/(mc^2)
in its rest frame <cit.>. In quantum wells with broken structural inversion
symmetry, the interfacial electric field along the z direction gives rise to the
RSOI coupling (σ× p)_z = σ_x p_y - σ_y p_x. Therefore,
when a spin moves in the x-y plane, it experiences a spin-dependent Lorentz force due to the effective
magnetic field B_eff <cit.>. This force is proportional to the square of the
transverse electric field ξ.
Here we consider a ring geometry that has doubly degenerate energy states, representing electronic wave functions
with equal and opposite momenta. For symmetric-ring, when these degeneracies remain preserved, the velocities
of the up and down spin are exactly equal and opposite to each
other. For asymmetric-ring, the degeneracy gets lifted. As a result, the velocities of the spins are not equal but
opposite to each other.
§.§ Effect of magnetic field and RSOI
In the presence of both the SOI and the magnetic field, the Hamiltonian H_ring
in Eq. <ref> is modified as:
H_ring = ∑_n=1^N (e^iθ Ψ_n+1^† t Ψ_n + e^-iθ Ψ_n^† t Ψ_n+1)
- ∑_n=1^N α[Ψ_n+1^†(iσ_x cosφ_n,n+1 +
iσ_y sinφ_n,n+1)
e^iθ Ψ_n + h.c. ].
In the presence of the SOI and the magnetic field, the hopping operator
t^SO_n → n+1 (Eq. <ref>) is modified as
t^SO_n → n+1 = e^-iθ([ t i α e^-iφ_n → n+1; iα e^i φ_n → n+1 t ]).
The expressions for the energy eigenvalues are <cit.>:
E_↑ = -2 t cos(π/N) cos(ka+θ+π/N)
+ 2 sin(ka+θ+π/N)√(t^2 sin^2(π/N)+α^2)
and
E_↓ = -2 t cos(π/N) cos(ka+θ+π/N)
- 2 sin(ka+θ+π/N)√(t^2 sin^2(π/N)+α^2).
Therefore, in the presence of the magnetic field and the SOI, the spin and Kramers degeneracies are
completely removed. For example, with ϕ = 0.3 and α = 0.2 eV, a six-site ring has
E = ± 0.358577, ± 0.489086, ± 1.47027, ± 1.55955, ± 1.91813, ± 1.95936 eV, so
the energy levels are symmetric around E = 0. Now we summarize the relationships between the spin-dependent
current densities in the presence of both the magnetic field and the spin-orbit interaction.
∙ Case III - Symmetric-ring with magnetic field and RSOI:
a. Spin separation is now seen in the transmission function, where
T_↑↑(E) ≠ T_↓↓(E). The spin-flipping parts are identical
i.e., T_↑↓(E) = T_↓↑(E) ≠ 0. Therefore net
up T_↑(E)( = T_↑↑(E) + T_↓↑(E))
and down T_↓(E)( = T_↓↓(E) + T_↑↓(E))
are not the same (see Fig. <ref>(a)). Hence a net spin transmission T_S(E) = T_↑(E) - T_↓(E)
is seen along with charge transmission i.e, T_C(E) = T_↑(E) + T_↓(E) (see Fig. <ref>(b)).
All these transmission functions are symmetric around E=0, i.e., T_σσ'(E) = T_σσ'(-E).
σ = ↑, ↓ and σ' = ↑, ↓. Therefore T_↑ (E),
T_↓(E), T_S(E), and T_C(E) are also symmetric around E = 0.
b. Same as Case I-(b).
c. All the spin components of the circular bond current densities are asymmetric around E=0.
d. For a particular bond, unlike case I-(d), J_↑↑, n → n+1(E) =
J_↓↓, n → n+1(-E), but similar to case I-(d),
J_↑↓, n → n+1(E) = J_↓↑, n → n+1(E).
e. Same as Case I-(e).
f. Similar to Case I-(f), J_↑^U(E) = - J_↑^L(-E) (Fig. <ref>(a))
and J_↓^U(E) = - J_↓^L(-E) (Fig. <ref>(b)). But here,
J_↑^A(E) ≠ J_↓^A(-E), and vice-versa. A = U or L. Net
circular up and down spin currents are anti-symmetric around E = 0, i.e.,
J_↑ (E) = - J_↑ (-E) and J_↓ (E) = - J_↓ (-E) (Fig. <ref>(c)).
Unlike, Case I-(f), here we see J_↑ (E) ≠ J_↓ (-E).
g. The total charge current flowing through each bond remains conserved. J_C (E) in the upper and lower arms
are not equal. They propagate in the same direction around each eigen energy. J_C^U(E) = -J_C^L(-E) (Fig. <ref>(d)).
h. J_C(E) is anti-symmetric around E=0 (Fig. <ref>(f) shown by orange curve).
i. Circular bond spin current within the ring conductor remains conserved, though
J_S^U(E) and J_S^L(E) are not equal. J_S^U(E) = -J_S^L(-E) (Fig. <ref>(e)). J_S(E) is anti-symmetric
around E=0 (Fig. <ref>(f) shown by black curve).
∙ Case IV - Asymmetric-ring with magnetic field and RSOI:
a. Similar to symmetric-ring, T_↑↑(E) ≠ T_↓↓(E) and
T_↑↓(E) = T_↓↑(E) ≠ 0. The net
up T_↑(E) and down T_↓(E) transmission are also not the same (see Fig. <ref>(c)).
Hence T_S(E) and T_C(E) are both non-zero here (see Fig. <ref>(d)). But unlike symmetric case,
T_↑↑(E), T_↑↓(E), T_↓↓(E), T_↓↑(E),
are asymmetric around E=0. But T_↑(E), T_↓(E), T_C(E) and T_S(E) are
symmetric around E = 0.
b. Same as case I-(b).
c. J_↑↑(E), J_↑↓(E), J_↓↑(E) and
J_↓↓(E) are asymmetric around E = 0.
d. Unlike case I-(d), for a particular bond, J_σσ, n → n+1(E) ≠
J_σ' σ', n → n+1(-E). But J_σσ', n → n+1(E) =
J_σ' σ, n → n+1(E). σ = ↑, ↓, σ' = ↑, ↓.
e. Same as case I-(e).
f. Unlike case I-(f), J_↑^U(E) ≠ - J_↑^L(-E) (Fig. <ref>(g)) and
J_↓^U(E) ≠ - J_↓^L(-E) (Fig. <ref>(h)). J_↑^A(E) ≠ J_↓^A(-E)
and vice-versa (A = U or L). Net J_↑ (E) and J_↓ (E) are asymmetric around E = 0
(Fig. <ref>(i)). J_↑ (E) is not equal to J_↓ (-E) and vice-versa.
g. The charge current is also conserved here. In the asymmetric-ring, J_C^U(E) is not equal
and opposite to J_C^L(E) (Fig. <ref>(j)). Rather, they flow in the same direction in the two
arms for most of the energy window.
h. The net circular charge current is non-zero (Fig. <ref>(l), shown by orange curve). J_C(E) is
asymmetric around E=0.
i. As in case I-(i), J_S(E) is conserved, but unlike case I-(i) it is not the same in
the upper and lower arms (Fig. <ref>(k)). The net circular spin current density is asymmetric around E=0
(Fig. <ref>(l), shown by the black curve).
Since in a symmetric-ring both J_C(E) and J_S(E) are non-zero and anti-symmetric around E = 0,
the circular charge and spin currents become zero if we set the Fermi energy to 0. On the other hand,
in an asymmetric-ring these are asymmetric around E = 0; therefore we have net circular charge and spin
currents for any choice of the Fermi energy.
§ CONCLUSION
For a perfect (disorder-free) open quantum ring, the transmission function versus energy
characteristics are always symmetric around E = 0, yet the circular current density - energy
spectra are affected by degeneracy breaking. Our system is composed of a ring
attached to two semi-infinite electrodes. A quantum ring
has a two-fold degeneracy, as electrons moving in the clockwise and anti-clockwise
directions have the same energy. This degeneracy can be lifted by connecting the electrodes
asymmetrically to the ring. Time-reversal symmetry can be broken by adding a magnetic field to the
system, whereas the spin degeneracy is lifted by adding spin-orbit interaction. The energy eigenvalues
of a perfect ring are symmetric around E = 0 in the presence or absence of
the magnetic field, the SOI, or both. The peaks in the current density and the transmission function
appear at these energy eigenvalues. The effects of these factors are summarized as follows:
∙ In the absence of magnetic field and SOI, the circular charge current density J_C(E)
is zero in a symmetric-ring and symmetric around E=0 in an asymmetric-ring.
Thus net circular current is zero and non-zero for symmetric and asymmetric-rings, respectively.
Spin degeneracy is not broken here, thus spin current is zero.
∙ In the presence of a magnetic-field, the charge circular current is anti-symmetric
and asymmetric around E = 0 for symmetric and asymmetric-rings, respectively. Thus in
symmetric-ring we have net non-zero circular charge current for E_F ≠ 0.
For asymmetric-ring it is non-zero for any choice of E_F. The spin current is zero here.
∙ In the presence of SOI, up and down spins separation happens within the ring. The circular charge
current density is zero for the symmetric-ring and it is non-zero and symmetric around E = 0 for the asymmetric-ring.
The circular spin current density is non-zero and anti-symmetric around E = 0 for both situations.
Therefore in a symmetric-ring we have pure spin circular current (as circular charge current is zero) setting the
Fermi-energy at any value other than zero. On the other hand, in an asymmetric-ring, we have both circular charge and
spin current for E_F ≠ 0 and at E_F = 0 we have pure charge current (where spin current is zero).
∙ In the presence of both magnetic field and SOI, in a symmetric-ring, both
the circular charge and spin current densities become non-zero. Both of them are anti-symmetric around E = 0.
Therefore we have non-zero I_C and I_S if E_F ≠ 0, and at E_F = 0 both of them become zero.
For asymmetric-ring J_C(E) and J_S(E) are asymmetric around E = 0. Therefore we have both non-zero
circular charge and spin currents for any choices of the Fermi-energy.
∙ The transmission function is always symmetric around E = 0. Therefore the junction
current is non-zero for any choice of E_F. Unless both the spin and Kramers degeneracies are removed,
spin separation does not appear in the overall junction current. Therefore T_S(E) is zero unless
both the magnetic field and the SOI are finite.
As our study focuses on the qualitative nature of the current inside and outside of a loop junction, the results
will serve the design of new-generation electronic and spintronic devices.
§ ACKNOWLEDGEMENTS
The author acknowledges the financial support through the National Postdoctoral Fellowship (NPDF), SERB file
No. PDF/2022/001168.
§ CIRCULAR CURRENT DENSITY FROM GROUP VELOCITY
The sign and magnitude of the current density at ± E can be explained from the group velocity
v_k = 1/ħ ∂E/∂k. We can express the circular current density as
J_C(E) ∼ ∂E/∂k <cit.>. Therefore, for a perfect ring,
J_C is proportional to -2 t sin(ka) (using Eq. <ref>). Let us consider Fig. <ref>(a).
Here, in the absence of any external interaction, for a symmetric-ring, the circular current
density in the upper or lower arm is symmetric around E=0. Thus,
J_U(E = 1 ) = J_U(E = -1 ).
For an isolated ring, E = 1 eV corresponds to m = 1 with ka = π/3 and E = -1 eV
corresponds to m = 2 with ka = 2π/3 (here we consider only the positive values of the momentum).
Hence at E = ± 1 eV, J_U(E) ∼ 2 t √(3)/2, so J_U(E) is symmetric around E = 0.
Similarly in the presence of interactions, from the dispersion relations stated in
Eq. <ref>, Eq. <ref>, Eq. <ref>, Eq. <ref>, and Eq. <ref>, we can predict the
circular current at any ± E.
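A one-line numerical check of this argument (a sketch only; t = 1 eV as in the text):

import numpy as np

# dE/dk ~ -2 t sin(ka): the magnitude at ka = pi/3 (E = +1 eV) equals that at
# ka = 2*pi/3 (E = -1 eV), which is why the field-free symmetric-ring density
# is symmetric around E = 0.
t = 1.0
for ka in (np.pi / 3, 2 * np.pi / 3):
    print(round(-2 * t * np.sin(ka), 4))   # both: -1.7321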
99
cir1a A. Nakanishi and M. Tsukada, Phys. Rev. Lett. 87,
126801 (2001).
cir2 K. Tagami and M. Tsukada, Curr. Appl. Phys. 3,
439 (2003).
cir3 M. Tsukada, K. Tagami, K. Hirose, and N. Kobayashi, J. Phys.
Soc. Jpn. 74, 1079 (2005).
cir4 G. Stefanucci, E. Perfetto, S. Bellucci, and M. Cini,
Phys. Rev. B 79, 073406 (2009).
Nitzan1 D. Rai, O. Hod, and A. Nitzan, J. Phys. Chem. C 114,
20583 (2010).
cir6 D. Rai, O. Hod, and A. Nitzan, Phys. Rev. B 85,
155440 (2012).
SKM S. K. Maiti, Eur. Phys. J. B 86, 296 (2013).
cir8 S. K. Maiti, J. Appl. Phys. 117, 024306 (2015).
cir9 M. Patra and S. K. Maiti, Sci. Rep. 7, 43343 (2017).
cir10 U. Dhakal and D. Rai, J. Phys.: Condens. Matter 31,
125302 (2019).
ring1 A. Lorke, R. J. Luyken, A. O. Govorov, J. P. Kotthaus, J. M. Garcia, and P. M. Petroff,
Phys. Rev. Lett. 84, 2223 (2000).
ring2 R.J. Warburton, C. Schäflein, D. Haft, F. Bickel, A. Lorke,
K. Karrai, J.M. Garcia, W. Schoenfeld, and P. M. Petroff, Nature (London) 405, 926 (2000).
ring3 U. Fano, Phys. Rev. 124, 1866 (1961).
ring4 J. Göres, et al., Phys. Rev. B 62 2188 (2000).
mpprb M. Patra and S. K. Maiti, Phys. Rev. B 100, 165408 (2019).
jay1 A. M. Jayannavar and P. Singha Deo, Phys. Rev. B 51, 10175 (1995).
jay2 S. Bandopadhyay, P. Singha Deo and A. M. Jayannavar, Phys. Rev. B 70, 075315
(2004).
kra1 S. Lieu, M. McGinley, O. Shtanko, N. R. Cooper, and A. V. Gorshkov,
Phys. Rev. B 105, L121104 (2022).
kra2 H. A. Kramers, Koninkl. Ned. Akad. Wetenschap., Proc. 33,
959 (1930).
kra3 E. Wigner, Nachrichten von der Gesellschaft der
Wissenschaften zu Göttingen, Mathematisch-Physikalische
Klasse 1932, 546 (1932).
spind Lin, W., Li, L., Doḡan, F. et al. Nat Commun 10, 3052 (2019).
dev1 S. A. Wolf, D. D. Awschalom, R. A. Buhrman, J. M. Daughton,
S. von Molnár, M. L. Roukes, A. Y. Chtchelkanova, and D. M. Treger,
Science 294, 1488 (2001).
th1 I. Zutic, J. Fabian, and S. Das Sarma, Rev. Mod. Phys.
76, 323 (2004).
th2 S. Datta and B. Das, Appl. Phys. Lett. 56,
665 (1990).
th3 S. Bellucci and P. Onorato, Phys. Rev. B 78,
235312 (2008).
th4 P. Földi, B. Molnár, M. G. Benedict, and F. M.
Peeters, Phys. Rev. B 71, 033309 (2005).
th5 P. Földi, M. G. Benedict, O. Kálmán, and F. M.
Peeters, Phys. Rev. B 80, 165303 (2009).
th6 S. Bellucci and P. Onorato, J. Phys.: Condens. Matter
19, 395020 (2007).
th7 S.-Q. Shen, Z.-J. Li, and Z. Ma, Appl. Phys. Lett.
84, 996 (2004).
th8 D. Frustaglia and K. Richter, Phys. Rev. B 69,
235310 (2004).
th9 D. Frustaglia, M. Hentschel, and K. Richter, Phys.
Rev. Lett. 87, 256602 (2001).
th10 M. Hentschel, H. Schomerus, D. Frustaglia, and K. Richter,
Phys. Rev. B 69, 155326 (2004).
scatter1 V. B.-Moshe, D. Rai, S. S. Skourtis, and A. Nitzan, J. Chem. Phys. 133,
054105 (2010).
tb J. C. Slater and G. F. Koster, Phys. Rev. 94, 1498 (1954).
gefen H.F. Cheung, Y. Gefen, E.K. Reidel, and W.H. Shih, Phys. Rev. B 37, 6050
(1988).
ding G. H. Ding and B. Dong, Phys. Rev. B 76, 125301
(2007).
butt1 M. Büttiker, Y. Imry, and R. Landauer, Phys. Lett. A
96, 365 (1983).
levy L. P. Lévy, G. Dolan, J. Dunsmuir, and H. Bouchiat,
Phys. Rev. Lett. 64, 2074 (1990).
jari E. M. Q. Jariwala, P. Mohanty, M. B. Ketchen, and R. A.
Webb, Phys. Rev. Lett. 86, 1594 (2001).
bir N. O. Birge, Science 326, 244 (2009).
chand V. Chandrasekhar, R. A. Webb, M. J. Brady, M. B. Ketchen,
W. J. Gallagher, and A. Kleinsasser, Phys. Rev. Lett. 67, 3578
(1991).
blu H. Bluhm, N. C. Koshnick, J. A. Bert, M. E. Huber, and
K. A. Moler, Phys. Rev. Lett. 102, 136802 (2009).
ambe V. Ambegaokar and U. Eckern, Phys. Rev. Lett. 65,
381 (1990).
schm1 A. Schmid, Phys. Rev. Lett. 66, 80 (1991).
schm2 U. Eckern and A. Schmid, Europhys. Lett. 18,
457 (1992).
peet L. K. Castelano, G.-Q. Hai, B. Partoens, and F. M. Peeters,
Phys. Rev. B 78, 195315 (2008).
spl J. Splettstoesser, M. Governale, and U. Zülicke,
Phys. Rev. B 68, 165341 (2003).
Rashba0 Y. Feng, et al., Nat Commun. 10, 4765 (2019).
Rashba1 Y.A. Bychkov, E.I. Rashba, J. Exp. Theor. Phys. Lett.
39, 78 (1984)
Rashba2 A. Manchon, H. Koo, J. Nitta, S. M. Frolov, and R. A.
Duine, Nature Mater 14, 871 (2015).
RashbaTune L. Meier, G. Salis, I. Shorubalko, E. Gini,
S. Schön, K. Ensslin, Nat. Phys. 3, 650 (2007).
coG1 L. Wang, K. Tagami, M. Tsukada, Jpn J. Appl. Phys. 43,
2779 (2004).
coG2 H. K. Yadalam and U. Harbola, Phys. Rev. B 94, 115424 (2016).
maiti S. K. Maiti, M. Dey, S. Sil, A. Chakrabarti, and S. N. Karmakar,
Europhys. Lett. 95, 57008 (2011).
lorentz1 S.-Q. Shen, Phys. Rev. Lett. 95, 187203 (2005).
lorentz2 C. J. Kennedy, G. A. Siviloglou, H. Miyake, W. C. Burton, and W. Ketterle,
Phys. Rev. Lett. 111, 225301 (2013).
mp16 M. Patra, J. Phys.: Condens. Matter 34, 325301 (2022).
fibocharge M. Patra and S.K. Maiti, Eur. Phys. J. B 89, 88 (2016).
|
http://arxiv.org/abs/2307.00644v1
|
20230702194550
|
What if we tried Less Power? -- Lessons from studying the power of choices in hashing-based data structures
|
[
"Stefan Walzer"
] |
cs.DS
|
[
"cs.DS"
] |
|
http://arxiv.org/abs/2307.02935v1
|
20230706115236
|
DisAsymNet: Disentanglement of Asymmetrical Abnormality on Bilateral Mammograms using Self-adversarial Learning
|
[
"Xin Wang",
"Tao Tan",
"Yuan Gao",
"Luyi Han",
"Tianyu Zhang",
"Chunyao Lu",
"Regina Beets-Tan",
"Ruisheng Su",
"Ritse Mann"
] |
cs.CV
|
[
"cs.CV"
] |
Disentanglement of Asymmetrical Abnormality on Bilateral Mammograms
Department of Radiology, Netherlands Cancer Institute (NKI),
1066CX Amsterdam, The Netherlands GROW School for Oncology and Development Biology, Maastricht University,
6200 MD, Maastricht, The Netherlands Faculty of Applied Science, Macao Polytechnic University,
999078, Macao, ChinaDepartment of Radiology and Nuclear Medicine, Radboud University
Medical Centre, Nijmegen, The Netherlands Erasmus Medical Center, Erasmus University,
3015 GD, Rotterdam, The Netherlands
Corresponding author: [email protected]
Disentanglement of Asymmetrical Abnormality on Bilateral Mammograms using Self-adversarial Learning
Xin Wang1,2 Tao Tan 1,3 * Yuan Gao1,2 Luyi Han1,4 Tianyu Zhang1,2,4 Chunyao Lu1,4 Regina Beets-Tan1,2 Ruisheng Su5 Ritse Mann1,4
August 1, 2023
======================================================================================================================================
Asymmetry is a crucial characteristic of bilateral mammograms (Bi-MG) when abnormalities are developing. It is widely utilized by radiologists for diagnosis.
The question of “what the symmetrical Bi-MG would look like when the asymmetrical abnormalities have been removed?" has not yet received strong attention in the development of algorithms on mammograms.
Addressing this question could provide valuable insights into mammographic anatomy and aid in diagnostic interpretation.
Hence, we propose a novel framework, DisAsymNet, which utilizes asymmetrical abnormality transformer guided self-adversarial learning for disentangling abnormalities and symmetric Bi-MG. At the same time, our proposed method is partially guided by randomly synthesized abnormalities.
We conduct experiments on three public and one in-house dataset, and demonstrate that our method outperforms existing methods in abnormality classification, segmentation, and localization tasks.
Additionally, reconstructed normal mammograms can provide insights toward better interpretable visual cues for clinical diagnosis. The code will be accessible to the public.
§ INTRODUCTION
Breast cancer (BC) is the most common cancer in women and incidence is increasing <cit.>.
With the wide adoption of population-based mammography screening programs for early detection of BC, millions of mammograms are conducted annually worldwide <cit.>.
Developing artificial intelligence (AI) for abnormality detection is of great significance for reducing the workload of radiologists and facilitating early diagnosis <cit.>.
Besides using the data-driven manner, to achieve accurate diagnosis and interpretation of the AI-assisted system output, it is essential to consider mammogram domain knowledge in a model-driven fashion.
As described in the BI-RADS lexicon <cit.>, the asymmetry of bilateral breasts is a crucial clinical factor for identifying abnormalities.
In clinical practice, radiologists typically compare the bilateral craniocaudal (CC) and mediolateral oblique (MLO) projections and look for asymmetry between the right and left views. Notably, the right and left views do not exhibit pixel-level symmetry, owing to differences in imaging positions for each breast and biological variations between the two views. Leveraging bilateral mammograms (Bi-MG) is one of the key steps to detect asymmetrical abnormalities, especially subtle and non-typical abnormalities.
To mimic the process of radiologists, previous studies only extracted simple features from the two breasts and used fusion techniques to perform the classification <cit.>. Besides these simple feature-fusion methods, recent studies have demonstrated the powerful ability of transformer-based methods to fuse information in multi-view (MV) analysis (CC and MLO view of unilateral breasts) <cit.>.
However, most of these studies formulate the diagnosis as an MV analysis problem without dedicated comparisons between the two breasts.
The question of “what the Bi-MG would look like if they were symmetric?" is often considered when radiologists determine the symmetry of Bi-MG. It can provide valuable diagnostic information and guide the model in learning the diagnostic process akin to that of a human radiologist.
Recently, two studies explored generating healthy latent features of target mammograms by referencing contralateral mammograms, achieving state-of-the-art (SOTA) classification performance <cit.>.
None of these studies is able to reconstruct a normal pixel-level symmetric breast in the model design.
Image generation techniques <cit.> for generating symmetric Bi-MG have not yet been investigated.
Visually, what remains after the elimination of asymmetrical abnormalities is the appearance of symmetric Bi-MG.
Disentanglement learning <cit.>, supervised with the aid of synthetic images, is a more interpretable strategy for separating asymmetric anomalies from normal regions at the image level.
In this work, we present a novel end-to-end framework, DisAsymNet, which consists of an asymmetric transformer-based classification (AsyC) module and an asymmetric abnormality disentanglement (AsyD) module.
The AsyC emulates the radiologist's process of inspecting unilateral views and comparing Bi-MG for abnormality classification.
The AsyD simulates the process of disentangling the abnormalities and normal glands at the pixel level.
Additionally, we leverage a self-adversarial learning scheme to reinforce the two modules' capacities, where the feedback from the AsyC is used to guide the AsyD's disentangling, and the AsyD's output is used to refine the AsyC in detecting subtle abnormalities.
To facilitate the learning of semantic symmetry, we also introduce Synthesis, combining randomly created synthetic asymmetrical Bi-MG with real mammograms to supervise the learning process.
Our contributions are summarized as follows:
(1) We propose a framework comprising the AsyC and AsyD modules for exploiting clinical asymmetry classification and localization in an interpretable way, without using any real pixel-level asymmetry annotations.
(2) We propose Synthesis to simulate asymmetry using normal pairs of views, guiding the model to produce normal symmetric breast views and to indicate the abnormal regions in an accurately supervised fashion.
(3) We demonstrate the robustness of our approach on four mammogram datasets for classification, segmentation, and localization tasks.
§ METHODOLOGY
In this study, the paired Bi-MG of the same projection is required, which can be formulated as ℐ = {x^r,x^l,y^asy,y^r,y^l}. Here, x∈ℝ ^H × W represents a mammogram with the size of H × W, x^r and x^l correspond to the right and left view respectively.
y^r,y^l,y^asy∈{0,1} are binary labels, indicating abnormality for each side, and the asymmetry of paired Bi-MG.
A paired Bi-MG is considered symmetrical only if both sides are normal.
The overall framework of our DisAsymNet is illustrated in Fig. <ref>.
Specifically, the AsyC module takes a pair of Bi-MG as input and predicts if it is asymmetric and if any side is abnormal.
We employ an online Class Activation Mapping (CAM) module <cit.> to generate heatmaps for segmentation and localization.
Subsequently, the AsyD module disentangles the abnormality from the normal part of the Bi-MG through the self-adversarial learning and Synthesis method.
§.§ Asymmetric Transformer-based Classification Module
The AsyC module consists of shared encoders ψ_e and asymmetric transformer layers ψ_asyt to extract features and learn bilateral-view representations from the paired mammograms.
In this part, we first extract the starting features f of each side (f^r, f^l represent the right and left features respectively) through ψ_e in the latent space for left-right inspection and comparison, which can be denoted as f = ψ_e(x).
Then the features are fed into the ψ_asyt.
Unlike other MV transformer methods <cit.> that use only cross-attention (CA), our asymmetric transformer employs self-attention (SA) and CA in parallel to aggregate information from both self and contralateral sides to enhance the side-by-side comparison.
This is motivated by the fact that radiologists commonly combine unilateral (identifying focal suspicious regions according to texture, shape, and margin) and bilateral analyses (comparing them with symmetric regions in the contralateral breasts) to detect abnormalities in mammography <cit.>.
As shown in the right of Fig. <ref>, starting features f are transformed into query (f_Q), key (f_K), and value (f_V) vectors through feed-forward network (FFN) layers. The SA and CA modules use multi-head attention (MHA), ψ^h=8_mha(f_Q, f_K, f_V) with the number of heads h=8, which is a standard component in transformers and has already gained popularity in medical image fields <cit.>.
In the SA, the query, key, and value vectors are from the same features, f_SA=ψ^h=8_mha(f_Q, f_K, f_V).
While in the CA, we replace the key and value vectors with those from the contralateral features, f^l_CA=ψ^h=8_mha(f_Q^l, f_K^r, f_V^r) or f^r_CA=ψ^h=8_mha(f_Q^r, f_K^l, f_V^l).
Then, the starting feature f, and the attention features f_SA and f_CA are concatenated
in the channel dimension
and fed into the FFN layers to fuse the information and maintain the same size as f.
The transformer block is repeated N=12 times to iteratively integrate information from Bi-MG, resulting in the output feature f^r_out, f^l_out = ψ^N=12_asyt(f^r, f^l).
To predict the abnormal probability ŷ of each side, the output features f_out are fed into the abnormality classifier.
For the asymmetry classification of paired mammograms, we compute the absolute difference of the output features between the right and left sides (f^asy_out = abs(f^r_out - f^l_out), which maximizes the difference between the two features) and feed it into the asymmetry classifier.
We calculate the classification loss using the binary cross-entropy (BCE) loss ℒ_bce, denoted as ℒ_diag = ℒ_cls(y^asy, y^r, y^l, x^r, x^l) = ℒ_bce(y^asy, ŷ^asy) + ℒ_bce(y, ŷ).
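A minimal PyTorch sketch of one such block is given below. It illustrates the parallel SA/CA design described above and is not the authors' released code; the token layout (flattened CNN feature maps) and the sharing of attention weights between the two sides are assumptions.

import torch
import torch.nn as nn

class AsymmetricTransformerBlock(nn.Module):
    """Parallel self-attention and cross-attention over right/left view tokens,
    followed by concatenation and an FFN that restores the feature width."""
    def __init__(self, dim: int, heads: int = 8):
        super().__init__()
        self.sa = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.ca = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.fuse = nn.Sequential(nn.Linear(3 * dim, dim), nn.GELU(),
                                  nn.Linear(dim, dim))

    def forward(self, f_r, f_l):
        # f_r, f_l: (batch, tokens, dim) feature tokens of the right/left views
        sa_r, _ = self.sa(f_r, f_r, f_r)      # query, key, value from the same side
        sa_l, _ = self.sa(f_l, f_l, f_l)
        ca_r, _ = self.ca(f_r, f_l, f_l)      # key/value from the contralateral side
        ca_l, _ = self.ca(f_l, f_r, f_r)
        out_r = self.fuse(torch.cat([f_r, sa_r, ca_r], dim=-1))
        out_l = self.fuse(torch.cat([f_l, sa_l, ca_l], dim=-1))
        return out_r, out_l

Stacking N = 12 such blocks and feeding |f^r_out - f^l_out| into the asymmetry classifier mirrors the description above.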
§.§ Disentangling via Self-adversarial Learning
What would the Bi-MG look like when the asymmetrical abnormalities have been removed?
Unlike previous studies <cit.>, which only generated normal features in the latent space, our AsyD module uses weight-sharing U-Net-like decoders ψ_g to generate both abnormal (x_ab) and normal (x_n) images for each side through a two-channel separation, as x_n, x_ab = ψ_g(f_out).
We constrain the model to reconstruct images realistically using L1 loss (ℒ_l1) with the guidance of CAMs (M), as follows,
ℒ_rec = ℒ_l1((1-M) x, (1-M) x_n) + ℒ_l1(M x, x_ab).
However, it is difficult to train the generator in a supervised manner due to the lack of annotations of the location for asymmetrical pairs.
Inspired by previous self-adversarial learning work <cit.>, we introduce a frozen discriminator ψ_d to impose constraints on the generator to address this challenge.
The frozen discriminator comprises the same components as AsyC.
In each training step, we update the discriminator parameters by copying them from the AsyC, leading ψ_g to generate symmetrical Bi-MG.
The ψ_d enforces symmetry in the paired Bi-MG, which can be denoted as ℒ_dics = ℒ_cls(y^asy=0, y^r=0, y^l=0, x_n^r, x_n^l).
Furthermore, we use generated normal Bi-MG to reinforce the ability of AsyC to recognize subtle asymmetry and abnormal cues, as ℒ_refine = ℒ_cls(y^asy, y^r, y^l, x_n^r, x_n^l).
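A compact sketch of these two ingredients (hypothetical helper functions; the CAM map M is assumed to be rescaled to [0, 1] and upsampled to the image size):

import torch.nn.functional as F

def asyd_reconstruction_loss(x, x_n, x_ab, cam):
    """L_rec: reconstruct the normal part outside the CAM and the abnormal
    part inside it, following the equation above."""
    return (F.l1_loss((1 - cam) * x, (1 - cam) * x_n)
            + F.l1_loss(cam * x, x_ab))

def refresh_frozen_discriminator(discriminator, classifier):
    """Self-adversarial step: the discriminator is a frozen copy of the current
    AsyC classifier, used to push generated pairs towards 'symmetric/normal'."""
    discriminator.load_state_dict(classifier.state_dict())
    for p in discriminator.parameters():
        p.requires_grad_(False)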
§.§ Asymmetric Synthesis for Supervised Reconstruction
To alleviate the lack of pixel-wise asymmetry annotations, in this study we propose a random synthesis method to supervise the disentanglement. Training with synthetic artifacts is a low-cost but efficient way to supervise the model to better reconstruct images <cit.>.
In this study, we randomly select the number n∈[1, 2, 3] of tumors t from a tumor set 𝒯 inserting into one or both sides of randomized selected symmetric Bi-MG (x^r, x^l|y^asy=0).
For each tumor insertion, we randomly select a position within the breast region.
The tumors and symmetrical mammograms are combined by an alpha blending-based method <cit.>, which can be denoted by x|fake = x∏ ^n_k=1 (1-α_k) + ∑ ^n_k=1 t_k α_k, t ∈𝒯.
The alpha weight α_k is a 2D Gaussian map whose covariance is determined by the size of
the k-th tumor t, representing the transparency of the tumor pixels.
The tumor set 𝒯 is collected from real-world datasets. Specifically, to preserve the weakly-supervised setting of the segmentation and localization tasks, we collect the tumors from the DDSM dataset as 𝒯 and train the model on the INBreast dataset.
When training the model on other datasets, we use the tumor set collected from the INBreast dataset. Thus, the supervised reconstruction loss is ℒ_syn=ℒ_l1(x|real, x_n|fake), where x|real is the real image before synthesis and x_n|fake is the disentangled normal image from the synthesised image x|fake.
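A sketch of the alpha-blending step for a single tumour patch is shown below (illustrative only: the Gaussian width parameter and the assumption that the patch fits inside the breast region are ours):

import numpy as np

def insert_tumor(image, tumor, center, sigma_scale=0.25):
    """Blend one tumour patch into a normal mammogram with a 2D Gaussian
    alpha map whose spread follows the patch size."""
    h, w = tumor.shape
    ys, xs = np.mgrid[0:h, 0:w]
    cy, cx = (h - 1) / 2.0, (w - 1) / 2.0
    alpha = np.exp(-0.5 * (((ys - cy) / (sigma_scale * h)) ** 2
                           + ((xs - cx) / (sigma_scale * w)) ** 2))
    out = image.astype(float).copy()
    y0, x0 = center[0] - h // 2, center[1] - w // 2
    region = out[y0:y0 + h, x0:x0 + w]
    out[y0:y0 + h, x0:x0 + w] = region * (1 - alpha) + tumor * alpha
    return out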
§.§ Loss Function
For each training step, there are two objectives, training AsyC and AsyD module, and then is the refinement of AsyC.
For the first, the loss function can be denoted by ℒ = λ_1ℒ_diag + λ_2ℒ_rec + λ_3ℒ_dics + λ_4ℒ_syn. The values of weight terms λ_1, λ_2, λ_3, and λ_4 are experimentally set to be 1, 0.1, 1, and 0.5, respectively.
The loss of the second objective is ℒ_refine as aforementioned. It should be noted that the loss computation does not involve the usage of any real pixel-level annotations of abnormalities.
§ EXPERIMENTAL
§.§ Datasets
This study reports experiments on four mammography datasets.
The INBreast dataset <cit.> consists of 115 exams with BI-RADS labels and pixel-wise annotations, comprising a total of 87 normal (BI-RADS=1) and 342 abnormal (BI-RADS ≠1) images.
The DDSM dataset <cit.> consists of 2,620 cases, encompassing 6,406 normal and 4,042 (benign and malignant) images with outlines generated by an experienced mammographer.
The VinDr-Mammo dataset <cit.> includes 5,000 cases with BI-RADS assessments and bounding box annotations, consisting of 13,404 normal (BI-RADS=1) and 6,580 abnormal (BI-RADS≠1) images.
The In-house dataset comprises 43,258 mammography exams from 10,670 women between 2004-2020, collected from a hospital with IRB approvals.
In this study, we randomly select 20% of the women in the full dataset, comprising 6,000 normal (BI-RADS=1) and 28,732 abnormal (BI-RADS≠1) images. Due to a lack of annotations, the In-house dataset is only utilized for classification tasks.
Each dataset is randomly split into training, validation, and testing sets at the patient level in an 8:1:1 ratio (except for INBreast, which is split 6:2:2 to keep enough normal samples for testing).
§.§ Experimental Settings
The mammogram pre-processing is conducted following the pipeline proposed by <cit.>.
Then we standardize the image size to 1024×512 pixels.
For training models, we employ random zooming and random cropping for data augmentation.
We employ ResNet-18 <cit.> with ImageNet pre-trained weights as the common backbone for all methods.
The Adam optimizer is utilized with an initial learning rate (LR) of 0.0001, and a batch size of 8.
The training process on the INBreast dataset is conducted for 50 epochs with a LR decay of 0.1 every 20 epochs.
For the other three datasets, the training is conducted separately on each one with 20 epochs and a LR decay of 0.1 per 10 epochs.
All experiments are implemented in the Pytorch framework and an NVIDIA RTX A6000 GPU (48GB). The training takes 3-24 hours (related to the size of the dataset) on each dataset.
To assess the performance of different models in classification tasks, we calculate the area under the receiver operating characteristic curve (AUC) metric. For the segmentation task, we utilize Intersection over Union (IoU), Intersection over Reference (IoR), and Dice coefficients. For the localization task, we compute the mean accuracies of IoU or IoR values above a given threshold, following the approach <cit.>. Specifically, we evaluated the mean accuracy with thresholds for IoU at 0.1, 0.2, 0.3, 0.4, 0.5, 0.6, and 0.7, while the thresholds for IoR are 0.1, 0.25, 0.5, 0.75, and 0.9.
§.§ Experimental Results
We compare our proposed DisAsymNet with the single-view baseline ResNet-18, the attention-driven method HAM <cit.>, the MV-based late-fusion method <cit.>, the current SOTA MV-based cross-view-transformer (CVT) <cit.>, and the attention-based MV method proposed by Wang et al. <cit.> on classification, segmentation, and localization tasks.
We also conduct an ablation study to verify the effectiveness of
“AsyC", “AsyD", and “Synthesis". Note that, the asymmetric transformer (asyt) is a core component of our proposed “AsyC".
Thus, when ablating the “AsyC", we only drop the asyt and keep the encoders and classifiers.
Comparison of performance in different tasks:
For the classification task, the AUC results of abnormal classification are shown in Table <ref>.
Our method outperforms all the single-based and MV-based methods in these classification tasks across all datasets.
Furthermore, the ablation studies demonstrate the effectiveness of each proposed model component.
In particular, our “AsyC" only method already surpasses the CAT method, indicating the efficacy of the proposed combination of SA and CA blocks over using CA alone.
Additionally, our “AsyD" only method improves the performance compared to the late-fusion method, demonstrating that our disentanglement-based self-adversarial learning strategy can refine classifiers and enhance the model's ability to classify anomalies and asymmetries.
The proposed “Synthesis" method further enhances the performance of our proposed method.
Moreover, we investigate the ability of different methods to classify abnormalities under various percentages of DDSM, VinDr, and In-house datasets.
The INBreast dataset was excluded from this experiment due to its small size.
Fig. <ref> illustrates the robustness of our method's advantage: our approach consistently outperforms the other methods, regardless of the size of the training data and the data source.
For the weakly supervised segmentation and localization tasks, results are shown in Table <ref>.
The results demonstrate that our proposed framework achieves superior segmentation and localization performance compared to other existing methods across all evaluation metrics.
The results of the ablation experiment also reveal that all modules incorporated in our framework offer improvements for the tasks.
Visualization:
Fig. <ref> displays multiple disentangled normal Bi-MG cases.
Our model achieves the efficient removal of asymmetrical abnormalities while retaining normal symmetric tissue. Without using pixel-level asymmetry ground truth from the “Synthesis" method, our generator tends to excessively remove asymmetric abnormalities at the cost of leading to the formation of black holes or areas that are visibly darker than the surrounding tissue because of the limitation of our discriminator and lack of pixel-level supervision.
The incorporation of proposing synthetic asymmetrical Bi-MG during model training can lead to more natural symmetric tissue generation.
§ CONCLUSION
We present DisAsymNet, a novel asymmetrical-abnormality-disentangling self-adversarial learning framework based on image-level class labels only.
Our study highlights the importance of considering asymmetry in mammography diagnosis in addition to the general multi-view analysis.
The incorporation of pixel-level normal symmetric breast view generation boosts the classification of Bi-MG and also provides the interpretation of the diagnosis.
The extensive experiments on four datasets demonstrate the robustness of our framework for improving performance in classification, segmentation, and localization tasks.
The potential of leveraging asymmetry can be further investigated in other clinical tasks such as BC risk prediction.
splncs04
|
http://arxiv.org/abs/2307.02787v2
|
20230706054353
|
Shortest Beer Path Queries based on Graph Decomposition
|
[
"Tesshu Hanaka",
"Hirotaka Ono",
"Kunihiko Sadakane",
"Kosuke Sugiyama"
] |
cs.DS
|
[
"cs.DS"
] |
Shortest Beer Path Queries based on Graph Decomposition
Tesshu Hanaka, Hirotaka Ono, Kunihiko Sadakane, Kosuke Sugiyama
===================================================================
Given a directed edge-weighted graph G=(V, E) with beer vertices B⊆ V, a beer path between two vertices u and v is a path between u and v that visits at least one beer vertex in B, and the beer distance between two vertices is the shortest length of beer paths.
We consider indexing problems on beer paths, that is, a graph is given a priori, and we construct
some data structures (called indexes) for the graph. Then later, we are given two vertices, and we find the beer distance or
beer path between them using the data structure.
For such a scheme, efficient algorithms using indexes for the beer distance and beer path queries have been proposed for
outerplanar graphs and interval graphs.
For example, Bacic et al. (2021) present indexes with size O(n) for outerplanar graphs and an algorithm using them that answers the beer distance between given two vertices in O(α(n)) time, where α(·) is the inverse Ackermann function; the performance is shown to be optimal.
This paper proposes indexing
data structures
and algorithms for beer path queries on general graphs based on two types of graph decomposition:
the tree decomposition and the triconnected component decomposition.
We propose indexes with size O(m+nr^2) based on the triconnected component decomposition, where r is the size of the largest triconnected component. For a given query u,v∈ V, our algorithm using the indexes can output the beer distance in query time O(α(m)).
In particular,
our indexing data structures and algorithms achieve the optimal performance (the space and the query time) for series-parallel graphs, which is a wider class than outerplanar graphs.
§ INTRODUCTION
Given a directed edge-weighted graph G=(V, E) with beer vertices B⊆ V, a beer path between two vertices u and v is a path between u and v that visits at least one beer vertex in B, and the beer distance between two vertices is the shortest length of beer paths. Here, a graph with B, the set of beer stores, is called a beer graph.
The names “beer path” and “beer distance” come from the following story: A person will visit a friend but does not want to show up empty-handed, and they decide to pick up some beer along the way. They would like to take the fastest way to go from their place to their friend's place while stopping at a beer store to buy some drinks.
The notion of the beer path was recently introduced by Bacic et al. <cit.>.
Although the name is somewhat like a fable, we often encounter similar situations as the above story. Instead of beer stores, we want to stop at a gas station along the way, for example.
Just computing the beer distance or a beer path with the beer distance is easy. A beer path with the beer distance always consists of two shortest paths: from
the source to
one of the beer stores
and from the beer store to the destination.
We can therefore compute them by solving the single-source shortest path problem twice, once from the source and once from the destination, and taking the minimum over the beer vertices in B.
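The following sketch makes this two-run strategy concrete (assuming non-negative edge weights and adjacency lists; all identifiers are ours, not from the cited papers):

import heapq

def dijkstra(adj, src):
    """Single-source shortest path distances on a non-negatively weighted digraph."""
    dist = {v: float("inf") for v in adj}
    dist[src] = 0.0
    heap = [(0.0, src)]
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist[u]:
            continue
        for v, w in adj[u]:
            if d + w < dist[v]:
                dist[v] = d + w
                heapq.heappush(heap, (d + w, v))
    return dist

def beer_distance(adj, radj, beer, s, t):
    """min over b in B of dist(s, b) + dist(b, t); `radj` is the reversed graph,
    so a single run from t yields dist(b, t) for every candidate b."""
    d_from_s = dijkstra(adj, s)
    d_to_t = dijkstra(radj, t)
    return min(d_from_s[b] + d_to_t[b] for b in beer)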
We consider indexing problems on beer paths, that is, a graph is given a priori, and we construct
some data structures (called indexes) for the graph. Then later, we are given two vertices, and we find the beer distance or
beer path between them using the data structure. This is more efficient than algorithms without using any indexes
if we need to solve queries for many pairs of vertices.
Indeed, car navigation systems might be equipped with such indexing mechanisms; since a system has map information as a graph in advance, it can build indexed information by preprocessing, which enables it to quickly output candidates for reasonable routes from the current position as soon as it receives a goal point. Such a scenario is also helpful in the beer path setting.
Namely, we consider graphs of bounded treewidth <cit.>
and graphs with bounded triconnected components size <cit.>.
The performance of our indexing and algorithms generalizes that for outerplanar graphs in <cit.>;
if we apply our indexing and algorithms to outerplanar graphs, the index space, preprocessing time, and query time are equivalent to
those of <cit.>. Furthermore, ours can be applied to general graphs, though the performance degrades for graphs of large treewidth or with a large triconnected component.
§.§ Related Work
For undirected outerplanar beer graphs G of order n, Bacic et al. <cit.> present indexes with size O(n), which can be preprocessed in O(n) time. For any two query vertices u and v, (i) the beer distance between u and v can be reported in O(α(n)) time, where α(n) is the inverse Ackermann function, and (ii) a beer path with the beer distance between u and v can be reported in O(L) time, where L is the number of vertices on this path. The query time is shown to be optimal.
For unweighted interval graphs with beer vertices B, Das et al. <cit.> provide a representation using 2n log n + O(n) + O(|B| log n) bits. This data structure answers beer distance queries in O(log^ε n) time for any constant ε > 0 and shortest beer path queries in O(log^ε n + L) time. They also present a trade-off relation between space and query time. These results are summarized in Table <ref>.
Other than the beer path problem, Farzan and Kamali <cit.> proposed a distance oracle for graphs with n vertices
and treewidth k using asymptotically optimal k(n+o(n)-k/2)+O(n) bits of space, which can answer a query in O(k^3 log^3 k) time.
For general graphs, indexes for shortest paths (i.e., distance queries) and max flow queries
based on the triconnected component decomposition have been proposed <cit.>.
§.§ Our Contribution
We present indexing data structures and query algorithms for beer distance and beer path queries for general graphs based on graph decomposition.
As graph decomposition, we use the tree decomposition <cit.> and the triconnected component decomposition <cit.>.
The obtained results are summarized in Table <ref>.
We first present faster query algorithms using properties of the triconnected component decomposition.
In this approach, we use r, the size of the largest triconnected component in a graph, as the parameter for evaluating the efficiency of the algorithms. Note that r is not the number of edges of the largest triconnected component itself; it is the number of edges remaining in the component after every attached biconnected component has been contracted into a single edge, and it is therefore not so large in practice.
The formal definition is given in Section <ref>.
Our data structure uses O(m+r·min{m,rn}) space, and the algorithm for undirected graphs with nonnegative integer edge weights requires O(m+r^3·min{m,rn}) time for preprocessing, and it answers for each query in O(α(m)) time.
For directed graphs with nonnegative edge weights, the preprocessing time and query time are, respectively
O(m+r^3(m+nlog r_+)) and O(α(m)). Since index size and query time are in a trade-off relation,
accepting a slightly slower query time yields an indexing data structure that uses less memory. In such a scenario,
another data structure uses O(m) space, and the algorithm for undirected graphs with nonnegative integer edge weights requires O(m+r^2·min{m,rn}) time for preprocessing, and it answers for each query in O(r^2+α(m)) time.
For directed graphs with nonnegative edge weights, the preprocessing time and query time are, respectively
O(m+r^2(m+nlog r_+)) and O(r^2log r_+ +α(m)).
Because the triconnected component decomposition can be regarded as a tree decomposition, we also extend our query algorithms to
graphs represented by a tree decomposition.
Computing the exact treewidth is NP-hard, whereas the triconnected component decomposition can be computed in linear time <cit.>,
and the query time complexities based on the tree decomposition are larger than those based on the triconnected component decomposition.
However, the treewidth is always at most r, so algorithms based on the tree decomposition are
faster in some cases.
In view of this, we adapt the indexing data structures and algorithms to the tree decomposition.
The resulting data structure requires O(t^5n) space and O(t^10n) construction time, and the algorithm can answer a query in O(t^6+α(tn)) time.
Note that for series-parallel graphs r=0, t=2, and m=O(n) hold. This implies that for series-parallel graphs our indexing data structures use O(n) space and the algorithms answer each query in O(α(n)) time. Since the class of series-parallel graphs is a superclass of outerplanar graphs, our results properly extend the optimal result for outerplanar graphs by <cit.>.
The rest of the paper is organized as follows.
Section <ref> is for preliminaries.
Sections <ref> and <ref> present the main parts that describe the indexing and algorithms under triconnected decomposition.
Section <ref> shows how to adapt them to the tree decomposition.
§ PRELIMINARIES
Let ℤ_≥ 0 be the set of nonnegative integers and ℝ_≥ 0 be the set of nonnegative real numbers.
For nonnegative integers i,j ∈ℤ_≥ 0 (i≤ j), let [i,j]={i,i+1,…, j-1,j}.
For a graph G, let V (G) and E(G) denote its vertex and edge sets, respectively. For two graphs G and G', let G∖ G'= (V (G )∖ V (G' ),E (G )∖ E (G' ) ) and G∪ G'= (V (G )∪ V (G' ),E (G )∪ E (G' ) ). Also, for a graph G and a set of vertex pairs F ⊆ V (G )× V (G ), let G∖ F = (V (G ),E (G )∖ F ) and G∪ F = (V (G ),E (G )∪ F ). Furthermore, for a graph G and its vertex subset S⊆ V (G ), let G[S] be the subgraph of G induced by S.
§.§ Shortest Path Problem and Beer Path Problem / Query
Suppose we are given a graph G, an edge weight function w: E (G ) → W, and a vertex subset B⊆ V (G ). Note that in this paper we assume W=ℤ_≥ 0 or W=ℝ_≥ 0.
For vertices u,v∈ V (G ), a path from u to v in G is called a u-v path in G. Usually, a u-v path is not unique.
The length of a path is defined by the sum of the edge weights on the path. The shortest length of all u-v paths is called the u-v distance in G, denoted by (G,w)uv.
Also, for vertices u,v∈ V (G ), a walk from u to v passing through at least one vertex of B is called a
u-v beer path in G. The length of a u-v beer path is defined in the same way as the length of a u-v path, and the
u-v beer distance in G is defined as the shortest length over all u-v beer paths; it is denoted by (G,w,B )uv.
Note that if w and B are clear from the context, we omit them and denote (G,w )uv as Guv and (G,w,B )uv as Guv. Then, a vector whose elements are the distance and the beer distance is denoted by
d⃗ (G,u,v ) = [ Guv; Guv ].
For a given G,w,B and vertices u,v∈ V (G ), the problem of finding Guv (or one u-v path that realizes it) is called Shortest Path and the problem of finding Guv (or one u-v beer path that realizes it) is called Beer Path.
For given u and v, the query that asks for Guv or for one u-v beer path of length Guv
is called Beer Path Query on G,w,B.
Here, we review algorithms for the shortest path problem and their computational complexity. When W=ℤ_≥ 0 and G is an undirected graph, the shortest path problem can be solved in O(m) time by using Thorup's algorithm <cit.>. When W=ℝ_≥ 0, the shortest path problem can be solved in O(m+ nlog n) time by Dijkstra's algorithm using Fibonacci heap <cit.>. Hereafter,
let G denote the computational time to solve Shortest Path Problem by one of the above algorithms
according to the setting; for example, if G is undirected and W=ℤ_≥ 0, G=O(m), and if W=ℝ_≥ 0,
G=O(m+nlog n).
§.§ SPQR tree
Let G be a biconnected (multi) undirected graph and {u,v} be its vertex pair.
If G[V(G)∖{u,v}] is
disconnected
or u and v are adjacent in G, {u,v} is called a split pair of G.
We denote the set of split pairs of G by G.
For {u,v}∈G, a maximal subgraph H of G such that {u,v} is not a split pair of H,
and the graph ({u,v},{e}) consisting of a single edge e={u,v}∈ E(G),
are called split components of the split pair {u,v} of G.
We denote the set of split components of the split pair {u,v} of G by Gu,v.
For {u,v}∈G and {s,t}∈ E(G),
we say that {u,v} is maximal
with respect to {s,t}
if vertices u,v,s,t are in the same split component.
For an edge e={u,v}∈ E(G), we define an SPQR tree (G,e) of G. Here, e is called a reference edge of (G,e) of G.
Each node μ of (G,e) is associated with a graph μ. The root node of (G,e) is denoted by μ_e. The (G,e) is defined recursively as follows.
Trivial Case If Gu,v={({u,v},{e}),({u,v},{e'})} (e'∈ E(G)), that is, G is a multigraph on two vertices consisting of the two edges e,e', then (G,e)=({μ_e}, ∅) and μ_e=G. Also, μ_e is said to be a Q node.
Series Case Let Gu,v={({u,v},{e}),H}, where H is formed by a series connection of k (≥ 2) connected components H_1,… H_k. Then, for vertices u=c_0, c_1, c_2,… ,c_k-1, c_k =v (c_1, … , c_k-1 are cut vertices of G), let c_i-1,c_i be the only vertices belonging to H_i (1≤ i≤ k). In this case, if e_i={c_i-1,c_i} (1≤ i≤ k), then
(G,e)=({μ_e},∅)∪⋃_1≤ i≤ k( (H_i∪{e_i},e_i) ∪{{μ_e,μ_e_i}}),
μ_e=({c_0,…,c_k},{e,e_1,…,e_k}).
Also, μ_e is said to be an S node.
Parallel Case If Gu,v={({u,v},{e}),H_1,…,H_k} (k≥ 2), that is, G is formed by the parallel connection of three or more split components of {u,v}. In this case, letting e_i denote the edge corresponding to H_i, we set μ_e=({u,v},{e,e_1,…,e_k}), and (G,e) is defined in the same way as in the series case. Also, μ_e is said to be a P node.
Rigid Case If the above does not apply, that is, Gu,v={({u,v},{e}),H} and H has no cut vertices, let all maximal split pairs for {u,v} in G∖{{u,v}} be {u_i,v_i} (1≤ i≤ k, k≥ 1). Also, for each i, let H_i be the union of the split components for {u_i,v_i} that does not contain e. That is, H_i = ⋃_H∈Gu_i,v_i:e∉ E(H) H. In this case, if we let e_i denote the edge corresponding to H_i (1≤ i≤ k), then
μ_e=({u,v}∪⋃_1≤ i≤ k{u_i,v_i}, {e}∪⋃_1≤ i≤ k{e_i})
and (G,e) is defined in the same way as in the series case. Also, μ_e is said to be an R node.
The tree ({ρ},∅)∪(G,e) ∪{{ρ,μ_e}}, obtained by connecting the tree (G,e) defined above to a Q node ρ with graph ρ=({u,v},{e}) and taking ρ as the root, is called the SPQR tree of G with respect to the edge e. Hereafter, we simply call it an SPQR tree and denote it by . Also, we denote the unique child node μ_e of the root node ρ by ρ'.
For each node μ∈ V()∖{ρ} of , the associated graph μ is a skeleton of a certain subgraph of G, so μ is called the skeleton graph of μ. Let n_μ = | V(μ) | be the number of vertices and m_μ = | E(μ) | be the number of edges of the skeleton μ. Also, let the reference edge of μ be μ={x_μ, y_μ}, and let μ and μ be the sets of child and descendant nodes of μ in , respectively. We denote the sets of S, P, Q, and R nodes by S_, P_, Q_, R_, respectively.
For each μ∈ V()∖{ρ}, let G_μ be the subgraph of G corresponding to the graph of μ without the reference edge. This can be expressed as G_μ=({x_μ, y_μ},{{x_μ, y_μ}}) if μ∈ Q_,
otherwise G_μ=⋃_λ∈μ G_λ. An example of an SPQR tree is shown in Figure <ref> of the Appendix.
The following is known for the SPQR tree of G.
Let G be a biconnected undirected graph with n vertices and m edges, and let be its SPQR tree. For each node μ∈ V()∖{ρ}, {x_μ, y_μ}∈G. If μ∈ R_, μ is a triconnected graph. Also, |Q_|=m, |S_∪ P_∪ R_| = O(n), ∑_μ∈ S_∪ P_∪ R_m_μ=O(m), and ∑_μ∈ V()n_μ = O(n) hold. Furthermore, can be computed in O(n+m) time.
Let r = max_μ∈ R_{ m_μ} be the maximum number of edges in the skeleton of an R node (triconnected graph). Note that if R_ = ∅, then r=0. Also, let r_+ = max{1,r }.
An SPQR tree for a directed graph is defined as a graph whose skeleton is replaced by a directed graph after computing the SPQR tree by considering the graph as an undirected graph. In this case, for each μ∈ V()∖{ρ}, we consider two reference edges ⟨ x_μ,y_μ⟩,⟨ y_μ,x_μ⟩ and let μ={⟨ x _μ,y_μ⟩, ⟨ y_μ,x_μ⟩}.
§.§ Query Problems
We present all query problems that will be used later.
Range Minimum Query For a given array (a)=a[1],a[2],…,a[n] of length n, the query defined by the following pair of inputs and outputs is called Range Minimum Query for the array (a).
Input Positive integers i,j (1≤ i≤ j ≤ n),
Output Minimum value in the subarray a[i],a[i+1],…,a[j-1],a[j] of the array (a).
This query can be answered in O(1) time by preprocessing in O(n) space and O(n) time <cit.>.
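As a concrete (if slightly weaker) illustration, the sketch below answers Range Minimum Queries with a sparse table, which uses O(n log n) preprocessing space and time instead of the optimal O(n) bound cited above but still achieves O(1) query time.

```python
class SparseTableRMQ:
    """Sparse table: O(n log n) preprocessing, O(1) query; 1-based indices as in the text."""

    def __init__(self, a):
        n = len(a)
        # table[j][i] = min of a[i .. i + 2^j - 1]  (0-based internally)
        self.table = [list(a)]
        j = 1
        while (1 << j) <= n:
            prev = self.table[j - 1]
            half = 1 << (j - 1)
            self.table.append([min(prev[i], prev[i + half])
                               for i in range(n - (1 << j) + 1)])
            j += 1

    def query(self, i, j):
        """Minimum of a[i], ..., a[j] for 1 <= i <= j <= n."""
        i, j = i - 1, j - 1                       # convert to 0-based
        k = (j - i + 1).bit_length() - 1          # largest power of two fitting in the range
        return min(self.table[k][i], self.table[k][j - (1 << k) + 1])


rmq = SparseTableRMQ([5, 2, 4, 7, 1, 3])
print(rmq.query(2, 4))  # 2
print(rmq.query(3, 6))  # 1
```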
Lowest Common Ancestor Query Given a rooted tree T with n vertices. The query defined by the following pair of input and output is called the lowest common ancestor query for the rooted tree T.
Input Vertices u,u'∈ V(T),
Output The deepest (furthest from the root) common ancestor of u,u' in T.
This query can be answered in O(1) time by preprocessing in O(n) space and O(n) time using range minimum queries.
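The standard route to these bounds reduces LCA to Range Minimum Query over an Euler tour of T. The sketch below instead uses binary lifting, a simpler technique with O(n log n) preprocessing and O(log n) query time; it is weaker than the bound stated above but makes the query itself concrete. The tree representation is an assumption made for the example.

```python
class LCA:
    """Binary-lifting LCA: O(n log n) preprocessing, O(log n) per query."""

    def __init__(self, n, root, children):
        LOG = max(1, (n - 1).bit_length())
        self.depth = [0] * n
        self.up = [[root] * n for _ in range(LOG)]
        order = [root]
        for u in order:                          # BFS over the rooted tree
            for v in children[u]:
                self.depth[v] = self.depth[u] + 1
                self.up[0][v] = u
                order.append(v)
        for j in range(1, LOG):
            for v in range(n):
                self.up[j][v] = self.up[j - 1][self.up[j - 1][v]]

    def query(self, u, v):
        if self.depth[u] < self.depth[v]:
            u, v = v, u
        diff = self.depth[u] - self.depth[v]
        for j in range(diff.bit_length()):       # lift u to the depth of v
            if (diff >> j) & 1:
                u = self.up[j][u]
        if u == v:
            return u
        for j in reversed(range(len(self.up))):  # lift both just below the LCA
            if self.up[j][u] != self.up[j][v]:
                u, v = self.up[j][u], self.up[j][v]
        return self.up[0][u]


#        0
#       / \
#      1   2
#     / \
#    3   4
lca = LCA(5, 0, {0: [1, 2], 1: [3, 4], 2: [], 3: [], 4: []})
print(lca.query(3, 4))  # 1
print(lca.query(4, 2))  # 0
```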
Tree Product Query Given a set S, a semigroup operation ∘: S^2 → S, a tree T with n vertices, and a mapping f: V(T) → S. The query defined by the following pair of input and output is called a Tree Product Query for S,∘,T,f.
Input Vertices u,u'∈ V(T),
Output Let u=v_1, v_2,…,v_k-1,v_k=u' be the only path on T that connects u,u', then f(v_1)∘ f(v_2)∘…∘ f(v_k).
This query can be answered in O(α(n)) time by preprocessing in O(n) space and O(n) time <cit.>.
Here, α is the inverse Ackermann function.
For nonnegative integers i and ℓ, we define A_ℓ(i) as follows:
A_ℓ(i) = i+1 if ℓ = 0 and i≥ 0, and A_ℓ-1^(i+1)(i+8) if ℓ≥1 and i ≥ 0.
Note that A_ℓ-1^(i+1) denotes the function A_ℓ-1 iterated i+1 times. Using this, the inverse Ackermann function α is defined as follows.
α(n) = min{ℓ∈ℤ_≥ 0| A_ℓ(1)>n }.
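The following sketch evaluates A_ℓ(i) and α(n) directly from the two definitions above. Because A_2(1) is already 63 and A_3(1) is astronomically large under this definition, the naive evaluation is only feasible for small arguments; the code is purely illustrative of how slowly α grows.

```python
def A(level, i):
    """A_level(i) from the definition above, with the iteration written as a loop."""
    if level == 0:
        return i + 1
    x = i + 8
    for _ in range(i + 1):          # apply A_{level-1} exactly i+1 times
        x = A(level - 1, x)
    return x


def alpha(n):
    """alpha(n) = min{ level : A_level(1) > n }.

    Only call this with small n (n < A_2(1) = 63): evaluating A_3(1) with the
    naive loop above is hopeless, which is precisely why alpha(n) stays tiny
    for any input size arising in practice.
    """
    level = 0
    while A(level, 1) <= n:
        level += 1
    return level


print([A(level, 1) for level in range(3)])  # [2, 11, 63]
print(alpha(1), alpha(10), alpha(50))       # 0 1 2
```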
§ TRICONNECTED COMPONENT DECOMPOSITION-BASED INDEXING
This section describes indexes based on triconnected component decomposition for biconnected graphs. First, for each μ,λ∈, we define K_μ,λ = ( {x_μ,y_μ}∪{x_λ,y_λ} , ({x_μ,y_μ}∪{x_λ,y_λ} )^2 ) to be a complete graph with self loops, and let K_μ,λ^w⃗ denote the graph K_μ,λ with the weight
w⃗ V (K_μ,λ )^2 → W^2 (w⃗ (u,v )=[ w (u,v ); w^B (u,v ) ]).
Also, let
= { K_μ,λ^w⃗|μ,λ∈ V ( ), w⃗ V (K_μ,λ )^2 → W^2 }∪{⊥}
be the union of the set of those weighted graphs and {⊥}.
Next, for convenience, we define the maps F_i dom (F_i )→ that provide data of distance and beer distance (i=1,2,3,4, specific domains are described later). The algorithms for beer path queries precompute some of these as data structures.
For each 𝒳∈dom (F_i ), let F_i (𝒳 )∈ be a complete graph with at most 4 vertices and f⃗_i (𝒳 ) be its weight. Also, we will denote the weight f⃗_i (𝒳 ) (u,v ) of each vertex pair ⟨ u,v ⟩∈ V (F_i (𝒳 ) )^2 by
i𝒳uv=[ i𝒳uv; i𝒳uv ],
omitting some brackets. These maps are defined so that i𝒳uv represents the normal distance and i𝒳uv represents the beer distance.
§.§ Definition of the mapping F_1 and its computation
We define the mapping F_1 V ( )∖{ρ}→ as follows.
For each node μ∈ V ( )∖{ρ}, let F_1 (μ )=K_μ,μ^f⃗_1 (μ ) (a complete graph consists of 2 vertices x_μ, y_μ). The weight of each vertex pair ⟨ u,v ⟩∈{x_μ, y_μ}^2 is 1μuv = d⃗ (G_μ, u,v ).
The F_1(μ) intuitively represents the distance data obtained when using the part of the graph shown in Figure <ref> of the Appendix.
We can compute F_1 from the leaves of to the root as described below.
If μ∈ Q_∖{ρ}, G_μ is a graph that consists of only edges ⟨ x_μ,y_μ⟩,⟨ y_μ,x_μ⟩, so each weight 1μuv can be calculated as in Table <ref>.
From now on, we assume that μ is an inner node and that F_1 (λ) has been computed for each of its child nodes λ∈μ. Also, let H_μ be the weighted graph obtained from μ∖μ by giving each edge ⟨ x_λ,y_λ⟩, ⟨ y_λ,x_λ⟩ (λ∈μ) the weight 1λx_λy_λ and 1λy_λx_λ, respectively. Then, from the definition of H_μ, when we consider a path from u to v in G_μ that passes through the subgraph G_λ (⟨ u,v ⟩∈{ x_λ,y_λ}^2),
the length of the portion traversing G_λ can be obtained by referring to the weight of the edge ⟨ u,v ⟩∈ E (H_μ ) (the corresponding beer distance is obtained by referring to 1λuv =G_λuv directly). Therefore, F_1 (μ ) can be calculated by using H_μ instead of G_μ.
If μ∈ S_, let μ={μ_1, … , μ_k } and let x_μ=x_μ_1, y_μ_i=x_μ _i+1 (1 ≤ i ≤ k-1 ), y_μ_k=y_μ in μ (see Figure <ref> of Appendix). Here, we define the following six symbols for each μ∈ S_.
* μi,j{∑_i≤ p ≤ j1μ_px_μ_py_μ_p 1≤ i ≤ j ≤ k,
0 otherwise,.
which is the distance from x_μ_i to y_μ_j in ⋃_i≤ p≤ jG_μ_p.
* μi,j{∑_i≤ p ≤ j1μ_py_μ_px_μ_p 1≤ i ≤ j ≤ k,
0 otherwise,.
which is the distance from y_μ_j to x_μ_i in ⋃_i≤ p≤ jG_μ_p.
* μiμ1,i-1 + 1μ_ix_μ_ix_μ_i + μ1,i-1 (1≤ i ≤ k),
which is the distance of the shortest walk that reaches from x_μ=x_μ_1 to y_μ_i-1=x_μ_i in ⋃_1≤ j≤ i-1G_μ_j, back to x_μ_i via a beer vertex in G_μ_i and again to x_μ.
* μi1μ_ix_μ_iy_μ_i-1μ_ix_μ_iy_μ_i (1≤ i ≤ k),
which is the difference between the beer distance and the (mere) distance in moving from x_μ_i to y_μ_i in G_μ_i.
* μi1μ_iy_μ_ix_μ_i-1μ_iy_μ_ix_μ_i (1≤ i ≤ k),
which is the difference between the beer distance and the (mere) distance in moving from y_μ_i to x_μ_i in G_μ_i.
* μiμi+1,k + 1μ_iy_μ_iy_μ_i + μi+1,k (1≤ i ≤ k),
which is the distance of the shortest walk that reaches from y_μ=y_μ_k to x_μ_i+1=y_μ_i in ⋃_i+1≤ j≤ kG_μ_j, back to y_μ_i via a beer vertex in G_μ_i and again to y_μ.
Note that we only preprocess μ1,i,μi,k,μ1,i,μi,k (1≤ i ≤ k) among μi,j,μi,j. The other μi,j and μi,j are obtained and used in O(1) time each time. We also preprocess μ·,μ·,μ·,μ· for Range Minimum Query. All of the above preprocessing can be computed in O(k) space and O(k) time.
By using these, each weight 1μuv can be calculated as in Table <ref>.
If μ∈ P_, we define the following for each ⟨ u,v ⟩∈{x_μ, y_μ}^2 to simplify:
ℓ_u,v=min_λ∈μ{1λuv}, ℓ^B_u,v=min_λ∈μ{1λuv}.
If μ∈ R_, each weight 1μuv can be calculated on H_μ as follows.
1μuv
=[ H_μuv; min_λ∈μ{min_p,q ∈{x_λ,y_λ}{H_μup + 1λpq + H_μqv}} ].
Note that each H_μab in the above equation is calculated by a shortest path algorithm for H_μ. An example of calculating F_1 is shown in Figure <ref> of the Appendix.
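To illustrate the R-node case, the sketch below computes the two components of the vector for a single pair ⟨u,v⟩: it runs Dijkstra's algorithm on the skeleton H_μ, whose virtual edges carry the children's ordinary distances, and then minimizes over the children's beer-distance data exactly as in the formula above. The vertex numbering, the explicit edge lists, and the function names are illustrative assumptions, not part of the actual index.

```python
import heapq

def dijkstra(n, edges, src):
    """Shortest distances from src in a directed graph on vertices 0..n-1."""
    adj = [[] for _ in range(n)]
    for a, b, w in edges:
        adj[a].append((b, w))
    dist = [float('inf')] * n
    dist[src] = 0
    pq = [(0, src)]
    while pq:
        d, u = heapq.heappop(pq)
        if d > dist[u]:
            continue
        for v, w in adj[u]:
            if d + w < dist[v]:
                dist[v] = d + w
                heapq.heappush(pq, (d + w, v))
    return dist

def f1_entry_r_node(n, skeleton_edges, child_beer, u, v):
    """Distance and beer distance from u to v across an R-node skeleton.

    skeleton_edges: (a, b, w) triples; w is the child's ordinary distance placed
                    on the corresponding virtual edge of H_mu.
    child_beer:     (p, q, wB) triples; wB is the child's beer distance between
                    its endpoints p and q.
    """
    d_from_u = dijkstra(n, skeleton_edges, u)
    rev = [(b, a, w) for (a, b, w) in skeleton_edges]
    d_to_v = dijkstra(n, rev, v)                 # distances *to* v
    plain = d_from_u[v]
    beer = min((d_from_u[p] + wB + d_to_v[q] for (p, q, wB) in child_beer),
               default=float('inf'))
    return plain, beer

# Toy skeleton on vertices {0,1,2,3}; only the child on edge (0,1) contains a
# beer vertex, reachable at extra cost 3 (so its beer distance is 2 + 3 = 5).
skeleton = [(0, 1, 2), (1, 0, 2), (1, 3, 2), (3, 1, 2),
            (0, 2, 1), (2, 0, 1), (2, 3, 4), (3, 2, 4)]
child_beer = [(0, 1, 5), (1, 0, 5)]
print(f1_entry_r_node(4, skeleton, child_beer, 0, 3))  # (4, 7)
```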
§.§ Definition of the mapping F_2 and its computation
We define the mapping F_2 V ( )∖{ρ}→ as follows.
For each node μ∈ V ( )∖{ρ}, let F_2 (μ )=K_μ^f⃗_⃗2⃗ (μ ) (a complete graph consists of 2 vertices x_μ, y_μ). The weight of each vertex pair ⟨ u,v ⟩∈{x_μ, y_μ}^2 is 2μuv = d⃗ (G ∖ E (G_μ ), u,v ).
The F_2(μ) intuitively represents the distance data obtained when using the part of the graph shown in Figure <ref>.
We can compute F_2 from the root of to the leaves. To describe how to compute F_2 (μ ), let λ be the parent node of μ in .
If λ = ρ (the root node), the edges of G ∖ E (G_μ ) are only ⟨ x_λ, y_λ⟩,⟨ y_λ, x_λ⟩, so each weight 2μuv can be calculated by replacing μ with λ in the formula for 1μuv
in Table <ref>.
From here, we assume that λ≠ρ. Then, F_2 (μ ) can be calculated by using H_λ∖μ∪λ instead of G ∖ E (G_μ ). We set the weights of the edges ⟨ x_λ,y_λ⟩ and ⟨ y_λ,x_λ⟩ of this graph to 2λx_λy_λ and 2λy_λx_λ, respectively.
For λ∈ S_, let λ={λ_1, … , λ_k }, μ =λ_i, x_λ=x_λ_1, y_λ_j=x_λ_j+1 (1 ≤ j ≤ k-1 ), y_λ_k=y_λ in λ. Each weight 2μuv can be obtained in the same idea as in Table <ref>. We show here the formulas for calculating some of the weights.
2μx_λ_iy_λ_i =λ1,i-1+2λx_λy_λ+λi+1,k,
2μx_λ_iy_λ_i =2μx_λ_iy_λ_i
+ min{min_1≤ j ≤ k, j ≠ i{λj}
2λx_λy_λ-2λx_λy_λ},
2μx_λ_ix_λ_i
=min{min_1≤ j≤ i-1{λj} -(λi,k+λi,k)
λ1,i-1+2λx_λx_λ +λ1,i-1
λ1,i-1+2λx_λy_λ+min_i+1≤ j≤ k{λj}+2λy_λx_λ +λ1,i-1
}.
If λ∈ P_, it can be calculated in the same way as F_1 for P nodes, by noting that x_μ = x_λ, y_μ = y_λ. That is, each 2μuv can be calculated by the formula 1μuv in Table <ref> if we set ℓ_u,v, ℓ_u,v^B as follows:
ℓ_u,v=
min{2λuv
min_θ∈λ∖{μ}{1θuv}}
, ℓ^B_u,v=
min{2λuv
min_θ∈λ∖{μ}{1θuv}}
.
If λ∈ R_, each weight 2μuv can be calculated on H' H_λ∖μ∪λ as follows.
2μuv
=[ H'uv; min{min_p,q ∈{x_λ,y_λ}{H'up + 2λpq + H'qv}
min_θ∈λ∖{μ}{min_p,q ∈{x_θ,y_θ}{H'up + 1θpq + H'qv}}} ].
Note that each H'ab in the above equation is calculated by a shortest path algorithm for H'. An example of calculating F_2 is shown in Figure <ref> of the Appendix.
§.§ Definition of the mapping F_3 and its computation
We define the mapping F_3 E ( )∖{{ρ, ρ'}}→ as follows.
For each edge = {μ,λ}∈ E ( )∖{{ρ, ρ'}} (λ∈μ ), F_3 ( )=K_μ,λ^f⃗_3 ( ) (a complete graph consists of at most 4 vertices). The weight of each vertex pair ⟨ u,v ⟩∈ ({x_μ, y_μ}∪{x_λ, y_λ} )^2 is 3uv = d⃗ (G_μ∖ E (G_λ ), u,v ).
The F_3() intuitively represents the distance data obtained when using the part of the graph shown in Figure <ref>.
In the actual F_3 ( ) calculation, we can consider H_μ∖λ instead of G_μ∖ E (G_λ ).
If μ∈ S_, let μ={μ_1, … , μ_k }, λ=μ_i, and let x_μ=x_μ_1, y_μ_j=x_μ_j+1 (1 ≤ j ≤ k-1 ), y_μ_k=y_μ in μ. Each weight 3uv can be calculated as follows.
If u,v ∈{x_μ}∪{x_μ_i} and i=1 (so x_μ = x_μ_i), then 3uv=0, and 3uv=0 if x_μ∈ B and ∞ if x_μ∉ B;
if i≥ 2 (x_μ≠ x_μ_i), the weights 3uv are given in Table <ref>.
If u,v ∈{y_μ}∪{y_μ_i}, 3uv can be calculated in the same way as above by considering the case when i=k (y_μ = y_μ_i ) and the case when i≤ k-1 (y_μ≠ y_μ_i) separately.
Otherwise, v cannot be reached from u in H_μ∖λ, so 3uv=3uv=∞.
If μ∈ P_, it can be calculated in the same way as F_1 for P nodes, by noting that x_λ = x_μ, y_λ = y_μ. That is, each 3uv can be calculated by the formula 1μuv in Table <ref> if we set ℓ_u,v, ℓ_u,v^B as follows.
ℓ_u,v=min_θ∈μ∖{λ}{1θuv}, ℓ^B_u,v=min_θ∈μ∖{λ}{1θuv}.
If μ∈ R_, each weight 3uv can be calculated on H” H_μ∖λ as follows.
3uv
=[ H”uv; min_θ∈μ∖{λ}{min_p,q ∈{x_θ,y_θ}{H”up + 1θpq + H”qv}} ].
Note that each H”ab in the above equation is calculated by the shortest path algorithm for H”. An example of calculating a part of F_3 is shown in Figure <ref>.
§.§ Definition of the mapping F_4 and its computation
We define the mapping F_4 ⋃_μ∈ V ( )∖ Q_μ2→ as follows.
For each node μ∈ V ( )∖ Q_ and each node pair ψ = {λ, λ'}∈μ2 of μ, F_4 ( ψ )=K_λ,λ'^f⃗_4 ( ψ ) (a complete graph consists of at most 4 vertices). The weight of each vertex pair ⟨ u,v ⟩∈ ({x_λ, y_λ}∪{x_λ', y_λ'} )^2 is 4ψuv = d⃗ (G∖ E (G_λ ) ∖ E (G_λ' ), u,v ).
The F_4(ψ) intuitively represents the distance data obtained when using the part of the graph shown in Figure <ref>.
In the actual F_4 ( ψ ) calculation, we can consider H_μ∖λ∖λ'∪μ instead of G∖ E (G_λ ) ∖ E (G_λ' ). We set the weights of the edges ⟨ x_μ,y_μ⟩, ⟨ y_μ,x_μ⟩ of this graph to 2μx_μy_μ, 2μy_μx_μ respectively.
If μ∈ S_, let μ={μ_1, … , μ_k }, λ=μ_i, λ'=μ_j (i<j), and let x_μ=x_μ_1, y_μ_p=x_μ_p+1 (1 ≤ p ≤ k-1 ), y_μ_k=y_μ in μ. The weights 4ψuv can be calculated in the same way as the weights of F_3 for the S nodes. We show here the formulas for calculating some of the weights.
4ψx_μ_iy_μ_j =μ1,i-1+2μx_μy_μ+μj+1,k,
4ψx_μ_iy_μ_j =4ψx_μ_iy_μ_j +min{3{μ,λ}x_μ_ix_μ-μ1,i-1
2μx_μy_μ-2μx_μy_μ
3{μ,λ'}y_μy_μ_j-μj+1,k},
4ψx_μ_ix_μ_i
=min{3{μ,λ}x_μ_ix_μ_i
μ1,i-1 +2μx_μx_μ+μ1,i-1
μ1,i-1 +2μx_μy_μ+3{μ,λ'}y_μy_μ+ 2μy_μx_μ+μ1,i-1},
4ψy_μ_ix_μ_j =μi+1,j-1,
4ψy_μ_ix_μ_j ={
0 j=i+1, y_μ_i∈ B
∞ j=i+1, y_μ_i∉ B
4ψy_μ_ix_μ_j+ min_i+1≤ p≤ j-1{μp} j≥ i+2
. ,
4ψy_μ_iy_μ_i ={
0 j=i+1, y_μ_i∈ B
∞ j=i+1, y_μ_i∉ B
min_i+1≤ p≤ j-1{μp} -(μ1,i+μ1,i) j≥ i+2
. .
If μ∈ P_, it can be calculated in the same way as F_1 for P nodes, by noting that x_λ = x_λ' = x_μ, y_λ = y_λ' = y_μ. That is, each 4ψuv can be calculated by the formula 1μuv in Table <ref> if we set ℓ_u,v, ℓ_u,v^B as follows.
ℓ_u,v=
min{2μuv
min_θ∈μ∖{λ, λ' }{1θuv}}
, ℓ^B_u,v=
min{2μuv
min_θ∈μ∖{λ, λ' }{1θuv}}
.
Note that if μ={λ, λ'}, then ℓ_u,v= 2μuv, ℓ^B_u,v=2μuv.
If μ∈ R_, each weight 4ψuv can be calculated on H”' H_μ∖λ∖λ'∪μ as follows.
4ψuv
=[ H”'uv; min{min_p,q ∈{x_μ,y_μ}{H”'up + 2μpq + H”'qv}
min_θ∈μ∖{λ, λ' }{min_p,q ∈{x_θ,y_θ}{H”'up + 1θpq + H”'qv}}} ].
Note that each H”'ab in the above equation is calculated by the shortest path algorithm for H”'. An example of calculating an image of F_4 is shown in Figure <ref>.
Here, if μ∈ S_∪ P_, F_4 ( ψ ) can be computed in O(1) time by using Range Minimum Query. Also, the beer distance is obtained from a graph that is a combination of images of F_1,F_2,F_3,F_4, but the image of F_4 appears in at most one element of the combination (see Subsection <ref> for details). Therefore, it suffices to compute F_4 only for the child node pairs of R nodes in the preprocessing. For this reason, it is convenient to consider a mapping restricting the domain of F_4, and we define F_4R⋃_μ∈ R_μ2→ (F_4R ( ψ ) = F_4 ( ψ ), ψ∈⋃_μ∈ R_μ2).
Owing to space limitations, the analyses of the computational complexities are given in Section <ref>.
§ ALGORITHM BASED ON TRICONNECTED COMPONENT DECOMPOSITION
§.§ Definition of binary operations
We define the binary operation ⊕^2 → as follows.
For each H_1,H_2 ∈, H_1 ⊕ H_2 is defined as follows. If H_1 = ⊥ or H_2 = ⊥, then H_1 ⊕ H_2 =⊥. If H_1 ≠⊥ and H_2 ≠⊥, let H_i = K_μ_i,λ_i^w⃗_i (i=1,2). Also, let A=({μ_1}∪{λ_1})∩ ({μ_2}∪{λ_2}) be the set of nodes of that give the vertices appearing in both H_1 and H_2.
If |A|≠ 1, H_1 ⊕ H_2 =⊥. If |A|=1, let A={θ}, H_1 ⊕ H_2 = K_θ_1,θ_2^w⃗. Here, we define θ_1,θ_2 as follows.
θ_1={μ_1 (=λ_1 ) μ_1 =θ = λ_1
μ_1 μ_1≠θ =λ_1
λ_1 μ_1 = θ≠λ_1
., θ_2={μ_2 (=λ_2 ) μ_2 =θ = λ_2
μ_2 μ_2≠θ =λ_2
λ_2 μ_2 = θ≠λ_2
..
Also, let H_1 ∪ H_2 be the weighted multigraph with vertex set V(H_1)∪ V(H_2) whose edge set consists of the edges of H_1 and the edges of H_2, kept distinct, and let z⃗ be the weights defined by
z⃗(e)=[ z(e); z^B(e) ]=w⃗_i(p,q) (e=⟨ p,q ⟩∈ E(H_i), i=1,2).
Then, for each ⟨ u,v ⟩∈ ({x_θ_1,y_θ_1}∪{x_θ_2,y_θ_2} )^2, we define w⃗(u,v) as follows.
w(u,v) = (H_1 ∪ H_2 ,z)uv,
w^B(u,v) = min_e=⟨ p,q ⟩∈ E(H_1 ∪ H_2){(H_1 ∪ H_2 ,z)up + z^B(e)+(H_1 ∪ H_2 ,z)qv}.
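Because each operand has at most four vertices, these new weights can be computed by brute force. The following Python sketch represents an operand as a dictionary mapping directed vertex pairs to the pair (z, z^B), builds the union multigraph, and evaluates the two formulas above with Floyd–Warshall. It is only an illustration of the operation ⊕; the dictionary representation is an assumption made for the example.

```python
from itertools import product

INF = float('inf')

def combine(h1, h2):
    """Combine two small weighted complete graphs as in the definition above.

    h1, h2: dicts mapping a directed vertex pair (p, q) to its weight pair
    (z, zB); the two graphs are assumed to share at least one vertex.
    """
    edges = list(h1.items()) + list(h2.items())
    verts = sorted({x for (p, q), _ in edges for x in (p, q)})
    # ordinary distances in the union multigraph, via Floyd-Warshall
    dist = {(a, b): (0 if a == b else INF) for a, b in product(verts, verts)}
    for (p, q), (z, _) in edges:
        dist[p, q] = min(dist[p, q], z)
    for k, a, b in product(verts, repeat=3):
        if dist[a, k] + dist[k, b] < dist[a, b]:
            dist[a, b] = dist[a, k] + dist[k, b]
    out = {}
    for u, v in product(verts, verts):
        w = dist[u, v]
        wB = min((dist[u, p] + zB + dist[q, v] for (p, q), (_, zB) in edges),
                 default=INF)
        out[u, v] = (w, wB)
    return out

# Two operands sharing vertex 'b'; only the subgraph represented by edge
# ('b', 'c') contains a beer vertex (beer weight 6 = distance 2 + detour 4).
h1 = {('a', 'b'): (1, INF), ('b', 'a'): (1, INF)}
h2 = {('b', 'c'): (2, 6), ('c', 'b'): (2, 6)}
print(combine(h1, h2)[('a', 'c')])  # (3, 7): distance 1 + 2, beer distance 1 + 6
```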
For a concrete example of this operation, see the computation of H ⊕ H' in Figure <ref>. Furthermore, we define a subset of by
= { K_μ,λ^w⃗∈∖{⊥}|λ∈μ}∪{⊥}
and define the binary operation ⊕^2 → as follows.
For each H_1,H_2 ∈, if H_1 = ⊥ or H_2 = ⊥, then H_1 ⊕ H_2 =⊥, otherwise, let H_i = K_μ_i,λ_i^w⃗_i, λ_i ∈μ_i (i=1,2) then
H_1 ⊕ H_2 ={⊥ λ_1 ≠μ_2
H_1 ⊕ H_2 λ_1 = μ_2
. .
The operation ⊕ is associative; that is, it forms a semigroup.
The proof is given in Section <ref>.
§.§ Representation of distance and beer distance using mapping and algorithms for Beer Path Query
By using F_1, F_2, F_3, F_4, F_4R and tree product query data structures, we
can compute the beer distance and a beer path between given vertices. Recall that r is the maximum number of edges in the skeleton of an R node in the SPQR tree of G, and W is the range of the edge weight function.
If we precompute F_1,F_2 as a data structure, the space required to store the data structure is O(m). The preprocessing time and query time are
* O(m+r·min{m,rn}) and O(n+r·min{m,rn}+α(m)) if W=ℤ_≥ 0 and G is undirected.
* O(m+r(m+nlog r_+)) and O(n+ r(m+nlog r_+) +α(m)) if W=ℝ_≥ 0.
If we precompute F_1,F_2,F_3 as a data structure, the space required to store the data structure is O(m). The preprocessing time and query time are
* O(m+r^2·min{m,rn}) and O(r^2+α(m)) if W=ℤ_≥ 0 and G is undirected.
* O(m+r^2(m+nlog r_+)) and O(r^2log r_+ +α(m)) if W=ℝ_≥ 0.
If we precompute F_1,F_2,F_3,F_4R as a data structure, the space required to store the data structure is O(m+r·min{m,rn}). The preprocessing time and query time are
* O(m+r^3·min{m,rn}) and O(α(m)) if W=ℤ_≥ 0 and G is undirected.
* O(m+r^3(m+nlog r_+)) and O(α(m)) if W=ℝ_≥ 0.
Proofs are given in Section <ref>.
§ MISSING PROOFS
§.§ Proof of Lemma <ref>
We arbitrarily take H_1,H_2,H_3∈, and confirm that H_(12)3 (H_1 ⊕ H_2)⊕ H_3 and H_1(23) H_1 ⊕(H_2 ⊕ H_3) are equal. First, if H_1= ⊥ or H_2= ⊥ or H_3= ⊥, clearly H_(12)3=H_1(23)=⊥. In the following, let H_i≠⊥ and H_i=K_μ_i,λ_i^w⃗_i (λ_i ∈μ_i) for each i. If λ_1 ≠μ_2 or λ_2 ≠μ_3, then we can easily obtain H_(12)3=H_1(23)=⊥. If λ_1 = μ_2,λ_2 =μ_3, let H_(12)3=K_μ_1,λ_3^w⃗_(12)3,H_1(23)=K_μ_1,λ_3^w⃗_1(23). Then, we show briefly that w⃗_(12)3(u,v)=w⃗_1(23)(u,v) for each u,v∈{x_μ_1,y_μ_1}∪{x_λ_3,y_λ_3}.
Letting H_1 ⊕ H_2=H_12=K_μ_1,μ_3^w⃗_12, the following holds for each u'∈{x_μ_1,y_μ_1} and v' ∈{x_μ_3,y_μ_3} (see Figure <ref>).
w_12(u',v')=H_12u'v'=min_p∈{x_μ_2,y_μ_2}{H_1u'p + H_2pv'}.
By using this, the following holds for each u∈{x_μ_1,y_μ_1} and v ∈{x_λ_3,y_λ_3}.
w_(12)3(u,v)
=H_(12)3uv=min_q∈{x_μ_3,y_μ_3}{H_12uq+H_3qv}
=min_q∈{x_μ_3,y_μ_3}{min_p∈{x_μ_2,y_μ_2}{H_1up + H_2pq}+H_3qv}
=min_p∈{x_μ_2,y_μ_2}, q∈{x_μ_3,y_μ_3}{H_1up + H_2pq+H_3qv}.
By the same idea, exactly the same result is obtained for w_1(23)(u,v). We can show w_(12)3(u,v)=w_1(23)(u,v) and w_(12)3^B(u,v)=w_1(23)^B(u,v) for the other weights in the same way.
§.§ Preprocessing algorithms
§.§.§ Algorithm for preprocessing F_1,F_2
We consider an algorithm that preprocesses F_1,F_2. For this algorithm, the preprocessing space is C_1^space+C_2^space=O (m) and the preprocessing time is
C_1^time+C_2^time={
O (m+r·min{m,rn} ) W=ℤ_≥ 0
O (m+r (m+n log r_+ ) ) W=ℝ_≥ 0.
.
Beer Path Query can be solved by computing O(n) images of F_3 and one image of F_4 on Π and combining them using Tree Product Query.
Thus, query time is
O (∑_μ∈ S_∪ P_1+∑_μ∈ R_m_μH_μ+m_πH_π + α (m) )
={
O (n+r·min{m,rn}+ α(m)) W=ℤ_≥ 0
O (n+r (m+n log r_+ )+ α(m)) W=ℝ_≥ 0.
.
§.§.§ Algorithm for preprocessing F_1,F_2,F_3
We consider an algorithm that preprocesses F_1,F_2,F_3. For this algorithm, the preprocessing space is C_1^space+C_2^space+C_3^space=O (m) and the preprocessing time is
C_1^time+C_2^time+C_3^time={
O (m+r^2·min{m,rn} ) W=ℤ_≥ 0
O (m+r^2 (m+n log r_+ ) ) W=ℝ_≥ 0.
.
Beer Path Query can be solved by computing one image of F_4 and combining the images of F_3 on Π and of F_4 using Tree Product Query.
Thus, query time is
O (m_πH_π + α (m ) )={
O (r^2 +α (m ) )
W=ℤ_≥ 0
O (r^2 log r_+ +α (m ) ) W=ℝ_≥ 0.
.
§.§.§ Algorithm for preprocessing F_1,F_2,F_3,F_4R
We consider an algorithm that preprocesses F_1,F_2,F_3,F_4R. For this algorithm, the preprocessing space is C_1^space+C_2^space+C_3^space+C_4R^space=O (m+r·min{m,rn} ) and the preprocessing time is
C_1^time+C_2^time+C_3^time+C_4R^time={
O (m+r^3·min{m,rn} ) W=ℤ_≥ 0
O (m+r^3 (m+n log r_+ ) ) W=ℝ_≥ 0.
.
Beer Path Query can be solved by combining images of F_3 and F_4 on Π using Tree Product Query.
Thus, query time is O (α (m ) ).
§.§ Computational complexity for each mapping
In this subsection, we analyze the computational complexity for each mapping.
Let C_i^time and C_i^space be the time required to compute each F_i and the space to store, respectively.
§.§.§ Computational complexity for F_1
First, we consider C_1^space. For each μ∈ V ( )∖{ρ}, F_1 (μ ) is a graph of constant size. Also, each μ∈ S_∪ P_ uses O(m_μ) space in the preprocessing for Range Minimum Query and so on. Thus, C_1^space=O( ∑_μ∈ Q_∪ R_ 1+ ∑_μ∈ S_∪ P_ m_μ)=O(m).
Next, we consider C_1^time. If μ∈ Q_∖{ρ}, F_1(μ) can be computed in O(1) time. If μ∈ S_∪ P_, preprocessing and F_1(μ) calculation can be done in O(m_μ) time. Also, if μ∈ R_, F_1 (μ ) can be obtained by running the shortest path algorithm for H_μ for O (m_μ ) times, so it can be computed in O (m_μH_μ ) time. Thus, noting the definition of r, C_1^time is as follows:
C_1^time = O ( ∑_μ∈ Q_ 1 + ∑_μ∈ S_∪ P_ m_μ + ∑_μ∈ R_ m_μH_μ )
=O ( m + m + r ∑_μ∈ R_H_μ )
= {
O (m+r·min{m,rn} ) W=ℤ_≥ 0
O (m+r (m+n log r_+ ) ) W=ℝ_≥ 0. .
§.§.§ Computational complexity for F_2
First, C_2^space can be similarly considered to C_1^space, C_2^space=∑_μ∈ V ( )∖{ρ} O (1)=O(m). Next, we consider C_2^time. Let λ be the parent node of μ in . If λ∈{ρ}∪ S_∪ P_, F_2 (μ ) can be computed in O (1) time by using Table <ref> or Range Minimum Query. Also, if λ∈ R_, F_2 (μ ) can be obtained by running the shortest path algorithm for H_λ∖μ∪λ for O(m_λ) times, so it can be computed in O (m_λH_λ ) time.
Thus, C_2^time can be evaluated as follows:
C_2^time = O ( ∑_λ∈{ρ}∪ S_∪ P_ 1 + ∑_λ∈ R_ m_λH_λ )
=O (n + r ∑_λ∈ R_H_λ )
= {
O (n+r·min{m,rn} ) W=ℤ_≥ 0,
O (n+r (m+n log r_+ ) ) W=ℝ_≥ 0.
.
§.§.§ Computational complexity for F_3
First, we consider C_3^space. In F_3, a graph of a constant size is prepared for each edge of E ( )∖{{ρ, ρ'}}, so C_3^space=O (|E ( )| )=O (m ).
Next, we consider C_3^time. For each ={μ,λ}∈ E ()∖{{ρ, ρ'}} (λ∈μ), if μ∈ S_∪ P_ then F_3 () can be computed in O(1) time by using Range Minimum Query. If μ∈ R_ then F_3 ( ) can be obtained by running the shortest path algorithm for H_μ∖λ for O (m_λ ) times, so it can be computed in O (m_λH_λ ) time.
Thus, C_3^time can be evaluated as follows.
C_3^time = O ( ∑_μ∈ S_∪ P_∑_λ∈μ1 + ∑_μ∈ R_∑_λ∈μ m_μH_μ )
=O ( m + r^2 ∑_μ∈ R_H_μ )
= {
O (m+r^2 ·min{m,rn} ) W=ℤ_≥ 0,
O (m+r^2 (m+n log r_+ ) ) W=ℝ_≥ 0.
.
§.§.§ Computational complexity for F_4
If F_4 is realized as a data structure, it is not efficient because it requires computation and space even for images that can be computed in O (1 ) time, as described in Subsection <ref>. Therefore, we consider the computational complexity of realizing F_4R as a data structure instead of F_4 itself.
First, the space for F_4R is C_4R^space=∑_μ∈ R_O ( m_μ^2 )=O (r·min{m,rn} ). Next, we consider the preprocessing time C_4R^time of F_4R. For each μ∈ R_ and each node pair ψ ={λ,λ'}∈μ2, F_4R ( ψ )=F_4 ( ψ ) can be obtained by running the shortest path algorithm for H_μ∖λ∖λ'∪μ for O (m_μ ) times, so it can be computed in O (m_μH_μ ) time. Thus, C_4R^time is as follows:
C_4R^time = O ( ∑_μ∈ R_∑_ψ∈μ2 m_μH_μ )
= O ( ∑_μ∈ R_ m_μ^3 H_μ )
=O ( r^3 ∑_μ∈ R_H_μ )
= {
O (r^3 ·min{m,rn} ) W=ℤ_≥ 0,
O (r^3 (m+n log r_+ ) ) W=ℝ_≥ 0.
.
§.§ Proofs for the algorithms
In the following, we describe an outline of the algorithm corresponding to each of the above theorems.
First, we describe the representation of distances and beer distances using each mapping F_i and binary operations. We also show several algorithms for Beer Path Query based on them. We consider computing the distance or beer distance from s to t in a biconnected connected graph G. First, let θ∈ Q_∖{ρ} be a Q node whose skeleton contains vertex s, and let {x_θ,y_θ}={s,s'}. Similarly, take a node θ' ∈ Q_∖{ρ} whose skeleton contains a vertex t and let {x_θ',y_θ'}={t,t'}.
If θ = θ', then we combine F_1 (θ) which contains data on the distance and the beer distance in G_θ, and F_2 (θ) which contains data in G∖ E (G_θ). The combined result is represented by F_1(θ)⊕ F_2(θ), and the distance and beer distance are obtained by referring to the weights of the vertex pair ⟨ s,t ⟩ in F_1(θ)⊕ F_2(θ).
If θ≠θ', let π be the lowest common ancestor of θ and θ' in and denote the θ-θ' path in by the following vertex sequence Π.
Πθ = μ_k , μ_k-1, … , μ_2, μ_1 ,π , λ_1 ,λ_2 ,… , λ_ℓ-1, λ_ℓ = θ'.
F_1 (θ)=F_1 (μ_k ) contains the data of the distance and the beer distance of each pair of {s,s'}={x_μ_k, y_μ_k} in G_μ_k. Also, F_3 ({μ_k-1,μ_k} ) contains the data of each pair of {x_μ_k-1, y_μ_k-1}∪{x_μ_k, y_μ_k} in G_μ_k-1∖ E (G_μ_k ). Therefore, by combining F_1 (μ_k ) and F_3 ({μ_k-1,μ_k} ), we can obtain the data of each pair of {x_μ_k-1, y_μ_k-1}∪{s,s'} in G_μ_k-1. And the combined result can be expressed as F_1 (μ_k )⊕ F_3 ({μ_k-1,μ_k} ).
By applying this idea repeatedly, we can obtain the data of each pair of {x_μ_1, y_μ_1}∪{s,s'} in G_μ_1 by computing
F_1 (μ_k ) ⊕ (F_3 ({μ_1,μ_2} ) ⊕⋯⊕ F_3 ({μ_k-1,μ_k} ) ) = F_1 (μ_k ) ⊕ ( ⊕_i=1^k-1 F_3 ({μ_i,μ_i+1} ) ).
Similarly, by computing
(F_3 ({λ_1,λ_2} ) ⊕⋯⊕ F_3 ({λ_ℓ-1,λ_ℓ} ) ) ⊕ F_1 (λ_ℓ ) = ( ⊕_j=1^ℓ-1 F_3 ({λ_j,λ_j+1} ) ) ⊕ F_1 (λ_ℓ ),
we can obtain the data of each pair of {x_λ_1, y_λ_1}∪{t,t'} in G_λ_1.
Furthermore, F_4 ({μ_1,λ_1} ) contains the data of each pair of {x_μ_1, y_μ_1}∪{x_λ_1, y_λ_1} in G∖ E (G_μ_1 )∖ E (G_λ_1 ).
Therefore, by combining this and the results of the above two operations, we can obtain the data of each pair of {s,s'}∪{t,t'} in G. And the combined result can be expressed as
K_θ,θ'^w⃗_s,t = F_1 (μ_k ) ⊕ ( ⊕_i=1^k-1 F_3 ({μ_i,μ_i+1} ) )
⊕ F_4 ({μ_1,λ_1} ) ⊕ ( ⊕_j=1^ℓ-1 F_3 ({λ_j,λ_j+1} ) ) ⊕ F_1 (λ_ℓ ).
Then, we obtain the distance and the beer distance by Gst = w_s,t (s,t ), Gst = w_s,t^B (s,t ).
Here, the computation of K_θ,θ'^w⃗_s,t can be written
K_θ,θ'^w⃗_s,t = F_1 (μ_k ) ⊕ ( ⊕_i=1^k-1 F_3 ({μ_i,μ_i+1} ) )
⊕ F_4 ({μ_1,λ_1} ) ⊕ ( ⊕_j=1^ℓ-1 F_3 ({λ_j,λ_j+1} ) ) ⊕ F_1 (λ_ℓ )
by using the semigroup ⊕. Therefore, if we preprocess for Tree Product Query regarding ⊕, we can compute K_θ,θ'^w⃗_s,t for the ⊕,⊕ operation in O (α (|V ( )| ) )=O (α (m ) ) time.
§ ALGORITHM BASED ON TREE DECOMPOSITION
In this section, we describe algorithms based on tree decomposition.
For a graph G with n vertices and m edges, denote its treewidth by tw(G). Also, let be a rooted tree decomposition of G with width t and with O(tn) nodes.
The following theorem holds for the space of data structures to be constructed, the preprocessing time, and the query time.
* When we construct a data structure in O(t^3n) space with preprocessing using O(t^8n) time, we can answer a query in O(t^8n+α(tn)) time.
* When we construct a data structure in O(t^3n) space with preprocessing using O(t^8n) time, we can answer a query in O(t^7+α(tn)) time.
* When we construct a data structure in O(t^5n) space with preprocessing using O(t^10n) time, we can answer a query in O(t^6+α(tn)) time.
In the following, we describe an outline of the proof of the above theorems.
For each node μ in , let X_μ be the vertex subset of G that μ has, and let S_μ = X_μ∪⋃_λ∈μX_λ.
Furthermore, let A_μ be a vertex in X_μ if μ is a root node, and if μ is not the root node, A_μ = X_μ∩ X_λ where μ∈λ (A_μ corresponds to the endpoint set of μ in the SPQR tree).
Then, we define the following symbols, analogously to the mappings defined for the SPQR tree.
1μuv d⃗(G[S_μ],u,v) (μ∈ V(), u,v∈ A_μ),
2μuv d⃗(G∖ E(G[S_μ]),u,v) (μ∈ V(), u,v∈ A_μ),
3{μ,λ}uv d⃗(G[S_μ]∖ E(G[S_λ]),u,v) ((μ,λ) ∈ E(), λ∈μ, u,v∈ A_μ∪ A_λ),
4{λ,λ'}uv d⃗(G∖ E(G[S_λ])∖ E(G[S_λ']),u,v)
({λ,λ'}∈μ2, μ∈ V(), u,v∈ A_λ∪ A_λ').
We can calculate f⃗_1 as follows. If μ is a leaf of , then 1μuv=G[X_μ]uv and
1μuv=min_p∈ B∩ X_μ{G[X_μ]up+G[X_μ]pv} if B∩ X_μ≠∅, and 1μuv=∞ if B∩ X_μ = ∅.
If μ is not a leaf, then
1μuv=min{G[X_μ]uv, min_λ∈μ, p,q∈ A_λ{G[X_μ]up+1λpq+G[X_μ]qv}},
1μuv=min{ℓ_u,v, min_p∈ B∩ X_μ{G[X_μ]up+G[X_μ]pv}} if B∩ X_μ≠∅, and 1μuv=ℓ_u,v if B∩ X_μ = ∅,
where
ℓ_u,v=min_λ∈μ, p,q∈ A_λ{G[X_μ]up+1λpq+G[X_μ]qv}.
f⃗_2,f⃗_3,f⃗_4 can be calculated using the same idea.
For each μ∈ V(), |X_μ|=O(t),|E(G[X_μ])|=O(t^2), so G[X_μ]=O(t^2) is obtained regardless of how we take the range W of the weights.
Noting this, the space C_i^space and computation time C_i^time required for each f⃗_i can be evaluated as follows.
C_1^space,C_2^space=∑_μ∈ V()O(|A_μ|^2)=O(t^2 |V()|)=O(t^3n),
C_3^space=∑_∈ E()O(t^2)=O(t^2 |E()|)=O(t^3n),
C_4^space=∑_μ∈ V()∑_ψ∈μ2 O(t^2)=O(t^4 |V()|)=O(t^5n),
C_1^time,C_2^time=∑_μ∈ V()∑_u,v∈ A_μ∑_λ∈μ∑_p,q∈ A_λO( G[X_μ])=O(t^7 |V()|)=O(t^8n),
C_3^time=∑_(μ,λ)∈ E()∑_u,v∈ A_μ∪ A_λ∑_θ∈μ∖{λ}∑_p,q∈ A_θ O(t^2)=O(t^7 |E()|)=O(t^8n),
C_4^time=∑_μ∈ V()∑_{λ,λ'}∈μ2∑_u,v∈ A_λ∪ A_λ'∑_θ∈λ,λ'∖{λ}∑_p,q∈ A_θ O(t^2)=O(t^9 |V()|)=O(t^10n).
Then, we consider the computational complexity of queries when these are precomputed.
First, if f⃗_1,f⃗_2 are precomputed, the query can be solved in O(t^7|V()|+α(|V()|)+t^6)=O(t^8n+α(tn)) time. Next, if f⃗_1,f⃗_2,f⃗_3 are precomputed, the query can be solved in O(t^7+α(|V()|)+t^6)=O(t^7+α(tn)) time. Finally, if f⃗_1,f⃗_2,f⃗_3,f⃗_4 are precomputed, the query can be solved in O(α(tn)+t^6) time.
Here, the t^6 term appearing in each query time is the time required to perform the operation ⊕ O(1) times to integrate the data structures without using Tree Product Query (for ⊕). Of course, the t^6 term could be replaced by α(tn) if a better semigroup could be defined.
These computational complexities show that the degree of t in each result is larger than that of r in the case of the triconnected component decomposition (SPQR tree). The dominant factor is that the pair u,v in each i·uv can be taken in O(t^2) ways in the tree decomposition, whereas only in O(1) ways in the triconnected component decomposition.
§ FIGURES AND TABLES
|
http://arxiv.org/abs/2307.02333v1
|
20230705144318
|
Disentangling the Hadronic Components in NGC 1068
|
[
"Marco Ajello",
"Kohta Murase",
"Alex McDaniel"
] |
astro-ph.HE
|
[
"astro-ph.HE",
"astro-ph.GA"
] |
Marco Ajello (ORCID 0000-0002-6584-1703)
Department of Physics and Astronomy, Clemson University, Clemson, SC, 29631
Kohta Murase (ORCID 0000-0002-5358-5642)
Department of Physics; Department of Astronomy & Astrophysics; Center for Multimessenger Astrophysics, Institute for Gravitation and the Cosmos, The Pennsylvania State University, University Park, PA 16802, USA
School of Natural Sciences, Institute for Advanced Study, Princeton, NJ 08540, USA
Center for Gravitational Physics and Quantum Information, Yukawa Institute for Theoretical Physics, Kyoto University, Kyoto, Kyoto 606-8502, Japan
Alex McDaniel (ORCID 0000-0002-8436-1254)
Department of Physics and Astronomy, Clemson University, Clemson, SC, 29631
The recent detection of high-energy neutrinos by IceCube in the direction of the nearby Seyfert/starburst galaxy NGC 1068 implies that radio-quiet active galactic nuclei can accelerate cosmic-ray ions. Dedicated multi-messenger analyses suggest that the interaction of these high-energy ions with ambient gas or photons happens in a region of the galaxy that is highly opaque for GeV-TeV gamma rays. Otherwise, the GeV-TeV emission would violate existing constraints provided by Fermi-LAT and MAGIC. The conditions of high optical depth are realized near the central super-massive black hole (SMBH). At the same time, the GeV emission detected by the Fermi-Large Area Telescope (LAT) is likely related to the galaxy's sustained star-formation activity. In this work, we derive a 20 MeV - 1 TeV spectrum of NGC 1068 using 14 yrs of Fermi-LAT observations. We find that the starburst hadronic component is responsible for NGC 1068's emission above ∼500 MeV. However, below this energy an additional component is required. In the 20-500 MeV range the Fermi-LAT data are consistent with hadronic emission initiated by non-thermal ions interacting with gas or photons in the vicinity of the central SMBH. This highlights the importance of the MeV band to discover hidden cosmic-ray accelerators.
§ INTRODUCTION
NGC 1068 is one of the brightest and most studied active galactic nuclei (AGN). It also hosts intense star formation (a starburst). Located at a distance of ∼10 Mpc <cit.>, it is classified as a Seyfert 2 galaxy because of the absence of broad emission lines in its optical spectrum <cit.>. However, broad lines have been detected in polarized light <cit.>. This observation represents one of the foundations of the AGN unification model <cit.> as it implies the presence of an obscuring medium (the torus) on parsec scales.
NGC 1068's prominent X-ray emission is well understood as due to photons from the accretion disk being upscattered to X-rays by a population of thermal electrons located above the accretion disk <cit.>. This coronal emission is heavily reprocessed by the dense obscuring torus which is observed nearly edge on <cit.>.
Recent observations <cit.> with the Atacama Large Millimiter Array (ALMA) have resolved the torus, which is found to have a radius of 3.5 pc and a mass of ∼10^5 M_⊙. Further ALMA observations have shown that a wide-angle AGN wind is currently interacting with a large fraction of the molecular torus <cit.>.
CO observations indicate the presence of an AGN-driven massive (>10^7 M_⊙) molecular outflow launched from the inner ∼100 pc region, and a starburst ring located at 1.5 kpc responsible for most of the galaxy's star-formation rate of ≈20 M_⊙ yr^-1 <cit.>.
The GeV gamma-ray emission of NGC 1068 has typically been ascribed to the star-formation activity, which, through the creation of supernova remnants and pulsar wind nebulae, is able to accelerate cosmic rays <cit.>. Recently, neutrino emission from NGC 1068 has been reported by IceCube at a confidence level of 4.2 σ in the 1-20 TeV energy range <cit.>.
In the same energy range, the Major Atmospheric Gamma Imaging Cherenkov <cit.> telescope reported only upper limits on TeV gamma-ray emission from NGC 1068.
These upper limits demonstrate that the TeV gamma-ray flux of NGC 1068 is less than a tenth of the neutrino flux.
This implies that the region of hadronic (pp or pγ) interactions producing the observed neutrinos should be highly opaque to GeV-TeV gamma rays because hadronic interactions inevitably produce neutrinos and gamma rays with similar energies. Multi-messenger data suggest that the neutrino emission radius R is smaller than ∼30-100 Schwarzschild radii <cit.>.
Such hidden sources have independently been predicted by the analyses of the all-sky neutrino flux and the diffuse isotropic gamma-ray background <cit.>.
The conditions of high γγ→ e^+e^- optical depth (τ_γγ) are reached in the immediate vicinity of the central super-massive black hole (SMBH) <cit.>.
The GeV-TeV photons are then reprocessed to MeV energies through pair cascades and then leave the source with a spectrum which depends on the distance from the SMBH.
For this reason, in this work, we extract a 20 MeV-1 TeV spectrum of NGC 1068 using 14.3 yr of Fermi Large Area Telescope <cit.> data and interpret it as the sum of two components: a low-energy cascade emission and a high-energy starburst emission. This paper is organized as follows: <ref> describes the analysis of Fermi-LAT data, <ref> describes the modeling, while
<ref> summarizes the results.
In this work we adopt the standard cosmological parameters: H_0=70 km s^-1 Mpc ^-1, Ω_M=1-Ω_Λ=0.3.
§ GAMMA-RAY DATA ANALYSIS
The gamma-ray data used in this analysis were collected over ∼ 14.3 years by the Fermi-LAT between August 4, 2008 and December 1, 2022. The full analysis includes events with energies in the range 20 MeV-1 TeV. We define a 10^∘× 10^∘ region of interest (ROI) centered at the 4FGL coordinates of NGC 1068 (4FGL J0242.6-0000). We use the standard data filters (DATA QUAL>0 and LAT CONFIG==1) and select photons corresponding to the P8R3_SOURCE_V3 class <cit.>. The analysis is performed using (v1.2, ), which utilizes the underlying Fermitools (v2.2.0). The Galactic diffuse emission is modeled using the standard interstellar emission model () and the point source emission is modeled using the 4FGL-DR3 catalog (, ). In order to account for photon leakage from sources outside of the ROI due to the PSF of the detector, the model includes all 4FGL sources within a 15^∘× 15^∘ region. The energy dispersion correction (edisp_bins=-1) is enabled for all sources except the isotropic component. The analysis is split between two energy regimes, the 20 MeV - 50 MeV regime and the 50 MeV-1 TeV regime. At the 50 MeV-1 TeV regime we perform a joint likelihood analysis over the four point spread function (PSF) classes and use a maximum zenith angle of 90^∘. Each PSF type has a designated isotropic spectrum (iso_P8R3_SOURCE_V3_PSFi_v1, for i
ranging from 0-3) that is used in the analysis.
For the 20-50 MeV regime, we adopt the more stringent zenith angle of 80^∘ and do not differentiate among the different PSF classes. We model the extragalactic emission and residual instrumental background using
.
The diffuse emission models are available from the Fermi Science Support
Center[<https://fermi.gsfc.nasa.gov/ssc/>]. During the analysis, NGC 1068 is modeled as a power-law source with free index and normalization. The spectral parameters of the Galactic diffuse component (index and normalization) and the normalization of the isotropic component are left free to vary as are the normalizations of all 4FGL sources with TS ≥ 25 that are within 5^∘ of the ROI center and all sources with TS ≥ 500 and within 7^∘. The spectral data from the Fermi-LAT analysis are listed in Table <ref>. Upper limits are reported at the 95% confidence level and are calculated using the Bayesian method <cit.>.
In Figure <ref> we show an image of NGC 1068 from the European Southern Observatory's (ESO) Very Large Telescope (VLT)[<https://www.eso.org/public/images/eso1720a/>] overlaid with the 95% positional uncertainty ellipse for the 50 MeV - 1 TeV analysis obtained using the function in . The measured spectrum of NGC 1068, shown in Figure <ref>, is in agreement with the one reported in the 4FGL-DR3 catalog <cit.> and extends it to lower and higher energies, respectively.
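For orientation, the snippet below sketches how such a fermipy-based analysis is typically driven from Python. The call sequence and all configuration values are illustrative assumptions and are not the exact settings used to produce the spectrum, localization, and light curves reported in this work.

```python
# Schematic fermipy session; all settings are illustrative assumptions, not the
# exact configuration used in this work.
from fermipy.gtanalysis import GTAnalysis

gta = GTAnalysis('config.yaml', logging={'verbosity': 3})
gta.setup()                                   # livetime cube, exposure, source maps
gta.optimize()                                # coarse fit of all ROI components

# Free NGC 1068 and the diffuse components, as described in the text.
gta.free_source('4FGL J0242.6-0000')
gta.free_source('galdiff')
gta.free_source('isodiff', pars='norm')
gta.free_sources(distance=5.0, pars='norm', minmax_ts=[25, None])

gta.fit()
sed = gta.sed('4FGL J0242.6-0000')            # bin-by-bin spectral points
loc = gta.localize('4FGL J0242.6-0000')       # positional uncertainty ellipse
lc = gta.lightcurve('4FGL J0242.6-0000',
                    binsz=86400.0 * 365.0)    # yearly bins (illustrative)
```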
§ MODELS AND IMPLICATIONS
§.§ Starburst Galaxies
NGC 1068 is one of the starburst galaxies detected by the Fermi-LAT <cit.>, and it was also considered to be among the most promising sources of PeV neutrinos <cit.>. The starburst region is considered to be transparent to GeV-TeV gamma rays, and the observed GeV gamma-ray emission presumably comes from the decay of neutral pions, although the neutrino flux that modeling predicts would be associated with the gamma-ray emission is too low to explain the IceCube data.
Assuming that the starburst region is nearly calorimetric <cit.>,
we calculate the gamma-ray emission produced by cosmic rays via inelastic pp interactions with interstellar gas,
adopting the method used in <cit.>. The normalization of the starburst model is set by the L_γ-L_ IR relation obtained by <cit.>, where log_10 L_ IR=10.97 is used for NGC 1068 <cit.>. In Figure <ref>, we show the 2σ uncertainty bands for the starburst model.
It is known that pionic gamma rays have a spectral break around 0.1 GeV below which the gamma-ray spectrum falls as E F_E∝ E^2. Figure <ref> shows an excess of the data over the starburst model, particularly for energies at ≲500 MeV.
We also note that GeV gamma-ray emission could be produced by cosmic rays accelerated by AGN, perhaps through disk winds <cit.>.
Indeed, the source luminosity as predicted by the L_γ-L_ IR relation slightly underestimates the
true luminosity measured by Fermi-LAT <cit.>.
<cit.> proposed that the observed GeV gamma-ray emission may originate from interactions between the disk wind and the dusty torus. However, the sub-GeV excess exists even for these scenarios as long as the primary gamma-ray emission is produced primarily by hadronuclear interactions. Finally, in starburst galaxies the leptonic component is sub-dominant to the hadronic one and its spectrum is harder than the excess observed here <cit.>.
§.§ AGN Coronae
The excess of ≲500 MeV gamma-ray emission shown in Figure <ref> suggests the presence of another component at these energies. It may be a hint of gamma-ray emission from the coronal regions around the AGN accretion disk. It is widely believed that a hot, strongly-magnetized plasma, the so-called “corona”, may produce X-ray emission through Compton upscattering of disk photons <cit.>. Recent global magnetohydrodynamic simulations <cit.> and particle-in-cell simulations <cit.> have demonstrated that such magnetically-powered coronal regions naturally form as a result of magnetic dissipation in the black hole accretion system.
<cit.> proposed the magnetically-powered corona model for multi-TeV neutrino emission in which cosmic rays are accelerated by magnetic dissipation and the resulting turbulence in the vicinity of SMBHs. High-energy protons interact with optical/UV photons from the accretion disk and X-rays from the corona via pγ interactions as well as the coronal gas via pp interactions. They showed the importance of Bethe-Heitler pair production for the energy losses of the protons making TeV neutrinos, as well as calculated the cascade emission resulting from synchrotron, inverse Compton, and two-photon annihilation. The model not only explains the multi-TeV neutrino flux of NGC 1068 but also the all-sky neutrino intensity in the 10 TeV range; furthermore, it predicts the associated proton-induced cascade gamma-ray emission in the MeV range. The cascade emission largely originates from synchrotron emission for strongly magnetized coronae.
In Figure <ref>, the cascade gamma-ray spectrum of the magnetically-powered corona model is taken from <cit.>, where an emission radius of R=30R_S (given that R_S is the Schwarzschild radius) and an intrinsic X-ray luminosity of L_X=(1-3)×10^43 erg s^-1 are used. The corresponding neutrino spectrum explains the observed IceCube data for NGC 1068 (see Figure <ref>). Interestingly, the cascade gamma-ray emission accompanied by neutrinos may explain the sub-GeV excess indicated by our Fermi-LAT analysis.
The break or cutoff energy of the coronal gamma-ray emission, which is set by τ_γγ∼1, depends on R and L_X. While predictions for hadronic gamma-ray emission at ∼1-10 MeV energies are rather robust, the flux in the ∼0.1 GeV range can be lower for smaller values of R <cit.>. For this reason, in Figure <ref>, the red uncertainty band of the model includes the case of R=3R_s (corresponding to the innermost stable circular orbit radius of a non-rotating black hole) and L_X=7×10^43 erg s^-1 and considers both the minimal pp and pγ models in <cit.>.
§ SUMMARY AND DISCUSSION
In this work, we have measured the gamma-ray spectrum of NGC 1068 using 14.3 yr of Fermi-LAT observations. We have, for the first time, extended the measurement to 20 MeV to constrain potential hadronic components whose gamma-ray emission is absorbed and reprocessed in the MeV band <cit.>.
We have found that above ≳500 MeV, the NGC 1068 spectrum can be well explained as the product of star-formation activity. This emission is mostly hadronic in origin, particularly in starburst galaxies like NGC 1068, which act nearly as proton calorimeters <cit.>. Indeed, in these galaxies the primary and secondary leptonic components are sub-dominant to the π^0-decay component <cit.>, although we do not exclude
a potential contribution to the gamma-ray flux
from electrons accelerated by outflows <cit.>.
In the magnetically-powered corona model <cit.>, it is natural to expect time variability for coronal neutrino and gamma-ray emission. The minimum variability timescale can be the light-crossing time, R/c, which may range from minutes to hours. Longer variability timescales of days or longer – associated with dissipation, rotation and accretion – are also possible. To test this scenario, we extracted a low-energy (50-500 MeV) and a high-energy (500 MeV-1 TeV) yearly (because of the low signal-to-noise ratio) binned lightcurve of the source. These lightcurves are reported in Figure <ref> and show no evidence of variability (p-values of 0.40 and 0.15, respectively)[The significance of variability has been computed as in Appendix A.3 of <cit.>.].
Furthermore, we note that the highest energy photon detected by the Fermi-LAT within 0.25 deg of NGC 1068 (well within the 95% containment radius at >100 GeV) has an energy of 738 GeV. The second most energetic photon has an energy of 217 GeV. This shows that future observations with the Cherenkov Telescope Array <cit.> may detect the high-energy emission of NGC 1068.
MeV gamma-ray emission may also be produced by non-thermal electrons <cit.>. Particle acceleration mechanisms are currently uncertain, and not only stochastic acceleration in turbulence <cit.> but also magnetic reconnection <cit.> and shock acceleration <cit.> have been proposed.
Further multi-messenger and multi-wavelength studies, including MeV gamma-ray observations with, e.g., AMEGO-X <cit.>, will enable us to probe the physics of dissipation and particle acceleration in the coronal regions.
MA and AM acknowledge support from NASA grant 80NSSC22K1580. Clemson University is acknowledged for generous allotment of compute time on Palmetto cluster.
KM is partly supported by the NSF Grants No. AST-1908689, No. AST-2108466 and No. AST-2108467, and KAKENHI No. 20H01901 and No. 20H05852.
The Fermi LAT Collaboration acknowledges generous ongoing support
from a number of agencies and institutes that have supported both the
development and the operation of the LAT as well as scientific data analysis.
These include the National Aeronautics and Space Administration and the
Department of Energy in the United States, the Commissariat à l'Energie Atomique
and the Centre National de la Recherche Scientifique / Institut National de Physique
Nucléaire et de Physique des Particules in France, the Agenzia Spaziale Italiana
and the Istituto Nazionale di Fisica Nucleare in Italy, the Ministry of Education,
Culture, Sports, Science and Technology (MEXT), High Energy Accelerator Research
Organization (KEK) and Japan Aerospace Exploration Agency (JAXA) in Japan, and
the K. A. Wallenberg Foundation, the Swedish Research Council and the
Swedish National Space Board in Sweden.
Additional support for science analysis during the operations phase is gratefully
acknowledged from the Istituto Nazionale di Astrofisica in Italy and the Centre
National d'Études Spatiales in France. This work performed in part under DOE
Contract DE-AC02-76SF00515.
[Abdollahi et al.(2020)]4fgl Abdollahi, S., Acero, F., Ackermann, M., et al. 2020, , 247, 33. doi:10.3847/1538-4365/ab6bcb
[Abdollahi et al.(2022)]4fgl_dr3 Abdollahi, S., Acero, F., Baldini, L., et al. 2022, , 260, 53. doi:10.3847/1538-4365/ac6751
[Acciari et al.(2019)]magic_ngc1068 Acciari, V. A., Ansoldi, S., Antonelli, L. A., et al. 2019, , 883, 135
[Actis et al.(2011)]CTA Actis, M., Agnetta, G., Aharonian, F., et al. 2011, Experimental Astronomy, 32, 193
[Ackermann et al.(2012)]ackermann2012 Ackermann, M., Ajello, M., Allafort, A., et al. 2012, , 755, 164
[Ajello et al.(2021)]ajello2021 Ajello, M., Baldini, L., Ballet, J., et al. 2021, , 921, 144
[Ajello et al.(2020)]ajello2020 Ajello, M., Di Mauro, M., Paliya, V. S., et al. 2020, , 894, 88
[Antonucci & Miller(1985)]antonucci85 Antonucci, R. R. J. & Miller, J. S. 1985, , 297, 621
[Atwood et al.(2009)]atwood2009 Atwood, W. B., Abdo, A. A., Ackermann, M., et al. 2009, , 697, 1071
[Atwood et al.(2013)]atwood2013 Atwood, W. B., Baldini, L., Bregeon, J., et al. 2013, , 774, 76. doi:10.1088/0004-637X/774/1/76
[Bauer et al.(2015)]bauer2015 Bauer, F. E., Arévalo, P., Walton, D. J., et al. 2015, , 812, 116
[Bechtol et al.(2017)]bechtol2017 Bechtol, K., Ahlers, M., Di Mauro, M., Ajello, M. & Vandenbroucke, J. 2017, , 836, 47
[Bruel et al.(2018)]bruel2018 Bruel, P., Burnett, T. H., Digel, S. W., et al. 2018, arXiv:1810.11394. doi:10.48550/arXiv.1810.11394
[Caputo et al.(2022)]caputo2022 Caputo, R., Ajello, M., Kierans, C. A., et al. 2022, JATIS, 8, 044003
[Courtois et al.(2013)]curtois2013 Courtois, H. M., Pomarède, D., Tully, R. B., et al. 2013, , 146, 69
[De Angelis et al.(2017)]angelis2017 De Angelis, A., Tatischeff, V., Tavani, M., et al. 2017, Experimental Astronomy, 44, 25
[Fluetsch et al.(2019)]fluetsch2019 Fluetsch, A., Maiolino, R., Carniani, S., et al. 2019, , 483, 4586
[Galeev et al.(1979)]galeev1979 Galeev, A. A., Rosner, R., & Vaiana, G. S. 1979, , 229, 318. doi:10.1086/156957
[García-Burillo et al.(2014)]garcia2014 García-Burillo, S., Combes, F., Usero, A., et al. 2014, , 567, A125
[García-Burillo et al.(2016)]garcia2016 García-Burillo, S., Combes, F., Ramos Almeida, C., et al. 2016, , 823, L12
[García-Burillo et al.(2019)]garcia2019 García-Burillo, S., Combes, F., Ramos Almeida, C., et al. 2019, , 632, A61
[Groselj et al.(2023)]groselj2023 Groselj, D., Hakobyan, H., Beloborodov, A. M., Sironi, L., & Philippov, A. 2023, arXiv:2301.11327
[Haardt & Maraschi(1991)]haardt1991 Haardt, F. & Maraschi, L. 1991, , 380, L51. doi:10.1086/186171
[Helene(1983)]helene1983 Helene, O. 1983, Nuclear Instruments and Methods in Physics Research, 212, 319. doi:10.1016/0167-5087(83)90709-3
[IceCube Collaboration et al.(2022)]ic_ngc1068 IceCube Collaboration, Abbasi, R., Ackermann, M., et al. 2022, Science, 378, 538
[Inoue et al.(2022)]inoue2022 Inoue, S., Cerruti, M., Murase, K., et al. 2022, arXiv:2207.02097
[Inoue et al.(2020)]inoue2020 Inoue, Y., Khangulyan, D., & Doi, A. 2020, , 891, L33
[Lorenz & MAGIC Collaboration(2004)]lorenz2004 Lorenz, E. & MAGIC Collaboration 2004, , 48, 339. doi:10.1016/j.newar.2003.12.059
[Jiang et al.(2019)]jiang2019 Jiang, Y.-F., Blaes, O., Stone, J. M, & Davis, S. W. 2019, , 885, 144
[Kheirandish et al.(2021)]kheirandish2021 Kheirandish, A., Murase, K., & Kimura, S. S. 2021, , 922, 45
[Lacki et al.(2011)]lacki2011 Lacki, B. C., Thompson, T. A., Quataert, E., et al. 2011, , 734, 107
[Lamastra et al.(2016)]lamastra2016 Lamastra, A., Fiore, F., Guetta, D., et al. 2016, , 596, A68
[Lenain et al.(2010)]lenain2010 Lenain, J. -P., Ricci, C., Türler, M., Dorner, D., & Walter, R. 2010, , 524, A72
[Liu et al.(2017)]liu2017 Liu, R.-Y., Murase, K., Inoue, S., Ge, C., & Wang, X.-Y. 2017, , 780, 137
[McDaniel et al.(2023)]mcdaniel2023 McDaniel, A., Ajello, M., & Karwin, C. 2023, , 943, 168
[Murase(2022)]murase2022 Murase, K. 2022, , 941, L17
[Murase et al.(2016)]murase2016a Murase, K. Guetta. D. & Ahlers, M. 2016, , 116, 071101
[Murase et al.(2020)]murase2020 Murase, K., Kimura, S. S., & Mészáros, P. 2020, , 125, 011101
[Murase & Waxman(2016)]murase2016b Murase, K. & Waxman, E. 2016, , 94, 103006
[Peretti et al.(2019)]peretti2019 Peretti, E., Blasi, P., Aharonian, F., et al. 2019, , 487, 168
[Sanders et al.(2003)]sanders2003 Sanders, D. B., Mazzarella, J. M., Kim, D.-C., et al. 2003, , 126, 1607. doi:10.1086/376841
[Shields & Oke(1975)]shields75 Shields, G. A. & Oke, J. B. 1975, , 197, 5
[Stecker et al.(1991)]stecker1991 Stecker, F. W., Done, C., Salamon, M. H., & Sommers, P. 1991, , 66, 2697
[Urry & Padovani(1995)]urry95 Urry, C. M. & Padovani, P. 1995, , 107, 803
[Wood et al.(2017)]fermipy Wood, M., Caputo, R., Charles, E., et al. 2017, 35th International Cosmic Ray Conference (ICRC2017), 301, 824
[Yoast-Hull et al.(2014)]yoasthull2014 Yoast-Hull, T. M., Gallagher, J. S., Zweibel, E. G., & Everett, J. E. 2014, , 780, 137