Well-posedness and ill-posedness results for the Novikov-Veselov equation
Yannis Angelopoulos, Department of Mathematics, University of Toronto, Bahen Centre, 40 St. George Street, Toronto, ON M5S 2E4, Canada
Communications on Pure & Applied Analysis, May 2016, 15(3): 727-760. doi: 10.3934/cpaa.2016.15.727
Received September 2014; Revised December 2015; Published February 2016
In this paper we study the Novikov-Veselov equation and the related modified Novikov-Veselov equation in certain Sobolev spaces. We prove local well-posedness in $H^s (\mathbb{R}^2)$ for $s > \frac{1}{2}$ for the Novikov-Veselov equation, and local well-posedness in $H^s (\mathbb{R}^2)$ for $s > 1$ for the modified Novikov-Veselov equation. Finally we point out some ill-posedness issues for the Novikov-Veselov equation in the supercritical regime.
Keywords: Novikov-Veselov equation, Fourier restriction norm method, multilinear estimates, dispersive equation.
Mathematics Subject Classification: 35Q5.
Citation: Yannis Angelopoulos. Well-posedness and ill-posedness results for the Novikov-Veselov equation. Communications on Pure & Applied Analysis, 2016, 15(3): 727-760. doi: 10.3934/cpaa.2016.15.727
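For reference, since the result above is stated in terms of Sobolev regularity, recall the standard norm on $H^s(\mathbb{R}^2)$ (this definition is added here for the reader's convenience and is not part of the original abstract):
$$\|f\|_{H^s(\mathbb{R}^2)} = \left\| (1+|\xi|^2)^{s/2}\,\hat{f}(\xi) \right\|_{L^2_\xi(\mathbb{R}^2)},$$
so local well-posedness for $s > \frac{1}{2}$ means that for any initial data in such a space one obtains existence, uniqueness and continuous dependence on the data, locally in time.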
N.V. Munts, S.S. Kumkov. On the coincidence of the minimax solution and the value function in a time-optimal game with a lifeline. P. 200-214.
We consider time-optimal differential games with a lifeline. In such games, as usual, there is a terminal set to which the first player tries to guide the system as fast as possible, and there is also a set, called a lifeline, such that the second player wins when the system attains this set. The payoff is the result of applying Kruzhkov's change of variable to the time when the system reaches the terminal set. We also consider Hamilton–Jacobi equations corresponding to such games. The existence of a minimax solution of a boundary value problem for a Hamilton–Jacobi type equation is proved. For this we introduce certain strong assumptions on the dynamics of the game near the boundary of the game domain. More exactly, the first and second players can direct the motion of the system to the terminal set and the lifeline, respectively, if the system is near the corresponding set. Under these assumptions, the value function is continuous in the game domain. The coincidence of the value function and the minimax solution of the boundary value problem is proved under the same assumptions.
Keywords: time-optimal differential games with a lifeline, value function, Hamilton–Jacobi equations, minimax solution.
The paper was received by the Editorial Office on February 28, 2018. Supported by the Russian Foundation for Basic Research (Grant Number 18-01-00410).
Natal'ya Vladimirovna Munts, Krasovskii Institute of Mathematics and Mechanics, Ural Branch of the Russian Academy of Sciences, Yekaterinburg, 620990 Russia, e-mail: [email protected].
Sergei Sergeevich Kumkov, Cand. Sci. (Phys.-Math.), Krasovskii Institute of Mathematics and Mechanics, Ural Branch of the Russian Academy of Sciences, Yekaterinburg, 620990 Russia, e-mail: [email protected].
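As a point of clarification (our addition, not part of the original abstract), Kruzhkov's change of variable in time-optimal problems is usually taken to be
$$\varphi(\tau) = 1 - e^{-\tau}, \qquad \tau \in [0, +\infty],$$
so the payoff of the game is $\varphi$ applied to the time at which the system reaches the terminal set, which maps an unbounded reaching time to a bounded payoff in $[0,1]$.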
Spectro-temporal encoded multiphoton microscopy and fluorescence lifetime imaging at kilohertz frame-rates
Sebastian Karpf (ORCID: 0000-0002-7665-5914)1,2, Carson T. Riche3, Dino Di Carlo3, Anubhuti Goel4, William A. Zeiger4, Anand Suresh4, Carlos Portera-Cailliau (ORCID: 0000-0001-5735-6380)4 & Bahram Jalali1,3
Nature Communications volume 11, Article number: 2062 (2020)
Subject terms: Fibre lasers; Imaging and sensing
Two-Photon Microscopy has become an invaluable tool for biological and medical research, providing high sensitivity, molecular specificity, inherent three-dimensional sub-cellular resolution and deep tissue penetration. In terms of imaging speeds, however, mechanical scanners still limit the acquisition rates to typically 10–100 frames per second. Here we present a high-speed non-linear microscope achieving kilohertz frame rates by employing pulse-modulated, rapidly wavelength-swept lasers and inertia-free beam steering through angular dispersion. In combination with a high-bandwidth, single-photon sensitive detector, this enables recording of fluorescent lifetimes at speeds of 88 million pixels per second. We show high-resolution, multi-modal (two-photon fluorescence and fluorescence lifetime, FLIM) microscopy and imaging flow cytometry with a digitally reconfigurable laser, imaging system and data acquisition system. These high speeds should enable high-speed and high-throughput image-assisted cell sorting.
The extension of regular one-photon fluorescence microscopy to non-linear two-photon microscopy (TPM)1 has led to important applications, e.g., in brain research, where the advantages of deeper tissue penetration and inherent three-dimensional sectioning capability enable recording neuronal activity to interrogate brain function in living mice2. Further, fluorescence lifetime imaging (FLIM)3 can probe internal biochemical interactions and the external environment of molecules, e.g., for quantifying cellular energy metabolism in living cells4,5. The quadratic intensity dependence of TPM favours laser scanning over whole-field illumination, so raster scanning with galvanometric mirrors is typically used. However, these mechanical mirrors are inertia-limited to line-scan rates of <20 kHz.
This limits the imaging frame rate; however, faster frame rates are desirable, e.g., for neuronal activity imaging at >1000 Hz frame rates6,7,8. Therefore, new technologies were developed with faster acousto-optical scanners6,9, polygonal scanners10, parallelized multi-foci excitation11,12,13, optical scanning8 or sparse sampling7, to name a few. For FLIM, recent developments also increased imaging speeds14 with techniques employing multifoci15 or widefield imaging16,17,18,19, or increased detection speed by analogue lifetime detection20,21,22, which permitted speeds up to video rate23. Enhancing imaging speeds through spectrally encoded scanning has been successfully employed for confocal microscopy24, high-speed brightfield imaging25, quantitative phase imaging26 and more, especially by employing the time stretch technique25,26,27. However, these fast imaging approaches could not be used for fluorescence imaging, as both the emission spectrum and the fluorescent lifetime are independent of the excitation wavelength (Kasha's rule), so the original spectral encoding is lost upon fluorescence emission. We present a solution to this problem by additionally employing temporal encoding from a wavelength-swept laser. This concept achieves spectro-temporal encoded imaging, where the wavelength is used for high-speed, inertia-free point scanning and the temporal encoding for one-to-one mapping of the signal to the imaging pixels. This temporal encoding is just like in conventional raster-scanning or laser-scanning microscopy. Spectro-temporal encoded imaging has unique advantages over other high-speed non-linear imaging approaches6,7,8,9,11,12,13 in terms of resolution, lifetime modality, compactness, flexibility, and fibre-based setup.
Here we report on a high-speed laser scanning technique for non-linear imaging using a rapidly wavelength-swept laser in combination with a diffraction grating to achieve inertia-free, very rapid beam scanning, orders of magnitude faster than mechanical scanners. We employ a high-speed swept-source Fourier-domain mode-locked (FDML) laser28 which is modulated to short pulses and amplified to high peak powers. This FDML-MOPA laser was previously described in detail29. The swept wavelength output is sent onto a diffraction grating for line scanning (Fig. 1). Each pulse illuminates a distinct pixel both in time and in space (spectro-temporal encoding). The y-axis is scanned with a galvanometric mirror at typically 1 kHz speed (slow axis). At an FDML sweep rate of 342 kHz this achieves a frame rate of 2 kHz for a 256 × 170 pixel frame size and an 88 MHz pulse repetition rate. High-bandwidth detection at single-photon sensitivity enables recording of second harmonic generation (SHG), two-photon fluorescence and fluorescence lifetime imaging (FLIM) at speeds up to the excitation rate of 88 million pixels per second. This fast two-photon microscope, which we term the spectro-temporal laser imaging by diffractive excitation (SLIDE) microscope, can be digitally programmed to adapt the pulse repetition rate to the fluorescent lifetimes. This is achieved through direct pulse modulation with a 20 GHz bandwidth electro-optic modulator (EOM) (cf. Fig. 1 and our previous report29).
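As a quick consistency check on these numbers, the short Python sketch below reproduces the quoted pixel and frame rates from the sweep rate and frame size. It uses only the values given in the text; the variable names and the script itself are illustrative and not part of the published system.

```python
# Back-of-the-envelope rates for SLIDE scanning, using the values quoted above.

sweep_rate_hz = 342e3      # FDML sweep (line) rate
pixels_per_line = 256      # pulses programmed per wavelength sweep
lines_per_frame = 170      # galvo-scanned y-axis lines

sweep_duration_s = 1.0 / sweep_rate_hz              # ~2.92 us per line
pixel_rate_hz = pixels_per_line * sweep_rate_hz     # ~87.6 MHz, quoted as ~88 MHz
frame_rate_hz = sweep_rate_hz / lines_per_frame     # ~2.01 kHz, quoted as ~2 kHz
frame_time_s = lines_per_frame * sweep_duration_s   # ~497 us per 256 x 170 frame

print(f"sweep duration: {sweep_duration_s*1e6:.2f} us")
print(f"pixel rate:     {pixel_rate_hz/1e6:.1f} MHz")
print(f"frame rate:     {frame_rate_hz/1e3:.2f} kHz")
print(f"frame time:     {frame_time_s*1e6:.0f} us")
```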
Fig. 1: Principle of spectro-temporal laser imaging by diffractive excitation (SLIDE). a A swept-source Fourier-domain mode-locked (FDML) laser is pulse-modulated, amplified and diffracted to produce rapid beam steering through spectrum-to-line mapping (b). The mapping pattern along each line is digitally shaped through the pulse modulation onto a fast electro-optic modulator (EOM). For raster scanning, the y-axis is scanned using a galvanometric scanner. c In SLIDE, each pulse has both a unique wavelength and a unique time (spectro-temporal encoding), leading to sequential, pixelwise illumination. d Fluorescence signals are recorded at high detection bandwidth to also enable fluorescence lifetime imaging (FLIM) at high speed.
Picosecond excitation pulses for rapid lifetime imaging
The fast lifetime imaging capability is possible by the direct analogue recording of the fluorescence lifetime decay and is further enhanced by the higher number of photons generated per pulse by picosecond excitation pulses30,31, enabling single-pulse-per-pixel illumination21. This has a number of advantages over traditional illumination: (i) A single pulse per pixel leads to a very low effective repetition rate per pixel, equal to half of the frame rate (~1 kHz) for bi-directional scanning. This has been shown to decrease photobleaching and thereby increase signal levels32. As a consequence, even when averaging is desired, e.g., for double-exponential lifetime fits, it may be advantageous to average 2000 frames obtained with SLIDE within 1 s, as opposed to pixel averaging by applying 2000 pulses per pixel with a pixel dwell time of 23 µs and then raster-scanning the image in 1 s. (ii) Longer pulses lead to reduced pulse peak powers at the same SNR, thus having the advantage of avoiding higher-than-quadratic effects like photobleaching32,33 and photodamage34,35,36 (which scale at orders >2). Thus, the sample can be imaged for comparatively longer times. Still, the picosecond pulses are shorter than the time scales for intersystem crossing (ISC), so no further excitation from the triplet state should occur within the same pulse. (iii) The longer pulses are generated by digitally synthesized EO modulation, which renders the excitation pattern freely programmable. For example, for optimal detection the pixel rate can be tailored to the fluorescence lifetime of the sample, and warped (anamorphic) spatial illumination that takes advantage of sparsity can be used to achieve optical data compression37. (iv) Longer pulses generate quasi-monochromatic light, and this renders the high-speed line-scanning spectral mapping by diffraction gratings possible. (v) The quasi-monochromatic light is optimally compatible with fibre delivery, since chromatic dispersion and pulse spreading are avoided. The excitation laser presented here is already fully fibre-based, making it compatible with a future implementation into a multiphoton endoscope.
Time-bandwidth product in spectro-temporal laser imaging by diffractive excitation (SLIDE)
Spectro-temporal encoding with high information density places rigorous requirements on the spectro-temporal bandwidth of the light source37. The wavelength sweep time ∆T is equal to the number of pixels n times the time between pulses ∆t_i, governed by the information-interrogation induced latency (here: the fluorescence decay time τ_i). Considering, for example, 256 horizontal (line-scan) pixels and a typical total fluorescence decay time of 10 ns, this time calculates to ∆T = 2.56 µs. Assuming a spectral resolution of ∆λ_i = 100 pm for the diffractive mapping means that the light source needs to sweep over ∆λ = 25.6 nm in ∆T = 2.56 µs.
A unique feature of SLIDE is that the spectro-temporal bandwidth scales quadratically with the number of pixels (in a line scan):
$$\text{Spectro-temporal bandwidth } M_{\mathrm{ST}} = \Delta T \times \Delta\lambda = n^2 \times \Delta\lambda_i \times \Delta t_i$$
A wavelength tuning speed of tens of nm over a few microseconds is beyond the reach of conventional tuneable lasers28,38. Although very fast tuning speeds can be achieved by chirping a supercontinuum pulse source in a dispersive medium, as employed in time stretch techniques, achieving a time span of 2.56 µs is about three orders of magnitude beyond the reach of available dispersive elements (typically in the ns regime). Furthermore, the spreading of energy due to the stretching would result in negligible peak powers and would prevent non-linear excitation. Spectro-temporal stretch via an FDML laser solves this predicament37. The FDML laser provides a combination of large spectral span39, microseconds time span and narrow instantaneous linewidth. This type of laser has mainly been used for fast optical coherence tomography (OCT)28,38,40, semiconductor compressed pulse generation41, non-linear stimulated Raman microscopy42 and recently inertia-free LiDAR37. Its low instantaneous linewidth allows us to achieve a high spatial resolution below 1 µm (see Supplementary Fig. 6), a feat that is not possible with chirped supercontinuum sources.
SLIDE: Experimental setup
The experimental setup of the SLIDE microscope is presented in Fig. 2. The whole system is electronically synchronized by an arbitrary waveform generator (AWG). It drives the FDML-MOPA laser including the pulse pattern, the y-axis galvo mirror and also the trigger and clock of the digitizer card. The modulated pulse length was measured to be 65 ps (Fig. 2g). This pulse width corresponds to a time-bandwidth calculated linewidth of 25 pm, only a factor of two lower than the measured linewidth29. Raster scanning on the sample is achieved through spectro-temporal encoding along the x-axis and mechanical scanning on the y-axis (cf. Fig. 1b). It shall be noted that the used spectral bandwidth of 12 nm (Fig. 2e) lies well within most two-photon absorption bandwidths, so after absorption and excited-state relaxation (Stokes shift), the fluorescence emission is equivalent for all pixels along the whole line (Kasha's rule). The x-axis imaging resolution is given by the spectral resolution of the grating (67 pm, see Methods) and the instantaneous laser linewidth (56 pm, Fig. 2g). In combination with the swept bandwidth of 12 nm, this should allow for 12 nm/67 pm ≈ 180 discernible spots and thus a lateral resolution of 100 µm/180 = 556 nm for the 60× objective (see Methods). We moderately oversample the x-axis by programming 256 pulses along the x-direction. We tested the imaging resolution achieved through spectro-temporal encoding by imaging 100 nm fluorescent beads (Supplementary Fig. 6), which yielded a resolution of 596 nm in the x-direction (spectral scanning direction) and 455 nm in the y-direction for the 60× NA 1.4 objective, i.e., close to the calculated resolution and sufficient for high-resolution imaging of sub-cellular features. We further verified that the fluorescence signal scales quadratically with the illumination power, confirming the two-photon origin of the signal.
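The sketch below puts the bandwidth relation and the resolution estimate from the two preceding paragraphs into a few lines of Python. It is an illustrative calculation using the numbers quoted in the text (256 pixels, a 10 ns decay window, a 100 pm mapping resolution, a 12 nm swept span and a 67 pm grating resolution); the variable names are ours, not part of the SLIDE software.

```python
# Illustrative calculation of the spectro-temporal bandwidth requirement and the
# diffraction-mapped lateral resolution, using the numbers quoted in the text.

n_pixels       = 256        # pixels per line scan
decay_time_s   = 10e-9      # assumed total fluorescence decay window per pixel
spectral_res_m = 100e-12    # assumed spectral resolution of the diffractive mapping (100 pm)

sweep_time_s = n_pixels * decay_time_s        # dT = 2.56 us
sweep_span_m = n_pixels * spectral_res_m      # dLambda = 25.6 nm
m_st         = sweep_time_s * sweep_span_m    # M_ST = n^2 * dLambda_i * dt_i

# Lateral resolution estimate for the implemented system (12 nm span, 67 pm grating resolution)
spots      = 12e-9 / 67e-12                   # ~180 discernible spots
fov_m      = 100e-6                           # field of view with the 60x objective
resolution = fov_m / spots                    # ~556 nm

print(f"sweep time {sweep_time_s*1e6:.2f} us, span {sweep_span_m*1e9:.1f} nm, M_ST {m_st:.3e} s*m")
print(f"resolvable spots ~{spots:.0f}, lateral resolution ~{resolution*1e9:.0f} nm")
```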
The instrument response function (IRF) of the detection was determined by SHG from a urea sample and fitted to 1026 ps (Fig. 2h). The validity of the analogue lifetime detection approach was already confirmed in a previous work30, where we compared the recorded lifetime value of Rhodamine 6G to a literature value in good agreement. This analogue detection enables rapid FLIM microscopy even with only a single excitation pulse per pixel (SP-FLIM)21, as discussed above.
Fig. 2: Experimental setup of the SLIDE system. a The light source is an FDML-MOPA laser28,29 at 1060 nm (±6 nm) and 342 kHz sweep repetition rate. The FDML-MOPA laser was already described in detail in a previous report29. c The wavelength-swept laser is modulated by a digitally programmed pulse pattern which determines the line mapping in SLIDE. Typically, each sweep (2.9 µs duration) is modulated into 256 pulses of short temporal width (65 ps) to achieve a pixel rate of 88 MHz. This leaves enough time for most fluorescence transients to decay. The whole system is synchronized through an arbitrary waveform generator (AWG) which drives the FDML-MOPA laser29, the y-axis galvo and the digitizer card (trigger and sample clock). A high-numerical-aperture (NA) objective focuses the excitation light on the sample and collects the epi-generated fluorescence signal. A dichroic filter directs only non-linear signals onto a fast hybrid photodetector (HPD), connected to a transimpedance amplifier and a fast digitizer (d). e A wavelength span of 12 nm is used for spectral scanning. The EOM achieves a high extinction ratio29 (~30 dB optical). f The recorded temporal pulse width achieved with the EOM is 65 ps. g High-resolution pixel mapping is possible due to the narrow instantaneous linewidth of the pulsed FDML laser, measured to be 56 pm. h The instrument response function (IRF) of the detection was determined by SHG in a urea sample to be 1026 ps FWHM.
SLIDE imaging performance was assessed first by recording the field of view (FOV) on a resolution target (Fig. 3b). The x-axis was scanned with the spectro-temporal encoded scan while the y-axis was scanned with a galvanometric mirror (Fig. 3a). Figure 3c shows a two-photon microscopy image of a Euglena gracilis microalga and was recorded within 497 µs at a 2 kHz frame rate. The time trace (Fig. 3d) of a single line illustrates the high SNR of up to 490 (peak SNR). This is due to the high photon counts achieved per excitation pulse (note that a single photon generates a voltage of 50 mV). By zooming in on two individual pixels, we see a difference in the transient fluorescence decay time (Fig. 3f). To this end, 3 × 3 spatial binning was performed to yield enough fluorescence photons for a mono-exponential fit43. By fitting with the IRF-convolved model, a FLIM image can be generated (Fig. 3e), revealing two different domains within the algae cells. The autofluorescent chloroplasts decay fast and are colour-coded red, while Nile Red, an exogenous fluorophore which was added to highlight lipid generation within the microalgae, has a longer decay time and is colour-coded green. Both imaging modalities, TPM and FLIM, are extracted from the same data, which was acquired within 497 µs for this 256 × 170 pixel image. Further imaging examples can be found in the supplementary material (Supplementary Figs. 1, 2, 4–6).
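For readers who want to reproduce this kind of analysis, the snippet below sketches one way to fit a single (or spatially binned) pixel transient with a mono-exponential decay convolved with a Gaussian IRF, as described above. It uses NumPy/SciPy on synthetic data standing in for a measured transient; the function and variable names are our own and the script is a sketch, not the authors' LabVIEW processing code.

```python
import numpy as np
from scipy.optimize import curve_fit

# Time axis in nanoseconds: a ~3.2 GS/s digitizer gives ~0.31 ns samples;
# here we model a single ~11 ns pixel window.
t = np.arange(0.0, 11.0, 0.31)

def irf_convolved_decay(t, amplitude, tau_ns, t0_ns, irf_fwhm_ns=1.026):
    """Mono-exponential decay convolved with a Gaussian IRF (simple numerical model)."""
    sigma = irf_fwhm_ns / 2.355
    irf = np.exp(-0.5 * ((t - t.mean()) / sigma) ** 2)
    irf /= irf.sum()
    decay = np.where(t >= t0_ns, amplitude * np.exp(-(t - t0_ns) / tau_ns), 0.0)
    return np.convolve(decay, irf, mode="same")

# Synthetic "measured" transient standing in for a binned pixel (true lifetime 3 ns)
rng = np.random.default_rng(0)
measured = irf_convolved_decay(t, 1.0, 3.0, 2.0) + rng.normal(0.0, 0.02, t.size)

# Fit amplitude, lifetime and arrival time; the IRF width stays fixed at the measured 1026 ps
popt, _ = curve_fit(irf_convolved_decay, t, measured, p0=[1.0, 2.0, 1.5])
print(f"fitted lifetime: {popt[1]:.2f} ns")
```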
Fig. 3: SLIDE high-speed dual-modality imaging at 2000 frames per second. a In SLIDE microscopy, the x-axis is scanned by spectro-temporal encoded diffraction scanning and the y-axis by a galvo mirror. b The field of view (FOV) is 100 × 90 µm² using the 60× objective (the resolution target has a 10 µm pitch). c Two-photon intensity image of Euglena gracilis algae cells (256 × 170 pixels). Each line is acquired within 2.9 µs at a high signal-to-noise ratio (SNR) of up to 490 (peak SNR) (cf. d). Zooming into individual pixels (f) reveals different transient fluorescence decay times which can be fitted to extract a fluorescence lifetime value for each pixel. To this end, 3 × 3 pixel binning was applied for higher lifetime fidelity. The lifetimes are colour-coded to achieve a TP-FLIM image (e) from the same dataset. In the TP-FLIM image the rapidly decaying autofluorescent chloroplasts (red) can be distinguished from the Nile Red-stained lipid droplets (green, blue). This unaveraged image was acquired at a 2.012 kHz frame rate (497 µs recording time). The average power on the sample was 30 mW. Lifetimes were determined by deconvolution with the IRF (convolution fit shown in f). Photon counts for the curves in (f) are 82 photons (red) and 246 photons (green). Scale bar represents 10 µm.
SLIDE imaging flow cytometry
To calibrate the speed and accuracy of SLIDE we performed imaging flow cytometry of five different species of fluorescent beads (Fig. 4a, b). The beads range in size from 2 to 15 µm and were thus chosen as examples of typical mammalian cell sizes, although perhaps not comparable in terms of brightness. Figure 4 showcases that SLIDE imaging flow cytometry is capable of obtaining high-quality images even at these high throughput rates of >10,000 objects per second. Figure 4d1–d4 were acquired within 497 µs per image. Since these beads show high signal levels, analysis of the time-domain data revealed that about 1–10 fluorescent photons were achieved per excitation pulse at 15 mW average excitation power. After 9 × 9 spatial binning was applied, photon counts were >100 photons per lifetime curve for reliable mono-exponential fitting43. For this application, tail-fitting was applied, assuming lifetimes significantly longer than half of the IRF. This approach achieves fast fitting and still yields reliable qualitative lifetime contrast in order to distinguish different lifetimes. As can be seen from the exemplary lifetime fits for all five beads shown in Fig. 4c, the lifetime accuracy was still high, and the measured signals enabled high-fidelity lifetime fits.
Fig. 4: SLIDE imaging flow cytometry at 0.2 m/s flow speed. a SLIDE line imaging was performed at 342 kHz line-scanning rate inside a flow channel. Only the direction perpendicular to the flow was scanned via SLIDE, while the flow provides the scanning in the y-axis direction. b Five different calibration fluorescent beads were first imaged at rest to obtain reference lifetime colours. Subfigure b shows averaged SLIDE images (1000-times averaging). The lifetimes range from 2–8 ns (b, c). Afterwards, a mixture of all beads was imaged in flow at flow speeds of 0.2 m/s in order to obtain an even sampling pitch of 550 nm in both dimensions (see Methods). The five different species show different lifetimes (c) and can be clearly distinguished via TP-FLIM (d1–d4) even at these high flow speeds. Sample throughput is estimated to be >10,000 beads per second. Images were generated with 170 vertical lines, so each image d1–d4 was recorded within 497 µs. The images have high SNR and high resolution.
For lifetime accuracy, 9 × 9 pixel binning was applied to yield >100 photons per fluorescence lifetime decay to allow for reliable mono-exponential lifetime fits43. The pixel rate was 88 MHz, i.e., a single excitation pulse per pixel without averaging. The power used was 15 mW; scale bars represent 10 µm.
As a biological sample we imaged Euglena gracilis cells in SLIDE imaging flow cytometry mode, where autofluorescence from chloroplasts revealed a short fluorescence lifetime and Nile Red-stained lipids show a longer lifetime (see also Fig. 3). Figure 5 presents four different snapshots of Euglena gracilis microalgae in flow, where each image pair (two-photon and two-photon FLIM images) was acquired within 1 ms. The TPM images already show the high resolution, which reveals sub-cellular morphology. Yet, through the FLIM images it is possible to discern chloroplasts from lipids through their fluorescence lifetime. The images have very high resolution and high SNR and, even though they were obtained at very high speeds in flow, they have quality comparable to regular one-photon fluorescence microscopy44. Excitation power was 30 mW on the sample and 5 × 5 pixel spatial binning was applied to ensure >100 photons per lifetime curve. Fluorescence lifetime can aid in discerning lipids as it is independent of concentration and signal height. This high-speed lipid-content screening by SLIDE FLIM cytometry may help in the purification of high-lipid-content microalgae for efficient biofuel production44.
Fig. 5: SLIDE lifetime imaging flow cytometry of Euglena gracilis algae cells at 0.2 m/s flow speed. The Nile Red-stained lipids can be clearly distinguished from the autofluorescent chloroplasts by their longer fluorescence lifetime (~5 ns and ~1 ns, respectively, cf. Fig. 3), even at these high flow rates. High subcellular resolution is achieved, and morphological structure as well as molecular information is recorded by SLIDE imaging flow cytometry. The displayed images have 256 × 342 pixels resolution, which corresponds to a recording time of only 1 ms per image pair (TPM left, SP-FLIM21 right). In the FLIM images a 5 × 5 spatial binning was applied to ensure >100 photons per fluorescence lifetime decay. The power used was 30 mW; scale bars represent 10 µm.
It shall be noted that the high speeds achieved by SLIDE require bright samples in order to detect many fluorescence photons per excitation pulse. In fact, the high speed of 88 MHz pixel rate was achieved for two-photon fluorescence imaging, yet for FLIM imaging pixel binning was applied to achieve the necessary photon numbers of >100 photons per pixel43. This blurs the lifetime (colour) resolution, without compromising the morphological resolution, which originates from the fluorescence intensity images. However, not all samples provide sufficient fluorophores in the focal volume, such as genetically encoded fluorescent proteins. We imaged genetically encoded tdTomato fluorescent proteins in ex vivo mouse brain to test whether a kilohertz frame rate could be achieved in imaging neuronal activity. However, even though we were able to obtain morphological images at a 1 kHz rate (see Supplementary Figs. 1, 2, 4), we found that only single fluorescent proteins were present in the focal volume and the signal was saturated, as increasing the laser power did not yield a quadratic signal increase.
This does not permit recording neuronal ensemble activity at the kHz speed of the SLIDE system, as both fluorescence intensity changes and fluorescence lifetime changes require tens to hundreds of photons, thus sacrificing speed as a result of averaging. Therefore, biochemically engineered fast and bright fluorescent proteins are needed, which can also be expressed at high abundance without interfering with cellular behaviour, in order to enable kHz frame-rate imaging of neuronal activity for SLIDE or any other fast technique in the future.
It is noted that SLIDE microscopy employs higher average powers (~10–100 mW) compared to conventional two-photon microscopes (1–30 mW). However, we did not notice any sample damage. The higher powers are due to the fast acquisition rates obtained by SLIDE together with the relatively longer pulse length. It shall be noted that the two-photon excitation rate per pulse scales as p_peak² × t_pulse30,45, so even with longer pulses similar TPM signal levels can be achieved if the duty cycle of the pulses is the same30,45. In SLIDE, the repetition rate is typically 88 MHz and thus duty cycles are normally higher than in femtosecond TPM systems (i.e. lower peak powers in SLIDE). In principle, however, the same peak powers and thus the same excitation rates can easily be achieved with SLIDE by simply digitally programming the required pulse repetition rate. This pulse-on-demand feature of SLIDE can be used to trade speed for sensitivity when imaging dim fluorophores, i.e., fluorophores with low two-photon absorption cross sections. Interestingly, the excitation rates can also be scaled up by employing shorter pulses. Currently, the electronic pulse generation of 65 ps is the practical limiting factor, as the EOM bandwidth permits much shorter pulses. Furthermore, dispersion-based compression could be harnessed to achieve shorter pulses in the single-digit picosecond range41,46. To this end, a large time-bandwidth product is required, which, however, is achievable with FDML lasers37. On the other hand, SLIDE profits from the longer pulses, since to increase photon numbers per single pulse it is advantageous to increase the pulse length rather than the peak power in order to avoid supra-quadratic scaling effects like photobleaching32,33 and photodamage35,45. This advantage of longer pulses was already reported before30,35,45,47 and was also discussed above. Another factor which reduces photodamage is the longer wavelength of 1060 nm employed here48. Lastly, even though pulse energies become larger with longer pulses, photothermal effects on the sample are negligible in SLIDE microscopy as they are insignificantly small49 and dissipate in between frame scans. The fast frame scans also lead to a low effective repetition rate per pixel of only 1 kHz (i.e. half the bi-directional frame rate) in SLIDE. These low repetition rates per pixel were reported to lead to advantageous relaxation of even long-lived triplet states32, which can significantly increase photon numbers and thus further helps to speed up FLIM imaging rates. The SLIDE system presented here has an excitation wavelength around 1060 nm; however, future laser developments will target excitation wavelengths at 780 nm for autofluorescence applications4,5 and 940 nm for GCaMP-based imaging13. Regarding sensitivity, Figs. 3 and 4 also show that SLIDE has sensitivity down to the single-molecule level, so even samples where only a single fluorescent emitter is excited in the focal volume can be successfully imaged with SLIDE.
In this sparse signal case, required photon counts can be reached either by spatial binning or, in applications where high spatial resolution FLIM is required5, by phase-locked detection and averaging. In SLIDE, frame averaging instead of pixel averaging can be performed, leading to low effective pulse repetition rate per pixel which can help increase signal levels6 (see above). The analogue detection for FLIM is furthermore compatible with existing two photon microscopes and can significantly increase FLIM imaging speeds19,20,21,23. In conclusion, in this manuscript we presented the concept of high-speed SLIDE microscopy along with the experimental implementation and application in Two-Photon imaging flow cytometry with two imaging modalities, TPM and FLIM. High speeds of 342 kHz line scanning rates and 88 MHz pixel rates were presented. We presented high quality fluorescence lifetime imaging flow cytometry at very high speeds (>10,000 events per second). We believe that this high throughput and multi-modality enabled by SLIDE microscopy can lead to new insights into rapid biological processes and detection of rare events in applications like liquid biopsy or rare circulating tumour cell detection50. Further, the fiber-based setup of the SLIDE microscopy system may enable an endoscopic application to overcome the relatively shallow penetration depth of optical microscopy. Wavelength-to-space mapping Upon exiting the single-mode fibre, the light was collimated using an f = 37 mm lens (Thorlabs collimator F810APC-1064) followed by a beam-expander (f = 100 mm and f = 150 mm; Thorlabs LA1509-C and LA1433-C). This results in a beam diameter of 11.5 mm filling the 60× microscope objective aperture. The grating (Thorlabs GR25-1210) was positioned at a 30° angle (close to blaze angle), such that the first order was reflected at almost the incident direction in order to minimize ellipticity of the first-order diffraction beam. At 1200 lines/mm the grating only produced a 0 and +1 diffraction order and the first order power was maximized by adjusting the polarization on a polarization control paddle. The blazed grating ensured >80% power in the first order. The grating resolution is calculated to 67 pm. This fits well to the instantaneous linewidth of the FDML, which was measured for a single pulse to be 56 pm (cf. Fig. 2). Microscopy setup Two lenses were used to relay image the beams onto a galvanometric mirror for y-axis scanning. The galvo mirror (Cambridge Technology CT6215H) was driven synchronously, producing 170 lines at 2.012 kHz. A high NA, oil immersion microscope objective was used (Nikon Plan Apo 60× NA 1.4 oil) or a 40× water immersion objective (Nikon N40×-NIR - 40× Nikon CFI APO NIR Objective, 0.80 NA, 3.5 mm WD). The field-of-view (FOV) was determined by inserting a resolution target and recording the reflected excitation light on a CCD camera installed in the microscope, which was sensitive to the 1064 nm excitation light (cf. Fig. 3). A dichroic mirror (Thorlabs DMSP950R) in combination with an additional short-pass optical filter (Semrock FF01-750) transmits the Epi-generated signals to a hybrid photodetector (HPD, Hamamatsu R10467U-40) with high quantum efficiency (45%). The high time resolution of the HPD in combination with a fast digitizer (up to 4GS/s) leads to a fast IRF of only 1026 ps, measured by detecting the instantaneous signal of SHG in urea crystals (cf. Fig. 2). 
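The quoted 67 pm grating resolution can be checked with a short calculation from the Methods parameters (1200 lines/mm, 30° incidence, 11.5 mm beam, first diffraction order). The sketch below is illustrative: the 1/cos projection of the beam onto the grating and the use of the classical resolving-power formula R = m·N are our assumptions about the geometry, and the variable names are our own.

```python
import numpy as np

# Rough estimate of the grating-limited spectral resolution and the resulting
# number of resolvable spots across the 12 nm swept span.

wavelength_m   = 1060e-9
beam_diam_m    = 11.5e-3
groove_density = 1200e3          # lines per metre (1200 lines/mm)
incidence_rad  = np.deg2rad(30)  # grating angle quoted in Methods

illuminated_grooves = beam_diam_m * groove_density / np.cos(incidence_rad)
resolving_power     = 1 * illuminated_grooves            # first diffraction order
spectral_res_m      = wavelength_m / resolving_power     # ~67 pm, as quoted

swept_span_m = 12e-9
spots        = swept_span_m / spectral_res_m             # ~180 resolvable spots
print(f"spectral resolution ~{spectral_res_m*1e12:.0f} pm, resolvable spots ~{spots:.0f}")
```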
Digitally synthesized waveforms
The whole system is driven by an arbitrary waveform generator (AWG, Tektronix AWG7052). This AWG provides all digitally synthesized driving waveforms, driving the FDML laser (Fabry-Pérot filter waveform and 50% modulation of the SOA for buffering) and the galvo mirror, and also generates an external sample clock signal for the digitizer. The waveforms are digitally programmed and enable flexibility in the number of pulses per sweep, pulse width, pulse shape and pulse pattern, and thus also the image sampling density. As digitizers, either an oscilloscope (Tektronix DPO71604B) at 3.125 GSamples/s or a streaming ADC card (Innovative Integration Andale X6GSPS and Alazartech ATS9373) with a synchronously driven sample clock at 3196 MHz or 3940 MHz, respectively, were employed. The external sample clock was employed such that the data acquisition runs synchronously with the FDML laser and the pulse modulation, which ensures sample-accurate fitting. In order to acquire large data sets, a streaming ADC in combination with a RAID-SSD array was employed to store the data.
In the flow cytometry recording, the flow rate was set by two fundamental properties, namely the fluorescence lifetime and the imaging diffraction limit. The lifetime limits the repetition rate to ~100 MHz, while the diffraction limit is sampled at ~380 nm. Consequently, we employed an 88 MHz repetition rate at 256 pulses per 2.92 µs line scan and a 100 µm FOV. The flow rate was likewise set to sample each line at 380 nm, i.e., 380 nm/2.92 µs = 0.13 m/s. The scale bars in the flow cytometry images were generated using the known 10 µm size of the red-species bead to calibrate the actual flow speed. The red bead was sampled with 18 lines, corresponding to a line spacing of 556 nm. Using the line scan rate of 342 kHz, this calculates to a flow speed of ~0.2 m/s.
For precise measurements, a deconvolution with the IRF was conducted in order to extract the fluorescence lifetimes (e.g. in Fig. 3 and Supplementary Fig. 4). However, this is time-consuming, so for faster processing and qualitative results a tail-fitting algorithm was used. Often, different species only need to be discerned, so a qualitative value is sufficient. The presence of multi-exponential behaviour was tested by checking for kinks in the slope of the logarithmic plot or checking for deviations between the mono-exponential fit and the measurement data (see e.g. Supplementary Fig. 4, where the mono-exponential fit yields a reliable fit). Photon numbers were calculated by dividing the area under the curve by the area of a single-photon event. For all images, the data was processed and images created in LabVIEW. The 2P-FLIM images were generated as HSL images, where hue was given by the lifetime values, lightness by the integrated TPEF signal and saturation always set to maximum, i.e., the same for all images. For the TPEF images, the "Red Hot" or "Royal" look-up tables were applied in ImageJ. The plots were generated in GNUPlot and the figures produced using Inkscape. The pollen grain samples were ordered from Carolina (B690 slide). The fluorescent beads were ordered from ThermoFisher Scientific (# F8825, F8839, F8841, F8843, and F21012).
Experimental model systems: Euglena cell culture and fermentation
The E. gracilis cells used in the study are the Euglena gracilis Z (NIES-48) strain procured from the Microbial Culture Collection at the National Institute for Environmental Studies (NIES), Japan. E.
gracilis were cultured heterotrophically in 500 mL flasks using Koren-Hutner (KH) medium at a pH of 3.551. The cell cultures were maintained at 23 °C with a shaking rate of 120 strokes/min under continuous illumination of 100 μmol photons m−2 s−1. Cells were subjected to anaerobic fermentation to induce lipid accumulation. The fermentation was performed on cells in stationary phase by bubbling with nitrogen gas and incubating the flasks in the dark for three days. Nile red staining of intracellular lipid droplets The Nile red stock was prepared by dissolving original dye powder (Sigma) into 4 mL dimethyl sulfoxide (DMSO) to achieve a concentration of 15.9 mg/mL (50 mM). The stain stock solution was diluted 1,000 times with distilled water before use. The E. gracilis cells in the culture medium were washed with distilled water and resuspended in distilled water with a final concentration of 2 × 106 cells/mL. We mixed 15.9 μg/mL of nile red solution with E. gracilis cell suspension solution at a volume ratio of 1:1, which was followed by gentle vibration and incubation in the dark for 10 min. The final concentration of Nile red and E. gracilis cells were 7.95 μg/mL and 1 × 106 cells/mL, respectively. The E. gracilis cells were washed three times with distilled water and centrifugation (2000 × g, 1 min). The cells were resuspended in distilled water and protected from light prior to imaging. Experimental model systems: mouse brain imaging All experiments followed the U.S. National Institutes of Health guidelines for animal research, under an animal use protocol (ARC #2007-035) approved by the Chancellor's Animal Research Committee and Office for Animal Research Oversight at the University of California, Los Angeles. Experiments in Supplementary materials used FVB.129P2 wild-type mice (JAX line 004828) injected with AAV1-pCAG-tdTomato (Addgene plasmid # 59462) into the right primary visual cortex. For the viral injections, mice were anaesthetized with isoflurane (5% induction and 1.5% maintenance) and placed on a stereotaxic surgery frame. Next, a small burr hole was drilled over the right primary visual cortex and ~50 nL of adeno associated virus (AAV) was injected 100–200 microns into the cortex. Following injections, the exposed skull was covered with dental cement. The mice were returned to their home cages and two weeks after viral injection, mice expressing TdTomato were perfused intracardially with 4% paraformaldehyde in phosphate buffer and their brains were extracted and glued on a petri dish, which was filled with PBS buffer for imaging on the inverted microscope. Neurons expressing TdTomato were imaged in Layer 2/3 at a depth of ~150–250 μm below the dura using a long working range 40× microscope objective (Nikon N40×-NIR - 40× Nikon CFI APO NIR Objective, 0.80 NA, 3.5 mm WD). The data that support the findings of this study are available from the corresponding author upon reasonable request. Denk, W., Strickler, J. & Webb, W. Two-photon laser scanning fluorescence microscopy. Science 248, 73–76 (1990). Gong, Y. et al. High-speed recording of neural spikes in awake mice and flies with a fluorescent voltage sensor. Science 350, 1361–1366 (2015). Lakowicz, J. R., Szmacinski, H., Nowaczyk, K., Berndt, K. W. & Johnson, M. Fluorescence lifetime imaging. Anal. Biochem. 202, 316–330 (1992). Blacker, T. S. et al. Separating NADH and NADPH fluorescence in live cells and tissues using FLIM. Nat. Commun. 5, 3936 (2014). Evers, M. et al. 
Enhanced quantification of metabolic activity for individual adipocytes by label-free FLIM. Sci. Rep. 8, 8757 (2018). Chen, X., Leischner, U., Rochefort, N. L., Nelken, I. & Konnerth, A. Functional mapping of single spines in cortical neurons in vivo. Nature 475, 501–505 (2011). Kazemipour, A. et al. Kilohertz frame-rate two-photon tomography. Nat. Methods 16, 778–786 (2019). Wu, J. et al. Kilohertz in vivo imaging of neural activity. bioRxiv, 543058 (2019). Lechleiter, J. D., Lin, D.-T. & Sieneart, I. Multi-photon laser scanning microscopy using an acoustic optical deflector. Biophysical J. 83, 2292–2299 (2002). Article ADS CAS Google Scholar Kim, K. H., Buehler, C. & So, P. T. C. High-speed, two-photon scanning microscope. Appl. Opt. 38, 6004–6009 (1999). Bewersdorf, J., Pick, R. & Hell, S. W. Multifocal multiphoton microscopy. Opt. Lett. 23, 655–657 (1998). Kim, K. H. et al. Multifocal multiphoton microscopy based on multianode photomultiplier tubes. Opt. Express 15, 11658–11678 (2007). Cheng, A., Goncalves, J. T., Golshani, P., Arisaka, K. & Portera-Cailliau, C. Simultaneous two-photon calcium imaging at different depths with spatiotemporal multiplexing. Nat. Methods 8, 139–142 (2011). Liu, X. et al. Fast fluorescence lifetime imaging techniques: a review on challenge and development. J. Innovative Optical Health Sci. 12, 1930003 (2019). Poland, S. P. et al. A high speed multifocal multiphoton fluorescence lifetime imaging microscope for live-cell FRET imaging. Biomed. Opt. Express 6, 277–296 (2015). Agronskaia, A., Tertoolen, L. & Gerritsen, H. Fast fluorescence lifetime imaging of calcium in living cells. J. Biomed. Opt. 9, 1230–1237 (2004). Krstajić, N. et al. 0.5 billion events per second time correlated single photon counting using CMOS SPAD arrays. Opt. Lett. 40, 4305–4308 (2015). Raspe, M. et al. siFLIM: single-image frequency-domain FLIM provides fast and photon-efficient lifetime data. Nat. Methods 13, 501–504 (2016). Bowman, A. J., Klopfer, B. B., Juffmann, T. & Kasevich, M. A. Electro-optic imaging enables efficient wide-field fluorescence lifetime microscopy. Nat. Commun. 10, 4561 (2019). Giacomelli, M. G., Sheikine, Y., Vardeh, H., Connolly, J. L. & Fujimoto, J. G. Rapid imaging of surgical breast excisions using direct temporal sampling two photon fluorescent lifetime imaging. Biomed. Opt. Express 6, 4317–4325 (2015). Eibl, M. et al. Single pulse two photon fluorescence lifetime imaging (SP-FLIM) with MHz pixel rate. Biomed. Opt. Express 8, 3132–3142 (2017). Ryu, J. et al. Real-time visualization of two-photon fluorescence lifetime imaging microscopy using a wavelength-tunable femtosecond pulsed laser. Biomed. Opt. Express 9, 3449–3463 (2018). Dow, X. Y., Sullivan, S. Z., Muir, R. D. & Simpson, G. J. Video-rate two-photon excited fluorescence lifetime imaging system with interleaved digitization. Opt. Lett. 40, 3296–3299 (2015). Boudoux, C. et al. Rapid wavelength-swept spectrally encoded confocal microscopy. Opt. Express 13, 8214–8221 (2005). Goda, K., Tsia, K. K. & Jalali, B. Serial time-encoded amplified imaging for real-time observation of fast dynamic phenomena. Nature 458, 1145–1149 (2009). Chen, C. L. et al. Deep learning in label-free cell classification. Sci. Rep. 6, 21471 (2016). Bhushan, A. S., Coppinger, F. & Jalali, B. Time-stretched analogue-to-digital conversion. Electron. Lett. 34, 839–841 (1998). Huber, R., Wojtkowski, M. & Fujimoto, J. G. Fourier domain mode locking (FDML): a new laser operating regime and applications for optical coherence tomography. 
Opt. Express 14, 3225–3237 (2006). Karpf, S. & Jalali, B. Fourier-domain mode-locked laser combined with a master-oscillator power amplifier architecture. Opt. Lett. 44, 1952–1955 (2019). Karpf, S. et al. Two-photon microscopy using fiber-based nanosecond excitation. Biomed. Opt. Express 7, 2432–2440 (2016). Yokoyama, H. et al. Two-photon bioimaging with picosecond optical pulses from a semiconductor laser. Opt. Express 14, 3467–3471 (2006). Donnert, G., Eggeling, C. & Hell, S. W. Major signal increase in fluorescence microscopy through dark-state relaxation. Nat. Methods 4, 81–86 (2007). Patterson, G. H. & Piston, D. W. Photobleaching in two-photon excitation microscopy. Biophysical J. 78, 2159–2162 (2000). Débarre, D., Olivier, N., Supatto, W. & Beaurepaire, E. Mitigating phototoxicity during multiphoton microscopy of live Drosophila embryos in the 1.0–1.2 µm wavelength range. PLoS ONE 9, e104250 (2014). Hopt, A. & Neher, E. Highly nonlinear photodamage in two-photon fluorescence microscopy. Biophysical J. 80, 2029–2036 (2001). König, K., Becker, T. W., Fischer, P., Riemann, I. & Halbhuber, K. J. Pulse-length dependence of cellular response to intense near-infrared laser pulses in multiphoton microscopes. Opt. Lett. 24, 113–115 (1999). Jiang, Y., Karpf, S. & Jalali, B. Time-stretch LiDAR as a spectrally scanned time-of-flight ranging camera. Nat. Photonics 14, 14–18 (2020). Klein, T. & Huber, R. High-speed OCT light sources and systems [Invited]. Biomed. Opt. Express 8, 828–859 (2017). Kolb, J. P., Pfeiffer, T., Eibl, M., Hakert, H. & Huber, R. High-resolution retinal swept source optical coherence tomography with an ultra-wideband Fourier-domain mode-locked laser at MHz A-scan rates. Biomed. Opt. Express 9, 120–130 (2018). Wieser, W. et al. High definition live 3D-OCT in vivo: design and evaluation of a 4D OCT engine with 1 GVoxel/s. Biomed. Opt. Express 5, 2963–2977 (2014). Eigenwillig, C. M. et al. Picosecond pulses from wavelength-swept continuous-wave Fourier domain mode-locked lasers. Nat. Commun. 4, 1848 (2013). Karpf, S., Eibl, M., Wieser, W., Klein, T. & Huber, R. A time-encoded technique for fibre-based hyperspectral broadband stimulated Raman microscopy. Nat. Commun. 6, 6784 (2015). Köllner, M. & Wolfrum, J. How many photons are necessary for fluorescence-lifetime measurements? Chem. Phys. Lett. 200, 199–204 (1992). Article ADS Google Scholar Yamada, K. et al. Efficient selective breeding of live oil-rich Euglena gracilis with fluorescence-activated cell sorting. Sci. Rep. 6, 26327 (2016). Koester, H. J., Baur, D., Uhl, R. & Hell, S. W. Ca2+ fluorescence imaging with pico- and femtosecond two-photon excitation: signal and photodamage. Biophysical J. 77, 2226–2236 (1999). Obrzud, E., Lecomte, S. & Herr, T. Temporal solitons in microresonators driven by optical pulses. Nat. Photonics 11, 600–607 (2017). Bewersdorf, H. Picosecond pulsed two-photon imaging with repetition rates of 200 and 400 MHz. J. Microsc. 191, 28–38 (1998). Chen, S.-Y. et al. In vivo virtual biopsy of human skin by using noninvasive higher harmonic generation microscopy. Sel. Top. Quantum Electron., IEEE J. 16, 478–492 (2010). Schönle, A. & Hell, S. W. Heating by absorption in the focus of an objective lens. Opt. Lett. 23, 325–327 (1998). He, W., Wang, H., Hartmann, L. C., Cheng, J.-X. & Low, P. S. In vivo quantitation of rare circulating tumor cells by multiphoton intravital flow cytometry. Proc. Natl Acad. Sci. USA 104, 11760–11765 (2007). Koren, L. E. 
High-yield media for photosynthesizing Euglena gracilis Z. J. Protozool. 14, 17 (1967). This research was sponsored in part by the National Institutes of Health grants 5R21GM107924-03 to B.J. and C.P.-C. and R21EB019645 to B.J., Cal-BRAIN grant 350050 (California Blueprint for Research to Advance Innovations in Neuroscience) to B.J. and C.P.-C., as well as grant W81XWH-14-1-0433 (USAMRMC, DOD), and NIH NICHD grant R01 HD054453 to C.P.-C. and by ImPACT Program of the Council of Science, Technology and Innovation (Cabinet office, Government of Japan) to C.R. and D.D.C. Sebastian Karpf gratefully acknowledges a postdoctoral research fellowship from the German Research Foundation (DFG, project KA 4354/1-1), the Juniorprofessorship with financial support by the state of Schleswig-Holstein (Excellence chair program by the universities Kiel and Luebeck) and funding from the Deutsche Forschungsgemeinschaft (DFG, German Research Foundation) under Germany's Excellence Strategy – EXC 2167-390884018. Department of Electrical Engineering and Computational Science, University of California, Los Angeles (UCLA), Los Angeles, CA-90095, USA Sebastian Karpf & Bahram Jalali Institute of Biomedical Optics (BMO), University of Luebeck, 23562, Luebeck, Germany Sebastian Karpf Department of Bioengineering, University of California, Los Angeles (UCLA), Los Angeles, CA-90095, USA Carson T. Riche, Dino Di Carlo & Bahram Jalali Department of Neurology, University of California, Los Angeles (UCLA), Los Angeles, CA-90095, USA Anubhuti Goel, William A. Zeiger, Anand Suresh & Carlos Portera-Cailliau Carson T. Riche Dino Di Carlo Anubhuti Goel William A. Zeiger Anand Suresh Carlos Portera-Cailliau Bahram Jalali S.K. conceived the idea, built the system and conducted the experiments. C.R. and D.D.C. provided the Euglena gracilis cells. C.R. and S.K. conducted the Euglena flow measurements. A.G, W.Z., A.S. and C.P.C. provided the mouse brain sample. C.P.C. and S.K. conducted the mouse brain imaging measurement. S.K. and B.J. conceived the digital image processing capabilities. B.J. and S.K. performed system analysis and wrote the manuscript. B.J. supervised the research. Correspondence to Sebastian Karpf. Peer review information Nature Communications thanks the anonymous reviewer(s) for their contribution to the peer review of this work. Peer reviewer reports are available. Publisher's note Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations. Peer Review File Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons license, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons license and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this license, visit http://creativecommons.org/licenses/by/4.0/. Karpf, S., Riche, C.T., Di Carlo, D. et al. Spectro-temporal encoded multiphoton microscopy and fluorescence lifetime imaging at kilohertz frame-rates. 
Nat Commun 11, 2062 (2020). https://doi.org/10.1038/s41467-020-15618-w
Comparison of four algorithms on establishing continuous reference intervals for pediatric analytes with age-dependent trend Kun Li1,2, Lixin Hu3, Yaguang Peng2, Ruohua Yan2, Qiliang Li3, Xiaoxia Peng1,2, Wenqi Song3 & Xin Ni2,4 Continuous reference intervals (RIs) allow for more precise consideration of the dynamic changes of physiological development, which can provide new strategies for the presentation of laboratory test results. Our study aimed to establish continuous RIs using four different simulation methods so that the applicability of different methods could be further understood. The data of alkaline phosphatase (ALP) and serum creatinine (Cr) were obtained from the Pediatric Reference Interval in China study (PRINCE), in which healthy children aged 0–19 years were recruited. The improved non-parametric method, the radial smoothing method, the General Additive Model for Location Scale and Shape (GAMLSS), and Lambda-Median-Sigma (LMS) were used to develop continuous RIs. The accuracy and goodness of fit of the continuous RIs were evaluated based on the out of range (OOR) and Akaike Information Criterion (AIC) results. Samples from 11,517 and 11,544 participants were used to estimate the continuous RIs of ALP and Cr, respectively. Time frames were partitioned to fulfill the following two criteria: sample size = 120 in each subgroup and mean difference = 2 between adjacent time frames. Cubic spline or penalized spline was used for curve smoothing. The RIs estimated by the four methods approximately overlapped. However, more obvious edge effects were shown in the curves fit by the non-parametric methods than the semi-parametric method, which may be attributed to insufficient sample size. The OOR values of all four methods were smaller than 10%. All four methods could be used to establish continuous RIs. GAMLSS and LMS are more reliable than the other two methods for dealing with edge effects. Reference intervals (RIs) are commonly used in medicine to interpret laboratory test results and have been traditionally used in clinical practice to aid diagnosis [1]. Pediatric RIs represent the physiological conditions of normal children and adolescents during development [2]. Dynamic anatomical and physiological development accounts for high variability of many biochemical analytes with increasing age, particularly in the first years of life and during puberty [3]. Therefore, the definitions of pediatric RIs should consider special population features, among which age and sex are the most important for children and adolescents [4]. A familiar method to elucidate trends of age dependence of biochemical analytes is to establish RIs for each age partition. The use of discrete RIs for different age groups is well-established in clinical practice and allows easy integration into current laboratory information systems. To improve the accuracy of age partitioning, an age partitioning algorithm for RI estimation was developed in our previous publication [2]. However, it is still difficult for that model to describe analyte concentrations at the margins of age partitioning, especially abrupt changes during relatively narrow age periods, such as a radical decrease of alkaline phosphatase (ALP) during puberty [5]. Further, it may be difficult to obtain suitable age partitioning points for analytes with continuous upward trends, such as serum creatinine (Cr). 
Analogous to other developmental quantities whose relationships with age are routinely analyzed, a continuous description would seem to be more appropriate for laboratory analytes with special age-dependent trends [6]. For example, growth curves were used by the World Health Organization (WHO) to construct child growth standards [7]. The current approaches for establishing continuous RIs can be divided into the non-parametric and semi-parametric method [3, 5, 6, 8, 9]. These different statistical methods of curve simulation could produce different RIs using the same data [1]. Therefore, it is imperative to explore which method is the most appropriate for RI estimation of analytes with various age-dependent trends. To our knowledge, few statistical simulations have been reported to evaluate how well these methods estimate continuous RIs. Our aim in the present study is to compare the accuracy of continuous RIs established using four different curve simulation methods to better understand these methods' applicability. The continuous RIs could facilitate the generation of graphical reports in clinical laboratory settings, which could provide quantitative and dynamic assessments of laboratory test results instead of only absolute values [6]. Data were obtained from the results of the PRINCE study. The eligibility criteria and other detailed information were previously published [10]. In brief, 14,646 healthy children aged 0–19 years were recruited from the northeast (Liaoning Province), north (Beijing Municipality and Hebei Province), northwest (Shaanxi Province), middle (Henan and Hubei provinces), south (Guangdong Province), southwest (Chongqing Municipality and Sichuan Province), and east (Shanghai Municipality and Jiangsu Province) regions of China from January 2017 to August 2018. All participants were confirmed to be eligible based on a questionnaire screening and subsequent physical examination. Considering that the sample size of children aged less than 1 year was limited, we only included healthy children aged 1–19 years. Analyte tests were measured on a Cobas C702 automated biochemistry analyzer (Roche Diagnostics GmbH, Mannherim, Germany) at the Department of Clinical Laboratory Center of Beijing Children's Hospital, which was the central laboratory of the PRINCE study. Detailed information on quality control was described in the published protocol [10]. The ALP and Cr analytes were selected from 13 eligible biochemical markers as typical cases because of their special age-dependent trends in children and adolescents. The study was exempted by the Ethics Committee of Beijing Children's Hospital, affiliated with Capital Medical University, Beijing, China. Data cleaning and management Data cleaning was performed to detect missing values and outliers. Missing values were defined as incomplete information of age, sex, or biochemical analytes. Considering that ALP and Cr are known to vary significantly by age and sex [6], outliers are removed according to sex and age groups (for each 1-year) by Tukey's method [4]. In this method, outliers are removed if they are less than Q1–1.5 × IQR or more than Q3 + 1.5 × IQR, in which Q1 and Q3 are the 25th and 75th percentiles, respectively. IQR is interquartile range, calculated by Q3 − Q1, where the data have a Gaussian distribution. 
Otherwise, the data should be transformed by the Box-Cox method, expressed by the following formula: $$y=\begin{cases}\left((x+c)^{\lambda}-1\right)/\lambda & \text{for } \lambda \ne 0\\ \ln\left(x+c\right) & \text{for } \lambda = 0\end{cases}$$ where x is the original value, y is the value after Box-Cox transformation, and λ and c are parameters calculated by maximum likelihood estimation. Statistical simulations All statistical analysis was performed using SAS 9.4 and R 3.5.1. The lower limit and upper limit values of RIs were calculated as the 2.5 and 97.5% quantiles of the corresponding populations, respectively. Four methods were implemented in this study: the improved non-parametric method, the radial smoothing method (RS), the General Additive Model for Location Scale and Shape method (GAMLSS), and the Lambda-Median-Sigma method (LMS) [4, 8, 11, 12, 13]. Both the improved non-parametric method and RS are considered non-parametric methods because non-parametric curve estimation methods are used during the RI establishment procedure. Although GAMLSS and LMS use smoothing methods in model terms, they are deemed semi-parametric methods because the response variable requires the assumption of a parametric distribution. In past decades, several studies have used spline or piecewise polynomial methods to establish continuous RIs of laboratory analytes for interpretation of the age dynamics of children's development [5, 6, 14]. These studies' methods can be divided into three steps. First, the whole dataset was split into several age groups; then, discrete RIs were calculated for each age group; finally, the RIs' limit values for each age group were fit using appropriate smoothing methods, such as spline or polynomial methods [4, 6]. Arzideh et al. optimized the age group classification procedure [3, 15]. They split the whole dataset into overlapping time frames, which allows more precise consideration of rapid changes in analyte concentrations with increasing age. We call Arzideh's method the improved non-parametric method in the present study and used the bootstrap method to calculate the reference limits for each time frame [3, 16]. To find the most suitable smoothing method for the improved non-parametric method, cubic spline, penalized spline, and fractional polynomial smoothing were performed, and the goodness of fit was evaluated by Akaike information criterion (AIC) values calculated under each model [17]. The formula for AIC is as follows. $$AIC=-2\ln\hat{\theta}+2k$$ where \(\hat{\theta}\) is the maximized likelihood function and k is the number of effective degrees of freedom used in the model. The model with the smallest AIC value is considered to have the best fit. The LMS model contains three parameters: skewness (L) accounts for deviation from the normal distribution after Box-Cox transformation; the median (M) models the outcome variable depending on one explanatory variable; and the coefficient of variation (S) accounts for variation of data points around the mean and adjusts for non-uniform dispersion [9]. GAMLSS is an extension of the LMS method, which was introduced by Rigby and Stasinopoulos as a way of overcoming some of the limitations associated with generalized linear models and generalized additive models [11, 18]. In contrast with LMS, GAMLSS can accommodate more than one covariate and distribution [11, 19].
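To make the semi-parametric fits concrete, here is a minimal R sketch of the kind of GAMLSS and LMS models described above. It is not the authors' code: it assumes a data frame `dat` with columns `age` and `value` for one sex, and uses the gamlss package named in the text, with pb() penalized splines, the BCT/BCPE families for GAMLSS, and the BCCG (Box-Cox Cole and Green) family as the LMS special case.

```r
# Minimal sketch (not the authors' code): GAMLSS and LMS fits for one sex.
# Assumes a data frame `dat` with columns `age` and `value`.
library(gamlss)

# GAMLSS with a Box-Cox t response and penalized-spline terms (pb) for every
# distribution parameter; use family = BCPE or cs() terms to compare the
# alternatives discussed in the text.
fit_bct <- gamlss(value ~ pb(age),
                  sigma.formula = ~ pb(age),
                  nu.formula    = ~ pb(age),
                  tau.formula   = ~ pb(age),
                  family = BCT, data = dat)

# The LMS method corresponds to the BCCG family, with smooth terms for the
# median, coefficient of variation and skewness only.
fit_lms <- gamlss(value ~ pb(age),
                  sigma.formula = ~ pb(age),
                  nu.formula    = ~ pb(age),
                  family = BCCG, data = dat)

AIC(fit_bct, fit_lms)   # smaller AIC = better fit
wp(fit_bct)             # worm plot for residual diagnostics

# Continuous reference limits: 2.5th and 97.5th centiles on a fine age grid.
ages <- seq(1, 19, by = 0.1)
ri   <- centiles.pred(fit_bct, xname = "age", xvalues = ages,
                      cent = c(2.5, 97.5))
head(ri)
```

Swapping pb() for cs() reproduces the cubic-spline variants compared below, and the AIC() and wp() calls correspond to the AIC and worm-plot checks used for model selection.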
The Box-Cox t and Box-Cox power exponential distributions were compared to select the most appropriate type of GAMLSS model [20, 21]. Worm plots were used to assess the fitting results of additive terms and to judge whether simulation of kurtosis was required [22]. The procedure was implemented in the GAMLSS package (version 5.1–2) of the R statistical software package. In the RS method, various spline functions, such as B-spline and truncated polynomial functions, can be used as the basis function to fit non-parametric curve estimation [8]. Radial bases are sometimes preferred for higher-dimensional problems because of their straightforward extension. Wan XH et al. provided a percentile curve for calculation of arithmetic based on four moments and the Edgeworth-Cornish-Fisher expansion, which was used for some of the present study's simulations [23]. Before statistical simulation, the whole dataset was randomly partitioned into training and test datasets in an 8:2 ratio. The training dataset were used for model fitting and model selection, and the test dataset were used for assessment of the model's predictive power to fit training data, i.e., the out of range (OOR) percentage. Considering sex differences in analyte concentrations, the data were divided by sex before the training and test datasets were generated. The process of data set partitioning and RI calculation was repeated 100 times for both ALP and Cr to reduce the random error caused by running too few statistical simulations. In addition, the Wilcoxon test was used to compare whether the training and test datasets had similar age distributions. When P ≥ 0.05, the results of dataset partitioning were deemed valid, and otherwise, the partitioning procedure was repeated. According to the recommendations of the Clinical and Laboratory Standards Institute, RIs may be considered valid when the OOR value is < 10% [4], considering that we estimated RIs using 95% intervals. Thus, OOR values close to 2.5% for both the upper and lower reference limits were appropriate for method selection. Furthermore, AIC values were calculated to evaluate the different models' goodness of fit under the GAMLSS method. Then, the accuracy and goodness of fit of continuous RIs were compared comprehensively based on the OOR and AIC values. The statistical simulation process is summarized in Supplemental Figure 1. Characteristics of ALP and Cr distributions The entire data cleaning process is shown in Fig. 1. Scatter diagrams show outliers that were removed by Tukey's method (Supplemental Figure 2). After data cleaning, samples from 11,517 and 11,544 participants aged 1–19 years were included to calculate the RIs of ALP and Cr, respectively. The distributions of ALP and Cr by age and sex are shown in Table 1. The probability density plots had the same age distributions between the training and test datasets (Supplemental Figure 3). Data cleaning procedure. ALP, alkaline phosphatase; Cr, serum creatinine Table 1 The distributions of alkaline phosphatase and serum creatinine by age and sex We represented the density of the data points by color chromaticity using the plotSimpleGamlss function in R. The results demonstrated that girls were more concentrated in the adolescent groups than boys (Fig. 2). Additionally, significant age dependence was shown in the trends of ALP and Cr, and the results differed between the two analytes. 
For example, Cr continuously increased with age from 1 to 19 years, whereas a sharp decrease in ALP was observed after puberty (age 12 and 14 years for girls and boys, respectively). Cr showed the same tendency between boys and girls throughout the childhood phase, where boys plateau later than girls after a long period of growth. However, boys' and girls' ALP levels showed a decreasing trend in the first 4 years of life, but the levels then increased until puberty. The age dependent trend of alkaline phosphatase and serum creatinine by sex. a. alkaline phosphatase of boys. b. alkaline phosphatase of girls. c. serum creatinine of boys. d. serum creatinine of girls. ALP, alkaline phosphatase; Cr, serum creatinine. Notes: The center lines are fitted by GAMLSS method, the other curves are probability density functions and the horizontal axis represents the probability density for each age group. The density of the data points is represented by the color chromaticity Simulation of time frames for the improved non-parametric method The balance between the sample size in each time frame and the number of time frames was difficult to maintain. Through a process of statistical simulation, we obtained a figure with changing values of n and m (n = sample size in time frame, m = mean difference between adjacent time frames), shown in Fig. 3. Although the curves obtained under various parameters were similar, we found that when the sample size is small (e.g., n < 60), there were more discrete reference limit values, which may drift, influencing the curve fitting results. Moreover, when the sample size was too large (e.g. n > 300), some details of the curve were lost, especially for ALP at ages 14–16 years, which showed a cliff-like descent. The Clinical and Laboratory Standards Institute (CLSI) recommends a minimum sample size of 120 to establish RIs. Combining CLSI's suggestion with Pavlov's research [16], we finally set the sample size within each time frame to 120. Using an excessive number of time frames increases the arithmetic load. Therefore, the n and m parameters were set as 120 and 2, respectively, for both ALP and Cr. Simulation of time frames with different sample sizes (n) and mean difference between adjacent time frames (m). a. Simulation of time frames with different sample sizes (n), n = sample size in time frame. b. Simulation of mean difference between adjacent time frames, m = mean difference between adjacent time frames. ALP, alkaline phosphatase. Notes: Lines are fitted by cubic splines Simulation of smoothing methods for the improved non-parametric method The AIC values for three smoothing methods are shown in Supplemental Table 1. Smoothing parameters were selected by internal (i.e., local) maximum likelihood estimation in the R software package [24]. Among all models, the penalized spline method had the smallest AIC value. The continuous RIs fitted by penalized spline, cubic spline, and fraction polynomials are shown in Supplemental Figure 4. The fraction polynomials did not fit well at the end of the curve for ALP, and there was a cross between the upper and lower percentile curves. Furthermore, fluctuation occurred in the smooth curve simulated by the penalized spline method. Therefore, we adjusted the smoothing parameters of the cubic spline and penalized spline methods through visual inspection. 
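A compact R sketch of the improved non-parametric pipeline described in this and the preceding subsections is given below. It again assumes the `dat` data frame, uses a fixed step between overlapping windows rather than the exact n/m partitioning rule above, bootstraps the 2.5th/97.5th percentiles within each time frame, and smooths the frame-wise limits with base R's smooth.spline as a stand-in for the cubic/penalized splines discussed.

```r
# Sketch of the improved non-parametric method (assumed data frame `dat` with
# columns `age` and `value`): overlapping windows of ~120 observations,
# bootstrapped percentile limits, then spline smoothing of those limits.
frame_limits <- function(dat, n = 120, step = 30, B = 200) {
  ord    <- dat[order(dat$age), ]
  starts <- seq(1, nrow(ord) - n + 1, by = step)        # overlapping time frames
  frames <- lapply(starts, function(s) {
    win <- ord$value[s:(s + n - 1)]
    bs  <- replicate(B, quantile(sample(win, replace = TRUE),
                                 probs = c(0.025, 0.975)))
    data.frame(age   = mean(ord$age[s:(s + n - 1)]),
               lower = mean(bs[1, ]),
               upper = mean(bs[2, ]))
  })
  do.call(rbind, frames)
}

lims <- frame_limits(dat)

# Smooth the frame-wise limits over age; the effective smoothing parameter can
# then be adjusted by visual inspection, as described above.
lo_fit <- smooth.spline(lims$age, lims$lower)
up_fit <- smooth.spline(lims$age, lims$upper)
ages   <- seq(min(dat$age), max(dat$age), by = 0.1)
ri_np  <- data.frame(age   = ages,
                     lower = predict(lo_fit, ages)$y,
                     upper = predict(up_fit, ages)$y)
```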
Simulations based on GAMLSS and LMS Using the GAMLSS method, the four models were simulated using two distribution types (Box-Cox t and Box-Cox power exponential) and two smoothing methods (cubic spline or penalized spline). GAMLSS models' AIC values are shown in Supplemental Table 2. Compared with the cubic spline smoothing technique, penalized spline fit the data better according to the AIC value, similar to the simulation results of the non-parametric methods. The worm plots of simulations based on the LMS and GAMLSS methods demonstrate that the GAMLSS models fit the data better than LMS, especially for ALP (Supplemental Figure 5). This is because the GAMLSS model is more consistent with the theoretical distribution, in which data points are uniformly distributed on both sides of the center line [22]. Continuous RIs for pediatric ALP and Cr The continuous RIs were estimated using the improved non-parametric method, the RS method, the GAMLSS method, and the LMS method. Figure 4 shows the results of continuous RI estimation for Cr and ALP using the four methods. The RIs estimated by the LMS, GAMLSS, and RS methods approximately overlapped, while the improved non-parametric method seemed better after visual inspection of the smoothing parameters. However, there were slight differences at the ends and peaks of the curves. Large edge effects were found in the curves fit by the RS method: left and right edge effects appeared for Cr and ALP, respectively. Age-specific reference values found for ALP and Cr using the GAMLSS method are presented in Tables 2 and 3. Continuous RIs of serum creatinine and alkaline phosphatase by sex. a. alkaline phosphatase of boys. b. alkaline phosphatase of girls. c. serum creatinine of boys. d. serum creatinine of girls. GAMLSS, General Additive Model for Location Scale and Shape method; LMS, Lambda-Median-Sigma method; RS, radial smoothing method; ALP, alkaline phosphatase; Cr, serum creatinine Table 2 Age-specific reference values for alkaline phosphatase Table 3 Age-specific reference values for creatinine Figure 5 shows the differences between discrete RIs partitioned by the decision tree technique and continuous RIs calculated by the GAMLSS method. The discrete RIs presented a ladder shape that jumped several times with increasing age. In addition, we added the continuous RIs from the CALIPER study [25]. The curves from CALIPER were smoother than this study's GAMLSS method results, especially for ALP in boys. Further, both the upper and lower reference limits of Cr calculated by CALIPER were slightly lower than those in the present study. Comparing the RIs with CALIPER study. a. alkaline phosphatase of boys. b. alkaline phosphatase of girls. c. serum creatinine of boys. d. serum creatinine of girls. ALP, alkaline phosphatase; Cr, serum creatinine; GAMLSS, General Additive Model for Location Scale and Shape method; CALIPER: Canadian Laboratory Initiative in Pediatric Reference Intervals Verifying RIs by test set The continuous RIs were verified using the test dataset. All OOR values in Table 4 were smaller than 10%. The OOR percentages of the LMS, GAMLSS, and RS methods were much closer to 5%, and both the lower OOR and upper OOR proportions were both close to 2.5%. We also verified the continuous RIs calculated by CALIPER, all OOR values were less than 10%, except for ALP of girls. In addition, we calculated the OOR rates of discrete RIs, which were also close to 5%. 
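The out-of-range check used for this verification can be written in a few lines of R; the sketch below assumes a held-out data frame `test` with columns `age` and `value`, and a table `ri` of continuous limits (columns age, lower, upper) produced by any of the four methods.

```r
# Out-of-range (OOR) rate on a held-out test set.
# Assumed objects: `test` (columns age, value) and `ri` (columns age, lower, upper).
oor_rate <- function(test, ri) {
  lo <- approx(ri$age, ri$lower, xout = test$age)$y   # interpolate limits at each age
  up <- approx(ri$age, ri$upper, xout = test$age)$y
  below <- mean(test$value < lo)
  above <- mean(test$value > up)
  c(lower_OOR = 100 * below,
    upper_OOR = 100 * above,
    total_OOR = 100 * (below + above))
}
oor_rate(test, ri)
# For a nominal 95% interval one expects about 2.5% below and 2.5% above;
# a total OOR below 10% is the validity criterion used in the text.
```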
Moreover, we created a table of OOR values for each year of age to ensure that the RIs accurately represent the relationship between age and analyte concentration (Supplemental Tables 3 and 4), which clearly showed the differences between continuous and discrete RIs. Although this study's sample size may be relatively insufficient, the OOR values of discrete RIs have larger variation in each age group compared with those of continuous RIs, especially near the thresholds of age divisions. Table 4 Out of range (OOR) of different simulation methods verified with test data set Age partitioning is a common issue not only for pediatric RIs but also for other clinical laboratory indexes [2, 3, 26, 27]. However, the use of age portioning methods to establish RIs still has some limitations, as the use of discrete age groups does not sensitively reflect continuous changes in growth and development. This problem is illustrated in Fig. 5. In contrast to the discrete RIs, the continuous RIs allow a precise representation of age and sex-dependent change during growth and development. Therefore, to provide evidence for the applicability of different algorithms to establish continuous RIs, we presented continuous RIs simulated by four methods from infancy to adulthood. The age-dependent trends of ALP and Cr in the present study were consistent with those in previous studies, which represent distinctive age-dependent trends [6, 25]. Different from study reported by Zierk [6], in which data were collected from hospitals, all the reference individuals were healthy children recruited for the PRINCE study. Moreover, there were slight differences in the continuous RIs between the CALIPER study and the present study: the reference limits were slightly higher in the present study, which may be caused by differences in the reference population and inspection instruments. Establishing a reference interval using a non-parametric method is an indirect process: curve fitting is simulated considering only the values of discrete reference limits, rather than including all data. This weakness leads to curve fluctuations, even if it has the best AIC value and appropriate smoothing parameters. Our research indicates that although the model can obtain a better AIC value, the smoothing parameters adjusted by visual inspection better represent the whole dataset and are more suitable in the non-parametric methods. Therefore, the trends of age dependence should be fully understood before establishing continuous RIs using the non-parametric methods. Moreover, it is necessary to adjust the smoothing parameters through visual inspection instead of only relying on software algorithms. We used a robust bootstrap method to estimate the discrete RIs for each time frame in the improved non-parametric method, after considering the accuracy and feasibility of various methods. In our study, not all analyte levels had normal distributions across the 200,000 time frames. Therefore, the data should be transformed to a normal distribution if we use parametric methods to calculate RIs. However, hypotheses testing and data transformation for large datasets depend on programming capabilities and statistical package functions. We tried to use the powertransform function in R to perform the Box-Cox transformation, but there were still some time frames for which the best λ could not be obtained, and those needed to be debugged manually. However, manual debugging would incur an inestimable time cost. 
According Pavlov's research [16], the bootstrap method has relatively high accuracy when the sample size is relatively small, so we chose the bootstrap method for the improved non-parametric procedure. Additionally, LMS and GAMLSS have been widely applied to establish growth curves. They also perform well at establishing continuous RIs for analytes. As presented, the OOR percentages of those two methods were close to 5%. In contrast, the OOR proportions of the non-parametric methods were more than 6% for ALP, which means that the non-parametric methods' RIs are narrower than those of the LMS and GAMLSS methods. Further, both the GAMLSS and LMS methods are simple to implement and adapt to complex age-dependent trends, especially when the age distribution of the analyte's concentration is not fully understood. As a new approach for estimation of age-specific reference percentile curves, the RS method performs well at growth curve establishment [8]. However, it did not generate effective RIs for ALP without data transformation by the Box-Cox method (Supplemental Figure 6). This means that the distribution of data should be approximately normal, especially when the analyte has a more intricate age-dependent trend. As for verification results, all four methods' OOR values were less than 10%, which means that all methods showed good fit for establishing continuous RIs of Cr and ALP. However, edge effects were observed in all of the curves fit by these four methods. Even if the smoothing parameters were adjusted by visual inspection, the drift at the end of the curve was still not improved in the non-parametric methods. This phenomenon was most prominent when the RS method was used. These results could be attributed to the limited sample size of reference individuals. In our study, the number of reference individuals in the 19-year-old age group of boys was less than 100 for both ALP and Cr, which is insufficient compared with the other age groups. In contrast, the WHO Multicenter Growth Reference Study enlarged the birth sample to 1737 to minimize the left edge effect [7]. It is particularly difficult to sample more reference individuals aged less than 1 year. Although we removed the reference individuals aged less than 1 year to lessen the edge effects, the sample size of infants is not sufficient. Therefore, a larger sample size would be needed to establish continuous RIs. In comparison with the non-parametric methods, the LMS and GAMLSS methods have fewer edge effects when sample size is relatively lacking. In addition, LMS and GAMLSS are easy to implement and have high accuracy, which could be factors to recommend them as convenient and accurate methods for clinical establishment of RIs. Other factors besides age and sex, such as height and weight, may also affect analyte levels. In future research to establish RIs, multifactorial analysis could be considered. Further, the opinions of clinicians and laboratory physicians should be taken into consideration during the variable selection process. All of these directions would ultimately lead to huge challenges in terms of model selection and subject recruitment. Moreover, different methods often estimated the upper and lower limits with the least amount of bias [1]. The idea of establishing reference limits with two different methods was previously explored by Horn PS et al. [28]. There is a huge gap between the establishment of RIs and clinical practice. A possible solution is to integrate continuous RIs into laboratory testing platforms. 
The obtained models could be embedded into hospital clinical laboratory testing systems, and the RIs could be obtained from the models according to the information needed. Other quantile curves, such as the 5th, 25th, 50th, and 75th percentiles, can be easily obtained from the model. Therefore, doctors could not only judge whether the individual's laboratory result is abnormal but also provide a graph to present the patient's level compared with continuous RIs. In addition, longitudinal dynamic trends can be determined when individuals have multi-time laboratory results within a certain period (Fig. 6). Compared with a single test, the dynamic trends of some analytes could provide more diagnostic information about changes to individual health status. Moreover, graphical displays of clinical laboratory analytes would provide an improvement in clinical laboratory reporting. The application of continuous reference intervals. The seven curves denote the 97.5th, 90th, 75th, 50th, 20th, 10th and 2.5th percentiles for ALP of girls respectively. Dots show the result of individual laboratory examination. a. The percentile curves for ALP of boys aged 1 to 19 years. b. During hospitalization, patient's ALP continued to increase for several times. c. With the increase of age, patient's ALP decreased. ALP, alkaline phosphatase The concept of continuous RIs is timeless and should become a standard throughout the entire field of laboratory medicine. It is necessary to establish continuous RIs for all ages rather than only focusing on the initial stages of life. When we are limited to the reference population, we cannot make such age divisions. Mørkrid et al. presented an elegant example of this viewpoint [29]. Four statistical methods to estimate continuous RIs for ALP and Cr were simulated and verified. The verification of continuous RIs showed that all four methods could be used to establish continuous RIs of clinical laboratory analytes. The GAMLSS and LMS methods were more reliable than the RS and non-parametric methods, especially when sample size was insufficient. Therefore, the former two can be recommended as convenient and accurate methods for RIs establishment in clinical practice. In addition, the distribution of the data should be approximately normal when using the RS method to establish continuous RIs. The data are not publicly available as they contain information that could compromise research participant privacy. But they are available from the corresponding author on reasonable request. The request to access the raw data treated by deidentification from the PRINCE study must be approved by Academic Committee of PRINCE study. ALP: Cr: Serum creatinine RIs: Reference intervals OOR: PRINCE: Pediatric Reference Interval in China study CALIPER: Canadian Laboratory Initiative in Pediatric Reference Intervals GAMLSS: The General Additive Model for Location Scale and Shape method LMS: The Lambda-Median-Sigma method The radial smoothing method BCT: The Box-Cox t distribution BCPE: The Box-Cox power exponential distribution Daly CH, Higgins V, Adeli K, Grey VL, Hamid JS. Reference interval estimation: methodological comparison using extensive simulations and empirical data. Clin Biochem. 2017;50(18):1145–58. Peng X, Lv Y, Feng G, Peng Y, Li Q, Song W, Ni X. Algorithm on age partitioning for estimation of reference intervals using clinical laboratory database exemplified with plasma creatinine. Clin Chem Lab Med. 2018;56(9):1514–23. Zierk J, Arzideh F, Haeckel R, Rascher W, Rauh M, Metzler M. 
Indirect determination of pediatric blood count reference intervals. Clin Chem Lab Med. 2013;51(4):863–72. CLSI. Establishing, and verifying reference intervals in the clinical laboratory; approved guideline— third edition. CLSI Document EP28-A3c 2008. Zierk J, Arzideh F, Haeckel R, Cario H, Fruhwald MC, Gross HJ, Gscheidmeier T, Hoffmann R, Krebs A, Lichtinghagen R, et al. Pediatric reference intervals for alkaline phosphatase. Clin Chem Lab Med. 2017;55(1):102–10. Zierk J, Arzideh F, Rechenauer T, Haeckel R, Rascher W, Metzler M, Rauh M. Age- and sex-specific dynamics in 22 hematologic and biochemical analytes from birth to adolescence. Clin Chem. 2015;61(7):964–73. Borghi E, de Onis M, Garza C, Van den Broeck J, Frongillo EA, Grummer-Strawn L, Van Buuren S, Pan H, Molinari L, Martorell R, et al. Construction of the World Health Organization child growth standards: selection of methods for attained growth curves. Stat Med. 2006;25(2):247–65. Wan X, Qu Y, Huang Y, Zhang X, Song H, Jiang H. Nonparametric estimation of age-specific reference percentile curves with radial smoothing. Contemp Clin Trials. 2012;33(1):13–22. De Henauw S, Michels N, Vyncke K, Hebestreit A, Russo P, Intemann T, Peplies J, Fraterman A, Eiben G, de Lorgeril M, et al. Blood lipids among young children in Europe: results from the European IDEFICS study. Int J Obes. 2014;38(Suppl 2):S67–75. Ni X, Song W, Peng X, Shen Y, Peng Y, Li Q, Wang Y, Hu L, Cai Y, Shang H, et al. Pediatric reference intervals in China (PRINCE): design and rationale for a large, multicenter collaborative cross-sectional study. Sci Bull. 2018;63(24):1626–34. Stasinopoulos DM, Rigby RA. Generalized additive models for location scale and shape (GAMLSS) in R. J R Stat Soc. 2005;54(3):507–54. Cole TJ. Using the LMS method to measure skewness in the NCHS and Dutch National height standards. Ann Hum Biol. 1989;16(5):407–19. Cole TJ, Green PJ. Smoothing reference centile curves: the LMS method and penalized likelihood. Stat Med. 1992;11(10):1305–19. Royston P, Altman DG. Regression using fractional polynomials of continuous covariates: parsimonious parametric modeling. J R Stat Soc. 1994;43(3):429–67. Arzideh F, Wosniok W, Haeckel R. Indirect reference intervals of plasma and serum thyrotropin (TSH) concentrations from intra-laboratory data bases from several German and Italian medical centres. Clin Chem Lab Med. 2011;49(4):659–64. Pavlov IY, Wilson AR, Delgado JC. Reference interval computation: which method (not) to choose? Clin Chim Acta. 2012;413(13–14):1107–14. Akaike H. A new look at the statistical model identification. IEEE Trans Automatic Control. 1974;19(6):716–23. Hastie T, Tibshirani R. Generalized additive models. Stat Sci. 1986;1(3):297–310. Lane PW, Wood S, Jones MC, Nelder JA, Lee YJ, Borja MC, Longford NT, Bowman A, Cole TJ. Generalized additive models for location, scale and shape - discussion. Appl Stat. 2005;54:544–54. Rigby RA, Stasinopoulos DM. Using the box-cox t distribution in GAMLSS to model skewness and kurtosis. Stat Model. 2006;6(6):209–29. Rigby RA, Stasinopoulos DM. Smooth centile curves for skew and kurtotic data modelled using the box-cox power exponential distribution. Stat Med. 2004;23(19):3053–76. van Buuren S, Fredriks M. Worm plot: a simple diagnostic device for modelling growth reference curves. Stat Med. 2001;20(8):1259–77. Jaschke SR: The cornish-fisher-expansion in the context of delta - gamma - normal approximations. Sfb Discussion Papers 2001. Rigby RA, Stasinopoulos DM. 
Automatic smoothing parameter selection in GAMLSS with an application to centile estimation. Stat Methods Med Res. 2014;23(4):318–32. Asgari S, Higgins V, McCudden C, Adeli K. Continuous reference intervals for 38 biochemical markers in healthy children and adolescents: comparisons to traditionally partitioned reference intervals. Clin Biochem. 2019;73:82–9. Schnabl K, Chan MK, Gong Y, Adeli K. Closing the gaps in paediatric reference intervals: the CALIPER initiative. Clin Biochem Rev. 2008;29(3):89–96. Lv Y, Feng G, Ni X, Song W, Peng X. The critical gap for pediatric reference intervals of complete blood count in China. Clin Chim Acta. 2017;469:22–5. Horn PS, Pesce AJ, Copeland BE. A robust approach to reference interval estimation and evaluation. Clin Chem. 1998;44(3):622–31. Morkrid L, Rowe AD, Elgstoen KB, Olesen JH, Ruijter G, Hall PL, Tortorelli S, Schulze A, Kyriakopoulou L, Wamelink MM, et al. Continuous age- and sex-adjusted reference intervals of urinary markers for cerebral creatine deficiency syndromes: a novel approach to the definition of reference intervals. Clin Chem. 2015;61(5):760–8. We acknowledge all participants for their cooperation and sample contributions. We also thank Richard Lipkin, PhD, from Liwen Bianji, Edanz Group China, for editing the English text of a draft of this manuscript. Medical hospital authority, National Health Commission of the People's Republic of China (No. 2017374), only as a unique financial support for this project, has no interest and role in the design and performance, or in the data collection, analysis and interpretation of the results, or in the preparation and approval of this manuscript. This work was also supported by Beijing Municipal Administration of Hospitals Clinical medicine Development of Special Grant (No. ZY201404), Pediatric Medical Coordinated Development, Center of Beijing Municipal Administration of Hospitals (No. XTCX201812), and Capital Medical development research foundation (2019–2-2096). Department of Epidemiology and Health Statistics, School of Public Health, Capital Medical University, Beijing, 100069, China Kun Li & Xiaoxia Peng Center for Clinical Epidemiology and Evidence-based Medicine, Beijing Children's Hospital, Capital Medical University, National Center for Children Health, No.56 Nanlishi Road, Beijing, 100045, China Kun Li, Yaguang Peng, Ruohua Yan, Xiaoxia Peng & Xin Ni Department of Clinical Laboratory Center, Beijing Children's Hospital, Capital Medical University, National Center for Children Health, No.56 Nanlishi Road, Beijing, 100045, China Lixin Hu, Qiliang Li & Wenqi Song Beijing Key Laboratory for Pediatric Diseases of Otolaryngology, Head and Neck, Surgery, Beijing Children's Hospital, Capital Medical University, National Center for Children Health, No.56 Nanlishi Road, Beijing, 100045, China Xin Ni Kun Li Lixin Hu Yaguang Peng Ruohua Yan Qiliang Li Xiaoxia Peng Wenqi Song PX and LK contributed to the study design. HL, SW and LQ contributed to the data acquisition. PY and YR provided statics guidance. LK contributed to drafting the manuscript. PX and NX contributed to revising the manuscript for important intellectual content. All authors read and approved the final manuscript. Correspondence to Xiaoxia Peng or Wenqi Song or Xin Ni. The PRINCE study was approved by the Institutional Review Board of Beijing Children's Hospital (IEC-C-028-A10-V.05). At the same time, the protocol was approved by the institutional review boards of other 10 collaborating centers. 
Informed written consent was signed by the participant's legally authorized representative (the parent or guardian) in the case of young children aged less than 8 years. Otherwise, the informed consent should be signed by both children himself and the participant's legally authorized representative [10]. Li, K., Hu, L., Peng, Y. et al. Comparison of four algorithms on establishing continuous reference intervals for pediatric analytes with age-dependent trend. BMC Med Res Methodol 20, 136 (2020). https://doi.org/10.1186/s12874-020-01021-y Continuous reference intervals Graphical report
HOW much rice? Written by Colin+ in core 2, gcse. There's a legend, so well-known that it's almost a cliché, about the wise man who invented chess. When asked by the great king what reward he wanted, he replied that he'd be satisfied by a chessboard full of rice: one grain on the first square, two on the second, four on the third, doubling each time. The king, of course, laughed at his modest demands, and told his people to make it so. His people nervously told the king that actually, that was quite a lot of rice, and if he knew about his Core 2 geometric sequences, he wouldn't have been so badly duped. After all, $S_n = \frac{ a(1-r^n)}{1-r}$. Here $a=1$, $r=2$ and $n=64$, so that works out to ${2^{64} -1}$, which is 18,446,744,073,709,551,615 grains of rice altogether. "Do we have that much rice?" asked the king. "Well, sire, that's $1.8\times 10^{19}$ grains, and there are about $3.6 \times 10^{7}$ grains in a tonne." "So it's, what, $0.5 \times 10^{12}$... five hundred billion tonnes?" "Very good, sire." "Do we have that much rice?" "I'm afraid not, sire - even looking far into the future, say in the early 21st century, that'll be roughly the entire worldwide crop for a thousand years." But how much area would that take up? "One square centimetre of rice," said the king's people, "is about ten grains." "So 18 quintillion grains needs $1.8 \times 10^{18} \text {cm}^2$?" "Yes, sire, although we should convert that into more sensible units." "Fine. A metre squared is 100... no! 10,000 centimetres squared, which takes us down to $1.8 \times 10^{14} \text{m}^2$." "Still a little... unwieldy, sire." "Fine. Let's take it down another million by talking about kilometres squared, so it's $1.8 \times 10^{8} \text {km}^2.$ Is that a big number?" "It's quite big, sire." "How big is the world?" "The world, sire? I don't have that information to hand - but we can work it out. The world's circumference is about $4\times 10^{4}$ kilometres, so its radius is that divided by $2\pi$." "$6.3\times 10^{3}\text {km}$?" guessed the king, who'd had some Ninja training.¹ "Yes, sire. So the surface area is..." "$4\pi r^2$," interrupted the king. "$r^2$ is about $4 \times 10^7$, so it's roughly $5 \times 10^8 \text{km}^2$." "One day, sire, someone will invent a machine that will answer such questions in an instant." "But of course, only a third of the Earth's surface is land." "Correct, sire." "Which is $1.7 \times 10^{8} \text{km}^2$. So the wise man wants enough rice to cover pretty much every landmass on the planet to a depth of one grain. Fine. I think this can be easily solved." And the king had the wise man's head chopped off. Nobody likes a smartarse. * Thanks to Aidan for working this out with me. $4 \div 2\pi \simeq 4 \times \frac {7}{44} = \frac{7}{11} \approx 0.64$ [↩] 3 comments on "HOW much rice?" lol, should of double checked before saying yes to the wise man… And the wise man should not have been greedy… mark porter All this talk of rice is making me hungry. Where's my curry powder?
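For anyone who wants to reproduce the estimates in the dialogue, a short R check is below; the grains-per-tonne, grains-per-square-centimetre, circumference and one-third-land figures are the ones assumed above.

```r
# Checking the chessboard-rice estimates with the figures assumed above.
grains <- 2^64 - 1                    # about 1.84e19 (floating point is fine here)
grains_per_tonne <- 3.6e7
tonnes <- grains / grains_per_tonne   # about 5e11 tonnes

grains_per_cm2 <- 10
area_km2 <- grains / grains_per_cm2 / 1e4 / 1e6   # cm^2 -> m^2 -> km^2, ~1.8e8

r_km     <- 4e4 / (2 * pi)            # radius from a 40,000 km circumference
land_km2 <- 4 * pi * r_km^2 / 3       # roughly a third of the surface is land

c(tonnes = tonnes, area_km2 = area_km2, land_km2 = land_km2)
```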
Why is it important that partial derivatives commute? I am asking this in the context of differential geometry (specifically Riemannian). When the Levi-Civita connection is defined, we require that the torsion tensor is 0, which in local coordinates translates to the requirement that $\Gamma_{ij}^{k} = \Gamma_{ji}^{k}$, which is the covariant derivative version of saying partial derivatives commute: $\nabla_{\partial_i}(\partial_j)=\nabla_{\partial_j}(\partial_i)$. This is obviously true in the Euclidean setting, and I understand all the details of the proofs. But why is this such an essential property? Why does this capture our intuitive sense of derivatives? dg.differential-geometry connections riemannian-geometry torsion (asked by R S) A related question might be: What uses, if any, are there for connections that aren't torsion free? – Harald Hanche-Olsen Feb 23 '13 at 15:42 There are nice interpretations of torsion in mathoverflow.net/questions/20493/… Especially, check out Tom Boardman's answer. – Claudio Gorodski Feb 23 '13 at 20:24 So that exterior differentiation makes a chain complex! – David Corwin Feb 24 '13 at 16:11 Isn't this a repeat of the thread Claudio links to above? – Ryan Budney Feb 24 '13 at 19:44 "Which is the covariant derivative version of saying partial derivatives commute": not quite. It only says that for scalar functions. – Willie Wong Feb 25 '13 at 9:51 To me, a Riemannian metric and the Levi-Civita connection associated with the metric represent the intrinsic geometric properties of a submanifold in Euclidean space induced by the inner product and natural flat connection on Euclidean space. Since they are intrinsic, their definitions can be extended from submanifolds of Euclidean space to abstract manifolds. If you don't assume the connection is torsion-free, then there are an infinite number of connections that are compatible with the metric (instead of exactly one), so the link between the geometric properties of the metric and that of the connection is much weaker. Deane Yang Thanks. This makes sense, although in a way it kind of cancels the entire point of differential geometry as I see it so far (which is not a lot): I thought the motivation is to define calculus again in an intrinsic, coordinate-free way on general smooth manifolds; specifically, since we believe our universe is modeled well by one. Reducing it all back to the Euclidean setting when we stumble upon some complication does not seem to follow the same spirit. Perhaps this means we should think our universe is some complicated embedding in a larger Euclidean space, unreachable to us? – R S Feb 23 '13 at 15:59 Maybe it's circular in this setting, but there is Nash's embedding theorem. – horse with no name Feb 23 '13 at 16:09 I have two responses: Designing an intrinsic geometric structure modeled on submanifolds of Euclidean space does not mean we are restricting our attention to submanifolds of Euclidean space, notwithstanding the Nash isometric embedding theorem. Nor does it mean that the geometric structure is not co-ordinate-free. The study of Riemannian geometry does not depend on co-ordinates. Of course, even the study of submanifolds in Euclidean space does not, either.
– Deane Yang Feb 23 '13 at 16:52 Here is another way of obtaining the Christoffel symbols with the symmetry imposed by the torsion-free condition $$ \Gamma^i_{k\ell}=\Gamma^i_{\ell k}. $$ This goes back to Riemann's Habilitation. Suppose that $(M,g)$ is a Riemann manifold of dimension $N$, $p\in M$. By fixing an orthonormal frame of $T_pM$ we can find local coordinates $(x^1,\dotsc, x^N)$ near $p$ such that $\newcommand{\pa}{\partial}$ $$ x^i(p)=0, \;\; g=\sum_{i,j} g_{ij}(x)\, dx^i dx^j, $$ $$g_{ij}(x)= \delta_{ij} +\sum_k\pa_{x^k}g_{ij}(0)\, x^k + O(|x|^2). $$ In other words, in these coordinates, $$ g_{ij}(x)=\delta_{ij} +O(|x|). $$ Riemann was asking whether one can find new coordinates near $p$ such that in these coordinates the metric $g$ satisfies $g_{ij}=\delta_{ij}$. As a first step, we can ask whether we can find a new system of coordinates such that, in these coordinates, the metric $g$ is described by $$ g=\sum_{i,j}\hat{g}_{ij}\, dy^i dy^j, $$ $$\hat{g}_{ij}(y)=\delta_{ij}+ O(|y|^2). \tag{1} $$ The new coordinates $(y^j)$ are described in terms of the old coordinates $(x^i)$ by a family of Taylor approximations $$y^j= x^j + \frac{1}{2}\sum_{k,\ell}\gamma^j_{\ell k} x^\ell x^k + O(|x|^3),\;\; \gamma^j_{\ell k}=\gamma^j_{k\ell}. $$ The constraint (1) implies $$ \gamma^j_{\ell k}=\frac{1}{2}\left(\pa_{x^\ell}g_{jk}+\pa_{x^k}g_{j\ell}-\pa_{x^j}g_{\ell k}\right)_{x=0}. $$ We see that, in the $x$ coordinates, $$ \Gamma^i_{k\ell}(p)=\gamma^i_{k\ell}, $$ because $g^{ij}(p)=\delta^{ij}$. It took people several decades after Riemann's work to realize that the coefficients $\Gamma^i_{k\ell}$ are related to parallel transport, and ultimately, to a concept of connection. Ultimately, to my mind, the best explanation for the torsion-free requirement comes from Cartan's moving frame technique. The clincher is the following technical fact: given a connection $\nabla$ on $TM$ and a $1$-form $\alpha\in \Omega^1(M)$, for any vector fields $X,Y$ on $M$ we have $$d\alpha(X,Y)= X\alpha(Y)-Y\alpha(X)-\alpha([X,Y]) $$ $$= (\nabla_X\alpha)(Y)-(\nabla_Y\alpha)(X)+\alpha(\nabla_XY-\nabla_YX)-\alpha([X,Y]) $$ $$= (\nabla_X\alpha)(Y)-(\nabla_Y\alpha)(X)+\alpha\bigl(\;T_\nabla(X,Y)\;\bigr). $$ If the torsion is zero, the above equality loses a term, and one obtains rather easily Cartan's structural equations of a Riemann manifold. Liviu Nicolaescu The covariant derivative version of trying to commute partial derivatives is: $\nabla_{\partial_i}\nabla_{\partial_j}-\nabla_{\partial_j}\nabla_{\partial_i} - \nabla_{[\partial_i,\partial_j]} = R(\partial_i,\partial_j)$, where the last term on the left vanishes because $[\partial_i,\partial_j]=0$. Torsion is measuring something different: it is the covariant derivative of the soldering form $\sigma\in\Omega^1(M,E)$ which you use to identify the vector bundle $E$ with $TM$, where $E$ is the bundle you are considering your covariant derivative on. Peter Michor Thanks. Is it possible to explain this in simpler terms ("soldering form" seems to be advanced compared with the basic place in diff. geometry in which I now stand)? Or do you think a true understanding of the torsion tensor requires more advanced concepts? – R S Feb 23 '13 at 16:05 There are lots of nice explanations of torsion here mathoverflow.net/questions/20493/… although you may consider them to be advanced also. – Paul Reynolds Feb 23 '13 at 19:12 The Levi-Civita connection is just a very special one - torsion free.
It is interesting that the same geometry may be described by switching to another, non-torsion-free connection. For example, there is a version of general relativity called the teleparallel formulation. While the curvature tensor (based on the new connection) vanishes, all the deviation from flatness has been shifted to the torsion tensor (better: vector-valued 2-form). Einstein exchanged ideas with Cartan about this in the 1920s. Torsion also has a counterpart in physics as dislocation density (dislocations are defects in crystals). The theory was developed in the 1950s by Kondo, Bilby and Kröner. See also the book Ricci-Calculus by J. A. Schouten. To summarize, it is not that important that the connection is symmetric; it is merely a matter of choice. The metric, for instance, is independent of the connection. ClassicalPhysicist You put the cart before the horse. First, you fix a metric. Then you fix a connection compatible with the metric. If you are interested only in geodesics, then, as E. Cartan observed, there are several connections compatible with the metric that give the same geodesics. If you are interested in more than geodesics, then curvature matters, and the curvature does depend on the choice of connection. – Liviu Nicolaescu Feb 25 '13 at 13:14 I don't think that contradicts what I said, but it is surely more precise to start the discussion with the metric, yes. Thanks for clarifying. – ClassicalPhysicist Feb 27 '13 at 10:45
CommonCrawl
Heat Transfer Assignment 6 — Flat Plate Flow
$\xi$ is a parameter related to your student ID, with $\xi_1$ corresponding to the last digit, $\xi_2$ to the last two digits, $\xi_3$ to the last three digits, etc. For instance, if your ID is 199225962, then $\xi_1=2$, $\xi_2=62$, $\xi_3=962$, $\xi_4=5962$, etc. Keep a copy of the assignment — the assignment will not be handed back to you. You must be capable of remembering the solutions you hand in.
Question #1
Consider the wing of an aircraft as a flat plate of 2.5 m length in the flow direction. The plane is moving at $100$ m/s in air that is at a pressure of 0.7 bar and a temperature of $-10^\circ$C. If the top surface of the wing absorbs solar radiation at a rate of $800$ $\rm W/m^2$, estimate its steady-state temperature with and without the effect of viscous dissipation. Assume the wing to be of solid construction and to have a single, uniform temperature. Ignore incident radiation on the bottom surface and take $\epsilon=0.4$ on the top and bottom surfaces of the wing.
Question #2
A thin, flat plate of length $L=1$ m separates two airstreams that are in parallel flow over opposite surfaces of the plate. One airstream has a temperature of $T_{\infty,1}=200^\circ$C and a velocity of $u_{\infty,1}=60$ m/s, while the other airstream has a temperature of $T_{\infty,2}=25^\circ$C and a velocity of $u_{\infty,2}=10$ m/s. The pressure in both streams corresponds to 1 atm. What is the temperature at the midpoint of the plate?
Question #3
Consider liquid water flowing over a flat plate of length $L=1$ m. The water has the following properties: $$ \rho=1000~{\rm kg/m^3},~~~c_p=4000~{\rm J/kgK},~~~\mu=10^{-3}~{\rm kg/ms},~~~k=0.6~{\rm W/m\cdot^\circ C} $$ Midway through the plate at $x=0.5~$m, you measure a heat flux to the surface of: $$ q''_{x=0.5~{\rm m}}=3181 ~{\rm W/m^2} $$ You also measure an average heat flux to the surface over the length of the plate of: $$ \overline{q''}=4500 ~{\rm W/m^2} $$ Knowing the latter, and knowing that the plate temperature is equal to $20^\circ$C, do the following: (a) Is the flow laminar or turbulent, or a mix of both? You must provide proof of this using the data provided. (b) What is the possible range of the freestream velocity $U_{\infty}$? (c) Find a relationship between $T_\infty$ and $U_\infty$.
Answers: 1. $-3.66^\circ$C, $-8.13^\circ$C. 2. 460.8 K. 3. $T_\infty=20^\circ {\rm C}+6^\circ {\rm C\, m^{0.5}s^{-0.5}}\, U_\infty^{-0.5}$.
Due on Wednesday May 22nd at 9:00. Do all questions.
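As a quick plausibility check on Question #3 and on answer 3 above, here is a short sketch (my own, not part of the assignment) that assumes laminar flow at $x=0.5$ m and the standard local flat-plate correlation $Nu_x = 0.332\,Re_x^{1/2}\,Pr^{1/3}$; under that assumption the measured local flux ties $T_\infty$ to $U_\infty$ in exactly the form of answer 3.

```python
import math

# Water properties given in the assignment
rho, cp, mu, k = 1000.0, 4000.0, 1e-3, 0.6
Pr = mu * cp / k                       # = 6.67
x, q_loc, T_s = 0.5, 3181.0, 20.0      # m, W/m^2, deg C

def T_inf(U):
    """Freestream temperature implied by the measured local flux,
    assuming laminar flow at x and Nu_x = 0.332 Re_x^0.5 Pr^(1/3)."""
    Re_x = rho * U * x / mu
    h_x = (k / x) * 0.332 * math.sqrt(Re_x) * Pr ** (1 / 3)
    return T_s + q_loc / h_x           # q'' = h_x (T_inf - T_s)

for U in (0.25, 0.5, 1.0):             # m/s
    print(U, round(T_inf(U), 2))       # ~ 20 + 6/sqrt(U)
```

With the given properties this collapses to $T_\infty \approx 20^\circ{\rm C} + 6\,U_\infty^{-1/2}$, the form of answer 3; if one also takes the usual transition value $Re_{x,c}=5\times 10^5$, laminar flow at $x=0.5$ m would require $U_\infty<1$ m/s, which is the kind of restriction part (b) asks about.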
CommonCrawl
Posted on February 14, 2021 by Maria Gillespie In my graduate Advanced Combinatorics class last semester, I covered the combinatorics of crystal base theory. One of the concepts that came up in this context was ballot sequences, which are motivated by the following elementary problem about voting: Suppose two candidates, A and B, are running for local office. There are 100 voters in the town, 50 of whom plan to vote for candidate A and 50 of whom plan to vote for candidate B. The 100 voters line up in a random order at the voting booth and cast their ballots one at a time, and the votes are counted real-time as they come in with the tally displayed for all to see. What is the probability that B is never ahead of A in the tally? We'll provide a solution to this classical problem on page 2 of this post. For now, this motivates the notion of a ballot sequence in two letters, which is a sequence of A's and B's such that, as the word is read from left to right, the number of A's that have been read so far is always at least as large as the number of B's. For instance, the sequence AABABB is ballot, because as we read from left to right we get the words A, AA, AAB, AABA, AABAB, and AABABB, each of which has at least as many A's as B's. On the other hand, the sequence ABBAAB is not, because after reading the first three letters ABB, there are more B's than A's. If we replace the A's by $1$'s and $B$'s by $2$'s and reverse the words, we obtain the notion of a ballot sequence in $1$'s and $2$'s described in our previous post on crystals. In particular, we say a sequence of $1$'s and $2$'s is ballot if, when we read the word from right to left, there are at least as many $1$'s as $2$'s at each step. So $221211$ and $211111$ are both ballot, but $111112$ and $211221$ are not. Enumerating all ballot sequences When I introduced this notion in class, one of my students asked the following. How many total ballot sequences of $1$'s and $2$'s are there of length $n$? Now, as in the first question about voting above, the more common version of this type of question is to fix the number of $1$'s and $2$'s in the sequence (the "content" of the word) and ask how many ballot sequences have exactly that many $1$'s and $2$'s. But in this case, the question was asked with no fixed content, resulting in a sum of Littlewood-Richardson coefficients (or, in voting terms, where the voters have not yet decided who they will vote for when they line up, and may vote for either candidate). To start, let's try some examples. For $n=0$, there is only one ballot sequence, namely the empty sequence. For $n=1$, there is also just one: $1$. For $n=2$, there are two: $11$ and $21$. For $n=3$, there are three: $111$, $121$, $211$. For $n=4$, there are six: $1111$, $2111$, $1211$, $1121$, $2211$, $2121$. And for $n=5$, there are ten: $$11111, 21111, 12111, 11211, 11121, 22111, 21211, 21121, 12211, 12121$$ The sequence of answers, $1,1,2,3,6,10,\ldots$, so far agrees with the "middle elements" of the rows of Pascal's triangle: $$\begin{array}{ccccccccccc} & & & & & \color{red}1 & & & & & \\ &&&&\color{red} 1&&1 &&&& \\ &&&1&&\color{red} 2&&1&&& \\ &&1&&\color{red} 3&&3&&1&& \\ &1&&4&&\color{red} 6&&4&&1& \\ 1&&5&&{\color{red}{10}}&&10&&5&&1 \end{array}$$ More formally, it appears that the number of ballot sequences of $1$'s and $2$'s of length $2n$ is $\binom{2n}{n}$, and the number of length $2n+1$ is $\binom{2n+1}{n}$. 
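Before turning to proofs, a quick brute-force check of this pattern by machine (my own sketch, not part of the original post): it counts the ballot sequences of each length directly and compares with the middle binomial coefficient $\binom{n}{\lfloor n/2\rfloor}$, which is $\binom{2m}{m}$ for $n=2m$ and $\binom{2m+1}{m}$ for $n=2m+1$.

```python
from itertools import product
from math import comb

def is_ballot(word):
    """Ballot in the sense of the post: reading right to left,
    the 1's never fall behind the 2's."""
    ones = twos = 0
    for a in reversed(word):
        ones += a == 1
        twos += a == 2
        if twos > ones:
            return False
    return True

for n in range(9):
    count = sum(is_ballot(w) for w in product((1, 2), repeat=n))
    print(n, count, comb(n, n // 2))   # 1 1, 1 1, 2 2, 3 3, 6 6, 10 10, ...
```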
Now, it is possible to prove this formula holds using a somewhat complicated recursive argument, which we will also illustrate on page 2 of this post. But there is also a very elegant solution using crystal operators. Solution using crystals Let's recall the definition of the crystal operator $F_1$ on words of $1$'s and $2$'s. Given such a word, we first replace all $2$'s with left parentheses, "$($", and all $1$'s with right parentheses, "$)$". We then "cancel" left and right parentheses in matching pairs as shown in the following example.
$$\begin{array}{ccccccccccc}
2 & 2 & 1 & 1 & 1 & 1 & 2 & 1 & 2 & 2 & 1 \\
( & ( & ) & ) & ) & ) & ( & ) & ( & ( & ) \\
( & & & ) & ) & ) & & & ( & & \\
& & & & ) & ) & & & ( & &
\end{array}$$
Once all matching pairs have been cancelled, we are left with a subsequence of the form $$)))\cdots))(((\cdots(($$ consisting of some number of right parentheses (possibly zero) followed by some number of left parentheses (possibly zero). If there is a $)$ remaining, then $F_1$ changes the rightmost $)$ that was not cancelled to $($, changing that $1$ to $2$ in the original word. The word therefore becomes:
$$\begin{array}{ccccccccccc}
2 & 2 & 1 & 1 & 1 & 2 & 2 & 1 & 2 & 2 & 1
\end{array}$$
Thus $F_1(22111121221)=22111221221$. If there are no $)$ symbols remaining after cancelling, the operator $F_1$ is undefined. Now, consider the directed graph on all words of $1$'s and $2$'s of length $n$, where we draw an arrow from word $w$ to word $v$ if $F_1(w)=v$. Here is the graph for $n=4$: This graph will in general be a union of disjoint one-directional chains, since when $F_1$ is defined it is invertible: the unbracketed $1$ that is changed to a $2$ is still unbracketed, and we can identify it as the leftmost unbracketed $2$ in the new word. We write $E_1$ to denote this inverse operator, which changes the leftmost unpaired $2$ to a $1$ if it exists, and is undefined otherwise. We also cannot have cycles in the $F_1$ graph, because the number of $1$'s always decreases with every application of $F_1$. Thus we have chains of arrows going forward until we reach an element $w$ for which $F_1(w)$ is undefined. Similarly, going backwards along the $F_1$ arrows, we can continue until $E_1$ is undefined, and we call these top elements of each chain the highest weight words. There are six highest weight words in the above diagram: $1111$, $2111$, $1211$, $1121$, $2211$, $2121$. Notice that these are precisely the two-letter ballot sequences of length $4$! Indeed, if a word is ballot, then every $2$, as a left parenthesis, will be cancelled with some $1$, as a right parenthesis, to its right, so $E_1$ is undefined on such a word. Conversely, if a word is not ballot, consider the first step in the right-to-left reading of the word that has more $2$'s than $1$'s. The $2$ that is encountered at that step cannot be bracketed with a $1$ to its right, because there are not enough $1$'s to bracket with the $2$'s in that suffix. Thus a word is ballot if and only if $E_1$ is undefined, which means that it is at the top of its chain, or highest weight. Since there is exactly one highest weight word per chain, we have the following. The number of ballot sequences of length $n$ is equal to the number of chains in the $F_1$ crystal graph on all $2^n$ words of $1$'s and $2$'s of length $n$. So, to count the ballot words, it suffices to count the chains of the $F_1$ graph. And here's the key idea: instead of counting the top elements, count the middle ones!
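To make the chain-counting argument concrete, here is a short sketch of $F_1$ via the parenthesis matching just described (my own code, not from the post; the function names are only illustrative). It rebuilds each chain from its highest weight word, i.e. from a ballot word, and picks out the middle element of each chain for $n=4$.

```python
from itertools import product

def f1(word):
    """View 2 as '(' and 1 as ')', cancel matched pairs, then change the
    rightmost unmatched 1 into a 2; return None if F1 is undefined."""
    depth, unmatched_ones = 0, []
    for i, a in enumerate(word):
        if a == 2:
            depth += 1
        elif depth > 0:              # this 1 cancels an earlier 2
            depth -= 1
        else:                        # an unmatched 1
            unmatched_ones.append(i)
    if not unmatched_ones:
        return None
    i = unmatched_ones[-1]
    return word[:i] + (2,) + word[i + 1:]

def is_ballot(word):
    """Reading right to left, the 1's never fall behind the 2's."""
    ones = twos = 0
    for a in reversed(word):
        ones, twos = ones + (a == 1), twos + (a == 2)
        if twos > ones:
            return False
    return True

# the worked example from the text
assert f1((2, 2, 1, 1, 1, 1, 2, 1, 2, 2, 1)) == (2, 2, 1, 1, 1, 2, 2, 1, 2, 2, 1)

n = 4
chains = []
for w in product((1, 2), repeat=n):
    if is_ballot(w):                 # ballot = highest weight = top of chain
        chain, nxt = [w], f1(w)
        while nxt is not None:
            chain.append(nxt)
            nxt = f1(nxt)
        chains.append(chain)

middles = sorted("".join(map(str, c[len(c) // 2])) for c in chains)
print(len(chains), middles)
# 6 ['1122', '1212', '1221', '2112', '2121', '2211']
```

For $n=4$ the six middle words are exactly the words with two $1$'s and two $2$'s, which is the comparison made next.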
In the picture above, the middle elements of each chain are: $$1122, 2112, 1212, 1221, 2211, 2121$$ which is just the set of all words having exactly two $1$'s and two $2$'s, and is clearly counted by $\binom{4}{2}$.

Why does this work in general? Here's where we need one more fact about the $F_1$ chains: they are "content-symmetric". If the top element of a chain has $k$ ones and $n-k$ twos, then the bottom element has $n-k$ ones and $k$ twos. This is because the top element, after pairing off an equal number of $2$'s and $1$'s by matching parentheses, has a certain number of unpaired $1$'s, which then all get changed to $2$'s one step at a time as we move towards the bottom of the chain. In particular, the middle element of each chain has exactly as many $1$'s as $2$'s (or, if $n$ is odd, the two "middle elements" have one more $1$ than $2$ and one less $1$ than $2$ respectively). Finally, since the graph is drawn on all $2^n$ possible words, every word having the same number of $1$'s as $2$'s (or off by $1$ in the odd case) occurs in exactly one chain. It follows that there is a bijection between the chains and these words, which are enumerated by $\binom{2n}{n}$ for words of length $2n$, and $\binom{2n+1}{n}$ for words of length $2n+1$.

For the more elementary approach, and the solution to the classical ballot problem, turn to the next page!

This entry was posted in Gemstones, Sapphire by Maria Gillespie. Bookmark the permalink.

2 thoughts on "Counting ballots with crystals"

neozhaoliang on June 13, 2021 at 9:55 am said: Very nice post! Is there a proof of the "hook length formula" using crystals?

Maria Gillespie on October 12, 2021 at 2:36 pm said: That's a good question! I don't actually know. I'll have to think about it!
CommonCrawl
Gontsov, Renat Ravilevich Statistics Math-Net.Ru Total publications: 15 Scientific articles: 13 Presentations: 12 This page: 2707 Abstract pages: 7576 Full texts: 2519 References: 557 Candidate of physico-mathematical sciences (2005) Analytic theory of differential equations http://www.mathnet.ru/eng/person14020 List of publications on Google Scholar List of publications on ZentralBlatt https://mathscinet.ams.org/mathscinet/MRAuthorID/735007 Publications in Math-Net.Ru 1. Moulay A. Barkatou, Renat R. Gontsov, "Linear Differential Systems with Small Coefficients: Various Types of Solvability and their Verification", SIGMA, 15 (2019), 058, 15 pp. 2. R. R. Gontsov, I. V. Goryuchkina, "Convergence of formal Dulac series satisfying an algebraic ordinary differential equation", Mat. Sb., 210:9 (2019), 3–18 ; Sb. Math., 210:9 (2019), 1207–1221 3. R. R. Gontsov, "On the Dimension of the Subspace of Liouvillian Solutions of a Fuchsian System", Mat. Zametki, 102:2 (2017), 178–185 ; Math. Notes, 102:2 (2017), 149–155 4. I. V. Vyugin, R. R. Gontsov, "On the question of solubility of Fuchsian systems by quadratures", Uspekhi Mat. Nauk, 67:3(405) (2012), 183–184 ; Russian Math. Surveys, 67:3 (2012), 585–587 5. Yu. P. Bibilo, R. R. Gontsov, "Some properties of Malgrange isomonodromic deformations of linear $2\times2$ systems", Tr. Mat. Inst. Steklova, 277 (2012), 22–32 ; Proc. Steklov Inst. Math., 277 (2012), 16–26 6. R. R. Gontsov, V. A. Poberezhnyi, G. F. Helminck, "On deformations of linear differential systems", Uspekhi Mat. Nauk, 66:1(397) (2011), 65–110 ; Russian Math. Surveys, 66:1 (2011), 63–105 7. R. R. Gontsov, "On Movable Singularities of Garnier Systems", Mat. Zametki, 88:6 (2010), 845–858 ; Math. Notes, 88:6 (2010), 806–818 8. I. V. V'yugin, R. R. Gontsov, "Construction of a system of linear differential equations from a scalar equation", Tr. Mat. Inst. Steklova, 271 (2010), 335–351 ; Proc. Steklov Inst. Math., 271 (2010), 322–338 9. R. R. Gontsov, "On Solutions of the Schlesinger Equation in the Neigborhood of the Malgrange $\Theta$-Divisor", Mat. Zametki, 83:5 (2008), 779–782 ; Math. Notes, 83:5 (2008), 707–711 10. R. R. Gontsov, V. A. Poberezhnyi, "Various versions of the Riemann–Hilbert problem for linear differential equations", Uspekhi Mat. Nauk, 63:4(382) (2008), 3–42 ; Russian Math. Surveys, 63:4 (2008), 603–639 11. I. V. Vyugin, R. R. Gontsov, "Additional parameters in inverse problems of monodromy", Mat. Sb., 197:12 (2006), 43–64 ; Sb. Math., 197:12 (2006), 1753–1773 12. R. R. Gontsov, "Refined Fuchs inequalities for systems of linear differential equations", Izv. RAN. Ser. Mat., 68:2 (2004), 39–52 ; Izv. Math., 68:2 (2004), 259–272 13. R. R. Gontsov, "Orders of Zeros of Polynomials on Trajectories of Solutions of a System of Linear Differential Equations with Regular Singular Points", Mat. Zametki, 76:3 (2004), 473–477 ; Math. Notes, 76:3 (2004), 438–442 14. R. R. Gontsov, "Letter to the editors", Izv. RAN. Ser. Mat., 68:6 (2004), 221–222 ; Izv. Math., 68:6 (2004), 1277–1279 15. A. V. Chernavskii, V. P. Leksin, M. Butuzov, I. V. Vyugin, R. R. Gontsov, V. A. Poberezhnyi, Yu. S. Ilyashenko, A. G. Sergeev, S. P. Konovalov, "Reminiscences about Andrei Andreevich Bolibrukh", Uspekhi Mat. Nauk, 59:6(360) (2004), 207–215 ; Russian Math. Surveys, 59:6 (2004), 1213–1224 Presentations in Math-Net.Ru 1. Solving triangular Schlesinger systems via periods of meromorphic differentials R. R. Gontsov Dynamics in Siberia - 2019 2. 
Polynomial solutions of the Schlesinger equation Seminar on analytic theory of differential equations 3. Algebro-geometric solutions of triangular Schlesinger systems 4. Power series and Dirichlet series solving ODEs Seminar "Complex analysis in several variables" (Vitushkin Seminar) 5. Solvability of linear differential systems with small exponents in the Liouvillian sense Seminar on Complex Analysis (Gonchar Seminar) 6. Local solvability by quadratures over the field of meromorphic germs 7. On the solvability of systems of linear differential equations in explicit form Seminar by Department of Differential Equations, Steklov Mathematical Institute of RAS 8. On Galois theory of linear differential equations Seminar of M. S. Pinsker Laboratory 1 IITP RAS 9. Seminar dedicated to the 60th anniversary of the birthday of academician A. A. Bolibrukh D. V. Anosov, Yu. S. Ilyashenko, V. P. Leksin, R. R. Gontsov Steklov Mathematical Institute Seminar 10. The classical Riemann–Hilbert problem and some of its generalizations Renat Gontsov 11. The Lobachevsky function and its properties 12. On the number of additional parameters in inverse monodromy problems I. V. Vyugin, R. R. Gontsov Institute for Information Transmission Problems of the Russian Academy of Sciences (Kharkevich Institute), Moscow National Research University "Moscow Power Engineering Institute"
CommonCrawl
newton interpolating polynomial error

Polynomial interpolation (from Wikipedia): In numerical analysis, polynomial interpolation is the interpolation of a given data set by a polynomial: given some points, find a polynomial which goes exactly through these points. The matrix on the left is commonly referred to as a Vandermonde matrix. Several authors have therefore proposed algorithms which exploit the structure of the Vandermonde matrix to compute numerically stable solutions in $O(n^2)$ operations instead of the $O(n^3)$ required by Gaussian elimination.[2][3][4] Another approach is Neville's algorithm; in this case, we can reduce complexity to $O(n^2)$.[5] The Bernstein form was used in a constructive proof of the Weierstrass approximation theorem by Bernstein and has nowadays gained great importance. Polynomial interpolation is also essential to perform sub-quadratic multiplication and squaring such as Karatsuba multiplication and Toom–Cook multiplication, where an interpolation through points on a polynomial which defines the product yields the product itself. The technique of rational function modeling is a generalization that considers ratios of polynomial functions. The defect of this method, however, is that interpolation nodes should be calculated anew for each new function $f(x)$, and the algorithm is hard to implement numerically.

Uniqueness: consider $r(x)=p(x)-q(x)$. At the $n+1$ data points, $r(x_i)=p(x_i)-q(x_i)=y_i-y_i=0$. So the only way $r(x)$ can exist is if $A = 0$, or equivalently, $r(x)=0$. Either way this means that no matter what method we use to do our interpolation (direct, Lagrange, etc.), assuming we can do all our calculations perfectly, we will always get the same polynomial.

Choosing the nodes so as to make the factor $(x-x_0)\cdots(x-x_n)$ in the error bound small leads to the Chebyshev nodes, which achieve this; however, those nodes are not optimal. Does there exist a single table of nodes for which the sequence of interpolating polynomials converges to any continuous function $f(x)$? The answer is unfortunately negative: for any table of nodes there is a continuous function $f(x)$ on an interval $[a, b]$ for which the sequence of interpolating polynomials diverges on $[a,b]$.[8] GSL has a polynomial interpolation code in C; see also Interpolating Polynomial by Stephen Wolfram, the Wolfram Demonstrations Project.

Question: construct interpolation polynomials of degree at most one and at most two to approximate $f(1.4)$, and find an error bound for the approximation.

Answer: you stated that you know how to find the interpolating polynomial, so we get
$$P_2(x) = 26.8534 x^2-42.2465 x+21.7821.$$
The formula for the error bound is given by
$$E_n(x) = {f^{(n+1)}(\xi(x)) \over (n+1)!}\,(x-x_0)(x-x_1)\cdots(x-x_n).$$
Since $f''$ is strictly increasing on the interval $(1, 1.25)$, the maximum of ${f^{(2)}(\xi(x)) \over 2!}$ will be $4e^{2 \times 1.25}/2!$.

Follow-up comment: now, when I make a plot of the error estimate it becomes surprisingly equal to the actual error $|f(x)-p_2(x)|$, so equal that I assume it is just the numerical precision that causes the difference.
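The quoted $P_2$ and the factor $4e^{2\times 1.25}/2!$ above are consistent with interpolating $f(x)=e^{2x}-x$ at the nodes $1$, $1.25$, $1.6$; this choice reproduces the quoted coefficients, but since the original problem statement is not preserved here it should be treated as an assumption. Under that assumption, here is a minimal sketch of the standard error-bound computation at $x=1.4$.

```python
import numpy as np

# Assumed setup (reconstructed, not stated explicitly above):
# f(x) = exp(2x) - x interpolated at nodes 1, 1.25, 1.6.
f = lambda x: np.exp(2 * x) - x
nodes = np.array([1.0, 1.25, 1.6])
x_eval = 1.4

# Degree-2 interpolating polynomial through the three nodes
coeffs = np.polyfit(nodes, f(nodes), deg=2)
print(np.round(coeffs, 4))         # ~ [26.85, -42.25, 21.78], matching the quoted P2
p2 = np.polyval(coeffs, x_eval)

# Error bound E_2(x) = max|f'''| / 3! * |(x-x0)(x-x1)(x-x2)|,
# with f'''(x) = 8 exp(2x) maximised at the right end of [1, 1.6]
bound = 8 * np.exp(2 * 1.6) / 6 * abs(np.prod(x_eval - nodes))
print(abs(f(x_eval) - p2), bound)  # ~0.23 <= ~0.39
```

With these assumptions the bound of roughly $0.39$ indeed dominates the actual error of roughly $0.23$ at $x=1.4$.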
CommonCrawl
Pullback exponential attractors for differential equations with variable delays DCDS-B Home Analysis of time-domain Maxwell's equations in biperiodic structures January 2020, 25(1): 287-300. doi: 10.3934/dcdsb.2019182 Advances in the LaSalle-type theorems for stochastic functional differential equations with infinite delay Ya Wang 1, , Fuke Wu 1,, , Xuerong Mao 2, and Enwen Zhu 3, School of Mathematics and Statistics, Huazhong University of Science and Technology, Wuhan 430074, China Department of Mathematics and Statistics, University of Strathclyde, Glasgow G1 1XH, UK School of Mathematics and Computing Science, Changsha University of Science and Technology, Changsha, Hunan 410004, China Received January 2019 Published July 2019 Fund Project: The research was supported in part by the National Natural Science Foundations of China (Grant Nos. 1161101211 and 61873320), and the Royal Society and the Newton Fund (NA160317, Royal Society-Newton Advanced Fellowship) This paper considers stochastic functional differential equations (SFDEs) with infinite delay. The main aim is to establish the LaSalle-type theorems to locate limit sets for this class of SFDEs. In comparison with the existing results, this paper gives more general results under the weaker conditions imposed on the Lyapunov function. These results can be used to discuss the asymptotic stability and asymptotic boundedness for SFDEs with infinite delay. In the end, two examples will be given to illustrate applications of our new results established. Keywords: Stochastic functional differential equations, the LaSalle-type theorem, infinite delay, asymptotic stability, nonnegative semimartingale convergence theorem. Mathematics Subject Classification: Primary: 34K50, 60H10; Secondary: 34D45, 37C75. Citation: Ya Wang, Fuke Wu, Xuerong Mao, Enwen Zhu. Advances in the LaSalle-type theorems for stochastic functional differential equations with infinite delay. Discrete & Continuous Dynamical Systems - B, 2020, 25 (1) : 287-300. doi: 10.3934/dcdsb.2019182 L. Arnold, Stochastic Differential Equations: Theory and Applications, Wiley, New York, 1974. Google Scholar [2] A. Friedman, Stochastic Differential Equations and Applications, Academic Press, New York, 1976. doi: 10.1007/978-3-642-11079-5_2. Google Scholar J. K. Hale and S. M. V. Lunel, Introduction to Functional Differential Equations, Springer-Verlag, New York, 1993. doi: 10.1007/978-1-4612-4342-7. Google Scholar Y. Hino, S. Murakami and T. Naito, Functional Differential Equations with Infinite Delay, Springer-Verlag, Berlin, 1991. doi: 10.1007/BFb0084432. Google Scholar [5] V. B. Kolmanovskii and V. R. Nosov, Stability of Functional Differential Equations, Academic Press, London, 1986. Google Scholar J. P. LaSalle, Stability theory for ordinary differential equations, Journal of Differential Equations, 4 (1968), 57-65. doi: 10.1016/0022-0396(68)90048-X. Google Scholar X. Li and X. Mao, The improved lasalle-type theorems for stochastic differential delay equations, Stochastic Analysis and Applications, 30 (2012), 568-589. doi: 10.1080/07362994.2012.684320. Google Scholar X. Mao, Stochastic versions of the lasalle theorem, Journal of Differential Equations, 153 (1999), 175-195. doi: 10.1006/jdeq.1998.3552. Google Scholar X. Mao, Lasalle-type theorems for stochastic differential delay equations, Journal of Mathematical Analysis and Applications, 236 (1999), 350-369. doi: 10.1006/jmaa.1999.6435. Google Scholar X. 
Mao, A note on the lasalle-type theorems for stochastic differential delay equations, Journal of Mathematical Analysis and Applications, 268 (2002), 125-142. doi: 10.1006/jmaa.2001.7803. Google Scholar X. Mao, The lasalle-type theorems for stochastic functional differential equations, Nonlinear Studies, 7 (2000), 307-328. Google Scholar X. Mao, Stochastic Differential Equations and Applications, 2$^{nd}$ edition, Horwood, Chichester, 2008. doi: 10.1016/B978-1-904275-34-3.50013-X. Google Scholar X. Mao, Razumikhin-type theorems on exponential stability of stochastic functional differential equations, Stochastic Processes and their Applications, 65 (1996), 233-250. doi: 10.1016/S0304-4149(96)00109-3. Google Scholar S. E. A. Mohammed, Stochastic Functional Differential Equations, Pitman (Advanced Publishing Program), Boston, MA, 1984. Google Scholar Y. Shen, Q. Luo and X. Mao, The improved lasalle-type theorems for stochastic functional differential equations, Journal of Mathematical Analysis and Applications, 318 (2006), 134-154. doi: 10.1016/j.jmaa.2005.05.026. Google Scholar F. Wei and K. Wang, The existence and uniqueness of the solution for stochastic functional differential equations with infinite delay, Journal of Mathematical Analysis and Applications, 331 (2007), 516-531. doi: 10.1016/j.jmaa.2006.09.020. Google Scholar F. Wu and S. Hu, The lasalle-type theorem for neutral stochastic functional differential equations with infinite delay, Discrete and Continuous Dynamical Systems, Series A, 32 (2012), 1065-1094. doi: 10.3934/dcds.2012.32.1065. Google Scholar F. Wu, G. Yin and H. Mei, Stochastic functional differential equations with infinite delay: Existence and uniqueness of solutions, solution maps, markov properties, and ergodicity, Journal of Differential Equations, 262 (2017), 1226-1252. doi: 10.1016/j.jde.2016.10.006. Google Scholar
CommonCrawl
Bessel SPDEs and renormalised local times Henri Elad Altman1 & Lorenzo Zambotti ORCID: orcid.org/0000-0002-3028-09931 Probability Theory and Related Fields (2019)Cite this article 83 Accesses In this article, we prove integration by parts formulae (IbPFs) for the laws of Bessel bridges from 0 to 0 over the interval [0, 1] of dimension smaller than 3. As an application, we construct a weak version of a stochastic PDE having the law of a one-dimensional Bessel bridge (i.e. the law of a reflected Brownian bridge) as reversible measure, the dimension 1 being particularly relevant in view of applications to scaling limits of dynamical critical pinning models. We also exploit the IbPFs to conjecture the structure of the stochastic PDEs associated with Bessel bridges of all dimensions smaller than 3. Immediate online access to all issues from 2019. Subscription will auto renew annually. This is the net price. Taxes to be calculated in checkout. Ambrosio, L., Savaré, G., Zambotti, L.: Existence and stability for Fokker–Planck equations with log-concave reference measure. Probab. Theory Relat. Fields 145(3–4), 517–564 (2009) Amdeberhan, T., Espinosa, O., Gonzalez, I., Harrison, M., Moll, V.H., Straub, A.: Ramanujan's master theorem. Ramanujan J. 29(1–3), 103–120 (2012) Bellingeri, C.: An Itô type formula for the additive stochastic heat equation. arXiv preprint arXiv:1803.01744 (2018) Bruned, Y., Hairer, M., Zambotti, L.: Algebraic renormalisation of regularity structures. Invent. Math. 215(3), 1039–1156 (2019) Caputo, P., Martinelli, F., Toninelli, F.: On the approach to equilibrium for a polymer with adsorption and repulsion. Electron. J. Probab. 13, 213–258 (2008) Da Prato, G., Zabczyk, J.: Second Order Partial Differential Equations in Hilbert Spaces, vol. 293. Cambridge University Press, Cambridge (2002) Dalang, R.C., Mueller, C., Zambotti, L.: Hitting properties of parabolic S.P.D.E.'s with reflection. Ann. Probab. 34, 1423–1450 (2006) Deuschel, J.-D., Giacomin, G., Zambotti, L.: Scaling limits of equilibrium wetting models in \((1+1)\)-dimension. Probab. Theory Relat. Fields 132(4), 471–500 (2005) Deuschel, J.-D., Orenshtein, T.: Scaling limit of wetting models in \(1+1\) dimensions pinned to a shrinking strip. Preprint arXiv:1804.02248 (2018) Elad Altman, H.: Bessel SPDEs with general Dirichlet boundary conditions (in preparation) Elad Altman, H.: Bismut–Elworthy–Li Formulae for Bessel Processes. Séminaire de Probabilités XLIX, Lecture Notes in Mathematics, vol. 2215, pp. 183–220. Springer, Cham (2018) Etheridge, A.M., Labbé, C.: Scaling limits of weakly asymmetric interfaces. Commun. Math. Phys. 336(1), 287–336 (2015) Fattler, T., Grothaus, M., Voßhall, R.: Construction and analysis of a sticky reflected distorted Brownian motion. Ann. Inst. Henri Poincaré Probab. Stat. 52(2), 735–762 (2016) Fukushima, M., Oshima, Y., Takeda, M.: Dirichlet Forms and Symmetric Markov Processes, vol. 19. Walter de Gruyter, Berlin (2010) Funaki, T.: Stochastic Interface Models. École d'été de Saint-Flour XXXIII-2003, Lecture Notes in Mathematics, vol. 1869, pp. 103–274. Springer, Berlin (2005) Funaki, T., Ishitani, K.: Integration by parts formulae for Wiener measures on a path space between two curves. Probab. Theory Relat. Fields 137(3–4), 289–321 (2007) Funaki, T., Olla, S.: Fluctuations for \(\nabla \phi \) interface model on a wall. Stoch. Process. Appl. 94(1), 1–27 (2001) Gelfand, I.M., Shilov, G.E.: Generalized Functions, vol. 
1, Academic Press [Harcourt Brace Jovanovich, Publishers], New York-London, 1964 [1977] (Properties and operations, Translated from the Russian by Eugene Saletan) Gorenflo, R., Mainardi, F.: Fractional Calculus: Integral and Differential Equations of Fractional Order. arXiv preprint arXiv:0805.3823 (2008) Grothaus, M., Voßhall, R.: Integration by parts on the law of the modulus of the Brownian bridge. arXiv preprint arXiv:1609.02438 (2016) Grothaus, M., Voßhall, R.: Strong Feller property of sticky reflected distorted Brownian motion. J. Theor. Probab. 31(2), 827–852 (2018) Gubinelli, Massimiliano: Peter Imkeller, and Nicolas Perkowski, Paracontrolled distributions and singular PDEs, Forum Math. Pi 3, e6, 75 (2015) Hairer, M., Mattingly, J.: The strong Feller property for singular stochastic PDEs. Ann. Inst. Henri Poincaré Probab. Stat. 54(3), 1314–1340 (2018) Hairer, M.: A theory of regularity structures. Invent. Math. 198(2), 269–504 (2014) Hida, T., Kuo, H.-H., Potthoff, J., Streit, L.: White Noise, Mathematics and Its Applications, vol. 253. Kluwer Academic Publishers Group, Dordrecht (1993). (An infinite-dimensional calculus) Karatzas, I., Shreve, S.E.: Brownian Motion and Stochastic Calculus. Graduate Texts in Mathematics, vol. 113. Springer, New York (1988) Llavona, J.G.: Approximation of Continuously Differentiable Functions, vol. 130. Elsevier, Amsterdam (1986) Ma, Z.-M., Röckner, M.: Introduction to the Theory of (Non-symmetric) Dirichlet Forms. Springer, Berlin (1992) Mueller, C., Mytnik, L., Perkins, E.: Nonuniqueness for a parabolic SPDE with \((\frac{3}{4}-\epsilon )\)-Hölder diffusion coefficients. Ann. Probab. 42(5), 2032–2112 (2014) Mytnik, L., Perkins, E.: Pathwise uniqueness for stochastic heat equations with Hölder continuous coefficients: the White noise case. Probab. Theory Relat. Fields 149(1–2), 1–96 (2011) Nualart, D.: Malliavin Calculus and Its Applications. American Mathematical Society (AMS), Providence, RI (2009) Nualart, D., Pardoux, E.: White noise driven quasilinear SPDEs with reflection. Probab. Theory Relat. Fields 93(1), 77–89 (1992) Pitman, J., Yor, M.: Sur une décomposition des ponts de Bessel. Functional Analysis in Markov Processes, pp. 276–285. Springer, Berlin (1982) Revuz, D., Yor, M.: Continuous Martingales and Brownian Motion, vol. 293. Springer, Berlin (2013) Rogers, L.C.G., Williams, D.: Diffusions, Markov Processes, and Martingales. Vol. 2, Cambridge Mathematical Library. Cambridge University Press, Cambridge (2000). (Itô calculus, Reprint of the second (1994) edition) Shiga, T., Watanabe, S.: Bessel diffusions as a one-parameter family of diffusion processes. Z. Wahrscheinlichkeitstheorie verwandte Gebiete 27(1), 37–46 (1973) Tsatsoulis, P., Weber, H.: Spectral gap for the stochastic quantization equation on the 2-dimensional torus. Ann. Inst. Henri Poincaré Probab. Stat. 54(3), 1204–1249 (2018) Voßhall, Robert: Sticky reflected diffusion processes in view of stochastic interface models and on general domains. Ph.D. Thesis, Technische Universität Kaiserslautern (2016) Zambotti, L.: A reflected stochastic heat equation as symmetric dynamics with respect to the 3-d Bessel bridge. J. Funct. Anal. 180(1), 195–209 (2001) Zambotti, L.: Integration by parts formulae on convex sets of paths and applications to spdes with reflection. Probab. Theory Relat. Fields 123(4), 579–600 (2002) Zambotti, L.: Integration by parts on \(\delta \)-Bessel bridges, \(\delta > 3\), and related SPDEs. Ann. Probab. 
31(1), 323–348 (2003) Zambotti, L.: Occupation densities for SPDEs with reflection. Ann. Probab. 32(1A), 191–215 (2004) Zambotti, L.: Integration by parts on the law of the reflecting Brownian motion. J. Funct. Anal. 223(1), 147–178 (2005) Zambotti, L.: Itô–Tanaka's formula for stochastic partial differential equations driven by additive space-time White noise. Stoch. Partial Differ. Equ. Appl. VII 245, 337–347 (2006) Zambotti, L.: Random Obstacle Problems, école d'été de Probabilités de Saint-Flour XLV-2015, vol. 2181. Springer, Berlin (2017) The arguments used in Proposition 5.1 below to show quasi-regularity of the form associated with the law of a reflected Brownian bridge were communicated to us by Rongchan Zhu and Xiangchan Zhu, whom we warmly thank. The first author is very grateful to Jean-Dominique Deuschel, Tal Orenshtein and Nicolas Perkowski for their kind invitation to TU Berlin, and for very interesting discussions. We also thank Giuseppe Da Prato for very useful discussion and for his kindness and patience in answering our questions. The authors would finally like to thank the Isaac Newton Institute for Mathematical Sciences for hospitality and support during the programme "Scaling limits, rough paths, quantum field theory" when work on this paper was undertaken: this work was supported by EPSRC Grant Numbers EP/K032208/1 and EP/R014604/1. The second author gratefully acknowledges support by the Institut Universitaire de France and the project of the Agence Nationale de la Recherche ANR-15-CE40-0020-01 grant LSD. Laboratoire de Probabilités Statistique et Modélisation (LPSM), CNRS, Sorbonne Université, Université de Paris, 75005, Paris, France Henri Elad Altman & Lorenzo Zambotti Search for Henri Elad Altman in: Search for Lorenzo Zambotti in: Correspondence to Lorenzo Zambotti. Appendix A. Proofs of two technical results Proof of Proposition 5.1 Since \(D(\Lambda )\) contains all globally Lipschitz functions on H, for all \(f \in {\mathcal {F}} {\mathcal {C}}^{\infty }_{b}(K)\) we have \(f \circ j \in D(\Lambda )\). A simple calculation shows that for any \(f\in {\mathcal {F}} {\mathcal {C}}^{\infty }_{b}(K)\) of the form (5.4) we have $$\begin{aligned} \nabla (f\circ j)(z) = \nabla f (j(z)) \, \text {sgn}(z). \end{aligned}$$ (A.1) Hence, for all \(f,g \in {\mathcal {F}} {\mathcal {C}}^{\infty }_{b}(K)\), we have $$\begin{aligned} \begin{aligned} {\mathcal {E}}(f,g)&= \frac{1}{2} \int \langle \nabla f(x), \nabla g(x) \rangle \, {\mathrm {d}}\nu (x) = \frac{1}{2} \int \langle \nabla f(j(z)), \nabla g(j(z)) \rangle \, {\mathrm {d}}\mu (z) \\&= \frac{1}{2} \int \langle \nabla (f \circ j)(z), \nabla (g \circ j)(z) \rangle \, {\mathrm {d}}\mu (z) = \Lambda (f \circ j, g \circ j), \end{aligned} \end{aligned}$$ where the third equality follows from (A.1). This shows that the bilinear symmetric form \(({\mathcal {E}},{\mathcal {F}} {\mathcal {C}}^{\infty }_{b}(K))\) admits as an extension the image of the Dirichlet form \((\Lambda , D(\Lambda ))\) under the map j. Since \({\mathcal {F}} {\mathcal {C}}^{\infty }_{b}(K)\) is dense in \(L^{2}(\nu )\), this extension is a Dirichlet form. In particular, \(({\mathcal {E}},{\mathcal {F}} {\mathcal {C}}^{\infty }_{b}(K))\) is closable, its closure \(({\mathcal {E}},D ({\mathcal {E}}))\) is a Dirichlet form, and we have the isometry property (5.6). There remains to prove that the Dirichlet form \(({\mathcal {E}},D ({\mathcal {E}}))\) is quasi-regular. 
Since it is the closure of \(({\mathcal {E}},{\mathcal {F}} {\mathcal {C}}^{\infty }_{b}(K))\), it suffices to show that the associated capacity is tight. Since K is separable, we can find a countable dense subset \(\{ y_{k}, \, k \in {\mathbb {N}} \} \subset K\) such that \(y_k \ne 0\) for all \(k \in {\mathbb {N}}\). Let now \(\varphi \in C^{\infty }_{b}({\mathbb {R}})\) be an increasing function such that \(\varphi (t)=t\) for all \(t \in [-1,1]\) and \(\Vert \varphi '\Vert _{\infty } \le 1\). For all \(m \in {\mathbb {N}}\), we define the function \(v_{m} : K \rightarrow {\mathbb {R}}\) by $$\begin{aligned} v_{m}(z) := \varphi (\Vert z-y_{m}\Vert ), \quad z \in K. \end{aligned}$$ Moreover, we set, for all \(n \in {\mathbb {N}}\) $$\begin{aligned} w_{n}(z) := \underset{m \le n}{\inf } v_{m}(z), \quad z \in K. \end{aligned}$$ We claim that \(w_{n} \in D({\mathcal {E}})\), \(n \in {\mathbb {N}}\), and that \(w_{n} \underset{n \rightarrow \infty }{\longrightarrow } 0\), \({\mathcal {E}}\) quasi-uniformly in K. Assuming this claim for the moment, for all \(k \ge 1\) we can find a closed subset \(F_{k}\) of K such that \(\text {Cap} (K {\setminus } F_{k}) < 1/k\), and \(w_{n} \underset{n \rightarrow \infty }{\longrightarrow } 0\) uniformly on \(F_{k}\). Hence, for all \(\epsilon >0\), we can find \(n \in {\mathbb {N}}\) such that \(w_{n} < \epsilon \) on \(F_{k}\). Therefore $$\begin{aligned} F_{k} \subset \underset{m \le n}{\bigcup } B(y_{m}, \epsilon ) \end{aligned}$$ where B(y, r) is the open ball in K centered at \(y \in K\) with radius \(r >0\). This shows that \(F_{k}\) is totally bounded. Since it is, moreover, complete as a closed subspace of a complete metric space, it is compact, and the tightness of \(\text {Cap}\) follows. We now justify our claim. For all \(i \in {\mathbb {N}}\), we set \(l_i := \Vert y_i\Vert ^{-1} \, y_i\). Then for all \(i \ge 1\), \(l_{i} \in K\), \(\Vert l_{i}\Vert = 1\) and, for all \(z \in K\) $$\begin{aligned} \Vert z\Vert = \underset{i \ge 0}{\sup } \, \langle l_{i}, z \rangle . \end{aligned}$$ Let \(m \in {\mathbb {N}}\) be fixed. For all \(i \ge 0\), let \(u_{i}(z) := \underset{j \le i}{\sup } \, \, \varphi ( \, \langle l_{j}, z- y_{m} \rangle \, )\), \(z \in K\). We have \(u_{i} \in D({\mathcal {E}})\), and, for \(\nu \) - a.e. \(z \in K\) $$\begin{aligned} \sum _{k=1}^{\infty } \frac{\partial u_{i}}{\partial e_{k}} (z) ^{2} \le \underset{j \le i}{\sup } \left( \sum _{k=1}^{\infty } \varphi '(\langle l_{j}, z - y_{m} \rangle )^{2} \, \langle l_{j}, e_{k} \rangle ^{2} \right) \le 1, \end{aligned}$$ whence \({\mathcal {E}}(u_{i}, u_{i})\le 1\). By the definition of \(v_{m}\), as \(i \rightarrow \infty \), \(u_{i} \uparrow v_{m}\) on K, hence in \(L^{2}(K, \nu )\). By [28, I.2.12], we deduce that \(v_{m} \in D({\mathcal {E}})\), and that \( {\mathcal {E}}(v_{m}, v_{m}) \le 1. \) Therefore, for all \(n \in {\mathbb {N}}\), \(w_{n} \in D({\mathcal {E}})\), and \( {\mathcal {E}}(w_{n}, w_{n}) \le 1. \) But, since \(\{ y_{k}, \, k \in {\mathbb {N}} \}\) is dense in K, as \(n \rightarrow \infty \), \(w_{n} \downarrow 0\) on K. Hence \(w_{n} \underset{n \rightarrow \infty }{\longrightarrow } 0\) in \(L^{2}(K, \nu )\). This and the previous bound imply, by [28, I.2.12], that the Cesàro means of some subsequence of \((w_{n})_{n \ge 0}\) converge to 0 in \(D({\mathcal {E}})\). By [28, III.3.5], some subsequence thereof converges \({\mathcal {E}}\) quasi-uniformly to 0. 
But, since \((w_{n})_{n \ge 0}\) is non-increasing, we deduce that it converges \({\mathcal {E}}\)-quasi-uniformly to 0. The claimed quasi-regularity follows. There finally remains to check that \(({\mathcal {E}}, D({\mathcal {E}}))\) is local in the sense of Definition [28, V.1.1]. Let \(u,v \in D({\mathcal {E}})\) satisfying \(\text {supp}(u) \cap \text {supp}(v) = \emptyset \). Then, \(u \circ j\) and \(v \circ j\) are two elements of \(D(\Lambda )=W^{1,2}(\mu )\) with disjoint supports, and, recalling (5.6), we have $$\begin{aligned} {\mathcal {E}}(u,v) = \Lambda (u \circ j,v \circ j) = \frac{1}{2} \int _{H} \nabla (u \circ j) \cdot \nabla (v \circ j) \, {\mathrm {d}}\mu =0. \end{aligned}$$ The claim follows. \(\square \) Proof of Lemma 5.3 Recall that \(D({\mathcal {E}})\) is the closure under the bilinear form \({\mathcal {E}}_{1}\) of the space \({\mathcal {F}} {\mathcal {C}}^{\infty }_{b}(K)\) of functionals of the form \(F = \Phi \bigr |_{K}\), where \(\Phi \in {\mathcal {F}} {\mathcal {C}}^{\infty }_{b}(H)\). Therefore, to prove the claim, it suffices to show that for any functional \(\Phi \in {\mathcal {F}} {\mathcal {C}}^{\infty }_{b}(H)\) and all \(\epsilon >0\), there exists \(\Psi \in {\mathscr {S}}\) such that \({\mathcal {E}}_1(\Phi -\Psi ,\Phi -\Psi ) < \epsilon \). Let \(\Phi \in {\mathcal {F}} {\mathcal {C}}^{\infty }_{b}(H)\). We set for all \(\epsilon > 0\) $$\begin{aligned} \Phi _{\epsilon }(\zeta ) := \Phi (\sqrt{\zeta ^{2} + \epsilon }), \quad \zeta \in H. \end{aligned}$$ A simple calculation shows that \(\Phi _{\epsilon } \underset{\epsilon \rightarrow 0}{\longrightarrow } \Phi \) and \(\nabla \Phi _{\epsilon } \underset{\epsilon \rightarrow 0}{\longrightarrow } \nabla \Phi \) pointwise, with uniform bounds \(\Vert \Phi _{\epsilon }\Vert _{\infty } \le \Vert \Phi \Vert _{\infty }\) and \( \Vert \nabla \Phi _{\epsilon } \Vert _{\infty } \le \Vert \nabla \Phi \Vert _{\infty }\). Hence, by dominated convergence, \({\mathcal {E}}_1 (\Phi _{\epsilon } - \Phi , \Phi _{\epsilon } - \Phi ) \underset{\epsilon \rightarrow 0}{\longrightarrow } 0\). Then, introducing for all \(d \ge 1\)\((\zeta ^{d}_{i})_{1 \le i \le d}\) the orthonormal family in \(L^{2}(0,1)\) given by $$\begin{aligned} \zeta ^{d}_{i} := \sqrt{d} \ {\mathbf {1}}_{[\frac{i-1}{d}, \frac{i}{d}[}, \quad i = 1, \ldots , d, \end{aligned}$$ and setting $$\begin{aligned} \Phi ^{d}_{\epsilon }(\zeta ) := \Phi _{\epsilon } \left( \left( \sum _{i=1}^{d} \langle \zeta _{d,i}, \zeta ^{2} \rangle \right) ^{\frac{1}{2}} \right) = \Phi \left( \left( \sum _{i=1}^{d} \langle \zeta _{d,i}, \zeta ^{2} \rangle + \epsilon \right) ^{\frac{1}{2}} \right) , \quad \zeta \in H, \end{aligned}$$ again we obtain the convergence \({\mathcal {E}}_1(\Phi ^{d}_{\epsilon } - \Phi _{\epsilon }, \Phi ^{d}_{\epsilon } - \Phi _{\epsilon }) \underset{d \rightarrow \infty }{\longrightarrow } 0\). There remains to show that any fixed functional of the form $$\begin{aligned} \Phi (\zeta ) = f\left( \langle \zeta _{1}, \zeta ^{2} \rangle , \ldots , \langle \zeta _{d}, \zeta ^{2} \rangle \right) , \quad \zeta \in H \end{aligned}$$ with \(d \ge 1\), \(f \in C^{1}_{b}({\mathbb {R}}_{+}^{d})\), and \((\zeta _{i})_{i=1, \ldots , d}\) a family of elements of K, can be approximated by elements of \({\mathscr {S}}\). Again by dominated convergence, we can suppose that f has compact support in \({\mathbb {R}}_{+}^{d}\). 
We define \(g\in C^{1}_{b}([0,1]^{d})\), $$\begin{aligned} g(y) := f(-\ln (y_{1}), \ldots , -\ln (y_{d})), \quad y \in \,]0,1]^{d}, \end{aligned}$$ and \(g(y):=0\) if \(y_i=0\) for any \(i=1,\ldots ,d\). By a differentiable version of the Weierstrass Approximation Theorem (see Theorem 1.1.2 in [27]), there exists a sequence \((p_{k})_{k \ge 1}\) of polynomial functions converging to g for the \(C^{1}\) topology on \([0,1]^{d}\). Defining for all \(k \ge 1\) the function \(f_{k}: {\mathbb {R}}_{+}^{d} \rightarrow {\mathbb {R}}\) by $$\begin{aligned} f_{k}(x) = p_{k}(e^{-x_{1}}, \ldots , e^{-x_{d}}), \quad x \in {\mathbb {R}}_{+}^{d}, \end{aligned}$$ we define \(\Phi _{k} \in {\mathscr {S}}\) by $$\begin{aligned} \Phi _{k} (\zeta ) = f_{k} \left( \langle \zeta _{1}, \zeta ^{2} \rangle , \ldots , \langle \zeta _{d}, \zeta ^{2} \rangle \right) , \quad \zeta \in H. \end{aligned}$$ Since \(p_{k} \underset{k \rightarrow \infty }{\longrightarrow } g\) for the \(C^{1}\) topology on \([0,1]^{d}\), \(f_{k} \underset{k \rightarrow \infty }{\longrightarrow } f\) uniformly on \({\mathbb {R}}_{+}^{d}\) together with its first order derivatives. Hence, it follows that \(\Phi _{k} \underset{k \rightarrow \infty }{\longrightarrow } \Phi \) pointwise on K together with its gradient. It also follows that there is some \(C>0\) such that for all \(k \ge 1\) $$\begin{aligned} \forall \zeta \in K, \quad |\Phi _{k}(\zeta )|^{2} + \Vert \nabla \Phi _{k}(\zeta )\Vert ^{2} \le C(1+ \Vert \zeta \Vert ^{2}). \end{aligned}$$ Since the quantity in the right-hand side is \(\nu \) integrable in \(\zeta \), it follows by dominated convergence that \({\mathcal {E}}_1(\Phi _{k}-\Phi , \Phi _{k}-\Phi ) \underset{k \rightarrow \infty }{\longrightarrow } 0\). This yields the claim. \(\square \) Elad Altman, H., Zambotti, L. Bessel SPDEs and renormalised local times. Probab. Theory Relat. Fields (2019) doi:10.1007/s00440-019-00926-0 DOI: https://doi.org/10.1007/s00440-019-00926-0 Mathematics Subject Classification 60J55
CommonCrawl
Asymptotic behavior of solutions to incompressible electron inertial Hall-MHD system in $ \mathbb{R}^3 $ On a class of linearly coupled systems on $ \mathbb{R}^N $ involving asymptotically linear terms Analysis of Boundary-Domain Integral Equations to the mixed BVP for a compressible stokes system with variable viscosity Carlos Fresneda-Portillo 1,, and Sergey E. Mikhailov 2, School of Engineering, Computing and Mathematics, Wheatley Campus, Oxford Brookes University, OX33 1HX, Wheatley, UK Department of Mathematics, Brunel University London, UB8 3PH, Uxbridge, UK Received October 2018 Revised January 2019 Published May 2019 Fund Project: This research was supported by the grants EP/H020497/1, EP/M013545/1, and 1636273 from the EPSRC The mixed boundary value problem for a compressible Stokes system of partial differential equations in a bounded domain is reduced to two different systems of segregated direct Boundary-Domain Integral Equations (BDIEs) expressed in terms of surface and volume parametrix-based potential type operators. Equivalence of the BDIE systems to the mixed BVP and invertibility of the matrix operators associated with the BDIE systems are proved in appropriate Sobolev spaces. Keywords: Compressible Stokes system, variable viscosity, boundary-domain integral equations, parametrix, boundary value problem. Mathematics Subject Classification: Primary: 35J57, 45F15; Secondary: 45P05. Citation: Carlos Fresneda-Portillo, Sergey E. Mikhailov. Analysis of Boundary-Domain Integral Equations to the mixed BVP for a compressible stokes system with variable viscosity. Communications on Pure & Applied Analysis, 2019, 18 (6) : 3059-3088. doi: 10.3934/cpaa.2019137 O. Chkadua, S. E. Mikhailov and D. Natroshvili, Analysis of direct boundary-domain integral equations for a mixed BVP with variable coefficient, Ⅰ: Equivalence and invertibility, J. Integral Equations Appl., 21 (2009), 499-543. doi: 10.1216/JIE-2009-21-4-499. Google Scholar O. Chkadua, S. E. Mikhailov and D. Natroshvili, Analysis of some localized boundary-domain integral equations, J. Integral Equations Appl., 21 (2009), 405-445. doi: 10.1216/JIE-2009-21-3-407. Google Scholar M. Costabel, Boundary integral operators on Lipschitz domains: Elementary results, SIAM J. Math. Anal., 19 (1988), 613-626. doi: 10.1137/0519043. Google Scholar G. Eskin, Boundary Value Problems for Elliptic Pseudodifferential Equations, Transl. of Mathem. Monographs, Amer. Math. Soc., vol. 52: Providence, Rhode Island, 1981. Google Scholar R. Gutt, M. Kohr, S. E. Mikhailov and W. L. Wendland, On the mixed problem for the semilinear Darcy-Forchheimer-Brinkman PDE system in Besov spaces on creased Lipschitz domains, Math. Methods in Appl. Sci., 40 (2017), 7780-7829. doi: 10.1002/mma.4562. Google Scholar R. Grzhibovskis, S. Mikhailov and S. Rjasanow, Numerics of boundary-domain integral and integro-differential equations for BVP with variable coefficient in 3D, Computational Mechanics, 51 (2013), 495-503. doi: 10.1007/s00466-012-0777-8. Google Scholar D. Hilbert, Grundzüge einer allgemeinen Theorie der linearen Integralgleichungen, Teubner, Leipzig-Berlin, 2nd edition, 1924. Google Scholar G. C. Hsiao and W. L. Wendland, Boundary Integral Equations, Springer, Berlin, 2008. doi: 10.1007/978-3-540-68545-6. Google Scholar M. Kohr and W. L. Wendland, Variational boundary integral equations for the Stokes system, Applicable Anal., 85 (2006), 1343-1372. doi: 10.1080/00036810600963961. Google Scholar O. A. 
On a new and homogeneous metallicity scale for Galactic classical Cepheids - I. Physical parameters (1805.00727) B. Proxauf, R. da Silva, V.V. Kovtyukh, G. Bono, L. Inno, B. Lemasle, J. Pritchard, N. Przybilla, J. Storm, M.A. Urbaneja, E. Valenti, M. Bergemann, R. Buonanno, V. D'Orazi, M. Fabrizio, I. Ferraro, G. Fiorentino, P. Francois, G. Iannicola, C.D. Laney, R.-P. Kudritzki, N. Matsunaga, M. Nonino, F. Primas, M. Romaniello, F. Thevenin May 2, 2018 astro-ph.SR We gathered more than 1130 high-resolution optical spectra for more than 250 Galactic classical Cepheids. The spectra were collected with different optical spectrographs: UVES at VLT, HARPS at 3.6m, FEROS at 2.2m MPG/ESO, and STELLA. To improve the effective temperature estimates, we present more than 150 new line depth ratio (LDR) calibrations that together with similar calibrations already available in the literature allowed us to cover a broad range in wavelength (between 5348 and 8427 angstrom) and in effective temperatures (between 3500 and 7700 K). This gives us the unique opportunity to cover both the hottest and coolest phases along the Cepheid pulsation cycle and to limit the intrinsic error on individual measurements at the level of ~100 K. Thanks to the high signal-to-noise ratio of individual spectra, we identified and measured hundreds of neutral and ionized lines of heavy elements, and in turn, have the opportunity to trace the variation of both surface gravity and microturbulent velocity along the pulsation cycle. The accuracy of the physical parameters and the number of Fe I (more than one hundred) and Fe II (more than ten) lines measured allowed us to estimate mean iron abundances with a precision better than 0.1 dex. Here we focus on 14 calibrating Cepheids for which the current spectra cover either the entire or a significant portion of the pulsation cycle. The current estimates of the variation of the physical parameters along the pulsation cycle and of the iron abundances agree quite well with similar estimates available in the literature. Independent homogeneous estimates of both physical parameters and metal abundances based on different approaches that can constrain possible systematics are highly encouraged. HARPS-N high spectral resolution observations of Cepheids I. The Baade-Wesselink projection factor of δ Cep revisited (1701.01589) N. Nardetto, E. Poretti, M. Rainer, A. Fokin, P. Mathias, R. I. Anderson, A. Gallenne, W. Gieren, D. Graczyk, P. Kervella, A. Mérand, D. Mourard, H. Neilson, G. Pietrzynski, B. Pilecki, J. Storm Jan. 6, 2017 astro-ph.SR The projection factor p is the key quantity used in the Baade-Wesselink (BW) method for distance determination; it converts radial velocities into pulsation velocities. Several methods are used to determine p, such as geometrical and hydrodynamical models or the inverse BW approach when the distance is known. We analyze new HARPS-N spectra of delta Cep to measure its cycle-averaged atmospheric velocity gradient in order to better constrain the projection factor. We first apply the inverse BW method to derive p directly from observations.
The projection factor can be divided into three subconcepts: (1) a geometrical effect (p0); (2) the velocity gradient within the atmosphere (fgrad); and (3) the relative motion of the optical pulsating photosphere with respect to the corresponding mass elements (fo-g). We then measure the fgrad value of delta Cep for the first time. When the HARPS-N mean cross-correlated line-profiles are fitted with a Gaussian profile, the projection factor is pcc-g = 1.239 +/- 0.034(stat) +/- 0.023(syst). When we consider the different amplitudes of the radial velocity curves that are associated with 17 selected spectral lines, we measure projection factors ranging from 1.273 to 1.329. We find a relation between fgrad and the line depth measured when the Cepheid is at minimum radius. This relation is consistent with that obtained from our best hydrodynamical model of delta Cep and with our projection factor decomposition. Using the observational values of p and fgrad found for the 17 spectral lines, we derive a semi-theoretical value of fo-g. We alternatively obtain fo-g = 0.975+/-0.002 or 1.006+/-0.002 assuming models using radiative transfer in plane-parallel or spherically symmetric geometries, respectively. The new HARPS-N observations of delta Cep are consistent with our decomposition of the projection factor. VEGA/CHARA interferometric observations of Cepheids. I. A resolved structure around the prototype classical Cepheid delta Cep in the visible spectral range (1609.07268) N. Nardetto, A. Mérand, D. Mourard, J. Storm, W. Gieren, P. Fouqué, A. Gallenne, D. Graczyk, P. Kervella, H. Neilson, G. Pietrzynski, B. Pilecki, J. Breitfelder, P. Berio, M. Challouf, J.-M. Clausse, R. Ligi, P. Mathias, A. Meilland, K. Perraut, E. Poretti, M. Rainer, A. Spang, P.Stee, I. Tallon-Bosc, T. ten Brummelaar Sept. 23, 2016 astro-ph.SR The B-W method is used to determine the distance of Cepheids and consists in combining the angular size variations of the star, as derived from infrared surface-brightness relations or interferometry, with its linear size variation, as deduced from visible spectroscopy using the projection factor. While many Cepheids have been intensively observed by infrared beam combiners, only a few have been observed in the visible. This paper is part of a project to observe Cepheids in the visible with interferometry as a counterpart to infrared observations already in hand. Observations of delta Cep itself were secured with the VEGA/CHARA instrument over the full pulsation cycle of the star. These visible interferometric data are consistent in first approximation with a quasi-hydrostatic model of pulsation surrounded by a static circumstellar environment (CSE) with a size of theta_cse=8.9 +/- 3.0 mas and a relative flux contribution of f_cse=0.07+/-0.01. A model of visible nebula (a background source filling the field of view of the interferometer) with the same relative flux contribution is also consistent with our data at small spatial frequencies. However, in both cases, we find discrepancies in the squared visibilities at high spatial frequencies (maximum 2sigma) with two different regimes over the pulsation cycle of the star, phi=0.0-0.8 and phi=0.8-1.0. We provide several hypotheses to explain these discrepancies, but more observations and theoretical investigations are necessary before a firm conclusion can be drawn. For the first time we have been able to detect in the visible domain a resolved structure around delta~Cep. 
We have also shown that a simple model cannot explain the observations, and more work will be necessary in the future, both on observations and modelling. Before the Bar: Kinematic Detection of A Spheroidal Metal-Poor Bulge Component (1603.06578) Andrea Kunder, R.M. Rich, J. Storm, D.M. Nataf, R. De Propris, A.R. Walker, G. Bono, C. I. Johnson, J. Shen, Z.Y. Li March 21, 2016 astro-ph.GA, astro-ph.SR We present 947 radial velocities of RR Lyrae variable stars in four fields located toward the Galactic bulge, observed within the data from the ongoing Bulge RR Lyrae Radial Velocity Assay (BRAVA-RR). We show that these RR Lyrae stars exhibit hot kinematics and null or negligible rotation and are therefore members of a separate population from the bar/pseudobulge that currently dominates the mass and luminosity of the inner Galaxy. Our RR Lyrae stars predate these structures, and have metallicities, kinematics, and spatial distribution that are consistent with a "classical" bulge, although we cannot yet completely rule out the possibility that they are the metal-poor tail of a more metal rich ([Fe/H] ~ -1 dex) halo-bulge population. The complete catalog of radial velocities for the BRAVA-RR stars is also published electronically. The Araucaria Project. Precise physical parameters of the eclipsing binary IO Aqr (1508.03188) D. Graczyk, P. F. L. Maxted, G. Pietrzynski, B. Pilecki, P. Konorski, W. Gieren, J. Storm, A. Gallenne, R. I. Anderson, K. Suchomska, R. G. West, D. Pollacco, F. Faedi, G. Pojmanski Aug. 13, 2015 astro-ph.SR Our aim is to precisely measure the physical parameters of the eclipsing binary IO Aqr and derive a distance to this system by applying a surface brightness - colour relation. Our motivation is to combine these parameters with future precise distance determinations from the GAIA space mission to derive precise surface brightness - colour relations for stars. We extensively used photometry from the Super-WASP and ASAS projects and precise radial velocities obtained from HARPS and CORALIE high-resolution spectra. We analysed light curves with the code JKTEBOP and radial velocity curves with the Wilson-Devinney program. We found that IO Aqr is a hierarchical triple system consisting of a double-lined short-period (P=2.37 d) spectroscopic binary and a low-luminosity and low-mass companion star orbiting the binary with a period of ~25000 d (~70 yr) on a very eccentric orbit. We derive high-precision (better than 1%) physical parameters of the inner binary, which is composed of two slightly evolved main-sequence stars (F5 V-IV + F6 V-IV) with masses of M1=1.569+/-0.004 and M2=1.655+/-0.004 M_sun and radii R1=2.19+/-0.02 and R2=2.49+/-0.02 R_sun. The companion is most probably a late K-type dwarf with mass ~0.6 M_sun. The distance to the system resulting from applying a (V-K) surface brightness - colour relation is 255+/-6(stat.)+/-6(sys.) pc, which agrees well with the Hipparcos value of 270+/-73 pc, but is more precise by a factor of eight. PEPSI: The high-resolution echelle spectrograph and polarimeter for the Large Binocular Telescope (1505.06492) K.G. Strassmeier, I. Ilyin, A. Järvinen, M. Weber, M. Woche, S.I. Barnes, S.-M. Bauer, E. Beckert, W. Bittner, R. Bredthauer, T.A. Carroll, C. Denker, F. Dionies, I. DiVarano, D. Döscher, T. Fechner, D. Feuerstein, T. Granzer, T. Hahn, G. Harnisch, A. Hofmann, M. Lesser, J. Paschke, S. Pankratow, V. Plank, D. Plüschke, E. Popow, D. Sablowski, J. 
Storm May 24, 2015 astro-ph.SR, astro-ph.IM PEPSI is the bench-mounted, two-arm, fibre-fed and stabilized Potsdam Echelle Polarimetric and Spectroscopic Instrument for the 2x8.4 m Large Binocular Telescope (LBT). Three spectral resolutions of either 43 000, 120 000 or 270 000 can cover the entire optical/red wavelength range from 383 to 907 nm in three exposures. Two 10.3kx10.3k CCDs with 9-{\mu}m pixels and peak quantum efficiencies of 96 % record a total of 92 echelle orders. We introduce a new variant of a wave-guide image slicer with 3, 5, and 7 slices and peak efficiencies between 96 %. A total of six cross dispersers cover the six wavelength settings of the spectrograph, two of them always simultaneously. These are made of a VPH-grating sandwiched by two prisms. The peak efficiency of the system, including the telescope, is 15% at 650 nm, and still 11% and 10% at 390 nm and 900 nm, respectively. In combination with the 110 m2 light-collecting capability of the LBT, we expect a limiting magnitude of 20th mag in V in the low-resolution mode. The R=120 000 mode can also be used with two, dual-beam Stokes IQUV polarimeters. The 270 000-mode is made possible with the 7-slice image slicer and a 100- {\mu}m fibre through a projected sky aperture of 0.74", comparable to the median seeing of the LBT site. The 43000-mode with 12-pixel sampling per resolution element is our bad seeing or faint-object mode. Any of the three resolution modes can either be used with sky fibers for simultaneous sky exposures or with light from a stabilized Fabry-Perot etalon for ultra-precise radial velocities. CCD-image processing is performed with the dedicated data-reduction and analysis package PEPSI-S4S. A solar feed makes use of PEPSI during day time and a 500-m feed from the 1.8 m VATT can be used when the LBT is busy otherwise. In this paper, we present the basic instrument design, its realization, and its characteristics. The Araucaria Project. OGLE-LMC-CEP-1718: An exotic eclipsing binary system composed of two classical overtone Cepheids in a 413-day orbit (1403.3617) W. Gieren, B. Pilecki, G. Pietrzynski, D. Graczyk, I.B. Thompson, I. Soszynski, P. Konorski, R. Smolec, A. Udalski, N. Nardetto, G. Bono, P.G. Prada Moroni, J. Storm, A. Gallenne March 14, 2014 astro-ph.SR We have obtained extensive high-quality spectroscopic observations of the OGLE-LMC-CEP-1718 eclipsing binary system in the Large Magellanic Cloud which Soszynski et al. (2008) had identified as a candidate system for containing two classical Cepheids in orbit. Our spectroscopic data clearly demonstrate binary motion of the Cepheids in a 413-day eccentric orbit, rendering this eclipsing binary system the first ever known to consist of two classical Cepheid variables. After disentangling the four different radial velocity variations in the system we present the orbital solution and the individual pulsational radial velocity curves of the Cepheids. We show that both Cepheids are extremely likely to be first overtone pulsators and determine their respective dynamical masses, which turn out to be equal to within 1.5 %. Since the secondary eclipse is not observed in the orbital light curve we cannot derive the individual radii of the Cepheids, but the sum of their radii derived from the photometry is consistent with overtone pulsation for both variables. 
The existence of two equal-mass Cepheids in a binary system having different pulsation periods (1.96 and 2.48 days, respectively) may pose an interesting challenge to stellar evolution and pulsation theories, and a more detailed study of this system using additional datasets should yield deeper insight about the physics of stellar evolution of Cepheid variables. Future analysis of the system using additional near-infrared photometry might also lead to a better understanding of the systematic uncertainties in current Baade-Wesselink techniques of distance determinations to Cepheid variables. The Araucaria Project : the Baade-Wesselink projection factor of pulsating stars (1309.4886) N. Nardetto, J. Storm, W. Gieren, G. Pietrzynski, E. Poretti The projection factor used in the Baade-Wesselink methods of determining the distance of Cepheids makes the link between the stellar physics and the cosmological distance scale. A coherent picture of this physical quantity is now provided based on several approaches. We present the lastest news on the expected projection factor for different kinds of pulsating stars in the Hertzsprung-Russell diagram. Physical parameters and the projection factor of the classical Cepheid in the binary system OGLE-LMC-CEP-0227 (1308.5023) B. Pilecki, D. Graczyk, G. Pietrzyński, W.Gieren, I. B. Thompson, W. L. Freedman, V. Scowcroft, B. F. Madore, A. Udalski, I. Soszyński, P. Konorski, R. Smolec, N. Nardetto, G. Bono, P. G. Prada Moroni, J. Storm, A. Gallenne A novel method of analysis of double-lined eclipsing binaries containing a radially pulsating star is presented. The combined pulsating-eclipsing light curve is built up from a purely eclipsing light curve grid created using an existing modeling tool. For every pulsation phase the instantaneous radius and surface brightness are taken into account, being calculated from the disentangled radial velocity curve of the pulsating star and from its out-of-eclipse pulsational light curve and the light ratio of the components, respectively. The best model is found using the Markov Chain Monte Carlo method. The method is applied to the eclipsing binary Cepheid OGLE-LMC-CEP-0227 (P_puls = 3.80 d, P_orb = 309 d). We analyze a set of new spectroscopic and photometric observations for this binary, simultaneously fitting OGLE V-band, I-band and Spitzer 3.6 {\mu}m photometry. We derive a set of fundamental parameters of the system significantly improving the precision comparing to the previous results obtained by our group. The Cepheid mass and radius are M_1 = 4.165 +/- 0.032 M_solar and R_1 = 34.92 +/- 0.34 R_solar, respectively. For the first time a direct, geometrical and distance-independent determination of the Cepheid projection factor is presented. The value p = 1.21 +/- 0.03(stat.) +/- 0.04(syst.) is consistent with theoretical expectations for a short period Cepheid and interferometric measurements for {\delta} Cep. We also find a very high value of the optical limb darkening coefficients for the Cepheid component, in strong disagreement with theoretical predictions for static atmospheres at a given surface temperature and gravity. An eclipsing binary distance to the Large Magellanic Cloud accurate to 2 per cent (1303.2063) G. Pietrzyński, D. Graczyk, W. Gieren, I.B. Thompson, B. Pilecki, A. Udalski, I. Soszyński, S. Kozłowski, P. Konorski, K. Suchomska, G. Bono, P. G. Prada Moroni, S. Villanova, N. Nardetto, F. Bresolin, R.P. Kudritzki, J. Storm, A. Gallenne, R. Smolec, D. Minniti, M. Kubiak, M. Szymański, R. Poleski, Ł. 
Wyrzykowski, K. Ulaczyk, P. Pietrukowicz, M. Górski, P. Karczmarek March 8, 2013 astro-ph.CO, astro-ph.GA In the era of precision cosmology it is essential to determine the Hubble Constant with an accuracy of 3% or better. Currently, its uncertainty is dominated by the uncertainty in the distance to the Large Magellanic Cloud (LMC) which as the second nearest galaxy serves as the best anchor point of the cosmic distance scale. Observations of eclipsing binaries offer a unique opportunity to precisely and accurately measure stellar parameters and distances. The eclipsing binary method was previously applied to the LMC but the accuracy of the distance results was hampered by the need to model the bright, early-type systems used in these studies. Here, we present distance determinations to eight long-period, late- type eclipsing systems in the LMC composed of cool giant stars. For such systems we can accurately measure both the linear and angular sizes of their components and avoid the most important problems related to the hot early-type systems. Our LMC distance derived from these systems is demonstrably accurate to 2.2 % (49.97 +/- 0.19 (statistical) +/- 1.11 (systematic) kpc) providing a firm base for a 3 % determination of the Hubble Constant, with prospects for improvement to 2 % in the future. RR-Lyrae-type pulsations from a 0.26-solar-mass star in a binary system (1204.1872) G. Pietrzynski, I. B. Thompson, W. Gieren, D. Graczyk, K. Stepien, G. Bono, P. G. Prada Moroni, B. Pilecki, A. Udalski, I. Soszynski, G. Preston, N. Nardetto, A. McWilliam, I. Roederer, M. Gorski, P. Konorski, J. Storm April 9, 2012 astro-ph.SR RR Lyrae pulsating stars have been extensively used as tracers of old stellar populations for the purpose of determining the ages of galaxies, and as tools to measure distances to nearby galaxies. There was accordingly considerable interest when the RR Lyr star OGLE-BLG-RRLYR-02792 was found to be a member in an eclipsing binary system4, as the mass of the pulsator (hitherto constrained only by models) could be unambiguously determined. Here we report that RRLYR-02792 has a mass of 0.26 M_sun and therefore cannot be a classical RR Lyrae star. Through models we find that its properties are best explained by the evolution of a close binary system that started with 1.4 M_sun and 0.8 M_sun stars orbiting each other with an initial period of 2.9 days. Mass exchange over 5.4 Gyr produced the observed system, which is now in a very short-lived phase where the physical properties of the pulsator happen to place it in the same instability strip of the H-R diagram occupied by RR Lyrae stars. We estimate that samples of RR Lyr stars may contain a 0.2 percent contamination with systems similar to this one, implying that distances measured with RR Lyrae stars should not be significantly affected by these binary interlopers. CORS Baade-Wesselink distance to the LMC NGC 1866 blue populous cluster (1201.3478) R. Molinaro, V. Ripepi, M. Marconi, I. Musella, E. Brocato, A. Mucciarelli, P. B. Stetson, J. Storm, A. R. Walker Jan. 17, 2012 astro-ph.CO We used Optical, Near Infrared photometry and radial velocity data for a sample of 11 Cepheids belonging to the young LMC blue populous cluster NGC 1866 to estimate their radii and distances on the basis of the CORS Baade-Wesselink method. 
This technique, based on an accurate calibration of the surface brightness as a function of (U-B), (V-K) colors, allows us to estimate, simultaneously, the linear radius and the angular diameter of Cepheid variables, and consequently to derive their distance. A rigorous error estimate on radius and distances was derived by using Monte Carlo simulations. Our analysis gives a distance modulus for NGC 1866 of 18.51+/-0.03 mag, which is in agreement with several independent results. The Baade-Wesselink p-factor applicable to LMC Cepheids (1109.6763) N. Nardetto, A. Fokin, P. Fouqué, J. Storm, W. Gieren, G. Pietrzynski, D. Mourard, P. Kervella Sept. 30, 2011 astro-ph.CO, astro-ph.SR Context. Recent observations of LMC Cepheids bring new constraints on the slope of the period-projection factor relation (hereafter Pp relation) that is currently used in the Baade-Wesselink (hereafter BW) method of distance determination. The discrepancy between observations and theoretical analysis is particularly significant for short period Cepheids Aims. We investigate three physical effects that might possibly explain this discrepancy: (1) the spectroscopic S/N that is systematically lower for LMC Cepheids (around 10) compared to Galactic ones (up to 300), (2) the impact of the metallicity on the dynamical structure of LMC Cepheids, and (3) the combination of infrared photometry/interferometry with optical spectroscopy. Methods. To study the S/N we use a very simple toy model of Cepheids. The impact of metallicity on the projection factor is based on the hydrodynamical model of delta Cep already described in previous studies. This model is also used to derive the position of the optical versus infrared photospheric layers. Results. We find no significant effect of S/N, metallicity, and optical-versus-infrared observations on the Pp relation. Conclusions. The Pp relation of Cepheids in the LMC does not differ from the Galactic relation. This allows its universal application to determine distances to extragalactic Cepheids via BW analysis. The Araucaria Project. Accurate determination of the dynamical mass of the classical Cepheid in the eclipsing system OGLE-LMC-CEP-1812 (1109.5414) G. Pietrzynski, I. Thompson, D. Graczyk., W. Gieren, B. Pilecki, A. Udalski, I. Soszynski, G. Bono, P. Konorski, N. Nardetto, J. Storm We have analyzed the double-lined eclipsing binary system OGLE-LMC-CEP-1812 in the LMC and demonstrate that it contains a classical fundamental mode Cepheid pulsating with a period of 1.31 days. The secondary star is a stable giant. We derive the dynamical masses for both stars with an accuracy of 1.5%, making the Cepheid in this system the second classical Cepheid with a very accurate dynamical mass determination, following the OGLE-LMC-CEP-0227 system studied by Pietrzynski et al. (2010). The measured dynamical mass agrees very well with that predicted by pulsation models. We also derive the radii of both components and accurate orbital parameters for the binary system. This new, very accurate dynamical mass for a classical Cepheid will greatly contribute to the solution of the Cepheid mass discrepancy problem, and to our understanding of the structure and evolution of classical Cepheids. Calibrating the Cepheid Period-Luminosity relation from the infrared surface brightness technique II. The effect of metallicity, and the distance to the LMC (1109.2016) J. Storm, I. Soszynski Universidad de Concepcion, Warsaw University Observatory, Observatoire de Geneve) Sept. 
9, 2011 astro-ph.CO, astro-ph.SR The extragalactic distance scale builds directly on the Cepheid Period-Luminosity (PL) relation as delineated by the sample of Cepheids in the Large Magellanic Cloud (LMC). However, the LMC is a dwarf irregular galaxy, quite different from the massive spiral galaxies used for calibrating the extragalactic distance scale. Recent investigations suggest that not only the zero-point but also the slope of the Milky Way PL relation differ significantly from that of the LMC, casting doubts on the universality of the Cepheid PL relation. We want to make a differential comparison of the PL relations in the two galaxies by delineating the PL relations using the same method, the infrared surface brightness method (IRSB), and the same precepts. The IRSB method is a Baade-Wesselink type method to determine individual distances to Cepheids. We apply a newly revised calibration of the method as described in an accompanying paper (Paper I) to 36 LMC and five SMC Cepheids and delineate new PL relations in the V,I,J, & K bands as well as in the Wesenheit indices in the optical and near-IR. We present 509 new and accurate radial velocity measurements for a sample of 22 LMC Cepheids, enlarging our earlier sample of 14 stars to include 36 LMC Cepheids. The new calibration of the IRSB method is directly tied to the recent HST parallax measurements to ten Milky Way Cepheids, and we find a LMC barycenter distance modulus of 18.45+-0.04 (random error only) from the 36 individual LMC Cepheid distances. We find a significant metallicity effect on the Wvi index gamma(Wvi)=-0.23+-0.10 mag/dex as well as an effect on the slope. The K-band PL relation on the other hand is found to be an excellent extragalactic standard candle being metallicity insensitive in both slope and zero-point and at the same time being reddening insensitive and showing the least internal dispersion. Calibrating the Cepheid Period-Luminosity relation from the infrared surface brightness technique I. The p-factor, the Milky Way relations, and a universal K-band relation (1109.2017) J. Storm, G. Pietrzynski, K. Strassmeier Universidad de Concepcion, Univ. of Texas at Austin, Laboratoire Fizeau, UNS/OCA/CNRS, Nice) We determine Period-Luminosity relations for Milky Way Cepheids in the optical and near-IR bands. These relations can be used directly as reference for extra-galactic distance determination to Cepheid populations with solar metallicity, and they form the basis for a direct comparison with relations obtained in exactly the same manner for stars in the Magellanic Clouds, presented in an accompanying paper. In that paper we show that the metallicity effect is very small and consistent with a null effect, particularly in the near-IR bands, and we combine here all 111 Cepheids from the Milky Way, the LMC and SMC to form a best relation. We employ the near-IR surface brightness (IRSB) method to determine direct distances to the individual Cepheids after we have recalibrated the projection factor using the recent parallax measurements to ten Galactic Cepheids and the constraint that Cepheid distances to the LMC should be independent of pulsation period. We confirm our earlier finding that the projection factor for converting radial velocity to pulsational velocity depends quite steeply on pulsation period, p=1.550-0.186*log(P) in disagrement with recent theoretical predictions. We delineate the Cepheid PL relation using 111 Cepheids with direct distances from the IRSB analysis. 
The relations are by construction in agreement with the recent HST parallax distances to Cepheids and slopes are in excellent agreement with the slopes of apparent magnitudes versus period observed in the LMC. Distance to Galactic globulars using the near-infrared magnitudes of RR Lyrae stars: IV. The case of M5 (NGC5904) (1105.4031) G. Coppola, M. Dall'Ora, V. Ripepi, M. Marconi, I. Musella, G. Bono, A. M. Piersimoni, P. B. Stetson, J. Storm May 20, 2011 astro-ph.GA We present new and accurate near-infrared (NIR) J, K-band time series data for the Galactic globular cluster (GC) M5 = NGC5904. Data were collected with SOFI at the NTT (71 J + 120 K images) and with NICS at the TNG (25 J + 22 K images) and cover two orthogonal strips across the center of the cluster of \approx 5 \times 10 arcmin^{2} each. These data allowed us to derive accurate mean K-band magnitudes for 52 fundamental (RR_{ab}) and 24 first overtone (RR_{c}) RR Lyrae stars. Using this sample of RR Lyrae stars, we find that the slope of the K-band Period Luminosity (PLK) relation (-2.33 \pm 0.08) agrees quite well with similar estimates available in the literature. We also find, using both theoretical and empirical calibrations of the PLK relation, a true distance to M5 of (14.44 \pm 0.02) mag. This distance modulus agrees very well (1\sigma) with distances based on main sequence fitting method and on kinematic method (14.44 \pm 0.41 mag, \citealt{rees_1996}), while is systematically smaller than the distance based on the white dwarf cooling sequence (14.67 \pm 0.18 mag, \citealt{layden2005}), even if with a difference slightly larger than 1\sigma. The true distance modulus to M5 based on the PLJ relation (14.50 \pm 0.08 mag) is in quite good agreement with the distance based on the PLK relation further supporting the use of NIR PL relations for RR Lyrae stars to improve the precision of the GC distance scale. On the radial extent of the dwarf irregular galaxy IC10 (1009.3917) N. Sanna, G. Bono, P. B. Stetson, I. Ferraro, M. Monelli, M. Nonino, P. G., Prada Moroni, R. Bresolin, R. Buonanno, F. Caputo, M. Cignoni, S. Degl'Innocenti, G. Iannicola, N. Matsunaga, A. Pietrinferni, M. Romaniello, J. Storm, A. R. Walker We present new deep and accurate space (Advanced Camera for Surveys -- Wide Field Planetary Camera 2 at the Hubble Space Telescope) and ground-based (Suprime-Cam at Subaru Telescope, Mega-Cam at Canada-France-Hawaii Telescope) photometric and astrometric data for the Local Group dwarf irregular IC10. We confirm the significant decrease of the young stellar population when moving from the center toward the outermost regions. We find that the tidal radius of IC10 is significantly larger than previous estimates of $r_t \lesssim$ 10\min. By using the $I$,\vmi\ Color Magnitude Diagram based on the Suprime-Cam data we detect sizable samples of red giant (RG) stars up to radial distances of 18-23$'$ from the galactic center. The ratio between observed star counts (Mega-Cam data) across the tip of the RG branch and star counts predicted by Galactic models indicate a star count excess at least at a 3$\sigma$ level up to 34-42\min\ from the center. This finding supports the hypothesis that the huge H{\size{I}} cloud covering more than one degree across the galaxy is associated with IC10 \citep{huchtmeier79,cohen79}. We also provide new estimates of the total luminosity ($L_V\sim9\times$$10^7$ $L_\odot$, $M_V$$\sim$-15.1 mag) that agrees with similar estimates available in the literature. 
If we restrict to the regions where rotational velocity measurements are available (r$\approx13'$), we find a mass-to-light ratio ($\sim$10 $M_\odot$ $L_\odot$) that is at least one order of magnitude larger than previous estimates. The new estimate should be cautiously treated, since it is based on a minimal fraction of the body of the galaxy. High resolution spectroscopy for Cepheids distance determination. V. Impact of the cross-correlation method on the p-factor and the gamma-velocities (0905.4540) N. Nardetto, W. Gieren, P. Kervella, P. Fouque, J. Storm, G. Pietrzynski, D. Mourard, D. Queloz May 28, 2009 astro-ph.CO, astro-ph.SR The cross correlation method (hereafter CC) is widely used to derive the radial velocity curve of Cepheids when the signal to noise of the spectra is low. However, if it is used with the wrong projection factor, it might introduce some biases in the Baade-Wesselink (hereafter BW) methods of determining the distance of Cepheids. In addition, it might affect the average value of the radial velocity curve (or gamma-velocity) important for Galactic structure studies. We aim to derive a period-projection factor relation (hereafter Pp) appropriate to be used together with the CC method. Moreover, we investigate whether the CC method can explain the misunderstood previous calculation of the K-term of Cepheids. We observed eight galactic Cepheids with the HARPS spectrograph. For each star, we derive an interpolated CC radial velocity curve using the HARPS pipeline. The amplitudes of these curves are used to determine the correction to be applied to the semi-theoretical projection factor derived in Nardetto et al. (2007). Their average value (or gamma-velocity) are also compared to the center-of-mass velocities derived in Nardetto et al. (2008). The correction in amplitudes allows us to derive a new Pp relation: p = [-0.08+-0.05] log P +[1.31+-0.06]. We also find a negligible wavelength dependence (over the optical range) of the Pp relation. We finally show that the gamma-velocity derived from the CC method is systematically blue-shifted by about 1.0 +- 0.2km/s compared to the center-of-mass velocity of the star. An additional blue-shift of 1.0km/s is thus needed to totally explain the previous calculation of the K-term of Cepheids (around 2km/s). The new Pp relation we derived is a solid tool for the distance scale calibration (abridged). The Araucaria Project. The Distance to the Sculptor Galaxy NGC 247 from Near-Infrared Photometry of Cepheid Variables (0905.2699) W. Gieren, G. Pietrzynski, I. Soszynski, O. Szewczyk, F. Bresolin, R.-P. Kudritzki, M. Urbaneja, J. Storm, D. Minniti, A. Garcia-Varela May 16, 2009 astro-ph.CO We have obtained deep near-infrared images in J and K filters of four fields in the Sculptor Group spiral galaxy NGC 247 with the ESO VLT and ISAAC camera. For a sample of ten Cepheids in these fields, previously discovered by Garc{\'i}a-Varela et al. from optical wide-field images, we have determined mean J and K magnitudes and have constructed the period-luminosity (PL) relations in these bands. Using the near-infrared PL relations together with those in the optical V and I bands, we have determined a true distance modulus for NGC 247 of 27.64 mag, with a random uncertainty of $\pm$2% and a systematic uncertainty of $\sim$4% which is dominated by the effect of unresolved stars on the Cepheid photometry. 
The mean reddening affecting the NGC 247 Cepheids of E(B-V) = 0.18 $\pm$ 0.02 mag is mostly produced in the host galaxy itself and is significantly higher than what was found in the previous optical Cepheid studies in NGC 247 of our own group, and Madore et al., leading to a 7% decrease in the previous optical Cepheid distance. As in other studies of our project, the distance modulus of NGC 247 we report is tied to an assumed LMC distance modulus of 18.50. Comparison with other distance measurements to NGC 247 shows that the present IR-based Cepheid distance is the most accurate among these determinations. With a distance of 3.4 Mpc, NGC 247 is about 1.5 Mpc more distant than NGC 55 and NGC 300, two other Sculptor Group spirals analyzed before with the same technique by our group. Extremely faint high proper motion objects from SDSS stripe 82 - Optical classification spectroscopy of about 40 new objects (0812.1495) R.-D. Scholz, J. Storm, G. R. Knapp, H. Zinnecker Dec. 8, 2008 astro-ph (abridged) Deep multi-epoch Sloan Digital Sky Survey data in a 275 square degrees area along the celestial equator (SDSS stripe 82 = S82) allowed us to search for extremely faint ($i>21$) objects with proper motions larger than 0.14 arcsec/yr. We classify 38 newly detected objects with low-resolution optical spectroscopy using FORS1 @ ESO VLT. All 22 previously known L dwarfs in S82 have been detected in our high proper motion survey. However, 11 of the known L dwarfs have smaller proper motions (0.01$<$$\mu$$<$0.14 arcsec/yr). Although S82 was already one of the best investigated sky regions with respect to L and T dwarfs, we are able to classify 13 new L dwarfs. We have also found eight new M7.5-M9.5 dwarfs. Four new cool white dwarfs (CWDs) discovered by us are about 1-2 mag fainter than those previously detected in SDSS data. All new L-type, late-M and CWD objects show thick disk and halo kinematics. There are 13 objects, mostly with uncertain proper motions, which we initially classified as mid-M dwarfs. Among them we have found 9 with an alternative subdwarf classification (sdM7 or earlier types), whereas we have not found any new spectra resembling the known ultracool ($>$sdM7) subdwarfs. Some M subdwarf candidates have been classified based on spectral indices with large uncertainties. We failed to detect new nearby ($d<50$ pc) L dwarfs, probably because the S82 area was already well-investigated before. With our survey we have demonstrated a higher efficiency in finding Galactic halo CWDs than previous searches. The space density of halo CWDs is according to our results about 1.5-3.0 $\times$ 10$^{-5}$ pc$^{-3}$. 2XMM J083026+524133: The most X-ray luminous cluster at redshift 1 (0805.3817) G. Lamer, M. Hoeft, J. Kohnert, A. Schwope, J. Storm Aug. 25, 2008 astro-ph In the distant universe X-ray luminous clusters of galaxies are rare objects. Large area surveys are therefore needed to probe the high luminosity end of the cluster population at redshifts z >= 1. We correlated extended X-ray sources from the second XMM-Newton source catalogue (2XMM) with the SDSS in order to identify new clusters of galaxies. Distant cluster candidates in empty SDSS fields were imaged in the R and z bands with the Large Binocular Telescope. We extracted the X-ray spectra of the cluster candidates and fitted thermal plasma models to the data. We determined the redshift 0.99 +-0.03 for 2XMM J083026+524133 from its X-ray spectrum. 
With a bolometric luminosity of 1.8 x 10^45 erg/sec this is the most X-ray luminous cluster at redshifts z >= 1. We measured a gas temperature of 8.2 +- 0.9 keV and and estimate a cluster mass M(500) = 5.6 x 10^14 M(solar). The optical imaging revealed a rich cluster of galaxies. The Araucaria Project. The Distance to the Local Group Galaxy WLM from Near-Infrared Photometry of Cepheid Variables (0805.2655) W. Gieren, G. Pietrzynski, O. Szewczyk, F. Bresolin, R.-P. Kudritzki, M.A. Urbaneja, J. Storm, D. Minniti May 17, 2008 astro-ph We have obtained deep images in the near-infrared J and K filters for several fields in the Local Group galaxy WLM. We report intensity mean magnitudes for 31 Cepheids located in these fields which we previously discovered in a wide-field optical imaging survey of WLM. The data define tight period-luminosity relations in both near-infrared bands which we use to derive the total reddening of the Cepheids in WLM and the true distance modulus of the galaxy from a multiwavelength analysis of the reddened distance moduli in the VIJK bands. From this, we obtain the values E(B-V) = 0.082 $\pm$ 0.02, and $(m-M)_{0} = 24.924 \pm 0.042$ mag, with a systematic uncertainty in the distance of about $\pm$ 3%. This Cepheid distance agrees extremely well with the distance of WLM determined from the I-band TRGB method by ourselves and others. Most of the reddening of the Cepheids in WLM (0.06 mag) is produced inside the galaxy, demonstrating again the need for an accurate determination of the total reddening and/or the use of infrared photometry to derive Cepheid distances which are accurate to 3% or better, even for small irregular galaxies like WLM. The Araucaria Project. The Distance to the Sculptor dwarf spheroidal galaxy from infrared photometry of RR Lyrae stars (0804.0347) G. Pietrzynski, W. Gieren, O. Szewczyk, A. Walker, L. Rizzi, F. Bresolin, R.-P. Kudritzki, K. Nalewajko, J. Storm, M. Dall'Ora, V. Ivanov April 2, 2008 astro-ph We have obtained single-phase near-infrared magnitudes in the J and K bands for a sample of 78 RR Lyrae stars in the Sculptor dSph galaxy. Applying different theoretical and empirical calibrations of the period-luminosity-metallicity relation for RR Lyrae stars in the infrared, we find consistent results and obtain a true, reddening-corrected distance modulus of 19.67 $\pm$ 0.02 (statistical) $\pm$ 0.12 (systematic) mag for Sculptor from our data. This distance value is consistent with the value of 19.68 $\pm$ 0.08 mag which we obtain from earlier V-band data of RR Lyrae stars in Sculptor, and the V magnitude-metallicity calibration of Sandage (1993). It is also in a very good agreement with the results obtain by Rizzi (2002) based on tip of the red giant branch (TRGB, 19.64 $\pm$ 0.08 mag) and horizontal branch (HB, 19.66 $\pm$ 0.15 mag). A new calibration of Galactic Cepheid Period-Luminosity relations from B to K bands, and a comparison to LMC PL relations (0709.3255) P. Fouque, P. Arriagada, J. Storm, T. G. Barnes, N. Nardetto, A. Merand, P. Kervella, W. Gieren, D. Bersier, G. F. Benedict, B. E. McArthur Sept. 20, 2007 astro-ph The universality of the Cepheid Period-Luminosity relations has been under discussion since metallicity effects have been assumed to play a role in the value of the intercept and, more recently, of the slope of these relations. The goal of the present study is to calibrate the Galactic PL relations in various photometric bands (from B to K) and to compare the results to the well-established PL relations in the LMC. 
We use a set of 59 calibrating stars, the distances of which are measured using five different distance indicators: Hubble Space Telescope and revised Hipparcos parallaxes, infrared surface brightness and interferometric Baade-Wesselink parallaxes, and classical Zero-Age-Main-Sequence-fitting parallaxes for Cepheids belonging to open clusters or OB stars associations. A detailed discussion of absorption corrections and projection factor to be used is given. We find no significant difference in the slopes of the PL relations between LMC and our Galaxy. We conclude that the Cepheid PL relations have universal slopes in all photometric bands, not depending on the galaxy under study (at least for LMC and Milky Way). The possible zero-point variation with metal content is not discussed in the present work, but an upper limit of 18.50 for the LMC distance modulus can be deduced from our data.
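Two relations quoted in the abstracts above are simple enough to restate numerically: the period-dependent projection factor of the IRSB calibration, p = 1.550 - 0.186 log P, and the standard conversion between a true distance modulus and a distance in parsecs. The sketch below does only that; the function names and the sign convention chosen for the pulsational velocity are illustrative assumptions, not code from any of the papers.

import numpy as np

def projection_factor(period_days):
    # p-factor relation quoted in the IRSB calibration paper: p = 1.550 - 0.186 log P
    return 1.550 - 0.186 * np.log10(period_days)

def pulsation_velocity(v_rad, v_gamma, period_days):
    # convert an observed radial velocity into a pulsational velocity;
    # the sign convention (expansion counted positive) is an assumption here
    return -projection_factor(period_days) * (v_rad - v_gamma)

def modulus_to_distance_pc(mu):
    # d [pc] = 10**((mu + 5) / 5), i.e. mu = 5 log10(d / 10 pc)
    return 10.0 ** ((mu + 5.0) / 5.0)

# Consistency check against two numbers quoted above:
print(f"mu = 18.45 mag  ->  d = {modulus_to_distance_pc(18.45) / 1e3:.1f} kpc")   # about 49.0 kpc
print(f"d = 49.97 kpc   ->  mu = {5 * np.log10(49.97e3) - 5:.2f} mag")            # about 18.49 mag

For a 10-day Cepheid, for instance, projection_factor(10.0) gives p = 1.364, the factor by which the disentangled radial-velocity curve would be scaled before integrating to a radius curve in a Baade-Wesselink analysis.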
5.3: Points on Circles Using Sine and Cosine
Contributed by David Lippman & Melonie Rasmussen, Professors (Mathematics) at Pierce College. Publisher: The OpenTextBookStore.

While it is convenient to describe the location of a point on a circle using an angle or a distance along the circle, relating this information to the x and y coordinates and the circle equation we explored in Section 5.1 is an important application of trigonometry.

A distress signal is sent from a sailboat during a storm, but the transmission is unclear and the rescue boat sitting at the marina cannot determine the sailboat's location. Using high powered radar, they determine the distress signal is coming from a distance of 20 miles at an angle of 225 degrees from the marina. How many miles east/west and north/south of the rescue boat is the stranded sailboat?

In a general sense, to investigate this, we begin by drawing a circle centered at the origin with radius \(r\), and marking the point on the circle indicated by some angle \(\theta\). This point has coordinates (\(x\), \(y\)). If we drop a line segment vertically down from this point to the x axis, we would form a right triangle inside of the circle. No matter which quadrant our angle \(\theta\) puts us in, we can draw a triangle by dropping a perpendicular line segment to the \(x\) axis, keeping in mind that the values of \(x\) and \(y\) may be positive or negative, depending on the quadrant. Additionally, if the angle \(\theta\) puts us on an axis, we simply measure the radius as the \(x\) or \(y\) with the other value being 0, again ensuring we have appropriate signs on the coordinates based on the quadrant. Triangles obtained from different radii will all be similar triangles, meaning corresponding sides scale proportionally.
While the lengths of the sides may change, as we saw in the last section, the ratios of the side lengths will always remain constant for any given angle. \(\dfrac{y_{1} }{r_{1} } =\dfrac{y_{2} }{r_{2} }\) \(\dfrac{x_{1} }{r_{1} } =\dfrac{x_{2} }{r_{2} }\) To be able to refer to these ratios more easily, we will give them names. Since the ratios depend on the angle, we will write them as functions of the angle \(\theta\).
Note: sine and cosine. For the point (\(x\), \(y\)) on a circle of radius \(r\) at an angle of \(\theta\), we can define two important functions as the ratios of the sides of the corresponding triangle: The sine function: \(\sin (\theta )=\dfrac{y}{r}\). The cosine function: \(\cos (\theta )=\dfrac{x}{r}\). In this chapter, we will explore these functions using both circles and right triangles. In the next chapter, we will take a closer look at the behavior and characteristics of the sine and cosine functions.
Example \(\PageIndex{1}\): The point (3, 4) is on the circle of radius 5 at some angle \(\theta\). Find \(\cos (\theta )\) and \(\sin (\theta )\). Knowing the radius of the circle and coordinates of the point, we can evaluate the cosine and sine functions as the ratio of the sides. \[\cos (\theta )=\dfrac{x}{r} =\dfrac{3}{5} \qquad \sin (\theta )=\dfrac{y}{r} =\dfrac{4}{5}\nonumber\] There are a few cosine and sine values which we can determine fairly easily because the corresponding point on the circle falls on the \(x\) or \(y\) axis. Find \(\cos (90{}^\circ )\) and \(\sin (90{}^\circ )\). On any circle, the terminal side of a 90 degree angle points straight up, so the coordinates of the corresponding point on the circle would be (0, r). Using our definitions of cosine and sine, \[\cos (90{}^\circ )=\dfrac{x}{r} =\dfrac{0}{r} =0\nonumber\] \[\sin (90{}^\circ )=\dfrac{y}{r} =\dfrac{r}{r} =1\nonumber\]
Exercise \(\PageIndex{1}\): Find the cosine and sine of the angle \(\pi\). \[\cos (\pi )=-1 \qquad \sin (\pi )=0\nonumber\]
Notice that the definitions above can also be stated as: Coordinates of the point on a circle at a given angle: On a circle of radius \(r\) at an angle of \(\theta\), we can find the coordinates of the point (\(x\), \(y\)) at that angle using \[x=r\cos (\theta )\] \[y=r\sin (\theta )\] On a unit circle, a circle with radius 1, \(x=\cos (\theta )\) and \(y=\sin (\theta )\).
Utilizing the basic equation for a circle centered at the origin, \(x^{2} +y^{2} =r^{2}\), combined with the relationships above, we can establish a new identity. \[x^{2} +y^{2} =r^{2}\nonumber\] substituting the relations above, \[(r\cos (\theta ))^{2} +(r\sin (\theta ))^{2} =r^{2}\nonumber\] simplifying, \[r^{2} (\cos (\theta ))^{2} +r^{2} (\sin (\theta ))^{2} =r^{2}\nonumber\] dividing by \(r^{2}\), \[(\cos (\theta ))^{2} +(\sin (\theta ))^{2} =1\nonumber\] or using shorthand notation \[\cos ^{2} (\theta )+\sin ^{2} (\theta )=1\nonumber\] Here \(\cos ^{2} (\theta )\) is a commonly used shorthand notation for \((\cos (\theta ))^{2}\). Be aware that many calculators and computers do not understand the shorthand notation. In Section 5.1 we related the Pythagorean Theorem \(a^{2} +b^{2} =c^{2}\) to the basic equation of a circle \(x^{2} +y^{2} =r^{2}\), which we have now used to arrive at the Pythagorean Identity.
The Pythagorean Identity: For any angle \(\theta\), \[\cos ^{2} (\theta )+\sin ^{2} (\theta )=1\nonumber\] One use of this identity is that it helps us to find a cosine value of an angle if we know the sine value of that angle or vice versa.
However, since the equation will yield two possible values, we will need to utilize additional knowledge of the angle to help us find the desired value. If \(\sin (\theta )=\dfrac{3}{7}\) and \(\theta\) is in the second quadrant, find \(\cos (\theta )\). Substituting the known value for sine into the Pythagorean identity, \[\cos ^{2} (\theta )+\dfrac{9}{49} =1\nonumber\] \[\cos ^{2} (\theta )=\dfrac{40}{49}\nonumber\] \[\cos (\theta )=\pm \sqrt{\dfrac{40}{49} } =\pm \dfrac{\sqrt{40} }{7} =\pm \dfrac{2\sqrt{10} }{7}\nonumber\] Since the angle is in the second quadrant, we know the \(x\) value of the point would be negative, so the cosine value should also be negative. Using this additional information, we can conclude that \[\cos (\theta )=-\dfrac{2\sqrt{10} }{7}\nonumber\]
Values for Sine and Cosine
At this point, you may have noticed that we haven't found any cosine or sine values from angles not on an axis. To do this, we will need to utilize our knowledge of triangles. First, consider a point on a circle at an angle of 45 degrees, or \(\dfrac{\pi }{4}\). At this angle, the x and y coordinates of the corresponding point on the circle will be equal because 45 degrees divides the first quadrant in half. Since the \(x\) and \(y\) values will be the same, the sine and cosine values will also be equal. Utilizing the Pythagorean Identity, \[\cos ^{2} \left(\dfrac{\pi }{4} \right)+\sin ^{2} \left(\dfrac{\pi }{4} \right)=1\nonumber\] since the sine and cosine are equal, we can substitute sine with cosine \[\cos ^{2} \left(\dfrac{\pi }{4} \right)+\cos ^{2} \left(\dfrac{\pi }{4} \right)=1\nonumber\] add like terms \[2\cos ^{2} \left(\dfrac{\pi }{4} \right)=1\nonumber\] divide \[\cos ^{2} \left(\dfrac{\pi }{4} \right)=\dfrac{1}{2}\nonumber\] since the \(x\) value is positive, we'll keep the positive root \[\cos \left(\dfrac{\pi }{4} \right)=\sqrt{\dfrac{1}{2} }\nonumber\] Often this value is written with a rationalized denominator. Remember, to rationalize the denominator we multiply by a term equivalent to 1 to get rid of the radical in the denominator: \[\cos \left(\dfrac{\pi }{4} \right)=\sqrt{\dfrac{1}{2} } \sqrt{\dfrac{2}{2} } =\sqrt{\dfrac{2}{4} } =\dfrac{\sqrt{2} }{2}\nonumber\] Since the sine and cosine are equal, \(\sin \left(\dfrac{\pi }{4} \right)=\dfrac{\sqrt{2} }{2}\) as well. The (\(x\), \(y\)) coordinates for a point on a circle of radius 1 at an angle of 45 degrees are \(\left(\dfrac{\sqrt{2} }{2} ,\dfrac{\sqrt{2} }{2} \right)\).
Find the coordinates of the point on a circle of radius 6 at an angle of \(\dfrac{\pi }{4}\). Using our new knowledge that \(\sin \left(\dfrac{\pi }{4} \right)=\dfrac{\sqrt{2} }{2}\) and \(\cos \left(\dfrac{\pi }{4} \right)=\dfrac{\sqrt{2} }{2}\), along with our relationships that stated \(x=r\cos (\theta )\) and \(y=r\sin (\theta )\), we can find the coordinates of the point desired: \[x=6\cos \left(\dfrac{\pi }{4} \right)=6\left(\dfrac{\sqrt{2} }{2} \right)=3\sqrt{2}\nonumber \] \[y=6\sin \left(\dfrac{\pi }{4} \right)=6\left(\dfrac{\sqrt{2} }{2} \right)=3\sqrt{2}\nonumber\]
Find the coordinates of the point on a circle of radius 3 at an angle of \(90{}^\circ\). \[\begin{array}{l} {x=3\cos \left(\dfrac{\pi }{2} \right)=3\cdot 0=0} \\ {y=3\sin \left(\dfrac{\pi }{2} \right)=3\cdot 1=3} \end{array}\nonumber\]
Next, we will find the cosine and sine at an angle of 30 degrees, or \(\frac{\pi }{6}\). To do this, we will first draw a triangle inside a circle with one side at an angle of 30 degrees, and another at an angle of -30 degrees.
If the resulting two right triangles are combined into one large triangle, notice that all three angles of this larger triangle will be 60 degrees. Since all the angles are equal, the sides will all be equal as well. The vertical line has length \(2y\), and since the sides are all equal we can conclude that \(2y = r\), or \(y=\dfrac{r}{2}\). Using this, we can find the sine value: \[\sin\left(\dfrac{\pi}{6}\right) = \dfrac{y}{r} = \dfrac{r/2}{r} = \dfrac{r}{2} \cdot \dfrac{1}{r} = \dfrac{1}{2}\nonumber\] Using the Pythagorean Identity, we can find the cosine value: \[\cos ^{2} \left(\dfrac{\pi }{6} \right)+\left(\dfrac{1}{2} \right)^{2} =1\nonumber\] \[\cos \left(\dfrac{\pi }{6} \right)=\sqrt{\dfrac{3}{4} } =\dfrac{\sqrt{3} }{2}\nonumber\] The (\(x\), \(y\)) coordinates for the point on a circle of radius 1 at an angle of 30 degrees are \(\left(\dfrac{\sqrt{3} }{2} ,\dfrac{1}{2} \right)\). By drawing the triangle inside the unit circle with a 30 degree angle and reflecting it over the line \(y = x\), we can find the cosine and sine for 60 degrees, or \(\dfrac{\pi }{3}\), without any additional work. By this symmetry, we can see the coordinates of the point on the unit circle at an angle of 60 degrees will be \(\left(\dfrac{1}{2} ,\dfrac{\sqrt{3} }{2} \right)\), giving \(\cos \left(\dfrac{\pi }{3} \right)=\dfrac{1}{2}\) and \(\sin \left(\dfrac{\pi }{3} \right)=\dfrac{\sqrt{3} }{2}\).
We have now found the cosine and sine values for all the commonly encountered angles in the first quadrant of the unit circle.
Angle: \(0\) | \(\dfrac{\pi }{6}\), or 30\(\mathrm{{}^\circ}\) | \(\dfrac{\pi }{4}\), or 45\(\mathrm{{}^\circ}\) | \(\dfrac{\pi }{3}\), or 60\(\mathrm{{}^\circ}\) | \(\dfrac{\pi }{2}\), or 90\(\mathrm{{}^\circ}\)
Cosine: 1 | \(\dfrac{\sqrt{3} }{2}\) | \(\dfrac{\sqrt{2} }{2}\) | \(\dfrac{1}{2}\) | 0
Sine: 0 | \(\dfrac{1}{2}\) | \(\dfrac{\sqrt{2} }{2}\) | \(\dfrac{\sqrt{3} }{2}\) | 1
For any given angle in the first quadrant, there will be an angle in another quadrant with the same sine value, and yet another angle in yet another quadrant with the same cosine value. Since the sine value is the \(y\) coordinate on the unit circle, the other angle with the same sine will share the same \(y\) value, but have the opposite \(x\) value. Likewise, the angle with the same cosine will share the same \(x\) value, but have the opposite \(y\) value. As shown here, angle \(\alpha\) has the same sine value as angle \(\theta\); the cosine values would be opposites. The angle \(\beta\) has the same cosine value as the angle \(\theta\); the sine values would be opposites. It is important to notice the relationship between the angles. If, from the angle, you measured the smallest angle to the horizontal axis, all would have the same measure in absolute value. We say that all these angles have a reference angle of \(\theta\).
Definition: reference angle. An angle's reference angle is the size of the smallest angle to the horizontal axis. A reference angle is always an angle between 0 and 90 degrees, or 0 and \(\dfrac{\pi }{2}\) radians. Angles share the same cosine and sine values as their reference angles, except for signs (positive or negative) which can be determined from the quadrant of the angle.
Find the reference angle of 150 degrees. Use it to find \(\cos (150{}^\circ )\) and \(\sin (150{}^\circ )\). 150 degrees is located in the second quadrant. It is 30 degrees short of the horizontal axis at 180 degrees, so the reference angle is 30 degrees.
This tells us that 150 degrees has the same sine and cosine values as 30 degrees, except for sign. We know that \(\sin (30{}^\circ )=\dfrac{1}{2}\) and \(\cos (30{}^\circ )=\dfrac{\sqrt{3} }{2}\). Since 150 degrees is in the second quadrant, the \(x\) coordinate of the point on the circle would be negative, so the cosine value will be negative. The \(y\) coordinate is positive, so the sine value will be positive. \[\sin (150{}^\circ )=\dfrac{1}{2}\text{ and }\cos (150{}^\circ )=-\dfrac{\sqrt{3} }{2}\nonumber\] The (\(x\), \(y\)) coordinates for the point on a unit circle at an angle of \(150{}^\circ\) are \(\left(\dfrac{-\sqrt{3} }{2} ,\dfrac{1}{2} \right)\). Using symmetry and reference angles, we can fill in cosine and sine values at the rest of the special angles on the unit circle. Take time to learn the (\(x\), \(y\)) coordinates of all the major angles in the first quadrant! Find the coordinates of the point on a circle of radius 12 at an angle of \(\dfrac{7\pi }{6}\). Note that this angle is in the third quadrant, where both x and y are negative. Keeping this in mind can help you check your signs of the sine and cosine function. \[x=12\cos \left(\dfrac{7\pi }{6} \right)=12\left(\dfrac{-\sqrt{3} }{2} \right)=-6\sqrt{3}\nonumber \] \[y=12\sin \left(\dfrac{7\pi }{6} \right)=12\left(\dfrac{-1}{2} \right)=-6\nonumber\] The coordinates of the point are \((-6\sqrt{3} ,-6)\). Find the coordinates of the point on a circle of radius 5 at an angle of \(\dfrac{5\pi }{3}\). \[\left(5\cos \left(\dfrac{5\pi }{3} \right),5\sin \left(\dfrac{5\pi }{3} \right)\right)=\left(\dfrac{5}{2} ,\dfrac{-5\sqrt{3} }{2} \right)\nonumber\] We now have the tools to return to the sailboat question posed at the beginning of this section. A distress signal is sent from a sailboat during a storm, but the transmission is unclear and the rescue boat sitting at the marina cannot determine the sailboat's location. Using high powered radar, they determine the distress signal is coming from a point 20 miles away at an angle of 225 degrees from the marina. How many miles east/west and north/south of the rescue boat is the stranded sailboat? We can now answer the question by finding the coordinates of the point on a circle with a radius of 20 miles at an angle of 225 degrees. \[x=20\cos \left(225{}^\circ \right)=20\left(\dfrac{-\sqrt{2} }{2} \right)\approx -14.142\text{ miles}\nonumber\] \[y=20\sin \left(225{}^\circ \right)=20\left(\dfrac{-\sqrt{2} }{2} \right)\approx -14.142\text{ miles}\nonumber\] The sailboat is located 14.142 miles west and 14.142 miles south of the marina. The special values of sine and cosine in the first quadrant are very useful to know, since knowing them allows you to quickly evaluate the sine and cosine of very common angles without needing to look at a reference or use your calculator. However, scenarios do come up where we need to know the sine and cosine of other angles. To find the cosine and sine of any other angle, we turn to a computer or calculator. Be aware: most calculators can be set into "degree" or "radian" mode, which tells the calculator the units for the input value. When you evaluate "cos(30)" on your calculator, it will evaluate it as the cosine of 30 degrees if the calculator is in degree mode, or the cosine of 30 radians if the calculator is in radian mode. Most computer software with cosine and sine functions only operates in radian mode. Evaluate the cosine of 20 degrees using a calculator or computer. 
On a calculator that can be put in degree mode, you can evaluate this directly to be approximately 0.939693. On a computer or calculator without degree mode, you would first need to convert the angle to radians, or equivalently evaluate the expression \[\cos \left(20 \cdot \dfrac{\pi }{180} \right)\nonumber\]
Important Topics of This Section
The sine function
The cosine function
Pythagorean Identity
Unit Circle values
Reference angles
Using technology to find points on a circle
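To make the degree/radian point concrete, here is a short Python sketch (added for illustration; it is not part of the original text) that converts degrees to radians before calling the trigonometric functions, reproducing cos(20°) and the sailboat coordinates from earlier in the section.

```python
import math

def point_on_circle(radius, angle_degrees):
    """Coordinates (x, y) of the point on a circle of the given radius
    at the given angle, using x = r*cos(theta) and y = r*sin(theta)."""
    theta = math.radians(angle_degrees)  # same as angle_degrees * pi / 180
    return radius * math.cos(theta), radius * math.sin(theta)

# cos(20 degrees): math.cos expects radians, so convert first
print(math.cos(math.radians(20)))   # about 0.939693

# The sailboat: 20 miles from the marina at an angle of 225 degrees
x, y = point_on_circle(20, 225)
print(round(x, 3), round(y, 3))     # about -14.142 -14.142
```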
CommonCrawl
How is partial trace related to operator sum representation?
In Quantum Computation and Quantum Information by Nielsen and Chuang, the authors introduce operator sum representation in Section 8.2.3. They denote the evolution of a density matrix, when given an environment, is the following: $$\varepsilon (\rho) = \mathrm{tr}_{\text{env}} [U(\rho\otimes \rho_{\text{env}})U^\dagger]$$ Where you are essentially taking the trace to discard the environment of the unitary evolution of the entire system. What I don't understand is how the operator sum representation is equivalent (Equations 8.9 and 8.11 in N&C) $$\varepsilon (\rho) = \sum_k \langle \mathbf{e}_k|U[\rho \otimes |\mathbf{e}_0\rangle \langle \mathbf{e}_0|]U^\dagger|\mathbf{e}_k\rangle = \sum_k E_k\rho E_k^\dagger$$ In this equation, I take $|\mathbf{e}_k\rangle$ to represent the basis of the system and U to be a 4 x 4 unitary matrix governing the evolution. How is this equivalent to the first equation where you discard the trace? It seems like the second equation (equation 8.9 in N&C) above would yield a scalar quantity. What does this equation mean? I understand the first equation where you take the partial trace, but how does partial trace relate to the 2nd and 3rd equations? I'm a relative beginner in this field.
Tags: quantum-information, nielsen-and-chuang, pauli-gates, partial-trace — asked by C. Ardayfio

Answer (glS): I think it helps here to write things explicitly. Suppose $\mathcal E(\rho)=\operatorname{Tr}_E[U(\rho\otimes|\mathbf e_0\rangle\!\langle\mathbf e_0|)U^\dagger]$. Pick a basis for the environment in which $|\mathbf e_0\rangle$ is the first element. Note that here $U$ is a unitary matrix in a bipartite system. The operator before taking the partial trace has matrix elements $$ [U(\rho\otimes|\mathbf e_0\rangle\!\langle\mathbf e_0|)U^\dagger]_{ij,k\ell} = \sum_{\alpha,\gamma} U_{ij,\alpha 0} \rho_{\alpha\gamma} (U^\dagger)_{\gamma0,k\ell} = \sum_{\alpha,\gamma} U_{ij,\alpha0}\bar U_{k\ell,\gamma0} \rho_{\alpha\gamma}. $$ Now notice that the partial trace amounts here to making $j=\ell$ and summing over $j$ (because in this notation the indices $i,k$ refer to the system while $j,\ell$ refer to the environment), so that we get $$ [\mathcal E(\rho)]_{ik} = \sum_j [U(\rho\otimes|\mathbf e_0\rangle\!\langle\mathbf e_0|)U^\dagger]_{ij,kj} = \sum_{\alpha\gamma j} U_{ij,\alpha0}\bar U_{kj,\gamma0}\rho_{\alpha\gamma}. $$ Notice how this is already essentially an operator sum representation: defining $(E_{j})_{i\alpha}\equiv U_{ij,\alpha0}$, we get $$[\mathcal E(\rho)]_{ik}=\sum_{j\alpha\gamma} (E_j)_{i\alpha} (\bar E_j)_{k\gamma}\rho_{\alpha\gamma} = \left[\sum_j E_j \rho E^\dagger_j\right]_{ik}.$$

Answer (DaftWullie): $|e_k\rangle$ is the basis of the environment. Taking the sum of projections onto an orthonormal basis of one subsystem is the definition of the partial trace over that subsystem.

Comments:
– C. Ardayfio (Nov 25 '19 at 19:03): I performed the calculation in MATLAB and was not able to multiply the result of $\langle \mathbf{e}_k|U$ with $[\rho \otimes |\mathbf{e}_0\rangle \langle \mathbf{e}_0|]$. This is because the result of $[\rho \otimes |\mathbf{e}_0\rangle \langle \mathbf{e}_0|]$ is an 8x8 matrix and the result of $\langle \mathbf{e}_k|U$ is a 1x4 matrix. Obviously these two cannot be multiplied (mathematically and in MATLAB); how would this be resolved?
– DaftWullie (Nov 25 '19 at 20:52): Since you've not told me what the dimensions of each of the systems are, it's very hard for me to match up. But U should be a square matrix that is the same size as dim rho times the dimension of the environment. The e_k should be a 1 x (dim of environment) vector, except that you have to remember that "do nothing on the system" means tensoring with an identity matrix of the appropriate size.
– C. Ardayfio (Nov 26 '19 at 3:57): In the example presented in N&C, the unitary matrix is a controlled NOT gate of dimension 4x4. From this, I'd assume |e_k> to be a 4x1 vector and <e_k| to be a 1x4 vector; however, when you multiply <e_k| with U, you get a 1x4 vector. This can't be multiplied by the tensor product of the system and the environment as this has a dimension of 8x8. Am I correct in saying that <e_k|*U is a 1x4 vector and this clearly can't be multiplied with the second part of the equation?
– DaftWullie (Nov 26 '19 at 8:36): No, you've got things a bit jumbled. The system is a single qubit, so $\rho$ is $2\times 2$. The environment is also a qubit, so $|e_0\rangle\langle e_0|$ is also $2\times 2$. The unitary acts between the system and environment and so is $4\times 4$. Now, $\langle e_k|$ should be $1\times 2$, except that, implicitly, it's actually $I\otimes\langle e_k|$, which is $2\times 4$, and so all the multiplications work.
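To make the dimension counting from the comments concrete, here is a small NumPy sketch (added for illustration; it is not from the book or the original thread) for the CNOT example: the system and the environment are one qubit each, the Kraus operators are built as $E_k = (I \otimes \langle e_k|)\, U\, (I \otimes |e_0\rangle)$, and the partial trace is checked against $\sum_k E_k \rho E_k^\dagger$.

```python
import numpy as np

# One system qubit (any valid density matrix) and one environment qubit fixed in |e0>
rho = np.array([[0.6, 0.2], [0.2, 0.4]], dtype=complex)
e0 = np.array([[1], [0]], dtype=complex)   # |e0>, a 2x1 column vector
e1 = np.array([[0], [1]], dtype=complex)   # |e1>
I2 = np.eye(2, dtype=complex)

# U: CNOT acting on (system ⊗ environment), with the system qubit as control
U = np.array([[1, 0, 0, 0],
              [0, 1, 0, 0],
              [0, 0, 0, 1],
              [0, 0, 1, 0]], dtype=complex)

# Left-hand side: evolve the joint state, then trace out the environment
joint = U @ np.kron(rho, e0 @ e0.conj().T) @ U.conj().T        # 4x4
lhs = np.einsum('ijkj->ik', joint.reshape(2, 2, 2, 2))         # partial trace over env

# Right-hand side: Kraus operators E_k = (I ⊗ <e_k|) U (I ⊗ |e0>), each 2x2.
# Note that np.kron(I2, ek.conj().T) is 2x4 -- the "I ⊗ <e_k|" from the comments.
kraus = [np.kron(I2, ek.conj().T) @ U @ np.kron(I2, e0) for ek in (e0, e1)]
rhs = sum(E @ rho @ E.conj().T for E in kraus)

print(np.allclose(lhs, rhs))                                 # True
print(np.allclose(sum(E.conj().T @ E for E in kraus), I2))   # completeness: sum E_k† E_k = I
```

With the system as control, the two Kraus operators come out as $|0\rangle\langle 0|$ and $|1\rangle\langle 1|$, a fully dephasing channel, which is consistent with the index derivation in the accepted answer.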
CommonCrawl
Warm-needling acupuncture and medicinal cake-separated moxibustion for hyperlipidemia: study protocol for a randomized controlled trial
Trials, Issue 1/2017
Mailan Liu, Qian Zhang, Shan Jiang, Mi Liu, Guoshan Zhang, Zenghui Yue, Qin Chen, Jie Zhou, Yifan Zou, Dan Li, Mingzhu Ma, Guobin Dai, Huan Zhong, Zhihong Wang, Xiaorong Chang
The online version of this article (doi:10.1186/s13063-017-2029-x) contains supplementary material, which is available to authorized users.
Abbreviations: ALP: Alkaline phosphatase; AMH: Acupuncture and Moxibustion for Hyperlipidemia; ANCOVA: Analysis of covariance; ApoA1: Apolipoprotein A-1; ApoB: Apolipoprotein B; AST: Aspartate aminotransferase; BL: Bladder meridian; CAS: Completer analysis set; CK: Creatine kinase; DBP: Diastolic blood pressure; DMC: Data monitoring committee; EDAS: Effect durability analysis set; eGFR: Estimated glomerular filtration rate; EPA: Eicosapentaenoic acid; FAS: Full analysis set; HDL-C: High-density lipoprotein cholesterol; hs-CRP: High-sensitivity C-reactive protein; IEC/IRB: Independent Ethics Committee/Institutional Review Board; LDH: Lactate dehydrogenase; LDL-C: Low-density lipoprotein cholesterol; LLN: Lower limit of normal; LOCF: Last observation carried forward; Lp(a): Lipoprotein(a); NCEP ATP III: National Cholesterol Education Program Adult Treatment Panel III; NYHA: New York Heart Failure Association; PC: Pericardium meridian; PPAR: Peroxisome proliferator-activated receptor; RN: Ren meridian; RR: Relative risk; RRR: Relative risk reduction; SAE: Serious adverse event; SP: Spleen meridian; ST: Stomach meridian; TLC: Therapeutic Lifestyle Change; TSH: Thyroid-stimulating hormone; ULN: Upper limit of normal; VLDL-C: Very low-density lipoprotein cholesterol
Background and rationale
Hyperlipidemia refers to a condition of abnormal lipid metabolism in which low-density lipoprotein cholesterol (LDL-C), serum total cholesterol (TC), and/or triglycerides (TG) are above recommended levels, and/or high-density lipoprotein cholesterol (HDL-C) is below the recommended level. This condition is associated with poor lifestyle and dietary habits and represents an important risk factor for cardiovascular morbidity and mortality; hyperlipidemia is a risk factor for atherosclerosis, cardiovascular disease, stroke, and other diseases. Large randomized controlled trials in hyperlipidemia treatment have provided evidence that reducing LDL cholesterol concentration with statins is effective in both secondary and primary cardiovascular disease (CVD) prevention [1, 2]. As a result, individuals at moderate or high CVD risk are often considered candidates for lipid-lowering therapy with statins. However, statin therapy can incur significant cost to society. In addition, statin therapy is often associated with low treatment compliance and high rates of side effects [3, 4]. Acupuncture and moxibustion have been widely applied to hyperlipidemia treatment in clinical practice in China. Thus, an increasing number of studies have explored whether acupuncture and moxibustion could serve as an alternative treatment for subjects with hyperlipidemia. As shown in a meta-analysis, acupuncture alone, compared with statins, has demonstrated a more significant effect on decreasing TG and increasing HDL-C, but no superiority in lowering LDL-C and TC [5]. Meanwhile, moxibustion, which is often administered with acupuncture in Traditional Chinese Medicine (TCM) practice, also plays an essential role in lipid-lowering in TCM theory, by warming the meridians and facilitating lipid conversion. Recent studies have further revealed the biological pathways underlying the lipid-lowering effect of moxibustion [6–8].
This modality, especially warm-needling acupuncture (acupuncture with a moxa stick), can enhance microcirculation, adjust lipid metabolism, and thus lower blood viscosity [6–8]. Medicinal cake-separated moxibustion, an important form of moxibustion, applies acupoints, moxibustion, and traditional Chinese herbs in an integrative way. It has gained increasing popularity in the treatment of hyperlipidemia, and its potential impact has therefore been assessed further. Some findings have shown, from the perspective of gene transcription and protein expression, that medicinal cake-separated moxibustion could delay atherosclerosis (AS) formation and stabilize atheromatous plaque by adjusting Toll-like receptor (TLR) signaling pathways as well as peroxisome proliferator-activated receptors (PPARs) [9, 10]. Chang et al. indicated that both medicinal cake-separated moxibustion and direct moxibustion have a certain protective action on aortic endothelial cells in rabbits with hyperlipidemia [9]. Yue et al. found that herb-partition moxibustion delays the formation of atherosclerosis through the inhibition of TLR4 expression [10]. This has provided a new strategy for research on AS pathogenesis and prevention. In addition, based on clinical observation and a systematic review of the TCM literature, the meridians and acupoints potentially effective in hyperlipidemia treatment have been identified; these include ten meridians (five Yin and five Yang meridians), with acupoints drawn mainly from the Stomach (ST), Spleen (SP), Ren (RN), Bladder (BL) and Pericardium (PC) meridians [11]. Based on these findings, several clinical studies assessing medicinal cake-separated moxibustion have been undertaken, testing different acupoint prescriptions, medicinal-cake ingredients, treatment durations, etc., in an attempt to identify an effective and standardized regimen [6, 12–16]. Most of these studies have shown possible therapeutic effects of medicinal cake-separated moxibustion on hyperlipidemia, superior to placebo or noninferior to statins. In general, acupuncture and moxibustion appear to be possibly effective in treating hyperlipidemia, with lower cost and fewer serious adverse events [17, 18]. However, due to the lack of robust study designs and assessment methodology in the existing clinical studies, these findings should be interpreted with caution. According to previous attempts to identify an optimal regimen of acupuncture and moxibustion for treating hyperlipidemia, warm-needling acupuncture combined with medicinal cake-separated moxibustion appears to be a modality that successfully combines the advantages of both acupuncture and moxibustion, and it is worth exploring further to establish its therapeutic effects [12, 15, 16]. So far, very few studies on this combined intervention are available. Hence, there is a need for a well-designed randomized controlled trial to validate the efficacy and safety of warm-needling acupuncture along with medicinal cake-separated moxibustion, by comparing it with statins.
Hypotheses and objectives
Warm-needling acupuncture and medicinal cake-separated moxibustion (referred to as "acupuncture and moxibustion" in the following text) is noninferior to active control, in subjects with hypercholesterolemia, on:
- percent change of LDL-C
- absolute change of LDL-C
- percent change of HDL-C
- percent change of TC
- percent change of TG
- rate of subjects achieving the LDL-C goal
Acupuncture and moxibustion is superior to active control, in subjects with hypercholesterolemia, on:
- safety and tolerability
- adherence
Primary objectives
To evaluate the effect of 12 weeks of acupuncture and moxibustion, compared with active control, on the percentage change from baseline in low-density lipoprotein cholesterol (LDL-C) among those with hyperlipidemia.
Secondary objectives
To assess the effects of 12 weeks of acupuncture and moxibustion, compared to active control, in subjects with hypercholesterolemia, on:
- the absolute change in LDL-C
- the percent change in high-density lipoprotein cholesterol (HDL-C)
- the percent change in total cholesterol (TC)
- the percent change in triglycerides (TG)
- the rate of subjects achieving the LDL-C goal
To evaluate the safety and tolerability of acupuncture and moxibustion, given for 12 weeks, in subjects with hypercholesterolemia.
To evaluate adherence to acupuncture and moxibustion.
This is a multicenter, open-label, randomized, stratified, active-controlled, noninferiority trial with two parallel groups. Randomization will be performed with a 1:1 allocation. Subjects with LDL-C values above the recommended level who meet the inclusion/exclusion criteria will be stratified based on their risk levels of heart disease [19, 20]. They will then be instructed to follow the NCEP Adult Treatment Panel (ATP) [20] Therapeutic Lifestyle Change (TLC) diet first. After TLC is completed, subjects who have not reached the target lipid level will be randomly assigned to the treatments of either acupuncture and moxibustion or simvastatin. The duration of treatment for this study will be 12 weeks, followed by another 4 weeks for post-treatment assessment. The SPIRIT figure for the schedule of enrolment, interventions, and assessments of this study (Fig. 1), and the SPIRIT Checklist, are included as Additional file 1 of this protocol. Fig. 1 presents the SPIRIT schedule of enrolment, interventions, and assessments of the AMH RCT study.
Study setting
The study will include four sites in Hunan, China. They are First Affiliated Hospital of Hunan University of Chinese Medicine, Changsha Chinese Medicine Hospital, Yueyang Chinese Medicine Hospital, and Chenzhou Chinese Medicine Hospital. Other sites may be added for participation in the study. Sites that do not enroll any subjects within 3 months of site initiation will be closed. Each site must follow this study protocol; otherwise, it will be closed. The Steering Committee will audit and review the intervention performed in each site, to ensure that they are following this protocol.
To be included in the study, patients must:
- Provide informed consent
- Be male or female, ≥18 to ≤75 years of age
- Have a fasting triglyceride level ≤400 mg/dL (4.5 mmol/L) by central laboratory at screening
- Have a fasting LDL-C, as determined by the central laboratory on admission, meeting the following values based on risk factor status (see Table 2 in Appendix 1) [19, 20]: 0–1 Risk Factor Group: LDL-C ≥160 mg/dL; 2+ Risk Factor Group: LDL-C ≥130 mg/dL; CHD or CHD risk equivalents (see Table 3 in Appendix 1): LDL-C ≥100 mg/dL
Subjects are excluded from the study if they are diagnosed with one or more of the following conditions:
- Coronary heart disease (CHD) or CHD risk equivalent, not receiving statin therapy, with LDL-C at screening of ≤99 mg/dL
- Heart failure of New York Heart Failure Association (NYHA) class II, III or IV, or last known left ventricular ejection fraction <30%
- Cardiac arrhythmia within 3 months prior to randomization that is not controlled by medication
- Myocardial infarction, unstable angina, percutaneous coronary intervention (PCI), coronary artery bypass graft (CABG) or stroke
- Planned cardiac surgery or revascularization
- Type 1 diabetes; newly diagnosed type 2 diabetes (within 6 months of randomization, or new screening fasting plasma glucose ≥126 mg/dL (7.0 mmol/L) or HbA1c ≥6.5%), or poorly controlled type 2 diabetes (HbA1c >8.5%)
- Persistent systolic blood pressure (SBP) >160 mmHg or diastolic blood pressure (DBP) >100 mmHg
- Thyroid-stimulating hormone (TSH) < lower limit of normal (LLN) or TSH >1.5 × upper limit of normal (ULN), estimated glomerular filtration rate (eGFR) <30 mL/min/1.73 m2, aspartate aminotransferase (AST) or alanine aminotransferase (ALT) >2 × ULN, or creatine kinase (CK) >3 × ULN (all at initial screening or at the end of the lipid stabilization period(s), by the central laboratory)
- Known major active infection, or major hematologic, renal, metabolic, gastrointestinal or endocrine dysfunction
- Deep vein thrombosis or pulmonary embolism within 3 months prior to randomization
Subjects are also excluded if they have taken any of the following medications for more than 2 weeks in the last 3 months prior to LDL-C screening: systemic cyclosporine, systemic steroids, isotretinoin (e.g., Accutane). Some anticoagulation treatments are also excluded (antiplatelet agents are permitted). Female subjects cannot be pregnant or breastfeeding, and premenopausal women must be willing to use at least one highly effective method of birth control during treatment and for an additional 15 weeks after the end of treatment.
Each investigating site was chosen based on documented patient availability. Sites will utilize two main sources for identifying and recruiting potential subjects, as described below:
- Incoming hyperlipidemia patients: the four sites in this study are well known for hyperlipidemia treatment in Hunan, China, so incoming patients seeking hyperlipidemia treatment are the primary source for recruitment
- Advertisements: the opportunity to participate in this study will be advertised widely in local newspapers and other publications that target hyperlipidemia treatment, in order to recruit enough participants during the recruiting period
All eligible subjects will be instructed to follow the NCEP Adult Treatment Panel (ATP) lifestyle-modification diet first [20]. This approach is designated therapeutic lifestyle changes (TLC).
After TLC is completed, subjects who have not reached the target lipid level will be randomly assigned to the treatments of either acupuncture and moxibustion or simvastatin.
Therapeutic Lifestyle Change (TLC)
Subjects who meet the inclusion/exclusion criteria will initiate TLC. ATP III recommends a multifaceted lifestyle approach to reduce risk for CHD. Its essential features are:
- Reduced intakes of saturated fats (<7% of total calories) and cholesterol (<200 mg per day) (see Table 4 in Appendix 1 for the overall composition of the TLC diet)
- Therapeutic options for enhancing LDL lowering, such as plant stanols/sterols (2 g/day) and increased viscous (soluble) fiber (10–25 g/day)
- Increased physical activity
A TLC adherence report form will be prepared and sent to subjects to monitor their lifestyle change process. Re-assessment will be conducted after the TLC period.
Acupuncture and moxibustion versus simvastatin
According to the re-assessment of fasting lipids after TLC, a decision will be made on whether the lipid level has reached the target level (see Table 5 in Appendix 1) and whether subjects should continue with the intervention. Those who have not reached the target lipid level will be randomized in equal proportions between "acupuncture" and "simvastatin."
Acupuncture and moxibustion
Rationale for treatment: Warm-needling acupuncture and medicinal cake-separated moxibustion will be applied in the study, guided by the approach of Traditional Chinese Medicine (TCM). Cake-separated moxibustion belongs to the category of indirect moxibustion, which applies moxibustion, herbs, and acupoints together. In this treatment, a drug-cake mixture is placed on selected acupoints, and then a lit moxa cone is placed onto the cake. In this way, the moxa effect, enhanced by the active components of the drug cake, is allowed to penetrate through the skin at the acupoints. Based on the TCM theory that the heart is in charge of the blood circulation and blood vessels, specific acupoints related to the heart and the heart meridian are selected, such as the front-Mu acupoint of the heart (Juque, RN14) and the back-Shu acupoint of the heart (Xinshu, BL15). BL18, BL20 and BL23 are used to tonify the spleen Qi, strengthen the liver Qi and also maintain the kidney Qi, as well as facilitate the conversion of lipids in the liver. ST40 and ST25, which belong to the stomach meridian and help the transformation function of the spleen, can protect and work against hyperlipidemia by assisting the removal of phlegm from the body. Additionally, five Chinese herbal medicines are selected, including Salviae Miltiorrhizae Radix (Dan shen), Crataegi Fructus (Shan zha), Curcumae Radix (Yu jin), Rhei Radix et Rhizoma (Da huang), and Alismatis Rhizoma (Ze xie). They are able to activate the blood circulation, remove blood stasis and pain, clear away heart fire, remove irritability, nourish the blood and calm the mind. In clinical practice, several studies with small sample sizes have assessed a variety of acupoint prescriptions, medicinal cake ingredients, treatment durations, etc. [12, 16]. Among these studies, Li et al. applied warm needling acupuncture on acupoints such as Fenglong (ST40, bilateral), Zusanli (ST36, bilateral), etc., once a day for 35 days, over a period of 12 weeks, showing a significant decrease in LDL-C, noninferior to statins [12]. Li et al.
applied medicinal cake-separated moxibustion on two groups of acupoints, alternating over a period of 40 days, with each group receiving 20 sessions of intervention [16]. This study demonstrated a remarkable decrease in LDL-C and TC, superior to the effect of a placebo. Meanwhile, Chang et al. have also examined the effects of medicinal cake-separated moxibustion at different dosages on blood lipids and hemorheology in patients with hyperlipidemia, showing that three moxa cones and five moxa cones both have a therapeutic effect [17]. From these studies, as far as we know, warm-needling acupuncture and cake-separated moxibustion are each effective for hyperlipidemia. Together with our previous clinical experience in treating hyperlipidemia, we used the studies mentioned above as pilot studies for this trial in determining the course and procedures of the intervention.
Needling/moxibustion details (demonstrated in Table 1)
Treatment regimen: interventions of warm-needling acupuncture and medicinal cake-separated moxibustion will be split into two groups. Group 1 (warm-needling acupuncture and medicinal cake-separated moxibustion) and group 2 (medicinal cake-separated moxibustion only) will alternate by week over the administration period (shown in Fig. 2), in order to avoid fatigue and nonresponse of acupoints due to constant stimulation.
Practitioners: registered acupuncturists who have completed post-secondary education on acupuncture, practiced acupuncture for more than 3 years, and passed the national qualification exam for Chinese medicine doctors.
Table 1. Needling and moxibustion details
Warm-needling acupuncture (needle + moxa stick):
- Acupoints: Fenglong (ST40, bilateral), Zusanli (ST36, bilateral), Sanyinjiao (SP6, bilateral)
- Depths of insertion: Fenglong (ST40), 1.0 to 2.0 cun; Zusanli (ST36) and Sanyinjiao (SP6), 1.0 to 1.5 cun ("cun" is a traditional Chinese measure using the width of a person's thumb at the knuckle, whereas the width of the 2 forefingers denotes 1.5 cun and the width of 4 fingers (except the thumb) side-by-side is 3 cun. Therefore, 1 cun may vary from person to person)
- Needle stimulation: manual manipulation
- Responses elicited: de qi sensation
- Needle retention time: 30 min
- Needle specifications: sterile single-use acupuncture needles of 25–40 mm in length and 0.30 mm in diameter; manufactured by Suzhou Medical Supplies Co., Ltd., Suzhou, China
- Moxibustion specifications: small moxa stick, made of mugwort, 1.5 cm in length; manufactured by Suzhou Medical Supplies Co., Ltd., Suzhou, China
- Procedure: (1) locate and sterilize the acupoints; (2) insert the needles and stimulate to elicit a response; (3) attach a small moxa stick to the needle tail and light the moxa stick; and (4) retain needles with moxa sticks for 30 min
Medicinal cake-separated moxibustion (medicinal cake + moxa cone):
- Acupoints: Juque (RN14), Tianshu (ST25, bilateral), Pishu (BL20, bilateral), Xinshu (BL15, bilateral), Ganshu (BL18, bilateral), Shenshu (BL23, bilateral)
- Medicinal cake ingredients: Dan Shen (Radix Salviae Miltiorrhizae), Shan Zha (Fructus Crataegi), Yu Jin (Radix Curcumae), Da Huang (Radix et Rhizoma Rhei) and Ze Xie (Rhizoma Alismatis)
- Cake preparation method: all cake ingredients in the same quantities were collected, ground into powder, mixed well with vinegar, and made into round, thin cakes of 1.5 cm in diameter, 3 mm in thickness and 3 g in weight; manufactured by Suzhou Medical Supplies Co., Ltd., Suzhou, China
- Moxibustion specifications: moxa cone, made of mugwort, 1 cm in diameter; manufactured by Suzhou Medical Supplies Co., Ltd., Suzhou, China; 3 cones in total for 30 min
- Procedure: (1) locate the acupoints; (2) place the herbal cake onto the acupoints; (3) place a moxa cone onto the herbal cake and light the moxa cone; (4) renew the moxa cone once it is fully consumed, with 3 moxa cones in total per acupoint; and (5) retain herbal cakes with moxa cones for 30 min
Treatment regimen flowchart (Fig. 2). This figure demonstrates the treatment regimen and flowchart for the intervention of warm needling and medicinal cake-separated moxibustion.
Simvastatin
- Dosage: simvastatin (10 mg/day). All simvastatin tablets will be stored in the central pharmacy and assigned to each investigator site by qualified staff. The drug will be administered orally to subjects at the investigator site by a physician or a nurse, in accordance with the instructions in the medication guides.
- Schedule: 7 days per week for 12 weeks
- Dosage adjustment: there will be no dose adjustments in this study. If, in the opinion of the investigator, a subject is unable to tolerate a specific dose of simvastatin and requires dosage adjustment, that subject will discontinue simvastatin but will continue to return for all other study procedures and measurements until the end of the study.
Discontinuation of intervention
Subjects in the study are able to withdraw from the study at any time, either partially or entirely. When a subject fully consents to withdrawal from the study, the subject will no longer receive the investigational treatment and will have the right to discontinue any further involvement in the study, including any form of follow-up.
Below is a list of reasons for discontinuing the intervention:
- Withdrawal of full consent by the subject
- Subject requests the ending of the investigational intervention
- Administrative decision by the principal investigator
- Unanimous decision by the principal investigator/physician
- Pregnancy in a female subject
- Adverse event (e.g., serious adverse event related to the intervention)
Concomitant medication
The following treatments are not permitted during the study:
- Prescribed lipid-regulating medications other than acupuncture and moxibustion or simvastatin, such as fibrates and derivatives, and bile-acid sequestering resins
- Red yeast rice, niacin >200 mg per day, omega-3 fatty acids (e.g., DHA and EPA) >1000 mg per day
- Any other drug that significantly affects lipid metabolism (e.g., systemic cyclosporine, systemic steroids (administered intravenously (IV), intramuscularly (IM), or per os (PO)), vitamin A derivatives and retinol derivatives for the treatment of dermatologic conditions (e.g., Accutane)). Vitamin A as part of a multivitamin preparation is permitted.
- Prescribed amphetamines, or amphetamine derivatives, and weight-loss medications
In addition, drugs or foods that are known potent inhibitors of CYP3A (itraconazole, ketoconazole and other antifungal azoles, the macrolide antibiotics erythromycin and clarithromycin, the ketolide antibiotic telithromycin, HIV protease inhibitors, the antidepressant nefazodone, and grapefruit juice in large quantities (more than 1 quart daily)) should not be used during the study, because of their potential impact on the metabolism of certain statins.
Outcome measurements
The primary outcome is the percentage change from baseline to the end of the study (after the 12-week intervention and after another 4-week follow-up) in LDL-C. The calculated LDL-C concentration will be determined at screening (entry to the study), day 1 (after TLC is completed and eligibility is determined), week 6, and week 12. LDL-C will be measured by preparative ultracentrifugation by the central laboratory. The LDL-C concentration calculated at screening will be reported only for the eligibility decision. The baseline lipid measurement for the purpose of analysis will be the day-1 lipid measurement after TLC (lipid stabilization). This will be collected after a ≥9-h fast and before receiving the intervention.
The secondary outcomes consist of three aspects: efficacy, safety and adherence. The efficacy outcomes include, from baseline to the end of the study (after the 12-week intervention and after another 4-week follow-up), the absolute change in LDL-C, the percentage change in HDL-C, the percentage change in total cholesterol (TC), the percentage change in TG, and the rate of subjects achieving the LDL-C goal (see Table 5 in Appendix 1) [21]. The safety outcomes will be assessed by physical examination, vital signs (including sitting blood pressure (BP) and heart rate (HR)), electrocardiogram (ECG) and subject incidence of adverse events, after the 12-week intervention and after another 4-week follow-up. The adherence outcome will be examined by the adherence rate, calculated as: $$ \mathrm{Adherence}\ \mathrm{Rate}=\mathrm{Number}\ \mathrm{of}\ \mathrm{treatments}\ \mathrm{conducted}/\mathrm{Number}\ \mathrm{of}\ \mathrm{treatments}\ \mathrm{planned}. $$
Study procedures
The study consists of four periods: Screening, TLC, Intervention, and Follow-up. For the purpose of this study, a week is defined as seven calendar days. A month is defined as 28 days.
Screening period
Screening is conducted among subjects who meet the inclusion criteria to determine their eligibility to enter the study. To determine eligibility, they will enter the screening process by signing and dating the Informed Consent Form for this study. The following data will be obtained and procedures performed during initial screening:
- Vital signs (see Appendix 2)
- Review for adverse events (serious adverse events (SAEs) and study-related AEs are collected during screening)
- Concomitant therapy
- 12-lead ECG in triplicate using centralized ECG services equipment
- Blood draw for fasting lipids, chemistry and serum pregnancy (women of childbearing potential only) by central laboratory (Appendix 2)
TLC period
Subjects who complete the screening successfully and who meet the inclusion/exclusion criteria will initiate therapeutic lifestyle change (TLC). The TLC adherence report form will be prepared and sent to subjects to monitor their lifestyle change process. Re-assessment will be conducted after the TLC period.
Intervention period
Intervention day-1 visit: according to the re-assessment of fasting lipids after TLC, a decision will be made on whether the lipid level has reached the target level and whether subjects should continue with the intervention. Those who have not reached the target lipid level will undergo randomization and receive the first administration of the intervention; that date is considered to be day 1. Those who have reached the target lipid level after TLC will be followed up for another 6 weeks and re-assessed for the relevant indexes in week 6.
Week 6 visit (±3 days): re-assessment of the study indicators of subjects will be conducted, including efficacy, safety, and adherence.
Week 12 end of study visit (±3 days)
Follow-up period
A 4-week follow-up after treatment will be done in order to measure and monitor AEs and the effects of treatments at week 16. The study indicators of subjects will include efficacy and safety.
Trial flowchart (please refer to the supplementary document: Fig. 3. Trial flowchart). This figure provides an overview of participant flow in this trial.
Sample size
A two-sided significance level of 0.05 (95% confidence) and a power of 0.80 will be used for the calculation of sample size. In accordance with a previous study [17], we could employ an effect size of 0.343 on the absolute change of the LDL-C value, and a standard deviation of 0.7. In this case we estimate that the sample size for the acupuncture group and the active control group should be approximately 65 each. The total sample size needed is 130. To allow for missing observations and loss to follow-up, we assume that 90% of subjects will remain in the study. Thus, the sample size should be roughly 70 for each group (a worked sketch of this calculation is given below).
Stratified and blocked randomization
According to the re-assessment of fasting lipids after TLC, a decision will be made on whether the lipid level has reached the target level and whether subjects should continue with the intervention. Subjects will receive the intervention at the site at which they are recruited. Within each site, subjects will be stratified by risk group (0–1 risk factor group, 2+ risk factor group, and CHD or CHD risk equivalents group), as target lipid levels differ between risk groups. Subjects who have not reached the target lipid level after TLC will be randomized to two arms at a ratio of 1:1 (acupuncture and moxibustion versus simvastatin).
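As a cross-check of the sample-size calculation above, the following minimal Python sketch (illustrative only; the protocol does not state which formula or software was actually used) applies the standard normal-approximation formula for comparing two means with a 1:1 allocation.

```python
from math import ceil
from scipy.stats import norm

def n_per_group(delta, sd, alpha=0.05, power=0.80):
    """Per-arm sample size for detecting a mean difference `delta` between two
    groups with common standard deviation `sd` (two-sided test, 1:1 allocation)."""
    z_alpha = norm.ppf(1 - alpha / 2)   # about 1.96
    z_beta = norm.ppf(power)            # about 0.84
    return 2 * ((z_alpha + z_beta) * sd / delta) ** 2

n = n_per_group(delta=0.343, sd=0.7)
print(round(n, 1))        # about 65.4 -> approximately 65 per arm, 130 in total
print(ceil(n / 0.9))      # about 73 enrolled per arm once ~10% attrition is allowed for
```

With these inputs the formula returns about 65 subjects per arm before attrition; inflating for the assumed 90% retention gives an enrolment in the low seventies per arm, of the same order as the roughly 70 per group planned above.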
The randomization will be blocked (block size = 4) to ensure that equal numbers of subjects are allocated to each arm. A computer-generated, blocked randomization sequence will be used to randomize subjects in the study. Assignment to the two treatment arms will be generated by the principal biostatistician using R (version 3.1.1; R Foundation for Statistical Computing, Austria; ISBN 3-900051-07-0) before the start of the study. The sequence will be held in a secure location in the hospital by the principal biostatistician; the researchers will be blinded to the number of cases in each randomization block, and individual subject allocation will be conducted remotely via telephone based on the randomization sequence. Once a subject is eligible to receive the intervention, the principal biostatistician will be informed and will let physicians know each subject's allocation by phone. Due to the nature of the acupuncture and moxibustion interventions, neither participants nor care providers can be blinded to allocation and treatment stages, but they are strongly instructed not to disclose the allocation status of the participant at the follow-up assessments. The outcome assessment will be conducted by outcome assessors blind to the treatment allocation. At the data analysis stage, an employee outside the research team will enter data into the computer in separate datasheets so that the data analysts can analyze the data without having access to information about the allocation. To maintain the overall quality and legitimacy of the clinical trial, code breaks should occur only in exceptional circumstances when knowledge of the actual treatment is absolutely essential for further management of the patient. Investigators, including the outcome assessors and data analysts blind to treatment allocation, are encouraged to discuss with the medical advisor if they believe that unblinding is necessary. The investigator is encouraged to maintain the blind as far as possible. The actual allocation must not be disclosed to the patient and/or other study personnel. There should not be any written or verbal disclosure of the code in any of the corresponding patient documents. The investigator must report all code breaks (with reason) as they occur on the corresponding Case Report Form page. Unblinding should not necessarily be a reason for study drug discontinuation.
Data collection methods
The methods of outcome data collection have been described in the previous sections. In order to ensure data quality, each site's personnel involved in this study will be trained to master the data collection requirements; the data to be collected and the procedures to be conducted at each visit will be reviewed in detail.
Participant retention
In order to promote participant retention, once a patient is enrolled or randomized, the study sites will make every reasonable effort to follow the patient for the entire study period. In detail, study investigators and staff will:
- Maintain participants' interest through study materials and phone calls
- Provide periodic communications, via materials and talks, to acknowledge participants for their support
- Be as flexible as possible with the study schedule in resolving time conflicts with participants' work and life
Participant withdrawal
Participants may withdraw from the study for any reason at any time.
The study investigators may also withdraw participants from the study with the purpose of protecting them and/or if they are unwilling to follow the study procedures. Once a withdrawal happens, it should be recorded by the steering committee and the data management team. Data forms and data entry In this study, all data will be entered electronically. Original data along with the data collection form will be archived at the participating sites. Once a form has been filled out, the participating site staff will copy the form and send it to data management team for electronic re-entry. Participant files will be stored by participation number. The storage location should be secured and accessible. The files will be kept in storage for 5 years after the study. Data transmission and editing Data transmission refers to the transmission of data from the original data form to computers used by the data management team. Data entry staff will carefully check whether the transmission is correct at the time of data entry. Data editing, either modification on single record or on multiple records, will be documented, along with the reason of the editing. Data discrepancy reports and solutions Errors may be detected by data analysts, including missing data error, or other specific errors in the data. The analyst should summarize the error and report to the data manager. The data manager who receives the data discrepancy reports will forward the specific problem to the data management staff in the participating site, retrieve the original data forms, and check whether there is an inconsistency. There are several solutions for data discrepancy. If the data in the database is inconsistent with the original form, a primary choice is to correct the record in the database. Other alternative choices include missing data imputation techniques, data correction by checking other sources, and modifying the original data. The discrepancy reports and solutions should be documented for future reference. Data back-up The primary database will be backed up once per month. At the same time, the data analysis files will also be backed up. Dataset analysis For the primary endpoint of percentage change in LDL-C from baseline, the efficacy of acupuncture and moxibustion will be evaluated at week 12 by comparing the treatment effect to that of the active control group. A type I error of 0.05 will be allocated to the test. There are three main analysis sets: Full analysis set (FAS): includes all randomized subjects who have received at least 1 investigational treatment Completer Analysis Set (CAS), includes subjects in the FAS who completed their scheduled intervention and have a nonmissing LDL-C value at week 12 Effect Durability Analysis Set (EDAS), includes subjects in the FAS who completed their scheduled intervention and have non-missing LDL-C values at week 6 and week 12 Statistical analysis plan The primary analysis set for the primary endpoint is the FAS. It is to use the repeated measures, linear mixed-effects model, including terms for treatment group, stratification factor, baseline LDL-C, scheduled visit and the interaction of treatment with scheduled visit. Missing values will not be imputed when the repeated measures, linear mixed-effects model is used. We will use the chi-squared test for binary outcomes, and the t test for continuous outcomes. For subgroup analyses, we will use regression methods with appropriate interaction terms (respective subgroup treatment group). 
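As a purely illustrative sketch (the table layout and column names below, such as ldl_baseline and ldl_week12, are hypothetical and not taken from the protocol's data forms), the primary endpoint and the analysis-set definitions above could be derived from a subject-level table as follows.

```python
import numpy as np
import pandas as pd

# Hypothetical subject-level data; one row per randomized subject (LDL-C in mmol/L).
df = pd.DataFrame({
    "subject_id":   [1, 2, 3],
    "treated_once": [True, True, False],    # received >= 1 investigational treatment
    "completed":    [True, False, False],   # completed the scheduled intervention
    "ldl_baseline": [4.2, 3.9, 4.6],        # day-1 measurement after TLC
    "ldl_week6":    [3.1, 3.5, np.nan],
    "ldl_week12":   [2.9, np.nan, np.nan],
})

# Primary endpoint: percentage change in LDL-C from baseline to week 12
df["ldl_pct_change_w12"] = 100 * (df["ldl_week12"] - df["ldl_baseline"]) / df["ldl_baseline"]
# Secondary endpoint: absolute change in LDL-C
df["ldl_abs_change_w12"] = df["ldl_week12"] - df["ldl_baseline"]

# Analysis-set flags, following the FAS/CAS/EDAS definitions above
df["FAS"]  = df["treated_once"]
df["CAS"]  = df["FAS"] & df["completed"] & df["ldl_week12"].notna()
df["EDAS"] = df["FAS"] & df["completed"] & df["ldl_week6"].notna() & df["ldl_week12"].notna()

print(df[["subject_id", "ldl_pct_change_w12", "FAS", "CAS", "EDAS"]])
```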
Multivariable analyses will be based on logistic regression for binary outcomes and linear regression for continuous outcomes. We will examine the residual to assess model assumptions and goodness-of-fit. For timed endpoints, such as mortality, we will use the Kaplan-Meier survival analysis followed by the multivariable Cox proportional hazards model for adjusting for baseline variables. We will calculate Relative Risk (RR) and RR Reductions (RRR) with corresponding 95% confidence intervals to compare dichotomous variables, and difference in means will be used for additional analysis of continuous variables. P values will be reported to four decimal places with P values < 0.05 reported as P < 0.05. Up-to-date versions of R (R Foundation) will be used to conduct analyses. For all tests, we will use two-sided P values with alpha no greater than 0.05 level of significance. We will use the Bonferroni method to appropriately adjust the overall level of significance for multiple primary outcomes, and secondary outcomes. In investigation of the robustness of the analysis results, the primary analysis will be repeated using the CAS. In addition, parametric analysis of covariance (ANCOVA) and appropriate nonparametric methods will be used on the FAS, in which missing data will be imputed using the last-observation-carried-forward (LOCF) approach. The primary analysis will also be repeated for subgroups of interest. Interim analyses An interim-analysis is performed on the primary endpoint when 50% of patients have been randomized and have completed the interventions. The interim-analysis is performed by an independent statistician, blinded for the treatment allocation. The statistician will report to the independent data monitoring committee. The data monitoring committee will have unblinded access to all data and will discuss the results of the interim-analysis with the steering committee in a joint meeting. The steering committee decides on the continuation of the trial and will report to the central ethics committee. Consistency analysis To evaluate the consistency of the treatment effect of acupuncture and moxibustion, the following analyses will be performed: Consistency of treatment effects (acupuncture and moxibustion versus simvastatin) of week 6 and week 12: the 95% CI of the difference of the treatment effect at week 6 and the treatment effect at week 12 will be provided from the repeated measure mixed-effect model. The estimations will be from the FAS and the EDAS LDL change from week 6 to week 12: the treatment effect of percentage change from week 6 to week 12 will be estimated by using an ANCOVA model. The estimation of 95% CI will be based on the EDAS Analysis of other secondary efficacy endpoints will be similar to the primary analysis of the primary endpoint. All tests will be based on a significance level of 0.05, without adjustment for multiple endpoints. Safety summaries and adherence analysis Safety summaries will include the subject incidence of AEs, summaries of laboratory parameters, vital signs, and ECGs. A Data Monitoring Committee (DMC) has been established. The DMC is independent of the study organizers. During the period of recruitment to the study, interim analyses will be supplied, in strict confidence, to the DMC, together with any other analyses that the committee may request. This may include analyses of data from other comparable trials. 
In the light of these interim analyses, the DMC will advise the trial steering committee when:
The active intervention has been proved, beyond reasonable doubt, to be different from the control (standard management) for all or some types of participants, and
The evidence on the outcomes is sufficient to guide a decision by health care providers.
The Trial Steering Committee can then decide whether or not to modify intake to the trial. Unless this happens, however, the Steering Committee will remain unaware of the interim results. An adverse event is defined as "any untoward medical occurrence in a clinical trial subject" [ 22 ]. An adverse event may or may not have a causal relationship with the treatment in the study. All adverse events observed by the investigator or study practitioner (e.g., physicians), or reported by the subjects, are to be recorded in the medical record of the subject. Worsening of a pre-existing medical condition is considered one type of adverse event. For example, if a patient's diabetes, migraine headaches, or gout worsens over time (e.g., increases in severity, frequency, or duration) with the administration of the treatment, this indicates that the administration of the treatment may be associated with a significantly worse outcome in the subject. All adverse events observed by the investigator or study practitioner (e.g., physicians) or reported by the subject from randomization through the end of the study will be recorded by the investigator on the applicable electronic Case Report Form (eCRF) (e.g., Adverse Event Summary eCRF).
Adverse event attributes
Below are the attributes the investigator must assign to each adverse event (a minimal record structure for these attributes is sketched below):
Adverse event diagnosis or syndrome(s), if known (if not known, signs or symptoms),
Dates of onset and resolution,
Severity (and/or toxicity per protocol),
Assessment of relatedness to the investigational treatment, and
Action taken
Adverse event assessment
Below is a list of questions the investigators must assess. Each question is answered with either a "yes" or "no":
Is there a reasonable possibility that the event may have been caused by the study intervention?
Is there a reasonable possibility that the event may be related to screening procedures?
Is there a reasonable possibility that the event may have been caused by a study activity/procedure?
Adverse events and laboratory tests
Laboratory test results will be reviewed thoroughly by the investigator to determine whether the change in abnormal values from the baseline values is clinically significant (based on the clinician's own judgment). After reviewing the changes, the investigator will determine whether an adverse event will be reported.
Abnormal laboratory findings that are not clinically significantly different from baseline values will not be recorded as adverse events.
Laboratory findings that are clinically significant, or that require treatment or adjustment of the current treatment, will be recorded as adverse events.
Clinical sequelae should also be recorded as adverse events, where applicable.
Adverse events and participant withdrawal
The investigator will use his/her clinical judgment to determine whether a subject should be removed from the study because of adverse events. The subject or their legal representatives also have the full right to withdraw from the treatment because of an adverse event or concerns about an adverse event. However, the investigator will encourage the subjects to undergo at least an end-of-study assessment.
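Purely as an illustration of the adverse event attributes and yes/no assessment questions listed above, a minimal record structure might look like the following sketch; all field names are hypothetical and are not taken from the study eCRF.

from dataclasses import dataclass
from datetime import date
from typing import Optional

@dataclass
class AdverseEventRecord:
    # Attributes assigned by the investigator
    diagnosis_or_symptoms: str
    onset: date
    resolution: Optional[date]          # None while the event is ongoing
    severity: str                       # e.g., "mild" / "moderate" / "severe" per protocol
    related_to_treatment: bool
    action_taken: str
    # Yes/no assessment questions
    possibly_caused_by_intervention: bool = False
    possibly_related_to_screening: bool = False
    possibly_caused_by_study_procedure: bool = False

# Example entry (hypothetical)
ae = AdverseEventRecord(
    diagnosis_or_symptoms="transient dizziness",
    onset=date(2015, 3, 2),
    resolution=date(2015, 3, 3),
    severity="mild",
    related_to_treatment=False,
    action_taken="none",
)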
Serious adverse events
A serious adverse event is defined as an adverse event that meets at least one of the following criteria:
Life-threatening (places the subject at immediate risk of death)
Requires in-patient hospitalization or prolongation of existing hospitalization (the criterion "requires hospitalization" indicates an event that necessitated admission to a health care facility, e.g., an overnight stay in hospital)
Results in persistent or significant disability/incapacity
Congenital anomaly/birth defect
Other medically important serious event
A serious adverse event can still be recorded if an investigator considers an event to be clinically significant. In this case, the event will be classified as "other medically important serious event" and comprehensive documentation of the event's severity will be recorded in the subject's medical record. Examples include allergic bronchospasm, convulsions, blood dyscrasia, or events that necessitate an emergency room visit, outpatient surgery, or urgent intervention.
Reporting procedures for serious adverse events
All serious adverse events observed or reported from randomization through 4 weeks after the last investigational treatment will be recorded in the subjects' medical records by the investigator and submitted to the principal investigator within one working day of discovery. These include new information on a previously known serious adverse event, a serious adverse event possibly related to the treatment, and full withdrawal by the subject from the study because of a serious adverse event. In the case of a serious, unexpected, and related adverse event, the subject may be unblinded by the investigator prior to submission of the adverse event to the regulatory authority. Before the clinical trial, investigators will receive information on how to report serious adverse events to the authorities in accordance with local requirements and Good Clinical Practice. All serious adverse events will be reported to the appropriate Independent Ethics Committee (IEC).
Pregnancy reporting
If a pregnancy occurs while a female subject is undertaking the study intervention, the pregnancy should be reported to the research team. A pregnancy occurring within 4 weeks after completion of the study intervention should also be reported.
Study dissemination
All study-related information will be stored securely at the study site. All participant information will be stored in locked file cabinets in areas with limited access. All laboratory specimens, reports, data collection forms, process forms, and administrative forms will be identified by a coded ID (identification) number only, to maintain participant confidentiality. All records that contain names or other personal identifiers, such as locator forms and Informed Consent Forms, will be stored separately from study records identified by code number. All local databases will be secured with password-protected access systems. Forms, lists, logbooks, appointment books, and any other listings that link participant ID numbers to other identifying information will be stored in a separate, locked file in an area with limited access. Participants' study information will not be released outside of the study without the written permission of the participant.
Access to data
The Data Management Coordinating Center will oversee the intrastudy data-sharing process. All data sets will be password-protected.
The principal investigator will have direct access to the data sets. With her permission, the statistician will use the data sets for analyses. Outside requests for further research cooperation will be discussed by the Steering Committee. The research protocol will be published. There is no plan to grant public access to the full participant-level data sets. All publications should be conducted and monitored by the principal investigator and the Steering Committee. That means that each paper or abstract must be written, reviewed, released, and published with the authorization of the PI and the Steering Committee. The timing of presentation of the endpoint data and the meetings at which they may be presented will also be determined by the PI and the Steering Committee. In order to achieve the study objectives and maintain scientific integrity, data should be analyzed collectively; a participating site is not permitted to publish its data and analysis results independently. If invitations are received from workshops, symposia, or volumes in related areas, the individuals handling such requests should report to the PI and Steering Committee with a detailed proposal, stating clearly what will be presented and how.
Authorship eligibility
Many topics will be suggested on the basis of the study database, and each topic may be developed into a peer-reviewed article. The PI and the Steering Committee will work together to decide the authorship of each article on the basis of contributions to the manuscripts; the investigator who contributes most will be considered the first author. Disputes regarding authorship will be resolved by the PI with careful consideration. If professional writers are needed, the PI and the Steering Committee will determine whether to include them in the authorship.
The current conventional treatment for hyperlipidemia, statins, not only entails significant cost to patients and society but is also associated with low treatment compliance and high rates of side effects [ 3 , 4 ]. Effective treatment alternatives for hyperlipidemia have been explored to address the shortcomings of statins, and acupuncture and moxibustion are considered potentially effective regimens on the basis of clinical practice and clinical research [ 9 – 11 , 17 , 18 , 23 ]. However, previous studies have not proved acupuncture or moxibustion, applied separately, to be as effective as statins, despite the existence of their clinical effects [ 9 – 11 , 17 , 18 , 23 ]. Additionally, as complementary and alternative medicine (CAM) approaches, their introduction to the mainstream medical community is often limited by a lack of robust evidence and differences in the philosophy adopted. Therefore, the RCT described in this paper has been developed to validate the efficacy and safety of combined acupuncture and moxibustion for hyperlipidemia via a well-designed evaluation. There are several strengths in the design of this study. First, the intervention adopted in this study is developed not only on the basis of profound TCM theory but is also supported by comprehensive scientific research. The theory guiding the acupoint selection and the traditional Chinese herb composition has been studied, identified, and well demonstrated, as discussed in the "Treatment rationale" section.
Meanwhile, several studies from the perspectives of laboratory animal medicine and clinical medicine have been conducted [ 9 – 11 , 17 , 18 , 23 ], which serve as the key foundation of intervention development in this study. Specifically, the biological mechanism by which cake-separated moxibustion can affect hyperlipidemia has been explored, and the acupoint selection and regimen implementation derived from ample TCM theory have been validated in multiple clinical studies. Second, TLC will be applied prior to both interventions (acupuncture and moxibustion, and statins), which enables this study to reflect actual practice in clinical settings to the greatest extent. According to the guideline on hyperlipidemia treatment [ 20 ], TLC is recommended as the first step before any other treatment. By adhering to the commonly adopted clinical guideline during the study procedure, on the one hand, we can uphold the "non-maleficence" and "justice" principles of study ethics, because the opportunity to receive appropriate medication will not be compromised for patients attending this study; on the other hand, this can enhance the generalizability of the study by following real-life clinical operations. Third, stratification by risk level for heart disease [ 19 , 20 ] will be applied in this study, which makes it possible to differentiate and specify the effect of the interventions according to different risk groups. Patients with hyperlipidemia may enter the study with different initial lipid levels, because medication is initiated at different time points for groups with different risk levels. Meanwhile, the goal lipid level for each patient may also differ accordingly. It is impractical and erroneous to set exactly the same criteria for determining the clinical effect in all patients. In recognition of this, patients will be stratified by their risk levels during the study, and the effects of acupuncture and moxibustion versus simvastatin can then be observed and examined within each risk level, instead of within one large group with extensive variance. This stratification reflects that patients' risk levels for heart disease are considered important factors affecting the outcomes in this study; it enables exploration of this impact on the effects of the interventions, which can further guide appropriate intervention by targeting the right populations. In addition, different treatment goals will be adopted for different risk groups according to the standard clinical guideline; this ensures that the study will be implemented in accordance with daily clinical practice to the greatest extent, and thus external validity can be largely warranted. However, certain limitations still exist in this study. One is the absence of a placebo control group, owing to the limited study budget. Because of this lack of comparison, it may not be possible to determine whether the outcome observed is due to the therapeutic effect of acupuncture and moxibustion or to a placebo effect, if acupuncture and moxibustion is proved to be inferior to simvastatin. Nevertheless, judgment from clinical experience and relevant findings from previous studies can help in identifying and justifying an effect of clinical significance. Another limitation is the absence of blinding of patients and health care providers, owing to the operational characteristics of acupuncture and moxibustion.
With patients knowing their intervention assignments after randomization, there may be a higher possibility of cross-over between, and drop-out from, the two arms; thus, selection bias may occur. However, a detailed explanation of the two intervention arms can be given to patients to inform them of the potential effects and safety in both arms, which will help limit the drop-out and cross-over rates. Moreover, with the intervention assignment revealed, the effect identified in the acupuncture and moxibustion group may not result fully from its therapeutic aspect but partly from a psychological aspect. However, the outcome measures examined in this study are all objective, being based on the fasting lipid level, and during the analysis the intervention assignment is fully blinded to the data assessors. Therefore, the information bias due to the lack of blinding can be controlled to a considerable extent. In conclusion, efforts have been devoted in this study to developing a well-designed assessment approach to evaluate the efficacy and safety of combined acupuncture and moxibustion in treating hyperlipidemia, by integrating TCM regimens with the standardized Western medicine appraisal approach. It is expected that this assessment will introduce a potentially effective alternative treatment for hyperlipidemia by borrowing wisdom from the time-tested practice of TCM.
Primary registry and trial identifying number: ClinicalTrials.gov NCT02269046
Date of registration in primary registry: 16 October 2014
Primary sponsor: The First Affiliated Hospital of Hunan University of Traditional Chinese Medicine
Contact for public queries: Dr. Mailan Liu. Email: [email protected]
Contact for scientific queries: Dr. Xiaorong Chang. Email: [email protected]
Public title: Acupuncture and Moxibustion for Hyperlipidemia
Scientific title: Acupuncture and Moxibustion for Hyperlipidemia (AMH-RCT): Study Protocol for a Randomized Controlled Trial
Countries of recruitment: China
Health condition(s) or problem(s) studied: hyperlipidemia
Intervention(s): acupuncture and moxibustion, simvastatin
Study type: interventional
Date of first enrollment: December 2014
Target sample size: 130
Recruitment status: recruiting
We thank our colleagues from Hunan University of Chinese Medicine, China, the Rural Coordination Center of BC, Vancouver, Canada, the University of British Columbia, Vancouver, Canada, the Third Clinical College of Zhejiang Chinese Medical University, China, and Changchun University of Chinese Medicine, China, who provided insight and expertise that greatly assisted the research. The funding sources are disclosed in the supplementary document: Funding Disclosure. There are three sponsors for this study; all of them are external funding sources. The first funding source is the State Administration of Traditional Chinese Medicine of China, with Grant Number 2015CB554502. The second funding source is the National 973 Projects of China, from the Ministry of Science and Technology of China, with Grant Number 2014CB543102. The third source is the Third Clinical College of Zhejiang Chinese Medicine University, with Grant Number ZTK2014A03. None of the three funding sources had any role in the design of the study, the collection, analysis, and interpretation of data, or the writing of the manuscript. Not applicable at this stage. All authors have made substantial intellectual contributions to this protocol. ML and XC led the systematic literature review and the research question.
ML and QZ led the development of the interventions. ML, QZ, and SJ led the study design. QZ and SJ led the statistical analysis scheme. ML, GZ, and XC provided improvements to the interventions. ZY, JZ, and GD developed the data management scheme. ZW and QC developed the adverse events management scheme. XC, GZ, and JZ provided the principles of study dissemination. ML, QZ, and SJ drafted the manuscript together, and their contributions are equal. YZ, DL, MM, and HZ revised the manuscript critically. All authors have contributed to the drafting process. All authors have read and approved the final manuscript. The consent form for publication will be provided to patients if they participate in this study. Results and articles will be published only after the participants sign the consent form for publication, which shows that they agree to the publication of identifiable data. The form has been approved by the Ethics Committee of Hunan University of Chinese Medicine. Please refer to the supplementary document: Consent Form.
Research ethics approval
The protocol has been reviewed and approved by the Independent Ethics Committee (IEC) of the 1st Affiliated Hospital, Hunan University of Chinese Medicine (Ethics Approval Reference Number: HN-LL-KY-2014-005-01). The other three hospitals participating in this research, located in nearby cities, are affiliated with the 1st Affiliated Hospital of Hunan University of Chinese Medicine. Any clinical research activities in the three hospitals are led by the 1st Affiliated Hospital and should be approved by its IEC. The investigator will make safety and progress reports to the IEC within 3 months of study termination. These reports will include the total number of participants enrolled and summaries of the data monitoring committee's review of safety.
Protocol modifications
Any modification to the protocol that may impact the conduct of the study, the potential benefit to the patient, or patient safety, including changes to the study objectives, study design, patient population, sample sizes, study procedures, or significant administrative aspects, will require a formal amendment to the protocol. Such an amendment will be initiated by the steering committee and approved by the IEC of Hunan University of Chinese Medicine. Administrative changes to the protocol are minor corrections and/or clarifications that have no effect on the way the study is conducted. These administrative changes will be agreed upon by the principal investigator and will be documented in a memorandum. The IEC of Hunan University of Chinese Medicine may be notified of administrative changes.
Participation consent
Physicians responsible for the interventions will introduce the trial information to potential participants. If a patient shows interest in participating, the physician must provide a consent form and obtain the patient's consent for participation and publication. Written informed consent must be obtained before protocol-specific procedures are undertaken. The risks and benefits of participating in the study will be explained verbally to each potential subject before entry into the study. If acupuncture or simvastatin is administered during a study visit, administration must take place after completion of vital signs, ECG, and blood draw procedures, as applicable. The original consent form in Chinese, with an English translated version, will be submitted as supplementary documents.
Supplementary documents
Ethics approval has been obtained from Hunan University of Chinese Medicine. Please refer to the supplementary document: Ethics Approval. Consent to participate is included in the consent form; please refer to the supplementary document: Consent Form.
Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made. The Creative Commons Public Domain Dedication waiver (http://creativecommons.org/publicdomain/zero/1.0/) applies to the data made available in this article, unless otherwise stated.
Major risk factors for CHD other than LDL [ 13 ]: cigarette smoking; hypertension (BP ≥140/90 mmHg or on antihypertensive medication); low HDL cholesterol (<40 mg/dL); family history of premature CHD (CHD in a male first-degree relative <55 years; CHD in a female first-degree relative <65 years); age (men ≥45 years; women ≥55 years).
CHD and CHD equivalents: other clinical forms of atherosclerotic disease (peripheral arterial disease, abdominal aortic aneurysm, and symptomatic carotid artery disease); multiple risk factors that confer a 10-year risk for CHD >20%.
Nutrient composition of the TLC diet (nutrient: recommended intake):
Saturated fat (a): less than 7% of total calories
Polyunsaturated fat: up to 10% of total calories
Total fat: 25-35% of total calories
Carbohydrate (b): 50-60% of total calories
Fiber: 20-30 g/day
Protein: approximately 15% of total calories
Cholesterol: less than 200 mg/day
Total calories (energy) (c): balance energy intake and expenditure to maintain desirable body weight/prevent weight gain
(a) Trans fatty acids are another LDL-raising fat that should be kept at a low intake. (b) Carbohydrate should be derived predominantly from foods rich in complex carbohydrates, including grains (especially whole grains), fruits, and vegetables. (c) Daily energy expenditure should include at least moderate physical activity (contributing approximately 200 kcal per day).
Three categories of risk that modify LDL-C goals (risk category: LDL goal, mg/dL):
CHD and CHD risk equivalents: <100
Multiple (2+) risk factors: <130
0 to 1 risk factor: <160
Appendix 2. Measurement of Lipid Level and Vital Signs
Measurement of Vital Signs
Blood pressure (BP) and heart rate (HR) should be measured at each visit. BP should continue to be measured in the same arm as in the parent study unless a concomitant condition favors the use of a different arm. An appropriately sized cuff should be used. The diastolic blood pressure (DBP) will be recorded as the pressure noted when sound disappears (Korotkoff Phase V). BP and HR measurements should be determined after the subject has been seated for at least 5 minutes. The subject's pulse should be counted for 30 seconds and the number multiplied by 2 to obtain the heart rate.
Lipid Measurements
The baseline lipid measurements for the purpose of analysis will be the central laboratory screening and day 1 lipid measurements collected after a ≥ 9 hour fast and before receiving the intervention. At each time point, the calculated LDL-C concentration will be determined. In addition, at screening, day 1, week 6, and week 12, LDL-C will be measured by preparative ultracentrifugation at the central laboratory. Only the screening calculated LDL-C concentration will be reported to the site for the eligibility decision.
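The protocol does not state which formula is used for the calculated LDL-C. Assuming the commonly used Friedewald estimate (valid only when triglycerides are below about 400 mg/dL), the calculation and the percentage-change primary endpoint could be sketched as follows in Python, with purely hypothetical values.

def friedewald_ldl(total_chol, hdl, triglycerides):
    """Estimated LDL-C in mg/dL; assumes TG < 400 mg/dL (Friedewald formula)."""
    if triglycerides >= 400:
        raise ValueError("Friedewald estimate is not valid for TG >= 400 mg/dL")
    return total_chol - hdl - triglycerides / 5.0

def pct_change_from_baseline(baseline_ldl, week12_ldl):
    """Primary endpoint: percentage change in LDL-C from baseline."""
    return 100.0 * (week12_ldl - baseline_ldl) / baseline_ldl

# Hypothetical subject: TC 230, HDL 45, TG 150 mg/dL at baseline; TC 205, HDL 47, TG 140 at week 12.
baseline = friedewald_ldl(230, 45, 150)   # 230 - 45 - 30 = 155 mg/dL
week12 = friedewald_ldl(205, 47, 140)     # 205 - 47 - 28 = 130 mg/dL
print(round(pct_change_from_baseline(baseline, week12), 1))  # about -16.1%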
For subjects who are rescreened, data from the first screening period will not be used for the analysis. Investigators, subjects, and the study team will be blinded to post-randomization central laboratory lipid panel values for the duration of the study until unblinding of the database. In addition, investigators and staff involved with this trial and all medical staff involved in the subject's medical care should refrain from obtaining lipid panels between randomization and at least 4 weeks after the subject ends the study at week 12 (to avoid potential unblinding). If a lipid panel is drawn, all reasonable steps must be undertaken to avoid informing the subject and study personnel of the results.
Laboratory Assessments
All screening and on-study laboratory samples will be processed and sent to the central laboratory. The central laboratory will provide a study manual that outlines handling, labeling, and shipping procedures for all blood samples. The date and time of sample collection will be recorded in the source documents at the site. The table below outlines the specific analytes for the serum chemistry, hematology, urinalysis, and other testing to be conducted:
Serum chemistry: BUN or urea, direct bilirubin, AST (SGOT), ALT (SGPT)
Hematology: MCHC, neutrophils, bands, eosinophils, basophils, lymphocytes, monocytes
Urinalysis: epithelial cells
Fasting lipids: total cholesterol, HDL-C, LDL-C, triglycerides, VLDL-C
Other labs (fasting): anti-AMG 145 antibodies, AMG 145, PCSK9, FSH (if needed per exclusion 4.2.17), eGFR (calculated), fasting vitamin E
References
Reiner Z. Statins in the primary prevention of cardiovascular disease. Nat Rev Cardiol. 2013;10:453–64.
Taylor F, Huffman MD, Macedo AF, et al. Statins for the primary prevention of cardiovascular disease. The Cochrane Library. 2013.
Farnier M, Davignon J. Current and future treatment of hyperlipidemia: the role of statins. Am J Cardiol. 1998;82:3J–10J.
Feingold K. Statin therapy in hyperlipidemia: balancing the risk and benefits. 2014.
张宝珍, 张凯, 刘玉珍. 针灸丰隆治疗高脂血症临床随机对照试验 Meta 分析. 中国中医药信息杂志 2014;21:11-15.
马明云. 温和灸影响高脂血症患者微循环的临床研究: 南京: 南京中医药大学; 2013.
许辛寅. 艾灸足三里丰隆对高脂血症模型大鼠干预及临床验证研究: 广州: 广州中医药大学; 2012.
蒋月琴. 温针灸足三里对高脂血症血脂指标的影响分析. 医学理论与实践 2016;29:3226-3228.
Xiaorong C, Jie Y, Zenghui Y, et al. Effects of medicinal cake-separated moxibustion on plasma 6-keto-PGF1alpha and TXB2 contents in the rabbit of hyperlipemia. J Tradit Chin Med. 2005;25:145–7.
Yue Z-H, He X-Q, Chang X-R, et al. The effect of herb-partition moxibustion on Toll-like receptor 4 in rabbit aorta during atherosclerosis. J Acupunct Meridian Stud. 2012;5:72–9.
Liu M, Hu WH, Xie S, et al. Characteristics of acupoint and meridian selection in acupuncture for hyperlipidemia (Chinese). Chin Acupunct Moxibustion. 2015;35:512–6.
李光华. 针灸治疗高脂血症的效果分析. 中西医结合心血管病电子杂志. 2016;20.
陈仲杰. 高脂血症 "温灸和之" 有效性及不同灸治时程对调脂效应的影响: 万方数据资源系统; 2012.
刘未艾, 常小荣, 刘密, et al. 不同灸量隔药饼灸对高脂血症患者血脂及血液流变学的影响. 辽宁中医杂志 2013;9:1787–1790.
常小荣, 严洁, 易受乡, et al. 隔药饼灸治疗血脂异常的临床研究. 中华中医药学刊 2010:8–10.
李爱军. 隔药饼灸治疗高脂血症的临床研究. 广西中医药 2007;30:12–13.
Liu W, Chang X, Liu M, et al. Effects of herbal-cake-separated-moxibustion sized cones at different dosages on blood fat and hemorheology in patients with hyperlipemia. Liaoning Chin Med J. 2013;40:1787–90.
Liu M, Chen X, Lu X, et al. Systematic review of acupuncture treatment effects on patients with hyperlipidemia. J Hunan Univ Chin Med. 2014;12:015.
Smith SC, Grundy SM. 2013 ACC/AHA guideline recommends fixed-dose strategies instead of targeted goals to lower blood cholesterol. J Am Coll Cardiol. 2014;64:601–12.
National Cholesterol Education Program. Detection, evaluation, and treatment of high blood cholesterol in adults (Adult Treatment Panel III). National Institutes of Health; 2001.
NCEP Expert Panel. Third report of the National Cholesterol Education Program (NCEP) expert panel on detection, evaluation, and treatment of high blood cholesterol in adults (Adult Treatment Panel III) final report. Circulation. 2002;106:3143.
Cox EE, Ghali WA, Hébert P, et al. Patient safety: a fundamental aspect of clinical trials through a review of a study on Canadian adverse events. 2014;170:103.
Chang X, Zhou G, Yan J, et al. Effects of cake-separated moxibustion on HDL-C and LDL-C of patients with hyperlipidemia (Chinese). Chin J Basic Med Tradit Chin Med. 1999;5:53–5.
Mailan Liu, Qian Zhang, Shan Jiang, Mi Liu, Guoshan Zhang, Zenghui Yue, Qin Chen, Jie Zhou, Yifan Zou, Mingzhu Ma, Guobin Dai, Huan Zhong, Zhihong Wang, Xiaorong Chang
CommonCrawl
Explicit doubling integrals for $\widetilde {\mathrm {Sp}_2}(\mathbb {Q}_2)$ using "good test vectors"
by Christian A. Zorn
Represent. Theory 14 (2010), 285-323
In a previous paper (see http://www.math.ohio-state.edu/~czorn/works.html), we computed examples of the doubling integral for constituents of the unramified principal series of $\mathrm {Sp}_2(F)$ and $\widetilde {\textrm {Sp}_2}(F)$ where $F$ was a non-dyadic field. These computations relied on certain "good test vectors" and "good theta test sections" motivated by the non-vanishing of theta lifts. In this paper, we aim to prove a partial analog for $\widetilde {\textrm {Sp}_2}(\mathbb {Q}_2)$. However, due to several complexities, we compute the doubling integral only for certain irreducible principal series representations induced from characters with ramified quadratic twists. We develop some $2$-adic analogs for the machinery in the paper mentioned above; however, these tend to be more delicate and have more restrictive hypotheses than the non-dyadic case. Ultimately, this paper and the one mentioned above develop several computations intended to be used for future research into the non-vanishing of theta lifts.
W. Casselman. Introduction to the theory of admissible representations of p-adic reductive groups. Preprint. Accessed at http://www.math.ubc.ca/~cass/research.html.
Stephen Gelbart, Ilya Piatetski-Shapiro, and Stephen Rallis, Explicit constructions of automorphic $L$-functions, Lecture Notes in Mathematics, vol. 1254, Springer-Verlag, Berlin, 1987. MR 892097, DOI 10.1007/BFb0078125
Stephen S. Kudla, Seesaw dual reductive pairs, Automorphic forms of several variables (Katata, 1983), Progr. Math., vol. 46, Birkhäuser Boston, Boston, MA, 1984, pp. 244–268. MR 763017
S. Kudla. On the Theta Correspondence. Lectures at European School of Group Theory, Beilngries 1996. Accessed at http://www.math.toronto.edu/~skudla/ssk.research.html.
Stephen S. Kudla, Michael Rapoport, and Tonghai Yang, Modular forms and special cycles on Shimura curves, Annals of Mathematics Studies, vol. 161, Princeton University Press, Princeton, NJ, 2006. MR 2220359, DOI 10.1515/9781400837168
S. Rallis, On the Howe duality conjecture, Compositio Math. 51 (1984), no. 3, 333–399. MR 743016
R. Ranga Rao, On some explicit formulas in the theory of Weil representation, Pacific J. Math. 157 (1993), no. 2, 335–371. MR 1197062, DOI 10.2140/pjm.1993.157.335
J.-P. Serre, A course in arithmetic, Graduate Texts in Mathematics, No. 7, Springer-Verlag, New York-Heidelberg, 1973. Translated from the French. MR 0344216, DOI 10.1007/978-1-4684-9884-4
T. A. Springer, Reductive groups, Automorphic forms, representations and $L$-functions (Proc. Sympos. Pure Math., Oregon State Univ., Corvallis, Ore., 1977), Proc. Sympos. Pure Math., XXXIII, Amer. Math. Soc., Providence, R.I., 1979, pp. 3–27. MR 546587
Marko Tadić, Jacquet modules and induced representations, Math. Commun. 3 (1998), no. 1, 1–17 (English, with English and Croatian summaries). MR 1648862
Tonghai Yang, An explicit formula for local densities of quadratic forms, J. Number Theory 72 (1998), no. 2, 309–356. MR 1651696, DOI 10.1006/jnth.1998.2258
Tonghai Yang, Local densities of 2-adic quadratic forms, J. Number Theory 108 (2004), no. 2, 287–345. MR 2098640, DOI 10.1016/j.jnt.2004.05.002
C. Zorn. Computing local $L$-factors for the unramified principal series of $\textrm {Sp}_2(F)$ and its metaplectic cover. Univ. of Maryland Thesis, 2007.
C. Zorn. Reducibility of the principal series for $\widetilde {\textrm {Sp}_2}(F)$ over a $p$-adic field. Canadian Journal of Mathematics, to appear. Available online at http://www.math.ohio-state.edu/~czorn/works.html.
C. Zorn. Explicit computations of the doubling integral for $\textrm {Sp}_2(F)$ and $\widetilde {\textrm {Sp}_2}(F)$. Preprint. Available online at http://www.math.ohio-state.edu/~czorn/works.html.
Christian A. Zorn
Affiliation: Department of Mathematics, The Ohio State University, 231 W. 18th Ave., Columbus, Ohio 43210
Email: [email protected]
Received by editor(s): January 9, 2009
Received by editor(s) in revised form: December 7, 2009
Published electronically: March 15, 2010
The copyright for this article reverts to public domain 28 years after publication.
Journal: Represent. Theory 14 (2010), 285-323
MSC (2010): Primary 22E50; Secondary 11F70
CommonCrawl
Enhancement of the optical properties of copper sulfate crystal by the influence of shock waves Aswathappa Sivakumar, Madeswaran Sarumathi, Sathiyadhas Sahaya Jude Dhas, Sathiyadhas Amalapushpam Martin Britto Dhas Journal: Journal of Materials Research , First View Published online by Cambridge University Press: 17 January 2020, pp. 1-10 A systematic analysis was carried out to study the effect of shock waves on copper sulfate crystal in such a way that its optical properties and surface morphological properties were examined for different number of shock pulses (0, 1, 3, 5, and 7) with the constant Mach number 1.7. The test crystal of copper sulfate was grown by slow evaporation technique. The surface morphological and optical properties were scrutinized by optical microscope and ultraviolet–visible spectrometer, respectively. On exposing to shock waves, the optical transmission of the test crystal started increasing from the range of 35–45% with the increase of shock pulses and thereafter started decreasing to 25% for higher number of applied shocks. The optical band transition modes and optical band gap energies were calculated for pre- and post-shock wave loaded conditions. The experimentally obtained data prove that the optical constants such as absorption coefficient, extinction coefficient, skin depth, optical density, and optical conductivity are strongly altered, so also the optical transmission due to the impact of shock waves. Hence, shock wave induced high transmission test crystal can be used as an appropriate candidate for ultraviolet light filter applications. Holly polyphenols alleviate intestinal inflammation and alter microbiota composition in lipopolysaccharide-challenged pigs Xiao Xu, Hongwei Hua, Longmei Wang, Pengwei He, Lin Zhang, Qin Qin, Cheng Yu, Xiuying Wang, Guolong Zhang, Yulan Liu Journal: British Journal of Nutrition / The effect of holly polyphenols (HP) on intestinal inflammation and microbiota composition was evaluated in a piglet model of lipopolysaccharide (LPS)-induced intestinal injury. A total of 24 piglets were used in a 2 × 2 factorial design including diet type and LPS challenge. After 16 d of feeding with a basal diet supplemented with or without 250 mg/kg HP, pigs were challenged with LPS (100 μg/kg BW) or an equal volume of saline for 4 h, followed by analysis of disaccharidase activities, gene expression levels of several representative tight junction proteins and inflammatory mediators, the short-chain fatty acid (SCFA) concentrations, and microbiota composition in intestinal contents as well as proinflammatory cytokine levels in plasma. Our results indicated that HP enhanced intestinal disaccharidase activities and reduced plasma proinflammatory cytokines including tumor necrosis factor-α and interleukin-6 in LPS-challenged piglets. Moreover, HP upregulated mRNA expression of intestinal tight junction proteins such as claudin-1 and occludin. In addition, bacterial 16S rRNA gene sequencing showed that HP altered hindgut microbiota composition by enriching Prevotella and enhancing SCFA production following LPS challenge. These results collectively suggest that HP is capable of alleviating LPS-triggered intestinal injury by improving intestinal disaccharidase activities, barrier function, and SCFA production, while reducing intestinal inflammation. 
Evolution of shock-accelerated heavy gas layer Yu Liang, Lili Liu, Zhigang Zhai, Ting Si, Chih-Yung Wen Journal: Journal of Fluid Mechanics / Volume 886 / 10 March 2020 Published online by Cambridge University Press: 08 January 2020, A7 Print publication: 10 March 2020 Richtmyer–Meshkov instability of the SF6 gas layer surrounded by air is experimentally investigated. Using the soap film technique, five kinds of gas layer with two sharp interfaces are generated such that the development of each individual interface is highlighted. The flow patterns are determined by the amplitudes and phases of two corrugated interfaces. For a layer with both interfaces planar, the interface velocity shows that the reflected rarefaction waves from the second interface accelerate the first interface motion. For a layer with the second interface corrugated but the first interface planar, the reflected rarefaction waves make the first interface develop with the same phase as the second interface. For a layer with the first interface corrugated but the second interface planar, the rippled shock seeded from the first interface makes the second interface develop with the same phase as the first interface and the layer evolves into an 'upstream mushroom' shape. For two interfaces corrugated with opposite (the same) phase but a larger amplitude for the first interface, the layer evolves into 'sinuous' shape ('bow and arrow' shape, which has never been observed previously). For the interface amplitude growth in the linear stage, the waves' effects are considered in the model to give a better prediction. In the nonlinear stage, the effect of the rarefaction waves on the first interface evolution is quantitatively evaluated, and the nonlinear growth is well predicted. It is the first time in experiments to quantify the interfacial instability induced by the rarefaction waves inside the heavy gas layer. Interaction of a planar shock wave and a water droplet embedded with a vapour cavity Yu Liang, Yazhong Jiang, Chih-Yung Wen, Yao Liu Journal: Journal of Fluid Mechanics / Volume 885 / 25 February 2020 Published online by Cambridge University Press: 06 January 2020, R6 Print publication: 25 February 2020 The interaction of a shock wave and a water droplet embedded with a vapour cavity is experimentally investigated in a shock tube for the first time. The vapour cavity inside the droplet is generated by decreasing the surrounding pressure to the saturation pressure, and an equilibrium between the liquid phase and the gas phase is obtained inside the droplet. Direct high-speed photography is adopted to capture the evolution of both the droplet and the vapour cavity. The formation of a transverse jet inside the droplet during the cavity-collapse stage is clearly observed. Soon afterwards, at the downstream pole of the droplet, a water jet penetrating into the surrounding air is observed during the cavity-expansion stage. The evolution of the droplet is strongly influenced by the evolution of the vapour cavity. The phase change process plays an important role in vapour cavity evolution. The effects of the relative size and eccentricity of the cavity on the movement and deformation of the droplet are presented quantitatively. A NEW SHOCK MODEL WITH A CHANGE IN SHOCK SIZE DISTRIBUTION Serkan Eryilmaz, Cihangir Kan Journal: Probability in the Engineering and Informational Sciences , First View Published online by Cambridge University Press: 26 December 2019, pp. 
1-15 For a system that is subject to shocks, it is assumed that the distribution of the magnitudes of shocks changes after the first shock of size at least d1, and the system fails upon the occurrence of the first shock above a critical level d2 (> d1). In this paper, the distribution of the lifetime of such a system is studied when the times between successive shocks follow a matrix-exponential distribution. In particular, it is shown that the system's lifetime has a matrix-exponential distribution when the intershock times follow an Erlang distribution. The model is extended to the case when the system fails upon the occurrence of l consecutive critical shocks. Numerical study of cavitation regimes in flow over a circular cylinder Filipe L. Brandao, Mrugank Bhatt, Krishnan Mahesh Published online by Cambridge University Press: 23 December 2019, A19 Cavitating flow over a circular cylinder is investigated over a range of cavitation numbers ($\sigma = 5$ to $0.5$) for both laminar (at Reynolds number $Re=200$) and turbulent (at $Re=3900$) regimes. We observe non-cavitating, cyclic and transitional cavitation regimes with reduction in free-stream $\sigma$. The cavitation inside the Kármán vortices in the cyclic regime is significantly altered by the onset of 'condensation front' propagation in the transitional regime. At the transition, an order of magnitude jump in shedding Strouhal number ($St$) is observed as the dominant frequency shifts from periodic vortex shedding in the cyclic regime to irregular–regular vortex shedding in the transitional regime. In addition, a peak in pressure fluctuations and a maximum in $St$ versus $\sigma$ based on cavity length are observed at the transition. Shedding characteristics in each regime are discussed using dynamic mode decomposition. A numerical method based on the homogeneous mixture model, fully compressible formulation and finite rate mass transfer developed by Gnanaskandan & Mahesh (Intl J. Multiphase Flow, vol. 70, 2015, pp. 22–34) is extended to include the effects of non-condensable gas (NCG). It is demonstrated that the condensation fronts observed in the transitional regime are supersonic (referred to as 'condensation shocks'). In the presence of NCG, multiple condensation shocks in a given cycle are required for complete cavity condensation and detachment, as compared to a single condensation shock when only vapour is present. This is explained by the reduction in pressure ratio across the shock in the presence of NCG, effectively reducing its strength. In addition, at $\sigma = 0.85$ (near transition from the cyclic to the transitional regime), the presence of NCG suppresses the low frequency irregular–regular vortex shedding. Vorticity transport at $Re=3900$, in the transitional regime, indicates that the region of attached cavity is nearly two-dimensional, with very low vorticity, affecting Kármán shedding in the near wake. The majority of vortex stretching/tilting and vorticity production is observed following the cavity trailing edge. In addition, the boundary-layer separation point is found to be strongly dependent on the amounts of vapour and gas in the free stream for both laminar and turbulent regimes.
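The cavitation abstract just above uses two dimensionless groups without defining them. Under the usual conventions (an assumption here), with $p_\infty$ and $U_\infty$ the free-stream pressure and velocity, $p_v$ the vapour pressure, $\rho$ the liquid density, $f$ a shedding frequency, and $L$ a reference length (the cylinder diameter, or the cavity length where stated), they are

\[
  \sigma = \frac{p_\infty - p_v}{\tfrac{1}{2}\,\rho\, U_\infty^{2}}, \qquad
  St = \frac{f\,L}{U_\infty},
\]

so reducing $\sigma$ from 5 to 0.5 corresponds to bringing the ambient pressure progressively closer to the vapour pressure, which is what drives the flow through the non-cavitating, cyclic and transitional regimes described above.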
Resolvent analysis on the origin of two-dimensional transonic buffet Yoimi Kojima, Chi-An Yeh, Kunihiko Taira, Masaharu Kameda Published online by Cambridge University Press: 20 December 2019, R1 Resolvent analysis is performed to identify the origin of two-dimensional transonic buffet over an airfoil. The base flow for the resolvent analysis is the time-averaged flow over a NACA 0012 airfoil at a chord-based Reynolds number of 2000 and a free-stream Mach number of 0.85. We reveal that the mechanism of buffet is buried underneath the global low-Reynolds-number flow physics. At this low Reynolds number, the dominant flow feature is the von Kármán shedding. However, we show that with the appropriate forcing input, buffet can appear even at a Reynolds number that is much lower than what is traditionally associated with transonic buffet. The source of buffet is identified to be at the shock foot from the windowed resolvent analysis, which is validated by companion simulations using sustained forcing inputs based on resolvent modes. We also comment on the role of perturbations in the vicinity of the trailing edge. The present study not only provides insights on the origin of buffet but also serves a building block for low-Reynolds-number compressible aerodynamics in light of the growing interests in Martian flights. Convergent Richtmyer–Meshkov instability of light gas layer with perturbed outer surface Jianming Li, Juchun Ding, Ting Si, Xisheng Luo The Richtmyer–Meshkov instability of a helium layer surrounded by air is studied in a semi-annular convergent shock tube by high-speed schlieren photography. The gas layer is generated by an improved soap film technique such that its boundary shapes and thickness are precisely controlled. It is observed that the inner interface of the shocked light gas layer remains nearly undisturbed during the experimental time, even after the reshock, which is distinct from its counterpart in the heavy gas layer. This can be ascribed to the faster decay of the perturbation amplitude of the transmitted shock in the helium layer and Rayleigh–Taylor stabilization on the inner surface (light/heavy) during flow deceleration. The outer interface first experiences 'accelerated' phase inversion owing to geometric convergence, and later suffers a continuous deformation. Compared with a sole heavy/light interface, the wave influence (interface coupling) inhibits (promotes) growth of instability of the outer interface. Shock–shock interactions in granular flows Aqib Khan, Shivam Verma, Priyanka Hankare, Rakesh Kumar, Sanjay Kumar Shock–shock interaction structures and a newly discovered dynamic instability in granular streams resulting from such interactions are presented. Shock waves are generated by placing two similar triangular wedges in a gravity-driven granular stream. When the shock waves interact, grains collapse near the centre region of the wedges and a slow-moving concentrated diamond-shaped streak of grains is formed that grows as the inclination of the channel is increased. The diamond streak, under certain geometric conditions, is found to become unstable and start oscillating in the direction transverse to the mainstream. When the wedges are placed too close to each other, the granular flux of the incoming stream is unable to pass through the small gap, resulting in the formation of a single bow shock enveloping both the wedges. Experiments are performed for a wide range of flow speeds, wedge angles and wedge separations to investigate the interaction zone. 
We discuss a possible mechanism for the formation of the central streak and the associated dynamic instability observed for specific physical parameters. Near-sonic pure steam flow with real-gas effects and non-equilibrium and homogeneous condensation around thin airfoils Akashdeep Singh Virk, Zvi Rusak A small-disturbance asymptotic model is derived to describe the complex nature of a pure water vapour flow with non-equilibrium and homogeneous condensation around a thin airfoil operating at transonic speed and small angle of attack. The van der Waals equation of state provides real-gas relationships among the thermodynamic properties of water vapour. Classical nucleation and droplet growth theory is used to model the condensation process. The similarity parameters which determine the flow and condensation physics are identified. The flow may be described by a nonlinear and non-homogeneous partial differential equation coupled with a set of four ordinary differential equations to model the condensation process. The model problem is used to study the effects of independent variation of the upstream flow and thermodynamic conditions, airfoil geometry and angle of attack on the pressure and condensate mass fraction distributions along the airfoil surfaces and the consequent effect on the wave drag and lift coefficients. Increasing the upstream temperature at fixed values of upstream supersaturation ratio and Mach number results in increased condensation and higher wave drag coefficient. Increasing the upstream supersaturation ratio at fixed values of upstream temperature and Mach number also results in increased condensation and the wave drag coefficient increases nonlinearly. In addition, the effects of varying airfoil geometry with a fixed thickness ratio and chord on flow properties and condensation region are studied. The computed results confirm the similarity law of Zierep & Lin (Forsch. Ing. Wes. A, vol. 33 (6), 1967, pp. 169–172), relating the onset condensation Mach number to upstream stagnation relative humidity, when an effective specific heat ratio is used. The small-disturbance model is a useful tool to analyse the physics of high-speed condensing steam flows around airfoils operating at high pressures and temperatures. On regular reflection to Mach reflection transition in inviscid flow for shock reflection on a convex or straight wedge He Wang, Zhigang Zhai The regular reflection to Mach reflection ( $\text{RR}\rightarrow \text{MR}$ ) transition in inviscid perfect air for shock reflection over convex and straight wedges is investigated. Provided that the variation of shock intensity only has a second-order effect on the wave transition, the possible cases for the occurrence of the $\text{RR}\rightarrow \text{MR}$ transition for curved shock reflection over a wedge are discussed. For a planar shock reflecting over a convex wedge, four different flow regions are classified and the mechanism of the disturbance propagation is interpreted. It is found that the flow-induced rarefaction waves exist between disturbances generated from neighbouring positions and isolate them. For a curved shock reflecting over a convex wedge, although the distributions of the flow regions are different from those in planar shock reflection, the analysis for the planar shock case can be extended to curved shock cases as long as the wedge is convex. When a diverging shock reflects off a straight wedge, the flow-induced rarefaction waves are absent. 
However, the disturbances generated earlier cannot overtake the reflection point before the pseudo-steady criterion is satisfied. In the cases considered, the flow in the vicinity of the reflection point will not be influenced by the unsteady flow caused by the shock intensity and the wedge angle variations. This is clearly a local property of the shock–wall interaction, no matter what the history of the shock trajectory is. For validation, extensive inviscid numerical simulations are performed, and the numerical results show the reliability of the pseudo-steady criterion for predicting the $\text{RR}\rightarrow \text{MR}$ transition on a convex wedge. Chapter 11 - Sepsis By Matthew Taylor Edited by Adam C. Adler, Arvind Chandrakantan, Ronald S. Litman Book: Case Studies in Pediatric Anesthesia Print publication: 05 December 2019, pp 50-56 This chapter provides the reader with a succinct review on the continuum of the systemic inflammatory response syndrome through septic shock. The author provides a review on the pathophysiology of shock in children, the diagnostic criteria, and relevant monitoring considerations. The surgical procedures often required for patients with sepsis as well as the relevant anesthetic considerations are discussed. Analysis of a civil aircraft wing transonic shock buffet experiment L. Masini, S. Timme, A. J. Peace Published online by Cambridge University Press: 03 December 2019, A1 The physical mechanism governing the onset of transonic shock buffet on swept wings remains elusive, with no unequivocal description forthcoming despite over half a century of research. This paper elucidates the fundamental flow physics on a civil aircraft wing using an extensive experimental database from a transonic wind tunnel facility. The analysis covers a wide range of flow conditions at a Reynolds number of around $3.6\times 10^{6}$ . Data at pre-buffet conditions and beyond onset are assessed for Mach numbers between 0.70 and 0.84. Critically, unsteady surface pressure data of high spatial and temporal resolution acquired by dynamic pressure-sensitive paint is analysed, in addition to conventional data from pressure transducers and a root strain gauge. We identify two distinct phenomena in shock buffet conditions. First, we highlight a low-frequency shock unsteadiness for Strouhal numbers between 0.05 and 0.15, based on mean aerodynamic chord and reference free stream velocity. This has a characteristic wavelength of approximately 0.8 semi-span lengths (equivalent to three mean aerodynamic chords). Such shock unsteadiness is already observed at low-incidence conditions, below the buffet onset defined by traditional indicators. This has the effect of propagating disturbances predominantly in the inboard direction, depending on localised separation, with a dimensionless convection speed of approximately 0.26 for a Strouhal number of 0.09. Second, we describe a broadband higher-frequency behaviour for Strouhal numbers between 0.2 and 0.5 with a wavelength of 0.2 to 0.3 semi-span lengths (0.6 to 1.2 mean aerodynamic chords). This outboard propagation is confined to the tip region, similar to previously reported buffet cells believed to constitute the shock buffet instability on conventional swept wings. Interestingly, a dimensionless outboard convection speed of approximately 0.26, coinciding with the low-frequency shock unsteadiness, is found to be nearly independent of frequency. 
We characterise these coexisting phenomena by use of signal processing tools and modal analysis of the dynamic pressure-sensitive paint data, specifically proper orthogonal and dynamic mode decomposition. The results are scrutinised within the context of a broader research effort, including numerical simulation, and viewed alongside other experiments. We anticipate our findings will help to clarify experimental and numerical observations in edge-of-the-envelope conditions and to ultimately inform buffet-control strategies. The kinetic Shakhov–Enskog model for non-equilibrium flow of dense gases Peng Wang, Lei Wu, Minh Tuan Ho, Jun Li, Zhi-Hui Li, Yonghao Zhang Journal: Journal of Fluid Mechanics / Volume 883 / 25 January 2020 Published online by Cambridge University Press: 28 November 2019, A48 Print publication: 25 January 2020 When the average intermolecular distance is comparable to the size of gas molecules, the Boltzmann equation, based on the dilute gas assumption, becomes invalid. The Enskog equation was developed to account for this finite size effect that makes the collision non-local and increases the collision frequency. However, it is time-consuming to solve the Enskog equation due to its complicated structure of collision operator and high dimensionality. In this work, on the basis of the Shakhov model, a gas kinetic model is proposed to simplify the Enskog equation for non-ideal monatomic dense gases. The accuracy of the proposed Shakhov–Enskog model is assessed by comparing its solutions of the normal shock wave structures with the results of the Enskog equation obtained by the fast spectral method. It is shown that the Shakhov–Enskog model is able to describe non-equilibrium flow of dense gases, when the maximum local mean free path of gas molecules is still greater than the size of a molecular diameter. The accuracy and efficiency of the present model enable simulations of non-equilibrium flow of dense gases for practical applications. NUMERICAL ENTROPY PRODUCTION AS SMOOTHNESS INDICATOR FOR SHALLOW WATER EQUATIONS Partial differential equations, initial value and time-dependent initial-boundary value problems Hyperbolic equations and systems Basic methods in fluid mechanics SUDI MUNGKASI, STEPHEN GWYN ROBERTS Journal: The ANZIAM Journal , First View Published online by Cambridge University Press: 28 November 2019, pp. 1-18 The numerical entropy production (NEP) for shallow water equations (SWE) is discussed and implemented as a smoothness indicator. We consider SWE in three different dimensions, namely, one-dimensional, one-and-a-half-dimensional, and two-dimensional SWE. An existing numerical entropy scheme is reviewed and an alternative scheme is provided. We prove the properties of these two numerical entropy schemes relating to the entropy steady state and consistency with the entropy equality on smooth regions. Simulation results show that both schemes produce NEP with the same behaviour for detecting discontinuities of solutions and perform similarly as smoothness indicators. An implementation of the NEP for an adaptive numerical method is also demonstrated. Non-ideal compressible flows in supersonic turbine cascades Alessandro Romei, Davide Vimercati, Giacomo Persico, Alberto Guardone Flows in the close proximity of the vapour–liquid saturation curve and critical point are examined for supersonic turbine cascades, where an expansion occurs through a converging–diverging blade channel. 
The present study illustrates potential advantages and drawbacks if turbine blades are designed for operating conditions featuring a non-monotonic variation of the Mach number through the expansion process, and non-ideal oblique shocks and Prandtl–Meyer waves downstream of the trailing edge. In contrast to ideal-gas flows, for a given pressure ratio across the cascade, the flow field and the turbine performance are found to be highly dependent on the thermodynamic state at the turbine inlet, in both design and off-design conditions. A potentially advantageous design, featuring stationary points of the Mach number at the blade trailing edge, is proposed, which induces a nearly uniform outlet Mach number distribution in the stator–rotor gap with a low sensitivity to slight variations in the outlet pressure. These findings are relevant for turbomachines involved in high-temperature organic Rankine cycle power systems, in particular for supercritical applications. Effect of microramps on flare-induced shock–boundary-layer interaction T. Nilavarasan, G. N. Joshi, A. Misra Journal: The Aeronautical Journal / Volume 124 / Issue 1271 / January 2020 The ability of microramps to control shock–boundary-layer interaction in the vicinity of an axisymmetric compression corner was investigated computationally in a Mach 4 flow. A cylinder/flare model with a flare angle of 25° was chosen for this study. The height (h) of the microramp device was 22% of the undisturbed boundary-layer thickness (δ) obtained at the compression corner location. A single array of these microramps with an inter-device spacing of 7.5h was placed at three different streamwise locations, viz. 5δ, 10δ and 15δ (22.7h, 45.41h and 68.12h in terms of the device height) upstream of the corner, and the variations in the flowfield characteristics were observed. These devices modified the separation bubble structure noticeably by producing alternate upwash and downwash regions in the boundary layer. Variations in the separation bubble's length and height were observed along the spanwise (circumferential) direction due to these devices. Various approaches to determine active regions in an unstable global mode: application to transonic buffet Edoardo Paladini, Olivier Marquet, Denis Sipp, Jean-Christophe Robinet, Julien Dandois Journal: Journal of Fluid Mechanics / Volume 881 / 25 December 2019 Print publication: 25 December 2019 The transonic flow field around a supercritical airfoil is investigated. The objective of the present paper is to enhance the understanding of the physical mechanisms behind two-dimensional transonic buffet. The paper is composed of two parts. In the first part, a global stability analysis based on the linearized Reynolds-averaged Navier–Stokes equations is performed. A recently developed technique, based on the direct and adjoint unstable global modes, is used to compute the local contribution of the flow to the growth rate and angular frequency of the unstable global mode. The results allow us to identify which zones are directly responsible for the existence of the instability. The technique is first used for the vortex-shedding cylinder mode, as a validation case. In the second part, in order to confirm the results of the first part, a selective frequency damping method is locally used in some regions of the flow field. This method consists of applying a low-pass filter on selected zones of the computational domain in order to damp the fluctuations.
It allows us to identify which zones are necessary for the persistence of the instability. The two different approaches give the same results: the shock foot is identified as the core of the instability; the shock and the boundary layer downstream of the shock are also necessary zones; while damping the fluctuations on the lower surface of the airfoil, outside the boundary layer between the shock and the trailing edge, or above the supersonic zone does not suppress the shock oscillation. A discussion of the several physical models proposed until now for the buffet phenomenon, and a new model, are finally offered in the last section. Accuracy of National Early Warning Score 2 (NEWS2) in Prehospital Triage on In-Hospital Early Mortality: A Multi-Center Observational Prospective Cohort Study Francisco Martín-Rodríguez, Raúl López-Izquierdo, Carlos del Pozo Vegas, Juan F. Delgado Benito, Virginia Carbajosa Rodríguez, María N. Diego Rasilla, José Luis Martín Conty, Agustín Mayo Iscar, Santiago Otero de la Torre, Violante Méndez Martín, Miguel A. Castro Villamor Journal: Prehospital and Disaster Medicine / Volume 34 / Issue 6 / December 2019 In cases of mass-casualty incidents (MCIs), triage represents a fundamental tool for the management of and assistance to the wounded, which helps discriminate not only the priority of attention, but also the priority of referral to the most suitable center. Hypothesis/Problem: The objective of this study was to evaluate the capacity of different prehospital triage systems based on physiological parameters (Shock Index [SI], Glasgow-Age-Pressure Score [GAP], Revised Trauma Score [RTS], and National Early Warning Score 2 [NEWS2]) to predict early mortality (within 48 hours) from the index event for use in MCIs. This was a longitudinal prospective observational multi-center study of patients who were attended by Advanced Life Support (ALS) units and transferred to the emergency department (ED) of their reference hospital. Collected were: demographic, physiological, and clinical variables; main diagnosis; and data on early mortality. The main outcome variable was mortality from any cause within 48 hours. From April 1, 2018 through February 28, 2019, a total of 1,288 patients were included in this study. Of these, 262 (20.3%) participants required assistance for trauma and injuries by external agents. Early mortality within the first 48 hours due to any cause affected 69 patients (5.4%). The system with the best predictive capacity was the NEWS2, with an area under the curve (AUC) of 0.891 (95% CI, 0.84-0.94), a sensitivity of 79.7% (95% CI, 68.8-87.5), and a specificity of 84.5% (95% CI, 82.4-86.4) for a cut-off point of nine points, with a positive likelihood ratio of 5.14 (95% CI, 4.31-6.14) and a negative predictive value of 98.7% (95% CI, 97.8-99.2). Prehospital NEWS2 scores are easy to obtain and represent a reliable test, making it an ideal system to help in the initial assessment of high-risk patients and to determine their level of triage effectively and efficiently. The Prehospital Emergency Medical System (PhEMS) should evaluate the inclusion of the NEWS2 as a triage system, which is especially useful for the second triage (evacuation priority).
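For readers less familiar with these screening statistics, the reported likelihood ratio and negative predictive value follow directly from the quoted sensitivity, specificity and event rate. The short sketch below uses only the rounded figures from the abstract, not the study's raw data, so it is an illustrative check rather than a reanalysis.

```python
# Minimal sketch: how the reported NEWS2 test metrics relate to each other.
# Inputs are the rounded summary figures quoted in the abstract (69/1288 deaths,
# sensitivity 79.7%, specificity 84.5%), not the study's raw 2x2 table.

sensitivity = 0.797      # P(NEWS2 >= 9 | died within 48 h)
specificity = 0.845      # P(NEWS2 <  9 | survived)
prevalence  = 69 / 1288  # 48-hour mortality in the cohort

# Positive likelihood ratio: how much a positive score shifts the odds of death.
lr_positive = sensitivity / (1 - specificity)

# Negative predictive value at this prevalence (Bayes' rule on the negatives).
npv = (specificity * (1 - prevalence)) / (
    specificity * (1 - prevalence) + (1 - sensitivity) * prevalence
)

print(f"LR+ = {lr_positive:.2f}")   # ~5.14, matching the reported value
print(f"NPV = {npv:.1%}")           # ~98.7%, matching the reported value
```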
CommonCrawl
Can A Utility Function Take On Negative Values? Can someone provide a rigorous definition of a utility function? I had thought that a utility function only needs to preserve the order of preferences. Thus a utility function can take on negative values as long as it preserves the order of preferences. Others have told me that a utility function cannot take on negative values. Is this a condition of a rigorous definition of a utility function? Tony Bui

A utility function can certainly be negative. The utility function is nothing more than a way to represent a preference relationship. This is an important conceptual point. In several theorems that typically show up in introductory texts, we show that sets of preferences that satisfy certain regularity conditions can be represented by a utility function. Also, there are different decision theory frameworks that allow the utility function to be transformed. You alluded to something like this in your question. In the traditional framework without uncertainty, the utility function is defined up to a monotonic transformation. Under certain kinds of uncertainty, we get Von Neumann–Morgenstern utility functions that are unique up to affine transformations. You can read more about this elsewhere. For now, consider the following definitions, adapted from Advanced Microeconomic Theory by Jehle and Reny (3rd edition): a preference relation $\succeq$ is a binary relation on the choice set $X$ satisfying the two axioms below, and a real-valued function $u: X \to \mathbb{R}$ represents $\succeq$ (that is, $u$ is a utility function for $\succeq$) if, for all $x^1, x^2 \in X$, $u(x^1) \geq u(x^2) \iff x^1 \succeq x^2$. Axiom 1: Completeness. For all $x^1$ and $x^2$ in $X$, either $x^1 \succeq x^2$ or $x^2 \succeq x^1$. Axiom 2: Transitivity. For any three elements $x^1$, $x^2$, and $x^3$ in $X$, if $x^1 \succeq x^2$ and $x^2 \succeq x^3$, then $x^1 \succeq x^3$. jmbejara

Comment: Nice answer! Solved my question. – Jinhua Wang May 19 '19 at 15:20

Here is one possible rigorous definition of a utility function: Let $X$ be a set of alternatives. Let $\succeq$ be a preference relation over those alternatives. $U: X \to \mathbb{R}$ is a utility function means that $U(x) \geq U(y) \iff x \succeq y$. Then if for example $X$ is 'amounts of money you might be given', and $x \succeq y$ only if $x \geq y$, then some possible utility functions are $U(x) = x$, $U(x) = e^x$, $U(x) = \log(x)$... Some of these are negative. One could of course require that $U(x) > 0$. Maybe this makes it easier to swallow as an interpretation of the 'well-being' of an individual. But that would rule out many commonly used utility functions, such as $U(x) = x$ or $U(x) = \log(x)$. NickJ

Comment: "Maybe this makes it easier to swallow as an interpretation of the 'well-being' of an individual." But of course it's just best to think of a utility function as a representation of the preference relation $\succeq$ and nothing more. Some utility functions are always non-positive (e.g., the CRRA type $C^{1-\gamma}/(1-\gamma)$ where $\gamma > 1$). – jmbejara Mar 13 '15 at 17:19

Comment: I wouldn't say it's 'best'. For welfare analysis or for thinking about things like attitudes toward risk we sometimes give more meaning to the utility function than as merely representing a preference relation. One could debate the philosophy of it but there's a long history of interpreting the utility function as representing something real. – NickJ Apr 17 '15 at 14:15

Comment: "There's a long history of interpreting the utility function as representing something real." Yes. But to be specific, this is only the case with "cardinal utility" as opposed to "ordinal utility." But even vNM utility functions (which are cardinal and are the utility functions that are most commonly used) are defined up to affine transformations. Thus, the sign of the utility is in some sense still arbitrary. – jmbejara Apr 22 '15 at 22:09

Like jmbejara says, generally in economics utility is not measured in anything but preference relations, so it's called ordinal utility (which contrasts with cardinal utility). So a bundle giving utility of -1 is preferred to any bundle giving less than -1. The number -1 doesn't tell us anything else. "Ordinal utility theory states that while the utility of a particular good or service cannot be measured using a numerical scale bearing economic meaning in and of itself, pairs of alternative bundles (combinations) of goods can be ordered such that one is considered by an individual to be worse than, equal to, or better than the other. This contrasts with cardinal utility theory, which generally treats utility as something whose numerical value is meaningful in its own right." (source: http://en.wikipedia.org/wiki/Ordinal_utility) snoram
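A small numerical illustration of the ordinal point, with hypothetical bundles that are not taken from the thread above: an everywhere-negative utility function and a partly-negative one induce exactly the same ranking, because one is a strictly increasing transformation of the other.

```python
# Illustrative sketch: two utility functions, one taking negative values and one
# strictly negative everywhere, that represent the same preference relation over
# a few made-up bundles, since each is a strictly increasing transform of the other.
import math

bundles = [0.5, 1.0, 2.5, 4.0, 10.0]   # hypothetical quantities of a single good

u = lambda x: math.log(x)              # negative for x < 1 (e.g. log(0.5) < 0)
v = lambda x: -1.0 / x                 # negative for every bundle
# v = g(u) with g(t) = -exp(-t), a strictly increasing transformation.

ranking_u = sorted(bundles, key=u)
ranking_v = sorted(bundles, key=v)
assert ranking_u == ranking_v          # identical orderings => same preferences

print(ranking_u)                       # [0.5, 1.0, 2.5, 4.0, 10.0] under both
```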
CommonCrawl
Preparation and Enhanced Catalytic Hydrogenation Activity of Sb/Palygorskite (PAL) Nanoparticles Lin Tan1, Muen He1, Aidong Tang ORCID: orcid.org/0000-0002-4887-44581 & Jing Chen2 Nanoscale Research Letters volume 12, Article number: 460 (2017) Cite this article A Sb/palygorskite (PAL) composite was synthesized by a facile solvothermal process and applied in the catalytic hydrogenation of p-nitrophenol for the first time. It was found that Sb nanoparticles with sizes of 2–5 nm were well dispersed on the PAL fibers, while partially aggregated Sb nanoparticles with sizes smaller than 200 nm were also loaded on the PAL. The Sb/PAL composite with 9.7% Sb mass content showed outstanding catalytic performance, raising the p-nitrophenol conversion to 88.3% within 5 min, which was attributed to the synergistic effect of the Sb and PAL nanoparticles in facilitating the adsorption and catalytic hydrogenation of p-nitrophenol. Antimony as a functional material has attracted considerable attention [1, 2]. More recently, reports show that antimony film electrodes offer an unusual characteristic, namely a favorably negative over-voltage of hydrogen evolution [3]. Besides, a new magnetic nanoparticle-supported antimony catalyst has been prepared by taking advantage of the interaction between alkylamines and Sb nanoparticles, and such a technique has been applied in wastewater treatment [4]. However, Sb particles tend to aggregate due to their high surface energy, which severely hinders their practical application. Therefore, the inhibition of particle aggregation remains a thorny problem waiting to be solved. Generally, nanocomposites formed from nanoparticles and various supports demonstrate the excellent properties of the nanoparticles without losing the intrinsic properties of the supports [5,6,7,8,9]. One of the most widely used supporting materials for such surface modification is clay mineral. Composites formed by introducing clay minerals [10,11,12], such as kaolinite [13, 14], halloysite [15, 16], montmorillonite [17], and sepiolite [18], not only enhance the dispersion of nanoparticles but also improve the accumulation of reactants, which produces a synergetic effect in the catalytic process and further intensifies the catalytic performance [19]. Moreover, the cost of clay minerals is lower than that of metal catalysts, which further reduces the cost of the catalysts and facilitates their practical application. Palygorskite (PAL), a species of natural clay mineral with the theoretical formula (Mg,Al,Fe)5Si8O20(OH)2(OH2)4·4H2O, has been widely applied due to its particular fiber-like morphology [20,21,22], which endows unique properties such as large surface area [23], nontoxicity [24], and excellent adsorption capacity [25]. Owing to such particular properties, PAL is used in adsorbents [26, 27], catalysts, and catalyst supports [19]. For example, modified PAL shows superior adsorption capacities compared with raw PAL [28, 29]. Moreover, Y2O3-functionalized PAL was used as an adsorbent and exhibited potential applications in wastewater treatment [25]. In conclusion, nanocomposites formed from the combination of PAL and nanoparticles show the extraordinary catalytic properties of the nanoparticles, and the great surface area of PAL allows an increase in catalyst sensitivity. In our previous study, antimony-rich hollow Sb2Se3 sphere particles demonstrated an excellent catalytic property for the hydrogenation of p-nitrophenol [30, 31].
However, the effect of antimony on the process of p-nitrophenol hydrogenation remains unclear. Therefore, a series of Sb/PAL hybrid composites with different Sb contents were prepared, and their catalytic performance in p-nitrophenol hydrogenation was investigated. The synthesis strategy is to disperse Sb particles on the PAL fiber surface via a facile solvothermal process, creating more reactive sites to enhance the catalytic property. The PAL was purchased from Xuyi, China. In a typical synthesis of Sb/PAL composites with an antimony mass content of 9.7% (marked as 9.7% Sb/PAL), antimony potassium tartrate (0.124 g) and PAL (0.456 g) were mixed in 55 ml of ethanol/water solution with a volume ratio of 40:15 and stirred continuously for 30 min. Subsequently, NaBH4 (0.030 g) was dissolved in 15 ml of deionized water. Afterwards, this solution was added dropwise into the above mixture within 10 min. The mixture was then transferred into an 80 ml Teflon-lined autoclave, which was sealed and maintained at 180 °C for 5 h. The as-synthesized products were washed with ethanol and deionized water three times, then collected and held at 80 °C in an oven for 6 h. Finally, the products were ground for further characterization and testing. Sb/PAL composites with different Sb loadings were fabricated by the same method, controlling the amounts of antimony potassium tartrate and sodium borohydride while keeping the PAL mass constant. The X-ray diffraction analysis (XRD), scanning electron microscopy (SEM), energy dispersive spectrometry (EDS), transmission electron microscopy (TEM), and high-resolution transmission electron microscopy (HRTEM) measurements were carried out as in previous literature [30]. The UV–vis spectrum was recorded on a SHIMADZU UV-2450 spectrophotometer over the range 205–500 nm. Fourier transform infrared analysis (FTIR) was carried out on a Bruker VERTEX-70 spectrometer with KBr pellets between 4000 and 400 cm−1. Inductively coupled plasma emission spectrometry (ICP) was performed on a Perkin Elmer Optima 5300. The catalytic activity of the as-fabricated products was tested for the catalytic hydrogenation of p-nitrophenol to p-aminophenol in the presence of NaBH4. In a typical catalytic procedure, p-nitrophenol aqueous solution (100 μL, 0.025 mol/L) was mixed with 20 ml of deionized water, and the following procedures remained the same as in our previous work [30]. The XRD patterns of the as-prepared products are displayed in Fig. 1. The main diffraction peaks of the 100% Sb sample with no PAL addition (Fig. 1 (a)) could be indexed to antimony (PDF No.35-0732). Meanwhile, tiny amounts of Sb2O3 (PDF No.05-0534) could also be found in the figure, which may be produced through a redox reaction on the antimony surface. Besides, the diffraction peaks of the raw palygorskite (Fig. 1 (c)) were in accordance with palygorskite (PDF No. 29–0855), while the diffraction peak at 2θ = 26.6° was attributed to quartz [19]. After the Sb particles combined with the PAL fibers (Fig. 1 (b)), the corresponding diffraction peaks could be indexed to palygorskite (PDF No.29-0855) and antimony (PDF No.35-0732). These results implied that the Sb particles had been loaded on the palygorskite and formed the Sb/PAL hybrid composite. XRD patterns of (a) 100% Sb without PAL, (b) 9.7% Sb/PAL, and (c) PAL sample The SEM images of palygorskite in Fig.
2a, b show that numerous fibers aggregated into bulk crystal bundles with flat or sheet-like structures due to the strong interaction among the palygorskite fibers [32]. The PAL fibers were about 40 nm in diameter and several hundred nanometers in length. For the 100% Sb sample without PAL, as displayed in Fig. 2c, several octahedral-shaped particles were surrounded by numerous irregular particles. The edge of the octahedra was about 1 μm, while the irregular particles were larger than 100 nm (Fig. 2d) and severely aggregated. For 9.7% Sb/PAL, displayed in Fig. 2e, f, after the Sb particles anchored on the PAL fibers, some particles with diameters under 200 nm were aggregated on the fiber surfaces, while no large-sized Sb particles with the octahedral shape of Fig. 2c were found. This phenomenon indicated that the PAL played a key role in limiting the growth of the Sb nanoparticles, although they were still partly aggregated. SEM images of a, b PAL, c, d 100% Sb without PAL, and e, f 9.7% Sb/PAL sample and g, h EDS patterns of 9.7% Sb/PAL EDS analysis of different regions of the 9.7% Sb/PAL composite was carried out to investigate the Sb nanoparticle distribution, and the results are shown in Fig. 2g, h. For the flat region marked in Fig. 2e, the Sb mass content was only 5.24%, lower than the theoretical value of 9.7%. For the aggregated region marked in Fig. 2f, however, the measured Sb mass content increased to 40.05%, far above the theoretical value of 9.7%. These results indicated that part of the Sb nanoparticles were not as well-ordered and monodispersed on the PAL surface as one might expect, possibly because PAL is difficult to disperse well. TEM and HRTEM images of the 9.7% Sb/PAL were recorded and are displayed in Fig. 3a, b, respectively. The diameters of the aggregated spherical Sb particles were about 100 nm, which corresponds to the SEM results. The monodispersed Sb particles of size 2–5 nm seen in Fig. 3b were widely distributed on the PAL surface, and the d-spacing of the Sb particles was 0.214 nm, which indexes to the (110) plane of Sb (PDF No.35-0732). The selected area electron diffraction (SAED) pattern of the sample, shown in the inset of Fig. 3b, exhibited several diffraction rings and diffraction spots, demonstrating that the Sb/PAL hybrid composites were polycrystalline. The element distribution of the 9.7% Sb/PAL composite is shown in Fig. 3c–h. The Al, O, Si, Mg, and Sb elements were distributed homogeneously throughout the composite, except that the Sb element formed three small regions of uneven distribution. This further indicated that the Sb nanoparticles were widely distributed on the PAL surface while exhibiting a partially uneven distribution. The HRTEM result, however, clearly demonstrated that monodispersed Sb particles with sizes of 2–5 nm were widely distributed on the PAL surface. a TEM image, b HRTEM image, the inserted image is the SAED pattern, and c – h the elemental map of 9.7% Sb/PAL In order to investigate the interaction between the Sb nanoparticles and palygorskite, the FTIR spectra of raw palygorskite and the 9.7% Sb/PAL composite are displayed in Fig. 4. For the raw PAL (Fig. 4 (a)), the bands at 3459 and 1646 cm−1 were attributed to the stretching vibrations of the hydroxyl group and the bending vibration of adsorbed water, respectively [33, 34].
Meanwhile, the wide band around 1031 cm−1 was related to the stretching vibration of the silicon–oxygen bond [20], and the bands at 468 and 511 cm−1 were attributed to the silicon–oxygen–silicon bending vibration [35]. After the Sb particles anchored on the PAL fibers (Fig. 4 (b)), although no new absorbance band appeared, the related absorbance bands of PAL shifted to lower wavenumbers, as highlighted in yellow in Fig. 4: 1027 cm−1 for the stretching vibration of the silicon–oxygen bond and 466 and 509 cm−1 for the silicon–oxygen–silicon bending vibration. This shift implied the existence of a chemical interaction between Sb and the silicon hydroxyl groups on the PAL surface, weakening the silicon–oxygen–silicon bond. Similar effects have been reported by Peng et al. [11]. FTIR spectra of (a) PAL and (b) 9.7% Sb/PAL The catalytic performance of the as-prepared samples was tested through the catalytic reduction of p-nitrophenol to p-aminophenol in the presence of NaBH4. To identify the catalytic product, the p-nitrophenol aqueous solution was monitored by UV–vis spectrophotometry in the range of 205 to 500 nm, and the results are shown in Fig. 5a. After the catalytic reaction, the absorption peak at 400 nm decreased to nearly zero, while the absorbance at 300 nm increased noticeably, indicating that p-nitrophenol had been converted into p-aminophenol [36]. a UV–vis absorption spectra of p-nitrophenol aqueous solution in the presence of the 9.7% Sb/PAL composite catalyst, b catalytic activities of different samples, and c the recyclability of the 9.7% Sb/PAL hybrid composites The catalytic performance of several different samples was tested, and the results are displayed in Fig. 5b. The p-nitrophenol content remained almost constant over pure PAL, indicating that PAL alone contributed nothing to the catalytic process and, therefore, that the hydrogenation would not occur in the absence of a catalyst. Meanwhile, for pure Sb, the p-nitrophenol conversion reached 91.4% within 30 min. Upon adding the 5% Sb/PAL composite to this system, the conversion of p-nitrophenol was 71.5% within 30 min. When the Sb loading was increased to 9.7 and 18.2%, the conversion rose significantly to 98.2 and 97.3%, respectively, both higher than the 91.4% of the 100% Sb sample without PAL. More importantly, the conversion over the 9.7% Sb/PAL composite reached 88.3% within 5 min, about 1.7 times the 50.6% achieved within 5 min using 100% Sb without PAL. Pure Sb (without PAL) shows a higher p-nitrophenol conversion (91.4%) than the 5% Sb/PAL composite (71.5%) because the Sb content of the latter is very low. As shown in Additional file 1: Figure S1, the peak intensity of Sb in the 5% Sb/PAL composite was comparatively low while that of Sb2O3 was high. This result indicates that the number of Sb particles is also a main factor in the reduction of p-nitrophenol. As is well known, the hydrogenation of p-nitrophenol follows the pseudo-first-order kinetics of Eq. (1) (Fig. 6) when NaBH4 is present in large excess over p-nitrophenol [37].
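Under this pseudo-first-order assumption, the apparent rate constant is simply the slope of ln(C_t/C_0) against time. A minimal sketch of such a fit is given below; the conversion values in it are made up purely for illustration and are not the measured data behind Fig. 5 or Table S1.

```python
# Minimal sketch of extracting an apparent pseudo-first-order rate constant k
# from conversion-time data via Eq. (1), ln(C_t/C_0) = -kt.
# The (t, conversion) pairs are illustrative placeholders, not the paper's data.
import numpy as np

t = np.array([0.0, 1.0, 2.0, 3.0, 5.0])                 # time / min
conversion = np.array([0.0, 0.33, 0.55, 0.70, 0.86])     # fraction of p-nitrophenol converted

ct_over_c0 = 1.0 - conversion                             # remaining fraction C_t/C_0
# Least-squares slope of ln(C_t/C_0) versus t, forced through the origin:
k = -np.sum(t * np.log(ct_over_c0)) / np.sum(t**2)

print(f"apparent rate constant k ≈ {k:.3f} min^-1")       # ~0.4 min^-1 for these numbers
```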
Hence, to further reveal the catalytic performance of the sample, we calculated the apparent reaction rate constant of the 9.7% Sb/PAL sample and collected rate constants reported in previous literature; the data are given in Additional file 2: Table S1. The reaction rate constant of the 9.7% Sb/PAL sample reached 0.420 min−1, indicating excellent catalytic performance.

$$ \ln\frac{C_t}{C_0} = -kt \qquad (1) $$

To investigate the stability of the Sb/PAL composites, a reusability experiment on the 9.7% Sb/PAL was carried out and the result is shown in Fig. 5c. The conversion of p-nitrophenol within 30 min was still 91.6% after three cycles. The catalytic results demonstrated that the Sb/PAL composites exhibit excellent catalytic hydrogenation activity with good reusability, which was attributed to the high dispersion of Sb nanoparticles on the palygorskite fibers providing more active sites; a similar effect was also found in TiO2/halloysite composites [38]. Based on the above experimental results, a possible fabrication mechanism of the Sb/PAL composites was proposed. Firstly, palygorskite is a fibrous clay mineral with a structure consisting of short, alternately inverted 2:1 sheets or ribbons. These ribbons have an average width (along the Y direction) of two linked tetrahedral chains. The tetrahedral sheet is continuous across the ribbons but with apices pointing up and down in adjacent ribbons [22]. These silica tetrahedral ribbons carry abundant Si–OH groups, which can adsorb and hold cations such as Fe3+ and Ni2+ [19, 39], as well as SbO+ ions. Secondly, the dissociation equilibrium of the antimony tartrate complex ions is shown as Eq. (2). Although antimony potassium tartrate is a stable coordination compound, it dissociates slowly to release a small amount of SbO+ ions, thereby also controlling the rate of the reaction. Thus SbO+ ions were gradually adsorbed on the Si–OH-rich surface of the PAL fibers. A similar effect was found in Pd/kaolinite composites [40]. Thirdly, when the NaBH4 aqueous solution was introduced dropwise into the above system, the number of SbO+ ions decreased as Sb particles formed according to the redox reaction of Eq. (3), which further drove the dissociation of the antimony tartrate complex ions. Besides, the newly formed H+ ions originating from Eq. (3) also favored the release of SbO+ ions because of the acidic conditions, which further promoted the combination of the Sb precursor with the PAL [41]. Therefore, as the reducing agent NaBH4 was introduced into the system, the initial Sb nanoparticles attached to the PAL surface in situ via the Si–OH groups on the silica tetrahedral ribbons. Finally, Sb/PAL composites with highly dispersed Sb nanoparticles were fabricated via the solvothermal process. Furthermore, the remaining free SbO+ ions were reduced, forming some aggregated Sb particles among the layers of PAL. Conversely, if the PAL fibers were absent, the particles aggregated into large octahedral-shaped Sb particles due to their high surface energy. Fig. 6 presents the schematic illustration of the Sb/PAL composite fabrication. The PAL rods served as templates for the growth of the Sb nanoparticles and effectively inhibited their aggregation.
Although some Sb nanoparticles were still partly aggregated, since PAL is difficult to disperse well, the size of the Sb particles clearly declined to below 200 nm. In addition, the Sb/PAL hybrid composite showed excellent catalytic properties, ascribed to the abundant interfaces between the Sb nanoparticles and PAL, which significantly promoted the adsorption of p-nitrophenol and facilitated its catalytic hydrogenation. The pseudo-first-order kinetics equation, the corresponding chemical reaction equations, and a schematic illustration of the Sb/PAL composite fabrication The Sb/PAL nanocomposites were synthesized through a facile solvothermal process using natural palygorskite as the base material. According to the characterization results, the PAL fibers could effectively inhibit Sb nanoparticle aggregation. In addition, the composites were tested in the catalytic hydrogenation of p-nitrophenol. The 9.7% Sb/PAL composite showed excellent catalytic performance, with its p-nitrophenol conversion reaching 88.3% within 5 min, about 1.7 times that achieved using 100% Sb without PAL. Therefore, the composites exhibit outstanding properties and offer excellent potential for practical catalytic applications. (Mg,Al,Fe)5Si8O20(OH)2(OH2)4·4H2O: Palygorskite EDS: Energy dispersive spectrometer FTIR: Fourier transform infrared spectroscopy HRTEM: High-resolution transmission electron microscopy ICP: Inductively coupled plasma emission spectrometry KBr: Potassium bromide NaBH4: Sodium borohydride PAL: Palygorskite Pd: Palladium PDF: Powder diffraction files SAED: Selected area electron diffraction Sb: Antimony Sb2O3: Antimonous oxide TEM: Transmission electron microscopy XRD: X-ray diffraction Y2O3: Yttrium oxide Silwana B, Van Der Horst C, Iwuoha E, Somerset V (2015) Synthesis, characterisation and electrochemical evaluation of reduced graphene oxide modified antimony nanoparticles. Thin Solid Films 592, Part A:124–134 He M, Kravchyk K, Walter M, Kovalenko MV (2014) Monodisperse antimony nanocrystals for high-rate Li-ion and Na-ion battery anodes: nano versus bulk. Nano Lett 14:1255–1262 Serrano N, Díaz-Cruz JM, Ariño C, Esteban M (2016) Antimony-based electrodes for analytical determinations. TrAC Trends Anal Chem 77:203–213 Ma F-P, Li P-H, Li B-L, Mo L-P, Liu N, Kang H-J, Liu Y-N, Zhang Z-H (2013) A recyclable magnetic nanoparticles supported antimony catalyst for the synthesis of N-substituted pyrroles in water. Applied Catalysis A: General 457:34–41 Zhang Y, Tang A, Yang H, Ouyang J (2016) Applications and interfaces of halloysite nanocomposites. Applied Clay Science 119:8–17 Jin J, Fu L, Yang H, Ouyang J (2015) Carbon hybridized halloysite nanotubes for high-performance hydrogen storage capacities. Sci Rep 5:12429 Lun HL, Ouyang J, Tang AD, Yang HM (2015) Fabrication and conductive performance of antimony-doped tin oxide-coated halloysite nanotubes. Nano 10:1–9 Yang Q, Long M, Tan L, Zhang Y, Ouyang J, Liu P, Tang A (2015) Helical TiO2 nanotube arrays modified by Cu–Cu2O with ultrahigh sensitivity for the nonenzymatic electro-oxidation of glucose. ACS Appl Mater Interfaces 7:12719–12730 Zhou Z, Ouyang J, Yang H, Tang A (2016) Three-way catalytic performances of Pd loaded halloysite-Ce 0.5 Zr 0.5 O 2 hybrid materials. Applied Clay Science 121:63–70 Peng K, Fu L, Ouyang J, Yang H (2016) Emerging parallel dual 2D composites: natural clay mineral hybridizing MoS2 and interfacial structure.
Adv Funct Mater 26:2666–2675 Peng K, Fu L, Yang H, Ouyang J (2016) Perovskite LaFeO3/montmorillonite nanocomposites: synthesis, interface characteristics and enhanced photocatalytic activity. Scientific Reports 6:19723 Peng K, Fu L, Yang H, Ouyang J, Tang A (2016) Hierarchical MoS2 intercalated clay hybrid nanosheets with enhanced catalytic activity. Nano Research 10:570–583 Li X, Ouyang J, Zhou Y, Yang H (2015) Assembling strategy to synthesize palladium modified kaolin nanocomposites with different morphologies. Sci Rep 5:13763 Long M, Zhang Y, Shu Z, Tang A, Ouyang J, Yang H (2017) Fe2O3 nanoparticles anchored on 2D kaolinite with enhanced antibacterial activity. Chem Commun 53:6255–6258 Zhang Y, He X, Ouyang J, Yang H (2013) Palladium nanoparticles deposited on silanized halloysite nanotubes: synthesis, characterization and enhanced catalytic property. Sci Rep 3:2948 Shu Z, Zhang Y, Ouyang J, Yang H (2017) Characterization and synergetic antibacterial properties of ZnO and CeO2 supported by Halloysite. Appl Surf Sci 420:833–838 Peng K, Yang H (2017) Carbon hybridized montmorillonite nanosheets: preparation, structural evolution and enhanced adsorption performance. Chem Commun 53:6085–6088 Hou K, Wen X, Yan P, Tang A, Yang H (2017) Tin oxide-carbon-coated sepiolite nanofibers with enhanced lithium-ion storage property. Nanoscale Res Lett 12:215 Huo C, Yang H (2012) Attachment of nickel oxide nanoparticles on the surface of palygorskite nanofibers. J Colloid Interface Sci 384:55–60 Huo C, Yang H (2013) Preparation and enhanced photocatalytic activity of Pd–CuO/palygorskite nanocomposites. Applied Clay Science 74:87–94 Chae HS, Piao SH, Maity A, Choi HJ (2015) Additive role of attapulgite nanoclay on carbonyl iron-based magnetorheological suspension. Colloid and Polymer Science 293:89–95 Wang W, Wang A (2016) Recent progress in dispersion of palygorskite crystal bundles for nanocomposites. Applied Clay Science 119:18–30 He X, Yang H (2013) Au nanoparticles assembled on palygorskite: enhanced catalytic property and Au-Au2O3 coexistence. J Mol Catal A Chem 379:219–224 Luo S, Chen Y, Zhou M, Yao C, Xi H, Kong Y, Deng L (2013) Palygorskite-poly(o-phenylenediamine) nanocomposite: an enhanced electrochemical platform for glucose biosensing. Applied Clay Science 86:59–63 He X, Wang J, Shu Z, Tang A, Yang H (2016) Y2O3 functionalized natural palygorskite as an adsorbent for methyl blue removal. RSC Adv 6:41765–41771 Xi Y, Mallavarapu M, Naidu R (2010) Adsorption of the herbicide 2, 4-D on organo-palygorskite. Applied Clay Science 49:255–261 Tang J, Mu B, Wang W, Zheng M, Wang A (2016) Fabrication of manganese dioxide/carbon/attapulgite composites derived from spent bleaching earth for adsorption of Pb (ii) and brilliant green. RSC Adv 6:36534–36543 Liang X, Xu Y, Tan X, Wang L, Sun Y, Lin D, Sun Y, Qin X, Wang Q (2013) Heavy metal adsorbents mercapto and amino functionalized palygorskite: preparation and characterization. Colloids Surf A Physicochem Eng Asp 426:98–105 Guo H, Zhang H, Peng F, Yang H, Xiong L, Huang C, Wang C, Chen X, Ma L (2015) Mixed alcohols synthesis from syngas over activated palygorskite supported Cu–Fe–Co based catalysts. Applied Clay Science 111:83–89 Tang AD, Long M, Liu P, Tan L, He Z (2014) Morphologic control of Sb-rich Sb2Se3 to adjust its catalytic hydrogenation properties for p-nitrophenol. RSC Adv 4:57322–57328 Tang A, Long M, He Z (2014) Electrodeposition of Sb2Se3 on TiO2 nanotube arrays for catalytic reduction of p-nitrophenol. 
Electrochim Acta 146:346–352 Lu L, Li X-Y, Liu X-Q, Wang Z-M, Sun L-B (2015) Enhancing the hydrostability and catalytic performance of metal–organic frameworks by hybridizing with attapulgite, a natural clay. J Mater Chem A 3:6998–7005 He X, Yang H (2015) Fluorescence and room temperature activity of Y2O3:(Eu3+, Au3+)/palygorskite nanocomposite. Dalton Trans 44:1673–1679 He X, Tang A, Yang H, Ouyang J (2011) Synthesis and catalytic activity of doped TiO2-palygorskite composites. Applied Clay Science 53:80–84 Huo C, Yang H (2010) Synthesis and characterization of ZnO/palygorskite. Applied Clay Science 50:362–366 Bae S, Gim S, Kim H, Hanna K (2016) Effect of NaBH4 on properties of nanoscale zero-valent iron and its catalytic activity for reduction of p-nitrophenol. Applied Catalysis B: Environmental 182:541–549 Zhang S-S, Song J-M, Niu H-L, Mao C-J, Zhang S-Y, Shen Y-H (2014) Facile synthesis of antimony selenide with lamellar nanostructures and their efficient catalysis for the hydrogenation of p-nitrophenol. J Alloys Compd 585:40–47 Wang R, Jiang G, Ding Y, Wang Y, Sun X, Wang X, Chen W (2011) Photocatalytic activity of heterostructures based on TiO2 and halloysite nanotubes. ACS Appl Mater Interfaces 3:4154–4158 Middea A, Fernandes TLP, Neumann R, Gomes ODFM, Spinelli LS (2013) Evaluation of Fe(III) adsorption onto palygorskite surfaces. Appl Surf Sci 282:253–258 Li X, Tang A (2016) Pd modified kaolinite nanocomposite as a hydrogenation catalyst. RSC Adv 6:15585–15591 Tan L, Tang A, Wen X, Wang J, Liu Y (2017) Size control of 1D Sb2Se3 nanorods prepared by a facile mixed solvothermal method with tartaric acid assistance. CrstEngComm 19:2852–2859 This research was financially supported by the National Natural Science Foundation of China (no. 51374250) and the Foundation of Key Laboratory for Palygorskite Science and Applied Technology of Jiangsu Province (HPK201601). School of Chemistry and Chemical Engineering, Central South University, Changsha, 410083, China Lin Tan, Muen He & Aidong Tang Key Laboratory of Palygorskite Science and Applied Technology of Jiangsu Province, Huaiyin Institute of Technology, Huaian, 223003, People's Republic of China Jing Chen Lin Tan Muen He Aidong Tang AT conceived the project and wrote the final paper. LT designed the experiments, synthesized and characterized the materials, and wrote initial drafts of the work. MH synthesized and characterized the materials, and JC analyzed the data. All authors discussed the results and commented on the manuscript. All authors read and approved the final manuscript. Correspondence to Aidong Tang or Jing Chen. Additional file 1: Figure S1. XRD pattern of 5% Sb/PAL (PNG 72 kb) Additional file 2: Table S1. Comparison of the catalytic performances of Sb/PAL and other available catalysts obtained from previous literature. (DOCX 181 kb) Tan, L., He, M., Tang, A. et al. Preparation and Enhanced Catalytic Hydrogenation Activity of Sb/Palygorskite (PAL) Nanoparticles. Nanoscale Res Lett 12, 460 (2017). https://doi.org/10.1186/s11671-017-2220-8 Solvothermal synthesis Catalytic performances
CommonCrawl
If the range A of a sequence is finite, then some value is attained by infinitely many terms, and the subsequence defined by that term is a constant sequence, which is therefore convergent. If A is infinite, then, by the Bolzano-Weierstrass theorem, it has at least one cluster point x. By a slight modification of part (i) of the proof of Theorem 3.10, we can construct a subsequence (x_{n_k}) which converges to x.

A related fact: a function f defined on a bounded interval is Riemann-integrable if and only if it is bounded and the set of points where f is discontinuous has Lebesgue measure zero.

A metric space is called sequentially compact if every sequence of elements of the space has a limit point in the space; equivalently, if every sequence has a subsequence converging to a point of the space. That is, every infinite sequence contains a convergent subsequence. (Lindenstrauss' central idea in the proof of QUE is to exploit the presence of Hecke ...)

Theorem (Bolzano-Weierstrass). Every bounded sequence of real numbers has a convergent subsequence.

Proof by bisection. Since the sequence (x_n) is bounded, every x_n lies in a bounded interval I_0. Split I_0 into two subintervals of half its length. At least one of them contains infinitely many terms of the sequence; call it I_1. Repeat. After n steps, I_n is an interval of length |I_0|/2^n, and I_n ⊂ I_{n-1}. Choosing indices n_1 < n_2 < ... with x_{n_k} ∈ I_k then gives a Cauchy, hence convergent, subsequence.

Proof via monotone subsequences. Any sequence of real numbers has a monotonic subsequence, and any subsequence of a bounded sequence is clearly bounded, so a bounded sequence (s_n) has a bounded monotonic subsequence. But every bounded monotonic sequence converges. So (s_n) has a convergent subsequence, as required. (Exercise: prove that every bounded sequence has a convergent subsequence; with the monotone-subsequence lemma in hand this is essentially a one-line proof.) A proof can also be found in, say, Theorem 3.6 of Principles of Mathematical Analysis by W. Rudin, which begins with the case in which the range of the sequence is finite.

A constructive variant (Theorem 8.4.4). Given a bounded sequence (x_n), show that the sequence y_1 = sup{x_2, x_3, x_4, ...}, y_2 = sup{x_3, x_4, x_5, ...}, y_3 = sup{x_4, x_5, x_6, ...}, ..., y_n = sup{x_j : j > n}, converges to a limit y, and then extract a subsequence of (x_n) converging to y.

In R^n the theorem states that each bounded sequence has a convergent subsequence. An equivalent formulation is that a subset of R^n is sequentially compact if and only if it is closed and bounded; the theorem is sometimes called the sequential compactness theorem. How do you tell if a sequence has a convergent subsequence? Every sequence in a closed and bounded subset is bounded, so it has a convergent subsequence, which converges to a point in the set because the set is closed. Conversely, every bounded sequence lies in a closed and bounded set, so it has a convergent subsequence.

Every convergent sequence is bounded, but not every bounded sequence is convergent: ((-1)^n) is a bounded sequence that does not converge. So the statement that every convergent sequence is bounded has no complete converse; a partial converse is given by the Bolzano-Weierstrass theorem. A bounded sequence does always have convergent subsequences; a 0-1 sequence, for instance, has two of them, the subsequence consisting of its zeroes and the subsequence consisting of its ones.

Every Cauchy sequence of real numbers is bounded, hence by Bolzano-Weierstrass has a convergent subsequence, hence is itself convergent; this proof of the completeness of the real numbers implicitly makes use of the least upper bound axiom. Two supporting facts: (i) if a Cauchy sequence has a convergent subsequence, then the Cauchy sequence converges to the limit of that subsequence; (ii) every Cauchy sequence is bounded. For (ii), let (a_n) be a Cauchy sequence in R and let A be the set of elements of the sequence, i.e. A = {x ∈ R : a_n = x for some n ∈ N}. Take ε = 1; since (a_n) is Cauchy, there exists N_1 such that |a_n - a_m| < 1 for all n, m ≥ N_1, and boundedness follows. The same circle of ideas shows that a complete, totally bounded metric space is sequentially compact: total boundedness yields a Cauchy subsequence of any given sequence, and since the space X is complete, that subsequence converges.

If every convergent subsequence of a bounded sequence converges to a, then so does the original sequence (Abbott, p. 58, q2.5.4 and q2.5.3b). A direct proof is normally easiest when you have some obvious mechanism to go from a given hypothesis to a desired conclusion (for instance, consider the direct proof that the sum of two convergent sequences is convergent). Alternatively, define α = lim sup x_n and β = lim inf x_n, which always exist, and for simplicity assume that they are finite. There is a subsequence converging to α; by hypothesis it converges to a, and since limits are unique, α = a. Likewise β = a, so the whole sequence converges to a.

The converse direction of the Arzela-Ascoli theorem is also true, in the sense that if every subsequence of {f_n} itself has a uniformly convergent subsequence, then {f_n} is uniformly bounded and equicontinuous. The proof is essentially based on a diagonalization argument; the simplest case is that of real-valued functions on a closed and bounded interval.

The compactness theme fails in infinite dimensions: for any infinite-dimensional normed linear space, there is a bounded sequence that has no convergent subsequence (this is Riesz's Theorem of Section 13.3). This motivates introducing a new kind of convergence for sequences in L^p, together with necessary and sufficient conditions for a bounded sequence to converge in this new sense.

Pointwise convergence can fail as well: must a bounded sequence of functions have an almost-everywhere convergent subsequence? Not necessarily. For each x ∈ [0, 1) let B(x) = (x_n) be the sequence of digits of x in base 2, with {n : x_n = 0} infinite, and for n ∈ N let f_n(x) = x_n. Let g : N → N be strictly increasing. A convergent binary sequence is eventually constant, and for almost every x the digits x_{g(n)} are not eventually constant, so the subsequence (f_{g(n)}) fails to converge almost everywhere.

Two small exercises from a forum thread on non-convergent sequences in metric spaces: prove that in any metric space containing more than one point there exists a non-convergent sequence; and give an example of a bounded, non-convergent sequence that has an infinite range.

Related definitions. A function f is continuous from the right at a number a if lim_{x→a+} f(x) = f(a), and continuous from the left at a if lim_{x→a-} f(x) = f(a). A function f defined on some set X with real or complex values is called bounded if the set of its values is bounded; in other words, if there exists a real number M such that |f(x)| ≤ M for all x in X.
Proof: Every sequence in a closed and bounded subset is bounded, so it has a convergent subsequence, which converges to a point in the set, because the set is closed.Every Cauchy sequence of real numbers is bounded, hence by Bolzano–Weierstrass has a convergent subsequence, hence is itself convergent. This proof of the completeness of the real numbers implicitly makes use of the least upper bound axiom. 2013 chevy silverado starter relay location De nition A function fis continuous from the right at a number a if lim x!a+ = f(a). A function fis continuous from the left at a number a if lim x!a = f(a).Example 3 Consider the function k(x) in example 2 above. Ar which of the following x-values is k(x) continuous from the right? x= 0; x= 3; x= 5; x= 7; x= 10:. Examples of Image Analysis Using ImageJ Area Measurements of a Complex Object ...A quick proof of why every bounded sequence in R has a convergent subsequence. This is a very useful concept in many proofs, and relies on the Monotone Convergence Theorem, (I call it... lexington eye associates The Bolzano-Weierstrass Theorem: Every bounded sequence in Rn has a convergent subsequence. ... Proof: Every sequence in a closed and bounded subset is bounded, so it has a convergent subsequence, which converges to a point in the set, because the set is closed.Most ice-cream cones are bounded, and every bounded sequence in R_3 has a convergent subsequence. The limit might not be in A, but it will be in the closure of A, cl (A), and cl (A) is a subset of R_3. So the answer to your question is "yes", assuming A is bounded. ( A cone isn't bounded but an ice-cream cone is). 2 Matt Jennings[Math] If every convergent subsequence converges to $a$, then so does the original bounded sequence (Abbott p 58 q2.5.4 and q2.5.3b) A direct proof is normally easiest when you have some obvious mechanism to go from a given hypothesis to a desired conclusion. (E.g. consider the direct proof that the sum of two convergent sequences is convergent.)Every Cauchy sequence of real numbers is bounded, hence by Bolzano–Weierstrass has a convergent subsequence, hence is itself convergent. This proof of the completeness of the real numbers implicitly makes use of the least upper bound axiom. The Bolzano-Weierstrass Theorem: Every bounded sequence in Rn has a convergent subsequence. ... Proof: Every sequence in a closed and bounded subset is bounded, so it has a … opta stats shots on target The Bolzano-Weierstrass Theorem: Every bounded sequence in Rn has a convergent subsequence. ... Proof: Every sequence in a closed and bounded subset is bounded, so it has a convergent subsequence, which converges to a point in the set, because the set is closed. Is 1 n convergent sequence?The Bolzano-Weierstrass Theorem: Every bounded sequence in Rn has a convergent subsequence. ... Proof: Every sequence in a closed and bounded subset is bounded, so it has a convergent subsequence, which converges to a point in the set, because the set is closed.Every bounded sequence has a convergent subsequence. proof: Let be a bounded sequence. Then, there exists an interval suchÖA× Ò+ß,Ó8 "" that for all +ŸAŸ, 8Þ"88 Either or contains infinitely many of . ThatÒ+ß Ó Ò ß,Ó ÖA×"" 8 + , + , ## "" "" terms sims 4 yandere trait mod Every bounded sequence has a convergent subsequence. Remark Notice that a bounded sequence may have many convergent subsequences (for example, a sequence consisting of a …What sequence has a convergent subsequence? 
The theorem states that each bounded sequence in Rnhas a convergent subsequence. An equivalent formulation is that a subset of Rnis sequentially compact if and only if it is closed and bounded. The theorem is sometimes called the sequential compactness theorem. If a sequence an converges, then it is bounded. Note that a sequence being bounded is not a sufficient condition for a sequence to converge. For example, the sequence (−1)n is bounded, … creighton prep basketball schedule Conversely, every bounded sequence is in a closed and bounded set, so it has a convergent subsequence. Does (- 1 n have a convergent subsequence? The sequence (−1)n is not convergent because it has two subsequences (−1)2n and (−1)2n+1 which converge to 1 and −1 respectively. Recall that a convergence sequence is bounded, but that a ...The Bolzano-Weierstrass Theorem: Every bounded sequence in Rn has a convergent subsequence. ... Proof: Every sequence in a closed and bounded subset is1.6 Suppose that every subsequence of {xn } has a subsequence that converges to x, but the full sequence ... recall that all Cauchy sequences are bounded, so C = sup kAn k < ∞. If f ∈ X, then since An f → Af, ... The sequence {xn } has a convergent subsequence, say xnk → x0 . Fix ε > 0. Then since f is continuous, there exists some δ ...Bolzano–Weierstrass Theorem Every bounded sequence in Rd has at least one convergent subsequence. The following non-standard terminology may sometimes be ... uber female drivers only Proof: Every sequence in a closed and bounded subset is bounded, so it has a convergent subsequence, which converges to a point in the set, because the set is closed. Conversely, every bounded sequence is in a closed and bounded set, so it has a convergent subsequence.This video explains that every convergent sequence is bounded in the most simple and easy way possible. I did this proof again with voice explanation as a lo...Proof: Every sequence in a closed and bounded subset is bounded, so it has a convergent subsequence, which converges to a point in the set, because the set is closed. Conversely, every bounded sequence is in a closed and bounded set, so it has a convergent subsequence. kaiju paradise purification We begin with a convergence criterion for a sequence of distribution functions of ordinary random variables. Definition B.l.l. For arbitrary distribution functions on the real line, F and F n, n. dumper acting like dumpee. paris history. hotel lbi cancellation policy. ross creations ... 3 bedroom house for sale in gravesend rightmove Solution 1. For the first problem: tangent may be unbounded, but tangent restricted to angles between $-\pi/4$ and $\pi/4$ is bounded between $-1$ and $1$. So can you find a subsequence of $\langle 0,1,2,\dots\rangle$ such that the radian measure of those integers is always equivalent to angles between $-\pi/4$ and $\pi/4$?The Bolzano-Weierstrass Theorem: Every bounded sequence in Rn has a convergent subsequence. ... Proof: Every sequence in a closed and bounded subset is bounded, so it has a convergent subsequence, which converges to a point in the set, because the set is closed. Aug 01, 2022 · Prove that the sequence has a convergent subsequence. real-analysis 2,107 Solution 1 For the first problem: tangent may be unbounded, but tangent restricted to angles between $-\pi/4$ and $\pi/4$ is bounded between $-1$ and $1$. tremor vs carli suspension Answer to Solved 7. Give a constructive proof of Theorem 8.4.4 that. Transcribed image text: 7. 
Give a constructive proof of Theorem 8.4.4 that every bounded sequence (X) has a convergent subsequence by completing the following steps. Originally Answered: Does every bounded sequence convergent or every bounded sequance have a subsequence that converges? There are bounded sequences of real numbers that don't converge. For example, The Bolzano–Weierstrass theorem states that every bounded sequence in has a convergent subsequence. Amit GoyalEvery bounded sequence has a convergent subsequence. Remark Notice that a bounded sequence may have many convergent subsequences (for example, a sequence consisting of a …rem) states that every bounded sequence has a convergent subsequence. • Plan. – Subsquences and properties. – The Bolzano-Weierstrass Theorem. cps homes for rent
CommonCrawl
Dominik Schröder, IST Austria
Random Matrices, Fall 2017
Introductory course on random matrices from October 10 to November 23, 2017 at IST Austria. The course instructor is László Erdős and the teaching assistant is Dominik Schröder.
Random matrices were first introduced in statistics in the 1920's, but they were made famous by Eugene Wigner's revolutionary vision. He predicted that spectral lines of heavy nuclei can be modelled by the eigenvalues of random symmetric matrices with independent entries (Wigner matrices). In particular, he conjectured that the statistics of energy gaps is given by a universal distribution that is independent of the detailed physical parameters. While the proof of this conjecture for realistic physical models is still beyond reach, it has recently been shown that the gap statistics of Wigner matrices is independent of the distribution of the matrix elements. Students will be introduced to the fascinating world of random matrices and presented with some of the basic tools for their mathematical analysis in this course.
The course is intended for students with an orientation in mathematics, theoretical physics, statistics and computer science. No physics background is necessary. Calculus, linear algebra and some basic familiarity with probability theory are expected. The final grade will be obtained as a combination of the student's performance on the example sheets and an oral exam. Related notes from the recent PCMI summer school on random matrices.
The course lasts from October 10 – November 23, 2017.
Oct 12 Thu 11.20am–12.35pm Lecture Mondi 3
Oct 17 Tue 10.15am-11.30am Lecture Mondi 3
Oct 17 Tue 11.45am-12.35pm Recitation Mondi 3
Oct 18 Wed 11.30am-12.45pm Lecture Mondi 1
Nov 2 Thu 11.20am–12.35pm Lecture Mondi 3
Nov 7 Tue 10.15am-11.30am Lecture Mondi 3
Nov 7 Tue 11.45am-12.35pm Recitation Mondi 3
Nov 14 Tue 10.15am-11.30am Lecture Mondi 3
Nov 14 Tue 11.45am-12.35pm Recitation Mondi 3
Nov 16 Thu 11.20am–12.35pm Lecture Mondi 3
Basic facts from probability theory. Law of large numbers (LLN) and the central limit theorem (CLT), viewed as universality statements. In the LLN the limit is deterministic, while in CLT the limit is a random variable, namely the Gaussian (normal) one. No matter which distribution the initial random variables had, their appropriately normalized sums always converge to the same distribution — in other words the limiting Gaussian distribution is universal. Wigner random matrices. Real symmetric and complex hermitian. GUE and GOE. Wishart matrices and their relation to Wigner-type matrices. Scaling so that eigenvalues remain bounded. Statement on the concentration of the largest eigenvalue. Introducing the semicircle law as a law of large numbers for the empirical density of the eigenvalues. Linear statistics of eigenvalues (with a smooth function as observable) leads to CLT but with an unusual scaling — indicating very strong correlation among eigenvalues. Statement of the gap universality, Wigner surmise. The limit behavior of the gap is a new universal distribution; in this sense this is the analogue of the CLT.
Reading. PCMI lecture notes up to middle of Section 1.2.3.
Main questions in random matrix theory:
Density on global scale (like LLN)
Extreme eigenvalues (especially relevant for sample covariance matrices)
Fluctuation on the scale of eigenvalue spacing (like CLT)
Mesoscopic density — follows the global behaviour, but it is a non-trivial fact.
Eigenfunction (de)localization
Definition of $k$-point correlation functions.
Relation of the gap distribution to the local correlation functions on scale of the eigenvalue spacing (inclusion-exclusion formula) Rescaled (local) correlation functions. Determinant structure. Sine kernel for complex Hermitian Wigner matrices. Statement of the main universality result in the bulk spectrum (for energies away from the edges of the semicircle law). Reading. PCMI lecture notes up to the end of Section 1.2.3. October 17 (Recitation) Definition of Stieltjes transform \begin{equation}m_\mu(z):=\int_{\mathbb R} \frac{d\mu(\lambda)}{\lambda-z},\quad z\in\mathbb{H}:=\{z\in\mathbb C,\, \Im z>0\}\notag\end{equation} of probability measure $\mu$ and statement of elementary properties (analyticity, trivial bounds on derivatives). Interpretation of the Stieltjes transform of the empirical spectral density as the normalized trace of the resolvent. Interpretation of imaginary part as the convolution with the Poisson kernel, \begin{equation}\Im m_\mu(x+i\eta)= \pi (P_\eta\ast \mu)(x),\quad P_\eta(x):=\frac{1}{\pi}\frac{\eta}{x^2+\eta^2}.\notag\end{equation} The Stieltjes transform $m_\mu(x+i\eta)$ thus contains information about $\mu$ at a scale of $\eta$ around $x$. Stieltjes continuity theorem for sequences of random measures: A sequence of random probability measures $\mu_1,\mu_2,\dots$ converges vaguely, a) in expectation b) in probability c) almost surely to a deterministic probability measure $\mu$ if and only if for all $z\in\mathbb H$, the sequence of numbers $m_{\mu_N}(z)$ converges a) in expectation b) in probability c) almost surely to $m_{\mu}(z)$. Derivation of the Helffer-Sjöstrand formula \begin{equation}f(\lambda)=\frac{1}{2\pi i}\int_{\mathbb C} \frac{\partial_{\overline z} f_{\mathbb C} (z)}{\lambda-z}d \overline z \wedge d z,\quad f_{\mathbb C}(x+i\eta):= \chi(\eta)\big[f(x)+i\eta f'(x)\big] \notag\end{equation} for compactly supported $C^2$-functions $f\colon\mathbb R\to\mathbb R$ and some smooth cut-off function $\chi$. Main motivations for random matrices: Wigner's original motivation: to model energy levels of heavy nuclei. The distribution of the gaps very well matched that of the Wigner random matrices. The density of states depends on the actual nucleus (and it is not the semicircle), but the local statistics (e.g. gap statistics) are universal. Random Schrodinger operators, Anderson transition Gap statistics of the zeros of the Riemann zeta function. Quantum Mechanics in nutshell: Configuration space: $S$ (with a measure) State space: $\ell^2(S)$ (square integrable functions on $S$) Observables: self-adjoint (symmetric) operators on $\ell^2(S)$ A distinguished observable: the Hamilton (or energy) operator Time evolution — Schrödinger equation. Random Schrödinger operator describes a single electron in an ionic (metallic) lattice. $S = \mathbb Z^d$ or a subset of that. $H$ is the sum of the discrete (lattice) Laplace operator and a random potential. Anderson phase transition: depending on the strength of the disorder, the system is either in delocalized (conductor) or localized (insulator) phase. 
Localized phase is characterized by Localized eigenfunctions Localized time evolution (no transport) Pure point spectrum (for the infinite volume operator) Poisson local spectral statistics, no level repulsion (for the finite volume model) In the delocalized phase, we have delocalized eigenfunctions ("almost" $\ell^2$-normalizable solutions to the eigenvalue equation), quantum transport, absolutely continuous spectrum and random matrix eigenvalue statistics, in particular level repulsion. Reading. PCMI lecture notes Sections 5.1 — 5.3 Phase diagram for the Anderson model (= random Schrödinger operator on the $\mathbb Z^d$ lattice) in $d\ge 3$ dimensions. Localized regime can be proven, delocalized regime is conjectured to exist but no mathematical result. In $d=1$ dimension the Anderson model is always localized (transfer matrix method helps). In $d=2$ nothing is known, even there is no clear agreement in the physics whether it behaves more like $d=1$ (localization) or more like $d=3$ (delocalization); majority believes in localization. Delocalized regime, at least for small disorder, sounds easier to prove because it looks like a perturbative problem (zero disorder corresponds to the pure Laplacian which is perfectly understood). Resolvent perturbation formulas were discussed; major problem: lack of convergence. We gave some explanation why the localization regime is easier to handle mathematically: off-diagonal resolvent matrix elements decay exponentially. This fact provides an effective decoupling and makes localized resolvents almost independent. Random band matrices: naturally interpolate between $d=1$ dimensional random Schrödinger operators (bandwidth $W=O(1)$) and mean field Wigner matrices (bandwidth $W = N$, where $N$ is the linear size of the system). Phase transition is expected at $W = \sqrt{N}$; this is a major open question. There are similar conjectures in higher dimensional band matrices, but we did not discuss them. Finally, we discussed a mysterious connection between the Dyson sine kernel statistics and the location of the zeros of of the zeta function on the critical line. There is only one mathematical result in this direction, Montgomery proved that the two point function of the (appropriately rescaled) zeros follows the sine kernel behavior, but only for test functions with Fourier support in $[-1,1]$. No progress has been made in the last 40 years to relax this condition. Reading. PCMI lecture notes Section 5.3 and the entertaining article "Tea Time in Princeton" by Paul Bourgade about Montgomery's theorem. Analytic definition of (multivariate) cumulants $\kappa_\alpha$ of a random vector $X=(X_1,\dots,X_n)$ as the coefficients of the log-characteristic function \begin{equation}\log \mathbf E e^{i t\cdot X} = \sum_\alpha \kappa_\alpha \frac{(it)^\alpha}{\alpha!}.\notag\end{equation} Proof of the cumulant expansion formula \begin{equation}\mathbf E X_i f(X)=\sum_{\alpha} \frac{\kappa_{\alpha, i }}{\alpha!}\mathbf E f^{(\alpha)}(X)\notag\end{equation} via Fourier transform. Expression of moments in terms of cumulants as the sum of all partitions \begin{equation}\mathbf{E} X_1\dots X_n=\sum_{\mathcal{P}\vdash [n]} \kappa^{\mathcal{P}}=\sum_{\mathcal{P}\vdash [n]}\prod_{P_i\in\mathcal{P}} \kappa( X_j \mid j\in P_i )\notag.\end{equation} Derivation of the inverse relationship \begin{equation}\label{comb:cum}\kappa(X_1,\dots,X_n)=\sum_{\mathcal P\vdash [n]}(-1)^{\lvert\mathcal P\rvert-1}(\lvert\mathcal P\rvert-1)! 
\prod_{P_i\in\mathcal P} \mathbf E \prod_{j\in P_i} X_j\end{equation} through Möbius inversion on abstract incidence algebras. Note that \eqref{comb:cum} can also serve as a purely combinatorial definition of cumulants. Proof that cumulants of random variables which split into two independent subgroups vanish. There are two natural ways to put a measure on the space of (hermitian) matrices, hence defining two major classes of random matrix ensembles: Choose matrix elements independently (modulo the hermitian symmetry) from some distribution on the complex or real numbers. This results in Wigner matrices (and possible generalizations, when identicality of the distribution is dropped). Equip the space of hermitian matrices with the usual Lebesgue measure and multiply it by a Radon-Nikodym factor that makes the measure finite. We choose the factor invariant under unitary conjugation in the form $\exp(-\text{Tr}\, V(H))$ for some real valued function $V$. These are called invariant ensembles. Only Gaussian matrices belong to both families. For invariant ensembles, the joint probability density function of the eigenvalues can be computed explicitly and it consists of the Vandermonde determinant (to the first or second power, $\beta=1,2$, depending on the symmetry class). We sketched of the proof by change of variables. Invariant ensembles can also be represented as Gibbs measure of N points on the real line with a one-body potential $V$ and a logarithmic two-body interaction. This interpretation allows for choosing any $\beta>0$, yielding the beta-ensembles, even though there is no matrix or eigenvalues behind them. There are analogous universality statements for beta-ensembles, which assert that the local statistics depend only on the parameter beta and are independent of the potential $V$. Reading. PCMI lecture notes Section 1.1.2 Precise statement of the Wigner semicircle law (for i.i.d. case) in the form of weak convergence in probability. In general, there are two methods to prove the semicircle law: Moment method: computes $\text{Tr}\, H^k$, obtains the distribution of the moments of the eigenvalues. The moments are given by the Catalan numbers and they uniquely identify the semicircle law (calculus exercise) using Carleman theorem on the uniqueness of the measure if the moments do not grow too fast. Resolvent method: derives an equation for the limiting Stieltjes transform of the empirical density. The resolvent method in general is more powerful, it works well inside as well as neat the edge of the spectrum. The moment method is powerful only at the extreme edges. Proof of the Wigner semicircle law by moment method: Compute \begin{equation}\frac{1}{N} \mathbb E \text{Tr}\, H^k=\frac{1}{N}\mathbb E\sum_{i_1,\dots,i_k} h_{i_1i_2}h_{i_2i_3}\dots h_{i_{k-1}i_k}h_{i_ki_1}\notag\end{equation} in terms of the number of backtracking paths (only those path give a relevant contribution where every edge is travelled exactly twice and the skeleton of the graph is a tree). We reduced the problem to counting such path — it is an $N$ independent problem. We completed the proof of the Wigner semicircle law by moment method. Last time we showed that to evaluate $\mathbb E \text{Tr}\, H^{2k}$ is sufficient to count the number of backtracking path of total length $2k$. This number has many other combinatorial interpretations. It is the same as the number of rooted, oriented trees on $k+1$ vertices by a simple one to one correspondance. 
It is also the same as the number of Dyck paths of length $2k$, where a Dyck path is a random walk on the nonnegative integers starting and ending at $0$. Finally, we counted the Dyck paths by deriving the recursion \begin{equation}C_k = C_{k-1} C_0 + C_{k-2} C_1 + … + C_0 C_{k-1}\notag\end{equation} with $C_0=1$ for their number $C_k$. This recursion can be solved by considering the generating function \begin{equation}f(x) = \sum_{k=0}^\infty C_k x^k\notag\end{equation} and observe that \begin{equation}xf^2(x) = f(x) - 1.\notag\end{equation} Thus $f(x)$ can be explicitly computed by the solution formula for the quadratic equation and Taylor expanding around $x=0$. After some calculation with the fractional binomial coefficients, we obtain that $C_k = 1/(k+1) {2k \choose k}$, i.e. the Catalan numbers. Since the Catalan numbers are the moments of the semicircle law (calculus exercise), and these moments do not grow too fast, they identify the measure. This proved that the expectation of the empirical eigenvalue density converges to the semicircle in the sense of moments. Using compact support of the measures (for the empirical density we know it from the homework problem since the norm of $H$ is bounded), by Weierstrass theorem we can extend the convergence for any bounded continuous functions. Finally, the expectation can be removed, by computing the variance of $N^{-1} \text{Tr}\, H^k$, again by the graphical representation (now we have two cycles and studied which edge-coincidences give rise to nonzero contribution). We showed that the variance vanishes in the large $N$ limit and then a Chebyshev inequality converts it into a high probability bound. Reading. PCMI lecture notes Section 2.3 November 7 (Recitation) We found yet another combinatorial description of Catalan numbers. $C_k$ is the number of non-crossing pair partitions of the set ${1,\dots,2k}$. Indeed, denote the number in question by $N_k$. Then there exists some $j$ such that $1$ is paired with $2j$ since due to the absence of crossings there has to be an even number of other integers between $1$ and its partner. The number of non-crossing pairings of the integers ${2,\dots,2j-1}$ and ${2j+1,\dots,2k}$ are given by $N_{j-1}$ and $N_{k-j}$ respectively and it follows that \begin{equation}N_{k}=\sum_{j=1}^k N_{j-1}N_{k-j}, \qquad N_1=1\notag\end{equation} and thus $N_k=C_k$ since they satisfy the same recursion. We defined a commonly used notion of stochastic domination $X\prec Y$ and stated the following large deviation estimates for families of random variables $X_i,Y_i$ of zero mean $\mathbf E X_i=\mathbf E Y_i=0$ and unit variance $\mathbf E \lvert X_i\rvert^2=\mathbf E \lvert Y_i\rvert^2=1$ and deterministic coefficients $b_i$, $a_{ij}$, \begin{equation}\left\lvert\sum_{i} b_i X_i\right\rvert\prec \left(\sum_i\lvert b_i\rvert^2\right)^{1/2}\notag\end{equation} \begin{equation}\left\lvert\sum_{i,j} a_{ij} X_i Y_j\right\rvert\prec \left(\sum_{i,j}\lvert a_{ij}\rvert^2\right)^{1/2}\label{LDE}\end{equation} \begin{equation}\left\lvert\sum_{i\not=j} a_{ij} X_i X_j\right\rvert\prec \left(\sum_{i\not=j}\lvert a_{ij}\rvert^2\right)^{1/2}\notag\end{equation} We proved \eqref{LDE} only for uniformly subgaussian families of random variables but not that uniformly finite moments of all orders are also sufficient for them to hold. Precise statement of the local semicircle laws (entrywise, isotropic, averaged) for Wigner type matrices with moment condition of arbitrary order. Definition of stochastic dominations, some properties. 
We started the proof of the weak law for Wigner matrices. Schur complement formula. Almost selfconsistent equation for $m_N = N^{-1} \text{Tr}\, G$ assuming that the fluctuation of the quadratic term is small (will be proven later). The other two errors were shown to be small. The smallness of the single diagonal element $h_{ii}$ directly follows from the moment condition. The difference of the Stieltjes transform of the resolvent and its minor was estimated via interlacing and integration by parts. Reading. PCMI lecture notes Section 3.1.1. Proof of the weak local law in the bulk. Stability of the equation for $m_{sc}$, the Stieltjes transform of the semicircle law. Proof for the large eta regime. Breaking the circularity of the argument in two steps: In the first step one proves a weaker bound that allows one to approximate $m$ via $m_{sc}$, then run the argument again but with improved inputs. The bootstrap argument will have the same philosophy next time. Discussion of the uniformity in the spectral parameter. Grid argument to improve the bound for supremum over all $z$. This argument works because (i) the probabilistic bound for any fixed $z$ is very strong (arbitrary $1/N$-power) and (ii) the function we consider $(m-m_{sc})(z)$ has some weak but deterministic Lipschitz continuity. November 14 (Recitation) We presented a cumulant approach to proving local laws for correlated random matrices. Specifically, we gave a heuristic argument that the resolvent $G$ should be well apprixmated by the unique solution $M=M(z)$ to the matrix Dyson equation (MDE) \begin{equation}0=1+zM+\mathcal S[M]M, \quad \Im M>0,\qquad \mathcal S[R]:= \sum_{\alpha,\beta}\text{Cov}(h_\alpha,h_\beta) \Delta^\alpha R\Delta^\beta.\notag\end{equation} We furthermore proved that the error matrix \begin{equation}D=1+zG+\mathcal S[G]G=HG+\mathcal S[G]G\notag\end{equation} satisfies \begin{equation}\mathbf E\lvert\langle x,Dy \rangle\rvert^2 \lesssim \left(\frac{\lVert x\rVert \lVert y\rVert}{\sqrt{N\eta}}\right)^2,\qquad \mathbf E\lvert\langle BD \rangle\rvert^2 \lesssim \left(\frac{\lVert B\rVert}{N\eta}\right)^2\notag\end{equation} in the case of Gaussian entries $h_\alpha$. We completed the rigorous proof of the weak local semicircle law by the bootstrap argument. Then we mentioned two improvements: (i) Strong local law (error bound improved from $(N \eta)^{-1/2}$ to $(N\eta)^{-1}$ and (ii) entrywise local law. Proof of the entrywise local law via the self-consistent vector equation. Stability operator mentioned in the more general setup of Wigner type matrices (when the variance matrix $S$ is stochastic). Diagonal and offdiagonal elements are estimated separately via a joint control parameter $\Lambda$. Main ideas are sketched, the rigorous bootstrap argument was omitted. Reading. PCMI lecture notes Sections 4.1–4.3 Fluctuation averaging phenomenon. Proof of the strong local law in the bulk. Some remarks on the modifications at the edge. Corollaries of the strong local law: optimal estimates on the eigenvalue counting function and rigidity (location of the individual eigenvalues). Bulk universality for Hermitian Wigner matrices. Basic idea: interpolation. Ornstein Uhlenbeck process for matrices (preserves expectation and variance). Crash course on Brownian motion, stochastic integration and Ito's formula. Dyson Brownian motion (DBM) for the eigenvalues. Local equilibration phenomenon due to the strong level repulsion in the DBM. 
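The weak and strong local semicircle laws sketched above are easy to probe numerically. The following minimal sketch is not part of the course materials; the matrix size, the bulk energy E = 0.3, and the values of the spectral parameter η are arbitrary. It samples one GOE-type Wigner matrix and compares the Stieltjes transform of the empirical eigenvalue distribution, m_N(z) = N^{-1} Tr(H − z)^{-1}, with the Stieltjes transform m_sc(z) of the semicircle law, for η well below the macroscopic scale but still much larger than the eigenvalue spacing 1/N.

```python
import numpy as np

def m_sc(z):
    # Stieltjes transform of the semicircle law on [-2, 2]; it solves m^2 + z m + 1 = 0.
    m = (-z + np.sqrt(z * z - 4)) / 2
    return m if m.imag > 0 else (-z - np.sqrt(z * z - 4)) / 2  # pick the branch with Im m > 0

rng = np.random.default_rng(1)
N = 2000
A = rng.standard_normal((N, N))
H = (A + A.T) / np.sqrt(2 * N)           # GOE-type Wigner matrix with E|h_ij|^2 = 1/N
evals = np.linalg.eigvalsh(H)

for eta in (1.0, 0.1, 0.01):             # 0.01 is still much larger than 1/N = 5e-4
    z = 0.3 + 1j * eta                   # bulk energy E = 0.3
    m_emp = np.mean(1.0 / (evals - z))   # (1/N) Tr (H - z)^{-1}
    print("eta =", eta, "| m_N(z) =", np.round(m_emp, 4), "| m_sc(z) =", np.round(m_sc(z), 4))
```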
We summarized the three step strategy to prove local spectral universality of Wigner matrices. We discussed the second step: fast convergence to equilibrium of the Dyson Brownian Motion. Relation between SDE and PDE: introduction of the generator. Laplacian is the generator of the standard Brownian motion. Basics of large dimensional analysis: Gibbs measure, entropy, Dirichlet form, generator. The total mass of a probability measure is preserved under the dynamics. Relation between various concepts of closeness to equilibrium. Entropy inequality (total variation norm is bounded by the entropy). Logarithmic Sobolev inequality. Spectral gap inequality. Bakry–Émery theory: (i) the Gibbs measure with a convex Hamiltonian satisfies LSI, (ii) entropy and Dirichlet form decay exponentially fast.
The problem sheets can either be handed in during the lecture or put in the letter box of Dominik Schröder in LBW, 3rd floor.
Problem sheet I (Solutions): Oct 18, Oct 25
Problem sheet II (Solutions): Oct 25, Nov 7
Problem sheet III (Solutions): Nov 9, Nov 21
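As a companion to the lecture summaries above, here is a small self-contained numerical experiment (not part of the course materials) illustrating the gap universality that the three-step strategy is designed to prove: after unfolding by the semicircle density, the nearest-neighbour eigenvalue spacings of a GUE-type Wigner matrix follow the Wigner surmise p(s) = (32/π²) s² exp(−4s²/π) quite closely. The matrix size, the number of samples, and the binning are arbitrary choices.

```python
import numpy as np

rng = np.random.default_rng(2)
N, trials = 400, 50
rho_sc = lambda x: np.sqrt(np.maximum(4 - x * x, 0.0)) / (2 * np.pi)   # semicircle density

spacings = []
for _ in range(trials):
    M = rng.standard_normal((N, N)) + 1j * rng.standard_normal((N, N))
    H = (M + M.conj().T) / (2 * np.sqrt(N))        # GUE-type Wigner matrix, E|h_ij|^2 = 1/N
    ev = np.linalg.eigvalsh(H)
    bulk = ev[N // 4: 3 * N // 4]                  # keep bulk eigenvalues, away from the edges
    s = np.diff(bulk) * N * rho_sc(bulk[:-1])      # unfold: rescale by the local mean spacing
    spacings.append(s)

s = np.concatenate(spacings)
print("mean unfolded spacing:", round(float(s.mean()), 3))   # should be close to 1

# Compare the empirical spacing histogram with the (GUE) Wigner surmise.
hist, edges = np.histogram(s, bins=np.linspace(0, 3, 13), density=True)
mid = (edges[:-1] + edges[1:]) / 2
surmise = 32 / np.pi**2 * mid**2 * np.exp(-4 * mid**2 / np.pi)
for m, h, w in zip(mid, hist, surmise):
    print(f"s = {m:.2f}   empirical {h:.3f}   surmise {w:.3f}")
```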
CommonCrawl
Search Results: 1 - 10 of 5216 matches for " Cassie Anderson " Page 1 /5216 What Are the Characteristics of Arabinoxylan Gels? [PDF] Cassie Anderson, Senay Simsek Food and Nutrition Sciences (FNS) , 2018, DOI: 10.4236/fns.2018.97061 Abstract: Arabinoxylan gels are commonly characterized to determine the feasibility of utilizing them in numerous applications such as drug delivery systems. The general characteristics of numerous types of arabinoxylan gels as well as their susceptibility to degradation are discussed in this manuscript. There are two main types of arabinoxylan: water-extractable and alkali-extractable. The physicochemical characteristics of the arabinoxylan determine its extractability and gelling characteristics. Gels can be created from numerous types of arabinoxylan including wheat (Triticum aestivum L.) and maize (Zea mays L.). These gels can also be developed with the addition of protein and/or β-glucan, which results in modified mechanical properties of the gels. To create a sound gel, arabinoxylan must be cross-linked, which is often done through ferulic acid. When this takes place, the gel developed is thermo-irreversible, unsusceptible to pH and electrolyte interactions, and does not undergo syneresis during storage. Despite these strengths, arabinoxylan gels can be broken down by the enzymes produced by Bifidobacterium, which is present in the human large intestine. After further development and research on these gels, they could be utilized for many purposes. The infrared dielectric function of solid para-hydrogen Cassie Kettwich,David Anderson,Mark Walker,Artem Tuntsov Abstract: We report laboratory measurements of the absorption coefficient of solid para-H2, within the wavelength range from 1 to 16.7 micron, at high spectral resolution. In addition to the narrow rovibrational lines of H2 which are familiar from gas phase spectroscopy, the data manifest double transitions and broad phonon branches that are characteristic specifically of hydrogen in the solid phase. These transitions are of interest because they provide a spectral signature which is independent of the impurity content of the matrix. We have used our data, in combination with a model of the ultraviolet absorptions of the H2 molecule, to construct the dielectric function of solid para-H2 over a broad range of frequencies. Our results will be useful in determining the electromagnetic response of small particles of solid hydrogen. The dielectric function makes it clear that pure H2 dust would contribute to IR extinction predominantly by scattering starlight, rather than absorbing it, and the characteristic IR absorption spectrum of the hydrogen matrix itself will be difficult to observe. The Potential of Photo-Talks to Reveal the Development of Scientific Discourses [PDF] Cassie Quigley, Gayle Buck Creative Education (CE) , 2012, DOI: 10.4236/ce.2012.32033 Abstract: This study explores the potential of a photo-elicitation technique, photo-talks (Serriere, 2010), for understanding how young girls understand, employ and translate new scientific discourses. Over the course of a nine week period, 24 kindergarten girls in an urban girls' academy were observed, videotaped, photographed and interviewed while they were immersed into scientific discourse. This paper explicitly describes how their emerging discursive patterns were made visible through this methodological tool. 
The findings are presented in vignettes in three themes uncovered during our analysis which are the following: Presented the recollection of the scientific Discourse, Described the understanding of scientific Discourse, and Created an opportunity for the translation into everyday discourse. Science educators can benefit from this methodological tool as a reflective tool with their participants, to validate and/or complicate data. Additionally, this methodological tool serves to make discourse patterns more visible by providing a visual backdrop to the conversations thus revealing the development as it is occurring in young children. The perceptions of teachers and school principals of each other's disposition towards teacher involvement in school reform Cassie Swanepoel South African Journal of Education , 2008, Abstract: Worldwide teachers are faced with the task of continuously facilitating and implementing educational reform that has been designed without their participation. This exclusion of the key agents, who must mediate between the change agenda and actual change in the classroom, from the planning and decision-making processes, is detrimental to educational reform. Although school-based management has recently emerged as the instrument to accomplish the decentralisation of decision-making powers to school level, the success thereof depends largely on school principals' disposition regarding teacher involvement. It is argued that the expectation of principals regarding their own leadership role, as well as the professional role teachers should fulfil, is a primary determinant of principals' willingness to involve teachers in responsibility-taking processes outside the classroom. The results from an empirical investigation revealed that principals' perception, of the wishes of teachers regarding involvement, significantly underestimated teachers' actual involvement wishes. Likewise, the expectation of teachers regarding the willingness of principals to involve them was a significant underestimation of the involvement level principals are actually in favour of. These misperceptions probably discourage actual school-based management and could jeopardize the implementation of educational reform in general. A comparison between the views of teachers in South Africa and six other countries on involvement in school change Abstract: Worldwide, and especially in South Africa, change and decentralised decision-making have been topical issues in the provision of education for the past years. It appears that teachers - the key agents in implementing the policies concerned - are largely ignored in the pre-implementation phases, and treated merely as implementers of these policies. The results from an empirical investigation revealed that the teachers in the South African sample expressed an exceptional degree of eagerness to be involved in decision-making and responsibility-taking concerning school change, even in aspects of management that could be considered as the principal's 'turf'. Although the views of a group of teachers in six other countries showed very similar result patterns, the sample of South African teachers was considerably more eager to be involved in initiatives of school change and related responsibilities than the teachers in the samples of the other countries. The results are illuminating, taking into consideration the increased workload of teachers, as well as certain other factors. Possible explanations for these observations are discussed. 
Globalization and Science Education: The Implications for Indigenous Knowledge Systems Cassie Quigley International Education Studies , 2009, DOI: 10.5539/ies.v2n1p76 Abstract: Much of the current diversity literature in science education does not address the complexity of the issues of indigenous learners in their postcolonial environments and calls for a "one size fits all" instructional approach (Lee, 2001). Indigenous knowledge needs to be promoted and supported. There is currently a global initiative of maintaining worldviews, languages, and environments of which science education can be a part (McKinley, 2007). This paper is organized around five main topics that further guide the theoretical framework for this important area: a) describing postcolonialism and indigeneity related to science education, b) defining the terms indigenous knowledge, traditional ecological knowledge, c) western modern science and the effects of globalization on these terms d) examining the research on learning implications of IK and/or TEK in classrooms with a focus on the research into student learning in indigenous language, e) connecting place-based education to curricular implications for indigenous knowledge systems. The involvement of teachers in school change: a comparison between the views of school principals in South Africa and nine other countries Cassie Swanepoel, Johan Booyse Abstract: Previous international studies, in which the authors participated, have revealed that involvement of teachers in decision-making and responsibility-taking processes is crucial for their receptiveness towards implementation of current and future educational change. It is also evident that the role and responsibilities of school principals have changed significantly over the last decade or two. An indication was obtained of the views of South African secondary school principals regarding the involvement of their teachers in processes of school change and these were compared to the views of school principals from other countries. The results for the South African sample, as well as those for the other nine countries, showed that there was fairly strong support for the involvement of teachers in most school-change activities. It also appeared that, in comparison to other countries, principals in the South African sample occupied a middle position in all of four clusters of possible activities, as well as for the mean questionnaire score. South African Journal of Education Vol. 26(2) 2006: 189-198 Levantamento de plantas daninhas aquáticas no reservatório de Alagados, Ponta Grossa - PR Dalva Cassie Rocha Planta Daninha , 2011, Local heuristics and an exact formula for abelian surfaces over finite fields Jeff Achter,Cassie Williams Mathematics , 2014, Abstract: Consider a quartic $q$-Weil polynomial $f$. Motivated by equidistribution considerations we define, for each prime $\ell$, a local factor which measures the relative frequency with which $f\bmod \ell$ occurs as the characteristic polynomial of a symplectic similitude over $\mathbb{F}_\ell$. For a certain class of polynomials, we show that the resulting infinite product calculates the number of principally polarized abelian surfaces over $\mathbb{F}_q$ with Weil polynomial $f$. The effect of a word processor as an accommodation for students with learning disabilities Cassie L. 
Berger,Larry Lewandowski Journal of Writing Research , 2013, Abstract: The effects of writing format (handwritten (HW) versus word processor (WP)) were examined in a sample of college students with and without learning disabilities (LD). All students wrote two essays, one in each format, scored for quality and length. Groups did not differ in age, gender, ethnicity, mathematical calculation, writing fluency, essay length or essay quality. The "interaction hypothesis" was not supported, in that the use of a word processor as a writing accommodation did not provide a differential boost to students with LD. Both groups produced longer essays in the WP versus HW condition. The best predictor of essay quality was essay length regardless of writing format. Most students in each group preferred the WP format. Interestingly, a smaller percentage of students in the LD group (72%) than NLD group (91%) used the available time for writing.
CommonCrawl
Exact and Approximate Algorithms for Computing Betweenness Centrality in Directed Graphs Haghir Chehreghani, Mostafa Bifet, Albert Abdessalem, Talel Graphs (networks) are an important tool to model data in different domains. Real-world graphs are usually directed, where the edges have a direction and they are not symmetric. Betweenness centrality is an important index widely used to analyze networks. In this paper, first given a directed network $G$ and a vertex $r \in V(G)$, we propose an exact algorithm to compute betweenness score of $r$. Our algorithm pre-computes a set $\mathcal{RV}(r)$, which is used to prune a huge amount of computations that do not contribute to the betweenness score of $r$. Time complexity of our algorithm depends on $|\mathcal{RV}(r)|$ and it is respectively $\Theta(|\mathcal{RV}(r)|\cdot|E(G)|)$ and $\Theta(|\mathcal{RV}(r)|\cdot|E(G)|+|\mathcal{RV}(r)|\cdot|V(G)|\log |V(G)|)$ for unweighted graphs and weighted graphs with positive weights. $|\mathcal{RV}(r)|$ is bounded from above by $|V(G)|-1$ and in most cases, it is a small constant. Then, for the cases where $\mathcal{RV}(r)$ is large, we present a simple randomized algorithm that samples from $\mathcal{RV}(r)$ and performs computations for only the sampled elements. We show that this algorithm provides an $(\epsilon,\delta)$-approximation to the betweenness score of $r$. Finally, we perform extensive experiments over several real-world datasets from different domains for several randomly chosen vertices as well as for the vertices with the highest betweenness scores. Our experiments reveal that for estimating betweenness score of a single vertex, our algorithm significantly outperforms the most efficient existing randomized algorithms, in terms of both running time and accuracy. Our experiments also reveal that our algorithm improves the existing algorithms when someone is interested in computing betweenness values of the vertices in a set whose cardinality is very small. arXiv e-prints arXiv:1708.08739 2017arXiv170808739H Computer Science - Data Structures and Algorithms; Computer Science - Social and Information Networks Fundamenta Informaticae, Volume 182, Issue 3 (November 18, 2021) fi:8624
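The exact algorithm of the paper and its $\mathcal{RV}(r)$-based pruning are not reproduced here. As a hedged illustration of the general idea of estimating the betweenness score of a single vertex by sampling, the sketch below implements a generic source-sampling estimator built on Brandes-style dependency accumulation, the kind of baseline such randomized methods are usually compared against, not the authors' method. It assumes an unweighted directed graph given as a dict of adjacency lists; the function names, the toy graph, and the sample size are illustrative choices.

```python
from collections import deque, defaultdict
import random

def single_source_dependencies(adj, s):
    """BFS from s with Brandes-style back-propagation; returns delta_s(v),
    the sum over targets t of the fraction of shortest s->t paths through v."""
    sigma = defaultdict(int)          # number of shortest paths s -> v
    sigma[s] = 1
    dist = {s: 0}
    preds = defaultdict(list)         # shortest-path predecessors
    order = []
    q = deque([s])
    while q:
        v = q.popleft()
        order.append(v)
        for w in adj.get(v, ()):
            if w not in dist:
                dist[w] = dist[v] + 1
                q.append(w)
            if dist[w] == dist[v] + 1:
                sigma[w] += sigma[v]
                preds[w].append(v)
    delta = defaultdict(float)
    for w in reversed(order):         # accumulate dependencies backwards
        for v in preds[w]:
            delta[v] += sigma[v] / sigma[w] * (1 + delta[w])
    return delta

def estimate_betweenness_of(adj, nodes, r, samples=200, seed=0):
    """Unbiased estimate of the (unnormalized) betweenness of r from sampled sources."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(samples):
        s = rng.choice(nodes)
        if s != r:
            total += single_source_dependencies(adj, s).get(r, 0.0)
    return len(nodes) * total / samples

# toy example: a directed path 0 -> 1 -> 2 -> 3 plus a shortcut edge 0 -> 3
adj = {0: [1, 3], 1: [2], 2: [3], 3: []}
print(estimate_betweenness_of(adj, list(adj), r=1, samples=1000))  # exact value is 1.0
```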
CommonCrawl
Point distributions in unit square which minimize E[1 / distance]

Choose $n$ points $p_1,\ldots,p_n$ in the unit square $[0,1]^2\subset\mathbb{R}^2$ such that $D:=\mathop{\sum}\limits_{1\le i<j\le n}\frac{1}{dist(p_i,p_j)}$ is minimized, where $dist(p_i,p_j)$ is the Euclidean distance between $p_i$ and $p_j$. What is the magnitude of $D$ in $n$? I have worked out that $D\le\Omega(n\log(n))$ by construction.

mg.metric-geometry discrete-geometry
asked by Zuo Ye; edited Jul 12 at 9:27 by Matt F.

Comments:
Will you share the construction? I would expect there to be some limiting continuous distribution where $E[1/dist]=c$, and then to have $D_n$ of order $cn^2$. – Matt F. Jul 12 at 9:43
It seems strange: Deterministically, $\operatorname{dist}(p_i, p_j)\leq \sqrt{2}$ which implies roughly $D\geq n^2/3$. – Dmitry Krachun Jul 12 at 17:27
@MattF. If, say, the limiting density $\mu$ is continuous with respect to the Lebesgue measure, then the integral of $1/|x-y|$ against $\mu\times \mu$ is infinite. Also, the divergence is logarithmic, so my guess is that the answer is $\Omega(n^2\log{n})$. – Dmitry Krachun Jul 12 at 17:30
I asked a similar question here: math.stackexchange.com/questions/3050869/…. For the $1$ dimensional case, you indeed have a $n^2\log(n)$ lower bound and it can be found in the book 'The Cauchy-Schwarz master class', exercise 8.9. However, the technique there does not generalize to even two dimensions. – Sandeep Silwal Jul 12 at 19:55
@DmitryKrachun Why? We are on the plane, not on the line, so the critical power is now $2$, not $1$. The question essentially is what is the capacity of the unit square and what is the equilibrium measure with respect to the kernel $\frac 1{|x-y|}$. – fedja Jul 13 at 1:25
Another interesting scenario is when $s=d$ (in this case $E_{s}(A)\sim C n^{2} \ln n$); and when $s>d$ (in this case $E_{s}(A) \sim C N^{1+\frac{s}{d}}$). You may look at Hardin, D. P.; Saff, E. B., Discretizing manifolds via minimum energy points, Notices Am. Math. Soc. 51, No. 10, 1186-1194 (2004). ZBL1095.49031. and references therein. edited Jul 13 at 20:16 Paata IvanishviliPaata Ivanishvili Zuo Ye is a new contributor. Be nice, and check out our Code of Conduct. Not the answer you're looking for? Browse other questions tagged mg.metric-geometry discrete-geometry or ask your own question. Honeycomb-type properties of the Delaunay triangulation and Voronoi diagram How to interpolate in 3-D non-euclidean space? How well do random projections preserve the distance between a point and a linear subspace? What fraction of n-point sets in the unit ball have diameter smaller than 1? Find a line such that sum of perpendicular distances of points to the line is minimized Reduction to some physical interpretation of this formula What is the minimal number of lines needed to partition a simplex into cells of diameter at most $\epsilon$? What is the maximal diameter of a cell in a particular partition of the simplex? distance distributions on a hypersphere? A possible characterization of the cube?
CommonCrawl
Model-based adaptive phase I trial design of post-transplant decitabine maintenance in myelodysplastic syndrome Seunghoon Han1, 2, Yoo-Jin Kim3, Jongtae Lee1, 2, Sangil Jeon1, 2, Taegon Hong1, 2, Gab-jin Park1, 2, Jae-Ho Yoon3, Seung-Ah Yahng3, Seung-Hwan Shin3, Sung-Eun Lee3, Ki-Seong Eom3, Hee-Je Kim3, Chang-Ki Min3, Seok Lee3 and Dong-Seok Yim1, 2Email author Journal of Hematology & Oncology20158:118 © Han et al. 2015 This report focuses on the adaptive phase I trial design aimed to find the clinically applicable dose for decitabine maintenance treatment after allogeneic hematopoietic stem cell transplantation in patients with higher-risk myelodysplastic syndrome and secondary acute myeloid leukemia. The first cohort (three patients) was given the same initial daily dose of decitabine (5 mg/m2/day, five consecutive days with 4-week intervals). In all cohorts, the doses for Cycles 2 to 4 were individualized using pharmacokinetic-pharmacodynamic modeling and simulations. The goal of dose individualization was to determine the maximum dose for each patient at which the occurrence of grade 4 (CTC-AE) toxicities for both platelet and neutrophil counts could be avoided. The initial doses for the following cohorts were also estimated with the data from the previous cohorts in the same manner. In all but one patient (14 out of 15), neutrophil count was the dose-limiting factor throughout the cycles. In cycles where doses were individualized, the median neutrophil nadir observed was 1100/mm3 (grade 2) and grade 4 toxicity occurred in 5.1 % of all cycles (while it occurred in 36.8 % where doses were not individualized). The initial doses estimated for cohorts 2 to 5 were 4, 5, 5.5, and 5 mg/m2/day, respectively. The median maintenance dose was 7 mg/m2/day. We determined the acceptable starting dose and individualized the maintenance dose for each patient, while minimizing the toxicity using the adaptive approach. Currently, 5 mg/m2/day is considered to be the most appropriate starting dose for the regimen studied. Clinicaltrials.gov NCT01277484 Model-based drug development Adaptive design Population pharmacokinetics-pharmacodynamics Phase I clinical trial DNA methylation is the best-known epigenetic marker for cancer development [1]. In some hematologic malignancies including myelodysplastic syndrome (MDS), DNA methylation results not only in increased cell proliferation but also in silencing of genes which regulate growth and differentiation [2]. Based upon those mechanisms, the use of a DNA hypomethylating agent (HMA) for hematologic malignancies has been expanded. Accordingly, clinical researches to optimize HMA therapy [3, 4] or to explore epigenetic mechanisms for new drug development have been widely performed [5, 6]. Decitabine (Dacogen®, 5-aza-2′-deoxycytidine) is a HMA that exerts its antitumor activity by inhibiting DNA methylation at low doses and by arresting DNA synthesis at high doses [7, 8]. For several decades, decitabine has been one of the most intensely studied anticancer agents in the field of hematology due to its sophisticated development history [9–11], as well as its impressive clinical outcomes against many hematologic diseases [7, 9, 12–17]. For MDS, the approved indication for decitabine, numerous efforts have been made to optimize the dosing regimen according to patient characteristics, including the regimen evaluated in this study (five consecutive days of dosing with 4-week interval) [12, 18–22]. 
Recently, HMA maintenance therapy after allogeneic hematopoietic stem cell transplantation (allo-HSCT) has been suggested as a potentially attractive approach to minimize relapse and to improve graft survival [23–25]. Several studies on azacitidine (Vidaza®, 5-azacytidine) reported low toxicity, along with its potential to increase the number of hematopoietic stem cells [11, 26–29]; thus, similar approaches using decitabine were initiated [22]. In this context, we designed and performed a phase I study that aimed to find a clinically applicable dosage regimen for decitabine maintenance treatment after allo-HSCT in patients with higher-risk MDS and secondary acute myeloid leukemia (AML). Our study design incorporated two major considerations: (1) the purpose of the maintenance therapy was to maintain disease-free status in the patient while simultaneously preserving graft function, and (2) the dosage regimen should be determined using the smallest number of patients possible. Considering these aspects, without a confident estimation of the appropriate starting dose, traditional fixed-dose escalation schemes [30] were considered inappropriate for the following reasons: (1) fatal toxicity (e.g., graft failure) might occur in some subjects, (2) the study might need too many patients to find the optimal dose [31], and (3) dose differences between cohorts might be too large or small. Thus, we introduced an adaptive dose individualization design based upon pharmacokinetic (PK)-pharmacodynamic (PD) modeling for the neutropenia and thrombocytopenia caused by decitabine. Dose individualization of anticancer drugs using PK-PD modeling has been theoretically proposed using simulated data [32, 33]; however, our report is the first to implement dose individualization using PK-PD modeling in patients in a phase I clinical trial. We endeavored to titrate the appropriate dose for each patient, with the goal of identifying the highest possible dose that did not result in severe hematologic toxicities. We also anticipated that this approach would more quickly accomplish the study's objectives and avoid having to test several cohorts for the dose escalation. This report focuses on the study design, the PK-PD model development for hematologic toxicities caused by decitabine, and the usefulness of our adaptive approach as it applies to subject safety. Patient characteristics Five patients with secondary AML evolving from MDS and 11 with MDS (9 males, 7 females) were enrolled (Table 1). All the patients received the myeloablative condition regimen and peripheral blood stem cells from the related (n = 6) or unrelated (n = 10) donors. The engraftment achievement of platelet and neutrophil counts was confirmed for all patients by an experienced hematologist upon enrollment. Graft-versus-host disease (GVHD) prophylaxis was calcineurin inhibitors (cyclosporine for related and tacrolimus for unrelated donors) plus short-course methotrexate. Antithymocyte globulin was given to all patients. Decitabine was administered at a median of 86 days (range, 56–90 days) after transplantation. At the time of decitabine treatment, acute (≤ overall grade 2) or chronic GVHD was observed in nine and one patients, respectively. The clinical features are given in Table 2. 
[Table 1. Patient characteristics: Age (year) 41.0 ± 17.1 / 55.7 ± 5.8; Sex (male/female); 1.68 ± 0.05; Body surface area (m2) 1.6 ± 0.1. The row/column structure of this table was lost in extraction.]

[Table 2. Patient characteristics and doses given in each subject, cohort, and cycle: subject number, sex/age, WHO diagnosis, GVHD grade (assessed at the time of decitabine initiation), donor type, and dose per cycle (mg/m2/day for 5 days). Abbreviations: GVHD, graft-versus-host disease; RAEB, refractory anemia with excess blast; MSD, matched sibling donor; PMUD, partially matched unrelated donor; MUD, matched unrelated donor. Footnotes: (b) individual dose titration (IDT) by the PK-PD model was not applied; (c) cycles where grade 4 toxicities occurred.]

Patient disposition and dataset

Patient dispositions are detailed in Fig. 1. In cohort 1, the third patient dropped out of the study without PD sampling; thus, we substituted an additional patient, since PK-PD results from three patients were needed to obtain the initial dose for cohort 2. Fourteen patients completed all the study-related procedures until Cycle 4, and the maintenance dose was determined for each patient at the end of Cycle 4 (Table 2).

[Fig. 1. Patient disposition.]

For each subject, PK sampling was performed according to the protocol, and the average number of PD observations used in individual dose titration (IDT) was 5.76/cycle for both neutrophils and platelets. Among the 58 treatment cycles of the 15 patients, the doses for Cycles 2 to 4 (a total of 39 cycles) were determined through PK-PD model-based adaptive dose individualization. Cycle 2 doses in four patients were clinically determined for the following reasons: no significant blood cell count decrease after Cycle 1 (subjects 8 and 10), and not enough time for PK-PD modeling and IDT owing to sudden changes in visit schedules for Cycle 2 dosing (subjects 11 and 12). The actual dosing interval was 34.5 ± 8.7 days (mean ± SD).

Estimated doses and safety outcomes

In all but one patient (14 out of 15), the absolute neutrophil count (ANC) was the dose-limiting factor throughout all cycles. During the cycles in which IDT was performed, the median ANC nadir observed was 1100/mm3 (range, 300/mm3 to 2680/mm3). The maintenance dose determined with four cycles of data was higher than the initial dose in 10 out of the 15 patients. The initial doses (Cycle 1 doses) estimated by cohort dose estimation (CDE) were 4, 5, 5.5, and 5 mg/m2/day for cohorts 2, 3, 4, and 5, respectively. The median individual maintenance dose of decitabine was 7 mg/m2/day (Table 2). Maintenance doses for the patients whose Cycle 1 data were inadequate for PK-PD modeling could be estimated using three cycles of data (Cycles 2, 3, and 4) with acceptable model fits. A total of nine dose-limiting toxicities (DLT; platelet count for one case and absolute neutrophil count for eight cases) were observed. Among these toxicities, seven cases occurred in non-IDT cycles (six in Cycle 1 and one in Cycle 2 with a clinically determined dose). Overall, 36.8 % of the non-IDT cycles (7 out of 19 cycles) showed dose-limiting toxicities, an occurrence rate approximately seven times higher than that observed in the IDT cycles (5.1 %, 2 out of 39 cycles).

Overall mixed-effect PK-PD analysis

A total of 95 PK observations and 622 PD observations (311 for ANC and 311 for platelet count, PC) were used in the overall mixed-effect PK-PD analysis. The one patient whose dose-limiting factor was PC was excluded from this analysis, because her disease entity was considered not to be similar to that of the others: she suffered from immune thrombocytopenia after transplantation and was managed with steroids.
Among the data, 6.9 % (4 out of 58 cycles) was obtained from the cycles where IDT was not applied. A two-compartment model was found to best describe the PK data. The between-subject variability (BSV) for CL (clearance from the central compartment) was the only random effect which could be estimated, except for the proportional residual error. The basic structure of the PD model was identical to that used for IDT and CDE for both PC and ANC (transit compartment model with feedback mechanism):
$$ \begin{aligned} \frac{dA(1)}{dt} &= k_{\mathrm{tr}}\cdot A(1)\cdot \left\{\left(1-\mathrm{SLOPE}\cdot C\right)\cdot \left(\mathrm{BASE}/A(5)\right)^{\mathrm{GAMMA}}-1\right\}\\ \frac{dA(2)}{dt} &= k_{\mathrm{tr}}\cdot \left(A(1)-A(2)\right)\\ \frac{dA(3)}{dt} &= k_{\mathrm{tr}}\cdot \left(A(2)-A(3)\right)\\ \frac{dA(4)}{dt} &= k_{\mathrm{tr}}\cdot \left(A(3)-A(4)\right)\\ \frac{dA(5)}{dt} &= k_{\mathrm{tr}}\cdot \left(A(4)-A(5)\right) \end{aligned} $$
where A(N) is the cell count in the Nth compartment and C is the plasma decitabine concentration. A detailed description for the parameters is presented in Table 3. BASE is a parameter indicating the level of cell count maintained at baseline or at the period without drug effect. For platelets, an asymptotic structure describing gradual cell count increase over cycles improved the model significantly, and thus the following structure substituted the simple BASE parameter:
$$ \mathrm{BASE}_{P}+\mathrm{IMP}\cdot \left(1-e^{-\mathrm{IMK}\cdot \mathrm{TIME}}\right) $$
where IMP is the empirical value of the maximum PC recovery expected, IMK is the rate constant for asymptotic PC recovery, and TIME is the time from the initiation of decitabine treatment. No meaningful covariate was found in either the patient demographic or clinical variables. The parameter descriptions and estimates are given in Table 3.

[Table 3. Final parameter estimates and bootstrap outcomes (population typical value, between-subject variability as CV%, bootstrap median with 95 % CI). Pharmacokinetic parameters: CL (L/h·m2) 88.3 (72.2–108), BSV 20.5 (13.0–26.8); Vc, volume of the central compartment; Vp, volume of the peripheral compartment; intercompartmental clearance. Platelet PD parameters: k_tr,P (h−1), rate constant of inter-compartmental platelet movement, 0.0246 (0.0236–0.0254); SLOPE_P, drug effect on platelet count, 20.8 (1.30–56.0); BASE_P (/mm3), baseline platelet count, 53,700 (36,100–95,800); GAMMA_P, shape factor for platelet count fluctuation, 0.299 (0.264–0.325); maximum degree of platelet count recovery expected (IMP), 58,700 (24,200–98,300), BSV 72.9 (19.5–150); rate constant for asymptotic platelet count recovery (IMK), 0.000513 (0.000213–0.000691). Neutrophil PD parameters: k_tr,N, SLOPE_N, BASE_N, GAMMA_N (values lost in extraction). Residual error variances: proportional for PK, additive for platelet count and for neutrophil count (values lost in extraction). Proportion of successful bootstrap convergence: 78.8 % for the PK model, 78.0 % for the PD model. NE, not estimated.]
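To illustrate how the transit-compartment structure above behaves, here is a minimal Python sketch that integrates the five ODEs for the ANC with SciPy. The parameter values, the crude infusion-plus-decay concentration profile and the six-week horizon are arbitrary illustrative assumptions; this is not the authors' NONMEM model and the numbers are not the fitted estimates of Table 3.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Illustrative values only (not the estimates of Table 3)
K_TR = 0.03       # 1/h, transit rate constant
SLOPE = 0.02      # drug effect per (ng/mL)
BASE = 2500.0     # cells/mm3, baseline ANC
GAMMA = 0.2       # feedback exponent

def conc(t):
    """Very rough decitabine profile (ng/mL): 1-h infusions on days 0-4,
    ~60 ng/mL at the end of infusion, terminal half-life ~0.31 h."""
    day, hour = divmod(t, 24.0)
    if day >= 5:
        return 0.0
    if hour <= 1.0:
        return 60.0 * hour                      # ramp up during the infusion
    return 60.0 * 0.5 ** ((hour - 1.0) / 0.31)  # rapid post-infusion decay

def transit_model(t, A):
    C = conc(t)
    feedback = (BASE / A[4]) ** GAMMA
    dA = np.empty(5)
    dA[0] = K_TR * A[0] * ((1.0 - SLOPE * C) * feedback - 1.0)
    for i in range(1, 5):
        dA[i] = K_TR * (A[i - 1] - A[i])        # maturation chain A(2)..A(5)
    return dA

sol = solve_ivp(transit_model, (0.0, 6 * 7 * 24.0), [BASE] * 5, max_step=0.5)
i_min = sol.y[4].argmin()
print(f"predicted ANC nadir ~{sol.y[4][i_min]:.0f}/mm3 at day {sol.t[i_min] / 24:.1f}")
```

Replacing the circulating compartment's baseline with $\mathrm{BASE}_P + \mathrm{IMP}(1-e^{-\mathrm{IMK}\cdot \mathrm{TIME}})$ would reproduce the asymptotic platelet recovery described above.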
Simulated time courses of ANC changes, under the maintenance dosage of 5 mg/m2/day for four treatment cycles, are presented in Fig. 2. Prediction of neutrophil count change when 5 mg/m2 dose is given for five consecutive days with 4-week interval. (From 1000 simulations using the final PK-PD model) Clinical course and non-hematological events During four cycles of the dose-finding phase of this study, one patient (subject 3) died of pneumonia (protocol violation) while the other two (subject 1 and subject 11) also suffered from pneumonia but fully recovered. One of the three cases developed decitabine-induced neutropenia (subject 11, withdrawn). Aggravation of existing acute or chronic GVHD was not observed, while chronic GVHD was diagnosed in two patients (one in mild and the other in moderate form). Herpes zoster was a complication in three patients. We succeeded in administering the maximum dose allowed for each patient, with minimized toxicity. The dose for each cycle was determined based upon the observed cell counts in the previous cycle(s) which are the ultimate outcome of patient characteristics and drug effect. Thus, the dose can be considered as a reflection of the vulnerability of the graft, the sensitivity to decitabine, and any possible drug interactions affecting cell counts. This method meant that using a large number of cohorts, as typically required in the traditional dose escalation scheme, could be avoided. Moreover, the doses of four patients were reduced from their initial doses because of their relatively vulnerable PD characteristics. The treatment of these patients might have been discontinued if a traditional, fixed-dose design had been used. Most importantly, our study design showed the significant advantage that all dose individualization steps were accomplished with a favorable toxicity profile, judging from the proportion of cycles that exhibited grade 4 toxicities. When IDT was applied, the proportion of cycles exhibiting grade 4 toxicities dropped to approximately one-seventh the level (36.8 versus 5.1 %) compared with the non-IDT cycles. Thus, model-based dose individualization can be a useful option in early-phase clinical trial designs, in particular when the initial dose cannot be set with sufficient confidence. The PK properties of decitabine in Korean patients obtained here are similar to those in previous studies. Liu et al. [34] and Cashen et al. [35] reported that the PK properties of decitabine could be well described with a two-compartment model. The distributional characteristics from these two studies could be indirectly compared using the maximum concentration (C max) predicted upon the completion of decitabine infusion. From previous reports, the maximum concentration of decitabine was within the range of 60–70 ng/mL, which was obtained approximately 1 h after initiation of infusion, when decitabine was administered at a rate of 5 mg/m2/h (3-h infusion of a 15 mg/m2 dose) [35, 36]. This observation is consistent with our finding that the predicted C max after 1-h infusion of 5 mg/m2 was 66.0 ng/mL. In addition, the average terminal half-life was also similar (0.31 h in this study); thus, the decitabine concentration is predicted to drop below 5 % of C max within 1.5–2 h after the completion of infusion. The baseline cell count increase over cycles was modeled for platelet level. This was a consistent finding to the results from previous reports regarding the contribution of decitabine to cell proliferation [37–41]. 
For neutrophil counts, doses estimated by neutrophil count nadirs were gradually escalated over cycles until reaching the maintenance dose in ten patients while baseline cell count increase was not meaningful. Gradual deflation in the width of the prediction interval for ANC, resulting from improved precision of the model along with increased data points obtained throughout the cycles, seems to be one possible explanation. Dose escalation from this prediction interval deflation lowers the predicted median of course while maintaining the lower 25 % prediction interval above 500/mm3 (grade 4 toxicity). We also found it necessary to modify the interval between cycles that was initially planned as 4 weeks in this study. Although both PC and ANC were recovered to the baseline after decitabine dosing, our PK-PD model predicted that the time to nadir was 3.5 weeks and that the time to recovery from the influence of the last dose (ANC >1000/mm3) was approximately 5 weeks for the ANC. This prediction was consistent with the actual dosing interval practiced in this study (34.5 days on average). This finding implies that the 4-week interval may not be long enough to initiate the next cycle. Moreover, as illustrated in Fig. 2, the lowest value of ANC appears to be achieved in the second cycle (6–7 weeks after treatment initiation). Thus, the initial nadir of ANC within the first 4-week cycle should not be mistaken for the lowest ANC value throughout the cycles. This could also have been a reason for failure in dose determination if traditional fixed-dose escalation based on the first cycle nadir was recruited. To optimize the dosing regimen that may overcome this difficult property of decitabine, an initial loading dose may be considered before giving maintenance doses. We exemplified the adaptive dose titration approach, based upon a quantitative exposure-toxicity model, in this study. This approach seemed most useful, since this method enabled rapid and precise dose individualization. The most appropriate initial dose was determined to be 5 mg/m2/day for five consecutive days. Throughout the course of data analysis, issues such as extending between-cycle intervals and the use of loading doses were also raised. Cohort 6 is ongoing for exploration of the adequacy of the recommended starting dose, and additional report will be provided after completion of 12 cycles of treatment of all participants. Ethics, consent, and permission This study was designed and conducted in accordance with the principles of the Declaration of Helsinki and the good clinical practice guidelines of Korea. The independent institutional review board of Seoul St. Mary's Hospital approved this study protocol before the initiation of any study-related procedure, and written informed consent was obtained from every subject. The registration number of this trial at "ClinicalTrials.gov" is NCT01277484. 
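The per-cycle dose-selection step at the heart of this adaptive design, spelled out in the Methods below, can be summarised in a few lines: pick the highest candidate dose whose simulated 25th-percentile nadir stays above the grade 4 threshold, capped at 150 % of the previous dose. The Python sketch below is an illustration only; `simulate_anc_nadir` is a hypothetical stand-in for one draw from the individual PK-PD model, and the toy nadir model and numbers are not from the study.

```python
import numpy as np

def select_next_dose(simulate_anc_nadir, candidate_doses, prev_dose,
                     n_sim=500, anc_limit=500.0):
    """Return the highest candidate dose whose 25th-percentile simulated ANC
    nadir stays above the grade 4 threshold, capped at 150 % of the previous
    dose. `simulate_anc_nadir(dose)` is a hypothetical helper that draws one
    nadir from the individual PK-PD model."""
    best = None
    for dose in sorted(d for d in candidate_doses if d <= 1.5 * prev_dose):
        nadirs = np.array([simulate_anc_nadir(dose) for _ in range(n_sim)])
        if np.percentile(nadirs, 25) >= anc_limit:
            best = dose          # still safe at the lower quartile, keep escalating
        else:
            break                # higher doses would only push the nadir lower
    return best

# Toy usage: a made-up nadir model in which the nadir falls with dose
rng = np.random.default_rng(0)
toy_nadir = lambda dose: rng.normal(2200.0 - 220.0 * dose, 300.0)
print(select_next_dose(toy_nadir, np.arange(4.0, 12.0, 0.5), prev_dose=5.0))
```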
Patient eligibility Patients starting decitabine treatment on days 42–90 after allo-HSCT and meeting the following criteria were considered eligible: adult aged ≤65; recipient of allo-HSCT for higher-risk (intermediate 2 or high risk) MDS, as assessed by the International Prognostic Scoring System [42], and/or AML evolving from MDS; disease remission with appropriate recoveries of PC >30,000/mm3 and ANC >1000/mm3, both of which were maintained for more than 7 days without any transfusions or growth factors; absence of grade III/IV acute GVHD; Eastern Cooperative Oncology Group (ECOG) performance status of 0 to 2; and no evidence of renal or hepatic impairment. Patients were assigned to cohorts according to their order of enrollment. A cohort consisted of three patients to whom the same initial daily dose of decitabine (according to body surface area) was given. The initial dose for cohort 1 was 5 mg/m2/day. The designated dose was infused intravenously over 60 min daily for five consecutive days in each cycle, and the cycle was repeated every 4 weeks up to Cycle 12. However, dosing was suspended if blood cell counts insufficiently recovered (PC <30,000/mm3, ANC <1000/mm3). For cycles 2 to 4, the dose for each cycle was estimated using IDT according to PK-PD modeling and simulations based on blood cell count data accumulated until the time of dose estimation (just before administration). The maximum dose at which the occurrence of grade 4 hematologic toxicity (dose-limiting toxicity, PC <25,000/mm3 or ANC <500/mm3) could be avoided at the lower limit of the 50 % prediction interval (25th percentile), according to 500 simulations, was determined to be the dose for the next cycle. If the data from the previous cycle were not adequate for PK-PD modeling (e.g., no significant blood cell count decreases), the dose was determined based upon the hematologist's clinical decision [43]. Only the upper limit of the dose increment was pre-determined that the next cycle dose cannot exceed 150 % of the previous dose. The dose determined at Cycle 4 for each individual was maintained thereafter. The fixed initial dose for each cohort was also estimated using PK-PD modeling and simulations and was based on the observations from the previous cohorts (CDE). For cohort 2, all of the data obtained before the initiation of treatment for the first patient in cohort 2 were used for the initial dose estimation; however, only Cycle 1 data from the previous cohorts were used for cohort 3, 4, and 5. A new cohort was not initiated before completion of the first cycle in the last patient of the previous cohort. A schematic diagram of the overall study design is presented in Fig. 3. Overall schema of the study design. Individual dose titration was performed for the next cycle based on the observations from the previous cycle (solid straight arrows). Cohort dose estimation was performed to determine initial doses (broken line arrows): (i) for cohort 2, using all data obtained from cohort 1 until the initiation of cohort 2; (ii) for cohorts 3–5, using only Cycle 1 data of previous subjects. The dose of Cycle 4 was maintained until the completion of decitabine treatment (Cycle 12) (dotted lines) PK and PD samplings To determine plasma concentration measurements, seven whole-blood samples (10 mL each) were collected using EDTA tubes before dosing and then at 20, 40, 60, 90, 120, and 180 min after initiation of the first dose infusion of Cycle 1. 
The samples were immediately cooled in an ice bath and then centrifuged (3000 rpm, 4 °C, for 10 min) within 1 h from the last sampling time. After centrifugation, 4 mL of plasma from each sample was aliquoted into four microtubes (1 mL each), and 10 μL of 10 mg/mL tetrahydrouridine (THU) solution was added to each microtube. Microtubes were stored at −70 °C until plasma concentration assays. As PD (toxicity) markers, PCs and ANCs were monitored at scheduled follow-up visits (weekly until Cycle 4 and biweekly thereafter). The procedures for obtaining PCs and ANCs followed the routine clinical practices for automated complete blood cell counts at Seoul St. Mary's Hospital. Plasma concentration measurements Plasma samples were analyzed using liquid chromatography coupled with tandem mass spectrometry (API 4000, ABSciex, Canada) based upon a previously reported method [34]. The lower limit of quantification (LLOQ) was 0.5 ng/mL. The coefficients of correlation (r 2) were greater than 0.9975 in the range of 0.5–100 ng/mL decitabine, as determined by weighted linear regression (1/concentration). The precision (% coefficient of variation) and mean intra- and inter-day accuracies were below 11.57 % and 95.55–102 %, respectively. PK-PD modeling and simulation A mixed-effect analysis was performed using NONMEM (ver. 7.2, Icon Development Solution, Ellicott City, MD, USA). During the early phase of this study (e.g., IDT for cohort 1 and CDE for cohort 2), during which sufficient PD data to build a robust model were unavailable, we adopted the PD model proposed by Wallin et al. (2009) [33]. This model was used in conjunction with the one-compartment, first-order elimination PK model to build the initial PK-PD model. Therefore, only the values of the PK-PD parameters for each individual were estimated at this step. Then, as data accumulated, we performed additional modeling to find a better PK-PD model structure that optimally fits the data. Multi-compartment PK models, in addition to PD structures such as baseline cell count increase, were tested in the modeling process. Random effects were also taken into consideration. The structure to describe the residual error, which refers to the deviation of each observation from the value predicted by the PK-PD model, was initially applied to both IDT and CDE procedures as follows: $$ {\mathrm{DV}}_{ij}={\mathrm{IPRED}}_{ij}\cdot \left(1+{\varepsilon}_{\mathrm{prop},ij}\right)+{\varepsilon}_{\mathrm{add},ij} $$ where DV ij is the jth measured concentration or blood cell count in the ith individual, IPRED ij is the model-predicted value for the corresponding observation (DV ij ), and ε prop,ij and ε add,ij are the residual variabilities with means of 0 and variances of σ prop 2 and σ add 2, respectively. For the CDE step, BSV (η i ) of each PK and PD parameter was tested as follows: $$ {P}_{ij}={TVP}_j\cdot \exp \left({\eta}_i\right) $$ where P ij is the jth model parameter in the ith individual and TVP j is the typical value of the jth model parameter. The BSV for each parameter was assumed to follow a normal distribution, with a mean of 0 and differing values of variance (described using the symbol ω i 2). The first-order conditional estimation with interaction option (FOCE-I) method was used whenever applicable. Model adequacies were assessed based upon goodness-of-fit plots, likelihood ratio tests (LRT), and model stability measures (e.g., successful convergence, matrix singularity, and significant digits). 
Cutoff criteria incorporated a p value of 0.05 (e.g., 3.84 for one parameter addition, 5.99 for two) for LRT to determine statistically significant improvements in the model. Covariate analysis was performed for potential covariates, including demographic variables (sex, age, baseline body weight, and surface area) and clinical variables (mainly results from laboratory tests). After covariate screening via visual correlation check-ups and generalized additive modeling (GAM) procedures, the variables selected from the screening were tested as fixed effects for a certain PK-PD parameter, using LRT and decreases in BSV for the corresponding parameter. This research was supported by the Janssen Pharmaceutical Companies of Johnson & Johnson. An immediate family member of the author Yoo-Jin Kim has been employed, has had a leadership role, and has owned stock of the Janssen Pharmaceutical Companies of Johnson & Johnson. SH, Y-JK, JL, S-EL, C-KM, and D-SY wrote the manuscript. SH, Y-JK, J-HY, S-AY, S-EL, K-SE, H-JK, C-KM, SL, and D-SY designed the research. SH, Y-JK, JL, SJ, TH, G-JP, S-HS, S-EL, and D-SY performed the research. SH, Y-JK, JL, SJ, TH, and D-SY analyzed data. All authors read and approved the final manuscript. Department of Pharmacology, College of Medicine, The Catholic University of Korea, 222 Banpo-Daero, Seochogu, Seoul, Republic of Korea PIPET (Pharmacometrics Institute for Practical Education and Training), 222 Banpo-Daero, Seochogu, Seoul, Republic of Korea Catholic Blood and Marrow Transplantation Center, Seoul St. Mary's Hospital, The Catholic University of Korea, 222 Banpo-Daero, Seochogu, Seoul, Republic of Korea Esteller M. Epigenetics in cancer. N Engl J Med. 2008;358:1148–59.View ArticlePubMedGoogle Scholar Issa JP, Baylin SB, Herman JG. DNA methylation changes in hematologic malignancies: biologic and clinical implications. Leukemia. 1997;11:S7–11.PubMedGoogle Scholar van der Helm LH, Scheepers ER, Veeger NJ, Daenen SM, Mulder AB, van den Berg E, et al. Azacitidine might be beneficial in a subgroup of older AML patients compared to intensive chemotherapy: a single centre retrospective study of 227 consecutive patients. J Hematol Oncol. 2013;6:29.PubMed CentralView ArticlePubMedGoogle Scholar Pleyer L, Stauder R, Burgstaller S, Schreder M, Tinchon C, Pfeilstocker M, et al. Azacitidine in patients with WHO-defined AML - results of 155 patients from the Austrian Azacitidine Registry of the AGMT-Study Group. J Hematol Oncol. 2013;6:32.PubMed CentralView ArticlePubMedGoogle Scholar Bachegowda L, Gligich O, Mantzaris I, Schinke C, Wyville D, Carrillo T, et al. Signal transduction inhibitors in treatment of myelodysplastic syndromes. J Hematol Oncol. 2013;6:50.PubMed CentralView ArticlePubMedGoogle Scholar Hájková H, Fritz MH, Haškovec C, Schwarz J, Šálek C, Marková J, et al. CBFB-MYH11 hypomethylation signature and PBX3 differential methylation revealed by targeted bisulfite sequencing in patients with acute myeloid leukemia. J Hematol Oncol. 2014;7:66.PubMed CentralView ArticlePubMedGoogle Scholar Jabbour E, Issa JP, Garcia-Manero G, Kantarjian H. Evolution of decitabine development: accomplishments, ongoing investigations, and future strategies. Cancer. 2008;112:2341–51.View ArticlePubMedGoogle Scholar Santini V, Kantarjian HM, Issa JP. Changes in DNA methylation in neoplasia: pathophysiology and therapeutic implications. Ann Intern Med. 2001;134:573–86.View ArticlePubMedGoogle Scholar deVos D, van Overveld W. 
Decitabine: a historical review of the development of an epigenetic drug. Ann Hematol. 2001;84:S3–8.View ArticleGoogle Scholar Lübbert M. DNA methylation inhibitors in the treatment of leukemias, myelodysplastic syndromes and hemoglobinopathies: clinical results and possible mechanisms of action. Curr Top Microbiol Immunol. 2000;249:135–64.PubMedGoogle Scholar Estey EH. Epigenetics in clinical practice: the examples of azacitidine and decitabine in myelodysplasia and acute myeloid leukemia. Leukemia. 2013;27:1803–12.View ArticlePubMedGoogle Scholar Kantarjian HM, Thomas XG, Dmoszynska A, Wierzbowska A, Mazur G, Mayer J, et al. Multicenter, randomized, open-label, phase III trial of decitabine versus patient choice, with physician advice, of either supportive care or low-dose cytarabine for the treatment of older patients with newly diagnosed acute myeloid leukemia. J ClinOncol. 2012;30:2670–7.View ArticleGoogle Scholar Issa JP, Gharibyan V, Cortes J, Jelinek J, Morris G, Verstovsek S, et al. Phase II study of low-dose decitabine in patients with chronic myelogenous leukemia resistant to imatinibmesylate. J ClinOncol. 2005;23:3948–56.View ArticleGoogle Scholar Oki Y, Kantarjian HM, Gharibyan V, Jones D, O'brien S, Verstovsek S, et al. Phase II study of low-dose decitabine in combination with imatinibmesylate in patients with accelerated or myeloid blastic phase of chronic myelogenous leukemia. Cancer. 2007;109:899–906.View ArticlePubMedGoogle Scholar Saunthararajah Y, Molokie R, Saraf S, Sidhwani S, Gowhari M, Vara S, et al. Clinical effectiveness of decitabine in severe sickle cell disease. Br J Haematol. 2008;141:126–9.View ArticlePubMedGoogle Scholar Kantarjian H, Issa JP, Rosenfeld CS, Bennett JM, Albitar M, DiPersio J, et al. Decitabine improves patient outcomes in myelodysplastic syndromes: results of a phase III randomized study. Cancer. 2006;106:1794–803.View ArticlePubMedGoogle Scholar Santos FP, Kantarjian H, Garcia-Manero G, Issa JP, Ravandi F. Decitabine in the treatment of myelodysplastic syndromes. Expert Rev Anticancer Ther. 2010;10:9–22.View ArticlePubMedGoogle Scholar Wijermans PW, Lübbert M, Verhoef G, Klimek V, Bosly A. An epigenetic approach to the treatment of advanced MDS; the experience with the DNA demethylating agent 5-aza-2'-deoxycytidine (decitabine) in 177 patients. Ann Hematol. 2005;84:S9–17.View ArticleGoogle Scholar Lübbert M, Suciu S, Baila L, Rüter BH, Platzbecker U, Giagounidis A, et al. Low-dose decitabine versus best supportive care in elderly patients with intermediate- or high-risk myelodysplastic syndrome (MDS) ineligible for intensive chemotherapy: final results of the randomized phase III study of the European Organisation for Research and Treatment of Cancer Leukemia Group and the German MDS Study Group. J Clin Oncol. 2011;29:1987–96.View ArticlePubMedGoogle Scholar Kantarjian HM, O'Brien S, Huang X, Garcia-Manero G, Ravandi F, Cortes J, et al. Survival advantage with decitabine versus intensive chemotherapy in patients with higher risk myelodysplastic syndrome: comparison with historical experience. Cancer. 2007;109:1133–7.PubMed CentralView ArticlePubMedGoogle Scholar Steensma DP, Baer MR, Slack JL, Buckstein R, Godley LA, Garcia-Manero G, et al. Multicenter study of decitabine administered daily for 5 days every 4 weeks to adults with myelodysplastic syndromes: the alternative dosing for outpatient treatment (ADOPT) trial. J Clin Oncol. 2009;27:3842–8.View ArticlePubMedGoogle Scholar Joeckel TE, Lübbert M. 
Clinical results with the DNA hypomethylating agent 5-aza-2'-deoxycytidine (decitabine) in patients with myelodysplastic syndromes: an update. Semin Hematol. 2012;49:330–41.View ArticlePubMedGoogle Scholar Goodyear O, Agathanggelou A, Novitzky-Basso I, Siddique S, McSkeane T, Ryan G, et al. Induction of a CD8+ T-cell response to the MAGE cancer testis antigen by combined treatment with azacitidine and sodium valproate in patients with acute myeloid leukemia and myelodysplasia. Blood. 2010;116:1908–18.View ArticlePubMedGoogle Scholar Choi J, Ritchey J, Prior JL, Holt M, Shannon WD, Deych E, et al. In vivo administration of hypomethylating agents mitigate graft-versus-host disease without sacrificing graft-versus-leukemia. Blood. 2010;116:129–39.PubMed CentralView ArticlePubMedGoogle Scholar Schroeder T, Fröbel J, Cadeddu RP, Czibere A, Dienst A, Platzbecker U, et al. Salvage therapy with azacitidine increases regulatory T cells in peripheral blood of patients with AML or MDS and early relapse after allogeneic blood stem cell transplantation. Leukemia. 2013;27:1910–3.View ArticlePubMedGoogle Scholar Jabbour E, Giralt S, Kantarjian H, Garcia-Manero G, Jagasia M, Kebriaei P, et al. Low‐dose azacitidine after allogeneic stem cell transplantation for acute leukemia. Cancer. 2009;115:1899–905.PubMed CentralView ArticlePubMedGoogle Scholar de Lima M, Giralt S, Thall PF, de Padua SL, Jones RB, Komanduri K, et al. Maintenance therapy with low-dose azacitidine after allogeneic hematopoietic stem cell transplantation for recurrent acute myelogenous leukemia or myelodysplastic syndrome: a dose and schedule finding study. Cancer. 2010;116:5420–31.View ArticlePubMedGoogle Scholar Schroeder T, Czibere A, Platzbecker U, Bug G, Uharek L, Luft T, et al. Azacitidine and donor lymphocyte infusions as first salvage therapy for relapse of AML or MDS after allogeneic stem cell transplantation. Leukemia. 2013;27:1229–35.View ArticlePubMedGoogle Scholar Platzbecker U, Wermke M, Radke J, Oelschlaegel U, Seltmann F, Kiani A, et al. Azacitidine for treatment of imminent relapse in MDS or AML patients after allogeneic HSCT: results of the RELAZA trial. Leukemia. 2012;26:381–9.PubMed CentralView ArticlePubMedGoogle Scholar Storer BE. Design and analysis of phase I clinical trials. Biometrics. 1989;45:925–37.View ArticlePubMedGoogle Scholar Piantadosi S, Fisher JD, Grossman S. Practical implementation of a modified continual reassessment method for dose-finding trials. Cancer ChemotherPharmacol. 1998;41:429–36.Google Scholar Wallin JE, Friberg LE, Karlsson MO. Model-based neutrophil-guided dose adaptation in chemotherapy: evaluation of predicted outcome with different types and amounts of information. Basic Clin Pharmacol Toxicol. 2010;106:234–42.View ArticlePubMedGoogle Scholar Wallin JE, Friberg LE, Karlsson MO. A tool for neutrophil guided dose adaptation in chemotherapy. Comput Methods Programs Biomed. 2009;93:283–91.View ArticlePubMedGoogle Scholar Liu Z, Marcucci G, Byrd JC, Grever M, Xiao J, Chan KK. Characterization of decomposition products and preclinical and low dose clinical pharmacokinetics of decitabine (5-aza-2'-deoxycytidine) by a new liquid chromatography/tandem mass spectrometry quantification method. Rapid Commun Mass Spectrom. 2006;20:1117–26.View ArticlePubMedGoogle Scholar Cashen AF, Shah AK, Todt L, Fisher N, DiPersio J. Pharmacokinetics of decitabine administered as a 3-h infusion to patients with acute myeloid leukemia (AML) or myelodysplastic syndrome (MDS). Cancer Chemother Pharmacol. 
2008;61:759–66.View ArticlePubMedGoogle Scholar Karahoca M, Momparler RL. Pharmacokinetic and pharmacodynamic analysis of 5-aza-2'-deoxycytidine (decitabine) in the design of its dose-schedule for cancer therapy. Clin Epigenetics. 2013;5:3–18.PubMed CentralView ArticlePubMedGoogle Scholar Milhem M, Mahmud N, Lavelle D, Araki H, DeSimone J, Saunthararajah Y, et al. Modification of hematopoietic stem cell fate by 5aza 2'deoxycytidine and trichostatin A. Blood. 2004;103:4102–10.View ArticlePubMedGoogle Scholar Young JC, Wu S, Hansteen G, Du C, Sambucetti L, Remiszewski S, et al. Inhibitors of histone deacetylases promote hematopoietic stem cell self-renewal. Cytotherapy. 2004;6:328–36.View ArticlePubMedGoogle Scholar Chung YS, Kim HJ, Kim TM, Hong SH, Kwon KR, An S, et al. Undifferentiated hematopoietic cells are characterized by a genome-wide undermethylation dip around the transcription start site and a hierarchical epigenetic plasticity. Blood. 2009;114:4968–78.View ArticlePubMedGoogle Scholar Suzuki M, Harashima A, Okochi A, Yamamoto M, Nakamura S, Motoda R, et al. 5-Azacytidine supports the long-term repopulating activity of cord blood CD34(+) cells. Am J Hematol. 2004;77:313–5.View ArticlePubMedGoogle Scholar Negrotto S, Ng KP, Jankowska AM, Bodo J, Gopalan B, Guinta K, et al. CpG methylation patterns and decitabine treatment response in acute myeloid leukemia cells and normal hematopoietic precursors. Leukemia. 2012;26:244–54.PubMed CentralView ArticlePubMedGoogle Scholar Greenberg P, Cox C, LeBeau MM, Fenaux P, Morel P, Sanz G, et al. International scoring system for evaluating prognosis in myelodysplastic syndromes. Blood. 1997;89:2079–88.PubMedGoogle Scholar Kantarjian H, Oki Y, Garcia-Manero G, Huang X, O'Brien S, Cortes J, et al. Results of a randomized of 3 schedules of low-dose decitabine in higher-risk myelodysplastic syndrome and chronic myelomonocytic leukemia. Blood. 2007;109:52–7.View ArticlePubMedGoogle Scholar
A tetrahedral snake, sometimes called a Steinhaus snake, is a collection of tetrahedra, linked face to face. Steinhaus showed in 1956 that the last tetrahedron in the snake can never be a translation of the first one. This is a consequence of the fact that the group generated by the four reflexions in the faces of a tetrahedron is the free product $C_2 \ast C_2 \ast C_2 \ast C_2$. For a proof of this, see Stan Wagon's book The Banach-Tarski paradox, starting at page 68. The tetrahedral snake we will look at here is a snake in the Big Picture which we need in order to determine the moonshine group $(3|3)$ corresponding to conjugacy class 3C of the Monster. The thread $(3|3)$ is the spine of the $(9|1)$-snake which involves the following lattices
\[
\xymatrix{& & 1 \frac{1}{3} \ar@[red]@{-}[dd] & & \\ & & & & \\ 1 \ar@[red]@{-}[rr] & & 3 \ar@[red]@{-}[rr] \ar@[red]@{-}[dd] & & 1 \frac{2}{3} \\ & & 9 & &} \]
It is best to look at the four extremal lattices as the vertices of a tetrahedron with the lattice $3$ corresponding to its point of gravity. The congruence subgroup $\Gamma_0(9)$ fixes each of these lattices, and the arithmetic group $\Gamma_0(3|3)$ is the conjugate of $\Gamma_0(1)$
\[
\Gamma_0(3|3) = \{ \begin{bmatrix} \frac{1}{3} & 0 \\ 0 & 1 \end{bmatrix}.\begin{bmatrix} a & b \\ c & d \end{bmatrix}.\begin{bmatrix} 3 & 0 \\ 0 & 1 \end{bmatrix} = \begin{bmatrix} a & \frac{b}{3} \\ 3c & d \end{bmatrix}~|~ad-bc=1 \} \]
We know that $\Gamma_0(3|3)$ normalizes the subgroup $\Gamma_0(9)$ and we need to find the moonshine group $(3|3)$ which should have index $3$ in $\Gamma_0(3|3)$ and contain $\Gamma_0(9)$. So, it is natural to consider the finite group $A=\Gamma_0(3|3)/\Gamma_0(9)$ which is generated by the co-sets of
\[
x = \begin{bmatrix} 1 & \frac{1}{3} \\ 0 & 1 \end{bmatrix} \qquad \text{and} \qquad y = \begin{bmatrix} 1 & 0 \\ 3 & 1 \end{bmatrix} \]
To determine this group we look at the action of it on the lattices in the $(9|1)$-snake. It will fix the central lattice $3$ but will move the other lattices. Recall that it is best to associate to the lattice $M.\frac{g}{h}$ the matrix
\[
\alpha_{M,\frac{g}{h}} = \begin{bmatrix} M & \frac{g}{h} \\ 0 & 1 \end{bmatrix} \]
and then the action is given by right-multiplication.
\[
\begin{bmatrix} 1 & 0 \\ 0 & 1 \end{bmatrix}.x = \begin{bmatrix} 1 & \frac{1}{3} \\ 0 & 1 \end{bmatrix}, \quad \begin{bmatrix} 1 & \frac{1}{3} \\ 0 & 1 \end{bmatrix}.x = \begin{bmatrix} 1 & \frac{2}{3} \\ 0 & 1 \end{bmatrix}, \quad \begin{bmatrix} 1 & \frac{2}{3} \\ 0 & 1 \end{bmatrix}.x=\begin{bmatrix} 1 & 0 \\ 0 & 1 \end{bmatrix} \]
That is, $x$ corresponds to a $3$-cycle $1 \rightarrow 1 \frac{1}{3} \rightarrow 1 \frac{2}{3} \rightarrow 1$ and fixes the lattice $9$ (so it is a rotation around the axis through the vertex $9$). To compute the action of $y$ it is best to use an alternative description of the lattice, replacing the roles of the base-vectors $\vec{e}_1$ and $\vec{e}_2$. These lattices are projectively equivalent
\[
\mathbb{Z} (M \vec{e}_1 + \frac{g}{h} \vec{e}_2) \oplus \mathbb{Z} \vec{e}_2 \quad \text{and} \quad \mathbb{Z} \vec{e}_1 \oplus \mathbb{Z} (\frac{g'}{h} \vec{e}_1 + \frac{1}{h^2M} \vec{e}_2) \]
where $g.g' \equiv~1~(mod~h)$.
So, we have equivalent descriptions of the lattices
\[
M,\frac{g}{h} = (\frac{g'}{h},\frac{1}{h^2M}) \quad \text{and} \quad M,0 = (0,\frac{1}{M}) \]
and we associate to the lattice in the second normal form the matrix
\[
\beta_{M,\frac{g}{h}} = \begin{bmatrix} 1 & 0 \\ \frac{g'}{h} & \frac{1}{h^2M} \end{bmatrix} \]
and then the action is again given by right-multiplication. In the tetrahedral example we have
\[
1 = (0,\frac{1}{3}), \quad 1\frac{1}{3}=(\frac{1}{3},\frac{1}{9}), \quad 1\frac{2}{3}=(\frac{2}{3},\frac{1}{9}), \quad 9 = (0,\frac{1}{9}) \]
\[
\begin{bmatrix} 1 & 0 \\ \frac{1}{3} & \frac{1}{9} \end{bmatrix}.y = \begin{bmatrix} 1 & 0 \\ \frac{2}{3} & \frac{1}{9} \end{bmatrix},\quad \begin{bmatrix} 1 & 0 \\ \frac{2}{3} & \frac{1}{9} \end{bmatrix}.y = \begin{bmatrix} 1 & 0 \\ 0 & \frac{1}{9} \end{bmatrix}, \quad \begin{bmatrix} 1 & 0 \\ 0 & \frac{1}{9} \end{bmatrix}.y = \begin{bmatrix} 1 & 0 \\ \frac{1}{3} & \frac{1}{9} \end{bmatrix} \]
That is, $y$ corresponds to the $3$-cycle $9 \rightarrow 1 \frac{1}{3} \rightarrow 1 \frac{2}{3} \rightarrow 9$ and fixes the lattice $1$, so it is a rotation around the axis through $1$. Clearly, these two rotations generate the full rotation-symmetry group of the tetrahedron
\[
\Gamma_0(3|3)/\Gamma_0(9) \simeq A_4 \]
which has a unique subgroup of index $3$, generated by the $180^\circ$ rotations around the axes through the midpoints of opposite edges, that is, by $x.y$ and $y.x$. The moonshine group $(3|3)$ is therefore the subgroup generated by
\[
(3|3) = \langle \Gamma_0(9),\begin{bmatrix} 2 & \frac{1}{3} \\ 3 & 1 \end{bmatrix},\begin{bmatrix} 1 & \frac{1}{3} \\ 3 & 2 \end{bmatrix} \rangle \]
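As a quick sanity check on the matrix computations above, the following SymPy sketch verifies that $x$ and $y$ have order $3$ modulo $\Gamma_0(9)$ while $x.y$ and $y.x$ have order $2$, consistent with the quotient being $A_4$ and with the index-$3$ subgroup being generated by $x.y$ and $y.x$. The membership test is only the naive one (integral entries, determinant $1$, lower-left entry divisible by $9$).

```python
from sympy import Matrix, Rational

x = Matrix([[1, Rational(1, 3)], [0, 1]])
y = Matrix([[1, 0], [3, 1]])

def in_gamma0_9(m):
    """Naive membership test for Gamma_0(9): integral entries, det 1,
    lower-left entry divisible by 9."""
    a, b, c, d = m
    return all(e == int(e) for e in (a, b, c, d)) and m.det() == 1 and c % 9 == 0

print(in_gamma0_9(x**3), in_gamma0_9(y**3))          # True True  -> x, y of order 3
print(in_gamma0_9((x*y)**2), in_gamma0_9((y*x)**2))  # True True  -> x.y, y.x of order 2
print(x*y, y*x)   # the two extra generators of (3|3): [[2,1/3],[3,1]] and [[1,1/3],[3,2]]
```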
Synthesis, surface modification, and applications of magnetic iron oxide nanoparticles
Wenhui Ling, Mingyu Wang, Chunxia Xiong, Dengfeng Xie, Qiyu Chen, Xinyue Chu, Xiaoyan Qiu, Yuemin Li, Xiong Xiao
Journal: Journal of Materials Research / Volume 34 / Issue 11 / 14 June 2019
Published online by Cambridge University Press: 30 April 2019, pp. 1828-1844
Magnetic iron oxide nanoparticles (MIONPs) are particularly attractive in biosensors, antibacterial activity, targeted drug delivery, cell separation, magnetic resonance imaging, tumor magnetic hyperthermia, and so on because of their particular properties including superparamagnetic behavior, low toxicity, biocompatibility, etc. Although many methods had been developed to produce MIONPs, some challenges such as severe agglomeration, serious oxidation, and irregular size are still faced in the synthesis of MIONPs. Thus, various strategies had been developed for the surface modification of MIONPs to improve the characteristics of them and obtain multifunctional MIONPs, which will widen the applicational scopes of them. Therefore, the processes, mechanisms, advances, advantages, and disadvantages of six main approaches for the synthesis of MIONPs; surface modification of MIONPs with inorganic materials, organic molecules, and polymer molecules; applications of MIONPs or modified MIONPs; the technical challenges of synthesizing MIONPs; and their limitations in biomedical applications were described in this review to provide the theoretical and technological guidance for their future applications.

Monocular Vision-based Sense and Avoid of UAV Using Nonlinear Model Predictive Control
Yizhai Zhang, Wenhui Wang, Panfeng Huang, Zainan Jiang
Journal: Robotica, First View
Published online by Cambridge University Press: 06 March 2019, pp. 1-13
The potential use of onboard vision sensors (e.g., cameras) has long been recognized for the Sense and Avoid (SAA) of unmanned aerial vehicles (UAVs), especially for micro UAVs with limited payload capacity. However, vision-based SAA for UAVs is extremely challenging because vision sensors usually have limitations on accurate distance information measuring. In this paper, we propose a monocular vision-based UAV SAA approach. Within the approach, the host UAV can accurately and efficiently avoid a noncooperative intruder only through angle measurements and perform maneuvers for optimal tradeoff among target motion estimation, intruder avoidance, and trajectory tracking. We realize this feature by explicitly integrating a target tracking filter into a nonlinear model predictive controller. The effectiveness of the proposed approach is verified through extensive simulations.

A NOTE ON THE ERDŐS–GRAHAM THEOREM
WENHUI WANG, MIN TANG
Journal: Bulletin of the Australian Mathematical Society / Volume 97 / Issue 3 / June 2018
Let ${\mathcal{A}}=\{a_{1}<a_{2}<\cdots \,\}$ be a set of nonnegative integers. Put $D({\mathcal{A}})=\gcd \{a_{k+1}-a_{k}:k=1,2,\ldots \}$. The set ${\mathcal{A}}$ is an asymptotic basis if there exists $h$ such that every sufficiently large integer is a sum of at most $h$ (not necessarily distinct) elements of ${\mathcal{A}}$. We prove that if the difference of consecutive integers of ${\mathcal{A}}$ is bounded, then ${\mathcal{A}}$ is an asymptotic basis if and only if there exists an integer $a\in {\mathcal{A}}$ such that $(a,D({\mathcal{A}}))=1$.
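The gcd condition in the last abstract is easy to experiment with. Here is a small Python sketch (with an example set chosen for illustration, not taken from the paper) that computes $D({\mathcal{A}})$ for a finite chunk of ${\mathcal{A}}$ and checks the coprimality criterion:

```python
from math import gcd
from functools import reduce

def D(A):
    """gcd of the gaps between consecutive elements of the sorted set A."""
    A = sorted(A)
    return reduce(gcd, (b - a for a, b in zip(A, A[1:])))

def erdos_graham_criterion(A):
    """For a set with bounded gaps: asymptotic basis iff some a in A is coprime to D(A)."""
    d = D(A)
    return any(gcd(a, d) == 1 for a in A)

A = [3, 7, 11, 15, 19, 23]                 # gaps all equal 4, so D(A) = 4
print(D(A), erdos_graham_criterion(A))     # 4 True, since gcd(3, 4) = 1
```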
The upper Tremadocian (Lower Ordovician) graptolite Ancoragraptus from Yiyang, South China
Wang Wenhui, Feng Hongzhen, Li Lixia, Chen Wenjian
Journal: Journal of Paleontology / Volume 87 / Issue 6 / November 2013
Discovery of well-preserved specimens from the Nanba section in Yiyang, Hunan Province, in combination with a literature review, enabled us to re-evaluate and revise the graptolite genus Ancoragraptus. This is a genus of biradiate, multiramous anisograptids with horizontal to reclined rhabdosomes and free lower part of the metasicula and slightly isolated autothecal apertures. According to the revised definition, two species are included in Ancoragraptus, i.e., Ancoragraptus bulmani (Spjeldnæs, 1963) and Ancoragraptus psigraptoides Cho, Kim, and Jin, 2009. It is the first time that A. bulmani has been reported from China. The occurrences of Ancoragraptus reported worldwide are reviewed in the present study and found to be restricted to the lower upper Tremadocian. The restriction of Ancoragraptus in stratigraphical distribution makes it a taxon with a high potential for precise biostratigraphical correlation at both regional and global scale.

Reversible transformation between α-oxo acids and α-amino acids on ZnS particles: a photochemical model for tuning the prebiotic redox homoeostasis
Wei Wang, Xiaoyang Liu, Yanqiang Yang, Wenhui Su
Journal: International Journal of Astrobiology / Volume 12 / Issue 1 / January 2013
Published online by Cambridge University Press: 29 October 2012, pp. 69-77
How prebiotic metabolic pathways could have formed is an essential question for the origins of life on early Earth. From the abiogenetic point of view, the emergence of primordial metabolism may be postulated as a continuum from Earth's geochemical processes to chemoautotrophic biochemical procedures on mineral surfaces. In the present study, we examined in detail the reversible amination of α-ketoglutarate on UV-irradiated ZnS particles under variable reaction conditions such as pH, temperature, hole scavenger species and concentrations, and different amino acids. It was observed that the reductive amination of α-ketoglutarate and the oxidative amination of glutamate were both effectively performed on ZnS surfaces in the presence and absence of a hole scavenger, respectively. Accordingly, a photocatalytic mechanism was proposed. The reversible photochemical reaction was more efficient under basic conditions but independent of temperature in the range of 30–60 °C. SO32− was more effective than S2− as the hole scavenger. Finally, we extended the glutamate dehydrogenase-like chemistry to a set of other α-amino acids and their corresponding α-oxo acids and found that hydrophobic amino acid side chains were more conducive to the reversible redox reactions. Since the experimental conditions are believed to have been prevalent in shallow water hydrothermal vent systems of early Earth, the results of this work not only suggest that the ZnS-assisted photochemical reaction can regulate the redox equilibrium between α-amino acids and α-oxo acids, but also provide a model of how prebiotic metabolic homoeostasis could have been developed and regulated. These findings can advance our understanding of the establishment of archaic non-enzymatic metabolic systems and the origins of autotrophy.
Finite-Temperature Molecular Dynamics Study for Atomic Structures of Grain Boundary in Transition Metals Fe and Ni Wang Chongyu, Duan Wenhui, Song Quanming Based on Gauss' principle of least constraint and Nosé-Hoover thermostat formulation, the numerical algorithms for molecular dynamics simulation are developed to investigate the properties of grain boundary in transition metals Fe and Ni at finite temperature. By the appropriate choice of heat bath parameter, a constant temperature version can be realized. A series of parameters are introduced to describe quantitatively the crystallographic characteristic and the distortion of structure unit. The results indicate the applicability of the calculation mode developed by us and reveal the feature of the atomic structure of grain boundary at finite temperature. Optical Interferometric Biosensor with PMMA as Functional Layer Wenhui Wang, Xiaodong Ma, Lisa-Jo Clarizia, Xingwei Wang, Melisenda McDonald Journal: MRS Online Proceedings Library Archive / Volume 1133 / 2008 Published online by Cambridge University Press: 01 February 2011, 1133-AA03-02 Silica-on-silicon label-free biosensor with PMMA (Poly(methyl methacrylate), as the functional layer was designed, fabricated, and tested. The sensor is based on Fabry Perot (FP) interferometry. Specific binding was tested with Human IgG and anti-Human IgG. Non-specific binding was tested with Human IgG and Mouse IgG. The testing results show that the sensor has a nearly six-fold greater response upon specific binding than upon non specific binding. Thermal and long term stability experiments show that the sensor is insensitive to the environment fluctuation. The fabrication process is simple without special surface treatment. In addition, this biosensor is inexpensive and easy to use.
Cross-cultural adaptation and validation of the Spanish version of the American Academy of Orthopaedic Surgeons-Foot and Ankle Module (AAOS-FAMsp) Manuel González-Sánchez1, Esther Velasco-Ramos2, Maria Ruiz-Muñoz3 & Antonio I. Cuesta-Vargas4,5 The current study performed a cross-cultural adaptation to Spanish and examined the internal and external validation of the AAOS-FAM questionnaire. A direct translation (English to Spanish) and a reverse translation (Spanish to English) were performed by two independent professional native translators. Cronbach's α coefficients were calculated to analyse the internal consistency of the measure. The factor structure and construct validity were analysed after extraction by maximum likelihood (EML); extraction was necessary if the following three requirements were met: accounting for ≥10 % of variance, Eigenvalue >1.0 and a scree plot inflexion point. The standard error of measurement and minimal detectable change 90 (MDC90) were calculated. Criterion validity was calculated by analysing the correlation between the American Academy of Orthopaedic Surgeons-Foot and Ankle Module (Spanish version) (AAOS-FAMsp) and Spanish versions of the questionnaires FFI and FHSQ. Regarding internal consistency, Cronbach's α was 0.877, and in the test-retest analysis, the ICC ranged between 0.899 and 0.942. Error measures were calculated by MDC90 and SEM, which showed values of 3.444 and 1.476 %, respectively. The analysis demonstrated a goodness of fit chi-squared value of 803.166 (p < 0.001). For criterion validity, the correlation value with FFIsp was r = 0.837 (p < 0.01), while the FHSQsp correlation values with different scales ranged from r = 0.206 (p < 0.01) (physical activity) to r = 0.665 (p < 0.01) (pain). The results indicate that AAOS-FAMsp has satisfactory psychometric properties, facilitating the inclusion of Spanish-speaking individuals into both research and clinical practice. In the last 20 years, patient reported outcome measures (PROM) have emerged as an important way to assess and monitor patients and currently are widely used in clinical practice and research [1, 2]. These instruments are inexpensive, easy to use and specific and reliable tools. They facilitate the determination of a patient's health and functional status and the interpretation of results for clinicians, researchers and patients regarding a patient's symptoms, capabilities and/or functioning [1, 3, 4]. Given the structure and function of the foot, any problematic condition in the foot may have a profound negative impact on a patient's quality of life and function [5]. With the intention of assessing the impact of foot problems in a patient, the American Academy of Orthopaedic Surgeons developed a specific module for the subjective assessment of changes in the foot, i.e. the Foot and Ankle Module (AAOS-FAM) [6]. This questionnaire has two scales, the Global Foot and Ankle Scale and the Foot Comfort Scale, comprised of 25 items in total, with a retest reliability between 0.79 and 0.99 [6]. Until now, the AAOS-FAM questionnaire has not been a translation or culturally adapted into Spanish, one of the five most widely spoken languages in the world [7, 8] and an official language of the United Nations [9]. 
A cultural adaptation and validation of the AAOS-FAM Spanish questionnaire was conducted in this study to facilitate the collection of clinical data from Spanish-speaking individuals and to help improve the standardisation of data collection in clinical research and treatment throughout the country. The aim of this study was to perform a cross-cultural adaptation to Spanish and to examine the internal and external validation of the AAOS-FAM questionnaire, with the intention of facilitating the inclusion of Spanish-speaking individuals into both research and clinical practice. This observational study was conducted with patients recruited from three public and private podiatric clinics in southern Spain. One hundred and ninety-three (193) patients (99 women and 94 men) with a mean age of 55.49 (±16.10) years participated in the present study (Table 1). The inclusion criteria for the participants were as follows: native Spanish speaker, aged 18 years old or older, having an altered foot that requires treatment and able to walk independently. Participants with a cognitive impairment of any aetiology that prevented them from completing the questionnaires independently were excluded from the present study. Table 1 Descriptive anthropometric data of the sample Translation and transcultural adaptation of AAOS-FAM to AAOS-FAMsp The process of translating the original AAOS-FAM to the American Academy of Orthopaedic Surgeons-Foot and Ankle Module (Spanish version) (AAOS-FAMsp) questionnaire was carried out in different phases which are summarised in Fig. 1. Direct translation (English to Spanish) and the reverse translation (Spanish to English) were performed by two independent professional native translators. With the aim of ensuring the conceptual equivalence of the terms used, a translation process was performed, as recommended by the literature [10–12]. Flowchart of the development of AAOS-FAMsp from the original version Between 1 February 2015 and 31 May 2015, all subjects included as participants completed the following questionnaires: AAOS-FAMsp, Foot Function Index (FFI) and Foot Health Status Questionnaire (FHSQ). In order to calculate the reliability of the AAOS-FAMsp, this questionnaire was completed a second time 4 days later. This period of time was used to ensure the condition of the participants had not changed between measurements [13]. The AAOS-FAM consists of 25 questions comprising two scales: the Global Foot and Ankle Scale and the Foot Comfort Scale. The first of the 20-item scales is used to test foot function, inflammation, pain and stiffness, which generated a single score between 0 and 100. The second scale, consisting of five questions, is used to assess comfort in terms of wearing shoes (with a yes or no for each type), which generated a scale ranging from 0 to 100 (0 = poor outcome and 100 = the best possible outcome) [6]. The two scales are combined to provide an index ranging from 0 to 100. We weighed each scale in the final score based on the number of items: Global Foot and Ankle Scale 80 % (20 items) and Foot Comfort Scale 20 % (five items) [6]. The original version of the FFI questionnaire consists of 23 questions [14, 15]. Each question is answered on a visual analogue scale, ranging from 0 to 9. If patients cannot respond to any question, they are instructed to leave it blank and it is not included in the final score of the questionnaire [14–16]. 
The final result is offered on a scale from 0 to 100, in which all the questions are summed and then divided by 207 (the highest possible score, i.e. 23 × 9) and multiplied by 100 and then rounded up, if necessary, to give an integer between 0 and 100 [16]. The cross-cultural adaptation of the Spanish FFI questionnaire was validated and published by Paez-Moguer et al. (2013) [16]. The FHSQ is an instrument designed to measure the quality of life related to the health of the feet [17–19]. The 19 questions evaluate four domains of foot health: pain, function, general health and footwear. Each is rated on a Likert (numerical) style 0–100 scale, where 0 is the worst health status and 100 indicates the best health status. This was validated by Bennett and Patterson to evaluate the effectiveness of surgical and conservative treatment of diseases involving the skin and nails, as well as neurological, musculoskeletal and orthopaedic disorders [17–19]. The Spanish cross-cultural adaptation of the FHSQ questionnaire was validated and published by Cuesta-Vargas et al. (2013) [17]. A descriptive analysis of the anthropometric variables and the characteristics of participants was conducted. The Kolmogorov-Smirnov test was used to analyse the distribution and normality of the sample. Cronbach's α coefficients were calculated to analyse the internal consistency of measures by classifying the values according to the following scale: Cronbach's α ≤0.40 (poor), 0.60 > Cronbach's α > 0.40 (moderate), 0.80 > Cronbach's α ≥ 0.60 (good) and Cronbach's α ≥0.80 (excellent) [20]. To analyse whether item performance was similar between men and women, a comparison of the variables between genders was conducted. All variables presented a parametric distribution, and for this reason, Student's t test was used to calculate the differences between groups. The factor structure and construct validity were analysed after extraction by maximum likelihood (EML); extraction was necessary if the following three requirements were met: accounting for >10 % of variance, eigenvalue >1.0 and a scree plot inflexion point. The formula \( \mathrm{SEM}=s\sqrt{1-r} \) was used to calculate the standard error of measurement (SEM), where s is the standard deviation (SD) of the test scores for both measurements (time 1 and 2) and r is the test-retest reliability coefficient, i.e. the intraclass correlation coefficient (ICC) between test and retest values. Following the analysis described by Stratford [21], the minimal detectable change 90 (MDC90) was used to measure the sensitivity or measurement error. The formula used for the calculation was: $$ \mathrm{MDC}90=\mathrm{SEM}\times \sqrt{2}\times 1.65. $$ Criterion validity was calculated by analysing the correlation between the AAOS-FAMsp and the Spanish versions of the FFI [16] and FHSQ [17] questionnaires. The Pearson correlation coefficient was interpreted according to the following scale: r ≤ 0.49 (poor), 0.50 ≤ r ≤ 0.74 (moderate) and r ≥ 0.75 (strong) [22]. A power calculation for the criterion validity analysis of the AAOS-FAMsp indicated a minimum number of 108 subjects, calculated for a 15 % attrition rate with p < 0.05 [12]. The statistical analyses were performed using the statistical analysis programme SPSS (v.17.0). The AAOS-FAM was translated into Spanish and culturally adapted to provide the new AAOS-FAMsp (available in Additional file 1).
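To make the two error measures above concrete, here is a small Python sketch of the SEM and MDC90 calculations. The toy scores and the ICC value of 0.92 are illustrative assumptions, not data from the study.

```python
import numpy as np

def sem(scores_t1, scores_t2, icc):
    """SEM = s * sqrt(1 - r), with s the SD of the pooled test-retest scores
    and r the test-retest ICC."""
    s = np.std(np.concatenate([scores_t1, scores_t2]), ddof=1)
    return s * np.sqrt(1.0 - icc)

def mdc90(sem_value):
    """MDC90 = 1.65 * sqrt(2) * SEM."""
    return 1.65 * np.sqrt(2.0) * sem_value

rng = np.random.default_rng(1)
t1 = rng.normal(46.0, 8.0, size=50)        # toy 0-100 questionnaire scores, time 1
t2 = t1 + rng.normal(0.0, 2.0, size=50)    # retest 4 days later
s = sem(t1, t2, icc=0.92)
print(round(s, 2), round(mdc90(s), 2))
```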
Table 1 shows the descriptive data of the sample, which included anthropometric data and the number of hours the participants stand during the week. The average value of the AAOS-FAMsp was 45.66 (±7.38); the mean values of the respective scales were 46.02 (±8.43) (Global Foot and Ankle Scale) and 44.20 (±8.25) (Foot Comfort Scale). No significant gender differences emerged when comparing the responses per item. For internal consistency, Cronbach's α was 0.877, and in the test-retest analysis, the ICC ranged between 0.899 and 0.942. Error measures were calculated by MDC90 and SEM, with values of 3.444 and 1.476 %, respectively.
Based on the values observed in Bartlett's test of sphericity (p < 0.001) and the Kaiser-Meyer-Olkin value (0.816), the correlation matrix of the AAOS-FAMsp was deemed adequate for EML. Figure 2 shows the scree plot, where a two-factor solution can be observed. Importantly, five factors had eigenvalues >1.0 and together explained 67.411 % of the total variance; however, the factors beyond the two-factor solution did not individually account for more than 10 % of the variance and therefore could not be extracted. In the analysis of the factor loadings, not all the items loaded in the same way on the extracted factors (Table 2). Particularly notable were questions 2, 5, 9, 10, 19, 21, 22 and 23, which did not load on either extracted factor, indicating that they could be removed from the questionnaire. The analysis demonstrated a goodness-of-fit chi-squared value of 803.166 (p < 0.001).
Fig. 2 Scree plot of the exploratory factor analysis: two-factor solution
Table 2 Load distribution of the different items on the factors identified following exploratory factor analysis
For criterion validity, the correlation value with the FFIsp was r = 0.837 (p < 0.01), while the correlations with the different FHSQsp subscales ranged from r = 0.206 (p < 0.01) (physical activity) to r = 0.665 (p < 0.01) (pain) (Table 3). Moreover, the highest correlations occurred between each of the subscales and the final score of the AAOS-FAMsp.
Table 3 Correlation matrix between AAOS-FAMsp and its subscales as well as FHSQ subscales and FFI
The process of translating and culturally adapting the AAOS-FAMsp ensures the conceptual equivalence of the terms used between the original questionnaire and the final version of the AAOS-FAMsp, facilitating its introduction and use among native speakers of the second most widely spoken language in the world. In addition, an analysis of the psychometric properties of the questionnaire, including the criterion validity, construct validity, internal consistency and reliability of the measurement, was performed, and the authors found optimal psychometric properties as well as high internal consistency and reliability, with a strong correlation with the FFIsp. These results indicate that it can be used for assessment and monitoring among Spanish speakers to facilitate obtaining clinical results of high quality for the evaluation of the foot-ankle joint. Therefore, the aim of this study was achieved. The process of adapting the Spanish AAOS-FAM was conducted following suggestions in the literature [11, 12] and the procedure developed in previous studies that adapted Spanish-specific questionnaires for different body parts such as the upper limbs [3], back [10], lower limbs [1] or ankle and foot [16, 17], using independent and native translators who ensured the equivalence of the terms used in the original questionnaire.
Cross-cultural adaptation of the AAOS-FAMsp allows clinicians to use this tool to assess the foot-ankle region. An exploratory factor analysis was performed with the results of the AAOS-FAMsp. After conducting the exploratory factor analysis, not all items loaded on the factor model, as indicated by the questionnaire score (questions: 1–18, 24, 25 (Global Foot and Ankle Scale) and 19–23 (Foot Comfort Scale)) [6]. In addition, there were some items that clearly did not load on any of the factors that had originally been allocated for the calculation of the two subscales of the AAOS-FAMsp (Global Foot and Ankle Scale and Foot Comfort Scale), as shown in Table 2. However, a confirmatory factor analysis was not performed, as our study did not meet the minimum sample size (10 subjects per item analysed) or the optimal sample size (20 subjects per item analysed) necessary to ensure reliable results [23]. The AAOS-FAMsp demonstrated excellent internal consistency; Cronbach's α was 0.877, and the test-retest (ICC) per item ranged between 0.899 and 0.942. Although both had excellent Cronbach's α, the value of AAOS-FAMsp (0.877) was slightly higher than the original version (0.83) [6]. However, the test-retest values were consistent between the two versions (0.899–0.942 for AAOS-FAMsp and 0.92 for the original AAOS-FAM [6]). The AAOS-FAMsp version used the Spanish versions of the FFI and FHSQ questionnaires for criterion validity, while the original version that performed the analysis used the Lower Limb Core questionnaire and SF-36 Physical Health as references. This was based on the classification made by Field [22]. The correlation with FFIsp was strong (r = 0.837), ranging from poor (r = 0.206 (physical activity)) to moderate (r = 0.665 (pain)) in the FHSQsp subscales (Table 3), whereas the original version presented a moderate correlation (r = 0.56) with the SF-36 and a strong correlation (r = 0.97) with the Lower Limb Core questionnaire [6]. This study was developed following recommendations in the literature regarding the number of subjects required to conduct a psychometric analysis of the questionnaire, where a minimum of five subjects per item under review is required. The AAOS-FAMsp consists of 25 questions, so a minimum number of 125 subjects would be required; this study included 193 participants [24]. However, this study had some limitations, as not all items loaded onto the two factors described within the original questionnaire (Global Foot and Ankle Scale and Shoe Comfort Scale), so future studies could develop a modified version of the AAOS-FAMsp according to the factors identified in this study. Moreover, no psychometric analyses of variables measured longitudinally, such as sensitivity to change and responsiveness, were presented. It is also important to consider not only the Spanish spoken in Spain, so it would be important for future studies to include Hispanic/Latino speaking participants to resolve any cultural differences with Spanish participants. Finally, the questionnaires were always provided in the same order, which may be a potential source of bias. The AAOS-FAMSp demonstrated high internal consistency, reliability and criterion validity. It is an instrument that can be introduced into the Spanish-speaking environment to be used by clinicians and researchers as a tool to assess and monitor their patients. The AAOS-FAM questionnaire was translated and cross-culturally adapted to Spanish. 
The psychometric properties of the AAOS-FAMsp were reported, indicating satisfactory results consistent with the original version. However, the factor structure was slightly different from that of the original AAOS-FAM questionnaire.
AAOS-FAM, American Academy of Orthopaedic Surgeons-Foot and Ankle Module; AAOS-FAMsp, American Academy of Orthopaedic Surgeons-Foot and Ankle Module (Spanish version); EML, extraction by maximum likelihood; FFI, Foot Function Index; FHSQ, Foot Health Status Questionnaire; ICC, intraclass correlation; MDC90, minimal detectable change 90; PROM, patient reported outcome measures; SD, standard deviation; SEM, standard error of measurement
Cuesta-Vargas AI, Gabel CP, Bennett P. Cross cultural adaptation and validation of a Spanish version of the Lower Limb Functional Index. Health Qual Life Outcomes. 2014;12:75. doi:10.1186/1477-7525-12-75. Garratt A. Patient reported outcome measures in trials. BMJ. 2009;338:a2597. doi:10.1136/bmj.a2597. Cuesta-Vargas AI, Gabel PC. Cross-cultural adaptation, reliability and validity of the Spanish version of the Upper Limb Functional Index. Health Qual Life Outcomes. 2013;11:126. doi:10.1186/1477-7525-11-126. Forget NJ, Higgins J. Comparison of generic patient-reported outcome measures used with upper extremity musculoskeletal disorders: linking process using the International Classification of Functioning, Disability, and Health (ICF). J Rehabil Med. 2014;46(4):327–34. doi:10.2340/16501977-1784. Kaoulla P, Frescos N, Menz HB. A survey of foot problems in community-dwelling older Greek Australians. J Foot Ankle Res. 2011;4(1):23. doi:10.1186/1757-1146-4-23. Johanson NA, Liang MH, Daltroy L, Rudicel S, Richmond J. American Academy of Orthopedic Surgeons lower limb outcomes assessment instruments. Reliability, validity, and sensitivity to change. J Bone Joint Surg Am. 2004;86-A(5):902–9. Instituto Cervantes. Resumen del informe 2013 «El español, una lengua viva». http://www.cervantes.es/sobre_instituto_cervantes/prensa/2013/noticias/diae-resumen-datos-2013.htm [Accessed 20 Jul 2015]. Ruiz Zambrana J. La situación actual de la lengua española en el mundo. Contribuciones a las Ciencias Sociales, 2009. Available at: www.eumed.net/rev/cccss/05/jrz.htm [Accessed 9 Mar 2015]. UN. New York; 2014. Available at: http://www.un.org/es/sections/about-un/official-languages/index.html [Accessed 20 Jul 2015]. Cuesta-Vargas AI, González-Sánchez M. Spanish version of the screening Örebro musculoskeletal pain questionnaire: a cross-cultural adaptation and validation. Health Qual Life Outcomes. 2014;12:157. doi:10.1186/s12955-014-0157-5. Mokkink LB, Terwee CB, Patrick DL, Alonso J, Stratford PW, Knol DL, Bouter LM, de Vet HC. The COSMIN study reached international consensus on taxonomy, terminology, and definitions of measurement properties for health-related patient-reported outcomes. J Clin Epidemiol. 2010;63(7):737–45. doi:10.1016/j.jclinepi.2010.02.006. Muñiz J, Elosua P, Hambleton RK. International test commission guidelines for test translation and adaptation: second edition. Psicothema. 2013;25:151–7. de Vet HC, Ostelo RW, Terwee CB, van der Roer N, Knol DL, Beckerman H, Boers M, Bouter LM. Minimally important change determined by a visual method integrating an anchor-based and a distribution-based approach. Qual Life Res. 2007;16:131–42. Budiman-Mak E, Conrad KJ, Mazza J, Stuck RM. A review of the foot function index and the foot function index—revised. J Foot Ankle Res. 2013;6(1):5. doi:10.1186/1757-1146-6-5.
Budiman-Mak E, Conrad K, Stuck R, Matters M. Theoretical model and Rasch analysis to develop a revised Foot Function Index. Foot Ankle Int. 2006;27(7):519–27. Paez-Moguer J, Budiman-Mak E, Cuesta-Vargas AI. Cross-cultural adaptation and validation of the Foot Function Index to Spanish. Foot Ankle Surg. 2014;20(1):34–9. doi:10.1016/j.fas.2013.09.005. Epub 2013 Nov 16. Cuesta-Vargas A, Bennett P, Jimenez-Cebrian AM, Labajos-Manzanares MT. The psychometric properties of the Spanish version of the Foot Health Status Questionnaire. Qual Life Res. 2013;22(7):1739–43. doi:10.1007/s11136-012-0287-3. Bennett PJ, Patterson C. The Foot Health Status Questionnaire (FHSQ): a new instrument for measuring outcomes of foot care. Australasian J Podiatr Med. 1998;32:55–9. Bennett PJ, Patterson C, Wearing S, Baglioni T. Development and validation of a questionnaire designed to measure foot-health status. J Am Podiatr Med Assoc. 1998;88:419–28. Shrout PE, Fleiss JL. Intraclass correlations: uses in assessing rater reliability. Psychol Bull. 1979;86:420–8. Stratford PW. Getting more from the literature: estimating the standard error of measurement from reliability studies. Physiother Can. 2004;56:27–30. Field A. Discovering statistics using SPSS. London: SAGE Publications Ltd; 2005. Costello AB, Osborne JW. Best practices in exploratory factor analysis: four recommendations for getting the most from your analysis. Pract Assess, Res Eval. 2005;10(7):1–9. Kass RA, Tinsley HEA. Factor analysis. J Leisure Res. 1979;11:120–38. The authors would like to thank all who took part in the intervention and enabled the study to take place. The raw database is available for all potentials readers. Please, find it as supplementary material. MG-S participated in the conception and design of the study and in the data collection, analysis and interpretation of the data and helped to draft the manuscript. EV-R participated in the data collection, analysis and interpretation of the data and drafted the manuscript. MR-M participated in the conception of the study, analysis and interpretation of the data and helped to draft the manuscript. AIC-V participated in the analysis and interpretation of the data, helped to draft the manuscript and supervised the development of the study. All authors read and approved the final manuscript. Manuel González-Sánchez PhD is a Professor at the University of Jaén and a physiotherapist. Esther Velasco Ramos MSc is a podiatrist. María Ruiz-Muñoz PhD is a Professor at the University of Málaga and a podiatrist. Antonio I. Cuesta-Vargas PhD is a Senior Lecturer at the University of Málaga, a Professor at the School of Clinical Sciences at Queensland University, Brisbane, Australia, and a physiotherapist. To be included in the study, participants provided signed informed consent. The study was conducted according to the Ethical Principles for Medical Research Involving Human Subjects, and the data were used according to the Spanish Organic Law of Protection of Personal Data 19/55. The present study was approved by the University of Málaga ethics committee. To ensure the independence of the collected data, the analysis of the data was performed by two blinded and independent investigators. Health Science Faculty, Department of Health Science, University of Jaén, Campus de las Lagunillas SN. 
Ed.B3 - Despacho 066, 23071, Jaén, Spain Manuel González-Sánchez Health Science Faculty, Universidad de Málaga, Málaga, Spain Esther Velasco-Ramos Departamento de Enfermería y Podología, Instituto de Investigación Biomédica de Málaga (IBIMA), Universidad de Málaga, Málaga, Spain Maria Ruiz-Muñoz Departamento de Psiquiatria y Fisioterapia, Instituto de Investigación Biomédica de Málaga (IBIMA), Universidad de Málaga, Málaga, Spain Antonio I. Cuesta-Vargas School of Clinical Sciences at Queensland University, Brisbane, Australia Correspondence to Manuel González-Sánchez. The AAOS-FAM was translated into Spanish and culturally adapted to provide the new AAOS-FAMsp. (PDF 123 kb) Raw data base containing descriptive results variables. (SAV 14 kb) González-Sánchez, M., Velasco-Ramos, E., Ruiz-Muñoz, M. et al. Cross-cultural adaptation and validation of the Spanish version of the American Academy of Orthopaedic Surgeons-Foot and Ankle Module (AAOS-FAMsp). J Orthop Surg Res 11, 74 (2016). https://doi.org/10.1186/s13018-016-0409-7
Mapping current and activity fluctuations in exclusion processes: consequences and open questions Matthieu Vanicat, Eric Bertin, Vivien Lecomte, Eric Ragoucy SciPost Phys. 10, 028 (2021) · published 5 February 2021 | Considering the large deviations of activity and current in the Asymmetric Simple Exclusion Process (ASEP), we show that there exists a non-trivial correspondence between the joint scaled cumulant generating functions of activity and current of two ASEPs with different parameters. This mapping is obtained by applying a similarity transform on the deformed Markov matrix of the source model in order to obtain the deformed Markov matrix of the target model. We first derive this correspondence for periodic boundary conditions, and show in the diffusive scaling limit (corresponding to the Weakly Asymmetric Simple Exclusion Processes, or WASEP) how the mapping is expressed in the language of Macroscopic Fluctuation Theory (MFT). As an interesting specific case, we map the large deviations of current in the ASEP to the large deviations of activity in the SSEP, thereby uncovering a regime of Kardar–Parisi–Zhang in the distribution of activity in the SSEP. At large activity, particle configurations exhibit hyperuniformity [Jack et al., PRL 114 060601 (2015)]. Using results from quantum spin chain theory, we characterize the hyperuniform regime by evaluating the small wavenumber asymptotic behavior of the structure factor at half-filling. Conversely, we formulate from the MFT results a conjecture for a correlation function in spin chains at any fixed total magnetization (in the thermodynamic limit). In addition, we generalize the mapping to the case of two open ASEPs with boundary reservoirs, and we apply it in the WASEP limit in the MFT formalism. This mapping also allows us to find a symmetry-breaking dynamical phase transition (DPT) in the WASEP conditioned by activity, from the prior knowledge of a DPT in the WASEP conditioned by the current. Newton series expansion of bosonic operator functions Jürgen König, Alfred Hucht SciPost Phys. 10, 007 (2021) · published 13 January 2021 | We show how series expansions of functions of bosonic number operators are naturally derived from finite-difference calculus. The scheme employs Newton series rather than Taylor series known from differential calculus, and also works in cases where the Taylor expansion fails. For a function of number operators, such an expansion is automatically normal ordered. Applied to the Holstein-Primakoff representation of spins, the scheme yields an exact series expansion with a finite number of terms and, in addition, allows for a systematic expansion of the spin operators that respects the spin commutation relations within a truncated part of the full Hilbert space. Furthermore, the Newton series expansion strongly facilitates the calculation of expectation values with respect to coherent states. As a third example, we show that factorial moments and factorial cumulants arising in the context of photon or electron counting are a natural consequence of Newton series expansions. Finally, we elucidate the connection between normal ordering, Taylor and Newton series by determining a corresponding integral transformation, which is related to the Mellin transform. Generalized Gibbs Ensemble and string-charge relations in nested Bethe Ansatz György Z. Fehér, Balázs Pozsgay SciPost Phys. 
8, 034 (2020) · published 3 March 2020 | The non-equilibrium steady states of integrable models are believed to be described by the Generalized Gibbs Ensemble (GGE), which involves all local and quasi-local conserved charges of the model. In this work we investigate integrable lattice models solvable by the nested Bethe Ansatz, with group symmetry $SU(N)$, $N\ge 3$. In these models the Bethe Ansatz involves various types of Bethe rapidities corresponding to the "nesting" procedure, describing the internal degrees of freedom for the excitations. We show that a complete set of charges for the GGE can be obtained from the known fusion hierarchy of transfer matrices. The resulting charges are quasi-local in a certain regime in rapidity space, and they completely fix the rapidity distributions of each string type from each nesting level. (1,0) gauge theories on the six-sphere Usman Naseer SciPost Phys. 6, 002 (2019) · published 8 January 2019 | We construct gauge theories with a vector-multiplet and hypermultiplets of $(1,0)$ supersymmetry on the six-sphere. The gauge coupling on the sphere depends on the polar angle. This has a natural explanation in terms of the tensor branch of $(1,0)$ theories on the six-sphere. For the vector-multiplet we give an off-shell formulation for all supersymmetries. For hypermultiplets we give an off-shell formulation for one supersymmetry. We show that the path integral for the vector-multiplet localizes to solutions of the Hermitian-Yang-Mills equation, which is a generalization of the (anti-)self duality condition to higher dimensions. For the hypermultiplet, the path integral localizes to configurations where the field strengths of two complex scalars are related by an almost complex structure. Large fluctuations of the KPZ equation in a half-space Alexandre Krajenbrink, Pierre Le Doussal SciPost Phys. 5, 032 (2018) · published 12 October 2018 | We investigate the short-time regime of the KPZ equation in $1+1$ dimensions and develop a unifying method to obtain the height distribution in this regime, valid whenever an exact solution exists in the form of a Fredholm Pfaffian or determinant. These include the droplet and stationary initial conditions in full space, previously obtained by a different method. The novel results concern the droplet initial condition in a half space for several Neumann boundary conditions: hard wall, symmetric, and critical. In all cases, the height probability distribution takes the large deviation form $P(H,t) \sim \exp( - \Phi(H)/\sqrt{t})$ for small time. We obtain the rate function $\Phi(H)$ analytically for the above cases. It has a Gaussian form in the center with asymmetric tails, $|H|^{5/2}$ on the negative side, and $H^{3/2}$ on the positive side. The amplitude of the left tail for the half-space is found to be half the one of the full space. As in the full space case, we find that these left tails remain valid at all times. In addition, we present here (i) a new Fredholm Pfaffian formula for the solution of the hard wall boundary condition and (ii) two Fredholm determinant representations for the solutions of the hard wall and the symmetric boundary respectively. The Inhomogeneous Gaussian Free Field, with application to ground state correlations of trapped 1d Bose gases Yannis Brun, Jérôme Dubail SciPost Phys. 
4, 037 (2018) · published 25 June 2018 | Motivated by the calculation of correlation functions in inhomogeneous one-dimensional (1d) quantum systems, the 2d Inhomogeneous Gaussian Free Field (IGFF) is studied and solved. The IGFF is defined in a domain $\Omega \subset \mathbb{R}^2$ equipped with a conformal class of metrics $[{\rm g}]$ and with a real positive coupling constant $K: \Omega \rightarrow \mathbb{R}_{>0}$ by the action $\mathcal{S}[h] = \frac{1}{8\pi } \int_\Omega \frac{\sqrt{{\rm g}} d^2 {\rm x}}{K({\rm x})} \, {\rm g}^{i j} (\partial_i h)(\partial_j h)$. All correlations functions of the IGFF are expressible in terms of the Green's functions of generalized Poisson operators that are familiar from 2d electrostatics in media with spatially varying dielectric constants. This formalism is then applied to the study of ground state correlations of the Lieb-Liniger gas trapped in an external potential $V(x)$. Relations with previous works on inhomogeneous Luttinger liquids are discussed. The main innovation here is in the identification of local observables $\hat{O} (x)$ in the microscopic model with their field theory counterparts $\partial_x h, e^{i h(x)}, e^{-i h(x)}$, etc., which involve non-universal coefficients that themselves depend on position --- a fact that, to the best of our knowledge, was overlooked in previous works on correlation functions of inhomogeneous Luttinger liquids ---, and that can be calculated thanks to Bethe Ansatz form factors formulae available for the homogeneous Lieb-Liniger model. Combining those position-dependent coefficients with the correlation functions of the IGFF, ground state correlation functions of the trapped gas are obtained. Numerical checks from DMRG are provided for density-density correlations and for the one-particle density matrix, showing excellent agreement.
The formation and characteristics of E-region thin ionization layers at high latitudes
Bristow_W_1992.pdf
Bristow, William Albert
Watkins, Brenton
Thin layers of ionization are often observed in the E region ionosphere. The layers are 1 to 3 km in thickness and are of high density, often as high as $5\times10^{5}$ cm$^{-3}$. The density of the layers is a large enhancement over the background density. The layers are primarily composed of metallic ions, which have long lifetimes and can accumulate to high densities. Thin ionization layers at mid-latitudes are formed by the action of neutral wind shears redistributing the background ionization. At high latitudes wind shears are not as effective at the redistribution of ionization, and it is found that the high-latitude electric fields may cause layer formation. Here the mechanism for high-latitude layer formation is studied through numerical simulation and incoherent-scatter radar observations. A one-dimensional simulation examined high-latitude layer formation in detail; in particular it examined the effects of the direction of the electric field. It was found that thin ionization layers formed for electric fields directed in the north-west and south-west quadrants. The layers formed by electric fields in the south-west quadrant form at altitudes which are consistent with observations, while fields in the north-west quadrant resulted in layer altitudes which were higher than usually observed. It was also found that the neutral wind acting in concert with the electric field may affect the altitude and thinness of layers. The one-dimensional simulation was extended to a three-dimensional model of the polar cap ionosphere. The three-dimensional simulation showed that large areas of thin ionization layers may form for widely varying geophysical conditions. Incoherent-scatter radar observations showed the presence of thin ionization layers on 12 out of 16 nights of observation. High-resolution spectral data showed that the average ion mass within the layers was higher than that in the background ionization, demonstrating the presence of heavy metallic ions. Concurrent observation of thin layers and electric fields showed layers present for field directions between $40^{\circ}$ and $140^{\circ}$ west of magnetic north, which agrees with the simulation; however, the range of angles is more limited than was predicted. Antenna scanning observations examined the latitudinal extent of the layers and found the layers were of a limited extent; the largest extent observed was about 200 km.
Has the cesarean epidemic in Czechia been reversed despite fertility postponement? Tomáš Fait1,2, Anna Šťastná2, Jiřina Kocourková ORCID: orcid.org/0000-0003-1339-85082, Eva Waldaufová2, Luděk Šídlo2 & Michal Kníže1 Although the percentage of cesarean sections (CS) in Czechia is below the average of that of other developed countries (23.6%), it still exceeds WHO recommendations (15%). The first aim of the study is to examine the association between a CS birth and the main health factors and sociodemographic characteristics involved, while the second aim is to examine recent trends in the CS rate in Czechia. Anonymized data on all mothers in Czechia for 2018 taken from the National Register of Expectant Mothers was employed. The risk of cesarean delivery for the observed factors was tested via the construction of a binary logistic regression model that allowed for adjustments for all the other covariates in the model. Despite all the covariates being found to be statistically significant, it was determined that health factors represented a higher risk of a CS than sociodemographic characteristics. A previous CS was found to increase the risk of its recurrence by 33 times (OR = 32.96, 95% CI 30.95–35.11, p<0.001). The breech position increased the risk of CS by 31 times (OR = 31.03, 95% CI 28.14–34.29, p<0.001). A multiple pregnancy increased the odds of CS six-fold and the use of ART 1.8-fold. Mothers who suffered from diabetes before pregnancy were found to be twice as likely to give birth via CS (OR = 2.14, 95% CI 1.76–2.60, p<0.001), while mothers with gestational diabetes had just 23% higher odds of a CS birth (OR = 1.23, 95% CI 1.16–1.31, p<0.001). Mothers who suffered from hypertension gave birth via CS twice as often as did mothers without such complications (OR = 2.01, 95% CI 1.86–2.21, p<0.001). The increasing age of mothers, a significant risk factor for a CS, was found to be independent of other health factors. Accordingly, delayed childbearing is thought to be associated with the increase in the CS rate in Czechia. However, since other factors come into play, further research is needed to assess whether the recent slight decline in the CS rate is not merely a temporal trend. Cesarean section (CS), when used appropriately, should account for 10–15% of births [1, 2]. In recent years, however, the trend toward the use of CS in obstetric practice has been on the increase worldwide. Eastern Europe witnessed one of the highest increases (two-fold) in the use of CS in the period 2000–2015 [3, 4]. This trend was particularly marked in Czechia, where the CS rate increased from 10.3% in 1994 to a maximum value of 26.1% in 2015, followed by a slight decrease to 23.6% in 2018 (Fig. 1). Currently, the CS rate in Czechia is below the average of other developed countries [3,4,5] (Fig. 2). Mean age of mothers at birth, first births and the CS rate, Czechia, 1994–2018. Source: [6,7,8,9,10,11] Cesarean section rate in OECD countries in 2017. Source: [4] One of the most likely reasons for this phenomenon concerns the dynamic increase in the age of mothers [12], which represents a significant recent demographic trend in Czechia [13,14,15]. Between 1994 and 2015 the mean age of mothers increased on a continuous basis, as did the share of CS, both of which stagnated only recently (Fig. 1). 
Fertility postponement is further connected with a decrease in the probability of having a second child [16, 17], the increased use of assisted reproduction methods [18, 19], and health risks for both mothers and their children [20, 21], i.e. factors which are also related to the increased use of CS [22, 23]. The reasons for the increase in the CS are multifactorial and include health care practices [2, 3]. The care of pregnant women in Czechia is fully entrusted to gynecologists and obstetricians. It is strongly recommended that the birth should take place in a medical facility and, even if it is conducted by a midwife, the doctor remains the legally responsible person. The decision on a planned CS cannot be based on a request from the mother. While some maternity facilities are run by private companies, all the health care facilities used by Czech citizens are covered by the public health insurance system under the same conditions. The first aim of the article is to evaluate, taking Czechia as an example, the association between the use of CS and the main medical factors related to the increased use of CS (complications during pregnancy and childbirth, diabetes, gestational age, the birth weight, the breech position, repeat CS, singleton/multiple pregnancy, and conception method) and to subsequently compare these associations with those between the use of CS and sociodemographic characteristics (the age of the mother, the birth order, marital status and the mother's level of education). The second aim is to examine recent trends in the CS rate in Czechia. Data and methodology The study employed a unique data source that contains anonymized data on all mothers in Czechia for 2018 obtained from the National Registry of Mothers at Childbirth (NRMC), which is managed by the Institute of Health Information and Statistics of the Czech Republic (IHIS CR) [7]. The data contained in the National Register is based on the so-called report on the mother at childbirth, a mandatory statistical report that is completed on all mothers, including foreigners, who give birth in Czechia. Data on the CS rate in the private sector is not reported separately. In 2018, a total of 111,749 mothers gave birth to 113,234 children; 6.2% of them had non-Czech citizenship [7]. Since one of the most important considerations concerning the study of cesarean births is whether ART was used to achieve pregnancy, information on the date of embryo transfer was added to the data set by linking the file from the NRMC with the respective file obtained from the National Register of Assisted Reproduction (NRAR) using the mothers' so-called birth numbers (a unique number that is assigned to all Czechs at birth). Based on the comparison of the date of birth and the date of embryo transfer, it was possible to estimate those pregnancies that resulted from the use of ART. Firstly, a descriptive analysis of the relationships between the observed variables was conducted so as to evaluate the distribution of the increased incidence of CS births according to the various factors considered. Most of the monitored variables contained data on all the mothers, with the exception of marital status and level of education; the completion of these questions is optional. There was a lack of information on 673 mothers concerning marital status (0.6% of the total sample) and on 23,113 mothers in the case of the level of education (20.7% of the total sample). Cesarean deliveries were divided into planned and acute. 
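As an illustration of the register linkage just described, the following Python/pandas sketch shows one way two such anonymized files could be joined and an ART flag derived from the embryo-transfer date. It is a hypothetical minimal example: the column names, toy identifiers and the 300-day look-back window are assumptions made here, not the actual NRMC/NRAR field names or the authors' exact decision rule.

```python
import pandas as pd

# Toy stand-ins for the two registers (one row per record); identifiers and
# dates are invented for illustration only.
nrmc = pd.DataFrame({
    "mother_id": ["A", "B", "C"],
    "delivery_date": pd.to_datetime(["2018-03-10", "2018-07-22", "2018-11-05"]),
})
nrar = pd.DataFrame({
    "mother_id": ["B"],
    "embryo_transfer_date": pd.to_datetime(["2017-10-30"]),
})

# Link the registers on the anonymized identifier, keeping all mothers.
linked = nrmc.merge(nrar, on="mother_id", how="left")

# Flag a pregnancy as ART-conceived when an embryo transfer is recorded within
# a plausible window before the delivery (the 300-day window is an assumption;
# the exact comparison rule used by the authors is not stated in the text).
gap = linked["delivery_date"] - linked["embryo_transfer_date"]
linked["art"] = gap.notna() & (gap > pd.Timedelta(0)) & (gap <= pd.Timedelta(days=300))
print(linked[["mother_id", "art"]])
```

In the real analysis the join key was the mother's birth number and the comparison involved the dates recorded in the two national registers; the sketch only conveys the overall shape of such a linkage.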
In order to assess the association between the various covariates and the risk of CS, a binary logistic regression model was constructed, which enabled the testing of the association of the various variables with the incidence of CS births (1 yes, 0 no), assuming all the other characteristics of the mothers were equal. The application of the binary logistic regression model allowed for the removal of the mutual influence of the covariates and the testing of whether they also acted individually, all else being equal. Two logistic regression models were constructed. Model 1 included all the mothers except for those for whom no data was available on the marital status and level of education (N = 88,041, i.e. 79% of the total number of mothers), while Model 2 included only those mothers who had already given birth in the past (a total of 46,127 mothers, i.e. 80% of repeat mothers after excluding women with no data for marital status and/or education). The binary logistic regression model was used to explain the effects of the explanatory variables on the dependent variable "having a childbirth via cesarean section" (Y = 1 for cesarean section, otherwise Y = 0). Here $x = (x_1, \dots, x_k)'$ is the vector of the explanatory variables:
$$\mathrm{logit}\left(\Pr(Y=1\mid x)\right)=\log\left\{\frac{\Pr(Y=1\mid x)}{1-\Pr(Y=1\mid x)}\right\}=\beta_0+x'\beta,$$
where β0 is the intercept parameter and β is the vector of the slope parameters. For the sake of clarity, the results were interpreted in terms of odds ratios (OR), which quantify the odds of a cesarean delivery for each category compared to the given reference category. A number of demographic, health and socio-economic characteristics were included in the models as explanatory variables. With the exception of the age of the mother at childbirth (continuous), all the following covariates were categorical and were transformed into dummy variables: Marital status was divided into four categories: single, married (ref.), divorced and widowed. The highest attained level of education was divided into four groups: basic (including incomplete), secondary without the school leaving certificate (SLC), secondary with the SLC (ref.) and tertiary. The WHO classification of premature babies [24] was used for the categorization of the gestational age, namely: extremely preterm (less than 28 weeks), very preterm (28 to < 32 weeks), and moderate to late preterm (32 to < 37 weeks). The post-term birth category was defined according to a report by Spong [25], which indicates a post-term delivery as occurring from the 42nd week of pregnancy. A gestational age of 37–41 weeks was used as the reference category. The birth order was divided into 3 categories, namely women who had not previously given birth (ref.), women who had given birth for a second time, and women with third and higher order births. Singleton (ref.) versus multiple pregnancy. Previous CS birth (only in Model 2, in which first-time mothers did not feature): no (ref.), yes. The probable method of conception was estimated based on the embryo transfer date reported for 4,018 mothers. This method of assisted reproduction was used by 3.6% of mothers. This variable was then divided into two categories: without the use of ART (ref.) and following ART.
The incidence of diabetes in the mothers was divided into three categories: not detected (ref.), detected prior to pregnancy, detected during pregnancy. Hypertension and threatened preterm labor, which are among the most common health complications, were identified as serious complications during pregnancy and childbirth. Other complications (bleeding in the first, second and third trimesters, placenta previa, placental abruption and other placental abnormalities, cardiovascular complications, preeclampsia, intra-uterine growth restriction and others) were combined in the "other complications" category. The models also considered the incidence of a breech presentation. This variable was assigned the values: no (ref.) and yes. In the case of multiple pregnancies, the pregnancy was classified as "yes" if at least one of the children was in the breech position. The birth weight was not included in the regression model since it is not considered to be a key indicator of CS. The preferred routine adopted by the field of obstetrics in Czechia comprises the evaluation of placental functioning applying the ultrasonographic measurement of flow through the umbilical artery and the middle cerebral artery. The birth weight is recorded only following the birth of the child; data on the estimated birth weight of the child prior to the birth does not form a part of the official data on which a decision on a CS is based. In addition, the variable birth weight, which has a high degree of multicollinearity with the gestational age, was monitored in the descriptive analysis via the following 5 categories: extremely low birth weight (< 1,000 g), very low birth weight (1,000–1,499 g), low birth weight (1,500–2,499 g), normal birth weight (2,500–3,999 g) and high birth weight (≥ 4,000 g) [26]. The analysis considered the lowest birth weight in the case of multiple births. The analysis was performed using SPSS Statistics 26 software. The findings and the discussion are reported according to STROBE Statement guidelines [27]. In 2018, a total of 111,749 mothers gave birth to 113,234 children in Czechia [7]. The highest proportion of mothers comprised the 30–34 age group (34.6%), followed by the 25–29 age group (30.5%). In 2018, 6.9% of mothers gave birth prematurely in Czechia and 48.2% of all mothers gave birth to their first child. 35.3% of mothers had second-order births and the remaining 16.5% had third-order and higher births. 10.5% of mothers had experienced at least one previous CS birth. 1,464 sets of twins were born in Czechia in 2018, i.e. 1.3% of all births. The share of CS births in 2018 was 23.6%. The highest proportion of CS births concerned elective CS planned during pregnancy (42.9%) and, together with elective CS, but performed during labour (7.8%), accounted for a total of 50.7%. Emergent cesarean sections performed during labor accounted for 33.5%, and during pregnancy 15.8%. Of all women who gave birth via CS (23,341) 62% were aged 30 and over. 31.8% of all CS births were repeat CS births, of which 18.9% were breech presentations, 6.4% followed ART and 4.4% were multiple pregnancies. Differences in the frequency of CS deliveries by socio-demographic characteristics The distribution of CS according to age categories indicated an increasing risk with the age of the mother (Table 1). The lowest share of CS births referred to the up to 19 years age category (15.8%), with higher proportions in each subsequent age category. 
Compared to the total proportion of births via CS of 23.6%, the up to 29 years age group had a lower share than the average, and the 30–34 years category corresponded to the average. A significantly higher proportion of CS births concerned mothers aged 35–39 and over 40, for whom 37.2% of pregnancies ended in CS births. Conversely, the share of women with a vaginal delivery decreased with age from 84.2% before the age of 20 to 62.8% for women aged 40 and over. Table 1 Cesarean delivery by the age of the mother at delivery, Czechia, 2018 In addition, the ratio of planned and emergent CS also varied depending on the age of the mother (Fig. 3), i.e. the share of planned CS births increased with age. Compared to the youngest mother age group (up to 19 years of age), concerning whom 30.2% of all CS births were planned, the over 40 years age group featured more than twice the percentage of mothers with planned CS. Elective and emergent CS by maternal age at delivery, Czechia, 2018. Source: [7], own calculations The proportion of CS births varied slightly depending on the birth order (Fig. 4). The highest share concerned first-time mothers (25.0%) and the lowest share mothers of third and higher birth orders (21.6%). Slight differences were also observed with respect to the education attained and the marital status of the mothers (Fig. 4). Divorced women (27.9%) and widows (28.3%) gave birth via CS more frequently than did single (23.6%) and married (23.2%) women. A lower share of CS births was observed for women with basic and incomplete education levels (22.2%), and the highest share for secondary school (with SLC) graduates (24.5%). Percentage of CS of all deliveries for the given category of mothers, Czechia, 2018. Source: [7], own calculations Differences in the frequency of CS deliveries by health indication Significant differences were determined with respect to the number of pregnancies and whether or not the mother had previously given birth via CS (Table 2). Multiple births were predominantly via CS (78.7%) as were births by women who had previously had a CS birth (71.2%). Table 2 Percentage of CS according to a previous CS, singleton/multiple pregnancy and breech presentation, Czechia, 2018 The proportion of CS births increased in proportion to the occurrence of complications during pregnancy and childbirth (Fig. 5). The risk of CS was higher for those mothers at risk of a pre-term delivery (30.1%). Women with hypertension (37.6%) and other complications (39.0%) gave birth via CS almost twice as often as did women without health complications (20.9%). Percentage of CS according to maternal health complications and ART usage, Czechia, 2018. Source: [7], own calculations The incidence of various types of complications during pregnancy and childbirth also varied depending on the age of the mother. Complications such as first and third trimester bleeding, placenta previa, placental abruption and other placental abnormalities, cardiovascular complications and hypertension primarily affected mothers over 35 years of age, and even more significantly mothers over 40 years of age. Conversely, some complications were characteristic of younger mothers under 24 years of age, specifically the occurrence of a significantly higher proportion of the threat of pre-term birth and intra-uterine growth restriction. The incidence of other health complications (preeclampsia, bleeding in the second trimester) did not differ significantly according to the age of the mother. 
However, the proportion of diabetes, especially diabetes that was detected during pregnancy, increased with the age of the mothers (Table 3); while 6.2% of mothers aged 25–29 were affected, 11.6% of mothers aged over 40 suffered from this condition. 40% of women with preexisting diabetes gave birth via CS, while 29.9% of women with gestational diabetes and 23% of women without diabetes had CS births. Table 3 Diabetes by the age of the mother at delivery, Czechia, 2018 CS delivery was observed to be less frequent for women who became pregnant without ART (22.9%) than those who underwent assisted reproduction techniques (42.2%) (Fig. 5). Of all children born via CS, 18.6% were in the breech presentation, 3% in the transverse and oblique lie and 78.4% were in the vertex presentation. Only 9.8% of children in the breech presentation were born spontaneously. The proportion of CS births varied significantly according to the birth weight of the child (Fig. 6). CS was significantly more common in the case of newborns who weighed less than 2,500 g than for those with normal birth weights. The highest share of CS births concerned the very low weight category (68.5%). A higher proportion of CS births was also recorded for children with higher birth weights (25.9%) than for those with normal birth weights (21.5%). Percentage of CS of all deliveries according to birth weight and gestational age, Czechia, 2018. Source: [7], own calculations The final observed change concerned the gestational age. Significant differences were observed between mothers of gestational ages ranging from 22 to 45 weeks. The proportion of CS was significantly higher for pre-term births than for term births, while the share was slightly higher for post-term births (Fig. 6). Extremely pre-term and very pre-term births took place via CS in more than half of all such cases, while moderate to late pre-term births involved CS in 40.4% of such cases. The incidence of pre-term births was higher for women younger than 25 years and older than 40 years than for those aged 20–34. Health factors versus the socio-demographics associated with the increased odds of a CS delivery The sociodemographic characteristics and health status of all the women were analyzed together employing binary logistic regression in order to identify the covariates associated with the increased odds of a CS delivery. All the covariates were entered into Model 1 for 88,041 mothers (i.e. 79% of all mothers). The results revealed that the odds of a cesarean birth increases with the maternal age (Table 4 – Model 1). Thus, the increasing age of mothers is an important covariate associated with the increasing incidence of CS births, even when it is adjusted for relevant confounders—other age-dependent risk characteristics (e.g. pregnancy complications, the use of ART and multiple births). Table 4 Odds Ratios (Exp(B)) of undergoing a cesarean delivery, Czechia, 2018 The odds of a CS decreased with the birth order: for second-time mothers the odds were 11% lower than for first-time mothers (OR = 0.89, 95% CI 0.86–0.93, p<0.001) and 25% lower for mothers of higher order births (OR = 0.75, 95% CI 0.71–0.79, p<0.001). Women who gave birth to multiple children had 6-times higher odds of a CS (OR = 6.08, 95% CI 5.13–7.21, p<0.001) than women with singleton pregnancies. Slightly higher odds of a CS birth were detected for single women (OR = 1.06, 95% CI 1.02–1.10, p<0.01) than for married women. No significant difference was observed with respect to the other categories. 
In terms of the level of education attained, lower odds of giving birth via CS were detected for women with a tertiary education (OR = 0.89, 95% CI 0.85–0.93, p<0.001) compared to women with secondary education with SLC. In terms of health characteristics, the breech position comprises a decisive indication for a CS birth; the odds of a CS birth were more than 30 times higher than for women whose child was in a different position (OR = 31.06, 95% CI 28.14–34.29, p<0,001). Women who gave birth pre-term also had higher odds of giving birth via CS, especially those who had very pre-term births (OR = 2.94, 95% CI 2.33–3.71, p<0.001). Further, women who most likely became pregnant following embryo transfer had significantly higher odds of a cesarean delivery, even after adjusting for the mother's age and the birth order and frequency; the odds of giving birth via CS were 1.8-times higher than for women who did not undergo ART (OR = 1.83, 95% CI 1.69–1.99, p<0.001). Mothers who suffered from diabetes prior to pregnancy were more than twice as likely to give birth via CS than women who did not have the condition (OR = 2.14, 95% CI 1.76–2.60, p<0.001), while those with gestational diabetes had only 1.2-times higher odds (OR = 1.23, 95% CI 1.16–1.31, p<0.001). Furthermore, mothers who suffered from hypertension had twice the odds of a CS birth than those without such complications (OR = 2.01, 95% CI 1.86–2.21, p<0.001). Personal history of cesarean section Model 2 included only those women who had already given birth (57,960 mothers, i.e. 51.9%), which allowed for the addition of the very significant variable of whether the woman had given birth via cesarean section in the past (Table 4 – Model 2). It was revealed that a previous CS birth comprises an absolutely crucial explanatory variable for a subsequent cesarean delivery. Second and higher-order mothers with previous experience of CS had 32-times higher odds of giving birth via CS than those who had previously given birth vaginally (OR = 32.96, 95% CI 30.95–35.11, p<0.001). Either no change was observed with respect to the association of the other monitored variables (diabetes, complications in pregnancy and childbirth, education) or the odds even increased (gestational age, multiple pregnancy and ART use). The odds of CS for women who gave birth very pre-term was 3.5-times higher (OR = 3.56, 95% CI 2.32–5.45, p<0,001) than for those who gave birth within term, and the odds of CS for women with a multiple pregnancy was almost 9-times higher (OR = 8.94, 95% CI 6.93–11.54, p<0,001) than for those who had a singleton pregnancy. In accordance with the Robson classification of CS [28], which is accepted as the global standard for the monitoring of the CS indication spectrum [29], a previous CS birth and the breech presentation were confirmed as the highest risk factors for CS birth in Czechia (31-times higher odds of a CS birth for a breech position and 35-times higher odds of a CS birth for a breech position for multiparous women; 32-times higher odds of a CS birth following a previous CS birth) followed by multiple pregnancies (6-times higher odds and 9-times higher odds for multiparous women) and ART use (2-times higher odds). Our analysis also confirmed the importance of the other health and socio-demographic factors examined, i.e. they evinced statistical significance after adjustment for all the other covariates: gestational age, diabetes, complications in pregnancy and childbirth, the mother's age, marital status and education. 
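To make the link between the model specified in the methods section and the odds ratios summarized above more concrete, the sketch below fits a binary logistic regression on simulated data and exponentiates the coefficients and their confidence limits. It is an illustration only: the data frame, variable names and simulated effect sizes are invented and do not reproduce the NRMC data or the published estimates in Table 4.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 5000

# Simulated mothers (invented covariates loosely mimicking the model setup).
df = pd.DataFrame({
    "age": rng.normal(30, 5, n),              # age at delivery, continuous
    "breech": rng.binomial(1, 0.05, n),       # breech presentation (0/1)
    "previous_cs": rng.binomial(1, 0.10, n),  # previous CS birth (0/1)
    "multiple": rng.binomial(1, 0.015, n),    # multiple pregnancy (0/1)
})

# Simulated outcome: the log-odds of a CS rise with age and with each factor.
log_odds = (-1.5 + 0.05 * (df["age"] - 30)
            + 3.4 * df["breech"] + 3.5 * df["previous_cs"] + 1.8 * df["multiple"])
df["cs"] = rng.binomial(1, (1 / (1 + np.exp(-log_odds))).to_numpy())

# Fit the binary logistic regression and report odds ratios with 95 % CIs.
model = smf.logit("cs ~ age + breech + previous_cs + multiple", data=df).fit(disp=0)
summary = pd.DataFrame({
    "OR": np.exp(model.params),
    "CI_low": np.exp(model.conf_int()[0]),
    "CI_high": np.exp(model.conf_int()[1]),
    "p": model.pvalues,
})
print(summary.round(3))
```

Exponentiating a fitted coefficient converts it into the odds ratio for a one-unit change in that covariate (or for a category relative to its reference), which is how the values in Table 4 should be read.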
The differences in the risk of a CS birth according to marital status and education were statistically significant only for certain categories. A slightly higher risk of CS (1.06-times higher odds) was observed for single compared to married women, and a lower risk of CS (0.89-times the odds) was observed for tertiary-educated women than for those with a secondary education. Our results confirmed the age factor as an independent risk factor for a CS birth. With respect to the explanation for the increase in the CS rate in Czechia since the 1990s, both clinical (higher maternal ages at birth, an increase in ART use, multiple pregnancies) and non-clinical factors (health provider practices and guidelines, legislation) played noticeable roles.
Despite the use of a comprehensive dataset, the study has a number of limitations. The design does not allow for the causal interpretation of the associations studied. The covariates in the models were restricted to those available in the register. Information on education and marital status is not provided for all the women in the dataset; hence, for this part of the analysis, it was necessary to reduce the dataset by 21%, although no differences were observed between the two groups in terms of the structure of the mothers by age and CS births. Moreover, information on the use of ART methods was estimated based on information on ART cycles performed in Czechia only; foreign women and women who underwent ART abroad were thus classified as non-ART. Given that Czechia is more likely to be a destination country for cross-border reproductive care, we did not anticipate any bias in the results from this point of view. As for the explanatory factors, since we had no information on the maternal pre-pregnancy weight and height, we were unable to adjust for the body mass index.
Our results are consistent with the literature in terms of reporting significant associations between the studied risk factors and a CS birth. It is reasonable to conclude that these factors have, to various extents, been behind the growth in the CS rate in Czechia since the 1990s. It is important to prevent the further growth of the CS rate and to determine the optimal percentage of CS, especially concerning elective, planned and primarily indicated CS deliveries. It is clear that the underuse of CS results in hypoxic neonatal injury, stillbirth, uterine rupture and obstetric fistulas [30], while the overuse of CS is associated with the increased risk of anesthesiologic and cardiovascular complications, infection complications and hysterectomy [31], as well as with adverse perinatal outcomes [32]. The incidence of serious complications is so rare, due to advances in health care, that many obstetricians lack the relevant experience. Nevertheless, the data clearly indicates a higher risk of morbidity and mortality as a result of a CS than of a spontaneous delivery, even with respect to VBAC (vaginal birth after cesarean) [33]. However, in Czechia many patients and some obstetricians appear to believe that the opposite is the case, as reflected by the fact that a previous CS was found to be a key risk factor for a subsequent CS. The fact that 71.2% of Czech women with a history of CS give birth again via CS serves to confirm the low chance of a VBAC in such cases. This is in line with another study that documented that a high percentage of births via CS are followed by a subsequent birth via the same method without the option of TOLAC (the trial of labor after cesarean) [34].
Increased maternal age [35] also contributes to the indication of ERCS (elective repeat CS). Enforcing this practice in Czechia may also have contributed to the increase in the CS rate. The increase in women giving birth via CS in their first pregnancy results in an ongoing increase in the repeat CS birth rate [36]. If the CS rate increases for first-time mothers, it can be expected that this will generate a higher proportion of repeat CS. Accordingly, it can be assumed that a change in practice has the potential to reduce the CS rate in Czechia [37]. A further reason for the increase in CS births concerns the move away from spontaneous delivery when the fetus is in the breech presentation. This trend, initiated by the Term Breech Trial Collaborative Group study [38], has gradually led to a decline in the experience of such births and, thus, to a further increase in the use of CS. This approach has begun to be applied consistently in Czechia and is frequently referred to in medical study materials. However, spontaneous delivery when the fetus is in the breech presentation remains inadvisable, especially in the case of pre-term births [39]. The literature shows that a number of maternal health risks are age-related and that the risk of a cesarean birth increases with the maternal age [23, 40, 41]. For example, older mothers are associated with higher risks of the incidence of diabetes mellitus [42], pre-term births [24, 43], lower child birth weights [20, 21, 44, 45] and pre-term births associated with diabetes mellitus [46, 47]. Mothers over 30 years of age also face the increased risk of child health complications, spend longer times in hospital following the birth and face a higher risk of more frequent and longer hospital stays in the first two years of the child's life [48]. The application of logistic regression confirmed that both pregnancy health complications (preterm-birth, diabetes, hypertension) and the mother´s age comprise independent risk factors for a CS birth. The Czech results confirmed that mothers who gave birth very pre-term (28–31 weeks) had 3-times higher odds of a CS than women who had an in-term birth [49]. Mothers who suffered from diabetes before pregnancy had more than two-times higher odds of giving birth via CS than women who did not suffer from this condition, while mothers with gestational diabetes had 1.23-times higher odds; these results correspond to those of other published studies [50]. As expected, mothers who suffer from hypertension gave birth via CS twice as often as did those with no such complications [51]. Furthermore, our results confirmed the age factor as an independent risk for CS birth. Similar results were reported in a British study [52], the sample population of which comprised 76,158 singleton pregnancies with a live fetus at 11 + 0 to 13 + 6 weeks. After adjusting for potential maternal and pregnancy confounding variables, advanced maternal age (defined as ≥ 40 years) was associated with an increased risk of cesarean section (OR, 1.95 (95% CI, 1.77–2.14); P < 0.001). A recent Danish study [12] showed that nulliparous women aged 35–39 years had twice the risk of a CS (adjusted OR, 2.18 (95% CI, 2.11–2.26); P < 0.001). Thus, one of today's most important population trends – fertility postponement – also comprises one of the significant independent factors associated with the risk of a CS birth. According to Timofeev et al. 
[22], the ideal age of mothers at birth is 25–29 years, at which time the risk of complications in pregnancy and the neonatal period is lowest. The increased risk of an adverse pregnancy is evident as early as between 30 and 34 years and continues to increase with age [20]. The question thus concerns the age that marks the limit in terms of the increased health risks associated with the mother's age. The association becomes significant from the age of 40 onwards [52], sometimes even after the age of 35 [12]. In any case, the risks associated with age are of a progressive character [20, 41]. The highest fertility rate in Czechia in 2018 was attained by women aged 30, in contrast to the early 1990s when maximum fertility was attained at the age of 22 [14]. The shift in fertility to older women in Czechia is further illustrated via a comparison of the share of fertility achieved by the age of 30. In 1989, the proportion stood at 86.6%, whereas by 2018 the share had dropped to 48.6% [17]. Thus, the trend toward delayed childbearing is apparent in Czechia as a result of the second demographic transition [53, 54], which indicates that reverse changes in fertility trends are highly unlikely. Nevertheless, fertility postponement can be decelerated or halted by the introduction of effective measures that act to remove barriers to starting a family [14]. To sum up, the strength of the association between advanced maternal age and CS and the fact that the trend in the share of CS births in Czechia has copied the trend in the mean age of mothers at childbirth (Fig. 1) support the hypothesis of a causal relationship between the maternal age and CS. However, as other factors come into play, further research is required so as to assess whether the recent slight decline in the CS rate is not merely a temporal trend. A further risk factor that is closely connected with fertility postponement concerns the use of ART. Our results confirmed that mothers who most likely became pregnant following embryo transfer also had 1.83 higher odds of a cesarean delivery, even when controlling for the age, order and frequency of birth. According to the meta-analysis of the Medline, EMBASE and CINAHL databases [55], IVF/ICSI pregnancies are associated with a 1.90-fold increase in the odds of a CS (95% CI 1.76–2.06) compared to spontaneous conceptions. Since the late 1990s, Czechia has registered a significant increase in the use of ART and it has become a country with a relatively high proportion of ART live births [18, 19]. Accordingly, the increased use of ART in Czechia may have contributed to the explanation of the increase in the CS rate. It is noteworthy that, despite the decline in marriage, marital status continues to comprise a relevant variable. In Czechia a slightly higher risk of CS (OR 1.06) was observed for single compared to married women despite the control of variables such as the age of the mother and the birth order. The higher risk of giving birth via CS for single women may be due to the fact that marital status is related to the health status, i.e. married persons have a higher level of self-esteem than do single people [56]. With regard to the level of the woman's education, no significant differences were detected in terms of the risk of a CS between women with a basic but incomplete education, secondary without the SLC (school leaving certificate) and secondary with the SLC. 
Controlling for age and other variables revealed lower odds of a CS birth (OR 0.89) for tertiary-educated women than for those with the SLC. The higher odds of CS for women with lower levels of education could be explained by their working in riskier professions, a higher incidence of smoking or obesity, or generally poorer living conditions [57]. Conversely, tertiary-educated women are, in general, more open to practicing a healthy lifestyle and receptive to the promotion of the benefits of natural childbirth in contrast to the numerous risks of CS for the subsequent health of both mothers and their children [58]. Thus, the introduction of health education as a component of antenatal care should be considered as a non-clinical intervention aimed at reducing the unnecessary use of CS [59]. The trend toward an increase in CS in Czechia can also be understood from the legislative perspective, in particular with respect to the introduction of the new Civil Code in 2014, which replaced clearly defined compensation levels for personal injury with amounts decided solely by the courts. The courts continue to maintain the misconception that CS is the best form of intervention in terms of assuring the health of the child and mother. A similar situation has been reported by Longo with respect to Italy [5, 60, 61]. The share of CS births in Czechia (23.6%) exceeds the WHO recommendations of 2015 on the optimal proportion of CS births (10–15%). Based on our results, we doubt whether the WHO recommendations reflect the increasing age of mothers, especially first-time mothers, and the high degree of institutionalization of deliveries in developed countries. Entrusting the delivery to physicians is usually accompanied by a significantly higher degree of monitoring, with the associated risks of false-positive indications of hypoxia, a higher rate of medication use, and a loss of faith in normal childbirth [62]. Some women prefer a CS since they consider it to be safer for both themselves and the baby, an opinion that runs contrary to current scientific knowledge. A history of CS is associated with a higher risk of uterine rupture, placenta accreta, ectopic pregnancy, stillbirth, pre-term birth, bleeding requiring a blood transfusion, injury during surgery, and hysterectomy in subsequent pregnancies. A CS at a higher birth order also increases the risk of maternal mortality and morbidity compared to a vaginal delivery [63]. CS may also lead to increased health risks for the baby: altered immune development, a higher likelihood of allergies, atopy and asthma, reduced intestinal microbiome diversity [64] and late childhood obesity [65]. These risks are higher for planned CS. Few studies have been conducted to date on the influence of CS on the cognitive and educational outcomes of CS-born children [63]. Thus, it is important that all the indications for birth via CS are carefully considered and that this method is not overused. Czechia makes no systematic effort to reduce the percentage of cesarean sections; on the contrary, the reimbursement of costs by health insurance companies is higher for a cesarean section than for a spontaneous birth. One of the measures that might significantly prevent the expansion of CS use is a recommendation from the relevant professional authorities to strictly refuse cesarean sections on request [66]. 
Although this recommendation has been mentioned frequently in various professional forums in Czechia [67], efforts persist internationally to promote dubious indications for a CS birth, such as the protection of the pelvic floor [68], an approach that also enjoys some support in Czechia. Nevertheless, in Czechia, CS on request is not legally permitted. Furthermore, the implementation of clinical practice guidelines combined with a mandatory second opinion for a CS indication is also relevant to reducing the use of CS in Czechia [66]. In conclusion, despite the international concern surrounding the increasing CS rate, the Czech CS rate decreased from 26.1% in 2015 to 23.6% in 2018. Interestingly, this has not been attributed to any particular Czech health strategy aimed at reducing the CS rate. Although it has been perceived as a significant success for the field of Czech obstetrics, further research is needed in order to assess whether this is more than a temporary trend.
Meaning of the study: possible mechanisms and implications for clinicians and policymakers
Delayed childbearing appears to be associated with the increasing use of CS, in parallel with the expansion of defensive obstetrics, which implies a high likelihood of CS in cases of a breech presentation and following a previous CS. In addition, the increased use of CS also reflects social demand, an increasing trend toward the prosecution of obstetricians in the event of childbirth complications and the erroneous lay perception of CS as the safest and least painful childbirth method. On the other hand, clinical practice based on the official refusal of CS on request could well prevent the overuse of CS. As regards obstetric practice, measures to encourage TOLAC, albeit with a careful eligibility assessment, may also help to reduce CS. As regards non-clinical interventions targeted at women, the support of training programs and health education on the indications and contra-indications of CS may also help to reduce the CS rate. The aim of the study was to contribute to the explanation of recent trends in the CS rate in Czechia based on the examination of the association between a CS birth and selected health factors and sociodemographic characteristics. Our analysis confirmed that the mother's age constitutes an independent risk factor for a CS birth in addition to pregnancy health complications and other sociodemographic characteristics. Accordingly, delayed childbearing appears to be associated with the increase in the CS rate in Czechia. However, the recent slight decline in the CS rate may be related to the completion of the fertility postponement process in Czechia. Nevertheless, since other factors come into play, further research is required in order to assess whether this decline is more than a temporary trend.
The data that supports the findings of this study is available from the Institute of Health Information and Statistics of the Czech Republic (IHIS CR); however, restrictions apply to the availability of the data, which was used under license for the current study; hence the data is not publicly available. The data is, however, available from the authors upon reasonable request and with the permission of the IHIS CR. 
CS: NRMC: National Registry of Mothers at Childbirth IHIS CR: Institute of Health Information and Statistics of the Czech Republic Assisted reproductive technologies NRAR: National Register of Assisted Reproduction VBAC: Vaginal birth after cesarean TOLAC: Trial of labor after cesarean TFR: IVF: In vitro fertilization ICSI: Intracytoplasmic sperm injections SLC: Ye J, Zhang J, Mikolajczyk R, Torloni MR, Gülmezoglu AM, Betran AP. Association between rates of caesarean section and maternal and neonatal mortality in the 21st century: a worldwide population-based ecological study with longitudinal data. BJOG. 2016. https://doi.org/10.1111/1471-0528.13592. Betran AP, Torloni MR, Zhang JJ, Gülmezoglu AM. WHO Working Group on Caesarean Section. WHO Statement on Caesarean Section Rates. BJOG. 2016;123(5):667–70. https://doi.org/10.1111/1471-0528.13526. Boerma T, Ronsmans C, Melesse DY, Barros AJD, Barros FC, Juan L, Moller AB, Say L, Hosseinpoor AR, Yi M, de Lyra RabelloNeto D, Temmerman M. Global epidemiology of use of and disparities in caesarean sections. Lancet. 2018;392(10155):1341–8. https://doi.org/10.1016/S0140-6736(18)31928-7. OECD. Health at a Glance 2019. Chapter 9. Figure 9.16. Caesarean section rates, 2017 (or nearest year). OECD Health Statistics. 2019. https://doi.org/10.1787/888934017918. Laurita Longo V, Odjidja EN, Beia TK, Neri M, Kielmann K, Gittardi I, Di Rosa AI, Boldrini M, Melis GB, Scambia G, Lanzone A. "An unnecessary cut?" multilevel health systems analysis of drivers of caesarean sections rates in Italy: a systematic review. BMC Pregnancy Childbirth. 2020. https://doi.org/10.1186/s12884-020-03462-1. Czech Statistical Office (CZSO).Population movement in the Czech Republic 1920–2020: analytical indicators. 2021. https://www.czso.cz/csu/czso/obyvatelstvo_hu. Accessed 10 Jul 2021. National Registry of Mothers at Childbirth (NRMC). Anonymised individual data from the National Registry of Mothers at Childbirth on women who gave birth in 2016–2018 in the Czech Republic. 2018. IHIS CR. Mother and newborn 1999–2015. Institute of Health Information and Statistics of the Czech Republic. 2001–2017. https://www.uzis.cz/index-en.php?pg=publications--library&id=249. Accessed 15 Apr 2021. Cr IHIS. Mother and newborn 1997. Prague: Institute of Health Information and Statistics of the Czech Republic; 2000. Cr IHIS. Report on mothers at childbirth in 1994–1996. Prague: Institute of Health Information and Statistics of the Czech Republic; 2000. Rydahl E, Declercq E, Juhl M, Maimburg RD. Cesarean section on a rise—Does advanced maternal age explain the increase? A population register-based study. PLoS ONE. 2019;14(1):e0210655. Šťastná A, Kocourková J, Šídlo L. Reprodukční stárnutí v Česku v kontextu Evropy. [Reproduction ageing in Czechia in the European context]. Čas Lék čes. 2019;158:126–32. Kocourková J, Šťastná A. The realization of fertility intentions in the context of childbearing postponement: comparison of transitional and post-transitional populations. J Biosoc Sci. 2021. https://doi.org/10.1017/S002193202000005X. Šťastná A, Slabá J, Kocourková J. Plánování, časování a důvody odkladu narození prvního dítěte v České republice. [The planning, timing, and factors behind the postponement of first births in the Czech Republic]. Demografie. 2017;59(3):207–23. Kurkin R, Šprocha B, Šídlo L, Kocourková J. Fertility factors in Czechia according to the results of the 2011 census. AUC-Geographica. 2018;53(2):137–48. Šťastná A, Slabá J, Kocourková J. 
Druhé dítě – důvody neplánovaného odkladu a časování jeho narození [Reasons for the Unplanned Postponement and Timing of the Birth of a Second Child]. Demografie. 2019;61(2):77–92. Kocourková J, Fait T. Can increased use of ART retrieve the Czech Republic from the low fertility trap? Neuro Endocrinol Lett. 2009;30(6):739–48. Kocourková J, Burcin B, Kučera T. Demographic relevancy of increased use of assisted reproduction in European countries. Reprod Health. 2014. https://doi.org/10.1186/1742-4755-11-37. Kenny LC, Lavender T, McNamee R, O'Neill SM, Mills T, Khashan AS. Advanced maternal age and adverse pregnancy outcome: evidence from a large contemporary cohort. PLoS ONE. 2013. https://doi.org/10.1371/journal.pone.0056583. Fall CH, Sachdev HS, Osmond C, Restrepo-Mendez MC, Victora C, Martorell R, Stein AD, Sinha S, Tandon N, Adair L, Bas I, Norris S, Richter LM. COHORTS investigators. Association between maternal age at childbirth and child and adult outcomes in the offspring: a prospective study in five low-income and middle-income countries (COHORTS collaboration). Lancet Glob Health. 2015;3(7):e366–77. https://doi.org/10.1016/S2214-109X(15)00038-8. Timofeev J, Reddy UM, Huang CC, Driggers RW, Landy HJ, Laughon SK. Obstetric complications, neonatal morbidity, and indications for cesarean delivery by maternal age. Obstet Gynecol. 2013. https://doi.org/10.1097/AOG.0000000000000017. Sauer MV. Reproduction at an advanced maternal age and maternal health. Fertil Steril. 2015. https://doi.org/10.1016/j.fertnstert.2015.03.004. WHO. Born too soon: the global action report on preterm birth. Geneva: World Health Organization; 2012. Spong CY. Defining "term" pregnancy: recommendations from the Defining "Term" Pregnancy Workgroup. JAMA. 2013. https://doi.org/10.1001/jama.2013.6235. Gill SV, May-Benson TA, Teasdale A, Munsell EG. Birth and developmental correlates of birth weight in a sample of children with potential sensory processing disorder. BMC Pediatr. 2013. https://doi.org/10.1186/1471-2431-13-29. von Elm E, Altman DG, Egger M, Pocock SJ, Gøtzsche PC, Vandenbroucke JP. STROBE Initiative. The Strengthening the Reporting of Observational Studies in Epidemiology (STROBE) statement: guidelines for reporting observational studies. Ann Intern Med. 2007;147(8):573–7. https://doi.org/10.7326/0003-4819-147-8-200710160-00010 . Erratum in: Ann Intern Med. 2008 Jan 15;148(2):168. PMID: 17938396. Robson M, Hartigan L, Murphy M. Methods of achieving and maintaining an appropriate caesarean section rate. Best Pract Res Clin Obstet Gynaecol. 2013. https://doi.org/10.1016/j.bpobgyn.2012.09.004. WHO. WHO Statement on caesarean section rates. Reproductive Health Matters. 2015;23(45):18. Betrán AP, Temmerman M, Kingdon C, Mohiddin A, Opiyo N, Torloni MR, Zhang J, Musana O, Wanyonyi SZ, Gülmezoglu AM, Downe S. Interventions to reduce unnecessary caesarean sections in healthy women and babies. Lancet. 2018. https://doi.org/10.1016/S0140-6736(18)31927-5. Liu S, Liston RM, Joseph KS, Heaman M, Sauve R, Kramer MS. Maternal Health Study Group of the Canadian Perinatal Surveillance System. Maternal mortality and severe morbidity associated with low-risk planned cesarean delivery versus planned vaginal delivery at term. CMAJ. 2007;176(4):455–60. https://doi.org/10.1503/cmaj.060870. Negrini R, da Silva Ferreira RD, Guimarães DZ. Value-based care in obstetrics: comparison between vaginal birth and caesarean section. BMC Pregnancy Childbirth. 2021. https://doi.org/10.1186/s12884-021-03798-2. 
Wen SW, Rusen ID, Walker M, Liston R, Kramer MS, Baskett T, Heaman M, Liu S. Maternal Health Study Group, Canadian Perinatal Surveillance System. Comparison of maternal mortality and morbidity between trial of labor and elective cesarean section among women with previous cesarean delivery. Am J Obstet Gynecol. 2004;191(4):1263–9. https://doi.org/10.1016/j.ajog.2004.03.022. Schifrin BS, Cohen WR. The effect of malpractice claims on the use of caesarean section. Best Pract Res Clin Obstet Gynaecol. 2013;27(2):269–83. https://doi.org/10.1016/j.bpobgyn.2012.10.004. Hua Z, El Oualja F. Indicators for mode of delivery in pregnant women with uteruses scarred by prior caesarean section: a retrospective study of 679 pregnant women. BMC Pregnancy Childbirth. 2019. https://doi.org/10.1186/s12884-019-2604-0. Roberts CL, Algert CS, Ford JB, et al. Pathways to a rising caesarean section rate: a population-based cohort study. BMJ Open. 2012;2(5):e001725. https://doi.org/10.1136/bmjopen-2012-001. Roberts Christine L, Algert Charles S, Todd Angela L, Morris JM. Reducing caesarean section rates – No easy task. Aust N Z J Obstet Gynaecol. 2013;53:310–3. https://doi.org/10.1111/ajo.12065. Hannah ME, Hannah WJ, Hewson SA, Hodnett ED, Saigal S, Willan AR. Planned caesarean section versus planned vaginal birth for breech presentation at term: a randomised multicentre trial. Term Breech Trial Collaborative Group. Lancet. 2000. https://doi.org/10.1016/s0140-6736(00)02840-3. Toijonen A, Heinonen S, Gissler M, Macharey G. Risk factors for adverse outcomes in vaginal preterm breech labor. Arch Gynecol Obstet. 2021. https://doi.org/10.1007/s00404-020-05731-y. Schoen C, Rosen T. Maternal and perinatal risks for women over 44–a review. Maturitas. 2009;64(2):109–13. https://doi.org/10.1016/j.maturitas.2009.08.012. Kort DH, Gosselin J, Choi JM, Thornton MH, Cleary-Goldman J, Sauer MV. Pregnancy after age 50: defining risks for mother and child. Am J Perinatol. 2012;29(4):245–50. https://doi.org/10.1055/s-0031-1285101 . Epub 2011 Aug 1. Čechurová D, Andělová K. Doporučený postup péče o diabetes mellitus v těhotenství 2014. [Recommendation for the diabetes care in pregnancy]. DMEV. 2014;17(2):55–60. http://www.diab.cz/dokumenty/standard_tehotenstvi.pdf. Nybo Andersen AM, Wohlfahrt J, Christens P, Olsen J, Melbye M. Maternal age and fetal loss: population based register linkage study. BMJ. 2000. https://doi.org/10.1136/bmj.320.7251.1708. Vlachová T, Kocourková J, Fait T. Vyšší věk matky – rizikový faktor pro nízkou porodní váhu. [Advance maternal age – risk factor for low birhtweight]. Česká gynekologie. 2018;83(5):337–40. Kocourková J, Šídlo L, Šťastná A, Fait T. Vliv věku matky na porodní hmotnost novorozenců. [Impact of the mother's age at childbirth on the birth weight of new-born children]. Čas Lék čes. 2019;158:118–25. Goldenberg RL, Culhane JF, Iams JD, Romero R. Epidemiology and causes of preterm birth. Lancet. 2008. https://doi.org/10.1016/S0140-6736(08)60074-4. Kong L, Nilsson IAK, Gissler M, Lavebratt C. Associations of Maternal Diabetes and Body Mass Index With Offspring Birth Weight and Prematurity. JAMA Pediatr. 2019. https://doi.org/10.1001/jamapediatrics.2018.5541. Šídlo L, Šťastná A, Kocourková J, Fait T. Vliv věku matky na zdravotní stav novorozenců v Česku. [Impact of the mother's age at childbirth on the health of new-born children in Czechia]. Demografie. 2019;61(3):155–74. Schneider H. 
Schonende Geburtsleitung bei sehr frühen Frühgeburten [Gentle obstetrical management for very early preterm deliveries]. Gynakol Geburtshilfliche Rundsch. 2004;44(1):10–8. https://doi.org/10.1159/000074312. Mackin ST, Nelson SM, Kerssens JJ, Wood R, Wild S, Colhoun HM, Leese GP, Philip S, Lindsay RS. SDRN Epidemiology Group. Diabetes and pregnancy: national trends over a 15 year period. Diabetologia. 2018;61(5):1081–8. Agrawal A, Wenger NK. Hypertension During Pregnancy. Curr Hypertens Rep. 2020. https://doi.org/10.1007/s11906-020-01070-0. Khalil A, Syngelaki A, Maiz N, Zinevich Y, Nicolaides KH. Maternal age and adverse pregnancy outcome: a cohort study. Ultrasound Obstet Gynecol. 2013;42(6):634–43. https://doi.org/10.1002/uog.12494. Sobotka T, Šťastná A, Zeman K, Hamplová D, Kantorová V. Czech Republic: A Rapid Transformation of fertility and family behaviour after the collapse of state Socialism. Demographic Res. 2008;19:403–54. Polesná H, Kocourková J. Je druhý demografický přechod stále relevantní koncept pro evropské státy? [Is the second demographic transition the relevant concept for European countries?]. Geografie. 2016. https://doi.org/10.37040/geografie2016121030390. Lodge-Tulloch NA, Elias FTS, Pudwell J, Gaudet L, Walker M, Smith GN, Velez MP. Caesarean section in pregnancies conceived by assisted reproductive technology: a systematic review and meta-analysis. BMC Pregnancy Childbirth. 2021;21(1):244. https://doi.org/10.1186/s12884-021-03711-x. Liu H, Umberson DJ. The times they are a changin': marital status and health differentials from 1972 to 2003. J Health Soc Behav. 2008;49(3):239–53. https://doi.org/10.1177/002214650804900301. Tollånes MC, Thompson JM, Daltveit AK, Irgens LM. Cesarean section and maternal education; secular trends in Norway, 1967–2004. Acta Obstet Gynecol Scand. 2007. https://doi.org/10.1080/00016340701417422. Villar J, Carroli G, Zavaleta N, et al. Maternal and neonatal individual risks and benefits associated with caesarean delivery: multicentre prospective study. BMJ. 2007. https://doi.org/10.1136/bmj.39363.706956.55. Opiyo N, Kingdon C, Oladapo OT, Souza JP, Vogel JP, Bonet M, Bucagu M, Portela A, McConville F, Downe S, Gülmezoglu AM, Betrán AP. Non-clinical interventions to reduce unnecessary caesarean sections: WHO recommendations. Bull World Health Organ. 2020;98:66–8. https://doi.org/10.2471/BLT.19.236729. Francese M, Piacenza M, Romanelli M, Turati G. Understandign inapropriateness in health spending role of regional policies and institutions in Caesarean deliveries. Reg Sci Urban Econ. 2014;2014(49):262–77. Mancuso A, De Vivo A, Fanara G, Settineri S, Triolo O, Giacobbe A. Women's preference on mode of delivery in Southern Italy. Acta Obstet Gynecol Scand. 2006. https://doi.org/10.1080/00016340600645255. Panda S, Daly D, Begley C, Karlström A, Larsson B, Bäck L, Hildingsson I. Factors influencing decision-making for caesarean section in Sweden – a qualitative study. BMC Pregnancy Childbirth. 2018. https://doi.org/10.1186/s12884-018-2007-7. Sandall J, Tribe RM, Avery L, Mola G, Visser GH, Homer CS, Gibbons D, Kelly NM, Kennedy HP, Kidanto H, Taylor P, Temmerman M. Short-term and long-term effects of caesarean section on the health of women and children. Lancet. 2018. https://doi.org/10.1016/S0140-6736(18)31930-5. Salas Garcia MC, Yee AL, Gilbert JA, Dsouza M. Dysbiosis in Children Born by Caesarean Section. Ann Nutr Metab. 2018. https://doi.org/10.1159/000492168. Kuhle S, Tong OS, Woolcott CG. 
Association between caesarean section and childhood obesity: a systematic review and meta-analysis. Obes Rev. 2015. https://doi.org/10.1111/obr.12267. Chen I, Opiyo N, Tavender E, Mortazhejri S, Rader T, Petkovic J, Yogasingam S, Taljaard M, Agarwal S, Laopaiboon M, Wasiak J, Khunpradit S, Lumbiganon P, Gruen RL, Betran AP. Non-clinical interventions for reducing unnecessary caesarean section. Cochrane Database Syst Rev. 2018. https://doi.org/10.1002/14651858.CD005528.pub3. Cepicky P. Postupy lege artis I. Moderní gynekologie a porodnictví. [Procedures lege artis I. Modern gynaecology and obstetrics.] 2004;13(4):Suppl C. ISBN 80-87070-00-3. López-López AI, Sanz-Valero J, Gómez-Pérez L, Pastor-Valero M. Pelvic floor: vaginal or caesarean delivery? A review of systematic reviews. Int Urogynecol J. 2021. https://doi.org/10.1007/s00192-020-04550-8. The authors thank the General Health Insurance Company of the Czech Republic for providing detailed data sources and the Department of Demography and Geodemography, Faculty of Science, Charles University for providing general support in the processing of the research. This paper was supported by the Czech Science Foundation (No. 18-08013S) "Transition towards the late childbearing pattern: individual prospects versus societal costs" project and by the Charles University Research Centre program (UNCE/HUM/018). Department of Gynecology and Obstetrics, Second Faculty of Medicine, Charles University and Motol University Hospital, Prague, Czechia Tomáš Fait & Michal Kníže Department of Demography and Geodemography, Faculty of Science, Charles University, Prague, Czechia Tomáš Fait, Anna Šťastná, Jiřina Kocourková, Eva Waldaufová & Luděk Šídlo Tomáš Fait Anna Šťastná Jiřina Kocourková Eva Waldaufová Luděk Šídlo Michal Kníže Conceptualization and design of the research: TF, AŠ and JK. Methodology: AŠ and TF. Formal analysis and interpretation of the data: AŠ and EW. Interpretation of the data: LŠ and MK. Writing – original draft preparation: TF and JK. Writing—reviewing and editing: AŠ. Data curation and visualization: LŠ. Manuscript revision for important intellectual content: TF, AŠ, JK, LŠ and MK. All the authors have read and approved the final manuscript. Correspondence to Jiřina Kocourková. Ethical approval was not required for this study. This study did not involve the use of human tissues or animal experimentation. Anonymized data was obtained directly from the Institute of Health Information and Statistics of the Czech Republic by signing a declaration of confidentiality. The Institute of Health Information and Statistics of the Czech Republic is mandated by Act No. 372/2011 Coll., on Health Services and the Conditions of their Provision (the Act on Health Services) and by Act No. 89/1995 Coll., on the National Statistical Service, as subsequently amended, to administrate the National Health Information System (NHIS) and to collect statistical data based on the mandatory statistical reporting of all mothers in Czechia. When processing personal data in the NHIS, the Institute of Health Information and Statistics of the Czech Republic follows Regulation (EU) 2016/679 of the European Parliament and of the Council of 27 April 2016 on the protection of natural persons concerning the processing of personal data and the free movement of such data, and repealing Directive 95/46/EC (the General Data Protection Regulation). The Institute does not provide personal data from the NHIS to any other subjects. Fait, T., Šťastná, A., Kocourková, J. et al. 
Has the cesarean epidemic in Czechia been reversed despite fertility postponement? BMC Pregnancy Childbirth 22, 469 (2022). https://doi.org/10.1186/s12884-022-04781-1
Keywords: Cesarean section (CS), Fertility postponement, Health status, Breech delivery
[Submitted on 28 Sep 2022 (v1), last revised 29 Sep 2022 (this version, v2)] Title:Online Subset Selection using $α$-Core with no Augmented Regret Authors:Sourav Sahoo, Samrat Mukhopadhyay, Abhishek Sinha Abstract: We consider the problem of sequential sparse subset selections in an online learning setup. Assume that the set $[N]$ consists of $N$ distinct elements. On the $t^{\text{th}}$ round, a monotone reward function $f_t: 2^{[N]} \to \mathbb{R}_+,$ which assigns a non-negative reward to each subset of $[N],$ is revealed to a learner. The learner selects (perhaps randomly) a subset $S_t \subseteq [N]$ of $k$ elements before the reward function $f_t$ for that round is revealed $(k \leq N)$. As a consequence of its choice, the learner receives a reward of $f_t(S_t)$ on the $t^{\text{th}}$ round. The learner's goal is to design an online subset selection policy to maximize its expected cumulative reward accrued over a given time horizon. In this connection, we propose an online learning policy called SCore (Subset Selection with Core) that solves the problem for a large class of reward functions. The proposed SCore policy is based on a new concept of $\alpha$-Core, which is a generalization of the notion of Core from the cooperative game theory literature. We establish a learning guarantee for the SCore policy in terms of a new performance metric called $\alpha$-augmented regret. In this new metric, the power of the offline benchmark is suitably augmented compared to the online policy. We give several illustrative examples to show that a broad class of reward functions, including submodular, can be efficiently learned with the SCore policy. We also outline how the SCore policy can be used under a semi-bandit feedback model and conclude the paper with a number of open problems. Subjects: Machine Learning (cs.LG); Artificial Intelligence (cs.AI); Computer Science and Game Theory (cs.GT) From: Abhishek Sinha [view email] [v1] Wed, 28 Sep 2022 16:48:58 UTC (99 KB) [v2] Thu, 29 Sep 2022 20:15:47 UTC (99 KB) cs.GT
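As a rough illustration of the interaction protocol described in the abstract (and emphatically not of the SCore policy itself), the sketch below runs a naive follow-the-leader baseline on hypothetical modular rewards, where $f_t(S)$ is simply the sum of per-element values drawn each round; all names and parameters are invented for the example:

```python
import numpy as np

rng = np.random.default_rng(1)
N, k, T = 20, 5, 1000

# Hypothetical modular rewards: f_t(S) = sum over e in S of values[t, e].
values = rng.random((T, N))

def follow_the_leader(cumulative, k):
    # Naive baseline: pick the k elements with the largest cumulative reward so far.
    return np.argsort(cumulative)[-k:]

cumulative = np.zeros(N)
policy_reward = 0.0
for t in range(T):
    S_t = follow_the_leader(cumulative, k)   # subset chosen before f_t is revealed
    policy_reward += values[t, S_t].sum()    # reward f_t(S_t) for this round
    cumulative += values[t]                  # full-information feedback after the round

# Offline benchmark: the best fixed k-subset in hindsight.
best_fixed = np.sort(values.sum(axis=0))[-k:].sum()
print(f"policy reward: {policy_reward:.1f}, best fixed k-subset: {best_fixed:.1f}")
```

The gap between the two printed numbers is the ordinary regret for this toy instance; the paper's $\alpha$-augmented regret modifies the comparison by suitably augmenting the power of the offline benchmark, and the SCore policy handles a much broader class of reward functions (including submodular ones) than the modular rewards used here.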
Steven Zelditch Steven "Steve" Morris Zelditch is an American mathematician, specializing in global analysis, complex geometry, and mathematical physics... Read More Logarithmic lower bound on the number of nodal domains by Steve Zelditch We prove that the number of nodal domains of a density one subsequence of eigenfunctions grows at least logarithmically with the eigenvalue on negatively curved `real Riemann surfaces'. The geometric model is the same as in prior joint work with Junehyuk Jung (arXiv:1310.2919, to appear in J. Diff. Geom), where the number of nodal domains was shown to tend to infinity, but without a specified rate. The proof of the logarithmic rate uses the new logarithmic scale quantum ergodicity results of... Topics: Spectral Theory, Mathematics Source: http://arxiv.org/abs/1510.05315 Concerning the $L^4$ norms of typical eigenfunctions on compact surfaces by Christopher D. Sogge; Steve Zelditch Let $(M,g)$ be a two-dimensional compact boundaryless Riemannian manifold with Laplacian, $\Delta_g$. If $e_\lambda$ are the associated eigenfunctions of $\sqrt{-\Delta_g}$ so that $-\Delta_g e_\lambda = \lambda^2 e_\lambda$, then it has been known for some time \cite{soggeest} that $\|e_\lambda\|_{L^4(M)}\lesssim \lambda^{1/8}$, assuming that $e_\lambda$ is normalized to have $L^2$-norm one. This result is sharp in the sense that it cannot be improved on the standard sphere because of highest... Source: http://arxiv.org/abs/1011.0215v1 Scaling of Harmonic Oscillator Eigenfunctions and Their Nodal Sets Around the Caustic by Boris Hanin; Steve Zelditch; Peng Zhou We study the scaling asymptotics of the eigenspace projection kernels $\Pi_{\hbar, E}(x,y)$ of the isotropic Harmonic Oscillator $- \hbar ^2 \Delta + |x|^2$ of eigenvalue $E = \hbar(N + \frac{d}{2})$ in the semi-classical limit $\hbar \to 0$. The principal result is an explicit formula for the scaling asymptotics of $\Pi_{\hbar, E}(x,y)$ for $x,y$ in a $\hbar^{2/3}$ neighborhood of the caustic $\mathcal C_E$ as $\hbar \to 0.$ The scaling asymptotics are applied to the distribution of nodal sets... Topics: Probability, Spectral Theory, Mathematical Physics, Mathematics Focal points and sup-norms of eigenfunctions on analytic Riemannian manifolds II: the two-dimensional case by Chris Sogge; Steve Zelditch In the recent work arXiv:1311.3999, the authors proved that real analytic manifolds $(M, g)$ with maximal eigenfunction growth must have a self-focal point p whose first return map has an invariant L1 measure on $S^*_p M$. In this addendum we add a purely dynamical argument on circle maps to improve the conclusion to: all geodesics from p are smoothly closed. Topics: Mathematics, Spectral Theory, Analysis of PDEs Source: http://arxiv.org/abs/1409.2063 L^p norms of eigenfunctions in the completely integrable case by John A. Toth; Steve Zelditch The eigenfunctions e^{i \lambda x} of the Laplacian on a flat torus have uniformly bounded L^p norms. In this article, we prove that for every other quantum integrable Laplacian, the L^p norms of the joint eigenfunctions must blow up at a rate \gg \lambda^{p-2/4p - \epsilon} for every \epsilon >0 as \lambda \to \infty. Source: http://arxiv.org/abs/math/0208045v1 Convergence of Bergman geodesics on CP^1 by Jian Song; Steve Zelditch The space of positively curved hermitian metrics on a positive holomorphic line bundle over a compact complex manifold is an infinite-dimensional symmetric space. 
It is shown by Phong and Sturm that geodesics in this space can be uniformly approximated by geodesics in the finite dimensional spaces of Bergman metrics. We prove a stronger C^2-approximation in the special case of toric (i.e. S^1-invariant) metrics on CP^1. Lower bounds on the Hausdorff measure of nodal sets Let $\ncal_{\phi_{\lambda}}$ be the nodal hypersurface of a $\Delta$-eigenfunction $\phi_{\lambda}$ of eigenvalue $\lambda^2$ on a smooth Riemannian manifold. We prove the following lower bound for its surface measure: $\hcal^{n-1}(\ncal_{\phi_{\lambda}}) \geq C \lambda^{\frac74-\frac{3n}4} $. The best prior lower bound appears to be $e^{- C \lambda}$. The Cauchy problem for the homogeneous Monge-Ampere equation, III. Lifespan by Yanir A. Rubinstein; Steve Zelditch We prove several results on the lifespan, regularity, and uniqueness of solutions of the Cauchy problem for the homogeneous complex and real Monge-Ampere equations (HCMA/HRMA) under various a priori regularity conditions. We use methods of characteristics in both the real and complex settings to bound the lifespan of solutions with prescribed regularity. In the complex domain, we characterize the C^3 lifespan of the HCMA in terms of analytic continuation of Hamiltonian mechanics and... Large deviations for zeros of $P(φ)_2$ random polynomials by Renjie Feng; Steve Zelditch We extend results of Zeitouni-Zelditch on large deviations principles for zeros of Gaussian random polynomials $s$ in one complex variable to certain non-Gaussian ensembles that we call $P(\phi)_2$ random polynomials. The probability measures are of the form $e^{- S(f)} df$ where the actions $S(f)$ are finite dimensional analgoues of those of $P(\phi)_2$ quantum field theory. The speed and rate function are the same as in the associated Gaussian case. As a corollary, we prove that the expected... Heat kernel measures on random surfaces by Semyon Klevtsov; Steve Zelditch The heat kernel on the symmetric space of positive definite Hermitian matrices is used to endow the spaces of Bergman metrics of degree k on a Riemann surface M with a family of probability measures depending on a choice of the background metric. Under a certain matrix-metric correspondence, each positive definite Hermitian matrix corresponds to a Kahler metric on M. The one and two point functions of the random metric are calculated in a variety of limits as k and t tend to infinity. In the... Topics: High Energy Physics - Theory, Complex Variables, Mathematics, Probability Number of nodal domains of eigenfunctions on non-positively curved surfaces with concave boundary by Junehyuk Jung; Steve Zelditch It is an open problem in general to prove that there exists a sequence of $\Delta_g$-eigenfunctions $\phi_{j_k}$ on a Riemannian manifold $(M, g)$ for which the number $N(\phi_{j_k}) $ of nodal domains tends to infinity with the eigenvalue. Our main result is that $N(\phi_{j_k}) \to \infty$ along a subsequence of eigenvalues of density $1$ if the $(M, g)$ is a non-positively curved surface with concave boundary, i.e. a generalized Sinai or Lorentz billiard. Unlike the recent closely related... Patterson-Sullivan distributions and quantum ergodicity by Nalini Anantharaman; Steve Zelditch We relate two types of phase space distributions associated to eigenfunctions $\phi_{ir_j}$ of the Laplacian on a compact hyperbolic surface $X_{\Gamma}$: (1) Wigner distributions $\int_{S^*\X} a dW_{ir_j}= < Op(a)\phi_{ir_j}, \phi_{ir_j}>_{L^2(\X)}$, which arise in quantum chaos. 
They are invariant under the wave group. (2) Patterson-Sullivan distributions $PS_{ir_j}$, which are the residues of the dynamical zeta-functions $\lcal(s; a): = \sum_\gamma... Bernstein polynomials, Bergman kernels and toric Kähler varieties It does not seem to have been observed previously that the classical Bernstein polynomials $B_N(f)(x)$ are closely related to the Bergman-Szego kernels $\Pi_N$ for the Fubini-Study metric on $\CP^1$: $B_N(f)(x)$ is the Berezin symbol of the Toeplitz operator $\Pi_N f(N^{-1} D_{\theta})$. The relation suggests a generalization of Bernstein polynomials to any toric Kahler variety and Delzant polytope $P$. When $f$ is smooth, $B_N(f)(x)$ admits a complete asymptotic expansion. Integrating it over... About the blowup of quasimodes on Riemannian manifolds by Christopher D. Sogge; John A. Toth; Steve Zelditch On any compact Riemannian manifold $(M, g)$ of dimension $n$, the $L^2$-normalized eigenfunctions ${\phi_{\lambda}}$ satisfy $||\phi_{\lambda}||_{\infty} \leq C \lambda^{\frac{n-1}{2}}$ where $-\Delta \phi_{\lambda} = \lambda^2 \phi_{\lambda}.$ The bound is sharp in the class of all $(M, g)$ since it is obtained by zonal spherical harmonics on the standard $n$-sphere $S^n$. But of course, it is not sharp for many Riemannian manifolds, e.g. flat tori $\R^n/\Gamma$. We say that $S^n$, but not... Real and complex zeros of Riemannian random waves We consider Riemannian random waves, i.e. Gaussian random linear combination of eigenfunctions of the Laplacian on a compact Riemannian manifold with frequencies from a short interval (`asymptotically fixed frequency'). We first show that the expected limit distribution of the real zero set of a is uniform with respect to the volume form of a compact Riemannian manifold $(M, g)$. We then show that the complex zero set of the analytic continuations of such Riemannian random waves to a Grauert... Quantum Ergodicity for Eisenstein functions by Yannick Bonthonneau; Steve Zelditch A new proof is given of Quantum Ergodicity for Eisenstein Series for cusped hyperbolic surfaces. This result is also extended to higher dimensional examples, with variable curvature. Topics: Analysis of PDEs, Spectral Theory, Mathematics Random Geometry, Quantum Gravity and the Kähler Potential by Frank Ferrari; Semyon Klevtsov; Steve Zelditch We propose a new method to define theories of random geometries, using an explicit and simple map between metrics and large hermitian matrices. We outline some of the many possible applications of the formalism. For example, a background-independent measure on the space of metrics can be easily constructed from first principles. Our framework suggests the relevance of a new gravitational effective action and we show that it occurs when coupling the massive scalar field to two-dimensional... Local and global analysis of eigenfunctions This is a survey on eigenfunctions of the Laplacian on Riemannian manifolds (mainly compact and without boundary). We discuss both local results obtained by analyzing eigenfunctions on small balls, and global results obtained by wave equation methods. Among the main topics are nodal sets, quantum limits, and $L^p$ norms of global eigenfunctions. Pluri-potential theory on Grauert tubes of real analytic Riemannian manifolds, I We develop analogues for Grauert tubes of real analytic Riemannian manifolds (M,g) of some basic notions of pluri-potential theory, such as the Siciak extremal function. 
The basic idea is to use analytic continuations of eigenfunctions in place of polynomials or sections of powers of positive line bundles for pluripotential theory. The analytically continued Poisson-wave kernel plays the role of Bergman kernel. The main results are Weyl laws in the complex domain, distribution of complex zeros... Szego kernels and a theorem of Tian We give a simple proof of Tian's theorem that the Kodaira embeddings associated to a positive line bundle over a compact complex manifold are asymptotically isometric. The proof is based on the diagonal asymptotics of the Szego kernel (i.e. the orthogonal projection onto holomorphic sections). In deriving these asymptotics we use the Boutet de Monvel-Sjostrand parametrix for the Szego kernel. Source: http://arxiv.org/abs/math-ph/0002009v1 Number variance of random zeros on complex manifolds by Bernard Shiffman; Steve Zelditch We show that the variance of the number of simultaneous zeros of $m$ i.i.d. Gaussian random polynomials of degree $N$ in an open set $U \subset C^m$ with smooth boundary is asymptotic to $N^{m-1/2} \nu_{mm} Vol(\partial U)$, where $\nu_{mm}$ is a universal constant depending only on the dimension $m$. We also give formulas for the variance of the volume of the set of simultaneous zeros in $U$ of $k Simple matrix models for random Bergman metrics Recently, the authors have proposed a new approach to the theory of random metrics, making an explicit link between probability measures on the space of metrics on a Kahler manifold and random matrix models. We consider simple examples of such models and compute the one and two-point functions of the metric. These geometric correlation functions correspond to new interesting types of matrix model correlators. We study a large class of examples and provide in particular a detailed study of the... Critical points and supersymmetric vacua, II: Asymptotics and extremal metrics by Michael R. Douglas; Bernard Shiffman; Steve Zelditch Motivated by the vacuum selection problem of string/M theory, we study a new geometric invariant of a positive Hermitian line bundle $(L, h)\to M$ over a compact K\"ahler manifold: the expected distribution of critical points of a Gaussian random holomorphic section $s \in H^0(M, L)$ with respect to the Chern connection $\nabla_h$. It is a measure on $M$ whose total mass is the average number $\mathcal{N}^{crit}_h$ of critical points of a random holomorphic section. We are interested in... A note on $L^p$-norms of quasi-modes In this note we show how improved $L^p$-estimates for certain types of quasi-modes are naturally equaivalent to improved operator norms of spectral projection operators associated to shrinking spectral intervals of the appropriate scale. Using this, one can see that recent estimates that were stated for eigenfunctions also hold for the appropriate types of quasi-modes. Topics: Mathematics, Analysis of PDEs, Classical Analysis and ODEs The Cauchy problem for the homogeneous Monge-Ampere equation, II. Legendre transform We continue our study of the Cauchy problem for the homogeneous (real and complex) Monge-Ampere equation (HRMA/HCMA). In the prequel a quantum mechanical approach for solving the HCMA was developed, and was shown to coincide with the well-known Legendre transform approach in the case of the HRMA. In this article---that uses tools of convex analysis and can be read independently---we prove that the candidate solution produced by these methods ceases to solve the HRMA, even in a weak sense, as... 
Random complex fewnomials, I We introduce several notions of `random fewnomials', i.e. random polynomials with a fixed number f of monomials of degree N. The f exponents are chosen at random and then the coefficients are chosen to be Gaussian random, mainly from the SU(m + 1) ensemble. The results give limiting formulas as N goes to infinity for the expected distribution of complex zeros of a system of k random fewnomials in m variables. When k = m, for SU(m + 1) polynomials, the limit is the Monge-Ampere measure of a... Measure of nodal sets of analytic Steklov eigenfunctions Let $(\Omega, g)$ be a real analytic Riemannian manifold with real analytic boundary $\partial \Omega$. Let $\psi_{\lambda}$ be an eigenfunction of the Dirichlet-to-Neumann operator $\Lambda$ of $(\Omega, g, \partial \Omega)$ of eigenvalue $\lambda$. Let $\mathcal N_{\lambda_j}$ be its nodal set. Then $\mathcal H^{n-2} (\mathcal N_{\lambda}) \leq C_{g, \Omega} \lambda.$ This proves a conjecture of F. H. Lin and K. Bellova. Topics: Mathematics, Spectral Theory Scaling asymptotics of heat kernels of line bundles by Xiaonan Ma; George Marinescu; Steve Zelditch We consider a general Hermitian holomorphic line bundle $L$ on a compact complex manifold $M$ and let ${\Box}^q_p$ be the Kodaira Laplacian on $(0,q)$ forms with values in $L^p$. The main result is a complete asymptotic expansion for the semi-classically scaled heat kernel $\exp(-u{\Box}^q_p/p)(x,x)$ along the diagonal. It is a generalization of the Bergman/Szeg\"o kernel asymptotics in the case of a positive line bundle, but no positivity is assumed. We give two proofs, one based on the... Topics: Complex Variables, Mathematics, Differential Geometry Random polynomials with prescribed Newton polytope, I The Newton polytope $P_f$ of a polynomial $f$ is well known to have a strong impact on its zeros, as in the Kouchnirenko-Bernstein theorem on the number of simultaneous zeros of $m$ polynomials with given Newton polytopes. In this article, we show that $P_f$ also has a strong impact on the distribution of zeros of one or several polynomials. We equip the space of (holomorphic) polynomials of degree $\leq N$ in $m$ complex variables with its usual $SU(m+1)$-invariant Gaussian measure and then... Random Kähler Metrics The purpose of this article is to propose a new method to define and calculate path integrals over metrics on a K\"ahler manifold. The main idea is to use finite dimensional spaces of Bergman metrics, as an approximation to the full space of K\"ahler metrics. We use the theory of large deviations to decide when a sequence of probability measures on the spaces of Bergman metrics tends to a limit measure on the space of all K\"ahler metrics. Several examples are considered. Ergodicity and intersections of nodal sets and geodesics on real analytic surfaces We consider the the intersections of the complex nodal set of the analytic continuation of an eigenfunction of the Laplacian on a real analytic surface with the complexification of a geodesic. We prove that if the geodesic flow is ergodic and if the geodesic is periodic and satisfies a generic asymmetry condition, then the intersection points condense along the real geodesic and become uniformly distributed with respect to its arc-length. We prove an analogous result for non-periodic geodesics... 
Overcrowding and hole probabilities for random zeros on complex manifolds by Bernard Shiffman; Steve Zelditch; Scott Zrebiec We give asymptotic large deviations estimates for the volume inside a domain U of the zero set of a random polynomial of degree N, or more generally, of a holomorphic section of the N-th power of a positive line bundle on a compact Kaehler manifold. In particular, we show that for all $\delta>0$, the probability that this volume differs by more than $\delta N$ from its average value is less than $\exp(-C_{\delta,U}N^{m+1})$, for some constant $C_{\delta,U}>0$. As a consequence, the... Quantum ergodic restriction for Cauchy data: Interior QUE and restricted QUE by Hans Christianson; John Toth; Steve Zelditch We prove a quantum ergodic restriction theorem for the Cauchy data of a sequence of quantum ergodic eigenfunctions on a hypersurface $H$ of a Riemannian manifold $(M, g)$. The technique of proof is to use a Rellich type identity to relate quantum ergodicity of Cauchy data on $H$ to quantum ergodicity of eigenfunctions on the global manifold $M$. This has the interesting consequence that if the eigenfunctions are quantum unique ergodic on the global manifold $M$, then the Cauchy data is... Determinants of Laplacians in Exterior Domains by Andrew Hassell; Steve Zelditch We consider classes of simply connected planar domains which are isophasal, ie, have the same scattering phase $s(\l)$ for all $\l > 0$. This is a scattering-theoretic analogue of isospectral domains. Using the heat invariants and the determinant of the Laplacian, Osgood, Phillips and Sarnak showed that each isospectral class is sequentially compact in a natural $C$-infinity topology. In this paper, we show sequential compactness of each isophasal class of domains. To do this we define the... Critical values of random analytic functions on complex manifolds We study the asymptotic distribution of critical values of random holomorphic `polynomials' s_n on a Kaehler manifold M as the degree n tends to infinity. By `polynomial' of degree n we mean a holomorphic section of the nth power of a positive Hermitian holomorphic line bundle $(L, h). In the special case M = CP^m and L = O(1), and h is the Fubini-Study metric, the random polynomials are the SU(m + 1) polynomials. By a critical value we mean the norm ||s_n||_h of s_n at a non-zero critical... Equilibrium distribution of zeros of random polynomials We consider ensembles of random polynomials of the form $p(z)=\sum_{j = 1}^N a_j P_j$ where $\{a_j\}$ are independent complex normal random variables and where $\{P_j\}$ are the orthonormal polynomials on the boundary of a bounded simply connected analytic plane domain $\Omega \subset C$ relative to an analytic weight $\rho(z) |dz|$. In the simplest case where $\Omega$ is the unit disk and $\rho=1$, so that $P_j(z) = z^j$, it is known that the average distribution of zeros is the uniform... Quantum unique ergodicity This short note proves that a Laplacian cannot be quantum uniquely ergodic if it possesses a quasimode of order zero which (i) has a singular limit, and (ii) is a linear combination of a uniformly bounded number of eigenfunctions (modulo an o(1) error). Bouncing ball quasimodes of the stadium are believed to have this property (E.J. Heller et al) and so are analogous quasimodes recently constructed by H. Donnelly on certain non-positively curved surfaces. The main ingredient is the proof that... 
Critical points and supersymmetric vacua, III: String/M models A fundamental problem in contemporary string/M theory is to count the number of inequivalent vacua satisfying constraints in a string theory model. This article contains the first rigorous results on the number and distribution of supersymmetric vacua of type IIb string theories compactified on a Calabi-Yau 3-fold $X$ with flux. In particular, complete proofs of the counting formulas in Ashok-Douglas and Denef-Douglas are given, together with van der Corput style remainder estimates. We also... Wave invariants at elliptic closed geodesics This paper concerns spectral invariants of the Laplacian on a compact Riemannian manifold (M,g) known as wave invariants. If U(t) denotes the wave group of (M,g), then the trace Tr U(t) is singular when t = 0 or when ti is the length of a closed geodesic. It has a special type of singularity expansion at each length and the coefficients are known as the wave invariants. Our main purpose is to calculate the wave invariants explicitly in terms of curvature, Jacobi fields etc. when the closed... The inverse resonance problem for $\Z_2$-symmetric analytic obstacles in the plane We prove that a two-component mirror-symmetric analytic obstacle in the plane is determined by its resonance poles among such obstacles. The proof is essentially the same as in the interior case (part II of the series). A so-called interior/exterior duality formula is used to simplify the proof. A fair amount of exposition is included for the sake of completeness. Distribution laws for integrable eigenfunctions by Bernard Shiffman; Tatsuya Tate; Steve Zelditch We determine the asymptotics of the joint eigenfunctions of the torus action on a toric Kahler variety. Such varieties are models of completely integrable systems in complex geometry. We first determine the pointwise asymptotics of the eigenfunctions, which show that they behave like Gaussians centered at the corresponding classical torus. We then show that there is a universal Gaussian scaling limit of the distribution function near its center. We also determine the limit distribution for the... Inverse Spectral Problem for Surfaces of Revolution This paper concerns the inverse spectral problem for analytic simple surfaces of revolution. By `simple' is meant that there is precisely one critical distance from the axis of revolution. Such surfaces have completely integrable geodesic flows with global action-angle variables and possess global quantum Birkhoff normal forms (Colin de Verdiere). We prove that isospectral surfaces within this class are isometric. The first main step is to show that the normal form at meridian geodesics is a... Macdonald's identities and the large N limit of $YM_2$ on the cylinder We give a rigorous calculation of the large N limit of the partition function of SU(N) gauge theory on a 2D cylinder in the case where one boundary holomony is a so-called special element of type $\rho$. By MacDonald's identity, the partition function factors in this case as a product over positive roots and it is straightforward to calculate the large N asymptotics of the free energy. We obtain the unexpected result that the free energy in these cases is asymptotic to N times a functional of... 
Source: http://arxiv.org/abs/hep-th/0305218v1 Spacings between phase shifts in a simple scattering problem by Steve Zelditch; Maciej Zworski We prove a scattering theoretical version of the Berry-Tabor conjecture: for an almost every surface in a class of cylindrical surfaces of revolution, the large energy limit of the pair correlation measure of the quantum phase shifts is Poisson, that is, it is given by the uniform measure. Poincare-Lelong approach to universality and scaling of correlations between zeros by Pavel Bleher; Bernard Shiffman; Steve Zelditch This note is concerned with the scaling limit as N approaches infinity of n-point correlations between zeros of random holomorphic polynomials of degree N in m variables. More generally we study correlations between zeros of holomorphic sections of powers L^N of any positive holomorphic line bundle L over a compact Kahler manifold. Distances are rescaled so that the average density of zeros is independent of N. Our main result is that the scaling limits of the correlation functions and, more... Intertwining the geodesic flow and the Schrodinger group on hyperbolic surfaces We construct an explicit intertwining operator $\lcal$ between the Schr\"odinger group $e^{it \frac\Lap2} $ and the geodesic flow $g^t$ on certain Hilbert spaces of symbols on the cotangent bundle $T^* \X$ of a compact hyperbolic surface $\X = \Gamma \backslash \D$. Thus, the quantization Op(\lcal^{-1} a) satisfies an exact Egorov theorem. The construction of $\lcal$ is based on a complete set of Patterson-Sullivan distributions. Random zeros on complex manifolds: conditional expectations by Bernard Shiffman; Steve Zelditch; Qi Zhong We study the conditional distribution of zeros of a Gaussian system of random polynomials (and more generally, holomorphic sections), given that the polynomials or sections vanish at a point p (or a fixed finite set of points). The conditional distribution is analogous to the pair correlation function of zeros, but we show that it has quite a different small distance behavior. In particular, the conditional distribution does not exhibit repulsion of zeros in dimension one. To prove this, we... Random Riesz energies on compact Kähler manifolds This article determines the asymptotics of the expected Riesz s-energy of the zero set of a Gaussian random systems of polynomials of degree N as the degree N tends to infinity in all dimensions and codimensions. The asymptotics are proved more generally for sections of any positive line bundle over any compact Kaehler manifold. In comparison with the results on energies of zero sets in one complex dimension due to Qi Zhong (arXiv:0705.2000) (see also [arXiv:0705.2000]), the zero sets have... Billiards and boundary traces of eigenfunctions This is a report for the 2003 Forges Les Eaux PDE conference on recent results with A. Hassell on quantum ergodicity of boundary traces of eigenfunctions on domains with ergodic billiards, and of work in progress with Hassell and Sogge on norms of boundary traces. Related work by Burq, Grieser and Smith-Sogge is also discussed. Survey of the inverse spectral problem This is a survey of the inverse spectral problem on (mainly compact) Riemannian manifolds, with or without boundary. The emphasis is on wave invariants: on how wave invariants have been calculated and how they have been applied to concrete inverse spectral problems.
CommonCrawl
52 Things: Number 4: The Complexity Class P This is the fourth blog post talking about '52 Things Every PhD Student Should Know' to do Cryptography, and the first on the topic of Theoretical Computer Science. In this post, I've been asked to define the complexity class P. Having recently joined the Cryptography group at Bristol as a Mathematician, I knew very little theoretical computer science when I first started my PhD and I'm sure there will be plenty of others in my situation, so this blog will start right from the beginning and you can skip over parts you know already. First, we'll give an overview of what complexity means and why it matters, then we'll define Turing machines, and finally arrive at the complexity class P, concluding with an example. Most of the content of this post is a reworking of parts of Introduction to the Theory of Computation by Michael Sipser [1], which I have found hugely helpful. Section 1: Complexity and Big O Notation We want to know how difficult a given task is for a computer to do in order to design efficient programs. The trouble is that the processing power of a computer varies massively depending on the hardware (e.g. see last week's '52 Things' blog). So we want a measure of the difficulty of a task that doesn't depend on the specific details of the machine performing the task. One way to do this is to bound the number of operations that a certain model of a computer would take to do it. This is called (time) complexity theory. Typically, though, the number of operations required will depend on the input to the task and may vary even with inputs of the same length. As a pertinent example, say we design a computer program which tells you whether or not an integer you input is prime. If we give as input the number 256, the program will probably output 'not prime' sooner than if we had given it the number 323 (even though they both have length 9 when written as binary integers, for example), since the first integer has a very small prime factor (2) and the second has larger factors (17 and 19). Therefore we usually opt for a worst-case analysis where we record the longest running time of all inputs of a particular length. So we obtain an algebraic expression $t(n)$ that reflects the longest running time of all inputs of length $n$. Furthermore, when the input length $n$ becomes very large, we can neglect all but the most dominant term in the expression and also ignore any constant factors. This is called asymptotic analysis; we assume $n$ is enormous and ask roughly how many steps the model of computation will take to 'finish' when given the worst possible input of length $n$, writing our answer in the form $\mathcal{O}\left(t\left(n\right)\right)$. For example, if we find that our process takes $6n^3 - n^2 + 1$ steps, we write that it is $\mathcal{O}\left(n^{3}\right)$, since all other terms can be ignored for very large $n$. Section 2: Turing Machines Now we give the model that is most often used in the kind of calculations performed in Section 1. First, recall that an alphabet is a non-empty finite set and a string is a finite sequence of elements (symbols) from an alphabet. A language is simply a set of strings. A Turing machine models what real computers can do. Its 'memory' is an infinitely long tape. At any time, each square of the tape is either blank or contains a symbol from some specified alphabet. The machine has a tape head that can move left or right along the tape, one square at a time, and read from and write to that square.
At first, the tape is all blank except for the leftmost $n$ squares which constitute the input (none of which can be blank so that it is clear where the input ends). The tape head starts at the leftmost square, reads the first input symbol and then decides what to do next according to a transition function. The transition function depends on what it reads at the square it is currently on and the state that the machine is currently in (like a record of what it has done so far), and returns: a new state; another symbol to write to the square it is on (though this symbol might be the same as what was already written there); and a direction to move in, left or right. The machine will continue to move one square at a time, read a symbol, evaluate the transition function, write a symbol and move again, until its state becomes some specified accept state or reject state. If the machine ends up in the accept state, we say it accepts its input. Similarly it may reject its input. In either case we say the machine halts on its input. But note that it may enter a loop without accepting or rejecting, i.e. it may never halt. If a Turing machine accepts every string in some language and rejects all other strings, then we say the machine decides that language. We can think of this as the machine testing whether or not the input string is a member of the language. Given a language, if there is a Turing machine that decides it, we say the language is decidable. The power of this model comes from the fact that a Turing machine can do everything that a real computer can do (this is called the Church-Turing thesis [2]). We define the time complexity class $\mathrm{TIME}\left(t\left(n\right)\right)$ to be the collection of all languages that are decidable by an $\mathcal{O}\left(t\left(n\right)\right)$ time Turing machine. We then turn computational problems into questions about language membership (is an input string a member of a certain language? e.g. does this string representing an integer belong to the language of strings representing prime integers?) and can partition computational problems into time complexity classes. Section 3: The Complexity Class P Finally, we arrive at the purpose of this blog! If $t(n) = n^k$ for some $k > 0$ then $\mathcal{O}\left(t\left(n\right)\right)$ is called polynomial time. The complexity class P is the class of all languages that are decidable in polynomial time by a Turing machine. Since $k$ could be very large, such Turing machines are not necessarily all practical (let alone 'fast'!), but this class is a rough model for what can be realistically achieved by a computer. Note that the class P is fundamentally different to those languages where $t(n)$ has $n$ in an exponent, such as $2^n$, which grow much, much faster as $n$ increases – so fast that even if you have a decider for some language, you may find that the universe ends before it halts on your input! We conclude with an example of a polynomial time problem. Suppose you have a directed graph (a set of nodes and edges where there is at most one edge between any pair of nodes and each edge has an arrow indicating a direction) together with two particular nodes. Then if we encode the graph and the two nodes as a single string, we can form a language consisting of those strings representing a graph and two nodes such that it is possible to follow the edges from the first node and eventually arrive at the second.
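Before working through the decider for this language, here is a minimal sketch of the Turing machine model described in Section 2. The dictionary-based simulator and the example machine (which decides binary strings containing an even number of 1s) are my own illustration, not taken from Sipser; the transition function is exactly the (state, symbol) -> (new state, written symbol, move) map described above.

```python
def run_tm(transitions, tape_input, start, accept, reject, blank="_"):
    """Minimal single-tape Turing machine simulator.
    transitions maps (state, symbol) -> (new_state, write_symbol, move),
    where move is +1 (right) or -1 (left)."""
    tape = dict(enumerate(tape_input))   # sparse tape: unwritten squares are blank
    state, head = start, 0
    while state not in (accept, reject):
        symbol = tape.get(head, blank)
        state, write, move = transitions[(state, symbol)]
        tape[head] = write
        head += move
    return state == accept

# Example: decide the language of binary strings with an even number of 1s.
T = {
    ("even", "0"): ("even", "0", +1), ("even", "1"): ("odd", "1", +1),
    ("odd", "0"): ("odd", "0", +1),   ("odd", "1"): ("even", "1", +1),
    ("even", "_"): ("accept", "_", +1), ("odd", "_"): ("reject", "_", +1),
}
print(run_tm(T, "1010", "even", "accept", "reject"))  # True
print(run_tm(T, "1011", "even", "accept", "reject"))  # False
```

This example machine runs in linear time, so the language it decides sits comfortably in P. With the model pinned down, we can return to the reachability language just defined.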
So a decider for this language will effectively answer the question of whether there is a path from the first node A to the second B, called the path problem, by accepting or rejecting the graph and nodes you input. We give a decider for this language and show that it decides in polynomial time: (1) put a mark on A; (2) scan all the edges of the graph, and if you find an edge from a marked node to an unmarked node, mark the unmarked node; (3) repeat step 2 until you mark no new nodes; (4) if B is marked, accept; otherwise, reject. This process successively marks the nodes that are reachable from A by a path of length 1, then a path of length 2, and so on. So it is clear that a Turing machine implementing the above is a decider for our language. Now we consider the time complexity of this algorithm. If we couldn't do steps 1 and 4 in polynomial time, our machine would be terrible! So we focus on steps 2 and 3. Step 2 involves searching the input and placing a mark on one square, which is clearly polynomial time in the size of the input. Step 3 repeats step 2 no more times than the number of nodes, which is necessarily less than the size of the input (since the input must encode all the nodes of the graph) and is hence polynomial (in particular, linear) in the size of the input. Therefore the whole algorithm is polynomial time and so we say the path problem is in P. [1] http://www.amazon.co.uk/Introduction-Theory-Computation-Michael-Sipser/dp/0619217642 [2] http://en.wikipedia.org/wiki/Church%E2%80%93Turing_thesis Posted by Anonymous at 11:58 AM No comments: SPA and the AES key schedule Today's study group was given by Valentina on the 2014 CHES paper titled "Simple Power Analysis on the AES Key Expansion Revisited" by Christophe Clavier, Damien Marion and Antoine Wurcker from the Université de Limoges in France. To briefly recap, "simple power analysis" (SPA) is the rather misleading moniker for the class of methods that analyse side-channel information contained within a small number of side-channel traces captured during the encryption of a single pair of plaintext and key material. The misleading nature of the title is that these methods are anything but simple to perform---the amount of exploitable information leakage contained within a single trace corresponding to the encryption and decryption of a fixed input pair is far, far smaller than that which can be achieved by capturing a large number of traces for a variety of input pairs to be exploited using differential power analysis (DPA). In the side-channel community there's a growing shift in perception towards viewing side-channel analysis as an auxiliary phase in an enhanced global key search, rather than a stand-alone 'win-or-lose' attack. DPA attacks use the additional information available in the larger set of traces and aim to reduce the final enumeration effort to as small an amount as possible. SPA attacks instead face the challenge of having to exploit all the available information in the trace and perform a potentially demanding enumeration phase. The work of Clavier et al. explores attacks on the AES key scheduling algorithm. It is sufficient for the purposes of this blog to know that the AES key schedule takes (assuming AES-128) one 16-byte secret key and expands it to 11 16-byte round keys, the first of which is the same as the input secret key.
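The attack described next assumes a Hamming-weight leakage model for the masked key-schedule bytes: the attacker is taken to observe the Hamming weight of a key byte XORed with its mask. A toy, noiseless simulation of a single such observation (the key byte and the use of Python's random module are made up for illustration) looks like this:

```python
import random

def hamming_weight(x):
    return bin(x).count("1")

# One masked intermediate: the attacker is assumed to observe HW(k ^ m).
key_byte = 0x2B                      # made-up key byte
mask = random.randrange(256)         # fresh random mask, unknown to the attacker
leak = hamming_weight(key_byte ^ mask)
print(leak)                          # a value in 0..8; on its own it says nothing
                                     # about key_byte, but weights of related
                                     # key-schedule bytes sharing masks can be combined
```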
The authors explore two masked implementations of the key schedule algorithm---masking aims to combine random values (masks) with sensitive intermediate values, to break any relationships between the intermediate values and observed side-channel information. The first is termed "11-byte entropy boolean masking" and generates 11 individual random bytes, each of which masks all of the bytes within each round key (the 16 bytes of a single round key are masked using the same random mask). The second is termed "16-byte entropy boolean masking", and is essentially orthogonal to the 11-byte scheme: each byte within a round key is masked with a different random byte, but each round key is masked by the same 16 random bytes. In an ideal case, all of the 11 x 16 = 176 sensitive bytes would be masked by a new random byte each time---the authors claim that their 11- and 16-byte schemes are relevant in scenarios where a device does not have enough memory or available entropy to store or generate this many random values. SPA attacks on the key-schedule attempt to reduce the size of the set of possible secret keys as much as possible. In this work, the authors assume that they know the Hamming weight (a common form of side-channel leakage is that the Hamming weight of an intermediate value is proportional to side-channel information) of the key byte XORed with the mask. This is a strongly simplifying assumption---in practice, an attacker would have to profile the key schedule on a second device to begin to work towards an approximation for the side-channel leakage. The primary contribution of the paper is an adapted "guess and backtrack"-style algorithm for reducing the set of possible keys by exploiting relationships between key bytes within the key schedule, with full knowledge of the leakage corresponding to the targeted intermediate variable of the key byte XORed with its mask. The additional enumeration effort imposed by the presence of the masks is shown to be manageable: with up to 30 trace measurements their attack succeeds in drastically reducing the remaining search space in a relatively short amount of time. The attacks are further analysed in the presence of a shuffling countermeasure (the order of execution of the combination of the key bytes with the masks is randomised within each 4-byte column), and the authors discover that they can adapt the algorithm to explore all permutations of the ordering, taking on average several hours to succeed. With an assumption of being able to introduce faults in specific parts of the algorithm, this time can be reduced to the order of minutes. The techniques presented for exploiting the information leakage are intricate and clever. The work motivates an open question for this area of research: how to relax the assumption that the attacker has full knowledge of the information leakage. Posted by Luke Mather at 3:51 PM No comments: Study group: Lattice-based Digital Signatures This week's study group was given by Emmanuela on the subject of digital signature schemes based on lattices. Because not everyone likes lattices as much as I do, Emmanuela decided to first convey some of the history of lattice signatures to the audience. The seminal work that everyone refers to when they talk about any sort of lattice-based cryptography is of course Ajtai's paper from '96 on the connection between the average and worst cases of certain lattice problems.
Around the same time, the NTRU and GGH cryptosystems were proposed which provided both encryption and digital signature schemes. However, both schemes had their initial issues, and it turned out that especially the security of the digital signature schemes proved to be problematic. In hindsight it is pretty clear that the problem lies with the 'noise' distribution and that signatures leak information on the secret key. The next steps towards the security of lattice-based signature schemes were taken in '08, when two independent works described schemes that are provably secure based on the hardness of standard lattice problems. The first is by Gentry, Peikert and Vaikuntanathan, which follows the hash-and-sign paradigm and includes a useful method to sample from the discrete Gaussian distribution, which is used in most modern lattice-based crypto schemes. As the word 'hash' implies, the security proof for this scheme is in the random oracle model. The second scheme is by Lyubashevsky and Micciancio in the standard model, but it only provides one-time secure signatures. These can be converted into fully secure signatures using a tree construction, which requires a logarithmic number (in the security parameter) of applications of the one-time scheme. These two works inspired two different lines of research. One line focuses on getting the best efficiency in the random oracle model, whereas the other focuses on getting security in the standard model while maintaining a good asymptotic efficiency as well. The focus paper of the study group was in this second line of research: Improved Short Lattice Signatures in the Standard Model by Ducas and Micciancio from this year's Crypto. It combines the 'vanishing trapdoor' technique due to Boyen and the 'confined guessing' method due to Böhl et al. For their lattice trapdoors, they use the work of Micciancio and Peikert from Eurocrypt '12. This non-trivial combination leads to a scheme where the signatures are short (consisting of only one vector) at the cost of having keys consisting of a logarithmic number of vectors. They also propose a stateful scheme which reduces the key sizes to a log log number of vectors and also tightens the reduction, removing a factor introduced by the confined guessing stage as well as tightening the approximation factor of the underlying lattice problem. Interestingly, the schemes by Ducas and Micciancio require the additional algebraic structure of ideal lattices, whereas previous works only use this structure for the efficiency improvement. In conclusion, the result is a new scheme that compares favourably to previous schemes by either allowing for smaller signatures or smaller keys. But things move fast in the lattice world, as there is already a new paper on the ePrint archive that reduces the keys to a constant number of vectors, at the cost of a bigger approximation factor in the underlying lattice problem. It is still possible to choose parameters such that this approximation is polynomial, but it is also possible to pick them less conservatively, resulting in a subexponential approximation factor. It will be interesting to see whether such choices survive future improvements to cryptanalysis. 52 Things: Number 3: Computational and storage power of different form factors This is the third in a series of blog posts to address the list of '52 Things Every PhD Student Should Know' to do Cryptography. 
The set of questions has been compiled to give PhD candidates a sense of what they should know by the end of their first year. We will be presenting answers to each of the questions over the next year, one per week, and I am the student assigned to the third question: Q3: Estimate the relative computational and storage capabilities of: a smart-card; a micro-controller (i.e. a sensor node); an embedded or mobile computer (e.g., a mobile phone or PDA); and a laptop- or desktop-class computer. To measure the computational capability of a device we could assess the clock speed of its processors. This may be misleading if the processor enables some form of parallelism---two cores running at 2 GHz obviously possess more computational power than a single core running at 2 GHz, and so finding a direct quantitative measure is not a realistic expectation. For specific devices like general purpose graphics cards, often the total FLOPS (floating point operations per second) the device is capable of sustaining is reported (for either single or double precision arithmetic) but even this measure is not a particularly reliable choice when applied to any given problem---indeed, some services facilitate a comparison by benchmarking the performance of different devices on a variety of problem instances---see, for example, CompuBench. Fortunately the range of capabilities of the devices included in the question makes a sufficient answer less dependent on quantitative metrics. A measure for the storage capabilities of each device is much simpler to find: we can simply compare the approximate number of bytes of information the device is capable of holding on permanent storage. A smart-card is the least computationally powerful device: obviously clock speeds vary for different implementations, but one might expect to see around a 20 MHz core speed. In terms of storage, a typical smart-card might have around 2 kilobytes (KiB) available. A microcontroller is "a small computer on a single integrated circuit containing a processor core, memory, and programmable input/output peripherals" [1]. The range of storage and compute capability available will vary significantly according to the exact definition of microcontroller, but taking the suggested sensor node as an example, a typical microcontroller is likely to have similar computational capabilities to a smart-card and slightly more storage available, perhaps in the order of a few KiB to a few megabytes (MiB). A mobile computer such as a mobile phone has significantly more storage and computing power, and the amount of power available is rapidly increasing over time. Taking the 2008 iPhone and the 2013 Nexus 5 phone as examples, the iPhone used a 412 MHz 32-bit RISC ARM core, and the Nexus 5 uses a 2.3 GHz quad-core processor. In terms of storage, if we ignore the ability of some phones to use removable storage, then a high-end phone in 2013 might expect to provide in the order of 16 to 32 gigabytes (GiB) of storage. Finally, most laptop or desktop class computers are likely to have more processing power than a mobile phone: the high-end Intel "Haswell" i7 4960K processor contains 4 cores each clocked at 4 GHz, and the AMD "Piledriver" FX-9590 CPU contains 8 cores at 4.7 GHz---note that a direct comparison between these two processors requires more than just assessing core counts and clock speeds!
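To see why core counts and clock speeds alone are not enough, consider a naive peak-throughput estimate. The sketch below uses entirely hypothetical chips and a made-up flops_per_cycle figure; the point is that the per-cycle width of the execution units (SIMD width, fused multiply-add support, and so on) can easily outweigh a difference in cores or clock.

```python
def peak_gflops(cores, clock_ghz, flops_per_cycle):
    # Naive theoretical peak; real throughput also depends on memory
    # bandwidth, turbo behaviour, and how well code uses the vector units.
    return cores * clock_ghz * flops_per_cycle

# Hypothetical chip A: fewer cores, lower clock, but wide SIMD units.
print(peak_gflops(cores=4, clock_ghz=4.0, flops_per_cycle=16))  # 256 GFLOPS
# Hypothetical chip B: more cores and a higher clock, but narrower units.
print(peak_gflops(cores=8, clock_ghz=4.7, flops_per_cycle=4))   # ~150 GFLOPS
```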
There are other factors that can affect the computing capabilities of a desktop or laptop computer---in particular, the addition of a graphics processing unit can, for certain problems, provide a large increase in performance. The storage capacity of a laptop or desktop can vary tremendously, but a typical amount of storage in a consumer machine might be between hundreds of gigabytes and several terabytes (TiB)---the largest single hard drive capacities are now around 8 TiB. [1] https://en.wikipedia.org/wiki/Microcontroller Study group: witness-indistinguishable proofs (2014-10-16) Zero-knowledge (ZK) proofs are a fairly common object in cryptography. What's less common knowledge: zero-knowledge does not compose well. For example, for every interactive ZK prover/verifier (P,V), you can build another pair $(\overline P, \overline V)$ that is still a ZK proof of the same language, but for which running two prover/verifier instances in parallel leaks a witness. Back in the early days of ZK, Feige and Shamir came up with an alternative notion called witness indistinguishability (WI). This says that (for some NP language) if $v, w$ are two witnesses to a statement $x$ then a proof using $v$ is indistinguishable from one using $w$. For some languages like "discrete logarithms" this property holds trivially but once there are several witnesses it becomes interesting. For example, a WI proof of a witness to a Pedersen commitment $G^x H^r$ is guaranteed not to reveal $x$, just like the original commitment itself. And WI is closed under composition. In fact, you can do a strong (and composable) form of WI that is information-theoretic and holds even if the verifier knows the witnesses in question, i.e. the proof is statistically independent of the witness. The second topic we looked at was ZAPs. No-one seems to know what ZAP stands for but it's a two-round WI proof in the plain model (plain-ZK would require at least 3 rounds). The idea: start with a "common random string"-model non-interactive ZK scheme. In round 1, the verifier picks a lot of random strings $r_1, \ldots, r_k$. In round 2, the prover picks one random $r$ and sets $c_i = r_i \oplus r$ to get $k$ different CRS values. The prover then sends a CRS-NIZK proof for each of these values; the verifier accepts if all of them verify. An argument on the probability of a proof for a false statement going through on a random CRS then says that the soundness error of this construction is negligible in $k$. At CRYPTO '03, Barak et al. further showed how to derandomise ZAPs. Our final application is a 3-round OT scheme. To transfer a bit, the verifier picks an RSA modulus $N = pq$. The prover sends a random string $r$ and the verifier replies with two random elements $y_0, y_1$ and a ZAP w.r.t. $r$ that at least one of them is a quadratic residue $\bmod N$. The prover then picks two random $x_0, x_1$ and sends $y_0^{b_0} \cdot x_0^2$ and $y_1^{b_1} \cdot x_1^2$ (where $b_0, b_1$ are the prover's two input bits). The verifier can recover one bit by checking which of the two values is not a quadratic residue. To OT a whole bitstring, this protocol can be done for all bits in parallel. This is where it is important that the ZAP (which is WI) still works under concurrent composition. Posted by David at 8:55 AM No comments: 52 Things: Number 2: What is the difference between a multi-core processor and a vector processor? On the face of it, you may be confused as to what the difference is between these two processors.
After all, you may be familiar with words like parallel computing and may have come across these two different types of processor. So what are the differences between them? This is the question of this week's '52 Things Every Cryptography PhD Student Should Know'. But before we get into the nitty gritty of it, why don't we first have a look at the concept these two different processors are part of, namely parallel computing. What is parallel computing? Before answering this question we first need to consider the conventional "serial" model of processing. Let's do so by imagining some problem we need to solve. The way serial computing solves this problem is by viewing it as a number of steps (instructions) which the processor deals with in sequential order. The processor deals with each of the instructions and then at the end, the answer comes out and the problem is solved. Whilst being a wonderful way of solving the problem, it does however imply a bottleneck in the speed of solving it: namely, the speed of the processor at executing the individual instructions. This is fine if the problem isn't too large, but what happens when we have to deal with larger problems or want to compute things faster? Is there a way of increasing the speed of computation without the bottleneck of the speed of the processor? The answer, as you might have guessed, is yes, and it comes in the form of something called parallel computing. What parallel computing does to the problem we are trying to solve is to break it down into smaller problems, each of which can be computed separately at the same time. In this way, the problem is distributed over different processing elements which perform each of these different sub problems simultaneously, providing a potentially significant increase in speed of computation – the amount of speed up depends on the algorithm and can be determined by Amdahl's law [1]. So how does this all work? How can you process things in such a way as this? Well two solutions to the problem are multi-core and vector processors. What is a multi-core processor? A multi-core processor is a single computing component that carries out parallel computing by using multiple serial processors to do different things at the same time. The sub problems of the bigger problem discussed earlier are each solved by a separate processor, allowing programs to be computed in parallel. It's like having multiple people working on a project where each person is given a different task to do, but all are contributing to the same project. This might take some extra organising to do, but the overall speed of getting the project completed is going to be faster. What is a vector processor? A vector processor is a processor that computes single instructions (as in a serial processor) but carries them out on multiple data sets that are arranged in one dimensional arrays (unlike a standard serial processor which operates on single data sets). The idea here is that if you are doing the same thing many times to different data sets in a program, rather than executing a single instruction for each piece of data, why not apply the instruction to all the sets of data at once? The acronym SIMD (Single Instruction Multiple Data) is often used to denote instructions that work in this way. So that's the general idea; let's sum up with an example. Let's say we want to roll 4 big stones across a road and it takes one minute to do each roll. The serial processor rolls them one by one and so takes four minutes.
The multi-core processor with two cores has two people to roll stones so each one rolls two stones; it takes two minutes. The vector processor gets a long plank of wood, puts it behind all four stones and pushes them all in one go, taking one minute. The multi-core processor has multiple workers; the vector processor has a way of doing the same thing to multiple things at the same time. [1] http://en.wikipedia.org/wiki/Amdahl%27s_law Posted by David McCann at 4:41 PM No comments: Compiler-based side-channel application and masking This week's study group was led by David McCann and focused on the "Compiler-based Side Channel Vulnerability Analysis and Optimized Countermeasures Application" [1] paper presented at the Design Automation Conference (DAC) 2013. At a high level, the authors consider the output for each instruction executed and determine, for each individual bit, the minimum number of key bits mixed into the result. Using this information, the compiler makes a decision (based on a threshold) on whether or not to mask the intermediate data. Consider a toy example of the AddRoundKey operation from AES, $s = k \oplus m$, where $m$ is the input matrix, $k$ is the key matrix and $s$ is the resulting state matrix. Each bit of $s$ contains only a single bit of keyed information. The authors define security as a threshold for the minimum number of key bits affecting each intermediate state bit. The exception is an intermediate state bit that has no relation to the key bits (in this case, the security is considered infinity). The authors incorporate the additional stages, referred to as the "security-oriented data flow analysis" (SDFA), into the LLVM compiler framework. This has some immediate advantages over applying masks directly in the source code: specifically, if the source code applies a masking scheme and the compiler is clever enough to notice, the masking process may be identified as unnecessary computation and be optimised out. In addition to this, only the vulnerable intermediate bits are identified for masking rather than the application as a whole. The SDFA converts the compiler intermediate-representation (IR) code to a Control-Flow Graph (CFG) where each node represents a statement in the program. The authors go on to define a set of rules for the propagation of keyed information at the input and output of the and, or, cmp, mul, div, mod, store, load and shift operations. I could define all the rules here; however, this would be almost equivalent to re-writing the paper, so if you are interested, please read the full text [1]. In the final section, the authors present experimental results with an implementation of AES-128. The results are compared to implementations presented in [2] and [3]. They discuss different levels of masking and their respective code size and speed performance. The authors highlight the speed-up in performance over existing masked implementations (roughly x2.5). It is a nice and neat presentation on how to exploit the compiler's framework to identify and mask vulnerable intermediate data. I am on the fence about the speed-up results seeing as the reference implementations are optimised for 8-bit processors (whereas the target hardware was a 32-bit processor). They also state the code size increase in their work is 'acceptable' given the speed up. However there is a threefold increase in size for a first-order tabulated S-Box (attributed to loop-unrolling) for a twofold speed-up.
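To make the bit-level bookkeeping concrete, here is a toy sketch of the kind of dependence tracking the analysis performs for the AddRoundKey example above. It only covers the XOR case and a minimal "minimum key bits per output bit" metric; it is not the paper's full SDFA rule set, and all names are my own.

```python
INF = float("inf")

def xor_deps(a_deps, b_deps):
    # For s = a xor b, each output bit depends on the union of the
    # key-bit sets feeding the corresponding input bits.
    return [a | b for a, b in zip(a_deps, b_deps)]

def security_level(deps):
    # Minimum number of key bits mixed into any key-dependent bit;
    # bits with no key dependence are treated as infinitely secure.
    return min(len(d) if d else INF for d in deps)

# Toy AddRoundKey on one byte: s = k xor m.
key_deps = [frozenset({i}) for i in range(8)]   # bit i of k depends on key bit i
msg_deps = [frozenset() for _ in range(8)]      # plaintext bits carry no key info
state_deps = xor_deps(key_deps, msg_deps)
print(security_level(state_deps))               # 1 -> flagged for masking if the threshold is higher
```

None of this changes the authors' conclusions; it is only meant to illustrate why every bit of $s$ is reported as carrying exactly one key bit.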
Nevertheless, a nice paper with good potential for automation of side-channel countermeasures. [1] http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6560674 [2] http://link.springer.com/chapter/10.1007%2F978-3-642-15031-9_28 [3] http://link.springer.com/chapter/10.1007%2F11605805_14 Plaintext Awareness and Signed ElGamal For the first study group of the academic year, David Bernhard spoke on recent developments in the area of CCA-secure public key encryption. After introducing the security goals of the area and some key previous results, he presented a paper by Yannick Seurin and Joana Treger, from CT-RSA 2013 [0], which aims to provide such a scheme via a variant of Schnorr-signed ElGamal encryption... ElGamal Encryption We begin with a recap of the ElGamal encryption scheme. ElGamal is a public-key cryptosystem whose security is derived from the security of the discrete logarithm problem (DLP). Very briefly, it uses a publicly known cyclic group $G$ (in which the DLP must be hard!) of prime order $p$, with a chosen generator $g$. The user setting up the scheme picks a secret variable $x$, and sets their public key to be $h:=g^x$. To encrypt a message $m$ under this public key, one sends the pair $(g^y,mh^y)$, which the initial user can decrypt since they know $x$ and $p$. ElGamal encryption is IND-CPA secure if (and only if) the Decisional Diffie-Hellman (DDH) assumption holds [1]. However, its ciphertexts are malleable and thus the scheme cannot be IND-CCA2. Indeed, trying to extend it to such a scheme turned out not to be as simple as one might think. Various papers made progress towards this, taking two rather different directions. One branch aimed to create a hybrid scheme (e.g. DHIES [2]), reducing security to a stronger assumption on the group and to the security of the symmetric primitive used. The method we will look at is the alternative... Plaintext Awareness A scheme is said to be plaintext aware in the Random Oracle Model (ROM-PA) if an adversary cannot generate a valid ciphertext without already knowing its plaintext. That is, it encapsulates the behaviour of a good scheme that prevents the adversary from generating valid ciphertexts other than by encrypting plaintexts herself (or doing something she knows to be equivalent to this). The technicalities of this definition are rather complex, but roughly mean that, given a ciphertext and the list of random oracle queries made to generate it, one can deduce the underlying plaintext. Now, the key result for us is that if a scheme is both IND-CPA and ROM-PA secure, then it is IND-CCA2 secure [3]. Making ElGamal ROM-PA Various earlier results had considered designing a plaintext aware scheme by combining ElGamal with a Schnorr signature [1,4], forming a scheme this paper refers to as SS-EG. Whilst SS-EG inherits IND-CPA security from ElGamal, unfortunately it does not add plaintext awareness. The key contribution of this paper [0] was to observe that actually SS-EG is in some sense "very close", and define a very similar scheme using Chaum-Pedersen commitments rather than the Schnorr signatures. Denoted CPS-EG, the only real difference between the schemes is a slight modification to the variables in the ciphertext and the addition of two more variables to the Random Oracle query when generating the commitment.
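As a brief aside, here is a toy sketch of the textbook ElGamal scheme recapped above, including the malleability that rules out IND-CCA2. The parameters (an order-11 subgroup of $\mathbb{Z}_{23}^*$) are far too small to be secure and are chosen purely for illustration; the code uses pow(x, -1, p), which needs Python 3.8 or later.

```python
import random

# Toy parameters: the group is the order-11 subgroup of Z_23^* generated by g = 4.
p_mod, q_ord, g = 23, 11, 4

def keygen():
    x = random.randrange(1, q_ord)          # secret key
    return x, pow(g, x, p_mod)              # (x, h = g^x)

def encrypt(h, m):                          # m must be a group element
    y = random.randrange(1, q_ord)
    return pow(g, y, p_mod), (m * pow(h, y, p_mod)) % p_mod

def decrypt(x, c1, c2):
    return (c2 * pow(pow(c1, x, p_mod), -1, p_mod)) % p_mod

x, h = keygen()
m = 9                                       # 9 = 4^8 mod 23 lies in the subgroup
c1, c2 = encrypt(h, m)
print(decrypt(x, c1, c2) == m)                            # True: correctness
print(decrypt(x, c1, (2 * c2) % p_mod) == (2 * m) % p_mod)  # True: malleability
```

The last line shows an adversary turning a ciphertext for $m$ into one for $2m$ without knowing anything secret, which is exactly why plain ElGamal cannot be IND-CCA2.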
It is the second of these differences, the two extra variables in the Random Oracle query, that is critical, because the extra information supplied to the random oracle (information that an extractor is given direct access to) is sufficient to make the scheme ROM-PA. Making ElGamal IND-CCA2 At this point, one may hope that the scheme is indeed IND-CCA2, but there remains one final twist. There are various different ways one may transmit the required data for a signature, of which one traditional method is the Fiat-Shamir scheme, consisting of a challenge commitment and appropriate response (i.e. a non-interactive proof of knowledge). However, there are also alternative representations that are more efficiently packaged, leading to more concise ciphertexts. The particular representation chosen in the original scheme in fact failed to inherit IND-CPA security from the underlying encryption scheme [5]. Returning to the (less concise) Fiat-Shamir scheme ensures the final version of CPS-EG is indeed an IND-CCA2 secure public key encryption scheme. [0] A Robust and Plaintext-Aware Variant of Signed ElGamal Encryption, Yannick Seurin and Joana Treger, CT-RSA 2013. [1] On the Security of ElGamal Based Encryption, Yiannis Tsiounis and Moti Yung, PKC '98. [2] The Oracle Diffie-Hellman Assumptions and an Analysis of DHIES, M. Abdalla, M. Bellare and P. Rogaway, CT-RSA '01. [3] Relations among notions of security for public-key encryption schemes, M. Bellare, A. Desai, D. Pointcheval and P. Rogaway, Crypto '98. [4] A practical mix, Markus Jakobsson, Eurocrypt '98. [5] Cited by the authors as 'personal communication', Bertram Poettering. 52 Things: Number 1: Different Types of Processors This is the first in a series of blog posts to address the list of '52 Things Every PhD Student Should Know' to do Cryptography. The set of questions has been compiled to give PhD candidates a sense of what they should know by the end of their first year - and as an early warning system to advise PhD students they should get out while they still can! In any case, we will be presenting answers to each of the questions over the next year and I have been volunteered for the (dubious) honour of blogging the very first 'thing'. The first topic is on computer architecture and is presented as the following question: What is the difference between the following? - A general-purpose processor. - A general-purpose processor with instruction-set extensions. - A special-purpose processor (or co-processor). - An FPGA. There is no strict definition of a general-purpose processor; however, it is widely accepted that a processor is general if it is Turing-complete. This captures any processor that is able to compute anything actually computable (i.e. can solve any problem a Turing machine can). I will not delve into the definition of a Turing machine but if I've already lost you then I would recommend brushing up on your Theory of Computation [1]. Note though that this definition has no concept of performance or instruction-set capabilities and, in fact, some researchers have gone through the trouble of proving that you may only need a single instruction to be Turing-complete [2]. In the context of modern processors, most programmable CPUs are considered general purpose. Being general-purpose usually comes with a penalty in performance. Specifically, a general-purpose processor may be able to compute anything computable, but it will never excel at complex repeated tasks.
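As an aside on the single-instruction claim above: one classic single-instruction machine is the hypothetical "subleq" computer, whose only operation subtracts one memory cell from another and branches if the result is non-positive. The interpreter and the tiny addition program below are my own illustration (with the program held separately from data, a simplification of the usual single-memory formulation), not anything from the cited reference.

```python
def run_subleq(program, mem, pc=0):
    """Toy one-instruction machine: each instruction (a, b, c) does
    mem[b] -= mem[a]; if the result is <= 0, jump to c (a negative c halts),
    otherwise fall through to the next instruction."""
    while 0 <= pc < len(program):
        a, b, c = program[pc]
        mem[b] -= mem[a]
        if mem[b] <= 0 and c < 0:
            break
        pc = c if mem[b] <= 0 else pc + 1
    return mem

# Compute mem[2] = mem[0] + mem[1] using only subleq, via a scratch cell mem[3].
mem = [3, 4, 0, 0]
prog = [(0, 3, 1),   # scratch -= A  -> -3, "jump" target is simply the next line
        (1, 3, 2),   # scratch -= B  -> -7
        (3, 2, -1)]  # result -= scratch -> 7, then halt
print(run_subleq(prog, mem)[2])   # 7
```

Such a machine is Turing-complete, yet hopeless at any frequently repeated task, which is exactly the trade-off the next paragraph turns to.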
Given a task that is repeated regularly on a general-purpose processor in a wide variety of applications, a processor designer may incorporate instruction-set extensions to the base micro-architecture to accommodate the task. Functionally, there may be no difference in the micro-architecture capabilities but practically there may be huge performance gains for the end-user. As we're all cryptographers here, I will stick to a crypto example for instruction-set extensions. Consider a desktop machine with an AES-encrypted disk. Any reads from secondary storage require a CPU interrupt to decrypt the data blocks before being cached. Given that disk access on a cache miss is already considered terrible, add the decryption routine on top and you have a bottleneck that will make you re-consider your disk encryption. It should be clear here that AES is our complex repeated task and, given a general-purpose CPU with a simple instruction set, we have no choice but to implement the decryption as a linear stream of operations. Intel and AMD both recognised the demand for disk encryption and the penalty AES adds to secondary storage access and have (since circa 2010) produced the AES-NI x86 instruction-set extension to accelerate disk encryption on their line of desktop CPUs. If you want to fully accelerate any computation, the optimal result will always be a special-purpose processor or an Application-specific integrated circuit (ASIC). Here we lose a significant portion of the flexibility granted by a general-purpose processor in exchange for performance gains. These types of processors are often tightly-coupled to a general-purpose processor, hence the term co-processor. Note that a co-processor may indeed be in the same package as a general-purpose processor but not necessarily integrated into the general-purpose architecture. Once again, if we turn to modern processor architectures, Intel and AMD have both integrated sound cards, graphics processors and DSP engines into their CPUs for some time now. The additional functionality is exposed via special-purpose registers and the co-processor is treated as a separate component which the general-purpose processor must manage. Finally we turn to Field-Programmable Gate Arrays (FPGAs), the middle ground between ASICs and general-purpose processors. If an application demands high-performance throughput but also requires (infrequent) modification then an FPGA is probably the answer. To understand how an FPGA works, consider a (very) large electronics breadboard with thousands of logic-gates and lookup tables (multiplexers attached to memory) placed all around the breadboard. If you describe an application as a set of gates and timing constraints then you can wire it together on the breadboard and produce a circuit that will evaluate your application. An FPGA affords the flexibility of being re-programmable whilst producing the dedicated logic to evaluate a target application. The key difference to a general-purpose program is how you design and build your application. In order to get the most out of the hardware you must describe the application as a set of hardware components and events using a hardware description language (Verilog or VHDL). This process is frequently used to prototype general-purpose and special-purpose processors on FPGAs before production. However, it is not without its drawbacks. Designing a program with low-level building blocks becomes very cumbersome for large applications.
In addition, the energy consumption and hardware costs are generally higher in comparison to a general-purpose embedded IC. Recently, FPGA manufacturer Xilinx have begun shipping FPGAs with ARM general-purpose cores integrated within a single package [3]. This now makes FPGAs available to the ARM core as a flexible co-processor. As a result, you can build dedicated logic to evaluate your crypto primitives and hence accelerate cryptographic applications. In summary, general-purpose processors are capable of computing anything computable. The same holds for a general-purpose processor with instruction-set extensions, which may additionally perform better in particular applications. A special-purpose processor (or co-processor) is very fast at a particular task but is unable to compute anything outside of that. An FPGA can be used to build all of the above hardware but sacrifices speed for flexibility over an ASIC solution. [2] http://www.cl.cam.ac.uk/~sd601/papers/mov.pdf [3] http://www.xilinx.com/products/zynq-7000/extensible-virtual-platform.htm EM is beginning to lose the non-invasive touch Anyone familiar with hardware side-channel attacks knows that electromagnetic (EM)-field attacks are considered the most practical and forgiving attack vector. This is owing to the non-invasive nature and localisation properties of EM-fields. We note this in contrast to early differential power analysis (DPA) [1] attacks which require a direct power tap (and often some modification to the target circuitry). A recent publication at CHES2014 [2] seeks to challenge the state of EM-field attacks in an attempt to detect and subsequently prevent EM side-channel attacks. Prior attempts have been made to design EM-'conscious' devices (either by EM noise generation or active EM shielding [3]) but these have come at a heavy cost to area and power consumption, both of which are often of higher priority than security in an integrated circuit (IC) development cycle. This recent publication addresses these constraints and presents a simple and elegant solution to foil EM-field attacks. First, let's recall the stages of an EM-field side-channel attack. An adversary must first buy/capture/'borrow' a target device. Having successfully gained physical access to the device, they must now establish a method to capture the EM-field side-channel leakage. A standard setup will include a near-field probe, an amplifier and a digitizer/oscilloscope. The probe is placed over an (experimentally determined) area of interest and the adversary will record the EM-field radiation during the execution of some cryptographic operations. Additional information may be required for specific attacks (ciphertexts or plaintexts) but we'll stick to a generic scenario here. In [2], the authors present a design methodology which allows the IC to detect probing attempts and prevent an attack early on in the process. The authors exploit the physical laws of EM-fields and mutual inductance to detect frequency shifts in the side-channel leakage owing to a near-field probe being placed in close proximity to the IC. If we consider the EM-frequency on the surface of the target IC as a result of some inductance ($L$) and capacitance ($C$) we can calculate the frequency as follows: $f_{LC} \approx \frac{1}{2\pi\sqrt{LC}}$ With the introduction of an EM-field probe, we expect the probe coil to produce its own field and hence some form of mutual inductance ($M$) on the IC surface.
The (shifted) EM-frequency at the IC surface will then present as: $\bar{f}_{LC} \approx \frac{1}{2\pi\sqrt{(L-M)C}}$ We expect the mutual inductance to be inversely proportional to the distance between the IC surface and the probe. Hence, as the probe approaches the surface, the frequency shift increases. At a high level, the countermeasure detects the frequency shift and alerts the cryptographic core to the attack. However, as any IC designer will point out, analogue circuitry requires a careful design process and is often restrictive and costly. In addition, capturing a reference frequency may not always be possible if the adversary has unfettered access to the device. The authors realise this and present a clever dual-coil control and detection logic implemented with a standard cell library and a MOS capacitor bank. This allows the entire design workflow to be (semi-)automated, greatly reducing the development time and resource requirements. We'll not go into the details of the design here but you can pick up all the information from their paper on eprint [2]. As a proof-of-concept design, the authors produced a $0.18\mu m$ process ASIC with an AES core and their EM detection logic. They proceeded to test the ASIC under a few different attack scenarios ranging from a vanilla EM attack to an adversary who is completely aware of the countermeasure and attempts to circumvent it. In all scenarios, the detection logic was able to pick up the EM probe and assert the control logic to halt the cryptographic operations. Arguably a solid result for the design team. The paper presents the system in a very nice and neat package for IC designers to pick up and implement. With relatively low overhead costs ($2\%$ area, $9\%$ power and $0.2\%$ performance) it is hard to argue against it. However, it is not without a few caveats. The detection system will not be able to detect all EM attacks and the authors do acknowledge this in their conclusion. However they do not discuss this in any great detail. Having no access to their device I can guess at a few scenarios in which their system is too limited to detect an attack. Primarily (from my understanding) the authors always depackage the device (normally unnecessary when dealing with EM-field side-channel attacks and defeating the purpose of its non-invasive nature) and measure the probe distance relative to the die surface rather than the IC surface. There seems to be little mention of the effectiveness with the device package still intact. Furthermore their current approach is limited to detecting probes at a maximum of $0.1mm$ from the die surface, whereas EM leakage can be picked up at far greater distances [4]. There is also the prospect that an adversary will not position the probe above the IC itself but over the support circuitry around the IC (e.g. decoupling capacitors and power regulators). In this scenario, the countermeasures will be unable to detect any shift. Finally, there is little discussion of false positives. All electrical devices will produce some form of mutual inductance and capacitive coupling, so consider a device deployed in the field with these countermeasures: will placing it near my smartphone (which contains several antennas and ICs) stop it from performing any cryptographic operations? Despite its practical shortcomings, though, this paper is a solid move towards preventing EM side-channel attacks.
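To get a feel for the two frequency formulas above, the following sketch evaluates them for a hypothetical on-chip resonator; the inductance, capacitance and mutual-inductance values are invented for illustration and are not taken from the paper.

```python
import math

def f_lc(L, C, M=0.0):
    # Resonant frequency with inductance L, capacitance C and mutual
    # inductance M from a nearby probe coil (M = 0 means no probe present).
    return 1.0 / (2.0 * math.pi * math.sqrt((L - M) * C))

L, C = 10e-9, 1e-12            # hypothetical on-chip values: 10 nH, 1 pF
for M in (0.0, 0.1e-9, 1e-9):  # increasing coupling as the probe approaches
    shift = f_lc(L, C, M) / f_lc(L, C) - 1.0
    print(f"M = {M:.1e} H -> relative frequency shift {shift:.3%}")
```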
Their design methodology and workflow make it appealing for practitioners and the simplicity behind their approach minimises the cost for IC manufacturers, overall a good contribution to the literature. [1] http://www.cryptography.com/public/pdf/DPA.pdf [2] https://eprint.iacr.org/2014/541.pdf [3] http://www.google.com/patents/US6962294 [4] https://www.youtube.com/watch?v=4L8rnYhnLt8 Posted by Unknown at 12:45 PM No comments: Everybody loves quantum Today is the last day of the PQC conference in Waterloo, Canada. In combination with the ETSI workshop on standardising post-quantum cryptography next week, the conference and summer school have attracted a varied crowd of cryptography researchers, physicists, government employees and people who care about standards. There were four invited talks this week and in this blog post I will try to summarise two of them. The first one is about quantum computing and the second one is about quantum cryptography. I chose these two because you do not hear too much about them in regular crypto conferences. The first invited talk was on building a quantum computer by Matteo Mariantoni. He started off by listing the applications for quantum computers, which include quantum simulation of physical and chemical processes, applications to security such as cryptanalysis and quantum crypto, but also a more recent concept aimed at machine learning and artificial intelligence called quantum data processing. His opinion was that we should compare building a quantum computer to things like the Manhattan project, the space race and the current Mars exploration projects. In this comparison, he claims a quantum computer would be easier, cheaper and more useful and he cited how the requirements on technology used in space generated improvements that aid technology for personal use as well. There are several important steps on the way to an actual quantum computer. Currently, researchers can construct physical qubits, but these are too noisy to carry out any meaningful quantum computation. The solution is to embed the physical qubits into a two-dimensional grid such that they can interact with their nearest neighbours, and to apply error-correcting techniques which eliminate the noise. This forms a single logical qubit. Now, there are some requirements on the physical qubits and gates like fidelity and readout times that need to be met before the logical qubit will actually work. At Waterloo they spent two years optimising their set-up, which is based on superconducting qubits, in order to reach the threshold that allows to create a logical qubit. As an example of why this sort of thing takes two years, he mentioned the task of individually ensuring that none of the screws in the chips with the superconductors are magnetic, which took two months. Future steps in building a quantum computer consist of performing operations on the single logical qubit, performing operations on multiple logical qubits and eventually combining everything into a quantum computer. His final slide consisted of an estimate of what would be needed to factor a 2000-bit number in 24 hours. It would require 500 million physical qubits, a dedicated nuclear power plant, at least five years of research time and ten years of development time and about 1 billion dollars. The 24 hours is not really a variable in this case, as this comes from the physical implementation of the gates. The second invited talk was about quantum cryptography, by Nicolas Gisin. 
He talked about two applications of quantum mechanics in constructive cryptography, Quantum Random Number Generators and Quantum Key Distribution. He briefly described a QRNG based on beam splitters, which is conceptually simple but fundamentally random (if quantum mechanics work). Current implementations provide about 4 Megabits per second of balanced random bits and solutions have been deployed in practice. An interesting observation he made is that there is also research into different approaches for QRNG's, one of which is based on the use of photosensors that are also inside most modern smart phones. However, the biggest part of the talk was about QKD. QKD is based on the fact that quantum mechanics provides non-local distributed randomness and that it is impossible to clone quantum states, which leads to information-theoretic security. I will not describe the protocol here, but the idea is that Alice and Bob exchange information through quantum states (e.g. encoded in photons), and any information learned by Eve causes a perturbation on these states which can be detected by Alice and Bob. There are even methods that when given a certain amount of tampering by Eve allow Alice and Bob to compress their randomness such that Eve has no information on the result. One issue is that Alice and Bob require to do some communication on a classical channel, which requires authentication to prevent MitM attacks. This leaves two options: use an information-theoretic solution, which requires a pre-shared key or use a computationally secure solution. The first results in a "key-growing" scheme, where a pre-shared key turns into a longer key, which can then be used for future interactions, whereas the second results in a scheme that is secure as long as the authentication is secure at the time of execution. The reason is that the attack has to be active, which means it cannot be launched at a later point in time. Essentially, QKD is immune to any future progress in cryptanalysis of the authentication scheme. Of course, in reality the implementation matters and there have been attacks on existing implementations, similar to side-channel attacks on classical cryptography. In the remainder of the talk, Nicolas described the state of the art in current implementations as well as the open challenges. A solution using fiber optic cables is limited in distance due to noise caused by spurious refractions inside the cable, whereas solutions through the air are hampered by that pesky atmosphere that allows us to breathe. The maximum range of these solutions appears to vary between about 100-400km. The obvious question then becomes how to extend this. One possibility would be to use a satellite, because the atmosphere does not stretch as far upwards. Alice sends her quantum states to the satellite, the satellite moves through orbit and sends the quantum states to Bob upon reaching him. This solution is also being explored at the University of Waterloo. Another option is to use trusted intermediary nodes where the information is converted to classical and back to quantum before it is sent on. Several countries such as the US and China are already planning such networks. The final option Nicolas mentioned is by using intermediary nodes (no longer trusted) to entangle states over longer distances. However, this solution requires some extra research into storing these entangled states until their counterparts arrive, which is currently not yet possible. 
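The "compression" of Alice and Bob's randomness mentioned above is usually called privacy amplification, and one standard way to realise it is with a universal hash, for example multiplication by a random binary matrix over GF(2). The sketch below is a toy illustration of that idea only; it is not any deployed QKD post-processing, and the key lengths are arbitrary.

```python
import secrets

def privacy_amplification(raw_bits, seed_matrix):
    # Toy universal-hash extractor: multiply the raw key by a random
    # binary matrix over GF(2) to squeeze out Eve's partial knowledge.
    return [sum(m & r for m, r in zip(row, raw_bits)) % 2 for row in seed_matrix]

raw = [secrets.randbelow(2) for _ in range(128)]                   # sifted, partially leaked key
matrix = [[secrets.randbelow(2) for _ in range(128)] for _ in range(64)]
final_key = privacy_amplification(raw, matrix)
print(len(final_key), final_key[:8])                               # 64 shortened, cleaner key bits
```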
Posted by Joop van de Pol at 8:07 PM No comments:
CommonCrawl
A quantitative systems pharmacology (QSP) model for Pneumocystis treatment in mice Guan-Sheng Liu1, Richard Ballweg1, Alan Ashbaugh2, Yin Zhang3, Joseph Facciolo1, Melanie T. Cushion2 & Tongli Zhang ORCID: orcid.org/0000-0003-1773-62791 BMC Systems Biology volume 12, Article number: 77 (2018) A Correction to this article was published on 12 August 2019 The yeast-like fungus Pneumocystis resides in lung alveoli and can cause a lethal infection known as Pneumocystis pneumonia (PCP) in hosts with impaired immune systems. Current therapies for PCP, such as trimethoprim-sulfamethoxazole (TMP-SMX), suffer from significant treatment failures and a multitude of serious side effects. Novel therapeutic approaches (i.e. newly developed drugs or novel combinations of available drugs) are needed to treat this potentially lethal opportunistic infection. Quantitative Systems Pharmacology (QSP) models promise to aid in the development of novel therapies by integrating available pharmacokinetic (PK) and pharmacodynamic (PD) knowledge to predict the effects of new treatment regimens. In this work, we constructed and independently validated PK modules of a number of drugs with available pharmacokinetic data. Characterized by simple structures and well constrained parameters, these PK modules could serve as a convenient tool to summarize and predict pharmacokinetic profiles. With the currently accepted hypotheses on the life stages of Pneumocystis, we also constructed a PD module to describe the proliferation, transformation, and death of Pneumocystis. By integrating the PK module and the PD module, the QSP model was constrained with observed levels of asci and trophic forms following treatments with multiple drugs. Furthermore, the temporal dynamics of the QSP model were validated with corresponding data. We developed and validated a QSP model that integrates available data and promises to facilitate the design of future therapies against PCP. Pneumocystis is a common opportunistic infection. In hosts with functional immune systems, the growth of these organisms is repressed and few pathological symptoms are observed. On the other hand, PCP is a cause of morbidity in HIV-positive patients as well as hosts with other immune defects, or in patients undergoing therapy with immunosuppressive agents [1,2,3]. Despite a decreased incidence of PCP in developed countries (due to the introduction of Highly Active Anti-Retroviral Therapy), the infection still causes death in about 15% of HIV-infected patients [4,5,6]. The genus Pneumocystis comprises many species, including P. carinii, P. jirovecii [7], P. wakefieldiae, P. murina [8], and P. oryctolagi [9,10,11]. These different species are characterized by their ability to infect different hosts. For example, P. jirovecii resides in the human lung alveoli. Despite their differences in host preference, all Pneumocystis species are hypothesized to have a bi-phasic life cycle: a) an asexual phase of replication via the binary fission of the trophic forms; b) a sexual phase in which the conjugation of trophic forms results in the formation of asci, which contain 8 ascospores that are released and either continue in the sexual phase or enter the asexual phase [9]. Unlike mammalian cells, Pneumocystis is unable to harvest folate from the environment and must synthesize it de novo [12].
To take advantage of this weakness, the primary therapy for PCP is TMP-SMX, which inhibits dihydropteroate synthase and dihydrofolate reductase, the integral enzymes involved in folate synthesis in host cells and fungi [13,14,15]. Despite high success rates in treating PCP, TMP-SMX therapy leads to significant side effects, including neutropenia and serious allergic skin reactions that can result in death. It is estimated that between 25 and 50% of HIV-infected patients are unable to tolerate prolonged TMP-SMX treatment due to these harsh side effects and must seek other treatment options [16]. Currently, alternative medications include atovaquone, clindamycin-primaquine, echinocandins, and pentamidine isethionate. Atovaquone inhibits nucleic acid and adenosine triphosphate synthesis [17], thus disrupting DNA replication, energy production, and proliferation of the fungi. A combination of clindamycin and primaquine suppresses fungal protein synthesis and mitochondrial function [18]. The members of the echinocandin family (i.e. anidulafungin, caspofungin, and micafungin) are β-1,3-D-glucan (BG) synthase inhibitors. Since BG is an essential component of the cellular wall that surrounds the asci of Pneumocystis, these drugs selectively target fungi in this phase [19,20,21]. The targets of the drug pentamidine isethionate remain unknown, although the drug has been shown to be effective [22]. When compared to TMP-SMX, these alternative therapies suffer from high rates of relapse and recurrence [23, 24]. Development of new drugs to treat PCP promises to deliver effective treatment with reduced side effects. In comparison to other pathogens, the study of Pneumocystis is particularly challenged by the fact that these fungi cannot be reliably cultured in vitro for any significant length of time, nor continuously passaged to identify whether drugs are pneumocysticidal or pneumocystistatic. Due to this limitation, preclinical drug efficacy studies are carried out in animal models of Pneumocystis infection, typically in mice or rats [25]. Such reliance on animal studies significantly increases both the time and costs associated with the development of treatments to combat PCP. To alleviate this, it will be beneficial to integrate currently available knowledge on the treatment of PCP and our current knowledge of the Pneumocystis lifecycle into a QSP model to facilitate the drug development process. By combining traditional PK and PD analysis with systems biology modeling, QSP can summarize available information into a convenient framework, which can then be used to rigorously test different hypotheses, and scan through treatment regimens in an efficient and cost-effective manner [26, 27]. QSP modeling has been useful in the treatment of infectious diseases, such as tuberculosis, where it has been used for dose optimization of anti-tuberculosis drugs [28,29,30]. In addition, QSP models have shown great promise as powerful quantitative tools to study the dosing regimens for novel compounds [31]. A QSP model for the treatment of Pneumocystis is not yet available, and the scarcity of data from human patients makes the development of a human model difficult. With available data in mice, we constructed and validated a QSP model of PCP. This model includes both a PK module and a PD module. The PK module describes the distribution and decay of an applied drug, with different drugs characterized by their respective rate constants. This module was parametrized using independent construction and validation datasets.
Following validation, the model was then used to predict the temporal PK profiles of standard dosing regimens in mice. The PD module specifies the proliferation, transformation, and death of Pneumocystis in infected mice. The PK module and PD modules were then integrated into a population of QSP models. The parameters of this integrated model were estimated using a population of models that recapture the steady state distributions of the trophic forms and asci following drug treatment. The temporal dynamics generated by these QSP models were further validated with the observed dynamics of Pneumocystis following these same drug treatments. After constructing independent PK and PD modules with data from various literature sources, the independent modules were then integrated to form a QSP model which was further validated using novel data on the temporal dynamics of Pneumocystis infection. As a result, the QSP models developed in this work promise to serve as a solid first step towards understanding the temporal dynamics of Pneumocystis infection and facilitating the design of novel therapies. In the future, this model can potentially be improved and extended to a human version. Our overall modeling strategy is illustrated in Fig. 1. After a PK module and a PD module were constructed, they were integrated into a comprehensive QSP model. The overall QSP modeling strategy. The constructed QSP model includes both a PK module and a PD module. The PK module describes the distribution and decay of different drugs. The PD module specifies the proliferation, transformation, and death of the trophic forms and asci of Pneumocystis fungi. After construction of the PK module, this module was validated with independent data that were not used for its construction. For the PD module, all available data were used for its construction. The integrated QSP model, which includes both the PK module and the PD module, was constructed with the distribution of asci and trophic forms following treatment and then validated with their temporal dynamics. Construction of the PK module in mice: A three-compartment PK module was used to describe drug dynamics (Fig. 2). Drugs can be administered either through intravenous (i.v.) injection, intraperitoneal (i.p.) injection or oral (p.o.) administration. In order to mimic i.v. injection, we elevated the initial level of the drug in the plasma compartment. To model i.p. or p.o. treatments, drug was added to the administration compartment (AC) (Fig. 2). The level of the drug first increases in the AC and then diffuses into the plasma compartment. In this way, we were able to constrain the PK module with data from sources that administered drugs via multiple methods. The structure of the QSP model. Left panel: A three-compartment PK module was used to describe the reported pharmacokinetic data. The first compartment was the AC, the second compartment was plasma, and the third was 'peripheral tissue'. Drug decay was assumed to occur in the plasma and 'peripheral tissue' compartments. The rates of drug distribution and decay were described by the corresponding parameters. Right panel: The dynamics of Pneumocystis were described by a two-stage model which involves both trophic forms and asci. The temporal changes of trophic forms and asci were also controlled by the indicated parameters.
The drug effects were indicated by arrows (promoting) and lines with solid circle heads (inhibiting). Overall, the module comprises the AC, a plasma compartment and a "peripheral tissue" compartment (combining all organs, muscles, fat, etc.). Drug decay is assumed to occur in both the plasma and "peripheral tissue" compartments. The parameters that govern drug distribution and decay are labeled near the corresponding reaction arrows (Fig. 2). The PK module was constructed with previously reported pharmacokinetic profiles (references elaborated in Table 2). Using figures from these works that plot drug concentration against time, digital values were extracted with the publicly available software labnotes (http://mpf.biol.vt.edu/lab_website/Labnotes.php). Three ordinary differential equations (ODEs) with identical structures (detailed below) were used to describe all drugs, while the rate constants differ between individual drugs (Table 1). Table 1 The equations and parameters of the PK module. The drug concentration in the administration compartment (AC) is modeled as: $$ \frac{d\,\mathrm{Drug}_{AC}}{dt} = -RAP \cdot K_{abs} \cdot \mathrm{Drug}_{AC} $$ where DrugAC represents the current level of the drug in the administration compartment, Kabs represents the absorption rate of the drug, and RAP is a non-dimensional scaling factor. Since the drug resides in the administration compartment for only a short time, its decay is not explicitly incorporated. The drug concentration in the plasma compartment is modeled as: $$ \frac{d\,\mathrm{Drug}_{P}}{dt} = K_{abs} \cdot \mathrm{Drug}_{AC} - \left(K_{PT} + K_{dP}\right) \cdot \mathrm{Drug}_{P} + K_{TP} \cdot \mathrm{Drug}_{T} $$ where DrugP represents the current level of the drug in the plasma compartment, Kabs represents the absorption rate of the drug, KPT the rate at which the drug moves from the plasma compartment to the tissue compartment, KdP the degradation rate of the drug in plasma, and KTP the rate at which the drug moves from the tissue compartment to the plasma compartment. The drug concentration in the peripheral tissue compartment is modeled as: $$ \frac{d\,\mathrm{Drug}_{T}}{dt} = RTP \cdot \left(-K_{TP} \cdot \mathrm{Drug}_{T} + K_{PT} \cdot \mathrm{Drug}_{P}\right) - K_{d} \cdot \mathrm{Drug}_{T} $$ where DrugT represents the current level of the drug in the tissue compartment, Kd is the decay rate of the drug in this compartment, and RTP represents a non-dimensional scaling factor. The values of the dimensionless factors RAP and RTP are estimated from the observed data for each drug. For the PK modules, all parameter sets and initial conditions were derived manually using a trial and error method to find a plausible parameter set that visually recaptures the experimentally observed data. The initial conditions for each drug were estimated from the literature data when available. For example, the initial level of anidulafungin was estimated to be 90 μg/ml for the i.p. dosage of 10 mg/kg [34]. When such data were not available, higher or lower initial levels were assumed for higher or lower dosages of the applied drug.
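To make the structure of these three ODEs concrete, the following is a minimal computational sketch of the three-compartment PK module. It is not the authors' code: the rate constants, the initial administration-compartment level, and the time span are illustrative placeholders (the fitted values are reported in Table 1 of the paper), and Python/SciPy is used here purely for convenience, whereas the original simulations were run in XPPAUT.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Hypothetical rate constants: absorption, plasma<->tissue exchange, decay in
# plasma and tissue, and the non-dimensional scaling factors RAP and RTP.
params = dict(Kabs=1.5, KPT=0.8, KTP=0.4, KdP=0.3, KdT=0.1, RAP=1.0, RTP=1.0)

def pk_rhs(t, y, p):
    """Right-hand side of the three PK ODEs; y = [Drug_AC, Drug_P, Drug_T]."""
    drug_ac, drug_p, drug_t = y
    d_ac = -p["RAP"] * p["Kabs"] * drug_ac                      # loss from the administration compartment
    d_p = (p["Kabs"] * drug_ac                                  # absorption into plasma
           - (p["KPT"] + p["KdP"]) * drug_p                     # transfer to tissue plus plasma decay
           + p["KTP"] * drug_t)                                 # return flow from tissue
    d_t = p["RTP"] * (-p["KTP"] * drug_t + p["KPT"] * drug_p) - p["KdT"] * drug_t
    return [d_ac, d_p, d_t]

# i.p./p.o. dosing is mimicked by placing the dose in the administration
# compartment; i.v. dosing would instead elevate the plasma compartment.
y0 = [90.0, 0.0, 0.0]                        # assumed initial AC level for a single dose
sol = solve_ivp(pk_rhs, (0.0, 48.0), y0, args=(params,), dense_output=True)
t = np.linspace(0.0, 48.0, 200)
plasma = sol.sol(t)[1]                       # predicted plasma concentration over the window
```

The plasma trace produced this way is the quantity that would be compared against the digitized concentration-time data during manual parameter tuning.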
The sum of squared error (SSE) for each parameter set was calculated and summarized (Table 2). Table 2 The experimental data for the construction and validation of PK modules. Construction of the PD module in mice: The life cycle of Pneumocystis, including its proliferation, life cycle stage transformation, and death, was simplified into a two-stage model which included both trophic forms and asci (Fig. 2). The simplified model was described using a pair of ODEs and 5 control parameters. Trophic forms of the organism were modeled by the following ODE: $$ \frac{d\,\mathrm{Tro}}{dt} = K_{sTro} \cdot \mathrm{Tro} - K_{dTro} \cdot \mathrm{Tro} \cdot \mathrm{Tro} - K_{TA} \cdot \mathrm{Tro} + K_{AT} \cdot \mathrm{Asci} $$ where Tro represents the current value of the trophic form of Pneumocystis, KsTro is the proliferation rate of Tro, KdTro is the death rate of Tro, KTA is the rate at which trophic forms are converted to asci, and KAT represents the rate at which asci are converted to trophic forms. Asci were described by the following ODE: $$ \frac{d\,\mathrm{Asci}}{dt} = K_{TA} \cdot \mathrm{Tro} - K_{AT} \cdot \mathrm{Asci} - K_{dAsci} \cdot \mathrm{Asci} $$ where Asci represents the current value of the asci of the fungi, and KdAsci represents its death rate. Our model describes the transformation between trophic forms and asci following a multistate model similar to one used for tuberculosis [32]. Following logistic growth models, the decay of the trophic form is a second order reaction, since the trophic forms actively proliferate and compete for space and nutrients. In contrast, the asci do not actively proliferate but rather result from the transformation of trophic forms. Hence, the decay of the asci is set to be a first order reaction. The basal values of these control parameters were estimated on the basis of relevant experimental data (Table 4). The experimentally observed levels of Pneumocystis (Figs. 3 & 4) are distributed over a broad range. To recapture these experimentally observed distributions, we constructed a population of PD models with parameter values selected from a uniform distribution that covers 70–130% of the basal values (Table 4). For Fig. 3b, the experimental results are reported as a total nuclei count of both trophic forms and asci that is on a different scale than the PD model. To recapture this dataset, the model results were converted into a nuclei count and rescaled to the maximum. The PD modules were consistent with experimental data from diverse sources. a. Temporal simulations for the dynamic changes of trophic forms (black curves) and asci (red curves) starting from an initial state with a high level of trophic forms and a low level of asci. b. Temporal simulations (black curves) of the normalized total number of Pneumocystis were compared to the normalized nuclei count from Pneumocystis-infected mice (red dots, error bars represent SEM, n = 2 or 3 for each time point). c and d. Histograms showing the distributions of the numbers of the trophic forms and asci simulated by the PD module. The simulations of the QSP models were consistent with relevant data. a and b.
Bar plots of average simulated log10 levels of asci (a) and trophic forms (b) of Pneumocystis at day 56 post-treatment from untreated mice (Control), mice treated with varying doses of anidulafungin, caspofungin and micafungin, as well as mice treated with TMP-SMX. Corresponding experimental data are represented as dot plots with standard error. c. The simulated dynamic changes of the trophic forms (black curves) and asci (red curves), on a log10 scale, were consistent with the corresponding experimental data (black and red dots) following anidulafungin treatment. d. The simulated dynamic changes of trophic forms (black curves) and asci (red curves) were consistent with the corresponding data (black dots and red dots) following TMP-SMX treatment. Integration of the PK module and the PD modules into QSP models: Various drugs target Pneumocystis via diverse mechanisms, which were incorporated into the QSP models. TMP-SMX represses folate synthesis, which is essential for genome replication in the organism [13]. Therefore, in our simplified model, TMP-SMX was assumed to inhibit the proliferation rate of the trophic forms and increase the death rates of both the trophic forms and asci. Echinocandins, on the other hand, block the construction of the cellular wall of the asci. Therefore, this family of drugs was assumed to reduce the level of asci by promoting their death as well as inhibiting their formation (Fig. 2 and Table 5). The EC50 and maximal effect of each drug were estimated from the levels of asci and trophic forms following treatment with different drugs. Since the ratio of TMP-SMX was fixed to be 1:5 in the data constraining our QSP model, we simplified the model by using the level of SMX as a reasonable proxy for this drug combination. To account for the drug effects on Pneumocystis in the QSP models, we replaced the constant parameters of the PD modules (ks, kdTro, kdAsci and kTA) with corresponding functions of the levels of drugs (vsTro, vdTro, vdAsci, and vTA; Table 5). In the presence of SMX, the death rates of both the trophic forms and asci are enhanced and described with the following equation: $$ v_{dTro} = k_{dTro} \cdot \left(1 + ME_{Tro} \cdot \frac{SMX_{eff}^{\,n}}{SMX_{eff}^{\,n} + EC50_{SMX}^{\,n}}\right) $$ where kdTro represents the basal death rate of the trophic form, METro is the maximal effect of SMX, SMXeff is the effective level of SMX, EC50SMX is the half maximal effective concentration of SMX, and n is the Hill coefficient. A similar equation is used to calculate vdAsci, the death rate of the asci when exposed to SMX. The presence of SMX also leads to repression of trophic form proliferation, which is described with the following equation: $$ v_{sTro} = k_{s} \cdot \left(1 - \frac{SMX_{eff}^{\,n}}{SMX_{eff}^{\,n} + EC50_{SMX}^{\,n}}\right) $$ where ks represents the basal proliferation rate of the trophic form. The presence of echinocandins inhibits the asci specifically. To incorporate their effect, we assume that the transformation of trophic forms to asci is inhibited and that the death rate of the asci is enhanced. The enhanced death of asci is modeled with an equation similar to the one described above.
The inhibition of asci formation is described with the following equation: $$ v_{TA} = k_{TA} \cdot \left(1 - \frac{Echi_{pla}^{\,n}}{Echi_{pla}^{\,n} + EC50_{Echi}^{\,n}}\right) $$ where kTA represents the normal rate of asci formation, while Echipla is the current plasma level of echinocandin. The ordinary differential equations were simulated with the mathematical software XPPAUT, which is freely available at http://www.math.pitt.edu/~bard/xpp/xpp.html. The experimental and simulated data were then visualized using MATLAB from Mathworks (https://www.mathworks.com). Measuring Pneumocystis numbers in mice: The Pneumocystis number is commonly estimated in two ways: reverse transcriptase quantitative PCR (RT-qPCR) or microscopic quantification. For all mouse studies, 6-week-old male C3H/HeN mice were used. These came from the animal supply company Charles River (https://www.criver.com). For RT-qPCR, Pneumocystis-infected mice were euthanized at regular intervals by CO2 exposure until cessation of breathing, and the lungs were flash frozen, followed by RNA extraction and cDNA synthesis. Pneumocystis mitochondrial large subunit ribosomal RNA was then quantified by TaqMan assay. The threshold cycle for each sample was identified as the point at which the fluorescence generated by degradation of the TaqMan probe increased significantly above the baseline. To convert the threshold cycle to Pneumocystis nuclei number, a standard curve was generated using cDNA made from RNA isolated from 10^7 Pneumocystis nuclei. The level of infection for each sample was estimated using the standard curve [33]. Although accurate, this technique cannot distinguish between the trophic forms and asci of Pneumocystis. For microscopic quantification, performed using a Nikon Eclipse E600 microscope, lungs from Pneumocystis-infected mice were isolated and stained with cresyl echt violet, a dye that selectively binds to the asci of the fungi. A rapid version of the Wright-Giemsa stain was used to enumerate the nuclei of all life cycle stages [34]. In contrast to RT-qPCR, microscopic quantification allows for the distinction between the trophic forms and asci. Though these methods rely on different techniques, the time scale characterizing Pneumocystis is independent of the method used. This common time scale facilitated the construction of the current PD modules with both literature-reported numbers of trophic forms and asci and novel experimental results using the RT-qPCR method. The constructed PK module was validated against independent data: The equations and parameter values of the constructed PK modules were reported in Table 1, with each drug characterized by a different set of parameter values. These parameter values were estimated using data reported in the literature (Table 2). For example, Gumbo et al. measured the plasma concentration of anidulafungin following a single 10 mg/kg i.p. injection [35], which we used to estimate the PK parameters for a three-compartment pharmacokinetic model of anidulafungin (Fig. 5a). After estimation, the PK parameters were used to simulate further experimental scenarios with either i.p. or i.v. administration of anidulafungin, and the results were compared to the respective data sets (Fig. 5a).
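As a companion sketch, the two-stage PD module and the Hill-type drug-effect functions described above can be written down as follows. Every numeric value here is a placeholder rather than a value from Tables 4 and 5; the asci death term reuses a single maximal-effect constant for brevity; and the effective drug levels are passed in as constants, whereas in the full QSP model they come from the PK module.

```python
from scipy.integrate import solve_ivp

# Hypothetical basal rates and drug-effect constants (placeholders for Tables 4-5).
p = dict(KsTro=0.5, KdTro=5e-8, KTA=0.05, KAT=0.01, KdAsci=0.05,
         ME=3.0, EC50_SMX=10.0, EC50_Echi=1.0, n=2.0)

def hill(x, ec50, n):
    """Standard Hill function used for all drug effects."""
    return x**n / (x**n + ec50**n)

def pd_rhs(t, y, p, smx_eff, echi_pla):
    """y = [Tro, Asci]; smx_eff and echi_pla are effective drug levels, which in
    the full QSP model would be supplied by the PK module rather than constants."""
    tro, asci = y
    v_sTro = p["KsTro"] * (1.0 - hill(smx_eff, p["EC50_SMX"], p["n"]))            # repressed proliferation
    v_dTro = p["KdTro"] * (1.0 + p["ME"] * hill(smx_eff, p["EC50_SMX"], p["n"]))  # enhanced trophic death
    v_TA = p["KTA"] * (1.0 - hill(echi_pla, p["EC50_Echi"], p["n"]))              # inhibited asci formation
    # Asci death enhanced by both SMX and echinocandins (same maximal effect reused for brevity).
    v_dAsci = p["KdAsci"] * (1.0 + p["ME"] * (hill(smx_eff, p["EC50_SMX"], p["n"])
                                              + hill(echi_pla, p["EC50_Echi"], p["n"])))
    d_tro = v_sTro * tro - v_dTro * tro * tro - v_TA * tro + p["KAT"] * asci      # second-order trophic decay
    d_asci = v_TA * tro - p["KAT"] * asci - v_dAsci * asci                        # first-order asci decay
    return [d_tro, d_asci]

# Untreated growth over 56 days from a small inoculum (both drug levels set to zero).
sol = solve_ivp(pd_rhs, (0.0, 56.0), [1e4, 1e3], args=(p, 0.0, 0.0), max_step=0.5)
```

With the placeholder rates above, the untreated trophic population saturates at roughly KsTro/KdTro, which is why a carrying-capacity-like plateau appears even though only a quadratic death term, not an explicit logistic term, is used.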
Because these additional data sources were not used for the initial parameter estimation, the consistency between the model simulation and these additional data sources served as a validation of the estimated parameters for the anidulafungin PK model. The temporal simulations of the PK modules were consistent with diverse experimental data. The temporal simulations of the plasma concentrations of anidulafungin (a), caspofungin (b), micafungin (c) and SMX (d) were compared to relevant experimental data. The black dots and black solid curves represent the construction data and corresponding model simulations; the colored dots and colored dashed curves represent the validation data and corresponding simulations. The data sources are detailed in Table 2. The colors in each panel were used to indicate different administration methods and dosages. In a, blue, i.v. of 1 mg/kg; magenta, green and red, i.p. of 80 mg/kg, 20 mg/kg and 5 mg/kg respectively. In b, blue and magenta, i.v. of 0.5 mg/kg and 5 mg; red, cyan and green, i.p. of 1 mg/kg, 5 mg/kg and 80 mg/kg. In c, blue, red and green, i.v. of 0.32 mg/kg, 1 mg/kg and 3.2 mg/kg; cyan and magenta, i.p. of 5 mg/kg and 80 mg/kg. In d, blue, oral of 50 mg/kg. In a similar fashion, the parameters for PK models of caspofungin, micafungin, and SMX were estimated and validated with different literature sources (Fig. 5b-d). The PK modules predict novel PK profiles: Once constructed and validated, our PK modules may serve as convenient tools to predict the plasma level of each drug following more than a single dose. To illustrate this potential, we used the PK modules to predict the plasma levels of four different drugs following the reported treatment regimens [33]. Here, three drugs from the echinocandin family (anidulafungin, caspofungin and micafungin) were administered via i.p. injection and a fourth drug, TMP-SMX, was administered orally [33]. Given the reported dosage of each treatment, we estimated the expected increases of each drug in the administration compartment, which served as the in silico drug dosage (Table 3). Following the experimental dosing regimen reported by Cushion et al., the level of each drug in the administration compartment was elevated three times a week for 3 weeks [33]. The simulated plasma levels of echinocandins within one week and of SMX within three weeks are shown in Fig. 6. Table 3 Estimated initial AC concentrations of echinocandins and SMX for model prediction. The temporal drug profiles predicted by the PK modules. a, b, c and d show the predicted plasma levels of anidulafungin, caspofungin, micafungin and SMX when administered 3 times/week. The different dosages of anidulafungin, caspofungin, micafungin (in mg/kg) are labelled in each panel; the SMX dosage is 200 mg/kg. Compared with traditional pharmacokinetic indexes such as area under the curve, the temporal predictions from the PK modules capture temporal dynamics of the applied drugs that might play a significant role in determining drug effectiveness [36]. When additional PK data are available, these data can be used to further refine the PK modules and reduce the need for repeated PK measurements. The constructed PD modules were consistent with multiple experimental observations: After the PD wiring diagram (Fig. 2) was converted into ODEs (details in Methods), the parameters of the module were estimated with currently available data (Table 4). In order to check whether the estimated parameters are reasonable, the temporal simulations of the PD module were compared to these experimental observations.
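The "three doses per week" predictions described here can be reproduced schematically by restarting the integration at each dosing event, which is the simplest way to handle a bolus into the administration compartment. The sketch below reuses pk_rhs and params from the PK sketch earlier; the dose size, the day-0/2/4 schedule, and the time units are illustrative assumptions, not the regimen-specific values of Table 3.

```python
import numpy as np
from scipy.integrate import solve_ivp

dose_ac = 90.0                                   # assumed per-dose bolus into the AC
dose_days = [w * 7 + d for w in range(3) for d in (0, 2, 4)]   # e.g. Mon/Wed/Fri for 3 weeks

t_all, plasma_all = [], []
y = np.array([0.0, 0.0, 0.0])
boundaries = dose_days + [dose_days[-1] + 7]     # integrate one extra week after the last dose
for t0, t1 in zip(boundaries[:-1], boundaries[1:]):
    y[0] += dose_ac                              # dosing event: raise the administration compartment
    seg = solve_ivp(pk_rhs, (float(t0), float(t1)), y, args=(params,), max_step=0.1)
    t_all.append(seg.t)
    plasma_all.append(seg.y[1])
    y = seg.y[:, -1].copy()                      # carry the final state into the next interval

t_all = np.concatenate(t_all)
plasma_all = np.concatenate(plasma_all)          # predicted plasma profile across the regimen
```

Stitching the segments together in this way yields the sawtooth-like accumulation and washout profile that a repeated-dosing prediction of this kind would display.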
Table 4 The equations and parameters of the PD modules. By specifically targeting the asci of the fungi, administration of the anti-fungal drug anidulafungin can result in a state with a low level of asci and a high level of trophic forms. Starting from this initial state, and in the absence of any drug treatment, it takes several weeks for asci to repopulate [33]. The time range of this recovery was consistent with a number of temporal simulations of the PD module (Fig. 3a) with initial conditions that mimicked this experimental scenario. The consistency between time frames suggests that the estimated rates characterizing the transformation from the trophic form to asci (KTA) fall within a biologically reasonable scale. In addition to literature-reported data, we have also experimentally determined the total number of P. murina nuclei within infected and immunosuppressed mice (red dots, Fig. 3b). The initial growth of the organism was very slow within the first two weeks; however, starting from the third week, exponential growth of Pneumocystis was observed, which peaked at the end of the fifth week. The experimentally determined nuclei count was then compared with a population of simulated temporal curves of Pneumocystis accumulation (Fig. 3b). The model simulations recaptured the slow initial accumulation of the Pneumocystis, the rapid, exponential growth of the organism, and the steady state level following the exponential peak. The consistency between these experimental data and the temporal simulations suggests that the model assumption of rapid Pneumocystis growth is indeed reasonable. The level of Pneumocystis begins to decrease near the end of the experiment (red outlier, Fig. 3b). This is likely due to depletion of nutrients or overcrowding. Since these mechanisms have not been incorporated into the current model, it is not surprising that the model simulations fail to recapture the observed decrease. Furthermore, the simulated distributions of the trophic forms (Fig. 3c) and asci (Fig. 3d) are consistent with the observed levels of the fungi (7.62 ± 0.17 for trophic forms and 7.79 ± 0.13 for asci) [33]. The agreement between the experimentally determined Pneumocystis levels and those simulated with the model suggests that the assumed ratios between proliferation and decay (ratio between KsTro and KdTro) and transformation (ratio between KTA and KAT) are reasonable. In order to incorporate variability, all model parameters are changed independently. Quantitative systems pharmacology model construction and validation: By integrating the PK modules and the PD module, the QSP model can describe the changes of asci and trophic forms following treatment for a population of models. Modules were integrated by adjusting the parameters that control the growth and death of the cyst form (for the echinocandin family of drugs, Fig. 2), or the death rates of the trophic and cyst forms along with the growth of the trophic form (for TMP/SMX treatment, Fig. 2). Details of the integration procedure can be found in the Methods. Following the experimental setting as reported by Cushion et al., each drug in the model was administered 3 times per week for 3 weeks [33]. The simulated levels of asci at day 56 were then compared to the experimental observations from Cushion et al. (Fig. 4a). At a dose of 1 mg/kg, treatment with all three echinocandins (anidulafungin, caspofungin and micafungin) considerably reduced asci burdens.
At lower doses (0.5 and 0.1 mg/kg), anidulafungin and caspofungin still decreased the number of asci, while micafungin caused no notable decrease in the levels of asci (Fig. 4a). In contrast to the dramatic reductions in asci, the simulated trophic forms were not meaningfully altered following treatment with any of the echinocandins (Fig. 4b). The model showed a marked decrease in both asci and trophic forms in response to TMP/SMX treatment (Fig. 4a & b). These simulated results were consistent with the experimental observations [33], indicating that our integrated QSP models are reasonable in describing the therapeutic effects of the echinocandin family of drugs and those of TMP/SMX. With the constructed QSP models, we then simulated the temporal changes of asci and trophic forms prior to and after anidulafungin treatment (Fig. 4c). Prior to drug administration, the simulated accumulation of both trophic forms and asci is consistent with experimental data collected in the absence of drugs, as elaborated in the description of the PD modules above. At about 35 days, the levels of both trophic forms and asci reached a steady state of about 10^7, in agreement with the experimental data (Fig. 4c). Following anidulafungin treatment (starting at day 35), the level of asci decreased dramatically while the level of trophic forms remained constant. These simulated responses to anidulafungin were consistent with the corresponding experimental data from our lab (Fig. 4c). When compared to anidulafungin, treatment with TMP-SMX decreased the levels of both asci and trophic forms. However, in comparison with the rapid antifungal effect of anidulafungin, the experimental evidence suggests that the effect of TMP-SMX was delayed. This time delay was incorporated into our QSP model (Table 5), and the simulated responses of trophic form and asci levels (Fig. 4d) were consistent with the corresponding experimental data (Fig. 4d). In summary, the QSP models serve as a reasonable tool to describe the temporal dynamics of Pneumocystis upon treatment with either the echinocandin class of antifungals or TMP/SMX. Table 5 Integrating the PK modules and PD modules into QSP models. In this work, we developed a QSP model to simulate how the numbers of Pneumocystis are altered by commercially available echinocandins and TMP-SMX. In addition to describing the temporal dynamics of these drugs, this novel QSP model also incorporated two different life cycle stages of the infecting fungi. Since the different life stages are presumably conserved in a broad range of hosts, the QSP model would be useful for studying Pneumocystis infections in a number of hosts, including humans. QSP modeling, which integrates knowledge from pharmacology and systems biology, is emerging as a powerful approach in pharmaceutical development [37, 38]. Encouragingly for the QSP community, QSP modeling has aided in studying the dosing regimens of a new biologic, NATPARA, in the regulatory domain [31]. In particular, QSP modeling has been useful in aiding the treatment of infectious diseases, such as tuberculosis, where it has been used for dose optimization of anti-tuberculosis drugs [28,29,30]. Moreover, QSP models have shown great promise as powerful quantitative tools to study the dosing regimen for novel pharmaceutical compounds [31]. Thus, it is worthwhile to carefully evaluate the power as well as the limitations of QSP modeling.
The benefits of QSP modelling originate from its ability to integrate all available knowledge and data to predict the effect of novel treatment regimens. In this way, the modeling provides some guidance for choosing effective strategies and avoiding plans that might have little chance for success. In doing so, QSP combines traditional PK/PD modeling with systems biological modeling and provides a more comprehensive picture than single indices such as steady state AUC [39]. In order to generate faithful predictions, both the PK and PD portions of the QSP models must be carefully constructed and independently validated. For the current QSP model, the PK module has been well constrained with the abundant data available in the literature; however, the PD module needs to be further validated with additional dynamic data of the asci and trophic forms following treatment with different drugs, as well as dynamic data of the growth of the organism prior to treatment. These additional data sources will either validate the model's current parameter settings or allow for further refinement of the parameters. The complexity and scope of the current model aim to achieve a balance between incorporation of mechanistic details and constraint by currently available data. When additional details become available, the current PD module can be expanded to include a more detailed description of the Pneumocystis life stages, while the PK module can be expanded to incorporate additional compartments, such as a lung compartment. Furthermore, the model can be tailored to investigate additional drugs such as atovaquone or clindamycin-primaquine. The current model, constrained with data collected in mice, promises to serve as a useful framework to understand and predict the growth, death and drug response of Pneumocystis in human patients, assuming the conservation of Pneumocystis life stages between species. Such predictions of Pneumocystis levels in humans, being orthogonal to the observed symptoms, will provide valuable insight for clinicians to understand the progression of the infection as well as its response to treatment. It was highlighted that the original article [1] contained errors in the figures and their legends and, by extension, the in-text figure citations. This Correction article shows the correct figures and correct figure legends. Abbreviations: AC: Administration compartment; BG: β-1,3-D-glucan; i.p.: Intraperitoneal injection; i.v.: Intravenous injection; ODEs: Ordinary differential equations; p.o.: Oral administration; PCP: Pneumocystis pneumonia; PD: Pharmacodynamic; PK: Pharmacokinetic; QSP: Quantitative Systems Pharmacological; RT-qPCR: Reverse transcriptase quantitative PCR; TMP-SMX: Trimethoprim-sulfamethoxazole. Huang YS, Yang JJ, Lee NY, Chen GJ, Ko WC, Sun HY, Hung CC. Treatment of pneumocystis jirovecii pneumonia in HIV-infected patients: a review. Expert Rev Anti-Infect Ther. 2017;15(9):873–92. Liu Y, Su L, Jiang SJ, Qu H. Risk factors for mortality from pneumocystis carinii pneumonia (PCP) in non-HIV patients: a meta-analysis. Oncotarget. 2017;8(35):59729–39. Luraschi A, Cisse OH, Pagni M, Hauser PM. Identification and functional ascertainment of the pneumocystis jirovecii potential drug targets Gsc1 and Kre6 involved in glucan synthesis. J Eukaryot Microbiol. 2017;64(4):481–90. Walzer PD, Evans HE, Copas AJ, Edwards SG, Grant AD, Miller RF. Early predictors of mortality from pneumocystis jirovecii pneumonia in HIV-infected patients: 1985-2006. Clin Infect Dis. 2008;46(4):625–33. Huang L, Cattamanchi A, Davis JL, den Boon S, Kovacs J, Meshnick S, Miller RF, Walzer PD, Worodria W, Masur H, et al.
HIV-associated pneumocystis pneumonia. Proc Am Thorac Soc. 2011;8(3):294–300. Rabodonirina M, Vaillant L, Taffe P, Nahimana A, Gillibert RP, Vanhems P, Hauser PM. Pneumocystis jirovecii genotype associated with increased death rate of HIV-infected patients with pneumonia. Emerg Infect Dis. 2013;19(1):21–8. quiz 186 Nahimana A, Rabodonirina M, Bille J, Francioli P, Hauser PM. Mutations of pneumocystis jirovecii dihydrofolate reductase associated with failure of prophylaxis. Antimicrob Agents Chemother. 2004;48(11):4301–5. Hauser PM, Macreadie IG. Isolation of the pneumocystis carinii dihydrofolate synthase gene and functional complementation in Saccharomyces cerevisiae. FEMS Microbiol Lett. 2006;256(2):244–50. Beck JM, Cushion MT. Pneumocystis workshop: 10th anniversary summary. Eukaryot Cell. 2009;8(4):446–60. Weiss LM, Cushion MT, Didier E, Xiao L, Marciano-Cabral F, Sinai AP, Matos O, Calderon EJ, Kaneshiro ES. The 12th international workshops on opportunistic Protists (IWOP-12). J Eukaryot Microbiol. 2013;60(3):298–308. Calderon EJ, Cushion MT, Xiao L, Lorenzo-Morales J, Matos O, Kaneshiro ES, Weiss LM. The 13th international workshops on opportunistic Protists (IWOP13). J Eukaryot Microbiol. 2015;62(5):701–9. Skold O. Sulfonamide resistance: mechanisms and trends. Drug Resist Updat. 2000;3(3):155–60. Huang L, Crothers K, Atzori C, Benfield T, Miller R, Rabodonirina M, Helweg-Larsen J. Dihydropteroate synthase gene mutations in pneumocystis and sulfa resistance. Emerg Infect Dis. 2004;10(10):1721–8. Nahimana A, Rabodonirina M, Zanetti G, Meneau I, Francioli P, Bille J, Hauser PM. Association between a specific pneumocystis jiroveci dihydropteroate synthase mutation and failure of pyrimethamine/sulfadoxine prophylaxis in human immunodeficiency virus-positive and -negative patients. J Infect Dis. 2003;188(7):1017–23. Nahimana A, Rabodonirina M, Francioli P, Bille J, Hauser PM. Pneumocystis jirovecii dihydrofolate reductase polymorphisms associated with failure of prophylaxis. J Eukaryot Microbiol. 2003;50(Suppl):656–7. Castro JG, Morrison-Bryant M. Management of Pneumocystis Jirovecii pneumonia in HIV infected patients: current options, challenges and future directions. HIV AIDS (Auckl). 2010;2:123–34. Artymowicz RJ, James VE. Atovaquone: a new antipneumocystis agent. Clin Pharm. 1993;12(8):563–70. Schlunzen F, Zarivach R, Harms J, Bashan A, Tocilj A, Albrecht R, Yonath A, Franceschi F. Structural basis for the interaction of antibiotics with the peptidyl transferase Centre in eubacteria. Nature. 2001;413(6858):814–21. Powles MA, Liberator P, Anderson J, Karkhanis Y, Dropinski JF, Bouffard FA, Balkovec JM, Fujioka H, Aikawa M, McFadden D, et al. Efficacy of MK-991 (L-743,872), a semisynthetic pneumocandin, in murine models of pneumocystis carinii. Antimicrob Agents Chemother. 1998;42(8):1985–9. Letscher-Bru V, Herbrecht R. Caspofungin: the first representative of a new antifungal class. J Antimicrob Chemother. 2003;51(3):513–21. Espinel-Ingroff A. Novel antifungal agents, targets or therapeutic strategies for the treatment of invasive fungal diseases: a review of the literature (2005-2009). Rev Iberoam Micol. 2009;26(1):15–22. Foye WO, Lemke TL, Williams DA. Foye's principles of medicinal chemistry. 7th ed. Philadelphia: Wolters Kluwer Health/Lippincott Williams & Wilkins; 2013. Patel N, Koziel H. Pneumocystis jiroveci pneumonia in adult patients with AIDS: treatment strategies and emerging challenges to antimicrobial therapy. Treat Respir Med. 2004;3(6):381–97. 
Thomas M, Rupali P, Woodhouse A, Ellis-Pegler R. Good outcome with trimethoprim 10 mg/kg/day-sulfamethoxazole 50 mg/kg/day for pneumocystis jirovecii pneumonia in HIV infected patients. Scand J Infect Dis. 2009;41(11–12):862–8. Lobo ML, Esteves F, de Sousa B, Cardoso F, Cushion MT, Antunes F, Matos O. Therapeutic potential of caspofungin combined with trimethoprim-sulfamethoxazole for pneumocystis pneumonia: a pilot study in mice. PLoS One. 2013;8(8):e70619. Agoram BM, Demin O. Integration not isolation: arguing the case for quantitative and systems pharmacology in drug discovery and development. Drug Discov Today. 2011;16(23–24):1031–6. Knight-Schrijver VR, Chelliah V, Cucurull-Sanchez L, Le Novere N. The promises of quantitative systems pharmacology modelling for drug development. Comput Struct Biotechnol J. 2016;14:363–70. Lyons MA, Reisfeld B, Yang RS, Lenaerts AJ. A physiologically based pharmacokinetic model of rifampin in mice. Antimicrob Agents Chemother. 2013;57(4):1763–71. Lyons MA. Computational pharmacology of rifampin in mice: an application to dose optimization with conflicting objectives in tuberculosis treatment. J Pharmacokinet Pharmacodyn. 2014;41(6):613–23. Lyons MA, Lenaerts AJ. Computational pharmacokinetics/pharmacodynamics of rifampin in a mouse tuberculosis infection model. J Pharmacokinet Pharmacodyn. 2015;42(4):375–89. Peterson MC, Riggs MM. FDA advisory meeting clinical pharmacology review utilizes a quantitative systems pharmacology (QSP) model: a watershed moment? CPT Pharmacometrics Syst Pharmacol. 2015;4(3):e00020. Clewe O, Aulin L, Hu Y, Coates AR, Simonsson US. A multistate tuberculosis pharmacometric model: a framework for studying anti-tubercular drug effects in vitro. J Antimicrob Chemother. 2016;71(4):964–74. Cushion MT, Linke MJ, Ashbaugh A, Sesterhenn T, Collins MS, Lynch K, Brubaker R, Walzer PD. Echinocandin treatment of pneumocystis pneumonia in rodent models depletes cysts leaving trophic burdens that cannot transmit the infection. PLoS One. 2010;5(1):e8524. Cushion MT, Walzer PD, Ashbaugh A, Rebholz S, Brubaker R, Vanden Eynde JJ, Mayence A, Huang TL. In vitro selection and in vivo efficacy of piperazine- and alkanediamide-linked bisbenzamidines against pneumocystis pneumonia in mice. Antimicrob Agents Chemother. 2006;50(7):2337–43. Gumbo T, Drusano GL, Liu W, Ma L, Deziel MR, Drusano MF, Louie A. Anidulafungin pharmacokinetics and microbial response in neutropenic mice with disseminated candidiasis. Antimicrob Agents Chemother. 2006;50(11):3695–700. Lakota EA, Bader JC, Ong V, Bartizal K, Miesel L, Andes DR, Bhavnani SM, Rubino CM, Ambrose PG, Lepak AJ. Pharmacological basis of CD101 efficacy: exposure shape matters. Antimicrob Agents Chemother. 2017;61(11) Leil TA, Bertz R. Quantitative systems pharmacology can reduce attrition and improve productivity in pharmaceutical research and development. Front Pharmacol. 2014;5:247. Leil TA, Ermakov S. Editorial: the emerging discipline of quantitative systems pharmacology. Front Pharmacol. 2015;6:129. Wang Y, Bhattaram AV, Jadhav PR, Lesko LJ, Madabushi R, Powell JR, Qiu W, Sun H, Yim DS, Zheng JJ, et al. Leveraging prior quantitative knowledge to guide drug development decisions and regulatory science recommendations: impact of FDA pharmacometrics during 2004-2006. J Clin Pharmacol. 2008;48(2):146–56. Andes D, Diekema DJ, Pfaller MA, Bohrmuller J, Marchillo K, Lepak A. In vivo comparison of the pharmacodynamic targets for echinocandin drugs against Candida species. Antimicrob Agents Chemother. 
2010;54(6):2497–506. Misiek M, Buck RE, Pursiano TA, Chisholm DR, Tsai YH, Price KE, Leitner F. Antibacterial activity of phosphanilic acid, alone and in combination with trimethoprim. Antimicrob Agents Chemother. 1985;28(6):761–5. Andes D, Diekema DJ, Pfaller MA, Prince RA, Marchillo K, Ashbeck J, Hou J. In vivo pharmacodynamic characterization of anidulafungin in a neutropenic murine candidiasis model. Antimicrob Agents Chemother. 2008;52(2):539–50. Hajdu R, Thompson R, Sundelof JG, Pelak BA, Bouffard FA, Dropinski JF, Kropp H. Preliminary animal pharmacokinetics of the parenteral antifungal agent MK-0991 (L-743,872). Antimicrob Agents Chemother. 1997;41(11):2339–44. Icenhour CR, Kottom TJ, Limper AH. Evidence for a melanin cell wall component in pneumocystis carinii. Infect Immun. 2003;71(9):5360–3. This study was funded by institutional support to TZ and a Merit Award from the United States Veterans Affairs (I01BX000523) to MC. Dr. Cushion is the recipient of a Research Career Scientist Award from the Department of Veterans Affairs. All data generated or analyzed during this study are included in this published article and its supplementary information files. Guan-Sheng Liu and Richard Ballweg contributed equally to this work. Department of Pharmacology and Systems Physiology, College of Medicine, University of Cincinnati, 231 Albert Sabin Way, Cincinnati, OH, 45267-0576, USA Guan-Sheng Liu, Richard Ballweg, Joseph Facciolo & Tongli Zhang Department of Internal Medicine, College of Medicine, University of Cincinnati, Cincinnati, OH, USA Alan Ashbaugh & Melanie T. Cushion Division of Biostatistics and Epidemiology, Cincinnati Children's Hospital Medical Center, Cincinnati, OH, USA Yin Zhang Guan-Sheng Liu Richard Ballweg Alan Ashbaugh Joseph Facciolo Melanie T. Cushion Tongli Zhang GSL, RB, MC and TZ designed the study; AA and MC carried out the experimental work; GSL, RB, YZ, and JF analyzed and interpreted the data; all authors contributed to writing the manuscript; all authors read and approved the final manuscript. Correspondence to Tongli Zhang. The animal protocols used for this study were reviewed and approved by the University of Cincinnati's IACUC committee and the Cincinnati Veterans Affairs Medical Center IACUC; protocols UC 12–05–03-01 and ACORP#15–02–25-01, respectively. Both committees adhere to the 8th edition of the "Guide for the Care and Use of Laboratory Animals" and both are AAALAC accredited. Liu, GS., Ballweg, R., Ashbaugh, A. et al. A quantitative systems pharmacology (QSP) model for Pneumocystis treatment in mice. BMC Syst Biol 12, 77 (2018). https://doi.org/10.1186/s12918-018-0603-9 Pneumocystis - systems biology - quantitative systems pharmacology
The effect of preoperative chemotherapy on liver regeneration after portal vein embolization/ligation or liver resection in patients with colorectal liver metastasis: a systematic review protocol. Mihai-Calin Pavel (ORCID: orcid.org/0000-0003-2370-6842), Raquel Casanova, Laia Estalella, Robert Memba, Erik Llàcer-Millán, Mar Achalandabaso, Elisabet Julià, Justin Geoghegan & Rosa Jorba. Systematic Reviews volume 9, Article number: 279 (2020). Liver resection (LR) in patients with liver metastasis from colorectal cancer remains the only curative treatment. Perioperative chemotherapy improves the prognosis of these patients. However, there are concerns regarding the effect of preoperative chemotherapy on liver regeneration, which is a key event in avoiding liver failure after LR. The primary objective of this systematic review is to assess the effect of neoadjuvant chemotherapy on liver regeneration after LR or portal vein embolization (PVE) in patients with liver metastasis from colorectal cancer. The secondary objectives are to evaluate the impact of the type of chemotherapy, number of cycles, and time between the end of treatment and the procedure (LR or PVE), and to investigate whether there is an association between the degree of hypertrophy and postoperative liver failure. This meta-analysis will include studies reporting liver regeneration rates in patients submitted to LR or PVE. PubMed, Scopus, Web of Science, Embase, and Cochrane databases will be searched. Only studies comparing neoadjuvant vs no chemotherapy, or comparing chemotherapy characteristics (bevacizumab administration, number of cycles, and time from finishing chemotherapy until intervention), will be included. We will select studies from 1990 to present. Two researchers will individually screen the identified records, according to a list of inclusion and exclusion criteria. The primary outcome will be the future liver remnant regeneration rate. Risk of bias of the studies will be evaluated with the ROBINS-I tool, and the quality of evidence for all outcomes will be determined with the GRADE system. The data will be registered in a predesigned database. If the selected studies are sufficiently homogeneous, we will perform a meta-analysis of reported results. In the event of substantial heterogeneity, a qualitative systematic review will be performed. The results of this systematic review may help to better identify the patients affected by liver metastasis who could present low regeneration rates after neoadjuvant chemotherapy. These patients are at risk of developing liver failure after extended hepatectomies and are therefore not good candidates for such aggressive procedures. PROSPERO registration number: CRD42020178481 (July 5, 2020). Colorectal cancer is the fourth most frequently diagnosed cancer and the second leading cause of cancer-related death [1]. More than 50% of the patients diagnosed with colorectal cancer will develop metastases in the course of their disease [2]. In 20–30% of these patients, the metastases will be confined exclusively to the liver (CRCLM) [3]. To date, liver resection (LR) remains the only curative option for patients with CRCLM [2, 4, 5], with survival rates that may reach 50% and 26% at 5 and 10 years, respectively [6]. The importance of complete treatment of all liver disease is reflected by the fact that up to 97% of 10-year survivors do not develop recurrence after CRCLM resection [7]. However, more than 80% of CRCLM patients will have unresectable disease at the time of diagnosis [2, 8, 9].
Several prospective trials have shown encouraging results for preoperative chemotherapy, with conversion rates to resectable disease of 12.5–60%, depending on tumor biology and the type of regimen used [10,11,12,13,14]. Furthermore, current guidelines recommend preoperative chemotherapy for the majority of CRCLM patients with resectable disease [4, 5]. The justification for this type of recommendation is to lower the probability of microscopic disease, to test the response to the treatment, and to identify the patients with aggressive disease in whom resection would not be indicated [15]. Therefore, the majority of CRCLM patients who reach LR will have received some form of neoadjuvant chemotherapy. One of the most important complications after LR is liver failure, which leads to a higher probability of major postoperative complications and death [16]. Major post-LR complications are usually associated with a significant increase in hospitalization time and higher postoperative costs [17, 18]. Current data demonstrate a correlation between liver failure and the extent of LR, highlighting the importance of planning a sufficient future liver remnant (FLR) (i.e., the volume of liver to be preserved after LR) when undertaking liver resection [19, 20]. In the context of CRCLM, in order to avoid liver failure, a minimum FLR of 30% is recommended [19, 21, 22]. For smaller FLRs, strategies for manipulating liver volume may be used, such as portal vein embolization (PVE), two-stage hepatectomy, or associating liver partition and portal vein ligation for staged hepatectomy (ALPPS) [19, 22, 23]. Preoperative chemotherapy may cause liver histological changes, such as sinusoidal obstruction syndrome (SOS) and non-alcoholic steatohepatitis (NASH) [21]. SOS has been associated with oxaliplatin regimens, while NASH is described particularly with irinotecan-based chemotherapy [24, 25]. Both syndromes may cause an increase in postoperative complication rates, although NASH in particular has been associated with higher rates of postoperative liver failure [21, 24]. One of the key events in the liver response to injury (i.e., LR) is the occurrence of regeneration or hypertrophy. At a cellular level, this process is more accurately described as a compensatory hyperplasia, given that the remaining liver tissue expands in order to meet the organism's requirements [26]. In healthy livers, regeneration restores liver volume to more than 80% of the preoperative value 3 months after major LR [27]. However, several factors may impair adequate hepatic regeneration. The majority of these are also associated with postoperative liver failure: steatosis, fibrosis or cirrhosis, obstructive cholestasis, ischemia, etc. [19, 28]. Since preoperative chemotherapy causes proven histological changes in the liver, a link between neoadjuvant treatment and insufficient hypertrophy of the liver remnant may exist. However, to date, the available data remain controversial. Details about this matter are offered in the subchapter "How the intervention might work?". Current US and European guidelines establish the preferred neoadjuvant chemotherapy depending on the liver disease characteristics [4, 5]. However, there is a great degree of variability in the indications for neoadjuvant chemotherapy and in the type of treatment used, depending on each hospital's local protocol.
For this reason, definitive conclusions concerning the oncological results, as well as the occurrence of postoperative complications related to the chemotherapy, are difficult to obtain outside randomized controlled trials or systematic reviews. The effect of neoadjuvant chemotherapy on postoperative liver regeneration is subject to the same variability and remains uncertain. How the intervention might work: The effect of neoadjuvant chemotherapy on liver regeneration may be considered from several different perspectives. First, neoadjuvant chemotherapy can cause histological changes in the liver which may impair regeneration after surgery or embolization. As shown by several studies, steatosis and NASH are related to preoperative chemotherapy [21, 24, 25]. Liver steatosis reduces hypertrophy after major hepatectomy in animal models [29]. In addition, some publications have shown lower regeneration rates after major hepatectomy in obese patients [30]. Furthermore, there are studies that have demonstrated a direct correlation between lobular inflammation or fibrosis and liver regeneration rate [31]. Other authors mention the association of SOS with lower liver regeneration rates and increased indicators of liver failure [32]. However, the deleterious effect of neoadjuvant chemotherapy on liver regeneration is probably less evident in patients submitted to minor hepatectomies, since in these cases the percentage change in future liver volume is lower than after major hepatectomy [33]. Second, the impact of the number of preoperative chemotherapy cycles on liver regeneration is still not known. Some studies describe lower regeneration rates with more than six cycles of treatment [34], while other studies did not find such differences in the post-procedure hypertrophy rates [31]. Third, the time interval between completion of chemotherapy and the procedure could be important. Some studies report differences in the post-procedure regeneration rates in patients with less than 8 weeks of chemotherapy-free interval, especially when dealing with bevacizumab regimens [35]. However, other studies have failed to reproduce the same results [31]. Finally, the type of chemotherapy might be important. This debate is generally related to regimens containing bevacizumab. This molecule is a monoclonal antibody that targets vascular endothelial growth factor (VEGF) [36]. Its addition to classical chemotherapy regimens has shown an improved response and prolonged disease-free survival rates in patients with initially unresectable disease [37]. Concerns about the possible deleterious effect of bevacizumab on liver regeneration have been raised after experimental studies showed that neutralization of VEGF inhibited the proliferative activity of hepatocytes [38]. However, this effect in human patients is still debated [35]. This systematic review will study the effect of neoadjuvant chemotherapy and of its characteristics (type, number of cycles, and time from the end of treatment until intervention) on liver regeneration. In accordance with current guidelines, our systematic review protocol was registered with the International Prospective Register of Systematic Reviews (PROSPERO) on July 5, 2020 (registration number CRD42020178481). The main objective is to assess the effects of neoadjuvant chemotherapy on liver regeneration after LR or PVE in patients with CRCLM when compared to patients without chemotherapy before the procedure (defined as LR or PVE).
Secondary objectives are as follows: (1) to evaluate the impact of type of chemotherapy, number of cycles, and time between the end of treatment and the procedure on liver regeneration after LR or PVE in patients with CRCLM; and (2) to assess the association between the liver hypertrophy rate (defined below in the "Outcomes" section) and the index of postoperative liver failure/liver dysfunction in patients with neoadjuvant chemotherapy. Study eligibility criteria: Study selection will be performed according to the PICO (Population, Intervention, Comparison and Outcomes) criteria of the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) described below [39] and detailed in Table 1: Preferred Reporting Items for Systematic Reviews and Meta-Analyses Protocols (PRISMA-P) checklist. Table 1 PRISMA-P 2015 Checklist. Randomized controlled trials (RCTs), including cluster RCTs, controlled (non-randomized) clinical trials (CCTs) or cluster trials, controlled before-after (CBA) studies, prospective and retrospective comparative cohort studies, and case-control or nested case-control studies will be included. Cluster randomized, cluster non-randomized, or CBA studies will be included only if there are at least two intervention sites and two control sites. We will include studies independently of geographic location or year of publication. We will accept unpublished material and abstracts from congresses. There will be no language restrictions. Cross-sectional studies, case series, case reports, systematic reviews, meta-analyses, and experimental studies on animals will be excluded. Editorials, letters, or commentaries will be excluded during the screening of titles and abstracts. We will include adult patients with CRCLM and with an indication for LR or PVE, irrespective of the number of lesions or their localization. At least two volumetric estimations of the FLR, one before and one after the procedure (LR or PVE), will be required for the inclusion of a study in the current review. The usual indication to perform PVE is an insufficient FLR. Accordingly, a separate analysis for LR and PVE indications will be performed. Type of interventions: In order to achieve the primary objective, the type of intervention to be taken into account will be neoadjuvant chemotherapy, irrespective of type, number of cycles, or other characteristics. According to the National Cancer Institute, neoadjuvant chemotherapy is defined as "treatment given as a first step to shrink a tumor before the main treatment, which is usually surgery" [40]. In order to achieve the secondary objectives, we will perform sub-analyses for the following types of interventions: (a) type of chemotherapy: addition of bevacizumab to the chemotherapy regimen; (b) number of cycles: the intervention will be administration of more than 6 cycles of neoadjuvant chemotherapy (depending on the data found in the selected studies, this number may be changed); and (c) time between the end of chemotherapy and the procedure (LR or PVE): the intervention will be considered when this interval is less than 8 weeks (depending on the analysis performed in the selected studies, this number may be changed). For the primary objective, the control group will include patients without chemotherapy submitted to the procedure (defined as LR or PVE). We will accept studies in which the control group contains patients with benign disease or other types of neoplasia. However, these studies will be carefully analyzed to detect possible selection bias.
For the secondary objectives, the control group can include patients with chemotherapy who do not meet the type of intervention mentioned above, patients without chemotherapy (similar to the primary objective), or both.

Primary outcomes

Liver regeneration/hypertrophy rate: future liver remnant regeneration rate (FLR3), calculated as follows:
$$ \mathrm{FLR}^{3} = \frac{\mathrm{FLR}_{f} - \mathrm{FLR}_{i}}{\mathrm{FLR}_{f}} $$
where FLRf is the final future liver remnant (after the procedure, LR or PVE) and FLRi is the initial future liver remnant (before the procedure). The timing of FLRf will depend on the data reported in the selected studies, but we expect mainly volumetric data from the first month after the procedure, given that most of the hypertrophy occurs in this interval.

Surrogate volumetric data:
Total liver volume changes
Changes in the volume of the liver to be resected (derived from pre- and post-procedure volumes in the case of PVE, and pre-procedure volumes and the weight of the resected liver in the case of LR)

Secondary outcomes

Postoperative liver failure or liver dysfunction. Accepted definitions of liver failure will be:
50–50 criteria as proposed by Balzan: the association of a prothrombin time of less than 50% and a serum bilirubin of more than 50 μmol/L on the 5th postoperative day [16]
Bilirubin peak of more than 120 μmol/L during the postoperative period of a major hepatectomy [41]
Grade B and C post-hepatectomy liver failure according to the International Study Group of Liver Surgery [42].

Electronic searches

Literature search strategies will be conducted using medical subject headings (MeSH) and text words related to the objectives of this systematic review. We will search the following electronic databases:
PubMed (1990 to present)
Scopus (1990 to present)
Web of Science (1990 to present)
Embase (1990 to present)
Cochrane Central Register of Controlled Trials (1996 to present)
The key words used to perform the search in each electronic database are as follows: (regeneration OR hypertrophy) AND chemotherapy AND (liver OR hepatic) AND (metastasis OR metastases OR metastatic OR secondary). The details of the search are shown in the Appendix. Before proceeding to write the manuscript draft, the literature search will be updated in order to identify any new publications relevant to the objectives of this systematic review.

Searching other resources

Reference lists of all primary studies and review articles will be manually searched for additional references. Authors of published studies selected for this review will be contacted if needed and asked to identify other published and unpublished studies. Errata or retractions of eligible studies will be searched for on PubMed, and the date this was done will be reported in the review.

Data collection and analysis

The literature search results will be loaded into a specially created Mendeley folder, in order to access titles and abstracts, and will be listed in a specially created Excel file with the following coding: included, not included, and 2nd look. Several methods will be used to identify duplicate publications, according to the Cochrane Handbook for Systematic Reviews of Interventions: trial identification numbers, juxtaposing author names, location and setting of the study, comparing sample sizes, specific details of the interventions, date and duration of the study, outcomes, and text of the abstract [43].
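Returning to the primary outcome defined above, the FLR3 formula can be spelled out in a few lines of Python; the function name and the example volumes below are ours and purely illustrative, not part of the protocol.

```python
def flr3(flr_initial_ml, flr_final_ml):
    """Future liver remnant regeneration rate: (FLRf - FLRi) / FLRf."""
    if flr_final_ml <= 0:
        raise ValueError("The final FLR volume must be positive.")
    return (flr_final_ml - flr_initial_ml) / flr_final_ml

# Hypothetical example: the FLR grows from 400 ml before PVE to 600 ml afterwards.
print(flr3(400.0, 600.0))  # ~0.33, i.e., one third of the final remnant is newly gained volume
```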
Two review authors (MP and RC) will independently screen the titles and abstracts of all the potential studies identified as a result of the search, and code them as "included", "not included", or "2nd look". The full text of study reports coded as "included" or "2nd look" will be retrieved, and two review authors (MP and RC) will independently analyze the full text, identifying studies for inclusion and recording reasons for the exclusion of ineligible studies. Study authors will be contacted when additional information is needed to resolve questions about eligibility. Any disagreement between the two authors will be resolved by a third reviewer (RJ). Duplicates will be excluded and multiple reports of the same study will be collated, so that each study, rather than each report, will be the unit of interest in the review. The selection process will be recorded in sufficient detail to complete a PRISMA flow diagram and a table of the characteristics of excluded studies.

Data collection process

An Excel-based data collection form will be used to record study characteristics and outcome data as described below in the subchapters "Data items" and "Outcomes and prioritization". The data collection form will be piloted on five studies previously identified as significant for this review. Two review authors (MP and RC) will independently extract study characteristics and outcome data from the included studies. Any disagreement between the two authors will be resolved by a third reviewer (RJ). To ensure consistency across reviewers, calibration exercises will be conducted before starting the review. Corresponding authors of selected studies will be contacted to resolve any uncertainties (three e-mail attempts at maximum). In order to extract data not reported in a numeric format, graphically presented data will be translated into a usable format using OriginPro v 7.5 from OriginLab. Before the final revision of the draft, another search will be performed in order to identify duplicate publications among the selected articles, following the same previously described methodology.

Data items

The following study characteristics and outcomes will be extracted:
Methods: study design, total duration of study and run-in period, number of study centers and location, study setting, withdrawals, and date of study
Participants: number, mean age, age range, gender, inclusion and exclusion criteria
Interventions: intervention, comparison, and any co-interventions
Outcomes: primary and secondary outcomes specified and collected, and time points reported
Notes: funding for trial and notable conflicts of interest of trial authors

The main outcome of this systematic review will be FLR3, which represents the hypertrophy of the remnant liver after surgery. This outcome is derived from the estimated volume of the remnant before and after the intervention (LR or PVE). Even though both interventions induce liver hypertrophy, FLR3 after LR and after PVE will presumably not be comparable. Therefore, FLR3 will be analyzed separately for each of the interventions. FLR3 data will be expressed as mean ± standard deviation (SD). If data are offered in other forms (median–range or median–interquartile range (IQR)), mean ± SD will be calculated following the recommendations of the Cochrane Handbook for Systematic Reviews of Interventions [43], whenever possible. Regarding the timing of the post-procedure volume calculation, homogeneous data are expected in PVE studies. The majority of preoperative PVE protocols plan surgery 4 weeks after the embolization procedure.
However, the timing of remnant volumetry after LR could vary between studies. The majority of included studies most likely perform the volume calculation 1 month after the procedure, when 80% of liver hypertrophy has occurred. Whenever the remnant volume has been calculated at several time points, the one obtained at 1 month after LR will be chosen. The time of post-procedure volumetry will be registered for each study. The secondary outcome will be liver failure/dysfunction as defined above. This outcome will be calculated only for patients undergoing LR. It will be used to determine whether there is a correlation between a possibly lower hypertrophy rate in patients with neoadjuvant chemotherapy and the subsequent liver dysfunction rate.

Assessment of bias

Two investigators (MP and RC) will independently assess the risk of bias for the included studies. Risk of bias will be assessed with the Risk Of Bias in Non-randomized Studies of Interventions (ROBINS-I) tool for non-RCT studies [44]. In this tool, risk of bias is assessed within specified domains, including (1) bias due to confounding, (2) bias in selection of participants into the study, (3) bias in classification of interventions, (4) bias due to deviations from intended interventions, (5) bias due to missing data, (6) bias in measurement of outcomes, (7) bias in selection of the reported result, and (8) overall bias. Since assessments are inherently subjective and there are no strict and objective criteria to judge bias within the ROBINS-I tool, disagreements will be resolved via discussion between the two investigators or by the intervention of a third (RJ). If any RCTs meeting the inclusion criteria are found, the evaluation of bias will be performed according to the Cochrane risk-of-bias tool for randomized trials (RoB 2) [45].

Quality assessment for all outcomes

The quality of evidence for all outcomes will be determined with the Grading of Recommendations Assessment, Development and Evaluation (GRADE) system [46]. Quality will be evaluated as high, moderate, low, or very low. The evaluation of quality will be independently performed by two of the authors (MP and RJ).

Data synthesis

If the selected studies are sufficiently homogeneous in design and comparators, we will perform a meta-analysis of the reported results.

Measures of treatment effect

Dichotomous data (liver failure or dysfunction as previously defined, or the presence of ascites or encephalopathy) will be analyzed using the risk ratio (RR) with 95% confidence interval (CI). Continuous outcomes (FLR3 or surrogate volumetric data, i.e., total liver volume changes and changes in the volume of the liver to be resected) will be analyzed as mean differences, or as standardized mean differences when different scales are used (e.g., FLR3 vs surrogate volumetric data). Skewed data are commonly indicated by the reporting of medians and interquartile ranges. When this is found, transformations to mean differences will be carried out. If this is not possible due to a lack of data, the data will be considered skewed [47]. If the data are skewed, a meta-analysis will not be performed; a narrative summary will be provided instead.

Unit of analysis issues

The unit of analysis will be the individual participant affected by liver metastasis and a candidate for LR or PVE. If any cluster randomized studies are unexpectedly found, the data will be included in the analysis if the results are adjusted for intra-cluster correlation.
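Before turning to the remaining unit-of-analysis and missing-data issues, two data-handling steps specified above can be made concrete: the risk-ratio calculation for dichotomous outcomes and the approximate conversion of a median and IQR to a mean and SD. The sketch below is a rough Python illustration with invented numbers and commonly used approximation formulas; in the review itself these computations would follow RevMan and the Cochrane Handbook exactly.

```python
import math

def risk_ratio_ci(events_exp, n_exp, events_ctrl, n_ctrl, z=1.96):
    """Risk ratio with a 95% CI via the standard error of log(RR)."""
    rr = (events_exp / n_exp) / (events_ctrl / n_ctrl)
    se_log_rr = math.sqrt(1/events_exp - 1/n_exp + 1/events_ctrl - 1/n_ctrl)
    lower = math.exp(math.log(rr) - z * se_log_rr)
    upper = math.exp(math.log(rr) + z * se_log_rr)
    return rr, lower, upper

def approx_mean_sd(q1, median, q3):
    """Rough mean/SD from a median and IQR, assuming roughly normal data."""
    mean = (q1 + median + q3) / 3.0   # a commonly used quartile-based approximation
    sd = (q3 - q1) / 1.35             # the IQR of a normal distribution spans about 1.35 SD
    return mean, sd

# Invented example: liver dysfunction in 12/80 chemotherapy vs 6/75 control patients,
# and a study reporting FLR3 as median 0.30 (IQR 0.22-0.40).
print(risk_ratio_ci(12, 80, 6, 75))
print(approx_mean_sd(0.22, 0.30, 0.40))
```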
If any cross-over randomized studies are found, the data prior to the cross-over will be included. When a study has more than two treatment groups, the additional treatment arms will be presented. Where the additional treatment arms are not relevant, they will not be taken into account.

Dealing with missing data

Investigators or study sponsors will be contacted to verify key study characteristics and obtain missing numerical outcome data (e.g., when a study is presented as an abstract only). If this information is not available from the study authors, it will be obtained, where feasible, by using the calculations provided in the Cochrane Handbook for Systematic Reviews of Interventions [43]. The impact of including such studies will be assessed in a sensitivity analysis. If we are unable to calculate the standard deviation from the standard error, interquartile range, or P values, we will impute the standard deviation as the highest standard deviation among the remaining studies included for that outcome.

Assessment of heterogeneity

Clinical heterogeneity will be assessed by considering the variability in participant factors among trials (e.g., age) and trial factors (randomization concealment, blinding of outcome assessment, losses to follow-up, treatment type, co-interventions). Statistical heterogeneity will be tested using the chi-squared test (significance level: 0.1) and the I² statistic (0 to 40%: might not be important; 30 to 60%: may represent moderate heterogeneity; 50 to 90%: may represent substantial heterogeneity; 75 to 100%: considerable heterogeneity). If high levels of heterogeneity among the trials exist (I² ≥ 50% or P < 0.1), the study designs and characteristics of the included studies will be analyzed, and the source of heterogeneity will be explored by subgroup analysis or sensitivity analysis. Each outcome will be calculated using the statistical software RevMan 5.1, according to the current version of the Cochrane Handbook for Systematic Reviews of Interventions [43]. The Mantel-Haenszel method will be used for the fixed-effect model if tests of heterogeneity are not significant. If statistical heterogeneity is observed (I² ≥ 50% or P < 0.1), the random-effects model will be chosen. Data will be presented in text and tables in order to summarize the characteristics and findings of the included studies. The analysis will describe the findings and associations within individual studies as well as across all the studies included in this review.

Subgroup analysis and investigation of heterogeneity

Subgroup analysis will be used to investigate possible sources of heterogeneity, based on the following parameters:
General characteristics of the included patients (age, sex)
Timing of post-procedure volumetry
Type of procedure (hepatectomy vs PVE)
Sensitivity analysis will be performed to explain the source of heterogeneity:
Analysis of the material retrieved (full text vs abstract only, preliminary data vs final results, published vs unpublished material)
Risk of bias (performing the analysis by omitting studies evaluated as being at high risk of bias)

If there is a need to amend this protocol, the date of each amendment will be registered, describing the change and giving the rationale in this section. Changes will not be incorporated into the protocol.

Reaching conclusions

Conclusions will be based on the findings from the quantitative or narrative analysis of the studies included in this review.
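As a concrete companion to the heterogeneity criteria set out in the preceding section, the sketch below computes Cochran's Q, its chi-squared P value, and I² for a small set of invented study effects under a simple inverse-variance fixed-effect pooling; the review itself would obtain these statistics from RevMan rather than from code like this.

```python
from scipy import stats

def heterogeneity(effects, variances):
    """Cochran's Q, chi-squared P value, and I^2 for inverse-variance fixed-effect pooling."""
    weights = [1.0 / v for v in variances]
    pooled = sum(w * e for w, e in zip(weights, effects)) / sum(weights)
    q = sum(w * (e - pooled) ** 2 for w, e in zip(weights, effects))
    df = len(effects) - 1
    p_value = 1.0 - stats.chi2.cdf(q, df)
    i_squared = max(0.0, (q - df) / q) * 100.0 if q > 0 else 0.0
    return q, p_value, i_squared

# Invented mean differences in FLR3 from four studies, with their variances:
q, p, i2 = heterogeneity([-0.05, -0.02, -0.10, 0.01], [0.0004, 0.0009, 0.0016, 0.0009])
print(q, p, i2)  # a random-effects model would be chosen if I^2 >= 50% or P < 0.1
```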
We will avoid making recommendations for clinical practice but will focus on the remaining uncertainties in the field and the need for future clinical investigation. To date, there are sufficient data to conclude that postoperative liver regeneration is a key factor in avoiding postoperative liver failure. Factors associated with postoperative liver failure are also associated with impaired liver regeneration [19, 28]. Furthermore, a predicted insufficient FLR volume represents an indication to perform alternative techniques to stimulate liver hypertrophy, such as preoperative PVE, two-stage hepatectomy, or ALPPS [19, 48]. Since liver failure is associated with higher rates of postoperative death, it is important to evaluate whether neoadjuvant chemotherapy may cause a deficit in liver regeneration. However, there are no systematic reviews that specifically analyze the association between chemotherapy characteristics and post-procedure hypertrophy. A number of important prospective randomized controlled trials have established the oncological results or the postoperative complication rates as their primary objective [3, 37, 49, 50]. Furthermore, the heterogeneity of the regimens used, the different treatment protocols, and the lack of volumetric data in published studies make it difficult to draw definitive conclusions regarding the role of neoadjuvant chemotherapy in liver regeneration without a properly conducted systematic analysis. Therefore, the need for a systematic review centered on this issue is evident.

Data sharing is not applicable to this article, as no datasets were generated or analyzed during the current study.

The HBP unit of Hospital Universitari de Tarragona Joan XXIII is an emerging group, formed by several surgeons with an extensive background in HBP surgery. Our unit is the only one in Tarragona province approved to perform liver resections for colorectal liver metastasis. MCP is currently the coordinator of the HBP Committee of Tarragona province, Spain. Together with LE, he is responsible for the management of the waiting list of patients with CRCLM for the University Hospital of Tarragona Joan XXIII. RJ is the chief of the General Surgery Department of the University Hospital of Tarragona Joan XXIII. RM is the coordinator of the HBP unit.

Abbreviations
ALPPS: Associating liver partition and portal vein ligation for staged hepatectomy
CBA: Controlled before-after study
CRCLM: Colorectal cancer liver metastases
FLR: Future liver remnant
FLR3: Future liver remnant regeneration rate
GRADE: Grading of Recommendations, Assessment, Development and Evaluation
LR: Liver resection
MeSH: Medical subject headings
NASH: Non-alcoholic steatohepatitis
PICO: Population, Intervention, Comparison and Outcomes
PRISMA: Preferred Reporting Items for Systematic Reviews and Meta-Analyses
PRISMA-P: Preferred Reporting Items for Systematic Reviews and Meta-Analyses Protocols
PROSPERO: International prospective register of systematic reviews
PVE: Portal vein embolization
RoB 2: Cochrane risk-of-bias tool for randomized trials
ROBINS-I: Risk of Bias in Non-randomized Studies of Interventions
SOS: Sinusoidal obstruction syndrome
VEGF: Vascular endothelial growth factor

References
Siegel RL, Miller KD, Jemal A. Cancer statistics, 2019. CA Cancer J Clin. 2019;69(1):7–34.
Van Cutsem E, Nordlinger B, Adam R, Köhne CH, Pozzo C, Poston G, et al. Towards a pan-European consensus on the treatment of patients with colorectal liver metastases. Eur J Cancer. 2006;42(14):2212–21.
Borner MM. Neoadjuvant chemotherapy for unresectable liver metastases of colorectal cancer - too good to be true? Editorial. Ann Oncol. 1999;10(6):623–6.
Recently updated NCCN Clinical Practice Guidelines in Oncology™ [Internet]. Available from: https://www.nccn.org/professionals/physician_gls/recently_updated.aspx. [cited 2020 Mar 19].
Van Cutsem E, Cervantes A, Adam R, Sobrero A, Van Krieken JH, Aderka D, et al. ESMO consensus guidelines for the management of patients with metastatic colorectal cancer. Ann Oncol. 2016;27(8):1386–422. Available from: http://www.ncbi.nlm.nih.gov/pubmed/27380959. [cited 2018 Dec 5].
Kanas GP, Taylor A, Primrose JN, Langeberg WJ, Kelsh MA, Mowat FS, et al. Survival after liver resection in metastatic colorectal cancer: review and meta-analysis of prognostic factors. Clin Epidemiol. 2012;4(1):283–301.
Tomlinson JS, Jarnagin WR, DeMatteo RP, Fong Y, Kornprat P, Gonen M, et al. Actual 10-year survival after resection of colorectal liver metastases defines cure. J Clin Oncol. 2007;25(29):4575–80.
Muratore A, Zorzi D, Bouzari H, Amisano M, Massucco P, Sperti E, et al. Asymptomatic colorectal cancer with un-resectable liver metastases: immediate colorectal resection or up-front systemic chemotherapy? Ann Surg Oncol. 2007;14(2):766–70.
Alberts SR, Horvath WL, Sternfeld WC, Goldberg RM, Mahoney MR, Dakhil SR, et al. Oxaliplatin, fluorouracil, and leucovorin for patients with unresectable liver-only metastases from colorectal cancer: a North Central Cancer Treatment Group phase II study. J Clin Oncol. 2005;23(36):9243–9.
Pozzo C, Basso M, Cassano A, Quirino M, Schinzari G, Trigila N, et al. Neoadjuvant treatment of unresectable liver disease with irinotecan and 5-fluorouracil plus folinic acid in colorectal cancer patients. Ann Oncol. 2004;15(6):933–9.
Adam R, Delvart V, Pascal G, Valeanu A, Castaing D, Azoulay D, et al. Rescue surgery for unresectable colorectal liver metastases downstaged by chemotherapy: a model to predict long-term survival. Ann Surg. 2004;240(4):644–58.
Folprecht G, Gruenberger T, Bechstein WO, Raab HR, Lordick F, Hartmann JT, et al. Tumour response and secondary resectability of colorectal liver metastases following neoadjuvant chemotherapy with cetuximab: the CELIM randomised phase 2 trial. Lancet Oncol. 2010;11(1):38–47.
Ye LC, Liu TS, Ren L, Wei Y, Zhu DX, Zai SY, et al. Randomized controlled trial of cetuximab plus chemotherapy for patients with KRAS wild-type unresectable colorectal liver-limited metastases. J Clin Oncol. 2013;31(16):1931–8.
Modest DP, Martens UM, Riera-Knorrenschild J, Greeve J, Florschütz A, Wessendorf S, et al. FOLFOXIRI plus panitumumab as first-line treatment of RAS wild-type metastatic colorectal cancer: the randomized, open-label, phase II Volfi study (AIO KRK0109). J Clin Oncol. 2019;37(35):3401–11.
Chow FCL, Chok KSH. Colorectal liver metastases: an update on multidisciplinary approach. World J Hepatol. 2019;11(2):150–72.
Balzan S, Belghiti J, Farges O, Ogata S, Sauvanet A, Delefosse D, et al. The 50-50 criteria on postoperative day 5: an accurate predictor of liver failure and death after hepatectomy. Ann Surg. 2005;242(6):824–8, discussion 828–9. Available from: http://www.ncbi.nlm.nih.gov/pubmed/16327492. [cited 2017 Mar 10].
Idrees JJ, Johnston FM, Canner JK, Dillhoff M, Schmidt C, Haut ER, et al. Cost of major complications after liver resection in the United States: are high-volume centers cost-effective? Ann Surg. 2019;269(3):503–10.
Idrees JJ, Kimbrough CW, Rosinski BF, Schmidt C, Dillhoff ME, Beal EW, et al. The cost of failure: assessing the cost-effectiveness of rescuing patients from major complications after liver resection using the National Inpatient Sample. J Gastrointest Surg. 2018;22(10):1688–96.
Clavien P-A, Petrowsky H, DeOliveira ML, Graf R. Strategies for safer liver surgery and partial liver transplantation. N Engl J Med. 2007;356(15):1545–59. Available from: http://www.nejm.org/doi/abs/10.1056/NEJMra065156. [cited 2018 Dec 11].
Lafaro K, Buettner S, Maqsood H, Wagner D, Bagante F, Spolverato G, et al. Defining post hepatectomy liver insufficiency: where do we stand? J Gastrointest Surg. 2015;19(11):2079–92.
Zorzi D, Laurent A, Pawlik TM, Lauwers GY, Vauthey J-N, Abdalla EK. Chemotherapy-associated hepatotoxicity and surgery for colorectal liver metastases. Br J Surg. 2007;94(3):274–86. Available from: https://doi.org/10.1002/bjs.5719. [cited 2018 Dec 11].
Jones RP, Stättner S, Sutton P, Dunne DF, McWhirter D, Fenwick SW, et al. Controversies in the oncosurgical management of liver limited stage IV colorectal cancer. Surg Oncol. 2014;23(2):53–60.
Moris D, Ronnekleiv-Kelly S, Kostakis ID, Tsilimigras DI, Beal EW, Papalampros A, et al. Operative results and oncologic outcomes of associating liver partition and portal vein ligation for staged hepatectomy (ALPPS) versus two-stage hepatectomy (TSH) in patients with unresectable colorectal liver metastases: a systematic review and meta-analysis. World J Surg. 2018;42(3):806–15.
Fernandez FG, Ritter J, Goodwin JW, Linehan DC, Hawkins WG, Strasberg SM. Effect of steatohepatitis associated with irinotecan or oxaliplatin pretreatment on resectability of hepatic colorectal metastases. J Am Coll Surg. 2005;200(6):845–53.
Vauthey JN, Pawlik TM, Ribero D, Wu TT, Zorzi D, Hoff PM, et al. Chemotherapy regimen predicts steatohepatitis and an increase in 90-day mortality after surgery for hepatic colorectal metastases. J Clin Oncol. 2006;24(13):2065–72.
Mao SA, Glorioso JM, Nyberg SL. Liver regeneration. Transl Res. 2014;163(4):352–62.
Olthoff KM, Emond JC, Shearon TH, Everson G, Baker TB, Fisher RA, et al. Liver regeneration after living donor transplantation: adult-to-adult living donor liver transplantation cohort study. Liver Transplant. 2015;21(1):79–88.
Forbes SJ, Newsome PN. Liver regeneration-mechanisms and models to clinical application. Nat Rev Gastroenterol Hepatol. 2016;13(8):473–85.
Veteläinen R, Van Vliet AK, Van Gulik TM. Severe steatosis increases hepatocellular injury and impairs liver regeneration in a rat model of partial hepatectomy. Ann Surg. 2007;245(1):44–50.
Truant S, Bouras AF, Petrovai G, Buob D, Ernst O, Boleslawski E, et al. Volumetric gain of the liver after major hepatectomy in obese patients: a case-matched study in 84 patients. Ann Surg. 2013;258(5):696–704.
Simoneau E, Alanazi R, Alshenaifi J, Molla N, Aljiffry M, Medkhali A, et al. Neoadjuvant chemotherapy does not impair liver regeneration following hepatectomy or portal vein embolization for colorectal cancer liver metastases. J Surg Oncol. 2016;113(4):449–55. Available from: http://www.ncbi.nlm.nih.gov/pubmed/26955907. [cited 2020 Apr 17].
Narita M, Oussoultzoglou E, Chenard MP, Rosso E, Casnedi S, Pessaux P, et al. Sinusoidal obstruction syndrome compromises liver regeneration in patients undergoing two-stage hepatectomy with portal vein embolization. Surg Today. 2011;41(1):7–17.
Inoue Y, Fujii K, Tashiro K, Ishii M, Masubuchi S, Yamamoto M, et al. Preoperative chemotherapy may not influence the remnant liver regenerations and outcomes after hepatectomy for colorectal liver metastasis. World J Surg. 2018;42(10):3316–30.
Dello SAWG, Kele PGS, Porte RJ, Van Dam RM, Klaase JM, Verhoef C, et al. Influence of preoperative chemotherapy on CT volumetric liver regeneration following right hemihepatectomy. World J Surg. 2014;38(2):497–504.
Zorzi D, Chun YS, Madoff DC, Abdalla EK, Vauthey JN. Chemotherapy with bevacizumab does not affect liver regeneration after portal vein embolization in the treatment of colorectal liver metastases. Ann Surg Oncol. 2008;15(10):2765–72.
Hicklin DJ, Ellis LM. Role of the vascular endothelial growth factor pathway in tumor growth and angiogenesis. J Clin Oncol. 2005;23:1011–27.
Gruenberger T, Bridgewater J, Chau I, Garcia Alfonso P, Rivoire M, Mudan S, et al. Bevacizumab plus mFOLFOX-6 or FOLFOXIRI in patients with initially unresectable liver metastases from colorectal cancer: the OLIVIA Multinational Randomised Phase II Trial. Ann Oncol. 2015;26(4):702–8. Available from: https://pubmed.ncbi.nlm.nih.gov/25538173/. [cited 2020 Apr 14].
Taniguchi E, Sakisaka S, Matsuo K, Tanikawa K, Sata M. Expression and role of vascular endothelial growth factor in liver regeneration after partial hepatectomy in rats. J Histochem Cytochem. 2001;49(1):121–9.
Moher D, Shamseer L, Clarke M, Ghersi D, Liberati A, Petticrew M, et al. Preferred Reporting Items for Systematic Review and Meta-Analysis Protocols (PRISMA-P) 2015 statement. Syst Rev. 2015;4:1. Available from: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC4320440/. [cited 2020 Apr 7].
Definition of neoadjuvant therapy - NCI Dictionary of Cancer Terms - National Cancer Institute. Available from: https://www.cancer.gov/publications/dictionaries/cancer-terms/def/neoadjuvant-therapy. [cited 2020 Apr 27].
Mullen JT, Ribero D, Reddy SK, Donadon M, Zorzi D, Gautam S, et al. Hepatic insufficiency and mortality in 1,059 noncirrhotic patients undergoing major hepatectomy. J Am Coll Surg. 2007;204(5):854–62, discussion 862–4. Available from: http://linkinghub.elsevier.com/retrieve/pii/S1072751506018369. [cited 2018 Oct 19].
Rahbari NN, Garden OJ, Padbury R, Brooke-Smith M, Crawford M, Adam R, et al. Posthepatectomy liver failure: a definition and grading by the International Study Group of Liver Surgery (ISGLS). Surgery. 2011;149(5):713–24. Available from: https://linkinghub.elsevier.com/retrieve/pii/S0039606010005659. [cited 2019 Feb 22].
Higgins J, Thomas J, Chandler J, Cumpston M, Li T, Page M, et al. Cochrane Handbook for Systematic Reviews of Interventions, version 6.1 (updated 2020). Cochrane; 2020. Available from: www.training.cochrane.org/handbook. [cited 2020 Apr 8].
Sterne JA, Hernán MA, Reeves BC, Savović J, Berkman ND, Viswanathan M, et al. ROBINS-I: a tool for assessing risk of bias in non-randomised studies of interventions. BMJ. 2016;355:i4919. https://doi.org/10.1136/bmj.i4919.
Sterne JAC, Savović J, Page MJ, Elbers RG, Blencowe NS, Boutron I, et al. RoB 2: a revised tool for assessing risk of bias in randomised trials. BMJ. 2019;366:l4898. https://doi.org/10.1136/bmj.l4898.
Guyatt GH, Oxman AD, Vist GE, Kunz R, Falck-Ytter Y, Alonso-Coello P, et al. GRADE: an emerging consensus on rating quality of evidence and strength of recommendations. BMJ. 2008;336(7650):924–6.
Deeks JJ, Higgins JPT, Altman DG, on behalf of the Cochrane Statistical Methods Group. Chapter 10: Analysing data and undertaking meta-analyses. In: Cochrane Handbook for Systematic Reviews of Interventions. Available from: https://training.cochrane.org/handbook/current/chapter-10. [cited 2020 Nov 7].
Liu Y, Yang Y, Gu S, Tang K. A systematic review and meta-analysis of associating liver partition and portal vein ligation for staged hepatectomy (ALPPS) versus traditional staged hepatectomy. Medicine (Baltimore). 2019;98(15):e15229.
Nordlinger B, Sorbye H, Glimelius B, Poston GJ, Schlag PM, Rougier P, et al. Perioperative FOLFOX4 chemotherapy and surgery versus surgery alone for resectable liver metastases from colorectal cancer (EORTC 40983): long-term results of a randomised, controlled, phase 3 trial. Lancet Oncol. 2013;14(12):1208–15.
Primrose J, Falk S, Finch-Jones M, Valle J, O'Reilly D, Siriwardena A, et al. Systemic chemotherapy with or without cetuximab in patients with resectable colorectal liver metastasis: the New EPOC randomised controlled trial. Lancet Oncol. 2014;15(6):601–11.

Acknowledgements
We would like to thank Miss Gemma Falcó, medical librarian, for her valuable help in performing the literature search. The study was performed exclusively by members of the HBP Unit of the University Hospital of Tarragona Joan XXIII. No funding has been received for this study.

Author information
HPB Unit, Department of General Surgery, Hospital Universitari de Tarragona Joan XXIII, C/ Dr. Mallafrè Guasch, 4, 43005, Tarragona, Spain: Mihai-Calin Pavel, Raquel Casanova, Laia Estalella, Robert Memba, Erik Llàcer-Millán, Mar Achalandabaso, Elisabet Julià & Rosa Jorba
Departament de Medicina i Cirugia, Universitat Rovira i Virgili, Reus, Spain: Mihai-Calin Pavel, Laia Estalella, Robert Memba, Erik Llàcer-Millán & Rosa Jorba
HPB and Liver Transplant Surgery Department, St. Vincent's University Hospital, Dublin, Ireland: Justin Geoghegan
Authors: Mihai-Calin Pavel, Raquel Casanova, Laia Estalella, Robert Memba, Erik Llàcer-Millán, Mar Achalandabaso, Elisabet Julià, Rosa Jorba.
MCP and LE conceived the protocol. MCP, RC, and RJ designed the protocol. MCP coordinated the protocol. MCP and RC designed the search strategies. MCP, RC, LE, RM, EL, MA, EJ, JG, and RJ wrote and reviewed the protocol. MCP is the guarantor of the review. The author(s) read and approved the final manuscript. Correspondence to Mihai-Calin Pavel.

Appendix: Search strategy for electronic databases
PubMed:
("Liver"[Mesh] OR "Liver"[tiab] OR "Liver Neoplasms"[Mesh] OR "Hepatic"[tiab] OR "Hepatectomy"[tiab])
("Neoplasm metastasis"[Mesh] OR metasta*[tiab] OR "secondary"[tiab])
("Liver Regeneration"[Mesh] OR "Regeneration"[Mesh] OR "Regeneration"[tiab] OR "Hypertrophy"[Mesh] OR "hypertrophy"[tiab])
(("chemotherapy"[tiab] OR "Antineoplastic Agents"[Mesh]) AND ("preoperative"[tiab] OR "before"[tiab]))
Results → 50 references [access: pubmed.pdf]
Scopus:
TITLE-ABS(("Liver" AND "Hepatic" OR "Hepatectomy") AND (metasta* OR "secondary") AND ("Regeneration" OR "hypertrophy") AND "chemotherapy" AND ("preoperative" OR "before"))
Results → 62 references [access: scopus.pdf]
Web of Science:
TS = (("Liver" AND "Hepatic" OR "Hepatectomy") AND (metasta* OR "secondary") AND ("Regeneration" OR "hypertrophy") AND "chemotherapy" AND ("preoperative" OR "before"))
Time period: all years. Databases: WOS, CCC, DIIDW, KJD, MEDLINE, RSCI, SCIELO.
Search language = Auto
Results → 193 references [access: webofscience.pdf]
Embase:
('liver'/exp OR 'liver':ti,ab) AND ('liver neoplasms'/exp OR 'hepatic':ti,ab OR 'hepatectomy':ti,ab OR 'liver resection'/exp) AND ('metastasis'/exp OR metasta*:ti,ab OR 'secondary':ti,ab) AND ('liver regeneration'/exp OR 'regeneration'/exp OR 'regeneration':ti,ab OR 'hypertrophy'/exp OR 'hypertrophy':ti,ab) AND ('chemotherapy':ti,ab OR 'chemotherapy'/exp OR 'antineoplastic agents'/exp) AND ('preoperative':ti,ab OR 'preoperative period'/exp OR 'before':ti,ab)
Results → 341 references [access: embase.pdf]
Cochrane Central Register of Controlled Trials:
(("Liver" AND "Hepatic" OR "Hepatectomy") AND (metasta* OR "secondary") AND ("Regeneration" OR "hypertrophy") AND "chemotherapy" AND ("preoperative" OR "before")) in Title Abstract Keyword - (Word variations have been searched)
Results → 7 references [access: cochrane.txt]

Citation: Pavel MC, Casanova R, Estalella L, et al. The effect of preoperative chemotherapy on liver regeneration after portal vein embolization/ligation or liver resection in patients with colorectal liver metastasis: a systematic review protocol. Syst Rev 9, 279 (2020). https://doi.org/10.1186/s13643-020-01545-w
Keywords: Liver regeneration; Colorectal cancer liver metastasis
Logarithmic encoding of ensemble time intervals Yue Ren1, Fredrik Allenmark1, Hermann J. Müller1 & Zhuanghua Shi1 Scientific Reports volume 10, Article number: 18174 (2020) Cite this article Although time perception is based on the internal representation of time, whether the subjective timeline is scaled linearly or logarithmically remains an open issue. Evidence from previous research is mixed: while the classical internal-clock model assumes a linear scale with scalar variability, there is evidence that logarithmic timing provides a better fit to behavioral data. A major challenge for investigating the nature of the internal scale is that the retrieval process required for time judgments may involve a remapping of the subjective time back to the objective scale, complicating any direct interpretation of behavioral findings. Here, we used a novel approach, requiring rapid intuitive 'ensemble' averaging of a whole set of time intervals, to probe the subjective timeline. Specifically, observers' task was to average a series of successively presented, auditory or visual, intervals in the time range 300–1300 ms. Importantly, the intervals were taken from three sets of durations, which were distributed such that the arithmetic mean (from the linear scale) and the geometric mean (from the logarithmic scale) were clearly distinguishable. Consistently across the three sets and the two presentation modalities, our results revealed subjective averaging to be close to the geometric mean, indicative of a logarithmic timeline underlying time perception. What is the mental scale of time? Although this is one of the most fundamental issues in timing research that has long been posed, it remains only poorly understood. The classical internal-clock model implicitly assumes linear coding of time: a central pacemaker generates ticks and an accumulator collects the ticks in a process of linear summation1,2. However, the neuronal plausibility of such a coding scheme has been called into doubt: large time intervals would require an accumulator with (near-)unlimited capacity3, making it very costly to implement such a mechanism neuronally4,5. Given this, alternative timing models have been proposed that use oscillatory patterns or neuronal trajectories to encode temporal information6,7,8,9. For example, the striatal beat-frequency model6,9,10 assumes that time intervals are encoded in the oscillatory firing patterns of cortical neurons, with the length of an interval being discernible, for time judgments, by the similarity of an oscillatory pattern with patterns stored in memory. Neuronal trajectory models, on the other hand, use intrinsic neuronal patterns as markers for timing. However, owing to the 'arbitrary' nature of neuronal patterns, encoded intervals cannot easily be used for simple arithmetic computations, such as the summation or subtraction of two intervals. Accordingly, these models have been criticized for lacking computational accessibility11. Recently, a neural integration model12,13,14 adopted stochastic drift diffusion as the temporal integrator which, similar to the classic internal-clock model, starts the accumulation at the onset of an interval and increases until the integrator reaches a decision threshold. To avoid the 'unlimited-capacity' problem encountered by the internal-clock model, the neural integration model assumes that the ramping activities reach a fixed decision barrier, though with different drift rates—in particular, a lower rate for longer intervals. 
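The neural integration account sketched above can be illustrated with a toy accumulator simulation (entirely our own illustration, not the published model): a noisy integrator reaches a fixed barrier at roughly the target time only if its drift rate is set to barrier/target in advance, which is exactly the conceptual difficulty taken up next.

```python
import random

def time_to_threshold(target_ms, barrier=1.0, dt=1.0, noise_sd=0.01, seed=1):
    """Accumulate noisy evidence until a fixed barrier is hit.

    The drift rate is chosen as barrier/target so that, on average, the barrier
    is reached at the target duration; note that this presupposes the target
    duration is already known when accumulation starts.
    """
    random.seed(seed)
    drift_per_ms = barrier / target_ms
    x, t = 0.0, 0.0
    while x < barrier:
        x += drift_per_ms * dt + random.gauss(0.0, noise_sd) * dt ** 0.5
        t += dt
    return t

for target in (300, 800, 1300):
    print(target, round(time_to_threshold(target)))  # hit times scatter around each target
```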
However, this proposal encounters a conceptual problem: the length of the interval would need to be known at the start of the accumulation. Thus, while a variety of timing models have been proposed, there is no agreement on how time intervals are actually encoded. There have been many attempts, using a variety of psychophysical approaches, to directly uncover the subjective timeline that underlies time judgments. However, distinguishing between linear and logarithmic timing turned out to be constrained by the experimental paradigms adopted15,16,17,18,19,20,21. In temporal bisection tasks, for instance, a given probe interval is compared to two, short and long, standard intervals, and observers have to judge whether the probe interval is closer to one or the other. The bisection point—that is, the point that is subjectively equally distant to the short and long time references—was often found to be close to the geometric mean22,23. Such observations led to the earliest speculation that the subjective timeline might be logarithmic in nature: if time were coded linearly, the midpoint on the subjective scale should be equidistant from both (the short and long) references, yielding their arithmetic mean. By contrast, with logarithmic coding of time, the midpoint between both references (on the logarithmic scale) would be their geometric mean, as is frequently observed. However, Gibbon and colleagues offered an alternative explanation for why the bisection point may turn out close to the geometric mean, namely: rather than being diagnostic of the internal coding of time, the midpoint relates to the comparison between the ratios of the elapsed time T with respect to the Short and Long reference durations, respectively; accordingly, the subjective midpoint is the time T for which the ratios Short/T and T/Long are equal, which also yields the geometric mean24,25. Based on a meta-analysis of 148 experiments using the temporal bisection task across 18 independent studies, Kopec and Brody concluded that the bisection point is influenced by a number of factors, including the short-long spread (i.e., the Long/Short ratio), probe context, and even observers' age. For instance, for short-long spreads less than 2, the bisection points were close to the geometric mean of the short and long standards, but they shifted toward the arithmetic mean when the spread increased. In addition, the bisection points can be biased by the probe context, such as the spacing of the probe durations presented15,17,26. Thus, approaches relying on simple duration comparison have limited utility to uncover the internal timeline. The timeline issue became more complicated when it was discovered that time judgments are greatly impacted by temporal context. One prime example is the central-tendency effect27,28: instead of being veridical, observed time judgments are often assimilated towards the center of the sampled durations (i.e., short durations are over- and long durations under-estimated). This makes a direct interpretation of the timeline difficult, if not impossible. On a Bayesian interpretation of the central-tendency effect, the perceived duration is a weighted average of the sensory measure and prior knowledge of the sampled durations, where their respective weights are commensurate to their reliability29,30. There is one point within the range of time estimation where time judgments are accurate: the point close to the mean of the sampled durations (i.e., prior), which is referred to as 'indifference point'27. 
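On the Bayesian account just described, the perceived duration is a precision-weighted average of the noisy sensory measurement and the prior formed over the sampled durations. The following minimal sketch spells this out under Gaussian assumptions; the specific noise values are arbitrary and purely illustrative.

```python
def bayes_estimate(measurement_ms, sigma_sensory, prior_mean_ms, sigma_prior):
    """Precision-weighted fusion of a sensory measurement with the prior mean."""
    w_sensory = (1.0 / sigma_sensory ** 2) / (1.0 / sigma_sensory ** 2 + 1.0 / sigma_prior ** 2)
    return w_sensory * measurement_ms + (1.0 - w_sensory) * prior_mean_ms

# Short intervals are pulled up and long ones pulled down toward the prior mean (here 800 ms):
print(bayes_estimate(300, sigma_sensory=120, prior_mean_ms=800, sigma_prior=200))   # > 300
print(bayes_estimate(1300, sigma_sensory=120, prior_mean_ms=800, sigma_prior=200))  # < 1300
```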
Varying the ranges of the sampled durations, Jones and McAuley31 examined whether the indifference point would be closer to the geometric or the arithmetic mean of the test intervals. The results turned out rather mixed. It should be noted, though, that the mean of the prior is dynamically updated across trials by integrating previous sampled intervals into the prior—which is why it may not provide the best anchor for probing the internal timeline. Probing the internal timeline becomes even more challenging if we consider that the observer's response to a time interval may not directly reflect the internal representation, but rather a decoded outcome. For example, an external interval might be encoded and stored (in memory) in a compressed, logarithmic format internally. When that interval is retrieved, it may first have to be decoded (i.e., transformed from logarithmic to linear space) in working memory before any further comparison can be made. The involvement of decoding processes would complicate drawing direct inferences from empirical data. However, it may be possible to escape such complications by examining basic 'intuitions' of interval timing, which may bypass complex decoding processes. One fundamental perceptual intuition we use all the time is 'ensemble perception'. Ensemble perception refers to the notion that our sensory systems can rapidly extract statistical (summary) properties from a set of similar items, such as their sum or mean magnitude. For example, Dehaene et al.32 used an individual number-space mapping task to compare Mundurucu, an Amazonian indigenous culture with a reduced number lexicon, to US American educated participants. They found that the Mundurucu group, across all ages, mapped symbolic and nonsymbolic numbers onto a logarithmic scale, whereas educated western adults used linear mapping of numbers onto space—favoring the idea that the initial intuition of number is logarithmic32. Moreover, kindergarten and pre-school children also exhibit a non-linear representation of numbers close to logarithmic compression (e.g., they place the number 10 near the midpoint of the 1–100 scale)33. This nonlinearity then becomes less prominent as the years of schooling increase34,35,36. That is, the sophisticated mapping knowledge associated with the development of 'mathematical competency' comes to supersede the basic intuitive logarithmic mapping, bringing about a transition from logarithmic to linear numerical estimation37. However, rather than being unlearnt, the innate, logarithmic scaling of number may in fact remain available (which can be shown under certain experimental conditions) and compete with the semantic knowledge of numeric value acquired during school education. Our perceptual intuition works very fast. For example, we quickly form an idea about the average size of apples from just taking a glimpse at the apple tree. In a seminal study by Ariel38, participants, when asked to identify whether a presented object belonged to a group of similar items, tended to automatically respond with the mean size. Intuitive averaging has been demonstrated for various features in the visual domain39, from primary ensembles such as object size40,41 and color42, to high-level ensembles such as facial expression and lifelikeness43,44,45,46. Rather than being confined to the (inherently 'parallel') visual domain, ensemble perception has also been demonstrated for sequentially presented items, such as auditory frequency, tone loudness, and weight47,48,49,50. 
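Returning briefly to the number-line findings above, their arithmetic core is easy to verify: on a logarithmic 1–100 scale the number 10 falls exactly at the midpoint, whereas on a linear scale it lies near the left end. The sketch below simply spells out this calculation; it is an illustration, not a reanalysis of the cited studies.

```python
import math

def linear_position(n, lo=1, hi=100):
    return (n - lo) / (hi - lo)

def log_position(n, lo=1, hi=100):
    return (math.log(n) - math.log(lo)) / (math.log(hi) - math.log(lo))

print(linear_position(10))  # ~0.09: near the left end of a linear 1-100 line
print(log_position(10))     # 0.5: the midpoint, matching young children's placements
```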
In a cross-modal temporal integration study, Chen et al.51 showed that the average interval of a train of auditory intervals can quickly capture a subsequently presented visual interval, influencing visual motion perception. In brief, our perceptual systems can automatically extract overall statistical properties using very basic intuitions to cope with sensory information overload and the limited capacity of working memory. Thus, given that ensemble perception operates at a fast and low-level stage of processing (possibly bypassing many high-level cognitive decoding processes), using ensemble perception as a tool to test time perception may provide us with new insights into the internal representation of time intervals.

Against this background, we designed an interval duration-averaging task in which observers were asked to compare the average duration of a set of intervals to a standard interval. We hypothesized that if the underlying interval representation is linear, the intuitive average should reflect the arithmetic mean (AM) of the sample intervals. Conversely, if intervals are logarithmically encoded internally and intuitive averaging operates on that level (i.e., without remapping individual intervals from logarithmic to linear scale), we would expect the readout of the intuitive average at the intervals' geometric mean (GM). This is based on the fact that the exponential transform of the average of the log-encoded intervals is the geometric mean. Note, though, that the subjective averaged duration may be subject to general bias and sequence (e.g., time-order error52,53) effects, as has often been observed in studies of time estimation54. For this reason, we considered it wiser to compare response patterns across multiple sets of intervals to the patterns predicted, respectively, from the AM and the GM, rather than comparing the subjective averaged duration directly to either the AM or the GM of the intervals. Accordingly, we carefully chose three sets of intervals, for which one set would yield a different average from the other sets according to each individual account (see Fig. 1). Each set contained five intervals—Set 1: 300, 550, 800, 1050, 1300 ms; Set 2: 600, 700, 800, 900, 1000 ms; and Set 3: 500, 610, 730, 840, 950 ms. Accordingly, Sets 1 and 2 have the same arithmetic mean (800 ms), which is larger than the arithmetic mean of Set 3 (727 ms). And Sets 1 and 3 have the same geometric mean (710 ms), which is shorter than the geometric mean of Set 2 (787 ms). The rationale was that, given that the assumptions of linear and logarithmic representation make distinct predictions for the three sets, we may be able to infer the internal representation by comparing the observed behavioral outcome against these predictions.

Figure 1. Illustration of the three sets of intervals used in the study. (a) Three sets of five intervals each (Set 1: 300, 550, 800, 1050, 1300 ms; Set 2: 600, 700, 800, 900, 1000 ms; Set 3: 500, 610, 730, 840, 950 ms). The presentation order of the five intervals was randomized within each trial. (b) Predictions of ensemble averaging based on the two hypothesized coding schemes: linear coding and logarithmic coding, respectively. Sets 1 and 2 have the same arithmetic mean of 800 ms, which is larger than the arithmetic mean of Set 3 (727 ms). Sets 1 and 3 have the same geometric mean of 710 ms, which is smaller than the geometric mean of Set 2 (787 ms).
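The predictions in Fig. 1b follow directly from the composition of the sets: computing the arithmetic and geometric means reproduces, to within a few milliseconds of the rounded figures quoted above, the values stated in the text. A minimal sketch:

```python
import math

sets = {
    "Set 1": [300, 550, 800, 1050, 1300],
    "Set 2": [600, 700, 800, 900, 1000],
    "Set 3": [500, 610, 730, 840, 950],
}

for name, intervals in sets.items():
    am = sum(intervals) / len(intervals)
    gm = math.exp(sum(math.log(x) for x in intervals) / len(intervals))
    print(f"{name}: arithmetic mean = {am:.0f} ms, geometric mean = {gm:.0f} ms")

# Linear coding predicts Set 1 = Set 2 > Set 3 (arithmetic means);
# logarithmic coding predicts Set 2 > Set 1 = Set 3 (geometric means).
```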
Subjective durations are known to differ between visual and auditory signals5,55,56, as our auditory system has higher temporal precision than the visual system. Often, sounds are judged longer than lights55,57, and the difference is particularly marked when visual and auditory durations are presented intermixed in the same testing session58. It has been suggested that time processing may be distributed across different modalities59, and that the internal pacemaker 'ticks' faster for the auditory than for the visual modality55. Accordingly, the processing strategies may potentially differ between the two modalities. Thus, in order to establish whether the internal representation of time is modality-independent, we tested both modalities using the same sets of intervals in separate experiments.

The methods and experimental protocols were approved by the Ethics Board of the Faculty of Pedagogics and Psychology at LMU Munich, Germany, and are in accordance with the Declaration of Helsinki 2008. A total of 32 participants from the LMU Psychology community took part in the study, one of whom was excluded from further analyses due to lower-than-chance-level performance (i.e., temporal estimates exceeded 150% of the given duration). 16 participants were included in Experiment 1 (8 females, mean age of 22.2 years), and 15 participants were included in Experiment 2 (8 females, mean age of 26.4 years). Prior to the experiment, participants gave written informed consent, and they were paid 8 Euros per hour for their participation. All reported normal (or corrected-to-normal) vision, normal hearing, and no somatosensory disorders.

The experiments were conducted in a sound-isolated cabin, with dim incandescent background lighting. Participants sat approximately 60 cm from a display screen, a 21-inch CRT monitor (refresh rate 100 Hz; screen resolution 800 × 600 pixels). In Experiment 1, auditory stimuli (i.e., intervals) were delivered via two loudspeakers positioned just below the monitor, with a left-to-right separation of 40 cm. Brief auditory beeps (10 ms, 60 dB; frequency of 2500 or 3000 Hz, respectively) were presented to mark the beginning and end of the auditory intervals. In Experiment 2, the intervals were demarcated visually, namely, by presenting brief (10-ms) flashes of a gray disk (5° of visual angle in diameter, 21.4 cd/m²) in the center of the display monitor against a black screen background (1.6 cd/m²).

As for the length of the five successively presented intervals on a given trial, there were three sets: Set 1: 300, 550, 800, 1050, 1300 ms; Set 2: 600, 700, 800, 900, 1000 ms; and Set 3: 500, 610, 730, 840, 950 ms. These sets were constructed such that Sets 1 and 2 have the same arithmetic mean (800 ms), which is larger than the arithmetic mean of Set 3 (727 ms), while Sets 1 and 3 have the same geometric mean (710 ms), which is shorter than the geometric mean of Set 2 (787 ms). Of note, the order of the five intervals (of the presented set) was randomized on each trial.

Two separate experiments were conducted, testing auditory (Experiment 1) and visual stimuli (Experiment 2), respectively. Each trial consisted of two presentation phases: successive presentation of five intervals, followed by the presentation of a single comparison interval. Participants' task was to indicate, via a keypress response, whether the comparison interval was shorter or longer than the average of the five successive intervals. The response could be given without time pressure.
In Experiment 1 (auditory intervals), trials started with a fixation cross presented for 500 ms, followed by a succession of five intervals demarcated by six 10-ms auditory beeps. Along with the onset of the auditory stimuli, a '1' was presented on display monitor, telling participants that this was the first phase of the comparison task. The series of intervals was followed by a blank gap (randomly ranging between 800 and 1200 ms), with a fixation sign '+' on the screen (indicating the transition to the comparison phase 2). After the gap, a single comparison duration demarcated by two brief beeps (10 ms) was presented, together with a '2', indicating phase two of the comparison. Following another random blank gap (of 800–1200 ms), a question mark ('?') appeared in the center of the screen, prompting participants to report whether the average interval of the first five (successive) intervals was longer or shorter than the second, comparison interval (Fig. 2a). Participants issued their response via the left or right arrow keys (on the keyboard in front of them) using their two index fingers, corresponding to either 'shorter' or 'longer' judgments. To make the two parts 1 and 2 of the interval presentation clearly distinguishable, two different frequencies (2500 and 3000 Hz) were randomly assigned to the first and, respectively, the second set of auditory interval markers. Schematic illustration of a trial in Experiments 1 and 2. (a) In Experiment 1, an auditory sequence of five intervals demarcated by six short (10-ms) auditory beeps of a particular frequency (either 2500 or 3000 Hz) was first presented together with a visual cue '1'. After a short gap with visual cue '+', the second, comparison interval was demarcated by two beeps of a different frequency (either 3000 or 2500 Hz). A question mark prompts participants to respond if the mean interval of the first was longer or shorter than the second. (b) The temporal structure was essentially the same in Experiment 2 as in Experiment 1, except that the intervals were marked by a brief flash of a grey disk in the monitor center. Given that the task required a visual comparison, the two interval presentation phases were separated by a fixation cross. Experiment 2 (visual intervals) was essentially the same as Experiment 1, except that the intervals were delivered via the visual modality and were demarcated by brief (10-ms) flashes of gray disks in the screen center (see Fig. 2b). Also, the visual cue signals used to indicate the two interval presentation phases ('1', '2') in the 'auditory' Experiment 1 were omitted, to ensure participants' undivided attention to the judgment-relevant intervals. In order to obtain, in an efficient manner, reliable estimates of both the point of subjective equality (PSE) and the just noticeable difference (JND) of the psychometric function of the interval comparison, we employed the updated maximum-likelihood (UML) adaptive procedure from the UML toolbox for Matlab60. This toolbox permits multiple parameters of the psychometric function, including the threshold, slope, and lapse rate (i.e., the probability of an incorrect response, which is independent of stimulus interval) to be estimated simultaneously. We chose the logistic function as the basic psychometric function and set the initial comparison interval to 500 ms. 
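For reference, the logistic psychometric function just mentioned can be written out explicitly, together with the way the PSE and JND follow from its two parameters (taking the JND as the distance between the 50% and 75% points, as in the Results below). This is a simplified Python illustration with arbitrary parameter values, not the Matlab UML toolbox itself, which additionally adapts stimulus placement and estimates a lapse rate.

```python
import math

def p_longer(x_ms, alpha_ms, beta_per_ms):
    """Probability of judging the comparison 'longer': p = 1 / (1 + exp(-(x - alpha) * beta))."""
    return 1.0 / (1.0 + math.exp(-(x_ms - alpha_ms) * beta_per_ms))

def pse_and_jnd(alpha_ms, beta_per_ms):
    """PSE is the 50% point (alpha); the JND is the 75% point minus the PSE, i.e. ln(3)/beta."""
    return alpha_ms, math.log(3.0) / beta_per_ms

# Arbitrary example parameters:
print(p_longer(900, alpha_ms=780.0, beta_per_ms=0.012))  # ~0.81
print(pse_and_jnd(alpha_ms=780.0, beta_per_ms=0.012))     # (780.0, ~91.6 ms)
```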
The UML adaptive procedure then used the method of maximum-likelihood estimation to determine the next comparison interval based on the participant's responses, so as to minimize the expected variance (i.e., uncertainty) in the parameter space of the psychometric function. In addition, after each response, the UML updated the posterior distributions of the psychometric parameters (see Fig. 3b for an example), from which the PSE and JND can be estimated (for the detailed procedure, see Shen et al.60). To mitigate habituation and expectation effects, we presented the sequences of comparison intervals for the three different sets randomly intermixed across trials, concurrently tracking the three separate adaptive procedures.

Figure 3. (a) Trial-wise update of the threshold estimate (α) for the three different interval sets in Experiment 1, for one typical participant. (b) The posterior parameter distributions of the threshold (α) and slope (β) based on the logistic function \(p = 1/(1 + e^{-(x - \alpha)\cdot\beta})\), separately for the three sets (240 trials in total), for the same participant.

Prior to the testing session, participants were given verbal instructions and then familiarized with the task in a practice block of 30 trials (10 comparison trials for each set). Of note, upon receiving the instruction, most participants spontaneously voiced concern about the difficulty of the judgment they were asked to make. However, after performing just a few trials of the training block, they all expressed confidence that the task was easily doable after all, and they all went on to complete the experiment successfully. In the formal testing session, each of the three sets was tested 80 times, yielding a total of 240 trials per experiment. The whole experiment took some 60 min to complete.

All statistical tests were conducted using repeated-measures ANOVAs, with additional Bayes-Factor analyses (using JASP software) to comply with the more stringent criteria required for acceptance of the null hypothesis61,62. All Bayes factors reported for ANOVA main effects are "inclusion" Bayes factors calculated across matched models. Inclusion Bayes factors compare models with a particular predictor to models that exclude that predictor, providing a measure of the extent to which the data support inclusion of a factor in the model. The Holm–Bonferroni method and Bayes factors were applied for the post-hoc analyses.

Figure 3 depicts the UML estimation for one typical participant: the threshold (α) and the slope (β) parameters of the logistic function \(p = 1/(1 + e^{-(x - \alpha)\cdot\beta})\). By visual inspection, the thresholds reached stable levels within 80 trials of dynamic updating (Fig. 3a), and the posterior distributions (Fig. 3b) indicate that the two parameters had converged for all three sets. Figure 4 depicts the mean thresholds (PSEs), averaged across participants, for the three sets of intervals, separately for the auditory Experiment 1 and the visual Experiment 2. In both experiments, the estimated averages from the three sets showed a similar pattern, with the mean of Set 2 being larger than the means of both Set 1 and Set 3.
Repeated-measures ANOVAs, conducted separately for both experiments, revealed the Set (main) effect to be significant both for Experiment 1, \(F\left( {2,30} \right) = 10.1,p < 0.001,\eta_{g}^{2} = 0.064\),\( BF_{incl} = 58.64\), and for Experiment 2, \(F\left( {2,28} \right) = 8.97\), \(p < 0.001,\eta_{g}^{2} = 0.013\), \( BF_{incl} = 30.34\). Post-hoc Bonferroni-corrected comparisons confirmed the Set effect to be mainly due to the mean being highest with Set 2. In more detail, for the auditory experiment (Fig. 4a), the mean of Set 2 was larger than the means of Set 1 [\(t\left( {15} \right) = 3.14, p = 0.013, BF_{10} = 7.63\)] and Set 3 [\(t\left( {15} \right) = 5.12,\) p < 0.001, \(BF_{10} = 234\)], with no significant difference between the latter (\(t\left( {15} \right) = 1.26\), p = 0.23, \(BF_{10} = 0.5\)). The result pattern was similar for the visual experiment (Fig. 4b), with Set 2 generating a larger mean than both Set 1 (\(t\left( {14} \right) = 3.13, p = 0.015, BF_{10} = 7.1\)) and Set 3 (\(t\left( {14} \right) = 4.04\), p < 0.01, \(BF_{10} = 32.49\)), with no difference between the latter (\(t\left( {14} \right) = 1.15, \) p = 0.80, \(BF_{10} = 0.46\)). This pattern of PSEs (Set 2 > Set 1 = Set 3) is consistent with one of our predictions, namely, that the main averaging process for rendering perceptual summary statistics is based on the geometric mean, in both the visual and the auditory modality. Violin plot of the distribution of individual subjective mean intervals (gray dots) of three tested sets, with the grand mean PSE (and associated standard error) overlaid on the respective set, separately for Experiment 1 (a) and Experiment 2 (b). *denotes p < 0.05, **p < 0.01, and ***p < 0.001. To obtain a better picture of individual response patterns and assess whether they are more in line with one or the other predicted pattern illustrated in Fig. 1b, we calculated the PSE differences between Sets 1 and 2 and between Sets 1 and 3 as two indicators. Figure 5 depicts the difference between Sets 1 and 2 over the difference between Sets 1 and 3, for each participant. The ideal differences between the respective arithmetic means and the respective geometric means are located on the orthogonal axes (triangle points). By visual inspection, individuals (gray dots) differ considerably: while many are closer to the geometric than to the arithmetic mean, some show the opposite pattern. We used the line of reflection between the 'arithmetic' and 'geometric' points to separate participants into two groups: geometric- and arithmetic-oriented groups. Eleven (out of 16) participants exhibited a pattern oriented towards the geometric mean in Experiment 1, and nine (out of 15) in Experiment 2. Thus, geometric-oriented individuals outnumbered arithmetic-oriented individuals (7:3 ratio). Consistent with the above PSE analysis, the grand mean differences (dark dots in Fig. 5) and their associated standards errors are located within the geometric-oriented region. Difference in PSEs between Sets 1 and 2 plotted against the difference between Sets 1 and 3 for all individuals (gray dots) in Experiments 1 (a) and 2 (b). The dark triangles represent the ideal locations of arithmetic averaging (Arith.M) and geometric averaging (Geo.M). The black dots, with the standard-error bars, depict the mean differences across all participants. The dashed lines represent the line of reflection between the 'geometric' and 'arithmetic' ideal locations. 
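The grouping just described amounts to asking which ideal point a participant's pair of PSE differences lies closer to, since the line of reflection is the perpendicular bisector between the two ideal points. The sketch below illustrates this classification; the ideal coordinates are derived from the set means reported above, and the test data point is invented.

```python
import math

# Ideal difference patterns (Set1 - Set2, Set1 - Set3), in ms, implied by the set means:
# arithmetic means 800/800/~727 give roughly (0, 73); geometric means ~710/787/~710 give roughly (-77, 0).
ARITHMETIC_IDEAL = (0.0, 73.0)
GEOMETRIC_IDEAL = (-77.0, 0.0)

def classify(diff_12_ms, diff_13_ms):
    """Label a participant by the nearer ideal point (equivalent to the reflection-line split)."""
    d_arith = math.dist((diff_12_ms, diff_13_ms), ARITHMETIC_IDEAL)
    d_geo = math.dist((diff_12_ms, diff_13_ms), GEOMETRIC_IDEAL)
    return "geometric-oriented" if d_geo < d_arith else "arithmetic-oriented"

# Hypothetical participant whose Set 2 PSE exceeds the Set 1 PSE by 60 ms,
# with virtually no Set 1 vs Set 3 difference:
print(classify(-60.0, 5.0))  # geometric-oriented
```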
Of note, however, while the mean patterns across the three sets are in line with the prediction of geometric interval averaging (see the pattern illustrated in Fig. 1b) for both experiments, the absolute PSEs were shorter in the visual than in the auditory conditions. Further tests confirmed that, in the 'auditory' Experiment 1, the mean PSEs did not differ significantly from their corresponding physical geometric means (one-sample Bayesian t-test pooled across the three sets), \(t\left( {47} \right) = 1.70,p = 0.097, BF_{10} = 0.587\), but they were significantly smaller than the physical arithmetic means, \(t\left( {47} \right) = 3.87,p < 0.001, BF_{10} = 76.5\). In the 'visual' Experiment 2, the mean PSEs for all three interval sets were significantly smaller than both the physical geometric mean [\(t\left( {44} \right) = 4.74,p < 0.001\), \(BF_{10} = 924.1\)] and the arithmetic mean [\(t\left( {44} \right) = 6.23,p < 0.001\), \(BF_{10} > 1000\)]. Additionally, the estimated mean durations were overall shorter for the visual (Experiment 2) versus the auditory intervals (Experiment 1), \(t\left( {91} \right) = 2.97,p < 0.01, BF_{10} = 9.64\). This modality effect is consistent with previous reports that auditory intervals are often perceived as longer than physically equivalent visual intervals55,63. Another key parameter providing an indicator of an observer's temporal sensitivity (resolution) is given by the just noticeable difference (JND), defined as the interval difference between the 50%- and 75%-thresholds estimated from the psychometric function. Figure 6 depicts the JNDs obtained in Experiments 1 and 2, separately for the three sets of intervals. Repeated-measures ANOVAs, with Set as the main factor, failed to reveal any differences among the three sets, for either experiment [Experiment 1: \(F\left( {2,30} \right) = 1.05, p = 0.36, BF_{incl} = 0.325\); Experiment 2: \(F\left( {2,28} \right) = 0.166, p = 0.85, BF_{incl} = 0.156\)]. Comparison across Experiments 1 and 2, however, revealed the JNDs to be significantly smaller for auditory than for visual interval averaging, \(t\left( {91} \right) = 2.95, p < 0.01,BF_{10} = 9.08\). That is, temporal resolution was higher for the auditory than for the visual modality, consistent with the literature64. Violin plot of the distribution of individual JNDs (gray dots) of the three tested sets, with the mean JND (and associated standard error) overlaid on the respective set, separately for Experiment 1 (a) and Experiment 2 (b). Thus, taken together, evaluation of both the mean and sensitivity of the participants' interval estimates demonstrated not only that ensemble coding in the temporal domain is accurate and consistent, but also that the geometric mean is used as the predominant averaging scheme for performing the task.
Model simulations
Although our results favor the geometric averaging scheme, one might argue that participants adopt alternative schemes to simple, equally weighted, arithmetic or geometric averaging. For instance, the weight of an interval in the averaging process might be influenced by the length and/or the position of that interval in the sequence. For example, a long interval might engage more attention than a short interval, such that weights are assigned to intervals according to their lengths. Alternatively, short intervals might be assigned higher weights. This would be in line with an animal study65, in which pigeons received reinforcement after varying delay intervals.
The pigeons assigned greater weight to short delays, as reflected by an inverse relationship between delay and efficacy of reinforcement. If each interval is weighted precisely in proportion to its inverse (reciprocal), the result would be harmonic averaging, that is, the reciprocal of the arithmetic mean of the reciprocals of the presented ensemble intervals (i.e., \(M_{h} = n\left( \sum\nolimits_{i = 1}^{n} \frac{1}{x_{i}} \right)^{-1}\)). An everyday example of the harmonic mean: when one drives from A to B at a speed of 90 km/h and returns at 45 km/h, the average speed for the round trip is the harmonic mean of 60 km/h, not the arithmetic or the geometric mean. To further examine how closely the perceived ensemble means, reflected by the PSEs, match what would be expected if participants had been performing different types of averaging (arithmetic, geometric, weighted, and harmonic), as well as to explore the effect of an underestimation bias that we observed for the visual modality, we compared and contrasted four model simulations. All four models assume that each interval was corrupted by noise, where the noise scales with interval length according to the scalar property1. In more detail, the arithmetic-, weighted-, and harmonic-mean models all assume that each perceived interval is corrupted by normally distributed noise which follows the scalar property: $$ T_{i} \sim N\left( \mu_{i}, \mu_{i} w_{f} \right), $$ where \(T_{i}\) is the perceived duration of interval \(i\), \(\mu_{i}\) is its physical duration, and \(w_{f}\) is the Weber scaling factor. In contrast, the geometric-averaging model assumes that the internal representation of each interval is encoded on a logarithmic timeline, and all intervals are equally affected by the noise, which implicitly incorporates the scalar property: $$ \log(T_{i}) \sim N\left( \log(\mu_{i}), \sigma_{t} \right), $$ where \(\sigma_{t}\) is the standard deviation of the noise. Given that the perceived duration is subject to various types of contextual modulation (such as the central-tendency bias28,29,30) and modality differences55, individual perceived intervals might be biased. To simplify the simulation, we assume a general bias in ensemble averaging, which follows the normal distribution: $$ B \sim N\left( \mu_{b}, \sigma \right). $$ Accordingly, the arithmetic (\(M_{A}\)) and harmonic (\(M_{H}\)) averages of the five intervals in our experiments are given by: $$ M_{A} = \sum_{i = 1}^{5} T_{i}/5 + B, $$ $$ M_{H} = 5\Big/\left( \sum_{i = 1}^{5} 1/T_{i} \right) + B. $$ In the weighted-mean model, the intervals are weighted by their relative duration within the set, and the weighted intervals are subject to normally distributed noise and averaged, with a general bias added to the average: $$ M_{W} = \sum_{i = 1}^{5} \left( w_{i} T_{i} \right)/5 + B, $$ where the weight \(w_{i} = \mu_{i}/\sum\nolimits_{i = 1}^{5} \mu_{i}\). The geometric-mean model assumes that the presented intervals are first averaged on a logarithmic scale, and corrupted independently by noise and the general bias, while the ensemble average is then back-transformed into the linear scale for 'responding': $$ M_{G} = e^{\sum_{i = 1}^{5} \log\left( T_{i} \right)/5 + B}. $$ It should be noted that the comparison intervals could also be corrupted by noise. In addition, trial-to-trial variation of the comparison intervals may introduce the central-tendency bias28,29,30.
However, the central-tendency bias does not shift the mean PSE, which is the measure we focused on here. Thus, for the sake of simplicity, we omit the variation of the comparison intervals in the simulation. Evaluation of each of the above models was based on 100,000 rounds of simulation for each interval set (per model). For the arithmetic, geometric, and weighted means, the noise parameters (\(w_{f}\) and \(\sigma\)) make no difference to the average prediction, given that, over a large number of simulations, the influence of noise on the linear interval averaging would be zero (i.e., the mean of the noise distribution). Therefore, the predictions for these models are based on a noise-free model version (i.e., the noise parameters were set to zero), with the bias parameter (\(\mu_{b}\)) chosen to minimize the sum of squared distances between the model predictions and the average PSEs from each experiment. For the harmonic mean, owing to the non-linear transformation, the noise does make a difference to the average prediction; here, the best parameters, which minimize the sum of squared errors (i.e., the sum of squared differences between the model predictions and the observed PSEs), were determined by grid search, that is, by evaluating the model for all combinations of parameters on a grid covering the range of the most plausible values for each parameter and selecting the combination that minimized the error on that grid. Among the four models, the model using the geometric mean provides the closest fit to the (pattern of the) average PSEs observed in both experiments (see Fig. 7). By visual inspection, across the three interval sets, the pattern of the average PSEs is the closest to that predicted by the geometric mean, which makes the same predictions for Sets 1 and 3. Note, though, that the PSE observed for Set 1 slightly differs from that for Set 3, by being shifted somewhat in the direction of the prediction based on the arithmetic mean (i.e., shifted towards the PSE for Set 2). The harmonic-mean model predicts the PSE to be smaller for Set 1 than for Set 3, which was, however, not the case in either experiment. Under the weighted-mean model, the PSE would be expected to be largest for Set 1, which deviates even more from the observed pattern. Predicted and observed PSEs for Experiment 1 (a) and Experiment 2 (b). The filled circles show the observed PSEs (i.e., the grand mean PSEs, which are also shown in Fig. 4), and the error bars represent the associated standard errors; the lines represent the predictions of the four models described in the text. Furthermore, as is also clear by visual inspection, there was a greater bias in the direction of shorter durations in the visual compared to the auditory experiment (witness the lower PSEs in Fig. 7b compared to Fig. 7a), which was reflected in a difference in the bias parameter (\(\mu_{b}\)). The value of the bias parameter associated with the best fit of the geometric-mean model was − 0.04 for Experiment 1 (auditory) and − 0.20 for Experiment 2 (visual), which correspond to a shortening of about 4% in the auditory and 18% in the visual experiment. For the arithmetic and weighted-mean models, both bias parameters reflect a larger degree of shortening compared to the geometric-mean model, while the bias parameters of the harmonic-mean model were somewhat smaller than those of the geometric-mean model.
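To make the simulation procedure concrete, here is a compact Python sketch of the four averaging models as specified above, with a Monte-Carlo estimate of the predicted ensemble mean for one interval set. The interval set and the noise parameter values are illustrative guesses rather than the fitted values reported for the experiments, the bias parameters are simply left at zero (in the paper they are fitted per model), and the clipping of non-positive samples is an added safeguard not mentioned in the text.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_models(mu, n_sim=100_000, w_f=0.1, sigma_t=0.1, mu_b=0.0, sigma_b=0.0):
    # mu: physical durations of the five sample intervals (ms)
    mu = np.asarray(mu, dtype=float)
    n = len(mu)
    # Linear-scale percepts with scalar-property noise: T_i ~ N(mu_i, mu_i * w_f)
    T = np.clip(rng.normal(mu, mu * w_f, size=(n_sim, n)), 1e-6, None)
    # Log-scale percepts for the geometric model: log(T_i) ~ N(log(mu_i), sigma_t)
    logT = rng.normal(np.log(mu), sigma_t, size=(n_sim, n))
    # General bias B ~ N(mu_b, sigma_b); fitted per model in the paper, zero here
    B = rng.normal(mu_b, sigma_b, size=n_sim)
    w = mu / mu.sum()                                  # relative-duration weights

    M_A = T.mean(axis=1) + B                           # arithmetic mean
    M_H = n / (1.0 / T).sum(axis=1) + B                # harmonic mean
    M_W = (w * T).sum(axis=1) / n + B                  # weighted mean, as specified above
    M_G = np.exp(logT.mean(axis=1) + B)                # geometric mean (bias on the log scale)

    return {name: m.mean() for name, m in
            [("arithmetic", M_A), ("harmonic", M_H), ("weighted", M_W), ("geometric", M_G)]}

# Hypothetical five-interval set (ms); the study's actual sets are defined in the Methods
print(simulate_models([400, 600, 800, 1000, 1200]))
```

In the same spirit, the grid search mentioned above would simply loop such simulations over a grid of candidate noise and bias values and keep the combination with the smallest sum of squared deviations from the observed PSEs.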
The aim of the present study was to reveal the internal encoding of subjective time by examining intuitive ensemble averaging in the time domain. The underlying idea was that ensemble summary statistics are computed at a low level of temporal processing, bypassing high-level cognitive decoding strategies. Accordingly, ensemble averaging of time intervals may directly reflect the fundamental internal representation of time. Thus, if the internal representation of the timeline is logarithmic, basic averaging should be close to the geometric mean (see Footnote 1); alternatively, if time intervals are encoded linearly, ensemble averaging should be close to the arithmetic mean. We tested these predictions by comparing and contrasting ensemble averaging for three sets of time intervals characterized by differential patterns of the geometric and arithmetic means (see Fig. 1b). Critically, the pattern of ensemble averages we observed most closely matched that of the geometric mean (rather than those of the arithmetic, weighted, or harmonic means), and this was the case with both auditory (Experiment 1) and visual intervals (Experiment 2) (see the model-simulation results in Fig. 7). Although some 30% of the participants appeared to prefer arithmetic averaging, the majority showed a pattern consistent with geometric averaging. These findings thus lend support to our central hypothesis: regardless of the sensory modality, intuitive ensemble averaging of time intervals (at least in the 300- to 1300-ms range) is based on logarithmically coded time, that is, the subjective timeline is logarithmically scaled. Unlike ensemble averaging of visual properties (such as telling the mean size or mean facial expression of simultaneously presented objects), there is a pragmatic issue of how we can average (across time) in the temporal domain; in Wearden and Jones's16 words, 'can people do this at all?' (p. 1295). Wearden and Jones16 asked participants to average three consecutively presented durations and compare their mean to that of a subsequently presented comparison duration. They found that participants were indeed able to extract the (arithmetic) mean; moreover, the estimated means remained unaffected by variations in the spacing of the sample durations. In the current study, by adopting the averaging task for multiple temporal intervals (> 3), we avoided a problem inherent in the temporal bisection task, namely, that it cannot be ruled out that the bisection point falling nearest the geometric mean is the outcome of a ratio comparison24,25, rather than a reflection of the internal timeline (see "Introduction"). Specifically, we hypothesized that temporal ensemble perception may be indicative of a fast and intuitive process likely involving two stages: transformation, either linear or nonlinear, of the sample durations onto a subjective scale66,67,68 and storage in short-term (or working) memory (STM); followed by estimation of the average of the multiple intervals on the subjective scale and then remapping from the subjective to the objective scale. One might assume that the most efficient form of encoding would be linear, avoiding the need for nonlinear transformation. But this is at variance with our finding that, across the three sets of intervals, the averaging judgments followed the pattern predicted by logarithmic encoding (for both visual and auditory intervals).
The use of logarithmic encoding may be due to the limited capacity of STM: uncompressed intervals require more storage space ('bits') than logarithmically compressed intervals. The brain appears to have chosen the latter for efficient STM storage in the first stage. However, nonlinear, logarithmic encoding in stage 1 could give rise to a computational cost for the averaging process in stage 2: averaging intervals on the objective, external scale would require the individual encoded intervals to be first transformed back from the subjective to the objective scale, which, being computationally expensive, would reduce processing speed. By contrast, arithmetic averaging on the subjective scale would be computationally efficient, as it requires only one remapping step, of the averaged subjective interval onto the objective scale. Intuitive ensemble processing of time appears to have opted for the latter, ensuring computational efficiency. Thus, given that the subjective scale is logarithmic, intuitive averaging would yield the geometric mean. It could, of course, be argued that participants may adopt alternative weighting schemes to simple (equally weighted) arithmetic or geometric averaging. For example, the weight of an interval in the averaging process might be influenced by the length of that interval and/or the position of that interval within the sequence. Thus, for example, a long interval might engage more attention than a short interval, such that weights are assigned to the intervals according to their lengths. Alternatively, greater weight might be assigned to shorter intervals, consistent with animal studies. For instance, Killeen65, in a study with pigeons, found that trials with short-delay reinforcement (with food tokens) had higher impact than trials with long-delay reinforcement, biasing the animals to respond earlier than the arithmetic and geometric mean interval, but close to the harmonic mean. We simulated such alternative averaging strategies and found that the prediction of geometric averaging was still superior to those of arithmetic, weighted, and harmonic averaging: none of the three alternative schemes could explain the patterns we observed in Experiments 1 and 2 better than geometric averaging. Thus, we are confident that intuitive ensemble averaging is best predicted by the geometric mean. Of course, it would be possible to think of various other, complex weighting schemes that we did not explore in our modeling. However, by Occam's razor, our observed data patterns favor the simple geometric-averaging account. Logarithmic representation of stimulus intensity, such as of loudness or weight, was proposed by Fechner over one and a half centuries ago69, based on the fact that the JND is proportional to stimulus intensity (Weber's law). It has been shown that, for the same amount of information (quantized levels), the logarithmic scale provides the minimal expected relative error that optimizes communication efficiency, given that neural storage of sensory or magnitude information is capacity-limited70. Accordingly, logarithmic timing would provide a good solution for coping with limited STM capacity to represent longer intervals.
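The point that arithmetic averaging on a logarithmic subjective scale, followed by a single remapping back to the objective scale, yields the geometric mean can be checked with a two-line calculation; the durations below are arbitrary example values, not the study's stimuli.

```python
import math

intervals = [400, 600, 800, 1000, 1200]          # hypothetical sample durations (ms)

# Stage 2 on the subjective (logarithmic) scale: arithmetic mean of the log-coded intervals,
# followed by a single remapping back to the objective scale.
subjective_average = sum(math.log(t) for t in intervals) / len(intervals)
remapped = math.exp(subjective_average)

geometric = math.prod(intervals) ** (1 / len(intervals))   # geometric mean, computed directly
arithmetic = sum(intervals) / len(intervals)               # arithmetic mean, for comparison

# remapped and geometric agree (~746 ms), both falling below the arithmetic mean (800 ms)
print(round(remapped, 1), round(geometric, 1), arithmetic)
```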
However, as argued by Gallistel71, logarithmic encoding makes valid computations problematic: "Unless recourse is had to look-up tables, there is no way to implement addition and subtraction, because the addition and subtraction of logarithmic magnitudes corresponds to the multiplication and division of the quantities they refer to" (p. 8). We propose that the ensuing computational complexity pushed intuitive ensemble averaging onto the internal, subjective scale—rather than the external, objective scale, which would have required multiple nonlinear transformations. Thus, our results join the increasing body of studies suggesting that, like other magnitudes72,73, time is represented internally on a logarithmic scale and intuitive averaging processes are likely bypassing higher-level cognitive computations. Higher-level computations based on the external, objective scale can be acquired through educational training, and this is linked to mathematical competency37,72,74. Such high-level computations are likely to become involved (at least to some extent) in magnitude estimation, which would explain why investigations of interval averaging have produced rather mixed results15,16,31. Even in the present study, the patterns exhibited by some of the participants could not be explained by purely geometric encoding, which may well be attributable to the involvement of such higher processes. Interestingly, a recent study reported that, under dual-task conditions with an attention-demanding secondary task taxing visual working memory, the mapping of number onto space changed from linear to logarithmic75. This provides convergent support for our proposal of an intuitive averaging process that operates with a minimum of cognitive resources. Another interesting finding of the present study concerns the overall underestimation of the (objective) mean interval duration, which was evident for all three sets of intervals and for both modalities (though it was more marked with visual intervals). This general underestimation is consistent with the subjective 'shortening effect': a source of bias reducing individual durations in memory76,77. The underestimation was less pronounced in the auditory (than the visual) modality, consistent with the classic 'modality effect' of auditory events being judged as longer than visual events. The dominant account of this is that temporal information is processed with higher resolution in the auditory than in the visual domain30,55,58,78. Given the underestimation bias, our analysis approach was to focus on the global pattern of observed ensemble averages across multiple interval sets, rather than examining whether the estimated average for each individual set was closer to the arithmetic or the geometric mean. We did obtain a consistent pattern across all three sets and for both modalities, underpinned by strong statistical power. We therefore take participants' performance to genuinely reflect an intuitive process of temporal ensemble averaging, where the average lies close to the geometric mean. Another noteworthy finding was that the JNDs were larger in the visual than in the auditory modality (Fig. 6), indicative of higher uncertainty, or more random guessing, in ensemble averaging in the visual domain. As random guessing would corrupt the effect we aimed to observe79,80,81, this factor would have obscured the underlying pattern more in the visual than in the auditory modality. 
To check for such a potential impact of random responses on temporal averaging, we fitted additional psychometric functions to the original response data from our visual experiment. These fits used the logistic psychometric function with and without a lapse-rate parameter, as well as a mixed model—of both temporal responses, modeled by a gamma distribution, and non-temporal responses, modelled by an exponential distribution—proposed by Laude et al.81, and finally a model with the non-temporal component from the model of Laude et al. combined with the logistic psychometric function. We found that the model of Laude et al. did not improve the quality of the fit sufficiently to justify the extra parameters, as evaluated using the Akaike Information Criterion (AIC), and adding a lapse rate improved the AIC only slightly (average AIC: logistic with no lapse rate: 99.1, gamma with non-temporal responses: 102, logistic with non-temporal responses: 99.3, and logistic with lapse rate: 97.9). Importantly, the overall pattern of the PSEs remained the same when the PSEs were estimated from a psychometric function with a lapse rate parameter (set 1: 591 ms; set 2: 629 ms; set 3: 578 ms): the PSE remained significantly larger for Set 2 compared to Set 1 (t(14) = 2.56, p = 0.02) and for Set 2 compared to Set 3 (t(14) = 2.84, p = 0.01), without a significant difference between Sets 1 and 3 (t(14) = 0.76, p = 0.46). Thus, the pattern we observed is rather robust (it does not appear to have been affected substantially by random guessing), favoring geometric averaging not only in the auditory but also in the visual modality. In summary, the present study provides behavioral evidence supporting a logarithmic representation of subjective time, and that intuitive ensemble averaging is based on the geometric mean. Even though the validity of behavioral studies is being increasingly acknowledged, achieving a full understanding of human timing requires a concerted research effort from both the psychophysical and neural perspectives. Accordingly, future investigations (perhaps informed by our work) would be required to reveal the—likely logarithmic—neural representation of the inner timeline. The data and codes for all experiments are available at: https://github.com/msenselab/Ensemble.OpenCodes. Gibbon, J. Scalar expectancy theory and Weber's law in animal timing. Psychol. Rev. 84, 279–325 (1977). Church, R. M. Properties of the internal clock. Ann. N. Y. Acad. Sci. 423, 566–582 (1984). ADS CAS PubMed Article PubMed Central Google Scholar Buhusi, C. V. & Meck, W. H. What makes us tick? Functional and neural mechanisms of interval timing. Nat. Rev. Neurosci. 6, 755–765 (2005). CAS PubMed Article PubMed Central Google Scholar Eagleman, D. M. & Pariyadath, V. Is subjective duration a signature of coding efficiency?. Philos. Trans. R. Soc. Lond. B Biol. Sci. 364, 1841–1851 (2009). PubMed PubMed Central Article Google Scholar Matthews, W. J. & Meck, W. H. Time perception: The bad news and the good. Wiley Interdiscip. Rev. 5, 429–446 (2014). Matell, M. S. & Meck, W. H. Cortico-striatal circuits and interval timing: Coincidence detection of oscillatory processes. Cogn. Brain Res. 21, 139–170 (2004). Matell, M. S., Meck, W. H. & Nicolelis, M. A. L. Interval timing and the encoding of signal duration by ensembles of cortical and striatal neurons. Behav. Neurosci. 117, 760–773 (2003). PubMed Article PubMed Central Google Scholar Buonomano, D. V. & Karmarkar, U. R. How do we tell time ?. Neuroscientist 8, 42–51 (2002). 
Gu, B. M., van Rijn, H. & Meck, W. H. Oscillatory multiplexing of neural population codes for interval timing and working memory. Neurosci. Biobehav. Rev. 48, 160–185 (2015). Oprisan, S. A. & Buhusi, C. V. Modeling pharmacological clock and memory patterns of interval timing in a striatal beat-frequency model with realistic, noisy neurons. Front. Integr. Neurosci. 5, 52 (2011). Wilkes, J. T. & Gallistel, C. R. Information theory, memory, prediction, and timing in associative learning. In Computational Models of Brain and Behavior (ed. Moustafa, A. A.). https://doi.org/10.1002/9781119159193.ch35 (2017). Simen, P., Balci, F., de Souza, L., Cohen, J. D. & Holmes, P. A model of interval timing by neural integration. J. Neurosci. 31, 9238–9253 (2011). CAS PubMed PubMed Central Article Google Scholar Balci, F. & Simen, P. Decision processes in temporal discrimination. Acta Psychol. 149, 157–168 (2014). Simen, P., Vlasov, K. & Papadakis, S. Scale (in)variance in a unified diffusion model of decision making and timing. Psychol. Rev. 123, 151–181 (2016). Wearden, J. H. & Ferrara, A. Stimulus spacing effects in temporal bisection by humans. Q. J. Exp. Psychol. B 48, 289–310 (1995). Wearden, J. H. & Jones, L. A. Is the growth of subjective time in humans a linear or nonlinear function of real time?. Q. J. Exp. Psychol. 60, 1289–1302 (2007). Brown, G. D. A., McCormack, T., Smith, M. & Stewart, N. Identification and bisection of temporal durations and tone frequencies: Common models for temporal and nontemporal stimuli. J. Exp. Psychol. Hum. Percept. Perform. 31, 919–938 (2005). Yi, L. Do rats represent time logarithmically or linearly?. Behav. Process. 81, 274–279 (2009). Gibbon, J. & Church, R. M. Time left: Linear versus logarithmic subjective time. J. Exp. Psychol. 7, 87–108 (1981). Jozefowiez, J., Gaudichon, C., Mekkass, F. & Machado, A. Log versus linear timing in human temporal bisection: A signal detection theory study. J. Exp. Psychol. Anim. Learn. Cogn. 44, 396–408 (2018). Kopec, C. D. & Brody, C. D. Human performance on the temporal bisection task. Brain Cogn. 74, 262–272 (2010). Church, R. M. & Deluty, M. Z. Bisection of temporal intervals. J. Exp. Psychol. Anim. Behav. Process. 3, 216–228 (1977). Stubbs, D. A. Scaling of stimulus duration by pigeons. J. Exp. Anal. Behav. 26, 15–25 (1976). Allan, L. G. & Gibbon, J. Human bisection at the geometric mean. Learn. Motiv. 22, 39–58 (1991). Allan, L. G. The influence of the scalar timing model on human timing research. Behav. Process. 44, 101–117 (1998). Penney, T. B., Brown, G. D. A. & Wong, J. K. L. Stimulus spacing effects in duration perception are larger for auditory stimuli: Data and a model. Acta Psychol. 147, 97–104 (2014). Lejeune, H. & Wearden, J. H. Vierordt's the experimental study of the time sense (1868) and its legacy. Eur. J. Cogn. Psychol. 21, 941–960 (2009). Hollingworth, H. L. The central tendency of judgment. J. Philos. Psychol. Sci. Methods 7, 461–469 (1910). Jazayeri, M. & Shadlen, M. N. Temporal context calibrates interval timing. Nat. Neurosci. 13, 1020–1026 (2010). Shi, Z., Church, R. M. & Meck, W. H. Bayesian optimization of time perception. Trends Cogn. Sci. 17, 556–564 (2013). PubMed Article Google Scholar Jones, M. R. & McAuley, J. D. Time judgments in global temporal contexts. Percept. Psychophys. 67, 398–417 (2005). Dehaene, S., Izard, V., Spelke, E. & Pica, P. Log or linear? Distinct intuitions of the number scale in Western and Amazonian indigene cultures. Science 320, 1217–1220 (2008). 
ADS MathSciNet CAS PubMed PubMed Central MATH Article Google Scholar Siegler, R. S. & Booth, J. L. Development of numerical estimation in young children. Child Dev. 75, 428–444 (2004). Berteletti, I., Lucangeli, D., Piazza, M., Dehaene, S. & Zorzi, M. Numerical estimation in preschoolers. Dev. Psychol. 46, 545–551 (2010). Booth, J. L. & Siegler, R. S. Developmental and individual differences in pure numerical estimation. Dev. Psychol. 42, 189–201 (2006). Barth, H. C. & Paladino, A. M. The development of numerical estimation: Evidence against a representational shift. Dev. Sci. 14, 125–135 (2011). Libertus, M. E., Feigenson, L. & Halberda, J. Preschool acuity of the approximate number system correlates with school math ability. Dev. Sci. 14, 1292–1300 (2011). Ariely, D. Seeing sets: Representation by statistical properties. Psychol. Sci. 12, 157–162 (2001). CAS PubMed Article Google Scholar Whitney, D. & Yamanashi Leib, A. Ensemble perception. Annu. Rev. Psychol. 69, 105–129 (2018). Chong, S. C. & Treisman, A. Representation of statistical properties. Vis. Res. 43, 393–404 (2003). Chong, S. C. & Treisman, A. Statistical processing: Computing the average size in perceptual groups. Vis. Res. 45, 891–900 (2005). Webster, J., Kay, P. & Webster, M. A. Perceiving the average hue of color arrays. J. Opt. Soc. Am. A 31, A283 (2014). ADS Article Google Scholar Haberman, J. & Whitney, D. Ensemble perception: Summarizing the scene and broadening the limits of visual processing. In Oxford Series in Visual Cognition. From Perception to Consciousness: Searching with Anne Treisman (Eds Wolfe, J. & Robertson, L.) 339–349. https://doi.org/10.1093/acprof:osobl/9780199734337.003.0030 (Oxford University Press, 2012). Kramer, R. S. S., Ritchie, K. L. & Burton, A. M. Viewers extract the mean from images of the same person: A route to face learning. J. Vis. 15, 1–9 (2015). Leib, A. Y., Kosovicheva, A. & Whitney, D. Fast ensemble representations for abstract visual impressions. Nat. Commun. 7, 13186 (2016). ADS CAS PubMed PubMed Central Article Google Scholar Haberman, J. & Whitney, D. Rapid extraction of mean emotion and gender from sets of faces. Curr. Biol. 17, 751–753 (2007). Curtis, D. W. & Mullin, L. C. Judgments of average magnitude: Analyses in terms of the functional measurement and two-stage models. Percept. Psychophys. 18, 299–308 (1975). Piazza, E. A., Sweeny, T. D., Wessel, D., Silver, M. A. & Whitney, D. Humans use summary statistics to perceive auditory sequences. Psychol. Sci. 24, 1389–1397 (2013). Anderson, N. H. Application of a weighted average model to a psychophysical averaging task. Psychon. Sci. 8, 227–228 (1967). Schweickert, R., Han, H. J., Yamaguchi, M. & Fortin, C. Estimating averages from distributions of tone durations. Atten. Percept. Psychophys. 76, 605–620 (2014). Chen, L., Zhou, X., Müller, H. J. & Shi, Z. What you see depends on what you hear: Temporal averaging and crossmodal integration. J. Exp. Psychol. Gen. 147, 1851–1864 (2018). Hellström, Å. The time-order error and its relatives: Mirrors of cognitive processes in comparing. Psychol. Bull. 97, 35–61 (1985). Le Dantec, C. et al. ERPs associated with visual duration discriminations in prefrontal and parietal cortex. Acta Psychol. 125, 85–98 (2007). Shi, Z., Ganzenmüller, S. & Müller, H. J. Reducing bias in auditory duration reproduction by integrating the reproduced signal. PLoS ONE 8, e62065 (2013). Wearden, J. H., Edwards, H., Fakhri, M. & Percival, A. 
Why "Sounds Are Judged Longer Than Lights''': Application of a model of the internal clock in humans". Q. J. Exp. Psychol. Sect. B 51, 97–120 (1998). Wearden, J. H. When do auditory/visual differences in duration judgments occur?. Q. J. Exp. Psychol. 59, 1709–1724 (2006). Goldstone, S. & Lhamon, W. T. Studies of auditory-visual differences in human time judgment. 1. Sounds are judged longer than lights. Percept. Mot. Skills 39, 63–82 (1974). Penney, T. B., Gibbon, J. & Meck, W. H. Differential effects of auditory and visual signals on clock speed and temporal memory. J. Exp. Psychol. Hum. Percept. Perform. 26, 1770–1787 (2000). Ivry, R. B. & Schlerf, J. E. Dedicated and intrinsic models of time perception. Trends Cogn. Sci. 12, 273–280 (2008). Shen, Y., Dai, W. & Richards, V. M. A MATLAB toolbox for the efficient estimation of the psychometric function using the updated maximum-likelihood adaptive procedure. Behav. Res. Methods 47, 13–26 (2015). Kass, R. E. & Raftery, A. E. Bayes factors. J. Am. Stat. Assoc. 90, 773–795 (1995). MathSciNet MATH Article Google Scholar Rouder, J. N., Speckman, P. L., Sun, D., Morey, R. D. & Iverson, G. Bayesian t tests for accepting and rejecting the null hypothesis. Psychon. Bull. Rev. 16, 225–237 (2009). Ganzenmüller, S., Shi, Z. & Müller, H. J. Duration reproduction with sensory feedback delay: Differential involvement of perception and action time. Front. Integr. Neurosci. 6, 1–11 (2012). Shipley, T. Auditory flutter-driving of visual flicker. Science 145, 1328–1330 (1964). Killeen, P. On the measurement of reinforcement frequency in the study of preference. J. Exp. Anal. Behav. 11, 263–269 (1968). Gibbon, J. The structure of subjective time: How time flies. Psychol. Learn. Motiv. https://doi.org/10.1016/s0079-7421(08)60017-1 (1986). Johnson, K. O., Hsiao, S. S. & Yoshioka, T. Neural coding and the basic law of psychophysics. Neuroscientist 8, 111–121 (2002). Taatgen, N. A., van Rijn, H. & Anderson, J. An integrated theory of prospective time interval estimation: The role of cognition, attention, and learning. Psychol. Rev. 114, 577–598 (2007). Fechner, G. T. Elemente der Psychophysik, Vol. I and II (Breitkopf and Härtel, Leipzig, 1860). Sun, J. Z., Wang, G. I., Goyal, V. K. & Varshney, L. R. A framework for Bayesian optimality of psychophysical laws. J. Math. Psychol. 56, 495–501 (2012). Gallistel, C. R. Mental magnitudes. In Space, Time, and Number in the Brain: Searching for the Foundations of Mathematical Thought (eds. Dehaene, S. & Brannon, E. M.) 3–12 (Elsevier, Amsterdam, 2011). Dehaene, S. Subtracting pigeons: Logarithmic or linear?. Psychol. Sci. 12, 244–246 (2001). Roberts, W. A. Evidence that pigeons represent both time and number on a logarithmic scale. Behav. Proc. 72, 207–214 (2006). Anobile, G. et al. Spontaneous perception of numerosity in pre-school children. Proc. Biol. Sci. 286, 20191245 (2019). Anobile, G., Cicchini, G. M. & Burr, D. C. Linear mapping of numbers onto space requires attention. Cognition 122, 454–459 (2012). Meck, W. H. Selective adjustment of the speed of internal clock and memory processes. J. Exp. Psychol. Anim. Behav. Process. 9, 171–201 (1983). Spetch, M. L. & Wilkie, D. M. Subjective shortening: A model of pigeons' memory for event duration. J. Exp. Psychol. Anim. Behav. Process. 9, 14–30 (1983). Gu, B. M., Cheng, R. K., Yin, B. & Meck, W. H. Quinpirole-induced sensitization to noisy/sparse periodic input: Temporal synchronization as a component of obsessive-compulsive disorder. Neuroscience 179, 143–150 (2011). 
Daniels, C. W. & Sanabria, F. Interval timing under a behavioral microscope: Dissociating motivational and timing processes in fixed-interval performance. Learn. Behav. 45, 29–48 (2017). Lejeune, H. & Wearden, J. H. The comparative psychology of fixed-interval responding: Some quantitative analyses. Learn. Motiv. 22, 84–111 (1991). Laude, J. R., Daniels, C. W., Wade, J. C. & Zentall, T. R. I can time with a little help from my friends: effect of social enrichment on timing processes in Pigeons (Columba livia). Anim. Cogn. 19, 1205–1213 (2016). Open Access funding enabled and organized by Projekt DEAL. This work was supported by the German Research Foundation (DFG) Grants SH166/3-2, awarded to ZS. General and Experimental Psychology, Psychology Department, LMU Munich, 80802, Munich, Germany Yue Ren, Fredrik Allenmark, Hermann J. Müller & Zhuanghua Shi Y.R. and Z.S. conceived the study and analyzed the data, F.A. did the modeling and simulation. Y.R., F.A., H.J.M, and Z.S. drafted and revised the manuscript. Y.R. prepared Fig. 2. Correspondence to Zhuanghua Shi. Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/. Ren, Y., Allenmark, F., Müller, H.J. et al. Logarithmic encoding of ensemble time intervals. Sci Rep 10, 18174 (2020). https://doi.org/10.1038/s41598-020-75191-6 Accepted: 07 October 2020
September 2017, 16(5): 1893-1914. doi: 10.3934/cpaa.2017092 Dynamics of some stochastic chemostat models with multiplicative noise T. Caraballo, M. J. Garrido-Atienza and J. López-de-la-Cruz Dpto. Ecuaciones Diferenciales y Análisis Numérico, Facultad de Matemáticas, Universidad de Sevilla, C/ Tarfia s/n. Sevilla, 41012, Spain * Corresponding author: [email protected] Received August 2016 Revised March 2017 Published May 2017 Fund Project: Partially supported by FEDER and Ministerio de Economía y Competitividad under grant MTM2015-63723-P and Junta de Andalucía under Proyecto de Excelencia P12-FQM-1492. In this paper we study two stochastic chemostat models, with and without wall growth, driven by a white noise. Specifically, we analyze the existence and uniqueness of solutions for these models, as well as the existence of the random attractor associated to the random dynamical system generated by the solution. The analysis will be carried out by means of the well-known Ornstein-Uhlenbeck process, that allows us to transform our stochastic chemostat models into random ones. Keywords: Chemostat, stochastic differential equations, multiplicative noise, random dynamical systems, random attractors. Mathematics Subject Classification: Primary: 34C11, 34F05; Secondary: 60H10. Citation: T. Caraballo, M. J. Garrido-Atienza, J. López-de-la-Cruz. Dynamics of some stochastic chemostat models with multiplicative noise. Communications on Pure & Applied Analysis, 2017, 16 (5) : 1893-1914. doi: 10.3934/cpaa.2017092
Figure 2. Stochastic chemostat without wall growth. Values of parameters: $S_0=5$, $x_0=10$, $S^0=1$, $D=2$, $a=0.6$, $m=1$, $\alpha=0.2$ (left) and $\alpha=0.5$ (right)
Figure 5. Stochastic chemostat with wall growth. Values of parameters: $S_0=5$, $x_{01}=10$, $x_{02}=10$, $S^0=1$, $D=2$, $a=0.6$, $m=5$, $b=0.5$, $r_1=0.2$, $r_2=0.8$, $\nu=1.2$, $c=1$, $\alpha=0.2$
Table 1. Internal structure of the random attractor - random chemostat model with wall growth (asymptotic bounds / attractor internal structure)
Case A: $b\nu c_\xi-m\geq 0$
(A-1) $\nu+\frac{\alpha^2}{2}>c$: $\lim_{t\to\infty}\sigma(t)\geq S^0D\rho^*_\sigma(\omega)-\varepsilon$, $\lim_{t\to\infty}\kappa(t)\leq \varepsilon$
(A-2) $\nu+\frac{\alpha^2}{2}<c$: $\lim_{t\to\infty}\sigma(t)\geq S^0D\rho^*_\sigma(\omega)-\varepsilon$; $\kappa(t)$ does not provide any extra information
Case B: $b\nu c_\xi-m< 0$
(B-1) $\nu+\frac{\alpha^2}{2}>c$: $\lim_{t\to\infty}\sigma(t)\geq S^0D\rho^*_\sigma(\omega)-\varepsilon$
(B-2) $\nu+\frac{\alpha^2}{2}<c$: $\sigma(t)$ does not provide any extra information
Table 2. Internal structure of the random attractor - stochastic chemostat model with wall growth (asymptotic bounds / attractor internal structure)
Case A: $b\nu c_\xi-m\geq 0$
(A-1) $\nu+\frac{\alpha^2}{2}>c$: $\lim_{t\to\infty}S(t)\geq S^0D\rho^*_\sigma(\omega)e^{-\alpha z^*(\omega)}-\varepsilon$, $\lim_{t\to\infty}\left[x_1(t)+x_2(t)\right]\leq \varepsilon$
(A-2) $\nu+\frac{\alpha^2}{2}<c$: $\lim_{t\to\infty}S(t)\geq S^0D\rho^*_\sigma(\omega)e^{-\alpha z^*(\omega)}-\varepsilon$; $x_1+x_2$ does not provide any extra information
Case B: $b\nu c_\xi-m< 0$
(B-1) $\nu+\frac{\alpha^2}{2}>c$: $\lim_{t\to\infty}S(t)\geq S^0D\rho^*_\sigma(\omega)e^{-\alpha z^*(\omega)}-\varepsilon$
(B-2) $\nu+\frac{\alpha^2}{2}<c$: $S$ does not provide any extra information
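For readers who want to reproduce sample paths of the kind summarized in Figure 2, the following Euler–Maruyama sketch simulates a chemostat with Monod uptake and multiplicative white noise, using the parameter values listed in that caption. It is a generic illustration only (unit yield, the same Wiener increment driving both equations); the precise placement of the noise terms in the paper's model should be checked against its equations.

```python
import numpy as np

rng = np.random.default_rng(1)

def chemostat_em(S_init=5.0, x_init=10.0, S_in=1.0, D=2.0, a=0.6, m=1.0, alpha=0.2,
                 T=10.0, dt=1e-3):
    # Euler-Maruyama scheme for a chemostat with Monod uptake and multiplicative noise:
    #   dS = (D*(S_in - S) - m*S*x/(a + S)) dt + alpha*S dW
    #   dx = (m*S*x/(a + S) - D*x) dt       + alpha*x dW
    n = int(T / dt)
    S = np.empty(n + 1); x = np.empty(n + 1)
    S[0], x[0] = S_init, x_init
    for k in range(n):
        dW = rng.normal(0.0, np.sqrt(dt))
        uptake = m * S[k] * x[k] / (a + S[k])
        S[k + 1] = max(S[k] + (D * (S_in - S[k]) - uptake) * dt + alpha * S[k] * dW, 0.0)
        x[k + 1] = max(x[k] + (uptake - D * x[k]) * dt + alpha * x[k] * dW, 0.0)
    return S, x

S_path, x_path = chemostat_em()          # parameters as in the Figure 2 caption, alpha = 0.2
print(S_path[-1], x_path[-1])
```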
Origin of the term "weight" in representation theory In representation theory, there are the related concepts of weights and roots. Since both are kinds of generalised eigenvalues, and eigenvalues are roots of e.g. the characteristic polynomial, the word "root" makes sense to me (at least, the question is reduced to why zeros of polynomials / equations are called "roots".) But I wondered: Who used the term weight (or poids, or Gewicht, or ...) for the first time? And for what (if any) specific reason? This site does not know the word "weight" in this meaning. (But see the entry "radix" about roots (of equations).) The only thing I could find on the internet is this (unanswered) stackexchange question. rt.representation-theory soft-question ho.history-overview terminology etymology Torsten Schoeneberg It's likely that Elie Cartan originated the use of poids in representation theory of Lie groups. But it's harder to sort out the underlying rationale for the "weight" concept here. In the case of "root" there is a sense historically of having a sort of formal characteristic polynomial attached to the adjoint representation. – Jim Humphreys Jan 18 '14 at 15:57 I think it's likely that this use of the term 'weight' in representation theory comes from the use of the term 'weighted homogeneous' for polynomials that are only homogeneous when the variables are assigned the appropriate 'weights'. I'm pretty sure that this use of 'weights' was common in the 19th century and possibly even before that. When you consider that the maximal torus will be acting on the weight vectors in a manner that is entirely analogous to weights of variables in a weighted homogeneous situations, this terminology seems quite natural. – Robert Bryant Jan 18 '14 at 16:24 Robert Bryant's comment motivates me to mention the "weighty" historical monograph Emergence of the Theory of Lie Groups (Springer, 2000) written by Thomas Hawkins. As usual with terminology such as "weight", the history reaches back into nineteenth century's invariant theory (Cayley, G. Kowalewski) but becomes most relevant to modern Lie theory in the work of Elie Cartan about a century ago. The early part of Chapter 8 in Hawkins' book is most pertinent but not easy reading. Though Cartan's use of the term poids (weight, Gewicht) was not the earliest mathematical occurrence, it does seem to have been the first use in connection with what we now call weights of representations. There is also a long history involving the term "root" (and its offshoot "secondary root"), going back to antiquity, but here the work of Killing anticipates Cartan's more definitive treatment of semisimple Lie groups and what we now call their Lie algebras. The history is not at all easy to untangle, but I think Hawkins was thorough in his study of the development of ideas along with terminology. Terminology in this particular subject should not be taken too seriously, I think, and sometimes the names given to things are either misleading or inappropriate (including concepts named after people). Still, we are stuck with the language, which is almost impossible to change. Jim Humphreys @Torsten: The history of Lie theory goes back a long way, and Tom Hawkins has devoted much of his professional career to understanding it in detail. Of course, a modern researcher need not master so much of the history, but it's an interesting example of the evolution of ideas.
– Jim Humphreys Jan 18 '14 at 18:37 Before someone comments my editing the post with "poids" instead of "poid": this is one of those singular french words taking an 's' (pun intended). – Julien Puydt Jan 19 '14 at 19:23 @Julien: Thanks for the proofreading help. I'm not sure how I wrote "poid", but I should note that "nineteenth century invariant theory" is proper English usage though your version is not. (And for me "relevant" is about as good as "pertinent", though the former word is overused.) – Jim Humphreys Jan 19 '14 at 20:00 Pp. 272 and 288 of Hawkins's book answer the question, consistent with Robert Bryant's comment. Cartan introduced "poids" in papers of 1909 / 1913. According to Hawkins, this was prompted by "Gewichte" in a 1902 paper by G. Kowalewski (an expository version of which is digizeitschriften.de/dms/img/?PPN=GDZPPN002118912). Kowalewski's usage, in turn, comes from 19th century invariant theory. -- Amusing sidenotes: 1) In his French paper in Crelle 1854, Cayley translates weight as "pesanteur". 2) On p. 288, Hawkins writes "poid" without "s", like, well, some of us have done here ... – Torsten Schoeneberg Jan 19 '14 at 22:04
A study of cascading failures in real and synthetic power grid topologies RUSSELL SPIEWAK, SALEH SOLTAN, YAKIR FORMAN, SERGEY V. BULDYREV, GIL ZUSSMAN Journal: Network Science / Volume 6 / Issue 4 / December 2018 Print publication: December 2018 Using the direct current power flow model, we study cascading failures and their spatial and temporal properties in the U.S. Western Interconnection (USWI) power grid. We show that yield (the fraction of demand satisfied after the cascade) has a bimodal distribution typical of a first-order transition. The single line failure leads either to an insignificant power loss or to a cascade which causes a major blackout with yield less than 0.8. The former occurs with high probability if line tolerance α (the ratio of the maximal load a line can carry to its initial load) is greater than 2, while a major blackout occurs with high probability in a broad range of 1 < α < 2. We also show that major blackouts begin with a latent period (with duration proportional to α) during which few lines overload and yield remains high. The existence of the latent period suggests that intervention during early stages of a cascade can significantly reduce the risk of a major blackout. Finally, we introduce the preferential Degree And Distance Attachment model to generate random networks with similar degree, resistance, and flow distributions to the USWI. Moreover, we show that the Degree And Distance Attachment model behaves similarly to the USWI against failures. Transcatheter intervention for coarctation of the aorta* Matthew E. Zussman, Russel Hirsch, Carrie Herbert, Gary E. Stapleton Journal: Cardiology in the Young / Volume 26 / Issue 8 / December 2016 Published online by Cambridge University Press: 02 February 2017, pp. 1563-1567 The use of a three-dimensional print model of an aortic arch to plan a complex percutaneous intervention in a patient with coarctation of the aorta* Nalini Ghisiawan, Carrie E. Herbert, Matthew Zussman, Adam Verigan, Gary E. Stapleton Recently, three-dimensional printing of heart models is being used to plan percutaneous and surgical interventions in patients with CHD. We describe a case where we used a three-dimensional print model to plan a complex percutaneous intervention in a patient with coarctation of the aorta. Closure of a secundum atrial septal defect in two infants with chronic lung disease using the Gore HELEX Septal Occluder Matt E. Zussman, Grace Freire, Shawn D. Cupp, Gary E. Stapleton Journal: Cardiology in the Young / Volume 26 / Issue 1 / January 2016 Published online by Cambridge University Press: 20 January 2015, pp.
79-83 Print publication: January 2016 Children with a secundum atrial septal defect are usually asymptomatic and are referred for elective closure after 3–4 years of age; however, in premature infants with chronic lung disease, bronchopulmonary dysplasia, or pulmonary hypertension, increased pulmonary blood flow secondary to a left-to-right atrial shunt, may exacerbate their condition. Closure of the atrial septal defect in these patients can result in significant clinical improvement. We report the cases of two premature infants with chronic lung disease, who underwent atrial septal defect closure with the Gore HELEX Septal Occluder and discuss the technical aspects of using the device in these patients and their clinical outcomes. Electrospinning of Ultrahigh-Molecular-Weight Polyethylene Nanofibers Dmitry M. Rein, Yachin Cohen, Avner Ronen, Eyal Zussman, Kim Shuster Journal: MRS Online Proceedings Library Archive / Volume 1083 / 2008 Published online by Cambridge University Press: 01 February 2011, 1083-R03-03 Print publication: 2008 The electrospinning method was employed to fabricate extremely fine nanofibers of ultra-high molecular weight polyethylene (UHMWPE) for the first time, using a mixture of solvents with different dielectric constant and conductivity. A novel experimental device for elevated temperature electrospinning of highly volatile and quickly crystallizing polymer solutions and melts was developed. The possibility to produce the highly oriented nanofibers from ultra-high molecular weight polymers suggests new ways for fabrication of ultra-strong, porous, surface modified fibers and single-component nanocomposite yarn with improved properties. Microscale fibre alignment by a three-dimensional sessile drop on a wettable pad S. N. REZNIK, W. SALALHA, A. L. YARIN, E. ZUSSMAN Journal: Journal of Fluid Mechanics / Volume 574 / 10 March 2007 Published online by Cambridge University Press: 15 February 2007, pp. 179-207 Print publication: 10 March 2007 Fluidic assembly provides solutions for assembling particles with sizes from nanometres to centimetres. Fluidic techniques based on patterned shapes of monolayers and capillary forces are widely used to assemble microfabrication devices. Usually, for self-assembly, the precondition is that the components must be mobile in a fluidic environment. In the present work, a shape-directed fluidic self-assembly of rod-like microstructures, such as an optical fibre on a wettable pad is demonstrated experimentally with submicrometre positioning precision. A model of the process is proposed, which accounts for the following two stages of the orientation of a fibre submerged in a sessile drop: (i) the drop melting and spreading over a wettable pad; (ii) fibre reorientation related to the surface-tension-driven shrinkage of the drop surface area. At the end of stage (ii), the fibre is oriented along the pad. The experimental results for the optical-fibre assembly by a solder joint have been compared to the modelling results, and a reasonable agreement has been found. The major outcome of the experiments and modelling is that surface tension forces on the fibre piercing a drop align the fibre rather than the flow owing to the spreading of the drop over the horizontal pad, i.e. stage (ii) mostly contributes to the alignment. Transient and steady shapes of droplets attached to a surface in a strong electric field S. N. REZNIK, A. L. YARIN, A. THERON, E. 
ZUSSMAN Journal: Journal of Fluid Mechanics / Volume 516 / 10 October 2004 Print publication: 10 October 2004 The shape evolution of small droplets attached to a conducting surface and subjected to relatively strong electric fields is studied both experimentally and numerically. The problem is motivated by the phenomena characteristic of the electrospinning of nanofibres. Three different scenarios of droplet shape evolution are distinguished, based on numerical solution of the Stokes equations for perfectly conducting droplets. (i) In sufficiently weak (subcritical) electric fields the droplets are stretched by the electric Maxwell stresses and acquire steady-state shapes where equilibrium is achieved by means of the surface tension. (ii) In stronger (supercritical) electric fields the Maxwell stresses overcome the surface tension, and jetting is initiated from the droplet tip if the static (initial) contact angle of the droplet with the conducting electrode is $\alpha_{s}\,{<}\,0.8\pi $; in this case the jet base acquires a quasi-steady, nearly conical shape with vertical semi-angle $\beta \,{\leq}\, 30^{\circ}$, which is significantly smaller than that of the Taylor cone ($\beta_{T}\,{=}\,49.3^{\circ}$). (iii) In supercritical electric fields acting on droplets with contact angle in the range $0.8\pi \,{<}\,\alpha_{s}\,{<}\,\pi $ there is no jetting and almost the whole droplet jumps off, similar to the gravity or drop-on-demand dripping. The droplet–jet transitional region and the jet region proper are studied in detail for the second case, using the quasi-one-dimensional equations with inertial effects and such additional features as the dielectric properties of the liquid (leaky dielectrics) taken into account. The flow in the transitional and jet region is matched to that in the droplet. By this means, the current–voltage characteristic $I\,{=}\,I(U)$ and the volumetric flow rate $Q$ in electrospun viscous jets are predicted, given the potential difference applied. The predicted dependence $I\,{=}\,I(U)$ is nonlinear due to the convective mechanism of charge redistribution superimposed on the conductive (ohmic) one. For $U\,{=}\,O(10kV)$ and fluid conductivity $\sigma \,{=}\,10^{-4}$ S m$^{-1}$, realistic current values $I\,{=}\,O(10^{2}nA)$ were predicted. Evaluation of Copper Penetration in Low-κ Polymer Dielectrics by Bias-Temperature Stress Alvin L. S. Loke, S. Simon Wong, Niranjan A. Talwalkar, Jeffrey T. Wetzel, Paul H. Townsend, Tsuneaki Tanabe, Raymond N. Vrtis, Melvin P. Zussman, Devendra Kumar Journal: MRS Online Proceedings Library Archive / Volume 565 / 1999 Published online by Cambridge University Press: 10 February 2011, 173 The industry is strongly interested in integrating low-κ dielectrics with Damascene copper. Otherwise, with conventional materials, interconnects cannot continue to scale without limiting circuit performance. Integration of copper wiring with silicon dioxide (oxide) requires barrier encapsulation since copper drifts readily in oxide. An important aspect of integrating copper wiring with low-κ dielectrics is the drift behavior of copper ions in these dielectrics, which will directly impact the barrier requirements and hence integration complexity. This work evaluates and compares the copper drift properties in six low-κ organic polymer dielectrics: parylene-F; benzocyclobutene; fluorinated polyimide; an aromatic hydrocarbon; and two varieties of poly(arylene ether). 
Copper/oxide/polymer/oxide/silicon capacitors are subjected to bias-temperature stress to accelerate penetration of copper from the gate electrode into the polymer. The oxide-sandwiched dielectric stack is used to overcome interface instabilities occurring when a low-κ dielectric is in direct contact with either the gate metal or silicon substrate. The copper drift rates in the various polymers are estimated by electrical techniques, including capacitance-voltage, current-voltage, and current-time measurements. Results correlate well with time-to-breakdown obtained by stressing the capacitor dielectrics. Our study shows that copper ions drift readily into fluorinated polyimide and poly(arylene ether), more slowly into parylene-F, and even more slowly into benzocyclobutene. A qualitative comparison of the chemical structures of the polymers suggests that copper drift in these polymers may possibly be retarded by increased crosslinking and enhanced by polarity in the polymer. Electrical Reliability of Cu and Low-K Dielectric Integration S. Simon Wong, Alvin L. S. Loke, Jeffrey T. Wetzel, Paul H. Townsend, Raymond N. Vrtis, Melvin P. Zussman The recent demonstrations of manufacturable multilevel Cu metallization have heightened interest to integrate Cu and low-K dielectrics for future integrated circuits. For reliable integration of both materials, Cu may need to be encapsulated by barrier materials since Cu ions (Cu+) might drift through low-K dielectrics to degrade interconnect and device integrity. This paper addresses the use of electrical testing techniques to evaluate the Cu+ drift behavior of low-K polymer dielectrics.
Specifically, bias-temperature stress and capacitance-voltage measurements are employed as their high sensitivities are well-suited for examining charge instabilities in dielectrics. Charge instabilities other than Cu+ drift also exist. For example, when low-K polymers come into direct contact with either a metal or Si, interface-related instabilities attributed to electron/hole injection are observed. To overcome these issues, a planar Cu/oxide/polymer/oxide/Si capacitor test structure is developed for Cu+ drift evaluation. Our study shows that Cu+ ions drift readily into poly(arylene ether) and fluorinated polyimide, but much more slowly into benzocyclobutene. A thin nitride cap layer can prevent the penetration. G. D. GuthrieJr. and B. T. Mossman, eds. Health Effects of Mineral Dusts Washington, D.C. (Mineralogical Society of America: Reviews in Mineralogy, Vol. 28), 1993. xvi + 584 pp. Price $28.00 ISBN 0-939950-33-2 J. Zussman Journal: Mineralogical Magazine / Volume 58 / Issue 393 / December 1994 Published online by Cambridge University Press: 05 July 2018, pp. 701-702 Carol Delaney, The Seed and the Soil: Gender and Cosmology in Turkish Village Society, Comparative Studies on Muslim Societies (Berkeley: University of California Press, 1991). Pp. 373. Mira Zussman Journal: International Journal of Middle East Studies / Volume 25 / Issue 1 / February 1993 Published online by Cambridge University Press: 23 April 2009, pp. 177-179 Print publication: February 1993 Scribe, Griot, and Novelist: Narrative Interpreters of the Songhay Empire, by Thomas A. Hale, followed by The Epic of Askia Mohammed, recounted by Nouhou Malio, transcribed and translated by Thomas A. Hale, xiv + 313 pages, maps, transcription, translation, notes, bibliography, index. University Press of Florida; Center for African Studies, Gainesville1990. $29.95. Journal: Review of Middle East Studies / Volume 25 / Issue 1 / July 1991 Published online by Cambridge University Press: 09 March 2016, pp. 52-53 Print publication: July 1991 P-Type Quantum Well Infrared Photodetectors Grown by OMVPE W. S. Hobson, A. Zussman, J. De Jong, B. F. Levine We report on the growth and fabrication of p-doped long wavelength GaAs/AlxGa1−x As quantum well infrared photodetectors (QWIP) grown by organometallic vapor phase epitaxy. The operation of these devices is based on the photocurrent induced through valence band intersubband absorption by holes and, unlike n-doped QWIPs, can utilize normal incidence illumination. Carbon and zinc were used as the p-type dopants in a low-pressure (30 Torr) vertical-geometry reactor. The Zn-doped QWIP consisted of fifty periods of 48 nm-thick undoped Al0.36Ga0.64As barriers and nominally 4 nm-thick doped GaAs quantum wells. Using normal incidence, a quantum efficiency of η = 2.5% and a detectivity of at 77K were obtained for a peak wavelength λp = 6.8 μm and a cutoff wavelength λ∫ =7.6 μm. The C-doped QWIP had 54 nm-thick Al0.31Ga0.69As barriers and exhibited a normal incidence These initial studies indicate the superiority of carbon to zinc as the p-type dopant for these structures. The detectivity of the C-doped QWIPs is about four times less than n-doped QWIPs for the same λp but have the advantage of utilizing normal incidence illumination. A. A. Hodgson (editor). Alternatives to Asbestos—the Pros and Cons. John Wiley & Sons (on behalf of the Society of Chemical Industry), 1989. xiv + 195 pp. Price £43.50. GaAs Quantum Well Infrared Photodetectors Grown by OMVPE W. S. Hobson, A. Zussman, B. F. Levine, S. 
J. Pearton, V. Swaminathan, L. C. Luther We have grown, fabricated, and measured GaAs quantum well infrared photodetectors (QWIPs) using organometallic vapor phase epitaxy (OMVPE). The epitaxial layers were characterized by electrochemical capacitance-voltage profiling, double-crystal X-ray diffraction, cathodoluminescence, and infrared absorption. Dark current, responsivity spectra, and detectivity were measured for the QWIP devices. The performance of these QWIPs was comparable to detectors grown using MBE. This is of importance since OMVPE has advantages for wafer throughout and cost. D. K. Smith, G. J. McCarthy, P. Bayliss, and Joan Fitzpatrick. PDF Mineral File Workbook: Use of the X-ray Powder Diffraction File of Minerals. JCPDS International Centre for Diffraction Data. Swarthmore, PA, U.S.A. 1986. pp. v + 170. Price $10.00. Journal: Mineralogical Magazine / Volume 53 / Issue 371 / June 1989 Published online by Cambridge University Press: 05 July 2018, p. 392 Nomenclature of Pyroxenes N. Morimoto, J. Fabries, A. K. Ferguson, I. V. Ginzburg, M. Ross, F. A. Seifert, J. Zussman, K. Aoki, G. Gottardi Journal: Mineralogical Magazine / Volume 52 / Issue 367 / September 1988 Print publication: September 1988 This is the final report on the nomenclature of pyroxenes by the Subcommittee on Pyroxenes established by the Commission on New Minerals and Mineral Names of the International Mineralogical Association. The recommendations of the Subcommittee as put forward in this report have been formally accepted by the Commission. Accepted and widely used names have been chemically defined, by combining new and conventional methods, to agree as far as possible with the consensus of present use. Twenty names are formally accepted, among which thirteen are used to represent the end-members of definite chemical compositions. In common binary solid-solution series, species names are given to the two end-members by the '50% rule'. Adjectival modifiers for pyroxene mineral names are defined to indicate unusual amounts of chemical constituents. This report includes a list of 105 previously used pyroxene names that have been formally discarded by the Commission. A. A. Hodgson. Scientific Advances in Asbestos 1967 to 1985. Crowthorne, Berkshire (Anjalena Publications), 1986. 186 pp., 15 figs. Price £56·00. Minerals and the electron microscope Journal: Mineralogical Magazine / Volume 51 / Issue 359 / March 1987 Print publication: March 1987 The transmission electron microscope is now used in a great variety of mineralogical and petrological contexts, The development of such applications over the past thirty years is illustrated by reference mainly to studies on serpentine and amphibole minerals.
Inhibitory effect of Newtonia extracts and myricetin-3-o-rhamnoside (myricitrin) on bacterial biofilm formation Katlego E. Motlhatlego1,2, Muna Ali Abdalla ORCID: orcid.org/0000-0002-3211-85001,3, Carmen M. Leonard4, Jacobus N. Eloff1 & Lyndy J. McGaw1 BMC Complementary Medicine and Therapies volume 20, Article number: 358 (2020) Cite this article Diarrhoea is a major health issue in both humans and animals and may be caused by bacterial, viral and fungal infections. Previous studies highlighted excellent activity of Newtonia buchananii and N. hildebrandtii leaf extracts against bacterial and fungal organisms related to diarrhoea-causing pathogens. The aim of this study was to isolate the compound(s) responsible for antimicrobial activity and to investigate efficacy of the extracts and purified compound against bacterial biofilms. The acetone extract of N. buchananii leaf powder was separated by solvent-solvent partitioning into eight fractions, followed by bioassay-guided fractionation for isolation of antimicrobial compounds. Antibacterial activity testing was performed using a broth microdilution assay. The cytotoxicity was evaluated against Vero cells using a colorimetric MTT assay. A crystal violet method was employed to test the inhibitory effect of acetone, methanol: dichloromethane and water (cold and hot) extracts of N. buchananii and N. hildebrandtii leaves and the purified compound on biofilm formation of Pseudomonas aeruginosa, Escherichia coli, Salmonella Typhimurium, Enterococcus faecalis, Staphylococcus aureus and Bacillus cereus. Myricetin-3-o-rhamnoside (myricitrin) was isolated for the first time from N. buchananii. Myricitrin was active against B. cereus, E. coli and S. aureus (MIC = 62.5 μg/ml in all cases). Additionally, myricitrin had relatively low cytotoxicity with IC50 = 104 μg/ml. Extracts of both plant species had stronger biofilm inhibitory activity against Gram-positive than Gram-negative bacteria. The most sensitive bacterial strains were E. faecalis and S. aureus. The cold and hot water leaf extracts of N. buchananii had antibacterial activity and were relatively non-cytotoxic with selectivity index values of 1.98–11.44. The purified compound, myricitrin, contributed to the activity of N. buchananii but it is likely that synergistic effects play a role in the antibacterial and antibiofilm efficacy of the plant extract. The cold and hot water leaf extracts of N. buchananii may be developed as potential antibacterial and antibiofilm agents in the natural treatment of gastrointestinal disorders including diarrhoea in both human and veterinary medicine. Diarrhoea is a neglected disease responsible for over 700,000 deaths annually of children under the age of five worldwide [1]. Ciprofloxacin has been used for the treatment of gastrointestinal infections, such as diarrhoea, for decades [2]. The over-prescribing and incorrect use of antibiotics commonly used to treat diarrhoea and other infections have led to global antibiotic resistance against several microbes. This has greatly impacted on the efficacy of most available antibacterial drugs [3]. Antibiotics are less effective when biofilms form because of the relative impermeability of biofilms, the variable physiological status of microorganisms, subpopulations of persistent strains, the presence of variations of phenotypes and the expression of genes involved in the general stress response [4]. Consequently, biofilms are a recognized source of recurrent, persistent or sporadic bacterial infections [5, 6]. 
The treatment of infection has become difficult because the biofilm mode of microbial growth has increased the survival strategies and resistance levels of microbes to drugs [3]. Many drugs have been developed that were originally sourced from natural products [7]. Plants contain a variety of secondary metabolites such as alkaloids, flavonoids, glycosides, phenols, saponins, steroids, terpenoids and tannins [8]. These metabolites may have individual bioactivity, or may act together synergistically to disrupt growth or pathogenic pathways of disease-causing organisms [9]. Some natural products are known to inhibit biofilm formation or preformed biofilms [3]. Plants are therefore a potential source of novel antibiofilm agents [4, 10, 11] worthy of further investigation. A wide variety of medicinal plants are used in southern Africa to treat gastrointestinal ailments as well as other infections. Two of these species include Newtonia hildebrandtii (Vatke) Torre and Newtonia buchananii (Baker) G.C.C. Gilbert & Boutiqu of the family Fabaceae, which are used for the treatment of skin conditions and wounds and for an upset stomach [12, 13]. In a previous study, the acetone and dichloromethane:methanol (1:1) extracts of the leaves and stems of the two species had very good antibacterial activity against a range of bacterial species with minimum inhibitory concentration (MIC) values as low as 0.02 mg/ml [14]. This motivated the present study where the isolation of compounds responsible for this promising activity was undertaken. Additionally, hot and cold water extracts of the two species were included to more closely replicate the traditional methods of preparation and to provide evidence to support their use in traditional medicine against stomach upsets. As biofilms are an important mechanism employed by bacteria to establish infection and avoid antibiotic activity, the organic and aqueous extracts as well as the purified compound were investigated for their ability to inhibit biofilm formation. Plant material and extraction The plant species were collected from labelled trees in the Lowveld National Botanical Garden in Nelspruit, Mpumalanga, South Africa in December 2014. Voucher specimens (PRU 122347 for N. hildebrandtii and PRU 122348 for N. buchananii) were prepared and lodged in the H.G.W.J. Schweickerdt Herbarium at the University of Pretoria (South Africa) for reference purposes. The collected plant material was dried at room temperature in a well-ventilated room and ground to a fine powder in a Macsalab Mill (Model 2000 LAB Eriez). One gram of each plant part (leaves and stems, and in the case of N. buchananii also the seeds and seedpods combined) was separately extracted in 10 ml of acetone, 1:1 MeOH-DCM (technical grade, Merck), cold distilled water, or boiling distilled water in polyester centrifuge tubes. The tubes were vigorously shaken for 30 min on an orbital shaker, then centrifuged at 4000 x g for 10 min. The supernatant was filtered through Whatman No.1 filter paper before it was transferred into pre-weighed glass containers. This was repeated thrice on the same plant material and the solvent was removed by evaporation under a stream of air in a fume hood at room temperature to yield the dried crude extract. For isolation purposes, a similar extraction method was followed where 300 g of N. buchananii leaf material was extracted in 3000 ml of acetone (technical grade, Merck) in a 5 L glass bottle. 
The bottle was vigorously shaken and left overnight and the supernatant was filtered through Whatman No.1 filter paper before it was transferred into pre-weighed glass containers. The extraction yield was calculated as follows: $$ \mathrm{Plant}\ \mathrm{crude}\ \mathrm{extract}\mathrm{ion}\ \mathrm{yield}\ \left(\%\right)=\frac{\mathrm{Mass}\ \mathrm{of}\ \mathrm{dried}\ \mathrm{extract}\ \left(\mathrm{g}\right)\ }{\mathrm{Mass}\ \mathrm{of}\ \mathrm{plant}\ \mathrm{powder}\ \mathrm{extract}\mathrm{ed}\ \left(\mathrm{g}\right)}\times 100 $$ Antimicrobial assay The antimicrobial activity of the water extracts was determined against the following bacteria: Staphylococcus aureus (ATCC 29213), Bacillus cereus (ATCC 21366), Enterococcus faecalis (ATCC 29212), Escherichia coli (ATCC 25922), Pseudomonas aeruginosa (ATCC 27853) and Salmonella enterica subsp. enterica serovar Typhimurium (ATCC 39183). The antimicrobial activity was evaluated in terms of minimum inhibitory concentration (MIC) using a rapid broth microdilution technique with p-iodonitrotetrazolium violet (INT) as a growth indicator [15]. The INT dissolved in hot water (40 μl of a 0.2 mg/ml stock solution) was added to the wells and incubated at 37 °C for an hour. The MIC values were recorded as the lowest concentration of the extracts that inhibited bacterial growth, as indicated by a marked reduction in colour formation. The p-iodonitrotetrazolium violet turns to a red-pink formazan where bacterial growth is not inhibited. The assays were repeated three times with three replicates in each assay. Cytotoxicity assay The cytotoxicity of the aqueous extracts against African green monkey (Vero) kidney cells was determined by the MTT [3-(4,5-dimethylthiazol-2-yl)-2,5-diphenyltetrazolium bromide] reduction assay as previously described by Mosmann (1983) [16] with slight modifications. Vero cells were maintained at 37 °C and 5% CO2 in a humidified environment in Minimal Essential Medium (MEM) containing L-glutamine (Lonza, Belgium) and supplemented with 5% fetal bovine serum (Capricorn Scientific Gmbh, South America) and 1% gentamicin (Virbac, RSA). These cells were seeded at a density of 105 cells/ml (100 μl) in 96-well microtitre plates and incubated at 37 °C overnight to allow attachment. After incubation, extracts (100 μl) at varying final concentrations were added to the wells containing cells. Doxorubicin hydrochloride (Pfizer) was used as a positive control. Wells made of cells in fresh medium without treatment and a blank containing only the fresh medium were used as negative controls. The plates were further incubated at 37 °C and 5% CO2 for 48 h. After incubation, the medium was aspirated from the cells, which were then washed with phosphate-buffered saline (PBS). Then, 200 μl fresh medium together with 30 μl MTT (5 mg/ml in PBS) were added to each well and the plates were incubated at 37 °C in a 5% CO2 humidified incubator for 4 h. The medium was carefully aspirated from the wells and the formed formazan crystals were dissolved in dimethylsulfoxide (DMSO). The plates were placed on an orbital shaker for about 2 min. The absorbance was measured on a microplate reader (BioTek Synergy) at 570 nm. Cell growth inhibition for each extract was expressed in terms of LC50 values, defined as the lethal concentration that caused 50% inhibition of cell viability. The selectivity index (SI) values were calculated by dividing LC50 values by the MIC values in the same units (SI = LC50/MIC). 
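As a minimal illustration (not part of the published protocol), the two quantities defined above, the plant crude extraction yield and the selectivity index SI = LC50/MIC, can be computed as follows. The worked numbers reuse figures reported in this paper: 300 g of leaf powder yielding 43.8 g of crude acetone extract, and an LC50 of 104 μg/ml against Vero cells versus an MIC of 62.5 μg/ml for myricitrin.

```python
# Minimal sketch (not from the paper): extraction yield and selectivity index (SI)
# as defined in the methods above. Worked numbers are taken from this study
# (300 g leaf powder -> 43.8 g crude acetone extract; LC50 = 104 ug/ml, MIC = 62.5 ug/ml).

def extraction_yield(dried_extract_g: float, plant_powder_g: float) -> float:
    """Plant crude extraction yield (%) = mass of dried extract / mass of powder extracted x 100."""
    return dried_extract_g / plant_powder_g * 100.0


def selectivity_index(lc50: float, mic: float) -> float:
    """SI = LC50 / MIC, with both values in the same units (e.g. ug/ml).

    SI > 1 means the sample inhibits the microorganism at concentrations
    below those that are toxic to the mammalian (Vero) cells.
    """
    return lc50 / mic


if __name__ == "__main__":
    print(f"Acetone extract yield: {extraction_yield(43.8, 300.0):.1f}%")         # ~14.6%
    print(f"SI of myricitrin vs S. aureus: {selectivity_index(104.0, 62.5):.2f}")  # ~1.66
```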
Tests were carried out in quadruplicate and each experiment was repeated thrice. Isolation of the compound N. buchananii was selected for isolation of antimicrobial compounds owing to the high antimicrobial activity of the crude extracts. The crude acetone extract (43.8 g) of N. buchananii was subjected to silica gel column chromatography (7.5 × 60 cm), silica gel 60: 0.05–0.2 mm, 70–270 mesh (Macherey-Nagel & Co) eluted with CH2Cl2 followed by stepwise addition of CH3OH (gradient 0 to 100%) to yield eight fractions. The antibacterial activity of the eight fractions was determined against the six bacterial pathogens known to cause diarrhoea. The next step was to isolate the bioactive compounds from the most active fraction with low cytotoxicity using Sephadex LH-20 column chromatography. Structure elucidation of the isolated compound The compound was characterized by means of 1D and 2D NMR (spectroscopic and mass spectrometry analysis. 1H NMR and 2D NMR including COSY, HMQC, and HMBC data were acquired on a 400 MHz NMR spectrometer (Bruker Avance III 400 MHz). HPLC-HR-ESI–MS was performed on Waters Acquity Ultra Performance Liquid Chromatography (UPLC®) system hyphenated to a quadrupole-time-of-flight (QTOF) instrument. Inhibition of biofilm formation The biofilm inhibition assay was determined according to Sandasi et al. (2011) [10]. In this study the various stages of biofilm development were assumed to be: no attachment/planktonic (T0), initial attachment (T4), irreversible attachment (T24) and mature biofilm (T48). The extracts were resuspended in acetone for the acetone and MeOH: DCM extracts, and sterile distilled water for water extracts and prepared to the same concentration as the MIC value ( [14], Table 1). Briefly, 100 μl aliquots of plant extracts or compound were placed into wells of a 96 well microtitre plate to prevent initial attachment. A 100 μl aliquot of standardised cultures (OD560 = 0.02 equivalent to 1.0 × 106 CFU/ml) of P. aeruginosa, S. Typhimurium, S. aureus, E. faecalis, E. coli or B. cereus was added into the wells and incubated (Scientific Group) at 37 °C for 0, 4, 24 and 48 h (i.e. T0, T4, T24 and T48) respectively without shaking. After the selected incubation periods (T0, T4, T24 and T48), 100 μl of the extracts and the isolated compound (at the MIC values) were added at the different biofilm development stages and incubated at 37 °C for 24 h. Table 1 Antibacterial activity and cytotoxicity of water extracts of two Newtonia species Ciprofloxacin at a concentration of 0.01 mg/ml served as a positive control for all organisms used in this study. Acetone or sterile water with bacterial cells served as negative controls. As soon as the selected incubation periods (T0, T4, T24 and T48) ended, the biofilm biomass was assayed using the modified crystal violet (CV) staining assay [17]. Briefly, the microtitre plates were washed three times with sterile distilled water and allowed to air-dry. Following this the plates were oven-dried at 60 °C for 45 min. The wells were then stained with 100 μl of 1% crystal violet and incubated at room temperature for 15 min after which the plates were washed three times with sterile distilled water to remove unabsorbed stain. The semi-quantitative assessment of biofilm formation was performed by adding 125 μl of ethanol to de-stain the wells. A 100 μl aliquot of the de-staining solution was transferred to a new plate and the absorbance was measured with SoftMax Pro 6 at 590 nm using a microplate reader (SpectraMax M2). 
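As a worked illustration of this readout (an illustrative sketch, not the authors' own script), the OD590 values from the de-stained wells can be converted into percent biofilm inhibition using the formula restated in the next paragraph; the eight replicate OD values used here are hypothetical.

```python
# Minimal sketch (hypothetical OD590 values): converting crystal violet absorbance
# readings into percent biofilm inhibition, following the formula given below.
# A negative result indicates enhancement of biofilm formation rather than inhibition.
from statistics import mean


def percent_inhibition(od_experimental, od_negative_control):
    """Percentage inhibition = (OD_negative_control - OD_experimental) / OD_negative_control * 100."""
    od_ctrl = mean(od_negative_control)
    od_exp = mean(od_experimental)
    return (od_ctrl - od_exp) / od_ctrl * 100.0


if __name__ == "__main__":
    untreated = [0.82, 0.79, 0.85, 0.80, 0.83, 0.81, 0.78, 0.84]  # 8 replicate control wells
    treated = [0.31, 0.35, 0.29, 0.33, 0.30, 0.34, 0.32, 0.28]    # 8 replicate wells, extract at MIC
    print(f"Biofilm inhibition: {percent_inhibition(treated, untreated):.1f}%")  # ~61.3%
```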
The experiments were performed in triplicate for each extract or compound, and the mean absorbance of 8 replicates for each experiment was calculated. The mean absorbance of the samples was determined, and percentage inhibition calculated using the equation described below:

$$ \text{Percentage inhibition}=\frac{\mathrm{OD}_{\text{negative control}}-\mathrm{OD}_{\text{experimental}}}{\mathrm{OD}_{\text{negative control}}}\times 100 $$

Statistical analysis was conducted with GraphPad InStat Software and results were compared using the Student-Newman-Keuls and Dunnett's tests. Data were analysed using a one-way analysis of variance to compare within each species; where there were significant differences, a Duncan's Multiple Range post hoc test was used to separate the means. Results were considered significantly different when P < 0.05.

Minimum inhibitory concentration and cytotoxicity of the water extracts The water extracts of N. hildebrandtii and N. buchananii had some antibacterial activity (Table 1). Although the best antibacterial effect of both water leaf extracts was against B. cereus with MIC values of 0.31 mg/ml, the hot water leaf extract of N. buchananii had an MIC value of 0.16 mg/ml. However, this was not noteworthy, as extracts with MIC values above 100 μg/ml are considered to have relatively low antibacterial activity [18]. Importantly, the extracts had SI values above 1 and as high as 18.44 (Table 2). This means that the water extracts of both Newtonia species were more toxic to the microorganisms than to mammalian Vero cells, which is very promising. Table 2 Selectivity index (SI) of water extracts of the selected Newtonia species

Characterization of the isolated compound The compound was isolated as a yellow powder, which produced a strong ultraviolet (UV) absorbing band on TLC at 257 nm and turned yellow with vanillin-sulphuric acid spray reagent. The ESI-HRMS afforded the molecular formula as C21H20O12. The molecular weight was determined by ESI-MS (m/z 463.0907 [M-H]−, 927.1844 [2M-H]−). 1H and 13C NMR spectra indicated the presence of a flavonol rhamnoside. The 1H NMR and 13C NMR data are presented in Table 3. Table 3 NMR data of myricetin-3-O-rhamnoside (myricitrin, in CD3OD) and literature data (in DMSO-d6, 500 MHz)

Minimum inhibitory concentration and cytotoxicity of the isolated compound The antimicrobial activity of an isolated compound is generally considered significant if the MIC is 10 μg/ml or lower, moderate if the MIC is between 10 and 100 μg/ml, and low if the MIC is greater than 100 μg/ml [18, 20, 21]. In this study it was found that myricitrin (as presented in Table 4) showed moderate activity at a MIC of 62.5 μg/ml against E. coli, S. aureus and B. cereus; additionally, it was relatively non-toxic with an LC50 of 104 μg/ml. Table 4 Antimicrobial activity and cytotoxicity of myricitrin from Newtonia buchananii

Anti-biofilm potential of the extracts and the compound The anti-biofilm activity of the Newtonia extracts, the antibacterial compound myricitrin, and the control was determined (Fig. 2). The graphs represent the biofilm inhibitory activity (BIA) of the crude extracts against some human pathogens known to cause diarrhoea. The Gram-negative organism, P. aeruginosa, is a model biofilm-forming organism and BIA was evaluated at different time intervals (Fig. 2a).
At time 0 h (organism in planktonic form) the cold water extracts of both plants enhanced the growth or biofilm development of the test organism. This enhancement was also observed at the 4 and 24 h timeframe and then poor inhibition of 1–40% at 48 h (mature biofilm). This may be due to carbohydrates dissolved by the water acting as nutrients for the bacteria. In contrast, the acetone, MeOH: DCM, and hot water extracts of both plants showed BIA of between 42 and 59%. The leaves of N. hildebrandtii (hot water extract) showed stronger inhibition ranging from 59 to 62% at the T0, T4 and T48 biofilm formation stages. Promising activity of the water extracts indicates that N. hildebrandtii leaves could possibly be developed into traditional medicinal teas to treat or prevent diarrhoeal episodes if the safety can be confirmed. Additionally, at 0 h biofilm against E. coli, N. hildebrandtii and N. buchananii acetone and MeOH: DCM extracts had good inhibition (Fig. 2b). Anti-biofilm activity of the water extracts of both plants showed enhancement. For the 4 h biofilm, N. hildebrandtii MeOH: DCM extract had poor BIA of 22% while all other extracts showed enhancement. The 24 h biofilm showed that no extract had an inhibitory effect on the attachment of E. coli, meaning that all extracts were enhancing growth at this time point. In the 48 h biofilm N. hildebrandtii MeOH: DCM and hot water extracts had poor activity of approximately 0.3–9%. In contrast, the N. buchananii acetone leaf extract had good BIA of 55% against E. coli (a common diarrhoea-causing pathogen). The compound myricitrin showed excellent BIA of 84% which indicates that it may be responsible for most of the BIA in the crude extract. Furthermore, good anti-biofilm activity of myricitrin against E. coli was observed at all time periods (T0, T4, T24 and T48). At T0, all the extracts had good BIA ranging from 57 to ≥100% against S. Typhimurium biofilm (Fig. 2c) besides the N. hildebrandtii cold water extract and N. hildebrandtii hot water extract which had poor activity and enhancement, respectively. At the 4 h biofilm production, all extracts resulted in enhancement. At T24 the acetone and MeOH: DCM extracts of both plants had BIA of approximately 16–73%. The water extracts of both plants caused enhancement. At T48 the inhibition of the extracts ranged from 8 to 86%. Acetone and MeOH: DCM extracts of both plants and N. hildebrandtii cold water extract had poor BIA while N. hildebrandtii hot water extract and N. buchananii cold and hot water extracts had good activity. Good anti-biofilm activity of myricitrin (above 50%) against S. Typhimurium was observed at all time periods (T0, T4, T24 and T48). The acetone, MeOH: DCM, and hot water extracts of N. hildebrandtii and N. buchananii acetone and MeOH: DCM extracts had inhibition ranging from approximately 33–50% in the initial attachment stage (0 h biofilm) against B. cereus (Fig. 2d). At 4 h (T4) biofilm, N. hildebrandtii acetone and MeOH: DCM extracts showed enhancement while other extracts had good activity of approximately 78 to ≥100%. In the 24 h biofilm, N. hildebrandtii MeOH: DCM and N. buchananii acetone extracts were below 0 while the other extracts had inhibition ranging from 1 to 58%. However, all extracts had poor to very poor activity in the mature biofilm (48 h) and the lack of BIA may be due to the spore-forming activity of this organism as well as the complexity of the biofilm structure. Myricitrin had good activity in all biofilm stages against B. cereus. 
In the biofilm assay against E. faecalis (Fig. 2e), at T0 the acetone and MeOH: DCM extracts of both plant species had good activity whilst the water extracts of both plant species enhanced growth. At 4 h biofilm formation, all extracts showed enhancement. At the maturation and dispersion of the biofilm (24 h and 48 h), there seemed to be a similar effect. The water extracts of both plant species had very good activity of approximately 64- ≥ 100%. Correspondingly, acetone extracts of both plant species had BIA of 26–50% while MeOH: DCM extracts of both plant species had BIA of 1–46%. Generally, at T24 and T48 BIA was good compared to the planktonic and attachment stages. This means that the plant extracts have the potential to overcome resistance by inhibiting biofilm formation. Myricitrin had good activity in all biofilm stages against E. faecalis. The T0 biofilm against S. aureus (Fig. 2f) showed that the acetone and MeOH: DCM extracts of both plant species had good BIA whilst the water extracts of both plant species enhanced biofilm development. At 4 h and 24 h biofilm phases all crude extracts showed enhancement. At T48 extracts showed inhibition ranging 19–82% where N. hildebrandtii acetone and hot-water extracts had poor BIA of ~ 37 and 19%, respectively. Myricitrin had good activity in all biofilm stages against S. aureus. The antimicrobial activity of myricitrin complements the findings of the activity of the crude extract and fractions of N. buchananii. However, the compound had much lower antimicrobial activity than the crude extract, fractions and sub-fractions. The separation of compounds therefore leads to a decrease in activity, and increased toxicity [22]. Interestingly, the crude extracts prepared with organic solvents had very good antimicrobial activity against P. aeruginosa with MIC of 20 μg/ml [14] but the compound had low activity of 250 μg/ml in this study. In contrast, Aderogba and co-workers reported myricetin-3-O-rhamnoside (isolated from Croton menyharthii) to be active against E. coli and S. aureus at a much higher MIC of 250 μg/ml [23]. This indicates that the possible application of myricitrin would be more for its low toxicity than its moderate activity. According to Wagner and Ulrich-Merzenich (2009), synergistic effects occur if the constituents of an extract affect different targets or interact with one another to improve the solubility, thereby enhancing the bioavailability of one or several substances of an extract [24]. It is also possible that the highly active compounds were not isolated or were inactivated during the isolation procedure. It has been established that pure drugs isolated from plants rarely have the same degree of activity as the crude extract at comparable concentrations or doses of the active component [25]. This may be due to crude extracts generally consisting of several compounds which may act synergistically with one another [21, 26]. Most traditional health practitioners recommend the use of plants as a whole rather than the isolated compounds because of this synergistic effect [27]. Our results recommend the use of N. buchananii leaf extract or fractions rather than a single compound for the treatment of diarrhoea as an antimicrobial agent. The structure of myricitrin was elucidated by means of 2D spectroscopic analysis including COSY, HMQC, and HMBC (Fig. 1). 
A search in the Dictionary of Natural Products [28] and comparing the spectroscopic data with the literature confirmed the structure as the flavonoid myricetin-3-o-rhamnoside (myricitrin) [29]. This is the first report of isolation of the flavonoid myricetin-3-o-rhamnoside (myricitrin) from N. buchananii. Myricitrin has previously been isolated from Croton menyharthii, Euphorbia davidii, Myrtus communis, Pistacia chinensis, Plumbago europaea, Santaloides afzelli and Searsia chirindensis, and the compound is known for its antimicrobial, antioxidant, antigenotoxic, anti-inflammatory and antifibrotic activity [19, 30,31,32,33,34,35,36]. HMBC (arrows) and 1H:1H COSY (bold) correlations of myricitrin Furthermore, good anti-biofilm activity of the compound myricitrin against P. aeruginosa was observed at all time periods (T0, T4 and T24) with the exception of T48. Infections caused by P. aeruginosa may be serious and life-threatening and hard to control by most antibiotics due to its cell wall properties and ability to form biofilms [37]. Additionally, myricitrin had good anti-biofilm activity against all the pathogenic strains known to cause diarrhoea that we investigated. Lopes et al. (2017) studied the inhibitory effects of the glycone myricitrin and the aglycone flavonoid myricetin, in addition to other flavonoids on biofilm formation by S. aureus RN4220 and S. aureus SA1199B, which are able to overexpress the msrA and norA efflux protein genes. The authors discovered that aglycone myricetin inhibited biofilm formation of S. aureus RN4220 and S. aureus SA1199B by MBIC50 values of 1 and 32 μg/ml, respectively. While myricitrin exhibited MBIC50 of 128 μg/ml against S. aureus RN4220 and did not show biofilm inhibitory effect against S. aureus SA1199B [38]. The authors indicated that myricitrin, myricetin, and other studied flavonoids had weak inhibition on the growth of S. aureus strains that overexpress efflux protein genes. While sub-MICs of both myricitrin, myricetin, and other flavonoids showed inhibitory potential on biofilm formation in these strains. The antibiofilm potential of flavonoids isolated from plants has been reported [39,40,41]. Moreover, recent studies investigated the inhibitory effects of plant extracts on biofilm formation and the results were in agreement with our findings, for instance, Wijesundara and Rupasinghe (2019) investigated 14 ethanol extracts of selected medicinal plants on bacterial growth and biofilm formation of Streptococcus pyogenes. The authors found that the most effective extracts had MIC and MBC of 62.5 μg/mL and 125 μg/mL, respectively, while the MBIC (minimum biofilm inhibitory concentration) ranged from 31.5–250 μg/mL [42]. Alam et al. (2020) evaluated anti-biofilm activity of different extracts of traditionally used plants of Himalayan region of Pakistan against infectious pathogen-Pseudomonas aeruginosa PAO1. The authors suggested that various solvent extracts showed different activity against the P. aeruginosa PAO1 biofilm. It was found that the 1% methanolic extract of Bergenia ciliata exhibited 80% inhibitory effect on biofilm formation without affecting the growth of the bacterium. Interestingly, the authors indicated a significant correlation in the methanolic extract between flavonoid content and anti-biofilm potential, which confirms the inhibitory effects of flavonoids against P. aeruginosa (PAO1) [43]. According to Sandasi et al. 
(2009) the enhanced biofilm development may be due to the presence of certain compounds within the crude plant extracts that provide a conditioning film promoting microbial adhesion [17]. Both plant species had anti-biofilm activity at the MIC value obtained against the planktonic stage though N. buchananii had more activity. The inhibition of biofilm formation may be related to the ability of this compound to inactivate microbial adhesins [44]. However, for commercial purposes it would be logical to use the active extracts. Moreover, leaves are easily accessible and traditional healers can prepare formulations using either cold water or boiling water. The dried, ground leaf powder may be soaked in hot or cold water, and the water extract can be used traditionally to relieve symptoms associated with diarrhoea. There is potential of using this plant as a tea to treat diarrhoea and related gastrointestinal conditions since the boiling hot water extract had more anti-biofilm effect than the cold water extract. The dried ground leaf powder of this plant species may be used as tea to alleviate diarrhoeal symptoms, but in vivo studies need to be conducted to confirm the useful antidiarrhoeal efficacy and lack of toxicity of this extract. The water extracts had poor antibacterial activity with MIC values above 100 μg/ml but had good anti-biofilm activity. This may mean that the low antibacterial activity during the planktonic stage of growth does not limit the potential of the extract to inhibit or prevent biofilm formation. As reported in Motlhatlego et al. (2018) [14] N. buchananii acetone and MeOH: DCM leaf extracts had good antibacterial activity of 20–80 μg/ml against B. cereus, P. aeruginosa, S. Typhimurium and S. aureus. However, anti-biofilm activity at 48 h was only observed against S. aureus and S. Typhimurium and not the other two organisms. Extracts of these plant species had stronger inhibitory effects against Gram-positive than Gram-negative bacteria and the most sensitive bacterial strains were E. faecalis and S. aureus. This may be because Gram-positive bacteria are more susceptible to the action of the extracts that contain flavonoids such as myricitrin [45]. Moreover, Gram-negative bacteria have a different cell wall which decreases uptake. Ciprofloxacin had very good anti-biofilm activity against all six bacteria tested in this study. Shafiei et al. (2014) emphasized the importance of determining how conventional antibiotics affect the ability of anti-biofilm agents to control biofilm perpetuation [46]. This study demonstrated the therapeutic significance of the flavonoid myricetin-3-o-rhamnoside (myricitrin), isolated for the first time from Newtonia buchananii, a plant used for diarrhoea, against bacterial pathogens. This flavonoid had moderate antibacterial activity against the planktonic forms of E. coli, B. cereus and S. aureus with MIC = 62.5 μg/ml. This study also suggested that the leaf cold and hot water extracts of N. buchananii are a potential source of natural antibiofilm agents for gastrointestinal disorders, particularly diarrhoea. Biofilm formation remains a worldwide public health concern and research on the efficacy of novel molecules to prevent this formation is a priority. The antibiofilm potential of myricitrin against P. aeruginosa, E. coli, S. Typhimurium, E. faecalis, S. aureus and B. cereus is a promising tool for reducing microbial colonization on surfaces and epithelial mucosa leading to gastrointestinal infections, particularly diarrhoea. 
Myricitrin was effective in inhibiting biofilm formation of S. aureus strains. In this study myricitrin showed a good antibiofilm dispersal effect against S. aureus (Fig. 2f). To the best of our knowledge this is the first report of BIA of N. hildebrandtii and N. buchananii. The development of potential antibiofilm strategies is of substantial interest. The rational next step would be to determine if there is increased synergistic effect if the two species are respectively combined with ciprofloxacin. A combination of Newtonia leaf extracts with ciprofloxacin may possibly offer a novel strategy to effectively control diarrhoeal biofilm-based infections. 0, 4, 24 and 48-h biofilm inhibition of Newtonia hildebrandtii and Newtonia buchananii extracts against bacterial strains known to cause diarrhoea. N. hildebrandtii acetone leaf extract (1A), N. hildebrandtii Methanol-DCM leaf extract (1MD), N. hildebrandtii cold-water leaf extract (1CW), N. hildebrandtii hot water leaf extract (1HW), N. buchananii acetone leaf extract (2A), N. buchananii MeOH: DCM leaf extract (2MD), N. buchananii cold-water leaf extract (2CW), N. buchananii hot water leaf extract (2HW) MIC: Minimum inhibitory concentration MBC: Minimum bactericidal concentrations MBIC: Minimum biofilm inhibitory concentration MeOH: DCM: Dichloromethane MTT: 3-(4,5-dimethylthiazol-2-yl)-2,5-diphenyltetrazolium bromide p-iodonitrotetrazolium violet SI: Selectivity index LC50 : 50% lethal concentration DMSO: Dimethylsulfoxide BIA: Biofilm inhibitory activity World Health Organisation. Diarrhoeal disease. 2013. http://www.who.int/mediacentre/factsheets/fs330/en/. Accessed 1 Mar 2016. Pichler H, Diridl G, Wolf D. Ciprofloxacin in the treatment of acute bacterial diarrhoea: a double blind study. Eur J Clin Microbiol. 1986;5:241–3. Ahmad I, Husain FM, Maheshwari M, Zahin M. Medicinal plants and phytocompounds: A potential source of novel antibiofilm agents. In: Rumbaugh KP, Ahmad I, editors. Antibiofilm agents; 2014. p. 205–32. Namasivayam SKR, Roy EA. Anti-biofilm effect of medicinal plant extracts against clinical isolate of biofilm of Escherichia coli. Int J Pharm Pharm Sci. 2013;5:486–9. Rollet C, Gal L, Guzzo J. Biofilm-detached cells, a transition from sessile to a planktonic phenotype: a comparative study of adhesion and physiological characteristics in Pseudomonas aeruginosa. FEMS Microbiol Lett. 2008;290:135–42. PubMed Article CAS Google Scholar Lambert G, Bergman A, Zhang Q, Bortz D, Austin R. Physics of biofilms: the initial stages of biofilm formation and dynamics. New J Phys. 2014;16:045005. Newman DJ, Cragg GM. Natural products as sources of new drugs from 1981 to 2014. J Nat Prod. 2016;79:629–61. Ripa FA, Haque M, Imran-Ul-Haque M. In vitro antimicrobial, cytotoxic and antioxidant activity of flower extract of Saccharum spontaneum Linn. Eur J Sci Res. 2009;30:478–83. Van Wyk B-E, Wink M. Medicinal plants of the world: an illustrated scientific guide to important medicinal plants and their uses: Timber Press; 2004. Sandasi M, Leonard CM, Van Vuuren SF, Viljoen AM. Peppermint (Mentha piperita) inhibits microbial biofilms in vitro. SAJB. 2011;77:80–5. Abraham KP, Sreenivas J, Venkateswarulu TC, Indira M, Babu DJ, Diwakar T, Prabhakar KV. Investigation of the potential antibiofilm activities of plant extracts. Int J Pharm Pharm Sci. 2012;4:282–5. Fratkin E. Traditional medicine and concepts of healing among Samburu pastoralists of Kenya. J Ethnobiol. 1996;16:63–97. Kariba RM, Houghton PJ. 
Antimicrobial activity of Newtonia hildebrandtii. Fitoterapia. 2001;72:415–7. Motlhatlego KE, Njoya EM, Abdalla MA, Eloff JN, McGaw LJ. The potential use of leaf extracts of two Newtonia (Fabaceae) species to treat diarrhoea. SAJB. 2018;116:25–33. Eloff JN. A sensitive and quick microplate method to determine the minimal inhibitory concentration of plant extracts for bacteria. Planta Med. 1998;64:711–3. Mosmann T. Rapid colorimetric assay for cellular growth and survival: application to proliferation and cytotoxicity assays. J Immunol Methods. 1983;65:55–63. Sandasi M, Leonard CM, Viljoen AM. The in vitro antibiofilm activity of selected culinary herbs and medicinal plants against listeria monocytogenes. Lett Appl Microbiol. 2009;50:30–5. Eloff JN. Quantification the bioactivity of plant extracts during screening and bioassay guided fractionation. Phytomedicine. 2004;11:370–1. Kassem MES, Ibrahim LF, Husein SR, El-Sharawy R, El-Ansari MA, Hassanane MM, Booles HF. Myricitrin and bioactive extract of Albizia amara leaves: DNA protection and modulation of fertility and antioxidant-related genes expression. Pharm Biol. 2016;54:2404–9. Ríos JL, Recio MC. Medicinal plants and antimicrobial activity. J Ethnopharmacol. 2005;100:80–4. Awouafack MD, McGaw LJ, Gottfried S, Mbouangouere R, Tane P, Spiteller M, Eloff JN. Antimicrobial activity and cytotoxicity of the ethanol extract, fractions and eight compounds isolated from Eriosema robustum (Fabaceae). BMC Complement Altern Med. 2013;13:289. Nwodo UU, Ngene AA, Iroegbu CU, Obiiyeke GC. Effects of fractionation on antibacterial activity of crude extracts of Tamarindus indica. Afr J Biotechnol. 2010;9:7108–13. Aderogba MA, Ndhala AR, Rengasamy KRR, Van Staden J. Antimicrobial and selected in vitro enzyme inhibitory effects of leaf extracts, flavonols, and indole alkaloids isolated from Croton menyharthii. Molecules. 2013;18:12633–44. Wagner H, Ulrich-Merzenich G. Synergy research: approaching a new generation to phytopharmaceuticals. Phytomedicine. 2009;16:97–110. Rasoanaivo P, Wright CW, Willcox ML, Glibert B. Whole plant extracts versus single compounds for the treatment of malaria: synergy and positive interactions. Malar J. 2011;10:S4. Njateng GSS, Du Z, Gatsing D, Mouokeu RS, Liu Y, Zang HX, Gu J, Luo X, Kuiate JR. Antibacterial and antioxidant properties of crude extract, fractions and compounds from the stem bark of Polyscias fulva Hiern (Araliaceae). BMC Complement Altern Med. 2017;17:99. Rodriguez-Fragoso L, Reyes-Esparza J, Burchiel SW, Herrera-Ruiz D, Torres E. Risks and benefits of commonly used herbal medicines in Mexico. Toxicol Appl Pharmacol. 2008;227:125–35. Chapman and Hall. Dictionary of natural products on CD-ROM. Chemical DataBase; 2017. Elegami AA, Bates C, Gray AI, Mackay SP, Skellern GG, Waigh RD. Two very unusual macrocyclic flavonoids from the water lily Nymphaea lotus. Phytochemistry. 2003;63:727–31. Hayder N, Bouhlel I, Skandrani I, Kadri M, Steinman R, Guiraud P, Mariotte AM, Ghedira K, Dijoux-Franca MG, Chekir-Ghedira L. In vitro antioxidant and antigenotoxic potentials of myricetin-3-ο-galactoside and myricetin-3-o-rhamnoside from Myrtus commmunis: modulation of expression of genes involved in cell defence system using cDNA microarray. Toxicol in Vitro. 2008;22:567–81. Serrilli AM, Sanfilippo V, Ballero M, Sanna C, Poli F, Scartezzini P, Serafini M, Bianco A. Polar and antioxidant fraction of Plumbago europaea L., a spontaneous plant of Sardinia. Nat Prod Res. 2010;24:633–9. 
Yaya S, Benjamin KABB, Fanté B, Sorho S, Amadou TS, Jean-Marie C. Flavonoids and gallic acid from leaves of Santalloides afzelli (Connaraceae). Rasāyan J Chem. 2012;5:332–3. Madikizela B, Aderogba MA, Van Staden J. Isolation and characterization of antimicrobial constituents of Searsia chirindensis L. (Anacardiaceae) leaf extracts. J Ethnopharmacol. 2013;150:609–13. Domitrović R, Rashed K, Cvijanović O, Vladimir-Knežić S, Škoda M, Višnić A. Myricitrin exhibits antioxidant, anti-inflammatory and antifibrotic activity in carbon tetrachloride intoxicated mice. Chem Biol Interact. 2015;230:21–9. Rédei D, Kúsz N, Szabó M, Pinke G, Zupkó I, Hohmann J. First phytochemical investigation of secondary metabolites of Euphorbia davidii Subils. and antiproliferative activity of its extracts. Acta Biol Hung. 2015;66:480–3. Rashed K, Said A, Abdo A, Selim S. Antimicrobial activity and chemical composition of Pistacia chinensis Bunge leaves. Int Food Res J. 2016;23:316–21. Omwenga EO, Hensel A, Pereira S, Shitandi AA, Goycoolea FM. Antiquorum sensing, antibiofilm formation and cytotoxicity activity of commonly used medicinal plants by inhabitants of Borabu sub-county, Nyamira County, Kenya. PLoS ONE. 2017; https://doi.org/10.1371/journal.pone.0185722N. Lopes LAA, dos Santos Rodrigues JB, Magnani M, de Souza EL, de Siqueira-Júnior JP. Inhibitory effects of flavonoids on biofilm formation by Staphylococcus aureus that overexpresses efflux protein genes. Microb Pathog. 2017;107:193–7. Riihinen KR, OU ZM, Gödecke T, Lankkin DC, Pauli GF, Wu CD. The antibiofilm activity of lingonberry flavonoids against oral pathogens is a case connected to residual complexity. Fitoterapia. 2014;97:78–86. De Souza Barboza TJ, Ferreira AE, Ignácio AC, Albarello N. Cytotoxicity, antibacterial and antibiofilm activities of aqueous extracts of leaves and flavonoids occurring in Kalanchoe pinnata (Lam.). Pers J Med Plants Res. 2016;10:763–70. Faegheh Farhadi F, Khameneh B, Iranshahi M, Iranshahy M. Antibacterial activity of flavonoids and their structure–activity relationship: an update review. Phytother Res. 2019;33:13–40. Wijesundara NM, Rupasinghe HPV. Bactericidal and anti-biofilm activity of ethanol extracts derived from selected medicinal plants against streptococcus pyogenes. Molecules. 2019;24(6):1165. PubMed Central Article CAS PubMed Google Scholar . Alam K, Al Farraj DA, Fatima SM, Yameen MA, Elshikh MS, Alkufeidy, RM, Mustafa, AMA, Bhasmee, P, Alshammari, MK, Alkubaisi, NA, Abbasi AM. Naqvi TA. Anti-biofilm activity of plant derived extracts against infectious pathogen-Pseudomonas aeruginosa PAO1. J Infect Public Health. 2020. https://doi.org/10.1016/j.jiph.2020.07.007. Ciocan ID, Bӑra II. Plants products as antimicrobial agents. Genet Mol Biol. 2007;8:151–6. Evaristo FFV, Albuquerque MRJ, Dos Santos HS, Bandeira PN, Do Nascimento Ávila F, Da Silva BR, Vasconcelos AA, De Menezes Rabelo F, Nascimento-Neto LG, Arruda FVS, Vasconcelos MA, Carneiro VA, Cavada BS, Teixeira EH. Antimicrobial effect of the triterpene 3훽,6훽,16훽-Trihydroxylup-20(29)-ene on planktonic cells and biofilms from Gram-positive and Gram-negative bacteria. Biomed Res Int. 2014:729358. Shafiei M, Ali AA, Shahcheraghi F, Saboora A, Noghabi A. Eradication of Pseudomonas aeruginosa biofilms using combination of n-butanolic Cyclamen coum extract and ciprofloxacin. Jundishapur J Microbiol. 2014;7:e14358. The Medical Research Council of South Africa (SIR JNE) and the National Research Foundation (Grant number 105993 to LJM) provided funding for this project. 
The National Research Foundation and University of Pretoria are also acknowledged for financial support via student scholarships. The curator of the Lowveld National Botanical Garden is thanked for allowing collection of plant material. Elsa van Wyk and Magda Nel of the H.G.W.J. Schweickerdt Herbarium are thanked for preparing voucher specimens. The Medical Research Council of South Africa (SIR) and the National Research Foundation (Grant number 105993) to LJM provided funding for this project. The National Research Foundation and University of Pretoria are also acknowledged for financial support via student scholarships. Phytomedicine Programme, Department of Paraclinical Sciences, Faculty of Veterinary Science, University of Pretoria, Private Bag X04, Onderstepoort, 0110, South Africa Katlego E. Motlhatlego, Muna Ali Abdalla, Jacobus N. Eloff & Lyndy J. McGaw Current address: Department of Pharmacy and Pharmacology, Faculty of Health Sciences, University of the Witwatersrand, 7 York Road, Parktown, Johannesburg, 2193, South Africa Katlego E. Motlhatlego Department of Food Science and Technology, Faculty of Agriculture, University of Khartoum, 13314, Khartoum North, Sudan Muna Ali Abdalla Microbiology Laboratory, Department of Pharmaceutical Sciences, Tshwane University of Technology, Private Bag X680, Pretoria, 0001, South Africa Carmen M. Leonard Jacobus N. Eloff Lyndy J. McGaw KEM isolated the compound with the help of MAA who identified the compound. KEM performed the minimum inhibitory and cytotoxicity assays of the extracts and the compound. KEM and CML performed the inhibition of biofilm formation and crystal violet biofilm staining assay and CML provided the laboratory facilities for the biofilm work. MAA and JNE were co-supervisors of KEM. LJM supervised the study and provided research funding. All authors have read and approved the manuscript. Correspondence to Muna Ali Abdalla. All authors have given their consents for publication. Motlhatlego, K.E., Abdalla, M.A., Leonard, C.M. et al. Inhibitory effect of Newtonia extracts and myricetin-3-o-rhamnoside (myricitrin) on bacterial biofilm formation. BMC Complement Med Ther 20, 358 (2020). https://doi.org/10.1186/s12906-020-03139-4 Accepted: 30 October 2020 Newtonia Cytotoxicity Biofilm formation
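The quantitative read-outs behind the biofilm results reported above are simple ratios, so a short sketch may help readers reproduce them. The snippet below is illustrative only and assumes the usual conventions for the crystal violet assay and for the selectivity index (SI = LC50/MIC); the exact normalisation used in the paper may differ, and the function names and example numbers are hypothetical, not data from the study.

```python
# Illustrative sketch (not the authors' code): percentage biofilm inhibition from
# crystal violet absorbance readings, and selectivity index from LC50 and MIC.
# Assumed conventions: inhibition (%) = 100 * (OD_control - OD_treated) / OD_control,
# SI = LC50 / MIC; negative inhibition values indicate biofilm enhancement.

def percent_biofilm_inhibition(od_control: float, od_treated: float) -> float:
    """Percent reduction in crystal violet absorbance relative to the untreated control."""
    return 100.0 * (od_control - od_treated) / od_control

def selectivity_index(lc50_ug_ml: float, mic_ug_ml: float) -> float:
    """SI > 1 suggests activity at concentrations below the cytotoxic level."""
    return lc50_ug_ml / mic_ug_ml

if __name__ == "__main__":
    # Hypothetical example values.
    print(round(percent_biofilm_inhibition(od_control=1.20, od_treated=0.45), 1))  # 62.5
    print(round(selectivity_index(lc50_ug_ml=100.0, mic_ug_ml=39.0), 2))           # 2.56
```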
CommonCrawl
Michael C. Gao, Daniel B. Miracle, David Maurice, Xuehui Yan, Yong Zhang, Jeffrey A. Hawk Journal: Journal of Materials Research / Volume 33 / Issue 19 / 14 October 2018 Published online by Cambridge University Press: 20 September 2018, pp. 3138-3155 Print publication: 14 October 2018 While most papers on high-entropy alloys (HEAs) focus on the microstructure and mechanical properties for structural materials applications, there has been growing interest in developing high-entropy functional materials. The objective of this paper is to provide a brief, timely review on select functional properties of HEAs, including soft magnetic, magnetocaloric, physical, thermoelectric, superconducting, and hydrogen storage. Comparisons of functional properties between HEAs and conventional low- and medium-entropy materials are provided, and examples are illustrated using computational modeling and tuning the composition of existing functional materials through substitutional or interstitial mixing. Extending the concept of high configurational entropy to a wide range of materials such as intermetallics, ceramics, and semiconductors through the isostructural design approach is discussed. Perspectives are offered in designing future high-performance functional materials utilizing the high-entropy concepts and high-throughput predictive computational modeling. Jason Crawford. Allegory and Enchantment: An Early Modern Poetics. Oxford: Oxford University Press, 2017. Pp. 256. $80.00 (cloth). David Hawkes Journal: Journal of British Studies / Volume 57 / Issue 2 / April 2018 Published online by Cambridge University Press: 29 March 2018, pp. 370-371 Computational modeling of high-entropy alloys: Structures, thermodynamics and elasticity Michael C. Gao, Pan Gao, Jeffrey A. Hawk, Lizhi Ouyang, David E. Alman, Mike Widom Published online by Cambridge University Press: 12 October 2017, pp. 3627-3641 This article provides a short review on computational modeling on the formation, thermodynamics, and elasticity of single-phase high-entropy alloys (HEAs). Hundreds of predicted single-phase HEAs were re-examined using various empirical thermo-physical parameters. Potential BCC HEAs (CrMoNbTaTiVW, CrMoNbReTaTiVW, and CrFeMoNbReRuTaVW) were suggested based on CALPHAD modeling. The calculated vibrational entropies of mixing are positive for FCC CoCrFeNi, negative for BCC MoNbTaW, and near-zero for HCP CoOsReRu. The total entropies of mixing were observed to trend in descending order: CoCrFeNi > CoOsReRu > MoNbTaW.
Calculated lattice parameters agree extremely well with averaged values estimated from the rule of mixtures (ROM) if the same crystal structure is used for the elements and the alloy. The deviation in the calculated elastic properties from ROM for select alloys is small but is susceptible to the choice used for the structures of pure components. Influence of laser polarization on collective electron dynamics in ultraintense laser–foil interactions HEDP and HPL 2016 Bruno Gonzalez-Izquierdo, Ross J. Gray, Martin King, Robbie Wilson, Rachel J. Dance, Haydn Powell, David A. MacLellan, John McCreadie, Nicholas M. H. Butler, Steve Hawkes, James S. Green, Chris D. Murphy, Luca C. Stockhausen, David C. Carroll, Nicola Booth, Graeme G. Scott, Marco Borghesi, David Neely, Paul McKenna Journal: High Power Laser Science and Engineering / Volume 4 / 2016 Published online by Cambridge University Press: 27 September 2016, e33 The collective response of electrons in an ultrathin foil target irradiated by an ultraintense ( ${\sim}6\times 10^{20}~\text{W}~\text{cm}^{-2}$ ) laser pulse is investigated experimentally and via 3D particle-in-cell simulations. It is shown that if the target is sufficiently thin that the laser induces significant radiation pressure, but not thin enough to become relativistically transparent to the laser light, the resulting relativistic electron beam is elliptical, with the major axis of the ellipse directed along the laser polarization axis. When the target thickness is decreased such that it becomes relativistically transparent early in the interaction with the laser pulse, diffraction of the transmitted laser light occurs through a so called 'relativistic plasma aperture', inducing structure in the spatial-intensity profile of the beam of energetic electrons. It is shown that the electron beam profile can be modified by variation of the target thickness and degree of ellipticity in the laser polarization. By Vivian J. Carlson, Daniel L. Everett, Alma Gottlieb, Robin L. Harwood, Sean Hawks, Johannes Johow, Heidi Keller, Michael E. Lamb, David F. Lancy, Robert A. LeVine, Courtney L. Meehan, Hiltrud Otto, Birgitt Röttger-Rössler, Nancy Scheper-Hughes, Eckart Voland, Thomas S. Weisner Edited by Hiltrud Otto, Hebrew University of Jerusalem, Heidi Keller, Universität Osnabrück Book: Different Faces of Attachment Print publication: 17 July 2014, pp x-xv Christophe Tournu. Milton, de la famille à la République: Droit au divorce et droit des peuples. Libre pensée et littérature clandestine 48. Paris: Honoré Champion Éditeur, 2011. 444 pp. €112. ISBN: 978-2-7453-2220-3. Journal: Renaissance Quarterly / Volume 67 / Issue 2 / Summer 2014 Published online by Cambridge University Press: 20 November 2018, pp. 748-750 Print publication: Summer 2014 Stephen Deng. Coinage and State Formation in Early Modern English Literature. Early Modern Cultural Studies 1500–1700. New York: Palgrave MacMillan, 2011. Pp. 284. $90.00 (cloth). Applying a Biopsychosocial Perspective to Investigate Factors Related to Emotional Adjustment and Quality of Life for Individuals With Brain Tumour Tamara Ownsworth, Anna L. Hawkes, Suzanne Chambers, David G. Walker, David Shum Journal: Brain Impairment / Volume 11 / Issue 3 / 01 December 2010 Published online by Cambridge University Press: 21 February 2012, pp. 
270-280 Print publication: 01 December 2010 Objective: This exploratory study applied a biopsychosocial perspective to investigate cognitive and psychosocial factors related to emotional adjustment and QoL after brain tumour. Methods: Participants included 30 adults with a brain tumour (60% benign and 40% malignant) who were aged 28 to 71 years (M = 51.5, SD = 12.3) and on average 5.4 years post-diagnosis (SD = 5.6 years). Participants completed a brief battery of cognitive tests and self-report measures of emotional status (Depression, Anxiety Stress Scale), subjective impairment (Patient Competency Rating Scale), coping (COPE), social support (Brief Social Support Questionnaire), and QoL (Functional Assessment of Cancer Therapy — Brain Tumour [FACT-Br]). Results: QoL was significantly associated with global cognitive ability (r = .49, p < .01), subjective impairment (r = .66, p < .01), and satisfaction with support (r = .50, p < .05). Level of depressive symptoms was significantly correlated with premorbid IQ (r = -.49, p < .01), use of planning to cope (r = -.48, p < .01), and satisfaction with support (r = -.47, p < .01). Conclusions: Overall, these exploratory findings indicate that emotional adjustment and QoL after brain tumour is related to a slightly different pattern of neuropsychological, psychological (self-perceptions and coping) and social factors. The clinical implications for interventions with individuals with brain tumour are discussed. Transversal Enterprises in the Drama of Shakespeare and His Contemporaries: Fugitive Explorations. By Bryan Reynolds. New York: Palgrave Macmillan, 2006; pp. 271. $84.95 cloth, $29.95 paper. Journal: Theatre Survey / Volume 50 / Issue 1 / May 2009 Structural and Molecular Compartmentation in the Cerebellum Hawkes Richard, Blyth Steven, Chockkan Vijay, Tano David, Ji Zhongqi, Mascher Christa Journal: Canadian Journal of Neurological Sciences / Volume 20 / Issue S3 / May 1993 Published online by Cambridge University Press: 18 September 2015, pp. S29-S35 Most descriptions treat the cerebellum as a uniform structure, and the possibility of important regional heterogeneities in either chemistry or physiology is rarely considered. However, it is now clear that such an assumption is inappropriate. Instead, there is substantial evidence that the cerebellum is composed of hundreds of distinct modules, each with a precise pattern of inputs and outputs, and expressing a range of molecular signatures. By screening a monoclonal antibody library against cerebellar polypeptides we have identified antigens – zebrins – that reveal some of the cerebellum's covert heterogeneity. This article reviews some of these findings, relates them to the patterns of afferent connectivity, and considers some possible mechanisms through which the modular organization may arise. Rice, Rivalry, and Politics: Managing Cambodian Relief by Linda Mason and Roger Brown (University of Notre Dame Press; 256 pp.; $19.95/$9.95) David Hawk Journal: Worldview / Volume 27 / Issue 1 / January 1984 Published online by Cambridge University Press: 06 September 2018, pp. 28-29 Update: Distributing Food in Kampuchea David R. Hawk Journal: Worldview / Volume 24 / Issue 2 / February 1981 In May, 1980, when the Geneva meeting on Humani tarian Assistance and Relief to the Kampuchean People took place and funds to continue aid were pledged, the food distribution system inside Kampuchea was a sham bles. 
Rice-laden ships were backed up in the harbors at Kampong Som and Pnompenh, where warehouses were full. Rice, unlike rice seed, was not reaching the villages. There was rare consensus on this point, even among those international organizations and voluntary agencies that had been presenting a generally positive pic ture of developments inside Kampuchea. Relief work ers in Pnompenh took the unprecedented step of jointly warning the Heng Samrin regime that it could not count on continued international aid unless distribu tion improved. Interviews with Khmer peasants treking on foot, by bicycle, and oxcart to the Thai border from several provinces in west and northwest Kampuchea confirmed that food distribution was grossly inadequate if it existed at all. Heng Samrin's own village commit tees, with no rice to give peasants and farmers, were issuing passes to the Thai border that were being hon ored by the Vietnamese soldiers who control much of the access to the relief "land bridge" at Nong Chan. Human Rights and U.S. Foreign Policy ed. by Peter Brown and Douglas MacLean (Lexington Books; 301 pp.; $16.95) - Human Rights and U.S. Foreign Policy ed. by Barry Rubin and Elizabeth Spiro (Westview Press; 283 pp.; $20.00) - Human Rights and American Foreign Policy ed. by Donald Kommers and Gilburt Loescher (University of Notre Dame Press; 345 pp.; $14.95) Journal: Worldview / Volume 23 / Issue 4 / April 1980 Dictionary of Oriental Literatures. General editor Jaroslav Prusek. Vol. 1: East Asia [226 pp.]. Edited by Zbigniew Slupski. Vol. 2: South and South-East Asia [191 pp.]. Edited by Dusan Zbanitel. Vol. 3: West Asia and North Africa [213 pp.]. Edited by Jiri Becka. [London: George Allen & Unwin, 1974. Each volume £5·85.] Journal: The China Quarterly / Volume 75 / September 1978 Community Care: An Analysis of Assumptions David Hawks Journal: The British Journal of Psychiatry / Volume 127 / Issue 3 / September 1975 The implementation of a policy of 'community care' is seen to involve a number of assumptions, some of which are rarely examined. These can be roughly categorized as involving the nature of mental illness, the nature of community, the course and treatment of mental illness, the proper scope of psychiatry, the burden on the community and the efficacy of social work. Data bearing on these assumptions are reviewed, and the conclusion is offered that they are far from being uncontentious. It is suggested that the movement toward community care has many of the attributes of a moral enterprise which, unless substantiated by benefits to the patient or his family, may be the latest diversion of the psychiatric conscience from the care and treatment of the chronic mentally ill. A cluster process representation of a self-exciting process Alan G. Hawkes, David Oakes Journal: Journal of Applied Probability / Volume 11 / Issue 3 / September 1974 It is shown that all stationary self-exciting point processes with finite intensity may be represented as Poisson cluster processes which are age-dependent immigration-birth processes, and their existence is established. This result is used to derive some counting and interval properties of these processes using the probability generating functional. Jao Tsung-I: Tz'ŭ-tsi k'ao: examination of documents relating to tz'ŭ. Part 1. Collected works of separate authors from T'ang to Yüan, X, [XV[, 344, 6 pp. Hong Kong: Hong Kong University Press, 1963. (Distributed in G.B. by Oxford University Press. 24s.) 
Journal: Bulletin of the School of Oriental and African Studies / Volume 28 / Issue 3 / October 1965 Published online by Cambridge University Press: 24 December 2009, pp. 656-657 Print publication: October 1965 James J. Y. Liu: The art of Chinese poetry. xvi, 166 pp. London: Routledge & Kegan Paul, 1962. 30s. Chinese Literature: A Historical Introduction. By Ch'ên Shou-yi. New York: Ronald Press, 1961. 665. Index. $8.75. Journal: The Journal of Asian Studies / Volume 21 / Issue 3 / May 1962
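Returning to the two high-entropy alloy abstracts earlier in this listing: the "high configurational entropy" and "rule of mixtures (ROM)" they invoke are standard textbook quantities, and a minimal sketch of both is given below. This is not code from either paper; the ideal-solution entropy formula ΔS_conf = −R Σ c_i ln c_i and a composition-weighted ROM average are the assumed relations, and the elemental property values shown are placeholders for illustration only.

```python
# Minimal sketch (assumed textbook relations, not the papers' CALPHAD/DFT workflow):
# ideal configurational entropy of mixing and a rule-of-mixtures (ROM) estimate.
import math

R = 8.314  # gas constant, J/(mol*K)

def config_entropy(fractions):
    """Ideal-solution configurational entropy, -R * sum(c_i * ln c_i), in J/(mol*K)."""
    assert abs(sum(fractions) - 1.0) < 1e-9
    return -R * sum(c * math.log(c) for c in fractions if c > 0)

def rule_of_mixtures(fractions, element_values):
    """Composition-weighted average of an elemental property (e.g. a lattice parameter)."""
    return sum(c * v for c, v in zip(fractions, element_values))

if __name__ == "__main__":
    # Equiatomic quaternary such as CoCrFeNi: entropy = R*ln(4) ~ 11.53 J/(mol*K).
    c = [0.25, 0.25, 0.25, 0.25]
    print(round(config_entropy(c), 2))
    # Placeholder elemental values, purely to show the ROM averaging step.
    print(rule_of_mixtures(c, [3.54, 2.88, 2.87, 3.52]))
```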
CommonCrawl
Second order modified objective function method for twice differentiable vector optimization problems over cone constraints NACO Home A new reprojection of the conjugate directions June 2019, 9(2): 147-156. doi: 10.3934/naco.2019011 A Mehrotra type predictor-corrector interior-point algorithm for linear programming Soodabeh Asadi and Hossein Mansouri , Faculty of Mathematical Sciences, Shahrekord University, Shahrekord, Iran Received February 2017 Revised August 2018 Published January 2019 In this paper, we analyze a feasible predictor-corrector linear programming variant of Mehrotra's algorithm. The analysis is done in the negative infinity neighborhood of the central path. We demonstrate the theoretical efficiency of this algorithm by showing its polynomial complexity. The complexity result establishes an improvement of factor $ n^3 $ in the theoretical complexity of an earlier presented variant in [2], which is a huge improvement. We examine the performance of our algorithm by comparing its implementation results to solve some NETLIB problems with the algorithm presented in [2]. Keywords: Interior-point algorithm, linear programming, central path, predictor-corrector, iteration complexity. Mathematics Subject Classification: 90C05, 90C51. Citation: Soodabeh Asadi, Hossein Mansouri. A Mehrotra type predictor-corrector interior-point algorithm for linear programming. Numerical Algebra, Control & Optimization, 2019, 9 (2) : 147-156. doi: 10.3934/naco.2019011 R. Almeida, F. Bastos and A. Teixeira, On polynomiality of a predictor-corrector variant algorithm, in International conference on numerical analysis and applied mathematica, Springer-Verlag, New York, (2010), 959–963.Google Scholar R. Almeida and A. Teixeira, On the convergence of a predictor-corrector variant algorithm, TOP, 23 (2015), 401-418. doi: 10.1007/s11750-014-0346-8. Google Scholar E. D. Andersen and K. D. Andersen, The MOSEK interior point optimizer for linear programming: an implementation of the homogeneous algorithm, in High Performance Optimization (eds. H. Frenk, K. Roos, T. Terlaky and S. Zhang), Kluwer Academic Publishers, (2000), 197–232. doi: 10.1007/978-1-4757-3216-0_8. Google Scholar S. Asadi, H. Mansouri, Zs. Darvay and M. Zangiabadi, On the $P_*(\kappa)$ horizontal linear complementarity problems over Cartesian product of symmetric cones, Optim. Methods Softw., 31 (2016), 233-257. doi: 10.1080/10556788.2015.1058795. Google Scholar S. Asadi, H. Mansouri, Zs. Darvay, G. Lesaja and M. Zangiabadi, A long-step feasible predictor-corrector interior-point algorithm for symmetric cone optimization, Optim. Methods Softw., 67 (2018), 2031–2060 doi: 10.1080/10556788.2018.1528248. Google Scholar S. Asadi, H. Mansouri, G. Lesaja and M. Zangiabadi, A long-step interior-point algorithm for symmetric cone Cartesian $P_*(\kappa)$ -HLCP, Optimization, 67 (2018), 2031-2060. doi: 10.1080/02331934.2018.1512604. Google Scholar S. Asadi, H. Mansouri, Zs. Darvay, M. Zangiabadi and N Mahdavi-Amiri, Large-neighborhood infeasible predictor-corrector algorithm for horizontal linear complementarity problems over cartesian product of symmetric cones, J. Optim. Theory Appl., q doi: 10.1007/s10957-018-1402-6. Google Scholar S. Asadi, H. Mansouri and and Zs. Darvay, An infeasible full-NT step IPM for $P_*(\kappa)$ horizontal linear complementarity problem over Cartesian product of symmetric cones, Optimization, 66 (2017), 225-250. doi: 10.1080/02331934.2016.1267732. Google Scholar J. Czyzyk, S. Mehrtotra, M. Wagner and S. J. 
Wright, PCx: an interior-point code for linear programming, Optim. Methods Softw., 11/12 (1999), 397-430. doi: 10.1080/10556789908805757. Google Scholar J. Ji, F. Potra and R. Sheng, On a local convergence of a predictor-corrector method for semidefinite programming, SIAM J. Optim., 10 (1999), 195-210. doi: 10.1137/S1052623497316828. Google Scholar N. K. Karmarkar, A new polynomial-time algorithm for linear programming, Combinatorica, 4 (1984), 373-395. doi: 10.1007/BF02579150. Google Scholar M. Kojima, N. Megiddo, T. Noma and A. Yoshise, A Unified Approach to Interior Point Algorithms for Linear Complementarity Problems, Springer, Berlin, 1991. doi: 10.1007/3-540-54509-3. Google Scholar M. Kojima, N. Megiddo and S. Mizuno, A primal-dual infeasible-interior-point algorithm for linear programming, Math. Program., 61 (1993), 263-280. doi: 10.1007/BF01582151. Google Scholar S. Mehrotra, On finding a vertex solution using interior-point methods, Linear Algebra Appl., 152 (1991), 233-253. doi: 10.1016/0024-3795(91)90277-4. Google Scholar S. Mehrotra, On the implementation of a primal-dual interior point method, SIAM J. Optim., 2 (1992), 575-601. doi: 10.1137/0802028. Google Scholar N. Megiddo, Pathways to the optimal set in linear programming, in Progress in Mathematical Programming, (1989), 135–158. Google Scholar S. Mizuno, M. J. Todd and Y. Ye, On adaptive-step primal-dual interior-point algorithms for linear programming, Math. Oper. Res., 18 (1993), 964-981. doi: 10.1287/moor.18.4.964. Google Scholar R. D. C. Monteiro, Primal-dual path-following algorithm for semidefinite programming, SIAM J. Optim., 7 (1997), 663-678. doi: 10.1137/S1052623495293056. Google Scholar J. Peng, C. Roos and T. Terlaky, Self-Regularity: A New Paradigm for Primal-Dual Interior-Point Algorithms. Princeton University Press, Princeton, New Jersey, 2002. Google Scholar M. Salahi, J. Peng and T. Terlaky, On mehrotra-type predictor-corrector algorithms, SIAM J. Optim., 18 (2007), 1377-1397. doi: 10.1137/050628787. Google Scholar M. Salahi, A finite termination mehrotra type predictor-corrector algorithm, Appl. Math. Comput., 190 (2007), 1740-1746. doi: 10.1016/j.amc.2007.02.061. Google Scholar Gy. Sonnevend, An analytic center for polyhedrons and new classes of global algorithms for linear (smooth, convex) programming, in Lecture Notes in Control and Information Sciences, Springer, Berlin, (1985), 866–876. doi: 10.1007/BFb0043914. Google Scholar J. Stoer and M. Wechs, Infeasible-interior-point paths for sufficient linear complementarity problems and their analyticity, Math. Program. Ser. A., 83 (1998), 407-423. doi: 10.1016/S0025-5610(98)00011-2. Google Scholar G. Q. Wang and Y. Q. Bai, Polynomial interior-point algorithms for $P_*(\kappa)$ horizontal linear complementarity problem, J. Comput. Appl. Math., 233 (2009), 248-263. doi: 10.1016/j.cam.2009.07.014. Google Scholar G. Q. Wang and G. Lesaja, Full Nesterov-Todd step feasible interior-point method for the Cartesian $P_*(\kappa)$-SCLCP, Optim. Methods Softw., 28 (2013), 600-618. doi: 10.1080/10556788.2013.781600. Google Scholar Y. Zhang and D. Zhang, Superlinear convergence of infeasible-interior-point methods for linear programming, Math. Program., 66 (1994), 361-377. doi: 10.1007/BF01581155. Google Scholar Y. Zhang, On the convergence of a class of infeasible interior-point methods for the horizontal linear complementarity problem, SIAM J. Optim., 4 (1994), 208-227. doi: 10.1137/0804012. Google Scholar Y. 
Zhang, Solving large scale linear programmes by interior point methods under the Matlab environment, Optim. Methods Softw., 10 (1999), 1-31. doi: 10.1080/10556789808805699. Google Scholar

Table 1. The number of iterations

Problem    $m$    $n$    Alg 1 (It.)   Alg 1 ($x^Tv$)   Alg 2 (It.)   Alg 2 ($x^Tv$)
blend       74    114    242           8.3720e-4        280           8.4720e-4
adlittle    56    138     61           3.8658e-4        376           3.7909e-4
scagr7     129    185    308           4.2065e-4        217           7.5233e-4
share1b    117    253     51           1.3551e-4        344           1.0125e-4
share2b     96    162    191           3.8527e-4        296           3.9533e-4
scsd1       77    760     75           1.1725e-4        112           1.0346e-4
sc105      105    163    238           5.0063e-4        266           1.6058e-4
agg        488    615     31           1.0920e-4        199           1.0088e-4

Liming Sun, Li-Zhi Liao. An interior point continuous path-following trajectory for linear programming. Journal of Industrial & Management Optimization, 2019, 15 (4) : 1517-1534. doi: 10.3934/jimo.2018107 Siqi Li, Weiyi Qian. Analysis of complexity of primal-dual interior-point algorithms based on a new kernel function for linear optimization. Numerical Algebra, Control & Optimization, 2015, 5 (1) : 37-46. doi: 10.3934/naco.2015.5.37 Yinghong Xu, Lipu Zhang, Jing Zhang. A full-modified-Newton step infeasible interior-point algorithm for linear optimization. Journal of Industrial & Management Optimization, 2016, 12 (1) : 103-116. doi: 10.3934/jimo.2016.12.103 Behrouz Kheirfam, Morteza Moslemi. On the extension of an arc-search interior-point algorithm for semidefinite optimization. Numerical Algebra, Control & Optimization, 2018, 8 (2) : 261-275. doi: 10.3934/naco.2018015 Yanqin Bai, Pengfei Ma, Jing Zhang. A polynomial-time interior-point method for circular cone programming based on kernel functions. Journal of Industrial & Management Optimization, 2016, 12 (2) : 739-756. doi: 10.3934/jimo.2016.12.739 Guoqiang Wang, Zhongchen Wu, Zhongtuan Zheng, Xinzhong Cai. Complexity analysis of primal-dual interior-point methods for semidefinite optimization based on a parametric kernel function with a trigonometric barrier term. Numerical Algebra, Control & Optimization, 2015, 5 (2) : 101-113. doi: 10.3934/naco.2015.5.101 Antonio Coronel-Escamilla, José Francisco Gómez-Aguilar. A novel predictor-corrector scheme for solving variable-order fractional delay differential equations involving operators with Mittag-Leffler kernel. Discrete & Continuous Dynamical Systems - S, 2018, 0 (0) : 561-574. doi: 10.3934/dcdss.2020031 Behrouz Kheirfam. A full Nesterov-Todd step infeasible interior-point algorithm for symmetric optimization based on a specific kernel function. Numerical Algebra, Control & Optimization, 2013, 3 (4) : 601-614. doi: 10.3934/naco.2013.3.601 Yanqin Bai, Lipu Zhang. A full-Newton step interior-point algorithm for symmetric cone convex quadratic optimization. Journal of Industrial & Management Optimization, 2011, 7 (4) : 891-906. doi: 10.3934/jimo.2011.7.891 Andrew E.B. Lim, John B. Moore. A path following algorithm for infinite quadratic programming on a Hilbert space. Discrete & Continuous Dynamical Systems - A, 1998, 4 (4) : 653-670. doi: 10.3934/dcds.1998.4.653 Yanqin Bai, Xuerui Gao, Guoqiang Wang. Primal-dual interior-point algorithms for convex quadratic circular cone optimization. Numerical Algebra, Control & Optimization, 2015, 5 (2) : 211-231. doi: 10.3934/naco.2015.5.211 Boshi Tian, Xiaoqi Yang, Kaiwen Meng. An interior-point $l_{\frac{1}{2}}$-penalty method for inequality constrained nonlinear optimization. Journal of Industrial & Management Optimization, 2016, 12 (3) : 949-973. doi: 10.3934/jimo.2016.12.949 Yu-Hong Dai, Xin-Wei Liu, Jie Sun.
A primal-dual interior-point method capable of rapidly detecting infeasibility for nonlinear programs. Journal of Industrial & Management Optimization, 2017, 13 (5) : 1-27. doi: 10.3934/jimo.2018190 Zheng-Hai Huang, Shang-Wen Xu. Convergence properties of a non-interior-point smoothing algorithm for the P*NCP. Journal of Industrial & Management Optimization, 2007, 3 (3) : 569-584. doi: 10.3934/jimo.2007.3.569 Yanqun Liu. An exterior point linear programming method based on inclusive normal cones. Journal of Industrial & Management Optimization, 2010, 6 (4) : 825-846. doi: 10.3934/jimo.2010.6.825 Rong Hu, Ya-Ping Fang. A parametric simplex algorithm for biobjective piecewise linear programming problems. Journal of Industrial & Management Optimization, 2017, 13 (2) : 573-586. doi: 10.3934/jimo.2016032 Guillaume Bal, Wenjia Jing. Homogenization and corrector theory for linear transport in random media. Discrete & Continuous Dynamical Systems - A, 2010, 28 (4) : 1311-1343. doi: 10.3934/dcds.2010.28.1311 Jianqin Zhou, Wanquan Liu, Xifeng Wang. Complete characterization of the first descent point distribution for the k-error linear complexity of 2n-periodic binary sequences. Advances in Mathematics of Communications, 2017, 11 (3) : 429-444. doi: 10.3934/amc.2017036 Rich Stankewitz, Hiroki Sumi. Random backward iteration algorithm for Julia sets of rational semigroups. Discrete & Continuous Dynamical Systems - A, 2015, 35 (5) : 2165-2175. doi: 10.3934/dcds.2015.35.2165 Scott Crass. New light on solving the sextic by iteration: An algorithm using reliable dynamics. Journal of Modern Dynamics, 2011, 5 (2) : 397-408. doi: 10.3934/jmd.2011.5.397 Soodabeh Asadi Hossein Mansouri
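Since the page above concerns a Mehrotra-type predictor-corrector method, a compact numerical sketch of the generic, textbook Mehrotra iteration for a standard-form LP (min c^T x subject to Ax = b, x >= 0) may be useful for orientation. This is not the feasible variant analysed by Asadi and Mansouri, nor an implementation of PCx or MOSEK; it uses dense normal equations purely for clarity, and the starting point, tolerances and the toy problem at the bottom are illustrative assumptions. The NETLIB problem names in Table 1 (blend, adlittle, scagr7, ...) are the standard test instances referenced in the experiments.

```python
# Generic Mehrotra predictor-corrector sketch for min c^T x, Ax = b, x >= 0 (dense, illustrative).
import numpy as np

def mehrotra_lp(A, b, c, tol=1e-8, max_iter=50):
    m, n = A.shape
    x = np.ones(n); s = np.ones(n); y = np.zeros(m)

    def solve_newton(d, rp, rd, rc):
        # Normal equations: (A D A^T) dy = rp + A (D rd - rc / s), with D = diag(x/s).
        M = (A * d) @ A.T
        dy = np.linalg.solve(M, rp + A @ (d * rd - rc / s))
        dx = d * (A.T @ dy - rd) + rc / s
        ds = (rc - s * dx) / x
        return dx, dy, ds

    def max_step(v, dv):
        neg = dv < 0
        return 1.0 if not np.any(neg) else min(1.0, np.min(-v[neg] / dv[neg]))

    for _ in range(max_iter):
        rp = b - A @ x                  # primal residual
        rd = c - A.T @ y - s            # dual residual
        mu = x @ s / n                  # duality measure
        if mu < tol and np.linalg.norm(rp) < tol and np.linalg.norm(rd) < tol:
            break
        d = x / s
        # Predictor (affine-scaling) direction: drive the products x_i s_i toward zero.
        dx_a, dy_a, ds_a = solve_newton(d, rp, rd, -x * s)
        ap, ad = max_step(x, dx_a), max_step(s, ds_a)
        mu_aff = (x + ap * dx_a) @ (s + ad * ds_a) / n
        sigma = (mu_aff / mu) ** 3      # Mehrotra's adaptive centering parameter
        # Corrector: add centering plus the second-order correction term.
        rc = sigma * mu - x * s - dx_a * ds_a
        dx, dy, ds = solve_newton(d, rp, rd, rc)
        ap, ad = 0.99 * max_step(x, dx), 0.99 * max_step(s, ds)
        x, y, s = x + ap * dx, y + ad * dy, s + ad * ds
    return x, y, s

if __name__ == "__main__":
    # Toy problem: min -x1 - 2*x2 s.t. x1 + x2 + x3 = 4, x1 + 3*x2 + x4 = 6, x >= 0.
    A = np.array([[1., 1., 1., 0.], [1., 3., 0., 1.]])
    b = np.array([4., 6.]); c = np.array([-1., -2., 0., 0.])
    x, _, _ = mehrotra_lp(A, b, c)
    print(np.round(x, 4))   # expect roughly [3, 1, 0, 0]
```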
CommonCrawl
Induced Ginibre ensemble of random matrices and quantum operations (1107.5019) J. Fischmann, W. Bruzda, B. A. Khoruzhenko, H.-J. Sommers, K. Zyczkowski Jan. 5, 2012 math-ph, math.MP A generalisation of the Ginibre ensemble of non-Hermitian random square matrices is introduced. The corresponding probability measure is induced by the ensemble of rectangular Gaussian matrices via a quadratisation procedure. We derive the joint probability density of eigenvalues for such induced Ginibre ensemble and study various spectral correlation functions for complex and real matrices, and analyse universal behaviour in the limit of large dimensions. In this limit the eigenvalues of the induced Ginibre ensemble cover uniformly a ring in the complex plane. The real induced Ginibre ensemble is shown to be useful to describe statistical properties of evolution operators associated with random quantum operations, for which the dimensions of the input state and the output state do differ. Non-Hermitian Random Matrix Ensembles (0911.5645) B.A. Khoruzhenko, H.-J. Sommers Nov. 30, 2009 math-ph, math.MP This is a concise review of the complex, real and quaternion real Ginibre random matrix ensembles and their elliptic deformations. Eigenvalue correlations are exactly reduced to two-point kernels and discussed in the strongly and weakly non-Hermitian limits of large matrix size. Characteristic polynomials in real Ginibre ensembles (0810.1458) G. Akemann, M.J. Phillips, H.-J. Sommers Nov. 10, 2009 hep-th, math-ph, math.MP, cond-mat.stat-mech We calculate the average of two characteristic polynomials for the real Ginibre ensemble of asymmetric random matrices, and its chiral counterpart. Considered as quadratic forms they determine a skew-symmetric kernel from which all complex eigenvalue correlations can be derived. Our results are obtained in a very simple fashion without going to an eigenvalue representation, and are completely new in the chiral case. They hold for Gaussian ensembles which are partly symmetric, with kernels given in terms of Hermite and Laguerre polynomials respectively, depending on an asymmetry parameter. This allows us to interpolate between the maximally asymmetric real Ginibre and the Gaussian Orthogonal Ensemble, as well as their chiral counterparts. The chiral Gaussian two-matrix ensemble of real asymmetric matrices (0911.1276) Nov. 6, 2009 hep-th, math-ph, math.MP, hep-lat We solve a family of Gaussian two-matrix models with rectangular Nx(N+v) matrices, having real asymmetric matrix elements and depending on a non-Hermiticity parameter mu. Our model can be thought of as the chiral extension of the real Ginibre ensemble, relevant for Dirac operators in the same symmetry class. It has the property that its eigenvalues are either real, purely imaginary, or come in complex conjugate eigenvalue pairs. The eigenvalue joint probability distribution for our model is explicitly computed, leading to a non-Gaussian distribution including K-Bessel functions. All n-point density correlation functions are expressed for finite N in terms of a Pfaffian form. This contains a kernel involving Laguerre polynomials in the complex plane as a building block which was previously computed by the authors. This kernel can be expressed in terms of the kernel for complex non-Hermitian matrices, generalising the known relation among ensembles of Hermitian random matrices. 
Compact expressions are given for the density at finite N as an example, as well as its microscopic large-N limits at the origin for fixed v at strong and weak non-Hermiticity. Random Bistochastic Matrices (0711.3345) V. Cappellini, H.-J. Sommers, W. Bruzda, K. Zyczkowski Aug. 24, 2009 math-ph, math.MP, cond-mat.stat-mech, nlin.SI Ensembles of random stochastic and bistochastic matrices are investigated. While all columns of a random stochastic matrix can be chosen independently, the rows and columns of a bistochastic matrix have to be correlated. We evaluate the probability measure induced into the Birkhoff polytope of bistochastic matrices by applying the Sinkhorn algorithm to a given ensemble of random stochastic matrices. For matrices of order N=2 we derive explicit formulae for the probability distributions induced by random stochastic matrices with columns distributed according to the Dirichlet distribution. For arbitrary $N$ we construct an initial ensemble of stochastic matrices which allows one to generate random bistochastic matrices according to a distribution locally flat at the center of the Birkhoff polytope. The value of the probability density at this point enables us to obtain an estimation of the volume of the Birkhoff polytope, consistent with recent asymptotic results. Systematic approach to statistics of conductance and shot-noise in chaotic cavities (0906.0161) B. A. Khoruzhenko, D. V. Savin, H.-J. Sommers May 31, 2009 math-ph, math.MP, nlin.CD, cond-mat.mes-hall Applying random matrix theory to quantum transport in chaotic cavities, we develop a novel approach to computation of the moments of the conductance and shot-noise (including their joint moments) of arbitrary order and at any number of open channels. The method is based on the Selberg integral theory combined with the theory of symmetric functions and is applicable equally well for systems with and without time-reversal symmetry. We also compute higher-order cumulants and perform their detailed analysis. In particular, we establish an explicit form of the leading asymptotic of the cumulants in the limit of the large channel numbers. We derive further a general Pfaffian representation for the corresponding distribution functions. The Edgeworth expansion based on the first four cumulants is found to reproduce fairly accurately the distribution functions in the bulk even for a small number of channels. As the latter increases, the distributions become Gaussian-like in the bulk but are always characterized by a power-law dependence near their edges of support. Such asymptotics are determined exactly up to linear order in distances from the edges, including the corresponding constants. Superbosonization of invariant random matrix ensembles (0707.2929) P. Littelmann, H.-J. Sommers, M.R. Zirnbauer Aug. 23, 2008 math-ph, math.MP Superbosonization is a new variant of the method of commuting and anti-commuting variables as used in studying random matrix models of disordered and chaotic quantum systems. We here give a concise mathematical exposition of the key formulas of superbosonization. Conceived by analogy with the bosonization technique for Dirac fermions, the new method differs from the traditional one in that the superbosonization field is dual to the usual Hubbard-Stratonovich field. The present paper addresses invariant random matrix ensembles with symmetry group U(n), O(n), or USp(n), giving precise definitions and conditions of validity in each case. 
The method is illustrated at the example of Wegner's n-orbital model. Superbosonization promises to become a powerful tool for investigating the universality of spectral correlation functions for a broad class of random matrix ensembles of non-Gaussian and/or non-invariant type. Nonlinear statistics of quantum transport in chaotic cavities (0711.1764) D. V. Savin, H.-J. Sommers, W. Wieczorek Nov. 12, 2007 cond-mat.mes-hall In the framework of the random matrix approach, we apply the theory of Selberg's integral to problems of quantum transport in chaotic cavities. All the moments of transmission eigenvalues are calculated analytically up to the fourth order. As a result, we derive exact explicit expressions for the skewness and kurtosis of the conductance and transmitted charge as well as for the variance of the shot-noise power in chaotic cavities. The obtained results are generally valid at arbitrary numbers of propagating channels in the two attached leads. In the particular limit of large (and equal) channel numbers, the shot-noise variance attends the universal value 1/(64\beta) that determines a universal Gaussian statistics of shot-noise fluctuations in this case. Statistics of conductance and shot-noise power for chaotic cavities (0710.5370) H.-J. Sommers, W. Wieczorek, D.V. Savin Oct. 29, 2007 cond-mat.mes-hall We report on an analytical study of the statistics of conductance, $g$, and shot-noise power, $p$, for a chaotic cavity with arbitrary numbers $N_{1,2}$ of channels in two leads and symmetry parameter $\beta = 1,2,4$. With the theory of Selberg's integral the first four cumulants of $g$ and first two cumulants of $p$ are calculated explicitly. We give analytical expressions for the conductance and shot-noise distributions and determine their exact asymptotics near the edges up to linear order in distances from the edges. For $0<g<1$ a power law for the conductance distribution is exact. All results are also consistent with numerical simulations. Classical Particle in a Box with Random Potential: exploiting rotational symmetry of replicated Hamiltonian (cond-mat/0610035) Yan V. Fyodorov, H.-J. Sommers June 25, 2007 cond-mat.dis-nn We investigate thermodynamics of a single classical particle placed in a spherical box of a finite radius $R$ and subject to a superposition of a $N-$dimensional Gaussian random potential and the parabolic potential with the curvature $\mu>0$. Earlier solutions of $R\to \infty$ version of this model were based on combining the replica trick with the Gaussian Variational Ansatz (GVA) for free energy, and revealed a possibility of a glassy phase at low temperatures. For a general $R$, we show how to utilize instead the underlying rotational symmetry of the replicated partition function and to arrive to a compact expression for the free energy in the limit $N\to \infty$ directly, without any need for intermediate variational approximations. This method reveals striking similarity with the much-studied spherical model of spin glasses. Depending on the value of $R$ and the three types of disorder - short-ranged, long-ranged, and logarithmic - the phase diagram of the system in the $(\mu,T)$ plane undergoes considerable modifications. In the limit of infinite confinement radius our analysis confirms all previous results obtained by GVA. Energy correlations for a random matrix model of disordered bosons (cond-mat/0607243) T. Lueck, H.-J. Sommers, M.R. Zirnbauer Oct. 
3, 2006 math-ph, math.MP, cond-mat.mes-hall, cond-mat.dis-nn Linearizing the Heisenberg equations of motion around the ground state of an interacting quantum many-body system, one gets a time-evolution generator in the positive cone of a real symplectic Lie algebra. The presence of disorder in the physical system determines a probability measure with support on this cone. The present paper analyzes a discrete family of such measures of exponential type, and does so in an attempt to capture, by a simple random matrix model, some generic statistical features of the characteristic frequencies of disordered bosonic quasi-particle systems. The level correlation functions of the said measures are shown to be those of a determinantal process, and the kernel of the process is expressed as a sum of bi-orthogonal polynomials. While the correlations in the bulk scaling limit are in accord with sine-kernel or GUE universality, at the low-frequency end of the spectrum an unusual type of scaling behavior is found. Shot noise in chaotic cavities with an arbitrary number of open channels (cond-mat/0512620) D. V. Savin, H.-J. Sommers Feb. 15, 2006 cond-mat.mes-hall Using the random matrix approach, we calculate analytically the average shot-noise power in a chaotic cavity at an arbitrary number of propagating modes (channels) in each of the two attached leads. A simple relationship between this quantity, the average conductance and the conductance variance is found. The dependence of the Fano factor on the channel number is considered in detail. Scattering, reflection and impedance of waves in chaotic and disordered systems with absorption (cond-mat/0507016) Y. V. Fyodorov, D. V. Savin, H.-J. Sommers Dec. 15, 2005 cond-mat.mes-hall We review recent progress in analysing wave scattering in systems with both intrinsic chaos and/or disorder and internal losses, when the scattering matrix is no longer unitary. By mapping the problem onto a nonlinear supersymmetric sigma-model, we are able to derive closed form analytic expressions for the distribution of reflection probability in a generic disordered system. One of the most important properties resulting from such an analysis is statistical independence between the phase and the modulus of the reflection amplitude in every perfectly open channel. The developed theory has far-reaching consequences for many quantities of interest, including local Green functions and time delays. In particular, we point out the role played by absorption as a sensitive indicator of mechanisms behind the Anderson localisation transition. We also provide a random-matrix-based analysis of S-matrix and impedance correlations for various symmetry classes as well as the distribution of transmitted power for systems with broken time-reversal invariance, completing previous works on the subject. The results can be applied, in particular, to the experimentally accessible impedance and reflection in a microwave or ultrasonic cavity attached to a system of antennas. Universal statistics of the local Green's function in quantum chaotic systems with absorption (cond-mat/0502359) D. V. Savin, H.-J. Sommers, Y. V. Fyodorov We establish a general relation between the statistics of the local Green's function for systems with chaotic wave scattering and a uniform energy loss (absorption) and its two-point correlation function for the same system without absorption. 
Within the random matrix approach this kind of a fluctuation dissipation relation allows us to derive the explicit analytical expression for the joint distribution function of the real and imaginary parts of the local Green function for all symmetry classes as well as at an arbitrary degree of the time-reversal symmetry breaking in the system. The outstanding problem of the orthogonal symmetry is further reduced to simple quadratures. The results can be applied, in particular, to the experimentally accessible impedance and reflection in a microwave cavity attached to a single-mode antenna. Correlation functions of impedance and scattering matrix elements in chaotic absorbing cavities (nlin/0506040) D. V. Savin, Y. V. Fyodorov, H.-J. Sommers June 22, 2005 nlin.CD, cond-mat.mes-hall, nucl-th Wave scattering in chaotic systems with a uniform energy loss (absorption) is considered. Within the random matrix approach we calculate exactly the energy correlation functions of different matrix elements of impedance or scattering matrices for systems with preserved or broken time-reversal symmetry. The obtained results are valid at any number of arbitrary open scattering channels and arbitrary absorption. Elastic enhancement factors (defined through the ratio of the corresponding variance in reflection to that in transmission) are also discussed. Distribution of reflection eigenvalues in many-channel chaotic cavities with absorption (cond-mat/0311285) D.V. Savin, H.-J. Sommers Nov. 13, 2003 quant-ph, nlin.CD, cond-mat.mes-hall The reflection matrix R=S^{\dagger}S, with S being the scattering matrix, differs from the unit one, when absorption is finite. Using the random matrix approach, we calculate analytically the distribution function of its eigenvalues in the limit of a large number of propagating modes in the leads attached to a chaotic cavity. The obtained result is independent on the presence of time-reversal symmetry in the system, being valid at finite absorption and arbitrary openness of the system. The particular cases of perfectly and weakly open cavities are considered in detail. An application of our results to the problem of thermal emission from random media is briefly discussed. Delay times and reflection in chaotic cavities with absorption (cond-mat/0303083) July 13, 2003 nlin.CD, cond-mat.mes-hall Absorption yields an additional exponential decay in open quantum systems which can be described by shifting the (scattering) energy E along the imaginary axis, E+i\hbar/2\tau_{a}. Using the random matrix approach, we calculate analytically the distribution of proper delay times (eigenvalues of the time-delay matrix) in chaotic systems with broken time-reversal symmetry that is valid for an arbitrary number of generally nonequivalent channels and an arbitrary absorption rate 1/\tau_{a}. The relation between the average delay time and the ``norm-leakage'' decay function is found. Fluctuations above the average at large values of delay times are strongly suppressed by absorption. The relation of the time-delay matrix to the reflection matrix S^{\dagger}S is established at arbitrary absorption that gives us the distribution of reflection eigenvalues. The particular case of single-channel scattering is explicitly considered in detail. Is the concept of the non-Hermitian effective Hamiltonian relevant in the case of potential scattering? (cond-mat/0206176) D. V. Savin, V. V. Sokolov, H.-J. 
Sommers March 4, 2003 quant-ph, cond-mat.mes-hall, nucl-th We examine the notion and properties of the non-Hermitian effective Hamiltonian of an unstable system using as an example potential resonance scattering with a fixed angular momentum. We present a consistent self-adjoint formulation of the problem of scattering on a finite-range potential, which is based on separation of the configuration space on two, internal and external, segments. The scattering amplitude is expressed in terms of the resolvent of a non-Hermitian operator H. The explicit form of this operator depends both on the radius of separation and the boundary conditions at this place which can be chosen in many different ways. We discuss this freedom and show explicitly that the physical scattering amplitude is, nevertheless, unique though not all choices are equally adequate from the physical point of view. The energy-dependent operator H should not be confused with the non-Hermitian effective Hamiltonian H_{eff} exploited usually to describe interference of overlapping resonances. We apply the developed formalism to a chain of L delta-barriers whose solution is also found independently in a closed form. For a fixed band of L overlapping resonances, the smooth energy dependence of H can be ignored so that complex eigenvalues of the LxL submatrix H_{eff} define the energies and widths of the resonances.We construct H_{eff} for the two commonly considered types of the boundary conditions (Neumann and Dirichlet) for the internal motion. Formation in the outer well of a short-lived doorway state is explicitly demonstrated together with the appearance of L-1 long-lived states trapped in the inner part of the chain. Random unistochastic matrices (nlin/0112036) K. Zyczkowski, W. Slomczynski, M. Kus, H.-J. Sommers Aug. 28, 2002 nlin.CD An ensemble of random unistochastic (orthostochastic) matrices is defined by taking squared moduli of elements of random unitary (orthogonal) matrices distributed according to the Haar measure on U(N) (or O(N), respectively). An ensemble of symmetric unistochastic matrices is obtained with use of unitary symmetric matrices pertaining to the circular orthogonal ensemble. We study the distribution of complex eigenvalues of bistochastic, unistochastic and ortostochastic matrices in the complex plane. We compute averages (entropy, traces) over the ensembles of unistochastic matrices and present inequalities concerning the entropies of products of bistochastic matrices. Statistics of S-matrix poles for chaotic systems with broken time reversal invariance: a conjecture (cond-mat/9802306) Yan V. Fyodorov, Mikhail Titov, H.-J. Sommers Feb. 27, 1998 cond-mat In the framework of a random matrix description of chaotic quantum scattering the positions of $S-$matrix poles are given by complex eigenvalues $Z_i$ of an effective non-Hermitian random-matrix Hamiltonian. We put forward a conjecture on statistics of $Z_i$ for systems with broken time-reversal invariance and verify that it allows to reproduce statistical characteristics of Wigner time delays known from independent calculations. We analyze the ensuing two-point statistical measures as e.g. spectral form factor and the number variance. In addition we find the density of complex eigenvalues of real asymmetric matrices generalizing the recent result by Efetov\cite{Efnh}. Almost-Hermitian Random Matrices: Crossover from Wigner-Dyson to Ginibre eigenvalue statistics (cond-mat/9703152) Yan V. Fyodorov, Boris A. Khoruzhenko, H.-J. 
Sommers March 14, 1997 hep-th, nlin.CD, chao-dyn, cond-mat By using the method of orthogonal polynomials we analyze the statistical properties of complex eigenvalues of random matrices describing a crossover from Hermitian matrices characterized by the Wigner- Dyson statistics of real eigenvalues to strongly non-Hermitian ones whose complex eigenvalues were studied by Ginibre. Two-point statistical measures (as e.g. spectral form factor, number variance and small distance behavior of the nearest neighbor distance distribution $p(s)$) are studied in more detail. In particular, we found that the latter function may exhibit unusual behavior $p(s)\propto s^{5/2}$ for some parameter values. Parametric Correlations of Phase Shifts and Statistics of Time Delays in Quantum Chaotic Scattering: Crossover between Unitary and Orthogonal Symmetries (cond-mat/9701108) Yan V. Fyodorov, Dmitry V. Savin, H.-J. Sommers Jan. 15, 1997 nlin.CD, cond-mat.mes-hall, chao-dyn We analyse universal statistical properties of phase shifts and time delays for open chaotic systems in the crossover regime of partly broken time-reversal invariance. In particular, we find that the distribution of the time delay shows $\tau^{-3/2}$ behavior for weakly open systems of any symmetry. Parametric Correlations of Scattering Phase Shifts and Fluctuations of Delay Times in Few-Channel Chaotic Scattering (cond-mat/9601046) April 19, 1996 nlin.CD, chao-dyn, cond-mat By using the supersymmetry method we derive an explicit expression for the parametric correlation function of densities of eigenphases $\theta_a$ of the S-matrix in a chaotic quantum system with broken time-reversal symmetry coupled to continua via M equivalent open channels;$\,\,a=1,..,M$ .We use it to find the distribution of derivatives of these eigenphases over the energy ("phaseshift times") as well as over an arbitrary external parameter.We also find the parametric correlations of Wigner-Smith delay times Statistics of S-matrix poles in Few-Channel Chaotic Scattering: Crossover from Isolated to Overlapping Resonances (cond-mat/9507117) Nov. 24, 1995 nlin.CD, nucl-th, chao-dyn, cond-mat We derive the explicit expression for the distribution of resonance widths in a chaotic quantum system coupled to continua via M equivalent open channels. It describes a crossover from the $\chi^2$ distribution (regime of isolated resonances) to a broad power-like distribution typical for the regime of overlapping resonances. The first moment is found to reproduce exactly the Moldauer-Simonius relation between the mean resonance width and the transmission coefficient. This fact may serve as another manifestation of equivalence between the spectral and the ensemble averaging. Time Delay Correlations in Chaotic Scattering: Random Matrix Approach (chao-dyn/9501018) N. Lehmann, D.V. Savin, V.V. Sokolov, H.-J. Sommers Jan. 30, 1995 nlin.CD, nucl-th, chao-dyn, cond-mat We study the correlations of time delays in a model of chaotic resonance scattering based on the random matrix approach. Analytical formulae which are valid for arbitrary number of open channels and arbitrary coupling strength between resonances and channels are obtained by the supersymmetry method. We demonstrate that the time delay correlation function, though being not a Lorentzian, is characterized, similar to that of the scattering matrix, by the gap between the cloud of complex poles of the $S$-matrix and the real energy axis.
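The "Random Bistochastic Matrices" abstract in the listing above describes pushing an ensemble of random stochastic matrices into the Birkhoff polytope with the Sinkhorn algorithm. The snippet below is a minimal numerical sketch of that construction (Dirichlet-distributed columns followed by alternating row/column normalisation); it is not the authors' code, and the dimension, Dirichlet parameter and tolerance are arbitrary choices for illustration.

```python
# Minimal sketch: draw a random column-stochastic matrix with Dirichlet-distributed columns,
# then apply Sinkhorn's alternating normalisation to obtain an (approximately) bistochastic matrix.
import numpy as np

def random_stochastic(n, alpha=1.0, rng=None):
    """Columns drawn independently from a Dirichlet distribution, so each column sums to 1."""
    rng = np.random.default_rng() if rng is None else rng
    return rng.dirichlet(alpha * np.ones(n), size=n).T  # shape (n, n), column-stochastic

def sinkhorn(M, tol=1e-12, max_iter=10_000):
    """Alternately rescale rows and columns of a positive matrix until both sums are ~1."""
    B = M.copy()
    for _ in range(max_iter):
        B /= B.sum(axis=1, keepdims=True)   # make rows sum to 1
        B /= B.sum(axis=0, keepdims=True)   # make columns sum to 1
        if np.max(np.abs(B.sum(axis=1) - 1.0)) < tol:
            break
    return B

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    B = sinkhorn(random_stochastic(4, rng=rng))
    print(np.round(B.sum(axis=0), 6), np.round(B.sum(axis=1), 6))  # both ~[1, 1, 1, 1]
```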
CommonCrawl
The Journal of Economic Inequality December 2018 , Volume 16, Issue 4, pp 507–525 | Cite as How does inequality aversion affect inequality and redistribution? Matthew N. Murray Langchuan Peng Rudy Santore We investigate the effects of inequality aversion on equilibrium labor supply, tax revenue, income inequality, and median voter outcomes in a society where agents have heterogeneous skill levels. These outcomes are compared to those which result from the behavior of selfish agents. A variant of Fehr-Schmidt preferences is employed that allows the externality from agents who are "ahead" to differ in magnitude from the externality from those who are "behind" in the income distribution. We find first, that inequality-averse preferences yield distributional outcomes that are analogous to tax-transfer schemes with selfish agents, and may either increase or decrease average consumption. Second, in a society of inequality-averse agents, a linear income tax can be welfare-enhancing. Third, inequality-averse preferences can lead to less redistribution at any given tax, with low-wage agents receiving smaller net subsidies and/or high-wage individuals paying less in net taxes. Finally, an inequality-averse median voter may prefer higher redistribution even if it means less utility from own consumption and leisure. Income distribution Inequality aversion Redistribution 10888_2018_9389_MOESM1_ESM.docx (27 kb) (DOCX 26.7 KB) Claim: Wage income, wL(w,τ), is increasing in w if and only if σ(1 − αH(w) + β[1 − H(w)]) > w(α + β)h(w) for all \(w\in [\underline {w},\overline {w} ]\). Using (2) we have $${wL(w,\tau )=w^{\frac{\sigma} {\sigma -1}}[\frac{(1-\tau )\{1-\alpha H(w)+\beta [1-H(w)]\}}{\xi} ]}^{\frac{1}{\sigma -1}}. $$ Now differentiate the above with respect to w to get a necessary and sufficient condition for labor income to be increasing in the wage: $$\begin{array}{@{}rcl@{}} \frac{\partial (wL(w,\tau ))}{\partial w}&=&{w^{\frac{\sigma} {\sigma -1}}[\frac{(1-\tau )\{1-\alpha H\left( w \right)+\beta [1-H(w)]\}}{\xi} ]}^{\frac{1}{\sigma -1}-1}(\frac{1}{\sigma -1})\\&&\times(\frac{-(1-\tau )(\alpha +\beta )h(w)}{\xi} ) \end{array} $$ $$+{w^{\frac{1}{\sigma -1}}[\frac{(1-\tau )\{1-\alpha H(w)+\beta [1-H(w)]\}}{\xi} ]}^{\frac{1}{\sigma -1}}(\frac{\sigma} {\sigma -1})>0. $$ The parametric condition for \(\frac {\partial (wL(w,\tau ))}{\partial w}>0\) is thus $${w^{\frac{1}{\sigma -1}}[\frac{(1-\tau )\{1-\alpha H(w)+\beta [1-H(w)]\}}{\xi} ]}^{\frac{1}{\sigma -1}}(\frac{\sigma} {\sigma -1})>$$ $${w^{\frac{\sigma} {\sigma -1}}[\frac{(1-\tau )\{1-\alpha H(w)+\beta [1-H(w)]\}}{\xi} ]}^{\frac{1}{\sigma -1}-1}(\frac{1}{\sigma -1})(\frac{(1-\tau )(\alpha +\beta )h(w)}{\xi} ). $$ Finally, rewriting the above yields the necessary and sufficient condition (given in (3)) that must be satisfied at all w in order for labor income to be monotonically increasing in the wage: σ(1 − αH(w) + β[1 − H(w)]) > w(α + β)h(w). □ Proof of Proposition 1 An inequality-averse individual i with an hourly wage wi provides weakly greater labor than a self-interested individual with the same hourly wage if and only if: $$ {[\frac{(1-\tau )w_{i}\{1-\alpha H(w_{i})+\beta [1-H(w_{i})]\}}{\xi}]}^{\frac{1}{\sigma -1}}\ge {\mathrm{[[}\frac{(1-\tau )w_{i}}{\xi}]}^{\frac{1}{\sigma -1}}. $$ Rearranging the above yields the condition $$ H(w_{i})\le \frac{\beta} {\alpha +\beta}. $$ Recall that \(\tilde {w}\) is defined as the wi such that (9) holds with equality. 
Since H is a strictly increasing function, (9) is satisfied if and only if \(w_{i}\le \tilde {w}\). □ Before stating the proofs of Proposition 2 and 3, define average pre-tax income as $$\text{I}(\alpha, \beta )={[\frac{(1-\tau )}{\xi} ]}^{\frac{1}{\sigma -1}}{\int}_{\underline{w}}^{\overline{w}} {z^{\frac{\sigma} {\sigma -1}}\{\left[ 1-\alpha H\left( z \right)+\beta \left[ 1-H\left( z \right) \right] \right]^{\frac{1}{\sigma -1}}\}h(z)dz} , $$ with I(0,0) corresponding to selfish agents. It is clear that I(α,β) is increasing in β and decreasing in α. For a given τ, both average tax revenue, S = τ I(α,β), and average consumption, (1 − τ)I(α,β), and are increasing in I(α,β), so the following proofs focus on I(α,β). (i). We know from above that I(0,0) < I(0,β). Suppose β ≤ 1. If I(β,β) ≥I(0,0), the result holds for α′ = β. If I(β,β) < I(0,0) < I(0,β), the result follows from the observation that I(α,β) is a continuous function. Next, suppose β > 1. If I(α,β) ≥I(0,0), the result holds for α′ = 1. If I(1,β) < I(0,0) < I(0,β), the result follows from the observation that I(α,β) is a continuous function. (ii). Follows from 2(i) and Proposition 1. We first establish that I(β,β) < I(0,0) for a symmetric distribution with σ > 2. The result then follows from the fact that I(α,β) is a continuous decreasing function of α. Take the derivative of ρ(β) ≡I(β) to yield $$ \frac{d\rho (\beta )}{d\beta} =\frac{1}{\sigma -1}{[\frac{(1-\tau)}{\xi} ]}^{\frac{1}{\sigma -1}}{\int}_{\underline{w}}^{\overline{w}} {z^{\frac{\sigma} {\sigma -1}}\{{[1+\beta [1-2H(z)]]}^{\frac{1}{\sigma -1}-1}\}[1-2H(z)]h(z)dz}. $$ Let the median wage be wM. The above is negative if and only if $$\begin{array}{@{}rcl@{}} &&{\int}_{\underline{w}}^{w_{M}} {z^{\frac{\sigma} {\sigma -1}}\{\left[ 1+\beta [1-2H(z)] \right]^{\frac{1}{\sigma -1}-1}\}[1-2H(z)]h(z)dz}\\ &&{\kern58pt} <{\int}_{w_{M}}^{\overline{w}} {z^{\frac{\sigma} {\sigma -1}}\{{[1+\beta [1-2H(z)]]}^{\frac{1}{\sigma -1}-1}\}[2H(z)-1]h(z)dz} . \end{array} $$ Using symmetry, which implies g(x) ≡ h(wM − x) = h(wM + x), and reversing the limits of integration we arrive at (12) which is equivalent to (11): $$\begin{array}{@{}rcl@{}} {\int}_{0}^{\frac{\overline{w} -\underline{w}}{2}} {{(w_{M}-x)}^{\frac{\sigma} {\sigma -1}}\{{[1+\beta \left[ 1-2H(w_{M}-x) \right]]}^{\frac{1}{\sigma -1}-1}\}[1-2H(w_{M}-x)]g(x)dx}\\ <{\int}_{0}^{\frac{\overline{w} -\underline{w}}{2}} {{(w_{M}+x)}^{\frac{\sigma} {\sigma -1}}\{{[1+\beta [2H(w_{M}-x)-1]]}^{\frac{1}{\sigma -1}-1}\}[1-2H(w_{M}-x)]g(x)dx}.\\ \end{array} $$ To show that (12) holds it is sufficient to show that (13) holds weakly for x = 0 and strictly for all \(x\in \left (0 \right .\frac {\overline {w} -\underline {w}}{2}]\). $$\begin{array}{@{}rcl@{}} &&{(w_{M}-x)}^{\frac{\sigma} {\sigma -1}}{[1+\beta \left[ 1-2H(w_{M}-x) \right]]}^{\frac{1}{\sigma -1}-1}\\ &&{\kern95pt} \le {(w_{M}+x)}^{\frac{\sigma} {\sigma -1}}{[1+\beta [2H(w_{M}-x)-1]]}^{\frac{1}{\sigma -1}-1}. \end{array} $$ Clearly, (13) holds weakly for x = 0 since H(wM) = 1/2. We now show that (13) holds strictly for \(x\in \left (0 \right .\frac {\overline {w} -\underline {w}}{2}]\), which implies \(\frac {\overline {w} -\underline {w}}{2}\le w_{M}\). So we have 2H(wM − x) < 1 and, given that \(\frac {1}{\sigma -1}-1<0\) for σ > 2, it follows that $$ {[1+\beta [1-2H(w_{M}-x)]]}^{\frac{1}{\sigma -1}-1}<{[1+\beta [2H(w_{M}-x)-1]]}^{\frac{1}{\sigma -1}-1}. 
$$ Finally, (14) and (wM − x) < (wM + x) implies that (12) holds strictly for $$x\in \left( 0 \right.\frac{\overline{w} -\underline{w}}{2}]. $$ (ii). Follows from part (i) and Proposition 1. □ Proof of the Lemma Observe that γ(wM,α,β) can be written as $$\begin{array}{@{}rcl@{}} \gamma (w_{M},\alpha, \beta )&\equiv& \frac{1}{\sigma -1}(\frac{1}{2})[\beta \int\limits_{w_{M}}^{\overline{w}} {zL(z,\tau )} 2h(z)dz-\alpha \int\limits_{\underline{w}}^{w_{M}} {zL(z,\tau )} 2h(z)dz]\\ &+&(\frac{\beta} {2})[{\int}_{w_{M}}^{\overline{w}} {zL(z,\tau )} 2h(z)dz-w_{M}L(w_{M},\tau )]\\ &&+(\frac{\alpha} {2} )[w_{M}L(w_{M},\tau )-{\int}_{\underline{w}}^{w_{M}} {zL(z,\tau )} 2h(z)dz]. \end{array} $$ Average income of those above the median is given by \({\int }_{w_{M}}^{\overline {w}} {zL\left (z,\tau \right )} 2h(z)dz\) and average income of those below the median is given by \({\int }_{\underline {w}}^{w_{M}} {zL(z,\tau )} 2h(z)dz\). For all β > 0 and α ∈ [0,β] the sum in the first bracketed term is positive since average income of those above the median is greater than average income of those below the median. The sum in the second bracketed term is positive because average income of those above the median is greater than the median income. The sum in the third bracketed term is positive because median income is greater than the average income of those below the median. □ Let τM denote the most preferred tax for an other-regarding median agent and let τS denote the most preferred tax for a selfish median agent. The condition α = β implies that labor income for the selfish median voter equals that of the inequality averse median voter; namely, wML(wM,τ). Thus, τS must satisfy $$ -w_{M}L(w_{M},\tau^{S})+{\int}_{\underline{w}}^{\overline{w}} {[zL(z,\tau^{S})+\tau^{S}z\frac{\partial L(z,\tau^{S})}{\partial \tau} ]} h(z)dz= 0. $$ Now if we evaluate the expression in (7) at τ = τS we get $$-w_{M}L(w_{M},\tau^{S})+{\int}_{\underline{w}}^{\overline{w}} {[zL(z,\tau^{S})+\tau^{S}z\frac{\partial L(z,\tau^{S})}{\partial \tau} ]} h(z)dz+ \gamma (w_{M},\alpha, \beta )>0. $$ Given that τM must satisfy (6) it follows that τM > τS. □ Ackert, L.F., Martinez-Valdez, J., Rider, M.: Social preferences and tax policy: Some experimental evidence. Econ. Inq. 45, 487–501 (2007)CrossRefGoogle Scholar Algood, S.: The marginal costs and benefits of redistributing income and the willingness to pay for status. J. Public. Econ. Theory. 8, 357–77 (2006)CrossRefGoogle Scholar Alesina, A., Angeletos, G.: Fairness and redistribution. Am. Econ. Rev., pp. 960–980 (2005)CrossRefGoogle Scholar Alm, J., McClelland, G.H., Schulze, W.: Changing the norm of tax compliance by voting. Kyklos 52, 141–71 (1999)CrossRefGoogle Scholar Atkinson, A.B.: On the measurement of inequality. J. Econ. Theory. 2, 244–263 (1970)CrossRefGoogle Scholar Beckman, S.R., Formby, J., Smith, J.S., Zheng, B.: Envy, malice and pareto efficiency: an experimental examination. Soc. Choice. Welfare. 19, 349–367 (2002)CrossRefGoogle Scholar Bolton, G., Ockenfels, A.: ERC: A theory of equity, reciprocity, and competition. Am. Econ. Rev. 90, 166–193 (2000)CrossRefGoogle Scholar Clark, A.E., Frijters, P., Shields, M.A.: Relative income, happiness, and utility: an explanation for the easterlin paradox and other puzzles. J. Econ. Lit. 46, 95–144 (2008)CrossRefGoogle Scholar Dhami, S., Al-Nowaihi, A.: Existence of a condorcet winner when voters have other regarding preferences. J. Public. Econ. Theory. 
12, 897–922 (2010)CrossRefGoogle Scholar Dhami, S., Al-Nowaihi, A.: Redistributive policies with heterogeneous social preferences of voters. Eur. Econ. Rev. 54, 743–759 (2010)CrossRefGoogle Scholar Dorfman, R.: A formula for the gini coefficient. Rev. Econ. Stat. 61, 146–149 (1979)CrossRefGoogle Scholar Easterlin, R.A.: Will raising the incomes of all increase the happiness of all. J. Econ. Behav. Organ. 27, 35–48 (1995)CrossRefGoogle Scholar Fehr, E., Schmidt, K.M.: A theory of fairness, competition and cooperation. Q. J. Econ. 114, 817–868 (1999)CrossRefGoogle Scholar Fehr, E., Fischbacher, U.: Why social preferences matter—the impact of non-selfish motives on competition, cooperation and incentives. Econ. J. 112, C1–C33 (2002)CrossRefGoogle Scholar Forsythe, R, Horowitz, J., Savin, N.E., Sefton, M.: Fairness in simple bargaining experiments. Game. Econ. Behav 6, 347–369 (1994)CrossRefGoogle Scholar Frank, R.H.: Should public policy respond to positional externalities. J. Public. Econ. 92, 1777–86 (2008)CrossRefGoogle Scholar Frohlich, N, Oppenheimer, J., Kurki, A.: Modeling other-regarding preferences and an experimental test. Publ. Choice 119, 91–117 (2004)CrossRefGoogle Scholar Galasso, V: Redistribution and fairness: a note. Eur. J. Polit. Econ. 19, 885–892 (2003)CrossRefGoogle Scholar Hochman, H.M., Rodgers, J.D.: Pareto optimal redistribution. Am. Econ. Rev. 59, 542–557 (2012)Google Scholar Höchtl, W., Sausgruber, R., Tyran, J.: Inequality aversion and voting on redistribution. Eur. Econ. Rev. 56, 1406–1421 (2012)CrossRefGoogle Scholar Hopkins, E.: Inequality, happiness and relative concerns: What actually is their relationship. J. Econ. Inequal. 6, 351–72 (2008)CrossRefGoogle Scholar Hwang, S., Lee, J.: Conspicuous consumption and income inequality. Oxford. Econ. Pap. 69, 279–292 (2017)Google Scholar Ireland, N.J.: Status seeking, income taxation and efficiency. J. Public. Econ. 70, 99–113 (1998)CrossRefGoogle Scholar Ireland, N.J.: Optimal income tax in the presence of status effects. J. Public. Econ. 81, 193–212 (2001)CrossRefGoogle Scholar Ledyard, J.O.: Public goods: A survey of experimental research. In: Kagel, J.H., Roth, A.E. (eds.) The Handbook of Experimental Economics, pp 111–194. Princeton University Press, Princeton (1995)Google Scholar Lind, JT: Fractionalization and the size of government. J. Public. Econ. 91, 51–76 (2007)CrossRefGoogle Scholar Mankiw, N., Weinzierl, M., Yagan, D.: Optimal taxation in theory and practice. J. Econ. Perspect. 23, 147–74 (2009)CrossRefGoogle Scholar Meltzer, A.H., Richard, S.F.: A rational theory of the size of government. J. Polit. Econ. 89, 914–927 (1981)CrossRefGoogle Scholar Tyran, J., Sausgruber, R.: A little fairness may induce a lot of redistribution in democracy. Eur. Econ. Rev. 50, 469–485 (2006)CrossRefGoogle Scholar Wendner, R., Goulder, L.H.: Status effects, public goods provision and excess burden. J. Public. Econ. 92, 1968–85 (2008)CrossRefGoogle Scholar 1.Department of Economics505A Stokely Management Center University of TennesseeKnoxvilleUSA 2.Institute of Economics and FinanceNanjing Audit UniversityPukou DistrictPeople's Republic of China Murray, M.N., Peng, L. & Santore, R. J Econ Inequal (2018) 16: 507. https://doi.org/10.1007/s10888-018-9389-7
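As a supplementary illustration of the appendix above, the monotonicity condition in the Claim and the threshold wage defined in Proposition 1 can be verified numerically once a wage distribution is specified. The sketch below is not part of the published paper: the preference parameters alpha and beta (with alpha <= beta, as in the Lemma), the elasticity parameter sigma, the wage support, and the Beta-shaped wage distribution are all hypothetical choices made only for the example.

```python
# Numerical check of the appendix conditions (illustrative only; all parameter
# values and the wage distribution below are hypothetical choices).
import numpy as np
from scipy import stats, optimize

sigma, alpha, beta = 3.0, 0.3, 0.5           # hypothetical parameters, alpha <= beta
w_lo, w_hi = 1.0, 10.0                       # assumed wage support
shape = stats.beta(2.0, 3.0)                 # assumed shape of the wage distribution

def H(w):
    """CDF of wages on [w_lo, w_hi]."""
    return shape.cdf((np.asarray(w) - w_lo) / (w_hi - w_lo))

def h(w):
    """Density of wages on [w_lo, w_hi]."""
    return shape.pdf((np.asarray(w) - w_lo) / (w_hi - w_lo)) / (w_hi - w_lo)

# Claim: wage income wL(w, tau) is increasing in w iff, for all w,
#   sigma * (1 - alpha*H(w) + beta*(1 - H(w))) > w * (alpha + beta) * h(w)
w = np.linspace(w_lo, w_hi, 1001)
lhs = sigma * (1.0 - alpha * H(w) + beta * (1.0 - H(w)))
rhs = w * (alpha + beta) * h(w)
print("monotonicity condition holds on the grid:", bool(np.all(lhs > rhs)))

# Proposition 1: the threshold wage solves H(w) = beta / (alpha + beta);
# inequality-averse agents with wages at or below it supply weakly more labor
# than selfish agents with the same wage.
w_tilde = optimize.brentq(lambda x: float(H(x)) - beta / (alpha + beta), w_lo, w_hi)
print(f"threshold wage = {w_tilde:.3f}, H(threshold) = {float(H(w_tilde)):.3f}")
```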
CommonCrawl
Asian-Australasian Journal of Animal Sciences (아세아태평양축산학회지) Asian Australasian Association of Animal Production Societies (아세아태평양축산학회) The Body Weight-related Differences of Leptin and Neuropeptide Y (NPY) Gene Expression in Pigs Shan, Tizhong (Animal Science College, Zhejiang University, The Key Laboratory of Molecular Animal Nutrition Ministry of Education) ; Wang, Yizhen (Animal Science College, Zhejiang University, The Key Laboratory of Molecular Animal Nutrition Ministry of Education) ; Guo, Jia (Animal Science College, Zhejiang University, The Key Laboratory of Molecular Animal Nutrition Ministry of Education) ; Chu, Xiaona (Animal Science College, Zhejiang University, The Key Laboratory of Molecular Animal Nutrition Ministry of Education) ; Liu, Jianxin (Animal Science College, Zhejiang University, The Key Laboratory of Molecular Animal Nutrition Ministry of Education) ; Xu, Zirong (Animal Science College, Zhejiang University, The Key Laboratory of Molecular Animal Nutrition Ministry of Education) https://doi.org/10.5713/ajas.2008.70260 To determine if body weight change is directly related to altered leptin and neuropeptide Y (NPY) gene expression, we assessed adipose tissue weight, percent body fat, leptin and NPY mRNA levels and serum leptin concentration in pigs at weights of 1, 20, 40, 60, and 90 kg. The results indicated that the weight of adipose tissues and the percent body fat of pigs significantly increased and correlated with body weight (BW) from 1 to 90 kg (p<0.01). Serum leptin concentrations and leptin mRNA levels in omental adipose tissue (OAT) increased from 1 to 60 kg, and then decreased from 60 to 90 kg. At 60 kg, the serum leptin concentration and leptin mRNA level significantly increased by 33.5% (p<0.01) and 98.2% (p<0.01), respectively, as compared with the levels at 1 kg. At 60 kg, the amount of leptin mRNA in subcutaneous adipose tissue (SAT) was significantly higher than that of 1 and 40 kg animals (p<0.05). NPY gene expression in the hypothalamus also changed with BW and at 60 kg the NPY mRNA level significantly decreased by 54.0% (p< 0.05) as compared with that in 1 kg. Leptin mRNA in OAT was correlated with serum leptin concentrations (r = 0.98, p<0.01), body weight (r = 0.82, p<0.05) and percent body fat (r = 0.81, p<0.05). This is the first report of the developmental expression of leptin in porcine OAT, peritoneal adipose tissue (PAT) and SAT, and proves that the expression of leptin in OAT could reflect the levels of circulating leptin. These results provide some information for nutritional manipulation of leptin secretion which could lead to practical methods of controlling appetite and growth in farm animals, thereby regulating and improving efficiency of lean meat production and meat production quality. Fat Deposition;Gene Expression;Leptin;Neuropeptide Y;Pig Souza, D. N., D. W. Pethick, F. R. Dunshea, D. Suster, J. R. Pluske and B. P. Mullan. 2004: The pattern if fat and lean muscle tissuedeposition differs in the different pork primal cuts of female pigs during the finisher growth phase. Livest. Prod. Sci. 91:1-8. https://doi.org/10.1016/j.livprodsci.2004.04.005 Stephens, T. W. 1995. The role of neuropeptide Y in the antiobesity action of the obese gene product. Nature 377:530-532. https://doi.org/10.1038/377530a0 Swart, I., J. M. Overton and T. A. Houpt, 2001. The effect of food deprivation and experimental diabetes on orexin and NPY mRNA levels. Peptides. 22:2175-2179. https://doi.org/10.1016/S0196-9781(01)00552-6 Wang, Y. Z., Y. J. Tu, F. 
F Han, Z. R. Xu and J. H. Wang. 2005. Developmental gene expression of lactoferrin and effect of dietary iron on gene regulation of lactoferrin in mouse mammary gland. J. Dairy Sci. 88(6):2065-2071. https://doi.org/10.3168/jds.S0022-0302(05)72883-6 Gruenewald, D. A., B. T. Marck and A. M. Matsumoto. 1996. Fasting-induced increases in food intake and neuropeptide Y gene expression are attenuated in aging male brown Norway rats. Endocrinol. 137:4460-4467. https://doi.org/10.1210/en.137.10.4460 Ji, S. Q., G. M. Wills, R. R. Scott and M. E. Spurlock. 1998. Partial cloning and expression of the bovine leptin gene. Anim. Biotechnol. 9(1):1-14. https://doi.org/10.1080/10495399809525887 Li, H., M. Matheny, N. Tumer and P. J. Scarpace. 1998. Ageing and fasting regulation of leptin and hypothalamic neuropeptide Y gene expression. Am. J. Physiol. 275:405-401. Mann, D. R., M. A. Akinbami, K. G. Gould and V. D. Castracane. 2000. A longitudinal sStudy of leptin during development in the male rhesus monkey: the effect of body composition and season on circulating leptin levels. Biol. Reprod. 62:285-291. https://doi.org/10.1095/biolreprod62.2.285 Meier, C. A. 1995. Advances in the understanding of the molecular basis of obesity. Eur. J. Endocrinol. 133:761-763. https://doi.org/10.1530/eje.0.1330761 Morrison, C. D., J. A. Daniel, B. J. Holmberg, J. Djiane, N. Raver, A. Gertler and D. H. Keisler. 2001. Central infusion of leptin into well-fed and undernourished ewe lambs: effects on feed intake and serum concentrations of growth hormone and luteinizing hormone. J. Endocrinol. 168:317-324. https://doi.org/10.1677/joe.0.1680317 Nogalska, A., A. Pankiewicz, E. Goyke and J. Sweirczynski. 2003. The age-related inverse relationship between ob and lipogenic enzymes genes expression in rat white adipose tissue. Exp. Gerontol. 38(4):415-422. https://doi.org/10.1016/S0531-5565(02)00210-3 Nogalska, A. and J. Swierczynski. 2001. The age-related differences in obese and fatty acid synthase gene expression in white adipose tissue of rat. Biochim. Biophys. Acta 1533:73-80. https://doi.org/10.1016/S1388-1981(01)00142-1 Qian, H., C. R. Barb, M. M. Compton, G. J. Hausman, M. J. Azain, R. R. Kraeling and C. A. Baile. 1999. Leptin mRNA expression and serum leptin concentrations as influenced by age, weight, and estradiol in pigs. Domest. Anim. Endocrinol. 16(2):135-143. https://doi.org/10.1016/S0739-7240(99)00004-1 Schwartz, R. S. 1998: Obesity in the elderly (Ed. G. A. Bray, C. Bouchard and W. P. T. James). Handbook of obesity. New York: Marcel Dekker, Inc., 103-114. Brockmann, G. A., J. Kratzsch, C. S Haley, U. Renne, M. Schwerin and S. Karle. 2000. Single QTL effects, epistasis, and pleiotropy account for two-thirds of the phenotypic F(2) variance of growth and obesity in DU6i$\times$DBA/2 mice. Genome Res. 10:1941-1957. https://doi.org/10.1101/gr.GR1499R Campfield, L. A., F. J. Smith, Y. Guisez, R. Devos and P. Burn. 1997. Recombinant mouse OB protein: evidence for a peripheral signal linking adiposity and central neural networks. Sci. 269:546-549. https://doi.org/10.1126/science.7624778 Chen, X. L., J. Lin, D. B. Hausman, R. J. Martin, R. G. Dean and G. J. Hausman. 2000. Alterations in fetal adipose tissue leptin expression correlate with the development of adipose tissue. Biol. Neonate 78(1):41-47. https://doi.org/10.1159/000014245 Cheong, H. S., D. H. Yoon, L. H. Kim, B. L. Park, E. R. Chung, H. J. Lee, I. C. Cheong, S. J. Oh and H. D. Shin. 2006. 
Leptin polymorphisms associated with carcass traits of meat in Korean cattle. Asian-Aust. J. Anim. Sci. 19(11):1529-1535. https://doi.org/10.5713/ajas.2006.1529 Cunningham, M. J., D. K. Clifton and R. A. Steiner. 1999. Leptin's action on the reproductive axis: perspectives and mechanisms. Biol. Reprod. 60:216-222. https://doi.org/10.1095/biolreprod60.2.216 Delavaud, C., F. Bocquier, Y. Chilliard, D. H. Keisler, A. Gertler and G. Kann. 2000. Plasma leptin determination in ruminants: effect of nutritional status and body fatness on plasma leptin concentration assessed by a specific RIA in sheep. J. Endocrinol. 165:519-526. https://doi.org/10.1677/joe.0.1650519 Friedman, J. M. and J. L. Halaas. 1998. Leptin and the regulation of body weight in mammals. Nature 395:763-770. https://doi.org/10.1038/27376 Garcia-Mayor, R. V., A. Andrade, M. Rios, M. Lage, C. Dieguez and F. F. Casanueva. 1997. Serum leptin levels in normal children: Relationship to age, gender, body mass index, pituitary-gonadal hormones, and pubertal stage. J. Clin. Endocrinol. Metab. 82:2849-2855. https://doi.org/10.1210/jc.82.9.2849 Barb, C. R., J. B. Barrett, R. R. Kraeling and G. B. Rampacek. 1999. Role of leptin in modulating neuroendocrine function: A metabolic link between the brain-pituitary and adipose tissue. Reprod. Domest. Anim. 34(3-4):111-125. https://doi.org/10.1111/j.1439-0531.1999.tb01228.x Barb, C. R., G. J. Hausman and K. L. Houseknecht. 2001a. Biology of leptin in the pig. Domest. Anim. Endocrinol. 21(4):297-317. https://doi.org/10.1016/S0739-7240(01)00123-0 Barb, C. R., J. B. Barrett, R. R. Kraeling and G. B. Rampacek. 2001b. Serum leptin concentrations, luteinizing hormone and growth hormone secretion during feed and metabolic fuel restriction in the prepuberal gilt. Domest. Anim. Endocrinol. 20:47-63. https://doi.org/10.1016/S0739-7240(00)00088-6 Ahima, R. S. and J. S. Flier. 2000. Leptin. Annu. Rev. Physiol. 62:413-437. https://doi.org/10.1146/annurev.physiol.62.1.413 Barb, C. R. and J. B. Barrett. 2005. Neuropeptide Y modulates growth hormone but not luteinizing hormone secretion from prepuberal gilt anterior pituitary cells in culture. Domest. Anim. Endocrinol. 29(3):548-555. https://doi.org/10.1016/j.domaniend.2005.03.004 Blum, W. F., P. Englaro, S. Hanitsch, A. Juul, N. T. Hertel, J. Muller, N. E. Skakkebaek, M. L. Heiman, M. Birkett, A. M. Attanasio, W. Kiess and W. Rascher. 1997. Plasma leptin levels in healthy children and adolescents: dependence on body mass index, body fat mass, gender, pubertal stage, and testosterone. J. Clin. Endocrinol. Metab. 82:2904-2910. https://doi.org/10.1210/jc.82.9.2904 Dai, H. C., L. Q. Long, X. W. Zhang, W. M. Zhang and X. X. Wu. 2007. Cloning and expression of the duck leptin gene and the effect of leptin on food intake and fatty deposition in mice. Asian-Aust. J. Anim. Sci. 20(6):850-855. https://doi.org/10.5713/ajas.2007.850 Shin, S. C. and E. R. Chung. 2007. Association of SNP marker in the leptin gene with carcass and meat quality traits in Korean cattle. Asian-Aust. J. Anim. Sci. 20(1):1-6. vol.40, pp.6, 2009, https://doi.org/10.1111/j.1365-2052.2009.01927.x
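The correlation statistics reported in this abstract (for example, leptin mRNA in OAT versus serum leptin concentration, r = 0.98) are standard Pearson correlations. The sketch below only illustrates the form of such an analysis; the numerical values are synthetic stand-ins, not the study's measurements.

```python
# Illustrative Pearson correlations of the kind reported in the abstract
# (synthetic numbers standing in for leptin mRNA, serum leptin, body weight).
import numpy as np
from scipy import stats

body_weight = np.array([1, 20, 40, 60, 90], dtype=float)     # kg
leptin_mrna = np.array([1.0, 1.4, 1.7, 2.0, 1.6])            # arbitrary units (synthetic)
serum_leptin = np.array([2.1, 2.8, 3.3, 3.9, 3.2])           # ng/mL (synthetic)

r, p = stats.pearsonr(leptin_mrna, serum_leptin)
print(f"leptin mRNA vs serum leptin: r = {r:.2f}, p = {p:.3f}")

r, p = stats.pearsonr(leptin_mrna, body_weight)
print(f"leptin mRNA vs body weight:  r = {r:.2f}, p = {p:.3f}")
```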
CommonCrawl
Effects of changes in precipitation on energy and water balance in a Eurasian meadow steppe Jingyan Chen ORCID: orcid.org/0000-0003-4174-06061, Changliang Shao2, Shicheng Jiang3, Luping Qu4, Fangyuan Zhao5,6 & Gang Dong7,2 Water such as precipitation is the most critical environment driver of ecosystem processes and functions in semi-arid regions. Frequency and intensity of drought and transient waterlogging are expected to increase in the meadow steppe in northeastern China. Using a 4-year dataset of eddy covariance flux measurements, ground measurements of biomass, phenology, and meteorological conditions, we investigated the changes in energy fluxes at multiple temporal scales and under different precipitation regimes. The meadow steppe was latent heat (LE) dominated when soil water content was > 0.3 m3 m−3, but switched to sensible heat (H) dominated status when soil water content fell below 0.3 m3 m−3. LE dominated the energy exchange of the meadow grasslands on a yearly basis. Intensive precipitation had a profound impact on water-energy balance that could reduce the damages of drought by elevating deep soil moisture. The influence of LE on waterlogging depended on timing, with increased LE at the beginning of growing season and decreased LE after waterlogging. Spring and summer droughts resulted in different energy partitioning between latent and sensible heat energies, with spring drought dramatically decreased the LE fraction due to the change in water. In contrast, summer drought had little impact on LE due to the sufficient water input from large precipitation events at the beginning of the growing season. There existed great seasonal and interannual variabilities in energy balance and partitioning in the meadow steppe over the 4-year study period, which were strongly influenced by changes in precipitation. The water loss through latent heat was more sensitive to spring drought than to summer drought, while summer drought had negligible impact on LE. Waterlogging contributed to LE by enhancing its values during and after the waterlogged periods at the beginning of the growing season in a dry year, but lowering its value after the waterlogged periods in growing season. Water is the most influential resource in ecosystems and societies in arid and semi-arid areas (Chen et al. 2013; Feng et al. 2016; Jaeger et al. 2017; Qi et al. 2017). In drylands, the spatio-temporal distribution and availability of water resources have been the major focuses of terrestrial ecosystem studies at local, regional and global levels (Groisman et al. 2017; Monier et al. 2017; Soja and Groisman 2018). In terrestrial ecosystems, hydrological cycling is largely motivated through energy transfer because of the close coupling of water and heat. Precipitation, via either rainfall or snow, is the largest flux term of water budget and can lead to changes in soil moisture and latent heat (LE), sensible heat (H), soil heat flux (G), and heat storage (S) by influencing plant transpiration and soil evaporation (Rodrigues et al. 2013). Knowledge about coupled ecosystem water cycling and energy balance can not only be applied to explain the integrated water-energy exchange between ground and atmosphere but also be used to address water scarcity and maintain ecosystem functions. Rapid changes in land use and land cover and global warming have produced an increasing incidence and intensity of climatic extremes such as frequent and heavy precipitation events (Eade et al. 2012; IPCC 2013; Rey et al. 
2017) and more intensive and prolonged drought. Changes in precipitation regimes contribute to higher uncertainty in ecosystem water-energy balance that requires further investigation of biomes and habitat. For instance, Minderlein and Menzel (2015) reported direct influences of precipitation on soil moisture, evapotranspiration (ET), and energy balance of shrub-grass ecosystems in the semi-arid areas of northern Mongolia. Nevertheless, there remains limited knowledge and consensus on the relationships between changes in precipitation and water-energy balance in grassland ecosystems. Climate change, together with human disturbances, has had a large influence on ecological processes in the Eurasian Steppes, such as carbon assimilation and emission and energy and hydrological cycling. The meadow steppe appears to be the most sensitive ecosystem type in the grassland biome because of its relatively high biodiversity and high demands for water (Wang et al. 2007). Using a long-term research site of a meadow steppe in Northeast China, we explore the changes over time in water and energy fluxes in response to changes in climate, especially precipitation. Here, we focus particularly on the changes of major energy flux terms and the overall balances under different water availability. Based on 4 years of eddy covariance measurements, we first emphasize the distribution of energy and water fluxes and the possible ecological and climatological controls for a Songnen meadow steppe. Precipitation and its extremes, especially drought and transient waterlogging events, are examined for their roles in regulating the energy balance. Two major scientific questions are as follows: (1) What are the roles of shifting precipitation on water-energy balance of a meadow steppe? (2) What are the differences in how each energy flux term responds to spring/summer drought and transient waterlogging? Study site This study was conducted at the Songnen Grassland Ecology Field Station of the Northeast Normal University (NENU) in Changling, Jilin, China (123° 30′ E, 44° 35′ N, 171 m a.s.l.). The site is representative of a temperate meadow steppe dominated by perennial grasses, including Leymus chinensis and Phragmitis communis. The study area has a temperate, semi-arid continental monsoonal climate, with a cold-dry spring and a warm-wet summer. The mean annual temperature is 5 °C, with the maximum and minimum temperatures being 39.2 °C and − 33.9 °C, respectively. The frost-free period is approximately 130 days to 165 days. The precipitation varied greatly within and between years, with an annual mean of 350 mm, 80% of which falls between June and August. The annual pan evaporation varies from 1200 to 1600 mm, which is approximately three to four times higher than the mean annual precipitation. The growing season runs from May through September. Main soil types in this area are alkaline soil, chernozem, and saline soil. To evaluate the energy and water exchanges between the land surface and the atmosphere, an open-path eddy-covariance (EC) flux tower was deployed in 2007 as part of the US-China Carbon Consortium (USCCC). A three-dimensional ultrasonic anemometer (CSAT3, Campbell Scientific Inc. (CSI)) and a CO2/H2O infrared gas analyzer (LI-7500, LI-COR) were mounted 2 m above the ground. Fluxes were calculated as the mean covariance of vertical wind speed fluctuation and the scalar fluctuation. Downward fluxes were indicated as negative and upward as positive. 
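As a minimal illustration of the covariance calculation described above, the sketch below computes block-averaged sensible and latent heat fluxes from synthetic high-frequency records of vertical wind speed, air temperature, and water vapour density. The sampling rate, air properties, and variable names are assumptions made for the example, and the coordinate rotation and density corrections applied in a full processing chain are omitted.

```python
# Minimal sketch of block-averaged eddy-covariance fluxes over 30-min periods
# (synthetic data; rotation and density corrections are omitted).
import numpy as np

FS = 10                      # sampling frequency, Hz (assumed)
BLOCK = 30 * 60 * FS         # samples per 30-min averaging period
RHO_A = 1.2                  # air density, kg m-3 (assumed constant)
CP = 1004.5                  # specific heat of air at constant pressure, J kg-1 K-1
LAMBDA_V = 2.45e6            # latent heat of vaporization, J kg-1

def fluxes_30min(w, t_air, q):
    """Block-averaged H and LE (W m-2) from vertical wind w (m s-1),
    air temperature t_air (K), and water vapour density q (kg m-3)."""
    n_blocks = len(w) // BLOCK
    H = np.empty(n_blocks)
    LE = np.empty(n_blocks)
    for i in range(n_blocks):
        sl = slice(i * BLOCK, (i + 1) * BLOCK)
        wp = w[sl] - w[sl].mean()                      # fluctuation about the block mean
        H[i] = RHO_A * CP * np.mean(wp * (t_air[sl] - t_air[sl].mean()))
        LE[i] = LAMBDA_V * np.mean(wp * (q[sl] - q[sl].mean()))
    return H, LE

# Two hours of synthetic data with small positive w'-T' and w'-q' correlations
rng = np.random.default_rng(0)
n = 2 * 3600 * FS
w = 0.3 * rng.standard_normal(n)
t_air = 293.0 + 0.5 * rng.standard_normal(n) + 0.2 * w      # gives upward (positive) H
q = 0.010 + 2e-4 * rng.standard_normal(n) + 1e-4 * w        # gives upward (positive) LE
H, LE = fluxes_30min(w, t_air, q)
print("H  (W m-2):", np.round(H, 1))
print("LE (W m-2):", np.round(LE, 1))
```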
All raw data was sampled at 10 Hz and logged using a CR3000 data-logger (CSI). Footprint analysis using the flux source area model (FSAM) suggested a footprint in the prevalent wind direction (150–240°) extended to approximately 6.5 m to 77.8 m during unstable conditions (Monin-Obukhov length (L) < 0) and to about 11.3 m to 229.9 m during stable conditions (L > 0) at night, representing 90% of the total flux. Meteorological measurements Meteorological measurements were taken starting in May 2007. Net radiation (Rn) was measured using a four-component net radiometer (CNR-1, Kipp & Zonen) at 2 m above the ground. Photosynthetically active radiation (PAR), precipitation (P), wind speed (WS), and wind direction (WD) were measured by a quantum sensor (LI190SB, LI-COR), a tipping-bucket rain gauge (TE525MM, CSI), and a propeller anemometer (034B-L Met One Windset, CSI), respectively. Air temperature (Ta) and relative humidity (Rh) were measured at 2, 4, and 6 m with HMP45C probes (Vaisala). Soil temperature profiles (Ts) were measured at depths of 0.05, 0.1, and 0.3 m below the ground by thermistors (107-L, CSI). Soil heat flux (G) was measured at a depth of 0.05 m in three separate locations (HFT-3, REBS), and soil water content (SWC) was measured at a depth of 0.1 m by a time-domain reflectometry probe (CS616, CSI), which was buried horizontally in the soil. Additionally, soil water potential (SWP) was estimated using watermark probes at 0.1 and 0.3 m (257-L, CSI). All micrometeorological data were recorded and averaged or summed over a 30-min interval by a separate CR3000 data-logger. Biometric measurements of aboveground biomass (AGB), vegetation height (Vt), and leaf area index (LAI) were conducted monthly during the growing season (May to September). Within a radius of 200 m around the EC tower, we placed 12 0.5 m2 quadrats for measuring Vt and LAI by a plant canopy analyzer (LAI-2000, LI-COR), while AGB was destructively sampled in 12 additional 0.5 m2 quadrats. Data processing and gap filling The EC flux data was processed using EdiRe. The planar fit rotation and Webb–Pearman–Leaning (WPL) density correction were applied to the flux calculations (Webb et al. 1980). Data from stable nocturnal periods were excluded, specifically when the friction velocity (u*) was < 0.15 m s−1. Anomalous or spurious data due to sensor malfunction, sensor maintenance, precipitation events, IRGA calibration, and power failure were also rejected. Consequently, 25.4%, 20.9%, 27.1%, and 30.6% of the data obtained during growing season in 2007, 2008, 2009, and 2010, respectively, were discarded. The introduced data gaps were filled with following the methods: for gaps of less than 2 h, linear interpolation was used by averaging the fluxes before and after the gaps; for larger data gaps, we used both the empirical relationships and look-up tables (Falge et al. 2001). Calculation of soil heat storage The Rn was partitioned into convective H for heating the atmosphere, LE for evaporating water from plants and the soil, G for heating the soil, and S for heat storage in the soil. 
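Before turning to the energy balance terms, the screening and gap-filling steps described above can be sketched in simplified form: stable nocturnal records with u* below 0.15 m s-1 are rejected, gaps shorter than 2 h are linearly interpolated, and longer gaps are filled from the mean diurnal course as a crude stand-in for the look-up-table approach of Falge et al. (2001). The arrays, masks, and thresholds below are illustrative, not the processing code actually used at the site.

```python
# Simplified sketch of flux screening and two-stage gap filling:
# reject stable nocturnal records (u* < 0.15 m s-1), interpolate short gaps
# (< 2 h), and fill longer gaps from the mean diurnal course.
import numpy as np

U_STAR_MIN = 0.15        # friction velocity threshold, m s-1
MAX_INTERP = 3           # half-hour steps, i.e. gaps shorter than 2 h

def gap_lengths(mask):
    """Length of the contiguous gap that each flagged sample belongs to."""
    out = np.zeros(mask.size, dtype=int)
    i = 0
    while i < mask.size:
        if mask[i]:
            j = i
            while j < mask.size and mask[j]:
                j += 1
            out[i:j] = j - i
            i = j
        else:
            i += 1
    return out

def screen_and_fill(flux, u_star, night):
    """Half-hourly flux series with u*-filtering and two-stage gap filling."""
    x = flux.astype(float).copy()
    x[night & (u_star < U_STAR_MIN)] = np.nan          # u* filter for stable nights
    gap = np.isnan(x)

    # Short gaps: linear interpolation between neighbouring valid values
    valid = np.flatnonzero(~gap)
    interpolated = np.interp(np.arange(x.size), valid, x[valid])
    short = gap & (gap_lengths(gap) <= MAX_INTERP)
    x[short] = interpolated[short]

    # Longer gaps: mean diurnal course of the originally valid data
    tod = np.arange(x.size) % 48                       # half-hour of day
    for hh in range(48):
        sel = tod == hh
        if np.any(sel & ~gap):
            x[sel & np.isnan(x)] = x[sel & ~gap].mean()
    return x

# Tiny synthetic demonstration
rng = np.random.default_rng(4)
n = 48 * 10
flux = 10.0 * np.sin(2 * np.pi * np.arange(n) / 48) + rng.normal(0, 1, n)
flux[100:103] = np.nan                                 # short gap (1.5 h)
flux[300:340] = np.nan                                 # long gap (20 h)
u_star = rng.uniform(0.05, 0.5, n)
night = (np.arange(n) % 48) < 12
print("remaining gaps:", int(np.isnan(screen_and_fill(flux, u_star, night)).sum()))
```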
Because the canopy and air heat storage are expected to be negligible in short canopies with minimal biomass, the canopy heat storage was not included in our energy balance algorithm: $$ {R}_n=H+\mathrm{LE}+G+S $$ where soil heat storage (S) was calculated as follows: $$ S={C}_s\frac{\Delta {T}_s}{\Delta t}d $$ where Ts is the average soil temperature (K) above heat flux plates, t is time (Δt = 1800 s), d is a depth between the heat flux plate and soil surface, and Cs is the soil heat capacity calculated from Shao et al. (2014): $$ {C}_s={\rho}_b{C}_d+{\theta}_v{\rho}_w{C}_w $$ where ρb is soil bulk density, ρw is the density of water, Cd and Cw are the specific heat capacity of dry mineral soil (Cd = 890 J kg−1 K− 1) and soil water (Cw = 4190 J kg−1 K−1), respectively, and θv is volumetric SWC (%). The heat storage of air and organic matter in the soil were neglected due to their small quantities. Energy parameters The energy balance ratio (EBR) was calculated as a ratio of the turbulent fluxes (LE + H) and the radiative energy fluxes (Rn–G–S): $$ \mathrm{EBR}\kern0.5em =\kern0.5em \frac{\sum \left( LE+H\right)}{\sum \left({R}_n-G-S\right)} $$ Canopy conductance (gc) was calculated using the inverted form of the Penman–Monteith equation (Monteith and Unsworth 1990): $$ {g}_c=\frac{1}{\frac{\uprho {C}_p}{\upgamma}\frac{D}{LE}+\left(\frac{\Delta}{\upgamma}\frac{H}{LE}-1\right)/{g}_a} $$ where ρ is air density, Cp is the specific heat of the air, Δ is the change of saturation vapor pressure with temperature, γ is the psychrometric constant, D is the vapor press deficit of air, and ga is the aerodynamic conductance of the air layer between the canopy and the flux measurement height. ga was calculated flowing Monteith and Unsworth (1990) as follows: $$ {g}_a=\frac{1}{\frac{u}{u^{\ast 2}}+6.2{u}^{\ast -0.67}} $$ where u is wind speed and u* is friction velocity. To calculate the Priestley-Taylor coefficient (LE/LEeq), the equilibrium LE flux (LEeq) was determined using (Priestley and Taylor 1972): $$ {\mathrm{LE}}_{eq}\kern0.5em =\kern0.5em \frac{\Delta \left({R}_n-G-S\right)}{\Delta +\upgamma} $$ The LEeq is dependent on the Rn and temperature. Lower and higher values indicate evaporation rates that are lower and higher than the equilibrium rate, respectively (Wilson et al. 2002b). The sensitivity of evapotranspiration to stomatal control and the degree of aerodynamic coupling between vegetation and the atmosphere was expressed by the decoupling factor (Ω), obtained by the equation (Jarvis and McNaughton 1986; Runkle et al. 2014): $$ \Omega =\frac{\Delta +\upgamma}{\Delta +\upgamma \left(1+\frac{{\mathrm{g}}_{\mathrm{a}}}{{\mathrm{g}}_{\mathrm{c}}}\right)} $$ The value of Ω ranges from 0 to 1. When it approaches 0, the vegetation and the atmosphere are fully aerodynamically coupled and the evapotranspiration proceeds at rates imposed by vapor pressure deficit; when it approaches 1, the vegetation and the atmosphere are completely aerodynamically decoupled and the evapotranspiration is controlled by Rn (Goldberg and Bernhofer 2001). The mean Ta in 2007–2010 was 8.23, 6.83, 5.64, and 7.91 °C, respectively (Table 1). For the growing season, the mean Ta in 2007 (21.26 °C) and 2010 (20.67 °C) were higher than that in 2008 (19.76 °C) and 2009 (19.88 °C). The maximum Ta occurred in July of 2007 and 2008, August of 2009, and June of 2010, while the minimum Ta occurred in January in all 4 years. 
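Referring back to Eqs. (1)-(8), the sketch below evaluates the derived quantities for a single hypothetical half-hourly record. The meteorological inputs, soil properties, and constants (air density, specific heat, psychrometric constant, and a Tetens form for the slope of the saturation vapour pressure curve) are illustrative assumptions rather than site data.

```python
# Illustrative evaluation of Eqs. (1)-(8) for one hypothetical half-hour.
# All inputs and constants below are assumed values, not site measurements.
import math

# Hypothetical half-hourly inputs
Rn, G_plate = 450.0, 60.0      # net radiation and plate-measured soil heat flux, W m-2
H, LE = 120.0, 260.0           # sensible and latent heat fluxes, W m-2
Ta, D = 22.0, 1.2              # air temperature (deg C) and vapour pressure deficit (kPa)
u, u_star = 3.0, 0.35          # wind speed and friction velocity, m s-1
dTs, theta_v = 0.4, 0.30       # soil temperature change per 30 min (K) and SWC (m3 m-3)

# Assumed constants
rho, Cp, gamma = 1.2, 1004.5, 0.066      # kg m-3, J kg-1 K-1, kPa K-1
rho_b, rho_w = 1300.0, 1000.0            # soil bulk density and water density, kg m-3
C_d, C_w = 890.0, 4190.0                 # heat capacities of dry soil and water, J kg-1 K-1
d, dt = 0.05, 1800.0                     # plate depth (m) and averaging interval (s)

# Eqs. (2)-(3): heat storage in the soil layer above the flux plates
C_s = rho_b * C_d + theta_v * rho_w * C_w
S = C_s * (dTs / dt) * d

# Eq. (4): energy balance ratio for this record
EBR = (LE + H) / (Rn - G_plate - S)

# Eq. (6): aerodynamic conductance
ga = 1.0 / (u / u_star**2 + 6.2 * u_star**-0.67)

# Eq. (5): canopy conductance from the inverted Penman-Monteith equation
es = 0.6108 * math.exp(17.27 * Ta / (Ta + 237.3))        # saturation vapour pressure, kPa
delta = 4098.0 * es / (Ta + 237.3) ** 2                  # slope of the curve, kPa K-1
gc = 1.0 / ((rho * Cp / gamma) * (D / LE) + ((delta / gamma) * (H / LE) - 1.0) / ga)

# Eq. (7): equilibrium latent heat flux and the Priestley-Taylor coefficient
LE_eq = delta * (Rn - G_plate - S) / (delta + gamma)

# Eq. (8): decoupling factor, plus the Bowen ratio used later in the text
omega = (delta + gamma) / (delta + gamma * (1.0 + ga / gc))
bowen = H / LE

print(f"S = {S:.1f} W m-2, EBR = {EBR:.2f}")
print(f"ga = {ga*1000:.1f} mm s-1, gc = {gc*1000:.1f} mm s-1")
print(f"LE/LEeq = {LE / LE_eq:.2f}, Omega = {omega:.2f}, Bowen ratio = {bowen:.2f}")
```

With these assumed values the storage term S is of the same order as the plate-measured G, which illustrates why omitting S can noticeably underestimate the surface heat flux.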
For the non-growing seasons (October–April) of 2007–2008, 2008–2009, 2009–2010, and 2010–2011, the mean Ta was − 2.82, − 3.38, − 6.69, and − 5.62 °C, respectively. The Ts closely followed the changes of Ta (Fig. 1a). However, there was no significant difference between Ts (21.15 °C) and Ta (21.26 °C) in the growing season of 2007. Table 1 Daily means of microclimatic variables and energy balance components during the growing season and non-growing season of 2007–2010 in Songnen meadow steppe Seasonal changes of major meteorological variables and energy fluxes during 2007–2010 in the meadow steppe: a daily mean air (Ta) and soil (Ts) temperatures, b net radiation (Rn), c latent heat (LE) and sensible heat (H), d soil heat flux (G), and e daily mean soil volumetric water content (SWC) and daily sum of rainfall (P) Precipitation amounts and distribution were significantly different during the 4 years. Annual precipitation from 2007 to 2010 was 207.9, 384.2, 281.5, and 265.6 mm, respectively. These years can be classified as dry (2007, 2009, and 2010) or wet years (2008) based on to the precipitation trending in the Songnen meadow steppe for the past 50 years (Fig. 2). The driest period was from DOY181 to DOY279 in 2007, with the first rain of > 3 mm. Precipitation or snowfall throughout the initial 6 months was minimal. In 2008, the meadow steppe received 85% more precipitation and experienced 18 more single precipitation pulses than in 2007. The first precipitation in 2008 (March 22nd) was 3 months earlier than in 2007, and the rainy season therein lasted longer and ended on November 15th. In 2009 and 2010, precipitation started in mid-April, and it ended in mid-October and mid-November, respectively. Total precipitation of March–May in dry year 2007 accounted for only 8.8% and 5.1% of that in 2009 and 2010, respectively, indicating that the meadow steppe experienced a serious spring drought in 2007. On the other hand, June–August precipitation was low in 2010 (90.7 mm) compared to that in 2007 (170 mm) and 2008 (231.9 mm). In the meadow steppe, transient waterlogging occurs usually after a single precipitation of > 30 mm, and the waterlogging condition often lasts for 2–3 days. Six transient waterlogging events were detected during the study period: DOY181, 190, 212 in 2007; DOY187, 214 in 2008; DOY125 in 2010. Seasonal variation of SWC closely followed the changes in precipitation. The temporal change of SWC was clearly asymmetrical within any of the 4 years (Fig. 1e). A long dry period in 2007 resulted in severe water deficit in the spring and early summer when the SWC fell below 0.2 m3 m−3 in the upper soil layer. In contrast, frequent precipitation events during the summer of 2008 resulted in a consistently higher SWC. The highest SWC occurred in the early spring of 2009 and 2010, which declined sharply afterwards. SWC decreased to its minimum before July in 2010, followed by a summer drought. The SWC recovered in June of 2009 after a series of continuous precipitation events. A large precipitation event combined with several isolated precipitation events led to a temporary SWC peak during the 2010 summer compared to the previous summer. Interestingly, SWC at 10 cm depth did not drop below 0.4 m3 m−3 during the water-stressed periods in 2007 and 2009 (Fig. 2). 
Long-term change in annual precipitation and its trend in Songnen meadow steppe during the past 49 years (1967 to 2015) A unimodal pattern of growing season AGB and LAI showed an overall higher AGB and earlier peak in 2008 than in the dry years (Fig. 3). Peak value of AGB in August 2007 was ~ 68% of that in July and August of 2009 and 2010. LAI was highest in 2008 and lowest in 2007. Seasonal change of leaf area index (LAI) and aboveground biomass (AGB) with month in the meadow steppe during the four growing seasons of 2007–2010 Energy balance closure The energy balance enclosures using half-hourly and daily values produced reasonable results for all 4 years (Table 2). The slope of the enclosure was < 1 for all years, with an average value (± SD) of 0.75 (± 0.06), intercept of 10.48–21.17 W m−2, and R2 value of 0.84–0.98. For the growing season, the slope values were similar in 2008, 2009, and 2010. The average slope value increased to 0.85 (± 0.09), ranging from 0.74 to 0.93, when using the daily sums instead of the half-hourly data to calculate the regression coefficients of [LE + H] against [Rn–G–S]. Table 2 Energy enclosure based on half-hourly and daily sums of the growing seasons during 2007–2010 The EBR provides an overall evaluation of energy closure at long temporal scale by averaging over random errors in the flux measurement. The mean values of EBR for the four growing seasons were 0.94 (2007), 0.83 (2008), 0.85 (2009), and 0.92 (2010), respectively, and increased to 1.00, 0.97, 0.91, and 0.93 at annual scale for the four consecutive years (Table 1). Intra- and interannual variations in energy fluxes The seasonal changes of daily Rn, LE, H, and G were similar (Fig. 1b–d). Rn showed maximum values of 15.67–20.86 MJ m−2 day−1 in July and the minimum values of from − 1.03–3.56 MJ m−2 day−1 in December among the 4 years. Seasonal changes in LE matched well with those of Rn, which gradually increased from May, peaked in July–August, and gradually decreased until soil was frozen in late October through February. During the spring drought in 2007, LE in March–May (3.51 MJ m−2 day−1) was 70% of the normal values (4.65–5.08 MJ m−2 day−1). LE during the peak growing season of 2010 (July–August) was 7.31 MJ m−2 day−1 and remained lower during 2007–2009 (~ 6.4 MJ m−2 day−1) despite the summer drought of 2010. Surprisingly, H showed a bimodal seasonal change, which increased at the beginning of a year, peaked in May, and began to decrease in June even though Rn continued its high values. Interestingly, H started to increase again in August and reached the second peak in September. As expected, G values were low, especially during the wintertime. At the beginning of March, G switched from negative to positive and continued to increase, reaching a maximum in May for 2007 (3.39 MJ m−2 day−1), 2009 (2.16 MJ m−2 day−1), and 2010 (2.15 MJ m−2 day−1), but in June of 2008 (2.10 MJ m−2 day−1). Monthly LE value was higher than H value during the growing season in 2008 (Fig. 4). In dry years, the duration of higher LE over H appeared short (2 months) in 2007 (spring drought), mediate (3 months) in 2009, and long (4 months) in 2010, with different starting days in July, June, and May, respectively. For accumulated G, however, a positive G (22.72 MJ m−2 month−1) was observed in March 2008, which was − 2.00 MJ m−2 month−1 in 2009 and − 7.84 MJ m−2 month−1 in 2010. 
More importantly, G in May (67.91 MJ m−2 month−1) and June (132.91 MJ m−2 month−1) of 2007 was about twice the values of 2008 (35.21/71.93 MJ m−2 month−1), 2009 (39.85/63.93 MJ m−2 month−1), and 2010 (41.44/76.21 MJ m−2 month−1). The monthly change of latent heat (LE), sensible heat (H), and soil heat flux (G) during 2007–2010 in the Songnen meadow steppe in northeastern China Sensible heat (H) was a major factor of Rn at annual scale, but LE was more influential during the growing seasons, with the mean LE/Rn ratio 0.46 to 0.53. LE/Rn values remained low during the non-growing season, with mean values in 0.12, 0.18, and 0.19 for 2007, 2008, and 2009, respectively (Table 1). The peak LE/Rn in 2007 arrived half a month later than that in 2010 (Fig. 5). Throughout the spring drought period in 2007, LE/Rn remained low until late summer. In contrast, LE/Rn in 2010 remained relatively high from the beginning of the growing season, when soil moisture was high due to the large precipitation input. The seasonal changes of H/Rn seemed to be a mirror image of LE/Rn in all 4 years. The lowest value of H/Rn was observed during the growing season when LE/Rn was the highest, while H/Rn began increasing in late September. The annual mean G/Rn was higher in 2007 (0.171) than in other years (0.0045–0.055) (Table 1). Although there was a higher G/Rn at the beginning of growing season (May–June) in 2007, no significant difference was found among LEs of the 4 years despite a severe summer drought in July–August of 2010. Growing season fractions of latent heat, sensible heat, and soil heat flux to net radiation (LE/Rn, H/Rn, G/Rn, respectively) during 2007–2010 Biophysical controls of energy partitioning The Bowen ratio (β, H/LE) during the growing season varied from 0.21 to 4.63 (Fig. 6b). Due to a faster increase in H than that in LE during the spring, midday means of β over the growing season exhibited a concave shape, with β being > 1 during spring and autumn, but ~ 1 in summer. Over the growing season, midday means of β amounted to 1.39, 0.95, 1.29, and 1.04 for 2007–2010, respectively. The low LE and high H during the non-growing season resulted in a dramatic increase in the mean daytime β. Mean daily β varied from 0.02 to 12.85 over the 4-year study period, with the largest value found in the non-growing season of 2007 (DOY310), and the lowest in the growing season of 2010 (DOY233). The average of daily β during the spring drought period was 1.97, which was higher than that in the summer drought period (1.17). The gc value ranged from 0.02 to 19.67 mm s−1 during the growing season, with a peak in July of 2008, 2009, and 2010, but not in 2007, when the maximum gc was delayed to August (Fig. 6c). In 2007, LAI maintained a higher level after July while gc declined during the same time. During the growing season, the Ω value amounted a peak of 0.73 (DOY229), 0.81 (DOY214), 0.75 (DOY204), and 0.56 (DOY213), respectively, for the 4 years. Its seasonal changes appeared less pronounced compared to other measures of the fluxes, with the change in 2007 very different from that in other 3 years (Fig. 6). The seasonal change of 5-day average midday LE/LEeq, Bowen ration (β), canopy conductance (gc), and decoupling coefficient (Ω) in four study years in the meadow steppe. Midday means were calculated for 10:00–15:00 h Impact of precipitation intensity on water and energy balance In this study, P and ET appeared to be more or less balanced in the wet year. 
Differing from previous reports on P vs ET in neighboring typical steppes on the Mongolia Plateau (Chen et al. 2009; Foken et al. 2006), the meadow steppe in our study showed an overall higher cumulative ET than cumulative P in dry years (2007, 2009, and 2010), suggesting there were other sources of water to enrich the soil (Fig. 7). The mean annual ET (409.5 mm) (Table 1) in the meadow steppe lies in the higher end of ET values (163~481 mm) for grasslands worldwide (Aires et al. 2008; Krishnan et al. 2012; Li et al. 2007; Ryu et al. 2008; Zha et al. 2010). However, ET (334.5 mm) in 2007—a spring drought—fell below the mean value. The ratio of ET/P varied between 1.05 and 1.71 for all study years (Table 1), and was higher than those in the Mediterranean grasslands (0.4–0.87) (Aires et al. 2008; Ryu et al. 2008), American semi-arid prairies (0.72–1.20) (Krishnan et al. 2012), northern temperate grasslands (0.98) (Hao et al. 2007; Wever et al. 2002), and alpine Kobresia meadows (0.60) (Li et al. 2013), but similar to those in the alpine wetland meadows (1.27) (Hu et al. 2009). The annual and monthly accumulation of precipitation (P) (gray) and evapotranspiration (ET) (black) for the Songnen meadow steppe during 2007–2010 Intensity of precipitation events had a crucial effect on water-energy balance and ecosystem recovery after drought (Cabral et al. 2010; Heisler-White et al. 2008; Knapp et al. 2008). Differences in seasonal ET/P in the meadow steppe suggested that non-growing season P from snow would supplement water resources required for the following growing season. The relative contribution of temporally isolated non-growing season precipitation events to water availability in the next growing season is considered a critical determinant of ecological processes (Eamus et al. 2001; Ma et al. 2013). Previous studies have indicated that freezing and thawing have important influences on ET and surface energy fluxes (Chen et al. 2011; Gu et al. 2005; Yao et al. 2008), by increasing SWC and ET from melted snow and frozen soil (Zhang et al. 2014; Zhang et al. 2003). Although cumulative P was lower than ET at annual scale, the monthly P could sometimes be higher than ET in the meadow steppe during the growing season. For example, heavy and frequent precipitation events in July of 2008 brought ~ 132.66 mm precipitation and 90.8 mm ET at our site, yielding a peak soil water content of 0.91. Similarly, although the maximum soil water content was equivalent to that of 2008, P was 86.5% lower and ET was 13.8% higher following the consecutive precipitation events in July 2009 compared to those in July 2008 (Figs. 1 and 7), suggesting a proportion of precipitation infiltrated to recharge groundwater in 2008. Heavy precipitation events in May 2010 (P = 105.1 mm, ET = 58.4 mm) alleviated summer drought in the growing season through water infiltration into deep soil under similar maximum soil water content (0.89 mm) in May 2009 (P = 14.5 mm, ET = 45.6 mm). Therefore, AGB between 2009 and the summer drought year of 2010 remained similar, likely because both the low infiltration rate in the alkali-saline soil of the meadow steppe and the limited precipitation input from short and sporadic showers could substantially reduce soil moisture. However, low ET/P in July 2007 (0.74), in July (0.68), and August (0.68) of 2008 and in May 2010 (0.56) (Fig. 7) indicated that infiltration might have happened after occasional and large precipitation events (Knapp et al. 2008; Thomey et al. 2011). 
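The ET/P bookkeeping discussed above, together with the 30 mm per day criterion used earlier to flag transient waterlogging, can be combined into a simple daily accounting: daily LE in MJ m-2 day-1 is converted to ET in mm day-1 using a latent heat of vaporization of about 2.45 MJ kg-1 (1 kg of evaporated water per m2 corresponds to 1 mm), and cumulative ET is compared with cumulative precipitation. The daily series in the sketch below are synthetic, not the site's records.

```python
# Daily water bookkeeping sketch: convert LE to ET, accumulate ET and P, and
# flag potential transient-waterlogging days (P > 30 mm). Series are synthetic.
import numpy as np

LAMBDA_V = 2.45              # latent heat of vaporization, MJ kg-1 (approx.)
WATERLOG_P = 30.0            # mm per day threshold used in the text

rng = np.random.default_rng(5)
doy = np.arange(1, 366)
le_daily = np.clip(6.0 * np.sin(np.pi * doy / 365) + rng.normal(0, 1.5, doy.size), 0, None)
p_daily = np.where(rng.random(doy.size) < 0.10, rng.gamma(1.5, 6.0, doy.size), 0.0)

et_daily = le_daily / LAMBDA_V               # mm day-1
cum_et, cum_p = et_daily.cumsum(), p_daily.cumsum()
waterlogged = doy[p_daily > WATERLOG_P]

print(f"annual ET = {cum_et[-1]:.0f} mm, annual P = {cum_p[-1]:.0f} mm, "
      f"ET/P = {cum_et[-1] / cum_p[-1]:.2f}")
print("days flagged as potential transient waterlogging:", waterlogged.tolist())
```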
In our study, LE was apparently influenced by the process of temporary waterlogging caused by precipitation > 30 mm in a day (Fig. 8). LE increased because of low vegetation cover and high groundwater level at the beginning of the growing season, which is consistent with the findings of spring drought influence on evapotranspiration and water use efficiency in the same steppe (Dong et al. 2011). However, LE declined in response to the 3-day waterlogging at the growing season peak of 2007 and 2008, which could possibly be attributed to stomatal closure and reduction in both leaf photosynthesis and root respiration. Clearly, the severity of waterlogging effects on LE depend on the developmental stage of the grass. The magnitudes of latent heat (LE) before, during (3-day), and after the transient waterlogging during 2007–2010 Influence of drought on energy balance and partitioning The Songnen meadow steppe exhibited the great intra- and interannual variability of energy partitioning into H and LE, which also differed greatly between the dry and wet years. The meadow steppe was LE-dominated when soil water content was > 0.3 m3 m−3, but switched to H-dominated status when soil water content fell below 0.3 m3 m−3 (Fig. 9). The timing of the shift from an H-dominated to a LE-dominated energy balance depended on the magnitude of the first precipitation event (> 30 mm day−1) or accumulated precipitation within five consecutive days over 30 mm, which enabled grass seed germination with SWC exceeding 0.3 m3 m−3. Similarly, the switch from a LE-dominated to an H-dominated ecosystem happened in concert with the start of biomass senescence when SWC fell below 0.3 m3 m−3. However, LE was dominant in energy balance even though SWC dropped to 0.3 m3 m−3 in the growing season of 2010 (since DOY200), which likely was due to the lag effect of a large precipitation event at early stage. In 2008–2010, the change in energy partitioning occurred in May–June. However, the spring drought led to a shift from H to LE occurring in mid-July in 2007. The increase in SWC and vegetation growth during the growing season altered solar heating of the surface via a decrease in albedo that likely was driven by the dark soil and vegetation surfaces after the precipitation events (Krishnan et al. 2012; Thompson et al. 2004). Daily mean soil volumetric water content (SWC) and daily mean Bowen ration (β) in the meadow steppe during the four growing seasons of 2007–2010 Interannual climate, especially the occurrence, duration, and extent of drought, can significantly reduce LE in this site. On the one hand, limited precipitation could cause a decrease in soil moisture and limit soil water recharge, thus decreasing LE. On the other hand, droughts with high temperature and radiation would result in a reduction in transpiration with lower stomatal conductance of foliage and canopy (Chen et al. 2009). For instance, spring drought in 2007 resulted in a higher LE over H in the beginning of growing season (May–June), but the summer drought of 2010 did not change the interannual relation of higher LE over H in June–August. The growth of plants was triggered by the monsoon precipitation for all grasses. Delayed and low LE in spring drought year appeared to be a direct consequence of slow grass germination and late starting of transpiration. The main reason for higher LE during summer drought meanwhile might be related to the soil water availability. 
Several large precipitation pulses at the beginning of the growing season could have replenished the soil profile to allow grass to reach a biomass peak despite the summer drought in 2010. The results suggested that LE was more sensitive to spring drought than to summer drought, and there was hardly a negative consequence of summer drought to LE in the meadow steppe. Impacts of biotic and climate on LE were further examined for other flux terms and key parameters, including LE/LEeq, gc, and Ω. Evapotranspiration (i.e., latent heat) is mainly composed of plant transpiration (Tr), soil evaporation (Es), and the negligible proportion of canopy intercepted precipitation (I) in grasslands that can be inferred from LE/LEeq, gc, and Ω. LE/LEeq is conventionally used to estimate daily evapotranspiration (Wilson et al. 2002a). LE/LEeq of spring drought (0.53) and summer drought (0.61) in this study were typical events for the semi-arid grassland LE/LEeq (0.6–0.7) (Meyers 2001), indicating that the restriction of the water supply for LE over the meadow steppe was less compared to other grasslands. Decreased gc is usually attributed to low LAI during the growing season (Baldocchi et al. 2004) and decreasing stem-root hydraulic conductivity under conditions of low soil water content (Rodrigues et al. 2013). According to Dong et al. (2011), Es was significantly higher than Tr under spring drought conditions, during which Es released more than 85% of available soil water in May 2007. During the spring drought period, the low gc did not reflect the low transpiration rate. On the contrary, higher gc during summer drought may have been influenced by high LAI in 2007. Ω evaluated the response of LE to air humidity and net radiation changes (Jarvis and Mcnaughton 1986). Ω of growing seasons was higher in summer drought (0.34) than in spring drought (0.24), suggesting that LE was highly sensitive to decreasing air humidity. Moreover, similar Ω (0.33) in summer drought years and wet years further proved the significant effect of spring drought in increasing sensitivity of LE to low air humidity. In summary, although LE was not limited by water stress in the meadow steppe (4.58 MJ m−2 day−1 in spring drought and 5.43 MJ m−2 day−1 in summer drought), spring drought had an even stronger effect on LE than summer drought did. The proportion of individual surface energy balance component is a result of complex, long-term interactions between biogeochemical cycling, disturbance and climate, as well as short-term interactions of plant physiology and the dynamics of the atmospheric boundary layer (Aires et al. 2008; Baldocchi et al. 2004; Shao et al. 2012; Sun et al. 2010). The annual values of LE/Rn in our study (0.28–0.40) are comparable to the 0.31–0.35 that were reported for an annual grassland (Baldocchi et al. 2004), but lower than those in a tallgrass prairie (0.48–0.58) (Burba and Verma 2005) and in a Mediterranean C4 grassland (0.37–0.55) (Jongen et al. 2011). For the growing season, LE/Rn (0.49) was similar to that in an alpine meadow (Li et al. 2013), and higher than in temperate semi-arid grasslands in North America (Krishnan et al. 2012) and C3/C4 grassland in Southern Portugal (Aires et al. 2008), but less than the values reported in the Mongolian steppe (Li et al. 2007). When compared among the 4 years, the spring drought in 2007 caused the highest H/Rn and G/Rn but the lowest LE/Rn. 
The short-term water deficit at the initial stage of growing season could be partially responsible for the low LAI and AGB values. The cumulative available energy was 639.8 MJ m−2 in the extended drought period of 2010 (July–August), 6.6% lower than that of the same month in 2007. However, there was no obvious decrease in LE/Rn (Fig. 5), suggesting that the magnitude of LE was not constrained by the availability of energy but rather by the vegetation cover at dry period. As a result of earlier water inputs from precipitation, grass germinated earlier in the season, and the growing period was lengthened to maintain a stable and high AGB. Soil heat flux (G) played an important role in energy partitioning at our site, which is similar to previous findings at other typical steppes (e.g., Shao et al. 2008). Soil heat flux tended to increase with Rn and H. A low canopy cover in the spring drought year jointly led to the highest G values (0.356 MJ m−2 day−1). Other than for the spring drought year, G/Rn was < 0.1 at the meadow steppe, which was lower than those reported in other grasslands (Aires et al. 2008) but similar to those at the Mongolian steppe and temperate semi-arid prairies (Krishnan et al. 2012; Li et al. 2007). More importantly, soil heat flux had wider diel variations in the dry year of 2007 than in other years (Fig. 10). The large seasonal fluctuations also reflected the seasonal variation in soil thermal conductivity, which were regulated by precipitation, soil moisture, and seasonal changes in vegetation coverage that directly determine value Rn (Heusinkveld et al. 2004; Rodrigues et al. 2013). A higher G was expected in a wet year (2008) because soil thermal conductivity increased with soil moisture (Hillel et al. 2005). However, the shading effect of vegetation cover likely offset the positive effect of soil thermal conductivity and resulted in lower G (0.69 MJ m−2 day−1). Finally, a higher S was found in spring drought year. A critical take-home message is that, if S is omitted from our analysis, G value at the topsoil may underestimate the surface heat flux by > 50% at our site (Ochsner et al. 2007; Shao et al. 2008). Diurnal change of half-hourly soil heat flux (G) (closed circle) and soil heat storage (S) (open circle) during the growing seasons of 2007–2010 The lack of energy closure is a common feature in most eddy flux sites. According to the FLUXNET study (Wilson et al. 2002a), the EBR for all vegetation types of site-years ranged from 0.53 to 0.99, with a mean of 0.79 ± 0.01. For the ChinaFLUX sites, the annual EBR ranged from 0.58 to 1.00, with a mean of 0.83 (Chen et al. 2011). In this study, the mean energy imbalance (0.89 ± 0.05) at meadow steppe is generally within, or higher than, the acceptable range reported within the FLUXNET and ChinaFLUX sites, which was higher compared with typical steppe 0.83 ± 0.07 (Chen et al. 2009). Changes in precipitation resulted in frequent drought and waterlogging in meadow steppe of northeastern China. We measured the energy and water fluxes over a meadow steppe in Songnen Plain over a 4-year study period and found that spring and summer droughts resulted in a rapid shift in energy partitioning due to the change of water availability. However, spring drought produced more severe effects than summer drought. The rise of the groundwater level in our study region usually followed a large precipitation event or a series of continuous precipitation pulses that have profound influence on ecosystem water balance. 
Waterlogging influenced LE by enhancing its values during and after the waterlogged periods at the beginning of the growing season in a dry year, but lowering them after the waterlogged periods within the growing season. Disentangling the roles of different water table heights in controlling the energy balance, particularly LE, remains a challenging research topic and should be prioritized in future studies.

AGB: Aboveground biomass
EBR: Energy balance ratio
EC: Eddy-covariance
Es: Soil evaporation
G: Soil heat flux
gc: Canopy conductance
H: Sensible heat flux
LAI: Leaf area index
LE: Latent heat
LEeq: Equilibrium LE flux
PAR: Photosynthetically active radiation
Rh: Relative humidity
Rn: Net radiation
SWC: Soil water content
SWP: Soil water potential
Ta: Air temperature
Tr: Plant transpiration
Ts: Soil temperature profiles
VPD: Vapor pressure deficit
Vt: Vegetation height
WD: Wind direction
WPL: Webb–Pearman–Leuning
WS: Wind speed
β: Bowen ratio
Ω: Decoupling factor

Aires LM, Pio CA, Pereira JS (2008) The effect of drought on energy and water vapour exchange above a Mediterranean C3/C4 grassland in Southern Portugal. Agric For Meteorol 148(4):565–579 Baldocchi DD, Xu L, Kiang N (2004) How plant functional-type, weather, seasonal drought, and soil physical properties alter water and energy fluxes of an oak–grass savanna and an annual grassland. Agric For Meteorol 123(1–2):13–39 Burba GG, Verma SB (2005) Seasonal and interannual variability in evapotranspiration of native tallgrass prairie and cultivated wheat ecosystems. Agric For Meteorol 135(1–4):190–201 Cabral OMR, Rocha HR, Gash JHC, Ligo MAV, Freitas HC, Tatsch JD (2010) The energy and water balance of a Eucalyptus plantation in southeast Brazil. J Hydrol 388(3–4):208–216 Chen J, Wan S, Henebry G, Qi J, Gutman G, Sun G, Kappas M (2013) Dryland East Asia: land dynamics amid social and climate change. Higher Education Press, Beijing Chen N, Guan D, Jin C, Wang A, Wu J, Yuan F (2011) Influences of snow event on energy balance over temperate meadow in dormant season based on eddy covariance measurements. J Hydrol 399(1–2):100–107 Chen S, Chen J, Lin G, Zhang W, Miao H, Wei L, Huang J, Han X (2009) Energy balance and partition in Inner Mongolia steppe ecosystems with different land use types. Agric For Meteorol 149(11):1800–1809 Dong G, Guo J, Chen J, Sun G, Gao S, Hu L, Wang Y (2011) Effects of spring drought on carbon sequestration, evapotranspiration and water use efficiency in the Songnen meadow steppe in Northeast China. Ecohydrology 4(2):211–224 Eade R, Hamilton E, Smith DM, Graham RJ, Scaife AA (2012) Forecasting the number of extreme daily events out to a decade ahead. J Geophys Res Atmos. https://doi.org/10.1029/2012JD018015 Eamus D, Hutley LB, O'Grady AP (2001) Daily and seasonal patterns of carbon and water fluxes above a north Australian savanna. Tree Physiol 21(12–13):977–988 Falge E, Baldocchi D, Olson R, Anthoni P, Aubinet M, Bernhofer C, Burba G, Ceulemans R, Clement R, Dolman H, Granier A, Gross P, Grunwald T, Hollinger D, Jensen NO, Katul G, Keronen P, Kowalski A, Lai CT, Law BE, Meyers T, Moncrieff H, Moors E, Munger JW, Pilegaard K, Rannik U, Rebmann C, Suyker A, Tenhunen J, Tu K, Verma S, Vesala T, Wilson K, Wofsy S (2001) Gap filling strategies for long term energy flux data sets. Agric For Meteorol 107(1):71–77 Feng X, Fu B, Piao S, Wang S, Ciais P, Zeng Z, Lü Y, Zeng Y, Li Y, Jiang X (2016) Revegetation in China's Loess Plateau is approaching sustainable water resource limits. Nat Clim Chang 6(11):1019–1022 Foken T, Wimmer F, Mauder M, Thomas C, Liebethal C (2006) Some aspects of the energy balance closure problem.
Atmos Chem Phys 6(12):4395–4402 Goldberg V, Bernhofer C (2001) Quantifying the coupling degree between land surface and the atmospheric boundary layer with the coupled vegetation-atmosphere model HIRVAC. Ann Geophys 19(5):581–587 Groisman P, Shugart H, Kicklighter D, Henebry G, Tchebakova N, Maksyutov S, Monier E, Gutman G, Gulev S, Qi J, Prishchepov A, Kukavskaya E, Porfiriev B, Shiklomanov A, Loboda T, Shiklomanov N, Nghiem S, Bergen K, Albrechtová J, Chen J, Shahgedanova M, Shvidenko A, Speranskaya N, Soja A, Kd B, Bulygina O, McCarty J, Zhuang Q, Zolina O (2017) Northern Eurasia Future Initiative (NEFI): facing the challenges and pathways of global change in the twenty-first century. Prog Earth Planetary Sci 4(1):41 Gu S, Tang Y, Cui X, Kato T, Du M, Li Y, Zhao X (2005) Energy exchange between the atmosphere and a meadow ecosystem on the Qinghai–Tibetan Plateau. Agric For Meteorol 129(3–4):175–185 Hao Y, Wang Y, Huang X, Cui X, Zhou X, Wang S, Niu H, Jiang G (2007) Seasonal and interannual variation in water vapor and energy exchange over a typical steppe in Inner Mongolia, China. Agric For Meteorol 146(1–2):57–69 Heisler-White J, Knapp A, Kelly E (2008) Increasing precipitation event size increases aboveground net primary productivity in a semi-arid grassland. Oecologia 158(1):129–140 Heusinkveld BG, Jacobs AFG, Holtslag AAM, Berkowicz SM (2004) Surface energy balance closure in an arid region: role of soil heat flux. Agric For Meteorol 122(1–2):21–37 Hillel D, Hatfield JH, Powlson DS, Rosenzweig C, Scow KM, Singer MJ, Sparks DL (2005) Thermal properties and processes. In: H Daniel (ed) Encyclopedia of Soils in the Environment. Elsevier, Oxford, pp 156–163 Hu Z, Yu G, Zhou Y, Sun X, Li Y, Shi P, Wang Y, Song X, Zheng Z, Zhang L, Li S (2009) Partitioning of evapotranspiration and its controls in four grassland ecosystems: application of a two-source model. Agric For Meteorol 149(9):1410–1420 IPCC (2013) Summary for policymakers. In: Climate change 2013: the physical science basis. Cambridge University Press, New York, pp 1–30 Jaeger WK, Amos A, Bigelow DP, Chang H, Conklin DR, Haggerty R, Langpap C, Moore K, Mote PW, Nolin AW (2017) Finding water scarcity amid abundance using human-natural system models. Proc Natl Acad Sci U S A 114(45):11884–11889 Jarvis PG, McNaughton KG (1986) Stomatal control of transpiration: scaling up from leaf to region. Adv Ecol Res 15:1–49 Jongen M, Pereira JS, Aires LMI, Pio CA (2011) The effects of drought and timing of precipitation on the inter-annual variation in ecosystem-atmosphere exchange in a Mediterranean grassland. Agric For Meteorol 151(5):595–606 Knapp AK, Beier C, Briske DD, Classen AT, Luo Y, Reichstein M, Smith MD, Smith SD, Bell JE, Fay PA (2008) Consequences of more extreme precipitation regimes for terrestrial ecosystems. BioScience 58(9):811–821 Krishnan P, Meyers TP, Scott RL, Kennedy L, Heuer M (2012) Energy exchange and evapotranspiration over two temperate semi-arid grasslands in North America. Agric For Meteorol 153:31–44 Li J, Jiang S, Wang B, Jiang W-w, Tang Y-h, Du M-y, Gu S (2013) Evapotranspiration and its energy exchange in alpine meadow ecosystem on the Qinghai-Tibetan Plateau. J Integr Agric 12(8):1396–1401 Li S-G, Asanuma J, Kotani A, Davaa G, Oyunbaatar D (2007) Evapotranspiration from a Mongolian steppe under grazing and its environmental constraints. 
J Hydrol 333(1):133–143 Ma X, Huete A, Yu Q, Coupe NR, Davies K, Broich M, Ratana P, Beringer J, Hutley LB, Cleverly J, Boulain N, Eamus D (2013) Spatial patterns and temporal dynamics in savanna vegetation phenology across the North Australian Tropical Transect. Remote Sens Environ 139:97–115 Meyers TP (2001) A comparison of summertime water and CO2 fluxes over rangeland for well watered and drought conditions. Agric For Meteorol 106(3):205–214 Minderlein S, Menzel L (2015) Evapotranspiration and energy balance dynamics of a semi-arid mountainous steppe and shrubland site in Northern Mongolia. Environ Earth Sci 73(2):1–17 Monier E, Kicklighter DW, Sokolov AP, Zhuang Q, Sokolik IN, Lawford R, Kappas M, Paltsev SV, Groisman PY (2017) A review of and perspectives on global change modeling for Northern Eurasia. Environ Res Lett 12(8):083001 Monteith JL, Unsworth MH (1990) Principles of environmental physics, 2nd edn. Edward Arnold, New York Ochsner TE, Sauer TJ, Horton R (2007) Soil heat storage measurements in energy balance studies. Agron J 99(1):311 Priestley CHB, Taylor RJ (1972) On the assessment of surface heat flux and evaporation using large-scale parameters. Mon Weather Rev 100(2):81–92 Qi J, Xin X, John R, Groisman P, Chen J (2017) Understanding livestock production and sustainability of grassland ecosystems in the Asian Dryland Belt. Ecol Process 6(1):22 Rey A, Oyonarte C, Morán-López T, Raimundo J, Pegoraro E (2017) Changes in soil moisture predict soil carbon losses upon rewetting in a perennial semiarid steppe in SE Spain. Geoderma 287:135–146 Rodrigues TR, de Paulo SR, Novais JWZ, Curado LFA, Nogueira JS, de Oliveira RG, Lobo FA, Vourlitis GL (2013) Temporal patterns of energy balance for a Brazilian tropical savanna under contrasting seasonal conditions. Int J Atmos Sci https://doi.org/10.1155/2013/326010 Runkle BRK, Wille C, Gažovič M, Wilmking M, Kutzbach L (2014) The surface energy balance and its drivers in a boreal peatland fen of northwestern Russia. J Hydrol 511:359–373 Ryu Y, Baldocchi DD, Ma S, Hehn T (2008) Interannual variability of evapotranspiration and energy exchange over an annual grassland in California. J Geophys Res Atmos 113(D9):D09104 Shao C, Chen J, Li L, Xu W, Chen S, Gwen T, Xu J, Zhang W (2008) Spatial variability in soil heat flux at three Inner Mongolia steppe ecosystems. Agric For Meteorol 148(10):1433–1443 Shao C, Chen J, Li L, Zhang L (2012) Ecosystem responses to mowing manipulations in an arid Inner Mongolia steppe: an energy perspective. J Arid Environ 82:1–10 Shao C, Li L, Dong G, Chen J (2014) Spatial variation of net radiation and its contribution to energy balance closures in grassland ecosystems. Ecol Process 3(1):7 Soja AJ, Groisman PY (2018) Earth science and the integral climatic and socio-economic drivers of change across northern Eurasia: the NEESPI legacy and future direction. Environ Res Lett 13(4):040401 Sun G, Noormets A, Gavazzi MJ, McNulty SG, Chen J, Domec JC, King JS, Amatya DM, Skaggs RW (2010) Energy and water balance of two contrasting loblolly pine plantations on the lower coastal plain of North Carolina, USA. For Ecol Manag 259(7):1299–1310 Thomey ML, Collins SL, Vargas R, Johnson JE, Brown RF, Natvig DO, Friggens MT (2011) Effect of precipitation variability on net primary production and soil respiration in a Chihuahuan Desert grassland. 
Glob Chang Biol 17(4):1505–1515 Thompson CC, Beringer J, Chapin FS, Mcguire AD (2004) Structural complexity and land-surface energy exchange along a gradient from arctic tundra to boreal forest. J Veg Sci 15(3):397–406 Wang Y, Zhou G, Wang Y (2007) Modeling responses of the meadow steppe dominated by Leymus chinensis to climate change. Clim Chang 82(3–4):437–452 Webb EK, Pearman GI, Leuning R (1980) Corrections of flux measurements for density effects due to heat and water vapor transfer. Q J Roy Meteor Soc 106(447):85–100 Wever LA, Flanagan LB, Carlson PJ (2002) Seasonal and interannual variation in evapotranspiration, energy balance and surface conductance in a northern temperate grassland. Agric For Meteorol 112(1):31–49 Wilson KB, Baldocchi DD, Aubinet M, Berbigier P, Bernhofer C, Dolman H, Falge E, Field C, Goldstein A, Granier A, Grelle A, Halldor T, Hollinger D, Katul G, Law BE, Lindroth A, Meyers T, Moncrieff J, Monson R, Oechel W, Tenhunen J, Valentini R, Verma S, Vesala T, Wofsy S (2002a) Energy partitioning between latent and sensible heat flux during the warm season at FLUXNET sites. Water Resour Res 38(12):1294–1305 Wilson KB, Goldstein AH, Falge E, Aubinet M, Baldocchi DD, Berbigier P, Bernhofer C, Ceulemans R, Dolman H, Field CB (2002b) Energy balance closure at FLUXNET sites. Agric For Meteorol 113(1):223–243 Yao J, Zhao L, Ding Y, Gu L, Jiao K, Qiao Y, Wang Y (2008) The surface energy budget and evapotranspiration in the Tanggula region on the Tibetan Plateau. Cold Reg Sci Technol 52(3):326–340 Zha T, Barr AG, van der Kamp G, Black TA, McCaughey JH, Flanagan LB (2010) Interannual variation of evapotranspiration from forest and grassland ecosystems in western Canada in relation to drought. Agric For Meteorol 150(11):1476–1484 Zhang S-Y, Li X-Y, Ma Y-J, Zhao G-Q, Li L, Chen J, Jiang Z-Y, Huang Y-M (2014) Interannual and seasonal variability in evapotranspiration and energy partitioning over the alpine riparian shrub Myricaria squamosa Desv. on Qinghai–Tibet plateau. Cold Reg Sci Technol 102:8–20 Zhang Y, Ohata T, Kadota T (2003) Land-surface hydrological processes in the permafrost region of the eastern Tibetan Plateau. J Hydrol 283(1):41–56 This study was supported by the Major State Research Development Program of China (2016YFC0500600, 2017YFE0104500), Natural Science Foundation of China (31800512, 31870466), and the US-China Carbon Consortium (USCCC). We are grateful to all members who worked at the Changling station. Sincere thanks go to Ge Sun, Jingfeng Xiao, and Haiqiang Guo for their useful comments and valuable suggestions on early versions of the manuscript. We also greatly appreciate the careful reviews by the anonymous reviewers and editing of Kristine Blakeslee. The datasets generated and/or analyzed during the current study are available from the corresponding author upon request. 
Institute of Loess Plateau, Shanxi University, Taiyuan, 030006, China (Jingyan Chen)
National Hulunber Grassland Ecosystem Observation and Research Station & Institute of Agricultural Resources and Regional Planning, Chinese Academy of Agricultural Sciences, Beijing, 100081, China (Changliang Shao & Gang Dong)
Key Laboratory of Vegetation Ecology, Ministry of Education, Northeast Normal University, Changchun, 130024, China (Shicheng Jiang)
Forestry Post-Doctoral Station, Forest Ecology Stable Isotope Center, Forestry College, Fujian Agriculture and Forestry University, Fuzhou, 350002, China (Luping Qu)
Institute of Applied Ecology, Chinese Academy of Sciences, Shenyang, 110016, China (Fangyuan Zhao)
University of Chinese Academy of Sciences, Beijing, 100049, China
School of Life Science, Shanxi University, Taiyuan, 030006, China (Gang Dong)
JYC, CLS, and GD constructed the overall structure of the manuscript. JYC and CLS analyzed the data and wrote the paper. SCJ and LPQ provided assistance with data collection and fieldwork. CLS, GD, and FYZ guided and revised the content. All authors read and approved the final manuscript. Correspondence to Gang Dong.
Chen, J., Shao, C., Jiang, S. et al. Effects of changes in precipitation on energy and water balance in a Eurasian meadow steppe. Ecol Process 8, 17 (2019). https://doi.org/10.1186/s13717-019-0170-z
Keywords: Energy partition; Precipitation change; Eddy covariance
A quantum framework for likelihood ratios
This article was published in the International Journal of Quantum Information. It should be cited as: Bond, R. L., He, Y.-H., & Ormerod, T. C. (2018). A quantum framework for likelihood ratios. International Journal of Quantum Information, 16(1), 1850002. doi: 10.1142/S0219749918500028
International Journal of Quantum Information Vol. 16, No. 1 (2018) 1850002 (14 pages) © World Scientific Publishing Company
Rachael L. Bond, School of Psychology, University of Sussex, Falmer, East Sussex, BN1 9QH, UK
Yang-Hui He, Department of Mathematics, City, University of London, EC1V 0HB, UK; Merton College, University of Oxford, OX1 4JD, UK; School of Physics, NanKai University, Tianjin 300071, China
Thomas C. Ormerod
Received 6 March 2017
The ability to calculate precise likelihood ratios is fundamental to science, from Quantum Information Theory through to Quantum State Estimation. However, there is no assumption-free statistical methodology to achieve this. For instance, in the absence of data relating to covariate overlap, the widely used Bayes' theorem either defaults to the marginal probability driven "naive Bayes' classifier", or requires the use of compensatory expectation-maximization techniques. This article takes an information-theoretic approach in developing a new statistical formula for the calculation of likelihood ratios based on the principles of quantum entanglement, and demonstrates that Bayes' theorem is a special case of a more general quantum mechanical expression.
Keywords: Bayes' theorem; probability; statistics; inference; decision-making.
In recent years, Bayesian statistical research has often been epistemologically driven, guided by de Finetti's famous quote that "probability does not exist" (de Finetti, 1974). For example, the "quantum Bayesian" methodology of Caves, Fuchs & Schack has applied de Finetti's ideas to Bayes' theorem for use in quantum mechanics (Caves, Fuchs, & Schack, 2002). In doing so, Caves et al. have argued that statistical systems are best interpreted by methods in which the Bayesian likelihood ratio is seen to be both external to the system and subjectively imposed on it by the observer (Timpson, 2008). However, the Caves et al. approach is problematic. At a human scale, for instance, an observer's belief as to the chances of a fair coin landing either "heads" or "tails" has no known effect. Indeed, for all practical purposes, the "heads:tails" likelihood ratio of 0.5:0.5 is only meaningful when considered as a property of the coin's own internal statistical system rather than as some ephemeral and arbitrary qualia. Yet, to date, the axiomatic difficulties associated with Bayes' theorem, notably its reliance upon the use of marginal probabilities in the absence of structural statistical information (eg., estimates of covariate overlap), as well as the assumed conditional independence of data, have largely been approached from a "mend and make do" standpoint. For instance, the "maximum likelihood" approach of Dempster, Laird, & Rubin calculates iteratively derived measures of covariate overlap which not only lack a sense of natural authenticity, but also introduce fundamental assumptions into the statistical analysis (Dempster, Laird, & Rubin, 1977). Instead, this paper adopts a different approach to the analysis of statistical systems.
By using quantum mechanical mathematical spaces, it is demonstrated that the creation of isomorphic representations of classical data-sets as entangled systems allows for a natural, albeit non-trivial, calculation of likelihood ratios. It is expected that this technique will find applications within the fields of quantum state estimation, and quantum information theory.

2. The limits of Bayes' theorem

Bayes' theorem is used to calculate the conditional probability of a statement, or hypothesis, being true given that other information is also true. It is usually written as
\begin{equation}
P(H_{i}|D)=\frac{P(H_{i})P(D|H_{i})}{\sum\limits_{j}P(H_{j})P(D|H_{j})} \ . \tag{1}
\end{equation}
Here, \(P(H_{i}|D)\) is the conditional probability of hypothesis \(H_{i}\) being true given that the information \(D\) is true; \(P(D|H_{i})\) is the conditional probability of \(D\) being true if \(H_{i}\) is true; and \(\sum\limits_{j}P(H_{j})P(D|H_{j})\) is the sum of the probabilities of all hypotheses multiplied by the conditional probability of \(D\) being true for each hypothesis (Oaksford & Chater, 2007).
\begin{equation}
\begin{array}{lcc}
& \mbox{Particle $\alpha$ }(H_1) & \mbox{Particle $\beta$ }(H_2) \\ \hline
\mbox{Number of particles }(n) & 10 & 10 \\ \hline
\mbox{Proportion spin $\uparrow$ }(D) & 0.8 & 0.7 \\
\mbox{Proportion spin $\downarrow$ }(\bar{D}) & 0.2 & 0.3 \\ \hline
\end{array}
\label{figure1}
\end{equation}
To exemplify using the contingency information in \(\eqref{figure1}\), if one wishes to calculate the nature of a randomly selected particle from a set of 20, given that it has spin \(\uparrow\), then using Bayes' theorem it is trivial to calculate that particle \(\alpha\) is the most likely type with a likelihood ratio of approximately 0.53:0.47,
\begin{align}
P(H_{1}|D)&=\frac{0.5\times0.8}{(0.5\times0.8)+(0.5\times0.7)} = \frac{8}{15}\approx 0.533 \ , \notag \\
P(H_{2}|D)&=1-P(H_{1}|D) = \frac{7}{15}\approx 0.467 \ ,
\label{ex1}
\end{align}
where \(P(H_i) = 10/(10+10) = 0.5\) for both \(i=1,2\). However, difficulties arise in the use of Bayes' theorem for the calculation of likelihood ratios where there are multiple non-exclusive data sets. For instance, if the information in \(\eqref{figure1}\) is expanded to include data about particle charge \(\eqref{figure2}\) then the precise covariate overlap (ie., \(D_1 \cap D_2\)) for each particle becomes an unknown.
\begin{equation}
\begin{array}{lcc}
&\mbox{Particle $\alpha$ }(H_1)&\mbox{Particle $\beta$ }(H_2)\\ \hline
\mbox{Number of particles }(n)&10&10\\ \hline
\mbox{Proportion spin $\uparrow$ }(D_1)&0.8&0.7\\
\mbox{Proportion charge + }(D_2)&0.6&0.5\\ \hline
\end{array}
\label{figure2}
\end{equation}
All that may be shown is that, for each particle, the occurrence of both features forms a range described by
\begin{equation}
n(D_1 \cap D_2|H_{i}) \in \left\{
\begin{array}{l}
\Bigl[n(D_1|H_i) + n(D_2|H_i) - n(H_i) \ , \ldots, \min(n(D_1|H_i), n(D_2|H_i))\Bigr] \\
\qquad \qquad \mbox{ if } n(D_1|H_i)+n(D_2|H_i) > n(H_i) \ , \quad \mbox{or} \\
\Bigl[0 \ ,\ldots, \min(n(D_1|H_i), n(D_2|H_i))\Bigr] \\
\qquad \qquad \mbox{ if } n(D_1|H_i)+n(D_2|H_i) \leq n(H_i) \ ,
\end{array}
\right.
\label{min_max}
\end{equation}
where \(n(H_i)\) is the total number of exemplars \(i\), \(n(D_1|H_i)\) is the total number of \(i\) with spin \(\uparrow\), and \(n(D_2|H_i)\) is the total number of \(i\) with a positive charge. Specifically for \(\eqref{figure2}\) these ranges equate to
\begin{align}
n(D_1 \cap D_2|H_{1})&\in\{4,\:5,\:6\} \ , \notag \\
n(D_1 \cap D_2|H_{2})&\in\{2,\:3,\:4,\:5\} \ .
\label{min_max_actual}
\end{align}
The simplest approach to resolving this problem is to naively ignore any intersection, or co-dependence, of the data and to directly multiply the marginal probabilities.
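As a quick numerical illustration of \(\eqref{ex1}\) and \(\eqref{min_max_actual}\), the short Python sketch below reproduces the 0.533:0.467 ratio, the feasible intersection counts, and the naive marginal-product estimate that is worked through in the next paragraph. The function names are illustrative only.

```python
def bayes_posterior(priors, likelihoods):
    """Bayes' theorem, Eq. (1): P(H_i|D) proportional to P(H_i) * P(D|H_i)."""
    joint = [p * l for p, l in zip(priors, likelihoods)]
    total = sum(joint)
    return [j / total for j in joint]

def intersection_range(n_h, n_d1, n_d2):
    """Feasible counts of D1 and D2 occurring together under one hypothesis:
    from max(0, n_d1 + n_d2 - n_h) up to min(n_d1, n_d2)."""
    return list(range(max(0, n_d1 + n_d2 - n_h), min(n_d1, n_d2) + 1))

# Single datum (spin only): recovers the 0.533 : 0.467 likelihood ratio.
print(bayes_posterior([0.5, 0.5], [0.8, 0.7]))

# Two overlapping data sets (spin and charge), 10 particles of each type:
print(intersection_range(10, 8, 6))   # particle alpha -> [4, 5, 6]
print(intersection_range(10, 7, 5))   # particle beta  -> [2, 3, 4, 5]

# Naive marginal-product estimate (see the following paragraph): ~0.578 for H1.
print(bayes_posterior([0.5, 0.5], [0.8 * 0.6, 0.7 * 0.5]))
```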
Hence, given \(\eqref{figure2}\), the likelihood of particle \(\alpha\) having the greatest occurrence of both spin \(\uparrow\) and a positive charge would be calculated as P(H_{1}|D_1 \cap D_2)&=\frac{0.5\times0.8\times0.6}{(0.5\times0.8\times0.6)+(0.5\times0.7\times0.5)} \\ &\approx 0.578 \ . \label{pseudo_calc} Yet, because the data intersect, this probability value is only one of a number which may be reasonably calculated. Alternatives include calculating a likelihood ratio using the mean value \(\mu\) of the frequency ranges for each hypothesis P(\mu [n(D_1 \cap D_2|H_1)])&=\frac{1}{10}\times\frac13(4+5+6) = 0.5\ , \\ P(\mu [n(D_1 \cap D_2|H_2)])&=\frac{1}{10}\times\frac14(2+3+4+5) = 0.35 \\ &\Rightarrow \: P(H_1|\mu D_1 \cap D_2) \approx 0.588 \ ; \label{Bayes2} and taking the mean value of the probability range derived from the frequency range &\min P(H_1|D_1 \cap D_2) = \frac{4}{4+5} \ , \notag \\ \: &\max P(H_1|D_1 \cap D_2)=\frac{6}{6+2} \notag \\ &\Rightarrow \: \mu[ P(H_1|D_1 \cap D_2) ] \approx0.597 \ . Given this multiplicity of probability values, it would seem that none of these methods may lay claim to normativity. This problem of covariate overlap has, of course, been previously addressed within statistical literature. For instance, the "maximum likelihood" approach of Dempster, Laird, & Rubin has demonstrated how an "expectation-maximization" algorithm may be used to derive appropriate covariate overlap measures (Dempster et al., 1977). Indeed, the mathematical efficacy of this technique has been confirmed by (Wu, 1983). However, it is difficult to see how such an iterative methodology can be employed without introducing axiomatic assumptions. Further, since any assumptions, irrespective of how benign they may appear, have the potential to skew results, what is required is an approach in which covariate overlaps can be automatically, and directly, calculated from contingency data. 3. A quantum mechanical proof of Bayes' theorem for independent data Previously unconsidered, the quantum mechanical von Neumann axioms would seem to offer the most promise in this regard, since the re-conceptualization of covariate data as a quantum entangled system allows for statistical analysis with few, non-arbitrary assumptions. Unfortunately there are many conceptual difficulties that can arise here. For instance, a Dirac notation representation of \(\eqref{figure2}\) as a standard quantum superposition is \newcommand\ket[1]{\left|#1\right>} \ket{\Psi}=&\frac{1}{\sqrt{N}}\biggl[ \alpha \bigg( \sqrt{\frac{1}{3}}\ket{4}_{H1}+\sqrt{\frac{1}{3}}\ket{5}_{H1}+\sqrt{\frac{1}{3}}\ket{6}_{H1} \bigg) \\ &+\beta \bigg( \sqrt{\frac{1}{4}}\ket{2}_{H2}+\sqrt{\frac{1}{4}}\ket{3}_{H2}+\sqrt{\frac{1}{4}}\ket{4}_{H2}+\sqrt{\frac{1}{4}}\ket{5}_{H2} \bigg) \biggr] \ . \label{superpos} \tag{10} In this example, \(\eqref{superpos}\) cannot be solved since the possible values of \(D_1 \cap D_2\) for each hypothesis \(\eqref{min_max_actual}\) have been described as equal chance outcomes within a general superposition of \(H_1\) and \(H_2\), with the unknown coefficients \(\alpha\) and \(\beta\) assuming the role of the classical Bayesian likelihood ratio. The development of an alternative quantum mechanical description necessitates a return to the simplest form of Bayes' theorem using the case of exclusive populations \(H_i\) and data sets \(D\), \(\bar{D}\), such as given in \(\eqref{figure1}\). Here, the overall probability of \(H_1\) may be simply calculated as P(H_1)=\frac{n(H_1)}{n(H_1)+n(H_2)} \ . 
The a priori uncertainty in \(\eqref{figure1}\) may be expressed by constructing a wave function in which the four data points are encoded as a linear superposition \ket{\Psi}=&\alpha_{1,1}\ket{H_1 \otimes D} + \alpha_{1,2}\ket{H_1 \otimes \bar{D}} \\ &+\alpha_{2,1}\ket{H_2 \otimes D} + \alpha_{2,2}\ket{H_2 \otimes \bar{D}} \ . \label{superpos2} Since there is no overlap between either \(D\) and \(\bar{D}\) or the populations \(H_1\) and \(H_2\), each datum automatically forms an eigenstate basis with the orthonormal conditions \left< H_1 \otimes D|H_1 \otimes D \right> = \left< H_1 \otimes \bar{D}|H_1 \otimes \bar{D} \right> & = 1 \notag \\ \mbox{all other bra‑kets } & = 0 \ , \label{ortho1} where the normalization of the wave function demands that \left<\Psi|\Psi\right>=1 \ , so that the sum of the modulus squares of the coefficients \(\alpha_{i,j}\) gives a total probability of 1 |\alpha_{1,1}|^2 + |\alpha_{1,2}|^2 + |\alpha_{2,1}|^2 + |\alpha_{2,2}|^2 = 1 \ . For simplicity let x_1=P(D|H_1),\:\:y_1=P(\bar{D}|H_1) \ , \notag \\ X_1=P(H_1),\:\:X_2=P(H_2) \ . \label{griddef} If the coefficients \(\alpha_{i,j}\) from \(\eqref{superpos2}\) are set as required by \(\eqref{figure1}\), it follows that |\alpha_{1,1}|^2=x_1,\:\:|\alpha_{1,2}|^2=y_1,\:\:|\alpha_{2,1}|^2=x_2,\:\:|\alpha_{2,2}|^2 = y_2 \ , so that the normalised wave function \(\newcommand\ket[1]{\left|#1\right>} \ket{\Psi}\) is described by \ket{\Psi}=&\frac{1}{\sqrt{N}}(\sqrt{x_1}\ket{H_1\otimes D}+\sqrt{y_1}\ket{H_1\otimes\bar{D}} \\ &+\sqrt{x_2}\ket{H_2\otimes D}+\sqrt{y_2}\ket{H_2\otimes\bar{D}}) for some normalization constant \(N\). The orthonormality condition \(\eqref{ortho2}\) implies that N=x_1+y_1+x_2+y_2=X_1+X_2 \ , thereby giving the full wave function description \ket{\Psi}=\frac{\sqrt{x_1}\ket{H_1\otimes D}+\sqrt{y_1}\ket{H_1\otimes\bar{D}}+\sqrt{x_2}\ket{H_2\otimes D}+\sqrt{y_2}\ket{H_2\otimes\bar{D}}}{\sqrt{X_1+X_2}} \ . If the value of \(P(H_1|D)\) is to be calculated, ie., the property \(D\) is observed, then the normalized wave function \(\eqref{superpos2}\) necessarily collapses to \ket{\Psi'}=\alpha_1 \ket{H_1 \otimes D_1} + \alpha_2 \ket{H_2 \otimes D_1} \ , \label{collapse1} where the coefficients \(\alpha_{1,2}\) may be determined by projecting \(\newcommand\ket[1]{\left|#1\right>} \ket{\Psi}\) on to the two terms in \(\newcommand\ket[1]{\left|#1\right>} \ket{\Psi'}\) using \(\eqref{ortho1}\), giving \small \begin{align} &\alpha_1=\left<\Psi'|H_1\otimes D\right> = \sqrt{\frac{x_1}{X_1+X_2}} \ ,\notag \\ &\mbox{ } \notag \\ &\alpha_2=\left<\Psi'|H_2\otimes D\right> = \sqrt{\frac{x_2}{X_1+X_2}} \ . \end{align} \label{collapse4} \tag{22} Normalizing \(\eqref{collapse1}\) with the coefficient \(N'\) \ket{\Psi'}=\frac{1}{\sqrt{N'}}\Bigl(\sqrt{\frac{x_1}{X_1+X_2}} \ket{H_1 \otimes D} + \sqrt{\frac{x_2}{X_1+X_2}} \ket{H_2 \otimes D}\Bigr) \ , and using the normalization condition \(\eqref{ortho2}\), implies that 1=\left<\Psi'|\Psi'\right>&=\frac{1}{N'}\Bigl( \frac{x_1}{X_1+X_2} + \frac{x_2}{X_1+X_2} \Bigr)\notag \\ \mbox{ } \notag \\ \rightarrow N'&= \frac{x_1+x_2}{X_1+X_2} \ . Thus, after collapse, the properly normalized wave function \(\eqref{collapse2}\) becomes \ket{\Psi'}=\sqrt{\frac{x_1}{x_1+x_2}} \ket{H_1 \otimes D} + \sqrt{\frac{x_2}{x_1+x_2}} \ket{H_2 \otimes D} \ , which means that the probability of observing \(\newcommand\ket[1]{\left|#1\right>} \ket{H_1 \otimes D}\) is P(\ket{H_1\otimes D})=\Bigg(\sqrt{\frac{x_1}{x_1+x_2}}\:\Bigg)^2=\frac{\alpha_1^2}{\alpha_1^2+\alpha_2^2}=\frac{x_1}{x_1+x_2} \ . 
\label{bayesproof} This is entirely consistent with Bayes' theorem and demonstrates its derivation using quantum mechanical axioms. 4. Quantum likelihood ratios for co‑dependent data Having established the principle of using a quantum mechanical approach for the calculation of simple likelihood ratios with mutually exclusive data \(\eqref{figure1}\), it is now possible to consider the general case of \(n\) hypotheses and \(m\) data \(\eqref{figure3}\), where the data are co‑dependent, or intersect. \begin{array}{|c|c|c|c|c|} \hline & H_1 & H_2 & \cdots & H_n \\ \hline D_1 & x_{1,1} & x_{1,2} & \cdots & x_{1,n} \\ \hline \vdots & & \vdots & & \vdots \\ \hline D_m & x_{m,1} & x_{m,2} & \cdots & x_{m,n} \\ \hline Here the contingency table in \(\eqref{figure3}\) has been indexed using x_{i, \alpha} \ , \qquad \alpha = 1, 2, \ldots, n; \quad i = 1,2, \ldots, m \ . While the general wave function remains the same as before, the overlapping data create non-orthonormal inner products which can be naturally defined as \newcommand\vev[1]{\left<#1\right>} \newcommand{\IR}{\mathbb{R}} \vev{H_\alpha \otimes D_i | H_\beta \otimes D_j} = c_{ij}^{\alpha} \delta_{\alpha\beta} \ , \:\: c_{ij}^\alpha = c_{ji}^\alpha \in \IR \ , c_{ii}^\alpha = 1 \ . \label{innerGen} Assuming, for simplicity, that the overlaps \(c_{ij}^{\alpha}\) are real, then there is a symmetry in that \(c_{ij}^\alpha = c_{ji}^\alpha\) for each \(\alpha\). Further, for each \(\alpha\) and \(i\), the state is normalized, ie., \(c_{ii}^\alpha = 1\). The given independence of the hypotheses \(H_\alpha\) also enforces the Kroenecker delta function, \(\delta_{\alpha\beta}\). The Hilbert space \(V\) spanned by the kets \(\newcommand\ket[1]{\left|#1\right>} \ket{H_\alpha \otimes D_i}\) is \(mn\)‑dimensional and, because of the independence of \(H_\alpha\), naturally decomposes into the direct sum V = \mbox{Span}(\{ \ket{H_\alpha \otimes D_i} \} ) = \bigoplus\limits_{\alpha=1}^{n} V^{\alpha} \ , \qquad \dim V^{\alpha} = m \label{directsum} with respect to the inner product, thereby demonstrating that the non‑orthonormal conditions are the direct sum of \(m\) vector spaces \(V^\alpha\). Since the inner products are non-orthonormal, each \(V^{\alpha}\) must be individually orthonormalised. Given that \(V\) splits into a direct sum, this may be achieved for each subspace \(V^{\alpha}\) by applying the Gram‑Schmidt algorithm to \(\newcommand\ket[1]{\left|#1\right>} \{ \ket{H_\alpha \otimes D_i} \}\) of \(V\). Consequently, the orthonormal basis may be defined as \ket{K^\alpha_i} = \sum\limits_{k=1}^n A_{i,k}^{\alpha} \ket{H_\alpha \otimes D_k} \ , \vev{K^\alpha_i | K^\alpha_j} = \delta_{ij} \ , \label{KHD} for each \(\alpha=1,2,\ldots,n\) with \(m\times m\) matrices \(A_{i,k}^{\alpha}\), for each \(\alpha\). Substituting the inner products \(\eqref{innerGen}\) gives \sum\limits_{k,k'=1}^m A_{ik}^{\alpha} A_{jk'}^{\alpha} c_{kk'}^{\alpha} = \delta_{ij} \quad \forall \alpha = 1, 2, \ldots, n \ . \label{AAc} The wave-function may now be written as a linear combination of the orthonormalised kets \(\newcommand\ket[1]{\left|#1\right>} \ket{K^\alpha_i}\) with the coefficients \(b_i^\alpha\), and may be expanded into the \(\newcommand\ket[1]{\left|#1\right>} \ket{H_\alpha \otimes D_i}\) basis using \(\eqref{KHD}\), ie., \ket{\Psi} = \sum\limits_{\alpha,i} b_i^\alpha \ket{K^\alpha_i} = \sum\limits_{\alpha,i,k} b_i^\alpha A_{ik}^{\alpha} \ket{H_\alpha \otimes D_k} \ . 
\label{PsiGen} As with \(\eqref{ortho4}\) from earlier, the coefficients in \(\eqref{PsiGen}\) should be set as required by the contingency table \sum\limits_{i} b_i^\alpha A_{i,k}^{\alpha} = \sqrt{x_{k \alpha}} \ , \label{bAx} where, to solve for the \(b\)‑coefficients, \(\eqref{AAc}\) may be used to invert \sum\limits_{k,k'}\sum\limits_{i} b_i^\alpha A_{ik}^{\alpha} A_{jk'}c_{kk'}^\alpha = \sum\limits_{k,k'} \sqrt{x_{k \alpha}} A_{jk'}^\alpha c_{k'k}^\alpha \ , b_j^\alpha = \sum\limits_{k,k'} \sqrt{x_{k \alpha}} A_{jk'}^\alpha c_{kk'}^\alpha \ . \label{bsolGen} Having relabelled the indices as necessary, a back-substitution of \(\eqref{bAx}\) into the expansion \(\eqref{PsiGen}\) gives \ket{\Psi} = \sum\limits_{\alpha,i,k} b_i^\alpha A_{i,k}^{\alpha} \ket{H_\alpha \otimes D_k} = \sum\limits_{\alpha,k} \sqrt{x_{k \alpha}} \ket{H_\alpha \otimes D_k} \ ,\end{equation} which is the same as having simply assigned each ket's coefficient to the square root of its associated entry in the contingency table. The normalization factor for \(\newcommand\ket[1]{\left|#1\right>} \ket{\Psi}\) is simply \(1/\sqrt{N}\), where \(N\) is the sum of the squares of the coefficients \(b\) of the orthonormalised bases \(\newcommand\ket[1]{\left|#1\right>} \ket{K^\alpha_i}\), N &= \sum\limits_{i,\alpha} (b_i^\alpha)^2 \sum\limits_{i,\alpha} b_i^\alpha \left( \sum\limits_{k,k'} \sqrt{x_{k \alpha}} A_{k',i}^\alpha c_{kk'}^\alpha \right) \notag \\ \sum\limits_{k,k',\alpha} \sqrt{x_{k \alpha} x_{k'\alpha}} c_{kk'}^\alpha \ . Thus, the final normalized wave function is \frac{\sum\limits_{\alpha,k} \sqrt{x_{k \alpha}} \ket{H_\alpha \otimes D_k}} {\sqrt{\sum\limits_{i,j,\alpha} \sqrt{x_{i \alpha}x_{j \alpha}} c_{ij}^\alpha}} \label{NormalWave} where \(\alpha\) is summed from 1 to \(n\), and \(i,j\) are summed from 1 to \(m\). Note that, in the denominator, the diagonal term \(\sqrt{x_{i \alpha}x_{j \alpha}}c_{ij}^\alpha\), which occurs whenever \(i=j\), simplifies to \(x_{i \alpha}\) since \(c_{ii}^\alpha = 1\) for all \(\alpha\). From \(\eqref{NormalWave}\) it follows that, exactly in parallel to the non‑intersecting case, if all properties \(D_i\) are observed simultaneously, the probability of any hypothesis \(H_\alpha\), for a fixed \(\alpha\), is P(H_\alpha | D_1 \cap D_2 \ldots \cap D_m) &= \frac{\sum\limits_{i} (b_i^\alpha)^2}{\sum\limits_{i,\beta} (b_i^\beta)^2} \frac{ \sum\limits_{i,j} \sqrt{x_{i \alpha} x_{j\alpha}} c_{ij}^\alpha \sum\limits_{i,j,\beta} \sqrt{x_{i \beta} x_{j\beta}} c_{ij}^\beta \ . \label{PHalpha} In the case of non‑even populations for each hypothesis (ie., non‑even priors), the calculated probabilities should be appropriately weighted. 5. Example solution Returning to the problem presented in the contingency table \(\eqref{figure2}\), it is now possible to calculate the precise probability for a randomly selected particle with the properties of "spin \(\uparrow\)" and "charge +" being particle \(\alpha\) (\(H_1\)). 
For this \(2 \times 2\) matrix, recalling from \(\eqref{innerGen}\) that \(c_{ii}^\alpha=1\) and \(c_{ij}^\alpha = c_{ji}^\alpha\), the general expression \(\eqref{PHalpha}\) may be written as & P(H_1|D_1 \cap D_2) = \frac{ \sum\limits_{i,j=1}^2 \sqrt{x_{i,1} x_{j,1}} c_{ij}^1 \sum\limits_{i,j=1}^2 \sum\limits_{\alpha=1}^2 \notag \\ \sqrt{x_{1,1}^2} c_{1,1}^1 + \sqrt{x_{2,1}^2} c_{2,2}^1 + \sqrt{x_{1,1}x_{2,1}} c_{1,2}^1 + \sqrt{x_{2,1}x_{1,1}} c_{2,1}^1 \sum\limits_{\alpha=1}^2 \sqrt{x_{1,\alpha}^2} c_{1,1}^1 + \sqrt{x_{2,\alpha}^2} c_{2,2}^1 + \sqrt{x_{1,\alpha}x_{2,\alpha}} c_{1,2}^1 + \sqrt{x_{2,\alpha}x_{1,\alpha}} c_{2,1}^1 \frac{x_1 + y_1 + 2 c_1 \sqrt{x_1 y_1}}{x_1 + x_2 + y_1 + y_2 + 2 c_1 \sqrt{x_1 y_1} + 2 c_2 \sqrt{x_2 y_2}} \ , \label{entang2} where, adhering to the earlier notation \(\eqref{griddef}\), x_1=x_{1,1} = P(D_1|H_1),\:\:y_1= x_{2,1} = P(D_2|H_1) \ , \notag \\ X_1=P(H_1),\:\:X_2=P(H_2) \ ,\ \label{griddef2} and, for brevity, \(c_1 := c_{1,2}^1\), \(c_2 := c_{1,2}^2\). For simplicity, \(P(H_i|D_1 \cap D_2)\) will henceforth be denoted as \(P_i\). Implementing \(\eqref{entang2}\) is dependent upon deriving solutions for the yet unknown expressions \(c_{i}\), \(i=1,2\) which govern the extent of the intersection in \(\eqref{innerGen}\). This can only be achieved by imposing reasonable constraints upon \(c_i\) which have been inferred from expected behaviour and known outcomes, ie., through the use of boundary values and symmetries. Specifically, these constraints are: (i) Data dependence The expressions \(c_i\) must, in some way, be dependent upon the data given in the contingency table, ie., c_1&=c_1(x_1,y_1,x_2,y_2;X_1,X_2) \ , \notag \\ c_2&=c_2(x_1,y_1,x_2,y_2;X_1,X_2) \ . (ii) Probability The calculated values for \(P_i\) must fall between 0 and 1. Since \(x_i\) and \(y_i\) are positive, it suffices to take c_i(x_1, y_1, x_2, y_2) < 1 \ , \mbox{and } c_i(x_1, y_1, x_2, y_2) > -1 \ . (iii) Complementarity The law of total probability dictates that P_1 + P_2 = 1 \ , which can easily be seen to hold. (iv) Symmetry The exchanging of rows within the contingency tables should not affect the calculation of \(P_i\). In other words, for each \(i=1,2\), \(P_i\) is invariant under \(x_i \leftrightarrow y_i\). This constraint implies that c_i(x_1,y_1,x_2,y_2)=c_i(y_1,x_1,y_2,x_2) \ . Equally, if the columns are exchanged then \(P_i\) must map to each other, ie., for each \(i=1,2\) then \(P_1 \leftrightarrow P_2\) under \(x_1 \leftrightarrow x_2,\: y_1 \leftrightarrow y_2\) which gives the further constraint that c_1(x_1,y_1,x_2,y_2)=c_2(x_2,y_2,x_1,y_1) \ . (v) Known values There are a number of contingency table structures which give rise to a known probability, ie., \begin{array}{|c|c|l|} \hline & H_1 & H_2 \\ \hline D_1 & 1 & 1 \\ D_2 & m & n \\ \hline \end{array} \notag &\ \ \ \ \ \ \ \ \ \rightarrow &P_1 &= \frac{m}{m+n} \\ \notag D_1 & m & n \\ D_2 & 1 & 1 \\ \hline D_1 & n & m \\ &P_1 &= \frac{1}{2} D_1 & n & n \\ D_2 & m & m \\ \hline D_1 & m & m \\ &P_1 &= \frac{1}{2} \ , where \(m,n\) are positively valued probabilities. For such contingency tables the correct probabilities should always be returned by \(c_i\). 
Applying this principle to \(\eqref{entang2}\) gives the constraints
\begin{align}
\frac{m}{m+n} &= \frac{2 c_1(m,1,n,1) \sqrt{m}+m+1}{2 c_1(m,1,n,1) \sqrt{m}+2 c_2(m,1,n,1)\sqrt{n}+m+n+2} \ , \label{movern} \\
\frac{1}{2} &= \frac{2 c_1(n,m,m,n) \sqrt{m} \sqrt{n}+m+n}{2 c_1(n,m,m,n) \sqrt{m} \sqrt{n}+2 c_2(n,m,m,n) \sqrt{m} \sqrt{n}+2 m+2 n} \ , \label{movern2} \\
\frac{1}{2} &= \frac{2 c_1(n,m,n,m) \sqrt{m} \sqrt{n}+m+n}{2 c_1(n,m,n,m) \sqrt{m} \sqrt{n}+2 c_2(n,m,n,m) \sqrt{m} \sqrt{n}+2 m+2 n} \ . \label{movern3}
\end{align}

(vi) Non-homogeneity

Bayes' theorem returns the same probability for any linearly scaled contingency tables, eg.,
\begin{align}
x_1 \rightarrow 1.0,\: y_1 \rightarrow 1.0,\: x_2 \rightarrow 1.0,\: y_2 \rightarrow 0.50 &\Rightarrow P_1\approx 0.667 \ , \label{homogen1} \\
x_1 \rightarrow 0.5,\: y_1 \rightarrow 0.5,\: x_2 \rightarrow 0.5,\: y_2 \rightarrow 0.25 &\Rightarrow P_1\approx 0.667 \ . \label{homogen2}
\end{align}
While homogeneity may be justified for conditionally independent data, this is not the case for intersecting, co-dependent data since the act of scaling changes the nature of the intersections and the relationship between them. This may be easily demonstrated by taking the possible value ranges for \(\eqref{homogen1}\) and \(\eqref{homogen2}\), calculated using \(\eqref{min_max}\), which are
\begin{align}
\mbox{Eq. \eqref{homogen1}} \Rightarrow \ &(D_1 \cap D_2)|H_{1}=\{1\} \ , \notag \\
&(D_1 \cap D_2)|H_{2}=\{0.5\} \ , \notag \\
\mbox{Eq. \eqref{homogen2}} \Rightarrow \ &(D_1 \cap D_2)|H_{1}=\{0.0 \ldots 0.5\} \ , \notag \\
&(D_1 \cap D_2)|H_{2}=\{0.0 \ldots 0.25\} \ .
\end{align}
The effect of scaling has not only introduced uncertainty where previously there had been none, but has also introduced the possibility of 0 as a valid answer for both hypotheses. Further, the spatial distance between the hypotheses has also decreased. For these reasons it would seem unreasonable to assert that \(\eqref{homogen1}\) and \(\eqref{homogen2}\) share the same likelihood ratio.

Using these principles and constraints it becomes possible to solve \(c_i\). From the principle of symmetry it follows that
\begin{align}
c_1(n, m, m, n) &= c_2(m,n,n,m) = c_2(n, m, m, n) \ , \notag \\
c_1(n, m, n, m) &= c_2(n,m,n,m) = c_2(n, m, n, m) \ ,
\end{align}
and that the equalities \(\eqref{movern2}\), \(\eqref{movern3}\) for \(P_i=0.5\) automatically hold. Further, \(\eqref{movern}\) solves to give
\begin{equation}
c_2(m,1,n,1) = \frac{2 \sqrt{m}\, n\, c_1(m,1,n,1) - m + n}{2m \sqrt{n}} \ , \label{function2}
\end{equation}
which, because \(c_1(n,1,m,1)=c_2(m,1,n,1)\), finally gives
\begin{equation}
c_1(n,1,m,1) = \frac{2 \sqrt{m}\, n\, c_1(m,1,n,1) - m + n}{2m \sqrt{n}} \ . \label{function1}
\end{equation}
Substituting \(g(m,n) := \sqrt{n} c_1(m,1,n,1)\) transforms \(\eqref{function1}\) into an anti-symmetric bivariate functional equation in \(m,n\),
\begin{equation}
g(m,n) - g(n,m) = \frac{m}{2\sqrt{mn}} - \frac{n}{2\sqrt{mn}} \ ,
\end{equation}
whose solution is \(g(m,n)=\frac{m}{2\sqrt{mn}}\). This gives a final solution for the coefficients \(c_{1,2}\) of
\begin{align}
c_1(x_1,y_1,x_2,y_2)&=\frac{\sqrt{x_1 y_1}}{2x_2 y_2} \ , \notag \\
c_2(x_1,y_1,x_2,y_2)&=\frac{\sqrt{x_2 y_2}}{2x_1 y_1} \ . \label{c12sol}
\end{align}
Thus, substituting \(\eqref{c12sol}\) into \(\eqref{entang2}\) gives the likelihood ratio expression of
\begin{equation}
P(H_1|D_1 \cap D_2) =\frac{\frac{x_1 y_1}{x_2 y_2}+x_1+y_1}{\frac{x_1 y_1}{x_2 y_2}+ x_1 + y_1 + \frac{x_2 y_2}{x_1 y_1}+ x_2 + y_2 } \ .
\end{equation}
Given that the population sizes of \(H_1\) and \(H_2\) are the same, no weighting needs to take place. Hence, the value of \(P(H_1|D_1 \cap D_2)\) for \(\eqref{figure2}\) may now be calculated to be
\begin{equation}
P(H_1|D_1 \cap D_2) \approx 0.5896 \ .
\end{equation}
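As an illustrative numerical check (not part of the original derivation), the Python sketch below evaluates the overlap coefficients of \(\eqref{c12sol}\) and the resulting likelihood ratio for the contingency table \(\eqref{figure2}\), and compares it with the naive marginal-product value. Equal priors are assumed, as in the worked example, and the function names are chosen here for illustration only.

```python
import math

def overlap_coefficients(x1, y1, x2, y2):
    """Closed-form overlap terms: c1 = sqrt(x1*y1)/(2*x2*y2), c2 = sqrt(x2*y2)/(2*x1*y1)."""
    return math.sqrt(x1 * y1) / (2 * x2 * y2), math.sqrt(x2 * y2) / (2 * x1 * y1)

def quantum_likelihood(x1, y1, x2, y2):
    """P(H1 | D1 and D2) for the 2x2 case, assuming equal priors."""
    c1, c2 = overlap_coefficients(x1, y1, x2, y2)
    numerator = x1 + y1 + 2 * c1 * math.sqrt(x1 * y1)
    denominator = numerator + x2 + y2 + 2 * c2 * math.sqrt(x2 * y2)
    return numerator / denominator

def naive_bayes(x1, y1, x2, y2):
    """Marginal-product ('naive Bayes') estimate for comparison, equal priors."""
    return (x1 * y1) / (x1 * y1 + x2 * y2)

# Contingency table: spin-up and positive-charge proportions for particles alpha and beta.
x1, y1 = 0.8, 0.6   # particle alpha: P(D1|H1), P(D2|H1)
x2, y2 = 0.7, 0.5   # particle beta : P(D1|H2), P(D2|H2)

print(f"quantum likelihood ratio: {quantum_likelihood(x1, y1, x2, y2):.4f}")  # ~0.5896
print(f"naive Bayes classifier  : {naive_bayes(x1, y1, x2, y2):.4f}")         # ~0.5783
```

Under these assumptions the expression reproduces the value of approximately 0.5896 quoted above, against roughly 0.578 for the naive classifier.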
One of the greatest obstacles in developing any statistical approach is demonstrating correctness. This formula is no different in that respect. If correctness could be demonstrated then, a priori, there would be an appropriate existing method which would negate the need for a new one. All that may be hoped for in any approach is that it generates appropriate answers when they are known, reasonable answers for all other cases, and that these answers follow logically from the underlying mathematics. However, what is clear is that the limitations of the naive Bayes' classifier render any calculations derived from it open to an unknown margin of error. Given the importance of accurately deriving likelihood ratios this is troubling. This is especially true when the statistical tolerance of calculations is marginal. As a quantum mechanical methodology this result is able to calculate accurate, iteration free, likelihood ratios which fall beyond the scope of existing statistical techniques, and offers a new theoretical approach within both statistics and physics. Further, through the addition of a Hamiltonian operator to introduce time‑evolution, it can offer likelihood ratios for future system states with appropriate updating of the contingency table. In contrast, Bayes' theorem is unable to distinguish directly between time‑dependent and time‑independent systems. This may lead to situations where the process of contingency table updating results in the same decisions being made repeatedly with the appearance of an ever increasing degree of certainty. Indeed, from \(\eqref{bayesproof}\), it would seem that the naive Bayes' classifier is only a special case of a more complex quantum mechanical framework, and may only be used where the exclusivity of data is guaranteed. The introduction of a Hamiltonian operator, and a full quantum dynamical formalism, is in progress, and should have profound implications for the physical sciences. Inevitably, such a formalism will require a sensible continuous classical limit. In other words, the final expressions for the likelihood ratios should contain a parameter, in some form of \(\hbar\), which, when going to 0, reproduces a classically known result. For example, the solutions to \(\eqref{c12sol}\) could be moderated as c_1(x_1,y_1,x_2,y_2) = \frac{\sqrt{x_1y_1}}{2x_2y_2}(1 – \exp(-\hbar)) \ , \\ c_2(x_1,y_1,x_2,y_2) = \frac{\sqrt{x_2y_2}}{2x_1y_1}(1 – \exp(-\hbar)) \ , so that in the limit of \(\hbar \rightarrow 0\), the intersection parameters, \(c_1\) and \(c_2\), vanish to return the formalism to the classical situation of independent data. This article has demonstrated both theoretically, and practically, that a quantum mechanical methodology can overcome the axiomatic limitations of classical statistics. In doing so, it challenges the orthodoxy of de Finetti's epistemological approach to statistics by demonstrating that it is possible to derive "real" likelihood ratios from information systems without recourse to arbitrary and subjective evaluations. While further theoretical development work needs to be undertaken, particularly with regards to the application of these mathematics in other domains, it is hoped that this article will help advance the debate over the nature and meaning of statistics within the physical sciences. 
YHH would like to thank the Science and Technology Facilities Council, UK, for grant ST/J00037X/1, the Chinese Ministry of Education for a Chang-Jiang Chair Professorship at NanKai University, as well as the City of Tian-Jin for a Qian-Ren Scholarship, and Merton College, Oxford, for her enduring support.
Caves, C., Fuchs, C., & Schack, R. (2002). Conditions for compatibility of quantum-state assignments. Physical Review A, 66(6), 062111. doi: 10.1103/PhysRevA.66.062111
de Finetti, B. (1974). Theory of probability: A critical introductory treatment (Vol. 1). New York, New York: Wiley.
Dempster, A. P., Laird, N. M., & Rubin, D. B. (1977). Maximum likelihood from incomplete data via the EM algorithm. Journal of the Royal Statistical Society, Series B (Statistical Methodology), 39(1), 1–38.
Oaksford, M., & Chater, N. (2007). Bayesian rationality. The probabilistic approach to human reasoning. Oxford, England: Oxford University Press.
Timpson, C. G. (2008). Quantum Bayesianism: A study. Studies in History and Philosophy of Science Part B: Studies in History and Philosophy of Modern Physics, 39(3), 579–609. doi: 10.1016/j.shpsb.2008.03.006
Wu, C. F. J. (1983). On the convergence properties of the EM algorithm. The Annals of Statistics, 11(1), 95–103. doi: 10.1214/aos/1176346060
Structural covariance model reveals dynamic reconfiguration of triple networks in autism spectrum disorder
Zhiliang Long1, Xujun Duan1, Heng Chen1, Youxue Zhang1 & Huafu Chen1
Open data sharing provides the opportunity to investigate the mechanisms underlying autism spectrum disorder (ASD) with advanced techniques. In the current study, we employed a structural covariance (SC) model to investigate the development of the triple networks in ASD. Data from 307 individuals with ASD and 337 typical controls were collected and further classified into four distinct age cohorts. Nine brain seeds belonging to the default mode network, salience network, and central executive network were obtained. SC between those seeds, as well as its topological properties, was calculated for each group within each age cohort. Statistical analysis revealed that ASD showed dynamic reconfigurations of SC of the triple networks, especially of the right fronto-insular cortex. The results might indicate that ASD involves distinct mechanisms within distinct age cohorts. Additionally, big data sharing, together with the SC model, can facilitate understanding of the mechanisms underlying ASD.
In the last decades, much progress has been made in open data sharing, especially in neuroimaging studies of brain function and disease (Russell et al. 2014). On the one hand, the increasing sharing of big data has a profound impact on research in cognitive neuroscience and psychiatry, resulting in advances in the diagnosis and treatment of psychiatric and neurological disease. On the other hand, big data sharing facilitates the development of advanced analysis methods, which in turn help identify new biomarkers of brain diseases. The structural covariance (SC) model is one such method, with potential value for understanding various psychiatric conditions (Aaron et al. 2013). SC refers to the phenomenon that inter-individual differences in the structure of one brain region often covary with inter-individual structural differences in other brain areas. It has been recognized that genetics, behavior, and plasticity together contribute to covariance between brain areas (Krista et al. 2009; William et al. 2001). SC is usually characterized by the linear dependence between two large samples of human data using the product-moment correlation coefficient, Pearson's r. The method has been widely employed to investigate the development of brain structures across the lifespan (Brandon et al. 2010), as well as neurodegenerative disorders such as Alzheimer's disease (Yong et al. 2008) and schizophrenia (Serge et al. 2005). Questions surrounding SC in autism spectrum disorder (ASD) still remain. ASD is a neurodevelopmental disorder characterized by deficits in communication and social interaction, along with repetitive patterns of behavior and interests. Neuroimaging studies have demonstrated that dysfunction of the triple networks [including the central executive network (CEN), salience network (SN), and default mode network (DMN)] is associated with ASD (Vinod et al. 2011). To the best of our knowledge, there have been no studies investigating the SC of those networks, especially their dynamic configuration across ages in ASD. The lack of investigation of SC in ASD is probably due to the limited number of participants recruited in previous studies. Therefore, in the current study, we collected big datasets from the Autism Brain Imaging Data Exchange (ABIDE).
Then we employed the SC model to investigate the SC and topological properties of the triple networks and their development across ages in ASD.
Participants and data preprocessing
The datasets supporting the conclusions of this article are available from the ABIDE (http://fcon_1000.projects.nitrc.org/indi/abide/) database. The datasets involved 307 ASD and 337 typical controls (TC). Participants were further classified into four groups based on different age cohorts. They were group 1 (59 ASD, 63 TC; age 6–11 years), group 2 (109 ASD, 114 TC; age 11–15 years), group 3 (46 ASD, 52 TC; age 15–18 years), and group 4 (93 ASD, 107 TC; age >18 years). Written informed consent was obtained from all participants. Experimental protocols were approved by the local Institutional Review Boards. All participants were scanned using a 3 Tesla SIEMENS scanner following diagnostic assessment. Subjects were asked to relax and look at a white cross-hair against a black background. The anatomical image was then acquired for each participant. Data preprocessing was conducted using SPM8 software. First, all T1-weighted anatomical images were manually reoriented to place the anterior commissure at the origin of the three-dimensional Montreal Neurological Institute (MNI) space. The images were then segmented into gray matter, white matter, and cerebrospinal fluid (John et al. 2005). A diffeomorphic non-linear registration algorithm was used to spatially normalize the segmented images (John et al. 2007). This procedure generated a template for a group of individuals. The resulting images were spatially normalized into the MNI space using affine spatial normalization, and resampled to 1.5 × 1.5 × 1.5 mm3. Finally, the resulting gray matter images were smoothed with a 6 mm full-width half-maximum (FWHM) isotropic Gaussian kernel.
Structural connectivity estimation
Nine coordinates in MNI space were obtained from a previous study (Lucina et al. 2011). The brain areas corresponding to those coordinates were the left and right fronto-insular cortex (lFIC/rFIC), anterior cingulate cortex (ACC), left and right dorsolateral prefrontal cortex (lDLPFC/rDLPFC), left and right posterior parietal cortex (lPPC/rPPC), ventromedial prefrontal cortex (VMPFC), and posterior cingulate cortex (PCC). The lFIC/rFIC and ACC belong to the SN. The lDLPFC/rDLPFC and lPPC/rPPC belong to the CEN. The VMPFC and PCC belong to the DMN. Nine spherical regions of interest (ROIs) with a radius of 8 mm were generated based on those coordinates. Those ROIs were multiplied by a gray matter mask to exclude voxels outside the gray matter. The gray matter density value was averaged across voxels within each ROI for each participant. The SC analysis was then conducted as follows: Pearson correlation analysis was performed between pairs of ROIs to characterize the SC in ASD and HC. A permutation test was employed to determine the statistical significance level. Briefly, we first calculated the between-group difference of the correlation value. We then randomly assigned each participant to one of two groups with the same sizes as the original ASD and HC groups. This randomization procedure was repeated for 10,000 permutations, which generated a null permutation distribution. For each permutation, the new between-group difference was calculated. We then assigned a p value to the between-group difference by computing the proportion of differences exceeding the null distribution values.
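A minimal Python sketch of this covariance estimation and permutation procedure is given below. It is only an illustration under simplifying assumptions: synthetic data stand in for the ROI-mean gray matter values, a two-sided p value is used, only 1,000 permutations are drawn rather than 10,000, and the regression of site and IQ covariates is omitted; none of the function names come from an existing package.

```python
import numpy as np

def structural_covariance(roi_gm):
    """Pearson correlations between ROI-mean gray matter values across subjects.
    roi_gm has shape (n_subjects, n_rois); the result is an n_rois x n_rois matrix."""
    return np.corrcoef(roi_gm, rowvar=False)

def permutation_test_edge(gm_a, gm_b, i, j, n_perm=1000, seed=0):
    """Permutation p value for the group difference in SC between ROIs i and j."""
    rng = np.random.default_rng(seed)
    observed = (np.corrcoef(gm_a[:, i], gm_a[:, j])[0, 1]
                - np.corrcoef(gm_b[:, i], gm_b[:, j])[0, 1])
    pooled, n_a = np.vstack([gm_a, gm_b]), gm_a.shape[0]
    null = np.empty(n_perm)
    for k in range(n_perm):
        idx = rng.permutation(pooled.shape[0])      # shuffle group labels
        a, b = pooled[idx[:n_a]], pooled[idx[n_a:]]
        null[k] = (np.corrcoef(a[:, i], a[:, j])[0, 1]
                   - np.corrcoef(b[:, i], b[:, j])[0, 1])
    return np.mean(np.abs(null) >= np.abs(observed))  # two-sided proportion

# Synthetic stand-ins sized like group 1 (59 ASD, 63 TC) with 9 ROIs.
rng = np.random.default_rng(42)
gm_asd, gm_tc = rng.normal(size=(59, 9)), rng.normal(size=(63, 9))
sc_asd = structural_covariance(gm_asd)                      # 9 x 9 SC matrix
p_val = permutation_test_edge(gm_asd, gm_tc, 0, 1)
print(sc_asd.shape, round(p_val, 3))
```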
The multiple comparisons were corrected using an exploratory threshold of 1/N (here, N is the number of edges, which is 9*8/2). Notably, the effects of site and full IQ were regressed out before the Pearson correlation analysis. The whole procedure mentioned above was conducted separately for each of the four groups.
Topological properties calculation
Within each age cohort, we computed the topological properties of the SC networks, including the clustering coefficient, shortest path length, local efficiency, and global efficiency, for the ASD group and the HC group, and compared those properties between the two groups. Connectivity sparsity, computed as the number of existing edges divided by the maximum possible number of edges in a network, was first employed to threshold each SC network. Here, a connectivity sparsity of 40% was used to ensure that all SC networks were fully connected. We then computed those topological properties for each SC network. The clustering coefficient of a node i is calculated as: $$ C_{i}^{\text{w}} = \frac{1}{k_{i} (k_{i} - 1)}\sum\limits_{j,h \in N} (w_{ij} w_{ih} w_{jh} )^{1/3} , $$ where w_ij is the weight between node i and node j, k_i is the degree of node i, and N is the number of nodes. The clustering coefficient of a network is computed as: $$ C^{\text{w}} = \frac{1}{N}\sum\limits_{i \in N} C_{i}^{\text{w}} $$ This measure indicates the extent of local interconnectivity or cliquishness in the network. The characteristic shortest path length of a network is defined as: $$ L^{\text{w}} = \frac{N(N - 1)}{\sum\nolimits_{i = 1}^{N} \sum\nolimits_{j \ne i}^{N} 1/L_{ij} }, $$ where L_ij is the length of the shortest path between node i and node j. This measure quantifies the ability for parallel information propagation. The global efficiency of a network is computed as: $$ E_{\text{global}} = \frac{1}{N(N - 1)}\sum\limits_{i \ne j \in N} \frac{1}{L_{ij}} $$ which is a measure of parallel information transfer. The local efficiency of a network is defined as: $$ E_{\text{local}} = \frac{1}{N}\sum\limits_{i \in G} E_{\text{global}} (G_{i}) , $$ where E_global(G_i) is the global efficiency of the neighborhood sub-graph G_i of node i. The local efficiency can be understood as a measure of the fault tolerance of the network, indicating how well each sub-graph exchanges information when the index node is eliminated. The statistical tests were conducted using the permutation test, as in the "Participants and data preprocessing" section. A statistical level of p < 0.05 was considered significant.
The gray matter density value of each ROI strongly covaries with the values of the remaining ROIs in both the ASD group and the HC group (Fig. 1a). The SC values within the SN were much higher than those within the DMN and CEN, and also than those between the three networks (Fig. 1a). Further, the ASD group had significantly reduced SC between lPPC and VMPFC (Fig. 1b).
Fig. 1 The SC connectivity matrices across ages: the SC connectivity matrices in ASD and HC (a), and the statistical differences in SC value between ASD and HC (b). The value in the matrix represents the extent to which one brain area covaries with other areas. Notably, these results were obtained by including all participants in the ASD and HC groups.
We then observed different SC patterns between distinct age cohorts. In group 1, ASD had increased SC between ACC and lFIC, between ACC and VMPFC, and between rFIC and lFIC.
In group 2, ASD had reduced SC between the lPPC and the rFIC, lFIC and VMPFC, and between the ACC and the rFIC and lFIC. In group 3, increased SC was found in ASD between the rPPC and rFIC, while in group 4, decreased SC was observed in ASD between the ACC and lFIC and between the VMPFC and lFIC (Fig. 2).

Fig. 2 The SC connectivity matrices at different age cohorts. The SC connectivity matrices in ASD and HC, and the statistical differences in SC value between ASD and HC in group 1 (6–11 years), group 2 (11–15 years), group 3 (15–18 years), and group 4 (18–58 years)

We further found that patients with ASD had a significantly higher clustering coefficient in early adolescence and lower global efficiency in late adolescence, compared to HC (Fig. 3). There was no difference in characteristic shortest path length or local efficiency between ASD and TC within any cohort.

Fig. 3 Topological differences between ASD and HC. The difference (ASD vs. HC) in the clustering coefficient and global efficiency of the SC network in group 1 (6–11 years), group 2 (11–15 years), group 3 (15–18 years), and group 4 (>18 years). The red circles indicate significant differences between ASD and HC. The gray zones are the 95 % confidence intervals obtained from the null permutation distribution

In the current study, we observed dynamic SC reorganization of the triple networks, especially of the rFIC, by employing the SC model. It has been demonstrated that the rFIC, a critical component of the SN, mediates the interaction between the CEN and the DMN (Vinod et al. 2010; William et al. 2007). A developmental study has shown that the maturation of rFIC connectivity plays a critical role in brain network maturation to support complex cognitive processes (Lucina et al. 2011). These results likely suggest abnormal development of cognitive processes in ASD. More specifically, we first found decreased SC between the lPPC and VMPFC, which might suggest impaired executive function in ASD (Timothy et al. 2006). Interestingly, we then observed dramatically distinct SC patterns, especially in the connectivity of the rFIC, in ASD across age cohorts. The findings here provide structural substrates for the functional deficits of the rFIC observed in ASD (Jyri-Johan et al. 2010), and might suggest specific mechanisms underlying ASD in different age cohorts. Additionally, lower global efficiency in ASD was observed at a specific age cohort, indicating that the altered information communication pattern within the triple networks is affected by age. Overall, we are the first to report the dynamic SC reconfiguration of the triple networks in ASD using the SC model, and we highlight the importance of age effects in autism research.

This study investigated the dynamic changes of SC and its topological properties as a function of age cohort in patients with ASD by employing a large dataset. The results suggest a crucial role of triple-network abnormalities in the pathology of ASD at specific age ranges, and highlight the effect of age on the development of autism.

Aaron AB, Jay NG, Ed B (2013) Imaging structural co-variance between human brain regions. Nat Rev Neurosci 14:322–336
Brandon AZ, Efstathios DG, Juan Z, William WS (2010) Network-level structural covariance in the developing brain. Proc Natl Acad Sci 107:18191–18196
John A (2007) A fast diffeomorphic image registration algorithm. Neuroimage 38:95–113
John A, Karl JF (2005) Unified segmentation. Neuroimage 26:839–851
Jyri-Johan P, Jukka R, Xiangyu L, Irma M, Osmo T, Juha N, Tuemo S, Jukka R, Tuula H, Helena H, Katja J, Sanna K, Marja-Leena M, Yufeng Z, Vesa K (2010) Alterations in regional homogeneity of resting-state brain activity in autism spectrum disorders. Brain Res 1321:169–179
Krista LH, Jason L, Andrea N, Marie F, Ellen W, Alan E, Gottfried S (2009) Musical training shapes structural brain development. J Neurosci 29:3019–3025
Lucina QU, Kaustubh SS, Srikanth R, Vinod M (2011) Dynamic reconfiguration of structural and functional connectivity across core neurocognitive brain networks with development. J Neurosci 31:18578–18589
Russell AP, Krzysztof JG (2014) Making big data open: data sharing in neuroimaging. Nat Neurosci 17:1510–1517
Serge AM, Monte SB, Adam MB, Lina S (2005) Cortical intercorrelations of frontal area volumes in schizophrenia. Neuroimage 27:753–770
Timothy JS, Nicole R, John LB, Bruce T, Gary E, Michael WO, Ross C (2006) Visuospatial processing and the function of prefrontal-parietal networks in autism spectrum disorders: a functional MRI study. Am J Psychiatry 163:1440–1443
Vinod M (2011) Large-scale brain networks and psychopathology: a unifying triple network model. Trends Cogn Sci 15:483–506
Vinod M, Lucina QU (2010) Saliency, switching, attention and control: a network model of insula function. Brain Struct Funct 214:655–667
William FB, Hilleke EP, Dorret IB, Danielle P, Eco JG, Hugo GS, Neeltje EH, Clarine JO, Rene SK (2001) Quantitative genetic modeling of variation in human brain morphology. Cereb Cortex 11:816–824
William WS, Vinod M, Alan FS, Jennifer K, Gary HG, Heather K, Allan LR, Michael DG (2007) Dissociable intrinsic connectivity networks for salience processing and executive control. J Neurosci 27:2349–2356
Yong H, Xhang C, Alan E (2008) Structural insights into aberrant topological patterns of large-scale cortical networks in Alzheimer's disease. J Neurosci 28:4756–4766
HC and ZL proposed and implemented the idea. ZL, HC, and YZ performed the data analysis. ZL and XD drafted the manuscript. All authors read and approved the final manuscript.
The work is supported by the 863 project (2015AA020505) and the Natural Science Foundation of China (61533006 and 81301279).
Key Laboratory for Neuroinformation of Ministry of Education, Center for Information in BioMedicine, School of Life Science and Technology, University of Electronic Science and Technology of China, Chengdu, 610054, China: Zhiliang Long, Xujun Duan, Heng Chen, Youxue Zhang & Huafu Chen
Correspondence to Huafu Chen.
Long, Z., Duan, X., Chen, H. et al. Structural covariance model reveals dynamic reconfiguration of triple networks in autism spectrum disorder. Appl Inform 3, 7 (2016). https://doi.org/10.1186/s40535-016-0023-0
Accepted: 08 October 2016
Keywords: Structural covariance, Triple networks
Hochschild cohomology of a Sullivan algebra
Oteng Maphane
Mathematics & Statistical Sciences
A derivation $ \theta $ is a $ k $-linear map $ \theta :A^{n}\rightarrow A^{n-k} $ such that $ \theta (ab)=\theta (a)b+(-1)^{k|a|}a\theta (b), $ where $ A=\underset{n\geq 0}{\oplus}A^{n} $ is a commutative graded algebra over a commutative ring $ \Bbbk. $ Let $ \der _{k}A $ denote the vector space of all derivations of degree $ k $ and $$ \der A=\underset{k}{\oplus}\der _{k}A. $$ If $ A=(\wedge V,d) $ is a minimal Sullivan algebra, then there is a homomorphism $ \phi : (\wedge _{A}L,d_{0})\rightarrow C^{\ast}(A;A) $ which induces an isomorphism of graded Gerstenhaber algebras in homology, where $ L=s^{-1}(\der A). $ The latter shows that the Hochschild cochain complex of $ A $ with coefficients in $ A $ can be computed in terms of derivations of $ A. $ In this talk we shall use this method to compute the Hochschild cohomology of the minimal Sullivan algebra of a formal homogeneous space, the Grassmannian over the quaternion division algebra, $$Sp(5)/Sp(2) \times Sp(3).$$
Maphane, O. (2016). Hochschild cohomology of a Sullivan algebra. Petroleum Abstracts, 1(1). http://www.biomathforum.org/samsa-congress.org/samsa/index.php/abstracts/article/view/149
Volume 20 Supplement 2: The International Conference on Intelligent Biology and Medicine 2019: Computational methods for drug interactions
A deep learning-based method for drug-target interaction prediction based on long short-term memory neural network
Yan-Bin Wang, Zhu-Hong You, Shan Yang, Hai-Cheng Yi, Zhan-Heng Chen & Kai Zheng
BMC Medical Informatics and Decision Making volume 20, Article number: 49 (2020)
The key to modern drug discovery is to find, identify and prepare drug molecular targets. However, owing to limitations in throughput, precision and cost, traditional experimental methods are difficult to apply widely to infer these potential Drug-Target Interactions (DTIs). Therefore, it is urgent to develop effective computational methods to validate the interactions between drugs and targets. We developed a deep learning-based model for DTIs prediction. The proteins' evolutionary features are extracted via the Position Specific Scoring Matrix (PSSM) and Legendre Moments (LM) and combined with the drugs' molecular substructure fingerprints to form feature vectors of drug-target pairs. We then utilized Sparse Principal Component Analysis (SPCA) to compress the features of drugs and proteins into a uniform vector space. Lastly, a deep long short-term memory (DeepLSTM) network was constructed to carry out the prediction. A significant improvement in DTIs prediction performance can be observed in the experimental results, with AUCs of 0.9951, 0.9705, 0.9951 and 0.9206, respectively, on four classes of important drug-target datasets. Further experiments preliminarily prove that the proposed characterization scheme has a great advantage in feature expression and recognition. We have also shown that the proposed method can work well with small datasets. The results demonstrate that the proposed approach has a great advantage over state-of-the-art drug-target predictors. To the best of our knowledge, this study is the first to test the potential of deep learning methods with memory and Turing completeness in DTIs prediction.
Drug targets are the foundation of drug research and development, and over the past few centuries people have relied heavily on the hundreds of currently known drug targets to discover drugs [1]. Although the number of known drugs interacting with target proteins continues to increase, the number of approved drug targets is still only a small fraction of the human proteome. The detection of interactions between drugs and targets is the first step in the development of new drugs, and one of the key factors for drug screening and drug-directed synthesis. Benefiting from high-throughput experiments, increasing understanding of the structural space of drug compounds and the genomic space of target proteins has been gained. Unfortunately, owing to the time-consuming and laborious experimental process, our understanding of the relationship between the two spaces is still rather limited [2, 3]. Thanks to the rapid increase in publicly available biological and chemical data, researchers can systematically learn from and analyze heterogeneous new data through computational methods and revisit drug-target interactions (DTIs). Several free databases focus on relationships between drugs and targets, such as ChEMBL [4], DrugBank [5] and SuperTarget [6]. These database contents constitute the gold standard datasets, which are essential for the development of computational methods to predict DTIs.
At present, the computational methods for DTIs prediction can be classified into three categories: ligand-based approaches, docking approaches and feature learning approaches. Ligand-based methods are often used to estimate potential targets of action by calculating the chemical structural similarity of a given drug or compound to active compounds of known targets. Keiser et al. [3] proposed a method for inferring protein targets based on the chemical similarity of their ligands. Yamanishi et al. [7,8,9] predict unknown drug-target interactions by integrating the chemical structural similarity of compounds and the amino acid sequence similarity of proteins into a uniform space. Campillos et al. [6] predict potential target proteins through the similarity of phenotypic side effects. This kind of ligand-based method is simple and effective when chemical structural similarity is high, but this reliance also greatly limits the scope and accuracy of its application. Docking methods calculate the shape and electrostatic matching of drugs and potential targets in three-dimensional structure, thereby inferring the possible targets of action of a drug. Among them, the reverse docking method is the most commonly used prediction method. This method ranks drug targets by predicting the interaction mode and affinity between a given compound and a target, thereby determining possible targets for the drug. Cheng et al. [10] developed a structure-based maximum affinity model. Li et al. [11] developed a web server called TarFisDock that uses docking methods to identify drug targets. Such methods fully consider the three-dimensional structural information of the target protein, but molecular docking itself still has problems that have not yet been effectively solved, such as protein flexibility, the accuracy of scoring functions, and solvent water molecules, which lead to low prediction accuracy for reverse docking. Another serious problem with docking is that it cannot be applied to proteins with unknown 3D structures; so far, proteins with known 3D structures are still only a small part of all proteins. This severely limits the applicability of the method. A feature learning approach treats drug-target relationships as a two-class problem: interaction and non-interaction. Such methods learn the potential patterns of known compound-target pairs using machine learning algorithms, generate prediction models by iterative optimization, and then infer potential DTIs. Yu et al. [12] proposed a systematic approach based on chemical, genomic, and pharmacological information. Faulon et al. [13] predicted drug targets using the signature molecular descriptor. Even though these methods have accelerated the discovery of drug targets, there is still much room for improvement. In this work, we propose a deep learning-based method to identify unknown DTIs. The proposed method consists of three steps: (i) Representation of drug-target pairs. The drug molecules are encoded as fingerprint features, and the protein sequence features are obtained by applying Legendre Moments (LMs) to the Position Specific Scoring Matrix (PSSM), which contains evolutionary information about the protein. (ii) Feature compression and fusion. Sparse Principal Component Analysis (SPCA) is used to reduce the feature dimension and information redundancy. (iii) Prediction. The Deep Long Short-Term Memory (DeepLSTM) model is adopted to execute the prediction task.
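As a rough illustration of step (iii), the sketch below builds a stacked ("deep") LSTM binary classifier in Keras. It is not the authors' released implementation: the layer count, dropout rate, softmax output, mean squared error loss and Nadam optimizer follow the model-training details reported later in this paper, while the unit count, input dimensionality and the random training data are placeholder assumptions.

```python
import numpy as np
import tensorflow as tf
from tensorflow.keras import layers, models, optimizers

N_FEATURES = 400   # assumed size of the SPCA-compressed drug-target feature vector
N_UNITS = 36       # hidden units per LSTM layer (value reported in the Methods)

def build_deep_lstm(n_features=N_FEATURES, n_units=N_UNITS, n_layers=4, dropout=0.5):
    """Stacked LSTM with dropout and a softmax output: interaction vs. non-interaction."""
    model = models.Sequential()
    # time step = 1: each drug-target pair is a single "time step" of n_features values
    model.add(layers.Input(shape=(1, n_features)))
    for i in range(n_layers):
        # all but the last LSTM layer return sequences so that layers can be stacked
        model.add(layers.LSTM(n_units, return_sequences=(i < n_layers - 1)))
        model.add(layers.Dropout(dropout))
    model.add(layers.Dense(2, activation="softmax"))
    model.compile(loss="mse",
                  optimizer=optimizers.Nadam(learning_rate=0.002),
                  metrics=["accuracy"])
    return model

# Hypothetical usage with random data standing in for real drug-target feature vectors
model = build_deep_lstm()
X = np.random.rand(128, 1, N_FEATURES).astype("float32")
y = tf.keras.utils.to_categorical(np.random.randint(0, 2, size=128), num_classes=2)
model.fit(X, y, batch_size=64, epochs=2, verbose=0)
print(model.predict(X[:4]))
```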
The flow of our proposed model is represented in Fig. 1. We implement the proposed method on four important DTIs datasets involving enzymes, ion channels, GPCRs and nuclear receptors. The results show performance superior to existing state-of-the-art algorithms for DTI prediction.
Fig. 1 Schematic diagram of drug targets predicted by the proposed method
We collected information about the interactions between drug compounds and target proteins from the KEGG [14], DrugBank [5], and SuperTarget [6] databases [14, 15]. Table 1 summarizes the data sets according to the numbers of drug compounds, target proteins and interactions. This set of known DTIs is considered the gold standard for assessing the performance of the proposed method. Target proteins are linked to drug molecules to form a drug-target network. To obtain positive datasets from the network, all identified drug-target pairs in the gold standard dataset are considered positive samples. The negative samples correspond to the remaining drug-target pairs in the network. Since the number of non-interaction pairs is much larger than that of interaction pairs, the constructed datasets are imbalanced. To avoid the bias caused by imbalanced data sets, we randomly selected negative samples from the remaining drug-target pairs in the network until the number of negative samples was the same as that of positive samples.
Table 1 The selected drug-target interaction data sets from the KEGG, SuperTarget, and DrugBank databases
Characterization of drug molecules
The ability of substructure fingerprints to characterize drug molecules has been confirmed in several studies. Based on a comprehensive analysis of previous research results, the PubChem fingerprint was used to characterize each drug molecule. In this work, drugs are encoded as Boolean substructure vectors representing the presence or absence of the corresponding substructures in a molecule. The PubChem database defines 881 chemical substructures, each of which is assigned to a particular position. Therefore, if a substructure appears in the drug compound, the corresponding position in the fingerprint vector is set to 1; otherwise, it is set to 0. Hence, each drug was represented as an 881-dimensional vector [16].
Characterization of target proteins
Position specific scoring matrix
The position specific scoring matrix (PSSM) was first introduced for finding distantly related proteins. In recent years, PSSMs have been widely used in proteomics and genomics research, such as the prediction of DNA- or RNA-binding sites and membrane protein types. In this paper, the PSSM is used to encode proteins and obtain evolutionary information about their amino acids. The PSSM of a protein A with N amino acid residues can be expressed as
$$ A_{PSSM}=\left[\begin{array}{cccccc}A_{1\to 1}& A_{1\to 2}& \dots & A_{1\to j}& \dots & A_{1\to 20}\\ A_{2\to 1}& A_{2\to 2}& \dots & A_{2\to j}& \dots & A_{2\to 20}\\ \vdots & \vdots & \vdots & \vdots & \vdots & \vdots \\ A_{i\to 1}& A_{i\to 2}& \dots & A_{i\to j}& \dots & A_{i\to 20}\\ \vdots & \vdots & \vdots & \vdots & \vdots & \vdots \\ A_{N\to 1}& A_{N\to 2}& \dots & A_{N\to j}& \dots & A_{N\to 20}\end{array}\right] $$
where $A_{i\to j}$ is a score representing the probability of the i-th residue being mutated to the j-th native amino acid, N is the number of amino acid residues of sequence A, and 20 is the number of native amino acid types.
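The following sketch shows, under stated assumptions, how a single drug-target pair is represented before feature extraction: an 881-bit PubChem-style substructure fingerprint for the drug and an N × 20 PSSM for the protein. The substructure key positions and the random PSSM values are illustrative stand-ins; the real fingerprint keys are defined by PubChem and the real PSSM is produced by PSI-BLAST, as described next.

```python
import numpy as np

N_PUBCHEM_KEYS = 881   # number of substructure keys defined by PubChem

def substructure_fingerprint(present_key_positions, n_keys=N_PUBCHEM_KEYS):
    """Boolean vector: 1 where the corresponding substructure occurs in the molecule."""
    fp = np.zeros(n_keys, dtype=np.int8)
    fp[list(present_key_positions)] = 1
    return fp

# Hypothetical drug: suppose substructure keys 0, 12 and 880 were matched
drug_vector = substructure_fingerprint({0, 12, 880})

# Hypothetical protein of length N = 350: the PSSM is an N x 20 array of
# position-specific substitution scores (random numbers used here as a stand-in)
N = 350
pssm = np.random.randn(N, 20)

print(drug_vector.shape, pssm.shape)   # (881,) (350, 20)
```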
To obtain the PSSM for each protein sequence, Position Specific Iterated BLAST (PSI-BLAST) [17, 18] was used with default parameters, except that three iterations were performed [19, 20].
Legendre moments
Invariant moments are global statistical features with excellent size, rotation and displacement invariance, which benefits the extraction of stable features. Legendre moments (LMs), a fast moment-invariant feature extraction technique, show good performance in many pattern recognition applications, viz. graphic analysis, target recognition, image processing, classification and prediction. Here, we use Legendre moments to further refine the evolutionary information contained in the PSSM and to generate a feature vector. LMs are continuous orthogonal moments, which can be used to represent objects with minimal information redundancy [21, 22]. The LMs of order (a, b) are defined as
$$ L_{ab}=\frac{(2a+1)(2b+1)}{4}\sum\limits_{i=1}^{C}\sum\limits_{j=1}^{V} h_{ab}(x,y)\, I(x_i,y_i) $$
where I(x, y) is a set of discrete points $(x_i, y_i)$, with $x_i, y_i \in [-1, +1]$. In this work, I(x, y) denotes the PSSM, C is the number of rows of the PSSM, and V is the sum of each column of the PSSM [23, 24]. The term $h_{ab}(x, y)$ is given by
$$ h_{ab}(x,y)={\int}_{x_i-\frac{\Delta x}{2}}^{x_i+\frac{\Delta x}{2}}{\int}_{y_i-\frac{\Delta y}{2}}^{y_i+\frac{\Delta y}{2}} R_a(x)\, R_b(y)\, dx\, dy $$
where
$$ R_a(x)=\frac{1}{2^a a!}\frac{d^a}{dx^a}\left(x^2-1\right)^a=\frac{1}{2^a}\sum\limits_{k=0}^{[a/2]}(-1)^k\binom{a}{k}\binom{2(a-k)}{a}x^{a-2k} $$
The integral terms in (3) are commonly estimated by a zeroth-order approximation, that is, the values of the Legendre polynomials are assumed to be constant over the intervals $[x_i-\frac{\Delta x}{2}, x_i+\frac{\Delta x}{2}]$ and $[y_i-\frac{\Delta y}{2}, y_i+\frac{\Delta y}{2}]$. Hence, the set of approximated LMs is defined as:
$$ L'_{ab}=\frac{(2a+1)(2b+1)}{KL}\sum\limits_{i=1}^{K}\sum\limits_{j=1}^{L} R_a(x_i)\, R_b(y_j)\, g(x_i,y_j) $$
As a result, by applying LMs to the PSSM of each protein sequence with the maximum order set to a, b = 30, we obtained 31 × 31 = 961 features from each protein sequence.
Feature compression and fusion
We obtained a 1842-dimensional drug-target feature vector for each drug-target pair by combining the drug substructure fingerprint features (881-D) with the protein LM features (961-D). To reduce the classifier's computation time and memory consumption and to remove noisy features from the original feature space, sparse principal component analysis (SPCA) is used to integrate the drug and target protein features into a single representation and to reduce the feature dimension and redundant information. Classical principal component analysis (PCA) has an obvious drawback: each PC is a linear combination of all variables and the loadings are typically nonzero. Thus, dealing with a combination of two different types of features, such as the drug and protein features produced here, often yields unpredictable results. SPCA is an improved PCA that uses the lasso (elastic net) to produce principal components with sparse loadings, which overcomes the above problem. Finally, we obtain a 400-dimensional refined feature vector as the input to the classifier.
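Below is a minimal sketch, assuming the zeroth-order approximation of Eq. (5), of how the 961 Legendre-moment features could be computed from a PSSM and then fused with the 881-dimensional fingerprints and compressed with sparse PCA. The mapping of rows and columns onto a grid in [-1, 1], the random stand-in data, and the reduced number of SPCA components (used only to keep the demo fast; the paper keeps 400) are all assumptions, not the authors' exact implementation.

```python
import numpy as np
from numpy.polynomial.legendre import legval
from sklearn.decomposition import SparsePCA

def legendre_poly(n, x):
    """Evaluate the Legendre polynomial P_n at the points x."""
    c = np.zeros(n + 1)
    c[n] = 1.0
    return legval(x, c)

def legendre_moments(pssm, order=30):
    """Zeroth-order approximation of the Legendre moments of a PSSM (Eq. 5).
    Returns (order + 1)**2 features, i.e. 961 for order = 30."""
    K, C = pssm.shape                                   # K residues x 20 amino acid columns
    x = -1.0 + 2.0 * (np.arange(K) + 0.5) / K           # row grid mapped into [-1, 1]
    y = -1.0 + 2.0 * (np.arange(C) + 0.5) / C           # column grid mapped into [-1, 1]
    Pa = np.array([legendre_poly(a, x) for a in range(order + 1)])   # (order+1, K)
    Pb = np.array([legendre_poly(b, y) for b in range(order + 1)])   # (order+1, C)
    k = np.arange(order + 1)
    scale = np.outer(2 * k + 1, 2 * k + 1) / (K * C)
    # L'_{ab} = scale_{ab} * sum_{i,j} P_a(x_i) P_b(y_j) pssm[i, j]
    return (scale * (Pa @ pssm @ Pb.T)).ravel()

# Hypothetical data: 50 drug-target pairs with random stand-in features
rng = np.random.default_rng(0)
protein_features = np.array(
    [legendre_moments(rng.standard_normal((300, 20))) for _ in range(50)])
drug_features = rng.integers(0, 2, size=(50, 881))

# Fuse the 881-D fingerprints with the 961-D LM features (1842-D) and compress
# with sparse PCA; 10 components are used here for speed instead of 400.
X = np.hstack([drug_features, protein_features])
X_compressed = SparsePCA(n_components=10, random_state=0).fit_transform(X)
print(X.shape, X_compressed.shape)    # (50, 1842) (50, 10)
```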
Constructing the DeepLSTM model
LSTM is a special recurrent neural network (RNN) architecture that provides better performance than traditional RNNs [25]. In this section, we explore the application of the LSTM architecture to predicting drug-target interactions. One of the major differences from the standard RNN is that the LSTM architecture uses memory blocks in place of the summation units. Memory blocks, as shown in Fig. 2, contain self-connected memory cells for storing the temporal state, and gates (special multiplicative units), namely an input gate, an output gate and a forget gate, for controlling the information flow. To better show the work of the gate units, memory cells are not drawn in Fig. 2. These gates enable the LSTM to store and access information over lengthy periods of time, thereby reducing the impact of the vanishing gradient problem on the prediction model. The flow of input activation entering the memory cell is controlled by the input gate [26, 27]. The flow of cell activation to other parts of the network is controlled by the output gate. Through the self-recurrent connection of the unit, the forget gate is added to the cell as input, so that the LSTM network can process continuous input streams. In addition, the LSTM cell can include peephole connections that allow the gates to be modulated according to the state values in the internal memory [28].
Fig. 2 Memory block of LSTM networks
We constructed the DeepLSTM by stacking multiple LSTM layers [29, 30]. Compared with a simple three-layer architecture, a deep architecture can make better use of its parameters by distributing them over multiple layers, and depth means that inputs go through more non-linear operations per time step.
Preventing overfitting
Neural networks are often optimized with a large number of parameters; however, such networks may overfit. Dropout is used to address this problem by randomly removing units and their connections from the neural network during training. The idea of "dropout" is to extract a "sparse" network from the original network; the sparse network is composed of all the surviving units, as shown in Fig. 3. In this paper, we follow previous work and set the dropout rate to 0.5. We have 35 hidden-layer units, which may generate $2^{35}$ different subnets during training. In the testing phase, a "mean network" strategy is adopted, which contains all of the original network connections, but their outgoing weights are halved in order to compensate for the fact that twice as many of them are active [31, 32].
Fig. 3 Dropout neural net model. Left: a standard fully connected network; right: a thinned network generated by applying dropout to the network on the left
Experiment settings
Evaluation indicators
In this paper, we evaluate the performance of our predictor by calculating the accuracy (ACC), true positive rate (TPR), specificity (SPC), positive predictive value (PPV), and Matthews correlation coefficient (MCC). The ACC reflects the overall level of prediction. The TPR is the proportion of positive samples that are correctly predicted. The SPC is the proportion of negative samples that are correctly predicted. The PPV is the proportion of true positives among the samples predicted to be positive. The MCC is a general measure of predictive performance for two-class problems.
These performance indicators are defined as follows:
$$ \mathrm{ACC}=\frac{TN+TP}{TN+FN+TP+FP} $$
$$ \mathrm{TPR}=\frac{TP}{FN+TP} $$
$$ \mathrm{SPC}=\frac{TN}{TN+FP} $$
$$ \mathrm{PPV}=\frac{TP}{TP+FP} $$
$$ \mathrm{MCC}=\frac{TP\times TN - FP\times FN}{\sqrt{\left(TP+FP\right)\times\left(TN+FN\right)\times\left(TP+FN\right)\times\left(TN+FP\right)}} $$
Here, FN, FP, TN and TP represent the numbers of false negatives, false positives, true negatives and true positives, respectively. In addition, the area under the Receiver Operating Characteristic curve (AUC) is calculated to measure the quality of prediction [33,34,35].
Model training
For each of the four datasets, we divided the data into a training set, a validation set and a test set. The test set accounts for one tenth of the total, the training set accounts for eight tenths of the remaining data, and the rest is used as the validation set. We use the training set to fit the DeepLSTM prediction model, the validation set to optimize the DeepLSTM network weights, and the test set to verify the model performance. Another benefit of using a validation set is that overfitting can be prevented by early stopping: model training is terminated when the error on the validation data no longer decreases and begins to increase. This trick avoids overfitting and reduces the training cost of the model. We use the hyperbolic tangent activation for the cell input and cell output units, and the logistic sigmoid for the input, output and forget gate units. The input to the LSTMs and RNNs is a 40-dimensional feature vector. The output layer is a fully connected network and uses a softmax function to produce probability outputs. In order to find the best network structure, we tested the performance of DeepLSTM models with different numbers of layers and units on the validation data. The number of hidden layers was trialed from 1 to 6, and the number of units from 20 to 200 with stride s = 4. Finally, a DeepLSTM model with 4 hidden layers and 36 units was selected. The weights of the DeepLSTM were initialized using random numbers with zero mean and standard deviation 0.1. We trained the model with the mean squared error loss and the Nadam optimizer, using a dynamic learning rate with an initial value of 0.002, decay of 0.004 and momentum of 0.5. The time step was set to 1 and the batch size was 64. Training was stopped after a maximum of 500 iterations, or earlier if there was no new best error on the validation data.
Statistics of the prediction performance of the proposed models are given in Table 2. Focusing on the enzymes dataset, our predictor gives a satisfying result of 92.92% accuracy, together with 99.31% sensitivity, 86.57% specificity, 88.04% precision, 86.75% MCC and an AUC of 0.9951. Similarly good results are obtained on the other three datasets with our method. On the ion channels dataset our method achieves 91.97% accuracy, with 93.23% sensitivity, 90.87% specificity, 89.95% precision, 85.19% MCC and an AUC of 0.9705. On the GPCRs dataset it achieves 91.80% accuracy, with 83.71% sensitivity, 100% specificity, 100% precision, 84.44% MCC and an AUC of 0.9511. On the nuclear receptors dataset it achieves 91.11% accuracy, with 95.24% sensitivity, 87.50% specificity, 86.96% precision, 83.76% MCC and an AUC of 0.9206.
It is particularly noteworthy that our method achieved over 90% accuracy on the nuclear receptors dataset with only 180 samples. This clearly shows that our method can provide excellent performance even with very small training sets, which is a substantial advantage that distinguishes it from other methods. The extraordinary performance comes mainly from the following three points: 1) our feature representation method can effectively extract discriminative features from the drug molecule and the target protein sequence; 2) SPCA enjoys advantages in several respects, including computational efficiency, high explained variance and the ability to identify important variables, compressing two different feature vectors into a unified feature space and extracting heterogeneous features; 3) the hierarchical structure enables the neural network to convert the input data into a new feature space that is more conducive to the classification task.
Table 2 Prediction performance for the four datasets in terms of ACC, TPR, SPC, PPV, precision, MCC, and AUC
Comparison with other classifier models
To exhibit the advantage of the DeepLSTM, computations were performed on the enzymes, ion channels, GPCRs and nuclear receptors datasets using two other prominent classifiers (the Multi-layer Perceptron and the Support Vector Machine). For fairness, all settings other than the classifier were kept identical. We built multi-layer perceptron (MLP) networks in which the number of hidden layers and neurons is the same as in the DeepLSTM network. The Support Vector Machine (SVM) was implemented using the LIBSVM tool [36], with its parameters optimized by grid search. The results of 5-fold cross-validation achieved by the SVM can be found in Tables S1, S2, S3 and S4 of the Supplementary Material. The average cross-validation results on the four datasets are presented in Table 3.
Table 3 Comparison of the three classifiers on the four datasets in terms of ACC, TPR, SPC, PPV, precision, MCC, and AUC
From the results summarized in Table 3, the DeepLSTM achieves the best prediction results overall. The accuracies achieved by the DeepLSTM are 92.92% on the enzymes dataset, 91.97% on the ion channels dataset, 91.80% on the GPCRs dataset and 91.1% on the nuclear receptors dataset, and clearly outperform the MLP (99.01, 87.58, 87.20, 88.89%, respectively) and the SVM (89.88, 89.36, 85.43, 85.00%, respectively). The AUCs obtained by the DeepLSTM net are 0.9951 on the enzymes dataset, 0.9705 on the ion channels dataset, 0.9951 on the GPCRs dataset and 0.9206 on the nuclear receptors dataset. The MLP net achieves average AUCs of 0.9967, 0.9972, 0.9853 and 0.8421 on the four datasets, and the SVM achieves average AUCs of 0.9686, 0.9613, 0.9230 and 0.9910. There are five main reasons for the proposed method producing better results. The first is that the hierarchical structure of the deep neural network converts the input data into a more complex space, which is more conducive to the classification task. The second is that the design of our DeepLSTM not only avoids overfitting effectively, but also makes it possible to train a large number of different neural networks in a short period of time, which yields better performance. The third is that the memory units of the LSTM can retain more knowledge, which helps to make more accurate decisions at the prediction stage.
The fourth is that the LSTM alleviates the vanishing gradient problem of the Back-Propagation (BP) algorithm, which helps to obtain a better prediction model than the MLP. The fifth is that the use of the validation set helps to train more flexible models.
Comparison with state-of-the-art approaches
In this section, we compared the AUC of our proposed method with those of several state-of-the-art methods, including DBSI [10], KBMF2K [37], and NetCBP [38], and the models proposed by Yamanishi et al. [7,8,9] and Wang et al. [39], for the four classes of target proteins. The results of the methods on the four data sets are listed in Table 4. As can be observed in Table 4, the AUC of the proposed method is clearly superior to the AUCs of the other methods for all four datasets. The AUC value obtained by our method is 16% higher than the average of the other methods on the enzymes dataset. On the nuclear receptors dataset, the value obtained by our method is 10% higher than the highest and 21% higher than the lowest of the other methods. The clearly higher AUC indicates that our scheme outperforms the other compared methods. The comparison with other methods also confirms that our method can improve the performance of drug-target interaction prediction. In fact, from the results shown in Table 3, we can see that the other two models (MLP-based and SVM-based) still have higher AUC values than several existing techniques. This shows that our feature extraction strategy can capture drug-target interaction information very efficiently and improve the performance of the predictor.
Table 4 The comparison of the proposed model with seven existing approaches (DBSI, KBMF2K, and NetCBP, and the models proposed by Yamanishi et al. and Wang et al.) in terms of the AUC
In this paper, we have developed a deep learning-based method to infer potential DTIs using compound and protein sequence information. To evaluate the ability of our method, we compared it with several state-of-the-art approaches. The experimental results showed that this approach performs significantly better than the others. Compared with other classifiers, we have provided initial evidence that the DeepLSTM outperforms traditional machine learning systems on the DTIs task. For the characterization and quantification of drug-target pairs, an interesting scheme was proposed that uses SPCA to fuse PubChem fingerprints with protein evolutionary features obtained by combining the PSSM and LM. Promising results were observed when the characterization method was combined with each of three different classifiers. These results indicate that the proposed scheme has a great advantage in feature expression and recognition. We have shown that the proposed method can work well with small datasets, which distinguishes it from previous methods. We also found that prediction quality continues to improve with increasing dataset size. This underscores the value of training and applying this model on very large datasets, and suggests that further performance gains may be obtained by increasing the data size. On the whole, the theoretical analysis and experimental results give strong theoretical and empirical evidence for the efficacy of the proposed method for predicting DTIs. The data and code are available at: https://deepbiolab.coding.net/s/fbdb894d-2730-425b-bac6-18ba55396bab.
ACC: Accuracy
DeepLSTM: Deep long short-term memory
DTIs: Drug-target interactions
LMs: Legendre moments
MCC: Matthews correlation coefficient
MLP: Multi-layer perceptron
PPV: Positive predictive value
PSSM: Position specific scoring matrix
RNN: Recurrent neural network
SPC: Specificity
SPCA: Sparse principal component analysis
SVM: Support vector machine
TPR: True positive rate
Knowles J, Gromo G. A guide to drug discovery: target selection in drug discovery. Nat Rev Drug Discov. 2003;2(1):63–9.
Marcucci F, Stassi G, Maria RD. Epithelial-mesenchymal transition: a new target in anticancer drug discovery. Nat Rev Drug Discov. 2016;15(5):311–25.
Keiser MJ, Setola V, Irwin JJ, Laggner C, Abbas AI, Hufeisen SJ, Jensen NH, Kuijer MB, Matos RC, Tran TB. Predicting new molecular targets for known drugs. Nature. 2009;462(7270):175–81.
Gaulton A, Bellis LJ, Bento AP, Chambers J, Davies M, Hersey A, Light Y, Mcglinchey S, Michalovich D, Allazikani B. ChEMBL: a large-scale bioactivity database for drug discovery. Nucleic Acids Res. 2012;40:1100–7.
Wishart DS, Knox C, Guo AC, Shrivastava S, Hassanali M, Stothard P, Chang Z, Woolsey J. DrugBank: a comprehensive resource for in silico drug discovery and exploration. Nucleic Acids Res. 2006;34:668–72.
Günther S, Kuhn M, Dunkel M, Campillos M, Senger C, Petsalaki E, Ahmed J, Urdiales EG, Gewiess A, Jensen LJ. SuperTarget and Matador: resources for exploring drug-target relationships. Nucleic Acids Res. 2007;36:919–22.
Bleakley K, Yamanishi Y. Supervised prediction of drug–target interactions using bipartite local models. Bioinformatics. 2009;25(18):2397–403.
Yamanishi Y, Araki M, Gutteridge A, Honda W, Kanehisa M. Prediction of drug–target interaction networks from the integration of chemical and genomic spaces. Bioinformatics. 2008;24(13):232–40.
Yamanishi Y, Kotera M, Kanehisa M, Goto S. Drug-target interaction prediction from chemical, genomic and pharmacological data in an integrated framework. Bioinformatics. 2010;26(12):246–54.
Cheng F, Liu C, Jiang J, Lu W, Li W, Liu G, Zhou W, Huang J, Tang Y. Prediction of drug-target interactions and drug repositioning via network-based inference. PLoS Comput Biol. 2012;8(5):e1002503.
Li H, Gao Z, Kang L, Zhang H, Yang K, Yu K, Luo X, Zhu W, Chen K, Shen J. TarFisDock: a web server for identifying drug targets with docking approach. Nucleic Acids Res. 2006;34(Web Server issue):219–24.
Yu H, Chen J, Xu X, Li Y, Zhao H, Fang Y, Li X, Zhou W, Wang W, Wang Y. A systematic prediction of multiple drug-target interactions from chemical, genomic, and pharmacological data. PLoS One. 2012;7(5):e37608.
Faulon JL, Misra M, Martin S, Sale K, Sapra R. Genome scale enzyme–metabolite and drug–target interaction predictions using the signature molecular descriptor. Bioinformatics. 2008;24(2):225–33.
Kanehisa M, Araki M, Goto S, Hattori M, Hirakawa M, Itoh M, Katayama T, Kawashima S, Okuda S, Tokimatsu T. KEGG for linking genomes to life and the environment. Nucleic Acids Res. 2008;36(Database issue):480–4.
Wang Y, Xiao J, Suzek TO, Jian Z, Wang J, Bryant SH. PubChem: a public information system for analyzing bioactivities of small molecules. Nucleic Acids Res. 2009;37(Web Server issue):623–33.
Weininger D, Weininger A, Weininger JL. SMILES. 2. Algorithm for generation of unique SMILES notation. J Chem Inf Model. 1989;29(2):97–101.
Wang Y, You Z, Li X, Chen X, Jiang T, Zhang J. PCVMZM: using the probabilistic classification vector machines model combined with a Zernike moments descriptor to predict protein–protein interactions from protein sequences. Int J Mol Sci. 2017;18(5):1029–42.
You ZH, Lei YK, Zhu L, Xia J, Wang B. Prediction of protein-protein interactions from amino acid sequences with ensemble extreme learning machines and principal component analysis. BMC Bioinform. 2013;14(S8):1–11.
Wang YB, You ZH, Li X, Jiang TH, Chen X, Zhou X, Wang L. Predicting protein-protein interactions from protein sequences by a deep sparse autoencoder deep neural network. Mol BioSyst. 2017;13(7):1336–45.
You ZH, Li L, Ji Z, Li M, Guo S. Prediction of protein-protein interactions from amino acid sequences using extreme learning machine combined with auto covariance descriptor. In: Memetic Computing; 2013. p. 80–5.
Wang YB, You ZH, Li LP, Huang YA, Yi HC. Detection of interactions between proteins by using Legendre moments descriptor to extract discriminatory information embedded in PSSM. Molecules. 2017;22(8):1366–79.
Chong CW, Raveendran P, Mukundan R. Translation and scale invariants of Legendre moments. Pattern Recogn. 2004;37(1):119–29.
Mukundan R, Ramakrishnan KR. Fast computation of Legendre and Zernike moments. Pattern Recogn. 1995;28(9):1433–42.
Yap PT, Paramesran R. An efficient method for the computation of Legendre moments. IEEE Trans Pattern Anal Mach Intell. 2005;27(12):1996–2002.
Chen H, Engkvist O, Wang Y, Olivecrona M, Blaschke T. The rise of deep learning in drug discovery. Drug Discov Today. 2018;23(6):1241–50.
Dyer C, Ballesteros M, Ling W, Matthews A, Smith NA. Transition-based dependency parsing with stack long short-term memory. Comput Sci. 2015;37(2):321–32.
Hochreiter S, Schmidhuber J. Long short-term memory. Neural Comput. 1997;9(8):1735–80.
Graves A, Mohamed AR, Hinton G. Speech recognition with deep recurrent neural networks. In: IEEE international conference on acoustics, speech and signal processing; 2013. p. 6645–9.
Hinton G, Deng L, Yu D, Dahl GE, Mohamed AR, Jaitly N, Senior A, Vanhoucke V, Nguyen P, Sainath TN. Deep neural networks for acoustic modeling in speech recognition: the shared views of four research groups. IEEE Signal Process Mag. 2012;29(6):82–97.
Kalinin AA, Higgins GA, Reamaroon N, Soroushmehr SMR, Allynfeuer A, Dinov ID, Najarian K, Athey BD. Deep learning in pharmacogenomics: from gene regulation to patient stratification. Pharmacogenomics. 2018;19(7):629–50.
Srivastava N, Hinton G, Krizhevsky A, Sutskever I, Salakhutdinov R. Dropout: a simple way to prevent neural networks from overfitting. J Mach Learn Res. 2014;15(1):1929–58.
Dahl GE, Sainath TN, Hinton GE. Improving deep neural networks for LVCSR using rectified linear units and dropout. In: IEEE international conference on acoustics, speech and signal processing; 2013. p. 8609–13.
Hinton GE, Srivastava N, Krizhevsky A, Sutskever I, Salakhutdinov RR. Improving neural networks by preventing co-adaptation of feature detectors. Comput Sci. 2012;3(4):212–23.
Hanley JA, Mcneil BJ. The meaning and use of the area under a receiver operating characteristic (ROC) curve. Radiology. 1982;143(1):29–36.
Dodd LE, Pepe MS. Partial AUC estimation and regression. Biometrics. 2003;59(3):614–23.
Chang CC, Lin CJ. LIBSVM: a library for support vector machines. Acm Trans Intell Syst Technol. 2007;2(3):389–96.
Gönen M. Predicting drug–target interactions from chemical and genomic kernels using Bayesian matrix factorization. Bioinformatics. 2012;28(18):2304–10.
Chen X, Liu MX, Yan GY. Drug-target interaction prediction by random walk on the heterogeneous network. Mol BioSyst. 2012;8(7):1970–8.
Wang YC, Zhang CH, Deng NY, Wang Y. Kernel-based data fusion improves the drug-protein interaction prediction. Comput Biol Chemistry. 2011;35(6):353–62.
The authors would like to thank all anonymous reviewers for their advice.
About this supplement: This article has been published as part of BMC Medical Informatics and Decision Making Volume 20 Supplement 2, 2020: The International Conference on Intelligent Biology and Medicine 2019: Computational methods for drug interactions. The full contents of the supplement are available online at https://bmcmedinformdecismak.biomedcentral.com/articles/supplements/volume-20-supplement-2.
Publication of this article was sponsored in part by the NSFC Excellent Young Scholars Program, under Grant 61722212, and in part by the National Science Foundation of China under Grants 61873212 and 61572506.
Yan-Bin Wang and Zhu-Hong You contributed equally to this work.
Xinjiang Technical Institute of Physics and Chemistry, Chinese Academy of Sciences, Urumqi, 830011, China: Yan-Bin Wang, Zhu-Hong You, Shan Yang, Hai-Cheng Yi, Zhan-Heng Chen & Kai Zheng
Department of Computer Science and Technology, University of Chinese Academy of Sciences, Beijing, 100049, China: Yan-Bin Wang, Hai-Cheng Yi & Zhan-Heng Chen
YBW and ZHY considered the algorithm, carried out analyses, arranged the data sets, carried out experiments, and wrote the manuscript. SY, HCY, ZHC and KZ designed, performed and analyzed experiments. All authors read and approved the final manuscript.
Correspondence to Zhu-Hong You.
Additional file 1: Table S1. Prediction performance of the SVM-based model for the enzymes dataset in terms of ACC, TPR, SPC, PPV, MCC, and AUC. Table S2. Prediction performance of the SVM-based model for the ion channels dataset in terms of ACC, TPR, SPC, PPV, MCC, and AUC. Table S3. Prediction performance of the SVM-based model for the GPCRs dataset in terms of ACC, TPR, SPC, PPV, MCC, and AUC. Table S4. Prediction performance of the SVM-based model for the nuclear receptors dataset in terms of ACC, TPR, SPC, PPV, MCC, and AUC.
Wang, YB., You, ZH., Yang, S. et al. A deep learning-based method for drug-target interaction prediction based on long short-term memory neural network. BMC Med Inform Decis Mak 20, 49 (2020). https://0-doi-org.brum.beds.ac.uk/10.1186/s12911-020-1052-0
Keywords: Drug-target, Legendre moment, Long short-term memory
Issues in Biology Unit 5 Peyti__ Explain why the acceptance of evolution by the United States population is so low. 1. Perceived conflicts between professed religious convictions and the theory of evolution 2. Lack of understanding of the basic nature of science 3. Lack of understanding of the theory of evolution 4. Politicization of evolution State a general definition for the term evolution. involves cumulative change over time; Cumulative - successive additions or gradual steps Describe cosmic evolution. The study of the sum total of the many varied developmental and generational changes in the assembly and composition of radiation, matter, and life throughout the history of the Universe. Describe geological evolution. The study of the geologic history of Earth, evolution of the continents, oceans, atmosphere, and biosphere. Describe biological evolution. The study of the change in inherited traits over successive generations in populations of organisms. Define population. All members of the same species within a given space at a given time; Species - group of like organisms that can mate and produce viable offspring Define gene pool. All the alleles (genes) found within a population of organisms. Define biological evolution. cumulative changes in allele (gene) frequency of a population over time; important point: cumulative change in population characteristics results from cumulative changes in allele (gene) frequencies over time Identify the smallest unit of life that can undergo biological evolution. a population Explain why changes in the characteristics of a population due to environmental factors are not evolutionary in nature. Changes in the characteristics of a population due to environmental factors are NOT evolutionary in nature because these changes in the characteristics of the population do not involve cumulative changes in allele (gene) frequencies! Examples: Average life expectancy in U.S. in 1900 - 48 years, in 2020 - 78.93 years Define scientific theory and identify its characteristics. an explanation of a repeatedly verified natural phenomenon; Based on well-supported hypotheses, Has NOT been proven, but may be disproven, NOT a fact, but an attempt to explain facts (different theories may explain the same facts), Carries much weight in science Define fact of science. In science, fact is a confirmed observation. Distinguish between the fact of evolution and the theories of evolution. Biological evolution- the characteristics of a population change over time - THIS IS A FACT. In science, theory is an explanation of a repeatedly verified natural phenomenon: Based on well-supported hypotheses, Has NOT been proven; may be disproven, NOT a fact, but an attempt to explain facts, carries much weight in science. Identify those mechanisms which produce evolution. mutation, gene flow, nonrandom mating, genetic drift, natural selection Describe the theory of common descent. all organisms on Earth today are descendants of a single ancestor that arose in the distant past Identify the attempts to explain shared characteristics of life. Cells, DNA, molecules, development, metabolic pathways, etc. Distinguish between microevolution and macroevolution. Microevolution is the process by which organisms change in small ways over time. Macroevolution refers to larger evolutionary changes that result in new species. Define microevolution. Pattern of evolution in which there is evolutionary change within a population. Define macroevolution.
Pattern of evolution that produces new species. Describe the characteristics of microevolution and identify an example. Characteristics: Due to a change in allele frequency within gene pool, May result in changes in phenotype frequencies, Happens on a small scale, Can be observed over short periods of time Example: red vs. white eyed fruit flies Define mutation and describe how mutations function in evolution. a change in DNA; produces new alleles, the raw material for evolutionary change Define gene flow and identify an example. sharing of genes between two populations; ex: green and brown beetles Define immigration Moving into a population Define emigration Moving out of a population Define genetic drift, describe its characteristics and identify an example. random changes in allele frequency within a population; the ratio of one allele to another randomly changes generation to generation; ex: deer dying by chance Identify the mechanisms of genetic drift genetic drift reduces genetic variation in populations, which may reduce a population's ability to respond to environmental selective pressures, acts faster and has more drastic results in smaller populations, may contribute to the formation of a new species What are the bottleneck effect and the founder effect patterns of? They are patterns of genetic drift. Describe how the bottleneck effect operates and identify its characteristics as well as an example of the bottleneck effect. occurs when a population's size is reduced for at least one generation and then recovers, typically reduces a population's genetic variation (the population may not be able to adjust to new selection); ex: northern elephant seals; cheetahs Describe how the founder effect operates and identify its characteristics as well as an example of the founder effect. the colonization of an area by a limited number of individuals who, by chance, have different allele frequencies than the parent population, typically produces reduced genetic variation as opposed to the parent population; ex: the amish Define non-random mating and identify factors which influence non-random mating. mating that does not occur due to chance; influenced by proximity, phenotype, competition Describe sexual selection, its characteristics, originator of the theory and examples. competition between individuals of one sex for the right to mate with the opposite sex; typically involves males (via competition between males and expression of characteristics which attract females); leads to increased fitness (fitness- the ability to transfer genes to the next generation); theory proposed by Charles Darwin in 1871; ex: vivid colors in birds, etc. Identify factors influencing non-random mating in humans. phenotype: attractiveness, stature, intelligence, skin color, personality; cultural values; social rules Define the theory of natural selection. process by which individuals possessing certain inheritable characteristics of traits have greater survival and reproductive success than individuals lacking such characteristics; the environment "selects" favorable phenotypes; results in a population adapted to the environment Describe the sequence of mechanisms involved in natural selection within a population. 1. Individuals within a population vary (ex: color, size, shape, etc.) 2. variation among individuals of the population can be passed on to the offspring of the next generation 3. populations produce far more offspring than the environment can support (some live, most die) 4. 
because more individuals are produced than can survive, members of the population must compete for limited resources among themselves, other species, and the environment 5. some individuals have variations that enable them to survive better than others in the environment (possess greater fitness) 6. reproduction is NOT random (the fittest individuals have a better opportunity to reproduce) 7. advantageous variations become more common in a population 8. results in a population adapted to its local environment Who is the originator of the theory of natural selection, published in On the Origin of Species in 1859? Charles Darwin Define fitness as it relates to organisms. the ability of an organism to survive and reproduce within its environment (measured against the fitness of other members of the population living in the same environment); results from adaptation Define adaptation and identify examples of organism's adaptations. any characteristic that gives an organism a better chance of survival in its environment (physical and behavioral characteristic) Who originated the statement survival of the fittest? When? Herbert Spencer, 1864. Used in Darwin's Origin of Species, 5th ed. 1869. Identify the basic rules of natural selection. populations evolve, individuals do not - Natural selection acts on the level of the individual, but populations are the smallest unit that can evolve natural selection only works on inheritable variations, not acquired characteristics natural selection can only work with what it's given - Variations produced by different genetic mechanisms natural selection is situational to a given environment in a given time and place - Environments undergo change Describe examples of natural selection. North American English Sparrow phenotypes - Adaptation to temperature. Larger and darker in the north, smaller and lighter in the south. Describe Dr. Richard Lenski's natural selection experiment (Michigan State University, 1988-present) with Escherichia coli bacteria. Experiment: Used a single E. coli bacterium to produce 12 separate colonies, Transferred 1% of each colony to a new flask each day, Average 6-7 generations per day (> 58,000 generations), Froze samples every 500 generations for comparison, Observed each population for evidence of evolution Results: • All 12 colonies evolved: Smaller population densities, Larger cell size, Ability to better exploit glucose, 70% faster growth rate than ancestor strain, 4 colonies' ability to repair DNA decreased, Colony 3 acquired the ability to use citrate as a food source, Research revealed two sequential mutations: Mutation A + Mutation B = Ability to use Citrate as food Define artificial selection and identify examples of characteristics of organisms that have been artificially selected for. selective breeding of domesticated animals and plants to increase the frequency of desirable characteristics; ex: modern corn (zea mays); wild v. domestic plants (strawberries, tomatoes, sunflowers, etc.) Describe Dmitry Belyaev's (1917-1985) fox domestication experiment. Selected 130 foxes that showed the least fear on a fur farm and allowed them to breed, did the same thing with the next generation, and repeated. After 10 years domestication was successfully achieved. Define polygenic traits, identify their characteristics and give examples of polygenic traits.
Quantitative traits controlled by more than one pair of alleles w/ contributing and noncontributing alleles, Wide range of phenotypes, Frequency distribution typically resembles a bell-curve, ex: human height, eye color, etc. Identify and describe the following pattern of natural selection, giving examples: directional selection. Selection for one extreme phenotype; population evolves in the direction of the preferred extreme; ex: 1848 Manchester, England peppered moth with 99.99% light and 0.01% dark. After the industrial revolution, 1895 showed 2% light and 98% dark. Identify and describe the following pattern of natural selection, giving examples: stabilizing selection. Selection for the average phenotype; population stabilizes, nearly all the individuals are the same phenotype; ex: number of eggs in starling birds, w/ ideal amount 4-5 eggs: fewer than 4 is not enough to guarantee survival, more than 5 is too many to keep warm. Identify and describe the following pattern of natural selection, giving examples: diversifying selection. Selection for both extreme phenotypes; population diversifies into nearly equal percentages of both extremes; ex: snail coloration in forest = dark, grassland = light. Define species. A group of similar organisms that are reproductively isolated from other groups of organisms, Gene flow occurs between members of a species, Gene flow does not occur between members of different species Define speciation. origination of new species, Via changes in a population's gene pool allele frequencies due to micro-evolution mechanisms: Mutation, Gene flow, Genetic drift, Non-random mating, Natural selection Describe the mechanism of allopatric speciation and identify examples. Origin of new species in a population separated by a geographic barrier, ex: Kaibab squirrel on north side of Grand Canyon, Abert squirrel on south side of Grand Canyon. Identify the steps of allopatric speciation. 1. Original population 2. Geographic barrier separates population, Gene flow is prevented 3. Variations are produced in both populations via: Mutations, Gene flow (other populations), Genetic drift, Non-random mating, Natural selection 4. Over time, two distinct species originate, reproductively isolated from each other Describe the mechanism of sympatric speciation and identify examples. Origin of species in a population not subjected to geographic isolation, ex: Cichlid fish species in Lake Victoria, in east Africa, approximately 450 species, Species developed from a common ancestor, Adapted to unique food and habitats Identify the steps of sympatric speciation. 2. Variations are produced in both populations via: Mutations, Gene flow (other populations), Genetic drift (exploiting new habitats), Non-random mating, Natural selection 3. Divergence between isolated gene pools leads to new species Identify science's theory of human evolution. Human evolution is the lengthy process of change by which people originated from apelike ancestors. Scientific evidence shows that the physical and behavioral traits shared by all people originated from apelike ancestors and evolved over a period of approximately six million years.
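The genetic drift cards above say that drift acts faster, and strips variation more drastically, in smaller populations. A minimal Wright-Fisher-style sketch makes that concrete; the population sizes, number of generations and number of trial runs below are arbitrary choices made only for illustration:

```python
import random

def wright_fisher(n_individuals, p0=0.5, generations=100, seed=0):
    """Track the frequency p of one allele when each of the 2N gene copies
    in the next generation is drawn at random from the current gene pool."""
    random.seed(seed)
    n_copies = 2 * n_individuals
    p = p0
    for _ in range(generations):
        p = sum(random.random() < p for _ in range(n_copies)) / n_copies
        if p in (0.0, 1.0):   # allele fixed or lost: variation is gone
            break
    return p

for n in (10, 100, 1000):
    runs = [wright_fisher(n, seed=s) for s in range(10)]
    gone = sum(p in (0.0, 1.0) for p in runs)
    print(f"N = {n:>4}: {gone}/10 runs lost all variation within 100 generations")
```

Nothing in this sketch "selects" anything: the allele frequency wanders purely by sampling chance, which is why genetic drift, the bottleneck effect and the founder effect all tend to remove variation from small populations.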
Identify and describe the following biological evidence of evolution: evolution of resistance In order for resistance to develop, a novel genetic characteristic must occur in the population, and it must persist and expand by providing a replicative advantage; comes in the form of Antibiotic resistance, Antiviral resistance, and Pesticide resistance Identify and describe the following biological evidence of evolution: camouflage a form of mimicry in which a species appears similar to its surroundings Identify and describe the following geological evidence of evolution: age of Earth Earth is old: 4.6 billion years, provides time required for evolution to occurred ex: rock dating Identify and describe the following geological evidence of evolution: environmental change Environments change by natural processes: Patterns of life followed geological changes ex: Grand Canyon Identify and describe the following geological evidence of evolution: continental drift. Alfred Wegener, 1915: Pangaea - supercontinent, 200 million years ago, Drifted apart to form modern continents Evidence: Fit of continents, Fossil similarities, Geological similarities, Modern GPS readings Environments changed as continents drifted Populations adapted or perished Define fossil and identify the evidence of evolution related to fossils. any evidence of a past organism that has been preserved in the Earth's crust Provide evidence of: Past organisms, Diversity of life, Gradual change over time, Succession of life forms, Transition between groups of organisms Define relative dating of fossils. dating technique used to determine the age of a fossil relative to fossils in other layers of rock, cannot determine the actual age of the fossil Define absolute dating of fossils. dating techniques used to calculate the actual age of fossils Distinguish between relative and absolute dating. relative compares fossils to the age of other fossils in other layers of rock, absolute dating uses dating techniques to find the exact age of a fossil without comparison Describe how radioactive decay is utilized in the absolute dating of fossils. Radioactive elements decay at a consistent rate to non-radioactive forms (parent daughter) Define half-life and, when given data, be able to calculate the half-life of a radioactive element. the time required for one half of the radioactive element to decay, the age of a fossil can be determined by calculating the length of time over which radioactive decay has been occurring within the fossil drawback: testing destroys part of the fossil Let's say you start with 16 grams of 11Be. Wait 13.81 seconds, and you'll have 8 grams left; the rest will have decayed to Boron 11. Another 13.81 seconds go by, and you're left with 4 grams of 11Be; 13.81 seconds more, and you have 2 grams. Define transitional fossil and identify examples. Fossils that document evolutionary transition; ex: Archaeopteryx - Intermediate between bird and dinosaur discovered in 1859 Define biogeography and describe how it provides evidence for evolution. the study of the geographical distribution of organisms Evidence that related forms evolved from a common ancestor that diversified as they spread into other accessible areas Identify the following as anatomical evidence of evolution and state examples: vestigial structures. 
Organism structures that have no apparent function, Evidence of ancestral structures ex: Humans- Muscles for wiggling ears, Boa constrictors- Hip bones and rudimentary hind legs, Blind cave fish- Nonfunctional eyes, Whale- Rudimentary hind legs Identify the following as anatomical evidence of evolution and state examples: homologous structures. Body parts of different organisms, possessing different functions but similar structure, Evidence of common ancestry Ex: human and monkey hands Identify the following as anatomical evidence of evolution and state examples: analogous structures. Structures similar in function, but not in origin. Produced by convergent evolution: the independent evolution of similar structures in unrelated populations, produced as different populations adapt to similar environments Identify the following as anatomical evidence of evolution and state examples: structural imperfections. Evidence that evolution is non-directional Example: Vertebrate eyes, Blind spot, Photoreceptors are in the posterior of the retina Identify biochemical evidences of evolution. All life is carbon based; All life uses the same genetic molecules - DNA, RNA; Life possesses similar chemical processes: Photosynthesis, Cellular respiration, Enzymatic reactions; Similar organisms have similar genetic codes: Humans and chimpanzees share nearly identical genes, 98.4% identical gene sequences Identify embryological evidences of evolution. An early stage of vertebrate development, All vertebrate embryos share common features: Eye spot, pharyngeal gill pouches, Notochord, Tail, Developmental similarities reflect descent from a common ancestor Are religion and science compatible?
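The half-life cards above work through the 11Be example (16 grams, half-life 13.81 seconds). The same arithmetic, together with the inversion used in absolute dating, is only a couple of lines; the function names below are just illustrative choices:

```python
import math

def mass_left(m0, half_life, elapsed):
    """m(t) = m0 * (1/2)**(t / half_life)."""
    return m0 * 0.5 ** (elapsed / half_life)

def age_from_fraction(fraction_left, half_life):
    """Invert the decay law: how long must decay have run for this much
    of the parent isotope to remain? (The idea behind absolute dating.)"""
    return half_life * math.log2(1.0 / fraction_left)

for n in range(4):                      # 0, 1, 2, 3 half-lives of 11Be
    t = n * 13.81
    print(f"after {t:6.2f} s: {mass_left(16, 13.81, t):4.1f} g of 11Be remain")

print("a sample with 25% of its 11Be left is", age_from_fraction(0.25, 13.81), "s old")
```

The second function is the reason the technique destroys part of the fossil: you have to measure how much of the parent isotope is actually left before you can invert the decay law.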
CommonCrawl
August 2015, 8(4): 769-772. doi: 10.3934/dcdss.2015.8.769 On a Poisson's equation arising from magnetism Luca Lussardi 1, Dipartimento di Matematica e Fisica "N. Tartaglia", Università Cattolica del Sacro Cuore, Via dei Musei 41, I-25121 Brescia Received January 2014 Revised July 2014 Published October 2014 We review the proof of existence and uniqueness of the Poisson's equation $\Delta u + {\rm div}\,{\bf m}=0$ whenever ${\bf m}$ is a unit $L^2$-vector field on $\mathbb R^3$ with compact support; by standard linear potential theory we deduce also the $H^1$-regularity of the unique weak solution. Keywords: micromagnetism, fundamental solution, Poisson's equation, Riesz transform, Riesz potential. Mathematics Subject Classification: Primary: 35J05; Secondary: 310. Citation: Luca Lussardi. On a Poisson's equation arising from magnetism. Discrete & Continuous Dynamical Systems - S, 2015, 8 (4) : 769-772. doi: 10.3934/dcdss.2015.8.769 C. Amrouche, H. Bouzit and U. Razafison, On the two and three dimensional Oseen potentials, Potential Anal., 34 (2011), 163. doi: 10.1007/s11118-010-9186-9. Google Scholar W. F. Brown, Micromagnetics, John Wiley and Sons, (1963). Google Scholar O. Bottauscio, V. Chiadò Piat, M. Eleuteri, L. Lussardi and A. Manzin, Homogenization of random anisotropy properties in polycrystalline magnetic materials, Phys. B, 407 (2012), 1417. Google Scholar O. Bottauscio, V. Chiadò Piat, M. Eleuteri, L. Lussardi and A. Manzin, Determination of the equivalent anisotropy properties of polycrystalline magnetic materials: Theoretical aspects and numerical analysis, Math. Models Methods Appl. Sci., 23 (2013), 1217. doi: 10.1142/S0218202513500073. Google Scholar R. D. James and D. Kinderlehrer, Frustration in ferromagnetic materials, Continuum Mech. Thermodyn., 2 (1990), 215. doi: 10.1007/BF01129598. Google Scholar L. D. Landau, E. M. Lifshitz and L. P. Pitaevskii, Electrodynamics of Continuous Media, Course of Theoretical Physics, (1984). Google Scholar E. M. Stein, Singular Integrals and Differentiability Properties of Functions, Princeton University Press, (1970). Google Scholar
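The equation in the abstract, $\Delta u + {\rm div}\,{\bf m}=0$, is the magnetostatic potential equation: in Fourier variables it reads $-|k|^2\hat u = -\,i\,k\cdot\hat{\bf m}$, so $\hat u = i\,k\cdot\hat{\bf m}/|k|^2$. The paper itself works on $\mathbb R^3$ with potential-theoretic tools; purely as an illustration of the equation (not of the paper's method), here is a spectral sketch on a periodic box standing in for $\mathbb R^3$, with an arbitrary compactly supported example magnetization:

```python
import numpy as np

# Solve Delta u + div m = 0 for a compactly supported unit magnetization m,
# on a periodic box used as a rough stand-in for R^3 (illustrative only).
n, L = 64, 8.0
x = np.linspace(-L/2, L/2, n, endpoint=False)
X, Y, Z = np.meshgrid(x, x, x, indexing="ij")

# magnetization pointing along z, of unit length inside a ball, zero outside
inside = (X**2 + Y**2 + Z**2) < 1.0
m = np.zeros((3, n, n, n))
m[2] = inside.astype(float)

k = 2 * np.pi * np.fft.fftfreq(n, d=L/n)
KX, KY, KZ = np.meshgrid(k, k, k, indexing="ij")
K2 = KX**2 + KY**2 + KZ**2

m_hat = [np.fft.fftn(m[i]) for i in range(3)]
div_m_hat = 1j * (KX * m_hat[0] + KY * m_hat[1] + KZ * m_hat[2])

with np.errstate(divide="ignore", invalid="ignore"):
    u_hat = np.where(K2 > 0, div_m_hat / K2, 0.0)   # from -K2*u_hat = -div_m_hat
u = np.real(np.fft.ifftn(u_hat))
print("max |u| on the grid:", float(np.abs(u).max()))
```

The grid size, box size and the particular ${\bf m}$ are arbitrary; the only point is to see the divergence of the magnetization acting as the source of the potential $u$.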
CommonCrawl
Natural, Dirichlet Density of a set of primes. Let $M$ be a set of prime numbers of $\mathbb{Q}$ . The limit $$d(M)= \lim_{s\rightarrow 1^+} \frac{ \sum_{p \in M} p^{-s} }{ - \log(s-1)}$$ Where $p$ is a prime of $\mathbb{Q}$ is called Dirichlet Density of $M$. Also, the Natural density of $M$ is the limit $$ \delta(M)= \lim_{x\rightarrow \infty} \frac{ \# \{ p \in M : p \leq x\}}{ \# \{ p \in \mathbb{Q} : p \leq x\}} $$ Now, let $$M= \bigcup_{k=0}^{\infty}\{ p \mbox{ prime} : 10^k \leq p < 2\cdot10^k \}$$ Show that $\delta(M)$ not exists and $d(M)=\frac{\log(2)}{\log(10)}$ For $\delta(M)$ i can apply the Prime Number Theorem, but if $$M(x)= \# \{ p \in M : p \leq x \}= \sum_{p \in M, p\leq x}1 = \sum_{0 \leq k \leq \frac{\log(x)}{\log(10)}} \sum_{10^k \leq p < 2\cdot10^k, p \leq x} 1$$ I do not know how to follow... number-theory P. M. O.P. M. O. $\begingroup$ Look at how $M(x)$ changes in the range $10^k < x < 2\cdot 10^k$, and how it changes in the range $2\cdot 10^k < x < 10^{k+1}$. In particular, estimate $M(x)/\pi(x)$ for $x = 10^k$ and $x = 2\cdot 10^k$. $\endgroup$ – Daniel Fischer♦ Jul 15 '13 at 19:49 $\begingroup$ @DanielFischer $M(10^k)$? $\endgroup$ – P. M. O. Jul 15 '13 at 20:41 $\begingroup$ The $M(x)$ you defined in the second row from bottom. $\endgroup$ – Daniel Fischer♦ Jul 15 '13 at 20:50 $\begingroup$ @DanielFischer $M(10^k)=\sum_{0 \leq k} \sum_{p=10^k}1=0? $ $\endgroup$ – P. M. O. Jul 15 '13 at 20:54 $\begingroup$ $M(x)= \# \{ p \in M : p \leq x \}$. Due to the construction of $M$, $M(10^k) \leqslant \pi(2\cdot 10^{k-1})$. $\endgroup$ – Daniel Fischer♦ Jul 15 '13 at 20:56 Using $M(x)= \# \{ p \in M : p \leq x \}$, we observe that due to the definition of $M$, we have $$M(10^k) \leqslant \pi(2\cdot 10^{k-1}), \text{ and } M(2\cdot 10^k) \geqslant \pi(2\cdot 10^k) - \pi(10^k).$$ Thus, supposing $k$ not too small, and using the prime number theorem, we obtain $$\frac{M(10^k)}{\pi(10^k)} \leqslant \frac{\pi(2\cdot 10^{k-1})}{\pi(10^k)} \approx \frac15\left(1 + \frac{\log 5}{\log (2\cdot 10^{k-1})}\right) \approx \frac15$$ $$\frac{M(2\cdot 10^k)}{\pi(2\cdot 10^k)} \geqslant 1 - \frac{\pi(10^k)}{\pi(2\cdot 10^k)} \approx 1 - \frac12\left(1 + \frac{\log 2}{\log (10^k)}\right) \approx \frac12.$$ So the value of $\frac{M(x)}{\pi(x)}$ oscillates between $\approx \frac15$ (or smaller) and $\approx \frac12$, hence has no limit. For the Dirichlet density, we use that $$\sum_{p \leqslant x} \frac{1}{p} = \log \log x + M + O\biggl(\frac{1}{(\log x)^2}\biggr)\tag{1}$$ where $M$ is the Meissel-Mertens constant. De la Vallée Poussin's error bound in the prime number theorem gives a smaller remainder term, but that would give no advantage in our calculation. For $x > 2$, we obtain $$\sum_{x < p \leqslant 2x} \frac{1}{p} = \log\biggl(1 + \frac{\log 2}{\log x}\biggr) + O\biggl(\frac{1}{(\log x)^2}\biggr) = \frac{\log 2}{\log x} + O\biggl(\frac{1}{(\log x)^2}\biggr)\tag{2}$$ from $(1)$, so $$\sum_{10^k < x \leqslant 2\cdot 10^k} \frac{1}{p} = \frac{A}{k} + R(k)\tag{3}$$ with $A = \frac{\log 2}{\log 10}$ and $R(k) \in O(k^{-2})$ for $k \geqslant 1$. Define $$S(x) = \sum_{\substack{p\in M \\ p \leqslant x}} \frac{1}{p}.$$ For $x > 20$ pick $K$ such that $2\cdot 10^K \leqslant x < 2\cdot 10^{K+1}$. 
Then $$S(x) = \sum_{k = 1}^K \Biggl(\frac{A}{k} + R(k)\Biggr) + O(K^{-1}) = A\log K + (A\gamma + C) + O(K^{-1})$$ with $C = \sum_{k = 1}^{\infty} R(k)$, since $$\sum_{10^{K+1} < p \leqslant x} \frac{1}{p} \in O(K^{-1})$$ by $(2)$, and $C - \sum_{k = 1}^K R(k) = \sum_{k = K+1}^{\infty} R(k) \in O(K^{-1})$ too. With $$K = \biggl\lfloor \frac{\log \frac{x}{2}}{\log 10}\biggr\rfloor = \frac{\log x - \log 2}{\log 10}\cdot \Bigl( 1 + O\bigl((\log x)^{-1}\bigr)\Bigr)$$ we thus obtain $$S(x) = A\log \log x + B + O\bigl((\log x)^{-1}\bigr),$$ where $B = A\gamma + C - A\log \log 10$. Since $S(x) = 0$ for $x < 11$, we can thus write $$\sum_{p \in M} \frac{1}{p^{1+\varepsilon}} = \varepsilon \int_e^{\infty} \frac{S(x)}{x^{1+\varepsilon}}\,dx = A\varepsilon \int_e^{\infty} \frac{\log \log x}{x^{1+\varepsilon}}\,dx + B\varepsilon\int_e^{\infty} \frac{dx}{x^{1+\varepsilon}} + O\Biggl(\varepsilon\int_e^{\infty} \frac{dx}{x^{1+\varepsilon}\log x}\Biggr).$$ For the first term, we calculate \begin{align} \varepsilon \int_e^{\infty} \frac{\log \log x}{x^{1+\varepsilon}}\,dx &= \varepsilon \int_1^{\infty} e^{-\varepsilon t}\log t\,dt \tag{$x = e^t$}\\ &= \int_{\varepsilon}^{\infty} e^{-u}(\log u - \log \varepsilon)\,du \tag{$u = \varepsilon t$} \\ &= \log \frac{1}{\varepsilon}\cdot \Biggl(1 - \int_0^{\varepsilon} e^{-u}\,du\Biggr) + \Gamma'(1) -\int_0^{\varepsilon} e^{-u}\log u\,du \\ &= \log \frac{1}{\varepsilon} + \Gamma'(1) + O(\varepsilon \lvert\log \varepsilon\rvert). \end{align} The second term is $B e^{-\varepsilon}$, and to bound the error term, we note that $$\int_e^{\infty} \frac{dx}{x^{\varepsilon}(x\log x)} = \underbrace{\frac{\log \log x}{x^{\varepsilon}}\biggr\rvert_e^{\infty}}_{{}=0} + \varepsilon\int_e^{\infty} \frac{\log \log x}{x^{1+\varepsilon}}\,dx,$$ so reusing the previous result we finally get $$\sum_{p\in M} \frac{1}{p^{1+\varepsilon}} = A\log \frac{1}{\varepsilon} + A\Gamma'(1) + B + O(\varepsilon\lvert\log\varepsilon\rvert),$$ which shows $$d(M) = A = \frac{\log 2}{\log 10}.$$ Daniel Fischer♦Daniel Fischer $\begingroup$ Thanks,you know how to do $d(M)$? $\endgroup$ – P. M. O. Jul 15 '13 at 21:54 $\begingroup$ Not off the top of my head. I need to think about that. $\endgroup$ – Daniel Fischer♦ Jul 15 '13 at 21:57 This is a non-trivial result, I may give you an idea of proof but first you should look at this. KunnysanKunnysan Not the answer you're looking for? Browse other questions tagged number-theory or ask your own question. Natural density implies Dirichlet density Dirichlet vs. logarithmic density Question concerning the Dirichlet density of a subset of the set of primes Dirichlet density for number fields $K$? Existence of the natural density of the strictly-increasing sequence of positive integer? Following the previous question: Existence of the natural density … Computing certain Dirichlet densities Reasoning about the Heilbronn-Rohrbach Inequality and natural density Counterexamples showing natural density is not a measure
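As a quick numerical companion to Daniel Fischer's answer above: the ratio $M(x)/\pi(x)$ really does keep swinging between a value of roughly $1/5$ or less and roughly $1/2$, which is exactly why the natural density cannot exist. A small check (it assumes sympy is available; the cutoff $k \le 6$ is only there to keep the run fast):

```python
from sympy import primepi

# M = union over k of the primes in [10^k, 2*10^k); count members of M up to x
def M_count(x):
    total, k = 0, 0
    while 10**k <= x:
        lo, hi = 10**k, min(2 * 10**k - 1, x)
        if hi >= lo:
            total += int(primepi(hi)) - int(primepi(lo - 1))
        k += 1
    return total

# sample the ratio M(x)/pi(x) at x = 10^k and x = 2*10^k: it keeps oscillating
for k in range(2, 7):
    for x in (10**k, 2 * 10**k):
        print(f"x = {x:>8}: M(x)/pi(x) = {M_count(x) / int(primepi(x)):.3f}")
```

The two subsequences settle near different values, so $\delta(M)$ has no limit, while the Dirichlet density smooths the oscillation out and picks up the value $\log 2/\log 10$.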
CommonCrawl
← Isadore Singer 1924-2021 ABC is Still a Conjecture → Yet More Geometric Langlands News Posted on February 27, 2021 by woit It has only been a couple weeks since my last posting on this topic, but there's quite a bit of new news on the geometric Langlands front. One of the great goals of the subject has always been to bring together the arithmetic Langlands conjectures of number theory with the geometric Langlands conjectures, which involved curves over function fields or over the complex numbers. Fargues and Scholze for quite a few years now have been working on a project that realizes this vision, relating the arithmetic local Langlands conjecture to geometric Langlands on the Fargues-Fontaine curve. Their joint paper on the subject has just appeared [arXiv version here]. It weighs in at 348 pages and absorbing its ideas should keep many mathematicians busy for quite a while. There's an extensive introduction outlining the ideas used in the paper, including a long historical section (chapter I.11) explaining the story of how these ideas came about and how the authors overcame various difficulties in trying to realize them as rigorous mathematics. In other geometric Langlands news, this weekend there's an ongoing conference in Korea, videos here and here. The main topic of the conference is ongoing work by Ben-Zvi, Sakellaridis and Venkatesh, which brings together automorphic forms, Hamiltonian spaces (i.e classical phase spaces with a G-action), relative Langlands duality, QFT versions of geometric Langlands, and much more. One can find many talks by the three of them about this over the last year or so, but no paper yet (will it be more or less than 348 pages?). There is a fairly detailed write up by Sakellaridis here, from a talk he gave recently at MIT. In Austin, Ben-Zvi is giving a course which provides background for this work, bringing number theory and quantum theory together, conceptualizing automorphic forms as quantum mechanics on arithmetic locally symmetric spaces. Luckily for all of the rest of us, he and the students seem to have survived nearly freezing to death and are now back at work, with notes from the course via Arun Debray. For something much easier to follow, there's a wonderful essay on non-fundamental physics at Nautilus, The Joy of Condensed Matter. No obvious relation to geometric Langlands, but who knows? Update: Arun Debray reports that there is a second set of notes for the Ben-Zvi course being produced, by Jackson Van Dyke, see here. Update: David Ben-Zvi in the comments points out that a better place for many to learn about his recent work with Sakellaridis and Venkatesh is his MSRI lectures from last year: see here and here, notes from Jackson Van Dyke here. Update: Very nice talk by David Ben-Zvi today (3/22/21) about this, see slides here, video here. This entry was posted in Langlands. Bookmark the permalink. 14 Responses to Yet More Geometric Langlands News Fen Zuo says: Geometric Langlands is reflected in integer/fractional quantum Hall effect and the so-called Hofstadter's butterfly, according to recent work of Kazuki Ikeda. Jim Eadon says: Can anyone provide a (hand-wavy) summary for the educated layman (Physics post-grad) of what a Fargues-Fontaine curve is, and what's special about, e.g. in the context of the Langlands programme? Arun Debray says: Re: DBZ's course, I am not the only student taking notes. 
Jackson Van Dyke is also posting his notes online: https://web.ma.utexas.edu/users/vandyke/notes/langlands_sp21/langlands.pdf (Github link: https://github.com/jacksontvd/langlands_sp21). Jackson's notes go into quite a bit more detail, and he's gone back and added more references and figures than I have. In the end we'll hopefully combine our notes into one document. If anyone has any questions, comments, or corrections about either set of notes they're welcome to get in touch with me or Jackson. Arun, Thanks a lot, both for producing the notes, and for letting us know about the other ones. David Ben-Zvi says: Peter – thanks for the references! Maybe let me note the series of talks in Korea (by Sakellaridis, Venkatesh and me) are aimed at a more arithmetic audience, some previous talks (eg at MSRI last March) might be more accessible to the audience here. Also I might add that the perspective on automorphic forms as quantum mechanics is very old and widely used. A newer perspective I'm trying to advertise in the course and talks is to think of automorphic forms (and the Langlands program) as being really about 4d QFT — an arithmetic elaboration of the Kapustin-Witten picture for geometric Langlands. This accounts for many of the special features of quantum mechanics on arithmetic locally symmetric spaces – e.g., dependence on a number field is the analog of considering states on different 3-manifolds, Hecke operators — a form of quantum integrability– come from 't Hooft line operators, the choice of level (congruence subgroup) corresponds to consideration of surface defects, and most importantly the relation with Galois representations (the Langlands program) can be viewed as electric-magnetic duality. The new feature of the work with Sakellaridis and Venkatesh (the first paper should appear relatively soon..) is that the theories of periods of automorphic forms and L-functions of Galois representations can fruitfully be understood as considering boundary conditions in the two dual TQFT, and that the electric-magnetic duality of boundary conditions (as studied by Gaiotto-Witten) can be used to explain the relation between the two (the theory of integral representations of L-functions). Peter Scholze says: Jim Eadon, let me try to answer. This paper is about the (local) Langlands correspondence over the $p$-adic numbers $\mathbb Q_p$. Recall that $p$-adic numbers can be thought of as power series $a_{-n}p^{-n} + \ldots + a_0 + a_1 p + a_2p^2 + \ldots$ in the "variable" $p$ — they arise by completing the rational numbers $\mathbb Q$ with respect to a distance where $p$ is small. They are often thought of as analogous to the ring of meromorphic functions on a punctured disc $\mathbb D^*$ over the complex numbers, which admit Laurent series expansions $a_{-n} t^{-n} + \ldots + a_0 + a_1 t + a_2t^2 + \ldots$. More precisely, there is this "Rosetta stone" going back to Weil between meromorphic functions over $\mathbb C$, their version $\mathbb F_p((t))$ over a finite field $\mathbb F_p$, and $\mathbb Q_p$. However, there is an important difference: $t$ is an actual variable, while $p$ is just a completely fixed number — how should $p=2$ ever vary? In geometric Langlands over $\mathbb C$, it is critical to take several points in the punctured disc $\mathbb D^\ast$ and let them move, and collide, etc. What should the analogue be over $\mathbb Q_p$, where there seems to be no variable that can vary? 
In one word, what the Fargues–Fontaine curve is about is to build an actual curve in which $p$ is the variable, so "turn $\mathbb Q_p$ into the functions on an actual curve". It then even becomes possible to take two independent points on the curve, and let them move, and collide. With this, it becomes possible to adapt all (well, at least a whole lot of) the techniques of geometric Langlands to this setup. This idea of "turning $p$ into a variable and allowing several independent points" is something that number theorists have long been aiming for, and is basically the idea behind the hypothetical "field with one element". I would however argue that our paper is the first paper to really make profitable use of this idea. Thanks! I've added some links to your MSRI talks, which do look like a better place for people to start. I've always been fascinated by analogies between number theory and QM/QFT, the new angle on this you're pointing out is really remarkable, looks like a significant deep link between the subjects. Could you (or Peter Scholze!) comment on any relation of this to the other topic of the posting (local arithmetic Langlands as geometric Langlands on the Fargues-Fontaine curve)? Professor Scholze, You give a sense of how exciting it is, to, literally? connect the dots between different fields. I'm glad I studied pure mathematics as a hobby enough to get the gist of your explanation. I will re-read your reply a few times, as it's deep, and connects several fascinating objects and techniques. I really appreciate you taking the time to engage, it means a lot. And thanks too, to Professor Woit, for bringing such mathematics to my (and others) attention, I enjoy the blog. Peter – the way I see it (somewhat metaphorically) is as follows. In extended 4d topological field theory we seek to attach vector spaces to 3-manifolds, categories to 2-manifolds etc. The Langlands program fits beautifully into this if you accept the "arithmetic topology" analogy: besides ordinary 3-manifolds we consider (Spec of ring of integers of) global fields (number fields and function fields over finite fields) as "3-manifolds". Besides surfaces we also admit local fields (such as p-adics or Laurent series over finite fields) and curves over the algebraic closure of finite fields as "2-manifolds" (this is the theme I'd like to get to in my course, though still a way to go). If you accept this ansatz, there's no "geometric Langlands" and "arithmetic Langlands", we're just considering different kinds of "manifolds" as inputs. For example geometric Langlands on surfaces and local (arithmetic) Langlands both concern equivalences of categories (in the latter case, one seeks descriptions of categories of reps of reductive groups over local fields are described in terms of spaces of Galois representations). The Fargues-Scholze work is (among many other things!) a spectacular realization of this kind of idea. They show that the local Langlands program can be (and arguably is best) considered as geometric Langlands on an actual curve attached to the local field (the Fargues-Fontaine curve). Moreover the most crucial structure here, the Hecke operators, are miraculously described in a geometric way (factorization — the colliding points in Peter's response) that descends directly (via Beilinson-Drinfeld) from the structure of operator products in 2d QFT. 
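To make Peter Scholze's description above a little more hands-on: an ordinary integer already has an expansion $a_0 + a_1 p + a_2 p^2 + \ldots$ with digits in $\{0,\ldots,p-1\}$, exactly like a power series in the "variable" $p$. A purely illustrative sketch (the prime 7 and the truncation depth are arbitrary choices):

```python
def p_adic_digits(n, p, depth=10):
    """First `depth` digits a_0, a_1, ... of an integer n written as
    a_0 + a_1*p + a_2*p**2 + ... with digits in {0, ..., p-1}."""
    digits = []
    for _ in range(depth):
        digits.append(n % p)
        n = (n - n % p) // p
    return digits

print(p_adic_digits(2022, 7))   # [6, 1, 6, 5, 0, ...]  i.e. 6 + 1*7 + 6*49 + 5*343
print(p_adic_digits(-1, 7))     # [6, 6, 6, 6, ...]     the repeating digit p - 1
```

Negative integers already need infinitely many nonzero digits, the analogue of a non-terminating power series, and general elements of $\mathbb Q_p$ also allow finitely many negative powers of $p$, just like the Laurent series in the comment above.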
The wonderful recent work of Arinkin, Gaitsgory, Kazhdan, Raskin, Rozenblyum and Varshavsky that Will Sawin mention in a recent comment also fits into this general paradigm, in that they show that [unramified] arithmetic and geometric Langlands in the function field setting are precisely related by "dimensional reduction" – you pass from "2-manifolds" (curves over alg closure of finite fields) to "3-manifolds" (curves over the finite fields — which you should think of as mapping tori of the Frobenius map, so 3-manifolds fibering over the circle) by taking trace of Frobenius, just as TFT would tell you. Laurent Fargues says: I'm late but I'm going to say a few words to complement Peter's comments. Since this is a Physics blog I'm going to give a few key words that may speak to physicists. A lot of things work by analogies in this work, trying to put together some ideas from arithmetic and geometry together, make some mental jumps and trying to fill the gaps. When Peter is saying "turning into a variable and allowing several independent points" this is analog to the fusion rules in conformal fields theory. There is the possibility in the world of diamonds to take different copies of the prime number p and fuse them into one copy. Here there reference, if I dig in my mind the first time I heard about this, is the work of Beilinson Drinfeld on factorization sheaves in terms of D-modules. You will find this fusion process in the Verlinde formula too in a coherent sheaves setting for compact Riemann surfaces in the work of Beauville "Conformal blocks, fusion rules and the Verlinde formula" for example where you fuse different points on a Riemann surface. I typically remember a talk by Kapranov about 'The formalism of factorizability' and did not get why the Russian peoples, who are known to have a huge background in physics, were such obsessed with this. No doubt this is linked to vertex algebras, where there are fusion rules, too and plenty of things of interest for physicists. For arithmeticians the declic, I remember saying to myself "at least I understand why peoples are obsessed with those factorization stuff", came from Vincent Lafforgue who remarked that if you work in an étale setting instead of a D-module setting, the moduli spaces of Shtukas (a vast generalization of modular curves for functions fields over a finite field) admits a factorization structure and this factorization structure gives you the Langlands parameter for global Langlands over a function field over a finite field. For the curve here is what I can say. If you take an hyperbolic Riemann surface it is uniformized by the half plane on which you have a complex coordinate z. You can imagine the same type of things for the curve where the variable is the prime number p. By the way there is an object in the article that may speak to physicists : Bun_G, the moduli of principal G-bundles on the curve. The analog for physicists would be the moduli of principal G-bundles on a compact Riemann surface, where now G is a compact Lie group, that shows up in the work of Atiyah and Bott. Still by the way, one of the origin of my geometrization conjecture is trying to understand the reduction theory "à la Atiyah Bott" for principal G-bundles on the curve (the analog for any G of the work on the indian school (Narasimhan-Seshadri) + Harder for GL_n i.e. usual vector bundles). There are other objects that may speak, by analogies, to physicists in this paper. Typically the so called local Shtuka moduli spaces "with one paw" (i.e. 
only one copy of the prime number p). The archimedian analogs are hermitian symmetric spaces. Realizing local Langlands in the cohomology of those local Shtuka moduli spaces has an archimedian analog : Schmid realization of Harisch-Chandra discrete series in the L^2 cohomology of symmetric spaces. Schmid uses the Atiyah-Singer index formula to obtain his result, a tool well know to Physicists. Hermitian symmetric spaces are moduli of Hodge structures and this has been a great thing to realize that p-adic Hodge structures à la Fontaine are the same as "geometric Hodge structures" linked to the curve. I could speak about this during hours and do some name-dropping that speaks to physicists but one thing is sure: there is no link with the multiverse, this I'm sure. Anyway, I have no idea how the curve looks like in other universes of the multiverse. Thanks Laurent! The Atiyah-Bott story (involving gauge fields + the Yang-Mills equations) and the Atiyah-Schmid story (involving, for SL(2,R), the Dirac operator on a 2d space) are two of my favorite topics. They're essentially the two main components of the Standard model (the Dirac equation for matter fields, the Yang-Mills equation for gauge fields). Only difference is that they're in 2d rather than 4d…. I hope you're still planning to come to Columbia for fall 2022, look forward to seeing you then! My Eillenberg lectures are reported to 2023 sadly, because of the virus. I'll try not to enter into the technical details and give some general picture of the objects showing up in this work. By the way, the cancelled program "The Arithmetic of the Langlands program" jointly organized with Calegari, Caraiani and Scholze is officially reported to 2023, same period of the year as before. A very nice popular sketch of the idea of "the curve" by Matthew Morrow: https://webusers.imj-prg.fr/~matthew.morrow/Morrow,%20Raconte-moi%20la%20Courbe.pdf & more: https://webusers.imj-prg.fr/~matthew.morrow/Exp-1150-Morrow,%20La%20Courbe%20de%20FF.pdf @Thomas thanks for the link.
CommonCrawl
By Gianluigi Filippelli on Wednesday, December 19, 2012 posted by @ulaulaman about #Moon formation #astronomy #Earth with Italo Calvino quotes Our satellite, the Moon, is really fascinating, not only for artists and poets, but also for scientists. For example, the first precise description of the Moon was made by Galileo Galilei in the Sidereus Nuncius. One of the problems that astronomy tries to resolve about the Moon is its origin: for example, at the beginning of the twentieth century the Earth-Moon Theory was developed, which was reviewed by LeRoy Hughbanks(8): "The moon," says Prof. Percival Lowell, "did not originate as a separate body, but had its birth in a rib of earth." Doctor Lowell is an ardent supporter of "the earth-moon theory," and his views and deductions are frankly stated in his two last scientific works, "Mars as the Abode of Life" and "Evolution of Worlds," both of which are publications of the Macmillan Company, New York. In this discussion George Darwin plays a really important role, with his works about tidal friction(6) and viscous spheroids(5): Following Sir George Darwin, the Moon would have been detached from the Earth because of a solar tide. The attraction of the Sun acted on the covering of lighter rock (granite) as on a fluid, lifting up a part of it and tearing it away from our planet. The waters that covered the entire Earth were largely sucked down into the abyss that had been opened by the escape of the Moon (i.e. the Pacific Ocean), leaving the remaining granite uncovered, which fragmented and wrinkled itself into the continents. Without the Moon, the evolution of life on Earth, if there had been any, would have been very different.(2) Another good description of the earth-moon theory was given by Andrew Patterson: In brief, the theory is that when the earth had cooled from its molten condition sufficiently to have a crust of solidified matter something like thirty miles thick over its entire surface, it was revolving so rapidly that gravitational attraction and centrifugal force practically balanced each other. For some reason, perhaps some vast and sudden cataclysm, a large portion of this crust was thrown off the earth, and by tidal action was forced gradually outward in a spiral path. In order to form the moon, a mass of this crust about thirty miles thick and of area nearly equal to the combined areas of the present oceans on the earth must have been thrown off. It is supposed that this immense amount of crust was largely taken from the present basin of the Pacific, and that the remaining parts of the earth's crust, while it still floated on a liquid interior, split along an irregular line into two pieces which floated apart, and the gap between these two parts was later filled with the waters of the Atlantic.(7) But following Gerstenkorn(11) we could arrive at a variation of this picture: Following H. Gerstenkorn's calculations(11), developed by H. Alfven(9, 10), Earth's continents would be fragments of the Moon that fell on our planet. The Moon would originally have been a planet gravitating around the Sun, until the proximity of the Earth derailed it from its orbit. Captured by Earth's gravity, the Moon came closer and closer, tightening its orbit around us. At one point, the mutual attraction began to deform the surfaces of the two celestial bodies, raising very high waves from which fragments broke off, whirling in the space between Earth and Moon; in particular, fragments of lunar matter fell on Earth.
Later, under the influence of our tides, the Moon was pushed away to reach its present orbit. But part of the lunar mass, perhaps half, was left on Earth, forming the continents.(3) Impact from kin via Jane Grant In this case we have a capture picture(4) with a subsequent impact due to tidal forces: Gerstenkorn, a high school teacher, repeated Darwin's calculations and demonstrated that the Moon could have been an independent planet in the past.(10) The modern view about the origin of the Moon is, instead, the giant impact theory: the first hypothesis of such an event is due to Reginald Aldworth Daly in his 1945 paper Origin of the Moon and its topography. Daly's ideas were recovered in 1975 in Hartmann and Davis' paper Satellite-sized planetesimals and lunar origin(12): the two researchers supposed that at the end of the planet formation period, some Moon-sized bodies could collide with or be captured by the planets. In particular, one of these objects may have collided with Earth, ejecting the materials that formed the Moon. In 1986 Alastair Cameron started a series of five papers about Moon formation that continued the work proposed in 1976 with William Ward at the Lunar and Planetary Science Conference. Cameron's approach was to perform a series of numerical simulations: Therefore more detailed simulations were needed in which the physics of the shocks and the vaporization process could be accurately treated.(13) In the paper Cameron's team describes a collision between the proto-Earth and an object of about $\frac{1}{10}$ of Earth's mass. In particular, in this first step they find that The relative velocity between the impactor and the proto-Earth is relatively small (less than about 5 km/sec at infinity). If this condition is not fulfilled the impactor is completely dispersed in space.(13) In the following three papers Cameron et al. adopted the smoothed particle hydrodynamics (SPH) method, and finally he could describe the Giant Impact hypothesis and its consequences: Wherever the surface of the Protoearth is hit hard by the impact, a very hot magma is produced. From this hot surface, rock vapor evaporates and forms a hot extended atmosphere around the Protoearth.(14) The temperature out to about 8 Earth radii is 4000 K. In order to obtain similar results, the impactor has at least 10% of the Earth's mass, and, following Cameron's results, has at least 14% of Earth's mass in order to (...) swallow up the Impactor iron core and avoid getting too much iron in the Moon. But apart from this constraint it appears from the present simulations that any division of mass between the Protoearth and the Impactor can produce a promising set of conditions.(14) Also some physical considerations, like the conservation of angular momentum, could be explained with the theoretical proposal, but, in every case: The formation of the Moon as a postcollision consequence of a Giant Impact remains a hypothesis.(14) Some clues about the correctness of the Giant Impact theory come from chemistry: first of all, Moon and Earth are identical if we study their oxygen, tungsten, chromium and titanium isotopes. Because of the relevant differences between Earth and other space bodies, the simplest explanation for these isotopic systems is that the Moon was formed by a catastrophic event like a giant impact between a space bullet and the Earth.
If the details of this scenario can be discussed using different starting hypotheses, from the chemistry point of view the Giant Impact theory received one of its strongest corroborations from the study of zinc isotopes(16): Here we present high-precision zinc isotopic and abundance data which show that lunar magmatic rocks are enriched in the heavy isotopes of zinc and have lower zinc concentrations than terrestrial or Martian igneous rocks. Conversely, Earth and Mars have broadly chondritic zinc isotopic compositions. We show that these variations represent large-scale evaporation of zinc, most probably in the aftermath of the Moon-forming event, rather than small-scale evaporation processes during volcanism. Our results therefore represent evidence for volatile depletion of the Moon through evaporation, and are consistent with a giant impact origin for the Earth and Moon.(16) Studying zinc isotopes could provide important clues about the origin of the Moon; indeed, precise measurements of zinc isotopic concentrations in planetary igneous rocks(16) from, for example, Earth, Moon and Mars could reveal important differences in the depletion and replenishment events, and so in their origins. Because terrestrial, Martian and lunar rocks all lie on the same mass-dependent mass-fractionation line, along with all classes of chondritic meteorites, this implies that Zn from all of the analysed samples evolved from a single, isotopically homogeneous reservoir. This relationship presumably reflects Zn isotope homogeneity in the solar nebula before terrestrial planet formation and, thus, that all reported isotopic variations are due to mass-dependent fractionations.(16) In particular the team finds that the Zn isotopic fractionation is in agreement with a melting event associated with the Moon's formation, and so the results support a giant impact event as the origin of the Earth-Moon system(16). While chemistry provides some important clues to confirm the Giant Impact theory, from a physical point of view one of the most important difficulties in the simulations of this event is the evolution of the angular momentum of the Earth-Moon system, whose present-day value furnishes a constraint for the models. For example, if we imagine an erosive giant impact against a fast-spinning proto-Earth, the resulting Moon has the right mass, with a composition coming primarily from Earth, but the system would have an angular momentum higher than today's(17). So the excess angular momentum must have been lost since the impact. We can imagine two different ways: it could be lost during the tidal evolution of the Moon via a resonance involving Earth's orbital period(17), or through a resonance with the Sun(18). But this is not the only difference between the two models. In the first scenario, which I would call the punch, Cuk and Stewart considered the following ingredients: the isotopic similarity between Moon and Earth; the mass of the Moon; the mass of the lunar core. The first limits the composition and the mass of the cosmic bullet. In particular, the composition is supposed to be more similar to Earth's than to Mars's, so the difference in projectile mass fraction between the silicate portions of the planet and the disk is limited to 15% by weight(17). Second, the mass of the satellite formed from the disk must be greater than or equal to one lunar mass(17). And finally, only 10% (or less) of the weight of the disk may be composed of material originating from the iron cores of the impactor and target(17). And so, the Earth-Moon system is born!
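To see why the angular-momentum budget just mentioned is such a strong constraint, here is a rough back-of-envelope comparison. It uses present-day values, keeps today's moment of inertia for the proto-Earth and takes a roughly 2.5-hour day for the fast-spinning scenario; these are simplifying assumptions for illustration, not numbers taken from the papers:

```python
import math

G   = 6.674e-11        # gravitational constant, SI units
M_e = 5.972e24         # Earth mass, kg
R_e = 6.371e6          # Earth radius, m
M_m = 7.342e22         # Moon mass, kg
a   = 3.844e8          # present Earth-Moon distance, m
I_e = 0.3307 * M_e * R_e**2                 # Earth's present moment of inertia

L_orbit = M_m * math.sqrt(G * M_e * a)      # Moon's orbital angular momentum
L_spin  = I_e * 2 * math.pi / 86164.0       # Earth's spin (sidereal day)
print(f"Earth-Moon system today      ~ {L_orbit + L_spin:.2e} kg m^2/s")

L_fast = I_e * 2 * math.pi / (2.5 * 3600.0) # a proto-Earth with a ~2.5 h day
print(f"spin of a fast-spinning Earth ~ {L_fast:.2e} kg m^2/s")
```

A fast-spinning proto-Earth alone carries more angular momentum than the whole Earth-Moon system does today, which is why these scenarios invoke a resonance to hand the excess over to the Sun, as described next.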
The punch(17) But it has an excess of angular momentum: After the Moon was captured in the resonance, the lunar orbit continued to evolve outward while keeping a constant precession period, which led to a rapid increase of eccentricity. The eccentricity increased until a balance between Earth and lunar tides was reached, but the exact eccentricity at which this happened is model-dependent because the mechanical properties of both Earth and the Moon are uncertain. There was always a substantial period of balance between Earth and Moon tides, where the Moon stayed in the evection resonance with a roughly constant eccentricity. During this period, Earth tides were transferring angular momentum to the Moon, and Earth's rotation was slowing down. Satellite tides cannot remove angular momentum from lunar orbit, but the Sun can absorb angular momentum through the evection resonance.(17) But the resonance broke, because Tidal acceleration of the Moon at perigee weakened once the rates of Earth's rotation(17) and the lunar tides dominate: Once, according to Sir George H. Darwin, the Moon was much closer to the Earth. There were the tides that gradually pushed her away: the tides caused by the Moon in waters and land and where the Earth slowly loses energy.(1) But, if we start from different initial conditions, we could arrive to the same Earth-Moon system. We considered a larger impactor that is comparable in mass with that of the target itself. A final disk and planet with the same composition are then produced if the impactor contributes equally to both, which for large impactors is possible even if the disk contains substantial impactor-derived material because the impactor also adds substantial mass to the planet. For example, in the limiting case of an impactor whose mass equals that of the target and in the absence of any pre-impact rotation, the collision is completely symmetric, and the final planet and any disk that is produced will be composed of equal parts impactor and target-derived material and can thus have the same silicate compositions even if the original impactor and target did not.(18) The kiss(18) In this way Canup could consider a "Mars-like" composition for the bullet, so in general the composition between proto-Earth and projectile are very different, but during the impact and the following recombination the new planet and its satellite gain one similar composition. Finally, impactor and target are not rotating before collision(18). Last observation: Robin Canup is one of the most active contributor to the Giant Impact theory. In a paper published in 2001 on Nature(15) he considered an impact with a bullet that is smaller than the proto-Earth: The dancer(15) (1) Translated from the introduction to the short story La distanza dalla Luna (november, 1964), Italo Calvino (2) Translated from the introduction to the short story La Luna come un fungo (16th may, 1965), Italo Calvino (3) Translated from the introduction to the short story La molle Luna (october, 1967), Italo Calvino (4) About the capture theory you can read also The Earth Without the Moon by Immanuel Velikovsky, or the paper Origin and Evolution of the Earth-Moon System by Alfven and Arrhenius. (5) Darwin G.H. (1879). On the Precession of a Viscous Spheroid, and on the Remote History of the Earth, Philosophical Transactions of the Royal Society of London, 170 447-538. DOI: 10.1098/rstl.1879.0073 (archive.org) (6) Darwin G.H. (1881). 
On the Tidal Friction of a Planet Attended by Several Satellites, and on the Evolution of the Solar System, Philosophical Transactions of the Royal Society of London, 172 491-535. DOI: 10.1098/rstl.1881.0009 (7) Patterson A.H. (1909). The origin of the Moon, Science, 29 (754) 936-937. DOI: 10.1126/science.29.754.936 (8) Hughbanks L. (1919). The Earth-Moon Theory, Transactions of the Kansas Academy of Science (1903-), 30 214. DOI: 10.2307/3624069 (9) Alfvén H. (1962). The early history of the Moon and the Earth, Icarus, 1 (1-6) 357-363. DOI: 10.1016/0019-1035(62)90036-2 (10) Alfven H. (1965). Origin of the Moon: Recalculation of early earth-moon distances suggests dramatic events a billion years ago, Science, 148 (3669) 476-477. DOI: 10.1126/science.148.3669.476 (11) Gerstenkorn, H. Über Gezeitenreibung beim Zweikörperproblem.. Zeitschrift für Astrophysik, Vol. 36, p.245 In english: Gerstenkorn H. (1967). The Importance of Tidal Friction for the Early History of the Moon, Proceedings of the Royal Society A: Mathematical, Physical and Engineering Sciences, 296 (1446) 293-303. DOI: 10.1098/rspa.1967.0023 (12) Hartmann W.K. & Davis D.R. (1975). Satellite-sized planetesimals and lunar origin, Icarus, 24 (4) 504-515. DOI: 10.1016/0019-1035(75)90070-6 (13) Benz W., Slattery W.L. & Cameron A.G.W. (1986). The origin of the moon and the single-impact hypothesis I, Icarus, 66 (3) 515-535. DOI: 10.1016/0019-1035(86)90088-6 (pdf) (14) Cameron A. (1997). The Origin of the Moon and the Single Impact Hypothesis V☆, Icarus, 126 (1) 126-137. DOI: 10.1006/icar.1996.5642 (pdf) (15) Canup R.M. & Asphaug E. (2001). Origin of the Moon in a giant impact near the end of the Earth's formation, Nature, 412 (6848) 708-712. DOI: 10.1038/35089010 (pdf) (16) Paniello R.C., Day J.M.D. & Moynier F. (2012). Zinc isotopic evidence for the origin of the Moon, Nature, 490 (7420) 376-379. DOI: 10.1038/nature11507 (17) Cuk M. & Stewart S.T. (2012). Making the Moon from a Fast-Spinning Earth: A Giant Impact Followed by Resonant Despinning, Science, 338 (6110) 1047-1052. DOI: 10.1126/science.1225542 (18) Canup R.M. (2012). Forming a Moon with an Earth-like Composition via a Giant Impact, Science, 338 (6110) 1052-1055. DOI: 10.1126/science.1226073 Labels: astronomy, chemistry, earth, george darwin, italo calvino, moon, simulation Torbjörn Larsson December 21, 2012 at 12:29 AM Thank you _so much_ for the update! I haven't kept up with this lately. Duly bookmarked. To successful test of the impactor I would add the early GRAIL observation of the same aluminum crustal mineral content, now that the fragmented, porous and thin nature of the Moon crust is better understood: "The average crustal thickness that Wieczorek and coworkers calculated, 34-43 kilometers, is much lower than has previously been assumed. Why is that important? Because when you work out the math to figure out what the bulk composition of the Moon is, given this thinner crust, you wind up with numbers for the abundance of aluminum that are a much better match to Earth's aluminum abundance than they were before. Previously, an apparent compositional mismatch had been a problem dogging the giant impact hypothesis for the Moon's formation. The GRAIL result makes for a compositional match." [ http://www.planetary.org/blogs/emily-lakdawalla/2012/12110923-grail-results.html ] Also, when you look at modern mantle-core formation scenarios, a hefty amount of disequilibration is allowed, up to 60 % disequilibration. 
["Chronometry of Meteorites and the Formation of the Earth and Moon", Kleine et al, Elements 2011] And early mantle heterogenities may be observed. ["182W Evidence for Long-Term Preservation of Early Mantle Differentiation Products", Touboul et al, Science 2012] It seems to me all terrestrial type bodies can sustain late stage aggregation with events picked out of the same distribution so more likely same size, and so the new homogenization result is legit. Many asteroids and KBOs are impact binaries, prominently the Pluto-Charon Earth-Moon analog. Mars, with its 2 still orbiting moons with one soon deorbiting and at least one recently deorbited equatorial elliptic moon impact scar, may be another similar pathway. And what about the retrograde Venus with potentially deorbited remnants? Now I'm going out on a limb: Maybe that is what later vaporized any oceans and got the runaway greenhouse going in the first place. You can perhaps even make a case for Mercury, since its remarkably thin crust makes a hit-and-run scenario a potentially rewarding pathway to look at. Yeah, I know, thin ice, just a notion of what can be looked at. After some consideration, one thing I don't like a priori is the requirement of no (more likely slow) rotation of both impactors initially. How likely is that, seeing that aggregation impacts tend to create rotation? I must read that paper.
CommonCrawl
Shouldn't the Uncertainty Principle be intuitively obvious, at least when talking about the position and momentum of an object? Please forgive me if I'm wrong, as I have no formal physics training (apart from some in high school and personal reading), but there's something about Heisenberg's Uncertainty Principle that strikes me as quite obvious, and I find it strange that nobody thought about it before quantum mechanics development began, and still most people and texts explain it in quantum mechanics terms (such as citing wave/particle dualism, or the observer effect)... while actually it should appear blatantly obvious in classical mechanics too, at least regarding the position and momentum variables, due to the very definition of speed. As everyone knows, the speed of an object is the variation of its position over an interval of time; in order to measure an object's speed, you need at least two measurements of its position at different times, and as much as you can minimize this time interval, this would always create an uncertainty on the object's position; even if the object was exactly in the same place at both times, and even if the time was a single nanosecond, this still wouldn't guarantee its speed is exactly zero, as it could have moved in the meantime. If you, on the contrary, reduce the time interval to exactly zero and only measure the object's position at a specific time, you will know very precisely where the object is, but you will never be able to know where it came from and where it's going to, thus you will have no information at all about its speed. So, shouldn't the inability to exactly measure the position and speed (and thus the momentum) of an object derive directly from the very definition of speed? This line of reasoning could also be generalized to any couple of variables of which one is defined as a variation of the other over time; thus, the general principle should be: You can't misure with complete accuracy both $x$ and $\frac{\Delta x}{\Delta t}$ For any possible two points in time, there will always be a (however small) time interval between them, and during that interval the value of $x$ could have changed in any way that the two consecutive measurements couldn't possibly show. Thus, there will always be a (however low) uncertainty for every physical quantity if you try to misure both its value and its variation over time. This is what should have been obvious from the beginning even in classical mechanics, yet nobody seem to have tought about it until the same conclusion was reached in quantum mechanics, for completely different reasons... classical-mechanics heisenberg-uncertainty-principle measurements speed MassimoMassimo $\begingroup$ Related: physics.stackexchange.com/q/24068/2451 and links therein. $\endgroup$ – Qmechanic♦ Mar 10 '14 at 14:02 $\begingroup$ @Qmechanic, thanks but I'm not talking about limits in our ability to measure things here. I just think the very definition of speed implies that you can't exactly measure it and position at the same time. $\endgroup$ – Massimo Mar 10 '14 at 14:06 $\begingroup$ @Massimo: so it doesn't seem normal to you that one can measure both the value of a curve $y = f(x)$ AND its tangent at its tangent? Is it because the problem is always presented as if we were doing a single measurement? 
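Before the answers below turn to the physics, it may help to isolate the purely mathematical fact behind the inequality they quote: for any normalized wave packet, the spread of $|\psi(x)|^2$ and the spread of its Fourier transform satisfy $\sigma_x\,\sigma_k \ge 1/2$ (with momentum $p=\hbar k$), no matter how or when anything is measured. A small numerical check, with arbitrary test functions and an arbitrary grid:

```python
import numpy as np

x = np.linspace(-40, 40, 2**14)
dx = x[1] - x[0]

def spreads(psi):
    """Standard deviations of |psi|^2 in x and of its Fourier transform in k."""
    px = np.abs(psi)**2
    px /= px.sum() * dx
    sx = np.sqrt((px * x**2).sum() * dx - ((px * x).sum() * dx)**2)

    phi = np.fft.fftshift(np.fft.fft(psi))
    k = 2 * np.pi * np.fft.fftshift(np.fft.fftfreq(x.size, d=dx))
    dk = k[1] - k[0]
    pk = np.abs(phi)**2
    pk /= pk.sum() * dk
    sk = np.sqrt((pk * k**2).sum() * dk - ((pk * k).sum() * dk)**2)
    return sx, sk

for name, psi in [("gaussian exp(-x^2/2)", np.exp(-x**2 / 2)),
                  ("two-sided exp(-|x|) ", np.exp(-np.abs(x)))]:
    sx, sk = spreads(psi)
    print(f"{name}: sigma_x * sigma_k = {sx * sk:.3f}   (lower bound 0.5)")
```

The Gaussian saturates the bound and the kinked profile does not; either way the product never drops below 1/2. This is a property of the state itself, which is the point the first answer below makes in physical terms.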
What if one imagined simply sampling positions at regular time intervals such that Shannon's sampling theorem applies and THEN measuring position and velocity at any point of the reconstructed trajectory with arbitrary accuracy? Just in case it is not clear: even that strategy doesn't lead to statistically null variances in position and momentum. $\endgroup$ – gatsu Mar 10 '14 at 15:41 $\begingroup$ I think that reducing the time interval between measurements gives you a better value for both position AND velocity. After all, if you want to find the slope of a curve, you want to send $\Delta t\rightarrow 0$. $\endgroup$ – Jahan Claes Nov 4 '15 at 18:20 $\begingroup$ Of course you'll get a better (= more precise) measurement by reducing the time interval; but you can't reduce it to zero, otherwise you'll have no information about speed. Hence the intrinsic (however low) uncertainty of measuring both position and speed. $\endgroup$ – Massimo Nov 4 '15 at 20:00 The uncertainty principle says something deeper than "it is impossible to measure both position and momentum to arbitrary accuracy". It says 1) The accuracy is precisely limited by $\Delta x \Delta p > \hbar/2$. 2) In fact, this is not a limit of our measuring procedure, but a limit of reality. If something has a well-defined position, it does not have a well-defined momentum, and vice versa. In other words, it's not that we can't precisely measure the exact momentum; if an object has a well-defined position, it does not have a well-defined momentum. Your argument doesn't really get at the real content of the uncertainty relationship, which is that there is no such thing as a particle with well-defined momentum and position. Jahan Claes $\begingroup$ While this is indeed correct, I never said that the problem was in measuring both variables at the same time; I said, instead, something very similar to what you are saying: that the very definition of speed ceases to make sense if you reduce to zero the time interval between two measurements, and thus speed and position can't be both precisely determined at the same time, regardless of the accuracy of measuring instruments. $\endgroup$ – Massimo Nov 4 '15 at 18:59 $\begingroup$ What I was arguing, however, was that this should be obvious by the very definition of speed: if the speed is completely defined then the position isn't, and vice versa. $\endgroup$ – Massimo Nov 4 '15 at 19:01 $\begingroup$ @Massimo, I don't think that's true. Classically, we can measure both position and momentum arbitrarily well just by shining long-wavelength, low-energy light on an object, and deduce both the position and the speed. $\endgroup$ – Jahan Claes Nov 4 '15 at 19:39 $\begingroup$ @Massimo How about this: Let's say I classically measure the position of an object at $t=0$ to some arbitrary precision. Then I measure the position at $t=\Delta t$. I can then find BOTH the approximate velocity (by taking $\Delta x/\Delta t$) and the exact position (to some high precision). By decreasing $\Delta t$, I'm INCREASING the precision of my velocity measurement, and leaving my position measurement just as precise. $\endgroup$ – Jahan Claes Nov 4 '15 at 19:42 $\begingroup$ @Massimo this is slightly separate from my original point, but classically, velocity is only defined in the limit $\Delta t\rightarrow 0$.
$\endgroup$ – Jahan Claes Nov 4 '15 at 19:53 The uncertainty principle doesn't say anything about simultaneous measurements of a particle - that's just a myth which originated from Heisenberg's interpretation of it. Let us first describe the basis of quantum physics and let's start with the most innocent looking object: the quantum state. We can see a quantum state as a prescription to prepare a system. It's a list of steps describing how to prepare your system (e.g. how to build a one-electron source, how to set it up in a vacuum, how to install magnetic fields and apply them to produce an electron with a specific spin direction). In the literature, you'll hear something like "this electron is in the state..." - this is just an extreme shortcut for saying: in our theory, we can define objects such as electrons and they have properties, and we have an experimental procedure that produces results that seem to work exactly as we would predict if it were an electron of our theory. The second step is a measurement. From classical physics, it is pretty clear what a position measurement is, and a momentum measurement is only slightly more difficult. In quantum physics, we already have to be much more careful, but let's suppose we know what it means to measure position and momentum (for position, we can for example take a number of detectors in an array, and when one detector makes "click", this tells us the position of the instance of the state we created). This implies that we have another set of rules that gives us some classical output to read off a screen and that we call the "momentum" of the state or the "position". Now, suppose we want to have a very special state that is as localized as possible, i.e.: we prepare a state (or better: we perform the procedure defined by the state) over and over and each time we measure the position. In this way, we get a distribution of the positions - if our preparation is not very accurate, the variance of that distribution will be large; if it is quite accurate, it will be small. We go on and change the state (i.e. refine the procedure) such that ultimately, whenever we create an instance of the state and we measure its position, every time the same detector clicks. Let's assume we can achieve arbitrary accuracy, i.e. the detector is really small and it's really always the same detector that clicks. And now, instead of measuring the position, let's measure the momentum of our perfectly localized state. What happens? We'll get some result, but when we redo the experiment, it'll be completely different. We do the preparation & measurement over and over again and the probability distribution for momentum we receive will have a huge variance. Heisenberg's uncertainty relation tells us that regardless of what we do, this is the picture we must obtain. We can try to change the preparation procedure as much as we want and, assuming we could work with arbitrary precision, we cannot define a procedure where both the position and the momentum density we obtain have a small variance. This is contrary to classical mechanics: Let's suppose I want to do the experiment where I try to see whether two stones of different masses fall differently. Galileo supposedly did this in Pisa by throwing two stones from the tower. His "state" was: I take two stones and hold them directly next to each other. Then I let them fall.
But this means that at the beginning of the experiment, he knew both the position (next to each other) and the velocity (zero) perfectly - otherwise the conclusion wouldn't make sense. Heisenberg's uncertainty relation tells us that this is not possible in quantum mechanics - it doesn't even really make sense to ask the question. Martin $\begingroup$ By the way, the Galilean parabola is very nice for considering energy-time indetermination. $\endgroup$ – arivero Apr 14 '16 at 19:21 $\begingroup$ "And now, instead of measuring the position, let's measure the momentum of our perfectly localized state. What happens? We'll get some result, but when we redo the experiment, it'll be completely different." So what? Each measurement will result in a precise momentum at a precise position. $\endgroup$ – Tom B. Feb 9 '18 at 19:10 In order to measure an object's speed, you need at least two measurements of its position at different times. This is not the case. The radar guns used by police to determine if you are exceeding the speed limit do not use position measurements. They instead measure the frequency difference between the outgoing and reflected signals. No position measurements are required. Conceptually, one could use a measurement from a radar gun to simultaneously determine position and velocity: Measure the frequency shift between the outgoing and incoming signals and measure the time difference between transmission and reception. The first measurement yields velocity; the second, position. No, it's not. Classically, one could make measurement devices that simultaneously measure position and velocity to any desired degree of precision. I gave an example above. Even if two position measurements are used, simply making those position measurements ever more precise lets one use two position measurements closely spaced in time to measure both position and velocity to any desired degree of precision. There are no limits to the precision of measurements of canonically conjugate variables in classical mechanics. The uncertainty principle says that this is not possible. There are limits, specifically $\Delta x \Delta p>\hbar/2$. This is not just a limitation on measurement devices. It is much deeper than that. The uncertainty principle is a fundamental limitation of reality, as opposed to a minor constraint on measurement devices. David Hammen No, not necessarily. in order to measure an object's speed, you need at least two measurements of its position at different times, A police radar gun can be used to measure the speed of an object at a single point in time. It can also be used to measure its position in space at the same time. Using Heisenberg's Uncertainty Principle: $$ \sigma_x \sigma_p \geq \frac{\hbar}{2} $$ Which leaves a minimum uncertainty in the speed of a one-tonne car, measured to $1~\text{nm}$ position accuracy, of about $5 \times 10^{-29}~\text{m s}^{-1}$. So classically, for all intents and purposes, one can measure a car to an arbitrary accuracy of both position and momentum at any point in time without invoking the HUP. This is made even easier when you assume that the measurement of position does not affect its position or momentum, which classically is true for the car, so you can measure them separately in any order. $\begingroup$ Well, of course you can measure both quantities at the same time, but only up to a certain degree of precision, however high. Never exactly. $\endgroup$ – Massimo Mar 10 '14 at 14:24 $\begingroup$ Of course, this becomes practically irrelevant for large (i.e.
bigger than atoms) objects, just like most quantum physics. But it's the principle that counts. $\endgroup$ – Massimo Mar 10 '14 at 14:32 $\begingroup$ True (your first comment), but that is the measurement error and it will exist for all measurements; the limit is not given classically except by the precision of the instruments. The HUP tells us that in the microcosm, no matter how precisely we measure, there exist pairs of variables, called conjugate, which are connected through the HUP indeterminacy, not measurement error. $\endgroup$ – anna v Mar 10 '14 at 14:33 $\begingroup$ @Massimo I assume you don't know calculus, because calculus works on the very principle you're trying to deny. Instantaneous speed is a concept in classical mechanics and is completely defined. One can exactly know what an object's velocity is at a given time if one knows exactly how the object is moving around that time. In quantum mechanics the object does not have a definite velocity, which is very different. $\endgroup$ – Robert Mastragostino Mar 10 '14 at 18:25 $\begingroup$ A radar gun can measure the speed of an object at a single point in time. Not true. A radar gun measures the Doppler shift of the reflected wave. That is, it measures the frequency of the reflected wave. But you can't define the frequency of a wave at a single instant in time. I'm not going to attempt the math, but I'm pretty sure that a radar gun effectively tells you some weighted average of the target speed over some finite interval of time. $\endgroup$ – Solomon Slow Nov 4 '15 at 17:58 "So, shouldn't the inability to exactly measure the position and speed (and thus the momentum) of an object derive directly from the very definition of speed?" Yes, if you are talking about instantaneous speed. There is no speed at a point because the definition requires two points. No, if you are talking about average speed, which unsurprisingly is what everyone means when they are talking about speed in the macroscopic regime. gregsan $\begingroup$ And thus, the very fact that speed needs to be measured as an average between two points explicitly forbids measuring it to an arbitrary level of precision; and this limit becomes increasingly relevant the smaller the objects and quantities involved become. $\endgroup$ – Massimo Mar 10 '14 at 14:45 $\begingroup$ Really, this limitation should have immediately become self-evident as soon as $\Delta t$ was used as a denominator... $\endgroup$ – Massimo Mar 10 '14 at 14:47 $\begingroup$ There is no speed at a point because the definition requires two points Except, if you know calculus, then you can define the speed at an instant in time. Assuming you know the position of the object as a function of time, then the derivative of the position function w.r.t. time gives you the instantaneous speed function. If the position function is analytic, then the instantaneous speed will also be analytic. $\endgroup$ – Solomon Slow Nov 4 '15 at 18:05 No. You cannot talk about the uncertainty principle without associating a wave to a particle according to the de Broglie hypothesis. "A function (a wave) and its Fourier transform cannot both be sharply localized in time and frequency, respectively." A short pulse around moment $t_0$ appears like a vertical line on the screen of an oscilloscope, but if you display its spectrum (calculated by the same oscilloscope if it is a smart one) you will notice that the spectrum covers a large part of the screen, spreading over a broad range of frequencies.
If you do not associate a wave to a particle or to a macroscopic body, the uncertainty principle does not make sense. See also the following explanation: Mathematically, in wave mechanics, the uncertainty relation between position and momentum arises because the expressions of the wavefunction in the two corresponding orthonormal bases in Hilbert space are Fourier transforms of one another (i.e., position and momentum are conjugate variables). A nonzero function and its Fourier transform cannot both be sharply localized. A similar tradeoff between the variances of Fourier conjugates arises in all systems underlain by Fourier analysis, for example in sound waves: A pure tone is a sharp spike at a single frequency, while its Fourier transform gives the shape of the sound wave in the time domain, which is a completely delocalized sine wave. In quantum mechanics, the two key points are that the position of the particle takes the form of a matter wave, and momentum is its Fourier conjugate, assured by the de Broglie relation p = ħk, where k is the wavenumber. Source Energizer777 Yes, the argument that you cannot measure the speed at a single point is correct, but if you take a distance dx and then measure the time dt, you will measure the momentum exactly, and the position can be determined with an uncertainty of dx. Now you may divide dx by n and measure the time dt again; you will get the exact momentum and an even more precise position. But how small can you go? Classically you may let n tend to infinity and you will have no uncertainty in either momentum or position. If you try to derive the uncertainty relation classically you can derive it, but you cannot define the lower limit; as soon as you apply the quantum considerations (wave-particle duality) you can set the lower limit such that $\Delta x \Delta p > \hbar/2$. hsinghal
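One concrete way to see the Fourier tradeoff described in the last two answers is to discretize a wave packet and compute the two spreads numerically. The short NumPy sketch below is only an illustration under assumed units (ħ = 1) and an assumed Gaussian packet width σ: it estimates Δx from the sampled wavefunction, Δp from its discrete Fourier transform, and prints a product close to the ħ/2 bound quoted above. Shrinking σ makes Δx smaller and Δp proportionally larger, which is exactly the tradeoff being discussed.

```python
import numpy as np

# Minimal sketch (assumed units, hbar = 1): a Gaussian wave packet's position
# spread times its momentum spread comes out close to hbar/2.
hbar = 1.0
sigma = 2.0                              # assumed position-space width
x = np.linspace(-50.0, 50.0, 4096)
dx = x[1] - x[0]

psi = np.exp(-x**2 / (4.0 * sigma**2))
psi /= np.sqrt(np.sum(np.abs(psi)**2) * dx)      # normalize the packet

prob_x = np.abs(psi)**2
x_mean = np.sum(x * prob_x) * dx
delta_x = np.sqrt(np.sum((x - x_mean)**2 * prob_x) * dx)

# Momentum-space amplitude via the FFT; p = hbar * k.
k = 2.0 * np.pi * np.fft.fftfreq(x.size, d=dx)
dk = 2.0 * np.pi / (x.size * dx)
phi = np.fft.fft(psi) * dx / np.sqrt(2.0 * np.pi)
prob_p = np.abs(phi)**2
p = hbar * k
p_mean = np.sum(p * prob_p) * dk
delta_p = np.sqrt(np.sum((p - p_mean)**2 * prob_p) * dk)

print(delta_x, delta_p, delta_x * delta_p)       # product is ~0.5 = hbar/2
```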
CommonCrawl
Why is the set $\{1\}$ equal to the set $\{1,1,1\}$? A box with 3 equal elements is NOT the same as a box with only one of those elements. This just doesn't seem right; I can't explain it further than the title. I do know why there isn't multiplicity in sets due to the axiom of extensionality, by the way, but that's not the point! elementary-set-theory definition Karine Silva $\begingroup$ Sets are defined to ignore multiplicity. The object you want is called a multiset. Sets focus on the yes/no question of inclusion, not how many. $\endgroup$ – Michael Burr Aug 22 '18 at 4:12 $\begingroup$ In mathematics, there aren't three different 1s, there is only one 1. I think of $\{1,1,1\}$ as saying, "1 is in the set, 1 is in the set, 1 is in the set, nothing else is in the set". $\endgroup$ – Rahul Aug 22 '18 at 4:28 $\begingroup$ Rahul, that's actually helpful, thanks. $\endgroup$ – Karine Silva Aug 22 '18 at 4:35 $\begingroup$ Just because you say someone's name three times doesn't mean you are talking about three people. $\endgroup$ – Lee Mosher Aug 22 '18 at 20:17 $\begingroup$ @Lee: In some cases mentioning someone's name three times can summon them in demon form. $\endgroup$ – Asaf Karagila♦ Aug 23 '18 at 3:09 You ignore equality, in its strongest sense. If you have three apples, you don't have three of the same apple. You have three apples. Even if you cloned the apple perfectly three times, you still don't have the same apple, you have three copies of the same apple. This is where the analogy of "sets are like boxes" fails our intuition. Because in real life we often replace "equality" by "sufficiently the same, even if not really the same". One could also try and make the argument that if I put "you" and "yourself" inside a box, it won't have two copies of you, just the one. But this analogy is weird and awkward, because it seems unnatural to put someone in a box and then put them into the box again. In mathematics, saying that two objects are equal means that they are just the one object. So $\{1\}$ and $\{1,1,1\}$ are the same set. To see why, note that every element of $\{1\}$, namely $1$, is an element of $\{1,1,1\}$. But on the other hand, every element of $\{1,1,1\}$ is either $1$ or $1$ or $1$, and $1\in\{1\}$. So the two sets have the same elements, and are therefore equal, which means that they are the same. On the other hand, there is a concept of a multiset where repetition matters. You might want to read up on that. Sets define a concept that corresponds to classifying all things as either in the set or not in the set. If I have e.g. "the set of positive integers less than 10", it doesn't make sense to say how many times "3" is in that set. It's either in the set or it isn't. This concept is useful for many things. If you want to create a system of your own which differentiates between {1} and {1,1,1} then you can do that. Many, many different ways in fact. Some of those will also correspond to useful things, but each of them is a distinct concept from that of a set. Is your problem just the use of a word that means something different to you? Steven Irrgang Sets are not boxes. Boxes are an analogy that shows some of the features of sets -- but the analogy does not define how sets behave. Or in other words, the purpose of set theory is not to be a mathematical theory of how boxes behave.
When you have a situation where sets behave differently than boxes, that just means you've reached the point where the box analogy is not helpful anymore. That is not set theory's problem. $\{1\}$ is a useful shorthand for the set $$ \{ x \mid x = 1 \}. $$ $\{1,1,1\}$ is a (useful?) shorthand for the set $$ \{ x \mid x = 1 \lor x = 1 \lor x = 1 \} $$ Since $x=1\lor x=1\lor x=1$ is true for exactly the same $x$s as $x=1$ is (namely, $1$ and nothing else), the two sets have the same elements, and therefore they are the same set. Do not be confused by the fact that $1$ appears multiple times in the notation $\{1,1,1\}$. That just means that we're writing down the condition for being in the set in an unnecessarily redundant way. But the condition still means the same, and therefore the set is the same. Henning Makholm
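The same convention shows up in programming languages that have a built-in set type. The snippet below (plain Python, purely illustrative) contrasts a set, which ignores multiplicity, with collections.Counter, which behaves like the multiset mentioned in the answers above.

```python
from collections import Counter

# A set ignores multiplicity: {1, 1, 1} collapses to {1}.
print({1} == {1, 1, 1})                      # True

# A Counter behaves like a multiset, so repetition matters.
print(Counter([1]) == Counter([1, 1, 1]))    # False
print(Counter([1, 1, 1]))                    # Counter({1: 3})
```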
CommonCrawl
October 2004, Volume 145, Issue 5, pp 959–970
Buoyancy measurements and vertical distribution of eggs of sardine (Sardina pilchardus) and anchovy (Engraulis encrasicolus)
S. H. Coombs, G. Boyra, L. D. Rueda, A. Uriarte, M. Santos, D. V. P. Conway, N. C. Halliday
Measurements were made of the density and settling velocity of eggs of sardine (Sardina pilchardus) and anchovy (Engraulis encrasicolus), using a density-gradient column. These results were related to observed vertical distributions of eggs obtained from stratified vertical distribution sampling in the Bay of Biscay. Eggs of both species had slightly positive buoyancy in local seawater throughout most of their development until near hatching, when there was a marked increase in density and they became negatively buoyant. The settling velocity of anchovy eggs, which are shaped as prolate ellipsoids, was close to predictions for spherical particles of equivalent volume. An improved model was developed for prediction of the settling velocity of sardine eggs, which are spherical with a relatively large perivitelline volume; this incorporated permeability of the chorion and adjustment of the density of the perivitelline fluid to ambient seawater. Eggs of both species were located mostly in the top 20 m of the water column, in increasing abundance towards the surface. A sub-surface peak of egg abundance was sometimes observed at the pycnocline, particularly where this was pronounced and associated with a low-salinity surface layer. There was a progressive deepening of the depth distributions for successive stages of egg development. Results from this study can be applied for improved plankton sampling of sardine and anchovy eggs and in modelling studies of their vertical distribution.
Keywords: Vertical distribution · Settling velocity · Prolate ellipsoid · Japanese anchovy · Sardina pilchardus
Communicated by J.P. Thorpe, Port Erin
This work was funded, in part, by the EU PELASSES program 99/010 for improved stock estimation of sardine and anchovy. The authors acknowledge the assistance of the ship's personnel and scientists in carrying out the sampling on the R.V. "Investigador" and for the subsequent LHPR sample analysis, particularly P. Alvarez, I. Martin and B. Beldarrain. D. Checkly kindly provided the Denny (1993) reference. All experimental procedures complied with EU statutory legislation on animal welfare.
Permeable membrane model (G. Boyra)
In this model, the permeability of the chorion and, hence, the change in density of the perivitelline fluid is accounted for in the settling velocity of sardine eggs in the density-gradient column. The egg is considered to be separated into two density components: one having a fixed density relating to the central vitelline mass combined with the chorion; the other, the perivitelline fluid, which varies with the external medium. For sardine eggs, the diameter of the vitelline mass is approximately half the egg diameter and the chorion is of relatively insignificant thickness; hence, the perivitelline fluid occupies seven-eighths of the egg volume and the vitelline mass one-eighth. Thus, overall egg density ($\rho_{\mathrm{e}}$) can be given as: $$ \rho_{\mathrm{e}} = \frac{\rho_1 + 7\rho_2}{8} $$ where $\rho_1$ is the density of the vitelline mass and chorion and $\rho_2$ is the density of the perivitelline fluid.
The rate of transport of solutes per unit surface area across a permeable membrane separating two media with different concentrations of solutes $C_i$ is proportional to the difference in concentration: $$ \frac{\partial Q}{\partial t} = P \cdot (C_{\mathrm{I}} - C_{\mathrm{II}}) $$ The constant of proportionality, $P$, represents the permeability of the membrane (here denoted as $P_{\mathrm{m}}$ for the egg membrane). $P$ depends upon the thickness of the membrane and its diffusivity for each particular solute. For a medium of a certain concentration, separated from an external medium of different concentration by a permeable membrane, the rate of change of solutes in the bounded region, $P'$, depends upon the permeability multiplied by the ratio of the surface area ($S$) to volume ($V$) delimited by the membrane: $$ \frac{\partial Q}{\partial t} = P' \cdot (C_{\mathrm{I}} - C_{\mathrm{II}}) = \frac{S}{V} \cdot P \cdot (C_{\mathrm{I}} - C_{\mathrm{II}}) $$ For the present case, as the change in saline concentration is equivalent to the change in ambient water density, we model the change of density of the perivitelline fluid of the eggs equivalently as: $$ \frac{\partial \rho_2}{\partial t} = P_{\mathrm{e}} \cdot (\rho_{\mathrm{w}} - \rho_2) $$ with $$ P_{\mathrm{e}} = \frac{S}{V} \cdot P_{\mathrm{m}} $$ The density of the ambient seawater in which the egg is immersed (varying down the gradient column) is $\rho_{\mathrm{w}}$, while $P_{\mathrm{e}}$ is the rate of change of egg density, which we consider the "egg permeability", and is treated as a constant for all sardine eggs. Simulation runs were then carried out, incorporating this variable density component in the Sundby (1983) model of settling velocity, to see whether it improved the fit to the experimental observations. The model was run for the mean of a batch of seven sardine eggs (experiment 2b, Table 2) that had been measured for settling velocities in the gradient column, the same density gradient being reproduced for use in the model. For these simulations: $\rho_1$, the combined density of the vitelline mass and the chorion, is constant and was set equal to the density (1.02415 g cm⁻³) corresponding to the equilibrium height in the gradient column at the end of the experiment; $\rho_2$, the density of the perivitelline fluid, varies with time and was first set with an initial value equal to the density of the seawater at the field collection site (1.02517 g cm⁻³) and then optimised for best fit (1.02567 g cm⁻³); $P_{\mathrm{e}}$ is the "egg permeability" and was estimated by best-fit iteration in the model as 0.006 s⁻¹, with $P_{\mathrm{m}}$ being 0.002 mm s⁻¹. The results (Fig. 9) showed an improvement in the fit of the model by incorporating a permeable egg membrane (R² = 0.15, n = 11; $P_{\mathrm{m}}$ = 0.002) compared with a non-permeable membrane (R² = 0, n = 11; $P_{\mathrm{m}}$ = 0), both these cases having $\rho_2$ equal to the density of the seawater at the field collection site (1.02517 g cm⁻³). The model with an optimised $\rho_2$ (1.02567 g cm⁻³) and permeable membrane ($P_{\mathrm{m}}$ = 0.002) resulted in a much improved fit (R² = 0.76, n = 11).
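To make the relaxation model above concrete, the following short sketch (not the authors' code; the time step, run length, and the fixed ambient density are assumed purely for illustration, while the other parameter values are the ones quoted above) integrates dρ₂/dt = Pₑ(ρ_w − ρ₂) with a simple Euler step and tracks the resulting overall egg density ρₑ = (ρ₁ + 7ρ₂)/8.

```python
# Minimal sketch of the permeable-membrane density model described above.
# Not the authors' implementation: the Euler step, run length and the fixed
# ambient density are assumed for illustration (in the gradient column the
# ambient density actually varies with depth). Densities are in g cm^-3.
rho1 = 1.02415     # vitelline mass + chorion (constant, value quoted above)
rho2 = 1.02567     # perivitelline fluid, optimised initial value quoted above
rho_w = 1.02500    # assumed ambient seawater density around the egg
P_e = 0.006        # "egg permeability" (s^-1), best-fit value quoted above

dt = 1.0                                 # assumed time step (s)
for _ in range(600):                     # assumed 10-minute run
    rho2 += dt * P_e * (rho_w - rho2)    # d(rho2)/dt = P_e * (rho_w - rho2)
rho_e = (rho1 + 7.0 * rho2) / 8.0        # overall egg density, weight 7/8 on rho2

print(round(rho2, 6), round(rho_e, 6))   # rho2 relaxes towards rho_w
```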
References
Ådlandsvik B, Coombs SH, Sundby S, Temple G (2001) Buoyancy and vertical distribution of eggs and larvae of blue whiting (Micromesistius poutassou): observations and modelling. Fish Res (Amst) 50:59–72
Alderdice DF, Forrester CR (1968) Some effects of salinity and temperature on early development and survival of the English sole (Parophrys vetulus). J Fish Res Board Can 25:495–521
Boyra G, Rueda L, Coombs SH, Sundby S, Ådlandsvik B, Santos M, Uriarte A (2003) Modelling the vertical distribution of eggs of anchovy (Engraulis encrasicolus) and sardine (Sardina pilchardus). Fish Oceanogr 12:381–395
Checkley DM Jr, Ortner P, Settle L, Cummings S (1997) A continuous underway fish egg sampler. Fish Oceanogr 6:58–73
Coombs SH (1981) A density-gradient column for determining the specific gravity of fish eggs, with particular reference to eggs of the mackerel Scomber scombrus. Mar Biol 63:101–106
Coombs SH, Fosh CA, Keen MA (1985) The buoyancy and vertical distribution of eggs of sprat (Sprattus sprattus) and pilchard (Sardina pilchardus). J Mar Biol Assoc UK 65:461–474
Coombs SH, Morgans D, Halliday NC (2001) Seasonal and ontogenetic changes in the vertical distribution of eggs and larvae of mackerel (Scomber scombrus L.) and horse mackerel (Trachurus trachurus L.). Fish Res (Amst) 50:27–40
Coombs SH, Giovanardi O, Halliday NC, Franceschini G, Conway DVP, Manzueto L, Barrett CD, McFadzen IRB (2003) Wind mixing, food availability and mortality of anchovy larvae Engraulis encrasicolus in the northern Adriatic Sea. Mar Ecol Prog Ser 248:221–235
Cushing DH (1957) The number of pilchards in the Channel. Fish Investig Ser II Mar Fish GB Minist Agric Fish Food 211–27
Davenport J, Lønning S, Kjørsvik E (1981) Osmotic and structural changes during early development of eggs and larvae of the cod, Gadus morhua L. J Fish Biol 19:317–331
Denny MW (1993) Air and water: the biology and physics of life's media. Princeton University Press, Princeton
Farinha A, Lopez PA (1996) Horizontal and vertical distribution of planktonic phases of Trachurus trachurus (L.) and Sardina pilchardus (Walbaum), off northwestern Portuguese coast. Int Counc Explor Sea Comm Meet S:28
Gamulin T, Hure J (1955) Contribution a la connaissance de l'ecologie de la ponte de la sardine Sardina pilchardus (Walb.) dans l'Adriatique. Acta Adriat 7:1–23
Kuwahara A, Suzuki S (1984) Diurnal changes in vertical distribution of anchovy eggs and larvae in the western Wakasa Bay. Bull Jpn Soc Sci Fish 50:1285–1292
Lockwood SJ, Nichols J, Dawson WA (1981) The estimation of mackerel (Scomber scombrus L.) spawning stock size by plankton survey. J Plankton Res 3:217–233
Lopez-Jamar E, Coombs SH, Alemany F, Alonso J, Alvarez F, Barrett CD, Cabanas JM, Casas B, Diaz del Rio G, Fernandez de Puelles ML, Franco C, Garcia A, Halliday NC, Lago de Lanzos A, Lavin A, Miranda A, Robins DB, Valdes L, Varela M (1991) A SARP pilot study for sardine (Sardina pilchardus) off north and northwest Spain in April/May 1991. Int Counc Explor Sea Comm Meet L:69
McNown JS, Malaika J (1950) Effects of particle shape on settling velocity at low Reynolds numbers. Trans Am Geophys Union 31:74–82
Moser HG, Ahlstrom EH (1985) Staging anchovy eggs. In: Lasker R (ed) An egg production method for estimating spawning biomass of pelagic fish: application to the northern anchovy, Engraulis mordax. NOAA Tech Rep NMFS 36:7–15
Motos L, Coombs S (2000) Vertical distribution of anchovy eggs and field observations of incubation temperature. Ozeanografika 3:253–272
Newmann FH, Searle VHL (1948) The general properties of matter. Arnold, London
Olivar MP, Salat J, Palomera I (2001) Comparative study of spatial distribution patterns of the early stages of anchovy and pilchard in the NW Mediterranean Sea. Mar Ecol Prog Ser 217:111–120
Palomera I (1991) Vertical distribution of eggs and larvae of Engraulis encrasicolus in stratified waters of the western Mediterranean. Mar Biol 111:37–44
Parada C, van der Lingen CD, Mullon C, Penven P (2003) Modelling the effect of buoyancy on the transport of anchovy (Engraulis encrasicolus) eggs from spawning to nursery grounds in the southern Benguela: an IBM approach. Fish Oceanogr 12:170–184
Pipe RK, Coombs SH, Clarke KR (1981) On the sample validity of the Longhurst–Hardy plankton recorder for fish eggs and larvae. J Plankton Res 4:675–683
Southward AJ, Barrett RL (1983) Observations on the vertical distribution of zooplankton, including post-larval teleosts, off Plymouth in the presence of a thermocline and a chlorophyll-dense layer. J Plankton Res 5:599–618
Sundby S (1983) A one-dimensional model for the vertical distribution of pelagic fish eggs in the mixed layer. Deep-Sea Res 30:645–661
Tanaka Y (1990) Change in the egg buoyancy of Japanese anchovy Engraulis japonicus during embryonic development. Nippon Suisan Gakkaishi 56:165
Tanaka Y (1992) Japanese anchovy egg accumulation at the sea surface or pycnocline—observations and model. J Oceanogr 48:461–472
Williams R, Collins NR, Conway DVP (1983) The double LHPR system, a high speed micro- and macroplankton sampler. Deep-Sea Res 30:331–342
© Springer-Verlag 2004
1. Marine Biological Association, Plymouth, UK
2. Fundación AZTI, Portualdea, Spain
Coombs, S.H., Boyra, G., Rueda, L.D. et al. Marine Biology (2004) 145: 959. https://doi.org/10.1007/s00227-004-1389-4
Publisher: Springer-Verlag
CommonCrawl
What Are Waves Caused By Physics
Characteristics Of Shock Waves
(Video: Introduction to waves | Mechanical waves and sound | Physics | Khan Academy)
In supersonic flows, an increase in expansion may be achieved through an expansion fan; the best-known example is the Prandtl-Meyer expansion fan. An expansion wave may approach, collide with, and finally recombine with the shock wave, creating a process of destructive interference. The sonic boom associated with the passage of a supersonic aircraft is a type of sound wave produced as a result of constructive interference. When a shock wave passes through matter, energy is conserved but entropy increases. This change in the matter's properties manifests itself as a decrease in the energy that can be extracted as work, and as a drag force on supersonic objects. Therefore, shock waves are strongly irreversible processes. There are also several distinct types of shock waves.
Sound Wave Graphs Explained
Sound waves can be described by graphing either displacement or density. Displacement-time graphs represent how far the particles are from their original places and indicate which direction they've moved. Particles that show up on the zero line in a particle displacement graph didn't move at all from their normal position. These seemingly motionless particles experience more compressions and rarefactions than other particles. Since pressure and density are related, a pressure versus time graph will display the same information as a density versus time graph. These graphs indicate where the particles are compressed and where they are spread apart. Unlike displacement graphs, particles along the zero line in a density graph are never squished or pulled apart. Instead, they are the particles that move back and forth the most.
What Do Waves Do Physics
In physics a wave can be thought of as a disturbance or oscillation that travels through space-time, accompanied by a transfer of energy. Wave motion transfers energy from one point to another, often with no permanent displacement of the particles of the medium; that is, with little or no associated mass transport.
How Is Sound Produced
Sound is produced when an object vibrates, creating a pressure wave. This pressure wave causes particles in the surrounding medium to have vibrational motion. As the particles vibrate, they move nearby particles, transmitting the sound further through the medium. The human ear detects sound waves when vibrating air particles vibrate small parts within the ear. In many ways, sound waves are similar to light waves. They both originate from a definite source and can be distributed or scattered using various means. Unlike light, sound waves can only travel through a medium, such as air, glass, or metal. This means there's no sound in space!
Classification Of Wave Based On Vibration Of Particles Of Wave
a. Transverse Wave: In transverse waves, the particles of the medium vibrate at right angles to the direction in which the wave propagates. Waves on strings, surface water waves, and electromagnetic waves are transverse waves.
In electromagnetic waves, the disturbance that travels is not a result of vibrations of particles; it is due to the oscillation of electric and magnetic fields at right angles to the direction in which the wave travels.
b. Longitudinal wave: In these types of waves, particles of the medium vibrate to and fro about their mean position along the direction of energy propagation. These are also called pressure waves. Sound waves are longitudinal mechanical waves.
Further, we have:
a. Matter waves: They are associated with constituents of matter: electrons, protons, neutrons, atoms, and molecules. They arise in the quantum mechanical description of nature. Though conceptually more abstract than mechanical or electromagnetic waves, they have already found applications in several devices essential to modern technology; matter waves associated with electrons are employed in electron microscopes.
Terms Related to Waves
2. Amplitude: The amplitude of a wave is the maximum displacement of the particles of the medium from their mean position.
3. Frequency: The number of vibrations made by a particle in one second is called frequency. It is represented by $\nu$. Its unit is hertz, and $\nu = \frac{1}{T}$.
What Exactly Causes Mechanical Waves
I agree it's all about vibrations: the energy causes the medium to vibrate and the total overall displacement of matter is 0, but how exactly are ripples formed? I read that a vibrating particle can push or pull the water molecules, and later that water molecule pushes or pulls other water particles. How exactly does that happen? Kindly emphasize water waves.
Quite simply, a mechanical wave is unable to propagate through a vacuum, unlike electromagnetic waves, which do not require a medium and can propagate through a vacuum. A wave, as you probably know, simply carries energy through a medium, and does not move the medium itself. Similarly, in the case of water waves, there is no overall displacement of matter, as you mentioned. When there occurs a disturbance in the body of water, which is the medium, water is pushed outwards, thereby creating what you refer to as ripples. There is no net movement of water involved; there is just energy being moved through the water. At its most fundamental level, $\text{H}_2\text{O}$ molecules collide with each other, resulting in the ripple.
Wave In Elastic Medium
Consider a traveling transverse wave (which may be a pulse) on a string (the medium). Consider the string to have a single spatial dimension. Consider this wave as traveling in the $x$ direction in space. For example, let the positive $x$ direction be to the right and the negative $x$ direction be to the left, with the wave traveling at a constant velocity $v$, independent of amplitude, and with a constant waveform, or shape. This wave can then be described by two-dimensional functions or, more generally, by d'Alembert's formula $u(x,t) = F(x - vt) + G(x + vt)$, representing two component waveforms $F$ and $G$ traveling through the medium in opposite directions. A generalized representation of this wave can be obtained as the partial differential equation $$ \frac{1}{v^2}\frac{\partial^2 u}{\partial t^2} = \frac{\partial^2 u}{\partial x^2}. $$ General solutions are based upon Duhamel's principle. Beside the second-order wave equations that describe a standing wave field, the one-way wave equation describes the propagation of a single wave in a defined direction. The form or shape of $F$ in d'Alembert's formula involves the argument $x - vt$. Constant values of this argument correspond to constant values of $F$, and these constant values occur if $x$ increases at the same rate that $vt$ increases.
That is, the wave shaped like the function $F$ will move in the positive $x$-direction at velocity $v$.
What Are Progressive And Stationary Waves
We can divide waves into stationary waves and progressive waves. Stationary waves do not propagate: they oscillate up and down in the same place. An example of this is a guitar string when you play the guitar. Progressive waves move from one place to another. A classic example of a progressive wave is an ocean wave.
Figure 1. A progressive wave moves from one place to another. Source: StudySmarter.
Figure 2. A stationary wave does not move from one place to another. Stationary waves are oscillations that appear and disappear in fixed points of space, as shown above. Source: StudySmarter.
Classification Of Wave Based On The Dimension
(Video: GCSE Physics – Intro to Waves – Longitudinal and Transverse Waves #61)
a. Waves in one dimension: These waves travel along a line, i.e., in one-dimensional space. These waves are only a function of one space variable. A wave on a string is an example of a one-dimensional wave.
b. Waves in two dimensions: A two-dimensional wave travels along two dimensions; it can have either $x$ and $y$, or $y$ and $z$, or $z$ and $x$ components. Waves can also travel on a surface that is a two-dimensional space, such as the surface of water or in a layer of clouds.
c. Waves in three dimensions: Three-dimensional waves have $x$, $y$, and $z$ components. Many significant waves propagate in three-dimensional space. These include sound waves, radio waves, light, and other electromagnetic waves.
Characteristics Of Sound Waves
There are five main characteristics of sound waves: wavelength, amplitude, frequency, time period, and velocity. The wavelength of a sound wave indicates the distance that the wave travels before it repeats itself. The wavelength itself shows the compressions and rarefactions of the longitudinal sound wave. The amplitude of a wave defines the maximum displacement of the particles disturbed by the sound wave as it passes through a medium. A large amplitude indicates a large sound wave. The frequency of a sound wave indicates the number of sound waves produced each second. Low-frequency sounds produce sound waves less often than high-frequency sounds. The time period of a sound wave is the amount of time required to create a complete wave cycle. Each vibration from the sound source produces a wave's worth of sound. Each complete wave cycle begins with a trough and ends at the start of the next trough. Lastly, the velocity of a sound wave tells us how fast the wave is moving and is expressed as meters per second.
Sound wave diagram. A wave cycle occurs between two troughs.
Sound Intensity In An Air Column
An air column is a large, hollow tube that is open on one side and closed on the other. The conditions created by an air column are especially useful for investigating sound characteristics such as intensity and resonance. Check out the video below to see how air columns can be used to investigate nodes, antinodes and resonance.
What Are The Characteristics Of Transverse Waves
The propagation of transverse waves is possible only through solids and not through liquids or gases. Only transverse waves can exhibit the phenomenon of polarisation.
The vibration of the particles of the medium takes place in a single plane; this is known as the plane of vibration or polarisation. Properties such as pressure and density are constant in a medium when transverse waves are propagated. The formation of typical crests and troughs in transverse waves is periodic in nature. The propagation of transverse waves depends on the medium's rigidity.
Classification Of Wave Based On The Propagation Of Energy
a. Stationary waves or standing waves: These are waves that possess a vertical oscillating movement but do not undergo forward motion in a horizontal direction. They result from the superposition of two identical waves of the same amplitude and frequency propagating in opposite directions. A stationary wave generates a vibration pattern within the medium; thus, energy is confined within it. It never appears to travel, as the nearby points of the wave are of similar phase, and so the energy is not transferred from one point to another.
b. Progressive waves: These are waves that allow the propagation of energy through the medium as the wave continuously travels in one direction with constant amplitude. The molecules in a progressive wave transfer their oscillating energy in the forward direction. This leads to the propagation of energy from one point to another through the medium.
Why Are Water Waves Transverse And Longitudinal
On the water surface, transverse waves are formed because of the water ripples passing on the surface. As we go deep inside the water body, the particles are displaced parallel to the direction in which the wave travels; therefore, longitudinal waves are found. Hence, water waves are both transverse as well as longitudinal.
Anatomy Of An Electromagnetic Wave
Energy, a measure of the ability to do work, comes in many forms and can transform from one type to another. Examples of stored or potential energy include batteries and water behind a dam. Objects in motion are examples of kinetic energy. Charged particles, such as electrons and protons, create electromagnetic fields when they move, and these fields transport the type of energy we call electromagnetic radiation, or light.
How Are Waves Related To The Transport Of Energy
Waves involve the transport of energy without the transport of matter. In conclusion, a wave can be described as a disturbance that travels through a medium, transporting energy from one location to another location without transporting matter.
How are gravitational waves used to study the universe? Though its mission is to detect gravitational waves from some of the most violent and energetic processes in the Universe, the data LIGO collects may have far-reaching effects on many areas of physics including gravitation, relativity, astrophysics, cosmology, particle physics, and nuclear physics.
Which is the most important property of a wave? The prime properties of waves are as follows: Amplitude. A wave is an energy transport phenomenon. Amplitude is the height of the wave, usually measured in meters. It is directly related to the amount of energy carried by a wave.
How Does Sound Travel
(Video: Light Is Waves: Crash Course Physics #39)
Before we discuss how sound travels, it's important to understand what a medium is and how it affects sound. We know that sound can travel through gases, liquids, and solids. But how do these affect its movement?
Sound moves most quickly through solids, because their molecules are densely packed together. This enables sound waves to rapidly transfer vibrations from one molecule to another. Sound also moves quickly through water, where its velocity is over four times faster than it is in air. The velocity of sound waves moving through air can be further reduced by high wind speeds that dissipate the sound waves' energy.
Mediums and the Speed of Sound
The speed of sound is dependent on the type of medium the sound waves travel through. In dry air at 20°C, the speed of sound is 343 m/s! In room temperature seawater, sound waves travel at about 1531 m/s! When physicists observe a disturbance that expands faster than the local speed of sound, it's called a shockwave. When supersonic aircraft fly overhead, a local shockwave can be observed! Generally, sound waves travel faster in warmer conditions. As the ocean warms from global climate change, how do you think this will affect the speed of sound waves in the ocean?
Propagation of Sound Waves: Compression & Rarefaction
Wave Frequency And Wave Period
To calculate wave frequency and period, it is helpful to remember the inverse relationship between these two properties. Frequency is measured in waves per second and period is measured in seconds per wave. For example, if the period is 2 seconds, then the frequency is 0.5 waves per second. Create standing waves in a wave tank and look at the effect of frequency and length of wave pulse on wavelength, wave height, wave speed, and wave period. Standing waves are waves that do not appear to move forward or advance in position. Rather, they oscillate or vibrate in place. A plucked guitar string is an example of a standing mechanical wave with two fixed ends. Standing waves occur when waves with the same frequency, wavelength, and amplitude interact. In contrast to standing waves, transverse waves advance in position. Water waves on the surface of the ocean do not typically behave like standing waves. Instead, they behave like transverse waves, propagating their energy forward as they move.
What Causes A Tsunami? An Ocean Scientist Explains The Physics Of These Destructive Waves
On Jan. 15, 2022, the Hunga Tonga-Hunga Haapai volcano in Tonga erupted, sending a tsunami racing across the Pacific Ocean in all directions. As word of the eruption spread, government agencies on surrounding islands and in places as far away as New Zealand, Japan and even the U.S. West Coast issued tsunami warnings. Only about 12 hours after the initial eruption, tsunami waves a few feet tall hit California shorelines more than 5,000 miles away from the eruption. I'm a physical oceanographer who studies waves and turbulent mixing in the ocean. Tsunamis are one of my favorite topics to teach my students because the physics of how they move through oceans is so simple and elegant. Waves that are a few feet tall hitting a beach in California might not sound like the destructive waves the term calls to mind, nor what you see in footage of tragic tsunamis from the past. But tsunamis are not normal waves, no matter the size. So how are tsunamis different from other ocean waves? What generates them? How do they travel so fast? And why are they so destructive?
Pulse Waves And Periodic Waves
If you drop a pebble into the water, only a few waves may be generated before the disturbance dies down, whereas in a wave pool, the waves are continuous.
A pulse wave is a sudden disturbance in which only one wave or a few waves are generated, such as in the example of the pebble. Thunder and explosions also create pulse waves. A periodic wave repeats the same oscillation for several cycles, such as in the case of the wave pool, and is associated with simple harmonic motion. Each particle in the medium experiences simple harmonic motion in periodic waves by moving back and forth periodically through the same positions.
How Do You Calculate The Frequency Of A Wave
To calculate a wave's frequency, you need to find a fixed point in space and then measure the time it takes for two consecutive crests to pass that point. The time it takes for a wave to repeat its oscillation pattern is known as its period. To obtain the wave's frequency, you need to take the inverse of this period, as shown in the formula below: f = 1/T
How To Create Standing Waves
With PASCO's String Vibrator, Sine Wave Generator, and Strobe System, students can create, manipulate and measure standing waves in real time. The Sine Wave Generator and String Vibrator work together to propagate a sine wave through the rope, while the Strobe System can be used to freeze waves in time. Create clearly defined nodes, illuminate standing waves, and investigate the quantum nature of waves in real time with this modern investigative approach. You can check out some of our favorite wave applications in the video below.
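As a quick worked example of the f = 1/T relationship just described (the numbers here are assumed purely for illustration, and the speed calculation uses the standard v = f × wavelength relation rather than anything stated above):

```python
# Worked example of f = 1/T, with assumed sample values.
period_s = 2.0                    # time for one complete cycle (seconds)
frequency_hz = 1.0 / period_s     # f = 1/T  ->  0.5 waves per second

# Related standard relation (not stated above): wave speed v = f * wavelength.
wavelength_m = 3.0                # assumed crest-to-crest distance (metres)
wave_speed_m_per_s = frequency_hz * wavelength_m   # -> 1.5 m/s

print(frequency_hz, wave_speed_m_per_s)
```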
CommonCrawl
My Big Numbers talk at Festivaletteratura
Last weekend, I gave a talk on big numbers, as well as a Q&A about quantum computing, at Festivaletteratura: one of the main European literary festivals, held every year in beautiful and historic Mantua, Italy. (For those who didn't know, as I didn't: this is the city where Virgil was born, and where Romeo gets banished in Romeo and Juliet. Its layout hasn't substantially changed since the Middle Ages.) I don't know how much big numbers or quantum computing have to do with literature, but I relished the challenge of explaining these things to an audience that was not merely "popular" but humanistically rather than scientifically inclined. In this case, there was not only a math barrier, but also a language barrier, as the festival was mostly in Italian and only some of the attendees knew English, to varying degrees. The quantum computing session was live-translated into Italian (the challenge faced by the translator in not mangling this material provided a lot of free humor), but the big numbers talk wasn't. What's more, the talk was held outdoors, on the steps of a cathedral, with tons of background noise, including a bell that loudly chimed halfway through the talk. So if my own words weren't simple and clear, forget it. Anyway, in the rest of this post, I'll share a writeup of my big numbers talk. The talk has substantial overlap with my "classic" Who Can Name The Bigger Number? essay from 1999. While I don't mean to supersede or displace that essay, the truth is that I think and write somewhat differently than I did as a teenager (whuda thunk?), and I wanted to give Scott2017 a crack at material that Scott1999 has been over already. If nothing else, the new version is more up-to-date and less self-indulgent, and it includes points (for example, the relation between ordinal generalizations of the Busy Beaver function and the axioms of set theory) that I didn't understand back in 1999. For regular readers of this blog, I don't know how much will be new here. But if you're one of those people who keeps introducing themselves at social events by saying "I really love your blog, Scott, even though I don't understand anything that's in it"—something that's always a bit awkward for me, because, uh, thanks, I guess, but what am I supposed to say next?—then this lecture is for you. I hope you'll read it and understand it. Thanks so much to Festivaletteratura organizer Matteo Polettini for inviting me, and to Fabrizio Illuminati for moderating the Q&A. I had a wonderful time in Mantua, although I confess there's something about being Italian that I don't understand. Namely: how do you derive any pleasure from international travel, if anywhere you go, the pizza, pasta, bread, cheese, ice cream, coffee, architecture, scenery, historical sights, and pretty much everything else all fall short of what you're used to?
Big Numbers
by Scott Aaronson
My four-year-old daughter sometimes comes to me and says something like: "daddy, I think I finally figured out what the biggest number is! Is it a million million million million million million million million thousand thousand thousand hundred hundred hundred hundred twenty eighty ninety eighty thirty a million?" So I reply, "I'm not even sure exactly what number you named—but whatever it is, why not that number plus one?" "Oh yeah," she says. "So is that the biggest number?"
Of course there's no biggest number, but it's natural to wonder what are the biggest numbers we can name in a reasonable amount of time. Can I have two volunteers from the audience—ideally, two kids who like math?

[Two kids eventually come up. I draw a line down the middle of the blackboard, and place one kid on each side of it, each with a piece of chalk.]

So the game is, you each have ten seconds to write down the biggest number you can. You can't write anything like "the other person's number plus 1," and you also can't write infinity—it has to be finite. But other than that, you can write basically anything you want, as long as I'm able to understand exactly what number you've named. [These instructions are translated into Italian for the kids.] Are you ready? On your mark, get set, GO!

[The kid on the left writes something like: 999999999 While the kid on the right writes something like: 11111111111111111 Looking at these, I comment:] 9 is bigger than 1, but 1 is a bit faster to write, and as you can see that makes the difference here! OK, let's give our volunteers a round of applause.

[I didn't plant the kids, but if I had, I couldn't have designed a better jumping-off point.]

I've been fascinated by how to name huge numbers since I was a kid myself. When I was a teenager, I even wrote an essay on the subject, called Who Can Name the Bigger Number? That essay might still get more views than any of the research I've done in all the years since! I don't know whether to be happy or sad about that. I think the reason the essay remains so popular is that it shows up on Google whenever someone types something like "what is the biggest number?"

Some of you might know that Google itself was named after the huge number called a googol: 10^100, or 1 followed by a hundred zeroes. Of course, a googol isn't even close to the biggest number we can name. For starters, there's a googolplex, which is 1 followed by a googol zeroes. Then there's a googolplexplex, which is 1 followed by a googolplex zeroes, and a googolplexplexplex, and so on. But one of the most basic lessons you'll learn in this talk is that, when it comes to naming big numbers, whenever you find yourself just repeating the same operation over and over and over, it's time to step back, and look for something new to do that transcends everything you were doing previously. (Applications to everyday life left as exercises for the listener.)

One of the first people to think about systems for naming huge numbers was Archimedes, who was Greek but lived in what's now Italy (specifically Syracuse, Sicily) in the 200s BC. Archimedes wrote a sort of pop-science article—possibly history's first pop-science article—called The Sand-Reckoner. In this remarkable piece, which was addressed to the King of Syracuse, Archimedes sets out to calculate an upper bound on the number of grains of sand needed to fill the entire universe, or at least the universe as known in antiquity. He thereby seeks to refute people who use "the number of sand grains" as a shorthand for uncountability and unknowability. Of course, Archimedes was just guessing about the size of the universe, though he did use the best astronomy available in his time—namely, the work of Aristarchus, who anticipated Copernicus. Besides estimates for the size of the universe and of a sand grain, the other thing Archimedes needed was a way to name arbitrarily large numbers.
Since he didn't have Arabic numerals or scientific notation, his system was basically just to compose the word "myriad" (which means 10,000) into bigger and bigger chunks: a "myriad myriad" gets its own name, a "myriad myriad myriad" gets another, and so on. Using this system, Archimedes estimated that ~10^63 sand grains would suffice to fill the universe. Ancient Hindu mathematicians were able to name similarly large numbers using similar notations. In some sense, the next really fundamental advances in naming big numbers wouldn't occur until the 20th century.

We'll come to those advances, but before we do, I'd like to discuss another question that motivated Archimedes' essay: namely, what are the biggest numbers relevant to the physical world?

For starters, how many atoms are in a human body? Anyone have a guess? About 10^28. (If you remember from high-school chemistry that a "mole" is 6×10^23, this is not hard to ballpark.) How many stars are in our galaxy? Estimates vary, but let's say a few hundred billion. How many stars are in the entire observable universe? Something like 10^23. How many subatomic particles are in the observable universe? No one knows for sure—for one thing, because we don't know what the dark matter is made of—but 10^90 is a reasonable estimate.

Some of you might be wondering: but for all anyone knows, couldn't the universe be infinite? Couldn't it have infinitely many stars and particles? The answer to that is interesting: indeed, no one knows whether space goes on forever or curves back on itself, like the surface of the earth. But because of the dark energy, discovered in 1998, it seems likely that even if space is infinite, we can only ever see a finite part of it. The dark energy is a force that pushes the galaxies apart. The further away they are from us, the faster they're receding—with galaxies far enough away from us receding faster than light. Right now, we can see the light from galaxies that are up to about 45 billion light-years away. (Why 45 billion light-years, you ask, if the universe itself is "only" 13.6 billion years old? Well, when the galaxies emitted the light, they were a lot closer to us than they are now! The universe expanded in the meantime.) If, as seems likely, the dark energy has the form of a cosmological constant, then there's a somewhat further horizon, such that it's not just that the galaxies beyond that can't be seen by us right now—it's that they can never be seen.

In practice, many big numbers come from the phenomenon of exponential growth. Here's a graph showing the three functions n, n^2, and 2^n: The difference is, n and even n^2 grow in a more-or-less manageable way, but 2^n just shoots up off the screen. The shooting-up has real-life consequences—indeed, more important consequences than just about any other mathematical fact one can think of.

The current human population is about 7.5 billion (when I was a kid, it was more like 5 billion). Right now, the population is doubling about once every 64 years. If it continues to double at that rate, and humans don't colonize other worlds, then you can calculate that, less than 3000 years from now, the entire earth, all the way down to the core, will be made of human flesh. I hope the people use deodorant!
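That "earth made of human flesh" figure is easy to sanity-check. Here's a minimal Python sketch; the 70 kg average person and 6×10^24 kg Earth are my own round assumptions, while the 7.5 billion starting population and 64-year doubling time are the figures above.

```python
import math

# Rough check of the claim above: how many 64-year doublings until the total
# mass of people exceeds the mass of the Earth?
population = 7.5e9          # people today (the figure used in the talk)
doubling_time_years = 64    # doubling time used in the talk
human_mass_kg = 70.0        # assumed average mass per person (my round number)
earth_mass_kg = 6.0e24      # assumed mass of the Earth (my round number)

doublings = math.ceil(math.log2(earth_mass_kg / (population * human_mass_kg)))
print(doublings, "doublings ≈", doublings * doubling_time_years, "years")
# prints: 44 doublings ≈ 2816 years -- comfortably under 3000
```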
Nuclear chain reactions are a second example of exponential growth: one uranium or plutonium nucleus fissions and emits neutrons that cause, let's say, two other nuclei to fission, which then cause four nuclei to fission, then 8, 16, 32, and so on, until boom, you've got your nuclear weapon (or your nuclear reactor, if you do something to slow the process down). A third example is compound interest, as with your bank account, or for that matter an entire country's GDP. A fourth example is Moore's Law, which is the thing that said that the number of components in a microprocessor doubled every 18 months (with other metrics, like memory, processing speed, etc., on similar exponential trajectories). Here at Festivaletteratura, there's a "Hack Space," where you can see state-of-the-art Olivetti personal computers from around 1980: huge desk-sized machines with maybe 16K of usable RAM. Moore's Law is the thing that took us from those (and the even bigger, weaker computers before them) to the smartphone that's in your pocket.

However, a general rule is that any time we encounter exponential growth in our observed universe, it can't last for long. It will stop, if not before, then when it runs out of whatever resource it needs to continue: for example, food or land in the case of people, fuel in the case of a nuclear reaction. OK, but what about Moore's Law: what physical constraint will stop it?

By some definitions, Moore's Law has already stopped: computers aren't getting that much faster in terms of clock speed; they're mostly just getting more and more parallel, with more and more cores on a chip. And it's easy to see why: the speed of light is finite, which means the speed of a computer will always be limited by the size of its components. And transistors are now just 15 nanometers across; a couple orders of magnitude smaller and you'll be dealing with individual atoms. And unless we leap really far into science fiction, it's hard to imagine building a transistor smaller than one atom across!

OK, but what if we do leap really far into science fiction? Forget about engineering difficulties: is there any fundamental principle of physics that prevents us from making components smaller and smaller, and thereby making our computers faster and faster, without limit? While no one has tested this directly, it appears from current physics that there is a fundamental limit to speed, and that it's about 10^43 operations per second, or one operation per Planck time. Likewise, it appears that there's a fundamental limit to the density with which information can be stored, and that it's about 10^69 bits per square meter, or one bit per Planck area. (Surprisingly, the latter limit scales only with the surface area of a region, not with its volume.)

What would happen if you tried to build a faster computer than that, or a denser hard drive? The answer is: cycling through that many different states per second, or storing that many bits, would involve concentrating so much energy in so small a region, that the region would exceed what's called its Schwarzschild radius. If you don't know what that means, it's just a fancy way of saying that your computer would collapse to a black hole. I've always liked that as Nature's way of telling you not to do something!
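Where does the 10^43 figure come from? Roughly, it's one operation per Planck time, t_P = sqrt(ħG/c^5). Here is a quick sketch using the standard values of the constants:

```python
import math

# Rough origin of the ~1e43 operations-per-second figure: about one operation
# per Planck time, t_P = sqrt(hbar * G / c**5).
hbar = 1.054571817e-34  # reduced Planck constant, J*s
G = 6.67430e-11         # Newton's gravitational constant, m^3 kg^-1 s^-2
c = 2.99792458e8        # speed of light, m/s

planck_time = math.sqrt(hbar * G / c**5)
print(planck_time)        # ~5.39e-44 seconds
print(1 / planck_time)    # ~1.9e43 "operations" per second
```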
Note that, on the modern view, a black hole itself is not only the densest possible object allowed by physics, but also the most efficient possible hard drive, storing ~10^69 bits per square meter of its event horizon—though the bits are not so easy to retrieve! It's also, in a certain sense, the fastest possible computer, since it really does cycle through 10^43 states per second—though it might not be computing anything that anyone would care about.

We can also combine these fundamental limits on computer speed and storage capacity, with the limits that I mentioned earlier on the size of the observable universe, which come from the cosmological constant. If we do so, we get an upper bound of ~10^122 on the number of bits that can ever be involved in any computation in our world, no matter how large: if we tried to do a bigger computation than that, the far parts of it would be receding away from us faster than the speed of light. In some sense, this 10^122 is the most fundamental number that sets the scale of our universe: on the current conception of physics, everything you've ever seen or done, or will see or will do, can be represented by a sequence of at most 10^122 ones and zeroes.

Having said that, in math, computer science, and many other fields (including physics itself), many of us meet bigger numbers than 10^122 dozens of times before breakfast! How so? Mostly because we choose to ask, not about the number of things that are, but about the number of possible ways they could be—not about the size of ordinary 3-dimensional space, but the sizes of abstract spaces of possible configurations. And the latter are subject to exponential growth, continuing way beyond 10^122.

As an example, let's ask: how many different novels could possibly be written (say, at most 400 pages long, with a normal-size font, yadda yadda)? Well, we could get a lower bound on the number just by walking around here at Festivaletteratura, but the number that could be written certainly far exceeds the number that have been written or ever will be. This was the subject of Jorge Luis Borges' famous story The Library of Babel, which imagined an immense library containing every book that could possibly be written up to a certain length. Of course, the vast majority of the books are filled with meaningless nonsense, but among their number one can find all the great works of literature, books predicting the future of humanity in perfect detail, books predicting the future except with a single error, etc. etc. etc.

To get more quantitative, let's simply ask: how many different ways are there to fill the first page of a novel? Let's go ahead and assume that the page is filled with intelligible (or at least grammatical) English text, rather than arbitrary sequences of symbols, at a standard font size and page size. In that case, using standard estimates for the entropy (i.e., compressibility) of English, I estimated this morning that there are maybe ~10^700 possibilities. So, forget about the rest of the novel: there are astronomically more possible first pages than could fit in the observable universe!

We could likewise ask: how many chess games could be played? I've seen estimates from 10^40 up to 10^120, depending on whether we count only "sensible" games or also "absurd" ones (though in all cases, with a limit on the length of the game as might occur in a real competition).
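For anyone who wants to reproduce the flavor of that ~10^700 estimate, here's a back-of-envelope sketch. The ~3,000 characters per page and ~0.8 bits of entropy per character are illustrative assumptions of mine, not the exact inputs used for the talk.

```python
import math

# Reproducing the flavor of the ~10^700 estimate for possible first pages.
# Both inputs below are illustrative assumptions, not the talk's exact figures.
chars_per_page = 3000   # a dense first page of a novel
bits_per_char = 0.8     # rough entropy rate of English text, bits per character

total_bits = chars_per_page * bits_per_char   # ≈ 2400 bits of "choice"
print(f"~2^{total_bits:.0f} ≈ 10^{total_bits * math.log10(2):.0f} possible first pages")
# prints: ~2^2400 ≈ 10^722 possible first pages -- the same ballpark as 10^700
```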
For Go, by contrast, which is played on a larger board (19×19 rather than 8×8), the estimates for the number of possible games seem to start at 10^800 and only increase from there. This difference in magnitudes has something to do with why Go is a "harder" game than chess, why computers were able to beat the world chess champion already in 1997, but the world Go champion not until last year.

Or we could ask: given a thousand cities, how many routes are there for a salesman that visit each city exactly once? We write the answer as 1000!, pronounced "1000 factorial," which just means 1000×999×998×…×2×1: there are 1000 choices for the first city, then 999 for the second city, 998 for the third, and so on. This number is about 4×10^2567. So again, more possible routes than atoms in the visible universe, yadda yadda.

But suppose the salesman is interested only in the shortest route that visits each city, given the distance between every city and every other. We could then ask: to find that shortest route, would a computer need to search exhaustively through all 1000! possibilities—or, maybe not all 1000!, maybe it could be a bit more clever than that, but at any rate, a number that grew exponentially with the number of cities n? Or could there be an algorithm that zeroed in on the shortest route dramatically faster: say, using a number of steps that grew only linearly or quadratically with the number of cities?

This, modulo a few details, is one of the most famous unsolved problems in all of math and science. You may have heard of it; it's called P versus NP. P (Polynomial-Time) is the class of problems that an ordinary digital computer can solve in a "reasonable" amount of time, where we define "reasonable" to mean, growing at most like the size of the problem (for example, the number of cities) raised to some fixed power. NP (Nondeterministic Polynomial-Time) is the class for which a computer can at least recognize a solution in polynomial-time.

If P=NP, it would mean that for every combinatorial problem of this sort, for which a computer could recognize a valid solution—Sudoku puzzles, scheduling airline flights, fitting boxes into the trunk of a car, etc. etc.—there would be an algorithm that cut through the combinatorial explosion of possible solutions, and zeroed in on the best one. If P≠NP, it would mean that at least some problems of this kind required astronomical time, regardless of how cleverly we programmed our computers. Most of us believe that P≠NP—indeed, I like to say that if we were physicists, we would've simply declared P≠NP a "law of nature," and given ourselves Nobel Prizes for the discovery of the law! And if it turned out that P=NP, we'd just give ourselves more Nobel Prizes for the law's overthrow. But because we're mathematicians and computer scientists, we call it a "conjecture."
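Since exact integer arithmetic is cheap, the size of 1000! is easy to check directly; here's a tiny Python sketch.

```python
import math

# How big is 1000!, the number of possible orderings of 1000 cities?
routes = math.factorial(1000)
digits = len(str(routes))
print(digits)            # 2568 -- so 1000! is about 4 x 10^2567
print(str(routes)[:3])   # leading digits '402', consistent with ~4.02 x 10^2567
```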
If P=NP, then those beliefs would be false, and indeed all cryptography that depends on hard math problems would be breakable in "reasonable" amounts of time. In the special case of factoring, though—and of the other number theory problems that underlie modern cryptography—it wouldn't even take anything as shocking as P=NP for them to fall.

Actually, that provides a good segue into another case where exponentials, and numbers vastly larger than 10^122, regularly arise in the real world: quantum mechanics. Some of you might have heard that quantum mechanics is complicated or hard. But I can let you in on a secret, which is that it's incredibly simple once you take the physics out of it! Indeed, I think of quantum mechanics as not exactly even "physics," but more like an operating system that the rest of physics runs on as application programs. It's a certain generalization of the rules of probability. In one sentence, the central thing quantum mechanics says is that, to fully describe a physical system, you have to assign a number called an "amplitude" to every possible configuration that the system could be found in. These amplitudes are used to calculate the probabilities that the system will be found in one configuration or another if you look at it. But the amplitudes aren't themselves probabilities: rather than just going from 0 to 1, they can be positive or negative or even complex numbers.

For us, the key point is that, if we have a system with (say) a thousand interacting particles, then the rules of quantum mechanics say we need at least 2^1000 amplitudes to describe it—which is way more than we could write down on pieces of paper filling the entire observable universe! In some sense, chemists and physicists have known about this immensity since 1926. But they knew it mainly as a practical problem: if you're trying to simulate quantum mechanics on a conventional computer, then as far as we know, the resources needed to do so increase exponentially with the number of particles being simulated. Only in the 1980s did a few physicists, such as Richard Feynman and David Deutsch, suggest "turning the lemon into lemonade," and building computers that themselves would exploit the exponential growth of amplitudes.

Supposing we built such a computer, what would it be good for? At the time, the only obvious application was simulating quantum mechanics itself! And that's probably still the most important application today. In 1994, though, a guy named Peter Shor made a discovery that dramatically increased the level of interest in quantum computers. That discovery was that a quantum computer, if built, could factor an n-digit number using a number of steps that grows only like about n^2, rather than exponentially with n. The upshot is that, if and when practical quantum computers are built, they'll be able to break almost all the cryptography that's currently used to secure the Internet.

(Right now, only small quantum computers have been built; the record for using Shor's algorithm is still to factor 21 into 3×7 with high statistical confidence! But Google is planning within the next year or so to build a chip with 49 quantum bits, or qubits, and other groups around the world are pursuing parallel efforts. Almost certainly, 49 qubits still won't be enough to do anything useful, including codebreaking, but it might be enough to do something classically hard, in the sense of taking at least ~2^49, or 563 trillion, steps to simulate classically.)
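A couple of the numbers in that parenthetical are easy to check directly. The brute-force trial-division count below is my own deliberately crude classical baseline for an n-digit number, not the best known classical factoring algorithm.

```python
# Checking the ~2^49 figure, and illustrating the n^2-vs-exponential gap for
# factoring an n-digit number. The trial-division count is a crude baseline.
n_digits = 2000
print(2**49)                    # 562949953421312 -- about 563 trillion
print(n_digits**2)              # 4000000: the rough scaling of Shor's algorithm
print(f"~10^{n_digits // 2}")   # trial division up to sqrt(10^2000): ~10^1000 candidates
```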
I should stress, though, that for other NP problems—including breaking various other cryptographic codes, and solving the Traveling Salesman Problem, Sudoku, and the other combinatorial problems mentioned earlier—we don't know any quantum algorithm analogous to Shor's factoring algorithm. For these problems, we generally think that a quantum computer could solve them in roughly the square root of the number of steps that would be needed classically, because of another famous quantum algorithm called Grover's algorithm. But getting an exponential quantum speedup for these problems would, at the least, require an additional breakthrough. No one has proved that such a breakthrough in quantum algorithms is impossible: indeed, no one has proved that it's impossible even for classical algorithms; that's the P vs. NP question! But most of us regard it as unlikely.

If we're right, then the upshot is that quantum computers are not magic bullets: they might yield dramatic speedups for certain special problems (like factoring), but they won't tame the curse of exponentiality, cut through to the optimal solution, every time we encounter a Library-of-Babel-like profusion of possibilities. For (say) the Traveling Salesman Problem with a thousand cities, even a quantum computer—which is the most powerful kind of computer rooted in known laws of physics—might, for all we know, take longer than the age of the universe to find the shortest route.

The truth is, though, the biggest numbers that show up in math are way bigger than anything we've discussed until now: bigger than 10^122, or even $$ 2^{10^{122}}, $$ which is a rough estimate for the number of quantum-mechanical amplitudes needed to describe our observable universe.

For starters, there's Skewes' number, which the mathematician G. H. Hardy once called "the largest number which has ever served any definite purpose in mathematics." Let π(x) be the number of prime numbers up to x: for example, π(10)=4, since we have 2, 3, 5, and 7. Then there's a certain estimate for π(x) called li(x). It's known that li(x) overestimates π(x) for an enormous range of x's (up to trillions and beyond)—but then at some point, it crosses over and starts underestimating π(x) (then overestimates again, then underestimates, and so on). Skewes' number is an upper bound on the location of the first such crossover point. In 1955, Skewes proved that the first crossover must happen before $$ x = 10^{10^{10^{964}}}. $$ Note that this bound has since been substantially improved, to 1.4×10^316. But no matter: there are numbers vastly bigger even than Skewes' original estimate, which have since shown up in Ramsey theory and other parts of logic and combinatorics to take Skewes' number's place.

Alas, I won't have time here to delve into specific (beautiful) examples of such numbers, such as Graham's number. So in lieu of that, let me just tell you about the sorts of processes, going far beyond exponentiation, that tend to yield such numbers. The starting point is to remember a sequence of operations we all learn about in elementary school, and then ask why the sequence suddenly and inexplicably stops.

As long as we're only talking about positive integers, "multiplication" just means "repeated addition." For example, 5×3 means 5 added to itself 3 times, or 5+5+5. Likewise, "exponentiation" just means "repeated multiplication." For example, 5^3 means 5×5×5. But what's repeated exponentiation?
For that we introduce a new operation, which we call tetration, and write like so: ^3 5 means 5 raised to itself 3 times, or $$ ^{3} 5 = 5^{5^5} = 5^{3125} \approx 1.9 \times 10^{2184}. $$

But we can keep going. Let x pentated to the y, or xPy, mean x tetrated to itself y times. Let x sextated to the y, or xSy, mean x pentated to itself y times, and so on. Then we can define the Ackermann function, invented by the mathematician Wilhelm Ackermann in 1928, which cuts across all these operations to get more rapid growth than we could with any one of them alone. In terms of the operations above, we can give a slightly nonstandard, but perfectly serviceable, definition of the Ackermann function as follows:

A(1) is 1+1=2. A(2) is 2×2=4. A(3) is 3 to the 3rd power, or 3^3=27. Not very impressive so far! But wait… A(4) is 4 tetrated to the 4, or $$ ^{4}4 = 4^{4^{4^4}} = 4^{4^{256}} = BIG $$ A(5) is 5 pentated to the 5, which I won't even try to simplify. A(6) is 6 sextated to the 6. And so on.

More than just a curiosity, the Ackermann function actually shows up sometimes in math and theoretical computer science. For example, the inverse Ackermann function—a function α such that α(A(n))=n, which therefore grows as slowly as the Ackermann function grows quickly, and which is at most 4 for any n that would ever arise in the physical universe—sometimes appears in the running times of real-world algorithms. In the meantime, though, the Ackermann function also has a more immediate application. Next time you find yourself in a biggest-number contest, like the one with which we opened this talk, you can just write A(1000), or even A(A(1000)) (after specifying that A means the Ackermann function above). You'll win—period—unless your opponent has also heard of something Ackermann-like or beyond.

OK, but Ackermann is very far from the end of the story. If we want to go incomprehensibly beyond it, the starting point is the so-called "Berry Paradox", which was first described by Bertrand Russell, though he said he learned it from a librarian named Berry. The Berry Paradox asks us to imagine leaping past exponentials, the Ackermann function, and every other particular system for naming huge numbers. Instead, why not just go straight for a single gambit that seems to beat everything else:

The biggest number that can be specified using a hundred English words or fewer

Why is this called a paradox? Well, do any of you see the problem here? Right: if the above made sense, then we could just as well have written

Twice the biggest number that can be specified using a hundred English words or fewer

But we just specified that number—one that, by definition, takes more than a hundred words to specify—using far fewer than a hundred words! Whoa. What gives?

Most logicians would say the resolution of this paradox is simply that the concept of "specifying a number with English words" isn't precisely defined, so phrases like the ones above don't actually name definite numbers. And how do we know that the concept isn't precisely defined? Why, because if it was, then it would lead to paradoxes like the Berry Paradox! So if we want to escape the jaws of logical contradiction, then in this gambit, we ought to replace English by a clear, logical language: one that can be used to specify numbers in a completely unambiguous way. Like … oh, I know! Why not write:

The biggest number that can be specified using a computer program that's at most 1000 bytes long

To make this work, there are just two issues we need to get out of the way.
First, what does it mean to "specify" a number using a computer program? There are different things it could mean, but for concreteness, let's say a computer program specifies a number N if, when you run it (with no input), the program runs for exactly N steps and then stops. A program that runs forever doesn't specify any number.

The second issue is, which programming language do we have in mind: BASIC? C? Python? The answer is that it won't much matter! The Church-Turing Thesis, one of the foundational ideas of computer science, implies that every "reasonable" programming language can emulate every other one. So the story here can be repeated with just about any programming language of your choice. For concreteness, though, we'll pick one of the first and simplest programming languages, namely "Turing machine"—the language invented by Alan Turing all the way back in 1936!

In the Turing machine language, we imagine a one-dimensional tape divided into squares, extending infinitely in both directions, and with all squares initially containing a "0." There's also a tape head with n "internal states," moving back and forth on the tape. Each internal state contains an instruction, and the only allowed instructions are: write a "0" in the current square, write a "1" in the current square, move one square left on the tape, move one square right on the tape, jump to a different internal state, halt, and do any of the previous conditional on whether the current square contains a "0" or a "1."

Using Turing machines, in 1962 the mathematician Tibor Radó invented the so-called Busy Beaver function, or BB(n), which allowed naming by far the largest numbers anyone had yet named. BB(n) is defined as follows: consider all Turing machines with n internal states. Some of those machines run forever, when started on an all-0 input tape. Discard them. Among the ones that eventually halt, there must be some machine that runs for a maximum number of steps before halting. However many steps that is, that's what we call BB(n), the nth Busy Beaver number.

The first few values of the Busy Beaver function have actually been calculated, so let's see them. BB(1) is 1. For a 1-state Turing machine on an all-0 tape, the choices are limited: either you halt in the very first step, or else you run forever. BB(2) is 6, as isn't too hard to verify by trying things out with pen and paper. BB(3) is 21: that determination was already a research paper. BB(4) is 107 (another research paper). Much like with the Ackermann function, not very impressive yet! But wait: BB(5) is not yet known, but it's known to be at least 47,176,870. BB(6) is at least 7.4×10^36,534. BB(7) is at least $$ 10^{10^{10^{10^{18,000,000}}}}. $$

Clearly we're dealing with a monster here, but can we understand just how terrifying of a monster? Well, call a sequence f(1), f(2), … computable, if there's some computer program that takes n as input, runs for a finite time, then halts with f(n) as its output. To illustrate, f(n)=n^2, f(n)=2^n, and even the Ackermann function that we saw before are all computable. But I claim that the Busy Beaver function grows faster than any computable function.

Since this talk should have at least some math in it, let's see a proof of that claim. Maybe the nicest way to see it is this: suppose, to the contrary, that there were a computable function f that grew at least as fast as the Busy Beaver function. Then by using that f, we could take the Berry Paradox from before, and turn it into an actual contradiction in mathematics!
So for example, suppose the program to compute f were a thousand bytes long. Then we could write another program, not much longer than a thousand bytes, to run for (say) 2×f(1000000) steps: that program would just need to include a subroutine for f, plus a little extra code to feed that subroutine the input 1000000, and then to run for 2×f(1000000) steps. But by assumption, f(1000000) is at least the maximum number of steps that any program up to a million bytes long can run for—even though we just wrote a program, less than a million bytes long, that ran for more steps! This gives us our contradiction. The only possible conclusion is that the function f, and the program to compute it, couldn't have existed in the first place.

(As an alternative, rather than arguing by contradiction, one could simply start with any computable function f, and then build programs that compute f(n) for various "hardwired" values of n, in order to show that BB(n) must grow at least as rapidly as f(n). Or, for yet a third proof, one can argue that, if any upper bound on the BB function were computable, then one could use that to solve the halting problem, which Turing famously showed to be uncomputable in 1936.)

In some sense, it's not so surprising that the BB function should grow uncomputably quickly—because if it were computable, then huge swathes of mathematical truth would be laid bare to us. For example, suppose we wanted to know the truth or falsehood of the Goldbach Conjecture, which says that every even number 4 or greater can be written as a sum of two prime numbers. Then we'd just need to write a program that checked each even number one by one, and halted if and only if it found one that wasn't a sum of two primes. Suppose that program corresponded to a Turing machine with N states. Then by definition, if it halted at all, it would have to halt after at most BB(N) steps. But that means that, if we knew BB(N)—or even any upper bound on BB(N)—then we could find out whether our program halts, by simply running it for the requisite number of steps and seeing. In that way we'd learn the truth or falsehood of Goldbach's Conjecture—and similarly for the Riemann Hypothesis, and every other famous unproved mathematical conjecture (there are a lot of them) that can be phrased in terms of a computer program never halting.

(Here, admittedly, I'm using "we could find" in an extremely theoretical sense. Even if someone handed you an N-state Turing machine that ran for BB(N) steps, the number BB(N) would be so hyper-mega-astronomical that, in practice, you could probably never distinguish the machine from one that simply ran forever. So the aforementioned "strategy" for proving Goldbach's Conjecture or the Riemann Hypothesis would probably never yield fruit before the heat death of the universe, even though in principle it would reduce the task to a "mere finite calculation.")

OK, you wanna know something else wild about the Busy Beaver function? In 2015, my former student Adam Yedidia and I wrote a paper where we proved that BB(8000)—i.e., the 8000th Busy Beaver number—can't be determined using the usual axioms for mathematics, which are called Zermelo-Fraenkel (ZF) set theory. Nor can BB(8001) or any larger Busy Beaver number. To be sure, BB(8000) has some definite value: there are finitely many 8000-state Turing machines, and each one either halts or runs forever, and among the ones that halt, there's some maximum number of steps that any of them runs for.
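As a brief aside: the small Busy Beaver values quoted earlier really can be checked by direct simulation. Here's a minimal Python sketch that runs the known 2-state, 2-symbol champion machine and confirms it halts after exactly 6 steps, having written four 1s.

```python
# Simulate the 2-state, 2-symbol Busy Beaver champion: it halts after 6 steps
# with four 1s on the tape, matching BB(2) = 6.
# Transition table: (state, symbol read) -> (symbol to write, head move, next state)
CHAMPION = {
    ("A", 0): (1, +1, "B"),
    ("A", 1): (1, -1, "B"),
    ("B", 0): (1, -1, "A"),
    ("B", 1): (1, +1, "HALT"),
}

def run(machine, max_steps=10**6):
    tape = {}                     # sparse tape; unwritten squares hold 0
    head, state, steps = 0, "A", 0
    while state != "HALT" and steps < max_steps:
        symbol = tape.get(head, 0)
        write, move, state = machine[(state, symbol)]
        tape[head] = write
        head += move
        steps += 1
    return steps, sum(tape.values())

print(run(CHAMPION))   # (6, 4): six steps, four 1s written
```

Needless to say, nothing like this brute-force simulation survives once the machines are large enough to encode the consistency of set theory.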
What we showed is that math, if it limits itself to the currently-accepted axioms, can never prove the value of BB(8000), even in principle. The way we did that was by explicitly constructing an 8000-state Turing machine, which (in effect) enumerates all the consequences of the ZF axioms one after the next, and halts if and only if it ever finds a contradiction—that is, a proof of 0=1. Presumably set theory is actually consistent, and therefore our program runs forever. But if you proved the program ran forever, you'd also be proving the consistency of set theory. And has anyone heard of any obstacle to doing that? Of course, Gödel's Incompleteness Theorem! Because of Gödel, if set theory is consistent (well, technically, also arithmetically sound), then it can't prove our program either halts or runs forever. But that means set theory can't determine BB(8000) either—because if it could do that, then it could also determine the behavior of our program. To be clear, it was long understood that there's some computer program that halts if and only if set theory is inconsistent—and therefore, that the axioms of set theory can determine at most k values of the Busy Beaver function, for some positive integer k. "All" Adam and I did was to prove the first explicit upper bound, k≤8000, which required a lot of optimizations and software engineering to get the number of states down to something reasonable (our initial estimate was more like k≤1,000,000). More recently, Stefan O'Rear has improved our bound—most recently, he says, to k≤1000, meaning that, at least by the lights of ZF set theory, fewer than a thousand values of the BB function can ever be known. Meanwhile, let me remind you that, at present, only four values of the function are known! Could the value of BB(100) already be independent of set theory? What about BB(10)? BB(5)? Just how early in the sequence do you leap off into Platonic hyperspace? I don't know the answer to that question but would love to. Ah, you ask, but is there any number sequence that grows so fast, it blows even the Busy Beavers out of the water? There is! Imagine a magic box into which you could feed in any positive integer n, and it would instantly spit out BB(n), the nth Busy Beaver number. Computer scientists call such a box an "oracle." Even though the BB function is uncomputable, it still makes mathematical sense to imagine a Turing machine that's enhanced by the magical ability to access a BB oracle any time it wants: call this a "super Turing machine." Then let SBB(n), or the nth super Busy Beaver number, be the maximum number of steps that any n-state super Turing machine makes before halting, if given no input. By simply repeating the reasoning for the ordinary BB function, one can show that, not only does SBB(n) grow faster than any computable function, it grows faster than any function computable by super Turing machines (for example, BB(n), BB(BB(n)), etc). Let a super duper Turing machine be a Turing machine with access to an oracle for the super Busy Beaver numbers. Then you can use super duper Turing machines to define a super duper Busy Beaver function, which you can use in turn to define super duper pooper Turing machines, and so on! Let "level-1 BB" be the ordinary BB function, let "level-2 BB" be the super BB function, let "level 3 BB" be the super duper BB function, and so on. Then clearly we can go to "level-k BB," for any positive integer k. But we need not stop even there! We can then go to level-ω BB. What's ω? 
Mathematicians would say it's the "first infinite ordinal"—the ordinals being a system where you can pass from any set of numbers you can possibly name (even an infinite set), to the next number larger than all of them. More concretely, the level-ω Busy Beaver function is simply the Busy Beaver function for Turing machines that are able, whenever they want, to call an oracle to compute the level-k Busy Beaver function, for any positive integer k of their choice.

But why stop there? We can then go to level-(ω+1) BB, which is just the Busy Beaver function for Turing machines that are able to call the level-ω Busy Beaver function as an oracle. And thence to level-(ω+2) BB, level-(ω+3) BB, etc., defined analogously. But then we can transcend that entire sequence and go to level-2ω BB, which involves Turing machines that can call level-(ω+k) BB as an oracle for any positive integer k. In the same way, we can pass to level-3ω BB, level-4ω BB, etc., until we transcend that entire sequence and pass to level-ω^2 BB, which can call any of the previous ones as oracles. Then we have level-ω^3 BB, level-ω^4 BB, etc., until we transcend that whole sequence with level-ω^ω BB. But we're still not done! For why not pass to level $$ \omega^{\omega^{\omega}} $$, $$ \omega^{\omega^{\omega^{\omega}}} $$, etc., until we reach level $$ \left. \omega^{\omega^{\omega^{.^{.^{.}}}}}\right\} _{\omega\text{ times}} $$? (This last ordinal is also called ε_0.)

And mathematicians know how to keep going even to way, way bigger ordinals than ε_0, which give rise to ever more rapidly-growing Busy Beaver sequences. Ordinals achieve something that on its face seems paradoxical, which is to systematize the concept of transcendence.

So then just how far can you push this? Alas, ultimately the answer depends on which axioms you assume for mathematics. The issue is this: once you get to sufficiently enormous ordinals, you need some systematic way to specify them, say by using computer programs. But then the question becomes which ordinals you can "prove to exist," by giving a computer program together with a proof that the program does what it's supposed to do. The more powerful the axiom system, the bigger the ordinals you can prove to exist in this way—but every axiom system will run out of gas at some point, only to be transcended, in Gödelian fashion, by a yet more powerful system that can name yet larger ordinals.

So for example, if we use Peano arithmetic—invented by the Italian mathematician Giuseppe Peano—then Gentzen proved in the 1930s that we can name any ordinals below ε_0, but not ε_0 itself or anything beyond it. If we use ZF set theory, then we can name vastly bigger ordinals, but once again we'll eventually run out of steam.

(Technical remark: some people have claimed that we can transcend this entire process by passing from first-order to second-order logic. But I fundamentally disagree, because with second-order logic, which number you've named could depend on the model of set theory, and therefore be impossible to pin down. With the ordinal Busy Beaver numbers, by contrast, the number you've named might be breathtakingly hopeless ever to compute—but provided the notations have been fixed, and the ordinals you refer to actually exist, at least we know there is a unique positive integer that you're talking about.)
Anyway, the upshot of all of this is that, if you try to hold a name-the-biggest-number contest between two actual professionals who are trying to win, it will (alas) degenerate into an argument about the axioms of set theory. For the stronger the set theory you're allowed to assume consistent, the bigger the ordinals you can name, therefore the faster-growing the BB functions you can define, therefore the bigger the actual numbers.

So, yes, in the end the biggest-number contest just becomes another Gödelian morass, but one can get surprisingly far before that happens. In the meantime, our universe seems to limit us to at most 10^122 choices that could ever be made, or experiences that could ever be had, by any one observer. Or fewer, if you believe that you won't live until the heat death of the universe in some post-Singularity computer cloud, but for at most about 10^2 years. In the meantime, the survival of the human race might hinge on people's ability to understand much smaller numbers than 10^122: for example, a billion, a trillion, and other numbers that characterize the exponential growth of our civilization and the limits that we're now running up against.

On a happier note, though, if our goal is to make math engaging to young people, or to build bridges between the quantitative and literary worlds, the way this festival is doing, it seems to me that it wouldn't hurt to let people know about the vastness that's out there. Thanks for your attention.

This entry was posted on Thursday, September 14th, 2017 at 5:11 am and is filed under Adventures in Meatspace, Complexity.

196 Responses to "My Big Numbers talk at Festivaletteratura"

Comment #1 September 14th, 2017 at 5:24 am Haven't managed to read the post yet, sorry, but just noticed that Nature has a supplement this week with 5 or 6 articles on quantum computing, including one on quantum supremacy. http://www.nature.com/nature/supplements/insights/quantum-software/index.html The cover photo shows a quantum computing laptop in some sort of state of superposition I think.

I think there is a typo in the discussion of A(·): 5 pentated to 5 should be followed by 6 secstated (?) to 6, not 6 pentated to 6.

Michael #2: Thanks! Fixed.

sf #1: Yes, thanks, I saw, though I haven't yet read carefully! The quantum supremacy piece, by Harrow and Montanaro, looked quite nice.

I'm guessing that being able to read their name out loud four times in a row is one of the requirements to make a speech at Festivaletteratura?

fred #5: I just tried and it wasn't that hard, so I guess my speech is retroactively OK?

Joshua Zelinsky Says: "I estimated this morning that there are maybe ~10700 " This should be 10^700.

Quick typos: I assume there are $10^700$ first pages of novels, not 10700; and I think an exponentiation is missing from the value of BB(6). I recently encountered another connection between literature and large numbers: a journal article about Walt Whitman's fascination with big numbers (http://ir.uiowa.edu/wwqr/vol34/iss2/4/). Of course, what counts as "big" in this context is minuscule given the content of your talk (Whitman went up as high as decillions or so), but I figured I'd mention it for the benefit of Scott_2035.

The number of possible first pages in your post "~10700 possibilities", I guess it should be 10^700?

Comment #10 September 14th, 2017 at 8:09 am Joshua #7 and Craig #8: Thanks, fixed!
(The issue is that the WordPress editor doesn't include subscripts or superscripts, so I have to go through and insert them by hand, and not surprisingly in a post of this length I miss some.)

I guess the proportion of halting (BB(n)) vs non-halting programs for an n-state Turing machine has to be uncomputable as well?

fred #11: Yes. If that were computable, then we could dovetail over all n-state TMs until we knew which ones halted, and thereby solve the halting problem. In fact, the proportion of n-state TMs that halt is closely related to Chaitin's Omega, the famous uncomputable number.

Any parallel between the halting problem and our universe eventually contracting back into a singularity (= halting) or expanding forever (= never halting)? (yes, we're just here because some nerdy God is trying to compute BB(…))

Comment #14 September 14th, 2017 at 10:15 am fred #13: I don't think so, because the latter is computable just by looking at a couple simple parameters like the mass density and the cosmological constant.
And even more, if our physical world is indeed finite (bounded by 2^10^122, as you claim but which seems to be a bit too free an interpretation of black hole entropy etc, and especially given that we don't even know the ultimate theory and even that what is known is not really strictly mathematically proven to exist as a well defined object, as for instance no-one has proven the existence of any 3+1 dimensional nontrivial (let alone gauge) QFT within the largely ignored framework of Axiomatic quantum field theory, despite Jaffes' putting it on the Clay's Millenium list as his pet problem), then what sense do these things even make, if they are IN PRINCIPLE unknowable. We would then be compelled to abandon the widespread though admittedly naive notion of absolute mathematical truth, and accept more modest formalist or even quite questionable pragmatist approach, as indeed has been done in the last century. Truth seems to be a strange thing which is most clearly understood in mathematics and logic. It has always been strange to me how these abstract concepts reflect on the real life with which they apparently have only superficial resemblance (say philosophical materialism vs materialistic morality, as an oversimplified example, but such jumps, that philosophers seem to make all too often, were always puzzling), but it is the case that many Americans are nowadays experiencing "post truth" world they live in. Unfortunately, fictions and narratives, media and otherwise, have real world consequences that many have felt in the past, but while it is not surprising that lies, bias and propaganda are abundant in the realm of politics (politics seems to be a popular subject here too), it is somewhat surprising that truth can, on a much, much deeper level, be slippery even in mathematics. Comment #17 September 14th, 2017 at 12:03 pm "But apparently, even if say you work in the language of set theory, the definition might depend on axioms too since there is no absolute truth in mathematics" That's not what this means. There's lots of absolute truth. The problem is only that sufficiently powerful axiomatic systems cannot prove their own consistency. Flumajond #16: Your comment has too many threads to respond to all of them, but just to take what you use as your jumping-off point, I don't agree with the statement "there is no absolute truth in mathematics." I take it as self-evident, for example, that a given Turing machine either halts or doesn't halt (when started on a blank tape)—that there's some truth of the matter, regardless of whether we can prove it in the case that the machine runs forever. (Often, of course, we can prove it.) So in particular, Goldbach's Conjecture, the Riemann Hypothesis, etc. etc. all have definite truth-values, and the Busy Beaver function is perfectly well-defined. And so is what I called the "super BB function," and the "super duper BB function," and far vaster BB functions still, involving all the computable ordinals that I mentioned in the talk, and vaster ones that I didn't mention. It's only when we get to even more enormous ordinals—e.g., ordinals defined by dovetailing over all Turing machines and ZF proofs that they define well-orderings on the input strings—that we start having to worry about the axioms of set theory. You might find that strange, but in some sense it's no stranger than Gödel's theorem itself: in fact it's just an example of the Gödelian progression of theories, the fact that there's no single usable theory to prove everything we'd like. 
That statement is constantly misunderstood to mean that all truth is relative, that 2+2 can be 4 in one theory and 5 in another one, etc. etc., but that's not what it means at all (and is certainly not what Gödel himself believed!). On the contrary, all "non-pathological" theories (like PA, ZF, etc., assuming their consistency) are going to be arithmetically sound, which means in particular that if one proves a given arithmetical statement, then the other one will never disprove that statement. It's just that (1) some theories might be able to prove arithmetical statements that others aren't strong enough to prove (like, say, Con(PA), which is provable in ZF but not in PA), and (2) different theories could genuinely disagree about non- arithmetical statements (involving transfinite sets), like the Continuum Hypothesis or the Axiom of Choice. In the case at hand, it's true that, the greater the ordinal strength of a theory, the larger the numbers we can succinctly specify using that theory. But by definition, it's never going to happen that there are two numbers X and Y with clear arithmetical definitions, such that X>Y in one arithmetically sound theory and X<Y in a different one. "The further away they are from us, the faster they're receding—with galaxies far enough away from us receding faster than light, meaning that any signals they emit can never reach us." This is apparently not true, as the Hubble sphere expands https://www.youtube.com/watch?v=XBr4GkRnY04&feature=youtu.be&t=83 Btw, your blog post where you had some discussion with Roger Penrose, https://www.scottaaronson.com/blog/?p=2756 that I recently had a pleasure to has a very nice idea (is it yours – i've never seen that before) relating to decoupling and quantum consistency. Namely, if I got that right, the decoherence of macroscopic objects cannot be reversed, because of FUNDAMENTAL reasons that states get coupled with particles that are pass the Hubble horizon (quantum collapse has been mine also obsession since teen years, Schrodinger cats and "rising to the level of consciousness", and various interpretations, another problem dear to nerds which too few working physicists seemed to care about, despite its obvious fundamental importance, apparently at least until the dawn of quantum information era, when the subject has become much more widely popular); I don't know how this objection about expanding Hubble's horizon relates to that, but anyway the idea seems intriguing. Has anybody developed that further? My objection related to that is that the apparent "collapse" (or whatever your favorite interpretation is) can hardly be guaranteed to have these runaway degrees of freedom escaping to the Hubble horizon; one could imagine a containment of a system (of much smaller size), say some sort of mirrors etc, that, in some complicated way, reverse the decoherence. In other words, what would guarantee that such escaping would always happen? Presumably, the things would not depend on existence of a relatively distant (yet contained well inside the Hubble radius, or – given the objection explained by Veritasium, possibly even beyond it) reversing mechanism. "Why 45 billion light-years, you ask, if the universe itself is "only" 13.6 billion years old? Well, when the galaxies emitted the light, they were a lot closer to us than they are now! The universe expanded in the meantime." But this would mean that the expansion has to happen faster than the speed of light, no? I.e. 
space is stretching (or being created) faster than light can travel through it? So eventually such distant galaxies will "fade away" from view? Meaning that the night sky is getting darker and darker? Flumajond #19: I didn't watch the video, but, yes, it's a fundamental feature of de Sitter space that a given observer can only receive signals from a finite region, whose radius depends on the cosmological constant. (With no cosmological constant, it's true that our visible patch would grow without bound and eventually encompass anything we'd like.) As for mathematical truth, I see that Joshua #17 was more concise than I was! 🙂 fred #20: Yes. " don't agree with the statement "there is no absolute truth in mathematics." This is meant to mean, that not ALL of the statements of mathematics, have some absolute truth – for instance, continuum hypothesis etc. But if you use set theory in your definitions (as you did in extended beaver functions) some of the statements might depend on such cases. The question for things like Riemann conjecture, which is an existential (or universal depending what is true) statement, which can indeed be interpreted in terms of halting problem, more deceptively seems to have some absolute truth, and indeed we all seem to believe that. But, if your idea about the finite world is correct, not even that would be clear – you cant plug in the counterexample that is too big. For universal formulas, even for number theory, it might be possible that the truth of such a statement might NEVER be known, not in a set theory, nor in any extension that we can work out. So, even if we accept that all true purely existential statements are true in absolute sense, for universal statements truth might always escape us, though it is reasonable to assume that truth exists nevertheless. But then, once we have multiple alternating quantifiers, even for number theory language, we don't really have justification for absolute truth, as the Skolem functions are infinite objects so even existential intuition fails. Of course, most mathematicians, may I say all, do believe in some sense in the absolute truth of such elementary statements (and among scientists most mathematicians believe in God too apparently), but isn't it all just a very useful delusion? Flumajond #23: Again I feel like you're mixing together too many different concepts. As an example, I'd find it perfectly clear and comprehensible if somebody claimed, "the Riemann hypothesis is false, but the counterexample is too big to be checked in the observable universe." And that would strike me as an interestingly different state of affairs from "the Riemann hypothesis is true." And crucially, I don't need to possess a means to distinguish those two states of affairs, in order to recognize them as being different. Instead, we can simply say: the same conceptual apparatus that lets us discuss these matters in our world, would clearly still let us discuss them in a hypothetical world where astronomical observations had revealed a much smaller value for the cosmological constant. (Indeed, before 1998, that's the world almost everyone assumed we lived in!) And in that other, perfectly sensible world, there would be an experiment to distinguish the two states of affairs. " I didn't watch the video" The video is short (this part is only minute or so) and from Veritasium, who is a physicist and popularizes science and he specifically addresses this point. He claims that it is a misconception. 
What does he refer to – I doubt that he would put this in at random, though it is possible that he is wrong (just as you might be wrong too) – perhaps he is talking about ACCELERATING expansion, not static expansion like in de Sitter space; your willingness to dismiss people like this is, I must say, quite disappointing, but that's up to you. It is not becoming to underestimate people, and for that to be your first instinct is never a good thing…

Scott #23: I understand perfectly what you mean, but there is a problem. Mathematicians (Penrose wrote about that in one of his two books about black holes etc.) assume the existence of an ideal world, and in such a world, what you say makes perfect sense. But what if such a world really is an illusion? As imaginary as God is to atheists (though much more precise). In that case, it wouldn't make sense to talk about objects which have no bearing (direct OR INDIRECT) on our world. And on the other side, if you for instance agree that the Continuum Hypothesis does not have some absolute truth value (as many mathematicians indeed do, though by no means all), then what distinguishes it from, say, properties of some Skolem functions, which also might not "exist" even if the physical world is countably infinite? And if you assume the physical world is finite (which is perhaps not so widely accepted), then the same objection goes for large numerical counterexamples whose existence we cannot know directly or prove indirectly. If you assume the existence of an ideal world of mathematics (as indeed I do too, on a less cynical day), then all is clear. But there is a possibility that all this is only an illusion, however powerful. And then you seem no different (bar precision and formal systems) than a religious guy seems to an atheist, or a terrorist who hopes for 72 virgins in heaven (which, for all we know, might be as real as unreachable cardinals). Granted, mathematics is useful, probably much more so than many sorts of religious or political delusions, though the latter occasionally have a bit more impact, at least in the short run.

DavidC Says:
> a given Turing machine either halts or doesn't halt
Naively, it seems plausible to me that that statement is getting close to completely characterizing which mathematical statements have definite truth values. Maybe anything that depends only on whether finitely many Turing machines halt? But then I'm not sure what you're allowed to depend on in establishing that dependence. And why not infinitely many? If I describe a sequence of Turing machines, then if there's a fact of the matter about whether each one halts, surely there's also a fact of the matter about whether all of them halt. The above kind of all sounds like nonsense, though. I'd enjoy it if there's a way to make this clearer.

Comment #28 September 14th, 2017 at 1:04 pm
Flumajond #25: OK, I finally watched the video—I really don't like being linked to videos, especially if they require sound, but whatever. The narrator is talking about the Hubble horizon, which is different from the cosmological event horizon that I was talking about; see here for the distinction. I hope that clears this up.

DavidC #27: The way to make it more precise is the arithmetical hierarchy. As an example, I would also ascribe a definite truth-value to any Π2 sentence, which is a sentence that asserts that a given "super Turing machine" (that is, Turing machine with an oracle for the halting problem) runs forever.
As examples, the P≠NP conjecture and twin prime conjecture can both be put into that form. But if you accept that, then you might as well go further, and ascribe definite truth-values to Π3-sentences (about the halting of Turing machines with oracles for the super halting problem), Π4-sentences, and so on. Indeed, you might as well go all the way up to Πα-sentences, where α is any computable ordinal that you understand. My intuition starts to falter only at the computable ordinals that I don't understand (for example, ones that require the consistency of large cardinal axioms to construct). Surely you don't mean to attribute the best available Hellenistic astronomy to Eratosthenes, who measured the size of the Earth, but to Aristarchus, who measured the size of the moon and the sun, proposed heliocentrism, and suggested that the fixed stars were very far away. Douglas #30: Oops, thanks, fixed! This is, I believe, patently wrong, and is not even a philosophical point. Of course, what is meant by "theories" is questionable (since Con(ZFC) and its negation are both consistent with ZFC, but only one extension is plausibly "theory" in your sense). Perhaps you subscribe to Penrose's misguided intuition (that you had a blog about) that we have some mysterious insight into transcending one theory by another. That is quite dubious. The modern set theory postulates some large cardinals, axioms say LC, and it is not at all clear that those theories are consistent. But ZFC+Con(ZFC+LC) proves one arithmetical statement, and ZFC+NOT Con(ZFC+LC) proves another. Of course, if LC is not consistent with ZFC only second one is consistent theory, but otherwise both are. Both can be plausibly TRUE (in your absolute sense), as we have no way to determine if LC is sound. But one claims existence of some natural number which might or might not exist, and the other claims the opposite. If you go to more complicated (say universal existential formulas of arithmetic), such connections might also exist. In any case, there is no reason to assume that these are the ONLY arithmetical formulas that are influenced by transfinite objects. As Hilbert's failed program to finitize mathematics showed, there is NO WAY to eliminate influence of large, infinite objects on proof of truth about finite ones. Infinite mathematics is fundamental in proving at least some statements purely arithmetic, and many are shown to be provable in ZFC, but not in the Peano arithmetic; there is absolutely no reason to assume that the same does not happen when we go to higher cardinals or whatever transfinite generalization of set theory, though such statements are difficult to find or prove; the Goedel type ones are just the easiest ones and are tip of an iceberg, more than a rule, I would suspect. Flumajond #32: Firstly, the sentence of mine that you quoted doesn't appear to be the sentence that you disagree with. Secondly, I was careful in what I said. I was talking only about arithmetically sound theories, which are theories that can prove arithmetical statements only if those statements are metatheoretically true. (So, PA and ZF are presumably both examples, but PA+Not(Con(PA)) is not.) Some such theories can prove more arithmetical statements than others—and different theories might even prove incomparable sets of arithmetical statements—but by definition, one such theory will never prove an arithmetical statement to be true that another one proves to be false. If they did, then one or the other wouldn't be arithmetically sound. 
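For orientation, the quantifier shapes behind the Π-notation in these comments are, schematically (with φ standing for a condition a computer can check outright):

```latex
% Schematic shapes at the bottom of the arithmetical hierarchy
% (phi is a condition a computer can check outright)
\begin{align*}
\Sigma_1 &: \ \exists n\, \varphi(n)               && \text{e.g. ``this Turing machine halts''} \\
\Pi_1    &: \ \forall n\, \varphi(n)               && \text{e.g. ``this Turing machine never halts''} \\
\Pi_2    &: \ \forall n\, \exists m\, \varphi(n,m) && \text{e.g. } \forall n\, \exists p > n\ (p \text{ and } p+2 \text{ both prime})
\end{align*}
```

And, as in the comment above, a Π2 sentence can equivalently be read as asserting that a certain Turing machine with a halting oracle runs forever.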
Flumajond #32: What I meant in the last post is that the emphasis on NON-arithmetical is wrong, if it means "only" non-arithmetical, as the emphasis (which doesn't show in my repost) suggests.

Yes, that clarifies it. You didn't explicitly say which horizon by name, though what you said does correspond to the Hubble horizon, not the particle horizon. However, this seems to be a subtle point, and Veritasium refers to the paper https://www.cambridge.org/core/journals/publications-of-the-astronomical-society-of-australia/article/expanding-confusion-common-misconceptions-of-cosmological-horizons-and-the-superluminal-expansion-of-the-universe/EFEEEFD8D71E59F86DDA82FDF576EFD3 https://arxiv.org/pdf/astro-ph/0310808.pdf which exactly explains the distinction, which is perhaps sometimes ignored for the sake of simplicity (as in your text).

OK, that clarifies it, I missed "arithmetically sound", though "arithmetically sound" is rather restrictive. We have no way of knowing if a theory is "arithmetically sound", which is exactly my point. The only exception is claiming consistency over ordinals (what Penrose thinks separates us from robots), theories which are all arithmetically sound by construction, GIVEN that we know what the ordinal is, but anyway that is not how large cardinal axioms are constructed. And one large cardinal axiom might transcend many nested consistency-extensions (as it gives a model for them all). But on the other hand, it might be inconsistent. Hence my original post (in which perhaps I was not clear either).

Scott #29: makes sense, thanks!

Flumajond #35: How does this https://en.wikipedia.org/wiki/Banach%E2%80%93Tarski_paradox relate to this notion of "arithmetically sound"?

fred #37: The Banach-Tarski paradox is not an arithmetical phenomenon; it crucially depends on the Axiom of Choice.

I don't think it is related. ZFC is arithmetically sound, if I understand it correctly, and Banach-Tarski is a theorem of ZFC which is about infinite objects, not arithmetic, and which depends on the Axiom of Choice. So, not really related.

"if we have a system with (say) a thousand interacting particles, then the rules of quantum mechanics say we need at least 2^1000 amplitudes to describe it"

Since 2^1000 amplitudes are needed to describe the system mathematically, I'm still not understanding how "nature" can have so much hidden expressive power. I get that this is all amounting to a bunch of plain/boring probabilities in the end, but it's still the case that those amplitudes aren't reducible to anything simpler/cheaper. This is unlike anything in the classical models, where there are at most 1000^2 (one-on-one) interactions.

I'm still mystified by the fact that Claude Shannon showed that most Boolean functions require exponential-size circuits to compute, yet the best lower bound we have for any explicit function is only about 5n. Scott, could you give us your take on why this is so?

William #41: There's no real mystery about it! If you just want to show that most Boolean functions are hard, then it's enough to point out that there are lots of Boolean functions, and not enough small circuits to cover all of them. By contrast, if you want to show that your favorite Boolean function is hard, then you need to rule out every possible special reason why your function might be one of the easy ones (and given the fact that you succinctly described your function somehow, in order to name it, it's clearly not a random one). You've therefore called upon your head all the difficulties of P-vs.-NP-like questions.
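To spell out that counting point, here is a rough back-of-the-envelope sketch in Python, using a deliberately crude overcount of the number of circuits with s gates:

```python
# Shannon-style counting: there are far more n-bit Boolean functions than small circuits.
# Crude overcount: each of s binary gates picks 2 inputs from the n inputs plus the s gates,
# and one of the 16 two-bit operations; multiply by s for the choice of output gate.

def num_boolean_functions(n):
    return 2 ** (2 ** n)

def circuits_upper_bound(n, s):
    return s * (16 * (n + s) ** 2) ** s

n, s = 10, 50
print(num_boolean_functions(n) > circuits_upper_bound(n, s))  # True
# Even with this generous overcount, circuits with 50 gates cover only a
# vanishing fraction of the 2^1024 functions on 10 bits.
```

So a size bound like s = 50 already fails for most 10-bit functions purely by counting, with no need to say anything about which particular functions are the hard ones, which is exactly the asymmetry described above.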
But, having said that, whether it's actually true that we have no superlinear lower bound for an "explicit" Boolean function depends entirely on what you mean by the word "explicit." If you read my P vs. NP survey, you'll learn how it's possible to construct Boolean functions in complexity classes like EXPSPACE that require exponential-size circuits, or, scaling down, in SPACE(n²) that require superlinear-size circuits. So then the problem is "merely" that we don't know how to do the same for explicit Boolean functions in the complexity classes we care about more, like NP.

(and given the fact that you succinctly described your function somehow, in order to name it, it's clearly not a random one)

Well, I don't think that is necessarily true, because it's trivial to design a pseudorandom generator where each output bit is the result of a different random Boolean function.

atreat Says:
Scott, is "arithmetically sound" well defined in the way you are using it? I guess what you are saying is that there exists a set of mathematical theories of comparable power to ZFC that all agree on the truth value for any finite arithmetical statement, and this is a form of absolute mathematical truth. Did I get that right? Any theory which did not agree simply would not belong to this set. But that is a rather meager form of absolute mathematical truth, no? Especially if there exist theories that don't belong to that set but are otherwise consistent and interesting.

Btw, I love this talk you gave and am envious of the folks in the audience. However, I feel like I have read or watched you give a very similar talk fairly recently. What am I remembering? It certainly was not the blog post you made as a teenager…

William #43: No. If each output bit is the result of a different random Boolean function, then you're just talking about a random Boolean function. A pseudorandom function is a perfect example of what I was talking about: something that might look random but actually has a short description.

atreat #44: Aha, this is the single most important point I'd like to get across about the philosophy of mathematics. Namely: even to talk about math in the first place already presupposes that we understand what's meant by arithmetical soundness. For example: for a theory to be sound about Π1-sentences just means that, if the theory says that a Turing machine halts, then that Turing machine indeed halts. (The opposite case—if it says the machine runs forever, then it runs forever—already follows from the theory's being consistent.)

OK, you object, but why am I allowed to talk so unproblematically about whether the Turing machine "indeed halts," independently of any theory? Here's the key: because if I'm not allowed, then I'm also not allowed to talk about the theories themselves, and what they prove or don't prove, which are likewise questions about the behaviors of various computer programs. But I was already talking unproblematically about the latter, so to do it in the one case but not the other would be a double standard.

Note that this argument doesn't generalize to questions about transfinite set theory, like the Axiom of Choice or the Continuum Hypothesis. But I'd say that it generalizes to all arithmetical questions—showing that, if we're unwilling to talk about arithmetical soundness, then we shouldn't be willing to talk about consistency or provability either.

pupmki Says:
Flumajond #35: There is another point: "arithmetically sound" depends on the absolute truth of arithmetical statements.
If you reject that such absolute truth exists, then to talk about "arithmetically sound" theories is in fact meaningless. Apparently, Scott does not consider the possibility that there might be two natural extensions of ZFC, that have contradicting arithmetical theorems. But, it is for instance known that Reinhardt cardinals are consistent with ZF but not with the axiom of choice, on the other hand we have measurable and supercompact cardinals etc. So, such a possibility cannot be excluded in principle, although it is not clear if there are arithmetical discrepancies between the currently used theories. For the "realist" side there is Penelope Maddy. Defending the axioms: on the philosophical foundations of set theory. Oxford University Press, Oxford, 2011. with excellent review of large cardinals http://www.cs.umd.edu/~gasarch/BLOGPAPERS/belaxioms2.pdf While anti-Platonist stance has also its proponents: Hartry H. Field. Realism, Mathematics, and Modality. Blackwell Publishing, Oxford 1991. Hartry H. Field. Science Without Numbers: The Defence of Nominalism, Princeton Legacy Library, Princeton University Press, 2015. This is essentially a longstanding philosophical dispute, problem of universals, dating back to Plato and Aristotle. Manindra Agrawal has a paper that talks about using psuedorandom generators being used to separate P from NP. In the paper he says that if you have a cryptographically secure psuedorandom generator , that this implies strong lower bounds, but I am not sure what this means. William #48: Yes, he's right. Indeed, it's fairly obvious: it's just another way of saying that if P=NP, then cryptographically secure PRGs would be impossible, since guessing the seed is an NP problem. pupmki #47: Apparently, Scott does not consider the possibility that there might be two natural extensions of ZFC, that have contradicting arithmetical theorems. No, it's not that I don't consider the possibility, it's that I consider it and reject it! 😀 For why, see my comment #46. (Have any examples of ZFC extensions with conflicting arithmetical theorems seriously been proposed by anyone? As you yourself acknowledge, Reinhardt cardinals etc. won't do it. Whenever forcing is used, we even have a result, the Shoenfield absoluteness theorem, to rule out the possibility that the different set theories we construct will disagree on anything arithmetical.) Scott # 49 So to prove that the seed of a PRG can't be guessed , would it suffice to have a PRG that" looks like" it has a string of infinite length as its seed, or a seed that is at least as long as some multiple of the number of the output bits of the generator? Like say that the first million bits of output is the result of a key 128 million bits long? Scott #50,#46: Your argument that we already accept some sort of "arithmetic soundness", if we are talking about theories themselves, and hence you extend that to "arithmetic soundness" in general, is problematic. This properly can be applied only to Π1 sentences, as we could not have a consistent theory that claims an "untrue" Π1 sentence. However, once we come higher in the arithmetical hierarchy, things become much less apparent. What does a Π2,2 sentence actually say? It can be interpreted as a 2 step game, but the set of possible strategies is uncountably infinite, which are objects of different kind entirely than the natural numbers. 
Hence, you might believe in the "absolute truth" of claims about Turing machine haltings and proofs and theorems, yet in this countably infinite world you can fully reject that higher in the arithmetical hierarchy you have absolute truth, just as you can imagine rejecting the absolute truth about transfinite objects. I'm not sure forcing covers all the cases of possible natural extensions of ZFC, and you are wrong if you think that Shoenfield covers the whole of the arithmetical hierarchy. Here is a concrete example of a Π^1_3 arithmetical sentence that is not absolute for forcing models: https://mathoverflow.net/questions/71965/a-limit-to-shoenfield-absoluteness As you can see from the overflow post, such questions have indeed been considered, and while the Shoenfield absoluteness theorem is limited (low levels of the arithmetical hierarchy, forcing aside), there are some similar but stronger results. But it is a fact that this IS a concern, and that while for some limited sorts of extensions and some levels of the arithmetical hierarchy there are absoluteness results, there are also counterexamples, and there is no unique way to proceed and justify absolute "arithmetic soundness" on purely mathematical grounds.

William #51: In order to have a useful PRG at all, the output has to be longer than the seed—so that's always the assumption. Otherwise, you might as well just dispense with the PRG and use true random bits. There are functions that efficiently map n random bits to n+1, or 2n, or n², or even 2^n pseudorandom bits, in such a way that we believe no polynomial-time algorithm can distinguish the output from random with non-negligible bias. But this requires making cryptographic assumptions (and would be impossible in a world where P=NP).

I've got an (admittedly vague, fuzzy, intuitive) idea for proving that the continuum hypothesis is true. I think there's a way of classifying all numbers into a finite hierarchy such that each new level of number 'emerges' from the level below it, analogous to the way levels of organization can emerge in the physical world (for example cells>tissues>organs). Not sure exactly how to do this, but intuitively, it seems to vaguely make sense. My conjecture is that you can classify all numbers into only 3 different types (levels). That is to say, I think there are (in some well-defined sense) only 3 types of numbers! Each level of number is 'cleanly separated' from the layer beneath, in the sense that you could define the higher levels without worrying about the details of how numbers on lower levels are defined (again, analogous to substrate independence for layers of organization in the physical world).

Proposed 3 levels:
1) All numbers with cardinality of the reals and above? https://en.wikipedia.org/wiki/User:Zarzuelazen/Books/Reality_Theory:_Set_Theory%26Analysis
2) Algebraic? https://en.wikipedia.org/wiki/User:Zarzuelazen/Books/Reality_Theory:_Algebra%26Number_Theory
3) https://en.wikipedia.org/wiki/User:Zarzuelazen/Books/Reality_Theory:_Theory_of_Computation

The idea is that the reals are on the base or foundational level, the integers are 'built on top of' the reals on the 2nd level of abstraction (a subset of the algebraic numbers), and there's a higher level of abstraction still on a 3rd level. So if we can add plausible axioms implying that there really are only 3 types of numbers, which can be cleanly separated into different levels of abstraction like this, this seems to strongly imply the continuum hypothesis?
(No level between 1 and 2) There should be "analytical hierarchy" instead of "arithmetical" in my post above. Meaning, quantification is over SETS of integers. Then there are these counterexamples of Σ1,3 sentences. If we restrict to numbers alone, i.e. "arithmetical", I'm not sure that such examples are known for natural extensions, and Shoenfield theorem says if the ordinals are the same as in forcing, then so are arithmetical statements (but not analytical i.e. second order arithmetical). But if we are using new cardinals, the set of ordinal number of course possibly changes, hence there is nothing about absoluteness of arithmetical soundness that is guaranteed. It is unclear weather arithmetical soundness holds in the natural extensions of ZFC that are used (beyond forcing), but in principle there is no reason not to have a counterexample in the possible future extensions if not in those currently studied. Mateus Araújo Says: It feels a bit like cheating to allow the computer program to specify a number by its runtime, instead of having to actually write down the number in the tape (in the solution to Berry's paradox). If one changes this, do we get something interesting, like the largest computable number that can be computed by a Turing machine with n symbols? I've found this paper, from the abstract (link below): We prove that the satisfaction relation N⊨φ[a] of first-order logic is not absolute between models of set theory having the structure N and the formulas φ all in common. Two models of set theory can have the same natural numbers, for example, and the same standard model of arithmetic ⟨N,+,⋅,0,1,<⟩, yet disagree on their theories of arithmetic truth https://arxiv.org/abs/1312.0670 This is a rather elementary paper, and does not consider large cardinal constructions so these models of set theory might not be "natural", but it illustrates the point that (even when you have Π1 agreement), higher order arithmetical statements are not determined. The two set theories (or the parts used for proofs) might both seem natural by different kind of transfinite intuitions, in principle (this is not a mathematical statement). Mateus #56: Good question! I invite you to prove that that doesn't change anything significant. (Hint: If there's a program that runs for T steps, then one can write another, slightly longer program that prints T as its output.) gentzen Says: If you encode a natural number as a finite string, then you already put in the finiteness implicitly via the finiteness of the string. Kolmogorov complexity also does this, by ignoring the information required for encoding the length of the string. Chaitin's Omega can only be defined if you first fix this omission, by only allowing prefix encodings (where the length is given implicitly). But even here, the finiteness might still be put in implicitly, because being a prefix encoding doesn't necessarily imply that the question 'whether a given string is a valid encoding' is decidable. (Does it really matter at that point? Probably not, since you could just use a decidable prefix encoding. So I stop here.) Scott #58: Ah. That shows it's not a good idea for me to write comments before 0500 (on the other hand, apparently you can do it between 0500 and 0600). Give me a double-taped Turing Machine, and it's trivial to write down the running time in unary in the second tape. With a bit better encoding one can do it in binary in a single tape, but it's pointless to do it explicitly. 
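For concreteness, one decidable prefix-free encoding of the kind gentzen mentions (length in unary, then the payload) can be sketched like this; it is just one simple choice among many:

```python
# A decidable prefix-free code: the payload's length in unary, a '0' separator, then the payload.
# No valid codeword is a proper prefix of another, and validity of a string is decidable.

def encode(bits: str) -> str:
    return '1' * len(bits) + '0' + bits

def decode(code: str):
    n = code.find('0')                    # the unary length ends at the first '0'
    if n < 0 or len(code) != 2 * n + 1:
        return None                       # not a valid codeword
    return code[n + 1:]

assert decode(encode('10110')) == '10110'
# spot-check prefix-freeness on a few codewords
words = [encode(format(i, 'b')) for i in range(1, 32)]
assert not any(a != b and b.startswith(a) for a in words for b in words)
```

The length information is carried by the code itself, which is the sense in which the finiteness is "put in implicitly."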
Let me try again: can one get a sensible definition of a fastest-growing computable function? Say by requiring that the Turing machine corresponding to a "computable busy beaver" of size n be generated by a polynomial-time uniform circuit family? Mateus #60: I'm on sabbatical in Tel Aviv this year. It's afternoon here. Alas, I don't know of any function that could answer to the name of "fastest-growing computable function." Indeed, if f(n) is computable, then so is Ackermann(f(n)) and so on. Yes, you could require the "Busy Beaver" Turing machines to be uniformly computable, and if you did, that would limit you to computable functions only—but it still won't mean there's any fastest-growing computable function among them. Incidentally, it's true that there's "plenty of room between Ackermann and Busy Beaver" (e.g., computable functions that grow way faster than Ackermann, and even uncomputable functions that grow way slower than Busy Beaver, which you can construct using priority arguments). Since actual computers have finite memory, would it be interesting to explore "theoretical" math on finite sets of integers (modulo math on naturals)? Or would this be just too limiting? Scott #61: Please ignore my previous post, it was just an error. I wanted to write instead: Aha! I knew you didn't actually have superpowers ? But I don't see how that's a problem. One needs more bits to write down Ackermann(f(n)) than f(n), and even more bits to write down Ackermann(Ackermann(f(n))), so that is taken care of by requiring the Turing machine to be finite. I'm just afraid that the "fastest-growing" sequence is not well-defined. Let 'n' be the size of your program and f(n) its runtime. Now require your sequence of programs of size 'n' to be uniformly computable. So each program to define the sequence of programs of size 'n' defines a different function f(n). But now how do we choose what is "fastest-growing" f(n)? fred #62: The shelves of the math library groan under the weight of what you can do with finite fields and other finite algebraic structures. Of course, the big limitation is that you then don't have a natural notion of "greater than" or "less than." Mateus #63: Alas, my sole superpower is clarity of thought. I'd surely pick a different one if I got to live over again. Yes, it's possible to have two growth rates, neither of which asymptotically dominates the other: for example, f(n)=n versus a function that's usually flat, but occasionally jumps from way less than n to way more than n at extremely widely spaced values of n. But as far as I know, essentially all the growth rates that "naturally arise in practice" can be totally ordered from slowest to fastest. Yes, you could consider, e.g., the fastest-growing computable function definable by a program at most n bits long, for any fixed n of your choice. Of course, as you increase n, these functions will increase without bound (even as Busy Beaver still dominates all of them). I agree that such functions don't arise in practice. Marginally related tidbit: One can write down explicitly (using just a few logs and a little playing with trig functions) a function f(x) such that: 1) f(x)>0 for all x. 2) f(x) is infinitely differentiable for all x> 0. 3) f'(x) >0 for all x>0. 4) f(x) is not O(x) and x is not O(f(x)). Finding such a function is a cute problem. I don't know if one can make an elementary function that does this and has f^(n)(x) >0 for all n. 
Actually, disregard last sentence; you obviously can't since f"(x) being positive would mean that once you speed up again to get large than x by a bit you won't be able to slow down again. Sniffnoy Says: Scott #46: I just want to say, that is an excellent point and I'm surprised I haven't seen it made before. Well, not in that form, anyway — I guess I've seen people such as myself making the contrapositive point in the form of "aaargh, logic makes no sense because what metatheory are we even using to judge statements about what proves what in the first place??" But you take that contrapositive here and it becomes an interestingly different point of view! 🙂 John Sidles Says: Scott observes (circa #65) "As far as I know, essentially all the growth rates that "naturally arise in practice" can be totally ordered from slowest to fastest." This observation has parallels in computability theory and complexity theory. In computability theory there is … Viola's theorem Given an integer \(k\) and Turing machine \(M\) promised to be in \(P\), the question "Is the runtime of \(M\) of \(\mathcal{O}(n^k)\) with respect to input length \(n\)?" is undecidable. Supposing that, in complexity theory as in function theory, the decidability of "all the growth rates that naturally arise in practice" motivates the definition of \(P'\) as the restriction of the complexity class \(P\) to algorithms that "naturally arise" in this sense. In this light, the question \(P' \overset{\scriptstyle ?}{\subset} NP\) is both theoretically natural and practically interesting (and has been surveyed on TCS StackExchange). In particular, it is not known (AFAICT) whether the well-known obstructions to proving \(P \overset{\scriptstyle ?}{\subset} NP\) also obstruct proofs of its natural restriction \(P' \overset{\scriptstyle ?}{\subset} NP\). Scott #46, I want to make sure I understand your argument. Is it that we shouldn't consider pathological theories, whatever they are, *math* theories? That when we talk about *math* theories we're implicitly confining ourselves to those theories that agree with ZFC for all arithmetical truth value statements? And this is what we should venerate ^^^^^^ as absolute mathematical truth? That still seems meager to me. It also seems to suggest that whatever we are thinking about when it comes to transfinite numbers, it isn't *math*. Atreat #70: I think the main thing you haven't yet understood is that I don't give ZFC any sort of exalted status. At the end of the day, I'm happy to talk unproblematically about "arithmetical truth," independently of ZFC or any other system. That is, I draw a line in the sand, and say: there might not be a fact of the matter about the Continuum Hypothesis, but there's certainly a fact of the matter about the Goldbach Conjecture, and more generally, about whether a given Turing machine halts or doesn't halt, and (I would say) about higher-level arithmetical statements as well. If we're unwilling to stick our necks out that far (i.e., not very), then I don't understand why we're even bothering to use language to talk about math in the first place. For present purposes, a formal system could be called "non-pathological," if all the arithmetical statements that it proves are actually true ones. E.g., if the system proved Goldbach's Conjecture was false, you could write a program to search for counterexamples, and with a big enough computer in a big enough universe, sure enough the program would eventually halt with the predicted counterexample. 
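For instance, a minimal sketch of such a counterexample search, in Python, just to fix ideas:

```python
# Search for a counterexample to Goldbach's conjecture: an even number >= 4
# that is not the sum of two primes. Halts if and only if such a number exists.

def is_prime(m: int) -> bool:
    if m < 2:
        return False
    d = 2
    while d * d <= m:
        if m % d == 0:
            return False
        d += 1
    return True

def is_sum_of_two_primes(n: int) -> bool:
    return any(is_prime(p) and is_prime(n - p) for p in range(2, n // 2 + 1))

n = 4
while True:
    if not is_sum_of_two_primes(n):
        print("Counterexample:", n)   # the predicted counterexample, if the theory was right
        break
    n += 2
```

If the conjecture is true, the loop never terminates; if it's false, the program halts on the least counterexample. That dichotomy is what the "non-pathological" requirement checks a theory against.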
(For higher-level arithmetical statements, as pupmki correctly pointed out, one would need to tell a slightly more involved story, for example involving interaction with a prover, but I'm fine with that as well.)

I'm almost certain that ZFC is a "non-pathological" theory in the above sense. But—here's a key point—if we somehow learned that wasn't the case, then so much the worse for ZFC. We'd then simply want to discard ZFC—which was only ever a tool to begin with, for capturing truths that made sense prior to ZFC or any other system—and find a better foundation for math.

To address your other point, I think transfinite set theory clearly counts as math (it proves nontrivial theorems, is done in math departments … how much more do you want? 🙂 ). But you're right that it's an unusual kind of math, which in some sense is less about the objects themselves (e.g., the transfinite sets), than it is about the formal axiomatic systems that we use to discuss those objects. I wouldn't say the same about most areas of math, which for the most part deal with questions that could ultimately be "cashed out" into questions of arithmetic, for example about whether various computer programs halt (or halt given access to a halt oracle, etc.). Again, the latter sort of questions I regard as having a definite meaning, independent of whatever formal systems we use to discuss the questions.

Eric Habegger Says:
Scott, this is a very fine exposition of concepts that I did not understand before. One big advantage of someone like myself (who is not formally trained on this stuff) is that when I learn about it, things just pop out to me that don't seem to pop out to others formally trained in it. I'm thinking about the culture of "shut up and compute" that exists in education. What pops out to me is that you are saying there is a high probability that a quantum computer's only real problem-solving ability may be to speed up quantum simulations. That is, its only major accomplishment in the real world may be to render the encryption of information obsolete. Doesn't this bring up some big issues! Is the upside of creating a quantum computer (assuming this probability is valid) worth the downside? It reminds me of the invention of the thermonuclear bomb. So far its only task is to kill people, or to threaten to kill people. Is it possible that a quantum computer will have an even more limited use? I can't see it even being used as a deterrent as a thermonuclear bomb can be, knowing what I know about the lack of imagination of most people.

Dan Staley Says:
Okay, I've got what I assume is a really naive question – how can the amount of information you can store in a region scale only with surface area, and not with volume? I mean, we can look at a big sphere – it has some amount of surface area, so we can store some amount of information in it. Then we draw a ribbony, fractal-looking region contained in the sphere with a much larger surface area than the sphere itself. It has more surface area (by an arbitrarily large factor) – does that mean we can store more in it? Can we really store more information by restricting to a sub-region?

I think I get it. Your relationship to and veneration of ZFC is analogous, say, to your view of Newton's theory of gravitation. The moment that ZFC contradicted in some fundamental way your intuitive sense of arithmetical soundness (someone was able to show the equivalent of 1+1=3) is the moment you'd be searching for a new set of axioms.
Similarly, you feel confident about saying there is a fact of the matter about whether particular Turing machines halt and would only trust formal systems that agreed. Transfinite numbers and the challenges that they introduce to our intuition (Banach-Tarski) are not problematic in this same way because the very existence of transfinite numbers are counter to our intuition. Couple further questions: Doesn't it bother you at all the fact that the same formal system that so successfully upholds your intuition about arithmetical soundness also betray it so completely when it comes to transfinite numbers? Would you be willing to throw out ZFC and its implications for transfinite numbers and Banach-Tarski for a formal system that similarly upheld your intuition for arithmetical soundness, but did not allow such transfinite silliness? Even eager? For me, I'm happy to talk about math and all of these things and still believe there is no such thing as absolute mathematical truth. I have no place for a platonic wizard behind the curtain. And the void of absolute mathematical truth also leaves me no choice, but to conclude that absolute truth of any type is an illusion. The existence of many-valued logic, fuzzy logic and systems that do not use the law of the excluded middle are further evidence for me of the fallacy of believing in absolute truth. If there is absolute truth or absolute knowing I think it must be ineffable and outside of any language system or rule system. Flumajоnd Says: That paper pretty much destroys Scott #46 comment, and shows how naive this sort of intuition is. Paper is by famous logician/philosopher Joel Hamkins, with input from Hugh Woodin. It shows precisely that even if the sets of natural numbers are the same, the universal existential etc. formulas of number theory are not, because of difference in Skolem functions. So, if metatheory is M1, one is true, and if it is M2, the other. This can be interpreted in terms of formal systems, so there are two systems of axioms of set theory extending ZFC (or any consistent extension of it), so that they AGREE on all purely existential and universal formulas of number theory (hence, on halting of ordinary Turing machines, provability of Theorems etc. – this is not your Con(ZFC) vs NOT Con(ZFC) difference), but disagree for some formulas when quantifiers are altered (staying in first order logic i.e. talking ONLY about numbers). So which one is "true"? There are truth predicates defined in that paper, so that Tr1 and Tr2 differ. This is perhaps the clearest demonstration of logical relativity of truth and illusions of existence of some "absolute truth" and futility of notion of "arithmetic soundness" that Scott is fanatically attached to, but which is justified by non-mathematical arguments ONLY. Purely mathematical arguments of the paper however show that even with having the same natural numbers, some first order properties of them might be different depending on axioms, and that objection from #46 applies only for agreement of universal and existential formulas (which one might ask for as Scott does), but does not extend higher in the hierarchy of first order quantification. >> you can write basically anything you want, as long as I'm able to understand exactly what number you've named. This rule seems ill-defined to me … What if I define L as the largest number one can name in 10 seconds following the Scott A. rules ? Would this number be the winner or would L+1 beat it? 
Eric #72: Even supposing that a QC's only application was to quantum simulation, that seems likely to have lots of positive applications for humanity. Helping to design high-temperature superconductors, better-efficiency solar cells, drugs that bind to desired receptors … pretty much anything involving many-body quantum mechanics, and for which DFT and other current methods don't yield a good enough result. Why are you so quick to write that off? As for breaking exist crypto, hopefully (and in principle) switching to different forms of crypto would only need to be a one-time thing. And in any case, as the Equifax break most recently reminds us, there are like Ackermann(1000) ways to steal people's personal data that are vastly easier than building a scalable quantum computer. 🙂 Incidentally, I also think thermonuclear bombs could've had positive applications for humanity: for example, bombs exploded underground could be used to drive turbines (yielding fusion energy that could be practical with existing technology), and Project Orion seems like the way to travel through space. Alas, understandable geopolitical concerns seem likely to postpone all such ideas for many generations, if not forever. But, as I once heard it put, the Teller-Ulam design was such a triumph of engineering that it would be a shame for it to only ever be used in murderous weapons. Atreat #74: Let's be more concrete about it. 2+3=5. Not an absolute truth? What about Modus Ponens? For me, taking inspiration from the completeness theorem for first-order logic, a thing not being an "absolute truth" simply means there's some possible universe where that thing is false. So for example, the 4-dimensionality of our spacetime manifold seems like clearly a contingent fact about our world: something that could have been false, and that for that very reason—given the limitations of our senses, measuring instruments, etc. etc.—we can never be 100% sure is true. Without changing the terms involved to mean anything other than their standard meanings, what's a possible universe where 2+3 is not 5, or where Modus Ponens doesn't hold? Or at least: what can you say to persuade me that such a universe might exist? wolfgang #76: No, that's just clearly, unambiguously a number that I don't understand, not being able to do diagonalization over my own brain to see which number-definitions I would or wouldn't accept. Disqualified. To address your other point, I think transfinite set theory clearly counts as math (it proves nontrivial theorems, is done in math departments … how much more do you want? ? ). But you're right that it's an unusual kind of math, which in some sense is less about the objects themselves (e.g., the transfinite sets), than it is about the formal axiomatic systems that we use to discuss those objects. Hey now, some of us still do perfectly good work about those transfinite sets themselves! 🙂 (I have no idea how possible it is to cash out statements about ordinals and well-quasi-orders to statements about natural numbers, being someone who only cares about the order theory itself and not the logic that goes along with it. But I get the impression such a thing may be possible?) Come on … just write down the biggest number you can name in 10sec. Then we ask all mathematicians on this planet to do the same … finally we use the fastest supercomputer limited only by Planck time … L is the maximum of those generated numbers. This is a well defined and *finite* procedure IF your rules are well-defined. 
What is it that you "clearly don't understand" about it?

Quanta magazine just posted an article about recent progress on the continuum hypothesis, with experts extremely surprised to see a proof that two types of infinity (called p and t) were the same cardinality. This supports CH. https://www.quantamagazine.org/mathematicians-measure-infinities-find-theyre-equal-20170912/

wolfgang #81: No, because what would actually happen if we tried what you propose, is that it would degenerate into arguments about which number-definitions are valid and which not (and my lecture explains exactly why). In fact my rules are not well-defined in general—there are number-definitions about which my own intuition vacillates, so that I can't tell you in advance exactly where I'd draw the line. But what the rules are, is perfectly well-defined enough to adjudicate a biggest-number contest between any two children who I've ever met, and all but a few of the adults. 🙂

Sniffnoy #80: As it happens, I was talking to Harvey Friedman about exactly this a few months ago. Well-quasi-order theory is an interesting intermediate case, not nearly as ethereal as (say) the Axiom of Choice or the Continuum Hypothesis, but also not quite as "obviously definite" as arithmetical statements. So yes, I suppose I do admit statements about WQOs as having definite truth-values, although I'm more comfortable with the purely arithmetical consequences of those statements. (E.g., that for every excluded-minor family of graphs, there exists a polynomial-time algorithm to test for membership in that family.) What Harvey and I realized was that there's something interestingly open regarding this distinction. Namely, Harvey's own work from the 1980s, together with Robertson and Seymour, established that some of the key statements of WQO theory are independent of Peano arithmetic, requiring at least some ordinal reasoning to prove. But right now, it seems that no independence from PA is similarly known for the arithmetical consequences of those statements, such as the "algorithmic" version of the Robertson-Seymour theorem that I mentioned above.

mjgeddes #82: Yes, I had read that article with interest (or actually, mostly the underlying paper by Malliaris and Shelah, for which I half-understood the introduction, but still got more out than from the popular article). But I don't understand the argument that this breakthrough result "supports CH"; maybe you or someone else could explain it. As I understand it, there were two infinite cardinalities, p and t, which were known to satisfy ℵ1 ≤ p ≤ t ≤ 2^ℵ0 in all models of ZFC. So, if someone had proven p<t, that would've implied not(CH). However, after Gödel and Cohen's work, we knew that you could build models of ZFC where ℵ1 = 2^ℵ0 and therefore p=t, and other models where ℵ1 < 2^ℵ0, so the p vs. t question could a-priori have either answer. What the breakthrough by Malliaris and Shelah shows is that, in all of the latter cases as well, we actually have p=t. So then, what's your argument for why this renders CH "more likely"?

On the topic of Busy Beaver and its generalizations, a question about another generalization; this is closely related to the earlier questions and conversations about quantifying the small-scale growth of Busy Beaver. We can also talk about BB(n,k) where n is the same as before and k is the number of tapes. Obviously, this growth is much much slower than the sort of super-oracled machines you discussed above.
However, there's another closely related generalization: Instead of adding extra tapes, add extra stacks. So consider BB(n,k,s) where s is the number of stacks added. Now, it is well known that two stacks can simulate an extra tape with small overhead (essentially you pretend that each stack is half of your tape, and to move the head you pop one stack and push on the other). However, one would expect that in general having s stacks should give one a lot more flexibility than s/2 tapes, since having two stacks allows other operations like just pushing one stack or just popping one stack (which essentially then amounts to something close to being able to add or delete a spot on your tape). Can we make this rigorous in terms of the Busy Beaver function? In particular, aside from very small n, if we assume that s is at least 2, is BB(n,k,s) always much bigger than BB(n,k+1,s-2)? My guess would be that for any computable function f(x), as long as n or s is sufficiently large, f(BB(n,k+1,s-2)) is less than BB(n,k,s), but this looks tough.

Flumajond #75: I regret that I haven't yet had time to read that paper. In the meantime, though, maybe you could help me by explaining: what is the actual example of a Π2-sentence, or other arithmetical sentence, that's claimed to lack a definite truth-value? Not any of the philosophical or mathematical arguments around it, just the statement itself.

For me, a canonical example of a Π2-sentence has always been the Twin Prime Conjecture. So I'm genuinely curious: would you say that, for all we know, the Twin Prime Conjecture might be "neither true nor false," that its truth might be "relative"? If so, what exactly would such a claim mean?

Let's say that I write a computer program to spit out the list of twin primes. It starts out like so: (3,5), (5,7), (11,13), (17,19), (29,31), …

Then the question is: assuming unlimited computer time and memory, will this program ever print out a final pair, or not? What would it mean for the answer to this question to be relative: that the program actually does print a last pair under one extension of ZFC, and actually doesn't under a different ZFC extension? If so, then by what means do the axioms of set theory reach down from Platonic heaven to our world, in order to influence the behavior of this deterministic computer program (as opposed to the very different question of what we can prove from those axioms about the program's behavior)?

Well, the experts seemed surprised. From Quanta: "It was certainly my opinion, and the general opinion, that p should be less than t," Shelah said. Also, as you pointed out: "if p is less than t, then p would be an intermediate infinity — something between the size of the natural numbers and the size of the real numbers. The continuum hypothesis would be false." But with p=t, CH passes the test. So to me the hint is that there are fewer types of infinity than thought, leading us towards CH…

Of course, whether CH actually has a definite truth value depends on how strong a mathematical realist one is. A formalist or a weak realist would argue there's no fact of the matter. A stronger realist or a Platonist still insists there's a definite answer. The famous Geddes super-clicker intuition says: CH is true 😀 Ultra-finite recursion man.
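Coming back to the twin-prime program described a couple of comments up, here is a minimal Python sketch of such a program (an illustration of the question, not any particular implementation):

```python
# Print each twin prime pair as it is found. The Pi_2 question is whether
# this program, run with unlimited time and memory, prints a final pair.

def is_prime(m: int) -> bool:
    if m < 2:
        return False
    d = 2
    while d * d <= m:
        if m % d == 0:
            return False
        d += 1
    return True

p = 3
while True:
    if is_prime(p) and is_prime(p + 2):
        print((p, p + 2))   # (3, 5), (5, 7), (11, 13), (17, 19), (29, 31), ...
    p += 2
```

The Twin Prime Conjecture says the print statement fires infinitely often; "relativity" of its truth would have to mean that this one deterministic program both does and doesn't print a last pair, depending on the ambient set theory.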
"what is the actual example of a Π2-sentence, or other arithmetical sentence, that's claimed to lack a definite truth-value" In the paper natural numbers are the same, so all purely universal (or purely existential) formulas will have the same truth value in both models constructed – which is exactly the point, while at the same time not all first order formulas. Therefore, there are no examples of such formulas as you ask (which is, universal with two variables – which essentially does not matter, as you could get a pair from one number; the alternations of quantifiers matter, to the nature of Skolem functions), but example that follows (not given explicitly) must have alternations in the type of quantifiers. In fact, that is how this paper ruins your argument from #46. Namely, if you ASK for purely universal or purely existential formulas to have definite truth value (i.e. proof existence and halting of Turing machines would all be included), then you can get that, but it does not extent to all higher levels (which have nontrivial Skolem functions) of formulas about number theory (which is what "arithmetic soundness" would require). The proof is given in terms of models of ZFC (but presumably to its finite extensions too), that contain the same natural numbers, but have different truth values of formulas with alternating quantifies (first order, but just about numbers). Which simply means, that the models contain different Skolem functions (which is where the difference comes from). In the end of the paper, there is nice overview about philosophical positions held by various mathematicians/logicians. From strict Platonists, to middle of the road Feferman, to Hamkins who rejects arithmetic determinancy to ultrafinitists (relevant in the case the world is finite). But the point is, these are PHILOSOPHICAL positions. Mathematics can only suggest (even prove as in this paper) that one kind of argument sometimes can not be extended beyond its most narrow scope, as you did in #46 if you wanted to use intuition about Turing machine halting to ALL first order arithmetic and arithmetic soundness and determinancy. Flumajond #89: No, a Π2 sentence does not mean "universal with a pair of variables." It means a sentence with a universal quantifier followed by an existential one. So I repeat my question: what is the example of that kind of sentence whose truth is "relative to the model of set theory"? And I also repeat my question about the program that searches for twin primes. How does anything you've talked about reach across the conceptual chasm, and cause that program either to find a final twin prime pair or not find one? I still haven't gotten an answer. Yes, of course this is a philosophical dispute, not something that can be resolved by a theorem. That's why I gave you a philosophical argument for why the place where I draw the line between definite and indefinite, seems like the only place that makes any sort of sense. 🙂 I've debated Hamkins before about this on MathOverflow—his rejection of arithmetic definiteness never made sense to me—but now I'm asking you. Again, not philosophical schools of thought, just what it is that reaches down from set theory and causes a computer program to either find a last twin prime pair or not find one. mjgeddes #88: Thanks, but without a further argument I don't buy it. After all: the models of ZFC that we already knew about where CH is false, are all still there now! 
It's just that we now have an extra piece of information about those models, namely that p=t in all of them. And more than that, the models where CH is false are the only models where the new result is even interesting, since we already knew that p=t whenever CH holds. Scott #78: Absolutely! I'd love to get down to brass tacks, but I'm still not sure what kind of argument you'd find convincing. Let's take 2+3=5 and see if we can figure it out. Precisely who/what do you think should adjudicate? Conscious observers in some universe? I gather you would not find that convincing since consciousness is not well defined. Presumably, you envision some form of computing device in some universe with physical/mathematical law that would foreclose it from ever outputting 5 in response to a well formed program performing the operation that we conceive as 2+3. Would that be convincing? Walter Franzini Says: I'm one of the people that like your blog even if it's beyond my understanding, I've been in Mantua for the 10th time this year and I was surprised to see your name in the authors' list. While I've not attended your speeches because of the language barrier, I was disappointed not to find your book in the temporary bookstore. Festivaletteratura is more about readers and authors than about literature. One of the bestselling authors in Italy in 2017 is Carlo Rovelli, a quantum physicist, with a little book about time and he held an event in one of the main location of Mantua (Palazzo Ducale). So the festival is more about what people read than about literature. I hope you come again in Mantua with a stock of your book so I can try to understand something more about this blog 🙂 Atreat #92: I just want you to describe to me what some world is like where 2+3 is not 5—in the same way that I could describe to you what a world was like that had only 2 spatial dimensions (I could just say: "read Flatland"), or that was classical rather than quantum, or maybe even that had multiple dimensions of time, or no time. That's all, no more and no less! Ok, so yes, the twin prime conjecture is universal-existential claim. Your posing it as a list made what you were talking about seem like a halting problem, which it is not – so that is why I jumped to discussing double universal without carefully reading what you wrote, my mistake. As for the paper, if you insist for explanation (though you could have done that on your own, I didn't know about that paper before – and just got to see what it is about and what is claimed, I am as much of an outsider to the model theory as you are, but the proof uses only elementary tools like compactness theorem and is short), ok I can try break it down for you. The more explicit of the two given constructions/proofs of Theorem 1, due to Woodin, is on the page 11. I doubt you could say that the formula they construct is universal-existential, as only its Godel number is constructed and it is SOME formula of first order language of arithmetic. So in proof of Theorem 1, there is no explicit expression for formula in that paper, and it is certainly not said to be universal existential, but it is of first order number theoretic and cannot be purely universal or purely existential by its properties. The construction is indirect. 
It goes something like this: One knows that truth predicate on arithmetic cannot be defined as a first order formula of arithmetic (old result of Tarski, Goedelian diagonalization of sorts) – that is, list of Goedel numbers of formulas that are "true" cannot be defined by a first order formula of arithmetic. But it CAN be defined in meta theory (predicate he calls TA). Now he considers 2-type consisting of all formulas (in two variables s and t) "f(s) equivalent f(t)", and also "TA(s)" and "NOT TA(t)"; each finite subset of such formulas is realizable by some s and t, this set of formulas is recursive and the model is recursively saturated meaning that we can find s and t which satisfy all the formulas in this set. Then he will take a formula whose Goedel number is s and construct another model of methateory M2 so that this formula is not satisfied in it (as t corresponds to s in that theory). Ok, so thats a sketch of the proof of Theorem 1. I must say that it bothers me that both s and t might not be standard natural numbers, as there are no two standard natural numbers that satisfy the same set of formulas and are different. It appears that Theorem 1 is in fact of interest only to model theory as it shows something about the truth predicate – a predicate on Godel numbers of formulas, which in this case are from some extension of natural numbers. But also, there is Theorem 5 and its Corollary 8 on page 15, that in fact seems to claim that you can have agreement on Tr_k predicate (which inductively applies as on page 6 Tarski definition of truth at most k times) and not on T_(k+1), but even that does not seem to exclude nonstandard Godel numbers. Apparently all this is different from the formal interpretation that I claimed, and oriented to model theoretic stuff, which allow non standard Godel numbers which do not come from any "real" formulas. My mistake was to interpret these claims as claims about standard formulas of first order language, as I did not consider the possibility that the models will give NONSTANDARD Godel numbers, i.e. he uses "formulas" which are not formulas in our standard sense (though look like they are to the model). Nevertheless one can independently ask the following question (about formalism): Can we have two extensions of ZFC, such that they disagree on some universal-existential formulas about numbers, but yet agree on all purely universal and existential formulas about numbers? This claim might have some relatively cheap proof, easier than what was done in that paper, which was model theoretic oriented, but it does not seem to be what was done in that paper, at least not directly. Anyway, that is a mathematical claim, which I believe to be true, and it would be a clear reason why your philosophical intuition about one level does not extend further up the hierarchy. However, this paper indeed does NOT talk about that, on closer inspection, so an independent proof might be needed. Perhaps this is a known result? Walter #93: Thanks for the kind remarks; I'm sorry to have missed you in Mantua. Yes, I was also disappointed not to see Quantum Computing Since Democritus in the festival bookstore! But all but a few of the books that were for sale were in Italian (a language into which QCSD hasn't been translated), so that may have had something to do with it. In any case you can get my book from Amazon, and I'll sign it when and if we meet! 
Scott #78, Atreat #92: As you surely both know, 2+3=5 (or rather S(S(0))+S(S(S(0)))=S(S(S(S(S(0))))) after removing syntactic sugar) directly follows from the axioms (of PA, or Q, or EFA), so it should always be true. But the actual point of disagreement is whether the ontological commitment (to the existence of Turing machines and) definite truth (or falsehood) of Π1 sentences (which I and many others want to make in order to be able to talk about the consistency of axiom systems) forces one to also accept definite truth (or falsehood) of Π2 sentences (and, by iteration, also the definite truth of Πn sentences). Personally, I am not convinced of the absolute truth of arithmetic statements. But I know that it would be hard to convince Scott of my position, and Scott probably knows that it would be hard to convince me of his position. I think: In a certain sense, naïve set theory is inconsistent, but naïve arithmetic is not. Hence independence of set-theoretic statements from set theory is seen as a sort of fault of the statement itself, while independence of arithmetic statements from a given formal system is seen as a fault of the formal system.

pupmki #57: I have now read section "1. Introduction", section "2. Indefinite arithmetic truth", and section "6. Conclusions" (so I skipped pages 12-24) of Hamkins' paper (34 pages). Saying "this is a rather elementary paper" is an interesting assertion. I wondered before whether I should bring up that paper, but decided against it. Even if Scott read it, it wouldn't necessarily convince him (because the paper intentionally uses non-standard models of the natural numbers). And I would have had to read the paper first (even if I didn't fully understand it) before bringing it up, independent of whether anybody else in the discussion would also read it.

Flumajond #75, #89, #95: I am happy to see that you have read Hamkins' paper, understood it significantly better than I could have understood it even if I had invested more effort, and are willing to explain to us what we can learn from it for our discussion. I just wondered whether you would have brought up that paper by yourself, if pupmki hadn't done it first. After all, it is not really clear whether it benefits a discussion to go into such technical details.

Flumajond #95: I should say that I really, genuinely admire your intellectual honesty in clearly admitting that the Hamkins paper doesn't do what you think it did—when you could've easily obfuscated the point and I would likely have been none the wiser! 🙂

For me, nonstandard integers have a fundamentally different status from standard ones. They are artifacts of a formal system—in effect, placeholders that say "an integer with such-and-such properties doesn't actually exist, or not in the usual sense, but this formal system can't prove that it doesn't." Indeed the Completeness Theorem can be interpreted as saying precisely that the nonstandard integers are "nothing more than that," spandrels that have to exist by the mere fact of ZF+Not(Con(ZF)) and so forth being consistent. So, if there's a "Π2-sentence of arithmetic" with indefinite truth-value, but it needs to have a nonstandard Gödel number—well, that's more-or-less what my worldview would've led you to expect, isn't it? 😉
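(For concreteness, here is the textbook compactness recipe that conjures up such spandrels – just a sketch, with $c$ a fresh constant symbol of my choosing:

$$T \;=\; \mathrm{Th}(\mathbb{N},+,\times,0,1,<) \,\cup\, \{\,c>0,\; c>1,\; c>2,\;\dots\,\}.$$

Every finite subset of $T$ is satisfiable in $\mathbb{N}$ – just interpret $c$ as a big enough number – so by compactness $T$ has a model. That model agrees with $\mathbb{N}$ on every first-order arithmetic sentence, yet the element interpreting $c$ sits above every standard numeral: a nonstandard integer whose only reason for existing is that no finite amount of information rules it out.)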
Comment #100 September 16th, 2017 at 11:49 am
To answer my question above: if we strengthen the requirement so that the natural numbers are the same (but the Skolem functions might differ), and want them to be standard natural numbers, it seems that universal-existential formulas won't do, because the negation is existential-universal, so in one of the models we will have a concrete universal formula, which has to agree with the other model, and we have a witness there too. So we can extend this logic further up the hierarchy, to conclude that if the natural numbers are the same in the two models AND are standard natural numbers, then the first-order formulas of arithmetic hold the same in the two models.

However, if we allow theories to be incomplete, the situation might be different. Suppose we have an extension of ZFC (it will be non-recursive) as a formal system, which has ALL purely universal true (in the ordinary sense) formulas about arithmetic as axioms. Of course, it will also prove all purely existential such formulas that are true. Let's call this theory M. It might not be complete, but it is complete as far as these lowest-level quantified formulas of arithmetic go. Now, the question is: is this theory necessarily complete as far as first-order arithmetic formulas go? Let's see… I don't think so, but if we have any model of this set theory (M) which has the same natural numbers as the standard ones, it has to have the same set of true real first-order formulas about numbers. So that paper really seems to depend on "nonstandard" formulas and natural numbers. And we can't seem to get arithmetic indeterminacy by different Skolem functions alone. One needs an extended set of natural numbers. Or, put yet another way – if we have an omega-consistent and complete theory of arithmetic, it has to be the "standard" one.

So while I still believe what I asked above is true, another question might be asked: can we have two models of ZFC, with the same but nonstandard sets of natural numbers (i.e. a structure with the same +, *), but such that they do not assign the same truth value to every REAL first-order formula about the natural numbers (a variant of the theorem proved in the paper, but where the formula sigma is "real", i.e. has a standard Gödel number)? The paper does not seem to answer that either.

"doesn't do what you think it did"

Wrong: doesn't do what I THOUGHT it did. Before I took a closer look. I hope it was a misprint on your part.

Comment #102 September 16th, 2017 at 1:06 pm
Flumajond #101: I stand corrected.

Aula Says:
There is something odd going on with this blog: when I view the content of this post at https://www.scottaaronson.com/blog/ the math displays are rendered correctly, but when I look at https://www.scottaaronson.com/blog/?p=3445 the displays are not rendered at all, but instead shown as the source code. Anyway, I mostly wanted to respond to comment #77:

Incidentally, I also think thermonuclear bombs could've had positive applications for humanity: for example, bombs exploded underground could be used to drive turbines

I don't see how that could be practical. A thermonuclear explosion releases a minimum of 200 terajoules of energy in less than a second. Even if you ignore problems with things like radiation, converting and storing that much energy would be virtually impossible.

(yielding fusion energy that could be practical with existing technology),

Not really, as there's very little fusion energy involved.
Thermonuclear bombs work in four stages: chemical explosions generate a shockwave, the shockwave triggers the primary fission explosion, the energy from the primary fission explosion triggers both the fusion reaction and the secondary fission explosion, and the neutrons (not energy!) from the fusion reaction enhance the secondary fission. The energy released by the fusion is only a fraction of the total energy; the purpose of the fusion is to produce a sufficient number of neutrons to completely consume all of the fission fuel.

and Project Orion seems like the way to travel through space.

No, not through space. Nuclear explosions could propel spacecraft efficiently in the Earth's atmosphere (and even more so in the denser atmospheres of some other planets) but would be very inefficient in vacuum.

Scott #94, gentzen #97, Flumajond #95: First, I want to say thank you all for this conversation! I've always found this subject fascinating. Personally, I find myself closer to the side of gentzen and Flumajond (whose intellectual honesty I too genuinely admire!), but I suspect I'm willing to go even further. gentzen, I think you frame the real disagreement well. When Gödel introduced his famous work it shook the philosophical worldview of a great many mathematicians. I believe Scott thinks that many mathematicians went too far in throwing the platonic baby out with the bathwater. Scott, I'm going to bite that bullet (and add mustard to it as you'd say!) and get back to you on that description when I can find a break from my papa duties 😉 I was thinking of something like the lambda calculus or SKI calculus that can embed PA, but with some particularly fine-tuned alterations that would preclude 2+3=5. Would that suffice?

Atreat #104:

I was thinking of something like the lambda calculus or SKI calculus that can embed PA, but with some particularly fine-tuned alterations that would preclude 2+3=5. Would that suffice?

Hard for me to say whether that would suffice, since I can't even begin to imagine what it would look like! Remember that I'm not looking for an abstract formalism, but for a description of what the world would be like.

Scott #105: Essentially, it would be a Turing-complete language that would embed PA, but have a "bug", if you will, that would render 2+3!=5—or, as gentzen would put it, S(S(0))+S(S(S(0)))!=S(S(S(S(S(0))))). I've written actual interpreters for the SKI calculus, so I could place the "bug" in the interpreter; or, if that is too contrived, I can think about an alteration to the semantics of the language which would embed the "bug". The real question is whether you will accept this "bug" as sufficient reason to abandon your ontological commitment to the absolute mathematical truth of 2+3=5 🙂

Another great post. Just a curiosity (regarding (your 🙂 ) mathematical psychology). When you stated the three proofs for the uncomputability of the Busy Beaver function, you stated the one involving solving the halting problem only after the other two. Is it because you like the other proofs 'better', or was it maybe because the last one had to be given a link to the halting problem? Because the last one looked the most straightforward to me and I would not have bothered with another proof.

Marvy Says:
Atreat 106: This entire discussion is mostly way over my head, but I think Scott's reaction would be something like "This is just a buggy interpreter, the world has no shortage of them, no big deal. Or more formally, this interpreter implements something other than what people mean by integer arithmetic."
I think he wants a description of a world where people say things like "This basket has room for 7 apples, so if I put my 2 apples in it, there is not enough room for you to put your 3 apples, since 2+3=9." (Or maybe 2+3=4, so the basket is way bigger than needed.) Or a world where Winston from 1984 says "Freedom is the right to say 2+3 do NOT make 5." Or something like that.

There is something odd going on with this blog: when I view the content of this post at https://www.scottaaronson.com/blog/ the math displays are rendered correctly, but when I look at https://www.scottaaronson.com/blog/?p=3445 the displays are not rendered at all, but instead shown as the source code.

I'm also having this problem, btw, just to confirm that it's not just Aula.

Re #53, technically that's not true. In the real world, people do use PRNGs that give fewer bits of output than input, because they have random sources that do have enough entropy but aren't independent 1/2-probability random bits. This doesn't help in this context, that is, creating functions that really don't have a small circuit.

Scott re #65:

> Alas, my sole superpower is clarity of thought. I'd surely pick a different one if I got to live over again.

In Paul Graham's writeup "Why Nerds are Unpopular" ("http://www.paulgraham.com/nerds.html"), which you've mentioned before, Paul Graham appears to say that he would pick clarity of thought again, and that other nerds (such as you) would choose the same:

> If someone had offered me the chance to be the most popular kid in school, but only at the price of being of average intelligence (humor me here), I wouldn't have taken it.

Are you contradicting what Paul Graham said there? Or am I just misunderstanding your comment?

Comment #111 September 16th, 2017 at 10:17 pm
"So, if there's a "Π2-sentence of arithmetic" with indefinite truth-value, but it needs to have a nonstandard Gödel number—well, that's more-or-less what my worldview would've led you to expect, isn't it"

Well, sentences with nonstandard Gödel numbers are not really sentences; it just shows that this result (from the paper) is not really relevant to that point. What WOULD be convincing for me is if, when we add all "true" universal formulas of arithmetic (or even a bit more, essentially universal ones, i.e. universal over some simple predicates that are guaranteed to be decidable even in PA, allowing for instance primitive recursive terms) as our axioms, we got an arithmetically complete first-order theory. But I think that is not true. In fact, I can give a reason why I think so. Sets can be classified according to computability – decidable sets, recursively enumerable sets, etc. – where we start from the decidable sets and then use complement and projection, as in other hierarchies. So the set of all axioms that are "true" universal formulas will be low in that hierarchy (the set of all "true" existential formulas is enumerable, and this is sort of its complement), and the set of all theorems derived from it is then also low (one quantifier up or so). But the set of all "true" formulas should be high, so in this way the "recursive" hierarchy would collapse. Which is either implausible, or likely even known to be false (the analogue of the P!=NP statement here is just that enumerable sets are not decidable, which is easy to prove, and there are likely separation results for this on higher levels – it's some hierarchy of sets, starting from the decidable ones and going up).
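(To spell the counting out a bit – this is only a rough sketch, and the $\Sigma_n/\Pi_n$ notation for the arithmetical hierarchy is my shorthand here, not anything taken from the paper. Take as axioms

$$A \;=\; \mathrm{PA} \,\cup\, \{\text{all true } \Pi_1 \text{ sentences}\},$$

which is a $\Pi_1$ set of sentences; then the set of theorems $\{\sigma : A \vdash \sigma\}$ is $\Sigma_2$, since "$\sigma$ is provable" says "there exists a proof each of whose axiom-lines lies in the $\Pi_1$ axiom set". If $A$ decided every $\Pi_2$ sentence, then by soundness the true $\Pi_2$ sentences would be exactly the $\Pi_2$ sentences provable from $A$ – a $\Sigma_2$ set – whereas the true $\Pi_2$ sentences form a $\Pi_2$-complete set, which cannot be $\Sigma_2$ unless that hierarchy collapses at the second level. So some true universal-existential sentence is left undecided.)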
So there you go, this is a cheap proof of my claim that there are universal-existential, or at least first-order, formulas that would be independent of a system in which the truth of all purely universal formulas (as well as purely existential ones, which we already have, of course) is known. A proof, modulo the non-collapse of the hierarchy of sets starting from the decidable ones (and using projections and complements), which should be known (it must be way easier than the non-collapse of PH). Of course, life would seem much easier if my claim were false, and it is only an argument of sorts against extending #46 to higher levels.

In fact, as I said before, a long time ago (after learning about Gödel incompleteness as a teen) I became a bit obsessed with incompleteness and ways to define things and truth. My default position was the same as the one Scott describes (arithmetic truths, of first order at least, are absolute), and the mysterious logic of adding Con(T) to T, and repeating this process over and over, never getting the full thing, seemed like a way to get something – but whatever you do (passing to limits etc.) always gives a recursive theory, and misses the mark. There is NO WAY to formalize this, yet we seem to have a grasp on what's going on (which is the basis of Penrose's argument). Later, I concluded that this must be an illusion, and that the key is in defining limit ordinals, where our intuition might in fact be faulty (as it might be in adding large cardinal axioms, with no guarantee they are consistent – in fact some have even been shown to be inconsistent), and hence all we can have is "empirical" evidence that some axiom does not cause contradictions – a rather disappointing state of affairs, but it seems that there is little we can do about it.

Of course, there is a philosophical difference between us being able to know something, and something being true or not on its own. But, while useful, such a slippery notion of "truth" for things which are unknowable – and in principle so – is why I remain agnostic as to the existence of such a thing as "absolute truth" in the first place. It is not only the continuum hypothesis but perhaps even unknowable arithmetic statements that are called into question. So why do we have the intuition that absolute arithmetic truths exist? It seems to me that it is always by virtue of some imagined mechanism of KNOWING what the truth value is. For instance, we might imagine some quasi-physical system, with some "superpowers", that computes the generalized BB function with ordinals (we can stick to omega for a start; I suppose that should also do for computing the truth of first-order arithmetic), or determines the truth of some statement. But if these aren't based in reality, then there is plenty of room for skepticism.

Greg McLellan Says:
I can't help but feel like you're being a little too nonchalant about this possibility. I've seen plenty of pure mathematicians wince at the idea of ejecting Choice from functional analysis (it's kinda doable, but it makes things a hell of a lot harder and less elegant), let alone having to rebuild the modern edifices of topology and algebra if the appeals to Infinity, Powerset and Separation which they are built from turned out to be inconsistent. To me it seems like this isn't just a matter of knowing a system of seemingly contradictory manipulations is "right" and finding a formal framing for it later, a la the situation with quantum field theories (which get plenty of vindication from experiment in the meantime).
It seems more like the foundational crisis reached somewhat of a conclusion in the early 1930s and then, maybe starting with Bourbaki et al in the mid '30s, a brand of mathematics exploded onto the scene which made extensive appeals to Z, not only as an organizing principle, but as an ontological mediator of very deep and essential technical claims which come together to make up the powerful explanatory fulcrums of much of modern pure mathematics. I guess the counter to this is that a lot of pure mathematics does eventually render an empirically testable claim, and so far those claims have tended to check out, so at least *those* branches of pure math would probably be okay. But then, I take all of that as evidence of ZFC's consistency, and you're proposing the hyperastronomically unlikely world where all that stuff worked and yet we're still not allowed to consistently apply powersets and separations to infinite sets. I don't really have the imagination to figure out what such a world might look like. I guess I'd expect the riots in the streets to drown out any calm-headed attempts to salvage pure math. :/

Fascinating! Like I said, I know very little of the actual logic that this stuff touches on. I'm wondering what sorts of statements about well partial orders you're thinking about here. Like, it sounds like you're talking about statements that certain orders are WPOs. Whereas what I've done is take known WPOs and compute their maximum extending ordinals, aka their type. I was under the impression these corresponded to growth rates of certain functions? I dunno, I'm not too clear on that aspect, like I said. 😛 Do you know anything about what that actually corresponds to?

(It occurs to me that the theorem that every WPO has a maximum extending ordinal relies on AC, as does the equivalence of the various definitions of WPO in the first place; but I think if we just pick an appropriate definition for WPO and use one of the other equivalent definitions for type then we still keep basically the whole theory with no AC dependence (at least I hope so!). And of course if you're talking about particular WPOs like I always am then there should truly be no need for AC…)

I personally haven't had much reason to deal with ordinals that are very large. Or I guess I should say, I haven't had much reason to deal with functions of ordinals that are very large! You can certainly get big ones out if you put big ones in. And like, it does seem like everything you're talking about above is countable? It seems a bit surprising, at least, that computing types of uncountable WPOs should have any effect on arithmetic.

Honestly the reason I brought up WPO stuff — or we can leave the partial ordering aspect out of it and just say ordinal stuff; there are a few ordinal computations I've done that, best I can tell, nobody else has done before — was not because of any connection to arithmetic but rather just because it seems so, well, definite. Yes, it's "transfinite", but because everything's well-ordered (or well-quasi-ordered), you can use induction to get at it and get a very definite answer. It's a bit hard to imagine how the answer could possibly be other than it is, you know? It's not like with cardinals, which are famously hard to compare and where if you pick two to compare there's a good chance the answer is independent of set theory. I'm having a hard time imagining a similar situation with ordinals (other than the trivial case of picking initial ordinals for given cardinals, which relies on AC anyway).
So like I said, if I prove that a certain WPO has type, say, ω^(ω_1^2) (because in reality I proved a more general formula and then plugged some ω_1's into it) — and again we can use a definition of type that doesn't require AC, and my techniques certainly don't require it either — it's a bit hard for me to imagine that has any effect on arithmetic, but it's also hard for me to imagine that there's anything indefinite or questionable about it. …on the other hand, asserting a pre-theoretic notion of truth about something as out there and infinite as ordinals also seems pretty questionable! So I dunno.

Maybe the seeming definiteness is just because when we consider all ordinals rather than just natural numbers, we (or at least I 😛 ) tend to ask simpler questions? That could be the case… like, I can imagine a sort of analogue of the arithmetic hierarchy but for ordinals, and (just assuming for now that such an analogy actually works and makes sense) thinking in those terms I'm not sure I've done anything that goes beyond Π_1… maybe that's why everything about that area seems so definite to me… except Π_1 statements about natural numbers can still be undecidable… but then if we're talking about whether we have a pre-theoretic notion of truth for that domain, that's irrelevant anyway. So that doesn't seem like a helpful line of thought after all and maybe you should just disregard much of this paragraph. Yeah, I have no idea. You see why I'm confused? 😛

Scott, #99:

For me, nonstandard integers have a fundamentally different status from standard ones.

But Scott, don't you ever lie awake at night, fraught with distress over the possibility that your own internal Platonic model of the Naturals might be infested with Z-chains? What if there is some Turing machine M which diligently runs for all eternity if left unchecked by physical limitations, but your microtubules insist that M halts at time t, where t is writhing around up there on one of those icky, bidirectionally-infinite worms? To what could you possibly appeal to establish that M indeed never halts? Not that I have a better answer about what constitutes a "concrete truth" — I tend to be with you when you write off questions like CH as being decidedly unconcrete, and I think your "interactive proof" argument for AH statements having definitive truth values is pretty compelling — but I've always felt that talk of a "standard" model of the Naturals in absolute terms, as though we can know we have access to it even as we toy with models of PA and ZF which don't, seemed to be laced with a kind of hubris.

Chuqui Says:
Comment #115 September 17th, 2017 at 3:27 am
Tarski elementary geometry is math w/o arithmetic. So PA not required for proper math. Could presumably be extended to yield PA, but, more presumably, be extended to yield NON-PA? Thus would have proper math where 2+3 != 5. Arithmetic is boring imho 😉

leopold Says:
Atreat #106 Why should the possibility of a formal system which is not sound (indeed, so unsound as to be able to prove 2+3!=5) make anyone doubt the absolute truth that two plus three equals five? Any more than the fact that sometimes human beings make mistakes when they do arithmetic? Is the idea that because mistakes are possible, virtually everyone for thousands (maybe hundreds of thousands) of years might have been making a mistake in supposing that 2+3=5?
(Presumably not, since in that case, for this to BE a mistake, it would have to be true that 2+3!=5, and this would be a truth as 'absolute' as the one everyone actually now believes, viz. 2+3=5; in which case there are absolute arithmetical truths after all.) But then what IS the idea running in the background here?

Ashley #107: Good question! In my original "Who Can Name the Bigger Number?" essay, the only proof I gave for the rapid growth of BB was that the computability of any upper bound on it would let you solve the halting problem. But in the intervening years, I realized two things: First, if you're talking to a popular audience, it's faster and simpler to give a direct proof—a proof that's intrinsic to the BB function—rather than a proof that relies on the uncomputability of the halting problem, which your audience might never have heard of and which therefore requires a separate discussion. And second, the other proofs actually yield a stronger conclusion. Namely, those arguments pretty readily imply that BB dominates every computable function—whereas without more work, the halting problem reduction yields only the weaker statement that no computable function dominates BB.

Marvy #108:

jonas #110: Are you contradicting what Paul Graham said there? Well, for one thing, I don't see any contradiction if Paul Graham would make a different hypothetical choice than I would! For a second, "clarity of thought" isn't quite the same thing as intelligence. And for a third, there are lots of imaginable superpowers other than being the most popular kid in school—e.g., the ability to walk through walls and so forth. 🙂

Flumajond #111: Indeed, AFAIK throwing in all true Π1-sentences as axioms still doesn't give you the ability to decide all Πk-sentences for arbitrary k. But of course this isn't particularly relevant for my view, which is prior to all formal axiomatic systems—objects that, for me, exist only to formalize and mechanize mathematical intuitions that were already present beforehand, and succeed only to the extent that they do that.

Greg McLellan #112: Don't get me wrong, an inconsistency in ZFC would be at least as big a deal as a proof of P=NP, perhaps more so. It's just that this massively-unlikely event wouldn't, by any stretch of imagination, be "the end of mathematics." Restaurant tips would still be calculated the same way. The Pythagorean theorem and prime number theorem would still hold (keep in mind, almost everything that's ever been proved can be formalized in tiny fragments of ZFC). The halting problem would still be unsolvable. There would be lots of new research opportunities in fixing the foundations, but the building would still stand. (Admittedly, I'm biased by the fact that my own mathematical interests—in theoretical computer science and the fields relevant to it—are generally extremely far from the kinds of topology, algebra, etc. you mention, the ones that would be most affected by such an event.)

Science @ Festivaletteratura 2017 – out of equilibrium Says:
[…] the major experts on quantum computation, delivering a fun blackboard lecture on very big numbers (here he blogs about it), and physicist Fabrizio Illuminati, expert on quantum information and quantum […]

Sniffnoy #113: While I found your comment extremely interesting, if I try to reply in any detail, I'll very quickly exceed the limits of what I understand.
I would want to spend a lot more time working with statements about WQO before forming a strong opinion about which ones to regard as mathematically definite even if they turned out (let's say) to be independent of ZFC, and which ones, if any, might have an AC- or CH-like status.

Greg McLellan #114: No, I don't lie awake at night worrying that my internal model of the natural numbers might be infested with Z-chains—any more than I worry that my brain might be infested with buffer overflows or other C programming security vulnerabilities! For me, these are both problems that are extremely specific to particular ways of regimenting the world—and the idea of treating them as problems of thought in general, rather than of the formalisms that give rise to them, is merely funny. Incidentally, I think Penrose is completely right when he says that our intuition for "the natural numbers" comes prior to any first-order theories that attempt to capture that set. He errs only when he insists that a computer couldn't, in principle, engage in the same sort of pre-theoretical mathematical reasoning that we do.

@Marvy #108 This parallels a discussion a while ago, whether natural sciences basically have "one root" aka "particle physics in space" or whether there could be another, independent theory that neither contradicts it nor follows from it. It certainly would be hard to prove that it can in principle not be derived from fundamental physics (the more so since that is (or used to be, at least) a moving target). So basically the question is whether 2+3 = 5 is a necessary condition for proper math (in its realm, arithmetic, it presumably is) or whether there could be other "maths". Maybe first-order logic notation is such that 2+3 = 7 would break the notation, making it not sound. Geometry, however, can be "done" by other means than addition and multiplication. I don't know if geometry is open to other "logics" than standard maths logic.

Marvy #108, leopold #116: Good questions! Let me sketch it out in a bit more detail and then I'll answer. I feel confident I can create a variant of the SKI calculus (a proven Turing-complete language) that implements Church numerals to embed basic arithmetic. I can modify this SKI calculus to, let's say, make 2+3 a null calculation. In other words, 2+3 would not be calculable *in principle* in this variant. Now, if you subscribe to the physical Church-Turing thesis (which I believe Scott does), then you regard our universe as ultimately calculable. IOW, there is no physical manifestation of uncomputable behavior in our universe. Our universe can in principle be modeled by a Turing machine. So the question I put to you, to probe the absolute mathematical truth of 2+3=5, is to consider a universe modeled by my variant. I think it possible to create quite a rich world with this variant. To be clear, the abstract notion of the number two and the number three would exist in this universe. Computing machines would exist in this universe. Since we don't know how consciousness arises I can't say definitely that conscious beings could exist, but I see no reason to rule it out. What would not exist is any mechanism for computing the answer to 2+3. That would be uncomputable. I suppose conscious beings would not be precluded from asking this question in the abstract and arriving at the answer 5, but boy would they have a hard time taking hints from their surrounding universe.
I guess, in the end, this is a form of argument against platonic realism that points out the necessity of the observer. What I've done is taken out the observer. Neither computing machines, conscious observers, nor the laws of the universe itself would admit that 2+3=5. If they did admit this (in the case of conscious observers), it wouldn't be in the form of absolute mathematical truth. I'm skeptical that Scott and others will find this argument convincing, but I think that's because of a failure of imagination. To be sure, I also suffer from this failure of imagination! It is incredibly hard for a 2D person living in flatland to conceive of a 3D world. That we've managed to realize the contingent nature of so many facts about our universe is quite a triumph. But I ask that you take my contrived thought experiment further. Imagine a world where not just 2+3=5 is foreclosed, but all manner of basic arithmetic is foreclosed. Worlds of incredible complexity with utterly bizarre rules. Worlds where arithmetic laws are contingent. Again, it's hard for me to imagine too, but I think the difference (if I can be so humble haha) is I can imagine imagining them 🙂 In the end, absolute mathematical truth is dependent upon something observing it. It must arise somehow. Does it arise from itself? Does it arise from something else? Does it arise from both? Does it arise without a cause? None of the above. It does not inherently exist 😉

Apropos apples: I buy half a basket of apples in the supermarket, yesterday 12, today just 10, but no cheating. In fact, or rather in intuition, geometry works just like arithmetic works: if you split a square along the diagonal, you'll get two right-angled triangles. No need for formal languages, axioms etc. Now we have non-Euclidean geometry, so many statements turned out to be "can but needn't". That's what usually happens.

Atreat #126

"What would not exist is any mechanism for computing the answer to 2+3"

Perhaps because I know nothing about the SKI calculus, I don't understand this. What would block the usual computation in PA: 2+3 = 2+SSS0 = S(2+SS0) = S(S(2+S0)) = S(S(S(2+0))) = S(S(S(2))) = 5? And whatever it is that blocks it, why should we interpret that blocking to indicate that 2+3=5 is not an absolute truth, rather than that your strange system simply fails to capture the sense of any or all of '2', '3', '5', and addition, as we actually use these terms?

I've found Scott's question related to this blog post and the discussion with Hamkins from a couple of years ago, https://mathoverflow.net/questions/34710/succinctly-naming-big-numbers-zfc-versus-busy-beaver – is this the discussion referred to in #90? Anyway, the way Scott posed the definiteness there of fast functions ("so that even a formalist might agree that it makes sense") is good enough for me, but it was immediately apparent that "true in all models of ZFC" just means "provable in ZFC", so, as was the conclusion there too, z is way less powerful than the generalized BB of some small rank (omega would kill it). This seems pretty uncontroversial to me (I guess I could be branded a "formalist"). However, what was strange to me was to read what Hamkins wrote. He insisted that that was not really the right answer, and that, somehow, "true in all models of ZFC" does not equal "provable in ZFC", or something to that effect (I didn't quite get it), and it seems that Hamkins takes nonstandard models of the METATHEORY as seriously as if they were real.
This explains why in his paper he considers proofs and formulas with nonstandard Gödel numbers as "real" – I guess he is right if the metatheory is given a certain interpretation, but it seems to be a rather peculiar point of view. Perhaps that is related to Scott's objection in #46 (and in that sense, Hamkins is consistent, as he treats the metatheory the same way as the theory, but then he probably has some meta-metatheory which is realistic, so it gets rather confusing). But that is not the nature of my skepticism – I guess I am mostly worried about dealing with something unknowable, and assuming that there is some "absolute truth" even if we can never reach it (which is the same reason for skepticism about the continuum hypothesis). This is a rather different point than what Hamkins is doing (treating nonstandard models seriously and rejecting some intuition about the standard model as preferred). To me it seems like gibberish, makes as much sense as af45dacd69 c2cfcf0269da 49a69ea 7a39dda 73c54389 3b3515 c8e5b491 c1a2d5, but a computer might have different intuition, as does Hamkins apparently.

So, I would propose the following "axiom" that tries to capture the standard model. First, note that in nonstandard models of ZFC (by that I mean models with a different "truth" of first-order arithmetic, for "real" formulas, which are the only ones I care about), the real set of natural numbers is NOT a member of that model. It comes with extra elements. Now we want to be able to get rid of them. So, suppose we have a model M of some set theory. Suppose even that it is a transitive model. Now, we can take some subset X of M, and then form M[X] – a model that is an extension of M, but contains an X that has those elements, AND ONLY those elements (this could work with non-transitive models too, if made precise). Now, if for any such X some theory ZFC+ allows such extensions, then it is OK; but if it does not – then we will reject ZFC+ as non-standard.

The good thing is that this can be formalized, as it seems to me (maybe starting from the finitely axiomatized NBG system if necessary), in the following way: we add to NBG/ZFC a set of axioms claiming, for any closed formula T, that "there is a model M of NBG/ZFC+T, and a set X that is a subset of M, such that there is no model of NBG/ZFC+T containing M and the set X" implies "not T". Now note that this seems to prove Con(ZFC). I'm not quite sure about it, but this seems interesting. Has anyone considered this? The intuition behind this axiom scheme is simple. Transitive models of "true" set theory can always be extended by adding a new subset. But if we have, say, T equal to "not Con(ZFC)", then any model of it will be ruined by adding the real set of natural numbers. We can extend it as we will, but we can never get back the "fake" proof of contradiction. Therefore, Con(ZFC). This seems like a rather nice idea to me (if I'm not wrong), but anyway most likely it's either wrong, or no-one will pay attention, or I would hardly find time to develop it further, but who knows; of course it will not get one far, as truth can never be attained, so any such intuition, however useful, is either dead wrong or limited. Note that my axiom scheme might even have something to say about the continuum hypothesis (though this is highly speculative).

Lucent Says:
Trying to understand this a different way, as it reminds me of Typographical Number Theory in GEB and its ability to represent itself once it reaches a certain complexity.
Is the lowest undeterminable Busy Beaver number also the size of the smallest Turing machine which can emulate itself in software, recreating the halting problem, since using an emulated Turing machine as input into a "real" one recreates the indeterminacy of the halting problem?

Lucent #130: I'm not sure that a "Turing machine which can emulate itself in software, recreating the halting problem" is a well-defined notion. Certainly there are universal Turing machines, and there's a longstanding game to make them as small as possible, but a crucial point there is that universal machines require a description of another Turing machine as input—and for that reason, one can "shoehorn" a huge amount of the complexity into the input encoding scheme. For more see my paper with Yedidia. With our task there's no input to the machine, but there is a theory-dependence: for example, the smallest machine whose behavior is independent of Peano arithmetic is probably smaller than the smallest machine whose behavior is independent of ZFC. Any attempt to understand our task in terms of something else would have to take this theory-dependence into account.

Ah, oh well. But what if we remove the "partial" or "quasi" aspect and just talk about well-orders? Once again, ordinal computations seem pretty fricking definite — unlike with cardinals, you can get actual answers! — but it seems like it'd be pretty difficult to cash out statements involving uncountable ones. And yes, as best I can tell there are interesting questions just of the form "What's the order type of this well-ordered set?" / "How do you actually compute this function on ordinals?" that nobody's bothered to answer before, or at least not to write up anywhere I can find. 🙂

Okay. But to the extent to which your brain (or a computer capable of the same sort of pre-theoretical mathematical reasoning as you, since you've already given us that) purports to leverage its intuition for the Naturals to say correct things about them (as you hopefully do in any journal article you submit) with any kind of rough or eventual consistency, we could then distill that thought process into a specific r.e. set of sentences which agree with some model of the Naturals which is infested with Z-chains, right? Or would you claim that you're wrong enough of the time that the "rough or eventual consistency" thing doesn't hold up? (Which makes it difficult to rely upon your advice, surely?) Are you maybe expressing a hard rejection of "Hilbert's thesis", that informal mathematical reasoning is just a slackening of first-order deductions from ZFC? (Not that this seems to pass muster when it comes time to publish…) Do you maybe think that one day we'll understand the mojo that makes informal human mathematical thought tick, and its power to get things done in practice will somehow skirt around even the loosest of the ways in which we measure the correctness of computer programs today?

Greg #133: I'm a big fan of the Popperian idea that the crucial thing, in every part of science—I would even say pure math—is not how to be certain that everything you say is right the first time; rather, it's how to detect and fix errors. (I'm deliberately stating the idea in a way that's oversimplified enough that people might actually remember it and apply it to their lives.) Some small constant fraction of all the proofs I've given in published papers have been wrong—see my mea-culpa blog posts for more!—but it's okay. If anyone cares about those theorems, then the errors get found and (in most cases) fixed.
And the same with my colleagues. It might be that the overall error rate in CS theory is higher than the error rate in other parts of math—I'm not sure—but if so, it just goes to show that it's possible to make rapid, cumulative progress even in the presence of that higher error rate. And in any case, our error rate is still lower than that of our wonderful friends in physics! 😀

I agree that, after I'm dead, it might be possible in principle for someone to come along and formalize "the standard of accuracy toward which I was striving while I lived" as a collection of first-order axioms for arithmetic—and if so, those axioms would of course admit models that were infested with Z-chains. But even then, that's still not the way I'd think about it, were I lying awake at night wondering about such matters (something that no longer happens much—with two kids, now I mostly just crash and start snoring whenever the opportunity arises). Instead I'd simply think: "I'm just a guy who has a pretty clear idea of what a 'positive integer' is, but who's not able to prove all the theorems about them that he'd like."

leopold #128, The SKI calculus was invented by Moses Schönfinkel in 1924 and came before Alonzo Church's lambda calculus. It was later shown to be equivalent. He was a contemporary of Church and Turing. Here is a good introduction: http://people.cs.uchicago.edu/~odonnell/Teacher/Lectures/Formal_Organization_of_Knowledge/Examples/combinator_calculus/

The SKI calculus has been proven to be Turing-complete and is arguably the simplest Turing-complete language ever invented. Other languages have been built upon it that are even simpler (iota, jot and zot for instance), but they are also based upon Schönfinkel's combinatory logic. Jot is particularly interesting in that every program written in Jot is its own Gödel number. Here is a language I wrote that is based upon the SKI calculus, complete with interpreter: https://github.com/manyoso/hof

This is the Church encoding of basic arithmetic in this language:

// church numerals
#define INC(X) "AASAASAKSK" X
#define ADD(M, N) N INC(M)
#define ZERO FALSE
#define ONE I
#define TWO INC(ONE)
#define THREE INC(TWO)
#define FOUR INC(THREE)
#define FIVE INC(FOUR)

You can also find combinators for boolean logic, p-numerals, church comparison, church pairs and lists, basic recursion, y-combinators, and a while loop all implemented along with the interpreter at the above link. Anyway, the proposal for the sake of this argument would be to introduce a new combinator that would effectively nullify the computation of 2+3, in that the combinator would be a no-op. This would be done by extending the formal definition of the language and then making a change to the interpreter to reflect it. To be clear, the language would still have the concept of '2' and '3' and '5' and increment, decrement and addition. The one thing that would not be possible in the language is evaluating 2+3 and outputting 5. 2+3 would not be computable. However, I cannot stress enough that I don't think my argument is a contrivance of the SKI calculus. It is possible to transcompile any Turing-complete language into this language, and then in principle to do the same with any of these languages. The only caveat is that whatever program you use to model your universe must use these Church encodings when doing arithmetic. The argument I'm making is the mathematical equivalent of the ancient zen koan about a tree falling in the forest.
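To make the encoding above concrete, here's a minimal sketch in ordinary Python lambdas rather than in hof (the names zero/succ/add/to_int are mine, purely for illustration). It shows that under the usual reduction rules 2+3 really does come out to 5 – which is exactly why the "bug" would have to be planted in the interpreter by hand:

# Church numerals with plain Python lambdas (illustrative sketch only)
zero = lambda f: lambda x: x
succ = lambda n: lambda f: lambda x: f(n(f)(x))
add = lambda m: lambda n: lambda f: lambda x: m(f)(n(f)(x))

two = succ(succ(zero))
three = succ(succ(succ(zero)))

# Read a Church numeral back off as a Python int by counting applications of f.
to_int = lambda n: n(lambda k: k + 1)(0)

print(to_int(add(two)(three)))  # prints 5 under ordinary evaluation; a
                                # hypothetical "blind spot" interpreter would
                                # have to special-case exactly this reduction.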
I'm trying to demonstrate that it is in principle possible to construct a universe where no observer … not a conscious one, not a computing device, not the laws of the universe itself … will have access to the computation of 2+3. So, in this universe, how can you say that 2+3 is an absolute mathematical truth?? Isn't that exactly what Scott is asking for? A universe in which 2+3 is contingent? I think that is exactly what I'm providing. This argument is orthogonal to the debates above about formal systems of mathematics and non-standard integers, as fascinating as that discussion is. This argument relies upon the argument that all facts are contingent upon the observers who conceive them. But, because 'observers' is such an ill-defined term, I'm trying to show a universe where *any* well-defined observer would not have access to this supposed absolute mathematical truth. Anyway, Scott, what do you think?

Luke G Says:
Sniffnoy #132: Uncountable ordinals actually do play a role when studying large countable ordinals, as they help with naming very large countable ordinals (as is needed for proof theory). So there may be some possibility to "cash out" theorems on uncountable ordinals to get arithmetic statements. https://en.wikipedia.org/wiki/Ordinal_collapsing_function

Also, given the above, who is to say that we have access to all arithmetic computations in our universe? What if there are computations involving the standard integers that are foreclosed by the program running this universe? We would be completely cut off from them even if our intuition insists that it is complete when it comes to the standard integers.

Jon K. Says:
Great blog post, Scott. Can you expand on your response?… "Of course, the big limitation is that you then don't have a natural notion of "greater than" or "less than."" Oh look Billy! Another QC breakthrough! https://www.bloomberg.com/news/articles/2017-09-13/ibm-makes-breakthrough-in-race-to-commercialize-quantum-computers

Jon K. #138: In a finite field, if you start with 0 and keep adding 1, you eventually loop back to 0. So it doesn't really make sense to ask whether one field element is "greater" or "less" than another one, any more than it makes sense to ask whether 11pm is earlier or later than 3am — you can reach 3am from 11pm by going to the past or going to the future!

Atreat: My feeling is that, if I were a scientist living in a universe with a software bug that caused 2+3 to return 9 or whatever, Occam's Razor would inevitably lead me to a theory where first I'd do addition in the normal way, and then I'd add in the "friction" or "correction" term that changes the result of 2+3 from 5 to 9. In other words, normal addition would still have the overwhelming advantage of coherence and simplicity, and weird-addition would have only the advantage of matching observations. It's like if aliens observed humans playing chess: they would never logically deduce castling from the other rules, but could add it in as a weird observed exception to the other rules.

Thanks very much for the brief intro to SKI. I'm still struggling to understand, though, why the possibility of such a system is supposed to tell us anything about the natural numbers we are actually familiar with, and the contingency or necessity of the relations they enter into. Also, you said "This argument relies upon the argument that all facts are contingent upon the observers who conceive them."
I assume this means that a fact has to be conceived in order to be real, so that if no-one can conceive of something, that thing is not true/not real. But if that is an assumption of the argument, can't you much more quickly get to the radical nonabsoluteness of arithmetic just by hypothesising a universe with no life in it at all (or even a universe with nothing in it at all)? There would be no observers, and no computations. So (given your assumption) nothing at all would be true, either about arithmetic or anything else. This seems to make the excursion into the SKI calculus unnecessary; for my own part though, it also looks like a reductio ad absurdum of this subjectivist assumption. Why should there not be facts that no-one has ever conceived or ever could conceive (say about galaxies beyond the limits of the observable universe)?

Scott #141, If they observe a lot of other human games then they may also notice that many games have special exception rules that have the function of speeding up early game play, and castling fits into that larger framework. This isn't just a nitpick, there's a serious point here also. If we had a world where we had to do the sort of "correction" for 2+3, I'd expect any such correction to fall into some larger framework.

John Tromp Says:
Thanks for another fascinating blog entry, Scott. You covered two concepts that my co-author Matthieu Walraet and I managed to combine in the paper "A Googleplex of Go games", available at https://gobond.nl/sites/default/files/bijlagen/Aantal%20mogelijke%20go%20posities%20-%20update.pdf

Big Numbers – The Square Root Says:
[…] "Oh yeah," she says. "So is that the biggest number?" … (Shtetl-Optimized) […]

leopold #142: I'd go even further than that. If a fact is not realized by any observer anywhere or embodied in any way, then in what sense do we say it is "real"? Like, what does that word even mean for a fact in this context? A universe with nothing in it at all? In what way could such a universe be said to "exist"? What I am saying is that all things are contingent. All facts or truths are contingent. Absolute facts or truths do not exist, because that is an impossible mode of being. We're talking about absolute truth, where what Scott and others mean by "absolute" is things that exist in and of themselves, things whose existence is utterly independent of anything else and not contingent in any way. I believe that is a provably impossible mode of being.

Scott, my example doesn't give 2+3=9, but renders 2+3 uncomputable. The analogy I have in mind is not castling in chess, but rather trying to successfully visualize living in 4 spatial dimensions, or even better, visualizing a solution for the halting problem, or even better, being able to "see" the Kolmogorov complexity of some arbitrary string. I am imagining a universe where 2+3 is an impenetrable blind spot. This would not contradict the truth that 2+3 = 5! Emphatically, that would still be true in our universe. But it shows that it is contingent 🙂

Well Scott, I think we've got to go with the guy that's spent his life studying CH, Hugh Woodin.
It seems that prior to around 2010, Hugh thought CH was false, but around that time he suddenly changed his mind and started talking about V = Ultimate L, and now he thinks CH is true: V (the von Neumann universe) = Ultimate L (a constructible universe).
https://en.wikipedia.org/wiki/Von_Neumann_universe
https://en.wikipedia.org/wiki/Constructible_universe

I'm sympathetic to your view that math has to be tied to physics (specifically computation) to qualify as having definite truth-values. So that would include the math in the theory of computation, algebra, geometry, topology and analysis. It's less clear what we should make of the math in the fields of epistemology (probability & stats, set theory?, proof theory, model theory). It does seem that a lot of mathematical logic might be more about our own internal reasoning processes rather than something that is objectively 'out there'. So I agree, it's not clear that CH has a definite truth-value. What sorts of infinity are 'real' then? Well, perhaps only the infinities tied to physics (sets of natural and real numbers). It's a lot less clear that the additional endless towers of infinities in logic have any sort of objective existence.

What I am saying is that all things are contingent. All facts or truths are contingent. […] Things whose existence is utterly independent of anything else and not contingent in any way. I believe that is a provably impossible mode of being.

Provably impossible? So it is a necessary truth that there are no necessary truths? (That _does_ sound impossible!)

"What I am saying is that all things are contingent. All facts or truths are contingent. […] Things whose existence is utterly independent of anything else and not contingent in any way. I believe that is a provably impossible mode of being."
BTW, I'm really hesitant to believe that there is anything I'm correct about that Scott is incorrect about 🙂 He's one of my intellectual heroes (and I like to think a friend) and I have serious appreciation for both his intellect and intellectual honesty. When I think I'm right about something and he's wrong… something smells. So I'd *love* (if a little scared) to be shown the error of my ways here.

I just want you to describe to me what some world is like where 2+3 is not 5

The problem is that one needs to define precisely the meaning of the symbols 2, 3, 5, and =. I don't think it's as obvious as it seems. Is math nothing more than the power of writing symbols on paper? Or is math supposed to simply map itself to real-world constructs? One could imagine a world where 2+3 = 5 just isn't 'useful'. Imagine a world where the *only* objects are "blobs" that can only exist as a singleton (1), in a "pair state" (2), in a "triplet state" (3), or in a "quadruplet state" (4). 2+3 != 5: there's no such thing as "bringing together" or "observing" a pair and a triplet together. It's an ill-defined situation. Maybe because quintuplets just don't exist – bringing a pair and a triplet together would create some sort of annihilation of the entire universe. Or a bit like how people can only visualize distinct groups of 2 or 3 objects in their "mind's eye", but are incapable of visualizing a group of 5 objects "at once".

Indeed the Completeness Theorem can be interpreted as saying precisely that the nonstandard integers are "nothing more than that," spandrels that have to exist by the mere fact of ZF+Not(Con(ZF)) and so forth being consistent.

You don't need the incompleteness theorems to get nonstandard models. You can get a nonstandard model of ℕ simply by taking the set of sequences of elements of ℕ and dividing it by the equivalence relation which holds between two sequences if the set of indices on which they agree is a member of some fixed non-principal ultrafilter. In such a model, Con(ZF) will have the same truth-value as in the model from which you've started, and yet it will have members like the equivalence class containing (0, 1, 2, 3, 4, 5, 6, 7, …), which doesn't correspond to any 'ordinary' integer. Nonstandard models aren't there merely because we can't prove that certain formal systems are consistent; they exist because we can't even define what it means for something to be finite.

"In a finite field, […] it doesn't really make sense to ask whether one field element is "greater" or "less" than another one, any more than it makes sense to ask whether 11pm is earlier or later than 3am — you can reach 3am from 11pm by going to the past or going to the future!"

But it does make sense once you start considering "objects" living in the fields. E.g. spans of time. You can certainly imagine working for 5 hours, from 6pm till 11pm. Or working 5 hours from 11pm till 4am, and then it makes sense to say that "4am is later than 11pm". And it's not the same as working from 4am till 11pm (a span of 7 hours), in which case 11pm is "later" than 4am. Or take a boat sailing the earth (a finite field): you need to be able to distinguish its bow from its rear (i.e. distinguish a boat that's 100m long from one that has a length of a great circle – 100m… or a boat having a length of 3 great circles – 100m, i.e. wrapping around three times).

I still haven't managed to carefully read this whole interesting post, but a few comments:

1) Giuseppe Peano's axioms introduced in the 19th century were not the same thing as what we now call PA.
Peano's axioms used a single induction axiom that quantified over formulas, so we'd now consider it to be given in second-order logic. I don't know when the first-order theory PA was introduced but my guess is no earlier than the 1920s. It would be interesting to look up the history and I might try to do that. 2) Even if one says the nth-level BB(k) is a definite number for all finite n and k, it gets messy if n is a transfinite ordinal, because of different possible codings of them. You might like the article: https://xorshammer.com/2009/03/23/what-happens-when-you-iterate-godels-theorem/ if you haven't seen it. It explains the natural progression from PA through 2nd order arithmetic, ending up with set theory. IMHO it's reasonable to say that every Turing machine either halts or doesn't (i.e. every Pi-0-1 sentence has a truth value) while not saying the same of sentences with 100s of alternating quantifiers (what can that mean?). Or that PA is unconvincing because the induction axioms say things like "if phi(0) and (phi(n)=>phi(n+1)) then forall n. phi(n)" which presumes that all those n's actually exist. Of course 0 exists (an axiom says so), as does S0, SS0, etc., but many formulas denoting integers can never be reached that way. That viewpoint is called "predicative arithmetic" and was mostly done by Edward Nelson, who wrote a book and a bunch of articles about it. His predicative system was weaker than PRA but stronger than Robinson's Q, and he was still able to encode most interesting math in it. 4. Shachaf Ben-Kiki once joked that ultrafinitists are mathematicians who believe that Peano arithmetic goes from 1 to 88 ;-). Re fred #153: > Is math nothing more than the power of writing symbols on paper? No, but writing symbols on paper is a rather accurate model that most of us humans can handle easily. Writing symbols with chalk on a blackboard is closer to real math, but has some practical difficulties for humans. Real math in the original of the Book is written on a medium that is entirely unusable by humans, and that may even be unimaginable for them. Humans can only ever read imperfect copies of parts of the Book in various forms more convenient for them. jonas #157, Thou shall not take any other Book before me, eh? jonas #157 I was thinking about Formal Grammars, i.e. symbols + replacement rules (https://en.wikipedia.org/wiki/Formal_grammar). Doesn't that cover (most?) of mathematics? tomate Says: @ Walter #93 and Scott #96 It's Matteo, the organizer of the Festival. Thanks again Scott for your great contributions. You shouldn't be worried about the language barrier or the difficulty of the talk: the point we try to make at Festivaletteratura is that we shouldn't downplay the audience. We want them to confront with scientific thinking in all of its complexity rather than with the sort of easily-digestible metaphors that infest scientific popularisation. And thanks Walter for being a Festival supporter. About the books, a short apology: unfortunately the Festival's library is not used to dealing with academic publishers, and apparently there are complications with orders involving this kind of products. @Jonas #110 I think I should clarify what I meant in comment #51. The goal of making PRG is to have a deterministic algorithm that takes a small random seed and output a string that "looks random" to an adversary. One possible way to do this is to build a generator that "behaves like" it has a huge seed even though the actual seed is small, say like 128 bits. 
So the intuition here is to show that it is infeasible to invert the generator due to an algorithmic information theoretic argument: the program size complexity of the generator is way larger than the string of output bits of the generator by a constant factor, so no algorithm can succeed in describing the internal state of the generator using the output bits. Comment #162 September 21st, 2017 at 12:26 am William #161: The trouble is, you always assume the inversion algorithm knows the code of the generator. Anything that's not known to the inverter is part of the seed, by definition. And the seed is shorter than the output string. Scott#50, "Have any examples of ZFC extensions with conflicting arithmetical theorems seriously been proposed by anyone?" Sure, depending on what you mean by serious. ZFC+Not(CON(ZFC)) conflicts with ZFC+"there exists an inaccessible cardinal", since the existence of the inaccessible cardinal means ZFC has a model and is therefore consistent. Comment #164 September 21st, 2017 at 1:05 am @Scott #162: The generator is self-keying, in the same vein as an autokey cipher. It re-seeds itself every clock cycle , the actual generation of output bits is a fractional (tiny) subroutine of the main algorithm. Yes the actual seed is smaller than the output but the "virtual seed" is always larger, there is a phase transition that takes place , I don't want to give any details of how this would be done here, maybe in the future I will give a link on your blog to the actual design, after it's well vetted, maybe, we'll see 😉 As a humorous side note(hopefully) has anyone noticed that Trump's new chief of staff Kelley looks like the guy who played the head of the Praetorian Guard in the movie "Gladiator" ? At the end of the movie he kinda lets the emperor be killed by Russell Crowe and Rome kinda sorta becomes a democracy again. Is life about to imitate art ? asdf #163: I'm obviously well aware that ZF+Not(Con(ZF)) proves false arithmetical theorems! I meant two extensions that prove conflicting arithmetical statements and that are both "natural," in the sense that both have mathematicians who advocate them as likely to be arithmetically sound. And let's exclude cases like ZF+(NEXP=coNEXP) and ZF+(NEXP≠coNEXP), where pretty much everyone hopes that mathematical advances will ultimately reveal one extension or the other to be inconsistent: I only want cases where experts generally agree that the two extensions are both consistent, and disagree only on which is arithmetically sound. William #164: A pseudorandom generator that's secure for "trivial counting reasons" that have nothing to do with computational intractability, is just as impossible as a lossless compression program that works on random input (i.e., whose success has nothing to do with exploiting regularities in the data). There's no amount of playing around with the definitions of words that can change this. But let's end this particular exchange here, since no convergence seems to be happening… Re #82 and #85, see John Baez's blog entry https://plus.google.com/+johncbaez999/posts/KXnZmQZmNTy and Gowers's blog entry https://gowers.wordpress.com/2017/09/19/two-infinities-that-are-surprisingly-equal/ about that new result. Atreat #135+ Maybe that was the best comment ever on this blog! (or, the best I could appreciate) However yes, I do think there are a few loopholes that remain and maybe destruct the whole argument. 
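The counting point behind the pseudorandom-generator exchange above (comments #161–#166) can be stated in one line; this is only the standard observation, not a summary of either side's specific design. If $G : \{0,1\}^{128} \to \{0,1\}^{N}$ is any deterministic generator with $N > 128$ whose code is public, then $$\left|\mathrm{Image}(G)\right| \le 2^{128} \ll 2^{N},$$ so a computationally unbounded adversary can always invert it (try all $2^{128}$ seeds and output one that reproduces the observed string) and can always distinguish its output from uniform. Whatever security such a generator has must therefore rest on computational intractability, not on a counting or program-size argument about the generator itself.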
First, you said that using SKI you are confident you can describe a universe with a specific blindspot, for example a universe where you can do ordinary arithmetic except that 2+3, and 2+3 specifically, is not decidable in this universe. That sounds 100% fair and directly answers Scott's request. But *being confident* is hardly enough! 🙂 Can you actually do it and check whether the blindspots you can plant seem to stay still and quiet, or whether they immediately contaminate most or all of arithmetic? (e.g. produce obvious contradictions or make everything non-computable) Second, and harder, is there any way to *prove* that the resulting system would be sound and consistent, or at least as consistent as PA? Finally, and maybe impossible to reach (but maybe not!), is there any systematic method to construct a sound and consistent mathematical framework with a specific blindspot for *any* (well defined) mathematical truth? Anyway thanks for this great line of thought! Comment #170 September 21st, 2017 at 10:34 pm jonas #168: Thanks so much for that link!!! In effect, you get to watch Tim Gowers and John Baez as they collaboratively write the popular article about p=t that Quanta magazine "meant" to have published but didn't: the one that takes a person who's never encountered the relevant definitions, and brings them up to the point of actually understanding the question (if not, of course, the answer). While the whole time Tim, John, and the others also engage in a sort of meta-discussion about how one should go about popularizing such an advance, and whether it even should be popularized. (It's striking how, wherever and whenever something this abstruse is being made comprehensible, the same two names Baez and Gowers keep showing up.) I had read enough to understand that (1) p=t has no obvious bearing on the Continuum Hypothesis (only p<t would have refuted CH); (2) the Quanta article spends almost all its time talking about Cantor and Gödel and Cohen rather than the new advance, which could lead readers to the misconception that Malliaris and Shelah were solving CH or something like that (apparently this happened); and finally, (3) the technical statement of p vs. t involves "almost inclusion" of sets and "infinite pseudointersections" and "towers." But I didn't invest the time to develop the intuition that one can get from that wonderful Google Hangouts exchange. Comment #171 September 22nd, 2017 at 1:01 am Well, Baez isn't correct when he says that Gödel proved CH doesn't have a definite answer. In fact, Gödel was a Platonist, meaning he definitely thought CH was either true or false. All Gödel proved was that CH is undecidable *by the axioms of standard ZF set theory*. But of course, add more axioms, and it could in fact be decided. I'm suspecting that Yudkowsky's principle of ultra-finite recursion is somehow related to CH, and could be used to generate a proof. But how? If my suspicion is correct, the axiom we should add is that there are only 3 types of infinity (transfinite cardinals), with logical recursion stopping at 3 levels. Well, natural numbers and real numbers only amount to 2 types of transfinite cardinals, so where's the third? Is it above the real numbers, below the natural numbers, or in-between the reals and naturals (which would falsify CH)? Computable numbers don't give us a new type of transfinite cardinal (computable numbers are countable), so where's that third type of infinity?
Scott, what would happen if rather than adding axioms to standard set theory, axioms were *weakened* instead… would that help resolve CH? Specifically, I'm wondering about 'Axiom of Powerset' https://en.wikipedia.org/wiki/Axiom_of_power_set mjgeddes: Yes, of course, Baez was editorializing about what independence means. Though even if Gödel were right, and there were a "Platonic truth" about CH, it's very hard to understand how that truth could become universally accepted by mathematicians, given that the CH models and not(CH) models are both perfectly good, internally-consistent playgrounds in which to do almost all of mathematics. If you wanted only three orders of infinity, then you'd certainly need to give up on the powerset axiom, which (together with the axiom of infinity) gives you infinitely many orders of infinity. But why 3? Why not 17? In any case, removing axioms can only ever yield more models of your axiom system, never fewer. So removing the powerset axiom—or any other ZF axiom—certainly can't suffice to decide CH. I.e., you'll still have all the models where CH is true (and more than that, I think models where there are only 3 infinite cardinalities or whatever, like you suggested). But you'll also still have all the models where CH is false. > Though even if Gödel were right, and there were a "Platonic truth" about CH, it's very hard to understand how that truth could become universally accepted by mathematicians, given that the CH models and not(CH) models are both perfectly good, internally-consistent playgrounds in which to do almost all of mathematics. How did the axiom of regularity get accepted by mathematicians? Scott #166: I don't have an example for: "I only want cases where experts generally agree that the two extensions are both consistent, and disagree only on which is arithmetically sound." You might be right that no such example exists. But some time ago, I seriously asked the question: "Does a Π2 sentence becomes equivalent to a Π1 sentence after it has been proven?" It was not well received, but it was a serious question from my part. However, my main motivation was better understanding of the connection to ordinal numbers, not to question the consistency (or "definite truth property"/"law of excluded middle") of Πk sentences. After my question was badly received, I tried to explain my motivation for asking that question: This is a serious question about provable Π2 sentences. The rather random examples of Π2 sentences (P vs. NP, Goodstein's theorem, and the strengthened finite Ramsey theorem) given in the question are enormous statements, and I feel the only way how such enormous statements can be true is by being provable. But being provable of course means being provable in some specific formal theory, and the consistency of such a theory is equivalent to a Π1 sentence. But the examples also show a connection to ordinal numbers, and ordinal numbers are used to measure the proof strength of certain formal theories. I had hoped that somebody would be able to explain whether there is a connection between ordinal numbers and Πk sentences. At least user21820 tried to help me cleanup my misconceptions in a followup chat to that question. 
I no longer fully understand all my reasonings from back then, but at some point I made the following remark, defending once again my initial question: (2) Even so I thought that ω-consistency showed that the premise of my question was flawed, I wonder whether Gödel's dialectica interpretation doesn't show exactly what I was asking for in the case of arithmethic sentences provable in PA (or rather Heyting arithmetic). After all, the "exist term" is constructed explicitly, so all that is left is the "forall sentence", which is provable in System T (an extension of primitive recursive arithmetic). Even so I stopped thinking about that specific question, I did continue trying to understand the material from mathematical logic in terms of Turing machines. At least for myself, I have the impression that I understand that material better now, and on a more intuitive level. (I will probably write a blog post about it at some point, and then be happy with what I achieved.) I could try to explain it, but I am unsure whether you (or anybody else) would really want that. Should I try to explain it? Technical convenience? In any case, I see this as very different from CH, because it's obvious from the beginning that Regularity can neither be proven nor disproven from the other axioms: it's simply a question of how "pathological" are the sets that you'd like to consider. Are there other general metrics besides "number of states" that could split non-halting programs into different categories? (is never halting the same as halting at infinity? :P) A bit like separating infinite sums into the ones that have a finite number of non-zero terms and the ones that have an infinite number of non-zero terms, then splitting the latter category into the ones that converge and the ones that diverge? Jay #169: Thanks! Yes, being confident is hardly a proof. I could try and actually implement this, but it would take some time and it seems clear that Scott doesn't buy the argument even if I could implement it. Further, I don't have the skills necessary to provide anything like a formal proof that the result would be sound or consistent or have greater mathematical ramifications. Right now, I'm spending my quiet time thinking about what a formal logic would look like where all truth values are contingent ie, where all propositions are dependent. This has some relation to paraconsistent or multi-valued logics, but I'm wondering what a computable calculus would look like with this kind of framework. Cheers! Antedeluvian Says: Out of interest, why do you write BC, not BCE? Is it for political/religious reasons? Antedeluvian #179: Uhh, just because that's the standard/traditional abbreviation? I have no political or religious objections to writing BCE. Antediluvian Says: I am not sure it has been standard for a few years now, at least within academic or non-Trumpish circles. Having said that, the change to BCE did cause a stir a few years ago in Australia (see e.g. http://www.news.com.au/national/for-christs-sake-ad-and-bc-ruled-out-of-date-for-national-curriculum/news-story/ffb9030f1a53ed9226e7bcac9bed3969 ) and in the US more than a decade ago (see e.g http://articles.latimes.com/2005/apr/24/news/adna-notpc24 ). Dandan Says: I have an idea how to name a bigger number 🙂 But I'm not sure if it's correct. Consider ZF for a start. Now add new axiom to it – statement about its consistency. You'll get some new theory ZF + Con(ZF). You can repeat to obtain ZF + Con(ZF) + Con(ZF + Con(ZF)). 
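In symbols, the iteration Dandan describes here (and continues just below) can be sketched as $$T_0 = \mathrm{ZF}, \qquad T_{\alpha+1} = T_\alpha + \mathrm{Con}(T_\alpha), \qquad T_\lambda = \bigcup_{\alpha<\lambda} T_\alpha \ \text{ for limit ordinals } \lambda.$$ This is only a rough editorial formalization, with the usual caveat (echoed in an earlier comment about transfinite levels): continuing past $\omega$ requires a concrete ordinal notation in which the theories $T_\alpha$ can be described, and different notations for the same ordinal can yield different theories, which is exactly where such constructions get delicate.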
You can continue (just like with +1) up to some ordinal and get some new very strong theory. This theory, probably, can prove the existence of a much larger ordinal. So, this is some kind of function F from ordinals to much larger ordinals. Now we can define an ordinal which is the union of the ordinals F(w), F(F(w)), … , F^n(w), … . And name a very big number using it. Dandan #182: Yeah, that's pretty much exactly what I was talking about in the last part of the lecture. There's an article at Ars Technica that says that Microsoft is integrating quantum programming into Visual Studio. https://arstechnica.com/gadgets/2017/09/microsoft-quantum-toolkit/ The article *doesn't* say that quantum computing can break NP-complete problems by trying all possibilities at once — which is a good sign, I guess. Raoul Ohio Says: Microsoft announces Q# / P++: The quantum programming language will be part of, and fully integrated into, Visual Studio. MS hasn't thought of a name for the language yet, but my personal quantum prediction machine predicts Q#, or possibly P++. Microsoft's rap: https://news.microsoft.com/features/new-microsoft-breakthroughs-general-purpose-quantum-computing-moves-closer-reality/ A lot of this article is about Michael Freedman, but it also has lots of Gee Whiz stuff like: "A quantum computer is able to model Nature". Scott, I will now state the axioms of my 'Reality Theory'. My lemmas will finally provide the answer to life, the universe and everything 😉 *Axiom of Knowledge-Existence Equivalence: Reality and knowledge are equivalent at the fundamental level! Whilst it's *usually* true that the map and the territory are separate, this distinction breaks down at the fundamental level, and knowledge and existence become equivalent! *Axiom of Ultra-Finite Recursion: There is absolutely nothing to be gained from a logical language that has more than 3 levels of recursion. That is to say, full logical power is reached at the 3rd level of recursion, and additional levels are redundant because they can always be reduced to (collapse to) the 3rd level. This is a generalization of Yudkowsky's law of ultra-finite recursion: https://wiki.lesswrong.com/wiki/Yudkowsky%27s_Law_of_Ultrafinite_Recursion Yudkowsky's Law of Ultrafinite Recursion states that "in practice, infinite recursions are at most three levels deep." My generalization is to take out the phrase 'in practice' and replace it with 'in principle'. *Axiom of Self-Similarity of Knowledge The structure of knowledge is a fractal. That is to say, there's a self-similarity property to the structure of knowledge such that we can find a method of knowledge representation that applies across *all* levels of abstraction. The Geddes axioms are sufficient to construct a theory of everything. Lemma 1: Reality is a *language* From Axiom of Existence-Knowledge Equivalence: All of reality is a language that is reflecting upon itself. Lemma 2: From (1) and Axiom of Self-Similarity The language of reality is *recursive*. The universe is a fractal reflecting the structure of knowledge. Lemma 3: From (1) and (2) and Axiom of Ultra-Finite Recursion The fundamental language of reality (call it L1) undergoes recursion and splits into 3 levels, representing 3 fundamental levels of abstraction. But this logical system as a whole represents a new language (call it L2) and *that* undergoes recursion, again splitting into 3 levels of abstraction. The new logical system resulting from the previous steps (L3) does not recurse.
We are left with 3^3 = 27 fundamental levels of abstraction. There are 27 core knowledge domains that 'cleanly' (precisely) categorize all knowledge. These are based on 27 a-priori categories/sets/types that are necessary and sufficient for a mind to represent any aspect of reality. The 27 core knowledge domains are shown here: http://www.zarzuelazen.com/CoreKnowledgeDomains2.html The domains are arranged according to level of abstraction. So as you look at the knowledge domains on the main page, you can see the whole fractal structure of knowledge across all the levels of abstraction – top-to-bottom and left-to-right. There are only 2 dimensions on the screen – to indicate the 3rd dimension of abstraction I split the domains into groups of 3 using white space. Clicking on links takes you through to my wikibooks with A-Z lists of wikipedia articles explaining the central concepts for each domain (on average, each domain has about 100 core concepts). The key conjecture I want to get across here is that these 27 categories are NOT inventions. They are *fundamental* a-prior categories of existence itself! They *cleanly* (with 100% precision) carve reality at the joints. This follows from Axiom of Knowledge-Existence Equivalence. The map-territory distinction breaks! Some of the math markup isn't rendering, as of 2017-09-26 at 1:45pm CDT. Each of the items I refer to is demarcated by double dollar signs. Māris Ozols Says: Quite an epic story! You should find an artist who can illustrate it and turn it into a book. It could become a computer science equivalent of "A Brief History of Time"! On a different note, when you say "That essay might still get more views than any of the research I've done in all the years since!", it makes me wonder what do you think your legacy will be? I mean, what is the most important thing (by whatever measure) you have done in your career as a scientist? It doesn't have to be a result or anything like that, I mean it more in the "something for the greater good of the humanity" sense. Maris #189: For god's sakes, I'm "only" 36! I hope I can still do one or two more things before it's time for me to ponder my "legacy" (although I wouldn't bank on it…) Comment #191 October 1st, 2017 at 12:15 pm Is it easy to show that level-omega BB exists? It seems the ability to call any level-k BB (with k a positive integer) could plausibly allow us to create a family of n state halting Turing machines without an upper bound on the number of steps before halting. The finite set argument does not hold if we have infinitely many different operations (oracles) to choose from. Comment #192 October 1st, 2017 at 1:14 pm Anders #191: I believe that level-ω BB can be constructed in Peano arithmetic or even weaker theories, so yes. In more detail, you of course need to define the oracle access mechanism in a suitable way, so that (for example) the integer k such that you want to call the level-k BB oracle gets written on an oracle tape. But if you do, then as usual, there are only finitely many n-state machines, so among those that halt, there's one that runs for a maximal number of steps. Comment #193 October 2nd, 2017 at 6:49 am To everyone who pointed out the problem with rendering equations: thanks so much, and I believe the problem is finally fixed! 
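A hedged way to write down what comments #191–#192 describe (this is a gloss, not Scott's wording): with the oracle convention above, where the machine writes a level index $k$ and an argument $m$ on its oracle tape and receives $\mathrm{BB}_k(m)$, one can set $$\mathrm{BB}_\omega(n) = \max\{\, s(M) : M \text{ is an } n\text{-state machine with that oracle which halts} \,\},$$ where $s(M)$ is the number of steps $M$ takes before halting. The maximum exists for the reason given in #192: for each fixed $n$ there are only finitely many $n$-state machines, so among the halting ones some machine attains the largest running time.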
Comment #194 October 3rd, 2017 at 11:25 pm Couldn't help but think of this video (jump to 1:15): Comment #195 November 12th, 2017 at 2:59 am Since comments are still open on this post — I was browsing MathOverflow and came across this answer: https://mathoverflow.net/a/6740/5583 Apparently it's undecidable in ZFC whether ω_1→(ω_1,ω+2). That, uh, seems like serious evidence against my claim that ordinals are on pretty solid ground, huh? Comment #196 November 12th, 2017 at 3:26 pm Thinking about it a bit more, maybe it doesn't — the proposition in question is after all fundamentally a claim about the power set of ω_1 (or rather of (ω_1 choose 2)), rather than about ω_1. So I take it back, ordinals still seem to be on solid ground after all!
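For readers unfamiliar with the arrow notation in comment #195, the standard reading (with the exponent for pairs left implicit, as in the comment) is: $\omega_1 \to (\omega_1, \omega+2)^2$ says that for every 2-coloring $f : [\omega_1]^2 \to \{0,1\}$ of pairs of countable ordinals, there is either a set $H \subseteq \omega_1$ of order type $\omega_1$ on whose pairs $f$ is constantly $0$, or a set $H$ of order type $\omega+2$ on whose pairs $f$ is constantly $1$. As comment #196 observes, such a statement quantifies over all colorings of pairs, i.e. over subsets of $[\omega_1]^2$, which is why its independence reflects on the power set of $\omega_1$ more than on the ordinal $\omega_1$ itself.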
Synthetic projective lines
The classical synthetic notion of projective plane consists of a set of points, a set of lines, and a relation of incidence between the two, such that any two distinct points lie on a unique line and any two distinct lines intersect in a unique point (plus some nondegeneracy assumptions). There are similar notions of projective 3-space, $n$-space, and so on — but 1-dimensional projective space seems harder to capture synthetically, since there is no "room", dimensionally, for subspaces in between the points and the entire space. Has anyone attempted to define a synthetic notion of "projective line"? Ideally such a definition would have properties like the following: The space $P^1(k)$ is naturally a projective line for any division ring $k$, and from any projective line $L$ satisfying enough axioms we can construct a skew field $c(L)$ such that $c(P^1(k)) \cong k$ and $P^1(c(L))\cong L$ (unnaturally). The corresponding facts for Desarguesian projective planes are classical. Any line in a projective plane is a projective line, and any projective line satisfying enough axioms can be embedded as a line in some projective plane. This would be analogous to how any plane in a projective 3-space is a Desarguesian projective plane, while any Desarguesian projective plane can be embedded in a projective 3-space. I have an idea for how one might do this, by axiomatizing the "quadrangular hexad" relation on a line in a projective plane; but before I try very hard, I'm looking for references where something like it has been tried before. ag.algebraic-geometry projective-geometry incidence-geometry Tom De Medts Mike Shulman Something related to this might have been done in the theory of Moufang sets. (These are, more generally, related to algebraic groups of rank one, but in particular the projective line with its $\operatorname{PGL}_2$-action is an example of a Moufang set.) Could be that some axioms exist that single out the projective lines among the wild world of Moufang sets... – Matthias Wendt Oct 16 '15 at 6:41 Given 3 distinct and 4 distinct points in $\mathbb P^1$, we can use the first three to decide where $0$, $1$, $\infty$ are, and the latter four to specify a cross-ratio. That number should then be an 8th point. I have no idea what relations this 7-ary (partially defined) operation should satisfy. – Allen Knutson Oct 17 '15 at 3:18 @AllenKnutson That's similar to what I was thinking of. Note that once you've specified 0, 1, and $\infty$, you don't need to muck around with cross-ratios any more to get something projective; every point is already a single number, so you can just add and multiply them directly. This gives you some partial 5-ary operations, which can both be encoded using the functionality of the 6-ary "quadrangular hexad" relation. But at this point I'm mainly wondering whether anything like this has been done before; it seems a natural thing to try. – Mike Shulman Oct 18 '15 at 4:15 Building on previous work by Paul Libois, and related to work by Libois' student Jean van Buggenhaut from 1969, Francis Buekenhout considered and solved this question in "Foundations of one Dimensional Projective Geometry based on Perspectivities", Abhandlungen aus dem Mathematischen Seminar der Universität Hamburg 43 (1975) 21-29.
Note that his approach also adapts to lines in Moufang planes. See also F. Buekenhout and A. Cohen, Diagram geometry, Springer 2013, Section 6.2 Projective lines. Mike Shulman Tim Penttila The book by Buekenhout and Cohen is much easier to obtain. I just had a look at the copy on my shelf; section 6.2 quite clearly states that it is about classifying projective lines in thick projective spaces, which are Desarguesian. – Max Horn Nov 18 '15 at 7:37 @MikeShulman A line in a projective plane is a projective line in this sense if the plane is a translation plane with respect to this line. Most lines in non-Desarguesian projective planes are not projective lines in this sense. – Tim Penttila Nov 20 '15 at 14:22 (continued) This is for the restricted definition where there's only the hypothesis when the two points are equal. Buekenhout has replaced Desargues (little Desargues in the restricted sense) by the symmetry it induces on a line, and a general line in a non-Desarguesian plane won't have that symmetry. – Tim Penttila Nov 20 '15 at 14:32 The paper has five citations: Hirschfeld's book, the Buekenhout-Cohen book, J. Tits, Twin buildings and groups of Kac-Moody type 1992, P. M. Johnson, Semiquadratic sets... 1999, H. Van Maldeghem, Moufang lines 2007. None of them really take the idea much further. The last one is essentially studying the translation line in the Luneburg plane from this perspective. – Tim Penttila Nov 20 '15 at 14:46 I finally got a copy of the Buekenhout-Cohen book, which finally contains an actual definition of what he means by "perspectivity", which is not what I thought: he means the restriction to the line in question of a central collineation of the ambient projective space (an automorphism that fixes some hyperplane pointwise and all lines through some point setwise). There's still an exercise I have to do to convince myself that this works, but at least it seems more plausible. – Mike Shulman Dec 3 '15 at 20:20 Following up on Matthias Wendt's comment, the language of Moufang sets is indeed a suitable axiomatic approach to (generalizations of) projective lines. Formally speaking, a Moufang set is a set $X$ together with a collection of groups $U_x \leq \operatorname{Sym}(X)$ (one group for each $x \in X$), such that: each $U_x$ fixes $x$ and acts sharply transitively on $X \setminus \{ x \}$; the group $G := \langle U_y \mid y \in X \rangle$ permutes the $U_x$'s by conjugation. The groups $U_x$ are called the root groups of the Moufang set, and the group $G$ is called the little projective group. It is not hard to see that $X = \mathbb{P}^1(k)$ and $G = \mathrm{PSL}_2(k)$ has the structure of a Moufang set, but there are many more interesting examples, most notably those arising from semisimple linear algebraic groups of $k$-rank one. It is possible to single out the genuine projective lines over fields among the Moufang sets. For fields $k$ with $\operatorname{char}(k) \neq 2$, this was done by Richard Weiss and me (see section 6 of our paper "Moufang sets and Jordan division algebras", Math. Ann. 335 (2006), no. 2, 415–433); this result has been generalized to arbitrary fields (including the case $\operatorname{char}(k) = 2$) by Matthias Grüninger, where the result is somewhat more subtle ("Special Moufang sets with abelian Hua subgroup", J. Algebra 323 (2010), no. 6, 1797–1801).
At the risk of giving too much advertisement for my own papers, I can recommend the course notes "A course on Moufang sets", Innov. Incidence Geom. 9 (2009), 79–122, that I wrote together with Yoav Segev, for an introduction to the subject. Edit: As requested in the comments below, I am adding some more details, in particular about the example with $X = \mathbb{P}^1(k)$ and $G = \mathrm{PSL}_2(k)$. First of all, notice that the little projective group $G$ is generated by any two of the root groups, $G = \langle U_x, U_y \rangle$ for all $x,y \in X$ with $x \neq y$. So to give an explicit description of an example, it suffices to describe two of these root groups; all others are then obtained by conjugation inside $G$. We now take $X = \mathbb{P}^1(k) = k \cup \{ \infty \}$, acted upon by $G=PSL_2(k)$, the elements of which I will denote with matrices with square brackets (determined up to a non-zero scalar), so $$ \begin{bmatrix} a & b \\ c & d \end{bmatrix} .x = \frac{ax+b}{cx+d} \quad \text{for all } x \in X.$$ Notice that $\operatorname{Stab}_G(\infty) = \left\{ \begin{bmatrix} a & b \\ 0 & a^{-1} \end{bmatrix}\right\}$ and $\operatorname{Stab}_G(0) = \left\{ \begin{bmatrix} a & 0 \\ c & a^{-1} \end{bmatrix}\right\}$. We now define the root groups $U_\infty$ and $U_0$ to be the group of unipotent elements of these point stabilizers, i.e. $$ U_\infty =\left\{ \begin{bmatrix} 1 & b \\ 0 & 1 \end{bmatrix} \mid b \in k \right\}, \quad U_0 =\left\{ \begin{bmatrix} 1 & 0 \\ c & 1 \end{bmatrix} \mid c \in k \right\} .$$ The point is that it is now possible to forget about the matrix representation and even about the original ambient group $G$ altogether, and only retain the corresponding permutations of $X$. We then get $$ U_\infty = \{ x \mapsto x + b \mid b \in k \}, \quad U_0 = \{ x \mapsto (x^{-1}+c)^{-1} \mid c \in k \} . $$ It is now not so hard to imagine that this description makes sense for more general algebraic structures than commutative fields only. And indeed, this works equally well for skew fields, octonion division algebras, and even more generally for Jordan division algebras. To make examples with non-abelian root groups, similar ideas make sense by replacing the multiplicative inverse by more complicated maps that "behave like a multiplicative inverse". Another relevant comment, related to your first "ideal property": in the case of skew fields, for instance, it is only possible to recover the skew field up to opposition, i.e., in the case of $\mathrm{PSL}_2(D)$, we can recover the pair $(D, D^{\mathrm{op}})$ from the Moufang set, but not $D$ itself. (Here, $D^{\mathrm{op}}$ is the skew field with the same underlying additive group as $D$, and with multiplication given by $x*y := yx$.) Tom De Medts Well, I also recommend those course notes, and I am not a co-author, so I think I am allowed to do that ;-) – Max Horn Nov 18 '15 at 11:37 Of course Moufang sets are not that helpful if one also wants to capture projective lines contained in projective planes with small or even trivial automorphism groups. But catching those sounds like a very difficult problem to me, given that it is not known if there are projective planes of non-prime-power order... So if one could characterize projective lines intrinsically via axioms, deciding whether the finite ones can have non-prime-power order ought to be a very difficult problem.
– Max Horn Nov 18 '15 at 11:41 I don't suppose you could add a short description of what the groups $U_x$ are in the case of $\mathbb{P}^1(k)$, and ideally a synthetic description if $X$ is a line in a synthetic projective plane? This looks a lot like the definition in Buekenhout's paper, where I think the groups are supposed to be "central collineations", except that Buekenhout slices up the collineations according to their axis as well as center, so I'm wondering if the structures are the same. Does this work for projective lines over division rings, or in non-Desarguesian planes? – Mike Shulman Nov 18 '15 at 16:12 I was just saying I think the answer above would be enriched by a concise description of the construction, so a reader doesn't have to make their way through 20 pages of notes to piece together the definition. And octonion division algebras are one particular non-Desarguesian situation, but is there a synthetic construction that works in general? – Mike Shulman Nov 18 '15 at 19:43 Yes, it is the Buekenhout definition, when the two points are equal (and no hypothesis with the points distinct). The Moufang set people tend not to cite people who worked on the topic before Tits, probably because they didn't use the terminology of Tits. (So, for instance, John Faulkner's early work is also not often cited.) – Tim Penttila Nov 20 '15 at 15:07 One defining feature of $\mathbb P^1(k)$ is that it provides a sharply 3-transitive permutation representation for $\operatorname{PGL}_2(k)$. I believe that the abstraction of projective line to "sharply 3-transitive permutation group" is the most studied one. The characterization of sharply 3-transitive groups as groups of projectivities over KT-fields came up in an answer to Jacob Lurie's question Action of PGL(2) on Projective Space. That answer mentions that every sharply 3-transitive group is the "group of projectivities" of a KT-field $F$, but note that the correspondence goes both ways, and one can construct $F$ out of the permutation representation. Judging by how there is an equivalence of categories of near fields and of sharply 2-transitive groups, I wouldn't be surprised if one can say something similar for KT-fields and sharply 3-transitive groups. A reference for this is Kerby, W., Wefelscheid, H., "Über eine scharf 3-fach transitiven Gruppen zugeordnete algebraische Struktur," Abh. Math. Sem. Univ. Hamburg 37 (1972), 225–235. Gjergji Zaimi You might benefit from reading Section 5.3 of John Faulkner's book The role of nonassociative algebra in projective geometry, AMS 2014. The results there may have been what you had in mind by 'axiomatizing the "quadrangular hexad" relation'. These are (almost) all very old theorems, appearing in (and mostly predating) Pickert's Projektive Ebenen book from 1955, one going back to Veblen and Young from 1910 and another to von Staudt from 1860. The basic underlying idea of using quadrangular sets goes back to Desargues in 1639. (It was Desargues who introduced the term involution into mathematics, in this context, often referred to as an involution of six points.) But Desargues' work is an elaboration and extension of results of Pappus from c.340. Yes, Faulkner's lemma 5.6 and the first sentence of his lemma 5.7 are some of the axioms I would include.
I didn't know that uniqueness of the sixth point is equivalent to Desargues (though it doesn't surprise me). But of course his section 5.3 is not itself what I was thinking of doing, since he is working in an ambient projective plane throughout, rather than axiomatizing a hexary relation on a given line. – Mike Shulman Dec 3 '15 at 22:36
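As a concrete sanity check of the explicit description of the root groups $U_\infty$ and $U_0$ in the Moufang-set answer above, the short script below verifies, over a small prime field, that the maps $x \mapsto x+b$ and $x \mapsto (x^{-1}+c)^{-1}$ agree with the Möbius action of the corresponding unipotent matrices on $\mathbb{P}^1(\mathbb{F}_p)$. It is an illustrative sketch only, not part of any answer, and it assumes Python 3.8+ for the modular inverse `pow(x, -1, p)`.

```python
# Check over F_p that the root-group maps written down in the answer are
# exactly the Moebius actions of [[1,b],[0,1]] and [[1,0],[c,1]] on P^1(F_p).
p = 7
INF = "inf"
POINTS = list(range(p)) + [INF]

def moebius(a, b, c, d, x):
    """Action of the matrix [[a, b], [c, d]] on P^1(F_p)."""
    if x == INF:
        return INF if c % p == 0 else (a * pow(c, -1, p)) % p
    num, den = (a * x + b) % p, (c * x + d) % p
    return INF if den == 0 else (num * pow(den, -1, p)) % p

def u_inf(b, x):
    """Root group at infinity: x -> x + b (fixes INF)."""
    return INF if x == INF else (x + b) % p

def u_zero(c, x):
    """Root group at zero: x -> (x^{-1} + c)^{-1} (fixes 0)."""
    if x == 0:
        return 0
    inv = 0 if x == INF else pow(x, -1, p)
    s = (inv + c) % p
    return INF if s == 0 else pow(s, -1, p)

for t in range(p):
    for x in POINTS:
        assert moebius(1, t, 0, 1, x) == u_inf(t, x)
        assert moebius(1, 0, t, 1, x) == u_zero(t, x)
print("root-group descriptions match the matrix action on P^1(F_%d)" % p)
```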
Risk perception and decision-making: do farmers consider risks from climate change?
Anton Eitzinger, Claudia R. Binder & Markus A. Meyer. Climatic Change, volume 151, pages 507–524 (2018).
Small-scale farmers are highly threatened by climate change. Experts often base their interventions to support farmers to adapt to climate change on their own perception of farmers' livelihood risks. However, if differences in risk perception between farmers and experts exist, these interventions might fail. Thus, for effective design and implementation of adaptation strategies for farmers, it is necessary to understand farmers' perception and how it influences their decision-making. We analyze farmers' and experts' systemic view on climate change threats in relation to other agricultural livelihood risks and assess the differences between their perceptions. For Cauca, Colombia, we found that experts and farmers perceived climate-related and other livelihood risks differently. While farmers' perceived risks were a failure in crop production and lack of access to health and educational services, experts, in contrast, perceived insecurity and the unreliable weather to be the highest risks for farmers. On barriers that prevent farmers from taking action against risks, experts perceived both external factors such as the national policy and internal factors such as the adaptive capacity of farmers to be the main barriers. Farmers ranked the lack of information, especially about weather and climate, as their main barrier to adapt. Effective policies aiming at climate change adaptation need to relate climate change risks to other production risks as farmers often perceive climate change in the context of other risks. Policymakers in climate change need to consider differences in risk perception.
Climate change poses major challenges to our society, especially in the agricultural sector in developing countries (Vermeulen et al. 2011). Experts have argued that adaptation and mitigation actions are urgently needed to pave climate-resilient pathways for the future (IPCC 2014a). One major challenge with the design and implementation of adequate actions is the complexity of the systems characterized by interactions between environmental and human dynamics at different scales (Turner et al. 2003). Delayed and unexpected feedback loops, nonlinearities, and abrupt rather than gradual changes render the climate system exceedingly hard to predict and the reactions of the exposed human system even less foreseeable (Alley et al. 2003). These entailed uncertainties make decision- and policymaking a difficult task. The difficulties in climate-relevant decision- and policymaking in agriculture are further aggravated by differing perceptions of climate change by experts and farmers. Despite the scientific consensus about existence, risks, and possible solutions to climate change, nonspecialists largely seem to underestimate and misinterpret these causes and risks (Ding et al. 2011). This is partly due to two key facts: first, most people do not differentiate between weather and climate (Weber 2010) and are thereby unable to distinguish climate variability from climate change (Finnis et al. 2015). Second, most people still perceive the likelihood that climate change might affect them directly as low (Weber 2010; Barnes and Toma 2011; Lee et al. 2015).
When taking decisions towards adaptation, people tend to relate possible actions to probable consequences in a linear manner without considering feedback loops, delays, and nonlinearities (Weber 2006). The success of agricultural climate policies relies to a large extent on farmers' awareness of climate change including their knowledge and beliefs regarding climate change and how it will affect them (Patt and Schröter 2008; Carlton et al. 2016). Scholars have found that small-scale farmers in Latin America are highly vulnerable to climate change (Baca et al. 2014; Eitzinger et al. 2014). While farmers have adapted continuously to social and environmental change in the past, the magnitude of climate change strikes the already stressed rural population. In Latin America, inequality and economic vulnerability call for an approach that tackles the underlying causes of vulnerability before implementing adaptation strategies (Eakin and Lemos 2010). Without visualizing climate change as one of the multiple exposures, small-scale farmers rarely adapt their farming practices even if suggested by climate policies (Niles et al. 2015). This reluctance is greatly influenced by the farmers' beliefs and perception concerning causes and local impacts of climate change (Haden et al. 2012). Furthermore, adaptive actions are driven by individuals and groups ideally supported by institutions and governmental organizations. In many countries in Latin America, the influence of governments has become weaker due to economic liberalization. Thus, governance mechanisms have lost their capacity to manage risks and to address issues of social vulnerability, especially in rural areas (Eakin and Lemos 2006). "By 2050, climate change in Colombia will likely impact 3.5 million people" (Ramirez-Villegas et al. 2012, p. 1), and scenarios of impacts from long-term climate change will likely threaten socioeconomics of Colombian agriculture. In Colombia's southwestern department Cauca, the average increase in annual temperature to the 2050s is estimated to be 2.1 °C with a minor increase in precipitation (Ramirez-Villegas et al. 2012). In this region, coffee farmers face several challenges through climate change, like shifting suitable areas into higher altitudes, implying reduced yields and increasing pest and disease pressure (Ovalle-Rivera et al. 2015). Ovalle-Rivera et al. (2015) estimate a national average of 16% decrease of climate suitability for coffee in Colombia by 2050, mostly for areas below 1800 m a.s.l. During the twentieth century, Colombia's agrarian reform was the best example of failed top-down approaches to promote self-reliant grassroots organizations in agriculture (Gutiérrez 2014), which might be more likely to adapt to climate change. Vulnerabilities in Colombia are structural and need to be addressed through transformative adaptation (Feola 2013). First, rural populations in Colombia, and especially resource-limited farmers, depend on natural resources and are particularly sensitive to environmental stress. Second, the level of human security is low and tied to deeply rooted socioeconomic and political inequality. Third, the institutional setting is a mix of formal and informal institutions that facilitate or impede building adaptive capacity of farmers (Eakin and Lemos 2010; Feola 2013). 
For the successful adaptation of Colombian agriculture to agricultural risks from climate, the government should set up enabling policies and release funds for research and development to subsectors (Ramirez-Villegas and Khoury 2013). Adaptation options should be developed based on underlying vulnerability analysis and participatory processes with farmers and experts (Feola 2013). The interaction between grassroots organizations (bottom-up) and institutions (top-down) is crucial for transformative adaptation (Bizikova et al. 2012). The development of adaptation options is hampered by the fact that experts often have an incomplete view of farmers' perceptions which might have vast implications for effective risk communication, e.g., regarding climate change, and during the participatory design process of adaptation strategies (Thomas et al. 2015). These findings imply that an improved, in-depth understanding of the differences in risk perception between farmers and experts is necessary for the design of more effective and successful policies to promote adaptation initiatives. To gauge the prevailing perception of various groups, mental models (MMs) have been successfully employed in the past, for example, to elicit farmers' perceptions and underlying views on livelihood risks (Schoell and Binder 2009; Binder and Schöll 2010; Jones et al. 2011). MMs provide insight into perceptions and priority setting of individuals (Morgan et al. 2002) and can help to understand risk perceptions and to inform the design of effective risk communication strategies. In risk analysis, MMs have been used to identify how individuals construct representations of risk (Atman et al. 1994; Schöll and Binder 2010; Binder and Schöll 2010). Based on the mental model approach (MMA) (Morgan et al. 2002), Binder and Schöll (2010) developed the structured mental model approach (SMMA). The SMMA combines the so-called sustainable livelihood framework (SLF) (Scoones 1998)—a framework that shows how sustainable livelihoods are achieved through access to resources of livelihood capitals with the MMA (Morgan et al. 2002). The SMMA can help to understand how farmers perceive and balance livelihood risks for their agricultural practices (Schoell and Binder 2009; Binder and Schöll 2010). This study aims (i) to understand how climate risks are integrated in the context of other risks in the farmers' perception and decision-making process for taking action, (ii) to identify differences between farmers' and experts' mental models regarding farmers' agricultural risk perception, and (iii) to elaborate on possible consequences for policies addressing farmers' livelihood risks and their agricultural adaptation strategies in the face of climate change. The paper is structured as follows: first, we present material and methods on how we analyze climate risks in the context of farmers' livelihood risks and analyze differences in perception between farmers' and experts' MMs. Second, we present results from applying our approach to the Cauca Department in Colombia (South America) as an exemplary study for a region for small-scale farmers in a developing country. Finally, we discuss our findings concerning other literature and draw our conclusions. The Cauca Department is located in the southwestern part of Colombia with a size of approximately 30,000 km2. Cauca is composed of a lowland coastal region, two Cordilleras of the Andes, and an inner Andean valley. Agricultural land is concentrated in the inner Andean valley. 
According to the latest agricultural census (DANE 2014), 83% of the farmers in Cauca have a low educational achievement (elementary school only), 22% are illiterate, and 52% live in poverty according to Colombia's Multidimensional Poverty Index (Salazar et al. 2011). The main stressors for agriculture and farmers alongside climate change are trade liberalization and violent conflicts (Feola et al. 2015). Colombia has one of the longest ongoing civil conflicts and one of the highest rates of internal displacement, estimated to be 7% of the country's population and 29% of the rural population (Ibáñez and Vélez 2008). Cauca is one of the regions in Colombia with a high rate of violence from armed conflicts (Holmes et al. 2006). Especially for small farm households, weak institutional support and absence of the state in rural areas have led to unequal land distribution and lacking technical assistance as well as financial services for agricultural transformation (Pérez Correa and Pérez Martínez 2002). Due to Cauca's proximity to the Pacific Ocean, the region is subject to inter-annual climate variability mainly driven by the El Niño Southern Oscillation (ENSO) (Poveda et al. 2001), a feature that has great influence on agricultural productivity and, in consequence, farmers' livelihood. A study by the CGIAR Research Program on Climate Change, Agriculture and Food Security (CCAFS) shows that farmers in the study area are mostly affected by more frequent droughts, storm and hail events, more erratic rains, and landslides as a consequence of heavy rains (Garlick 2016). Even if uncertainty in future scenarios of extreme events is still high, changes in inter-annual climate variability are of high relevance for farmers; there is agreement that more intense and frequent extreme events are likely to be observed in the future (IPCC 2014b). The Cauca region is particularly relevant for these types of analyses as (i) the region has a high potential of being affected by climate change, (ii) interventions for rural development by the government have been weak in the past, and (iii) because of the national and international efforts to implement the peace process, Cauca has caught attention for implementing development interventions. Many of these interventions could benefit from an in-depth understanding of farmers' perceptions regarding the climate and nonclimate risks affecting their livelihoods. Exemplary for Cauca, we selected a geographical domain of 10 km2 with altitudes between 1600 and 1800 m a.s.l. within the boundaries of the municipality Popayan. We conducted the interviews with experts and farmers in five rural villages and selected randomly 11 to 12 farmers each village (see details on sampling design in Chapter 2.4). The farm size of interviewees was between 1 and 4 ha, half of them (45%) possessed legal land titles, and 41% of farmers have started the legalization process recently. The average age of interviewees was 47 years old, 48% of them were women farmer, and the average household size was five people. Overall, 74% of farmers depend on coffee (Coffea arabica) as their main agricultural livelihood besides other crops and some livestock to complement income and for self-consumption. Other crops and livestock that are managed in the farming systems are cassava (Manihot esculenta), dry beans (Phaseolus vulgaris), maize (Zea mays), banana (Musa acuminata), cattle, and poultry. 
As the second most important crop, 19% of interviewed households depend on sugarcane (Saccharum officinarum) and the derived product panela, which is unrefined sugar in compact loaves of a rectangular shape. Most of the farmers' income comes from on-farm agricultural activities and from off-farm day labor in the agricultural sector (harvesting coffee on other farms). Generally, there are few job opportunities in the study area.
Assessment of climate risks
Before we started analyzing risk perceptions, we conducted an assessment of climate risks and impacts on main crops grown in the region and reviewed existing literature on the vulnerability of farmers in the study area. First, we compared anomalies of precipitation, maximum temperature, and minimum temperature in the study area with records of ENSO events. We used data from a local weather station of the Instituto de Hidrologia, Meteorologia y Estudios Ambientales de Colombia (IDEAM) and data for the Oceanic Niño Index (ONI) from the National Oceanic and Atmospheric Administration (NOAA) (NOAA 2014). Second, we used a simple climate envelope model to analyze the current and future climate suitability of six crops in the study area. Finally, we reviewed the existing literature on climate change impact assessment for Colombia. Detailed results of the climate risk assessment in the study area are presented in Online Resource 1.
Analyzing mental models to understand perceptions
Figure 1 presents the conceptual approach of the study. Farmers' perceptions regarding climate risks are shaped by their knowledge about the causes of climate change, their beliefs, social norms, and values, as well as through their experience with climate-related information and past climate-related events. However, farmers' decision-making is not only shaped by climate risks; other agricultural production risks are equally or even more important for farmers. Farmers consider the complete mental model of risks when envisioning goals concerning their livelihood strategy and make appropriate decisions about investments and adaptations of the agricultural production system. In applying our approach, we captured experts' external views of farmers' perceptions and compared them to the farmers' internal views.
Interviews with experts and farmers
A qualitative semi-structured interview study was conducted between June and September 2014 to examine perceptions of experts and farmers about farmers' livelihood risks and farmers' barriers for adaptation to cope with risks they face in agricultural production. In a first step, we conducted open interviews with 13 experts.
In order to obtain a holistic view of experts' perceptions, we included regional, national, and international experts from different fields of the analyzed agro-environmental system, namely four agronomists, three economists, one environmental lawyer, one public government administrator, one nutritionist, one climate change scientist, one ecologist, and one veterinarian. All experts have been regularly working with farmers in the study region during the last 5 to 10 yrs and have still been working with them at the time of the study. Following the expert interviews, we conducted 58 semi-structured interviews with farmers from five different villages in the municipality of Popayan, performing between 10 and 12 farmer interviews from different households and for each village. The total population of farmers of the five villages was 499 at the time of the interviews. We included farmers aged 20 to 60, and we designed the sample to ensure an equal representation of women and men. Morgan et al. (2002) judge a small sample for interviews within a population group that has relatively similar beliefs as reasonable. Schoell and Binder (2009) found for the case of small farmers in Boyacá, Colombia, that after 5–10 interviews, no more new concepts emerged (Binder et al. 2015). To avoid interruption from notes taken by the interviewer and to keep the natural flow of conversation, we recorded all interviews with the consent of the participants. Subsequently, we transcribed the records of the interviews for the analysis. The used guidebook for expert interviews can be found in Online Resource 2 and the guidebook for farmer interviews in Online Resource 3. First, we assessed the experts' views on the farmers' concerns, risks, barriers for taking action, and enablers to take action by asking the following questions: What are the farmers' main livelihood concerns? Which risks do farmers face in agricultural production? Which are farmers' barriers to cope with these risks? What motivates (enablers) farmers to cope and adapt? In the expert interviews, we received answers and explanations to the four guiding questions about farmers' concerns, risks, barriers for taking action, and enablers to take action when facing risks in agricultural production. We noted all answers of experts for each question on small cards. Answers from all experts were pooled after finishing all the interviews; we got 16 concerns, 10 risks, 13 barriers for taking action, and eight enablers to take action. Based on the pooled elements, we used an online survey tool to ask the same group of experts to rank all compiled elements according to the importance of the elements for farmers. The highest ranked elements by experts were then selected to start the farmer interviews. Second, we carried out the farmers' interviews. After explaining the overall purpose of the study briefly as part of informed consent with farmers, we visualized the elements of the experts through drawings we created for each question and then asked farmers to rank them according to their priorities. After piloting the interviews with farmers, we decided to use only the six highest ranked elements by experts to keep the ranking exercise for farmers simple. In addition, we asked farmers at the end of each ranking if they would consider other elements to be more important for them that the ones we used for the ranking (see Online Resource 3). We did not mention climate change during the interviews for a specific purpose. 
The intention was for farmers to rank the card elements without being biased by knowledge of the interview's purpose, namely to understand how they perceive climate risks in relation to other livelihood risks. After finishing both interview series, we analyzed the differences in perception between experts and farmers. To aggregate the individual rankings, we calculated a weighted average based on the ranking of each element for the four questions. The overall ranking of experts and farmers was calculated separately as follows: $$ f_{\mathrm{ranking}}=\frac{\sum_{i=0}^{n}\left(x_i\times w_i\right)}{n} $$ where w is the weight, x the response count of an answer choice for each question, and n the total number of answer elements. In our case of six elements per question, we calculated the average ranking using weights starting at 6 for the highest-ranked element and decreasing to 1 for the lowest-ranked element. We compared the average experts' rankings to farmers' rankings stratified by gender and age group, and then applied the hierarchical clustering approach (Ward 1963) to the farmers' rankings to obtain groups of farmers with similar choices. The hierarchical clustering approach by Ward (1963) is a widely used data analysis approach for similarity grouping that determines distinct subgroups with similar characteristics (Vigneau and Qannari 2003). After obtaining groups of farmers from the clusters, we described them based on the first- and second-ranked answers to each question and on demographic variables collected during the surveys.
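The aggregation and clustering steps described above can be illustrated with a short computational sketch. The snippet below is a minimal, hypothetical example rather than the authors' actual analysis code: the response counts and farmer rankings are invented, the weighted-average formula is applied with weights 6 to 1 as in the text, and Ward's hierarchical clustering is taken from SciPy rather than the software the authors used.

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

# Hypothetical response counts for one question: rows = 6 answer elements,
# columns = rank positions 1 (highest) .. 6 (lowest).
response_counts = np.array([
    [20,  6,  5,  4,  2,  1],   # element A
    [10, 15,  8,  3,  1,  1],   # element B
    [ 4,  8, 12,  9,  3,  2],   # element C
    [ 2,  5,  7, 12,  8,  4],   # element D
    [ 1,  3,  4,  6, 15,  9],   # element E
    [ 1,  1,  2,  4,  9, 21],   # element F
])

weights = np.arange(6, 0, -1)          # weight 6 for rank 1 ... weight 1 for rank 6
n_elements = response_counts.shape[0]

# Weighted average ranking per element, following the formula in the text:
# f_ranking = sum_i(x_i * w_i) / n
f_ranking = response_counts.dot(weights) / n_elements
print("Weighted average rankings:", np.round(f_ranking, 1))

# Ward hierarchical clustering of individual farmers' rankings.
# Each row is one farmer's ranks (1-6) assigned to the six elements (hypothetical data).
rng = np.random.default_rng(0)
farmer_rankings = np.array([rng.permutation(np.arange(1, 7)) for _ in range(58)])

Z = linkage(farmer_rankings, method="ward")         # Ward (1963) linkage
clusters = fcluster(Z, t=4, criterion="maxclust")   # cut the tree into 4 groups
print("Cluster sizes:", np.bincount(clusters)[1:])
```

Cutting the dendrogram into four groups in this sketch mirrors the four farmer typologies reported in the Results.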
Climate change risks in the study area

Figure 2 shows that inter-annual rainfall variability is high; long-term weather records since 1980 show particularly high variability between October and February. Inter-annual climate variations in the study area are mainly driven by ENSO. The consequences of ENSO for farmers and agricultural production are prolonged droughts (El Niño) or intense rainfall over more extended periods (La Niña). The assessment of the six most relevant crops in the study area revealed that variation in crop exposure to climate variability in Cauca is high (see Online Resource 1). Farm households in the study area grow coffee, sugarcane, maize, dry beans, banana, and cassava. While banana, sugarcane, and cassava can better cope with long-term climate change scenarios, dry beans and coffee are more likely to be affected by increasing mean annual temperatures. Production of coffee and dry beans represents an important livelihood for farmers in the study region but will likely be impacted by climate change in the future. See Online Resource 1 for more details on climate change risks in the study area.

(Fig. 2) a Inter-annual precipitation variability calculated from weather records of a station (Apto G L Valencia, elevation 1749 m a.s.l.) in the study area; b ONI and precipitation anomalies showing the frequent influence of ENSO episodes

Farmers' rankings and differences to experts' rankings

We found that experts and farmers perceived farmers' livelihood concerns and enablers for adaptation to agricultural production risks similarly, but perceived risks and barriers for adaptation differently (see Fig. 3). Farmers also agreed that the selected answers were the most relevant ones for each question; only a few farmers mentioned other elements. Beyond these, the elements most frequently mentioned by farmers were concerns about health (five times) and access to tap water (three times).

(Fig. 3) Differences in experts' (solid thick line) and farmers' (dashed thick line) rankings of farmers' a worries, b risks, c barriers to adaptation, and d enablers for adaptation; rankings of male farmers (dashed narrow line) and female farmers (dashed-dotted narrow line)

Older farmers were more worried about climate change than younger farmers but ranked production failure low as a risk (see Fig. 4). Interestingly, older farmers saw insecure transport as a major risk and production failure as a lower one, whereas for younger farmers the opposite was the case.

(Fig. 4) Differences in experts' (solid thick line) and farmers' (dashed thick line) rankings of farmers' a worries, b risks, c barriers to adaptation, and d enablers for adaptation; rankings of farmers below age 50 (dotted narrow line) and above age 50 (dashed-double-dotted narrow line)

Regarding farmers' concerns (Fig. 3a), we found two issues experts and farmers agreed upon: poverty is a chief concern in this region (ranked first by experts and second by farmers), and neither climate change nor security problems are perceived to be relevant in the study area. The key differences in perceived concerns were related to government policies, access to credit, and market opportunities. Farmers were highly worried about government policies (rank 1). They argued: "The government in the capital, Bogota, is too far away and does not take into account the context of our region when making new laws" (farmer's interview, translated from Spanish, Colombia, October 2014). Experts ranked government policies lower among concerns (rank 3), but their explanations agreed with the farmers': "The government is focusing on international trade agreements and is supporting medium-sized and large farmers, they are not investing in small-scale farmers' production" (expert's interview, translated from Spanish, Colombia, August 2014). Both male and female farmers were highly worried about their access to credit to be able to pay for labor and to purchase inputs for crop production (rank 3). Experts, on the other hand, did not perceive that farmers needed to be worried about access to credit (rank 6). In contrast, experts believed that farmers were worried about market opportunities, a perception shared more often by women than by men (see Fig. 3a). The main differences in the rankings between experts and farmers were related to risks (Fig. 3b). For farmers, the highest perceived risks were failure in crop production and social vulnerability (lack of access to health and educational services). Experts, in contrast, perceived insecurity (theft of products from plots or during transportation) and unreliable weather to be the highest risks for farmers. From a gender perspective, the results showed that women's and men's rankings deviated from the experts' on only a few themes. Whereas women agreed with experts that insufficient planning is a major risk (even ranking it higher than experts did), men agreed with experts that insecurity is a high risk (for women, this was among the lowest risks). The risk rankings clearly showed that farmers perceive the symptoms of social inequality (social vulnerability ranked first) as well as agricultural production and market risks such as unstable prices or production failure. Farmers ranked insufficient planning lower, and unreliable weather much lower, than experts did. These results show that experts ranked climate-related risks higher than farmers did.
Experts expected more planning activity from farmers for adaptation to climate risks; farmers, in contrast, believed that they were already doing as much as they could. Experts and farmers also ranked barriers to adaptation differently (Fig. 3c). Experts perceived both external factors, such as national policy, and internal factors, such as farmers' adaptive capacity, to be the main barriers to deciding to take action and adapting to change. Farmers, in contrast, ranked the lack of information about weather and climate, especially seasonal weather forecasts, as their main barrier to adapting to change. Farmers above the age of 50 ranked the lack of collective action highest among the barriers to adaptation. The ranking of barriers showed that younger farmers in particular felt financially unable to adapt to production risks from climate change; they ranked "adaptation is too expensive" high (Fig. 4c). The fact that they ranked adaptive capacity low as a barrier suggests that they felt prepared to adapt to change but lacked access to reliable weather information for planning (ranked high as a barrier). The experts, by contrast, saw the need for more adaptation activity (high ranking of adaptive capacity as a barrier) and regarded rigid national policies as impeding farmers' adaptation. Experts did not share farmers' perception of the relevance of improved weather information. The agreement between experts and farmers was mostly on farmers' motivations (enablers of adaptation), which were family interests, increased quality of life, and traditional attachment to land (Fig. 3d). Regarding the motivations, one expert mentioned during the interview that: "Farmers in Cauca do have a strong connection to their roots. Territories and family unity are very important" (expert's interview, translated from Spanish, Colombia, August 2014). Within these motivations, however, men and women placed different emphases. While women ranked food security and traditional attachment to land higher than men, men ranked economic interests and improved quality of life higher than women.

Farmer typologies of risk perception

The cluster analysis of farmers' first-ranked answers to each question yielded four typologies of farmers based on their perception of concerns, risks, barriers to adaptation, and enablers for adaptation:

Cluster 1—Higher-educated, women-dominated farmers attributing risks to external factors: farmers belonging to this group are worried about ending up in poverty and fear that they will not be supported by the government. They consider insufficient planning of their farming activities as well as a lack of access to social services (social vulnerability) as key risks for their future. In the view of this group, farmers are dependent on weather forecasts, which they consider necessary for adapting to risks in agricultural production; they perceive the lack of cooperation as a community as a barrier to taking action. Their adaptive capacity could potentially be triggered if they perceived that implementing adaptation measures would increase the quality of life for them and their families. The group of farmers in cluster 1 consists of 62.5% women and 37.5% men with an average age of 44 years; 50% of the farmers reached the primary education level only, and 38% have obtained a legal land title (50% have started a legal process). The average farm size is 4 ha.
Cluster 2—Lower-educated, production-focused farmers with land titles: farmers belonging to this group are worried about a lack of access to credit or money to adapt agricultural production to change, and they are concerned about government policies for rural development. These farmers perceive production failure due to uncontrollable factors (pests and diseases, climate events) and volatile selling prices for their products as the highest risks. The main barrier to adapting to change is a combination of low adaptive capacity and missing support from institutions. Similar to the first group, production-focused farmers are motivated to adapt to change if it would increase their own and their families' quality of life. The group of farmers in cluster 2 consists of 43% women and 57% men with an average age of 44 years; 64% of farmers reached the primary education level only, and 57% have obtained a legal land title (29% have started a legal process). The average farm size is 2 ha.

Cluster 3—Vulnerable, less-educated farmers with lower access to land: farmers belonging to this group are worried about unstable markets for selling their products and the associated risk of poverty. Compared to the others, their perceived risk is based not only on production but also on insecurity issues on their farms and during the transport of their products to market. The main barriers for this group of farmers are the high costs of implementing adaptation measures to cope with risks and the missing support from institutions. Members of this group share a motivation for adapting to change rooted in their traditional attachment to their land and region. They want to improve the quality of life for themselves and their families. The group of farmers in cluster 3 consists of 47% women and 53% men with an average age of 46 years; 67% of farmers reached the primary education level only, and 27% have obtained a legal land title (47% have started a legal process). The average farm size is 2 ha.

Cluster 4—Risk-aware, male-dominated, elderly farmers with land titles: farmers of this group are worried about the government, risks from climate change, and the overall security in their region. The risks perceived as highest by these farmers are social vulnerability, such as lacking access to social services, and the risks associated with regional insecurity. The main barriers to adaptation are the lack of weather forecasts and a low adaptive capacity on their farms. Like cluster 3 farmers, they feel traditionally attached to their land and also believe that their land is highly suitable for agricultural activities. The group of farmers in cluster 4 consists of 38% women and 62% men with an average age of 57 years; 69% of farmers reached the primary education level only, and 62% have obtained a legal land title (38% have started a legal process). The average farm size is 3 ha.

Detailed results of all comparisons, gender differences, and the hierarchical clustering of farmers' rankings are presented in Online Resource 4. This paper presented an integrative approach to understanding how climate risks are integrated into the context of other risks in the farmers' decision-making process. We compared the experts' view with the farmers' view and differentiated between concerns, risks, barriers to adaptation, and enablers of adaptation.
Two explanations in the literature stress why this type of integrated analysis of farmers' risks is more suitable than an isolated analysis of climate change risks: (i) farming systems of smallholders in the developing world are complex systems with location-specific characteristics, integrating agricultural and nonagricultural livelihood strategies, which are vulnerable to a range of climate-related and other stressors (Morton 2007; Feola et al. 2015), and (ii) farmers' long-term memory of climate events tends to fade significantly after a few years; therefore, the importance of climate risks in farmers' perceptions may likewise decline quickly after disruptive climate events (Brondizio and Moran 2008). In the case of Cauca, the interviews were conducted in 2014, a year with ENSO-neutral conditions, as were the two previous years. Farmers ranked climate risks low among their perceived risks in agricultural production, a perception that might have been different had the interviews taken place in a year affected by ENSO conditions (e.g., with a prolonged drought and high temperatures).

Reasons for potential maladaptation

Our findings showed that in Cauca there are differences between experts' and farmers' perceptions of farmers' concerns, risks, and barriers and enablers for adaptation, and that these differences could lead to miscommunication and, consequently, to maladaptation to climate change. This is partly explained by the finding that experts agreed with farmers about farmers' main concerns but disagreed about risks and barriers for adaptation, suggesting that a shared view of a problem does not necessarily lead to similar propositions for action. Our study contributes to a growing literature on how perception influences farmers' decision-making for adaptation and adaptation behavior. We especially analyze how climate risks relate to and interact with other risks and concerns in the farmers' decision-making process. This is important because smallholder farmers in countries like Colombia are subject to multiple interdependent stressors and deeply rooted social vulnerability. This interdependency requires a systemic perspective on farmers' risks. Other studies compare meteorological data with people's memories of historical climate events (Boissiere et al. 2013) or attempt to link farmers' perceptions of climate change and related risks to adaptive behavior (Jacobi et al. 2013; Quiroga et al. 2014; Barrucand et al. 2016). Our integrated view of farmers' perceptions and decision-making might better capture the multitude of stressors acting on farmers, and it showed a lower perceived relevance of climate risks than other studies focusing on farmers' perception of climate risks. Especially in countries like Colombia, where multiple stressors and rooted causes of social vulnerability act simultaneously on farmers' decision-making, the adaptive capacity to respond to climate risks is constrained (Reid and Vogel 2006; Feola et al. 2015). Our findings show that farmers see the symptoms of social inequality but not their low adaptive capacity to cope with risks from climate change. The farmers' low ranking of insufficient planning and unpredictable weather as risks likewise underlines their limited perception of climate risks, a view not shared by the experts. The experts, in contrast, looked first at climate risks and transport insecurity, and did not perceive production failure, unstable prices, or the roots of social vulnerability as high risks.
What can we learn about climate risk communication?

While experts focus on communicating climate change risks, in cases such as the one we found in Cauca, farmers do not see such information as practical, since their highest perceived risks are the poverty trap (social vulnerability) and the sum of risks related to agricultural production, of which climate risks are merely a part. Reid and Vogel (2006) pointed to this fact by stating that farmers sometimes associate crop losses with climate events which, however, are not always seen as extraordinary and which farmers are accustomed to coping with. This is also supported by our findings. Farmers in Cauca do not rank climate risks high among their perceived risks, but they rank the lack of weather forecasts and weak institutional services as the most important barriers to adaptation to agricultural production risks. Differences between experts' and farmers' views on weather forecasts, seasonal forecasts, and climate change projections of long-term changes and inter-annual climate variability are relevant issues in climate risk communication (Weber 2010). In the case of Cauca, experts do not perceive that there is a lack of climate information for farmers. Thus, we recommend that experts provide context-based, climate-related information in such a way that it becomes tangible and usable for farmers in their everyday and long-term decision-making, for example daily and seasonal weather information combined with agro-advisory services on varieties, planting dates, and water management.

A need for a more holistic perspective on adaptation

Our findings show that farmers in Colombia do not perceive climate risks separately; these risks are embedded in their mental models of agricultural livelihood risks. Other scholars have shown that in Colombia, climate change, trade liberalization, and violent conflicts act simultaneously on farmers' livelihoods, but policies address them separately (Feola et al. 2015). If the implementation of policy actions is not coordinated, they might hinder each other or lead to failure. Understanding differences between experts' and farmers' mental models of risks is the first step toward better designing adequate policy actions for adaptation. Additionally, our results show that farmers in Cauca have little trust in national policies, as mentioned by some experts as well as by farmers during the interviews; overall, farmers in Cauca are concerned about national policies. Llorente (2015) asserts that this is a result of the violent conflict which, in rural areas like Cauca, has led to profound mistrust in the state. Feola et al. (2015) argue that institutional integration between different levels of government has historically been difficult in Colombia. Agricultural policies are often not based on the realities of smallholders. However, before designing adaptation strategies for farmers, the deeply rooted social vulnerability and inequality must be addressed and brought into the focus of experts. Ideally, this should be done together with farmers as a social learning process. "Adaptation is a dynamic social process" (Adger 2003, p. 387) that includes many different actors. We agree with Vogel and Henstra (2015) that local actors should be involved in the development of adaptation plans instead of operationalizing top-down adaptation measures.
We suggest starting this process by developing a Local Adaptation Plan of Action (LAPA) in Cauca, aimed at initiating a bottom-up process of adaptation planning that takes into account the community and individual levels (Jones and Boyd 2011; Regmi et al. 2014). The uptake of adaptation strategies depends on barriers and on the adaptive capacity of both the community and the individual farmer. Effective adaptation at the community level would require a mix of top-down structural measures, often provided by institutions, including national adaptation plans, financial services, and economic incentives, and nonstructural measures developed by the community itself as a collective action (Girard et al. 2015). Finally, transformative adaptation, rather than targeting climate change with individual technological solutions, would be a better approach for Colombian smallholders because it focuses on the roots of vulnerability rather than only on the adaptation of production systems (Feola 2013). Such an approach would give farmers a more central role in developing adaptation options together with experts and would stimulate a social learning process in which science engages with lay knowledge and contributes its transformative role to society (Feola 2013; Mauser et al. 2013). Climate change in the context of Latin America is characterized by complex lay and expert knowledge systems, social coping mechanisms, and ancient resilience mechanisms for adapting to perturbations (Sietz and Feola 2016). Several scholars support the need for an integrated approach that addresses the critical dynamics of vulnerability and the constraints on adaptation to climate change in a way that is embedded in cultural and socioeconomic realities (De los Ríos Cardona and Almeida 2011; Ulloa 2011). Other authors call for identifying the causes of vulnerability and for transformative solutions to cope with risks from climate change (Ribot 2014). Nevertheless, the state and its institutions remain important for providing a policy framework for adaptation, intervening when resources are required, and enabling needed policies (Ramirez-Villegas and Khoury 2013). Finally, cooperatives could play a crucial role and become vehicles for rural development, in contrast to previous top-down approaches that have failed in Colombia (Gutiérrez 2014). For further research, we recommend studying the dynamics of farmers' complex livelihood systems, analyzing farmers' actor networks, and identifying adaptation pathways for farmers to cope with climate change in Cauca, Colombia. Since the 2015 Paris Agreement (COP 21), the political commitment to take action on climate change has increased. Even in developing countries, policymakers have started working more specifically towards policies for achieving climate resilience, especially in the agricultural sector. Agriculture, which both contributes to and is affected by climate change, needs a transformation to become more sustainable and climate resilient by improving farmers' livelihood systems and farm productivity while reducing emissions from agriculture. In particular, transforming smallholder agriculture in developing countries such as Colombia requires greater attention to human livelihoods and related concerns, risks, barriers to decision-making, and the adoption of adaptation strategies. This study applied a mental model approach to better understand climate risks in the context of farmers' decision-making.
It showed that climate risks need to be seen in the overall context of farmers' livelihood risks. Climate change adaptation strategies and policies can be more successful if they (i) address specific climate risks, (ii) simultaneously address other risks of major importance for farmers, and (iii) target more climate risk–sensitive groups of farmers. Our research demonstrates that understanding differences in experts' and farmers' perception of farmers' livelihood risks could avoid maladaptation and improve climate risk communication from experts to farmers. Therefore, we recommend to study the dynamics in the farmers' complex livelihood system, to analyze the actor's network of farmers, and to identify adaptation pathways for farmers to cope with climate change in Cauca, Colombia. Adger WN (2003) Social capital, collective action, and adaptation to climate change. Econ Geogr 79:387–404 Alley RB, Marotzke J, Nordhaus WD et al (2003) Abrupt climate change. Science 299:2005–2010. https://doi.org/10.1126/science.1081056 Atman CJ, Bostrom A, Fischhoff B et al (1994) Designing risk communications: completing and correcting mental models of hazardous processes, part I. Risk Anal 14:779–788 Baca M, Läderach P, Haggar J et al (2014) An integrated framework for assessing vulnerability to climate change and developing adaptation strategies for coffee growing families in Mesoamerica. PLoS ONE 9:11. https://doi.org/10.1371/journal.pone.0088463 Barnes AP, Toma L (2011) A typology of dairy farmer perceptions towards climate change. Clim Chang 112:507–522. https://doi.org/10.1007/s10584-011-0226-2 Barrucand MG, Giraldo Vieira C, Canziani PO (2016) Climate change and its impacts: perception and adaptation in rural areas of Manizales, Colombia. Clim Dev 5529:1–13. https://doi.org/10.1080/17565529.2016.1167661 Binder CR, Schöll R (2010) Structured mental model approach for analyzing perception of risks to rural livelihood in developing countries. Sustainability 2:1–29. https://doi.org/10.3390/su2010001 Binder CR, Schoell R, Popp M (2015) The structured mental model approach. In: Ruth M (ed) Handbook of research methods and applications in environmental studies, 1st edn. Edward Elgar Publishing Ltd, Cheltenham, pp 122–147 Bizikova L, Crawford E, Nijnik M, Swart R (2012) Climate change adaptation planning in agriculture: processes, experiences and lessons learned from early adapters. Mitig Adapt Strateg Glob Chang 19:411–430. https://doi.org/10.1007/s11027-012-9440-0 Boissiere M, Locatelli B, Sheil D, et al (2013) Local perceptions of climate variability and change in tropical forests of Papua, Indonesia. Ecol Soc 18. doi: https://doi.org/10.5751/ES-05822-180413 Brondizio ES, Moran EF (2008) Human dimensions of climate change: the vulnerability of small farmers in the Amazon. Philos Trans R Soc Lond Ser B Biol Sci 363:1803–1809. https://doi.org/10.1098/rstb.2007.0025 Carlton JS, Mase AS, Knutson CL et al (2016) The effects of extreme drought on climate change beliefs, risk perceptions, and adaptation attitudes. Clim Chang:211–226. https://doi.org/10.1007/s10584-015-1561-5 DANE (2014) Censo Nacional Agropecuario 2014. http://www.dane.gov.co/index.php/estadisticas-por-tema/agropecuario/censo-nacional-agropecuario-2014. Accessed 1 Jan 2016 De los Ríos Cardona JC, Almeida J (2011) Percepciones y formas de adaptación a riesgos socioambientales: análisis en contextos locales en la región del páramo de Sonsón, Antioquia, Colombia. In: Ulloa A (ed) Perspectivas culturales del clima, Centro Edi. 
Universidad Nacional de Colombia, Bogotá, pp 451–473 Ding D, Maibach EW, Zhao X et al (2011) Support for climate policy and societal action are linked to perceptions about scientific agreement. Nat Clim Chang 1:462–466. https://doi.org/10.1038/nclimate1295 Eakin H, Lemos MC (2006) Adaptation and the state: Latin America and the challenge of capacity-building under globalization. Glob Environ Chang 16:7–18. https://doi.org/10.1016/j.gloenvcha.2005.10.004 Eakin H, Lemos MC (2010) Institutions and change: the challenge of building adaptive capacity in Latin America. Glob Environ Chang 20:1–3. https://doi.org/10.1016/j.gloenvcha.2009.08.002 Eitzinger A, Läderach P, Bunn C et al (2014) Implications of a changing climate on food security and smallholders' livelihoods in Bogotá, Colombia. Mitig Adapt Strateg Glob Chang 19:161–176. https://doi.org/10.1007/s11027-012-9432-0 Feola G (2013) What (science for) adaptation to climate change in Colombian agriculture? A commentary on "A way forward on adaptation to climate change in Colombian agriculture: perspectives towards 2050" by J. Ramirez-Villegas, M. Salazar, A. Jarvis, C. E. Navarro-Valc. Clim Chang 119:565–574. https://doi.org/10.1007/s10584-013-0731-6 Feola G, Agudelo Vanegas LA, Contesse Bamón BP (2015) Colombian agriculture under multiple exposures: a review and research agenda. Clim Dev 7:37–41. https://doi.org/10.1080/17565529.2014.934776 Finnis J, Sarkar A, Stoddart MCJ (2015) Bridging science and community knowledge? The complicating role of natural variability in perceptions of climate change. Glob Environ Chang 32:1–10. https://doi.org/10.1016/j.gloenvcha.2014.12.011 Garlick C (2016) CCAFS household baseline study, Latin America and South East Asia (2014-2015) Girard C, Pulido-Velazquez M, Rinaudo JD et al (2015) Integrating top-down and bottom-up approaches to design global change adaptation at the river basin scale. Glob Environ Chang 34:132–146. https://doi.org/10.1016/j.gloenvcha.2015.07.002 Gutiérrez JD (2014) Smallholders' agricultural cooperatives in Colombia: vehicles for rural development? Desarro Soc:219–271. https://doi.org/10.13043/DYS.73.7 Haden VR, Niles MT, Lubell M et al (2012) Global and local concerns: what attitudes and beliefs motivate farmers to mitigate and adapt to climate change? PLoS One 7. https://doi.org/10.1371/journal.pone.0052882 Holmes JS, Gutiérres de Piñeres SA, Curtin KM (2006) Drugs, violence, and development in Colombia: a department-level analysis. Lat Am Polit Soc 48:157–184. https://doi.org/10.1111/j.1548-2456.2006.tb00359.x Ibáñez AM, Vélez CE (2008) Civil conflict and forced migration: the micro determinants and welfare losses of displacement in Colombia. World Dev 36:659–676. https://doi.org/10.1016/j.worlddev.2007.04.013 IPCC (2014a) Climate Change 2014: Synthesis Report. Contribution of Working Groups I, II and III to the Fifth Assessment Report of the Intergovernmental Panel on Climate Change [Core Writing Team, R.K. Pachauri and L.A. Meyer (eds.)]. IPCC, Geneva, pp151 IPCC (2014b) Climate change 2014: Impacts, Adaptation, and Vulnerability. Part A: Global and Sectoral Aspects. Contribution of Working Group II to the Fifth Assessment Report of the Intergovernmental Panel on Climate Change [Field, C.B., V.R. Barros, D.J. Dokken, K.J Mach, M.D. Mastrandrea, T.E. Bilir, M. Chatterjee, K.L. Ebi, Y.O. Estrada, R.C. Genova, B. Girma, E.S. Kissel, A.N. Levy, S. MacCracken, P.R. Mastrandrea, and L.L. White (eds.)]. 
Cambridge University Press, Cambridge, pp1132 Jacobi J, Schneider M, Bottazzi P et al (2013) Agroecosystem resilience and farmers' perceptions of climate change impacts on cocoa farms in Alto Beni. Bolivia Renew Agric Food Syst:1–14. https://doi.org/10.1017/S174217051300029X Jones L, Boyd E (2011) Exploring social barriers to adaptation: insights from Western Nepal. Glob Environ Chang 21:1262–1274. https://doi.org/10.1016/j.gloenvcha.2011.06.002 Jones NA, Ross H, Lynam T et al (2011) Mental model an interdisciplinary synthesis of theory and methods. Ecol Soc 16:–46 Lee TM, Markowitz EM, Howe PD et al (2015) Predictors of public climate change awareness and risk perception around the world. Nat Clim Chang. https://doi.org/10.1038/nclimate2728 Llorente MV (2015) From war to peace: security and the stabilization of Colombia. Stab: Int J Sec Dev 4:1–5 Mauser W, Klepper G, Rice M et al (2013) Transdisciplinary global change research: the co-creation of knowledge for sustainability. Curr Opin Environ Sustain 5:420–431. https://doi.org/10.1016/j.cosust.2013.07.001 Morgan MG, Fischhoff B, Bostrom A, Atman CJ (2002) Risk communication. Cambridge University Press, New York Morton JF (2007) The impact of climate change on smallholder and subsistence agriculture. Proc Natl Acad Sci U S A 104:19680–19685. https://doi.org/10.1073/pnas.0701855104 Niles MT, Brown M, Dynes R (2015) Farmer's intended and actual adoption of climate change mitigation and adaptation strategies. Clim Chang. https://doi.org/10.1007/s10584-015-1558-0 NOAA (2014) El Niño - Southern Oscillation (ENSO): historical episodes. http://origin.cpc.ncep.noaa.gov/products/precip/CWlink/MJO/enso.shtml. Accessed 31 Aug 2014 Ovalle-Rivera O, Läderach P, Bunn C et al (2015) Projected Shifts in Coffea arabica Suitability among Major Global Producing Regions Due to Climate Change. PLoS ONE 10(4):e0124155. https://doi.org/10.1371/journal.pone.0124155 Patt AG, Schröter D (2008) Perceptions of climate risk in Mozambique: implications for the success of adaptation strategies. Glob Environ Chang 18:458–467. https://doi.org/10.1016/j.gloenvcha.2008.04.002 Pérez Correa E, Pérez Martínez M (2002) El sector rural en Colombia y su crisis actual. Cuad Desarro Rural:35–58 Poveda G, Jaramillo A, Gil MM et al (2001) Seasonality in ENSO related precipitation, river discharges, soil moisture, and vegetation index in Colombia. Water Resour Res 37:2169–2178. https://doi.org/10.1029/2000WR900395 Quiroga S, Suárez C, Solís JD (2014) Exploring coffee farmers' awareness about climate change and water needs: smallholders' perceptions of adaptive capacity. Environ Sci Pol 5. https://doi.org/10.1016/j.envsci.2014.09.007 Ramirez-Villegas J, Khoury CK (2013) Reconciling approaches to climate change adaptation for Colombian agriculture. Clim Chang. https://doi.org/10.1007/s10584-013-0792-6 Ramirez-Villegas J, Salazar M, Jarvis A, Navarro-Racines CE (2012) A way forward on adaptation to climate change in Colombian agriculture: perspectives towards 2050—supplementary material. Clim Chang 2006. https://doi.org/10.1007/s10584-012-0500-y Regmi BR, Star C, Leal Filho W (2014) Effectiveness of the local adaptation plan of action to support climate change adaptation in Nepal. Mitig Adapt Strateg Glob Chang:461–478. https://doi.org/10.1007/s11027-014-9610-3 Reid P, Vogel C (2006) Living and responding to multiple stressors in South Africa—glimpses from KwaZulu-Natal. Glob Environ Chang 16:195–206. 
https://doi.org/10.1016/j.gloenvcha.2006.01.003 Ribot J (2014) Cause and response: vulnerability and climate in the Anthropocene. J Peasant Stud 41:667–705. https://doi.org/10.1080/03066150.2014.894911 Salazar RCA, Cuervo YD, Pardo R (2011) Índice de Pobreza Multidimensional para Colombia. Arch Econ Schoell R, Binder CR (2009) System perspectives of experts and farmers regarding the role of livelihood assets in risk perception: results from the structured mental model approach. Risk Anal 29:205–222. https://doi.org/10.1111/j.1539-6924.2008.01153.x Schöll R, Binder CR (2010) Comparison of farmers' mental models of the present and the future: a case study of pesticide use. Futures 42:593–603. https://doi.org/10.1016/j.futures.2010.04.030 Scoones I (1998) Sustainable rural livelihoods: a framework for analysis: IDS working paper 72. Brighton Sietz D, Feola G (2016) Resilience in the rural Andes: critical dynamics, constraints and emerging opportunities. Reg Environ Chang 16:2163–2169. https://doi.org/10.1007/s10113-016-1053-9 Thomas M, Pidgeon N, Whitmarsh L, Ballinger R (2015) Mental models of sea-level change: a mixed methods analysis on the Severn Estuary, UK. Glob Environ Chang 33:71–82. https://doi.org/10.1016/j.gloenvcha.2015.04.009 Turner BL, Matson PA, McCarthy JJ et al (2003) Illustrating the coupled human-environment system for vulnerability analysis: three case studies. Proc Natl Acad Sci U S A 100:8080–8085. https://doi.org/10.1073/pnas.1231334100 Ulloa A (2011) Construcciones culturales sobre el clima, Centro Edi. Universidad Nacional de Colombia, Bogotá Vermeulen SJJ, Aggarwal PKK, Ainslie A et al (2011) Options for support to agriculture and food security under climate change. Environ Sci Pol 15:1–9. https://doi.org/10.1016/j.envsci.2011.09.003 Vigneau E, Qannari EM (2003) Clustering of variables around latent components. Commun Stat Part B Simul Comput 32:1131–1150. https://doi.org/10.1081/SAC-120023882 Vogel B, Henstra D (2015) Studying local climate adaptation: a heuristic research framework for comparative policy analysis. Glob Environ Chang 31:110–120. https://doi.org/10.1016/j.gloenvcha.2015.01.001 Ward JH (1963) Hierarchical grouping to optimize an objective function. J Am Stat Assoc 58:236–244. https://doi.org/10.1080/01621459.1963.10500845 Weber EU (2006) Experience-based and description-based perceptions of long-term risk: why global warming does not scare us (yet). Clim Chang 77:103–120. https://doi.org/10.1007/s10584-006-9060-3 Weber EU (2010) What shapes perceptions of climate change? Wiley Interdiscip Rev Clim Chang 1:332–342. https://doi.org/10.1002/wcc.41 This work was implemented as part of the CGIAR Research Program on Climate Change, Agriculture and Food Security (CCAFS), which is carried out with support from the CGIAR Fund Donors and through bilateral funding agreements. For details, please visit https://ccafs.cgiar.org/donors. The views expressed in this document cannot be taken to reflect the official opinions of these organizations. We thank the International Center for Tropical Agriculture (CIAT) and Fundacion Ecohabitats for supporting the fieldwork. Claudia R. 
Binder. Present address: Laboratory for Human-Environment Relations in Urban Systems, IIE, ENAC, Ecole Polytechnique Fédérale de Lausanne (EPFL), CH-1015, Lausanne, Switzerland. Affiliations: International Center for Tropical Agriculture (CIAT), Km 17, Recta Cali–Palmira 6713, Apartado Aéreo, 763537, Cali, Colombia (Anton Eitzinger); Department of Geography, University of Munich (LMU), Luisenstraße 37, 80333, Munich, Germany (Claudia R. Binder & Markus A. Meyer). Correspondence to Anton Eitzinger. Supplementary material: Online Resource 1, Assessment of climate change risks in Cauca, Colombia (PDF 1098 kb); Online Resource 2, Guidebook for experts' interviews (PDF 262 kb); Online Resource 3, Guidebook for farmers' interviews (PDF 705 kb); Online Resource 4, Detailed results of ranking and hierarchical clustering of farmers' perceptions (PDF 190 kb). Citation: Eitzinger, A., Binder, C.R. & Meyer, M.A. Risk perception and decision-making: do farmers consider risks from climate change? Climatic Change 151, 507–524 (2018). doi:10.1007/s10584-018-2320-1. Issue Date: December 2018.
Article | Open | Published: 06 September 2018. Spatial-temporal dynamics of carbon emissions and carbon sinks in economically developed areas of China: a case study of Guangdong Province. Jie Pei, Zheng Niu, Li Wang, Xiao-Peng Song, Ni Huang, Jing Geng, Yan-Bin Wu & Hong-Hui Jiang. Scientific Reports, volume 8, Article number: 13383 (2018).

This study analysed the spatial-temporal dynamics of carbon emissions and carbon sinks in Guangdong Province, South China. The methodology was based on land use/land cover data interpreted from continuous high-resolution satellite images and on energy consumption statistics, using the carbon emission/sink factor method. The results indicated that: (1) From 2005 to 2013, different land use/land cover types in Guangdong experienced varying degrees of change in area, primarily the expansion of built-up land and shrinkage of forest land and grassland; (2) Total carbon emissions increased sharply, from 76.11 to 140.19 TgC yr−1 at the provincial level, with an average annual growth rate of 10.52%, while vegetation carbon sinks declined slightly, from 54.52 to 53.20 TgC yr−1. Both factors showed significant regional differences, with the Pearl River Delta and North Guangdong contributing over 50% to provincial carbon emissions and carbon sinks, respectively; (3) Correlation analysis showed that social-economic factors (GDP per capita and permanent resident population) have significant positive impacts on carbon emissions at the provincial and city levels; (4) The relationship between economic growth and carbon emission intensity suggests that carbon emission efficiency in Guangdong improves with economic growth. This study provides new insight for Guangdong to achieve carbon reduction goals and realize low-carbon development.

Global warming, one of the most serious environmental issues humans must currently confront, is due primarily to greenhouse gas (GHG) emissions1. Land use/land cover change (LULCC) is the second largest source of GHG emissions, after fossil fuel combustion2. LULCC accounts for 25% of all carbon emissions since industrialization3. While the contribution of LULCC to anthropogenic carbon emissions has been decreasing since 20004, the decreasing proportion is mainly attributed to the increase in emissions from fossil fuel combustion5. Earth system models and observation-based methods are commonly used in estimating carbon emissions from LULCC6. However, uncertainties in the data and differences in which LULCC activities are considered create considerable uncertainty5. Since it is difficult to assess the accuracy of historical LULCC data from before the satellite era, which were constructed from nationally and internationally aggregated land-use statistics5, and since the interpretation and classification of remotely sensed images can be influenced by human subjectivity, variations in LULCC emission estimates will emerge when different LULCC data sets are used. For example, Shevliakova et al.7 estimated that a difference of 0.2 Pg-C yr−1 was produced when using two different spatial data sets for croplands (the SAGE and HYDE data sets). Furthermore, variations also originate from the inclusion of different LULCC processes (e.g. forest management)8. Satellite-based observations of LULCC provide an alternative for estimating spatiotemporal variations of carbon emissions9,10. To date, the spatial resolution of existing global land cover data sets has improved from 1 km to 30 m11,12,13,14,15.
However, these global land cover products lack interoperability because they were produced using different classification systems and algorithms16,17. Furthermore, they are not continuous in time and their accuracies may not always satisfy the requirements of research on a local scale. It is possible to solve these problems and produce more accurate results of spatiotemporal patterns of LULCC in study areas by using digitally enhanced, multi-sensor satellite images with high spatial resolution produced with a standard classification system over several consecutive years. Extensive research has been done on LULCC-related carbon emissions at the global, regional, and national spatial scales10,18,19,20,21,22. Those studies presented various views on the relationships between carbon emissions and land use by discussing natural carbon processes of LULCC and the impacts on anthropogenic carbon emissions by LULCC. The carbon dynamics caused by LULCC could be investigated through the division of different land use/land cover types, and carbon emission effects of different land use/land cover types could thus be accurately explored23. However, previous studies were mainly focused on carbon emissions taking place on a single land-use type, such as carbon emissions resulting from agricultural land utilization24 or built-up land construction25, neglecting to establish the whole carbon estimation system based on land use/land cover23. Furthermore, a comprehensive comparison between carbon emissions and carbon sinks of land use/land cover remains lacking. Meanwhile, due to data limitation, there were few reports on continuous time series estimation of regional land-use carbon emissions, which is of critical importance for local governments to formulate or adjust carbon reduction policies in a timely manner23. It has been reported that coastal regions are an ideal study area for exploring the relationship between land use patterns and carbon emissions derived from social and economic factors, since these regions are characterized by high population densities, rapid economic growth and intense energy consumption26,27,28. Guangdong, the most affluent and populous province in China, has undergone rapid development since the implementation of China's economic reform and opening policy in 197829,30. The gross domestic product (GDP) of Guangdong has steadily increased, with an average annual growth of 18% for the last three decades, reaching 1.197 trillion USD in 2016. Nevertheless, in economically developed regions like Guangdong Province, due to the relatively high degree of urbanization, although LULCC during the most recent years is not dramatic compared with other rapidly developing regions, carbon emissions of land use are still very large. This is because a considerable amount of carbon emissions could be originated from many sources in addition to LULCC, including mechanized agricultural tillage and urban construction driven by human economic activities23,31. Both are associated with carbon emissions resulting from energy consumption and industrial production. As reported, Guangdong's high-speed economic development relies heavily upon energy consumption, 21.9 million tons of standard coal was consumed in the province in 2010, with the amplitude of the annual growth in energy consumption close to 11% since the year 200032. Large increases in energy consumption lead to large increases in carbon emissions. 
However, the carbon emissions of energy and industrial sources also indirectly come from land use carbon emissions when taking into account the demand that population growth and economic development have induced for more lands converted to construction use33. Therefore, the regional land use carbon emissions should be analysed to determine how to reduce carbon emissions through land use policy34,35. Previous studies of carbon emissions in Guangdong Province were either focused on energy-related carbon emissions or terrestrial ecosystem carbon stock dynamics29,36,37. However, few studies comprehensively estimated multi-scale land use carbon emissions and its relationship with social-economic factors in this region. To bridge this gap, in this study, we attempted to realize a complete accounting of carbon dynamics in Guangdong Province on multiple spatial scales, from 2005 to 2013, by using continuous land use/land cover data with high spatial resolution (≤30 m) combined with energy consumption statistics. This may also help to understand the relationship between urbanization and land use carbon emissions38. The specific objectives of this paper are: (1) to examine the area of different land use/land cover types in Guangdong by visually interpreting multi-sensor high-resolution satellite images for nine consecutive years based on a standard classification system and interpretation symbol library; (2) to investigate spatiotemporal changes in vegetation carbon sinks and carbon emissions on a provincial, sub-provincial and city scale; (3) to explore the relationship between carbon emissions and social-economic factors on a provincial and city scale; (4) to quantitatively study spatiotemporal changes in carbon emissions intensity (CEI) and its relationship with economic growth. Guangdong Province (20°13′–25°31′N, 109°39′–117°19′E; Fig. 1) is in the southernmost part of the Chinese mainland. The terrain is generally high in the north and low in the south, incorporating plains, hills, mountains, and plateaus. East Asian monsoon is the main climatic type in this area. Guangdong is one of the richest regions in light, heat, and water resources in China. The average annual temperature is 21.8 °C and the average annual rainfall is 1789.3 mm. The permanent resident population is 108 million, of which 68.71% is urban (2015). The real GDP has been the highest in China for the last 27 years, and was 1.197 trillion USD in 201639. Guangdong is divided into four regions by geography and economic level40: Pearl River Delta (PRD), East Guangdong (EGD), West Guangdong (WGD), and North Guangdong (NGD). The cities included in each region are shown in Fig. 1. Location of Guangdong Province in China, and the economic geographical division of Guangdong. Map created using ArcGIS [9.3], (http://www.esri.com/software/arcgis). Three forms of data were used in this study, specifically, spatial data, statistical data and empirical coefficients. Spatial data included highly qualified annual land use/land cover data (from the consecutive years 2005–2013) that were derived from multi-source satellite images, composed of CBERS and Landsat series satellite images at a fine spatial scale (≤30 m). The specific descriptions of yearly satellite images acquired from 2005–2013 are shown in Supplementary Table S1. The statistical data included energy consumption data, agricultural practice-related statistics and social-economic data. 
Specifically, annual energy consumption data were obtained from the "Guangdong Statistical Yearbook" and the "China Energy Statistical Yearbook". Agriculture-related statistical data included the consumption of chemical fertilizers, pesticides, and plastic film for farm use, the total power of agricultural machinery, the effective irrigation area, and the sown area of major farm crops; these data were acquired from the "Guangdong Statistical Yearbook" and the "China Agriculture Statistical Yearbook", complemented by the annual statistical yearbooks of the 21 cities in Guangdong. Social and economic data, including population and GDP, originated from the "Guangdong Statistical Yearbook". Empirical coefficients, including carbon emission factors and carbon sink factors, were taken from the relevant literature.

Derivation of land use/land cover data

Remote sensing technology was used to interpret the area of the main land use/land cover types from 2005 to 2013. First, a collection of multi-temporal satellite images was selected. Data pre-processing was then conducted, including radiometric calibration, atmospheric correction, geometric correction, band compositing, and image mosaicking. To detect changes in land use/land cover, multi-temporal images were co-registered in the same coordinate system. A standard classification system (i.e. the National Ecological Remote Sensing Monitoring Land Use/Land Cover Classification System) was adopted to ensure that multi-source images were classified consistently. The classification system included six first-level classes and 25 second-level classes (Supplementary Table S2). To improve interpretation accuracy, a multi-temporal interpretation symbol library of land use/land cover types was established. Visual interpretation was applied to the multi-source remotely sensed images to produce land use/land cover data for Guangdong Province from 2005 to 2013. Finally, field investigations were conducted to validate the accuracy. For the purposes of this study, we mainly focused on first-level classification types. To identify transfers between different land types, a land use transition matrix between 2005 and 2013 was created using the intersect analysis tool in ArcGIS 9.331.

Carbon sink estimations from forest land and grassland

Forest land and grassland contribute most of the gain in vegetation carbon storage in Guangdong, due to photosynthesis. Although farm crops absorb CO2 from the atmosphere while growing, most of the accumulated biomass is released back into the atmosphere through decomposition in the short term41,42,43. Therefore, cropland was not included in the calculation. Due to data limitations, carbon sinks of water bodies and barren land were also excluded. With reference to existing research23,31, vegetation carbon sinks of land cover were measured using carbon sink factors and the corresponding land cover areas, which provides a simple and practical method for calculating multi-scale carbon sinks. The formula for estimating the annual carbon sinks of land cover from 2005 to 2013 is shown in Eq. (1): $$C_{veg}=\sum_{i}R_{veg\text{-}i}\times \mathrm{Area}_{veg\text{-}i}$$ where Cveg represents the annual total carbon sinks; Rveg-i refers to the carbon sink factors of the different land cover types (i.e. forest land and grassland, respectively), which were taken from relevant research conducted in Guangdong (Supplementary Table S4). To improve the accuracy of the estimation, different published values of the carbon sink factor for the same land cover type were averaged. Meanwhile, considering that environmental conditions are relatively consistent across the study area, the carbon sink factor of a given land cover type was kept the same across regions/cities. Areaveg-i is the area of forest land or grassland, respectively.
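The transition matrix and the carbon sink calculation of Eq. (1) can be sketched as follows. This is a minimal illustration, not the authors' ArcGIS workflow: the two land cover rasters are simulated, and the carbon sink factors are placeholder values standing in for the averaged factors of Supplementary Table S4.

```python
import numpy as np
import pandas as pd

# Hypothetical co-registered land cover rasters for 2005 and 2013
# (class codes: 1 cropland, 2 forest, 3 grassland, 4 water, 5 built-up, 6 barren).
lc2005 = np.random.default_rng(1).integers(1, 7, size=(1000, 1000))
lc2013 = np.random.default_rng(2).integers(1, 7, size=(1000, 1000))
pixel_area_km2 = 0.0009  # e.g. 30 m x 30 m pixels

# Land use transition matrix (km2), analogous to an intersect/cross-tabulation
transition = pd.crosstab(lc2005.ravel(), lc2013.ravel()) * pixel_area_km2
print(transition.round(1))

# Eq. (1): annual vegetation carbon sinks = sum over cover types of factor x area.
# Hypothetical carbon sink factors (MgC ha-1 yr-1); the study averages published values.
sink_factor = {"forest": 3.0, "grassland": 1.0}
area_ha = {
    "forest": (lc2013 == 2).sum() * pixel_area_km2 * 100,     # km2 -> ha
    "grassland": (lc2013 == 3).sum() * pixel_area_km2 * 100,
}
C_veg = sum(sink_factor[k] * area_ha[k] for k in sink_factor)  # MgC yr-1
print(f"Vegetation carbon sink: {C_veg / 1e6:.2f} TgC yr-1")
```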
Carbon emissions estimation from cropland utilisation

Carbon emissions from cropland utilisation refer to the direct and indirect carbon emissions resulting from agricultural production activities based on cropland resources24. Rice paddies are one of the primary emitters of methane (CH4) to the atmosphere44,45, which is regarded as the second most important trace GHG after CO246. Two rice crops are grown in Guangdong Province each year under irrigation, making CH4 emissions in this area relatively higher than in other rice cultivation regions of China46. Meanwhile, in Guangdong, water is usually drained from paddy fields during the non-cropping season, and the amount of CO2 released into the atmosphere increases significantly during this period47. Therefore, CH4 emissions in the flooding period and CO2 emissions in the non-flooding period were treated in this study as the two direct carbon emission processes for cropland. Following the calculation method adopted by the IPCC48, CH4 emissions from paddy fields were estimated from the paddy area and a CH4 emission factor. Similarly, CO2 emissions during the non-growing season were calculated from the paddy area and a CO2 emission factor. The specific formula for estimating direct carbon emissions from agricultural activities is as follows (Eq. (2)): $$CE_{crop\text{-}dir}=\mathrm{Area}_{paddy}\times \left(C_{CH_4\text{-}C}\times \frac{12}{16}+C_{CO_2\text{-}C}\times \frac{12}{44}\right)$$ where CEcrop-dir denotes the annual direct carbon emissions from agricultural activities; Areapaddy refers to the area of paddy fields, which was derived from the interpretation of satellite images; CCH4-C and CCO2-C represent the CH4 and CO2 emission factors, respectively. The value of CCH4-C (0.76 Mg CH4 ha−1 yr−1) was derived from Kang et al.49, who carried out field measurements of CH4 emissions from irrigated paddy fields during rice growing periods in Guangzhou, the capital of Guangdong Province. The value of CCO2-C (8.65 Mg CO2 ha−1 yr−1) is the average of values from two related studies conducted by Liu et al.50 and Wang et al.51, in which CO2 emissions during the non-growing season were measured by static chamber-gas chromatography and the eddy covariance technique, respectively. According to the mass balance method, CH4 and CO2 are converted to carbon by multiplying by the constants 12/16 and 12/44, respectively. Furthermore, according to existing research52,53,54, indirect carbon emissions from cropland utilisation mainly include (1) carbon emissions during the production of chemical inputs, including chemical fertilizers, pesticides and agricultural plastic film; (2) carbon emissions from energy consumption during agricultural machinery usage and cropland irrigation; and (3) organic carbon loss generated by cropland tillage practices. According to the IPCC48, the total indirect carbon emissions from agricultural activities were calculated with the following formula (Eq.
(3)): $$CE_{crop\text{-}indir}=\sum_{i=1}^{n}E_{i}=\sum_{i=1}^{n}T_{i}\cdot \delta_{i}$$ where CEcrop-indir indicates the total indirect carbon emissions from agricultural activities; Ei is the carbon emissions from the i-th agricultural carbon source; Ti denotes the consumption (amount) of fertilizer, pesticide, or agricultural plastic film, the total power of agricultural machinery, the effective irrigation area, or the sown area of farm crops, with δi as the corresponding carbon emission factor. For details of the agricultural carbon sources and their corresponding carbon emission factors, please see Supplementary Table S5. Thus, the formula for calculating the total carbon emissions from cropland utilisation is as follows (Eq. (4)): $$CE_{crop}=CE_{crop\text{-}dir}+CE_{crop\text{-}indir}$$ where CEcrop represents the total carbon emissions from cropland utilisation; CEcrop-dir and CEcrop-indir denote the direct and indirect carbon emissions produced by cropland utilisation, respectively.

Carbon emissions estimation from built-up land construction

As the main land-use type supporting human life and production, built-up land consumes a great amount of energy and produces a large quantity of carbon emissions, including emissions from the production and delivery of construction materials and from installing and constructing buildings55. Following Chuai et al.25, carbon emissions from built-up land construction were estimated from the direct energy consumption during the building construction phase. We used the following method for calculating the carbon emissions produced by the construction of built-up land (Eq. (5)): $$CE_{built\text{-}up}=\sum_{i=1}^{n}C_{energy\text{-}i}=\sum_{i=1}^{n}Q_{energy\text{-}i}\times K_{energy\text{-}i}\times \left(V_{CO_2\text{-}i}+V_{CH_4\text{-}i}\right)$$ where CEbuilt-up refers to the annual total carbon emissions during the construction phase of built-up land; Cenergy-i represents the quantity of carbon emissions from energy type i; Qenergy-i is the quantity of energy consumed of energy type i; Kenergy-i is the per-unit calorific value of energy type i; and VCO2-i and VCH4-i are the carbon emission factors for CO2 and CH4 from energy type i, respectively. Since China's energy carbon emission factors are still under study and published national standards are lacking, in this study we mainly used emission factors from the IPCC25, which are described in detail in Supplementary Table S6. Because energy consumption was also involved in the calculation of indirect carbon emissions from cropland utilisation, to avoid overestimating carbon emissions from the construction of built-up land, the energy consumed by agricultural machinery usage and cropland irrigation was subtracted from the consumption of the corresponding energy type when calculating emissions related to built-up land construction. Total carbon emissions of land use were calculated as the sum of carbon emissions from cropland utilisation and from the construction of built-up land (Eq. (6)): $$CE_{Total}=CE_{crop}+CE_{built\text{-}up}$$ where CETotal is the total carbon emissions of land use; CEcrop is the carbon emissions from cropland utilisation; and CEbuilt-up is the carbon emissions from the construction of built-up land. It is worth noting that Eqs (1)–(6) were applied on multiple spatial scales, including the provincial, sub-provincial and city scales.
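To make the accounting in Eqs (2)–(6) concrete, the sketch below strings the pieces together. Only the paddy-field factors (0.76 Mg CH4 ha−1 yr−1 and 8.65 Mg CO2 ha−1 yr−1) come from the text; every other input value and factor is a hypothetical placeholder standing in for the quantities listed in Supplementary Tables S5 and S6 and the statistical yearbooks.

```python
# Direct cropland emissions, Eq. (2): paddy area x (CH4 factor x 12/16 + CO2 factor x 12/44)
def cropland_direct(area_paddy_ha, f_ch4=0.76, f_co2=8.65):
    """Annual direct emissions in MgC; factors in Mg CH4 / Mg CO2 per ha per yr (from the text)."""
    return area_paddy_ha * (f_ch4 * 12 / 16 + f_co2 * 12 / 44)

# Indirect cropland emissions, Eq. (3): sum of activity amounts x emission factors
def cropland_indirect(activity, factors):
    """activity and factors are dicts keyed by carbon source (fertilizer, pesticide, ...)."""
    return sum(activity[k] * factors[k] for k in activity)

# Built-up land construction emissions, Eq. (5): energy use x calorific value x (CO2 + CH4 factors)
def builtup_emissions(energy_use, calorific, v_co2, v_ch4):
    return sum(energy_use[i] * calorific[i] * (v_co2[i] + v_ch4[i]) for i in energy_use)

# Hypothetical inputs (illustrative only)
area_paddy_ha = 1.3e6
activity = {"fertilizer_t": 2.3e6, "pesticide_t": 1.0e5}
factors = {"fertilizer_t": 0.8956, "pesticide_t": 4.9341}   # MgC per tonne, assumed
energy_use = {"coal_t": 1.5e8, "oil_t": 4.0e7}              # physical units, assumed
calorific = {"coal_t": 20.9, "oil_t": 41.8}                 # GJ per unit, assumed
v_co2 = {"coal_t": 0.026, "oil_t": 0.020}                   # MgC per GJ, assumed
v_ch4 = {"coal_t": 1e-5, "oil_t": 3e-6}                     # MgC per GJ, assumed

CE_crop = cropland_direct(area_paddy_ha) + cropland_indirect(activity, factors)  # Eq. (4)
CE_builtup = builtup_emissions(energy_use, calorific, v_co2, v_ch4)              # Eq. (5)
CE_total = CE_crop + CE_builtup                                                  # Eq. (6)
print(f"Total land use carbon emissions: {CE_total / 1e6:.1f} TgC yr-1")
```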
Carbon emissions intensity calculation Three forms of carbon emissions intensity (CEI) were calculated and analysed in this study. CEI per unit land area provides a measure of the carbon emissions allocated to a unit of terrestrial land. CEI per unit GDP provides a measure of the carbon emissions required to produce a unit of economic activity56. Meanwhile, per capita carbon emissions reflect carbon emissions intensity relative to population. The three forms of CEI were calculated as follows (Eqs (7–9)). $$CEI_{land}=\frac{CE_{Total}}{L}$$ $$CEI_{gdp}=\frac{CE_{Total}}{G}$$ $$CEI_{population}=\frac{CE_{Total}}{P}$$ where CEIland, CEIgdp and CEIpopulation represent CEI per unit land area, CEI per unit GDP and per capita carbon emissions, respectively. CETotal is the annual total carbon emissions of land use; L, G and P denote annual total land area, annual gross domestic product and annual permanent resident population, respectively. In this study, the Pearson correlation coefficient (r) was used to investigate how social-economic development influenced the quantity and direction of carbon emissions on the provincial and city scales. We used regression analysis to explore whether the relationship between economic growth and CEI was consistent with the environmental Kuznets curve (EKC) hypothesis (i.e. an inverted U-shaped curve)57. The formula used is as follows (Eq. (10)): $$y=ax+bx^{2}+c$$ where y represents the CEI per unit GDP; x denotes GDP per capita (an intuitive indicator of economic growth); and a, b and c are the equation coefficients, which determine the curve relationship between economic growth and CEI. Specifically, if the coefficient of x is positive and the coefficient of x2 is negative, the fitted curve is an inverted U38. All statistical analyses in this study were performed using the SPSS software package (version 16.0). The threshold for statistical significance was set at P = 0.05. Land use/land cover area and changes Our study found that different land use/land cover types in Guangdong experienced varying degrees of change in area from 2005 to 2013, presented primarily as an increase in built-up land and water bodies and a decrease in grassland, forest land, cropland and barren land (Supplementary Table S3 and Fig. S1). Field surveys showed that the overall interpretation accuracy of the first-level classification was more than 90%. Land use transition analysis (Table 1) shows that a total of 12569.46 km2 of land across Guangdong Province changed land type, which accounted for 7.14% of the entire study area. As the largest land cover type in Guangdong, forest land showed a decreasing trend over the study period, declining at a rate of 1.61%. The area transferred out of forest land was 4321.19 km2, mainly occupied by cropland and built-up land. Guangdong was also rich in cropland, which had a much lower change rate (−0.39%). A total of 2096.09 km2 of cropland was converted to built-up land, accounting for 51.63% of the total area transferred out of cropland. Forest land was the second land cover type to occupy cropland, with an area of 1224.15 km2. Grassland decreased continually between 2005 and 2013, at a rate of 13.66% and an area of 1218.46 km2. A total of 932.48 km2 of grassland was converted to forest land. Built-up land underwent the largest area change (2872.06 km2), with an increase rate of 23.19%.
The transfer-out of built-up land was mainly occupied by cropland, comprising 1008.50 km2, which accounted for 66.46% of the entire transfer-out area. However, the transfer-out area of built-up land to cropland was only half of the transfer-out area of cropland to built-up land. Water bodies accounted for approximately 5% of the total area of the province, with a rate of increase of 1.91% from 2005–2013. Water bodies were mainly converted to built-up land and cropland, with 492.00 km2 and 288.20 km2 transferred, respectively. Although barren land accounted for less than 1% of the total land area of Guangdong, it had the greatest rate of change in area (48.23%). Barren land was mainly transferred to forest land and built-up land. Table 1 Land use transition matrix of Guangdong Province between 2005 and 2013 (km2). Vegetation carbon sinks and carbon emissions estimation On a provincial scale, vegetation carbon sinks from both forest land and grassland did not change dramatically from 2005–2013, only decreasing from 54.52 to 53.20 TgC yr−1 (Fig. 2 and Table 2). Forest land contributed over 93% to annual vegetation carbon sinks, serving as the most important carbon sink type in Guangdong Province. Meanwhile, vegetation carbon sinks from grassland attained 31.36 TgC in total over the nine years. Provincial carbon emissions on both new built-up land and cropland sharply increased from 76.11 to 140.19 TgC yr−1 between 2005–2013, with an average annual rate of increase of 10.52% (Fig. 2 and Table 2). The cumulative carbon emissions in Guangdong were 999.07 TgC from 2005–2013, which corresponded to 2.07 times the total carbon sinks over the study period. The ratio of carbon emissions to carbon sinks kept increasing from 2005–2013, reaching 2.64 in 2013. Carbon emissions from the construction of built-up land increased from 63.74 to 127.49 TgC yr−1, an increase of 1.0 times. Built-up land construction contributed 83.75–90.94% to annual provincial carbon emissions, serving as the greatest contributor to carbon emissions in Guangdong. In contrast, carbon emissions from cropland utilisation only increased by 0.026 times, from 12.37 to 12.70 TgC yr−1. Temporal changes of provincial carbon emissions and vegetation carbon sinks from 2005–2013. Bars represent annual quantity of carbon emissions on new built-up land and cropland. Line represents annual carbon sinks contributed by forest land and grassland. Table 2 Calculation results of carbon emissions, vegetation carbon sinks, and carbon emissions intensity (CEI) on the provincial scale from 2005–2013. On a sub-provincial scale, annual carbon emissions of each of the four regions presented a generally increasing trend, of varying degrees, from 2005–2013. Vegetation carbon sinks of these regions also showed a generally decreasing trend (Supplementary Fig. S2). Nevertheless, in terms of absolute values, carbon emissions and carbon sinks in Guangdong showed typical regional differences (Fig. 3). The mean and standard deviations of carbon emissions in Pearl River Delta during 2005–2013 was 58.82 ± 13.09 TgC yr−1, higher than other three regions combined (Fig. 4a). This was followed by North Guangdong (21.10 ± 4.42 TgC yr−1), West Guangdong (19.70 ± 3.02 TgC yr−1), and East Guangdong (12.17 ± 2.56 TgC yr−1), which had only a fifth of that of Pearl River Delta. Meanwhile, the mean and standard deviations of vegetation carbon sinks in North Guangdong during 2005–2013 was 27.94 ± 0.09 TgC yr−1, larger than other three regions combined (Fig. 
4b), followed by Pearl River Delta (14.23 ± 0.22 TgC yr−1), West Guangdong (7.62 ± 0.23 TgC yr−1), and East Guangdong (3.89 ± 0.04 TgC yr−1), accounting for only 14% of that of North Guangdong. The mean and standard deviations of carbon emissions and vegetation carbon sinks of different regions in Guangdong Province between 2005 and 2013. The red and blue bars represent the mean value of carbon emissions and carbon sinks, respectively, with vertical bars representing + one standard deviation. GD: Guangdong Province; PRD: Pearl River Delta; EGD: East Guangdong; WGD: West Guangdong; NGD: North Guangdong. Regional comparisons of carbon emissions and vegetation carbon sinks. (a) Annual carbon emissions in Pearl River Delta (PRD) and non-PRD regions (i.e. East Guangdong, West Guangdong and North Guangdong). The red and blue bars represent carbon emissions of PRD and non-PRD, respectively. (b) Annual vegetation carbon sinks in North Guangdong (NGD) and non-NGD (i.e. Pearl River Delta, East Guangdong, and West Guangdong). The green and purple bars represent carbon sinks of NGD and non-NGD, respectively. At the city scale, carbon emissions underwent more drastic spatial and temporal changes than vegetation carbon sinks during the study period (Supplementary Figs S3–S6). For further analysis, we took the year 2013 as an example (Table 3). Cities were divided into high-intensity, moderate-intensity, and low-intensity carbon emissions areas. Guangzhou, Shenzhen, Foshan, Dongguan, Huizhou, Jiangmen, Zhanjiang, Maoming and Qingyuan were high-intensity because their carbon emissions were higher than the average provincial level (6.68 TgC yr−1). Zhongshan, Zhaoqing, Jieyang, and Meizhou were placed into the moderate-intensity class. Zhuhai, Shantou, Chaozhou, Shanwei, Yangjiang, Shaoguan, Heyuan, and Yunfu were low-intensity. Guangzhou, the provincial capital, had the highest amount of carbon emissions among the 21 cities (13.51 TgC yr−1). Meanwhile, in terms of carbon sinks, Shaoguan produced the largest carbon sink in 2013 compared with other 20 cities (6.76 TgC yr−1), followed by Qingyuan, Meizhou, Heyuan and Zhaoqing, all much higher than the average level of the whole province (2.53 TgC yr−1). With regard to the ratio of total emissions to total sinks, Shaoguan, Heyuan and Meizhou attained the values less than 1, indicating that they were the only three cities playing the role of net carbon sink in 2013. Meanwhile, Dongguan had the highest ratio of total emissions to total sinks (32.65), over 12 times the average provincial level (2.64). Table 3 Calculation results of carbon emissions, vegetation carbon sinks, and carbon emissions intensity (CEI) on the city scale in 2013. Relationship between carbon emissions and social-economic development Correlation analysis was conducted to quantitatively evaluate the relationship between social-economic factors and carbon emissions on a provincial and city scale (Fig. 5). GDP per capita had a significant, positive correlation with annual carbon emissions at the provincial level (r = 0.97, p < 0.001) (Fig. 5a). There was also a significant positive correlation between permanent resident population and annual carbon emissions, with a higher Pearson's correlation coefficient (r = 0.98, p < 0.001) (Fig. 5b), suggesting that population growth promotes energy demand and thus increases carbon emissions at the provincial level. 
Likewise, taking 2013 as an example on the city scale, GDP per capita had a positive impact on carbon emissions, (r = 0.43, p < 0.05), relatively lower than that of the province (Fig. 5c). Meanwhile, permanent resident population also had a highly positive correlation with city-produced carbon emissions (r = 0.80, p < 0.001) (Fig. 5d), although the correlation coefficient was comparatively lower than that of the province. Results of this analysis indicate that social-economic development is the main driver of increasing carbon emissions. Results of correlation analysis between carbon emissions and social-economic factors. (a) Correlation between GDP per capita and annual carbon emissions at the provincial level. (b) Correlation between permanent resident population and annual carbon emissions at the provincial level. (c) Correlation between GDP per capita and carbon emissions at the city level in 2013. (d) Correlation between permanent resident population and carbon emissions at the city level in 2013. Two-tailed test of significance was used in the analysis. Spatiotemporal changes of carbon emissions intensity On the provincial scale, CEI per unit land area displayed an increasing trend, rising from 4.32 to 7.96 MgC yr−1 ha−1 from 2005–2013 (Table 2), representing an increase of 10.55% per year. In contrast, CEI per unit GDP declined from 2.76 to 1.40 MgC yr−1 10−4 USD, i.e. a reduction of 6.18% per year. The relationship between GDP per capita and CEI per unit GDP formed a half-U shape curve (Fig. 6a), which means that carbon emission efficiency in Guangdong Province improved with economic growth. Per capita carbon emissions displayed a generally increasing trend, rising from 0.83 to 1.32 MgC yr−1, with an average annual growth rate of 7.39%. Results of regression analysis between economic growth and carbon emissions intensity. (a) Fitted curve between GDP per capita and CEI per unit GDP on the provincial scale from 2005–2013; (b) Fitted curve between GDP per capita and CEI per unit GDP on the city scale in 2013. The unit of Y-axis in (a) and (b) is MgC yr−1 10−4 USD. On the city scale, we focused on the spatial differences among 21 cities in terms of CEIs (Table 3). For CEI per unit land area, seven cities, mostly in PRD, far exceeded the provincial average value in 2013 (7.96 MgC yr−1 ha−1). Shenzhen had the largest value (44.90 MgC yr−1 ha−1) among the 21 cities, nearly 16 times higher than the lowest value (Shaoguan). In contrast, there were eight cities, mostly in North Guangdong, with a significantly lower value than the provincial average. For CEI per unit GDP, Guangzhou, Shenzhen, Foshan, Zhuhai, Dongguan, and Zhongshan, all in PRD, had values lower than the average level of the province (1.40 MgC yr−1 10−4 USD). Therefore, these cities could be categorized as having high-efficiency carbon emissions. The remaining 15 cities could thus be categorized as low-efficiency carbon emitters. Although total carbon emissions in Guangzhou and Foshan were among the highest in the province, their carbon emission efficiency was also among the highest (0.54 and 0.99 MgC yr−1 10−4 USD in Guangzhou and Foshan, respectively). The relationship between GDP per capita and CEI per unit GDP was also a half-U shape (Fig. 6b), suggesting that with the rapid economic development of a city, carbon emission efficiency also constantly increases. 
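As a concrete illustration of Eqs (7)-(10) and of the correlation and curve-fitting step, a small sketch follows. It is not the analysis pipeline used in the paper (which was run in SPSS 16.0); the yearly series in it are invented placeholders, and only the formulas and the sign check for an inverted-U curve follow the text.

```python
# Sketch of the intensity measures (Eqs (7)-(9)), the Pearson correlation and
# the quadratic fit of Eq. (10). All series below are invented placeholders.
import numpy as np

ce_total = np.array([76.1, 85.0, 95.2, 104.8, 113.0, 120.5, 128.0, 134.6, 140.2])  # TgC/yr (placeholder)
gdp      = np.array([2.2, 2.6, 3.1, 3.6, 4.1, 4.6, 5.3, 5.9, 6.5])                 # arbitrary economic units (placeholder)
pop      = np.array([91.9, 93.8, 95.4, 97.0, 98.6, 100.3, 102.0, 104.1, 106.4])    # million persons (placeholder)
land_area = 17.97e6                                                                 # ha (placeholder)

cei_land = ce_total / land_area      # Eq. (7): per unit land area
cei_gdp  = ce_total / gdp            # Eq. (8): per unit GDP
cei_pop  = ce_total / pop            # Eq. (9): per capita

gdp_per_capita = gdp / pop
r = np.corrcoef(gdp_per_capita, ce_total)[0, 1]        # Pearson r

# Eq. (10): y = a*x + b*x^2 + c; np.polyfit returns coefficients highest power first.
b, a, c = np.polyfit(gdp_per_capita, cei_gdp, 2)
shape = "inverted U (EKC-type)" if a > 0 and b < 0 else "no inverted U (e.g. a half-U decline)"
print(f"r = {r:.2f}, a = {a:.3g}, b = {b:.3g}, c = {c:.3g} -> {shape}")
```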
For per capita carbon emissions, there were 13 cities with values higher than the provincial average (1.32 MgC yr−1), and eight cities with values lower than the average. Shenzhen had the lowest value among the cities (0.78 MgC yr−1), despite having high total carbon emissions in 2013. Land use/land cover and carbon emissions estimation Our study shows that Guangdong Province did not undergo drastic LULCC dynamics from 2005 to 2013, with only 7.14% of the land area having experienced changes in land type (Table 1), which is much lower than in Jiangsu Province26. Built-up land expanded continuously from 2005 to 2013, mainly at the expense of cropland and forest land. As Guangdong is the most populous and affluent province in China29, rapid urbanization and high-speed economic development are the main drivers of its significant built-up area expansion30,58,59. In contrast, cropland decreased from 2005 to 2013, primarily due to transformation into built-up land, but its rate of decrease was only 0.39%, which can be mainly attributed to the national farmland protection policy60. As one of the major rice cultivation regions in China46, Guangdong Province has made great efforts to preserve cropland, especially farmland with great production potential. Similarly, although forest land showed a decreasing trend over the study period, its rate of decline was low (1.61%), which can be explained by the fact that sustainable forestry development and ecological protection were initiated by the local government in 200561,62. Grassland was the only ecological land cover type undergoing a relatively high degree of decline in area (13.66%), mainly converted to forest land, which increased carbon sink capacity to a certain extent25. Land use carbon emissions account for a large proportion of human-driven carbon emissions48. In our study, we defined land as a carrier for carbon emissions31 and assumed that carbon emissions on a given land surface increase or decrease in proportion to land area and the amount of energy and materials consumed on that land, which may not entirely conform to reality, since carbon emissions vary widely with land type and with land surface features and functions31. Built-up land contributed the most carbon emissions, which is consistent with other studies63,64. This is mainly because human activities are associated with high carbon emissions, and these activities are always based on built-up land65,66. Therefore, restrictions on built-up land play a significant role in reducing local carbon emissions31. Specifically, built-up land should be controlled by rigid management of urban expansion and cropland occupation23. Meanwhile, cropland contributed to carbon emissions through agricultural production activities as well as CH4 emissions during the flooding period and CO2 emissions during the non-flooding period in paddy fields54. Among the various land types, forest land has the highest level of biomass sequestered as vegetation, in accordance with previous research67,68. Thus, forest land protection is critical not only for ecological conservation but also for increasing the vegetation carbon sink69. In contrast, grassland contributed the least to the increase in vegetation carbon storage due to its relatively small capacity for carbon sequestration25.
In summary, land use carbon emissions intensity should be effectively reduced through land use structure adjustment and optimization, mainly by strictly limiting the excessive growth of built-up land and promoting energy use efficiency, while at the same time increasing the area of ecological land and improving carbon sequestration efficiency23. However, taking into account the differences in regional land use, local governments should take corresponding actions based on local conditions31. Regional differences in carbon emissions and carbon sinks In this study, we found that carbon emissions in Guangdong rose sharply from 2005–2013, while carbon sinks did not change dramatically (Fig. 2). Both carbon emissions and carbon sinks displayed significant regional differences (Figs 3 and 4). On the sub-provincial scale, carbon emission capacities clearly differed by region, in accordance with economic status. This indicates that economic growth is one of the main drivers of increasing carbon emissions during regional development70,71. The Pearl River Delta region contributed more than 50% of provincial carbon emissions during the study period, due to its position as an advanced manufacturing and modern service base with global influence. Total carbon emissions presented a rising trend in the Pearl River Delta region from 2005 to 2013 (Supplementary Fig. S2). As a result, although this region has benefited from the high economic level brought about by rapid urbanization, it now faces a more demanding carbon emissions reduction task25. Therefore, in the future, carbon reduction work in the Pearl River Delta region should concentrate on enhancing energy use efficiency, augmenting the use of green energy types and speeding up industrial upgrading38. Guangzhou, the capital city of Guangdong Province, has witnessed high-speed economic development over the past decades and represents a groundbreaking example of China's early industrialization and urbanization trends38. Early economic development usually comes at the cost of environmental benefits72. Guangzhou released the largest amount of carbon in the Pearl River Delta region and the whole province, totalling 102.36 TgC from 2005–2013, with an annual average growth rate of 10.55%. Furthermore, annual carbon emissions in Guangzhou grew relatively slowly between 2010 and 2013 after an initially rapid increase, and a sharp decline in annual carbon emissions was detected in 2010. These data indicate that Guangzhou has achieved a certain degree of economic stability and has reasonable control over the carbon emissions associated with urban development38. Within the Guangzhou-Foshan area, Guangzhou is the center promoting the economic development of Foshan38, which explains why total carbon emissions in Foshan ranked second in the province. Meanwhile, carbon sequestration capacity also clearly differed by region, but in a different order (Figs 3 and 4), which is consistent with the spatial distribution of forest vegetation carbon stocks in the four economic regions of Guangdong Province73. North Guangdong had the highest capacity for carbon sequestration from 2005 to 2013 because the forest land in this region has been well conserved for historical reasons and thus can provide significant ecological services for the province74.
However, owing to rapid population and economic growth in the Pearl River Delta region since the early 1980s, primitive forests there have rapidly declined and been replaced by plantations, which are used to protect and regulate the urban environment; as a result, the increase in forest carbon storage in this region was second only to that of North Guangdong74. Vegetation carbon sinks in West Guangdong and East Guangdong were low, which could be attributed to the large area of forest land occupied by young plantations37. Relationships between carbon emissions and social-economic factors Our study found strong linear correlations between social-economic factors and carbon emissions on the provincial and city scales (Fig. 5a–d). Economic growth has a positive effect on carbon emissions, as shown in other related studies75,76,77. Although rapid economic growth led by industrialization and urbanization has remarkably enhanced per capita incomes and living standards, the considerable energy consumption involved results in a relatively high and increasing trend of carbon emissions in the region78. Guangdong Province needs to change its current mode of economic development by adjusting land policies to create a land-saving, environment- and eco-friendly land use system79. Moreover, population growth also significantly increases carbon emissions at the provincial and city levels. Guangdong's permanent resident population increased from 91.94 to 106.44 million persons from 2005–201380. To satisfy the basic living requirements of a growing population, more energy will be consumed to meet industry, electricity, and transportation needs81, and thus generate more carbon emissions. Historical data show that global population growth is synchronized with growth in energy consumption and carbon emissions82,83. Therefore, as the permanent resident population increases, mainly because many migrant workers enter Guangdong Province from other, less developed provinces of China, annual carbon emissions also correspondingly increase. According to our study, the relationships between GDP per capita and CEI per unit GDP were half-U-shaped curves (Fig. 6a,b), indicating that carbon emission efficiency in Guangdong Province improved as the economy grew. However, according to the environmental Kuznets curve (EKC) hypothesis, the relationship between economic growth and environmental pollution follows an inverted U-shaped curve, indicating that in the early stages of economic development, the environment deteriorates with economic growth71. When economic development reaches a certain stage, also referred to as the threshold point, environmental degradation is curbed and the situation improves with further economic development84. This is because, as the economy grows, technology improves and growth becomes less dependent on large investments and heavy energy consumption. Guangdong's economic growth has clearly exceeded this threshold; thus, carbon emissions intensity decreases as the economy grows. Therefore, by selecting a low-carbon development path, economic output can grow faster than energy consumption, and carbon emissions intensity will eventually begin to decrease, creating sustainable and healthy economic growth. In this study, we estimated carbon emissions and sinks in economically developed areas of China from a spatial-temporal perspective, taking Guangdong as a case study.
However, there are some uncertainties in the carbon estimations presented here. Our study used a carbon emission factor method to indirectly calculate carbon emissions from built-up land construction, based on the premise that a large amount of carbon emissions from energy consumption arise during the building construction and demolition phases85,86. Given that China's energy carbon emission factors are still under study and published standard factors are still lacking, our study, in accordance with most other related research conducted in China, mainly adopted the emission factors from the IPCC25. This calculation method may not accurately reflect the actual conditions in Guangdong Province, but it makes our research comparable with previous research25. In addition, our study did not consider carbon emissions from the building operation phase. According to You et al.85, over a 50-year building service life, carbon emissions from the building operation phase are 5–7 times higher than those of the building construction and construction materials preparation phases. According to our calculation, the amount of carbon emissions on new built-up land was 98.60 ± 23.53 TgC yr−1 during 2005–2013 at the provincial level, contributing more than 83% to annual total carbon emissions (Table 2). This proportion is much larger than previous estimates85,87,88. Chuai et al.25 estimated that anthropogenic carbon emissions from the construction sector increased from 39.05 to 1037.21 TgC yr−1 in China between 1995 and 2010. Therefore, the estimate of carbon emissions from built-up land construction in our study was approximately a tenth of the national counterpart. Our study also estimated carbon emissions from cropland utilisation, at 12.40 ± 0.27 TgC yr−1 from 2005–2013 in the province, which is larger than the estimate by Lu et al.24. This is mainly because they did not consider the CH4 and CO2 emissions from rice production. Zhang et al.46 estimated the CH4 emissions from irrigated rice cultivation in China using the CH4MOD model, at 6.62 Tg CH4 yr−1 from 2005 to 2009, which is 4.38 times our estimate for Guangdong Province over the same period. However, considering that the total paddy area in China is 15.79 times that of Guangdong Province89, CH4 emissions in our study may have been overestimated to a certain extent. In this study, we did not consider the effect of soil organic carbon (SOC) because SOC changes may need much more time than vegetation changes25,26. Our study mainly used a carbon sink factor method to estimate carbon sinks from forest land and grassland, assuming that the carbon sink factor of a given land type remained unchanged over time and without considering spatial variation90. Some scholars have pointed out that such literature-based carbon factors may only represent carbon uptake at a specific time and fail to consider changes caused by environmental impacts (e.g. climate change)5. However, in our study, the main environmental factors (i.e. temperature and precipitation) did not show significant interannual variations across the whole province during 2005–2013 (Supplementary Fig. S7), indicating that environmental conditions remained relatively stable during the study period. Moreover, sensitivity analysis91 showed that the total carbon sinks estimated in this study area were relatively inelastic with respect to the carbon factor values (Supplementary Table S7).
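A generic one-at-a-time perturbation of the kind such a sensitivity check might use is sketched below. The areas and sink factors are placeholders, so the numbers it prints are not the study's results (those are reported in Supplementary Table S7); the sketch only shows how the response of the total sink to each factor can be screened.

```python
# One-at-a-time sensitivity sketch: perturb each carbon sink factor and report
# the relative change in the total sink. Areas and factors are placeholders,
# not the values used in the study.

def total_sink(area_forest_ha, area_grass_ha, f_forest, f_grass):
    """Eq. (1)-style sink: area x carbon sink factor, summed over vegetation types."""
    return area_forest_ha * f_forest + area_grass_ha * f_grass

base = {"area_forest_ha": 10.0e6, "area_grass_ha": 0.8e6,   # placeholder areas (ha)
        "f_forest": 4.9, "f_grass": 0.9}                     # placeholder factors (MgC/ha/yr)
s0 = total_sink(**base)

for name in ("f_forest", "f_grass"):
    for shift in (-0.10, +0.10):                       # +/- 10 % perturbation
        case = dict(base, **{name: base[name] * (1 + shift)})
        s1 = total_sink(**case)
        rel = (s1 - s0) / s0
        print(f"{name} {shift:+.0%}: total sink changes {rel:+.1%} "
              f"(response ratio {rel / shift:.2f})")
```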
In other words, the adjustments to carbon factors associated with forest land and grassland had very minimal impact on the carbon estimations over the study area. Hence, in theory, our method provides a simple and practical method for the calculation of large-scale carbon sinks, which has been widely used across China23,31. In fact, with the maturation and development of forests, the carbon sequestration capacity of forests in Guangdong Province will continue to grow, which leads to a potential huge carbon sink37,73. Meanwhile, the increase of forest biomass carbon storage presented a certain degree of spatial heterogeneity throughout the province37. The average carbon storage was larger in North Guangdong than in other three regions from 2005 to 2013. Therefore, in the future, additional studies will be needed to produce modified model parameters that reflect the actual situation in Guangdong Province and its four economic regions by conducting the field experiments, which will improve the accuracy of the estimation on carbon sinks38. Furthermore, due to data limitation, carbon sinks of logged forests, bamboo forest, farmland protection forest, and afforested trees were not calculated in this study. In addition, shrubland biomass served as a net carbon sink of 0.022 ± 0.01 PgC yr−1, accounting for approximately 30% of forest sink in China in the 1980s42. However, shrubland in Guangdong Province was not considered because of the extremely scarce information on the carbon balance of this land cover type. Similarly, due to the lack of relevant experimental data, the carbon effect of water bodies was not considered here, although it can be regarded as a weak carbon sink land cover type69. In this regard, our estimate of carbon sink in Guangdong Province during 2005–2013 may be underestimated to a certain extent. Moreover, it is worth noting that the overall carbon sink, in addition to vegetation, includes soil92; this was excluded because estimates of the soil carbon sink, especially natural soil, are rarely reported in China since data from repeated inventories is lacking42. In addition to the qualitative descriptions on uncertainty of carbon estimation, it is necessary to discuss the uncertainty in a relatively quantitative way through comparison to other related studies. Xu et al.69 estimated the carbon sinks of forest land and grassland in Pearl River Delta by applying the improved CASA model and photosynthetic reaction equation, ranging from 9.83 TgC yr−1 to 10.50 TgC yr−1 during 2005–2013, which is lower than our estimate of 14.49 TgC yr−1 to 14.17 TgC yr−1 over the same period. Few studies have directly estimated the terrestrial carbon sinks over the whole Guangdong Province. However, as noted, net primary productivity (NPP), as an important component of the ecosystem carbon cycle, reflects not only the production capability of plant communities under natural conditions, but also characterizes the carbon source/sink function of ecosystems93. In fact, carbon sinks of forests and grasslands gradually increase with the increase of NPP43. Therefore, it is reasonable to compare our estimate of carbon sinks in Guangdong Province with related NPP estimates by previous studies. For example, a recent assessment conducted by Pei et al.94 simulated an average annual vegetation NPP of 94.77 TgC yr−1 in Guangdong Province for the period 2000–2009 using a Biome-BGC model, nearly double our estimate during 2005–2013 (53.68 TgC yr−1). 
A large portion of the difference was due to their estimate comprising the NPP of cropland and shrubland, without subtraction of heterotrophic respiration. Using an improved CASA model with MODIS NDVI, meteorological data and land-use map, Luo and Wang95 estimated the terrestrial NPP in Guangdong Province was between 103.50–166.80 TgC yr−1 during 2001–2007, which is more than twice the amount of our results. A separate analysis of inventory-satellite-based method, ecosystem modeling and atmospheric inversions estimated that the net terrestrial carbon sink was between 18.6–27.4 TgC yr−1 in South China (i.e. Guangdong, Guangxi and Hainan Provinces) during the 1980s and 1990s42. In this regard, our study likely overestimated the carbon sinks in Guangdong, which may be attributed to the relatively higher carbon sink factors. Additionally, Janssens et al.96 estimated a net carbon sink between 135–205 TgC yr−1 in the terrestrial biosphere of geographic Europe, the equivalent of 7–12% of carbon emitted by anthropogenic sources. By contrast, our study estimated that up to 38–72% of Guangdong's carbon emissions produced by built-up land construction and cropland utilisation has been removed by carbon sequestration in forests and grassland. This proportion is much larger than Janssens's estimate. Moreover, Sleeter et al.97 estimated the terrestrial carbon sinks in the conterminous United States to be 254 TgC yr−1 on average between 1973–2010, almost four times larger than our estimate, while the land area of the conterminous United States is 42 times higher than that of Guangdong Province. Our study attempted to provide a comprehensive accounting of carbon estimations at multiple spatial scales during 2005–2013, taking Guangdong as an example. We also explored the relationship between carbon emissions and social-economic development to find out the factors influencing the quantity and direction of carbon emissions in Guangdong Province. An interesting phenomenon relating economic growth and carbon emissions intensity was also discussed in this paper. Although there existed considerable uncertainties in carbon emissions/sinks estimation, and we did not discuss all possible factors influencing carbon emissions, our study nevertheless provides a new insight for Guangdong Province to achieve carbon reduction goals and realize low-carbon development. We believe further study is needed to improve the accuracy of estimation on carbon sinks and emissions in this area and to analyse more social-economic factors related to carbon emissions. Additionally, more attention should be paid to the optimal and feasible land use configurations, effective land use and urban planning, for the purpose of combining economic growth and ecological conservation98. IPCC. Climate Change 2007: The physical science basis (Cambridge University Press, 2007). IPCC. Land Use, Land-Use Change and Forestry (Cambridge University Press, 2000). Le Quéré, C. et al. Global carbon budget 2016. Earth Syst. Sci. Data. 8(2), 605 (2016). Friedlingstein, P. et al. Update on CO2 emissions. Nat. Geosci. 3, 811–812 (2010). Houghton, R. A. et al. Carbon emissions from land use and land-cover change. Biogeosciences. 9(12), 5125–5142 (2012). Friedlingstein, P. & Prentice, I. C. Carbon-climate feedbacks: a review of model and observation based estimates. Curr. Opin. Environ. Sustain. 2(4), 251–257 (2010). Shevliakova, E. et al. Carbon cycling under 300 years of land use change: Importance of the secondary vegetation sink. Glob. Biogeochem. Cycle. 
23(2), GB2022, https://doi.org/10.1029/2007GB003176 (2009). Huang, M. & Asner, G. P. Long-term carbon loss and recovery following selective logging in Amazon forests. Glob. Biogeochem. Cycle. 24, GB3028, https://doi.org/10.1029/2009GB003727 (2010). DeFries, R. S. Carbon emissions from tropical deforestation and regrowth based on satellite observations for the 1980s and 1990s. Proc. Natl. Acad. Sci. USA 99(22), 14256–61 (2002). Song, X. P., Huang, C., Saatchi, S. S., Hansen, M. C. & Townshend, J. R. Annual carbon emissions from deforestation in the Amazon Basin between 2000 and 2010. Plos One 10(5), e0126754, https://doi.org/10.1371/journal.pone.0126754 (2015). Loveland, T. R. et al. Development of a global land cover characteristics database and IGBP DISCover from 1 km AVHRR data. Int. J. Remote Sens. 21(6–7), 1303–1330 (2000). Hansen, M. C., DeFries, R. S., Townshend, J. R. & Sohlberg, R. Global land cover classification at 1 km spatial resolution using a classification tree approach. Int. J. Remote Sens. 21(6–7), 1331–1364 (2000). Arino, O. et al. The most detailed portrait of Earth. Eur. Space Agency 136, 25–31 (2008). Hansen, M. C. et al. High-resolution global maps of 21st-century forest cover change. Science. 342(6160), 850–853 (2013). Chen, J., Ban, Y. F. & Li, S. N. China: Open access to Earth land-cover map. Nature. 514(7523), 434–434 (2014). Herold, M., Mayaux, P., Woodcock, C. E., Baccini, A. & Schmullius, C. Some challenges in global land cover mapping: An assessment of agreement and accuracy in existing 1 km datasets. Remote Sens. Environ. 112(5), 2538–2556 (2008). Tchuenté, A. T. K., Roujean, J. L. & Jong, S. M. D. Comparison and relative quality assessment of the GLC2000, GLOBCOVER, MODIS and ECOCLIMAP land cover data sets at the African continental scale. Int. J. Appl. Earth Obs. Geoinf. 13(2), 207–219 (2011). Arora, V. K. & Boer, G. J. Uncertainties in the 20th century carbon budget associated with land use change. Glob. Change Biol. 16(12), 3327–3348 (2010). Houghton, R. A. & Nassikas, A. A. Global and regional fluxes of carbon from land use and land cover change 1850–2015. Glob. Biogeochem. Cycle. 31(3), 456–472 (2017). Harris, N. L. et al. Baseline map of carbon emissions from deforestation in tropical regions. Science 336(6088), 1573–1576 (2012). Houghton, R. A. & Hackler, J. L. Sources and sinks of carbon from land-use change in China. Glob. Biogeochem. Cycle. 17(2), 1034, https://doi.org/10.1029/2002GB001970 (2003). Zhang, M. et al. Impact of land use type conversion on carbon storage in terrestrial ecosystems of China: A spatial-temporal perspective. Sci Rep. 5, 10233, https://doi.org/10.1038/srep10233 (2015). Zhao, R. Q. et al. Carbon emission of regional land use and its decomposition analysis: case study of Nanjing city, China. Chinese. Geogr. Sci. 25(2), 198–212 (2015). Lu, X. H., Kuang, B., Li, J. H. J. & Zhang, Z. Dynamic evolution of regional discrepancies in carbon emissions from agricultural land utilization: evidence from Chinese provincial data. Sustainability 10, 552 (2018). Chuai, X. et al. Spatiotemporal changes of built-up land expansion and carbon emissions caused by the Chinese construction industry. Environ. Sci. Technol. 49(21), 13021–13030 (2015). Chuai, X. et al. A preliminary study of the carbon emissions reduction effects of land use control. Sci. Rep. 6, 36901, https://doi.org/10.1038/srep36901 (2016). Di, X. H., Hou, X. Y., Wang, Y. D. & Wu, L. Spatial-temporal characteristics of land use intensity of coastal zone in China during 2000–2010. 
Chinese. Geogr. Sci. 25(1), 51–61 (2015). Ellis, J. T., Spruce, J. P., Swann, R. A., Smoot, J. C. & Hilbert, K. W. An assessment of coastal land-use and land-cover change from 1974–2008 in the vicinity of Mobile Bay, Alabama. J. Coast. Conserv. 15, 139–149 (2011). Wang, W., Kuang, Y. & Huang, N. Study on the decomposition of factors affecting energy-related carbon emissions in Guangdong province, China. Energies. 4(12), 2249–2272 (2011). Kuang, W., Liu, J., Dong, J., Chi, W. & Zhang, C. The rapid and massive urban and industrial land expansions in China between 1990 and 2010: A CLUD-based analysis of their trajectories, patterns, and drivers. Landsc. Urban Plan. 145, 21–33 (2016). Chuai, X. et al. Land use, total carbon emissions change and low carbon land management in Coastal Jiangsu, China. J Clean Prod. 103, 77 (2014). Wang, P., Wu, W., Zhu, B. & Wei, Y. Examining the impact factors of energy-related CO2 emissions using the STIRPAT model in Guangdong Province, China. Appl. Energy. 106, 65–71 (2013). Wu, C. et al. Effects of endogenous factors on regional land-use carbon emissions based on the grossman decomposition model: a case study of Zhejiang province, China. Environ. Manage. 55(2), 467–478 (2015). Rounsevell, M. & Reay, D. Land use and climate change in the UK. Land Use Pol. 26, S160–S169 (2009). Böttcher, H., Frank, S., Havlík, P. & Elbersen, B. Future GHG emissions more efficiently controlled by land-use policies than by bioenergy sustainability criteria. Biofuels Bioprod. Biorefining. 7(2), 115–125 (2013). Zhou, C. et al. Impacts of a large-scale reforestation program on carbon storage dynamics in Guangdong, China. Forest Ecol. Manag. 255(3), 847–854 (2008). Ren, H. et al. Spatial and temporal patterns of carbon storage from 1992 to 2002 in forest ecosystems in Guangdong, southern China. Plant Soil. 363, 123–138 (2013). Xu, Q., Dong, Y. X. & Yang, R. Urbanization impact on carbon emissions in the Pearl River Delta region: kuznets curve relationships. J. Clean Prod. 180, 514–523 (2018). National Bureau of Statistics of China. Comparison of Gross Domestic Product Among Provinces in 2017. Available at, http://data.stats.gov.cn/easyquery.htm (2017). Lu, L. & Wei, Y. D. Domesticating globalisation, new economic spaces and regional polarisation in Guangdong Province, China. Tijdschr. Econ. Soc. Geogr. 98(2), 225–244 (2007). Pacala, S. W. et al. Consistent land-and atmosphere-based US carbon sink estimates. Science 292(5525), 2316–2320 (2001). Piao, S. L. et al. The carbon balance of terrestrial ecosystems in China. Nature 458(7241), 1009–1014 (2009). Fang, J. Y., Guo, Z. D., Piao, S. L. & Chen, A. P. Terrestrial vegetation carbon sinks in China, 1981–2000. Sci. China Ser. D. 50, 1341–1350 (2007). Yan, X. Y., Cai, Z. C., Ohara, T. & Akimoto, H. Methane emission from rice fields in mainland China: Amount and seasonal and spatial distribution. J. Geophys. Res.-Atmos. 108(D16), 4505, https://doi.org/10.1029/2002JD003182 (2003). Cai, Z. C., Kang, G. D., Tsuruta, H. & Mosier, A. Estimate of CH4 emissions from year-round flooded rice field during rice growing season in China. Pedosphere. 15(1), 66–71 (2005). Zhang, W., Yu, Y., Huang, Y., Li, T. & Wang, P. Modeling methane emissions from irrigated rice cultivation in China from 1960 to 2050. Glob. Change Biol. 17(12), 3511–3523 (2011). Kudo, Y., Noborio, K., Shimoozono, N., Kurihara, R. & Minami, H. Greenhouse gases emission from paddy soil during the fallow season with and without winter flooding in central Japan. Paddy Water Environ. 
15(1), 1–4 (2016). Intergovernmental Panel on Climate Change (IPCC). National Greenhouse Gas Inventories Programme. In IPCC Guidelines for National Greenhouse Gas Inventories; Eggleston, H. S., Buendia, L., Miwa, K., Ngara, T. & Tanabe, K., Eds; Institute for Global Environmental Strategies: Hayama, Japan (2006). Kang, G., Cai, Z. & Feng, X. Importance of water regime during the non-rice growing period in winter in regional variation of CH4, emissions from rice fields during following rice growing period in China. Nutr. Cycl. Agroecosyst. 64, 95–100 (2002). Liu, H. et al. Characteristics of CO2, CH4 and N2O emissions from winter-fallowed paddy fields in hilly area of South China. Chinese. J. Appl. Ecol. 18(1), 57–62 (2007). Wang, S. et al. Characteristic analysis of CO2 fluxes from a rice paddy ecosystem in a subtropical region. Acta Scientiae Circumstantiae. 31(1), 217–224 (2011). West, T. O. & Marland, G. A synthesis of carbon sequestration, carbon emissions, and net carbon flux in agriculture: comparing tillage practices in the United States. Agric. Ecosyst. Environ. 91(1), 217–232 (2002). Zhao, R. Q. & Qin, M. Z. Temporospatial variation of partial carbon source/sink of farm land ecosystem in coastal China. J. Ecol. Rural Environ. 23(2), 1–6 (2007). Luo, Y., Long, X., Wu, C. & Zhang, J. Decoupling CO2 emissions from economic growth in agricultural sector across 30 Chinese provinces from 1997 to 2014. J. Clean Prod 159, 220–228 (2017). Yan, H., Shen, Q., Fan, L. C., Wang, Y. & Zhang, L. Greenhouse gas emissions in building construction: A case study of One Peking in Hong Kong. Build. Environ 45(4), 949–955 (2010). Canadell, J. G. et al. Contributions to accelerating atmospheric CO2 growth from economic activity, carbon intensity, and efficiency of natural sinks. Proc. Natl. Acad. Sci. USA 104, 18866–18870 (2007). Grossman, G. M. & Krueger, A. B. Environmental impacts of a North American Free Trade Agreement. Soc. Sci. Electron. Publ. 8(2), 223–250 (1991). Antrop, M. Landscape change and the urbanization process in Europe. Landsc. Urban Plan. 67(1), 9–26 (2004). Wu, K. Y. & Zhang, H. Land use dynamics, built-up land expansion patterns, and driving forces analysis of the fast-growing Hangzhou metropolitan area, eastern China (1978–2008). Appl. Geogr. 34, 137–145 (2012). Lichtenberg, E. & Ding, C. Assessing farmland protection policy in China. Land Use Pol. 25(1), 59–68 (2008). Lin, M., Ma, X., Xie, S., Chen, Z. & Xu, Y. Dynamic change in forest resources and drives in Guangdong province. Ecol. Environ. 17, 785–791 (2008). Li, P., Huang, Z. L., Ren, H., Liu, H. X. & Wang, Q. The evolution of environmental management philosophy under rapid economic development in China. Ambio. 40, 88–92 (2011). Churkina, G. Modeling the carbon cycle of urban systems. Ecol. Model. 216(2), 107–113, https://doi.org/10.1016/j.ecolmodel.2008.03.006 (2008). Zhang, C. et al. Impacts of urbanization on carbon balance in terrestrial ecosystems of the Southern United States. Environ. Pollut. 164, 89–101, https://doi.org/10.1016/j.envpol.2012.01.020 (2012). Huang, Y., Xia, B. & Yang, L. Relationship study on land use spatial distribution structure and energy-related carbon emission intensity in different land use types of Guangdong, China, 1996–2008. The Scientific World J. 2013 (2013). Zhao, R. Q. & Huang, X. J. Carbon emission and carbon footprint of different land use types based on energy consumption of Jiangsu Province. Geogr. Res. 29(9), 1639–1649 (2010). Achat, D. L. et al. 
Forest soil carbon is threatened by intensive biomass harvesting. Sci. Rep. 5, 15991 (2015). Woodall, C. W. et al. Monitoring Network Confirms Land Use Change is a Substantial Component of the Forest Carbon Sink in the eastern United States. Sci. Rep. 5, 17028 (2015). Xu, Q., Dong, Y. & Yang, R. Influence of different geographical factors on carbon sink functions in the Pearl River Delta. Sci. Rep. 7(1), 110, https://doi.org/10.1038/s41598-017-00158-z (2017). Acaravci, A. & Ozturk, I. On the relationship between energy consumption, CO2 emissions and economic growth in Europe. Energy 35(12), 5412–5420 (2010). Zhang, X. P. & Cheng, X. M. Energy consumption, carbon emissions, and economic growth in China. Ecol. Econ. 68(10), 2706–2712 (2009). Fong, W. K., Hiroshi, M. & Ho, C. S. Energy consumption and carbon dioxide emission considerations in the urban planning process in Malaysia. Build. Environ. 44(7), 1528–1537 (2009). Zhang, L., Lin, W., Wang, Z. & Na, Y. U. Spatial distribution pattern of carbon storage in forest vegetation of Guangdong province. Ecol. Environ. Sci. 19(6), 1295–1299 (2010). Deng, J. F. & Lin, Z. D. The development planning of Guangdong forestry. (Chinese Forestry Publishing House, 2009). Dong, F., Long, R., Chen, H., Li, X. & Yang, Q. Factors affecting regional per-capita carbon emissions in China based on an LMDI factor decomposition model. Plos One 8(12), e80888 (2013). Shanthini, R. & Perera, K. Is there a cointegrating relationship between Australia's fossil-fuel based carbon dioxide emissions per capita and her GDP per capita? Int. J. Oil, Gas Coal Technol. 3, 1753–3309 (2010). Lozano, S. & Gutiérrez, E. Non-parametric frontier approach to modelling the relationships among population, GDP, energy consumption and CO2 emissions. Ecol. Econ. 66(4), 687–699 (2008). Soytas, U. & Sari, R. Energy consumption, economic growth, and carbon emissions: challenges faced by an EU candidate member. Ecol. Econ. 68(6), 1667–1675 (2009). Li, Y., Huang, X. J. & Zhen, F. Effects of land use patterns on carbon emission in Jiangsu Province. Tran. Chin. Soc. Agric. Eng. 24(Supp. 2), 102–107 (2008). Statistics Bureau of Guangdong Province. 2014 Guangdong Statistical Yearbook. Available at, http://www.gdstats.gov.cn/tjnj/2014/directory/content.html (2014). Shen, L., Cheng, S., Gunson, A. J. & Wan, H. Urbanization, sustainability and the utilisationof energy and mineral resources in China. Cities 22(4), 287–302 (2005). Dietz, T. & Rosa, E. A. Effects of population and affluence on CO2 emissions. Proc. Natl. Acad. Sci. USA 94(1), 175–179 (1997). Jiang, L. & Hardee, K. How do recent population trends matter to climate change? Popul. Res. Policy Rev. 30(2), 287–312 (2011). Narayan, P. K., Saboori, B. & Soleymani, A. Economic growth and carbon emissions. Econ. Model. 53, 388–397 (2016). You, F. et al. Carbon emissions in the life cycle of urban building system in China-a case study of residential buildings. Ecol. Complex. 8(2), 201–212 (2011). Acquaye, A. A. & Duffy, A. P. Input-output analysis of Irish construction sector greenhouse gas emissions. Build. Environ. 45(3), 784–791 (2010). Chau, C. K., Leung, T. M. & Ng, W. Y. A review on life cycle assessment, life cycle energy assessment and life cycle carbon emissions assessment on buildings. Appl. Energy. 143, 395–413 (2015). Dimoudi, A. & Tompa, C. Energy and environmental indicators related to construction of office buildings. Resour. Conserv. Recy. 53(1), 86–95 (2008). Li, C. et al. 
Modeling impacts of farming management alternatives on CO2, CH4, and N2O emissions: A case study for water management of rice agriculture of China. Global Biogeochem. Cycles. 19, GB3010, https://doi.org/10.1029/2004GB002341 (2005). Zhang, C. et al. China's forest biomass carbon sink based on seven inventories from 1973 to 2008. Clim. Change. 118, 933–948 (2013). Abulizi, A. et al. Land-use change and its effects in Charchan Oasis, Xinjiang, China. Land Degrad. Dev. 28(1), 106–115 (2017). Xie, Z. et al. Soil organic carbon stocks in china and changes from 1980s to 2000s. Glob. Change Biol. 13(9), 1989–2007 (2007). Pei, F., Xia, L. I., Liu, X. & Xia, G. Dynamic simulation of urban expansion and their effects on net primary productivity: a scenario analysis of Guangdong province in China. J. Geo-Inf. Sci. 17(4), 469–477 (2015). Pei, F., Li, X., Liu, X., Lao, C. & Xia, G. Exploring the response of net primary productivity variations to urban expansion and climate change: A scenario analysis for Guangdong Province in China. J. Environ. Manage. 150, 92–102 (2015). Luo, Y. & Wang, C. L. Valuation of the net primary production of terrestrial ecosystems in Guangdong Province based on remote sensing. Ecol. Environ. Sci. 18(4), 1467–1471 (2009). Janssens, I. A. et al. Europe's terrestrial biosphere absorbs 7 to 12% of European anthropogenic CO2 emissions. Science. 300(5625), 1538–1542 (2003). Sleeter, B. M. et al. Effects of contemporary land-use and land-cover change on the carbon balance of terrestrial ecosystems in the United States. Environ. Res. Lett. 13(4), 045006, https://doi.org/10.1088/1748-9326/aab540 (2018). Abid, M. The close relationship between informal economic growth and carbon emissions in Tunisia since 1980: the (ir) relevance of structural breaks. Sustain. Cities Soc. 15, 11–21 (2015). Financial assistance for this work was supported by the grant from the National Key Research and Development Project (2016YFC0502501), the National Key R&D Program of China (2017YFA0603002), the National High Technology Research and Development Program of China (863 Program No. 2014AA06A511), the Natural Science Foundation of Hebei Province (D2015207008), Talent Training Project of Hebei Province (A201400215), Young Prominent Talent Project of Hebei Province Higher School (BJ2014021), and National Science and Technology Major Project (20-Y30B17-9001-14/16). The State Key Laboratory of Remote Sensing Science, Institute of Remote Sensing and Digital Earth, Chinese Academy of Sciences, Beijing, 100101, P.R. China Jie Pei, Zheng Niu, Li Wang & Ni Huang University of Chinese Academy of Sciences, Beijing, 100049, P.R. China & Jing Geng College of Management Science and Engineering, Hebei University of Economics and Business, Shijiazhuang, 050061, P.R. China Li Wang & Yan-Bin Wu Department of Geographical Sciences, University of Maryland, College Park, Maryland, 20742, USA Xiao-Peng Song Key Laboratory of Ecosystem Network Observation and Modeling, Institute of Geographic Sciences and Natural Resources Research, Chinese Academy of Sciences, Beijing, 100101, China Jing Geng Key Area Planning Construction and Management Bureau of Longgang, Shenzhen, Shenzhen, 518116, P.R. China Hong-Hui Jiang J.P. initiated the concept of the study. J.P. and L.W. designed the research. J.P.
conducted the analysis and drafted the manuscript. Z.N. and L.W. supervised the project. X.-P.S., N.H., J.G., Y.-B.W. and H.-H.J. provided strategic advice and comments on the manuscript. J.P., L.W. and N.H. revised the manuscript. Corresponding authors Correspondence to Zheng Niu or Li Wang. The authors declare no competing interests. Publisher's note: Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations. Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons license, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons license and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this license, visit http://creativecommons.org/licenses/by/4.0/.
Genetic and Environmental Effects on Carcass Traits of Japanese Brown Cattle Sri Rachma Aprilita Bugiwati, T.D.;Harada, H.;Fukuh, R. 1 Studies on the genetic and environmental effects on M. longissimus thoracis area (MLTA), fat thickness (SFT), rib thickness (RT) and marbling score (MS) were conducted on 21,086 steers and 7,151 heifers of the Japanese Brown breed. All carcass traits were affected significantly (p<0.01) by sire, sex and initial year effects. Both the MLTA and MS of steers were greater than those of heifers, with differences of $1.4cm^2$ for MLTA and 0.05 for MS, respectively. Cattle that started fattening in winter tended to have larger MLTA, higher MS and thicker SFT and RT than those started in other seasons. MLTA increased from 1987 to 1989 (by about $1.9cm^2$), decreased until 1994 (by about $2.4cm^2$) and then increased again up to 1995 (by about $1.5cm^2$). MS was nearly constant from 1987 to 1991 (about "1") and then decreased up to 1995 (about "1"). Heritability estimates of MLTA, RT, SFT and MS ranged from 0.22 to 0.36. Genetic and phenotypic correlations among MLTA, RT, SFT and MS were positive and ranged from 0.05 to 0.62 and from 0.03 to 0.32, respectively, except that between SFT and MLTA, which was negative (-0.14 and -0.03). Effects of Sperm Number and Semen Type on Sow Reproductive Performance in Subtropical Area Kuo, Y.H.;Hnang, S.Y.;Lee, K.H. 6 The purpose of this study was to evaluate the effect of a lower number of sperm $(3{\times}10^9)$ per dose of liquid semen and of the type of semen used in artificial insemination (AI) on sow reproductive performance in a subtropical area. Semen was supplied by two commercial AI centers. A total of 671 female pigs from seven farms were inseminated with either $3{\times}10^9$ or $5{\times}10^9$ sperm per dose. Two types of semen were used: heterospermic semen from two boars of the same breed and homospermic semen from a single boar. After insemination, conception rate, farrowing rate, total litter size, and number of dead piglets were recorded. The analysis of variance indicated that there was no significant effect of interactions between pig farm, type of semen, or number of sperm on any of the traits measured. There were significant differences in conception rate, farrowing rate, and total litter size among pig farms (p<0.05). The effect of the number of sperm per dose of liquid semen ($3{\times}10^9$ or $5{\times}10^9$) was not significant. Sows inseminated with homospermic semen showed significantly higher conception and farrowing rates but significantly lower total litter size (p<0.05). In conclusion, the number of sperm per dose of liquid semen for AI could be lowered to $3{\times}10^9$ without affecting reproductive performance in subtropical areas like Taiwan. Magnetic Orientations of Bull Sperm Treated by DTT or Heparin Suga, D.;Shinjo, A.;Kumianto, E.;Nakada, T. 10 This paper describes the magnetic orientation of intact and demembranated bull sperm treated with DTT or heparin in a 5,400 G static field. Semen samples collected from four bulls (Japanese Black) were mixed to the same sperm density. One percent Triton X-100 was used to extract the plasma membrane. The intact and demembranated sperm suspensions were treated with 20, 200 or 2,000 mM DTT, or with 100, 1,000 or 10,000 units heparin solutions, at $4{^{\circ}C}$ for 6 days. The decondensation of the sperm nuclei treated by DTT or heparin was examined by measuring the sperm head area at 1, 3, and 6 days.
After measuring the area, each sperm sample was exposed to a 5,400 G static magnetic field generated by Nd-Fe-B permanent magnets for 24 hours at room temperature. Results showed that the decondensation of bull sperm nuclei was not induced by the heparin treatment, however, incomplete decondensation was induced by the DTT treatment. During the magnetic orientation, bull sperms treated by DTT or heparin had low percentages of long axis perpendicular to the magnetic lines of force. However, different aspects were obtained for long axis perpendicular orientations following treatment of DTT or heparin. Through the DTT treatment, the decline of long axis perpendicularly oriented percentages was due to the increase of long axis parallel orientation with the head of the flat plane perpendicular to the magnetic lines of force, whereas, using the heparin treatment, the decline of long axis perpendicular orientation was due to the increment of long axis parallel orientation with the head of the flat plane parallel to the magnetic lines of force. Also, percentages of the head of the flat plane perpendicular were decreased by the heparin treatment. These findings suggest that maintaining the structure of protamine in the chromatin is necessary for the sperm head to orient with its flat plane perpendicular, and maintaining the disulfide bond in the chromatin is necessary for the long axis of sperm to orient perpendicularly. Effect of Feeding Different Ratios of Green Fodder and Straw Supplemented with Wheat Bran on the Performance of Male Crossbred Calves Sahoo, A.;Chaudhary, L.C.;Agarwal, N.;Kamra, D.N.;Pathak, N.N. 19 Twenty male crossbred calves of about one year of age (average body weight, 196 kg) were distributed in four equal groups following complete randomized design. Wheat bran was supplemented to four different combinations of wheat straw and green fodder (Sorghum vulgare) at 40:60, 30:70, 20:80 and 10:90 ratios (on as fed basis) for the feeding of animals in Group 1, 2, 3 and 4, respectively. The feeding trial was continued for a period of 70 days including one metabolism trial of 6 days collection of feed, faeces and urine sample to determine the intake and utilization of nutrients. The intakes (g/kg $W^{0.75}$) of DM, TDN and CP were $93.0{\pm}1.8$, $55.5{\pm}1.1$ and $9.51{\pm}0.18$ in Group 1; $98.0{\pm}1.8$, $59.6{\pm}1.1$ and $10.33{\pm}0.19$ in Group 2; $98.1{\pm}2.4$, $60.5{\pm}1.5$ and $10.79{\pm}0.26$ in Group 3; and $97.7{\pm}1.7$, $59.1{\pm}1.0$ and $10.78{\pm}0.19$ in Group 4, respectively. The digestibility of nutrients did not differ significantly among the groups. Relatively higher nutrient intake and balances of nitrogen reflected non-significantly high her live weight gain in the later three groups (436, 439 and 464 g, respectively) as compared to Group 1 (400 g). The DM intake remained unchanged by increasing the proportion of green fodder beyond 20:80 ratio and thus was assessed to be satisfactory for optimum productivity in animals. Effect of Grass Lipids and Long Chain Fatty Acids on Cellulose Digestion by Pure Cultures of Rumen Anaerobic Fungi, Piromyces rhizinflata B157 and Orpinomyces joyonii SG4 Lee, S.S.;Ha, J.K.;Kim, K.H.;Cheng, K.J. 23 The effects of grass lipids and long chain fatty acids (LCFA; palmitic, stearic and oleic acids), at low concentrations (0.001~0.02%), on the growth and enzyme activity of two strains of anaerobic fungi, monocentric strain Piromyces rhizinflata B157 and polycentric strain Orpinomyces joyonii SG4, were investigated. 
The addition of grass lipids to the medium significantly (p<0.05) decreased filter paper (FP) cellulose digestion, cellulase activity and fungal growth compared to control treatment. However, LCFA did not have any significant inhibitory effects on fungal growth and enzyme activity, which, however, were significantly (p<0.05) stimulated by the addition of oleic acid as have been observed in rumen bacteria and protozoa. This is the first report to our knowledge on the effects of LCFA on the rumen anaerobic fungi. Continued work is needed to identify the mode of action of LCFA in different fungal strains and to verify whether these microorganisms have ability to hydrogenate unsaturated fatty acids to saturated fatty acids. Optimal Lysine:DE Ratio for Growing Pigs of Different Sexes Chang, W.H.;Kim, J.D.;Xuan, Z.N.;Cho, W.T.;Han, In K.;Chae, B.J.;Paik, In K. 31 This study was conducted to evaluate changes in the lysine to digestible energy (DE) ratio on performance, apparent ileal and fecal nutrient digestibilities as well as blood urea nitrogen (BUN), and to estimate optimal lysine:DE ratios for growing pigs of different sexes. A total of 150 pigs ($(Landrace{\times}Yorkshire){\times}Duroc$, 16.78 kg average body weight, 75 barrows and 75 gilts) was randomly allotted into a $2{\times}3$ (sex by diet) factorial design. Three diets were formulated to contain a crude protein level of 19%, a DE level of 3.5 Mcal/kg with three lysine:DE ratios of 3.2 (low), 3.5 (middle) and 3.8 (high) g lysine/Mcal DE per kg diet for both barrows and gilts throughout the study. With increasing dietary lysine:DE ratio, the average daily gain (ADG) of barrows decreased but there was no significant difference among treatments (p>0.05). However, ADG was significantly higher in gilts fed the diet containing the high lysine:DE ratio (p<0.05), followed by the middle and low lysine:DE ratio dietary groups. No significant effects of lysine:DE ratios on feed intake (ADFI) and feed conversion (F/G) were observed for barrows and gilts during overall period (p>0.05), while the optimal F/G was found in barrows fed diets of low and in gilts fed high lysine:DE ratio. Blood urea nitrogen had a positive relationship with growth rate. The results showed that the optimal lysine:DE ratios were 3.2 and 3.8 g lysine/Mcal DE per kg diet for barrows and gilts of 16 to 57 kg body weight, respectively. Nutritional Evaluation of Chinese Nonconventional Protein Feedstuffs for Growing-Finishing Pigs - 1. Linseed Meal Li, Defa;Vi, G.F.;Qiao, S.Y.;Zheng, C.T.;Wang, R.J.;Thacker, P.;Piao, X.S.;Han, In K. 39 Two experiments were conducted to determine the ileal digestibility of the amino acids contained in linseed meal using the regression technique and then applying the values obtained, in a growth trial, using growing-finishing pigs. For the digestibility trial, four $20{\pm}0.5kg$ crossbred $(Yorkshire{\times}Landrace{\times}Beijing\;Black)$ barrows were fitted with simple T-cannula in the terminal ileum. After recovery, the barrows were fed one of four experimental diets according to a $4{\times}4$ Latin Square design. The pigs were fed corn-soybean meal based diets supplemented with 0, 25, 50 or 75% linseed meal. For the growth trial, 80 crossbred $(Yorkshire{\times}Landrace{\times}Beijing\;Black)$ growing pigs $(20.2{\pm}1.5kg)$ were fed corn-soybean meal diets supplemented with 0, 5, 10 or 15% linseed meal. Five pens (2 gilts and 2 castrates) were assigned to each treatment. 
With the exception of leucine, the digestibility coefficients for the indispensible amino acids declined as the level of linseed meal in the diet increased. There was a good agreement between the amino acid digestibilities for lysine, methionine, threonine and tryptophan determined using the regression technique and amino acid digestibilities previously published for linseed meal. During both the growing (20-49 kg) and finishing (49-95 kg) periods, the addition of linseed meal decreased average daily gain and feed conversion in a linear manner (p<0.05). Feed intake was not significantly different among treatments. The overall results suggest that linseed meal can be used at levels of between 5 and 10% in diets fed to growing-finishing pigs provided that the diet has been balanced for digestible amino acids. Nutritional Evaluation of Chinese Nonconventional Protein Feedstuffs for Growing-Finishing Pigs - 2. Rapeseed Meal Li, Defa;Qiao, S.Y.;Yi, G.F.;Jiang, J.Y.;Xu, X.X.;Thacker, P.;Piao, X.S.;Han, In K. 46 Two experiments were conducted to determine ileal digestibilities for the amino acids contained in rapeseed meal using the regression technique and then applying the values obtained, in a growth trial, using growing-finishing pigs. For the digestibility trial, four 20 kg crossbred $(Yorkshire{\times}Landrace{\times}Beijing\;Black)$ barrows were fitted with simple T-cannula in the terminal ileum. After recovery, the barrows were fed one of four experimental diets according to a $4{\times}4$ Latin Square design. The pigs were fed corn-soybean meal based diets supplemented with 0, 25, 50 or 75% rapeseed meal. For the growth trial, 80 crossbred $(Yorkshire{\times}Landrace{\times}Beijing\;Black)$ growing pigs $(20{\pm}2.4kg)$ were fed corn-soybean meal diets supplemented with 0, 3, 6, 9 or 12% rapeseed meal. Four pens (2 gilts and 2 castrates) were assigned to each treatment. With the exception of isoleucine and methionine, the digestibility coefficients for the indispensible amino acids declined as the level of rapeseed meal in the diet increased. There was little agreement between the amino acid digestibilities determined with the regression technique and values previously published for rapeseed meal. During the growing (22-42 kg) period, the addition of rapeseed meal had no significant effects on gain, feed intake or feed conversion. During the finishing period (58-91 kg), daily gain was not affected by rapeseed meal inclusion but feed conversion declined (p<0.04) as the level of rapeseed meal in the diet increased. A Comparative Evaluation of Integrated Farm Models with the Village Situation in the Forest-Garden Area of Kandy, Sri Lanka Ibrahim, M.N.M.;Zemmeli, G. 53 Data from a village household dairy survey was compared with technical parameters of three model farms (0.2, 0.4 and 0.8 ha in extent) established by the Mid-country Livestock Development Centre (MLDC). In terms of land size, about 67% of the 250 dairy farmers interviewed corresponded with the MLDC models, but only 33% of the farmers were keeping dairy cattle under conditions comparable to the MLDC models (no regular off-farm income). In the 0.2 ha category, village farmers kept more cows, and in the other two categories the village farmers kept less cows than their MLDC model counterparts. In all three categories, the milk production per cow was higher in the model farms (1540 to 2137 vs. 
1464 to 1508 litres/cow/year), and this could be attributed to higher feeding levels of concentrates in the model farms as compared to the village farmers (430 to 761 vs. 233 to 383 kg/cow/year). The amount of milk produced from fodder was higher in the village situation in comparison to the models. In the mid country, dairy production seems to depend on access to fodder resources rather than on the extent of land owned. Except in the 0.8 ha village category, the highest contribution to the total income was made by the dairy component (44 to 60%). With 0.8 ha village farmers, the income contribution from dairy and crops was similar (41%). Income from other livestock was important for the 0.2 ha MLDC model, but for all other categories their contribution to total income ranged from 0 to 10%. Access to fodder resources outside own-farm land is vital for economic dairy production. As such, an in-depth analysis of feed resources available and their accessibility needs to be further investigated. Livestock Production under Coconut Plantations in Sri Lanka: Cattle and Buffalo Production Systems Ibrahim, M.N.M.;Jayatileka, T.N. 60 A survey involving 71 cattle and buffalo farming households under coconut plantations was carried out in three districts (Pannala, Bingiriya and Kuliyapitiya) with the aim of assessing the status of livestock farming. Also, 24 households (eight from each district) were visited monthly for period of one year to collect information on feeding practices. Apart from milk, animals were reared for selling, draught, bio-gas and for manure. Due to difference in system of management of cattle and buffaloes, manure from buffaloes (46%) was more frequently used for coconuts than that from cattle (10%). Majority of cattle were improved breeds (temperate origin) or their crosses, as compared to buffaloes (mainly indigenous). The most predominant management system was tethered grazing during the day, and stall feeding during the night. Coconut land (own or others) and paddy fields were the major grazing areas for the animals. The grass from coconut land was lower in crude protein (8.2%) and digestibility (48%) compared to those from paddy fields (12.1 and 57%, respectively). Of the 288 rations analysed, grass was included in 280 of the daily rations for cattle as compared to 251 for buffaloes. Straw was more commonly included in mixed rations for buffaloes (137 out of 288) than for cattle (53 out of 288). The frequency of use of straw for buffaloes was high in Pannala (75 out of 137 cases). There was wide variation among the improved breeds of cattle and buffaloes in milk production (2 to 9 litres/day), lactation length (6 to 10 months) and calving interval (13-21 months). Objectively Predicting Ultimate Quality of Post-Rigor Pork Musculature: I. Initial Comparison of Techniques Joo, S.T.;Kauffman, R.G.;Warner, R.D.;Borggaard, C.;Stevenson-Barry, J.M.;Lee, S.;Park, G.B.;Kim, B.C. 68 A total of 290 pork loins were selected to include a wide variation of quality to investigate the quality categories into which most pork falls, selection criteria for these categories and methods to objectively assess ultimate pork quality. They were probed at 24 h postmortem (PM) for the following: A) light reflectance by Danish Meat Quality Marbling (MQM), Hennessy Grading Probe (HGP) and Sensoptic Invasive Probe (SIP); B) electrical properties by NWK LT-K21 conductivity (NLT) and Sensoptic Resistance Probe (SRP): and C) pH by NWK pH-K21 (NpH). 
Also, measurements of % drip loss (PDL) and filter paper wetness (FPW), color brightness (L*), ultimate pH (pHu), lipid content, subjective color (SC), firmness/wetness (SF) and marbling scores (SM) were assessed. Each loin was categorized as either pale, soft and exudative (PSE), reddish-pink, soft and exudative (RSE), reddish-pink, firm and non-exudative (RFN) or dark, firm and dry (DFD). Statistically comparing coefficients of determination (CD), the results indicated that overall, the HGP predicted quality groups slightly better than MQM (CD=71 and 62% respectively), NpH and SRP were less effective (CD= 56 and 44% respectively), and SIP and NLT had the lowest values (CD=36 and 5% respectively). Combining various independent variable did not greatly improve the variation accounted for. When the data was sorted into marbling groups based on lipid content, this was not accurately predicted by any of the probe measurements. The MQM probe remained the best predictor for marbling class and accounted for about 25% of the lipid content variation. This was slightly improved to 33% when pHu was combined with MQM. Objectively Predicting Ultimate Quality of Post-Rigor Pork Musculature: II. Practical Classification Method on the Cutting-Line Joo, S.T.;Kauffman, R.G.;Warner, R.D.;Borggaard, C.;Stevenson-Barry, J.M.;Rhee, M.S.;Park, G.B.;Kim, B.C. 77 To investigate the practical assessing method of pork quality, 302 carcasses were selected randomly to represent commercial conditions and were probed at 24 hr postmortem (PM) by Danish Meat Quality Marbling (MQM), Hennessy Grading Probe (HGP), Sensoptic Resistance Probe (SRP) and NWK pH-K21 meter (NpH). Also, filter paper wetness (FPW), lightness (L*), ultimate pH (pHu), subjective color (SC), firmness/wetness (SF) and marbling scores (SM) were recorded. Each carcass was categorized as either PSE (pale, soft and exudative), RSE (Reddish-pink, soft and exudative), RFN (reddish-pink, firm and non-exudative) or DFD (dark, firm and dry). When discriminant analysis was used to sort carcasses into four quality groups the highest proportion of correct classes was 65% by HGP, 60% by MQM, 52% by NpH and 32% by SRP. When independent variables were combined to sort carcasses into groups the success was only 67%. When RSE and RFN groups were merged so that there were only three groups (PSE, RSE+RFN, DFD) differentiating by color MQM was able to sort the same set of data into the new set of three groups with 80% accuracy. The proportions of correct classifications for HGP, NpH and SRP were 75%, 61% and 35% respectively. There was a decline in predication accuracy when only two groups, exudative (PSE and RES) and non exudative (RFN and DFD) were sorted. However, when two groups designated PSE and non-PSE (RSE, RFN and DFD) were sorted then the proportion of correct classification by MQM, HGP, SRP and NpH were 87%, 81%, 71% and 66% respectively. Combinations of variables only increased the prediction accuracy by 1 or 2% over prediction by MQM alone. When the data was sorted into three marbling groups based on SM this was not well predicted by any of the probe measurements. The best prediction accuracy was 72% by a combination of MQM and NpH. Recent Advances in Animal Feed Additives such as Metabolic Modifiers, Antimicrobial Agents, Probiotics, Enzymes and Highly Available Minerals - Review - Wenk, C. 86 Animal feed additives are used worldwide for many different reasons. 
Some help to cover the needs of essential nutrients and others to increase growth performance, feed intake and therefore optimize feed utilization. They can positively effect technological properties and product quality. The health status of animals with a high growth performance is a predominant argument in the choice of feed additives. In many countries the use of feed additives is more and more questioned by the consumers: substances such as antibiotics and $\beta$-agonists with expected high risks are banned in animal diets. Therefore, the feed industry is highly interested in valuable alternatives which could be accepted by the consumers. Probiotics, prebiotics, enzymes and highly available minerals as well as herbs can be seen as alternatives to metabolic modifiers and antibiotics. Manipulation of the Rumen Ecosystem to Support High-Performance Beef Cattle - Review - Jouany, J.P.;Michalet-Doreau, B.;Doreau, M. 96 Genetically selected beef cattle are fed high-energy diets in intensive production systems developed in industrial countries. This type of feeding can induce rumen dysfunctions that have to be corrected by farmers to optimise cost-effectiveness. The risk of rumen acidosis can be reduced by using slowly degradable starch, which partly escapes rumen fermentation and goes on to be digested in the small intestine. Additives are proposed to stabilise the rumen pH and restrict lactate accumulation, thus favouring the growth of cellulolytic bacteria and stimulating the digestion of the dietary plant cell wall fraction. This enhances the energy value of feeds when animals are fed maize silage for example. Supplementation of lipids to increase energy intake is known to influence the population of rumen protozoa and some associated rumen functions such as cellulolysis and proteolysis. The end products of rumen fermentation are also changed. Lipolysis and hydrogenation by rumen microbes alter the form of fatty acids supplied to animals. This effect is discussed in relation with the quality of lipids in beef and the implications for human health. Conditions for optimising the amount of amino acids from microbial proteins and dietary by-pass proteins flowing to the duodenum of ruminants, and their impact on beef production, are also examined. Reevaluation of the Metabolic Essentiality of the Vitamins - Review - McDowell, L.R. 115 https://doi.org/10.5713/ajas.2000.115 PDF In recent years a great deal of information has accumulated for livestock on vitamin. function, metabolism and supplemental needs. The role of the antioxidant "vitamins" (carotenoids, vitamin E and vitamin C) in immunity and health of livestock has been a fruitful area of research. These nutrients play important roles in animal health by inactivating harmful free radicals produced through normal cellular activity and from various stressors. Both in vitro and in vivo studies showed that these antioxidant vitamins generally enhance different aspects of cellular and noncellular immunity. A compromised immune system will result in reduced animal production efficiency through increased susceptibility to diseases, thereby leading to increased animal morbidity and mortality. Vitamin E has been shown to increase performance of feedlot cattle and to increase immune response for ruminant health, including being beneficial for mastitis control. 
Vitamin E given to finishing cattle at higher than National Research Council (NRC) requirements dramatically maintained the red color (oxymyoglobin) compared with the oxidized metmyoglobin of beef. Under commercial livestock and poultry production conditions, vitamin allowances higher than NRC requirements may be needed to allow optimum performance. Generally, the optimum vitamin supplementation level is the quantity that achieves the best growth rate, feed utilization, health (including immune competency), and provides adequate body reserves.
DIBR-synthesized image quality assessment based on morphological multi-scale approach Dragana Sandić-Stanković1, Dragan Kukolj2 & Patrick Le Callet3 The depth-image-based rendering (DIBR) algorithms used for 3D video applications introduce new types of artifacts mostly located around the disoccluded regions. As the DIBR algorithms involve geometric transformations, most of them introduce non-uniform geometric distortions affecting the edge coherency in the synthesized images. Such distortions are not handled efficiently by the common image quality assessment metrics which are primarily designed for other types of distortions. In order to better deal with specific geometric distortions in the DIBR-synthesized images, we propose a full-reference metric based on multi-scale image decomposition applying morphological filters. Using non-linear morphological filters in multi-scale image decomposition, important geometric information such as edges is maintained across different resolution levels. Edge distortion between the multi-scale representation subbands of the reference image and the DIBR-synthesized image is measured precisely using mean squared error. In this way, areas around edges that are prone to synthesis artifacts are emphasized in the metric score. Two versions of morphological multiscale metric have been explored: (a) Morphological Pyramid Peak Signal-to-Noise Ratio metric (MP-PSNR) based on morphological pyramid decomposition, and (b) Morphological Wavelet Peak Signal-to-Noise Ratio metric (MW-PSNR) based on morphological wavelet decomposition. The performances of the proposed metrics have been tested using two databases which contain DIBR-synthesized images: the IRCCyN/IVC DIBR image database and MCL-3D stereoscopic image database. Proposed metrics achieve significantly higher correlation with human judgment compared to the state-of-the-art image quality metrics and compared to the tested metric dedicated to synthesis-related artifacts. The proposed metrics are computationally efficient given that the morphological operators involve only integer numbers and simple computations like min, max, and sum as well as simple calculation of MSE. MP-PSNR has slightly better performances than MW-PSNR. It has very good agreement with human judgment, Pearson's 0.894, Spearman 0.77 when it is tested on the MCL-3D stereoscopic image database. We have demonstrated that PSNR has particularly good agreement with human judgment when it is calculated between images at higher scales of morphological multi-scale representations. Consequently, simplified and in essence reduced versions of multi-scale metrics are proposed, taking into account only detailed images at higher decomposition scales. The reduced version of MP-PSNR has very good agreement with human judgment, Pearson's 0.904, Spearman 0.863 using IRCCyN/IVC DIBR image database. The advanced 3D video (3DV) systems are mostly based on multi-view video plus depth (MVD) format [1] as the recommended 3D video format adopted by the moving picture experts group (MPEG). In the 3DV system, smaller number of captured views is transmitted and greater number of views is generated at the receiver side from the transmitted texture views and their associated depth maps using depth-image-based rendering (DIBR) technology. DIBR techniques can be used to generate views for different 3D video applications: free viewpoint television, 3DTV, 3D technology based entertainment products, and 3D medical applications. 
The perceptual quality of the synthesized view is considered the most significant evaluation criterion for the whole 3D video processing system, so a reliable quality assessment metric for synthesized views is of great importance for the development of 3D video technology. The use of subjective tests is expensive, time consuming, cumbersome, and practically not feasible in systems where a real-time quality score of an image or video sequence is needed. Objective metrics are intended to predict human judgment, and their reliability is based on their correlation to subjective assessment results. The evaluation of a DIBR system depends on the application. The main difference between free viewpoint video (FVV) and 3DTV is the stereopsis phenomenon (fusion of the left and right views in the human visual system) existing in 3DTV. FVV does not have to be used in a 3D context; it can also be applied in a 2D context. This paper is concerned with the quality assessment of still images from MVD video sequences in both 2D and 3D contexts, as a first step of 3D quality assessment. The evaluation of still images is an important scenario in the case when the user switches the video to pause mode [2]. For the comparison of DIBR algorithms, virtual views synthesized from uncompressed data, which contain only synthesis artifacts, need to be evaluated. When either depth data or color sequences are encoded before performing the synthesis, compression-related artifacts are combined with synthesis artifacts. In this paper, the distortions introduced only by view synthesis algorithms are evaluated using the IRCCyN/IVC DIBR image dataset [3, 4] and part of the MCL-3D image dataset [5, 6]. DIBR algorithms introduce new types of artifacts mostly located around disoccluded regions [2]. They are not scattered over the entire image as 2D video compression distortions are. As DIBR algorithms involve geometric transformations, most of them introduce mainly geometric distortions affecting edge coherency in the synthesized images. These artifacts are consequently challenging for standard quality metrics, which are usually tuned for other types of distortions. In order to better deal with specific geometric distortions in DIBR-synthesized images, we propose a multi-scale image quality assessment metric based on morphological filters in multi-resolution image decomposition. Due to the multi-scale character of the primate visual system [7], the introduction of multi-resolution image decomposition in image quality assessment improves metric performance relative to single-resolution methods. The introduced non-linear morphological filters in the multi-resolution image decomposition maintain important geometric information such as edges on their true positions, neither drifted nor blurred, across different resolution levels [8]. Edge distortion between the appropriate subbands of the multi-scale representations of the reference image and the DIBR-synthesized image is precisely measured pixel-by-pixel using the mean squared error (MSE). In this way, areas around edges that are prone to synthesis artifacts are emphasized in the metric score. The mean squared errors of the subbands are combined into a multi-scale mean squared error, which is transformed into a multi-scale peak signal-to-noise ratio measure.
More precisely, two types of morphological multi-scale decompositions for the multi-scale image quality assessment (IQA) have been explored: morphological bandpass pyramid decomposition in the Morphological Pyramid Peak Signal-to-Noise Ratio measure (MP-PSNR) and morphological wavelet decomposition in the Morphological Wavelet Peak Signal-to-Noise Ratio measure (MW-PSNR). Morphological bandpass pyramid decomposition can be interpreted as a structural image decomposition tending to enhance image features such as edges which are segregated by scale at the various pyramid levels [9]. Using non-linear morphological wavelet decomposition, geometric structures such as edges are better preserved in the lower resolution images compared to the case when the linear wavelets are used in the decomposition [10]. Both separable and true non-separable morphological wavelet decompositions using the lifting scheme have been investigated. Both measures, MP-PSNR and MW-PSNR, are highly correlated with the judgment of human observers, much better than standard IQA metrics and much better than their linear counterparts. They have better performances than tested metric dedicated to synthesis-related artifacts also. Since the morphological operators involve only integers and only max, min, and addition in their computation, as well as simple calculation of MSE, the proposed morphological multi-scale metrics are of low computational complexity. Moreover, it is experimentaly shown that PSNR has very good agreement with human judgment when it is calculated for the subbands at higher morphological decomposition scales. We propose the reduced versions of morphological multi-scale measures, reduced MP-PSNR, and reduced MW-PSNR, using only detail images from higher decomposition scales. The performances of the reduced versions of the morphological multi-scale measures are improved comparing to their full versions. In the next section, the distortion of the DIBR-synthesized view is shortly described. Previous work on the quality assessment of the DIBR-synthesized views and multi-scale image quality assessment is also shortly reviewed in Section 2. In Section 3, we describe two versions of the proposed multi-scale metric, based on two types of multi-resolution decomposition schemes, morphological pyramid, and morphological wavelets. Description of the distortion computation stage and pooling stage of the proposed multi-scale measures is given also in Section 3. The performances of MP-PSNR and MW-PSNR and discussion of results are presented in Section 4, while the conclusion is given in Section 5. Distortion in the DIBR-synthesized view The synthesis process changes the pixels position in the synthesized image and induces new types of distortion in DIBR-synthesized views. View synthesis noise mainly appears along object edges. Typical DIBR artifacts include object shifting, geometric distortions, edge displacements or misalignments, boundary blur, and flickering. Incorrect depth map induces object shifting in the synthesized image. Object shifting artifact or ghost artifact manifests as slight translation or resize of an image regions due to depth map errors. A large number of tyny geometric distortions are caused by the depth inaccuracy and the numerical rounding operation of pixel positions. Geometric distortions appear in the synthesized images because the pixels are projected to wrong positions. Blurry regions appear due to inpainting method used to fill the disoccluded areas. 
Incorrect rendering of textured areas appears when inpainting method fails in filling complex textured areas. When the objects move, the distortion around edges is more noticeable. The view synthesis distortion flickering locates on the edge of the foreground object which has a movement. Flickering can be observed as significant and high-frequency alternated variation between different luminance levels [11]. The temporal flicker distortion is the most significant difference between the traditional 2D video and the synthesized video. Some of the typical artifacts due to DIBR synthesis are shown on Fig. 1. Typical artifacts due to DIBR synthesis. Original images are in the left column and synthesized images are in the right column Quality assessment of DIBR-synthesized view The evaluation of DIBR views synthesized from uncompressed data using standard image quality metrics has been discussed in literature for still images from FVV in 2D context [3] using IRCCyN/IVC DIBR image database. It has been demonstrated that 2D quality metrics originally designed to address image compression distortions are very far to be effective to assess the visual quality of synthesized views. Full-reference objective image quality assessment metrics, VSQA [12], and 3DswIM [13], have been proposed to improve the performances obtained by standard quality metrics in the evaluation of the DIBR-synthesized images. Both metrics are dedicated to synthesis-related artifacts without compression-related artifacts and both metrics are tested using IRCCyN/IVC DIBR images dataset. VSQA [12] metric dedicated to view synthesis quality assessment is aimed to handle areas where disparity estimation may fail. It uses three visibility maps which characterize complexity in terms of textures, diversity of gradient orientations, and presence of high contrast. SSIM-based VSQA metric achieves the gain of 17.8 % over SSIM in correlation with subjective measurements. 3DswIM [13], relies on a comparision of statistical features of wavelet subbands of the original and DIBR-synthesized images. Only horizontal detail subbands from the first level of Haar wavelet decomposition are used for the degradation measurement. A registration step is included before the comparison to ensure shifting-resilience property. A skin detection step weights the final quality score in order to penalize distorted blocks containing skin-pixels based on the assumption that a human observer is most sensitive to impairments affecting human subjects. It was reported that 3DswIM metric outperforms the conventional 2D metrics and tested DIBR-synthesized views dedicated metrics. Edge-based structural distortion indicator addressing the distortion related to DIBR systems is proposed in [14]. The method relies on the analysis of edges in the synthesized view. The proposed method does not assess the image quality, but it is able to detect the structural distortion. Since it does not take the color consistency into account, the method remains a tool for assessing the structural consistency of an image. Vision-based quality measures for 3D DIBR-based video, both full-reference FR-3VQM [15], and no-reference NR-3VQM [16] are proposed to evaluate the quality of stereoscopic 3D video generated by DIBR. Both measures are a combination of three measures: temporal outliers, temporal inconsistencies, and spatial outliers, using ideal depth. Ideal depth is derived for both no-reference and for full-reference metric for distortion-free rendered video. 
3VQM metrics show better performances than PSNR and SSIM using a database of DIBR-generated video sequences. Quality metric proposed in [17] is designed for the evaluation of synthesized images which contain artifacts introduced by the rendering process due to depth map errors. It consists of two parts. One part is the calculation of the conventional 2D metric after the consistent object shifts. After shift compensation, the 2D QA model matches the subjective quality score better. The other part is the calculation of the structural score by the Hausdorff distance. The Hausdorf distance identify the degree of the inconsistent object shift or ghost-type artifact at object boundaries. The proposed metric shows better performances than traditional IQA metrics in the evaluation of synthesized stereo images from MVD video sequences. SIQE metric [18] proposed to estimate the quality of DIBR-synthesized images compares the statistical characteristics of the synthesized and the original views estimated using the divisive normalization transform. In the evaluation of compressed MVD video sequences, it achieves high correlation with widely used image and video quality metrics. A full-reference video quality assessment of synthesized view with texture/depth compression presented in [11] focuses on the temporal flicker distortion due to depth compression distortion and the view synthesis process. It is based on two quality features which are extracted from both spatial and temporal domains of the synthesized sequence. The first feature focuses on capturing the temporal flicker distortion and the second feature is used to measure the change of the spatio-temporal activity in the synthesized sequence due to blurring and blockiness distortion caused by texture compression. The performances of the proposed metric evaluated on the synthesized video quality database SIAT [11] are better than the performances of the commonly used image/video quality assessment methods. Multi-scale image quality assessment As in most other areas of image processing and analysis, multi-resolution methods have improved performances relative to single-resolution methods also for the image quality assessment. Pyramids and wavelets are among the most common tools for constructing multi-resolution signal decomposition schemes used in image processing and computer vision. Both redundant image pyramid representation and non-redundant image wavelet representations have been explored for multi-scale image quality assessment metrics. Multi-scale structural similarity measure, MS-SSIM [19] is based on linear low-pass pyramid decomposition. Multi-scale image quality measures using information content weighted pooling, IW-SSIM, and IW-PSNR [20], use Laplacian pyramid decomposition [21]. CW-SSIM [22] simultaneously insensitive to luminance and contrast changes and small geometric distortions of image is based on multi-orientation steerable pyramid decomposition using multi-scale bandpass-oriented filters. It has been shown that the local contrast in different resolutions can be easily represented in terms of Haar wavelet transform coefficients and computational models of visual mechanisms were incorporated into a quality measurement system [23]. Experiments have shown that Haar filters have good ability to simulate the human visual system (HVS) and the proposed metric is successful in measuring compressed image artifacts. Error-based image quality metric using Haar wavelet decomposition has been proposed in [24]. 
It has been reported that Haar wavelet provided more accurate quality scores than other wavelet bases. PSNR has been calculated between the edge maps calculated from detail subbands as well as between approximation subbands of the original and the distorted images. These two PSNR have been linearly combined to the overall quality score. The proposed metric predict quality scores more accurately than the conventional PSNR and can be used efficiently in real-time applications. Reduced-reference image quality assessment based on multi-scale geometric analysis (MGA) to mimic multichannel structure of HVS, contrast sensitivity function to re-weights MGA coefficients to mimic nonlinearities in HVS and the just noticeable difference threshold to remove visually insensitive MGA coefficients has been presented in [25]. The quality of the distorted image was measured by comparing the normalized histograms of the distorted and the reference images. MGA was utilized to decompose images by a series of transforms including wavelet, curvelet, bandelet, contourlet, wavelet-based contourlet, hybrid wavelets, and directional filter banks. MGA can capture the characteristics of image, e.g., lines, curves, contour of object. IQA based on MGA and IQ metric using Haar wavelet decomposition [24] have been evaluated on the database which contains compressed, white noisy, Gaussian-blurred, and fast-fading Rayleigh channel noisy images. Proposed morphological multi-scale metric Multi-scale image quality assessment (IQA) framework can be described as three-stage process. In the first stage, both the reference and the distorted images are decomposed into a set of lower resolution images using multi-resolution decomposition. In the second stage, image quality/distortion maps are evaluated for all subbands at all scales. In the third stage, a pooling is employed to convert each map into a quality score, and these scores are combined into the final multi-scale image quality measure score. The key stage of the multi-scale image quality assessment may be how to represent images effectively and efficiently, so it is necessary to investigate various kinds of transforms. Most of the current multi-scale IQA metrics use linear filters in the multi-resolution decomposition. In this paper, we propose to use non-linear morphological operators in the multi-scale decompositions in the first stage of multi-scale IQA framework, Fig. 2, in order to better deal with specific geometric distortions in DIBR-synthesized images. Introduced non-linear morphological filters used in the multi-scale image decomposition maintain important geometric information such as edges on their true positions, across different resolution levels [8]. More precisely, we investigate two types of morphological multi-scale decompositions in the first stage of multi-scale IQA framework: morphological bandpass pyramid decomposition in MP-PSNR and morphological wavelet decomposition in MW-PSNR. In the second stage of the multi-scale IQA framework, Fig. 2, we propose to calculate squared error maps between the appropriate images of the multi-scale representations of the two images, the reference image and the DIBR-synthesized image, in order to measure precisely, pixel-by-pixel, the edge distortion. In this way, the areas around edges that are prone to synthesis artifacts are emphasized in the metric score. In the third stage of IQA multi-scale framework, MSE is calculated from each squared error map. 
MSE of all multi-scale representation images are combined into multi-scale mean squared error, which is transformed into morphological multi-scale peak signal-to-noise ratio measure. Morphological multi-scale image quality assessment framework Morphological multi-scale image decomposition The importance of analyzing images at many scales arises from the nature of images themselves [26]. Scenes contain objects of many sizes and these objects contain features of many sizes. Objects can be at various distances from the viewer. Any analysis procedure that is applied only at a single-scale may miss information at other scales. The solution is to carry out analysis at all scales simultaneously. Psychophysics and physiological experiments have shown that multi-scale transforms seem to appear in the visual cortex of mammals [27]. A multi-scale representation is completely specified by the transformation from a finer scale to a coarser scale. In linear scale-spaces the operator for changing scale is a convolution by a Gaussian kernel. After the convolution with Gaussian kernel the images are uniformly blurred, also the regions of particular interest like the edges [28]. This is a drawback as the edges often correspond to the physical boundaries of objects. The edge and contour information may be the most important of an image's structure for human to capture the scene. To overcome this issue, non-linear multi-resolution signal decomposition schemes based on morphological operators have been proposed to maintain edges through scales [8]. In morphological image processing, geometric properties such as size and shape are emphasized rather than the frequency properties of signals. Mathematical morphology [29, 30] is a set-theoretic method for image analysis which provides a quantitative description of geometric structure of an image. It considers images as sets which permits geometry-oriented transformations of the images. The structuring element offers flexibility because it can be designed in different shapes and sizes according to the purpose. Morphological filters are non-linear signal transformations that locally modify geometric signal features. In the first stage of morphological multi-scale IQA framework, we have explored two types of multi-scale image decomposition using morphological pyramid and morphological wavelets. Multi-scale image decomposition using morphological pyramid The image pyramid offers a flexible, convenient multi-resolution format that matches the multiple scales found in the visual scenes and mirrors the multiple scales of processing in the human visual system [26]. Pyramid representations have much in common with the way people see the world, i.e., primate visual systems achieve a multi-scale character [7]. In this paper, we propose to use morphological bandpass pyramid (MBP) decomposition in the first stage of morphological multi-scale IQA framework. Morphological bandpass pyramid is generated using the Laplacian type pyramid decomposition scheme [21], but instead of linear filters, morphological filters are used. We propose to use morphological operator erosion (E) for low-pass filtering in analysis step and morphological operator dilation (D) for interpolation filtering in synthesis step leading to the morphological bandpass pyramid decomposition erosion/dilation (MBP ED) introduced in [31] and reviewed in [32]. One level of the proposed MBP ED pyramid is shown on Fig. 3. One level of morphological bandpass pyramid decomposition scheme, MPD. 
Morphological analysis operator erosion (E) followed by downsampling; morphological synthesis operator dilation (D) preceded by upsampling. In the MBP ED scheme, Fig. 3, a lower resolution image $s_{j+1}$ is obtained by applying the morphological operator erosion to the previous pyramid level image $s_j$ and downsampling the eroded image by a factor of 2 in both image dimensions ($\sigma^{\downarrow}$) (1). We use a square structuring element of size $(2r+1) \times (2r+1)$, $r = 1, \ldots, 6$, for the erosion.
$$ s_{jE}(m,n) = \min \left\{ s_j(m+k,\, n+l) \;|\; -r \le k, l \le r \right\}, \qquad s_{j+1} = \sigma^{\downarrow}\!\left(s_{jE}\right) $$
The erosion as the analysis operator removes fine details smaller than the structuring element. A detail image is derived by subtracting from each level an interpolated version of the next coarser level. The image $s_{j+1}$ of the next pyramid level is upsampled by a factor of 2 in both dimensions ($\sigma^{\uparrow}$), leading to the image $s_{jU}$. The morphological operator dilation is applied to the upsampled image $s_{jU}$ to produce the expanded image $\hat{s}_j$. The detail image $d_j$ is obtained as the difference of the pyramid image $s_j$ and the expanded image from the next pyramid level $\hat{s}_j$:
$$ s_{jU} = \sigma^{\uparrow}\!\left(s_{j+1}\right), \qquad \hat{s}_j(m,n) = \max \left\{ s_{jU}(m-k,\, n-l) \;|\; -r \le k, l \le r \right\}, \qquad d_j = s_j - \hat{s}_j $$
Using a square structuring element, morphological reduce and expand filtering can be implemented more efficiently in a separable fashion, by rows and columns, using structuring elements of size $1 \times (2r+1)$ for rows and $(2r+1) \times 1$ for columns. A morphological bandpass pyramid with M decomposition levels consists of detail (error) images of decreasing size $d_j$, $j = 0, \ldots, M-1$, and the coarse lowest resolution image $s_M$ [9]. The MBP ED pyramid of the synthesized frame from the video sequence Newspaper, generated using an SE of size 7 × 7, is shown in Fig. 4. Morphological bandpass pyramid representation of the synthesized frame from the video sequence Newspaper. A square structuring element of size 7 × 7 is used for morphological reduce filtering in MBP ED. The MBP ED pyramid based on adjunction satisfies the property that the detail signal is always non-negative. At any scale change, the maximum luminance at the coarser scale is always lower than the maximum luminance at the finer scale, and the minimum is always higher.
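As an illustration of the reduce/expand step just described, the following Python sketch builds one level of an erosion/dilation bandpass pyramid with a flat square structuring element and stacks levels into a full decomposition. It is a minimal sketch rather than the authors' implementation: the zero-insertion upsampling before the dilation and the handling of odd-sized images are assumptions made here for brevity, and the function names are illustrative.

```python
import numpy as np
from scipy.ndimage import grey_erosion, grey_dilation

def mbp_ed_level(s_j, r=1):
    """One level of the erosion/dilation bandpass pyramid (MBP ED), Eqs. (1)-(2)."""
    se = (2 * r + 1, 2 * r + 1)          # flat square structuring element

    # analysis: erosion (local minimum) followed by dyadic downsampling
    s_j_eroded = grey_erosion(s_j, size=se)
    s_next = s_j_eroded[::2, ::2]

    # synthesis: upsample (zero insertion assumed here), then dilation (local maximum)
    s_up = np.zeros(s_j.shape, dtype=s_j.dtype)
    s_up[::2, ::2] = s_next
    s_hat = grey_dilation(s_up, size=se)

    # bandpass detail image for this level
    d_j = s_j.astype(np.float64) - s_hat.astype(np.float64)
    return s_next, d_j

def mbp_decompose(image, levels=5, r=1):
    """Detail images d_0..d_{M-1} plus the coarsest approximation s_M."""
    details, s = [], np.asarray(image)
    for _ in range(levels):
        s, d = mbp_ed_level(s, r)
        details.append(d)
    return details, s
```

Decomposing both the reference frame and the DIBR-synthesized frame in this way yields the pyramid images whose pixel-wise squared errors are pooled into MP-PSNR in the distortion computation stage described below.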
Morphological bandpass pyramid decomposition can be interpreted as a structural image decomposition tending to enhance image features such as edges which are segregated by scale at the various pyramid levels [9]. Enhanced features are segregated by size: fine details are prominent in the lower level images while progressively coarser features are prominent in the higher level images. MBP ED pyramid using structuring element of size 2 × 2 is morphological Haar pyramid [31]. MBP satisfies pyramid condition [31] which states that synthesis of a signal followed by analysis returns the original signal, meaning that no information is lost by these two consecutive steps and the original image can be perfectly reconstructed from the pyramid representation. Perfect reconstruction, while not mandatory for image quality assessment is a valuable property for a representation in early vision not because a visual system needs to literally reconstruct the image from its representation but rather because it guarantees that no information has been lost, ie that if two images are different then their representations are different also [7]. There is neurophysiological evidence that the human visual system uses a similar kind of decomposition [33]. There is inherent congruence between the morphological pyramid decomposition scheme and human visual perception [9]. Multi-scale image decomposition using morphological wavelets Most current image quality assessment methods based on discrete wavelet transform use linear wavelet kernels [23, 24, 34]. In this paper, we propose to use morphological wavelet decomposition in order to better preserve geometric structures such as edges in the lower resolution images. The morphological wavelet transforms introduced in [10] and reviewed in [32] are non-linear wavelet transforms that use min and max operators. Due to non-linear nature of the morphological operators, important geometric information such as edges are well preserved across different resolution levels. A general and flexible approach for the construction of non-linear morphological wavelets in the spatial domain is provided by the lifting scheme using morphological lifting operators in prediction (P) step and update (U) step [35], Fig. 5 . We have explored both separable and true non-separable morphological wavelet decompositions using the lifting scheme. The lifting scheme for the wavelet transform: prediction (P) and update (U) Separable 2D discrete wavelet transform (DWT) is implemented by cascading two 1D DWT along the vertical and horizontal directions [36] producing three detail subbands and approximation signal. Separable wavelet decompositions using 1D morphological Haar wavelet (minHaar) and 1D morphological wavelet using min-lifting scheme (minLift) [10, 37] are explored. Their linear counterparts, Haar wavelet and biorthogonal wavelet of Cohen-Daubechies-Feauveau (cdf (2,2)) [38], are also tested for comparision. Non-separable sampling opens a possibility of having schemes better adapted to the human visual system [39]. Non-separable 2D morphological wavelet decomposition on a quincunx lattice using the min-lifting scheme (minLiftQ) [40] is also explored. Non-separable wavelet decomposition with linear wavelet of Cohen-Daubechies-Feauveau on a quincunx lattice (cdf(2,2)Q) [41] is implemented for comparision. 1D Morphological Haar min wavelet transformation (minHaar) One of the simplest example of non-linear morphological wavelets is the morphological Haar wavelet (minHaar) [10]. 
It has a very similar structure to the linear Haar wavelet, but it uses the non-linear morphological operator erosion (by taking the minimum over two samples) in the update step of the lifting scheme [32, 37]. An illustration of one step of the wavelet transform with the minHaar wavelet using the lifting scheme is shown in Fig. 6. Initially, the signal x (the first row in Fig. 6) is split into the even samples array (white nodes) and the odd samples array (black nodes). The detail signal d (middle row in Fig. 6) is calculated as the difference of the odd array and the even array (3). The lower resolution signal s (bottom row in Fig. 6) is calculated from the even array and the detail signal (4). One step of the morphological wavelet transform using the minHaar wavelet. The calculation of the detail signal d and the lower resolution signal s from the higher resolution signal x using the lifting scheme.
$$ d[n] = x[2n+1] - x[2n] $$
$$ s[n] = x[2n] + \min\left(0,\; d[n]\right) $$
The morphological Haar wavelet decomposition scheme may do a better job of preserving edges compared to the linear case [10]. The morphological Haar wavelet has some specific invariance properties. Besides being translation invariant in the spatial domain, it is also gray-shift invariant and gray-multiplication invariant [37]. 1D Morphological wavelet transformation using min-lifting scheme (minLift) The min-lifting scheme [10] is constructed using two non-linear lifting steps: a non-linear prediction and a non-linear update, both using the operator erosion (by taking the minimum over two or three samples). After splitting the signal x into an odd samples array (black nodes in the first row of Fig. 7) and an even samples array (white nodes in the first row of Fig. 7), each sample of the detail signal d (second row of Fig. 7) is calculated according to (5). The update step is chosen in such a way that a local minimum of the input signal is mapped to the scaled signal, and a sample of the approximation signal s (third row of Fig. 7) is calculated according to (6). One step of the morphological wavelet decomposition using the minLift wavelet. The detail signal d and the lower resolution signal s are calculated from the higher resolution signal x using the lifting scheme.
$$ d[n] = x[2n+1] - \min\left(x[2n],\; x[2n+2]\right) $$
$$ s[n] = x[2n] + \min\left(0,\; d[n-1],\; d[n]\right) $$
Morphological wavelet decomposition using the minLift wavelet is both gray-shift invariant and gray-multiplication invariant [37]. The min-lifting scheme has the nice property that it preserves the local minima of a signal over several scales. It does not generate any new local minima.
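As a sketch of how Eqs. (3)-(6) translate into code, the two functions below perform one analysis step of minHaar and minLift on a 1D signal. They are illustrative only: odd-length signals are simply truncated to an even length, and the border replication used for x[2n+2] and d[n-1] at the signal ends is an assumption of this sketch rather than a convention taken from the paper.

```python
import numpy as np

def minhaar_analysis(x):
    """Morphological Haar (minHaar) analysis step, Eqs. (3)-(4):
    d[n] = x[2n+1] - x[2n],   s[n] = x[2n] + min(0, d[n]) = min(x[2n], x[2n+1])."""
    x = np.asarray(x, dtype=np.int64)
    even, odd = x[0::2], x[1::2]
    n = min(len(even), len(odd))              # truncate odd-length signals
    d = odd[:n] - even[:n]
    s = even[:n] + np.minimum(0, d)
    return s, d

def minlift_analysis(x):
    """Min-lifting (minLift) analysis step, Eqs. (5)-(6):
    d[n] = x[2n+1] - min(x[2n], x[2n+2]),   s[n] = x[2n] + min(0, d[n-1], d[n])."""
    x = np.asarray(x, dtype=np.int64)
    even, odd = x[0::2], x[1::2]
    n = min(len(even), len(odd))
    even, odd = even[:n], odd[:n]
    even_next = np.append(even[1:], even[-1])         # x[2n+2], replicated at the right border
    d = odd - np.minimum(even, even_next)             # non-linear prediction (erosion)
    d_prev = np.insert(d[:-1], 0, d[0])               # d[n-1], replicated at the left border
    s = even + np.minimum(0, np.minimum(d_prev, d))   # non-linear update (erosion)
    return s, d
```

A separable 2D decomposition is then obtained by applying such a 1D step first along the rows and then along the columns of the image, which produces the horizontal, vertical, and corner detail subbands shown in Fig. 8.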
The detail signal is almost zero in areas of smooth gray level variation, and sharp gray level variations are mapped to positive detail signal values (white). As an illustration of the wavelet decomposition using the morphological minLift wavelet, the oriented wavelet subbands from the first decomposition level, which contain vertical, horizontal, and corner details, are shown in Fig. 8 for the synthesized frame from the video sequence Newspaper. Oriented wavelet subbands from the first level of separable morphological wavelet decomposition. The synthesized frame Newspaper is decomposed using the morphological minLift wavelet. Non-separable morphological wavelet transformation with quincunx sampling using min-lifting scheme (minLiftQ) The two-dimensional non-separable morphological wavelet decomposition on a quincunx lattice using the min-lifting scheme, minLiftQ [40], is analogous to the separable morphological wavelet decomposition using the minLift wavelet. The non-separable 2D wavelet transform on a quincunx lattice using the lifting scheme is performed through odd and even steps alternately, producing a detail subband at each step and an approximation image which is decomposed further. Each step, odd and even, is implemented using the lifting scheme, which consists of three parts: splitting, prediction and update. In the odd step, the image pixels are split into two subsets, both on a quincunx lattice, Fig. 9 upper row, one subset with white pixels, x, and the other subset with black pixels, y. The pixel of the error signal d is calculated using the minimum of the four nearest pixels in the horizontal and vertical directions (7), Fig. 9 bottom row left, and the lower resolution signal s is updated from the four nearest detail signal pixels (8), Fig. 9 bottom row right. The odd step of 2D non-separable wavelet decomposition using the min-lifting scheme. In the upper row, the signal on the Cartesian lattice is split into two signals, both on a quincunx lattice; in the bottom row on the left, the pixel of the detail signal d is calculated in the prediction step from the four neighboring signal pixels in the vertical and horizontal directions; in the bottom row on the right, the pixel of the lower resolution signal s is calculated in the update step from the four neighboring detail pixels in the vertical and horizontal directions.
$$ d = y - \min\left(x,\; x_1,\; x_2,\; x_3\right) $$
$$ s = x + \min\left(d,\; d_1,\; d_2,\; d_3,\; 0\right) $$
In the even step, the signal on the quincunx lattice is separated into two subsets, both on a Cartesian lattice, one subset with white pixels x and the other subset with gray pixels y, Fig. 10 upper row. The pixel of the error signal d is calculated from the four nearest pixels in the diagonal directions (7), Fig. 10 bottom row left, and the lower resolution signal s is updated from the four nearest detail signal pixels in the diagonal directions (8), Fig. 10 bottom row right. The even step of 2D non-separable wavelet decomposition using the min-lifting scheme.
Owing to the symmetry of the quincunx grid, the non-separable transform is insensitive to edge directions and image orientation. The non-oriented wavelet subbands from the first level of the non-separable wavelet decomposition with quincunx sampling using the morphological minLiftQ wavelet, for the synthesized frame from the video sequence Newspaper, are shown in Fig. 11. The detail image from the odd step is rotated by \(45^{\circ}\) before display. The detail images are almost zero in areas of smooth gray-level variation. Sharp gray-level variations are mapped to positive (white) detail image values. Non-oriented wavelet subbands from the first level of non-separable morphological wavelet decomposition with quincunx sampling. The synthesized frame Newspaper is decomposed using the morphological minLiftQ wavelet. The detail image from the odd step is rotated by \(45^{\circ}\) (on the top) Distortion computation and pooling stage Mean squared error (MSE) and peak signal-to-noise ratio (PSNR) are the most widely used objective image distortion/quality metrics. They are probably the simplest way to quantify the similarity between two images. The mean squared error remains the standard criterion for the assessment of signal quality and fidelity. It has many attractive features: it is simple, parameter free, and memoryless [42]. The MSE is an excellent metric in the context of optimization. Moreover, competing algorithms have most often been compared using MSE/PSNR [42]. It has been shown that MSE performs poorly in some cases (contrast stretch, mean luminance shift, contamination by additive white Gaussian noise, impulsive noise distortion, JPEG compression, blur, spatial scaling, spatial shift, rotation) when it is used as a single-scale metric on the full-resolution images in the base band [42, 43]. In this paper, we propose to use MSE for distortion measurement between pyramid images in MP-PSNR and between wavelet subbands in MW-PSNR. In the second stage of the multi-scale IQA framework we use squared error maps between the morphological multi-scale representations of the two images: the reference image and the DIBR-synthesized image. The squared error maps, calculated pixel by pixel, reveal the erroneous displacement of object edges induced by the DIBR process across the different scales of the multi-scale representations. From the squared error maps, mean squared errors are calculated and combined into the multi-scale mean squared error, which is transformed into the multi-scale peak signal-to-noise ratio in the third stage of the multi-scale IQA framework.
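The second stage described above, squared error maps reduced to one MSE per pyramid image or wavelet subband, can be sketched as follows. The multi-scale representation of each image is assumed to be given as a list of equally sized arrays; this is an illustration, not the authors' implementation.

```python
import numpy as np

def per_level_mse(ref_levels, syn_levels):
    """Second stage of the multi-scale IQA framework (a sketch).

    `ref_levels` and `syn_levels` are lists of equally sized arrays (pyramid
    images or wavelet subbands) of the reference and the DIBR-synthesized image.
    Returns one MSE value per level/subband.
    """
    mses = []
    for ref, syn in zip(ref_levels, syn_levels):
        err_map = (np.asarray(ref, float) - np.asarray(syn, float)) ** 2  # squared error map
        mses.append(err_map.mean())                                       # MSE of this level
    return np.array(mses)
```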
The calculation of MP-PSNR When the morphological pyramid decomposition is used in the first stage of the morphological multi-scale IQA framework, Fig. 12, the multi-scale pyramid mean squared error MP_MSE is calculated as a weighted product of the \(MSE_j\) values at all pyramid levels (9). MP-PSNR is based on the MSE between the images of the two pyramids. MPD—one level of morphological bandpass pyramid decomposition
$$ MP\_MSE = \prod_{j=0}^{M} \left[ MSE_j \right]^{\beta_j} $$
where equal weights \( \beta_j = \frac{1}{M+1} \) are used, M is the number of decomposition levels and M + 1 is the number of pyramid images. Finally, MP_MSE is transformed into the Morphological Pyramid Peak Signal-to-Noise Ratio MP_PSNR (10).
$$ MP\_PSNR = 10 \cdot \log_{10}\left(\frac{R^2}{MP\_MSE}\right) $$
where R is the maximum dynamic range of the image. The calculation of MW-PSNR When the morphological wavelet decomposition is used in the first stage of the morphological multi-scale IQA framework, the multi-scale wavelet mean squared error (MW_MSE) is calculated as a weighted sum of the \(MSE_{j,i}\) values over all subbands at all scales of the two wavelet representations as the final pooling (11).
$$ MW\_MSE = MSE_{M,D+1} \cdot \beta_{M,D+1} + \sum_{j=1}^{M} \sum_{i=1}^{D} MSE_{j,i} \cdot \beta_{j,i} $$
where equal weights \( \beta_{j,i} = \frac{1}{M \cdot D + 1} \) are used, M is the number of decomposition levels and D is the number of detail subbands at one decomposition level. For the separable wavelet transforms D = 3, Fig. 13, while for the non-separable wavelet decomposition D = 2. \(MSE_{j,i}\) is the mean value of the squared error map of subband i at decomposition level j. MW-PSNR is based on the MSE between the subbands of the two wavelet representations. MWD—one level of morphological wavelet transform Finally, the multi-scale metric Morphological Wavelet Peak Signal-to-Noise Ratio, MW-PSNR, is calculated as:
$$ MW\_PSNR = 10 \cdot \log_{10}\left(\frac{R^2}{MW\_MSE}\right) $$
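The third-stage pooling of Eqs. (9)-(12) can be sketched as below, reusing the per-level MSEs computed above. Equal weights and R = 255 for 8-bit images are assumed; as noted later in the text, for the MCL-3D database a sum is used instead of the product in (9), which would amount to replacing np.prod with a weighted sum.

```python
import numpy as np

def mp_psnr(level_mses, R=255.0):
    """Pool per-level MSEs into MP-PSNR (Eqs. 9-10) with equal weights
    beta_j = 1/(M+1), where len(level_mses) = M+1 pyramid images."""
    mses = np.asarray(level_mses, dtype=float)
    beta = 1.0 / len(mses)
    mp_mse = np.prod(mses ** beta)           # weighted product of the level MSEs
    return 10.0 * np.log10(R ** 2 / mp_mse)

def mw_psnr(subband_mses, R=255.0):
    """Pool per-subband MSEs into MW-PSNR (Eqs. 11-12) with equal weights
    beta = 1/(M*D+1); `subband_mses` holds one MSE per subband, including the
    final approximation subband."""
    mses = np.asarray(subband_mses, dtype=float)
    beta = 1.0 / len(mses)
    mw_mse = np.sum(mses * beta)             # weighted sum of the subband MSEs
    return 10.0 * np.log10(R ** 2 / mw_mse)
```

For instance, mp_psnr(per_level_mse(ref_pyramid, syn_pyramid)) combines the two sketches into a single score, assuming ref_pyramid and syn_pyramid hold the two morphological pyramid representations.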
In this section, the experimental setup for the validation of the proposed morphological multi-scale measures is described. The performances of the two versions of the proposed morphological multi-scale metric, the Morphological Pyramid Peak Signal-to-Noise Ratio measure, MP-PSNR, and the Morphological Wavelet Peak Signal-to-Noise Ratio measure, MW-PSNR, are presented and discussed. Moreover, the PSNR performances by multi-scale decomposition subband are analyzed. It is shown experimentally that PSNR agrees very well with human judgment when it is calculated for the images at higher morphological decomposition scales. Therefore, we propose reduced versions of the morphological multi-scale measures, reduced MP-PSNR and reduced MW-PSNR, using only detail images from higher decomposition scales. The performances of the reduced morphological multi-scale measures are also presented. Since the morphological operators used in the morphological multi-resolution decomposition schemes involve only integers and only max, min, and addition in their computation, the calculation of the morphological multi-resolution decompositions has low computational complexity. The calculation of MSE is also of low computational complexity. Therefore, the calculation of both measures, MP-PSNR and MW-PSNR, is not computationally demanding. To compare the performances of the image quality measures, the following evaluation metrics are used: the root mean squared error between the subjective and objective scores (RMSE), Pearson's correlation coefficient with non-linear mapping between the subjective scores and objective measures (PCC), and Spearman's rank order correlation coefficient (SCC). The calculation of DMOS from the given MOS and the non-linear mapping between the subjective scores and objective measures are done according to the test plan for evaluation of video quality models for use with high definition TV content by the VQEG HDTV group [44]. The performances of the metrics MP-PSNR and MW-PSNR are evaluated using two publicly available databases which contain DIBR-synthesized images: the IRCCyN/IVC DIBR image database [3, 4] and part of the MCL-3D stereoscopic image database [5, 6]. The IRCCyN/IVC DIBR image quality database The IRCCyN/IVC DIBR image quality database contains frames from three multi-view video sequences: Book Arrival (1024 × 768, 16 cameras with 6.5 cm spacing), Lovebird1 (1024 × 768, 12 cameras with 3.5 cm spacing) and Newspaper (1024 × 768, 9 cameras with 5 cm spacing). The selected contents are representative and are also used by MPEG. For each sequence, four virtual views are generated at positions corresponding to those of the real cameras, using seven depth-image-based rendering algorithms, named A1-A7 [45–50]. One key frame from each synthesized sequence is randomly chosen for the database. For these key frames, subjective assessment in the form of mean opinion scores (MOS) is provided. The difference mean opinion score (DMOS) is calculated as the difference between the reference frame's MOS and the synthesized frame's MOS. In the algorithm A1 [45], the depth image is pre-processed by a low-pass filter. The borders are cropped and then the image is interpolated to reach its original size. The algorithm A2 is based on A1, except that the borders are not cropped but inpainted by the method described in [46]. The algorithm A3 [47] uses the inpainting method [46] to fill in the missing parts of the virtual image, which introduces blur in the disoccluded areas. This algorithm was adopted as the reference software for MPEG standardization experiments in the 3D Video group. The algorithm A4 performs a hole-filling method aided by depth information [48]. The algorithm A5 uses patch-based texture synthesis as the hole-filling method [49]. The algorithm A6 uses depth temporal information to improve the synthesis in the disoccluded areas [50].
The frames generated by algorithm A7 contain unfilled holes. Due to very noticeable object-shifting artifacts in the frames generated by algorithm A1, these frames are excluded from the tests. The focus remains on images synthesized using the A2–A7 DIBR algorithms, without a registration procedure for aligning the synthesized and the original frames. The results presented in Sections 4.2–4.4 for the IRCCyN/IVC DIBR database are based on the mixed statistics of the DIBR algorithms A2-A7. The MCL-3D stereoscopic image quality database The part of the stereoscopic image quality database MCL-3D which contains 36 stereo pairs generated using four DIBR algorithms and the associated mean opinion score (MOS) values is used for testing. These stereoscopic image pairs are rendered from nine image-plus-depth sources: Balloons, Kendo and Lovebird1 of resolution 1024 × 728, and Shark, Microworld, Poznan Street, Poznan Hall2, Gt_fly, Undo_dancer of resolution 1920 × 1088. For each source, three views are used for the calculation of the metric score, Fig. 14. The original textures (T1, T2, T3) and their associated depth maps (D1, D2, D3) are obtained by selecting key frames from each of the nine multi-view test sequences with associated depth maps. From the middle view (T2, D2), the stereoscopic image pair (SL, SR) is generated using one of the four DIBR algorithms. The textures from the outer views, (T1, T3), are used as the reference stereo pair. We have calculated the IQA metric score between the DIBR-synthesized stereo pair (SL, SR) and the reference stereo pair (T1, T3). The score for the stereo pair is calculated as the average of the left and right image scores. The generation of DIBR-synthesized stereo images in the MCL-3D database. The DIBR-synthesized stereo pair (SL, SR) is generated from the original view which contains texture image T2 and depth map D2; the reference stereo pair is (T1, T3) In the generation of the MCL-3D database, four DIBR algorithms are used: DIBR with filtering, A1 [45], DIBR with inpainting, A2 [46], DIBR without hole-filling, A7, and DIBR with hierarchical hole-filling (HHF), A8 [51]. HHF uses a pyramid-like approach to estimate the hole pixels from lower-resolution estimates of the 3D-warped image, yielding virtual images that are free of any geometric distortions. By adding a depth-adaptive preprocessing step before applying the hierarchical hole-filling, the edges and texture around the disoccluded areas can be sharpened and enhanced. The results presented in Sections 4.2–4.4 for the MCL-3D database are based on the mixed statistics of the four DIBR algorithms A1, A2, A7, and A8. The original image Shark and the left images from the stereo pairs synthesized using the four DIBR algorithms (A1, A2, A7, A8) are shown in Fig. 15, from top to bottom and from left to right. The image Shark: original and DIBR-synthesized. Original image and the left image of the stereoscopic pair synthesized using DIBR algorithms A1, A2, A7, and A8, from top to bottom, from left to right Analysis of MP-PSNR performances In this section, the performances of the Morphological Pyramid Peak Signal-to-Noise Ratio measure, MP-PSNR, are analyzed. The morphological bandpass pyramid decomposition using the morphological operator erosion for low-pass filtering in the analysis step and the morphological operator dilation for interpolation filtering in the synthesis step (MBP ED) is applied to the reference image and the DIBR-synthesized image.
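Our reading of one MBP ED level is sketched below: erosion with a flat square SE as the morphological low-pass filter followed by dyadic downsampling in the analysis step, dilation of the upsampled approximation as the interpolation filter in the synthesis step, and the bandpass detail image as the difference between the original and the expanded approximation. The zero-insertion upsampling and the exact sampling grid are assumptions; the authors' Matlab code [52] may differ in these details.

```python
import numpy as np
from scipy.ndimage import grey_erosion, grey_dilation

def mbp_ed_level(x, se_size=5):
    """One level of a morphological bandpass erosion/dilation (MBP ED) pyramid (a sketch).

    Analysis: erosion with a flat square SE (morphological low-pass) + dyadic downsampling.
    Synthesis: zero-insertion upsampling + dilation with the same SE (interpolation).
    The bandpass detail image is the difference between the input and the expanded
    approximation. Border handling follows scipy defaults (an assumption).
    """
    x = np.asarray(x, dtype=float)
    # analysis step: morphological low-pass filtering and downsampling
    approx = grey_erosion(x, size=(se_size, se_size))[::2, ::2]
    # synthesis step: upsample the approximation and interpolate by dilation
    up = np.zeros_like(x)
    up[::2, ::2] = approx
    expanded = grey_dilation(up, size=(se_size, se_size))
    # bandpass detail image of this level
    detail = x - expanded
    return approx, detail
```

Applying mbp_ed_level repeatedly to the approximation image yields the full pyramid of detail images plus the coarsest approximation.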
The influence of the size and shape of the structuring element used in the morphological operations, and of the number of decomposition levels in the MBP ED pyramid decomposition, on the MP-PSNR performances is explored. For comparison with the linear case, the MP-PSNR performances are also calculated using a Laplacian pyramid decomposition with linear filters. In addition, the PSNR performances calculated between the images of the two pyramids at different pyramid scales are investigated. A reduced version of MP-PSNR using only lower-resolution images from higher pyramid scales is proposed and its performances are analyzed. The shape and size of the structuring element (SE) used in morphological filtering determine which geometrical features are preserved in the filtered image, and especially the direction of an object's enlargement or shrinking. Using a square structuring element, objects are enlarged or shrunk equally in all directions. A square-shaped structuring element is suitable for detecting straight lines, while a round SE is suitable for detecting circular features. The MP-PSNR performances using different shapes of structuring element (square, round, rhomb and cross type structuring elements, Fig. 16) for morphological filtering in the analysis step are evaluated. Better MP-PSNR performances are achieved with square or round type SEs than with rhomb or cross type SEs. The results are similar for the square and round type structuring elements, but the computational complexity is significantly lower when the square structuring element is used. Namely, in that case a separable pyramid decomposition by rows and columns, with downsampling after each step, can be easily implemented. In the images from the two chosen databases straight lines are dominant, so the square-shaped structuring element is chosen. Structuring elements of size 5 × 5 in different shapes. From left to right: square, round, rhomb, cross Moreover, the impact of the structuring element size used in the morphological operations and of the number of decomposition levels in the MBP ED pyramid decomposition on the MP-PSNR performances is investigated. The MP-PSNR performances are calculated using the MBP ED pyramid decomposition with different numbers of decomposition levels (1–7 for the IRCCyN/IVC DIBR database and 1–8 for the MCL-3D database) and with square structuring elements of different sizes, from 2 × 2 to 13 × 13. More features are removed from the image at each decomposition level as a larger structuring element is used. The number of decomposition levels giving the best MP-PSNR performances depends on the size of the structuring element. The performances of MP-PSNR using SEs of different sizes, with the best number of decomposition levels for each SE size, are shown in the upper part of Table 1. For the IRCCyN/IVC DIBR database, the MP-PSNR performances improve with the enlargement of the structuring element. The MP-PSNR performances are noticeably better for SEs of size 5 × 5 and larger. A Matlab implementation of MP-PSNR is available online [52]. Table 1 Performances of the full and the reduced versions of MP-PSNR In the case of the MCL-3D database, the operation sum is used in the calculation of MP-MSE (9), as better MP-PSNR performances are achieved. For the MCL-3D database, there is only a slight improvement of the MP-PSNR performances with the enlargement of the structuring element. The scatter plot of MP-PSNR using an SE of size 3 × 3 versus MOS for the MCL-3D database is shown in Fig. 17. Each point represents one stereo pair from the database. MCL-3D: scatter plot of MP-PSNR versus MOS. MP-PSNR is based on the MBP ED pyramid in five levels using an SE of size 3 × 3
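For reference, the four structuring-element shapes compared above (Fig. 16) can be generated as simple boolean masks; the exact discretisation of the round and rhomb shapes is an assumption. The masks can be passed, for example, as the footprint argument of scipy.ndimage.grey_erosion in the earlier pyramid sketch.

```python
import numpy as np

def structuring_elements(size=5):
    """Flat structuring elements of the four shapes compared in the text
    (square, round, rhomb, cross), returned as boolean masks of a given odd size."""
    r = size // 2
    y, x = np.mgrid[-r:r + 1, -r:r + 1]
    return {
        "square": np.ones((size, size), dtype=bool),   # all positions included
        "round":  x ** 2 + y ** 2 <= r ** 2,           # discretised disk
        "rhomb":  np.abs(x) + np.abs(y) <= r,          # diamond / L1 ball
        "cross":  (x == 0) | (y == 0),                 # plus-shaped element
    }
```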
For the comparison with the linear case, the image decomposition is performed using a Laplacian pyramid with linear filters. Simple and efficient binomial filters [53] are used as approximations of Gaussian filters. The binomial filter coefficients are taken from Pascal's triangle and normalized by their sum. The two-dimensional filter is implemented as a cascade of one-dimensional filters. The MP-PSNR performances using pyramid decompositions with linear filters are similar for all filter lengths. For the IRCCyN/IVC DIBR database, Pearson's correlation varies from 0.771 for the linear filter of length 2 to 0.799 for the linear filter of length 13. For the MCL-3D database, Pearson's correlation varies from 0.322 for the linear filter of length 2 to 0.377 for the linear filter of length 3. Pearson's correlation coefficients of MP-PSNR versus DMOS for different filter lengths used in the linear pyramid decomposition and for different sizes of SE used in the morphological pyramid decomposition are shown in Fig. 18, left for the IRCCyN/IVC DIBR database and right for the MCL-3D database. The results in Fig. 18 are based on the mixed statistics of the DIBR algorithms A2–A7 for the IRCCyN/IVC DIBR database and A1, A2, A7, A8 for the MCL-3D database. MP-PSNR using the pyramid decomposition with morphological filters has much better performances than MP-PSNR using the pyramid decomposition with linear filters. Pearson's correlation coefficients of MP-PSNR using morphological and linear filters of different lengths versus subjective scores. On the top for the IRCCyN/IVC DIBR database and on the bottom for the MCL-3D database Analysis of PSNR performances by pyramid images It is shown in [54] that better performances of the IQA metrics PSNR and SSIM are achieved when these metrics are calculated for lower-resolution images, after low-pass filtering and downsampling, than for the full-resolution images. The downsampling scale depends on the image size and the viewing distance. We have investigated the PSNR performances for the detail images of the morphological bandpass pyramid at different pyramid scales. The reference image and the DIBR-synthesized image are decomposed into a set of lower-resolution pyramid images using the morphological bandpass erosion/dilation pyramid decomposition. At each pyramid scale, PSNR is calculated between the detail images of the two pyramids, the reference-image pyramid and the DIBR-synthesized-image pyramid. For the IRCCyN/IVC DIBR database, Pearson's correlation coefficients of PSNR versus DMOS for the pyramid images at each pyramid scale, using structuring elements of different sizes, are shown in Fig. 19. The IRCCyN/IVC DIBR database: Pearson's correlation coefficients of pyramid-image PSNR versus DMOS at all pyramid scales. Square structuring elements of different sizes are used in the MBP ED pyramid decomposition The smallest PCC is obtained at the first pyramid scale (\(d_0\)) for all sizes of SE. Higher PCC values are obtained at the middle and high scales. For the morphological pyramid decomposition using SEs of size 2 × 2 and 3 × 3, the highest PCC is at scale 5 (\(d_4\)). For the SE of size 5 × 5, the best PSNR performances are obtained at pyramid scale 4 (\(d_3\)). For the pyramid decomposition with larger SEs, the best PSNR performances are obtained at scale 3, for the detail images \(d_2\).
Also, the PSNR performances at the middle and higher pyramid scales are much better than the performance obtained when PSNR is calculated between the original and the DIBR-synthesized images without decomposition, in the base band. The best PSNR performances by pyramid image for different sizes of SE used in the morphological pyramid decomposition are shown in Table 2. For the morphological pyramid decomposition using an SE of size 3 × 3, the best PSNR performances are achieved for the detail image at pyramid level 5, with a Pearson correlation coefficient of 0.89 and a Spearman correlation coefficient of 0.867. Table 2 The best performances of PSNR by pyramid scale for structuring elements (SE) of different sizes For the MCL-3D database, Pearson's correlation coefficients of PSNR versus MOS for the pyramid images at all pyramid scales using structuring elements of different sizes are shown in Fig. 20. For this database, the differences between the PCC values for pyramid images at different scales are smaller. The smallest PCC is at the first scale (detail images \(d_0\)) and the highest PCC is obtained for the approximation images at the highest scale. The best pyramid-image PSNR performances for different sizes of SE used in the morphological pyramid decomposition are shown in Table 2. The MCL-3D database: Pearson's correlation coefficients of PSNR by pyramid images versus MOS. Square structuring elements of different sizes are used in the MBP ED pyramid decomposition For both databases, it is shown that PSNR agrees very well with human quality judgments when it is calculated at higher scales of the MBP ED pyramid, much better than for the full-resolution images in the base band. A Matlab implementation of PSNR by morphological pyramid images is available online [52]. The performances of the reduced version of MP-PSNR Based on the results of the PSNR performances calculated separately by pyramid scale, we propose a reduced version of MP-PSNR using only the pyramid images with higher PCC values of PSNR towards the subjective scores. The reduced version of MP_MSE is calculated as the weighted sum of the MSEs of the used subbands (9). For the IRCCyN/IVC DIBR database, the reduced version of MP-PSNR is calculated using only the three detail images with the highest PCC values of PSNR towards DMOS. The performances of the reduced versions of MP-PSNR using equal weights are presented in the bottom left part of Table 1. The reduced version of MP-PSNR has better performances than its full version: from 1.74 % when the MBP ED pyramid decomposition with an SE of size 11 × 11 is used, to 6.75 % when the MBP ED pyramid decomposition with an SE of size 3 × 3 is used. The HVS visually integrates image edges in a coarse-to-fine-scale (global-to-local) fashion [34]. Visual cortex cells integrate activity across spatial frequency in an effort to enhance the representation of edges. Because the edges are visually integrated in a coarse-to-fine-scale order, the visual fidelity of an image can be maintained by preserving coarse scales at the expense of fine scales. The reduced version of MP-PSNR is computationally more efficient than its full version, as the MSE is only calculated for the lower-resolution pyramid images. A reliable and fast evaluation is obtained with the reduced version of MP-PSNR using the MBP ED pyramid with an SE of size 5 × 5 (Pearson's 90.39 %, Spearman 86.3 %). The scatter plot of the nonlinearly mapped reduced MP-PSNR versus subjective DMOS for that case is shown in Fig. 21. Each point represents one frame from the database.
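A minimal sketch of the reduced MP-PSNR, following the "weighted sum of the used MSEs" description with equal weights, is given below. The retained scale indices and the 0-based indexing are assumptions made for illustration; in the paper, the selection of scales is driven by the per-scale PCC analysis above.

```python
import numpy as np

def reduced_mp_psnr(level_mses, keep=(3, 4, 5), R=255.0):
    """Reduced MP-PSNR (a sketch): only the pyramid levels listed in `keep`
    (0-based here, an assumption) contribute, with equal weights."""
    selected = np.asarray([level_mses[j] for j in keep], dtype=float)
    reduced_mse = selected.mean()                 # equal-weight sum of the retained MSEs
    return 10.0 * np.log10(R ** 2 / reduced_mse)
```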
A Matlab implementation of the reduced version of MP-PSNR is available online [52]. Fitted scores of reduced MP-PSNR versus DMOS for the IRCCyN/IVC DIBR database. Reduced MP-PSNR is based on pyramid detail images from scales 3–5 of the MBP ED pyramid using an SE of size 5 × 5 For the MCL-3D database, the reduced version of MP-PSNR is calculated without the detail images from the first three pyramid scales when an SE smaller than 7 × 7 is used. When an SE of size 7 × 7 or larger is used, only the pyramid image from the first scale is omitted in the calculation of the reduced version of MP-PSNR. The performances of the reduced versions of MP-PSNR using equal weights are presented in the bottom right part of Table 1. Only a marginal improvement is achieved using the reduced version of MP-PSNR for the MCL-3D database. Analysis of MW-PSNR performances In this section, the performances of the Morphological Wavelet Peak Signal-to-Noise Ratio measure, MW-PSNR, are analyzed. MW-PSNR uses a morphological wavelet decomposition of the reference and the DIBR-synthesized images. Both separable morphological wavelet decompositions, using the morphological Haar min wavelet (minHaar) and the min-lifting wavelet (minLift), and the non-separable morphological wavelet decomposition with quincunx sampling using the min-lifting wavelet (minLiftQ) are investigated. Separable morphological wavelet decompositions are computationally less expensive than non-separable wavelet decompositions. They are also less expensive than morphological pyramid decompositions for the same filter length. The influence of the number of wavelet decomposition levels on the MW-PSNR performances is explored. For comparison with linear wavelet decompositions, the MW-PSNR performances are also calculated using separable linear wavelet decompositions with the Haar wavelet (Haar) and the Cohen-Daubechies-Feauveau wavelet cdf(2,2), and a non-separable linear wavelet decomposition with quincunx sampling, cdf(2,2)Q. The PSNR performances calculated by wavelet subband through the decomposition scales are investigated. A reduced version of MW-PSNR using only the wavelet subbands with better PSNR performances is analyzed. The number of decomposition levels has been varied between 1 and 8 and the configurations with the best MW-PSNR performances have been chosen. The best MW-PSNR performances have been achieved using separable wavelet transformations in M = 7 levels, producing 22 subbands. Using the non-separable wavelet transformation with quincunx sampling, for the IRCCyN/IVC DIBR database the best MW-PSNR performances have also been achieved with M = 7 levels, producing 15 subbands. For the MCL-3D database, the best MW-PSNR performances using the non-separable wavelet transformation have been achieved with M = 4 levels, producing nine subbands. Equal weights are used in the calculation of MW-MSE (11). A Matlab implementation of MW-PSNR is available online [55]. The performances of MW-PSNR for the different wavelet transformations are presented in the upper part of Table 3. The performances of MW-PSNR using morphological wavelet transforms are better than the performances of MW-PSNR using linear wavelet transforms. The best MW-PSNR performances have been obtained using the separable wavelet decomposition with the morphological Haar wavelet, which has the lowest computational complexity: for the IRCCyN/IVC DIBR database, Pearson 0.85 and Spearman 0.77, and for the MCL-3D database, Pearson 0.87 and Spearman 0.70.
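To illustrate how the separable decompositions discussed here are built from the 1D lifting steps, the sketch below applies a 1D forward step along rows and then along columns, producing one approximation subband and D = 3 oriented detail subbands per level. Even image dimensions are assumed, and the labelling of the detail orientations is a convention rather than something specified in the text.

```python
import numpy as np

def separable_2d_level(im, lift_1d):
    """One level of a separable 2D wavelet decomposition built from a 1D lifting step,
    e.g. min_haar_forward or min_lift_forward from the earlier sketch.

    The 1D transform is applied to every row, then to every column of the two
    resulting bands, giving the approximation subband and three detail subbands.
    Assumes even image dimensions.
    """
    im = np.asarray(im, dtype=float)

    def along_rows(a):
        lo = np.stack([lift_1d(row)[0] for row in a])   # per-row approximations
        hi = np.stack([lift_1d(row)[1] for row in a])   # per-row details
        return lo, hi

    lo, hi = along_rows(im)                       # transform rows
    ll, lh = (b.T for b in along_rows(lo.T))      # transform columns of the low band
    hl, hh = (b.T for b in along_rows(hi.T))      # transform columns of the high band
    return ll, (lh, hl, hh)                       # approximation + 3 oriented detail subbands
```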
The scatter plot of MW-PSNR using the separable wavelet decomposition with the morphological Haar wavelet versus MOS for the MCL-3D database is shown in Fig. 22. Table 3 Performances of the full and reduced versions of MW-PSNR The MCL-3D database: scatter plot of MW-PSNR versus MOS. MW-PSNR is based on the separable wavelet decomposition with the morphological Haar wavelet in seven levels Analysis of PSNR performances by wavelet subbands We have investigated the PSNR performances by wavelet subband at different wavelet decomposition scales. The reference image and the DIBR-synthesized image are decomposed into sets of lower-resolution subbands using a morphological wavelet decomposition. At each decomposition scale, for each wavelet subband, PSNR is calculated between the corresponding subbands of the two wavelet representations, the reference-image wavelet representation and the DIBR-synthesized-image wavelet representation. Pearson's correlation coefficient (PCC) of PSNR to the subjective scores is calculated for each subband for three types of morphological wavelets: minHaar, minLift and minLiftQ. A Matlab implementation of PSNR by morphological wavelet subbands is available online [55]. For the IRCCyN/IVC DIBR database, Fig. 23, Pearson's correlation coefficients calculated for the wavelet subbands at decomposition levels 4–7 are higher than Pearson's correlation coefficients calculated for the wavelet subbands at decomposition levels 1–3. For the MCL-3D database, smaller differences between Pearson's correlation coefficients across wavelet subbands can be noticed, Fig. 24. The IRCCyN/IVC DIBR database: Pearson's correlation coefficients of PSNR versus DMOS by wavelet subbands. Morphological wavelets minHaar, minLift, and minLiftQ are used The MCL-3D database: Pearson's correlation coefficients of PSNR by wavelet subbands versus MOS. Morphological wavelets minHaar, minLift, and minLiftQ are used Moreover, the best PSNR performances by wavelet subband for each wavelet decomposition are shown in Table 4. For instance, for the IRCCyN/IVC DIBR database and the separable wavelet decomposition using the morphological minLift wavelet, the best PSNR performances are obtained for the subband at scale 6 with vertical details (d61), with PCC 0.887 and SCC 0.828. Also, for all tested wavelets, the PSNR of the wavelet subband with the highest PCC shows much better performances than the PSNR calculated between the reference image and the DIBR-synthesized image without decomposition, in the base band. Table 4 The best performances of PSNR by wavelet subbands for each wavelet Analysis of the reduced version MW-PSNR performances Based on the PSNR performances by subband for the IRCCyN/IVC DIBR database given in Fig. 23, it can be concluded that the PSNR performances of the wavelet subbands at decomposition levels 4–7 are much better than the subband PSNR performances at levels 1–3. Therefore, we propose a reduced version of MW-PSNR using only these higher-level subbands. The reduced version of MW_MSE is calculated as a weighted sum of the MSEs of the used subbands. For the separable wavelet decomposition, the reduced version of MW-PSNR is calculated using only 11 subbands from levels 4–7, with indices 41–72. For the non-separable wavelet decomposition with quincunx sampling, the reduced version of MW-PSNR is calculated using 6 subbands from decomposition levels 4–7, with indices 42–71. A Matlab implementation of the reduced version of MW-PSNR is available online [55]. The performances of the reduced MW-PSNR are presented in the bottom left part of Table 3.
It is shown that, for each wavelet type, the performances of the reduced version of MW-PSNR are better than the performances of the full version: by 3.1 % for minHaar, 1.43 % for minLift and 3.08 % for minLiftQ. The best reduced-version MW-PSNR performances are obtained using the separable wavelet decomposition with the morphological minHaar wavelet, with Pearson's 88.5 % and Spearman 82.98 %. The scatter plot of the nonlinearly mapped reduced MW-PSNR versus subjective DMOS for that case is shown in Fig. 25. The IRCCyN/IVC DIBR database: fitted scores of reduced MW-PSNR versus MOS. The reduced version of MW-PSNR is based on wavelet subbands from decomposition levels 4–7; the morphological wavelet decomposition using the minHaar wavelet in seven levels is used For the MCL-3D database, only a marginal improvement is achieved using the reduced version of MW-PSNR, Table 3 bottom right. Summary of the results The performances of the selected proposed metrics, the commonly used 2D image quality assessment metrics and the metric dedicated to synthesis-related artifacts, 3DswIM [13], are presented in Table 5. The considered commonly used 2D metrics are: PSNR, the universal quality index UQI [56], the structural similarity index SSIM [57], the multi-scale structural similarity MS-SSIM [19], the information-weighted IW-PSNR [20], and IW-SSIM [20]. The single-scale structural similarity SSIM [57] is calculated between the original and the synthesized images using the provided Matlab code [58]. 3DswIM [13] is calculated using the provided Matlab p-code [59]. The selected versions of the proposed metrics using morphological pyramid decompositions presented in Table 5 are: PSNR calculated at scale 5 of the MBP ED pyramid representations using an SE of size 3 × 3; the reduced version of MP-PSNR using an SE of size 5 × 5 in the MBP ED pyramid decomposition; and the full version of MP-PSNR using an SE of size 5 × 5. The selected proposed metrics using morphological wavelet decompositions shown in Table 5 are: PSNR calculated at scale 6 between the wavelet subbands with vertical details of the two wavelet representations using the minLift wavelet for the IRCCyN/IVC DIBR database, and PSNR calculated at scale 7 between the approximation wavelet subbands using the minLift wavelet for the MCL-3D database; and the reduced and full versions of MW-PSNR using the minHaar wavelet. The performances of the proposed metrics are much better than the performances of the commonly used 2D metrics and better than the performances of the metric dedicated to synthesis-related artifacts, 3DswIM. The Pearson's correlation coefficients of the selected commonly used 2D metrics, the metric dedicated to synthesis-related artifacts, 3DswIM, and the reduced versions of MP-PSNR and MW-PSNR are shown in Fig. 26. Table 5 Performances of the selected proposed metrics and other metrics Pearson's correlation coefficients of the proposed metrics and other metrics versus subjective scores. Top, for the IRCCyN/IVC DIBR database, and bottom, for the MCL-3D database Most depth-image-based rendering (DIBR) techniques produce images which contain non-uniform geometric distortions affecting edge coherency. This type of distortion is challenging for common image quality assessment (IQA) metrics. We propose a full-reference metric based on multi-scale decomposition using morphological filters in order to better deal with the specific geometric distortions in DIBR-synthesized images. The non-linear morphological filters introduced into the multi-resolution image decomposition maintain important geometric information, such as edges, across the different resolution scales.
The proposed metric is dedicated to artifact detection in DIBR-synthesized images by measuring the edge distortion between the multi-scale representations of the reference image and the DIBR-synthesized image using MSE. We have explored two versions of the morphological multi-scale metric: the Morphological Pyramid Peak Signal-to-Noise Ratio measure, MP-PSNR, based on morphological pyramid decomposition, and the Morphological Wavelet Peak Signal-to-Noise Ratio measure, MW-PSNR, based on morphological wavelet decomposition. The proposed metrics are evaluated using two databases which contain images synthesized by DIBR algorithms: the IRCCyN/IVC DIBR image database and the MCL-3D stereoscopic image database. Both metric versions demonstrate a large performance improvement over standard IQA metrics and over the tested metric dedicated to synthesis-related artifacts. They also have much better performances than their linear counterparts for the evaluation of DIBR-synthesized images. MP-PSNR has slightly better performances than MW-PSNR. For the MCL-3D database, MP-PSNR achieves Pearson 0.888 and Spearman 0.756 using the MBP ED pyramid decomposition with a square structuring element of size 7 × 7 in 4 levels. For the same database, MW-PSNR achieves Pearson 0.87 and Spearman 0.707 using the separable wavelet decomposition with the morphological Haar wavelet in 7 levels. It is shown that PSNR has particularly good agreement with human judgment when it is calculated between the appropriate detail images at higher decomposition scales of the two morphological multi-scale image representations. For the IRCCyN/IVC DIBR image database, PSNR calculated at scale 5 of the MBP ED pyramid image representations using a structuring element of size 3 × 3 has very good performances, Pearson's 0.89 and Spearman 0.86. For the MCL-3D database, PSNR calculated at scale 4 of the MBP ED pyramid image representations using a square structuring element of size 7 × 7 achieves Pearson's 0.88 and Spearman 0.82. For the IRCCyN/IVC DIBR image database, it has been shown that the reduced versions of the multi-scale metrics, reduced MP-PSNR and reduced MW-PSNR, can be used for the assessment of DIBR-synthesized frames with high reliability. The reduced version of MP-PSNR using the morphological pyramid decomposition MBP ED with a square structuring element of size 5 × 5 achieves a 15.2 % improvement in correlation over PSNR (Pearson's 0.904, Spearman 0.863), and the reduced version of MW-PSNR using the morphological wavelet decomposition with the minHaar wavelet gains a 13.3 % improvement in correlation over PSNR (Pearson's 0.885, Spearman 0.829). Since the morphological operators involve only integers and only min, max, and addition in their computation, and the calculation of MSE is simple, the multi-scale metrics MP-PSNR and MW-PSNR are computationally efficient. They provide reliable DIBR-synthesized image quality assessment even without any parameter optimization or a precise registration procedure.
cdf(2,2), linear, biorthogonal (2,2) of Cohen-Daubechies-Feauveau wavelet; cdf(2,2)Q, non-separable linear (2,2) of Cohen-Daubechies-Feauveau wavelet with quincunx sampling; DIBR, Depth-Image-Based Rendering; DMOS, Difference Mean Opinion Score; FVV, Free Viewpoint Video; Haar, Haar wavelet transformation; IQA, Image Quality Assessment; MBP ED, morphological bandpass pyramid erosion/ dilation; minHaar, morphological Haar min wavelet transformation; minLift, morphological min-lifting wavelet transformation; minLiftQ, non-separable morphological min-lifting wavelet transformation with quincunx sampling; MOS, Mean Opinion Score; MP-PSNR, Morphological Pyramid Peak Signal-to-Noise Ratio metric; MSE, Mean Squared Error; MW-PSNR, Morphological Wavelet Peak Signal-to- Noise ratio metric; PCC, Pearson's Correlation Coefficient; PSNR, Peak Signal-to-Noise ratio; SE, structuring element for morphological operations K Mueller, P Merkle, T Wiegand, 3D video representation using depth maps. Proc. IEEE 99(4), 643–656 (2011) E Bosc, P Le Callet, L Morin, M Pressigout, Visual quality assessment of synthesized views in the context of 3DTV, in 3D-TV system with depth-image-based rendering, ed. by C Zhu, Y Zhao, L Yu, M Tanimoto (Springer, New York, 2013), pp. 439–473 E Bosc, R Pepion, P Le Callet, M Koppel, P Ndjiki-Nya, M Pressigout, L Morin, Towards a new quality metric for 3-d synthesized view assessment. IEEE Journal on Selected Topics in Signal Processing 5(7), 1332–1343 (2011) IRCCyN/IVC DIBR image quality database. ftp://ftp.ivc.polytech.univ-nantes.fr/IRCCyN_IVC_DIBR_Images R Song, H Ko, CCJ Kuo, MCL-3D: a database for stereoscopic image quality assessment using 2D-image-plus-depth source, 2014. http://arxiv.org/abs/1405.1403 MCL-3D stereoscopic image quality database. http://mcl.usc.edu/mcl-3d-database E. Adelson, E. Simoncelli, W. Freeman, Pyramids and multiscale representations. Proc. European Conf. on Visual Perception, Paris (1990) P Maragos, R Schafer, Morphological systems for multidimensional signal processing. Proc. IEEE 78(4), 690–710 (1990) A Toet, A morphological pyramidal image decomposition. Pattern Recogn. Lett. 9(4), 255–261 (1989) H Heijmans, J Goutsias, Multiresolution signal decomposition schemes-Part II: morphological wavelets. IEEE Trans. Image Process. 9(11), 1897–1913 (2000) X Liu, Y Zhang, S Hu, S Kwong, CCJ Kuo, Q Peng, Subjective and objective video quality assessment of 3D synthesized views with texture/depth compression distortion. IEEE Trans. Image Process. 24(12), 4847–4861 (2015) P. Conze, P. Robert, L. Morin, Objective view synthesis quality assessment. Proc. SPIE 8288, Stereoscopic Displays and Applications XXIII (2012) F Battisti, E Bosc, M Carli, P Le Callet, S Perugia, Objective image quality assessment of 3D synthesized views. Elsevier Signal Processing: Image Communication. 30(1), 78–88 (2015) E Bosc, P Le Callet, L Morin, M Pressigout, An edge-based structural distortion indicator for the quality assessment of 3D synthesized views, Picture Coding Symposium, 2012, pp. 249–252 M. Solh, G. AlRegib, J.M. Bauza, 3VQM: A 3D video quality measure, 3VQM: a vision-based quality measure for DIBR-based 3D videos, IEEE Int. Conf. on Multimedia and Expo (ICME) (2011) M. Solh, G. AlRegib, J.M. Bauza, A no reference quality measure for DIBR based 3D videos, IEEE Int. Conf. on Multimedia and Expo (ICME) (2011) CT Tsai, HM Hang, Quality assessment of 3D synthesized views with depth map distortion, visual communications and image processing (VCIP), 2013 M.S. Farid, M. 
Lucenteforte, M. Grangetto, Objective quality metric for 3d virtual views, IEEE Int. Conf. on Image Processing (ICIP) (2015) Z. Wang, E. Simoncelli, A.C. Bovik, Multi-scale structural similarity for image quality assessment. Asilomar Conference on Signals, Systems and Computers (2003) Z Wang, Q Li, Information content weighting for perceptual image quality assessment. IEEE Trans. On Image Processing 20(5), 1185–1198 (2011) PJ Burt, EH Adelson, The Laplacian pyramid as a compact image code. IEEE Trans. on Communications 31(4), 532–540 (1983) Z. Wang, E. Simoncelli, Translation insensitive image similarity in complex wavelet domain. Proc. IEEE Int. Conf. on Acoustics, Speech and Signal processing, 573–576 (2005) Y.K. Lai, C.C. Jay, Kuo, Image quality measurement using the Haar wavelet. Proc. SPIE 3169, Wavelet Applications in Signal and Image Processing V, 127 (1997) S Rezazadeh, S Coulombe, A novel wavelet domain error-based image quality metric with enhanced perceptual performance. Int. J. Comput. Electrical Eng. 4(3), 390–395 (2012) X Gao, W Lu, D Tao, X Li, Image quality assessment based on multiscale geometric analysis. IEEE Trans. Image Process. 18(7), 1409–1423 (2009) E. Adelson, C. Anderson, J. Bergen, P. Burt, J. Ogden, Pyramid methods in image processing. RCA Engineer (1984) S Mallat, Wavelets for a vision. Proc. IEEE 84(4), 604–614 (1996) F Meyer, P Maragos, Nonlinear scale-space representation with morphological levelings. J. Vis. Commun. Image Represent. 11, 245–265 (2000) G Matheron, Random sets and integral geometry (Wiley, New York, 1975) J Serra, Introduction to mathematical morphology. J. on Comput. Vision, Graph. Image Process. 35(3), 283–305 (1986) J Goutsias, H Heijmans, Nonlinear multiresolution signal decomposition schemes—Part I: morphological pyramids. IEEE Trans. Image Process. 9(11), 1862–1876 (2000) D. Sandić-Stanković, Multiresolution decomposition using morphological filters for 3D volume image decorrelation. European Signal Processing Conf. EUSIPCO, Barcelona (2011) H. Heijmans, J. Goutsias, Some thoughts on morphological pyramids and wavelets. European Signal Processing Conf. EUSIPCO, Rodos (1998) D Chandler, S Hemami, VSNR: a wavelet-based visual signal-to-noise ratio for natural images. IEEE Trans. Image Process. 16(9), 2284–2298 (2007) H. Heijmans, J. Goutsias, Constructing morphological wavelets with the lifting scheme, Int. Conf. on Pattern Recognition and Information Processing, Belarus, 65–72 (1999) S Mallat, Multifrequency channel decompositions of images and wavelet models. IEEE Trans. on Acoustics, Speech and. Signal Process. 37(12), 2091–2110 (1989) H Heijmans, J Goutsias, Multiresolution signal decomposition schemes Part2: morphological wavelets. Tech. Rep. PNA-R9905 (CWI, Amsterdam, The Netherlands, 1999) I Daubechies, W Sweldens, Factoring wavelet transforms into lifting steps. J. Fourier Anal. Appl. 4(3), 247–269 (1998) J Kovacevic, M Vetterli, Nonseparable two- and three-dimensional wavelets. IEEE Trans. on Signal Processing 43(5), 1269–1273 (1995) H. Heijmans, J. Goutsias, Morphological pyramids and wavelets based on the quincunx lattice. in Mathematical morphology and its applications to image and signal processing, ed. by J Goutsias, L Vincent, D Bloomberg, (Springer US, 2000), 273–281 G. Uytterhoeven, A. Bultheel, The red-black wavelet transform. Proc. of IEEE Benelux Signal Processing Symposium (1997) Z Wang, A Bovik, Mean squared error: love it or leave it. IEEE Signal Process. Mag. 26(1), 98–117 (2009) Z. Wang, A. Bovik, L. 
Lu, Why is image quality assessment so difficult. IEEE Int. Conf. on Acoustics, Speech and Signal Processing (ASSP), 4, 3313–3316, Orlando FL, US (2002) VQEG HDTV Group, Test plan for evaluation of video quality models for use with high definition tv content, 2009 C. Fehn, Depth image based rendering (DIBR), compression and transmission for a new approach on 3D-TV. Proc. SPIE, Stereoscopic Displays and Applications XV, 5291, 93–104, San Jose, CA (2004) A Telea, An image inpainting technique based on the fast matching method. J. Graph, GPU and Game Tools 9(1), 23–34 (2004) Y Mori, N Fukushima, T Yendo, T Fujii, M Tanimoto, View generation with 3D warping using depth information for FTV. Signal Process. Image Commun. 24(1–2), 65–72 (2009) K Muller, A Smolic, K Dix, P Merkle, P Kauff, T Wiegand, View synthesis for advanced 3D video systems. EURASIP Journal on Image and Video Processing 2008, 438148 (2008) P. Ndjiki-Nya, P. Koppel, M. Doshkov, H. Lakshman, P. Merkle, K. Muller, T. Wiegand, Depth image based rendering with advanced texture synthesis. IEEE Int. Conf. on Multimedia&Expo, 424–429, Suntec City (2010) M. Koppel, P. Ndjiki-Nya, M. Doshkov, H. Lakshman, P. Merkle, K. Muller, T. Wiegand, Temporally consistent handling of disocclusions with texture synthesis for depth-image-based rendering. IEEE Int. Conf. on Image Processing, 1809–1812, Hong Kong (2010) M Solh, G AlRegib, Depth adaptive hierarchical hole filling for DIBR-based 3D videos, Proceedings of SPIE, 8290, 829004 (Burlingame, CA, US, 2012) MP-PSNR matlab p-code. https://sites.google.com/site/draganasandicstankovic/code/mp-psnr M Aubury, W Luk, Binomial filters. Journal of VLSI Signal Processing for Signal, Image and Video Technology 12(1), 35–50 (1995) K Gu, M Liu, G Zhai, X Yang, W Zhang, Quality assessment considering viewing distance and image resolution. IEEE Trans. On Broadcasting 61(3), 520–531 (2015) MW-PSNR matlab p-code. https://sites.google.com/site/draganasandicstankovic/code/mw-psnr Z Wang, AC Bovik, A universal image quality index. IEEE Signal Processing Letters 9(3), 81–84 (2002) Z Wang, AC Bovik, HR Sheikh, E Simoncelli, Image quality assessment: from error visibility to structural similarity. IEEE Trans. Image Process. 13(4), 600–612 (2004) SSIM matlab code. https://ece.uwaterloo.ca/~z70wang/research/ssim/ssim_index.m 3DSwIM matlab p-code. http://www.comlab.uniroma3.it/3DSwIM.html Institute for Telecommunications and Electronics IRITEL, Belgrade, Serbia Dragana Sandić-Stanković Faculty of Technical Sciences, University of Novi Sad, Novi Sad, Serbia Dragan Kukolj Ecole polytechnique de l'Universite de Nantes, IRCCyN Lab, Nantes, France Patrick Le Callet Correspondence to Dragana Sandić-Stanković. This work was partially supported by COST Action IC1105-3D ConTourNet, the Ministry of Education, Science and Technological Development of the Republic of Serbia under Grant TR-32034 and by the Secretary of Science and Technology Development of the Province of Vojvodina under Grant 114-451-813/2015-03. DSS proposed the framework of this work, carried out the whole experiments, and drafted the manuscript. DK supervised the whole work, offered useful suggestions, and helped to modify the manuscript. PLC participated in the discussion of this work and helped to polish the manuscript. All authors read and approved the final manuscript. Sandić-Stanković, D., Kukolj, D. & Le Callet, P. DIBR-synthesized image quality assessment based on morphological multi-scale approach. J Image Video Proc. 2017, 4 (2016). 
https://doi.org/10.1186/s13640-016-0124-7 Keywords: DIBR-synthesized image quality assessment; Multi-scale IQA metric using morphological operations; Geometric distortions; Morphological pyramid; Morphological wavelets
Publications of the Astronomical Society of Australia (3) Cardiology in the Young (1) AEPC Association of European Paediatric Cardiology (1) The Evolutionary Map of the Universe Pilot Survey – ADDENDUM Ray P. Norris, Joshua Marvil, J. D. Collier, Anna D. Kapińska, Andrew N. O'Brien, L. Rudnick, Heinz Andernach, Jacobo Asorey, Michael J. I. Brown, Marcus Brüggen, Evan Crawford, Jayanne English, Syed Faisal ur Rahman, Miroslav D. Filipović, Yjan Gordon, Gülay Gürkan, Catherine Hale, Andrew M. Hopkins, Minh T. Huynh, Kim HyeongHan, M. James Jee, Bärbel S. Koribalski, Emil Lenc, Kieran Luken, David Parkinson, Isabella Prandoni, Wasim Raja, Thomas H. Reiprich, Christopher J. Riseley, Stanislav S. Shabala, Jaimie R. Sheil, Tessa Vernstrom, Matthew T. Whiting, James R. Allison, C. S. Anderson, Lewis Ball, Martin Bell, John Bunton, T. J. Galvin, Neeraj Gupta, Aidan Hotan, Colin Jacka, Peter J. Macgregor, Elizabeth K. Mahony, Umberto Maio, Vanessa Moss, M. Pandey-Pommier, Maxim A. Voronkov Journal: Publications of the Astronomical Society of Australia / Volume 39 / 2022 Published online by Cambridge University Press: 02 November 2022, e055 The ASKAP Variables and Slow Transients (VAST) Pilot Survey Australian SKA Pathfinder Tara Murphy, David L. Kaplan, Adam J. Stewart, Andrew O'Brien, Emil Lenc, Sergio Pintaldi, Joshua Pritchard, Dougal Dobie, Archibald Fox, James K. Leung, Tao An, Martin E. Bell, Jess W. Broderick, Shami Chatterjee, Shi Dai, Daniele d'Antonio, Gerry Doyle, B. M. Gaensler, George Heald, Assaf Horesh, Megan L. Jones, David McConnell, Vanessa A. Moss, Wasim Raja, Gavin Ramsay, Stuart Ryder, Elaine M. Sadler, Gregory R. Sivakoff, Yuanming Wang, Ziteng Wang, Michael S. Wheatland, Matthew Whiting, James R. Allison, C. S. Anderson, Lewis Ball, K. Bannister, D. C.-J. Bock, R. Bolton, J. D. Bunton, R. Chekkala, A. P Chippendale, F. R. Cooray, N. Gupta, D. B. Hayman, K. Jeganathan, B. Koribalski, K. Lee-Waddell, Elizabeth K. Mahony, J. Marvil, N. M. McClure-Griffiths, P. Mirtschin, A. Ng, S. Pearce, C. Phillips, M. A. Voronkov Published online by Cambridge University Press: 12 October 2021, e054 The Variables and Slow Transients Survey (VAST) on the Australian Square Kilometre Array Pathfinder (ASKAP) is designed to detect highly variable and transient radio sources on timescales from 5 s to $\sim\!5$ yr. In this paper, we present the survey description, observation strategy and initial results from the VAST Phase I Pilot Survey. This pilot survey consists of $\sim\!162$ h of observations conducted at a central frequency of 888 MHz between 2019 August and 2020 August, with a typical rms sensitivity of $0.24\ \mathrm{mJy\ beam}^{-1}$ and angular resolution of $12-20$ arcseconds. There are 113 fields, each of which was observed for 12 min integration time, with between 5 and 13 repeats, with cadences between 1 day and 8 months. The total area of the pilot survey footprint is 5 131 square degrees, covering six distinct regions of the sky. An initial search of two of these regions, totalling 1 646 square degrees, revealed 28 highly variable and/or transient sources. Seven of these are known pulsars, including the millisecond pulsar J2039–5617. Another seven are stars, four of which have no previously reported radio detection (SCR J0533–4257, LEHPM 2-783, UCAC3 89–412162 and 2MASS J22414436–6119311). Of the remaining 14 sources, two are active galactic nuclei, six are associated with galaxies and the other six have no multi-wavelength counterparts and are yet to be identified. 
The Evolutionary Map of the Universe pilot survey Published online by Cambridge University Press: 07 September 2021, e046 We present the data and initial results from the first pilot survey of the Evolutionary Map of the Universe (EMU), observed at 944 MHz with the Australian Square Kilometre Array Pathfinder (ASKAP) telescope. The survey covers $270 \,\mathrm{deg}^2$ of an area covered by the Dark Energy Survey, reaching a depth of 25–30 $\mu\mathrm{Jy\ beam}^{-1}$ rms at a spatial resolution of $\sim$ 11–18 arcsec, resulting in a catalogue of $\sim$ 220 000 sources, of which $\sim$ 180 000 are single-component sources. Here we present the catalogue of single-component sources, together with (where available) optical and infrared cross-identifications, classifications, and redshifts. This survey explores a new region of parameter space compared to previous surveys. Specifically, the EMU Pilot Survey has a high density of sources, and also a high sensitivity to low surface brightness emission. These properties result in the detection of types of sources that were rarely seen in or absent from previous surveys. We present some of these new results here. Outcomes of an electronic medical record (EMR)–driven intensive care unit (ICU)-antimicrobial stewardship (AMS) ward round: Assessing the "Five Moments of Antimicrobial Prescribing" Misha Devchand, Andrew J. Stewardson, Karen F. Urbancic, Sharmila Khumra, Andrew A. Mahony, Steven Walker, Kent Garrett, M. Lindsay Grayson, Jason A. Trubiano Journal: Infection Control & Hospital Epidemiology / Volume 40 / Issue 10 / October 2019 Published online by Cambridge University Press: 13 August 2019, pp. 1170-1175 Print publication: October 2019 The primary objective of this study was to examine the impact of an electronic medical record (EMR)–driven intensive care unit (ICU) antimicrobial stewardship (AMS) service on clinician compliance with face-to-face AMS recommendations. AMS recommendations were defined by an internally developed "5 Moments of Antimicrobial Prescribing" metric: (1) escalation, (2) de-escalation, (3) discontinuation, (4) switch, and (5) optimization. The secondary objectives included measuring the impact of this service on (1) antibiotic appropriateness, and (2) use of high-priority target antimicrobials. A prospective review was undertaken of the implementation and compliance with a new ICU-AMS service that utilized EMR data coupled with face-to-face recommendations. Additional patient data were collected when an AMS recommendation was made. The impact of the ICU-AMS round on antimicrobial appropriateness was evaluated using point-prevalence survey data. For the 202 patients, 412 recommendations were made in accordance with the "5 Moments" metric. The most common recommendation made by the ICU-AMS team was moment 3 (discontinuation), which comprised 173 of 412 recommendations (42.0%), with an acceptance rate of 83.8% (145 of 173). Data collected for point-prevalence surveys showed an increase in prescribing appropriateness from 21 of 45 (46.7%) preintervention (October 2016) to 30 of 39 (76.9%) during the study period (September 2017). The integration of EMR with an ICU-AMS program allowed us to implement a new AMS service, which was associated with high clinician compliance with recommendations and improved antibiotic appropriateness. Our "5 Moments of Antimicrobial Prescribing" metric provides a framework for measuring AMS recommendation compliance. 
Cardiac performance and quality of life in patients who have undergone the Fontan procedure with and without prior superior cavopulmonary connection Andrew M. Atz, Thomas G. Travison, Brian W. McCrindle, Lynn Mahony, Andrew C. Glatz, Aditya K. Kaza, Roger E. Breitbart, Steven D. Colan, Jonathan R. Kaltman, Renee Margossian, Sara K. Pasquali, Yanli Wang, Welton M. Gersony Journal: Cardiology in the Young / Volume 23 / Issue 3 / June 2013 A superior cavopulmonary connection is commonly performed before the Fontan procedure in patients with a functionally univentricular heart. Data are limited regarding associations between a prior superior cavopulmonary connection and functional and ventricular performance late after the Fontan procedure. We compared characteristics of those with and without prior superior cavopulmonary connection among 546 subjects enrolled in the Pediatric Heart Network Fontan Cross-Sectional Study. We further compared different superior cavopulmonary connection techniques: bidirectional cavopulmonary anastomosis (n equals 229), bilateral bidirectional cavopulmonary anastomosis (n equals 39), and hemi-Fontan (n equals 114). A prior superior cavopulmonary connection was performed in 408 subjects (75%); the proportion differed by year of Fontan surgery and centre (p-value less than 0.0001 for each). The average age at Fontan was similar, 3.5 years in those with superior cavopulmonary connection versus 3.2 years in those without (p-value equals 0.4). The type of superior cavopulmonary connection varied by site (p-value less than 0.001) and was related to the type of Fontan procedure. Exercise performance, echocardiographic variables, and predominant rhythm did not differ by superior cavopulmonary connection status or among superior cavopulmonary connection types. Using a test of interaction, findings did not vary according to an underlying diagnosis of hypoplastic left heart syndrome. After controlling for subject and era factors, most long-term outcomes in subjects with a prior superior cavopulmonary connection did not differ substantially from those without this procedure. The type of superior cavopulmonary connection varied significantly by centre, but late outcomes were similar.
Applied Network Science
Evolving network representation learning based on random walks
Farzaneh Heidari & Manos Papagelis
Applied Network Science volume 5, Article number: 18 (2020)
Large-scale network mining and analysis is key to revealing the underlying dynamics of networks, not easily observable before. Lately, there is a fast-growing interest in learning low-dimensional continuous representations of networks that can be utilized to perform highly accurate and scalable graph mining tasks. A family of these methods is based on performing random walks on a network to learn its structural features and providing the sequence of random walks as input to a deep learning architecture to learn a network embedding. While these methods perform well, they can only operate on static networks. However, in the real world, networks are evolving, as nodes and edges are continuously added or deleted. As a result, any previously obtained network representation will now be outdated, having an adverse effect on the accuracy of the network mining task at stake. The naive approach to address this problem is to re-apply the embedding method of choice every time there is an update to the network. But this approach has serious drawbacks. First, it is inefficient, because the embedding method itself is computationally expensive. Then, the network mining task outcomes obtained from the subsequent network representations are not directly comparable to each other, due to the randomness involved in each new set of random walks. In this paper, we propose EvoNRL, a random-walk based method for learning representations of evolving networks. The key idea of our approach is to first obtain a set of random walks on the current state of the network. Then, while changes occur in the evolving network's topology, we dynamically update the random walks in reserve, so that they do not introduce any bias. That way, we are in a position to utilize the updated set of random walks to continuously learn accurate mappings from the evolving network to a low-dimensional network representation. Moreover, we present an analytical method for determining the right time to obtain a new representation of the evolving network that balances accuracy and time performance. A thorough experimental evaluation is performed that demonstrates the effectiveness of our method against sensible baselines and under varying conditions.
Network science, built on the mathematics of graph theory, leverages network structures to model and analyze pairwise relationships between objects (or people) (Newman 2003). With a growing number of networks (social, technological, biological) becoming available and representing an ever-increasing amount of information, the ability to easily and effectively perform large-scale network mining and analysis is key to revealing the underlying dynamics of these networks, not easily observable before. Traditional approaches to network mining and analysis have a number of limitations. First, networks are typically represented as adjacency matrices, which suffer from high-dimensionality and data sparsity issues. Then, network analysis typically requires domain knowledge in order to carry out the various steps of network data modeling and processing involved, before (multiple iterations of) analysis can take place.
An ineffective network representation, along with the requirement for domain expertise, renders the whole process of network mining cumbersome for non-experts and limits its applicability to smaller networks. To address the aforementioned limitations, there is a growing interest in learning low-dimensional representations of networks, also known as network embeddings. These representations are learned in an agnostic way (without domain expertise) and have the potential to improve the performance of many downstream network mining tasks that now only need to operate in lower dimensions. Example tasks include node classification, link prediction and graph reconstruction (Wang et al. 2016), to name a few. Network representation learning methods are typically based on either a graph factorization or a random-walk based approach. The graph factorization ones (e.g., GraRep (Cao et al. 2015), TADW (Yang et al. 2015), HOPE (Ou et al. 2016)) are known to be memory intensive and computationally expensive, so they do not scale well. On the other hand, random-walk based methods (e.g., DeepWalk (Perozzi et al. 2014), node2vec (Grover and Leskovec 2016)) are known to be able to scale to large networks. A comprehensive coverage of the different methods can be found in the following surveys (Cai et al. 2018; Hamilton et al. 2017; Zhang et al. 2018). A major shortcoming of these network representation learning methods is that they can only be applied to static networks. However, in the real world, networks are continuously evolving, as nodes and edges are added or deleted over time. As a result, any previously obtained network representation will now be outdated, having an adverse effect on the accuracy of the data mining task at stake. In fact, the more significant the network topology changes are, the more likely it is for the mining task to perform poorly. One would expect, though, that network representation learning should account for continuous changes in the network, in an online mode. That way, (i) the low-dimensional network representation could continue being employed for downstream data mining tasks, and (ii) the results of the mining tasks obtained by the subsequent network representations would be comparable to each other. Going one step further, one would expect that, while obtaining the network representation at any moment is possible, the evolving network representation learning framework suggests the best time to obtain the representation based on the upcoming changes in the network. The main objective of this paper is to develop methods for learning representations of evolving networks. The focus of our work is on random-walk based methods that are known to scale well. The naive approach to address this problem is to re-apply the random-walk based network representation learning method of choice every time there is an update to the network. But this approach has serious drawbacks. First, it will be very inefficient, because the embedding method is computationally expensive and it needs to be run again and again. Then, the data mining results obtained by the subsequent network representations are not directly comparable to each other, due to the differences between the previous and the new set of random walks, as well as the non-deterministic nature of the deep learning process itself (see "Background and motivation" section for a detailed discussion). Therefore, the naive approach would be inadequate for learning representations of evolving networks.
In contrast to the naive approach, we propose a novel random-walk based method for learning representations of evolving networks. The key idea of our approach is to design efficient methods that incrementally update the original set of random walks in such a way that the set always respects the changes that occurred in the evolving network. As a result, we are able to continuously learn a new mapping function from the evolving network to a low-dimensional network representation, by only updating the small number of random walks required to re-obtain the network embedding. The advantages of this approach are manifold. First, since the changes that occur in the network topology are typically local, only a small number of the original random walks will be affected, giving rise to substantial time performance gains. In addition, since the network representation will now be continuously informed, the accuracy performance of the network mining task will be improved. Furthermore, since the original set of random walks is maintained as much as possible, subsequent results of the mining tasks will be comparable to each other. In summary, the major contributions of this work include:
- a systematic analysis that illustrates the instability of random-walk based network representation methods and motivates our work.
- an algorithmic framework for efficiently maintaining a set of random walks that respects the changes that occur in the evolving network topology. The framework treats random walks as "documents" that are indexed using an open-source distributed indexing and searching library. Then, the index allows for efficient ad hoc querying and updating of the collection of random walks in hand.
- a novel algorithm, EVONRL, for Evolving Network Representation Learning based on random walks, which offers substantial time performance gains without loss of accuracy. The method is generic, so it can accommodate the needs of different domains and applications.
- an analytical method for determining the right time to obtain a new representation of the evolving network. The method is based on adaptive evaluation of the degree of divergence between the most recent random-walk set and the random-walk set utilized in the most recent network embedding. The method is tunable so it can be adjusted to meet the accuracy/sensitivity requirements of different domains; therefore, it can provide support for a number of real-world applications.
- a thorough experimental evaluation on synthetic and real data sets that demonstrates the effectiveness of our method against sensible baselines, under a varying range of conditions.
An earlier version of this work appeared in the proceedings of the International Conference on Complex Networks and their Applications 2018 (Heidari and Papagelis 2018). The conference version addressed only the case of adding new edges. The current version extends the problem to the cases of deleting existing edges, adding new nodes and deleting existing nodes. In addition, it provides an analytical method that aims to support the decision-making process of when to obtain a new network embedding. This decision is critical, as it can effectively balance accuracy versus time performance of the method, extending its applicability to domains of diverse sensitivity. In addition, it provides further experiments for the additional cases that offer substantial new insights into the problem's complexity and the performance of our EVONRL method.
The remainder of this paper is organized as follows: "Background and motivation" section provides background and motivates our problem. "Problem definition" section formalizes the problem of efficiently indexing and maintaining a set of random walks defined on the evolving network and "Algorithmic framework of dynamic random walks" section presents our algorithmic framework for addressing it. Our evolving network representation method and analytical method for obtaining new representations of the evolving network are presented in "Evolving network representation learning" section. "Experimental evaluation" section presents the experimental evaluation of our methods and "Extensions and variants" section discusses interesting variants and future directions. After reviewing the related work in "Related work" section, we conclude in "Conclusions" section.
Background and motivation
As mentioned earlier, there are many different approaches for static network embedding. A family of these methods is based on performing random walks on a network. Random-walk based methods, inspired by word2vec's skip-gram model for producing word embeddings (Mikolov et al. 2013b), try to establish an analogy between a network and a document. While a document is an ordered sequence of words, a network can effectively be described by a set of random walks (i.e., ordered sequences of nodes). Typical examples of these algorithms include DeepWalk (Perozzi et al. 2014) and node2vec (Grover and Leskovec 2016). In fact, the latter can be seen as a generalization of the former, as node2vec can be configured to behave as DeepWalk. We collectively refer to these methods as StaticNRL for the rest of the manuscript. A typical StaticNRL method operates in two steps: (i) a set of random walks, say walks, is collected by performing r random walks of length l starting at each node in the network (typical values are r=10, l=80); (ii) walks are provided as input to an optimization problem that is solved using variants of Stochastic Gradient Descent with a deep neural network architecture (Bengio et al. 2013). The context size employed in the deep learning phase is k (typical value is k=5). The outcome is a set of d-dimensional representations, one for each node. These representations are learned in an unsupervised way and can be employed for a number of predictive tasks. It is important to note that there are many possible strategies for performing random walks on the nodes of a network, resulting in different learned feature representations; different strategies might work better for specific prediction tasks. The methods we will be presenting in this paper are orthogonal to what features the random walks aim to learn; therefore, they can accommodate most of the existing random-walk based network representation learning methods.
Evaluation of the stability of StaticNRL methods
In this paragraph, we present a systematic evaluation of the stability of the StaticNRL methods, similar to the one presented in (Antoniak and Mimno 2018). The evaluation aims to motivate our approach to address the problem of interest. Intuitively, a stable embedding method is one whose successive runs on the same network learn the same (or similar) embedding. Our interest in such an evaluation stems from the fact that StaticNRL methods are to a great degree dependent on two random processes: (i) the set of random walks collected, and (ii) the initialization of the parameters of the optimization method.
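To make the two-step pipeline and the two random processes concrete, the following is a minimal sketch of a StaticNRL-style baseline in Python, assuming NetworkX and gensim are available (parameter names follow gensim 4.x, which uses vector_size instead of the older size; the function names are illustrative, not the exact implementation used in the paper):

```python
import random
import networkx as nx
from gensim.models import Word2Vec

def simulate_walks(G, r=10, l=80, seed=None):
    """Collect r unbiased random walks of length l starting at every node."""
    rng = random.Random(seed)              # random process (i): walk sampling
    walks = []
    for _ in range(r):
        nodes = list(G.nodes())
        rng.shuffle(nodes)
        for start in nodes:
            walk = [start]
            while len(walk) < l:
                nbrs = list(G.neighbors(walk[-1]))
                if not nbrs:
                    break
                walk.append(rng.choice(nbrs))
            walks.append([str(n) for n in walk])   # gensim expects sequences of strings
    return walks

def static_nrl(G, d=128, k=5, seed=1):
    walks = simulate_walks(G, seed=seed)
    # random process (ii): initialization of the skip-gram weights (controlled by `seed`)
    model = Word2Vec(sentences=walks, vector_size=d, window=k,
                     min_count=1, sg=1, workers=1, seed=seed)
    return model.wv                        # node id (as string) -> d-dimensional vector
```

Fixing the seed (with a single worker) removes random process (ii), while re-using the same walks removes random process (i); this is exactly the comparison performed in the controlled experiment below.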
Both factors can be a source of instability for the StaticNRL method. Comparing two embeddings can happen either by measuring their similarity or by measuring their distance. Let us introduce the following measures of instability:
Cosine Similarity: Cosine similarity is a popular similarity measure for real-valued vector space models. It can also be used to compare two network embeddings using the pairwise cosine similarity on the learned d-dimensional representations (Kim et al. 2014; Hamilton et al. 2016). Formally, given the vector representations ni and \(n_{i}^{\prime }\) of the same node ni in two different network embeddings obtained in two different attempts, their cosine similarity is given by: $$sim(n_{i}, n_{i}^{\prime}) = cos(\theta)=\frac{\mathbf{n_{i}} \cdot \mathbf{n_{i}^{\prime}}}{ \|\mathbf{n_{i}} \|\|\mathbf{n_{i}^{\prime}} \|} $$ We can extend the similarity to two network embeddings E and E′ by summing and normalizing over all nodes: $$sim(E, E^{\prime}) = \frac{\sum_{i \in V}sim\left(n_{i}, n_{i}^{\prime}\right)}{|V|} $$
Matrix Distance: Another possible way is to obtain the distance between two network embeddings by subtracting the matrices that represent the embeddings of all nodes, similarly to the approach followed in (Goyal et al. 2018). Formally, given a graph G=(V,E), a network embedding is a mapping \(f: V \rightarrow \mathbb {R}^{d}\), where d≪|V|. Let \(F(V) \in \mathbb {R}^{|V| \times d}\) be the matrix of all node representations. Then, we can define the following distance measure for the two network embeddings E, E′: $$distance(E, E^{\prime}) = ||F^{\prime}(V) - F(V)||_F $$
Experimental scenario: We design a controlled experiment on two real-world networks, namely Protein-Protein-Interaction (PPI) (Breitkreutz et al. 2007) and a collaboration network, the Digital Bibliography & Library Project (dblp) (Yang and Leskovec 2015), that aims to evaluate the effect of the two random processes on the final network embeddings. In these experiments, we have three settings. For each setting, we run StaticNRL on a network (using parameter values r=10, l=10, k=5) two consecutive times, say t and t+1, and compute the cosine similarity and the matrix distance of the two network embeddings Et, Et+1 obtained. We repeat the experiment 10 times and report averages. The three settings are:
StaticNRL: Each run collects independent random walks and random weights are used in the initialization phase.
StaticNRL-i: Each run collects independent random walks but employs the same set of weights for the initialization phase, over all runs. The purpose is to eliminate one of the random processes.
StaticNRL-rw-i: Each run employs the same set of random walks and the same set of weights for the initialization phase, over all runs. The purpose is to eliminate both random processes.
Results: The results of the experiment are shown in Fig. 1a (cosine similarity) and Fig. 1b (matrix distance). They show that the set of random walks and the randomized initialization of the deep learning process have a significant role in moving the embedding, despite the fact that there is no actual change in the topology of the network. As a matter of fact, when the same set of random walks and the same initialization are used, then consecutive runs of StaticNRL result in the same embedding (as depicted by sim(·,·)=1 in Fig. 1a or distance(·,·)=0 in Fig. 1b).
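The two measures above can be computed directly from the embedding matrices; the following is a minimal sketch, assuming the two embeddings are given as numpy arrays F and F_prime whose rows are aligned by node (variable names are illustrative):

```python
import numpy as np

def cosine_similarity_score(F, F_prime, eps=1e-12):
    """Average pairwise cosine similarity sim(E, E') over all nodes."""
    num = np.sum(F * F_prime, axis=1)
    den = np.linalg.norm(F, axis=1) * np.linalg.norm(F_prime, axis=1) + eps
    return float(np.mean(num / den))

def matrix_distance(F, F_prime):
    """Frobenius-norm distance(E, E') between the two embedding matrices."""
    return float(np.linalg.norm(F_prime - F))   # Frobenius norm is the default for 2-D arrays

# identical embeddings give sim close to 1 and distance = 0, as in the StaticNRL-rw-i setting
F = np.random.rand(100, 128)
print(cosine_similarity_score(F, F.copy()), matrix_distance(F, F.copy()))
```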
However, when the set of random walks is independent, or both the random walks and the initialization are independent, then substantial differences are observed in consecutive runs of the StaticNRL methods.
Fig. 1 Instability of the StaticNRL methods. Controlled experiments on running StaticNRL multiple times on the same network depict that the network representations learned are not stable, as a result of random initialization and the random walks collected. When any of these random processes are fixed, then the network representations learned become more stable. (a) cosine similarity and (b) matrix distance
Implications: Let us start by noting that the implication of the experiment is not that StaticNRL is not useful. In fact, it has been shown to work very well. The problem is that, while each independent embedding is inherently correct and has approximately the same performance in downstream data mining tasks, these embeddings are not directly comparable to each other. In reality, the embeddings will be approximately equivalent if we are able to rotationally align them; most similar work in the literature corrects this problem by applying an alignment method (Hamilton et al. 2016). While alignment methods can bring independent embeddings closer and reduce the differences between them, this approach does not work well for random-walk based models. The main reason is that, as we have shown in the experiment, consecutive runs suffer from instability that is introduced by the random processes. Therefore, in the case of evolving networks (which is the focus of this work), changes that occur in the network topology will not be easily interpretable from the changes observed in the network embedding (since differences might incorporate changes due to the two random processes). However, changes in the evolving network need to be proportional to the changes in the learned network representation. For instance, minor changes in the network topology should cause small changes in the representation, and significant changes in the network topology should cause large changes in the network representation.
Key idea: This motivated us to consider learning representations of evolving networks by efficiently maintaining a set of random walks that consistently respects the network topology changes. At the same time, we eliminate the effect of the random processes by, first, preserving, as much as possible, the original random walks that have not been affected by the network changes, and then by initializing the model with a previous run's initialization (Kim et al. 2014). There are two main advantages in doing so. Changes to the network representations of successive instances of an evolving network will be more interpretable and data mining task results will be more comparable to each other. In addition, it is possible to detect anomalies in the evolving network or extract laws of change in domain-specific networks (e.g., a social network) that explain which nodes are more vulnerable to change, similar to research in linguistics (Hamilton et al. 2016). Furthermore, our framework makes it possible to quantify the importance of any occurring changes in the network topology and therefore obtain a new network representation at an optimal time or when it is really needed. In "Background and motivation" section, we have established the instability of random walk based methods even when they are repeatedly applied to the same static network.
That observation alone highlights the main challenge of employing these methods for learning representations of evolving networks. We have also introduced our key idea to address this problem. Stemming from our key idea, in this Section, we present a few definitions that allow us to formally define the problem of interest in this paper.
Definition (simple random walk or unbiased random walk on a graph). A simple random walk or unbiased random walk on a graph is a stochastic process that describes a path in a mathematical space (Pearson 1905), where the random walker transits from its current state (node) to one of its potential new states (neighboring nodes) with equal probability. For instance, assume a graph G=(V,E) and a source node v0∈V. We uniformly at random select a node v1 to visit from the set Γ(v0) of all neighbors of v0. Then, we uniformly at random select a node v2 to visit from the set Γ(v1) of all neighbors of v1, and so on. Clearly, the sequence of vertices v0,v1,...,vk,... forms a simple random walk or an unbiased random walk on G. Formally, at every step k, we have a random variable Xk taking values on V, and the random sequence X0,X1,...,Xk,... is a discrete time stochastic process defined on the state space V. Assuming that at time k we are at node vi, we move uniformly at random to one of its adjacent nodes vj∈Γ(vi) according to the following transition probability: $$ p_{v_{i}, v_{j}}=P(X_{k+1} = v_{j} \mid X_{k} = v_{i}) = \begin{cases} \frac{1}{d_{v_{i}}}, & \text{if } (v_{i}, v_{j}) \in E\\ 0, & \text{otherwise} \end{cases} $$ where \(d_{v_{i}}\) is the degree of node vi.
Definition (biased random walk). A biased random walk is a stochastic process on a graph, where the random walker jumps from its current state (node) to one of its potential new states (neighboring nodes) with unequal probability. Formally, assuming that at time k we are at node vi, we select to move to one of its adjacent nodes vj∈Γ(vi) based on the following transition probability: $$ p_{v_{i}, v_{j}}=P(X_{k+1} = v_{j} \mid X_{k} = v_{i}) = \begin{cases} p, & \text{if } (v_{i}, v_{j}) \in E\\ 0, & \text{otherwise} \end{cases} $$ where the value of p is, in general, unequal across the neighbours vj∈Γ(vi).
Definition (evolving graph). Assume a connected, unweighted and undirected graph Gt=(Vt,Et), where Vt denotes the node set of Gt and Et denotes the edge set of Gt at time t. Since every node is connected to at least one other node, it holds that du≥1 for all u∈Vt. Now assume that at time t+1 a change occurs in the network topology of Gt, forming Gt+1=(Vt+1,Et+1). This change can occur due to the following events:
- a new edge (u′,v′)∉Et is added to Gt; then Et+1=Et∪{(u′,v′)}.
- an existing edge (u,v)∈Et of Gt is deleted; then Et+1=Et∖{(u,v)}.
- a new node u′∉Vt is added to Gt; then Vt+1=Vt∪{u′}.
- an existing node u∈Vt of Gt is deleted; then Vt+1=Vt∖{u}.
Note that, since we have assumed that the graph is connected, the events of adding a new node u′∉Vt to Gt or deleting an existing node u∈Vt from Gt can be treated as instances of edge addition and edge deletion, respectively. We discuss the treatment of these cases in "Algorithmic framework of dynamic random walks" section. Over time, nodes and edges are added to and/or deleted from the graph at time t′=t+i, i∈[1,2,...,+∞), forming an evolving graph \(G_{t}^{\prime }\).
Definition (a valid set of random walks). A set of random walks RWt at time t is valid if and only if every random walk in RWt is an unbiased random walk on Gt.
Problem 1 (maintaining a valid set of random walks on an evolving network). Let Gt=(Vt,Et) be a connected, unweighted and undirected graph, where Vt denotes the node set of Gt and Et denotes the edge set of Gt at time t. Assume a valid set of random walks RWt is obtained on Gt at time t. As new edges are added to and/or deleted from the evolving graph, at any time t′=t+i, i∈[1,2,...,+∞), forming \(G_{t}^{\prime }\), the original set of random walks RWt will soon be rendered invalid, since many of its random walks will begin introducing a bias. We would like to design and develop methods for efficiently updating any biased random walk in \(RW_{t}^{\prime }\) with an unbiased random walk, so that \(RW_{t}^{\prime }\) always represents a valid set of random walks of \(G_{t}^{\prime }\).
The premise is that, if we are able to solve Problem 1 efficiently, then we will be in a position to obtain an accurate representation of the evolving network at any time.
Algorithmic framework of dynamic random walks
In this Section, we describe a general algorithmic framework and novel methods for incrementally updating the set of random walks in reserve, obtained on the original network Gt=(Vt,Et) at time t, so that they respect the updated network \(G_{t}^{\prime } \left (V_{t}^{\prime }, E_{t}^{\prime }\right)\) at time t′, where t′>t, and do not introduce any bias. Recall that these are random walks that could have been obtained directly by performing random walks on \(G_{t}^{\prime }\). The framework we describe is generic and can be used in any random walk-based embedding method. The first part of the Section presents algorithms for incrementally updating the set of random walks in hand, as edges and/or nodes are added to and/or deleted from the evolving network. The second part presents an indexing mechanism that supports the efficient storage and retrieval (i.e., query, insert, update, deletion operations) of the set of random walks used for learning subsequent representations of the evolving network. A summary of notations is provided in Table 1.
Table 1 Summary of notations used in the dynamic random walk framework
Incremental update of random walks
Given a network Gt=(Vt,Et) at time t, we employ a standard StaticNRL method to simulate random walks. This method is configured to perform r random walks per node, each of length l (default values are r=10 and l=80). Let RWt be the set of random walks obtained, where |RWt|=|Vt|×r. We store the random walks in memory, using a data structure that provides random access to its elements (i.e., a 2-D numpy matrix). In practice, each finite-length random walk is stored as a row of a matrix, and each matrix element represents a single node of the network that is traversed by a random walk. As we monitor changes in the evolving network, there are four distinct events that need to be addressed: (i) edge addition, (ii) edge deletion, (iii) node addition, and (iv) node deletion. These events can affect the network topology (and the set of random walks in hand) in different ways; therefore, they need to be studied separately. First, we provide details of the edge addition and edge deletion events. This will bring up the challenges that need to be addressed in updating random walks and will introduce our main methods. Then, we visit node addition and node deletion and show that they can be treated as special cases of edge addition and edge deletion, respectively.
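Before turning to the individual events, the in-memory layout described above can be sketched as follows: the walk set RWt is kept as a 2-D array with one walk per row, and the occurrences (and hence the frequency freq_i) of a node are found by a linear scan, which is exactly the costly operation that the index introduced later replaces (the toy data and helper name are illustrative):

```python
import numpy as np

# RW_t: |V|*r random walks of length l, one per row (here a toy example with l = 4)
walks = [[0, 1, 2, 1],
         [1, 2, 3, 2],
         [2, 3, 0, 1]]
RW = np.array(walks, dtype=np.int64)     # shape: (|V|*r, l)

def occurrences(RW, node):
    """Return (walk_id, position) pairs for every occurrence of `node` in RW."""
    rows, cols = np.where(RW == node)
    return list(zip(rows.tolist(), cols.tolist()))

occ = occurrences(RW, 1)
freq_1 = len(occ)                        # freq_i drives how many walks must be updated
print(freq_1, occ)                       # 4 occurrences of node 1 in this toy example
```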
Edge addition
Assume that a single new edge eij=(nodei,nodej) arrives in the network at time t+1, so Et+1=Et∪(nodei,nodej). There are two operations that need to take place in order to properly update the set RWt of the random walks in hand:
Operation 1: contain the new edge in existing random walks of RWt.
Operation 2: discard obsolete parts of random walks of RWt and replace them with new random walks to form the new RWt+1.
Details of each operation are provided in the next paragraphs.
Operation 1: contain a new edge in RW We want to update the set RWt to contain the new edge (nodei,nodej). The update should occur in a way that the updated walk represents an instance of a possible random walk on Gt+1 and, at the same time, preserves the previous set of random walks RWt as much as possible (to maintain network embedding stability). Note that, due to the way that the original set of random walks was obtained, both nodei and nodej will occur in a number of random walks of RWt. We explain the update process for nodei; the same process is followed for nodej. First, we need to find all the random walks walksi∈RWt that include nodei. Then, we need to update them so as to reflect the existence of the new edge (nodei,nodej). In practice, the new edge offers a new possibility for each random walk in Gt+1 that reaches nodei to traverse nodej in the next step. The number of these random walks that include (nodei,nodej) depends on the node degree of nodei and it is critical for correctly updating the random walks in RW. Formally, if the node degree of nodei in Gt is dt, then in Gt+1 it will be incremented by one, dt+1=dt+1. Effectively, a random walk that visits nodei in Gt+1 would have a probability \(\frac {1}{d_{t+1}}\) of traversing nodej. This means that, if there are freqi occurrences of nodei in RWt, then the new edge (nodei,nodej) needs to be contained in \(\frac {freq_{i}}{d_{t+1}}\) of them, by setting the node that follows nodei in the corresponding random walk to nodej. If nodei is the last node in a random walk, then there is no need to contain the new edge in that random walk.
Naive approach: The naive approach to perform the updates is to visit all freqi occurrences of nodei in walksi∈RW and for each of them to decide whether to perform an update of the random walk (or not), by setting the next node to be nodej. The decision is based on tossing a biased coin, where with probability \(p_{{success}}=\frac {1}{d_{t+1}}\) we update the random walk, and with probability pfailure=1−psuccess we do not. While this method is accurate, it is not efficient, as all occurrences of nodei need to be examined when only a portion of them needs to be updated.
Faster approach: A more efficient way is to find all the freqi occurrences of nodei, and then to uniformly at random sample \(\frac {freq_{i}}{d_{t+1}}\) of them and update them by setting the next node to be nodej. While this method will be faster than the naive approach, it still relies on finding all the freqi occurrences of nodei in the set of random walks RW, which is an expensive operation. We will soon describe how this method can be accelerated by using an efficient indexing library that allows for fast querying and retrieval of all occurrences of a node in the random walks.
Operation 2: replace obsolete random walks Once a new edge (nodei,nodej) is contained in an existing random walk, it renders the rest of that walk obsolete, so it is best avoided. Our approach is to replace the remainder of the random walk by simulating a new random walk on the updated network Gt+1.
The random walk starts at nodej and has a length lsim=l−(Indi+1), where Indi, 0≤Indi≤l−1, is the index of nodei in the random walk that is currently updated. Once updates for nodei have been performed, the updates that are due to nodej are computed and performed. Figure 2a presents an illustrative example of how updates of random walks work, in the case of a single incoming edge on a simple network. First, a set of random walks RWt is obtained (say 5, as illustrated by the upper lists of random walks). Let us assume that a new edge (1,4) arrives. Note that now the degree of node 1 and node 4 will increase by 1 (dt+1=dt+1). Because of the new edge, some random walks need to be updated to account for the change in the topology. To perform the updates, we first search for all occurrences of node 1, freq1. Then, we uniformly at random sample \(\frac {freq_{1}}{d_{t+1}} = 2 / 2 = 1\) of them to determine where to contain the new edge. In the example, node 4 is listed after node 1 (i.e., the second node in random walk #4 is now updated). The rest of the current random walk is obsolete, so it needs to be replaced. To perform the replacement, a new random walk is simulated on the updated network Gt+1 that starts at node 4 and has a length of lsim=l−(Ind1+1)=10−(0+1)=9. The same process is repeated for node 4 of the added edge (1,4) (see the updates in random walks #2 and #5, respectively).
Fig. 2 Illustrative example of EVONRL updates for edge addition and edge deletion (colored). (a) Example addition of a new edge (1,4). Random walks in reserve need to be updated to adhere to the change in the network topology. Our method guarantees that the new edge is equally represented in the updated set of random walks. (b) Example deletion of an existing edge (1,4). Random walks in reserve need to be updated to adhere to the change in the network topology. In this example, random walks #2 and #4 traverse edge (1,4) and need to be updated.
The details of the proposed algorithm are described in Algorithm 1. The algorithm invokes a Query operator, which is responsible for searching and retrieving information about all the occurrences of nodei in the set of the random walks RWt. In addition, the algorithm invokes an UpdateRandomWalks operator, which is responsible for replacing any obsolete random walks of RWt with the updated ones to form the new valid set of random walks RWt+1, related to Gt+1. However, naive implementations of these operators are computationally expensive, especially for larger networks, and therefore perform very poorly. In the "Efficient storage and retrieval of random walks" paragraph below, we describe how these two slow operators, UpdateRandomWalks and Query, can be replaced by similar operators offered off-the-shelf by high-performance open-source indexing and searching technologies. In addition, so far we have relied on maintaining the set of random walks RWt in memory. However, this is unrealistic for larger networks: while storing a network in memory as an edge list requires O(E), storing the set of random walks requires O(V·r·l), which is typically much larger for sparse networks. The indexing and searching technologies we will employ are very fast and, at the same time, are designed to scale to a very large number of documents. Therefore, they are in a position to scale well to a very large number of random walks, as we discuss in "Extensions and variants" section. To accommodate a set of new edges E+, the same algorithm needs to be applied repeatedly.
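Putting Operations 1 and 2 together, the edge-addition update can be sketched as follows, operating directly on an in-memory list of walks (this mirrors the logic of Algorithm 1 but is an illustrative, index-free sketch; the function names are assumptions of the example, not the paper's implementation):

```python
import random

def resume_walk(G, start, steps, rng):
    """Simulate an unbiased continuation of `steps` extra nodes starting from `start`."""
    walk = [start]
    while len(walk) < steps + 1:
        nbrs = list(G.neighbors(walk[-1]))
        if not nbrs:
            break
        walk.append(rng.choice(nbrs))
    return walk

def contain_new_edge(G, walks, u, v, l, rng):
    """Operations 1 and 2 for endpoint u of a new edge (u, v); call again with (v, u)."""
    d_new = G.degree(u)                     # degree of u in G_{t+1} (edge already added)
    occ = [(wid, pos) for wid, walk in enumerate(walks)
           for pos, node in enumerate(walk)
           if node == u and pos < len(walk) - 1]
    n_updates = len(occ) // d_new           # contain the edge in freq_u / d_{t+1} walks
    for wid, pos in rng.sample(occ, n_updates):
        # keep the prefix up to u, set the next node to v, re-simulate the rest on G_{t+1}
        walks[wid] = walks[wid][:pos + 1] + resume_walk(G, v, l - (pos + 2), rng)

# usage sketch: G is the networkx graph at time t+1 and (u, v) is the new edge
# G.add_edge(u, v); rng = random.Random(0)
# contain_new_edge(G, walks, u, v, l=80, rng=rng)
# contain_new_edge(G, walks, v, u, l=80, rng=rng)
```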
The main assumption is that edges become available in a temporal order (a stream of edges), which is a common assumption for evolving networks. The premise of our method is that, each time, only a small portion of the random walks needs to be updated; therefore, large performance gains are possible without any loss in accuracy. In fact, the number of random walks affected depends on the node centrality of the nodes nodei and nodej that form the new edge (nodei,nodej). While our approach makes it possible to obtain a new representation every time a single change occurs in the network, this is not required in real-world use cases. In fact, in the "Analytical method for determining the timing of a network embedding" paragraph, we provide an analytical method for determining the right time to obtain a new representation of the evolving network. As we will see, the method is based on an adaptive evaluation of the degree of divergence between the most recent random-walk set and the random-walk set utilized in the most recent network embedding. The method is tunable so it can be adjusted to meet the accuracy/sensitivity requirements of different domains; therefore, it can provide support for a number of real-world applications. We also discuss the implications of this issue for the time performance of the method in "Experimental evaluation" section.
Edge deletion
Assume a single existing edge eij=(nodei,nodej) is deleted from the network. Similar to edge addition, there are two operations that need to take place:
Operation 1: delete the existing edge from the current random walks in RWt by removing any consecutive occurrence of the edge's endpoints in the set.
Operation 2: discard the obsolete parts of the affected random walks and replace them with new random walks to form the new RWt+1.
Operation 1: delete an existing edge from RW In edge deletion, unlike the case of edge addition (where we had to sample from all the occurrences of a specific node), all the walks that have traversed the existing edge (nodei,nodej) should be modified, because all of them are now invalid. Other than that, the rest of the process is similar to that of edge addition. First, all random walks that have occurrences of (nodei,nodej) and (nodej,nodei) need to be retrieved. Then, the retrieved random walks need to be modified by discarding their obsolete parts, as described in Operation 2 below. Algorithm 2 describes this procedure in detail. Figure 2b presents an illustrative example of updates that need to take place due to a single edge deletion. First, a set of random walks is obtained. Let us assume that the existing edge (1,4) is deleted; therefore, random walks that traverse it need to be updated. First, we retrieve random walks where node 1 and node 4 occur one right after the other. For example, in random walk #4 of Fig. 2b, node 4 appears right after node 1. Since that edge no longer exists in the network, we need to update the random walk so that an existing neighbor of node 1 appears after node 1, in place of node 4. This action is performed in Operation 2.
Operation 2: replace obsolete random walks This operation is similar to the one in the case of adding a new edge. We just need to replace the remainder of any random walk affected by Operation 1 by simulating a new random walk of the right length on the updated network Gt+1. Following up with the running example, to perform the replacement of the obsolete random walk, a new random walk is simulated on network Gt+1 that starts at node 1 and has a length of lsim=l−(Ind1+1)=10−(0+1)=9.
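The deletion update can be sketched in the same spirit: every walk that traverses the deleted edge in either direction is truncated at the first endpoint it visits and the remainder is re-simulated on Gt+1 (again an illustrative, index-free sketch with assumed function names):

```python
import random

def delete_edge_from_walks(G, walks, u, v, l, rng):
    """Repair all walks that traverse the deleted edge (u, v) in either direction."""
    for wid, walk in enumerate(walks):
        for pos in range(len(walk) - 1):
            step = (walk[pos], walk[pos + 1])
            if step == (u, v) or step == (v, u):
                keep = walk[:pos + 1]              # prefix up to the first endpoint visited
                node = keep[-1]
                while len(keep) < l:               # unbiased continuation on G_{t+1}
                    nbrs = list(G.neighbors(node))
                    if not nbrs:
                        break
                    node = rng.choice(nbrs)
                    keep.append(node)
                walks[wid] = keep
                break                              # this walk is repaired; move to the next

# usage sketch: G already reflects the deletion, i.e., G.remove_edge(u, v) was called before
# delete_edge_from_walks(G, walks, u, v, l=80, rng=random.Random(0))
```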
A Note About Disconnected Nodes: During the process of deleting edges, either of the edge's nodes might be disconnected from the rest of the network, forming isolated nodes. In that case, all r random walks in RW that start from an isolated node need to be deleted. In the case that only one of the nodes of a deleted edge becomes isolated, the simulated replacement random walk is obtained by starting a random walk from the node that remains connected to the network.
Node addition
Assume that a new node nodei is added to the network at time t+1, so Vt+1=Vt∪{nodei}. Initially, this node forms an isolated node (i.e., \(d_{i}^{t+1} = 0\)) and therefore there is no need to update the set of random walks RW. Now, assume that at a later time the node connects to the rest of the network through an edge (nodei,nodej). In that case, we treat the new edge as described earlier in the edge addition paragraph. In addition, we need to simulate a set of r new random walks, each of length l, all of which start from the new node nodei (recall that our original set of random walks consisted of r random walks of length l for each node in the graph). The newly obtained random walks are appended to RWt (i.e., |RWt+1|=|RWt|+r) and are utilized in subsequent network embeddings. There is also a special case where two isolated nodes are connected. In that case, we need to simulate r random walks of length l starting from each of nodei and nodej, respectively, and append them to RWt.
Node deletion
Assume that an existing node nodei is deleted from the network at time t+1, so Vt+1=Vt∖{nodei}. In this case, we first obtain the set of neighbors Γ(nodei) of nodei. For each nodej∈Γ(nodei) there is an edge (nodei,nodej) in the network that needs to be deleted. We delete each of these edges as described earlier in the edge deletion paragraph and obtain the updated set RW. The deletions occur in an arbitrary order, without any side effects. Eventually, this process forms an isolated node, which is removed from the graph. Deletion of the isolated node itself does not further affect the set RW.
Efficient storage and retrieval of random walks
The methods for updating random walks presented in the previous paragraphs are accurate. However, they depend on the Query and UpdateRandomWalks operators, which are computationally expensive and cannot scale to larger networks. The most expensive operation is to search the random walks RWt to find occurrences of the nodes nodei and nodej of the new edge (nodei,nodej). In addition, updates of random walks can be expensive, as a large number of existing random walks might need to be updated. To address these shortcomings, our framework for efficiently updating random walks relies on popular open-source indexing and searching technologies. These technologies offer operations for efficiently indexing and searching large collections of documents. For example, they support efficient full-text search capabilities where, given a query term q, all documents in the collection that contain q are retrieved. In our framework we treat each random walk as a text "document". Therefore, each node visited by a random walk is represented as a text "term", and all random walks together represent "a collection of documents". Using this analogy, we build an inverted random walk index, IRW. IRW is an index data structure that stores a mapping from nodes (terms) to random walks (documents). The purpose of IRW is to enable fast querying of nodes in random walks, and fast updates of random walks, which can inform Algorithm 1. Figure 3 provides an illustrative example of a small inverted random walk index. In addition, we briefly describe how to create the index and use it in our setting.
Fig. 3 Example inverted random walk index. Given a graph, five random walks are performed. Each random walk is treated as a document and is indexed using an open-source distributed indexing and searching library. The result is an inverted index that provides information about the frequency of any node in the random walks and information about where in the random walk the node is found.
Indexing Random Walks: We obtain the initial set of random walks RWt at time t by performing random walks on the original network, similarly to the process followed in standard StaticNRL methods. Each random walk is transformed into a document by properly concatenating the ids of the nodes in the walk. For example, a short walk (x→y→z) over nodes x, y and z will be represented as a document with content "x y z". These random walks are indexed to create IRW. It is important to note that, once an index is available, there is no need to maintain the random walks in memory any more.
Querying Random Walks: We rely on the index IRW to perform any Query operation. Note, however, that there are additional advantages in using an efficient index. Besides searching and retrieving all random walks that contain a specific nodei, the index IRW can be configured to provide more quantities of interest. Specifically, we configure IRW so that every query retrieves additional information about the frequency of nodei, freqi, and the position Indi of nodei in a retrieved random walk (see Fig. 3). The first quantity (freqi) is used to determine the number of updates that are required, as discussed earlier. The second (Indi) is used to inform the Position operator in Algorithm 1. Note that there is a slight variation in how the Query operation is configured in the case of edge deletion. Recall that in that event we need to retrieve random walks where the two nodes nodei and nodej are found one right after the other (i.e., they form a step of the random walk). To accommodate this case, we just need to configure the Query operation to retrieve all random walks that contain the bigram "nodei nodej". A bigram is a contiguous pair of words in a document or, following the analogy, a contiguous pair of nodes in a random walk. The indexing and searching technology we employ can handily support such queries.
Updating Random Walks: We rely on the index IRW for any UpdateRandomWalk operation. An update of a random walk is analogous to an update of a document in the index. In practice, any update of the index IRW is equivalent to deleting an old random walk and then indexing a new random walk. While querying using an inverted index is a fast process, updating an index is a slower process. Therefore, the performance of our methods is dominated by the number of random walk updates required. Still, our methods perform many times faster than StaticNRL methods. A detailed analysis of this issue is provided in "Experimental evaluation" section. Following the discussion about edge deletion/addition, special care is required when these events involve isolated nodes. In particular, if a new edge connects a previously isolated node nodei to the network, then r new random walks need to be added to the index, each of which starts from nodei. The process of indexing the new random walks is similar to the process described in the "Indexing Random Walks" paragraph above. Similarly, if an edge deletion event resulted in a node nodei being isolated, then all the r random walks that start from nodei need to be removed from the index. Removing a random walk from the index is analogous to deleting a document from the index.
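A minimal, self-contained version of such an inverted random-walk index can be sketched with plain Python dictionaries; the production framework relies on an off-the-shelf search library instead, so the class and method names below are purely illustrative:

```python
from collections import defaultdict

class InvertedRWIndex:
    """Tiny inverted index over random walks: node -> {walk_id -> [positions]}."""

    def __init__(self, walks):
        self.walks = dict(enumerate(walks))
        self.postings = defaultdict(lambda: defaultdict(list))
        for wid, walk in self.walks.items():
            self._add(wid, walk)

    def _add(self, wid, walk):
        for pos, node in enumerate(walk):
            self.postings[node][wid].append(pos)

    def query(self, node):
        """All (walk_id, position) occurrences of a node, plus its frequency freq_i."""
        occ = [(wid, p) for wid, ps in self.postings.get(node, {}).items() for p in ps]
        return occ, len(occ)

    def query_bigram(self, u, v):
        """Walk ids in which node v appears right after node u (one step of the walk)."""
        hits = set()
        for wid, ps in self.postings.get(u, {}).items():
            walk = self.walks[wid]
            if any(p + 1 < len(walk) and walk[p + 1] == v for p in ps):
                hits.add(wid)
        return hits

    def update(self, wid, new_walk):
        """Replace a walk: delete the old 'document' and index the new one."""
        for node in set(self.walks[wid]):
            self.postings[node].pop(wid, None)
        self.walks[wid] = new_walk
        self._add(wid, new_walk)
```

A full-text search engine offers the same primitives (term frequencies, term positions, phrase/bigram queries, document updates) with persistence and at a much larger scale, which is why the framework relies on one.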
Bulk updates: Additional optimizations are available as a result of employing an inverted index for the random walks. For example, we can take advantage of bulk updates, where the index need only be updated when a number of new edges have arrived. This means that changes due to single incoming edges will not be reflected in IRW right away. While this optimization has the potential to make our methods faster (since updates occur once in a while), it risks harming their accuracy. In practice, it offers an interesting trade-off between accuracy and time performance that domain-specific applications need to tune. Experiments in "Experimental evaluation" section demonstrate this tradeoff.
Evolving network representation learning
So far we have described our framework for maintaining an always valid set of random walks RWt at time t. Recall that our final objective is to be able to learn a representation of the evolving network. For the embedding process we resort to the same embedding method as standard StaticNRL methods. Below we describe how embeddings of the evolving network are obtained, given a set of random walks RWt. Then, we describe a general strategy for obtaining an embedding only when it is most needed.
Learning embeddings
Given a general network Gt=(Vt,Et), our goal is to learn the network representation f(Vt) using the skip-gram model. f(Vt) is a |Vt|×d matrix, where d is the network representation dimension and each row is the vector representation of a node. At the first timestamp, the node vector representations (the neural network's weights) are initialized randomly and we use this initialization for the training at subsequent timestamps. The training objective is to maximize the log-probability of the nodes appearing in the context of the node ni. The context of each node ni is provided by the valid set of random walks RWt, similarly to the process described in previous work (Perozzi et al. 2014; Grover and Leskovec 2016). Using the approximate objective, skip-gram with negative sampling (Mikolov et al. 2013a), these embeddings are optimized by stochastic gradient descent so that: $$ Pr(\mathit{n_{j}}|\mathbf{n_{i}}) \propto \exp{\left(\mathbf{n_{j}^{T}}\mathbf{n_{i}}\right)} $$ where ni is the vector representation of a node ni (f(ni)=ni). Pr(nj|ni) is the probability of observing the neighbor node nj within the window, given that the window contains ni. In our experiments, we use the gensim implementation of the skip-gram model. We set our context size to k=5 and the number of dimensions to d=128, unless otherwise stated.
Analytical method for determining the timing of a network embedding
EVONRL has the overhead of first indexing the set of initial random walks RW. At that time, we randomly initialize the skip-gram model and keep these initialization weights for the learning phase of subsequent time steps. As new edges/nodes are added/deleted, EVONRL performs the necessary updates as described earlier. At each time step a valid set of random walks is available that can be used to obtain a network embedding. As we show in "Experimental evaluation" section, an embedding obtained from our incrementally updated set of random walks effectively represents the embeddings obtained by applying a StaticNRL method directly on the updated network.
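To make the skip-gram relation above concrete, the probability of observing nj in the context of ni can be recovered from the learned vectors with a softmax over all node vectors; the following is a small illustrative sketch (the embedding matrix F and the node index idx are assumptions of the example):

```python
import numpy as np

def context_probability(F, idx, i, j):
    """Pr(n_j | n_i) proportional to exp(n_j^T n_i), normalized over all node vectors."""
    scores = F @ F[idx[i]]                  # n_k^T n_i for every node k (rows of F)
    probs = np.exp(scores - scores.max())   # subtract the max for numerical stability
    probs /= probs.sum()
    return float(probs[idx[j]])

# toy usage with a random embedding of 4 nodes in d = 8 dimensions
F = np.random.rand(4, 8)
idx = {"a": 0, "b": 1, "c": 2, "d": 3}
print(context_probability(F, idx, "a", "b"))
```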
However, while re-embedding the network every time a change occurs in it will result in accurate embeddings, this process is very expensive and risks rendering the method inapplicable in real-world scenarios. Therefore, and depending on the domain, it is reasonable to assume that only a limited number of re-embeddings can be obtained. This introduces a new problem: when is the right time to obtain a network embedding? In fact, this decision process exposes an interesting tradeoff between accuracy and time performance of the proposed method. In the rest of the paragraph we introduce two strategies for determining the time to obtain network embeddings.
PERIODIC: This is a sensible baseline that, as the name reveals, obtains embeddings periodically, every q time steps. Depending on the sensitivity of the domain we operate on, the period can be shorter or longer. This method is easy to implement, but it obtains network embeddings while being agnostic of the different changes that occur in the network and of whether they are significant (or not).
ADAPTIVE: We introduce an analytical method for determining the right time to obtain a network embedding. The key idea of the method is to continuously monitor the changes that occur in the network. Then, if significant changes are detected, it obtains a new network embedding. In fact, we monitor two conditions: the first is able to detect the occurrence of a critical change (e.g., the addition of a very important edge) and is based on the idea of peak detection; the second is able to evaluate cumulative effects due to a number of changes. We discuss the structure of these conditions in the following paragraphs.
Peak detection: We start by providing background on the z-score. A z-score (or standard score) is a popular statistical measure that indicates how many standard deviations away an observation is from its mean. When the population mean and the population standard deviation are unknown, the standard score may be calculated using the sample mean and sample standard deviation as estimates of the population values. In that case, the z-score of an observed value x can be calculated from the following formula: $$ z = \frac{x - \hat{x}}{\hat{\sigma}} $$ where \(\hat {x}\) is the mean of the sample and \(\hat {\sigma }\) is the standard deviation of the sample. In our setting, we want to detect when important changes occur in the network, so as to obtain a timely network representation. As we described earlier, a good proxy for what constitutes an important change in a network is the number of random walks that are affected because of the change (edge addition/deletion, node addition/deletion). We can utilize the z-score of Eq. (4) to detect peaks. A peak or spike is a generic term which describes a sudden increase or outburst in sequenced data (Barnett and Lewis 1974). In our problem, the number of random walk changes is monitored and peaks represent significant changes in the number of random walks affected. Formally, let lag be the number of changes observed in the sample. The observation window spans from t−lag to t and we compute the mean of the sample at t as avg[t]. In a similar way, we calculate the standard deviation of the sample at t to be std[t]. Let N[t] be the observation at time t that represents the number of random walks that have been updated due to a network change. Now, given N[t], avg[t], std[t] and a threshold τ, a peak occurs at time t if the following condition holds: $$ N[t] > \tau \times std[t] + avg[t] $$ If the condition of Eq.
(5) holds, then we know that a significant change has occurred and we decide to obtain a new network representation. The details of the procedure are shown in Algorithm 3. The notations used in this algorithm are summarized in Table 2. Figure 4 provides an illustrative example of the peak detection method. In this example we set lag=10 and τ=3. The figure shows the results of the peak detection method for 100 changes occurring in a network (BlogCatalog network, edge addition; edges are added one by one and are randomly selected from the potential edges of the network). Our peak detection algorithm detects a total of 6 peaks, occurring at t={13,19,48,53,57,60}.
Fig. 4 Example of the peak detection method for the case of adding edges in the BlogCatalog network. The upper plot shows the number of random walks that are updated in RW as a function of the new edges added. It is evident that some edges have a larger effect on RW, as depicted by higher values. The middle plot shows the mean (middle, almost straight line), as well as the boundaries defined by the current threshold of τ×std (the two lines above and below the mean line). The bottom plot provides the signal for decision making; every time that the current change at time t is outside the threshold, it signals that a network embedding should be obtained. In the example this is the case for six times, t={13,19,48,53,57,60}.
Table 2 Summary of notations used in the decision-making algorithm
Cut-off score: Sometimes, changes in the network can be smooth, without any acute spikes. In that case, the peak detection method will fail to trigger any embedding, as peaks (almost) never occur. To avoid these cases, besides the peak detection method, we employ an additional metric that monitors the cumulative effect of all the changes since the last embedding was obtained. Formally, let N[t] be the observation at time t that represents the number of random walks that have been updated due to a network change. Then, the total number of random walks that have been changed between the time \(t_{old}\) at which the last embedding was obtained and the current time t is given by: $$ \#RW_{t_{old}}^{t} = \sum_{t^{\prime}=t_{old}}^{t} N[t^{\prime}] $$ Now, given \(\#RW_{t_{old}}^{t}\) and a threshold cutoff, we monitor the following condition: $$ \#RW_{t_{old}}^{t} > cutoff $$ If at any time t Eq. (7) holds, then we know that significant cumulative changes have occurred in the network and we decide to obtain a new network representation. As we show in "Experimental evaluation" section, combining both conditions of Eqs. (5) and (7) gives the best results, as it balances locally significant changes and the cumulative effect of changes.
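A compact sketch of the combined ADAPTIVE decision rule, implementing the peak condition of Eq. (5) and the cumulative cut-off of Eq. (7); the class and parameter names are illustrative and the logic is a simplification of Algorithm 3:

```python
from collections import deque

class EmbeddingScheduler:
    """Signal a re-embedding on a peak (Eq. 5) or when cumulative changes pass a cut-off (Eq. 7)."""

    def __init__(self, lag=10, tau=3.0, cutoff=5000):
        self.window = deque(maxlen=lag)    # the last `lag` observations N[t]
        self.tau = tau
        self.cutoff = cutoff
        self.cumulative = 0                # #RW changed since the last embedding was obtained

    def observe(self, n_t):
        """n_t: number of random walks updated by the current network change."""
        self.cumulative += n_t
        peak = False
        if len(self.window) == self.window.maxlen:
            avg = sum(self.window) / len(self.window)
            std = (sum((x - avg) ** 2 for x in self.window) / len(self.window)) ** 0.5
            peak = n_t > self.tau * std + avg          # Eq. (5)
        self.window.append(n_t)
        if peak or self.cumulative > self.cutoff:      # Eq. (7)
            self.cumulative = 0
            return True                                # time to obtain a new embedding
        return False

# usage sketch: call scheduler.observe(num_walks_updated) after every change;
# a True return value means a new network representation should be obtained now.
```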
Experimental evaluation
In this Section, we experimentally evaluate the performance of our dynamic random walk framework and EVONRL. In particular, we aim to answer the following questions:
Q1 (effect of network topology): How does the topology of the network affect the number of random walks that need to be updated?
Q2 (effect of arriving edge importance): How do edges of different importance affect the overall random walk update time?
Q3 (accuracy performance of EVONRL): What is the accuracy performance of EvoNRL compared to the ground truth provided by StaticNRL methods?
Q4 (classification performance of EVONRL): What is the accuracy performance of EvoNRL in a downstream data-mining task?
Q5 (time performance of EVONRL): What is the time performance of EvoNRL?
Q6 (decision-making performance of EVONRL): How well does the strategy of EvoNRL for obtaining network representations work?
Q1 and Q2 aim to shed light on the behavior of our generic computational framework for dynamically updating random walks in various settings. Q3, Q4, Q5 and Q6 aim to demonstrate how EVONRL performs. Before presenting the results, we provide details of the computational environment and the data sets employed.
Environment: All experiments are conducted on a workstation with 8x Intel(R) Core(TM) i7-7700 CPU @ 3.60GHz and 64GB memory. Python 3.6 is used and the static graph calculations use the state-of-the-art algorithms for the relevant metrics provided by the NetworkX network library.
Data: For the needs of our experiments, both synthetic and real data sets have been employed.
Protein-Protein Interactions (PPI): We use a subgraph of the PPI network for Homo sapiens and use the labels from the preprocessed data used in (Grover and Leskovec 2016). The network consists of 3890 nodes, 76584 edges and 50 different labels.
BlogCatalog (Reza and Huan): BlogCatalog is a social network of bloggers in which each edge indicates a social interaction between them. This network consists of 10312 nodes, 333983 edges and 39 different labels.
Facebook Ego Network (Leskovec and Krevl 2014): The Facebook ego network is the combined ego network of each node. There is an edge from a node to each of its friends. This network consists of 4039 nodes and 88234 edges.
Arxiv HEP-TH (Leskovec and Krevl 2014): The Arxiv HEP-TH (high energy physics theory) network is the citation network from e-print Arxiv. If paper i cites paper j, there is a directed edge from i to j. This network consists of 27770 nodes and 352807 edges.
Synthetic Networks: We create a set of Watts-Strogatz (Newman 2003) random networks of different sizes (n={1000,10000}) and different rewiring probabilities (p={0,0.5,1.0}). The rewiring probability is used to create representative Lattice (p=0), Small-world (p=0.5) and Erdos-Renyi (p=1) networks, respectively.
Q1 effect of network topology
We evaluate the effect of randomly adding a number of new edges in networks of different topologies but of the same size. For each case, we report the number of random walks that need to be updated. Figure 5 shows the results, where it becomes clear that, as more new edges are added, more random walks are affected. The effect is more pronounced in the case of the Small-world and Erdos-Renyi networks. This is to be expected, since these networks are known to have a small diameter; therefore, every node is easily accessible from any other node. As a result, every node has a high chance of appearing in any random walk. In contrast, Lattices are known to have a larger diameter; therefore, only a small number of nodes (out of all nodes in the network) is accessible to any given random walk. As a result, nodes are more equally distributed over all random walks.
Fig. 5 Effect of network topology (the axis of #RW affected is in logarithmic scale). As more new edges are added, more random walks are affected. The effect is more pronounced in the case of the Small-world and Erdos-Renyi networks than in the Lattice network.
Q2 effect of arriving edge importance
By answering Q1, it becomes evident that even a single new edge can have a dramatic effect on the number of random walks that need to be updated. Eventually, the number of random walks affected will have an effect on the time performance of updating these random walks in our framework. In this set of experiments we perform a systematic analysis of the effect of the importance of an arriving edge on the time required for the update to occur.
Importance of an incoming edge \(e_{ij}^{t+1} = (n_{i}, n_{j})\) at time t+1 in a network can be defined in different ways. Here, we define three metrics of edge importance, based on properties of the endpoints \(n_{i}, n_{j}\) of the arriving edge: Sum of the frequencies of the edge endpoints in RWt. Sum of the node degrees of the edge endpoints in Gt. Sum of the node-betweenness of the edge endpoints in Gt. Results of the different experiments are presented in Fig. 6. The first observation is that important incoming edges are more expensive to update, sometimes by a factor of three or four (1.6 sec vs 0.4 sec). This is expected, as more random walks need to be updated. However, the majority of the edges are of low importance (lower left dense areas in Fig. 6a, b, and c), so fast updates are more common. Finally, the behaviors of the sum of node frequencies (Fig. 6a) and the sum of node degrees (Fig. 6b) of the edge endpoints are correlated. This is because the node degree is known to be directly related to the number of random walks that traverse the node. On the other hand, node-betweenness demonstrates more unstable behavior, since it is mostly related to shortest paths and not just paths (which are related to random walks). Dependency of EVONRL running time on importance of added edge as described by various metrics on the PPI network. a frequency of the new edge endpoints, b node degree of the new edge endpoints, and c node betweenness of the new edge endpoints Q3 accuracy performance of EVONRL In this set of experiments we evaluate the accuracy performance of EVONRL and show that it is very accurate. At this point, it is important to note that evidence of our EVONRL performing well is provided by demonstrating that it obtains representations similar to the ground truth provided by running StaticNRL on different instances of the evolving network. This is because the objective of our method is to resemble as much as possible the actual changes in the original network by incrementally maintaining a set of random walks and monitoring the changes. In practice, we aim to show that our proposed algorithm is able to update random walks in reserve such that they always represent unbiased random walks that could have been obtained by running StaticNRL on the updated network. In these experiments, we show that the representation learned by EvoNRL and the ground truth provided by StaticNRL are similar to each other by using a representational similarity metric. Similarity of two representations Our goal here is to compare the representations learned by the neural network and show that EvoNRL results in representations similar to the ground truth provided by StaticNRL methods. Comparing representations in neural networks is difficult, as the representations vary even across neural networks trained on the same input data with the same task (Raghu et al. 2017). In this paper, representations are the weights learned by either our EvoNRL method or the StaticNRL method, i.e., the representation learned by a skip-gram neural network. In order to determine the correspondence between these representations, we use the recent similarity measures of neural networks studied in (Morcos et al. 2018) and (Kornblith et al. 2019). The dynamics of neural networks call for a similarity metric that is invariant to orthogonal transformation and to isotropic scaling.
Assuming two representations \(X \in \mathbb {R}^{n \times d}\) and \(Y \in \mathbb {R}^{n \times d}\), we are concerned with a scalar similarity index s(X,Y) which can be used to compare the two neural network representations. There are many methods for comparing two finite sets of vectors and measuring the similarity between them. The simplest approach is to employ a dot-product based similarity. By summing the squared dot-product of each corresponding pair of vectors in X and Y, we can obtain a similarity index between the matrices X and Y. This approach is not practical, as the representations of the neural networks can be described in two different bases, resulting in a misleading similarity index. Therefore, invariance to linear transforms is crucial in neural network representational similarity metrics. Recently, Canonical Correlation Analysis (CCA) (Hotelling 1992) has been used as a tool to compare representations across networks. Canonical Correlation Analysis has been widely used to evaluate the similarity between computing models and brain activity. CCA can find similarity between representations even where they are superficially dissimilar. Its invariance to linear transforms makes CCA a useful tool to quantify the similarity of EvoNRL and StaticNRL representations (Morcos et al. 2018). Canonical correlation analysis (CCA): Canonical Correlation Analysis (Hotelling 1992) is a statistical technique to measure the linear relationship between two multidimensional sets of vectors. Ordinary correlation analysis is highly dependent on the basis in which the vectors are described. The important property of CCA is that it is invariant to affine transformations of the variables, which makes it a proper tool to measure representation similarity. Given two matrices \(X \in \mathbb {R}^{n \times d}\) and \(Y \in \mathbb {R}^{n \times d}\), Canonical Correlation Analysis will find two bases, one for X and one for Y, such that after their projections onto these bases, their correlation is maximized. For 1≤i≤d, the ith canonical correlation coefficient is given by: $$ \begin{aligned} \rho_{i} = \max_{w^{i}_{X}, w^{i}_{Y}} corr\left(Xw^{i}_{X}, Yw^{i}_{Y}\right)\\ \text{subject to} \; \forall_{j < i} \: Xw^{i}_{X} \perp Xw^{j}_{X}\\ \forall_{j < i} \: Yw^{i}_{Y} \perp Yw^{j}_{Y} \end{aligned} $$ where the vectors \(w^{i}_{X} \in \mathbb {R}^{d}\) and \(w^{i}_{Y} \in \mathbb {R}^{d}\) transform the original matrices into the canonical variables \(Xw^{i}_{X}\) and \(Yw^{i}_{Y}\). $$ R^{2}_{CCA} = \frac{\sum_{i=1}^{d} \rho_{i}^{2}}{d} $$ The mean squared CCA correlation (Ramsay et al. 1984), \(R^{2}_{CCA}\), is the mean of the squared canonical correlations. This quantity is a metric of the similarity of the two multidimensional sets of vectors (a computational sketch is given at the end of this subsection). Experimental scenario: In these experiments, the original network is the network as it is at the beginning of the experiment. We simulate random walks on this network and learn its representation. After that, we sequentially make changes (add edges, remove edges, add nodes and remove nodes) to the initial network and keep the random walks updated using EvoNRL. At certain points (for example, after every 1000 edge additions in the PPI network), we learn the network representation in two ways. One is by simulating new random walks on the updated network (the original network with new edges/nodes or with edges/nodes removed), and the second is by learning the representation using EvoNRL.
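To illustrate how \(R^{2}_{CCA}\) can be computed for two node-embedding matrices, the following sketch uses a standard QR/SVD route to the canonical correlations. It is a minimal sketch under the assumption that X and Y have the same shape and that their rows are aligned by node; it is not the authors' implementation.

```python
import numpy as np

def mean_squared_cca(X, Y):
    """Mean squared CCA correlation R^2_CCA between two (n x d) representations."""
    # Center each representation
    X = X - X.mean(axis=0)
    Y = Y - Y.mean(axis=0)
    # Orthonormal bases for the column spaces of X and Y
    Qx, _ = np.linalg.qr(X)
    Qy, _ = np.linalg.qr(Y)
    # The singular values of Qx^T Qy are the canonical correlations rho_i
    rho = np.linalg.svd(Qx.T @ Qy, compute_uv=False)
    rho = np.clip(rho, 0.0, 1.0)
    return float(np.sum(rho ** 2) / X.shape[1])
```

Here X would be, for example, the embedding matrix produced by EvoNRL and Y the one produced by StaticNRL on the same network, with rows ordered by node id.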
Now we have two representations of the same network and the goal is to compare them to see how similar EvoNRL is to StaticNRL. Note that StaticNRL simulates walks on the updated networks, while EvoNRL has been updating the original random walk set. Representations obtained by StaticNRL are the result of simulating random walks on the network. Because of the randomness involved in the process, it is typical that two different StaticNRL representations of the same network are not identical. We can measure the similarity of the different representations using CCA. In our evaluation, we aim to demonstrate that EvoNRL is similar to StaticNRL and that this similarity is comparable to the similarity obtained by applying StaticNRL multiple times on the same network. At any stage of the change (edge addition, edge deletion, node addition, node deletion) in the network, EvoNRL is updating the random walk set in a way that keeps it representative of the network. First, we run StaticNRL multiple times (x5) on a network. Each StaticNRL run simulates a random walk set on the evolving network at certain times. The representations are two finite sets of vectors in d-dimensional space, and we compare how similar these two sets are. Adding edges: Given a network G=(V,E), we can add a new edge by randomly picking two nodes in the network that are not currently connected and connecting them. Adding new edges to the network should have an effect on the network embedding. By adding edges, as the network diverges from its original state, the embedding will diverge from that of the original network as well. Figure 7 shows the accuracy results of EvoNRL. We observe that the CCA similarity index of EVONRL follows the same trend as that of StaticNRL in all the networks: the BlogCatalog (Fig. 7a), PPI (Fig. 7b), Facebook (Fig. 7c) and Cit-HepTh (Fig. 7d) networks. The similarity of the two methods remains consistent as more edges are added (up to 12% of the number of edges in the original PPI; up to 14% of the number of edges in the original BlogCatalog, Facebook and Cit-HepTh). In Fig. 7, there are two sorts of comparison. First, the similarity of EvoNRL and the original network (the network before changes occur to it) is measured. The decreasing trend of the orange stars in Fig. 7 shows that EvoNRL is updating the set of random walks and that the representations of the updated networks are diverging from the representation of the original network. On the other hand, we see that EvoNRL is more correlated to the original set of random walks (orange stars) than StaticNRL is (blue triangles). The blue triangles are the average canonical correlation of the original network with 4 different runs of StaticNRL. This shows that the representation of the evolving network is diverging from the original network. So far we have shown that EvoNRL is consistently updating the original set of random walks and that this makes a difference in the network's representation. The question is: are these updates accurate? To answer this question we add edges step by step to the original network. Using EvoNRL, we keep updating a set of random walks and obtain the representation of the network at certain points. On the other hand, we run StaticNRL on the updated network at the same points. Because of the randomness of the random walks we repeat StaticNRL 4 times. We compare the StaticNRL representations obtained from the same network with each other to have a baseline for the similarity metric. The red squares labelled 'StaticNRL vs StaticNRL' in Fig.
7 show the average pairwise similarity of the StaticNRL representations compared with each other. Our goal is to show that EvoNRL keeps updating the random walk set in an accurate way and that the representation obtained by EvoNRL is as accurate as StaticNRL. To show this, we measure the canonical correlation of the EvoNRL representation and the StaticNRL representations. We observe (green circles) that the EvoNRL representations are very similar to the StaticNRL representations and can be regarded as an instance of StaticNRL. Accuracy performance of EVONRL — adding edges. a BlogCatalog, b PPI, c Facebook, d Cit-HepTh Removing edges: Given a network G=(V,E), we can remove an edge by randomly choosing an existing edge e∈E and removing it from the network. Removing existing edges should have an effect on the network embedding. Figure 8 shows the accuracy results of edge deletion. Similar to edge addition, we observe that the CCA similarity of EVONRL follows the same trend as that of StaticNRL in all the networks: the BlogCatalog (Fig. 8a), PPI (Fig. 8b), Facebook (Fig. 8c) and Cit-HepTh (Fig. 8d) networks. Accuracy performance of EVONRL — removing edges. a BlogCatalog, b PPI, c Cit-HepTh, d Facebook Adding nodes: As we described in the "Evolving network representation learning" section, node addition can be treated as a special case of edge addition. This is because whenever a node is added in a network, a number of edges attached to that node need to be added as well. To emulate this process, given a network G=(V,E), we first create a network G′=(V′,E′), where V′⊆V, E′⊆E, as follows. We uniformly at random sample a set of nodes V′′⊆V from G and then remove these nodes and all their attached edges from G, forming G′. Following that process, we obtain a new network for BlogCatalog with V′=8312 nodes and a new network for PPI with V′=3390 nodes, respectively. Then, we start adding the nodes v∈V′′=V∖V′ that have been removed from G, one by one. Whenever a node v∈V′′ is added to G′, any edges between v and nodes existing in the current state of network G′ are added as well. Adding nodes to the network should have an effect on the network embedding. Figure 9 shows the accuracy results of node addition. CCA compares two sets of vectors with the same cardinality. Because the number of nodes, and therefore the number of vectors in the representation, varies, we cannot compare the updated representations with that of the original network. In these experiments we show that EvoNRL and StaticNRL on the same network are very similar to each other and that EvoNRL is an accurate instance of StaticNRL. Accuracy performance of EVONRL — adding nodes. a BlogCatalog, b PPI, c Cit-HepTh, d Facebook Removing nodes: As we described in the "Evolving network representation learning" section, node deletion can be treated as a special case of edge deletion. Given a network G=(V,E), we start removing nodes v∈V from the network, one by one. When a node is removed, all the edges connecting this node to the network are removed as well. The process of removing nodes will result in a new network G′=(V′,E′), where V′⊆V and E′⊆E. Removing existing nodes from the network should have an effect on the network embedding. Figure 10 shows the accuracy results of node deletion. In the evolving network, nodes are removed from the network sequentially and EvoNRL always maintains a valid set of random walks. We show that the representations obtained from these random walks are similar to the StaticNRL representations.
As with node addition, because the number of nodes changes, we cannot compare the representations with the original network's representation. The experiments above provide strong evidence that our random walk updates are correct and can incrementally maintain a set of random walks whose corresponding representations are similar to those obtained by StaticNRL. Accuracy performance of EVONRL — removing nodes. a BlogCatalog, b PPI, c Cit-HepTh, d Facebook Q4 classification performance of EVONRL In this set of experiments we evaluate the accuracy performance of EVONRL and show that it is very accurate. At this point, it is important to note that evidence of our EVONRL performing well is provided by demonstrating that it has similar accuracy to StaticNRL, for the various aspects of the evaluation (and not by demonstrating losses/gains in accuracy). This is because the objective of our method is to resemble as much as possible the actual changes in the original network by incrementally maintaining a set of random walks and monitoring the changes. In practice, we aim to show that our proposed algorithm is able to update random walks in reserve such that they always represent unbiased random walks that could have been obtained by running StaticNRL on the updated network. Experimental scenario: To evaluate our random walk update algorithm, we resort to accuracy experiments performed on a downstream data mining task: multi-label classification. The network topology of many real-world networks can change over time due to either adding/removing edges or adding/removing nodes in the network. In our experimental scenario, given a network, we simulate and monitor network topology changes. Then, we run StaticNRL multiple times, one time after each network change, and learn multiple network representations over time. The same process is followed for EVONRL, but this time we only need to update the random walks RWt at each time t and use these for learning multiple network representations over time. In multi-label classification each node has one or more labels from a finite set of labels. In our experiments, we see 50% of the nodes and their labels in the training phase and the goal is to predict the labels of the rest of the nodes. We use the node vector representations as input to a one-vs-rest logistic regression classifier with L2 regularization (a sketch of this setup is given further below). Finally, we report the Macro-F1 accuracy of the multi-label classification of StaticNRL and EVONRL as a function of the fraction of the network changes. For StaticNRL, since it is sensitive to the fresh set of random walks obtained every time, we run it multiple times (10x) and report the averages. We experiment with the BlogCatalog and PPI networks. In the following paragraphs we present and discuss the results for each of the interesting cases (adding/removing edges, adding/removing nodes). Adding edges: Given a network G=(V,E), we can add a new edge by randomly picking two nodes in the network that are not currently connected and connecting them. Adding new edges to the network should have an effect on the network embedding and thus on the overall accuracy of the classification results. Figure 11 shows the results. We observe that the Macro-F1 accuracy of EVONRL follows the same trend as that of StaticNRL in both the BlogCatalog (Fig. 11a) and the PPI (Fig. 11b) networks.
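The classification setup referenced above can be sketched as follows. This is an illustrative outline only (the paper does not state the exact implementation); it assumes scikit-learn, a binary label-indicator matrix for the multi-label targets, and a 50/50 train/test split of the nodes.

```python
from sklearn.linear_model import LogisticRegression
from sklearn.multiclass import OneVsRestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import f1_score

def macro_f1(embeddings, labels, seed=0):
    """Macro-F1 of multi-label node classification for one embedding.

    embeddings : (n_nodes, d) array of node vectors
    labels     : (n_nodes, n_labels) binary indicator matrix
    """
    X_tr, X_te, y_tr, y_te = train_test_split(
        embeddings, labels, train_size=0.5, random_state=seed)
    clf = OneVsRestClassifier(LogisticRegression(penalty="l2", max_iter=1000))
    clf.fit(X_tr, y_tr)
    return f1_score(y_te, clf.predict(X_te), average="macro")
```

The same function would be applied to the embeddings produced by StaticNRL and by EvoNRL after each monitored network change, so that the two Macro-F1 curves can be compared.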
The accuracy of the two methods remains consistent as more edges are added (up to 12% of the number of edges in the original PPI; up to 14% of the number of edges in the original BlogCatalog). This provides strong evidence that our random walk updates are correct and can incrementally maintain a set of random walks that is similar to the one obtained by StaticNRL when applied to an updated network. Accuracy performance of EVONRL — adding new edges. a BlogCatalog, b PPI Removing edges: Given a network G=(V,E), we can remove an edge by randomly choosing an existing edge e∈E and removing it from the network. Removing existing edges should have an effect on the network embedding and thus on the overall accuracy of the classification results. We evaluate the random walk update algorithm for the case of edge deletion in a way similar to that of adding edges. The only difference is that every time an edge is deleted at time t we update the random walks to obtain RWt. Then, the updated RWt can be used for obtaining a network representation. The same setting is used in the multi-label classification. Figure 12 shows the results. Again we observe that the Macro-F1 accuracy of EVONRL follows the same trend as that of StaticNRL in both the BlogCatalog (Fig. 12a) and the PPI (Fig. 12b) networks. Accuracy performance of EVONRL — removing edges. a BlogCatalog, b PPI Adding nodes: As we described in the "Evolving network representation learning" section, node addition can be treated as a special case of edge addition. This is because whenever a node is added in a network, a number of edges attached to that node need to be added as well. To emulate this process, given a network G=(V,E), we first create a network G′=(V′,E′), where V′⊆V, E′⊆E, as follows. We uniformly at random sample a set of nodes V′′⊆V from G and then remove these nodes and all their attached edges from G, forming G′. Following that process, we obtain a new network for BlogCatalog with V′=8312 nodes and a new network for PPI with V′=3390 nodes, respectively. Then, we start adding the nodes v∈V′′=V∖V′ that have been removed from G, one by one. Whenever a node v∈V′′ is added to G′, any edges between v and nodes existing in the current state of network G′ are added as well. Adding nodes to the network should have an effect on the network embedding and thus on the overall accuracy of the classification results. We evaluate the random walk update algorithm for the case of node addition in a way similar to that of adding edges. The only difference is that every time a node is added at time t we update the random walks to obtain RWt, by adding a number of edges. Then, the updated RWt can be used for obtaining a network representation. Figure 13 shows the results. Again we observe that the Macro-F1 accuracy of EVONRL follows the same trend as that of StaticNRL in both the BlogCatalog (Fig. 13a) and the PPI (Fig. 13b) networks. Accuracy performance of EVONRL — adding new nodes. a BlogCatalog, b PPI Removing nodes: As we described in the "Evolving network representation learning" section, node deletion can be treated as a special case of edge deletion. Given a network G=(V,E), we start removing nodes v∈V from the network, one by one. When a node is removed, all the edges connecting this node to the network are removed as well. The process of removing nodes will result in a new network G′=(V′,E′), where V′⊆V and E′⊆E. Removing existing nodes from the network should have an effect on the network embedding and thus on the overall accuracy of the classification results.
We evaluate the random walk update algorithm for the case of node deletion in a way similar to that of deleting edges. The only difference is that every time a node is deleted at time t we update the random walks to obtain RWt, by removing a number of edges. Then, the updated RWt can be used for obtaining a network representation. Figure 14 shows the results. Again we observe that the Macro-F1 accuracy of EVONRL follows the same trend as that of StaticNRL in both the BlogCatalog (Fig. 14a) and the PPI (Fig. 14b) networks. Accuracy performance of EVONRL — removing nodes. a BlogCatalog, b PPI Discussion about accuracy value fluctuations: While we have demonstrated that EVONRL is able to resemble the accuracy performance obtained by StaticNRL, one can observe that in some cases the accuracy values of the methods can substantially fluctuate. This behavior can be explained by the sensitivity of the StaticNRL methods to the set of random walks obtained from the network, as discussed in the motivating example of the "Evaluation of the stability of StaticNRL methods" section. EVONRL would also inherit this problem, as it depends on an initially obtained set of random walks that is subsequently updated at every network topology change. To demonstrate this sensitivity effect, we run control experiments on the PPI network for the case of adding new nodes in the network G, similar to the experiment in Fig. 13b. However, this time, instead of reporting the average over a number of runs for the StaticNRL method, we report all its instances (Fig. 15). In particular, as we add more nodes (the number of nodes increases from 3390 to 3990), a new network is obtained. We report the accuracy values obtained by running StaticNRL multiple times (40x) on the same network. We also depict the values of two different runs of EVONRL. Each run obtains an initial set of random walks that is incrementally updated in subsequent network topology changes. It becomes evident that the StaticNRL values can significantly fluctuate due to the sensitivity to the set of random walks obtained. It is important to note that EVONRL manages to fall within the range of these fluctuations. Accuracy values obtained by running StaticNRL multiple times on the same network. The values fluctuate significantly due to sensitivity to the set of random walks obtained. Similarly, EVONRL is sensitive to the initial set of random walks obtained. Two instances of EVONRL are shown, each of which operates on a different initial set of random walks Q5 time performance of EVONRL In this set of experiments we evaluate the time performance of our method and show that EVONRL is very fast. We run experiments on two Small-world networks (Watts-Strogatz, p=0.5), with two different numbers of nodes (|V|=1000 and |V|=10000). We evaluate EVONRL against a standard StaticNRL method from the literature (Grover and Leskovec 2016). Both algorithms start with the same set of random walks RW. As new edges arrive, StaticNRL needs to learn a new network representation by re-simulating a new set of walks every time. On the other hand, EVONRL has the overhead of first indexing the set of initial random walks RW. Then, for every new edge that arrives, it just needs to perform the necessary updates as described earlier. Figure 16 shows the results. It can be seen that the running time of StaticNRL is linear in the number of new edges, since it has to run again and again for every new edge.
At the same time, EVONRL is able to accommodate the changes more than 100 times faster than StaticNRL. This behavior is even more stressed in the larger network (where the number of nodes is larger). By increasing the number of nodes, running StaticNRL becomes significantly slower, because by design it needs to simulate larger amount of random walks. On the other hand, EVONRL has a larger initialization overhead, but after that it can easily accommodate new edges. This is because every update is only related to the number of random walks affected and not the size of the network. This is an important observation, as it means that the benefit of EVONRL will be more stressed in larger networks. EVONRL scalability (running time axis is in logarithmic scale). StaticNRL scales linearly to the number of new edges added in the network, since it has to run again and again for every new edge. At the same time, EVONRL is able to accommodate the changes more than 100 times faster than StaticNRL. This behavior is even more stressed in the larger network (where the number of nodes is larger) Q6 decision-making performance of EVONRL In this experiment, we compare the two different strategies for deciding when to obtain a network representation, PERIODIC and ADAPTIVE. The experiment is performed using the BlogCatalog network and the changes in the network are related to edge addition. For presentation purposes, we limit the experiment to 1000 edges. The evaluation of this experiment is based on the number of random walk changes \(RW^{t}_{t_{{old}}}\) between a random walk set obtained at time t (one edge is added at each time) and a previously obtained network representation as defined by each strategy. Results are shown in Fig. 17. The PERIODIC strategy represents a "blind" strategy where new embeddings are obtained periodically (every 50 times steps or every 100 time steps). On the other hand, the ADAPTIVE method is able to make informed decisions as it monitors the importance of every edge added in the network. The ADAPTIVE method is basing its decisions on the a peak detection method (τ=3.5) and a method that monitors cumulative effects due to a number of changes (cutoff=4000). As a result, ADAPTIVE is able to perform much better, as depicted by many very low values in the \(RW^{t}_{t_{{old}}}\). Comparative analysis of different strategies for determining when to obtain a network representation. The PERIODIC methods will obtain a new representation every 50 or 100 time steps (i.e., network changes). Our proposed method, ADAPTIVE, is combining a peak detection method and a cumulative changes cut-off method to determine the time to obtain a new network representation. As a result it is able to make more informed decisions and perform better. This is depicted by smaller (on average) changes of the \(RW_{t_{old}}^{t}\), which implies that a more accurate network representation is available for down-stream network mining tasks Extensions and variants While our algorithms have been described and evaluated on a single machine, they have been designed with scalability in mind. Recall that our indexing and searching of random walks is supported by ElasticsearchFootnote 5, which itself is based on Apache LuceneFootnote 6. Elasticsearch is a distributed index and search engine that can naturally scale to very large number of documents (i.e., a very large number of random walks in our setting). 
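To give a sense of the kind of query this involves, the sketch below retrieves the indexed random walks that contain a given node using the Python Elasticsearch client. The index name 'random_walks' and the field name 'nodes' are hypothetical (the actual mapping used by EvoNRL is not specified here), and the 7.x-style body query is an assumption.

```python
from elasticsearch import Elasticsearch

es = Elasticsearch("http://localhost:9200")  # assumed local cluster

def walks_containing(node_id, size=1000):
    """Return the ids and bodies of indexed random walks that visit `node_id`."""
    resp = es.search(
        index="random_walks",                          # hypothetical index name
        body={"query": {"match": {"nodes": node_id}}}, # hypothetical field name
        size=size,
    )
    return [(hit["_id"], hit["_source"]) for hit in resp["hits"]["hits"]]
```

Each returned walk id can then be used to fetch and rewrite the affected walk, which corresponds to the querying/fetching split described next.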
There are a couple of basic concepts that make a distributed index and search engine scalable enough to be suitable for the needs of our problem: Index sharding: One of the great features of a distributed index is that it's designed from the ground up to be horizontally scalable, meaning that more nodes can be added to the cluster to match the capacity required by the problem. It achieves horizontal scalability by sharding its index and assigning each shard to a node in the cluster. This allows each node to have to deal with only part of the full random walk index. Furthermore, it also has the concept of replicas (copies of shards) to allow fault tolerance and redundancy, as well as an increased throughput. Distributed search: Searching a distributed index is done in two phases: Querying: Each query q is sent to all shards of the distributed index and each shard returns a list of the matching random walks. Then, the lists are merged, sorted and returned along with the random walk ids. Fetching: Each random walk is fetched by the shard that owns it using the random walk id information. Random walks that lie in different shards can be processed in parallel by the method requesting them. Therefore, while our algorithms are demonstrated in smaller networks for clarity of coverage and better representation of the algorithmic comparison, in practice they can be easily and naturally expanded to very large graphs. Extensions of the algorithms to a distributed environment are out of the scope of this work. Our work is mostly related to research in the area of static network representations learning and dynamic network representation learning. It is also related to research in random walks. Static network representations learning: Starting with Deepwalk (Perozzi et al. 2014), these methods use finite length random walks as their sampling strategy and inspired by word2vec (Mikolov et al. 2013b) use skip-gram model to maximize likelihood of observing a node's neighborhood given its low dimensional vector. This neighborhood is based on random walks. LINE (Tang et al. 2015) proposes a breadth-first sampling strategy which captures first-order proximity of nodes. In (Grover and Leskovec 2016), authors presented node2vec that combines LINE and Deepwalk as it provides a flexible control of random walk sampling strategy. HARP (Chen et al. 2017) extends random walks by performing them in a repeated hierarchical manner. Also there have been further extensions to the random walk embeddings by generalizing either the embeddings or random walks (Chamberlain et al. 2017;Perozzi et al. 2016). Role2Vec (Ahmed et al. 2018) maps nodes to their type-functions and generalizes other random walk based embeddings. Our work is focusing on how many of the above methods introduced for static networks (the ones that use random walks) can be extended to the case of evolving networks. Dynamic network representation learning: Existing work on embedding dynamic networks often apply static embedding to each snapshot of the network and then rotationally align the static embedding across each time-stamp (Hamilton et al. 2016). Graph factorization approaches attempted to learn the embedding of dynamic graphs by explicitly smoothing over consecutive snapshots (Ahmed et al. 2013). DANE (Li et al. 2017) is a dynamic attributed network representation framework which first proposes an offline embedding method, then updates the embedding results based on the changes in the attributed evolving network. Know-Evolve (Trivedi et al. 
2017) proposes an evolving network embedding method in a knowledge graph for entity embeddings, based on multivariate event detection. EvoNRL is a more general method, which extracts the network representation without using node features or explicit use of events. CTDN (Nguyen et al. 2018) is a random walk-based continuous-time dynamic network embedding. Our work differs from that paper in two aspects. First, the random walks in CTDN are temporal random walks; second, CTDN is not an online framework, as all snapshots of the network need to be available before embedding it. HTNE (Zuo et al. 2018) models the temporal network as a self-exciting system, uses a Hawkes process to model neighbourhood formation in the network, and optimizes the embedding based on a temporal point process. HTNE is an online dynamic network embedding framework, but it differs from EvoNRL in that it uses history in its optimization and needs to be tuned for history at each step. NetWalk (Yu et al. 2018) is a random walk-based clique embedding. Its random walk update differs from that of EvoNRL. First, NetWalk keeps an in-memory reservoir and determines the next step of a walk based on that reservoir, so it does not benefit from the sampling method used in EvoNRL, which is based on node degrees. Also, EvoNRL leverages the speed of inverted-indexing tools. In (Du et al. 2018), the authors propose a dynamic skip-gram framework, which is orthogonal to our work. Moreover, (Rudolph and Blei 2018) proposes a dynamic word embedding which uses Gaussian random walks to project the vector representations of words over time. The random walks in that work are based on vector representations and are defined over time series, which is different from our approach. Random walks: Our work is also related to the general concept of random walks on networks (Lovász 1993) and its applications (Craswell and Szummer 2007; Page et al. 1999). READS (Jiang et al. 2017) is an indexing scheme for SimRank computation in dynamic graphs, which keeps an online set of reverse random walks and re-simulates the walks on all instances of the queried nodes. Our proposed method keeps a set of finite-length random walks, which are different from PageRank random walks, and has a different sampling strategy and application compared to READS. Another aspect of random walks used in streaming data is continuous-time random walks. Continuous Time Random Walks (CTRW) (Kenkre et al. 1973) are widely studied in time-series analysis and have applications in finance (Paul and Baschnagel 2010). CTRW is orthogonal to our work, as we are not using time-variant random walks and our random walks do not jump over time. Our focus in this paper is on learning representations of evolving networks. To extend static random walk based network representation methods to evolving networks, we proposed a general framework for updating random walks as new edges and nodes arrive in the network. The updated random walks leverage the time and space efficiency of inverted-indexing methods. By indexing an initial set of random walks in the network and efficiently updating it based on the occurring network topology changes, we manage to always keep a valid set of random walks with the minimum possible divergence from the initial random walk set. Our proposed method, EVONRL, utilizes this always-valid set of random walks to obtain new network representations that respect the changes that occurred in the network. We demonstrated that our proposed method, EVONRL, is both accurate and fast.
We also discussed the interesting trade-off between time performance and accuracy when obtaining subsequent network representations. Determining the right time for obtaining a network embedding is a challenging problem. We demonstrated that simple strategies for monitoring the changes that occur in the network can provide support in decision making. Overall, the methods presented are easy to understand and simple to implement. They can also be easily adopted in diverse domains and applications of graph/network mining. Reproducibility: We make source code and data sets used in the experiments publicly availableFootnote 7 to encourage reproducibility of results. node2vec — code is available at https://github.com/aditya-grover/node2vec NumPy — https://www.numpy.org/ https://github.com/RaRe-Technologies/gensim code is available at https://github.com/farzana0/EvoNRL Elasticsearch: https://www.elastic.co Apache Lucene: http://lucene.apache.org/core/ https://github.com/farzana0/EvoNRL CCA: Canonical correlation analysis Cit-HepTh: High energy physics theory citation network CTDN: Continious-time dynamic network embedding CTRW: Continuous time random walks Dynamic attributed network embedding Digital bibliography &library project EvoNRL: HOPE: High-order proximity preserved embedding HTNE: Hawkes process based temporal network embedding NSERC: Natural sciences and engineering research council of Canada PPI: Protein-protein interaction READS: Randomized efficient accurate dynamic SimRank computation StaticNRL: Static network representation learning TADW: Text-associated DeepWalk Computer science bibliography. Ahmed, A, Shervashidze N, Narayanamurthy S, Josifovski V, Smola AJ (2013) Distributed large-scale natural graph factorization In: Proceedings of the 22nd international conference on World Wide Web - WWW '13.. ACM Press. https://doi.org/10.1145/2488388.2488393. Ahmed, NK, Rossi R, Lee JB, Kong X, Willke TL, Zhou R, Eldardiry H (2018) Learning role-based graph embeddings. arXiv preprint arXiv:1802.02896. Antoniak, M, Mimno D (2018) Evaluating the stability of embedding-based word similarities. TACL 6:107–119. Barnett, V, Lewis T (1974) Outliers in statistical data; 3rd ed. Wiley series in probability and mathematical statistics. Wiley, Chichester. Bengio, Y, Courville A, Vincent P (2013) Representation learning: A review and new perspectives. IEEE TPAMI 35(8):1798–1828. Breitkreutz, B-J, Stark C, Reguly T, Boucher L, Breitkreutz A, Livstone M, Oughtred R, Lackner DH, Bähler J, Wood V, et al. (2007) The biogrid interaction database: 2008 update. Nucleic Acids Res 36(suppl_1):D637–D640. Cai, H, Zheng VW, Chang K (2018) A comprehensive survey of graph embedding: Problems, techniques and applications. IEEE TKDE 30(9):1616–1637. https://doi.org/10.1109/tkde.2018.2807452. Cao, S, Lu W, Xu Q (2015) Grarep: Learning graph representations with global structural information In: Proceedings of the 24th ACM International on Conference on Information and Knowledge Management, CIKM '15, 891–900.. ACM, New York. Chamberlain, BP, Clough J, Deisenroth MP (2017) Neural embeddings of graphs in hyperbolic space. arXiv preprint arXiv:1705.10359. Chen, H, Perozzi B, Hu Y, Skiena S (2017) Harp: Hierarchical representation learning for networks. arXiv preprint arXiv:1706.07845. Craswell, N, Szummer M (2007) Random walks on the click graph In: Proceedings of the 30th annual international ACM SIGIR conference on Research and development in information retrieval - SIGIR '07.. ACM Press. 
https://doi.org/10.1145/1277741.1277784. Du, L, Wang Y, Song G, Lu Z, Wang J (2018) Dynamic network embedding: an extended approach for skip-gram based network embedding In: Proceedings of the Twenty-Seventh International Joint Conference on Artificial Intelligence.. International Joint Conferences on Artificial Intelligence Organization. https://doi.org/10.24963/ijcai.2018/288. Goyal, P, Kamra N, He X, Liu Y2018. Dyngem: Deep embedding method for dynamic graphs. Grover, A, Leskovec J (2016) node2vec: Scalable feature learning for networks In: Proceedings of the 22nd ACM SIGKDD International Confer- ence on Knowledge Discovery and Data Mining, KDD '16, 855–864.. Association for Computing Machinery, New York. Hamilton, WL, Leskovec J, Jurafsky D (2016) Diachronic word embeddings reveal statistical laws of semantic change In: Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), 1489–1501.. Association for Computational Linguistics, Berlin. Hamilton, WL, Ying R, Leskovec J (2017) Representation learning on graphs: Methods and applications. IEEE Data Eng Bull. Heidari, F, Papagelis M (2018) Evonrl: Evolving network representation learning based on random walks In: Proceedings of the 7th International Conference on Complex Networks and Their Applications, 457–469.. Springer International Publishing. https://doi.org/10.1007/978-3-030-05411-3_37. Hotelling, H (1992) Relations between two sets of variates In: Breakthroughs in statistics, 162–190.. Springer, New York. Jiang, M, Fu AW-C, Wong RC-W (2017) Reads: A random walk approach for efficient and accurate dynamic simrank. Proc VLDB Endow 10(9):937–948. Kenkre, VM, Montroll EW, Shlesinger MF (1973) Generalized master equations for continuous-time random walks. J Stat Phys 9:45–50. Kim, Y, Chiu Y, Hanaki K, Hegde D, Petrov S (2014) Temporal analysis of language through neural language models In: Proceedings of the ACL 2014 Workshop on Language Technologies and Computational Social Science.. Association for Computational Linguistics. https://doi.org/10.3115/v1/w14-2517. Kornblith, S, Norouzi M, Lee H, Hinton G (2019) Similarity of neural network representations revisited. In: Chaudhuri K Salakhutdinov R (eds)Proceedings of the 36th International Conference on Machine Learning, volume 97 of Proceedings of Machine Learning Research, 3519–3529, Long Beach. Leskovec, J, Krevl A (2014) SNAP Datasets: Stanford large network dataset collection. http://snap.stanford.edu/data. Accessed 1 June 2019. Li, J, Dani H, Hu X, Tang J, Chang Y, Liu H2017. Attributed network embedding for learning in a dynamic environment. ACM Press. https://doi.org/10.1145/3132847.3132919. Lovász, L (1993) Random walks on graphs. Comb Paul erdos is eighty 2(1-46):4. Mikolov, T, Chen K, Corrado G, Dean J (2013a) Efficient estimation of word representations in vector space. In: Bengio Y LeCun Y (eds)1st International Conference on Learning Representations, ICLR 2013.. Workshop Track Proceedings, Scottsdale. Mikolov, T, Sutskever I, Chen K, Corrado GS, Dean J (2013b) Distributed representations of words and phrases and their compositionality In: Proceedings of the 26th International Conference on Neural Information Processing Systems - Volume 2, NIPS'13, 3111–3119.. Curran Associates Inc., Red Hook. 
Morcos, A, Raghu M, Bengio S (2018) Insights on representational similarity in neural networks with canonical correlation In: Proceedings of the 32nd International Conference on Neural Information Processing Systems, NIPS'18, 5727–5736.. Curran Associates Inc., Red Hook. Newman, ME (2003) The structure and function of complex networks. SIAM Rev 45(2):167–256. MathSciNet Article Google Scholar Nguyen, GH, Lee JB, Rossi RA, Ahmed NK, Koh E, Kim S (2018) Continuous-time dynamic network embeddings In: Companion of the The Web Conference 2018 on The Web Conference 2018 - WWW '18.. ACM Press. https://doi.org/10.1145/3184558.3191526. Ou, M, Cui P, Pei J, Zhang Z, Zhu W (2016) Asymmetric transitivity preserving graph embedding In: Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining - KDD '16, 1105–1114.. ACM Press. https://doi.org/10.1145/2939672.2939751. Page, L, Brin S, Motwani R, Winograd T (1999) The pagerank citation ranking: Bringing order to the web. Technical report, Stanford InfoLab. Paul, W, Baschnagel J (2010) Stochastic Processes: From Physics to Finance. Springer, Berlin Heidelberg. MATH Google Scholar Pearson, K (1905) The problem of the random walk. Nature 72(1867):342. Perozzi, B, Al-Rfou R, Skiena S2014. Deepwalk: Online learning of social representations. Association for Computing Machinery, New York. Perozzi, B, Kulkarni V, Chen H, Skiena S (2016) Don't walk, skip! online learning of multi-scale network embeddings In: Proceedings of the 2017 IEEE/ACM International Conference on Advances in Social Networks Analysis and Mining 2017, ASONAM '17, 258–265.. Association for Computing Machinery, New York. Raghu, M, Gilmer J, Yosinski J, Sohl-Dickstein J (2017) Svcca: Singular vector canonical correlation analysis for deep learning dynamics and interpretability. In: Guyon I, Luxburg UV, Bengio S, Wallach H, Fergus R, Vishwanathan S, Garnett R (eds)Advances in Neural Information Processing Systems 30, 6076–6085.. Curran Associates, Inc., Red Hook. J. Ramsay, J, Berge Jt, Styan G (1984) Matrix correlation. Psychometrika 49(3):403–423. Reza, Z, Huan LSocial computing data repository. Rudolph, M, Blei D (2018) Dynamic embeddings for language evolution In: Proceedings of the 2018 World Wide Web Conference on World Wide Web - WWW '18.. ACM Press. https://doi.org/10.1145/3178876.3185999. Tang, J, Qu M, Wang M, Zhang M, Yan J, Mei Q (2015) LINE: large-scale information network embedding In: Proceedings of the 24th International Conference on World Wide Web, WWW '15, 1067–1077. Republic and Canton of Geneva, CHE, 2015. International World Wide Web Conferences Steering Committee. Trivedi, R, Dai H, Wang Y, Song L (2017) Know-evolve: Deep temporal reasoning for dynamic knowledge graphs. ICML 70:3462–3471. Wang, D, Cui P, Zhu W (2016) Structural deep network embedding In: Proceedings of the 22Nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, KDD '16, 1225–1234.. ACM, New York. Yang, C, Liu Z, Zhao D, Sun M, Chang EY (2015) Network representation learning with rich text information In: Proceedings of the 24th International Conference on Artificial Intelligence, IJCAI'15, 2111–2117.. AAAI Press. Yang, J, Leskovec J (2015) Defining and evaluating network communities based on ground-truth. KAIS 42(1):181–213. 
Yu, W, Cheng W, Aggarwal CC, Zhang K, Chen H, Wang W (2018) Netwalk: A flexible deep embedding approach for anomaly detection in dynamic networks In: Proceedings of the 24th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining KDD '18, 2672–2681.. ACM, New York. Zhang, D, Yin J, Zhu X, Zhang C (2018) Network representation learning: A survey. IEEE Transac Big Data:1. https://doi.org/10.1109/tbdata.2018.2850013. Zuo, Y, Liu G, Lin H, Guo J, Hu X, Wu J (2018) Embedding temporal network via neighborhood formation In: Proceedings of the 24th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining.. ACM. https://doi.org/10.1145/3219819.3220054. This research has been supported by a Natural Sciences and Engineering Research Council of Canada (NSERC) Discovery Grant (#RGPIN-2017-05680). York University, Toronto, M3J1P3, ON, Canada Farzaneh Heidari & Manos Papagelis Farzaneh Heidari Manos Papagelis FH has made substantial contributions to the design of the work; the acquisition, analysis, and interpretation of data; the creation of new software used in the research; has drafted and revised the work. MP has made substantial contributions to the conception and design of the work; interpretation of data and results; has drafted and revised the work. The author(s) read and approved the final manuscript. Correspondence to Farzaneh Heidari. Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/. Heidari, F., Papagelis, M. Evolving network representation learning based on random walks. Appl Netw Sci 5, 18 (2020). https://doi.org/10.1007/s41109-020-00257-3 Network representation learning Evolving networks Dynamic random walks Dynamic graph embedding Machine Learning with Graphs
The importance of harvest residue and fertiliser on productivity of Pinus patula across various sites in their first, second and third rotations, at Usutu Swaziland Lindani Z. Mavimbela1, Jacob W. Crous2, Andrew R. Morris3 & Paxie W. Chirwa4 Concern is growing about the future of forestry productivity due to intensive nutrient removal, as a result of different harvesting operations. This study aimed to determine the effects on forest productivity when using different slash-retention scenarios with the recommended amounts of mineral fertiliser in Usutu forest. Usutu is a plantation forest that grows mostly softwood where the predominant species is Pinus patula Schiede ex Schltdl. & Cham. The first trial series (F) comprised of one trial located in each of five forest blocks. It was established in 1971 and compared the effects of various site preparation scenarios (slash removal, slash retention and cultivation) on the early growth of Pinus patula for adjacent first (F1R) and second rotation (F2R) sites (i.e. grassveld and clearfelled first-rotation sites). The second (S) trial series was re-established in April 1991 on exactly the same position as the first trial series, and involved second (S2R) and third rotation (S3R) sites. Three main treatments, standard pitting through harvest residue (control); manual pitting after removal of harvest residue and forest floor (cleared); and manual pitting and broadcast application of dolomitic lime (2 t ha− 1) over the slash (lime), were undertaken in factorial combination with the application of phosphate and potassium fertiliser. The data reported here are for tree volume productivity across the five sites of the first trial series up to age 10 years and three of the five sites of the second trial series up to age 9 years. Slash removal decreased volume productivity by 9 and 13% in the F1R and F2R at 10 years of age and further by 21 and 33% in S2R and S3R, respectively at 9 years of age. However, fertiliser application increased volume productivity by 14 and 15% in the F1R and F2R at 10 years age and further by 18 and 10% in the S2R and S3R, respectively at 9 years of age. In order to sustain or increase productivity, it is recommended that harvest residue slash should be conserved and fertiliser containing phosphate and potassium be applied at planting at Usutu. The nature of research into multiple forest rotations poses difficulties with some aspects like data collection, trial maintenance and records retrieval. In addition, the site and climate conditions can also vary over time and thus there have been only a small number of multiple-rotation studies (Evans 1998). The aim of long-rotation studies is to understand how forest site-establishment treatments affect growth patterns and productivity, which is essential for adopting sustainable management techniques (Jokela et al. 2010). Globally, attention has been given to some site-establishment practices that have negative impacts on long-term site productivity (LTSP). Whole-tree harvesting has been reported to export more nutrients from sites than conventional harvesting (CH), where slash is retained, resulting in a decline in site productivity (Weber 1985; Olsson et al. 1996; Hyvönen et al. 2000; Egnell and Valinger 2003; Smaill et al. 2008; Eisenbies et al. 2009; Titshall et al. 2013). The conventional harvesting method applied at Usutu involved removing tree stems with bark from sites but not slash (Crous et al. 
2007b) and is considered to have a less negative effect on site productivity than slash removal due to low nutrient content losses from stem wood (Mälkönen 1976; Olsson et al. 1996; Wall 2012). Harvest slash (foliage, branches and non-utilisable timber) is regarded as nutrient rich (Raison 1982; Wall 2012) so its retention on the forest floor results in no loss of growth and also increased volume production over time (Egnell 2011). This practice is followed in the case of Usutu where forest floors are rarely burnt (Crous et al. 2007b). The retained slash on CH sites breaks down to form humus and helps maintain critical physical, chemical, and biological properties of soil (Eisenbies et al. 2009; Sayer 2006) that are crucial for the maintenance and supply of nutrients throughout multiple rotations (Raison 1982). Abiotic (water holding capacity, nutrients, soil temperature) site conditions have also been reported to have improved on CH sites (Raison 1982; Lundkvist 1983; Olsson et al. 1996; Smaill et al. 2008; Eisenbies et al. 2009; Palviainen et al. 2010; Titshall et al. 2013) as a result of maintenance of a humus layer (Egnell and Leijon 1999). Poor base saturation of calcium (Ca) and magnesium (Mg) due to slash removal on sites has been found to result in a decrease in soil pH (Olsson et al. 1996). Furthermore, retained slash cushions and protects soil against compaction and erosion, through improved infiltration (Sayer 2006; Eisenbies et al. 2009; Cortini et al. 2010) and also suppresses weeds (Titshall et al. 2013). Southern African forestry soils are naturally low in fertility, occurring in old weathering surfaces in high rainfall areas (Olbrich et al. 1997). Low fertility is more prevalent on soils derived from quartzites, acid crystalline and sandstone rocks, as these often have potassium (K) or phosphate (P) as limiting nutrients. Timber harvesting at some sites has been reported to have caused an increase in the leaching of base cations, which can lead to soils becoming more acidic (Olbrich et al. 1997). Many tree species have been genetically improved to increase productivity, but this does not occur on soils deficient in P and K (Titshall et al. 2013) or Ca (Herbert and Schönau 1990). Site-specific, remedial fertiliser applications have been reported to overcome the effects of mineral nutrient deficiencies in soils, leading to yield improvement (Herbert and Schönau 1989; Herbert and Schönau 1990; Titshall et al. 2013). Conversely, Wall (2012) found that application of lime (calcium hydroxide) did not increase productivity when compared with retained slash. Although application of lime was found to increase biological activity within the soil (Baath and Arnebrant 1994), it promoted the breakdown and release of nutrients from humus (Marschner and Wilczynski 1991). Furthermore, lime is reported to have a negative effect on fine-root development (Persson and Ahlström 1990). Woods (1990) observed that second-rotation (2R) Pinus radiata D.Don in South Australia had yellow, unhealthy foliage where slash was burnt during site preparation. Regional site assessments showed that the average growth rate of 18 to 20 m3 ha− 1 per annum declined by 25 to 40% in 2R stands across different locations. In subsequent studies, the cause of the decline was attributed to the loss of organic material due to burning of forest slash from the first rotation (1R) stands after clear felling. 
Yellowing of foliage may be symptomatic of nitrogen (N) deficiency and the burning of slash may have resulted in the loss of nitrogen (N) (Woods 1990). Fertiliser trials were conducted, and good tree-growth responses were obtained from high N application treatments, which indicated that N deficiency was most likely the cause of yield declines (Woods 1990). Similar concerns, initiated productivity research at the Usutu forest in 1968 in Swaziland (Evans 1975, 1996). The yield monitoring of successive rotations showed a 20% decline in productivity on the eastern part of Usutu, in the 2R crop (Evans 1978). Further investigation showed that the yield decrease occurred on soils derived from gabbro parent material, which had a high P-fixation capacity leading to low levels of plant-available P and K, resulting in deficiencies of these nutrients (Morris: Soil fertility and long term productivity of Pinus patula in the Usutu Forest Swaziland, Unpublished). A review by Wall (2012) showed reductions in site productivity of 31–39% after clear-felling using the whole-tree harvesting technique. This harvesting technique reduced soil P, which resulted in the most significant loss of yield. It also led to a 39% decrease in soil Ca followed by K and Mg. A number of authors have reported on the need for appropriate fertiliser application in order to maintain soil timber-producing capacity (Mälkönen 1976; Jokela et al. 1991; Scott and Bliss 2012; Jokela and Long 2015). At Usutu, spot application of 20 kg ha− 1 of P and K fertiliser at planting and broadcast application of 75 kg ha− 1 of P and K fertiliser at first pruning at age 5 years, corrected the yield decline (Morris: Soil fertility and long term productivity of Pinus patula in the Usutu Forest Swaziland, Unpublished). Further research showed that a large proportion of P fertiliser was retained as residual P in the subsequent rotation as the inorganic P fertiliser was converted into organic P (Po) forms (Crous et al. 2007b, 2011a). At Usutu forest, a trial comparing slash removal to slash retention and fertiliser application in the first and second rotations showed that slash removal at establishment resulted in unsustainable growth (Germishuizen 1979). Other studies have shown that this growth decline was caused by the loss of soil organic carbon (SOC) and nutrients (especially N) (Woods 1990), soil erosion and displacement (Titshall et al. 2013) and export of aboveground biomass (du Toit et al. 1999). Furthermore, removal of slash resulted in reduced soil moisture retention potential, low pH (Eisenbies et al. 2009), increased both soil temperature and bulk density (Smaill 2008) when compared with slash-retained plots (Sayer 2006). Short rotations also affect the mechanisms of nutrient supply and uptake balance because of changes in organic matter turnover (Mälkönen 1976; Raison 1982; Chen et al. 2000; Laclau et al. 2003). This occurs as a result of modified organic matter distribution and quality (Mälkönen 1976; Crous et al. 2011b). Hence, managing tree felling debris or slash during compartment reestablishment is considered important for maintenance of the nutrient cycle of the soil surface though mineralisation of harvest residue (Weber 1985; Perez-Batallon et al. 2001; Saarsalmi et al. 2010; Titshall et al. 2013). Therefore, retaining slash is considered beneficial in maintaining site productivity on nutrient-poor sites (Wall 2012), by promoting long-term maintenance of soil productivity (Carter and Foster 2004; Laclau et al. 2010). Crous et al. 
Crous et al. (2011b) suggested that larger quantities of K are exported from the site through stem bark and stem wood at P. patula harvest than at any other stage of the rotation. Retention of branches and foliage prevents considerable nutrient export, as about 37% of the total N, 13% of the total Ca and Mg, and 25% of the total P and K of the aboveground biomass is held in foliage (Hernández et al. 2009). A reduction of 10 to 35% in tree height was recorded in 2R Pinus taeda L. and P. elliottii Engelm. in the southern USA due to nutrient deficiency resulting from nutrient removal, particularly of P, during timber harvest (Tiarks et al. 1999; Rose and Shiver 2000). In Southern Africa, there is a need to understand the biogeochemical cycling processes that maintain forest productivity, as highlighted in studies conducted by du Toit and Scholes (2002), Scholes (2002), Crous et al. (2007b) and Dovey et al. (2011).
This study was therefore aimed at investigating the effects of residue removal, in comparison with residue retention and fertiliser application, on tree growth over three rotations of two trials (F1R & F2R and S2R & S3R) in a pine plantation. Specifically, the study focused only on tree-based indicators of site productivity (tree height, tree diameter, basal area and stand volume).
Sappi Usutu forest plantation is situated 25 km southwest of Mbabane on the western Highveld of Swaziland. The land area covered is about 66,516 ha, of which 49,238 ha is productive (commercial) land, consisting of pine and eucalyptus species, with P. patula being the dominant species. The soils at Usutu forest are generally red clays, derived from gabbro lithology and classified as Oxisols (Soil Survey Staff 2006). A description of each site, indicating the underlying geology of each compartment (Pallett: Forest land types of the Usutu Forest Swaziland, Unpublished) and the soil sets, namely A9 (M-set), B17 or C9 (Q-set), D5 (T/Q-set) and E2 (N-set) (Murdoch 1968; Nixon 2006), is provided in Table 1 (the soil sets are described in the notes preceding the reference list). The Q/T-set site is on a ridge top and has a high stone content (Germishuizen 1979). Stony soils have poor nutrient- and moisture-storage ability (Macadam 1989). Several authors have stated that the soil at Usutu is generally highly leached, with low nutrient content, low cation exchange capacity (CEC) and moderately to strongly acidic conditions (Morris 1995; Evans 1999; Crous et al. 2007a), as shown in Table 2. The CEC range for Usutu forest sites was 0.8–2.7 cmol kg−1 (Germishuizen 1979). Similarly, Skinner et al. (2001) reported a low CEC of 2.2 cmol kg−1 for kaolinitic clay soils found in forest soils of Washington State in the United States, the North Island of New Zealand, and Kalimantan in Indonesia. The soil chemical analysis for the 0–8-cm depth at all first-trial, second-rotation (F2R) sites shows exchangeable aluminium (Al) above 1 cmolc kg−1, corresponding with the lower pH values of these sites (Table 2). Soil solutions with 1 cmolc kg−1 of exchangeable Al cause Al toxicity in plants (Akhtaruzzaman et al. 2014).
Table 1 Description of the underlying geology, soil set and additional comments related to the five sites used for the first trial
Table 2 Soil chemical characteristics of the five compartment soil sets at the establishment of the first and second rotations of the first trial series; adapted from Germishuizen (1979)
First trial series
The first trial series involved one trial located in each of five forest blocks (A9, B17, C9, D5 and E2; Table 1).
It was designed to compare the effect of various site-preparation scenarios on the early growth of Pinus patula on adjacent first-rotation (F1R) sites (i.e. grassveld) and second-rotation (F2R) sites (i.e. clearfelled first-rotation sites). Three site-preparation treatments were applied in factorial combination with a spot application of fertiliser. The site-preparation treatments consisted of:
- Removal of all harvest residue and the forest-floor layer or grass to mineral soil, followed by standard manual pit preparation (CLEARED).
- Standard manual pit preparation into the remaining harvest residue or grassveld (CONTROL).
- Complete cultivation of the site by ploughing and rotavating to 20-cm depth (CULTIVATED).
The fertiliser application supplied 180, 120 and 150 kg ha−1 of elemental N, P and K respectively. These nutrients were applied as granular limestone ammonium nitrate, double superphosphate and potassium chloride respectively (Germishuizen 1979). The trials were planted in 1971 at a 1.37 × 1.37 m spacing (equivalent to 5330 stems per hectare (sph)) with a plot size of six rows by six rows, which was a much higher planting density than the standard 2.7 × 2.7 m spacing (equivalent to 1372 sph). At almost 4 years of age, mechanical thinning was conducted over the whole diameter distribution to decrease the stand density by 50% (to 2665 sph) (Morris: Trial series B43: Comparison of establishment method on first and second rotation sites, Unpublished). Estimates of standing volume were made in 1976, 1978 and 1982, at 4, 6 and 10 years of age respectively (Morris: Trial series B43: Site preparation and fertilizer application at time of planting, part 1: Influence on volume increment, Unpublished).
Needle sampling and nutrient analysis
In July 1982, needle samples were collected from all five sites. A bulked sample from 15 trees was obtained by sampling three trees in each of the five replications for each treatment combination on each site. Mature needles formed during the previous spring or early summer period were collected from the upper half of the live crown. Samples were oven dried at 65 °C and ground in a Wiley mill. Chemical analysis was conducted by the Malkerns Research Station of Swaziland. Total nitrogen was determined by Markham distillation following micro-Kjeldahl digestion using concentrated sulphuric acid and salicylic acid with a selenium catalyst. The amount of nitrogen present in the form of ammonium ions was determined after filtration with hydrochloric acid. The elements P, K, Ca and Mg were determined on a sample dry-ashed with magnesium nitrate and taken up in dilute hydrochloric acid. Phosphate was measured colorimetrically as the molybdenum blue complex, potassium by flame photometer, and calcium and magnesium by atomic absorption spectrometry (Morris: Trial series B43: Site preparation and fertilizer application at time of planting, Part 2: Influence on needle nutrient content, Unpublished).
Second trial series
The second trial series also involved one trial located in each of five forest blocks. This trial series was established in April 1991 at exactly the same position as the first trial series (Morris: Trial series B43: Comparison of establishment method on first and second rotation sites, Unpublished). Three site-preparation treatments were applied in factorial combination with a spot application of P and K fertiliser. The cultivation treatment used in the first trial series could not be repeated due to the large number of stumps remaining.
Therefore, lime was applied to the previously ploughed treatment plots with the objective of increasing the breakdown of slash on the forest floor. It should be noted that the lime treatment effects are confounded, as their performance could have been influenced by residual effects of the previous cultivation treatment. The site preparation consisted of three treatments:
- Removal of all harvest residue and the forest-floor layer to mineral soil, followed by standard manual pit preparation (CLEARED).
- Standard manual pit preparation into the remaining harvest residue (CONTROL).
- Manual pitting and broadcast application of dolomitic lime (2 t ha−1) over the slash (LIME).
The fertiliser used in this series was 0:1:1 (17%) PK, spot applied at 170 g seedling−1 (72.3 kg ha−1 of elemental P and K respectively). Nutrient depletion was accelerated by repeating the harvest-residue removal treatment. Each treatment was replicated five times within each of the rotation blocks. Trees were established at a spacing of 2 m × 2 m (i.e. 2500 sph) to achieve a stocking rate similar to that of the first trial series after thinning (Morris: Trial series B43: Site preparation and fertilizer application at time of planting, Part 2: Influence on needle nutrient content, Unpublished; Morris: Pitting of R143 trial sites. Internal Memorandum, Unpublished). Compartment D5 was destroyed by hail in 2002 and C9 was destroyed by fire in 2007, so only trial A9 was measured up to rotation age (S. Khoza, Planning Manager Usutu, personal communication, 2013). Therefore, results from 9-year-old trees at three sites (A9, C9 and D5) in the second trial series are reported here and are compared with the results from the first trial series at the same locations. Soil analysis was not conducted at the end of the F1R and F2R or the S2R and S3R trials, as the initial experiment did not focus on soil-based indicators of productivity. No information was recorded about the level of genetic improvement of the planting material, but it is assumed to be unimproved, as most P. patula breeding programmes in South Africa only started in the 1990s.
Tree growth measurements, calculations and statistical analysis
Tree height and diameter at breast height (DBH) were recorded at intervals in order to calculate estimated standing volume at ages 4, 6 and 10 years, using local volume-basal area relationships. Bark thickness of 4- and 6-year-old trees was determined using a bark gauge. Volume values for 10-year-old trees were extrapolated from destructive sampling of 120 trees from guard rows around each plot in 1982. A single tree per diameter range was selected from each plot for height measurement, and the data were fitted to the Schumacher logarithmic tree volume equation (Germishuizen 1979). Results were analysed statistically using two-way analysis of variance (ANOVA) to assess the effect of treatments on volume production and needle nutrient content. Correlations between 10-year growth and needle nutrient content were determined and analysed using multiple regression (Morris: Trial series B43: Site preparation and fertilizer application at time of planting, part 1: Influence on volume increment, Unpublished).
For the second trial series, tree height and DBH were recorded at 2-year intervals. Measurements within treatment plots were taken on 7-, 9-, 11-, 13- and 15-year-old trees within sub-plots. Sub-plots measuring 12.3 m × 17.8 m were set up to incorporate 6 × 8 rows at the 2 m × 2 m spacing. The height and DBH of six randomly selected trees per plot were measured.
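The two-way factorial analysis described above (site preparation crossed with fertiliser, with plot volume production as the response) was run in Genstat; the Python/statsmodels sketch below is only an illustrative stand-in for that kind of analysis, and the plot volumes in it are invented to show the structure of the data rather than being trial results.

```python
# Illustrative two-way factorial ANOVA (site preparation x fertiliser) on plot volume.
# The treatment means and the noise below are made up; they only mimic the trial layout
# (three site-preparation levels x two fertiliser levels x five replications per site).
import random
import pandas as pd
import statsmodels.api as sm
from statsmodels.formula.api import ols

random.seed(1)
invented_means = {  # hypothetical mean volumes (m3/ha), not measured values
    ("CLEARED", "none"): 170, ("CLEARED", "PK"): 195,
    ("CONTROL", "none"): 200, ("CONTROL", "PK"): 225,
    ("LIME",    "none"): 205, ("LIME",    "PK"): 230,
}
rows = []
for (prep, fert), mu in invented_means.items():
    for rep in range(1, 6):
        rows.append({"site_prep": prep, "fertiliser": fert, "rep": rep,
                     "volume": mu + random.gauss(0, 10)})
df = pd.DataFrame(rows)

# Factorial model with the site preparation x fertiliser interaction, then the ANOVA table.
model = ols("volume ~ C(site_prep) * C(fertiliser)", data=df).fit()
print(sm.stats.anova_lm(model, typ=2))
```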
The DBH was expressed conventionally as the quadratic mean DBH (QmDBH), which is the stand diameter corresponding to the arithmetic mean tree basal area (Curtis and Marshall 2000). Volume-per-hectare estimates were obtained from the DBH and height measurements of six undeformed live trees in each plot within the blocks, using the Schumacher and Hall model (Bredenkamp 2000) shown in Eq. 1 (a short numerical sketch of Eqs. 1 and 2 is given below):
$$ \ln V = b_{0} + b_{1}\ln\left(\mathrm{DBH} + f\right) + b_{2}\ln H \qquad (1) $$
where:
ln = natural logarithm to the base e
V = stem volume (m3, under bark), usually to a 75-mm tip diameter
DBH = breast-height diameter (cm, over bark)
f = correction factor
H = tree height (m)
b0 = P. patula coefficient (−13.4694)
b1 = P. patula coefficient (2.4396)
f = P. patula coefficient (8)
Basal area was calculated from the DBH of live trees in each plot using Eq. 2:
$$ G_{\mathrm{ha}} = \frac{\pi\left(\frac{\mathrm{QmDBH}}{200}\right)^{2} \times \mathrm{No.\ trees/plot}}{\mathrm{Plot\ A\ (ha)}} \qquad (2) $$
where:
G_ha = basal area per hectare
QmDBH = quadratic mean DBH in centimetres
No. trees/plot = number of live trees per plot
Plot A (ha) = plot area in hectares
Statistical analysis of the change in stocking rate and of the volume data was conducted using Genstat 15.1 software, using analysis of variance. The ANOVA was undertaken using a factorial approach, with site preparation and fertiliser as the main treatment effects.
Tree volume production at 6 years of age on re-established (F2R) sites was not statistically different from the production on afforested grassland (F1R) sites (Germishuizen 1979). Cultivation significantly increased volume growth up to age 6 years, but subsequent growth to age 10 years was not increased (Morris: Trial series B43: Comparison of establishment method on first and second rotation sites, Unpublished). Site preparation and fertiliser treatments had a significant effect on volume production at most sites (Table 3a). There was no interaction between site preparation and fertiliser application (Table 3a).
Table 3 ANOVA summary of (a) significance levels at the five individual sites and (b) volume production per site for the main treatments at age 10 years, in the first and second rotations of the first (F) trial series established in 1971
The main treatments were not statistically significant at every site (Table 3a), but the absolute treatment responses were similar (Table 3b), resulting in significant responses in the mean treatment effects across the sites (Table 3a). The combination of fertiliser and slash retention outperformed the combination of no fertiliser and slash removal (Table 3b). The mean treatment effect showed a substantial increase in volume productivity for the control compared with the residue-removed treatment, of 19 m3 ha−1 and 28 m3 ha−1 in the F1R and F2R respectively (Table 3b). Clearing all vegetation and/or harvest residue to mineral soil caused a substantial reduction in growth of up to 21% at age 10 years (Morris: Trial series B43: Comparison of establishment method on first and second rotation sites, Unpublished). The high rate of fertiliser applied at planting across all sites produced substantial increases of 28 m3 ha−1 at age 10 years in the F1R and 29 m3 ha−1 in the F2R (Table 3b). The main treatment and fertiliser effects remained significant from 4 to 10 years (Table 4).
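The following is a minimal numerical sketch of Eqs. 1 and 2 as stated above, together with the nominal-stocking arithmetic quoted for the planting spacings. The b2 coefficient of Eq. 1 is not listed in the text, so it is left as an explicit argument rather than being given a value, and the quadratic mean DBH and live-tree count in the example call are hypothetical.

```python
# Sketch of Eq. 1 (Schumacher-Hall stem volume) and Eq. 2 (basal area per hectare).
import math

def stem_volume_m3(dbh_cm, height_m, b2, b0=-13.4694, b1=2.4396, f=8.0):
    """Eq. 1: ln V = b0 + b1*ln(DBH + f) + b2*ln(H); V in m3 under bark.
    b2 is not reported in the text, so it must be supplied by the caller."""
    return math.exp(b0 + b1 * math.log(dbh_cm + f) + b2 * math.log(height_m))

def basal_area_per_ha(qm_dbh_cm, n_live_trees, plot_area_ha):
    """Eq. 2: (QmDBH/200) converts a diameter in cm to a radius in m."""
    tree_basal_area_m2 = math.pi * (qm_dbh_cm / 200.0) ** 2
    return tree_basal_area_m2 * n_live_trees / plot_area_ha

def stems_per_hectare(square_spacing_m):
    """Nominal stocking for square spacing, e.g. 2.0 m x 2.0 m -> 2500 sph."""
    return 10_000.0 / square_spacing_m ** 2

print(round(stems_per_hectare(1.37)), round(stems_per_hectare(2.7)))  # ~5328 and 1372
# (the text rounds the 1.37 m spacing to 5330 sph)
# Hypothetical 12.3 m x 17.8 m sub-plot with 40 live trees of QmDBH 20 cm:
print(round(basal_area_per_ha(20.0, 40, 12.3 * 17.8 / 10_000.0), 1), "m2/ha")
```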
Table 4 ANOVA summary of (a) main treatment significance levels and (b) volume production per main treatment across five sites over time, for the first trial series established in 1971 on first and second rotation sites
Yields for the site-preparation treatments decreased in the order cultivation, control, residue removed across all sites and both rotation cycles (Table 4b). The effects of site preparation differed between F1R and F2R: the cultivation treatment increased volume production relative to the control in F1R, but did not differ from the control at ages 6 and 10 years in F2R (Table 4b). Fertiliser effects differed significantly between F1R and F2R at all ages (4, 6 and 10 years), as shown in Table 4b.
In general, the site-preparation treatments had little effect on needle nutrient concentration. The application of fertiliser increased the needle P and K levels and reduced the Mg level on the F2R sites (Table 5). It had no effect on N or Ca levels.
Table 5 The influence of fertiliser on needle nutrient concentration at age 10 years across all five sites of the first trial series established in 1971
Across all treatments and sites, the amounts of P and K in the needles showed a significant linear correlation with standing volume at age 10 years (Table 6). However, the strength of the correlation was low, with correlation coefficients of only 0.35 and 0.54 for P and K respectively. There was also a strong positive correlation between foliar P and K concentrations, as well as between the Ca and Mg concentrations.
Table 6 Correlation coefficients and statistical significance of the relationship between needle nutrient concentration (%) and volume production at age 10 years across all treatments and sites of the first trial series established in 1971
The results for trees at individual sites at age 9 years indicate that volume production in S2R and S3R differed significantly among sites, as in the F1R and F2R trials. Site preparation had a significant effect at all sites in both rotations tested, except for the A9 S2R site (Table 7a). Soils of the three sites (A9, C9 and D5) all showed a positive response to fertiliser application on both the S2R and S3R sites, even though only the A9 site was located on the gabbro (M-set). The absolute trend was similar at all sites, which resulted in a significant fertiliser effect when the mean responses across all sites were analysed (Table 7a). On the A9 S3R site, a significant site preparation by fertiliser interaction was observed (Table 7a).
Table 7 The ANOVA summary of (a) significance levels and (b) volume production per main treatment over three sites at age 9 years, from the second and third rotation trials established in 1991
The slash-retention treatment (control) produced a larger increase in tree volume on the Q-set soils than on the other soil types in both S2R and S3R (237 and 244 m3 ha−1 respectively). Slash retention led to significantly greater tree volume than slash removal across almost all rotations and compartments, with the single exception of the Q/T-set soil in the S2R (Table 7b). The fertiliser-application treatment also led to significantly increased tree volume across all sites and rotations. The interaction of site preparation and fertiliser recorded on the M-set S3R site was mainly driven by a significant increase in tree volume production when fertiliser was applied on the plots where harvest residue had been removed (Fig. 1), whilst fertiliser application did not increase tree volume significantly on the other site-preparation treatments.
Fig. 1 Interaction between fertiliser and site preparation treatment on volume production at age 9 years on site A9, third rotation plot
The effects of residue removal, lime or fertiliser application on tree volume across the three sites remained significant over the 7- and 9-year assessments, but their interactions were not significant (Table 8a). The residue-removed treatment produced significantly lower volume than either the control or the lime treatment across sites and rotations at ages 7 and 9 years (Table 8b). The fertiliser treatment exhibited significantly higher volume production than the no-fertiliser treatment at all three sites in both S2R and S3R (Table 8b).
Table 8 The ANOVA summary of (a) main treatment significance levels and (b) volume production per treatment across three sites over time, for the trial on second and third rotation sites established in 1991
Comparison of two trial series
The results across all treatments at the same three sites indicated that there was no significant difference between F1R and F2R for the plantings established in 1971, nor between S2R and S3R established in 1991 (Fig. 2). The two trial series also demonstrated similar volume production despite the fact that they were established at different times and might have been subjected to different climatic conditions.
Fig. 2 Mean volume increment across all treatments over the three sites (A9, C9, D5) in the first (F1R) and second rotation (F2R) areas established in 1971, and the second (S2R) and third rotation (S3R) areas established in 1991
In the first trial series (1971 planting), the effect of cultivation of the grassveld (F1R) decreased from 4 to 10 years, resulting in no significant additional volume production by age 10 (Fig. 3). This is a Type-1 response, commonly observed for silvicultural treatments that enhance resource availability at planting and shortly afterwards. The fertiliser response remained constant over time and resulted in 12% more volume across all F1R sites by the age of 10 years. Fertiliser application in the following rotation (S2R, established in 1991) had a larger impact than in the previous rotation (Fig. 3). Removal of harvest residue had a negative impact on F1R growth that decreased over time. The negative impact of repeated residue removal was much greater in the following rotation and resulted in a 20% decrease in volume production by age 9 years (Fig. 3).
Fig. 3 Relative response compared to the control treatment over time for site preparation and fertiliser application in the first rotation (F1R) established in 1971 (dotted lines) and the second rotation (S2R) on the same three sites established in 1991 (solid lines)
The effect of fertiliser at the age of 4 years produced 35% additional volume in F2R, dropping to 20% at the age of 10 years. In the third rotation (S3R), the application of fertiliser had a smaller effect than in F2R, increasing volume by only 10% at age 9 years (Fig. 4). The removal of harvest residue at the F2R sites decreased volume production by more than 10% at age 10 years. In the following rotation (S3R), the repeated removal of harvest residue reduced volume production by more than 30% by age 9 years (Fig. 4).
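The relative responses plotted in Figs. 3 and 4 are treatment volumes expressed as a percentage difference from the control (slash retained, unfertilised) at the same age. The one-line helper below makes that calculation explicit; the volumes in the example call are placeholders rather than values read from the figures.

```python
def relative_response_pct(treatment_volume_m3_ha, control_volume_m3_ha):
    """Percent gain (+) or loss (-) in volume relative to the control treatment."""
    return 100.0 * (treatment_volume_m3_ha - control_volume_m3_ha) / control_volume_m3_ha

# Hypothetical example: a fertilised stand at 200 m3/ha against a control of 175 m3/ha.
print(round(relative_response_pct(200.0, 175.0), 1))  # ~14.3 (%)
```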
Fig. 4 Relative response compared to the control treatment over time for site preparation and fertilisation in the second rotation (F2R) established in 1971 (dotted lines) and the third rotation (S3R) on the same three sites established in 1991 (solid lines)
A significant decrease in the mean tree volume produced when slash was removed was observed across all sites in both the first trial series (F1R and F2R) and the second trial series (S2R and S3R). Furthermore, the negative impact (reduction in volume) of slash removal across all sites increased over time. In the first trial series at 10 years, the volume loss was 19 m3 ha−1 (9%) and 28 m3 ha−1 (13%) for F1R and F2R respectively, while in the second trial series at 9 years it was 41 m3 ha−1 (21%) and 69 m3 ha−1 (33%) for S2R and S3R respectively. These results concur with those reported elsewhere (Egnell and Leijon 1999; Egnell and Valinger 2003; Simpson et al. 2000) of productivity losses of 11% for P. patula, 32% for Pinus sylvestris L., 20% for P. sylvestris and 31.4% for Pinus elliottii at 10, 15, 24 and 3 years respectively. The productivity loss can possibly be attributed to the fact that harvest residue provides an important pool of nutrients for the successive crop. Hernández et al. (2009) stated that leaves contain about 37% of the N, 13% of the calcium (Ca) and magnesium (Mg), and 25% of the P and K. Furthermore, slash from harvested timber, when left on the forest floor, is broken down and mineralised from its organic form to supply nutrients to ground vegetation and trees (Saarsalmi et al. 2010). In support of this, Wall and Hytönen (2011) stated that when slash is retained on the forest floor, there is greater redepositing of nutrients mined by trees, such as K, Ca, Mg and P, than when slash is removed. Additionally, slash removal can have a negative effect on the supply of N and exchangeable base cations in the soil, and bark removal can result in the loss of Ca and P (Titshall et al. 2013). The differential decomposition rates of foliage, branches, bark and non-utilisable timber explain the continued nutrient supply (Saarsalmi et al. 2010). Other trials at Usutu have shown that the retention of organic matter plays an important role in nutrient cycling, especially of P (Crous et al. 2007b, 2008, 2011a).
Germishuizen (1979) reported that site C9 (Q-set) was more productive than A9 (M-set) and D5 (Q/T-set), by 7 and 31% respectively in the F1R and by 18 and 44% respectively in the F2R plot treatments with no fertiliser application. In the S2R, only the Q-set site was more productive, by 17 and 62% relative to the M-set and Q/T-set sites, whereas in the S3R the M-set soils were more productive, by 2 and 40% relative to the Q-set and Q/T-set sites respectively. According to Germishuizen (1979), the site with the lowest productivity, the Q/T-set site, was located on a ridge top, and was thus exposed and subjected to winds, and also had a high stone content. Stony soils are known to have reduced nutrient- and moisture-storage ability (Macadam 1989).
The cultivation treatment applied in the F1R had a significant effect on tree volume at age 10 years, while in the F2R cultivation was significant only at age 4 years and showed no difference from the control at ages 6 and 10 years. This corresponds with all the initial silvicultural land-preparation trials, which showed that cultivation was beneficial during afforestation (planting of grassveld) but had no effect at re-establishment. According to du Toit et al. (2010), surface ploughing for the establishment of the F1R (grassland) sites resulted in improved productivity because of nutrient mineralisation and the physical break-down of the dense root mat normally formed on the grassveld. Ripping or subsoiling after the F1R showed no significant improvement in site productivity (du Toit et al. 2010).
The application of lime resulted in no significant difference in productivity between the control (slash retained) and lime treatments, similar to the findings of Wall (2012). This was observed across all three sites (A9, C9, D5) of the second trial series (S2R and S3R) at ages 7 and 9 years. In other studies, it has been demonstrated that lime increases biological activity (Baath and Arnebrant 1994), which helps to further break down and release nutrients from humus (Marschner and Wilczynski 1991). This effect might have released nutrients in abundance, which could have resulted in greater losses through leaching shortly after establishment. Furthermore, lime can have negative effects on fine-root development, fine-root distribution and mycorrhizal root-tip development (Persson and Ahlström 1990), often resulting in a negative tree growth response (Wall 2012).
By the age of 10 years, fertiliser application across all sites had increased volume production by 28 m3 ha−1 (14%) and 29 m3 ha−1 (15%) in the first trial series on the F1R and F2R sites respectively. In the second trial series, the increase in volume production at 9 years due to fertiliser application was 30 m3 ha−1 (18%) and 17 m3 ha−1 (10%) on the S2R and S3R sites respectively. Likewise, Herbert and Schönau (1989) reported 30 and 58% productivity improvements for P. patula at ages 6 and 8 years, while Donald and Glen (1974) reported improvements of 12.9 and 11.5% for Pinus radiata D.Don at ages 8 and 10 years after fertiliser application. The application of fertiliser resulted in 11% and 15% increases in the needle P and K levels respectively in the F1R, and 13% increases in both in the F2R, while it reduced the Mg level by 17% at the F2R sites. In other studies, P fertiliser application was shown to significantly (P < 0.08; Crous et al. 2007b) increase the P concentration in the needles (Donald and Glen 1974). Needle P and K concentrations were also highly correlated with each other, which made it difficult to determine the relative importance of these two nutrients. The needle nutrient content could account for only 30% of the variation in standing volume over the six treatments and 10 sites. This indicates that foliar analysis has its limitations and might not be an efficient management tool for providing fertiliser prescriptions when viewed alone (Morris: Trial series B43: Site preparation and fertilizer application at time of planting, part 1: Influence on volume increment, Unpublished).
In the first trial series F1R, the response to fertiliser was highest at the N-set (E2) site, with 44 m3 ha−1 more productivity (a 27% gain), followed by the M-set (A9) site with a 32 m3 ha−1 (14%) gain. The greatest response to fertiliser application occurred at the site located on the M-set soil in the F2R, and also in the S2R and S3R, with volume increases of 54 m3 ha−1 (28%), 53 m3 ha−1 (30%) and 32 m3 ha−1 (16%) respectively. It is worth noting that the M-set site was located on gabbro parent material.
A growth decline over rotations was previously observed by Evans (1978) and was later shown to be due to developing deficiencies of both P and K (Morris: Soil fertility and long term productivity of Pinus patula in the Usutu Forest Swaziland, Unpublished). Therefore, the site by fertiliser interaction on the M-set soil can possibly be explained by the significant increase in volume production when fertiliser was applied to the plots where harvest residue had been removed in the S3R, whilst there were no such interactions in the previous rotations. Similar responses were observed in other regional trials (Morris 2003; Crous et al. 2007a, 2009) and in trials outside Southern Africa (Fox 2000; Devine and Harrington 2007). The maintenance of high productivity under fertilisation indicated that its application caused a sustained change in stand productivity by inducing the trees to develop a root system that can exploit the mineral nutrients and moisture on site to their full potential (Herbert and Schönau 1990). This can be described as a Type-2 response (Snowdon 2002; Egnell 2011) or a Type-A response (Nilsson and Allen 2003).
The productivity loss associated with slash removal increased from 19 m3 ha−1 (9%) in the F1R of the first trial series to 69 m3 ha−1 (33%) in the S3R of the second trial series. This clearly indicates that the removal of slash from the forest floor during site preparation before planting is an unsustainable management option and should be avoided. A high rate of fertiliser applied at planting across all sites increased volume productivity by 28 m3 ha−1 (14%) and 29 m3 ha−1 (15%) at age 10 years in the first trial series on the F1R and F2R sites respectively, compared with the unfertilised treatment. Similarly, in the second trial series at age 9 years, fertiliser application improved productivity by 30 m3 ha−1 (18%) and 17 m3 ha−1 (10%) on the S2R and S3R sites respectively. Therefore, fertiliser application significantly improved volume productivity and maintained high productivity across all sites and rotations, which can be described as a Type-2 response. Lime application had no effect on tree growth when compared with the control (slash retained) treatment and is therefore not recommended for use at the Usutu plantation. In order to sustain or increase productivity, it is recommended that harvest residue (slash) be conserved and that PK fertiliser be applied at planting at Usutu.
M-set: Soils deep red, medium textured; occur on upper and mid-slopes; parent material is intermediate colluvium.
Q-set: Topsoil sandy, overlying poorly structured olive-yellow calcareous sandy clay; occurs in bottomland positions or in drainage lines; parent material is sandstones and shales.
T/Q-set: Dark greyish-brown sandy clay loam which may contain soft and hard iron concretions; occurs on mid-slopes where drainage is impeded; parent material is dolerite and basalt.
N-set: Soils brown, sandy loam; occur on gentle lower slopes adjacent to river terraces; parent material is either basalt or dolerite (ancient alluvium).
CH: Conventional harvesting
DBH: Diameter at breast height
LTSP: Long-term site productivity
QmDBH: Quadratic mean DBH
SOC: Soil organic carbon
Akhtaruzzaman, M, Haque, M, Osman, K. (2014). Morphological, physical and chemical characteristics of hill forest soils at Chittagong University, Bangladesh. Open Journal of Soil Science, 4, 26–35.
Baath, E, & Arnebrant, K. (1994). Growth rate and response of bacterial communities to pH in limed and ash treated forest soils.
Journal of Soil Biology and Biochemistry, 26(8), 995–1001. Bredenkamp, B (2000). Volume and mass of logs and standing trees. Section 4.5. In DL Owen (Ed.), SAIF forestry handbook, (pp. 167–174). Pinetown: Southern African Institute of Forestry. Carter, MC, & Foster, DC. (2004). Prescribed burning and productivity in southern pine forests: A review. Journal of Forest Ecology & Management, 191, 93–109. Chen, GX, Yu, KW, Liao, LP, Xu, GS. (2000). Effect of human activities on forest ecosystems: N cycle and soil fertility. Nutrient Cycling in Agroecosystems, 57, 47–54. Cortini, F, Comeau, FG, Boateng, JO, Bedford, L. (2010). Yield implications of site preparation treatments for lodgepole pine and white spruce in northern British Columbia. Forests, 1, 25–48. Crous, J, Morris, A, Scholes, M. (2007a). The significance of residual phosphorus and potassium fertiliser in countering yield decline in a fourth rotation of Pinus patula in Swaziland. Southern Hemisphere Forestry Journal, 69, 1–8. Crous, J, Morris, A, Scholes, M. (2007b). Effects of residual phosphorus and potassium fertiliser on organic matter and soil nutrients in a Pinus patula plantation. Journal of Australian Forestry, 70, 200–208. Crous, J, Morris, A, Scholes, M. (2008). Growth and foliar nutrient response to recent applications of phosphorus (P) and potassium (K) and to residual P and K fertiliser applied to the previous rotation of Pinus patula at Usutu, Swaziland. Journal of Forest Ecology and Management, 256, 712–721. Crous, J, Morris, A, Scholes, M. (2009). Effect of phosphorus and potassium fertiliser on tree growth and dry timber production of Pinus patula on gabbro-derived soils in Swaziland. Journal of Southern Forests, 71, 235–243. Crous, J, Morris, A, Scholes, M. (2011a). Changes in topsoil, standing litter and tree nutrient content of a Pinus patula plantation after phosphorus and potassium fertilization. European Journal of Forest Research, 130, 277–292. Crous, J, Morris, A, Scholes, MC. (2011b). Investigating the utilization of potassium fertilizer in a Pinus patula Schiede ex Schltdl. Cham. plantation. Journal of Forest Science, 57, 222–231. Curtis, RO, & Marshall, DD. (2000). Why quadratic mean diameter? Journal of Western Applied Forestry, 15(3), 137–139. Devine, WD, & Harrington, CA. (2007). Influence of harvest residues and vegetation on microsite soil and air temperatures in a young conifer plantation. Journal of Agriculture Forestry and Meteorology, 145, 125–138. Donald, DGM, & Glen, LM. (1974). The response of Pinus radiata and Pinus pinaster to N, P and K fertilizers applied at planting. South African Forestry Journal, 91(1), 19–28. Dovey, SB, du Toit, B, de Clercq, W. (2011). Nutrient fluxes in rainfall, throughfall and stemflow in eucalyptus stands on the Zululand coastal plain, South Africa. Southern Forests, 73, 193–206. du Toit, B, & Scholes, MC. (2002). Nutritional sustainability of eucalyptus plantations: A case study at Karkloof, South Africa. Southern African Forestry Journal, 195, 63–72. du Toit, B, Smith, C, Carlson, C, Esprey, L, Allen, R, Little, K (1999). Eucalypt and pine plantations in South Africa, Workshop proceedings, Site Management and Productivity in Tropical Plantation Forests (pp. 23–30). Bogor: Indonesia Center for International Forestry Research. du Toit, B, Smith, CW, Little, KM, Boreham, G, Pallett, RN. (2010). Intensive, site-specific silviculture: Manipulating resource availability at establishment for improved stand productivity. A review of South African research. 
Forest Ecology and Management, 259, 1836–1845. Egnell, G. (2011). Is the productivity decline in Norway spruce following whole-tree harvesting in the final felling in boreal Sweden permanent or temporary? Forest Ecology and Management, 261, 148–153. Egnell, G, & Leijon, B. (1999). Survival and growth of planted seedlings of Pinus sylvestris and Picea abies after different levels of biomass removal in clear-felling. Scandinavian Journal of Forestry Resources, 14, 303–311. Egnell, G, & Valinger, E. (2003). Survival, growth and growth allocation of planted scots pine trees after different levels of biomass removal in clear-felling. Forest Ecology and Management, 177, 65–74. Eisenbies, MH, Vance, ED, Aust, WM, Seiler, JR. (2009). Intensive utilization of harvest residues in southern pine plantations: Quantities available and implications for nutrient budgets and sustainable site productivity. Bioenergy Research, 2, 90–98. Evans, J. (1975). Two rotations of Pinus patula in the Usutu forest, Swaziland. Commonwealth Forestry Review, 54(1), 69–81. Evans, J. (1978). A further report on second rotation productivity in the Usutu Forest, Swaziland- results of the 1977 assessment. Commonwealth Forestry Review, 57, 253–261. Evans, J. (1996). The sustainability of wood production from plantations: Evidence over three successive rotations in the Usutu Forest, Swaziland. Commonwealth Forestry Review, 75(3), 234–239. Evans, J. (1998). The suitability of wood production in plantation forestry. Unasylva, 192(49), 47–52. Evans, J. (1999). Sustainability of plantation forestry: Impact of species change and successive rotations of pine in the Usutu Forest, Swaziland. Southern African Forestry Journal, 184, 63–70. Fox, TR. (2000). Sustained productivity in intensively managed forest plantations. Journal of Forest Ecology and Management, 138(1), 187–202. Germishuizen, PJ. (1979). The re-establishment of Pinus patula Schlecht. & Cham. On a pulpwood rotation in Swaziland. MSc Thesis, University of Stellenbosch. Herbert, MA, & Schönau, APG. (1989). Fertilising commercial forest species in southern Africa: Research progress and problems (part 1). South African Forestry Journal, 151(1), 58–70. Hernández, J, del Pino, A, Salvo, L, Arrarte, G. (2009). Nutrient export and harvest residue decomposition patterns of a Eucalyptus dunnii maiden plantation in temperate climate of Uruguay. Forest Ecology and Management, 258, 92–99. Hyvönen, R, Olssona, BA, Lundkvista, H, Staaf, H. (2000). Decomposition and nutrient release from Picea abies (L.) Karst. and Pinus sylvestris L. logging residues. Forest Ecology and Management, 126, 97–112. Jokela, EJ, Allen, HL, McFee, WW. (1991). Fertilization of southern pines at establishment [chapter 14]. In ML Duryea, PM Dougherty (Eds.), Forest regeneration manual. Forestry sciences book series, 36, (pp. 263–277). Dordrecht: Springer. Jokela, EJ, & Long, AJ. (2015). Using soils to guide fertilizer recommendations for southern pines. [circular 1230]. Gainesville: University of Florida, Institute of Food and Agricultural Sciences (IFAS) Extension Service. Jokela, EJ, Martin, TA, Vogel, JG. (2010). Twenty-five years of intensive forest management with southern pines: Important lessons learned. Journal of Forestry, 108, 338–347. Laclau, J, Deleporte, P, Ranger, J, Bouillet, J, Kazotti, G. (2003). Nutrient dynamics throughout the rotation of eucalyptus clonal stands in Congo. Annals of Botany, 91, 879–892. 
Laclau, J, Levillain, J, Deleporte, P, Nzila, JDD, Bouillet, J, Saint André, L, Versini, A, Mareschal, L, Nouvellon, Y, Thongo, MA, Ranger, J. (2010). Organic residue mass at planting is an excellent predictor of tree growth in Eucalyptus plantations established on a sandy tropical soil. Journal of Forest Ecology and Management, 260, 2148–2159. Lundkvist, H. (1983). Effects of clear-cutting on the enchytraeids in a scots pine forest soil in Central Sweden. Journal of Applied Ecology, 20, 873–885. Macadam, A. (1989). Effects of prescribed fire on forest soils., [research report 89001-PR]. Victoria: Ministry of Forests and Land. https://www.for.gov.bc.ca/hfd/pubs/docs/Rr/R89001pr.pdf. Mälkönen, E. (1976). Effect of whole-tree harvesting on soil fertility. Silva Fennica, 10(3), 157–164. Marschner, B, & Wilczynski, AB. (1991). The effect of liming on quantity and chemical composition of soil organic matter in a pine forest in berlin, Germany. Journal of Plant and Soil, 137(2), 229–236. Morris, AR. (1995). Forest floor accumulation, nutrition and productivity of Pinus patula in the Usutu Forest, Swaziland. Journal of Plant and Soil, 168(1), 271–278. Morris, AR. (2003). Site and stand age effects on fertiliser responses in Pinus patula pulpwood plantations in Swaziland. Southern African Forestry Journal, 199, 27–39. Murdoch, G. (1968). Soils and land capability in Swaziland. Ministry of Agriculture bulletin 23–25. Soil map of Swaziland. National Soil Reconnaissance 1963–1967. Nilsson, U, & Allen, HL. (2003). Short and long-term effects of site preparation, fertilization and vegetation control on growth and stand development of planted loblolly pine. Journal of Forest Ecology and Management, 175, 367–377. Nixon, DJ. (2006) Guide to the soils of the Swaziland sugarcane industry: A correlation of the Swaziland soil classification system with the south African binomial system. http://www.wossac.com/downloads/19837_Soils_of_the_Swaziand_Sugar_Industry.pdf. Accessed 8 Mar 2018. Olbrich, K, Christie, SI, Evans, J, Everard, D, Olbrich, B, Scholes, RJ. (1997). Factors influencing the long term sustainability of the south African Forest industry. Southern African Forestry Journal, 178, 53–71. Olsson, BA, Bengtsson, J, Lundkvist, H. (1996). Effects of different forest harvest intensities on the pools of exchangeable cations in coniferous forest soils. Forest Ecology and Management, 4, 135–147. Palviainen, M, Fine, L, Laiho, R, Shorohova, E, Kapitsa, E, Vanha-Majamaa, I. (2010). Carbon and nitrogen release from decomposing scots pine, Norway spruce and silver birch stumps. Forest Ecology and Management, 259, 390–398. Perez-Batallon, P, Ouro, G, Macias, F, Merino, A. (2001). Initial mineralization of organic matter in a forest plantation soil following different logging residue management techniques. Annals of Forest Science, 58, 807–818. Persson, H, & Ahlström, K. (1990). The effects of forest liming on fertilization on fine-root growth. Journal of Water, Air, and Soil Pollution, 54(1), 365–375. Raison, RJ, Khanna, PK, & Crane, WJB. (1982). Effects of intensified harvesting on rates of nitrogen and phosphorus removal from Pinus radiata and Eucalyptus forests in Australia and New Zealand. New Zealand Journal of Forestry Science, 12(2), 394–403. Rose, CE, & Shiver, BD. (2000). A comparison of first and second rotation dominant and codominant heights for Flatwoods slash pine plantations. Plantation Management Research Cooperative, 2, 15–23. Saarsalmi, A, Tamminen, P, Kukkola, M, Hautajärvi, R. (2010). 
Whole-tree harvesting at clear-felling: Impact on soil chemistry, needle nutrient concentrations and growth of scots pine. Scandinavian Journal of Forest Research, 25, 148–156. Sayer, EJ. (2006). Using experimental manipulation to assess the roles of leaf litter in the functioning of forest ecosystems. Biological Reviews, 81, 1–31. Scholes, MC. (2002). Biological processes as indicators of sustainable plantation forestry. Southern African Forestry Journal, 195, 57–62. Scott, DA, & Bliss, CM. (2012). Phosphorus fertilizer rate, soil P availability, and long-term growth response in a loblolly pine plantation on a weathered ultisol. Forests, 3, 1071–1085. Simpson, JA, Xu, ZH, Smith, T, Keay, P, Osborne, DO, Podberscek, M. (2000). Effects of site management in pine plantations on the coastal lowlands of subtropical Queensland, Australia. Centre for International Forestry Research, 9, 73–81. Skinner, MF, Zabowski, D, Harrison, R, Lowe, A, Xue, D. (2001). Measuring the cation exchange capacity of forest soils. Communications in Soil Science and Plant Analysis, 32, 1751–1764. Smaill, SJ, Clinton, PW, & Greenfield, LG. (2008). Postharvest organic matter removal effects on FH layer and mineral soil characteristics in four New Zealand Pinus radiata plantations. Forest Ecology and Management, 256, 558–563. Snowdon, P. (2002). Modeling type 1 and type 2 growth responses in plantations after application of fertilizer or other silvicultural treatments. Journal of Forest Ecology and Management, 163, 229–244. Soil Survey Staff (2006). Keys to soil taxonomy. 10 th ed. United States, Department of Agriculture, (p. 341). Washington, DC: U.S. Government Printing Office. Tiarks, A, Klepzig, K, Sanchez, F, Lih, M, Powell, J, Buford, M. (1999). Role of coarse woody debris in the loblolly pine ecosystem. Biennial Southern Silvicultural Research Conference, 16(18), 238–242. Titshall, L, Dovey, S, Rietz, D. (2013). A review of management impacts on the soil productivity of South African commercial forestry plantations and the implications for multiple-rotation productivity. Journal of Southern Forests, 75(4), 169–183. Wall, A. (2012). Risk analysis of effects of whole-tree harvesting on site productivity. Forest Ecology and Management, 282, 175–184. Wall, A, & Hytönen, J. (2011). The long-term effects of logging residue removal on forest floor nutrient capital, foliar chemistry and growth of a Norway spruce stand. Journal of Biomass Bioenergy, 35, 3328–3334. Weber, MG, Methven, IR, & Van Wagner, CE. (1985). The effect of forest floor manipulation on nitrogen status and tree growth in an eastern Ontario jack pine ecosystem. Canadian Journal of Forest Research, 15, 313–318. Woods, RV. (1990). Second rotation decline in P. radiata plantations in South Australia has been corrected. Journal of Water, Air and Soil Pollution, 54, 607–619. The authors would like to thank SAFCOL for the financial support through the University of Pretoria. The management of Sappi Usutu granted access to the trial data. Additional thanks to Mr. S Khoza for assistance in providing all the required data and information. The South African Forest Company Limited funded my tuitions and cost involved from data collection to analysis, which covered travel, meals and accommodation. Please contact author to link you with Sappi Usutu the custodians of data. Department of Plant Production and Soil Science, University of Pretoria, Private bag X20 Hatfield, Pretoria, 0028, South Africa Lindani Z. 
Mavimbela
Sappi Forests, PO Box 473, Howick, 3290, South Africa: Jacob W. Crous
Institute for Commercial Forestry Research, PO Box 100281, Scottsville, 3209, South Africa: Andrew R. Morris
Paxie W. Chirwa
LZM cleaned and availed the data for analysis and wrote the manuscript. JWC coordinated field monitoring and oversaw the data analysis process. ARM contributed to the research design and data collection, along with technical assistance. PWC facilitated and coordinated the process of acquiring the data, the analysis partnership, and the writing of the paper. All authors read and approved the final manuscript.
Correspondence to Lindani Z. Mavimbela.
LZM: MSc holder and currently a PhD candidate at the University of KwaZulu-Natal. JWC: PhD holder, working as Programme Leader - Land Management, Sappi Forests. ARM: PhD holder, working as Acting Director and Research Manager, Institute for Commercial Forestry Research. PWC: PhD holder, Professor at the University of Pretoria in the Forest Science Postgraduate Programme, and the SAFCOL Forest Chair.
Mavimbela, L.Z., Crous, J.W., Morris, A.R. et al. The importance of harvest residue and fertiliser on productivity of Pinus patula across various sites in their first, second and third rotations, at Usutu Swaziland. N.Z. j. of For. Sci. 48, 5 (2018). doi:10.1186/s40490-018-0110-1
Accepted: 20 March 2018
Keywords: Slash removal; Slash retention; Type-2 response
How would someone destroy a dam in a world without explosives?
I'm considering the mechanics of a "shock and awe" scene in my novel which could see an old dam being brought down. The era is equivalent to that of the middle Roman empire, so introducing explosives would be far too convenient and would require altering previous work for the simple logic of conflicts. What was available, or has been used in the past, to undermine the foundations of large structures in minimal time? It's undecided at present how deep the water is on the other side of the dam, and what the construction materials are. I'm most likely to use the Roman style of architecture (reference).
Tags: science-based, architecture, ancient-history, water-bodies, explosions
– Brereton
A catapult could be used; catapults were used (though not by the Roman empire) to demolish the walls of castles. – MolbOrg Apr 23 '17 at 0:42
If you dig a trench around the edge of the dam, the water which begins to flow through the trench will erode that space until the water runs out. It wouldn't necessarily destroy the dam, but it would compromise its purpose. And that is assuming that the dam is built in a canyon and the walls of the canyon can be quickly eroded by flowing water. – Nolo Apr 23 '17 at 4:08
The first two articles mention mining under the wall. The people that do that are called sappers. en.wikipedia.org/wiki/Sapper – Martin York Apr 23 '17 at 4:20
You need a group of enraged Ents. – RedSonja Apr 23 '17 at 17:50
youtube.com/watch?v=VipVo8zPH0U is how we do it today without explosives. – Bald Bantha Apr 23 '17 at 18:22
Forts using stone, rock and earth walls as fortifications (and in some cases, still standing!) were often attacked during sieges. Such a wall has much in common with a dam. To breach such a wall you tunnel underneath. It's a well established technique. You dig a tunnel, using normal techniques to prop and seam your tunnel. Then, when you've dug enough, you set fire to the supporting structures (or otherwise destroy them) and, the attacker hopes, the subsidence will collapse the wall or (for a siege) damage or weaken it. I gather the tunnels were often (always?) lined or filled with materials that would burn for an extended period, to further weaken the wall above using heat. I see no reason the same principles would not apply to a dam.
– StephenG
From this technique comes the word "to undermine", originally meaning "to make a passage or mine under (a wall, etc.), esp. as a military operation", and now more commonly "to weaken, injure, destroy or ruin, surreptitiously or insidiously" (definitions from the OED). – AlexP Apr 23 '17 at 1:13
That kind of approach is extra effective, because once you start the water flowing, it will very quickly erode the dam and result in a complete failure. Have a look at the Teton Dam, in which a small leak quickly resulted in a complete and catastrophic failure and many deaths. – Fake Name Apr 23 '17 at 3:10
Would the presence of the dammed water make digging the tunnel either impossible or exceedingly dangerous? – Rich Holton Apr 24 '17 at 1:51
Danger? No problem: send down enemy POWs to do the digging... – Baldrickk Apr 24 '17 at 9:41
@rich-holton: Digging a tunnel is always dangerous, but a dam, by its nature, tends to be built with a water-proof or water-resistant layer that would make the danger of water (during undermining) no more than with normal groundwater, I'd guesstimate. I'd have thought that in these ancient techniques ventilation (i.e. the lack of it) was the biggest problem. – StephenG Apr 24 '17 at 10:06
A more devious scheme to bring a dam down: a secret second dam above the main target dam. You simply install a second dam above the main dam (as high as possible, ideally). You reduce the flow inconspicuously so that the crew manning the main dam do not see anything unusual, but slowly the reservoir of your second dam fills. Now you wait for the perfect time, when the second-dam reservoir is filled and the main target dam is nearly full. If the main dam needs only some filling, you can increase the flow from the second dam. Your attack commences with destroying the second dam, which is purposefully built for being brought down (you have some pivot support columns which are disengaged). The water rushes down and gets faster and faster, converting the stored potential energy from the higher starting point into kinetic energy. Friction and obstacles will slow the water masses down to a point, but the flow will still be very fast. When the water reaches the main dam, an effect called the water hammer comes into play. The main dam does not allow the moving water masses to continue running, so the moving water causes a sudden pressure increase. The incoming water not only causes water to slosh over, it literally pushes the dam crest apart. Result: catastrophic failure.
ADDITION: The original question states that we are at the technological level of the early Roman era, so we should expect neither a hydro dam like the Hoover nor a reservoir for a city of a million people. It will be more like a dam with a height of the order of metres and a reservoir like a big lake. Still, we can compare dynamic with static pressure. The dam needs to withstand the static pressure, so we can assume we need a pressure of approximately the same order of magnitude to break the dam.
\begin{eqnarray*}
\rho & = & \text{density} \; (\mathrm{kg\,m^{-3}}) \\
g & = & \text{gravitational acceleration} = 9.81 \approx 10 \; \mathrm{m\,s^{-2}} \\
h & = & \text{height} \; (\mathrm{m}) \\
v & = & \text{velocity} \; (\mathrm{m\,s^{-1}}) \\
\text{mean static pressure} & = & \tfrac{1}{2}\,\rho\,g\,h \quad \text{(the dam holds this pressure)} \\
\text{dynamic pressure} & = & \tfrac{1}{2}\,\rho\,v^{2} \;\Rightarrow\; v \approx \sqrt{10\,h}
\end{eqnarray*}
A moderate flash-flood velocity is 2.6 m/s and a very fast flash flood is in the range of 26 m/s. A moderate flash flood will be held by a 0.6 m dam; a worst-case scenario of 26 m/s would give an impressive height of 70 m. But the flash-flood water will merge with the still water in the reservoir, so an inelastic collision will occur and the water slows down considerably. So the final velocity of the water will be the ratio
$$ r = \frac{\text{flash flood mass}}{\text{reservoir mass} + \text{flash flood mass}} $$
of the flash-flood speed (I have also neglected friction and energy dissipation by waves).
Result: If the dam is something like 10 m high and the reservoir is big (10-100 times), even the ugliest flash flood will have no pressure effect. Moderate flash floods can be contained even with small dams. On the other hand, if the dam is only a few metres high and the reservoir does not have a much bigger capacity (10 times) than the incoming water, an incoming massive flash flood is able to forcibly remove the dam.
– Thorsten S.
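As a quick numerical check of the comparison made in this answer, the sketch below converts a flash-flood velocity into the dam height whose mean static pressure matches the flood's dynamic pressure (the density cancels out), and then applies the answer's dilution ratio r; the 10:1 reservoir-to-flood mass ratio in the last line is an illustrative assumption, as in the answer.

```python
# Dynamic pressure of the incoming flood vs. the mean static pressure a dam holds.
G = 9.81  # gravitational acceleration in m/s^2 (the answer rounds this to 10)

def equivalent_head_m(flood_velocity_ms):
    """Dam height whose mean static pressure (1/2 rho g h) equals the flood's
    dynamic pressure (1/2 rho v^2); rho cancels, leaving h = v^2 / g."""
    return flood_velocity_ms ** 2 / G

def diluted_velocity_ms(flood_velocity_ms, flood_mass, reservoir_mass):
    """Inelastic mixing with the still reservoir scales the speed by r = m_f / (m_f + m_r)."""
    return flood_velocity_ms * flood_mass / (flood_mass + reservoir_mass)

print(round(equivalent_head_m(2.6), 2))   # ~0.69 m: a moderate flash flood
print(round(equivalent_head_m(26.0), 1))  # ~68.9 m: the extreme case quoted as roughly 70 m
# With a reservoir holding 10 times the flood's mass, even the extreme flood is tamed:
print(round(equivalent_head_m(diluted_velocity_ms(26.0, 1.0, 10.0)), 2))  # ~0.57 m
```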
This is so silly it's good. "Build a secret dam." – Nit Apr 23 '17 at 21:55
@Nit Ha, ha, a secret dam. You mean something which is quite massive and could be there undiscovered for 30 years? How ridiculous, ha ha ha... – Thorsten S. Apr 23 '17 at 22:22
You're not going to get a water hammer effect: the main dam's reservoir will dissipate the force of the incoming water. What will happen is that the surge of excess water will overtop and erode the dam, causing it to fail. – Mark Apr 23 '17 at 22:55
Several problems with this. First: the beaver dam example is nice but you can't rely on it. That dam's some 200 km inside an uninhabited forest; yours must be relatively close to the first dam, which would have people (for maintenance, fishing, bathing) going there day in, day out. Second: it's not just "a secret dam". Its construction would need roads, carriages, a lot of workers, and probably a lot of noise (hammer and chisel is a very distinct sound) to also be secret. Third: the water hammer effect only applies to closed systems. Open dams like the Romans' would just overflow. – walen Apr 24 '17 at 7:13
The fluid hammer effect does not apply to rivers and dams. However, the plan will still work, because flooding and uncontrolled overflow is the greatest threat to an otherwise "healthy" and well-built dam. – MichaelK Apr 24 '17 at 11:27
Roman-type dams were usually buttressed rather than relying purely on their weight to hold the water back; break away the supports and they will fail. Earth dams just need a single point of failure to be induced, and the weight of the water will do the rest in short order. Artillery targeting the non-water side of either type of dam should be enough to make it fail fairly quickly; such dams are not designed to withstand that sort of stress. The Romans were excellent engineers: they would quickly recognise the weakness of a dam and either pull away the supports or dig out a point of failure.
– Kilisi
The highest Roman dam at Subiaco (50 meters tall, built under Nero in the 1st century) is said to have been destroyed accidentally in 1305, when "two monks took stones from the wall, because they wanted to lower the water level, presumably in order to make the water further from their fields; the wall no longer withstood the weight of the water; apparently, a breach appeared and grew ever larger, until the wall finally gave way" (Wikipedia). – AlexP Apr 23 '17 at 1:28
Interestingly, the Subiaco Wikipedia article linked above starts by telling us these were gravity dams. This raises a point: the engineers would not have wanted their dam to collapse in a "shock and awe" fashion. So any dam holding enough water to create the effect would be a gravity dam that is torn apart by water gradually, and any dam that might collapse catastrophically would hold too little water. Of course, for dramatic effect you can assume the engineers were not up to Roman standards, but people building large dams do tend to take the job seriously. – Ville Niemi Apr 23 '17 at 13:02
@VilleNiemi: Well, Wikipedia says that it was made of "masonry", and we must remember that it had stood for more than 12 centuries, of which at least eight without any maintenance... – AlexP Apr 23 '17 at 16:12
@AlexP Not sure about your point. Mine was that a large dam like Subiaco was designed in a way that made it resistant to catastrophic collapse, and indeed even after centuries of neglect it failed gradually, just like the people designing it wanted it to. Which is not "shock and awe" inducing. – Ville Niemi Apr 24 '17 at 17:50
You can of course excavate under the dam and have it crash for lack of support ("undermining"). This was the routine siege attack against walls, and the crash would be quite abrupt. Its feasibility depends on the strength of the foundations and the ease of tunneling through the rock. Romans were quite proficient at that. Depending on the situation, you could perhaps use a malvoisin, a high structure built near the dam. Get a heavy stone or iron ball secured to a chain hinged on the dam, raise it with pulleys on the malvoisin, let it fall down and impact the dam. Repeat as needed. Essentially you have built yourself a wrecking ball.
– LSerni
Make a hole in it. Once there is water flow, the hole will expand. If the hole is low in the dam, the structure above it will collapse. But even if it is at the top of the dam, it will work its way down over time. You might ask if dams don't already let water through. Sure, this is called a spillway. But spillways receive extra reinforcement so as not to compromise the overall structure. In other words, they are deliberately designed not to wear when water runs through them. If you choose a random portion of the dam (or the ground beside the dam), it won't have that reinforcement. The hole will expand.
Making the hole can be as simple as using a pick and shovel. Dams were often simply big earthworks then. You could dig through them. A dam made of mortared stone would be more difficult. You might find it easier to dig near the dam; that would often still be regular earth. The dam might also be buttressed. Then you could remove the buttress by digging around it and removing its support. A buttress near the center of the dam will likely cause the most strain when removed. Note that the water is part of the dam's support. As you remove the water and relieve the pressure, the dam may collapse into the water. This is the shock and awe moment, when a small breach turns into total collapse. This is most likely with a buttressed dam.
The reason to do this rather than undermine the dam is that undermining requires more digging. Not only do you have to dig through the thickest part of the dam, but you also have to dig down to get there. And with a dam, it's unlikely that you'd be dodging defenders, unlike a wall, where defenders stand on top and throw things at you. So there's less utility in starting a tunnel away from the wall. Most dam construction in Roman times will be such that it is as easy to dig through the dam as under it. With a mortared stone dam, look to removing its support; this may be easier on the sides than underneath. Let the water do the hard work. You just need to give it a chance to start.
– Brythan
Re "... spillways receive extra reinforcement so as not to compromise the overall structure": Oroville. And dams are often just big earthworks now. Oroville again. – jamesqf Apr 23 '17 at 4:09
And dams are often just big earthworks now. Oroville again. $\endgroup$ – jamesqf Apr 23 '17 at 4:09 $\begingroup$ I believe this is how dams often fail in real life and that this is known as piping failure community.dur.ac.uk/~des0www4/cal/dams/foun/seep4.htm $\endgroup$ – K-Feldspar Apr 26 '17 at 9:27 Speaking of tunnels... https://en.wikipedia.org/wiki/Ruina_montium - an ancient Roman mining technique that draws on the principle of Pascal's barrel. Miners would excavate narrow cavities down into a mountain, whereby filling the cavities with water would cause pressures large enough to fragment thick rock walls. Mine a tunnel to the center of the dam, as long as possible so that it contains a big quantity of air. Once the tunnel is ready, flood it as fast as possible. The trapped air builds up pressure as the water pushes in, and once it exceeds the tunnel walls' resistance it escapes, breaking the tunnel walls, effectively acting as the compressed-air blast of an explosive's shockwave. Only no explosives are needed, just water and air... and look, there's a big dam with water just behind you! How convenient. Additionally, effects from hydrostatic pressure (the higher the column of water above, the greater the force it exerts against the walls) add up for extra effect. What the air doesn't blast away, the water force will. Dan Fernandez According to tradition, when Hannibal needed to clear boulders off Alpine paths, he used fire-setting; build a large fire against the rockface, and when it was properly heated, throw a large quantity of cold water (or vinegar, for the acid) on it to cause cracks through thermal shock. This would work well against stone or masonry dams; it's not itself very destructive, but a thin crack all the way through would very quickly widen into a breach. An added bonus is that acetic acid would presumably react with, and weaken, the lime mortar. Tim Lymington $\begingroup$ Plus, you'd get that refreshing effervescent flow. I could see a great 80s-era TV spot out of this approach. $\endgroup$ – The Nate Apr 24 '17 at 15:02 $\begingroup$ Depending on the thickness of the dam, this is hindered by the heat transference to the rather large mass of water on the other side. Hannibal didn't have to boil a reservoir just to crack a single rock. $\endgroup$ – Ruscal Apr 25 '17 at 16:37 If water could penetrate cavities of the structure and freeze there, that would generate large pressure (because water expands when it freezes) that can break rocks. This process is called ice wedging and it occurs in nature. Maxim Umansky Roman concrete used lime mortar, which is strongly alkaline. I don't know if they used it in dams, but it's plausible they might, so it's not too far a guess. Alkalis react with acids, which is why modern concrete and cement also come in sulphate (i.e., SO4 ion) resisting versions. Carbonic acid is also an issue. Acids were well within Roman technology, although I'm not sure of the practicalities. But if you could weaken a critical point with acid, or even by just gradually acidifying the water in some static part of the reservoir in contact with the critical mortar, perhaps it would gradually weaken the dam. As a twist, maybe a tunnel would allow the sub-surface part of the dam foundation to be attacked, removing critical support invisibly, and without having to do it slowly to avoid detection, until suddenly......? Stilez $\begingroup$ fondriest.com/environmental-measurements/parameters/… Acid can come from pretty much anything.
Probably the quickest available would be diverting a source of water through a coal deposit, or, for a slower method, you just need a source of carbon (i.e. trees); the link above recommends pine or fir needles. $\endgroup$ – Scott Apr 24 '17 at 3:30 $\begingroup$ Like Hannibal crossing the Alps. $\endgroup$ – fectin Apr 24 '17 at 5:11 $\begingroup$ Romans used concrete and water-resistant mortars quite extensively - notably in the reservoirs, some of which are still functioning today. If memory serves me, there is evidence they could even flood the Colosseum - they were the experts in controlling water in that period. $\endgroup$ – StephenG Apr 24 '17 at 10:11 @LSerni is probably right about how the Romans would really do this. If you want some alternate technology within the grasp of the Romans, you could destroy it by shaking it at its resonant frequency. Like buildings, dams have resonant frequencies, and these are of interest mostly to prevent destruction by earthquakes. https://en.wikipedia.org/wiki/Mechanical_resonance Here is an interesting article about a skyscraper that was evacuated because it was shaking. The shaking was caused by 17 people exercising in unison. http://news.blogs.cnn.com/2011/07/19/scientist-tae-bo-workout-sent-skyscraper-shaking/ Additional reading: Tesla's oscillator, or "earthquake machine", which he claimed could bring down the Empire State Building. https://en.wikipedia.org/wiki/Tesla%27s_oscillator I could not find a report of a dam which actually collapsed because of this phenomenon. But as far as shocking awesomeness goes, using vibrations to destroy a structure should qualify. You could have people atop it stomping in unison, faster or slower as the maestro directs. Could one affix a big piece of metal to a rigid structure and cause the structure to resonate by drawing a bow across the metal? $\begingroup$ The walls of Jericho may have come down this way. $\endgroup$ – KalleMP Apr 25 '17 at 7:45 My vote goes towards piercing/hammering: either with the already-stated wrecking ball (malvoisin) or with flotsam (in autumn/spring when flow is high, throw a large, somewhat pointed trunk in the river, upstream of the dam...). If it has buttresses, undermine these (possibly in a similar way: chop down the largest tree nearby to fall away from the buttress, but after tying a rope from tree top to buttress -- rope slack on the water, to be inconspicuous, so that it suddenly tenses as the falling tree reaches maximum velocity). If you can build your secret dam upstream, even better --- use it to give the flotsam additional speed. This can be part of the plot: the dam(ned) engineers know of the 'secret' dam, and have calculated it to be ineffectual; so it's monitored to an extent and they're very prepared for 'attack day' --- however, they're WRONGLY prepared, as in their overconfidence they've overlooked the other legs of the plan: the quickly-tied rope-to-buttress-on-falling-tree, and/or the flotsam-missile. (The flotsam-tree and tree-pulling both have the advantages of being normal jobs, as timber is the main building material -- hence inconspicuous -- and quick compared to the secret-dam-building -- hence the full attack isn't understood until too late to defend, and it may be difficult to get the people prepared to counterattack (=heavily armed) to adjust to civil-defence jobs; plus, maybe they'll sink in the quagmire from their armour etc.) Tunneling under a river doesn't work (you drown, using Roman tech), so castle-demolition analogues won't work I think.
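The Ruina montium answer above and several of the other proposals lean on the same physical fact: hydrostatic pressure grows linearly with depth, so water trapped deep in a flooded shaft, or pressing on the base of a dam, pushes far harder than water near the crest. As a rough, purely illustrative estimate (the depth is an assumed number, not taken from any particular Roman dam), water of density $\rho \approx 1000\ \mathrm{kg/m^3}$ at a depth of $h = 20\ \mathrm{m}$ exerts a pressure
$$P = \rho g h \approx 1000 \times 9.8 \times 20 \approx 2 \times 10^5\ \mathrm{Pa} \approx 2\ \mathrm{atm},$$
i.e. roughly $2 \times 10^5\ \mathrm{N}$ on every square metre of wall at that depth. This is also why a breach low in the structure, as suggested in an earlier answer, does so much more damage than one near the top.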
$\begingroup$ Welcome to worldbuilding, @user3445853 $\endgroup$ – kingledion Apr 24 '17 at 16:22 One possibility - but not sure how to execute it - would be to use steam. Dig a hole, put a sealed metal cylinder with water in it and fire it up. Steam has tremendous power, and if there is enough pressure in the cylinder it would be much like an explosion. But someone would need to run some pretty neat equations on cylinder size and water content, and I'm not sure you can build one with only blacksmith-level metal craft. Any other way in a short form: not bloody likely. A bit longer answer: unless it's a really small dam, good luck! Long answer: there is a reason why most of the Roman-built dams are still around, with a lot of them still in use. Would you think that engineers who built roads and aqueducts that are still around today, and that are definitely pieces of the finest engineering, would build a dam that is just a simple wall? Yes, there are a few, like the Subiaco dam, which are like that, but they are exceptions to the rule. Subiaco especially, as it's said to have been built so that Nero had a lake next to his villa. But most of them had a lime or concrete (yes, concrete) core, with compacted earth and masonry to protect it against erosion. First of all, you can forget catapults or trebuchets. Yes, they would crack the "outer layer", but they will be completely ineffective against the earth underneath. With a limestone or concrete core you can forget burrowing. The only way to break a solid Roman dam is with good old earthworks: spades, picks and shovels. AcePL I'm not sure if the physics of this hold up, but could you destroy a dam by movement of the water it holds back? I'm thinking you climb up a convenient high, rocky mountain next to the lake above a dam. You carve out a large, round boulder and let it roll down the mountain and into the water. The bow wave from the resulting impact stresses the dam sufficiently to breach it. I guess if you were capable of all that, you might argue you'd be better off rolling the boulder onto or against the dam and have the same effect, although that would require a very conveniently located mountain. Ralph Bolton $\begingroup$ Methinks you'd need more than a boulder, but if you could trigger a large and fast enough landslide, that would do it. $\endgroup$ – nigel222 Apr 25 '17 at 18:59 I think another way to do this would be to block the outflow of the dam. When the dam fills, it would normally push water down the overflow, and issues with the overflow can be dangerous: see the Oroville Dam, which had an issue with its overflow. It is very dangerous if a dam has its overflow blocked. JamesD $\begingroup$ A little trench along the side to focus the overflow adjacent to the dam could help in such an approach. $\endgroup$ – The Nate Apr 24 '17 at 15:05 How to knock down a dam without explosives? Two words: Miley Cyrus Russell Hankins How about a 'solar death ray'? Arguably a whole load of soldiers could spend a day or two polishing the insides of their shields and then hold them in a carefully choreographed way such that they each reflect a small amount of (bright?) sunshine to a single point on the dam. I'm not sure how you'd be able to ensure that every soldier's reflection was on the hot spot, but I guess you might be able to solve that by grouping them and having groups bring their reflections together at the end of the procedure. Even if it didn't work, you'd freak out the dam owners pretty badly.
It's been suggested Archimedes did this, although I'm not sure if it's true. Here's one attempt to recreate it: http://web.mit.edu/2.009/www/experiments/deathray/10_ArchimedesResult.html Heat it up, cool it off, heat it up, cool it off, heat it up, cool it off, heat it up, cool it off, heat it up, cool it off. This seems stupid, but I assure you, as someone who has some background in engineering, that if you stress the material enough like this it will break. You just need to get it really hot and then get it cooled off quickly. I'm pretty sure this happened with a wall either in Egypt or Constantinople - I'm not really sure it was called that when it happened though. This method alone could break a dam by weakening its structure so much that it can no longer hold the load it was supposed to hold, but it would make more sense to use this method to try to weaken the dam before you hit it with a projectile battery. Alternatively, if this is too boring, you could just dig a bunch of holes. It's a more cartoonish solution, but it would work in theory. You could also just break all the support structures. The episode of Avatar: The Last Airbender where Team Boomerang breaks the Fire Nation's drill to stop it from boring into the walled city actually displays this concept really well. If you can go near the dam (meaning there is no secrecy involved, no guards, the dam is far from a village, etc. etc.), and if you want a cheap, quick (a few hours total), 100% reliable, 100% risk-free method, then it's extremely easy to do. Just use dry wood wedges. You need a boat, a bunch of dry wood wedges, a hammer and chisel, and a bucket. Row to the middle of the dam and use the hammer and chisel to make a few holes (two or three should be enough, and there is no need to go through the entire dam, just go half the depth of the wedges). Plant the wedges inside the holes, fill the bucket with water and wet the wedges. Keep doing it till you start hearing crack noises coming from the stone. Row away and just watch the dam getting torn to pieces. Keep in mind that if you have time or resources you can skip the bucket part and either have a tiny bit of water flowing outside the holes where you put the wedges, or use a bronze tube to move water from the top of the dam to the wedges. But in the end, whatever solution you choose to wet the wedges, you'll get your dam destroyed. And this is not hard science nor something unthinkable in that age, as we (human beings, I mean) have used this exact method since the dawn of humanity to tear apart mountains... motoDrizzt Try a ballista - https://en.wikipedia.org/wiki/Ballista - when it hits a tree or a rock, it pierces it easily. Such is the engine which bears this name, being so called because it shoots with very great force. RRRRRR Tesla supposedly brought down a building with a device about the size of a shoebox. Supposedly it used resonant vibration to compromise the structure over time - it was also attached to a metal portion of said structure. As you are using the science-based tag, this might be enough to go by. Also, considering the technological feats already accomplished in China, India, and the Americas around that time, it is not too far a stretch. Not to mention that the local neighbors, the Greeks, were building metal clockwork computers around that time. Just bring in an expert (foreigner with a briefcase... or, given the time period, a satchel or similar).
Even without using an electrically based device such as Tesla's, anything which produces rhythmic vibrations at the correct frequency would do the trick. A clockwork hammer mechanism might be able to pull it off, and has the advantage of the tech already having been worked out by the neighbors (again, the Greeks). nijineko $\begingroup$ The middle Roman Empire didn't have electrical motor technology or use metal for construction, the last time I checked. $\endgroup$ – Nij Apr 23 '17 at 2:53 $\begingroup$ @Nij Might want to look into things a little further East. More locally, the Greeks were building computers around that time, so.... Shrug. $\endgroup$ – nijineko Apr 23 '17 at 19:43 $\begingroup$ There's a hell of a long way between a custom-built handcrafted mechanical analogue orrery and a high-speed electric motor with the durability to establish structural resonance. Affirming a -1 for not making sense when the question is science-based. $\endgroup$ – Nij Apr 23 '17 at 21:00 $\begingroup$ ninjas in Japan were already using self-driving cars by 1 AD so it doesn't surprise me $\endgroup$ – Weyland Yutani Apr 25 '17 at 10:59 $\begingroup$ not to mention that while there is no evidence of smartphone use, archaeologists have found many ancient Greek tablets $\endgroup$ – Weyland Yutani Apr 25 '17 at 11:00
Chemistry Stack Exchange is a question and answer site for scientists, academics, teachers, and students in the field of chemistry. Why isn't the aniline hydrolysed in Sanger's protein sequencing method? Recently I was reading about identifying N-terminal amino acid residues using Sanger's reagent (1-fluoro-2,4-dinitrobenzene). The following image showing the reaction is taken from Wikimedia Commons: The free N-terminus amino group performs a nucleophilic aromatic substitution to get the first product; after that we hydrolyse the adduct. Why doesn't the Ar–N bond in the final product also undergo hydrolysis to give 2,4-dinitroaniline, though? There are many examples (e.g. isocyanides, cyanides) where heating with strong acid causes cleavage of the C–N bond. Is it because resonance with the aromatic ring causes the nitrogen lone pair to be not basic? organic-chemistry biochemistry amines Chemical Brewster $\begingroup$ Please note my edit to your question: (1) The chemical names are wrong; substituents (fluoro, dinitro) are joined to the parent chain (benzene) by hyphens, not separated by spaces. (2) The $\mathrm{S_NAr}$ mechanism is not an "aryl $\mathrm{S_N2}$". (3) No need to put three or four consecutive question marks; one is enough. $\endgroup$ – orthocresol ♦ $\begingroup$ @orthocresol brother can you please check my understanding in the comments that I made on your answer? $\endgroup$ – Chemical Brewster You raise the examples of isonitriles and nitriles undergoing C–N bond cleavage via hydrolysis. That is true, but it is worth thinking about how this happens. Both isonitriles and nitriles are converted to amides first when reacted with aqueous acid. Now, amides will certainly be hydrolysed at the last stage of the peptide sequencing: you can see that in the image itself, as the amide bonds holding the polypeptide together are broken (so that the labelled residue can be detected). However, amides behave quite differently from amines. The C–N bond you're asking about ($\ce{R-NH-Ar}$, Ar = 2,4-dinitrophenyl) is an amine, not an amide. The comparison to isonitriles / nitriles is therefore not relevant. Amines simply don't undergo typical $\mathrm{S_N2}$ reactions (quaternary ammonium salts do, but that's a different matter entirely), so the C–N bond doesn't get broken via hydrolysis. It is true that there is resonance with the electron-withdrawing dinitrophenyl group, which reduces the availability of the nitrogen lone pair, but this is actually irrelevant: even if that aryl group were just a methyl group, the conclusion would not be affected. Note that this has nothing to do with the C–N bond strength, either. The C–N bond in an amide has partial double bond character, and is stronger than the C–N bond in an amine (which is a plain single bond). The question is not which is stronger, it is which has an available pathway for cleavage that doesn't involve too high an activation energy. Amides can be cleaved via a nucleophilic acyl substitution mechanism: although this is not easy to accomplish (you need strong acid and lots of heat), it is still possible. Amines, on the other hand, could conceivably be cleaved via an $\mathrm{S_N2}$ or $\mathrm{S_N1}$ mechanism. Neither is very favourable in this case (amines are very poor leaving groups, and the carbocation above is not stable at all, with that electron-withdrawing carbonyl group next to it).
orthocresol ♦ $\begingroup$ I think what I got confused on was thinking that, for R-NH-R', if the nitrogen formed a coordinate bond with a proton, then the quaternary amine so formed would be a good leaving group; but it turns out that even then it doesn't. Amines form salts with acids, and this doesn't change the fact that they are bad leaving groups. Here the only possibility was nucleophilic acyl substitution. That is in fact not related to RNH2 being a good leaving group or not. It is just a property of acid derivatives to show this reaction whenever the surrounding medium is deficient in the group that is attached to the carbonyl $\endgroup$ $\begingroup$ Even if RNHR' forms salts with some acid (i.e., donates a lone pair to $H^+$), even then we won't expect it to break the original C-N bonds that are present in the salt. Kindly confirm these thoughts of mine. $\endgroup$ $\begingroup$ From the last line of my first comment, sir, I wanted to say that the equilibrium of acyl substitution will be drawn to the products only if there is a deficiency of the group being displaced in the surrounding medium (to prevent the reverse reaction) (Le Chatelier's principle) $\endgroup$
Published by editor on June 8, 2019 Quantum Nature of Black Holes: Fast Scrambling versus Echoes. (arXiv:1906.02653v1 [hep-th]) Authors: Krishan Saraswat, Niayesh Afshordi Two seemingly distinct notions regarding black holes have captured the imagination of theoretical physicists over the past decade: First, black holes are conjectured to be fast scramblers of information, a notion that is further supported through connections to quantum chaos and decay of mutual information via AdS/CFT holography. Second, the black hole information paradox has motivated exotic quantum structure near horizons of black holes (e.g., gravastars, fuzzballs, or firewalls) that may manifest themselves through delayed gravitational wave echoes in the aftermath of black hole formation or mergers, and are potentially observable by LIGO/Virgo observatories. By studying various limits of charged AdS/Schwarzschild black holes we show that, if properly defined, the two seemingly distinct phenomena happen on an identical timescale of log(Radius)/$(\pi \times {\rm Temperature})$. We further comment on the physical interpretation of this coincidence and the corresponding holographic interpretation of black hole echoes. About one interesting and important model in quantum mechanics I. Dynamic description. (arXiv:1906.02274v1 [quant-ph]) Authors: Yuri G Rudoy, Enock O Oladimeji In this paper a detailed investigation of one of the most interesting models in the non-relativistic quantum mechanics of one massive particle, namely the one introduced by G. Poeschl and E. Teller in 1933, is presented. This model includes as particular cases the two most popular and valuable models: the quasi-free particle in a box with impenetrable hard walls (i.e., the model with confinement) and the Bloch quantum harmonic oscillator, which is unconfined in space; both models are frequently and effectively exploited in modern nanotechnology, e.g., in quantum dots and magnetic traps. We give an extensive and elementary exposition of the potentials, wave functions and energy spectra of all these interconnected models. Moreover, the pressure operator is defined following the lines of Hellmann and Feynman, who were the first to introduce this idea in quantum chemistry in the late 1930s. By these means the baroenergetic equation of state is obtained and analyzed for all three models; in particular, the absence of pressure for the Bloch oscillator, due to the infinite width of the box, is shown. The generalization of these results to the case of nonzero temperature will be given later. Comment on Frauchiger and Renner paper (Nat. Commun. 9, 3711 (2018)): the problem of stopping times. (arXiv:1906.02333v1 [quant-ph]) Authors: P. B. Lerner The Gedankenexperiment advanced by Frauchiger and Renner in their Nature paper is based on an implicit assumption that one can synchronize stochastic measurement intervals between two non-interacting systems. This hypothesis, the author demonstrates, is equivalent to the complete entanglement of these systems. Consequently, Frauchiger and Renner's postulate Q is too broad and, in general, meaningless. An accurate reformulation of the postulate, Q1, does not seem to entail any paradoxes with measurement. This paper is agnostic with respect to particular interpretations of quantum mechanics. Nor does it refer to the collapse of the wavefunction. Black holes in the quantum universe. (arXiv:1905.08807v1 [hep-th] CROSS LISTED) Authors: Steven B.
Giddings A succinct summary is given of the problem of reconciling observation of black hole-like objects with quantum mechanics. If quantum black holes behave like subsystems, and also decay, their information must be transferred to their environments. Interactions that accomplish this with `minimal' departure from a standard description are parameterized. Possible sensitivity of gravitational wave or very long baseline interferometric observations to these interactions is briefly outlined. Experimental simultaneous read out of the real and imaginary parts of the weak value. (arXiv:1906.02263v1 [quant-ph]) Authors: A. Hariri, D. Curic, L. Giner, J. S. Lundeen The weak value, the average result of a weak measurement, has proven useful for probing quantum and classical systems. Examples include the amplification of small signals, investigating quantum paradoxes, and elucidating fundamental quantum phenomena such as geometric phase. A key characteristic of the weak value is that it can be complex, in contrast to a standard expectation value. However, typically only either the real or imaginary component of the weak value is determined in a given experimental setup. Weak measurements can be used to, in a sense, simultaneously measure non-commuting observables. This principle was used in the direct measurement of the quantum wavefunction. However, the wavefunction's real and imaginary components, given by a weak value, are determined in different setups or on separate ensembles of systems, putting the procedure's directness in question. To address these issues, we introduce and experimentally demonstrate a general method to simultaneously read out both components of the weak value in a single experimental apparatus. In particular, we directly measure the polarization state of an ensemble of photons using weak measurement. With our method, each photon contributes to both the real and imaginary parts of the weak-value average. On a fundamental level, this suggests that the full complex weak value is a characteristic of each photon measured. Physicists Debate Hawking's Idea That the Universe Had No Beginning In 1981, many of the world's leading cosmologists gathered at the Pontifical Academy of Sciences, a vestige of the coupled lineages of science and theology located in an elegant villa in the gardens of the Vatican. Stephen Hawking chose the august setting to present what he would later regard as his most important idea: a proposal about how the universe could have arisen from nothing. Before Hawking's talk, all cosmological origin stories, scientific or theological, had invited the rejoinder, "What happened before that?" The Big Bang theory, for instance — pioneered 50 years before Hawking's lecture by the Belgian physicist and Catholic priest Georges Lemaître, who later served as president of the Vatican's academy of sciences — rewinds the expansion of the universe back to a hot, dense bundle of energy. But where did the initial energy come from? The Big Bang theory had other problems. Physicists understood that an expanding bundle of energy would grow into a crumpled mess rather than the huge, smooth cosmos that modern astronomers observe. In 1980, the year before Hawking's talk, the cosmologist Alan Guth realized that the Big Bang's problems could be fixed with an add-on: an initial, exponential growth spurt known as cosmic inflation, which would have rendered the universe huge, smooth and flat before gravity had a chance to wreck it. 
Inflation quickly became the leading theory of our cosmic origins. Yet the issue of initial conditions remained: What was the source of the minuscule patch that allegedly ballooned into our cosmos, and of the potential energy that inflated it? Hawking, in his brilliance, saw a way to end the interminable groping backward in time: He proposed that there's no end, or beginning, at all. According to the record of the Vatican conference, the Cambridge physicist, then 39 and still able to speak with his own voice, told the crowd, "There ought to be something very special about the boundary conditions of the universe, and what can be more special than the condition that there is no boundary?" The "no-boundary proposal," which Hawking and his frequent collaborator, James Hartle, fully formulated in a 1983 paper, envisions the cosmos having the shape of a shuttlecock. Just as a shuttlecock has a diameter of zero at its bottommost point and gradually widens on the way up, the universe, according to the no-boundary proposal, smoothly expanded from a point of zero size. Hartle and Hawking derived a formula describing the whole shuttlecock — the so-called "wave function of the universe" that encompasses the entire past, present and future at once — making moot all contemplation of seeds of creation, a creator, or any transition from a time before. "Asking what came before the Big Bang is meaningless, according to the no-boundary proposal, because there is no notion of time available to refer to," Hawking said in another lecture at the Pontifical Academy in 2016, a year and a half before his death. "It would be like asking what lies south of the South Pole." Hartle and Hawking's proposal radically reconceptualized time. Each moment in the universe becomes a cross-section of the shuttlecock; while we perceive the universe as expanding and evolving from one moment to the next, time really consists of correlations between the universe's size in each cross-section and other properties — particularly its entropy, or disorder. Entropy increases from the cork to the feathers, aiming an emergent arrow of time. Near the shuttlecock's rounded-off bottom, though, the correlations are less reliable; time ceases to exist and is replaced by pure space. As Hartle, now 79 and a professor at the University of California, Santa Barbara, explained it by phone recently, "We didn't have birds in the very early universe; we have birds later on. … We didn't have time in the early universe, but we have time later on." The no-boundary proposal has fascinated and inspired physicists for nearly four decades. "It's a stunningly beautiful and provocative idea," said Neil Turok, a cosmologist at the Perimeter Institute for Theoretical Physics in Waterloo, Canada, and a former collaborator of Hawking's. The proposal represented a first guess at the quantum description of the cosmos — the wave function of the universe. Soon an entire field, quantum cosmology, sprang up as researchers devised alternative ideas about how the universe could have come from nothing, analyzed the theories' various predictions and ways to test them, and interpreted their philosophical meaning. The no-boundary wave function, according to Hartle, "was in some ways the simplest possible proposal for that." But two years ago, a paper by Turok, Job Feldbrugge of the Perimeter Institute, and Jean-Luc Lehners of the Max Planck Institute for Gravitational Physics in Germany called the Hartle-Hawking proposal into question. 
The proposal is, of course, only viable if a universe that curves out of a dimensionless point in the way Hartle and Hawking imagined naturally grows into a universe like ours. Hawking and Hartle argued that indeed it would — that universes with no boundaries will tend to be huge, breathtakingly smooth, impressively flat, and expanding, just like the actual cosmos. "The trouble with Stephen and Jim's approach is it was ambiguous," Turok said — "deeply ambiguous." In their 2017 paper, published in Physical Review Letters, Turok and his co-authors approached Hartle and Hawking's no-boundary proposal with new mathematical techniques that, in their view, make its predictions much more concrete than before. "We discovered that it just failed miserably," Turok said. "It was just not possible quantum mechanically for a universe to start in the way they imagined." The trio checked their math and queried their underlying assumptions before going public, but "unfortunately," Turok said, "it just seemed to be inescapable that the Hartle-Hawking proposal was a disaster." The paper ignited a controversy. Other experts mounted a vigorous defense of the no-boundary idea and a rebuttal of Turok and colleagues' reasoning. "We disagree with his technical arguments," said Thomas Hertog, a physicist at the Catholic University of Leuven in Belgium who closely collaborated with Hawking for the last 20 years of the latter's life. "But more fundamentally, we disagree also with his definition, his framework, his choice of principles. And that's the more interesting discussion." After two years of sparring, the groups have traced their technical disagreement to differing beliefs about how nature works. The heated — yet friendly — debate has helped firm up the idea that most tickled Hawking's fancy. Even critics of his and Hartle's specific formula, including Turok and Lehners, are crafting competing quantum-cosmological models that try to avoid the alleged pitfalls of the original while maintaining its boundless allure. Garden of Cosmic Delights Hartle and Hawking saw a lot of each other from the 1970s on, typically when they met in Cambridge for long periods of collaboration. The duo's theoretical investigations of black holes and the mysterious singularities at their centers had turned them on to the question of our cosmic origin. In 1915, Albert Einstein discovered that concentrations of matter or energy warp the fabric of space-time, causing gravity. In the 1960s, Hawking and the Oxford University physicist Roger Penrose proved that when space-time bends steeply enough, such as inside a black hole or perhaps during the Big Bang, it inevitably collapses, curving infinitely steeply toward a singularity, where Einstein's equations break down and a new, quantum theory of gravity is needed. The Penrose-Hawking "singularity theorems" meant there was no way for space-time to begin smoothly, undramatically at a point. Hawking and Hartle were thus led to ponder the possibility that the universe began as pure space, rather than dynamical space-time. And this led them to the shuttlecock geometry. They defined the no-boundary wave function describing such a universe using an approach invented by Hawking's hero, the physicist Richard Feynman. In the 1940s, Feynman devised a scheme for calculating the most likely outcomes of quantum mechanical events. 
To predict, say, the likeliest outcomes of a particle collision, Feynman found that you could sum up all possible paths that the colliding particles could take, weighting straightforward paths more than convoluted ones in the sum. Calculating this "path integral" gives you the wave function: a probability distribution indicating the different possible states of the particles after the collision. Likewise, Hartle and Hawking expressed the wave function of the universe — which describes its likely states — as the sum of all possible ways that it might have smoothly expanded from a point. The hope was that the sum of all possible "expansion histories," smooth-bottomed universes of all different shapes and sizes, would yield a wave function that gives a high probability to a huge, smooth, flat universe like ours. If the weighted sum of all possible expansion histories yields some other kind of universe as the likeliest outcome, the no-boundary proposal fails. The problem is that the path integral over all possible expansion histories is far too complicated to calculate exactly. Countless different shapes and sizes of universes are possible, and each can be a messy affair. "Murray Gell-Mann used to ask me," Hartle said, referring to the late Nobel Prize-winning physicist, "if you know the wave function of the universe, why aren't you rich?" Of course, to actually solve for the wave function using Feynman's method, Hartle and Hawking had to drastically simplify the situation, ignoring even the specific particles that populate our world (which meant their formula was nowhere close to being able to predict the stock market). They considered the path integral over all possible toy universes in "minisuperspace," defined as the set of all universes with a single energy field coursing through them: the energy that powered cosmic inflation. (In Hartle and Hawking's shuttlecock picture, that initial period of ballooning corresponds to the rapid increase in diameter near the bottom of the cork.) Even the minisuperspace calculation is hard to solve exactly, but physicists know there are two possible expansion histories that potentially dominate the calculation. These rival universe shapes anchor the two sides of the current debate. The rival solutions are the two "classical" expansion histories that a universe can have. Following an initial spurt of cosmic inflation from size zero, these universes steadily expand according to Einstein's theory of gravity and space-time. Weirder expansion histories, like football-shaped universes or caterpillar-like ones, mostly cancel out in the quantum calculation. One of the two classical solutions resembles our universe. On large scales, it's smooth and randomly dappled with energy, due to quantum fluctuations during inflation. As in the real universe, density differences between regions form a bell curve around zero. If this possible solution does indeed dominate the wave function for minisuperspace, it becomes plausible to imagine that a far more detailed and exact version of the no-boundary wave function might serve as a viable cosmological model of the real universe. The other potentially dominant universe shape is nothing like reality. As it widens, the energy infusing it varies more and more extremely, creating enormous density differences from one place to the next that gravity steadily worsens. Density variations form an inverted bell curve, where differences between regions approach not zero, but infinity. 
If this is the dominant term in the no-boundary wave function for minisuperspace, then the Hartle-Hawking proposal would seem to be wrong. The two dominant expansion histories present a choice in how the path integral should be done. If the dominant histories are two locations on a map, megacities in the realm of all possible quantum mechanical universes, the question is which path we should take through the terrain. Which dominant expansion history, and there can only be one, should our "contour of integration" pick up? Researchers have forked down different paths. In their 2017 paper, Turok, Feldbrugge and Lehners took a path through the garden of possible expansion histories that led to the second dominant solution. In their view, the only sensible contour is one that scans through real values (as opposed to imaginary values, which involve the square roots of negative numbers) for a variable called "lapse." Lapse is essentially the height of each possible shuttlecock universe — the distance it takes to reach a certain diameter. Lacking a causal element, lapse is not quite our usual notion of time. Yet Turok and colleagues argue partly on the grounds of causality that only real values of lapse make physical sense. And summing over universes with real values of lapse leads to the wildly fluctuating, physically nonsensical solution. "People place huge faith in Stephen's intuition," Turok said by phone. "For good reason — I mean, he probably had the best intuition of anyone on these topics. But he wasn't always right." Imaginary Universes Jonathan Halliwell, a physicist at Imperial College London, has studied the no-boundary proposal since he was Hawking's student in the 1980s. He and Hartle analyzed the issue of the contour of integration in 1990. In their view, as well as Hertog's, and apparently Hawking's, the contour is not fundamental, but rather a mathematical tool that can be placed to greatest advantage. It's similar to how the trajectory of a planet around the sun can be expressed mathematically as a series of angles, as a series of times, or in terms of any of several other convenient parameters. "You can do that parameterization in many different ways, but none of them are any more physical than another one," Halliwell said. He and his colleagues argue that, in the minisuperspace case, only contours that pick up the good expansion history make sense. Quantum mechanics requires probabilities to add to 1, or be "normalizable," but the wildly fluctuating universe that Turok's team landed on is not. That solution is nonsensical, plagued by infinities and disallowed by quantum laws — obvious signs, according to no-boundary's defenders, to walk the other way. It's true that contours passing through the good solution sum up possible universes with imaginary values for their lapse variables. But apart from Turok and company, few people think that's a problem. Imaginary numbers pervade quantum mechanics. To team Hartle-Hawking, the critics are invoking a false notion of causality in demanding that lapse be real. "That's a principle which is not written in the stars, and which we profoundly disagree with," Hertog said. According to Hertog, Hawking seldom mentioned the path integral formulation of the no-boundary wave function in his later years, partly because of the ambiguity around the choice of contour. 
He regarded the normalizable expansion history, which the path integral had merely helped uncover, as the solution to a more fundamental equation about the universe posed in the 1960s by the physicists John Wheeler and Bryce DeWitt. Wheeler and DeWitt — after mulling over the issue during a layover at Raleigh-Durham International — argued that the wave function of the universe, whatever it is, cannot depend on time, since there is no external clock by which to measure it. And thus the amount of energy in the universe, when you add up the positive and negative contributions of matter and gravity, must stay at zero forever. The no-boundary wave function satisfies the Wheeler-DeWitt equation for minisuperspace. In the final years of his life, to better understand the wave function more generally, Hawking and his collaborators started applying holography — a blockbuster new approach that treats space-time as a hologram. Hawking sought a holographic description of a shuttlecock-shaped universe, in which the geometry of the entire past would project off of the present. That effort is continuing in Hawking's absence. But Turok sees this shift in emphasis as changing the rules. In backing away from the path integral formulation, he says, proponents of the no-boundary idea have made it ill-defined. What they're studying is no longer Hartle-Hawking, in his opinion — though Hartle himself disagrees. For the past year, Turok and his Perimeter Institute colleagues Latham Boyle and Kieran Finn have been developing a new cosmological model that has much in common with the no-boundary proposal. But instead of one shuttlecock, it envisions two, arranged cork to cork in a sort of hourglass figure with time flowing in both directions. While the model is not yet developed enough to make predictions, its charm lies in the way its lobes realize CPT symmetry, a seemingly fundamental mirror in nature that simultaneously reflects matter and antimatter, left and right, and forward and backward in time. One disadvantage is that the universe's mirror-image lobes meet at a singularity, a pinch in space-time that requires the unknown quantum theory of gravity to understand. Boyle, Finn and Turok take a stab at the singularity, but such an attempt is inherently speculative. There has also been a revival of interest in the "tunneling proposal," an alternative way that the universe might have arisen from nothing, conceived in the '80s independently by the Russian-American cosmologists Alexander Vilenkin and Andrei Linde. The proposal, which differs from the no-boundary wave function primarily by way of a minus sign, casts the birth of the universe as a quantum mechanical "tunneling" event, similar to when a particle pops up beyond a barrier in a quantum mechanical experiment. Questions abound about how the various proposals intersect with anthropic reasoning and the infamous multiverse idea. The no-boundary wave function, for instance, favors empty universes, whereas significant matter and energy are needed to power hugeness and complexity. Hawking argued that the vast spread of possible universes permitted by the wave function must all be realized in some larger multiverse, within which only complex universes like ours will have inhabitants capable of making observations. (The recent debate concerns whether these complex, habitable universes will be smooth or wildly fluctuating.) 
An advantage of the tunneling proposal is that it favors matter- and energy-filled universes like ours without resorting to anthropic reasoning — though universes that tunnel into existence may have other problems. No matter how things go, perhaps we'll be left with some essence of the picture Hawking first painted at the Pontifical Academy of Sciences 38 years ago. Or perhaps, instead of a South Pole-like non-beginning, the universe emerged from a singularity after all, demanding a different kind of wave function altogether. Either way, the pursuit will continue. "If we are talking about a quantum mechanical theory, what else is there to find other than the wave function?" asked Juan Maldacena, an eminent theoretical physicist at the Institute for Advanced Study in Princeton, New Jersey, who has mostly stayed out of the recent fray. The question of the wave function of the universe "is the right kind of question to ask," said Maldacena, who, incidentally, is a member of the Pontifical Academy. "Whether we are finding the right wave function, or how we should think about the wave function — it's less clear." Correction: This article was revised on June 6, 2019, to list Latham Boyle and Kieran Finn as co-developers of the CPT-symmetric universe idea. Quantum Leaps, Long Assumed to Be Instantaneous, Take Time When quantum mechanics was first developed a century ago as a theory for understanding the atomic-scale world, one of its key concepts was so radical, bold and counter-intuitive that it passed into popular language: the "quantum leap." Purists might object that the common habit of applying this term to a big change misses the point that jumps between two quantum states are typically tiny, which is precisely why they weren't noticed sooner. But the real point is that they're sudden. So sudden, in fact, that many of the pioneers of quantum mechanics assumed they were instantaneous. A new experiment shows that they aren't. By making a kind of high-speed movie of a quantum leap, the work reveals that the process is as gradual as the melting of a snowman in the sun. "If we can measure a quantum jump fast and efficiently enough," said Michel Devoret of Yale University, "it is actually a continuous process." The study, which was led by Zlatko Minev, a graduate student in Devoret's lab, was published on Monday in Nature. Already, colleagues are excited. "This is really a fantastic experiment," said the physicist William Oliver of the Massachusetts Institute of Technology, who wasn't involved in the work. "Really amazing." But there's more. With their high-speed monitoring system, the researchers could spot when a quantum jump was about to appear, "catch" it halfway through, and reverse it, sending the system back to the state in which it started. In this way, what seemed to the quantum pioneers to be unavoidable randomness in the physical world is now shown to be amenable to control. We can take charge of the quantum. All Too Random The abruptness of quantum jumps was a central pillar of the way quantum theory was formulated by Niels Bohr, Werner Heisenberg and their colleagues in the mid-1920s, in a picture now commonly called the Copenhagen interpretation. Bohr had argued earlier that the energy states of electrons in atoms are "quantized": Only certain energies are available to them, while all those in between are forbidden.
He proposed that electrons change their energy by absorbing or emitting quantum particles of light — photons — that have energies matching the gap between permitted electron states. This explained why atoms and molecules absorb and emit very characteristic wavelengths of light — why many copper salts are blue, say, and sodium lamps yellow. Bohr and Heisenberg began to develop a mathematical theory of these quantum phenomena in the 1920s. Heisenberg's quantum mechanics enumerated all the allowed quantum states, and implicitly assumed that jumps between them are instant — discontinuous, as mathematicians would say. "The notion of instantaneous quantum jumps … became a foundational notion in the Copenhagen interpretation," historian of science Mara Beller has written. Another of the architects of quantum mechanics, the Austrian physicist Erwin Schrödinger, hated that idea. He devised what seemed at first to be an alternative to Heisenberg's math of discrete quantum states and instant jumps between them. Schrödinger's theory represented quantum particles in terms of wavelike entities called wave functions, which changed only smoothly and continuously over time, like gentle undulations on the open sea. Things in the real world don't switch suddenly, in zero time, Schrödinger thought — discontinuous "quantum jumps" were just a figment of the mind. In a 1952 paper called "Are there quantum jumps?," Schrödinger answered with a firm "no," his irritation all too evident in the way he called them "quantum jerks." The argument wasn't just about Schrödinger's discomfort with sudden change. The problem with a quantum jump was also that it was said to just happen at a random moment — with nothing to say why that particular moment. It was thus an effect without a cause, an instance of apparent randomness inserted into the heart of nature. Schrödinger and his close friend Albert Einstein could not accept that chance and unpredictability reigned at the most fundamental level of reality. According to the German physicist Max Born, the whole controversy was therefore "not so much an internal matter of physics, as one of its relation to philosophy and human knowledge in general." In other words, there's a lot riding on the reality (or not) of quantum jumps. Seeing Without Looking To probe further, we need to see quantum jumps one at a time. In 1986, three teams of researchers reported them happening in individual atoms suspended in space by electromagnetic fields. The atoms flipped between a "bright" state, where they could emit a photon of light, and a "dark" state that did not emit at random moments, remaining in one state or the other for periods of between a few tenths of a second and a few seconds before jumping again. Since then, such jumps have been seen in various systems, ranging from photons switching between quantum states to atoms in solid materials jumping between quantized magnetic states. In 2007 a team in France reported jumps that correspond to what they called "the birth, life and death of individual photons." In these experiments the jumps indeed looked abrupt and random — there was no telling, as the quantum system was monitored, when they would happen, nor any detailed picture of what a jump looked like. The Yale team's setup, by contrast, allowed them to anticipate when a jump was coming, then zoom in close to examine it.
The key to the experiment is the ability to collect just about all of the available information about it, so that none leaks away into the environment before it can be measured. Only then can they follow single jumps in such detail. The quantum systems the researchers used are much larger than atoms, consisting of wires made from a superconducting material — sometimes called "artificial atoms" because they have discrete quantum energy states analogous to the electron states in real atoms. Jumps between the energy states can be induced by absorbing or emitting a photon, just as they are for electrons in atoms. Devoret and colleagues wanted to watch a single artificial atom jump between its lowest-energy (ground) state and an energetically excited state. But they couldn't monitor that transition directly, because making a measurement on a quantum system destroys the coherence of the wave function — its smooth wavelike behavior — on which quantum behavior depends. To watch the quantum jump, the researchers had to retain this coherence. Otherwise they'd "collapse" the wave function, which would place the artificial atom in one state or the other. This is the problem famously exemplified by Schrödinger's cat, which is allegedly placed in a coherent quantum "superposition" of live and dead states but becomes only one or the other when observed. To get around this problem, Devoret and colleagues employ a clever trick involving a second excited state. The system can reach this second state from the ground state by absorbing a photon of a different energy. The researchers probe the system in a way that only ever tells them whether the system is in this second "bright" state, so named because it's the one that can be seen. The state to and from which the researchers are actually looking for quantum jumps is, meanwhile, the "dark" state — because it remains hidden from direct view. The researchers placed the superconducting circuit in an optical cavity (a chamber in which photons of the right wavelength can bounce around) so that, if the system is in the bright state, the way that light scatters in the cavity changes. Every time the bright state decays by emission of a photon, the detector gives off a signal akin to a Geiger counter's "click." The key here, said Oliver, is that the measurement provides information about the state of the system without interrogating that state directly. In effect, it asks whether the system is in, or is not in, the ground and dark states collectively. That ambiguity is crucial for maintaining quantum coherence during a jump between these two states. In this respect, said Oliver, the scheme that the Yale team has used is closely related to those employed for error correction in quantum computers. There, too, it's necessary to get information about quantum bits without destroying the coherence on which the quantum computation relies. Again, this is done by not looking directly at the quantum bit in question but probing an auxiliary state coupled to it. The strategy reveals that quantum measurement is not about the physical perturbation induced by the probe but about what you know (and what you leave unknown) as a result. "Absence of an event can bring as much information as its presence," said Devoret. He compares it to the Sherlock Holmes story in which the detective infers a vital clue from the "curious incident" in which a dog did not do anything in the night. 
Borrowing from a different (but often confused) dog-related Holmes story, Devoret calls it "Baskerville's Hound meets Schrödinger's Cat." To Catch a Jump The Yale team saw a series of clicks from the detector, each signifying a decay of the bright state, arriving typically every few microseconds. This stream of clicks was interrupted approximately every few hundred microseconds, apparently at random, by a hiatus in which there were no clicks. Then after a period of typically 100 microseconds or so, the clicks resumed. During that silent time, the system had presumably undergone a transition to the dark state, since that's the only thing that can prevent flipping back and forth between the ground and bright states. So here in these switches from "click" to "no-click" states are the individual quantum jumps — just like those seen in the earlier experiments on trapped atoms and the like. However, in this case Devoret and colleagues could see something new. Before each jump to the dark state, there would typically be a short spell where the clicks seemed suspended: a pause that acted as a harbinger of the impending jump. "As soon as the length of a no-click period significantly exceeds the typical time between two clicks, you have a pretty good warning that the jump is about to occur," said Devoret. That warning allowed the researchers to study the jump in greater detail. When they saw this brief pause, they switched off the input of photons driving the transitions. Surprisingly, the transition to the dark state still happened even without photons driving it — it is as if, by the time the brief pause sets in, the fate is already fixed. So although the jump itself comes at a random time, there is also something deterministic in its approach. With the photons turned off, the researchers zoomed in on the jump with fine-grained time resolution to see it unfold. Does it happen instantaneously — the sudden quantum jump of Bohr and Heisenberg? Or does it happen smoothly, as Schrödinger insisted it must? And if so, how? The team found that jumps are in fact gradual. That's because, even though a direct observation could reveal the system only as being in one state or another, during a quantum jump the system is in a superposition, or mixture, of these two end states. As the jump progresses, a direct measurement would be increasingly likely to yield the final rather than the initial state. It's a bit like the way our decisions may evolve over time. You can only either stay at a party or leave it — it's a binary choice — but as the evening wears on and you get tired, the question "Are you staying or leaving?" becomes increasingly likely to get the answer "I'm leaving." The techniques developed by the Yale team reveal the changing mindset of a system during a quantum jump. Using a method called tomographic reconstruction, the researchers could figure out the relative weightings of the dark and ground states in the superposition. They saw these weights change gradually over a period of a few microseconds. That's pretty fast, but it's certainly not instantaneous. What's more, this electronic system is so fast that the researchers could "catch" the switch between the two states as it is happening, then reverse it by sending a pulse of photons into the cavity to boost the system back to the dark state. They can persuade the system to change its mind and stay at the party after all. 
Flash of Insight

The experiment shows that quantum jumps "are indeed not instantaneous if we look closely enough," said Oliver, "but are coherent processes": real physical events that unfold over time. The gradualness of the "jump" is just what is predicted by a form of quantum theory called quantum trajectories theory, which can describe individual events like this. "It is reassuring that the theory matches perfectly with what is seen," said David DiVincenzo, an expert in quantum information at Aachen University in Germany, "but it's a subtle theory, and we are far from having gotten our heads completely around it."

The possibility of predicting quantum jumps just before they occur, said Devoret, makes them somewhat like volcanic eruptions. Each eruption happens unpredictably, but some big ones can be anticipated by watching for the atypically quiet period that precedes them. "To the best of our knowledge, this precursory signal has not been proposed or measured before," he said.

Devoret said that an ability to spot precursors to quantum jumps might find applications in quantum sensing technologies. For example, "in atomic clock measurements, one wants to synchronize the clock to the transition frequency of an atom, which serves as a reference," he said. But if you can detect right at the start whether the transition is about to happen, rather than having to wait for it to be completed, the synchronization can be faster and therefore more precise in the long run.

DiVincenzo thinks that the work might also find applications in error correction for quantum computing, although he sees that as "quite far down the line." Achieving the level of control needed for dealing with such errors, though, will require this kind of exhaustive harvesting of measurement data — rather like the data-intensive situation in particle physics, said DiVincenzo.

The real value of the result is not, though, in any practical benefits; it's a matter of what we learn about the workings of the quantum world. Yes, it is shot through with randomness — but no, it is not punctuated by instantaneous jerks. Schrödinger, aptly enough, was both right and wrong at the same time.
Spatial rank-based multifactor dimensionality reduction to detect gene–gene interactions for multivariate phenotypes

Mira Park1, Hoe-Bin Jeong2, Jong-Hyun Lee2 & Taesung Park3
BMC Bioinformatics volume 22, Article number: 480 (2021)

Identifying interaction effects between genes is one of the main tasks of genome-wide association studies aiming to shed light on the biological mechanisms underlying complex diseases. Multifactor dimensionality reduction (MDR) is a popular approach for detecting gene–gene interactions that has been extended in various forms to handle binary and continuous phenotypes. However, only a few multivariate MDR methods are available for multiple related phenotypes. Current approaches use Hotelling's T² statistic to evaluate interaction models, but it is well known that Hotelling's T² statistic is highly sensitive to heavily skewed distributions and outliers.

We propose a robust approach based on nonparametric statistics such as spatial signs and ranks. The new multivariate rank-based MDR (MR-MDR) is mainly suitable for analyzing multiple continuous phenotypes and is less sensitive to skewed distributions and outliers. MR-MDR utilizes fuzzy k-means clustering and classifies multi-locus genotypes into two groups. Then, MR-MDR calculates a spatial rank-sum statistic as an evaluation measure and selects the best interaction model with the largest statistic. Our novel idea lies in adopting nonparametric statistics as an evaluation measure for robust inference. We adopt tenfold cross-validation to avoid overfitting. Intensive simulation studies were conducted to compare the performance of MR-MDR with current methods. Application of MR-MDR to a real dataset from a Korean genome-wide association study demonstrated that it successfully identified genetic interactions associated with four phenotypes related to kidney function. The R code for conducting MR-MDR is available at https://github.com/statpark/MR-MDR.

Intensive simulation studies comparing MR-MDR with several current methods showed that the performance of MR-MDR was outstanding for skewed distributions. Additionally, for symmetric distributions, MR-MDR showed comparable power. Therefore, we conclude that MR-MDR is a useful multivariate non-parametric approach that can be used regardless of the phenotype distribution, the correlations between phenotypes, and sample size.

Many attempts have been made to identify susceptible genes that influence the risk of complex diseases such as autism, hypertension, and diabetes [1,2,3]. Analyzing a single locus is not enough to understand the pathophysiology of complex diseases and results in the so-called missing heritability problem. To overcome this problem, several studies have sought to identify gene–gene interactions (GGIs) or gene–environment interactions [4,5,6]. As a non-parametric, model-free approach, multifactor dimensionality reduction (MDR) has been widely applied for detecting GGIs [5]. For binary phenotypes, such as those analyzed in case–control studies, MDR divides high-dimensional genotype combinations into a one-dimensional variable with two groups (high-risk and low-risk), according to whether the ratio of cases to controls exceeds a threshold. Then it finds the interaction model that best predicts the disease status. Balanced accuracy can be used as an evaluation measure [7]. To prevent overfitting, k-fold cross-validation (CV) can be applied.
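As a rough illustration of this labeling step (a minimal sketch on simulated data, not the authors' implementation), the following R code assigns high-risk/low-risk labels to the genotype combinations of one hypothetical SNP pair in a case–control sample and computes the balanced accuracy; the 0.5 added to each cell count is an ad hoc guard against empty cells, and the cross-validation bookkeeping is described in the text that follows.

```r
# Minimal sketch of the MDR labeling step for a binary trait (hypothetical data).
set.seed(1)
n <- 400
snp1 <- sample(0:2, n, replace = TRUE)          # genotypes coded 0/1/2
snp2 <- sample(0:2, n, replace = TRUE)
y    <- rbinom(n, 1, 0.5)                       # 1 = case, 0 = control

cell      <- interaction(snp1, snp2)            # 3 x 3 = 9 genotype combinations
case_ctrl <- table(cell, y)

# Label a cell "H" (high risk) if its case:control ratio exceeds the overall ratio.
threshold  <- sum(y == 1) / sum(y == 0)
cell_ratio <- (case_ctrl[, "1"] + 0.5) / (case_ctrl[, "0"] + 0.5)  # 0.5 guards empty cells
risk_label <- ifelse(cell_ratio >= threshold, "H", "L")

# Balanced accuracy of the resulting one-dimensional classifier.
pred <- ifelse(risk_label[as.character(cell)] == "H", 1, 0)
sens <- mean(pred[y == 1] == 1)
spec <- mean(pred[y == 0] == 0)
(sens + spec) / 2
```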
Cross-validation consistency (CVC), or the number of times each single-nucleotide polymorphism (SNP) combination is chosen as best, is obtained during the k-fold CV process. The SNP combination with the highest CVC is defined as the final best interaction model [8]. MDR has several advantages: i) the dimensions of the data are effectively reduced, ii) no specific genetic model is assumed, and iii) high-order interactions can be identified, even if there are no significant main effects [9, 10]. Since its introduction, many studies have been conducted to broaden the scope of application of MDR. According to Gola et al. [4], about 800 MDR-related studies were found as of February 2014 when searching PubMed and Google Scholar. For discrete traits, log-linear models MDR and robust MDR have been proposed to handle data with empty or sparse cells [11, 12]. Odds ratio MDR was proposed, replacing the naïve classifier with the odds ratio [9] and optimal MDR replaced the fixed threshold with a data-driven threshold using an ordered combinatorial partitioning algorithm [13]. As a method of dealing with continuous traits, generalized MDR (GMDR) was proposed. GMDR can handle both dichotomous and continuous phenotypes and can adjust for covariates [14]. Quantitative MDR (QMDR) for continuous traits uses the sample mean of each genotype combination as a classifier, reducing the computing time with comparable performance [15]. Recently, cluster-based MDR (CL-MDR) has been proposed as a method that is less sensitive to outliers and distributional assumptions [16]. For survival time with censored data, Surv-MDR was proposed, which uses the log-rank test statistic to classify the cells of a multifactor combination [17]. Cox-MDR and accelerated failure time MDR are extended versions of GMDR for the survival phenotype based on Cox regression and the accelerated failure time model, respectively [18, 19]. Recently, Kaplan–Meier MDR was also developed, which uses the median Kaplan–Meier survival time as a classifier [20]. As described above, many studies have been conducted to identify genetic interactions associated with single phenotype, but only a few studies have been done on methodologies to treat multiple phenotypes. For complex diseases, several correlated traits are often measured together. For example, weight, the waist-hip ratio, and the body mass index (BMI) can be jointly measured as obesity-related traits. The power to detect associations between genes and these traits is expected to increase if the multivariate approach is adopted [21]. Therefore, more research on multivariate methods detecting GGIs is needed. Recently, multivariate generalized MDR (GEE-GMDR) extended GMDR to the multivariate case by constructing generalized estimation equation models [22]. GEE-GMDR provides stable results, but it does not always show higher power than GMDR [23]. Multivariate QMDR (multi-QMDR) extended QMDR using principal component analysis scores and Hotelling's T2 statistic as a classifier and an evaluation measure instead [23]. Multi-QMDR gives a high CVC and stable results, but it is prone to outliers or influencing points. More recently, multivariate cluster-based MDR (multi-CMDR) has been proposed [24]. Multi-CMDR applies fuzzy k-means clustering to discriminate between high- and low-risk groups and uses Hotelling's T2 statistic for evaluation. Multi-CMDR is less sensitive to outliers and non-symmetric distributions. 
While MDR is a nonparametric approach, all these methods use parametric test statistics as evaluation measures, based on a multivariate normal distribution or an exponential family distribution. Instead of using a parametric approach, this study considers non-parametric evaluation measures for testing the equality of two multivariate populations. Various methods based on multivariate ranks or on distances between pairs of individual observations have been studied [25]. Note that signs and ranks in the univariate case are based on the ordering of the data. Unfortunately, however, there is no natural ordering of the data in the multivariate case. To extend the concept of rank to the multivariate case, several principles have been considered. First, methods using interdirections were introduced [26, 27]. An interdirection is a measure based on the angular distance between two observation vectors relative to the rest of the data [28]. Second, tests based on data depth were proposed [29,30,31]. A data depth measures how deep a multivariate sample lies in the data cloud [32]. Any function which provides a reasonable center-outward ordering of points in multidimensional space can be considered a depth function [33]. Third, multivariate extensions using spatial signs and ranks were also studied [34, 35]. Affine-invariant methods based on spatial sign and rank vectors for various multivariate problems were proposed [34]. More recently, for high-dimensional data, a nonparametric multivariate test using spatial signs [36] and a spatial rank test for two samples [37] were proposed. Among the various approaches and measures, we chose the spatial rank-sum statistic as the non-parametric evaluation measure in this study, because it is one of the most widely used statistics and is implemented in R.

We propose a new non-parametric multivariate approach for identifying genetic interactions. We call the proposed method multivariate rank-based MDR (MR-MDR). During the classification process, MR-MDR utilizes fuzzy k-means clustering with a noise cluster, as in multi-CMDR. For the evaluation process, MR-MDR calculates the spatial rank-sum statistic as an evaluation measure and selects the best interaction model with the largest statistic. The tenfold CV method is adopted and the final best interaction model is determined by maximum CV consistency. This manuscript is organized as follows. We start with an introduction to the spatial rank statistic. The algorithm of the proposed MR-MDR method is then described in detail. We then present the results of intensive simulation studies to investigate the performance of the proposed method. Our method is compared with multi-QMDR and multi-CMDR. We applied the proposed MR-MDR method to data on four multivariate phenotypes related to kidney function obtained from the Korean Genome and Epidemiology Study (KoGES): blood urea nitrogen (BUN), serum creatinine, urinary albumin, and urinary red blood cell (RBC) levels. MR-MDR successfully identified genetic interactions associated with these four phenotypes.

Nonparametric multivariate rank test

We first introduce the multivariate non-parametric test used for evaluation. To detect a two-sample location shift in univariate analysis, the two-sample t-test is popularly used when the response variable is normally distributed. The Mann–Whitney test based on the rank sum is well known as a nonparametric counterpart of the two-sample univariate t-test.
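For a single phenotype, this contrast is easy to reproduce in R (a toy illustration, not taken from the paper): both tests are applied to two skewed samples that differ in location.

```r
# Univariate two-sample location tests on skewed (gamma-distributed) data.
set.seed(1)
g1 <- rgamma(50, shape = 2, rate = 0.5)       # group 1
g2 <- rgamma(50, shape = 2, rate = 0.5) + 1   # group 2, shifted in location

t.test(g1, g2)$p.value        # parametric two-sample t-test
wilcox.test(g1, g2)$p.value   # rank-based Mann-Whitney counterpart
```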
Various robust univariate non-parametric tests have been developed for the two-sample location problem [38]. For multivariate analysis, a classical approach such as Hotelling's T² is a popular parametric approach. T² has optimal power under the assumption of a multivariate normal distribution. However, it performs poorly in the case of heavy-tailed distributions and is highly sensitive to outliers [39]. As an alternative, we consider a nonparametric approach based on spatial signs and ranks. We start with the definition of spatial sign and spatial rank. Let \(\mathbf{Y} = (\mathbf{y}_{1},\ldots,\mathbf{y}_{n})^{\prime}\) be an n × p data matrix with n observations and p variables. The spatial sign function and spatial rank function are defined as follows [28].
$$\mathbf{S}(\mathbf{y}) = \begin{cases} \|\mathbf{y}\|^{-1}\,\mathbf{y}, & \mathbf{y} \ne \mathbf{0} \\ \mathbf{0}, & \mathbf{y} = \mathbf{0} \end{cases}, \qquad \mathbf{R}(\mathbf{y}) = \operatorname{ave}_{j}\{\mathbf{S}(\mathbf{y} - \mathbf{y}_{j})\} = \frac{1}{n}\sum_{j=1}^{n} \mathbf{S}(\mathbf{y} - \mathbf{y}_{j}),$$
where ave_j means the average taken over all observations for j = 1, …, n, ||y|| is the Euclidean distance of y from 0, and y and y_j are p-variate vectors [28]. The observed spatial signs s_i and observed centered spatial ranks r_i are defined as
$$\mathbf{s}_{i} = \mathbf{S}(\mathbf{y}_{i}), \qquad \mathbf{r}_{i} = \operatorname{ave}_{j}\{\mathbf{s}_{ij}\} = \frac{1}{n}\sum_{j=1}^{n} \mathbf{S}(\mathbf{y}_{i} - \mathbf{y}_{j}),$$
respectively, for i, j = 1, …, n. Here, \(\mathbf{s}_{ij} = \mathbf{S}(\mathbf{y}_{i} - \mathbf{y}_{j})\) and \(\operatorname{ave}\{\mathbf{r}_{i}\} = \mathbf{0}\). To make an affine-invariant test statistic, we can apply the spatial sign function to the transformed data points. The test statistic \(T(\mathbf{y}_{1},\ldots,\mathbf{y}_{n})\) is said to be affine-invariant if \(T(\mathbf{y}_{1},\ldots,\mathbf{y}_{n}) = T(D\mathbf{y}_{1},\ldots,D\mathbf{y}_{n})\) for every p × p nonsingular matrix D and for every p-variate dataset \(\mathbf{y}_{1},\ldots,\mathbf{y}_{n}\) [34]. Affine-invariant spatial signs and ranks are obtained by transforming y_i to A_y y_i,
$$\mathbf{s}_{i}^{*} = \mathbf{S}(A_{y}\mathbf{y}_{i}), \qquad \mathbf{r}_{i}^{*} = \operatorname{ave}_{j}\{\mathbf{s}_{ij}^{*}\} = \frac{1}{n}\sum_{j=1}^{n} \mathbf{S}\bigl(A_{y}(\mathbf{y}_{i} - \mathbf{y}_{j})\bigr),$$
where A_y is Tyler's transformation, which makes the spatial sign covariance matrix proportional to the identity matrix, that is, \(\operatorname{ave}\{\mathbf{r}_{i}^{*\prime}\mathbf{r}_{i}^{*}\} = [c_{y}^{2}/p]\,I_{p}\), where \(c_{y}^{2} = \operatorname{ave}\{\|\mathbf{r}_{i}^{*}\|^{2}\}\). A_y can be obtained during the iterative process and chosen so that \(\operatorname{trace}(A_{y}^{\prime}A_{y}) = p\) [34, 40, 41]. The ranks \(\mathbf{r}_{i}^{*}\) lie in the unit p-sphere. The direction of \(\mathbf{r}_{i}^{*}\) roughly points outward from the center of the data cloud and its length tells how far away this point is from the center of the data cloud [42].
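The definitions above translate directly into code. The base-R sketch below computes spatial signs and centered spatial ranks for an n × p data matrix and, anticipating the two-sample statistic introduced in the next paragraph, assembles a rank-based statistic from them. This is the untransformed version (no Tyler's transformation) and an illustrative re-implementation, not the SpatialNP package code used by the authors.

```r
# Spatial sign of a p-variate vector: S(y) = y / ||y||, with S(0) = 0.
spatial_sign <- function(y) {
  nrm <- sqrt(sum(y^2))
  if (nrm == 0) rep(0, length(y)) else y / nrm
}

# Centered spatial ranks: r_i = (1/n) * sum_j S(y_i - y_j), one row per observation.
spatial_ranks <- function(Y) {
  n <- nrow(Y)
  t(sapply(seq_len(n), function(i) {
    diffs <- sweep(Y, 2, Y[i, ], FUN = "-")      # rows: y_j - y_i
    signs <- t(apply(-diffs, 1, spatial_sign))   # rows: S(y_i - y_j)
    colMeans(signs)
  }))
}

# Rank-based two-sample statistic built from the centered ranks of the combined
# sample (untransformed analogue of the U^2 statistic defined below).
rank_sum_stat <- function(Y, group) {
  R   <- spatial_ranks(Y)
  cy2 <- mean(rowSums(R^2))
  p   <- ncol(Y)
  stat <- 0
  for (g in unique(group)) {
    rbar <- colMeans(R[group == g, , drop = FALSE])
    stat <- stat + sum(group == g) * sum(rbar^2)
  }
  p * stat / cy2
}

# Example usage: Y is an n x p phenotype matrix, group a two-level indicator.
# rank_sum_stat(Y, group)
```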
Next, for the two-sample location problem, let \(\mathbf{Y}_{1} = (\mathbf{y}_{1},\ldots,\mathbf{y}_{n_{1}})^{\prime}\) and \(\mathbf{Y}_{2} = (\mathbf{y}_{n_{1}+1},\ldots,\mathbf{y}_{n_{1}+n_{2}})^{\prime}\) be two independent samples with p variables that have the cumulative distribution functions \(F(\mathbf{x}-\boldsymbol{\mu})\) and \(F(\mathbf{x}-\boldsymbol{\mu}-\boldsymbol{\Delta})\), respectively. To test the null hypothesis of no difference in location, \(H_{0}\colon \boldsymbol{\Delta} = \mathbf{0}\), the multivariate version of the Mann–Whitney test statistic U² can be used. For the combined sample Y = [Y1 : Y2], the spatial signs of the transformed differences \(\mathbf{s}_{ij}^{*} = \mathbf{S}(A_{y}(\mathbf{y}_{i}-\mathbf{y}_{j}))\) and the spatial ranks \(\mathbf{r}_{i}^{*} = \operatorname{ave}_{j}\{\mathbf{s}_{ij}^{*}\}\) are obtained. The multivariate Mann–Whitney test statistic U² is given by
$$U^{2} = \frac{p}{c_{y}^{2}}\sum_{i=1}^{2} n_{i}\,\|\overline{\mathbf{r}}_{i}^{*}\|^{2},$$
where \(\overline{\mathbf{r}}_{i}^{*}\) is the sample-wise mean vector of the spatial ranks \(\mathbf{r}_{i}^{*}\) and \(c_{y}^{2} = \operatorname{ave}\{\|\mathbf{r}_{i}^{*}\|^{2}\}\) [34]. The p-value is obtained as \(E_{\eta}\bigl(I(U_{\gamma}^{2} \ge U^{2})\bigr)\), where \(E_{\eta}(\cdot)\) represents the expectation, \(\eta = (\eta_{1},\ldots,\eta_{N})\) is uniformly distributed over the N! permutations of (1, …, N), and \(U_{\gamma}^{2}\) is the value of the test statistic for a permuted sample. As the dimension p increases and the distribution becomes heavier-tailed, the performance of U² improves relative to Hotelling's T² statistic [34].

We investigated the effect of Tyler's transformation on y_i through an empirical study using a toy example. Two multivariate distributions of the response variables were considered: a bivariate normal distribution and a bivariate gamma distribution. The correlations between the two response variables were set to 0.4 and 0.8. The original data were transformed to spatial signs, and then centered spatial ranks were obtained by averaging the spatial signs of the differences. Figure 1 shows the spatial signs and ranks with and without Tyler's transformation. Spatial signs are represented as directions from the origin with unit length, and thus all the spatial signs lie on a circle of radius 1. The spatial signs with Tyler's transformation tend to be more evenly arranged around the circumference than the spatial signs without Tyler's transformation. In the Tyler's transformation case, the spatial ranks tend to spread evenly, as if they were uniformly distributed within a circle. Note that the spatial ranks before and after Tyler's transformation appear different when the correlation is high (r = 0.8); however, the change due to the transformation is negligible if the correlation is moderate (r = 0.4).

Examples of spatial signs and ranks for two bivariate distributions

MR-MDR procedure

There are many variations of MDR methods for finding GGIs. However, most MDR approaches have two core procedures: how to classify the cells into two groups and how to evaluate the interaction models. The proposed MR-MDR adopts the fuzzy clustering technique in the classification process, as in multi-CMDR [24]. For the evaluation process, MR-MDR uses a multivariate spatial rank test statistic. Figure 2 shows the flow of the MR-MDR procedure. The detailed algorithm of MR-MDR is as follows.
Overview of the MR-MDR algorithm for tenfold cross-validation and second-order interactions

Step 0. Data

Suppose there are n* samples, with p SNP data-points and q continuous phenotypes. Standardize all the phenotypes to have a mean of zero and unit variance. Let \(\mathbf{Y}_{i} = (y_{i1}, y_{i2}, \ldots, y_{iq})^{T}\) be the standardized phenotype vector and \(\mathbf{X}_{i} = (x_{i1}, x_{i2}, \ldots, x_{ip})^{T}\) the genotype vector for the ith subject.

Step 1. Global process: clustering

Perform fuzzy k-means clustering with k = 2 using the phenotype information. We add a noise cluster such that noisy data points may be assigned to the noise class [43]. Remove all the samples in the noise cluster. The remaining samples have membership degrees for each of the two clusters. Denote these two clusters as C1 and C2. Define the global ratio θ as
$$\theta = \frac{\sum_{i=1}^{n} D_{i1}}{\sum_{i=1}^{n} D_{i2}},$$
where n is the number of remaining samples after deleting the noise samples and D_{ik} is the membership degree of the ith subject in cluster C_k (k = 1, 2) [24]. For cross-validation (CV), split the samples randomly into 10 subgroups of equal size. Let nine sets of samples be the training dataset and the remaining set be the test dataset used for evaluating the model.

Step 2. Local process: classification

To find mth-order GGIs, select a set of m SNPs from the pool of SNPs. Calculate the local ratio θ_j for the jth genotype combination in the training set,
$$\theta_{j} = \frac{\sum_{i=1}^{n_{j}} D_{ij1}}{\sum_{i=1}^{n_{j}} D_{ij2}}, \quad j = 1, \ldots, 3^{m},$$
where D_{ijk} is the membership degree of the ith subject with the jth genotype combination in cluster C_k. Label each genotype combination "H" if θ_j ≥ θ, or "L" if θ_j < θ.

Step 3. Local process: evaluation

Obtain the spatial ranks of Y_i for the combined samples from the two clusters, for i = 1, 2, …, n:
$$\mathbf{r}_{i}^{*} = \operatorname{ave}_{j}\{\mathbf{S}(A_{y}(\mathbf{y}_{i}-\mathbf{y}_{j}))\} = \frac{1}{n}\sum_{j=1}^{n} \frac{A_{y}(\mathbf{y}_{i}-\mathbf{y}_{j})}{\|A_{y}(\mathbf{y}_{i}-\mathbf{y}_{j})\|},$$
where S(·) is the spatial sign function and A_y is Tyler's transformation. Calculate the multivariate Mann–Whitney test statistic:
$$U^{2} = \frac{p}{c_{y}^{2}}\sum_{k=1}^{2} n_{k}\,\|\overline{\mathbf{R}}_{k}\|^{2},$$
where p is the number of variables, n_k is the number of samples in cluster C_k, \(\overline{\mathbf{R}}_{k}\) is the mean vector of the spatial ranks for cluster C_k, and \(c_{y}^{2} = \operatorname{ave}\{\|\mathbf{r}_{i}^{*}\|^{2}\}\). Obtain U² for every m-SNP combination. Choose the SNP combination with the largest U² statistic as the best mth-order interaction model in the training data.

Step 4. Selection process: best interaction model

Repeat steps 2 and 3 for each fold and count the number of times each specific SNP combination is chosen as the best model. We call this the cross-validation consistency (CVC). Select the model with the largest CVC as the final best interaction model.
Derive the final rank-sum statistic for the best model by averaging all statistics over the test data. Calculate the empirical p-value by a permutation test.

Simulation analysis

To compare the performance of the proposed MR-MDR with other existing methods, we conducted simulations for various situations. We considered 20 SNPs, including two-way disease-causal SNPs. For each of the combinations of seven different heritability values (0.01, 0.025, 0.05, 0.1, 0.2, 0.3, 0.4) and two minor allele frequency (MAF) values (0.2, 0.4), five different interaction models (M1–M5) were considered. Typically, the penetrance rate is defined as the proportion of individuals with a given genotype who exhibit the phenotype associated with that genotype. However, this is not appropriate for continuous phenotypes, and there is no clear corresponding definition for a continuous phenotype. For QMDR, the penetrance for continuous phenotypes was defined as a function of the mean [15]. Similarly, we set the penetrance rate as a function of the mean of the response variable in each genotype. Thus, a total of 70 epistatic models with various penetrance functions were generated, as done by Velez et al. [7]. All the models had little marginal effect. For the phenotype distribution, we considered a bivariate normal distribution and a bivariate gamma distribution. The correlations of the bivariate phenotypes also varied (0, 0.25, 0.5). Sample sizes of 100, 200, and 400 were considered. Finally, a total of 1000 replicated data sets were generated for each of the 1260 (= 70 × 2 × 3 × 3) combinations. We used both a Tyler's-transformed version (MR-MDR_transformed) and an untransformed version (MR-MDR_untransformed). For the purpose of comparison, the multi-CMDR and multi-QMDR methods were also used. All three summary statistics of multi-QMDR—the first principal component (FPC), weighted sum of principal components (WPC), and weighted squared sum of principal components (WSPC)—were included. To compare with the univariate approach, QMDR was also performed for each phenotype separately. Tenfold CV was considered. A summary of the simulation scheme is shown in Table 1.

Table 1 Summary of the simulation scheme

Since the epistatic models given by Velez et al. [7] were devised only for discrete phenotypes, we modified them to handle continuous phenotypes. Let \(\mathrm{SNP}_{1}\) and \(\mathrm{SNP}_{2}\) be the two true causal SNPs, \(\mathbf{Y} = (Y_{1}, Y_{2})^{T}\) the continuous bivariate phenotypes, and f_{ij} the penetrance function for the ith genotype of \(\mathrm{SNP}_{1}\) and jth genotype of \(\mathrm{SNP}_{2}\) in [7]. For the bivariate normal distribution, \(\mathbf{Y} = (Y_{1}, Y_{2})^{T}\) was generated by
$$\mathbf{Y} \mid (\mathrm{SNP}_{1} = i,\ \mathrm{SNP}_{2} = j) \sim MN(\boldsymbol{\mu}_{ij}, \boldsymbol{\Sigma}),$$
where \(\boldsymbol{\mu}_{ij} = \begin{pmatrix} f_{ij} \\ f_{ij} \end{pmatrix}\) and \(\boldsymbol{\Sigma} = \begin{pmatrix} 1 & \rho \\ \rho & 1 \end{pmatrix}\). We used the mvrnorm() function of the MASS package in R. Three different values of ρ (0, 0.25, and 0.5) were considered, as mentioned above. For the skewed, asymmetrically distributed phenotypes, we used the copula-based multivariate gamma model [44].
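To make the data-generating step concrete, the hedged R sketch below draws the phenotype pair for a single genotype cell under both settings: the bivariate normal model above and the copula-based gamma model whose density is given in the next paragraph. The penetrance value, the cell sample size, and the reading of 0.5 as the rate parameter of the gamma margins are assumptions made for illustration only.

```r
library(MASS)     # mvrnorm()
library(copula)   # normalCopula(), mvdc(), rMvdc()

f_ij <- 0.6       # assumed penetrance value for one genotype combination
rho  <- 0.25      # correlation between the two phenotypes
n_ij <- 50        # assumed number of subjects in this genotype cell

# Bivariate normal phenotypes: mean (f_ij, f_ij), unit variances, correlation rho.
Sigma    <- matrix(c(1, rho, rho, 1), nrow = 2)
y_normal <- mvrnorm(n_ij, mu = c(f_ij, f_ij), Sigma = Sigma)

# Copula-based bivariate gamma phenotypes: Gamma(2 * f_ij, rate = 0.5) margins
# coupled through a normal copula with parameter rho.
gamma_mv <- mvdc(copula       = normalCopula(param = rho, dim = 2),
                 margins      = c("gamma", "gamma"),
                 paramMargins = list(list(shape = 2 * f_ij, rate = 0.5),
                                     list(shape = 2 * f_ij, rate = 0.5)))
y_gamma  <- rMvdc(n_ij, gamma_mv)
```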
A copula-based bivariate gamma distribution is given by
$$f(y_{1}, y_{2}) = c(u, \boldsymbol{\Sigma}) \prod_{k=1}^{2} f_{k}(y_{k}),$$
where \(c(u, \boldsymbol{\Sigma}) = |\boldsymbol{\Sigma}|^{-\frac{1}{2}} \exp\!\left[-\frac{\tilde{\mathbf{u}}^{\prime}(\boldsymbol{\Sigma}^{-1} - I)\tilde{\mathbf{u}}}{2}\right]\), \(\tilde{\mathbf{u}} = (\Phi^{-1}(u_{1}), \Phi^{-1}(u_{2}))^{\prime}\), and \(u_{k} = F_{k}(y_{k})\) for k = 1, 2. Here f_k and F_k are the marginal probability density function and cumulative distribution function of the kth phenotype, respectively, and Φ⁻¹ is the inverse of the standard normal cumulative distribution function. In this simulation, we set
$$y_{k} \mid (\mathrm{SNP}_{1} = i,\ \mathrm{SNP}_{2} = j) \sim \mathrm{Gamma}(2f_{ij},\ 0.5)$$
for k = 1, 2. We used the mvdc(), rMvdc(), and normalCopula() functions in the copula package in R.

The power was estimated by the hit ratio, which is the proportion of times that each method correctly chose the causal SNP pair (\(\mathrm{SNP}_{1}\) and \(\mathrm{SNP}_{2}\)) as the best model among all possible two-way interaction models, out of each set of 1000 datasets. Figure 3 shows the hit ratios of the eight different methods for the bivariate normal distribution. The power of multi-QMDR using FPC (multi-QMDR_FPC) was slightly higher than that of the proposed MR-MDR when there was no correlation between phenotypes. However, the difference between multi-QMDR_FPC and MR-MDR decreased as the correlation became stronger. The performances of the transformed version (MR-MDR_transformed) and the untransformed version (MR-MDR_untransformed) were almost the same. Multi-QMDR with WSPC (multi-QMDR_WSPC) showed even lower power than the QMDR method.

Hit ratios for a bivariate normal distribution

Figure 4 shows the hit ratios for a bivariate gamma distribution. The proposed MR-MDR outperformed the other methods for all ranges of heritability. There was little difference between the performance of the two versions of MR-MDR, and the differences between them were less than 0.01 in all situations. The power of multi-CMDR was the next highest. It should be noted that multi-CMDR uses the fuzzy clustering approach to split the data, as in MR-MDR. The gap between MR-MDR and the other methods became larger as the sample size decreased or the correlation became stronger. Multi-QMDR_FPC and multi-QMDR using the WPC (multi-QMDR_WPC) showed lower power than MR-MDR and multi-CMDR, but higher power than QMDR. The performance of multi-QMDR_WSPC was still poor, although the difference was smaller than for the bivariate normal distribution.

Hit ratios for a bivariate gamma distribution

Through these simulation studies, we demonstrated that the proposed MR-MDR outperformed the other existing methods for all ranges of heritability when the phenotypes were asymmetrically distributed. However, when the phenotypes were symmetrically distributed, as in a normal distribution, all methods yielded similar performance. Three additional simulations were conducted to investigate further properties of the proposed method: the robustness of fuzzy k-means clustering, the effect of the noise cluster size, and the effect of outliers. First, to explore the robustness of the fuzzy k-means clustering in the MR-MDR algorithm, we performed a comparison study to investigate the effect of a log-transformation on phenotypes generated from the bivariate gamma distribution.
The power of MR-MDR after log-transformation was obtained for each of the seven heritabilities. We set the correlation coefficient between the two phenotypes to 0.25 and the sample size to 200. The average power of MR-MDR over the five interaction models (M1–M5) for 1000 random samples is given in Table 2. The power was slightly reduced after log-transformation. Therefore, we can conclude that the fuzzy k-means clustering is robust to skewed distributions and does not require any further transformation of the original data.

Table 2 Hit ratios of MR-MDR according to log transformation for a bivariate gamma distribution (r = 0.25, n = 200)

Secondly, we investigated the efficiency according to the size of the noise cluster, since a large noise cluster can be a source of loss of efficiency. The size of the noise cluster depends on the noise distance δ, which needs to be chosen in advance. If δ is too large, outliers are not removed and are classified into a real cluster. On the other hand, if δ is too small, many observations can be misplaced into the noise cluster. Still, the estimation of the optimal value of δ is an open problem [45]. In our approach, we used an iterative procedure to determine the value of δ, which is the default of the FKM.noise procedure in R. To check the efficiency, we compared the performance for various values of δ. The results are shown in Table 3. For both the bivariate normal and bivariate gamma distributions, the average power of MR-MDR over the five interaction models (M1–M5) is given in Table 3. This simulation shows that the iteratively determined δ yielded the highest hit ratios for all heritabilities. Note that in this setting no outliers were generated. Had outliers been present, a smaller noise cluster would have been created.

Table 3 Hit ratios of MR-MDR according to the noise distance δ (r = 0.25, n = 200)

Thirdly, we conducted an additional simulation to investigate the effect of outliers. The power of MR-MDR for datasets with and without outliers was obtained for each of the seven heritabilities. We set the correlation coefficient between the two phenotypes to 0.25 and the sample size to 200, as in Table 3. The power over the five interaction models (M1–M5) was obtained for 1000 random samples. For the datasets with outliers, about 5% of the data were replaced by outliers. For both phenotypes, outliers were generated by adding three times the IQR to a randomly chosen 5% of the samples. The results are summarized in Table 4. In the presence of outliers, the power tended to decrease for all methods. The differences were smallest for MR-MDR, indicating the minimum power loss of MR-MDR. As a result, MR-MDR showed much higher power than the other methods in the presence of outlying observations.

Table 4 Hit ratios of MR-MDR according to the presence or absence of outliers (r = 0.25, n = 200)

Real data analysis

We illustrate the proposed MR-MDR method by analyzing data from the KoGES [46]. The data were collected from two recruitment areas, each region being a cohort representing a city (Ansan) or a rural area (Anseong). After standard quality control procedures for the subjects and SNPs, a total of 8842 participants and 327,872 SNPs were used for this analysis. We considered four phenotypes related to kidney function: BUN, serum creatinine, urinary albumin levels, and urinary RBC levels. Traditionally, BUN and serum creatinine levels have been used as surrogate markers of kidney function deterioration [47]. The amounts of albumin and RBC in urine could also be good indicators of kidney disease.
Although there have been some studies on associations between genes and kidney-related phenotypes, few studies have performed GGI analyses for these phenotypes [48, 49]. In the case of albumin, the urine test is known to be more accurate than the blood test, so the urine test result is used here. However, urine tests were conducted only on a relatively small number of subjects, which resulted in missing phenotype observations. Given this high rate of missing data, imputation of the phenotypes is not appropriate. We removed observations with at least one missing phenotype value, and 3267 samples remained. A linear model was fit separately to each phenotype with covariate adjustments for sex, age and recruitment area. Finally, 205 SNPs were selected that had significant marginal effects (p < 1 × 10⁻⁷). Residuals were used for the analysis to control for covariate effects. The largest correlation coefficient was 0.32, which was the correlation between BUN and creatinine. The distributions of the phenotypes were heavily skewed. Figure 5 shows scatter plots and box plots of the phenotypes.

Scatter plot and box plot of four phenotypes after adjustment for sex, age and recruitment area. The numbers in the scatter plot are correlation coefficients

We applied the proposed MR-MDR method to identify the best first- and second-order interaction models. Table 5 shows the centers of the global clusters from fuzzy k-means clustering in step 1. Cluster centers were determined by the means of each phenotype across the samples belonging to the cluster. The clusters differed greatly for BUN and serum creatinine. There seemed to be no difference in RBC levels between the two clusters. Since higher values of BUN and creatinine indicate abnormal kidney function, we can interpret the first cluster as a low-risk group and the second cluster as a high-risk group.

Table 5 Average of phenotypes for global clusters

We selected the SNP pair with the largest CVC as the best interaction model for each order. The test score was the average of the spatial rank-sum statistics calculated from the test data sets. Empirical p-values were obtained by comparing the test score with the statistics from permuted datasets generated by shuffling the phenotype vectors. If the score calculated with the permuted data exceeded our score, that case was counted. The p-value was then calculated as a/b, where a is the number of cases with a permuted score higher than the original score and b is the total number of trials. The interaction models that had the highest training score at least once during the tenfold CV process of MR-MDR are listed in Table 6. For the first-order interaction, rs41476549 had the highest CVC and was selected as the most relevant to the four phenotypes. rs790410 was selected as the best only once during the 10 CV folds but showed the highest test score. All SNPs except rs17168600 had a p-value lower than 0.05. For the second-order interaction model, the pair of rs41476549 and rs1117105 showed the highest CVC, while the pair of rs790410 and rs961413 yielded the highest test score. However, the CVC value was low in most cases. Among the selected SNPs, rs16862782 on chromosome 3 has been reported to be related to BUN in Koreans and to serum urea measurement in Europeans [47, 50]. rs4686914 on chromosome 3 is also known to be related to kidney function in European and East Asian populations [47, 51]. Both SNPs map to the LINC01991 gene. rs17586294 maps to the TUBBP11 gene.
To the best of our knowledge, there are no studies that have analyzed kidney-related GGIs in a multivariate manner. Therefore, none of the interactions among the selected SNP pairs have been reported before.

Table 6 Best interaction models identified by MR-MDR, multi-CMDR and multi-QMDR

We also applied the multi-CMDR and multi-QMDR methods for comparison. Only multi-QMDR using FPC was considered, because it showed the highest power among the three types of multi-QMDR. There are some similarities between the results of MR-MDR and multi-CMDR. However, the results are totally different for multi-QMDR, which is likely caused by some outlying observations. For example, the pair of rs17014894 and rs10517358 was chosen as the best interaction model. However, this pair suffers from sparsity and outliers. In particular, there are four cells with zero counts and one cell with a single count containing an outlying observation. Figure 6 shows the box plots of the phenotypes after removing the noise cluster for the genotype combinations of the best model selected by MR-MDR. The distribution of each phenotype was different depending on the genotype combination, suggesting that there was an interaction effect.

Box plots of four phenotypes after removing the noise cluster for the best SNP combination identified by MR-MDR ((i, j): ith genotype for rs1117105 and jth genotype for rs41476549; ALBU: albumin)

To evaluate pure interaction effects for the continuous phenotypes, we adopted the classical linear model. For the selected SNP combinations, we fit a multivariate analysis of variance (MANOVA) model including the two main effects and the interaction effect. The SNP effects were tested for the interaction effect only (H01: β12 = 0) and for the total effects including both main and interaction effects (H02: β1 = β12 = 0 and H03: β2 = β12 = 0). The results for nine SNP pairs are summarized in Table 7. None of the selected SNP pairs showed significant interaction effects (p > 0.05), while some pairs showed significant total effects. This is because most MDR methods tend to choose a model with a large total effect, ignoring pure interaction effects. However, our further empirical study showed that the introduction of the noise cluster by fuzzy k-means increased the power of detecting interaction effects. The same MANOVA model was fit to the trimmed data after removing the noise cluster identified by MR-MDR. As expected, several significant interaction effects were successfully identified after trimming. Although MANOVA requires a Gaussian assumption, the process of removing the noise cluster in the proposed method had the effect of yielding more robust and reliable MANOVA results. Among the nine SNP pairs with non-significant interactions in the original data, four pairs showed significant interaction effects after trimming. The results are summarized in the last three columns of Table 7.

Table 7 p-values from the MANOVA test for the SNP combinations selected by MR-MDR

The proposed MR-MDR method is a non-parametric approach assuming no particular genetic model. To assign samples to two risk groups, MR-MDR uses the fuzzy clustering technique, as in the multi-CMDR method. MR-MDR uses the spatial rank-sum statistic as the evaluation measure for comparing the two groups, whereas Hotelling's T² statistic is used in multi-CMDR and multi-QMDR. Therefore, robust results can be expected from MR-MDR for skewed distributions or those with outliers. When calculating the spatial rank statistic, a data-driven transformation matrix was needed to make the statistic affine-invariant.
It is known that an affine-invariant test performs better than noninvariant angle and coordinate-wise sign tests when there is significant deviation from spherical symmetry caused by the presence of correlations among the observed variables. Moreover, the affine-invariant test outperforms Hotelling's T² test for heavy-tailed non-normal multivariate distributions [52]. As can be seen in our toy example, the invariant statistic differed from the untransformed statistic, especially in the presence of a high correlation between phenotypes. The problem with using Tyler's transformation is that the statistic takes much longer to calculate than the untransformed statistic. However, the change in the spatial ranks due to Tyler's transformation is trivial even when the correlation is moderate (r = 0.4), as seen in the toy example. Moreover, the simulation results showed that there was little improvement in performance compared to the untransformed version of MR-MDR. This phenomenon was also observed in the case of negative correlation. Therefore, we recommend using the untransformed MR-MDR version if the absolute value of the correlation between phenotypes is not too high (e.g., |r| < 0.5).

To apply the proposed method to GWAS data, we considered a filtering procedure to reduce the number of SNPs to be investigated. We selected SNPs with significant marginal effects and investigated their interactions. Since MDR is an exhaustive method, this kind of filtering is needed. However, this filtering process can lead to ignoring gene–gene interactions for SNPs with weak marginal effects.

In this paper, we proposed the MR-MDR method to detect the best interaction model for multivariate quantitative traits. MR-MDR is based on the fuzzy clustering technique and the spatial rank-sum statistic. Intensive simulation studies comparing MR-MDR with several current methods showed that the performance of MR-MDR was outstanding for skewed distributions. The difference in power between MR-MDR and the other methods increased as the sample size became smaller and the correlation became stronger. Additionally, for symmetric distributions, MR-MDR showed comparable power. Therefore, we conclude that MR-MDR is a useful multivariate non-parametric approach that can be used regardless of the phenotype distribution, the correlations between phenotypes, and the sample size.

The Korea Association Resource (KARE) project data will be publicly distributed by the Distribution Desk of the Korea Biobank Network (https://koreabiobank.re.kr/). Data requests should be made directly to the Distribution Desk of the Korea Biobank Network. Any inquiries should be sent to [email protected].

GGI: Gene–gene interaction
MDR: Multifactor dimensionality reduction
CVC: Cross-validation consistency
SNP: Single nucleotide polymorphism
BUN: Blood urea nitrogen
RBC: Red blood cell
MAF: Minor allele frequency
FPC: First principal component
WPC: Weighted sum of principal components
WSPC: Weighted squared sum of principal components
MANOVA: Multivariate analysis of variance

Grove J, Ripke S, Als TD, Mattheisen M, Walters RK, Won H, Pallesen J, Agerbo E, Andreassen OA, Anney R, et al. Identification of common genetic risk variants for autism spectrum disorder. Nat Genet. 2019;51(3):431–44. McCarthy MI, Zeggini E. Genome-wide association studies in type 2 diabetes. Curr Diab Rep. 2009;9(2):164–71. Levy D, Ehret GB, Rice K, Verwoert GC, Launer LJ, Dehghan A, Glazer NL, Morrison AC, Johnson AD, Aspelund T, et al. Genome-wide association study of blood pressure and hypertension.
Nat Genet. 2009;41(6):677–87. Gola D, Mahachie John JM, van Steen K, Konig IR. A roadmap to multifactor dimensionality reduction methods. Brief Bioinform. 2016;17(2):293–308. Ritchie MD, Hahn LW, Roodi N, Bailey LR, Dupont WD, Parl FF, Moore JH. Multifactor-dimensionality reduction reveals high-order interactions among estrogen-metabolism genes in sporadic breast cancer. 2001. Ritchie MD, Van Steen K. The search for gene–gene interactions in genome-wide association studies: challenges in abundance of methods, practical considerations, and biological interpretation. Ann Transl Med. 2018;6(8):157. Velez DR, White BC, Motsinger AA, Bush WS, Ritchie MD, Williams SM, Moore JH. A balanced accuracy function for epistasis modeling in imbalanced datasets using multifactor dimensionality reduction. Genet Epidemiol. 2007;31(4):306–15. Moore JH, Gilbert JC, Tsai CT, Chiang FT, Holden T, Barney N, White BC. A flexible computational framework for detecting, characterizing, and interpreting statistical patterns of epistasis in genetic studies of human disease susceptibility. J Theor Biol. 2006;241(2):252–61. Chung Y, Lee SY, Elston RC, Park T. Odds ratio based multifactor-dimensionality reduction method for detecting gene–gene interactions. Bioinformatics. 2007;23(1):71–6. Hahn LW, Ritchie MD, Moore JH. Multifactor dimensionality reduction software for detecting gene–gene and gene-environment interactions. Bioinformatics. 2003;19(3):376–82. Lee SY, Chung Y, Elston RC, Kim Y, Park T. Log-linear model-based multifactor dimensionality reduction method to detect gene gene interactions. Bioinformatics. 2007;23(19):2589–95. Gui J, Andrew AS, Andrews P, Nelson HM, Kelsey KT, Karagas MR, Moore JH. A robust multifactor dimensionality reduction method for detecting gene–gene interactions with application to the genetic analysis of bladder cancer susceptibility. Ann Hum Genet. 2011;75(1):20–8. Hua X, Zhang H, Zhang H, Yang Y, Kuk AYC. Testing multiple gene interactions by the ordered combinatorial partitioning method in case–control studies. Bioinformatics. 2010;26(15):1871–8. Lou XY, Chen GB, Yan L, Ma JZ, Zhu J, Elston RC, Li MD. A generalized combinatorial approach for detecting gene-by-gene and gene-by-environment interactions with application to nicotine dependence. Am J Hum Genet. 2007;80(6):1125–37. Gui J, Moore JH, Williams SM, Andrews P, Hillege HL, van der Harst P, Navis G, Van Gilst WH, Asselbergs FW, Gilbert-Diamond D. A simple and computationally efficient approach to multifactor dimensionality reduction analysis of gene–gene interactions for quantitative traits. PLoS ONE. 2013;8(6):e66545–e66545. Lee Y, Park M, Park T, Kim H. Gene–gene interaction analysis for quantitative trait using cluster-based multifactor dimensionality reduction method. Int J Data Min Bioinform. 2018;20(1):66. Gui J, Moore JH, Kelsey KT, Marsit CJ, Karagas MR, Andrew AS. A novel survival multifactor dimensionality reduction method for detecting gene–gene interactions with application to bladder cancer prognosis. Hum Genet. 2011;129(1):101–10. Lee S, Kwon MS, Oh JM, Park T. Gene–gene interaction analysis for the survival phenotype based on the Cox model. Bioinformatics. 2012;28(18):i582–8. Oh S, Lee S. An extension ofmultifactor dimensionality reduction method for detecting gene–gene interactions with the survival time. J Korean Data Inf Sci Soc. 2014;25(5):1057–67. Park M, Lee JW, Park T, Lee S. Gene–gene interaction analysis for the survival phenotype based on the Kaplan–Meier median estimate. Biomed Res Int. 2020;2020:5282345. 
Oh S, Huh I, Lee SY, Park T. Analysis of multiple related phenotypes in genome-wide association studies. J Bioinform Comput Biol. 2016;14(5):1644005. Choi J, Park T. Multivariate generalized multifactor dimensionality reduction to detect gene–gene interaction. BMC Syst Biol. 2013;6:66. Yu W, Kwon MS, Park T. Multivariate quantitative multifactor dimensionality reduction for detecting gene–gene interactions. Hum Hered. 2015;79(3–4):168–81. Kim H, Jeong H-B, Jung H-Y, Park T, Park M. Multivariate cluster-based multifactor dimensionality reduction to identify genetic interactions for multiple quantitative phenotypes. Biomed Res Int. 2019;2019:4578983. Anderson MJ. A new method for non-parametric multivariate analysis of variance. Austral Ecol. 2001;26:32–46. Randles RH, Peters D. Multivariate rank tests for the two-sample location problem. Commun Stat. 1990;19(11):4225–38. Peters D, Randles RH. A multivariate signed-rank test for the one-sample location problem. J Am Stat Assoc. 1990;85(410):552–7. Sirkiä S, Taskinen S, Oja H. Symmetrised M-estimators of multivariate scatter. J Multivar Anal. 2007;98(8):1611–29. Liu RY, Singh K. A quality index based on data depth and multivariate rank tests. J Am Stat Assoc. 1993;88:252–60. Liu RY, Singh K. Ordering directional data: concepts of data depth on circles and spheres. Ann Stat. 1992;20(3):1468–84. Zuo Y, He X. On the limiting distributions of multivariate depth-based rank sum statistics and related tests. Ann Stat. 2006;34(6):2879–96. Liu RY, Parelius JM, Singh K. Multivariate analysis by data depth: descriptive statistics, graphics and inference (with discussion and a rejoinder by Liu and Singh). Ann Stat. 1999;27(3):783–858. Vencálek O. Concept of data depth and its applications. Mathematica. 2001;50(2):111–9. Oja H, Randles RH. Multivariate nonparametric tests. Stat Sci. 2004;19(4):598–605. Choi K, Marden J. An approach to multivariate rank tests in multivariate analysis of variance. J Am Stat Assoc. 1997;92(440):1581–90. Wang L, Peng B, Li R. A high-dimensional nonparametric multivariate test for mean vector. J Am Stat Assoc. 2015;110(512):1658–69. Chakraborty A, Chaudhuri P. Tests for high-dimensional data based on means, spatial signs and spatial ranks. Ann Stat. 2017;45(2):771–99. Fried R, Dehling H. Robust nonparametric tests for the two-sample location problem. Stat Methods Appl. 2011;20(4):409–22. Sirkiä S, Taskinen S, Nevalainen J, Oja H. Multivariate nonparametrical methods based on spatial signs and ranks: the R package SpatialNP. J Stat Softw. 2007;6:66. Tyler D. A distribution-free M-estimator of multivariate scatter. Ann Stat. 1987;15:66. Oja H. Multivariate nonparametric methods with R: an approach based on spatial signs and ranks. Springer; 2010. Liu RY. Data depth: robust multivariate analysis, computational geometry, and applications, vol. 72. American Mathematical Society; 2006. Dave RN. Characterization and detection of noise in clustering. Pattern Recogn Lett. 1991;12(11):657–64. Stitou Y, Lasmar N-E, Berthoumieu Y. Copulas based multivariate Gamma modeling for texture classification. 2012. Cimino MGCA, Frosini G, Lazzerini B, Marcelloni F. On the noise distance in robust fuzzy C-means. In: Proceedings of world academy of science, engineering and technology; 2005. p. 1. ISSN 1307-6884. Kim Y, Han B-G, the KoGES group. Cohort Profile: the Korean Genome and Epidemiology Study (KoGES) Consortium. Int J Epidemiol. 2016;46(2):e20–e20. Lee J, Lee Y, Park B, Won S, Han JS, Heo NJ.
Genome-wide association analysis identifies multiple loci associated with kidney disease-related traits in Korean populations. PLoS ONE. 2018;13(3):e0194044. Freedman BI, Skorecki K. Gene–gene and gene-environment interactions in apolipoprotein L1 gene-associated nephropathy. Clin J Am Soc Nephrol. 2014;9(11):2006–13. Tin A, Köttgen A. Genome-wide association studies of CKD and related traits. Clin J Am Soc Nephrol. 2020;6:66. Sinnott-Armstrong N, Tanigawa Y, Amar D, Mars N, Benner C, Aguirre M, Venkataraman GR, Wainberg M, Ollila HM, Kiiskinen T, Havulinna AS, Pirruccello JP, Qian J, Shcherbina A, FinnGen, Rodriguez F, Assimes TL, Agarwala V, Tibshirani R, Hastie T, Ripatti S, Pritchard JK, Daly MJ, Rivas MA. Genetics of 35 blood and urine biomarkers in the UK Biobank. Nat Genet. 2021;53(2):185–94. Thio CHL, Reznichenko A, van der Most PJ, Kamali Z, Vaez A, Smit JH, Penninx BWJH, Haller T, Mihailov E, Metspalu A, Damman J, de Borst MH, van der Harst P, Verweij N, Navis GJ, Gansevoort RT, Nolte IM, Snieder H; Lifelines Cohort Study group. Genome-wide association scan of serum urea in European populations identifies two novel loci. Am J Nephrol. 2019;49(3):193–202. Chakraborty B, Chaudhuri P, Oja H. Operating transformation retransformation on spatial median and angle test. Stat Sin. 1998;8(3):767–84.

This work was supported by the National Research Foundation of Korea (NRF) grant funded by the Korea government (MSIT) (2013M3A9C4078158, NRF-2021R1A2C1007788).

Department of Preventive Medicine, Eulji University, Daejeon, 34824, Republic of Korea: Mira Park
Department of Statistics, Korea University, Seoul, 02841, Republic of Korea: Hoe-Bin Jeong & Jong-Hyun Lee
Department of Statistics, Seoul National University, Seoul, 08826, Republic of Korea: Taesung Park

Conceptualization, M.P.; methodology, M.P. and T.P.; software, H.J. and J.L.; formal analysis, H.J. and J.L.; writing—original draft preparation, M.P.; writing—review and editing, T.P. All authors read and approved the final manuscript. Correspondence to Taesung Park.

The study was reviewed and approved by the Institutional Review Board of Seoul National University (IRB No. E1908/001004). The patients/participants provided their written informed consent to participate in this study. All methods were carried out in accordance with relevant guidelines and regulations (Declaration of Helsinki). The authors declare no competing interests.

Park, M., Jeong, HB., Lee, JH. et al. Spatial rank-based multifactor dimensionality reduction to detect gene–gene interactions for multivariate phenotypes. BMC Bioinformatics 22, 480 (2021). https://doi.org/10.1186/s12859-021-04395-y

Keywords: Fuzzy clustering, Spatial rank statistic
Sig2GRN: a software tool linking signaling pathway with gene regulatory network for dynamic simulation Fan Zhang1, Runsheng Liu1 & Jie Zheng1,2,3 Linking computational models of signaling pathways to predicted cellular responses such as gene expression regulation is a major challenge in computational systems biology. In this work, we present Sig2GRN, a Cytoscape plugin that is able to simulate time-course gene expression data given the user-defined external stimuli to the signaling pathways. A generalized logical model is used in modeling the upstream signaling pathways. Then a Boolean model and a thermodynamics-based model are employed to predict the downstream changes in gene expression based on the simulated dynamics of transcription factors in signaling pathways. Our empirical case studies show that the simulation of Sig2GRN can predict changes in gene expression patterns induced by DNA damage signals and drug treatments. As a software tool for modeling cellular dynamics, Sig2GRN can facilitate studies in systems biology by hypotheses generation and wet-lab experimental design. Availability: http://histone.scse.ntu.edu.sg/Sig2GRN/ One of the major forms of cellular responses to extracellular perturbations is to change the gene expression in response to the cellular signals transmitted by signaling pathways. Diverse stimuli can be converted into a series of intercellular reactions through signal transduction pathways which generate various transcription factor activities, thereby producing different gene expression patterns that result in subsequent cellular behaviors. Over the past few decades, many studies have presented various computational strategies, such as data-driven, logic-based and biochemical kinetic methods, in modeling signaling pathways or gene regulatory networks separately. Data-driven methods [1–4], which are constructed mainly based on statistical modeling, show great potential when the underlying biological mechanisms are unclear. Logic-based models, such as Boolean Network [5–11] and generalized logic models [12–16] are suitable formalisms for modeling relatively large networks in which the detailed kinetic parameters are not fully available. If the underlying biochemical mechanisms are known, biochemical kinetic modeling [17–20] is a well-established strategy for describing the dynamic sub-cellular systems using a set of mathematical equations. In the field of gene regulation, thermodynamic models have also been successfully applied [21–23] besides the aforementioned methods. Despite the many models of signaling pathways and gene regulatory network (GRN), it is still a big challenge to link the models of signal transduction with the downstream gene expression regulation. To address this challenge, Peng et al. [24] used a set of differential equations to do forward simulations of the NF-kB signaling pathway and then used Network Component Analysis [25–27], a data-driven method based on matrix decomposition, to reversely engineer a gene regulatory network (GRN). Then they matched the forward simulations and reverse engineering results and successfully linked the signaling profiles with the subsequent gene expression profiles. However, their method needs detailed kinetic parameters which may not be available as yet in many cases. A similar study of Melas et al. 
[28] first employed a multi linear regression algorithm to identify correlation-based relationships between signaling proteins and cellular responses (e.g., cytokine releases) and connected them using "non-canonical" edges. Integrating a canonical network of the signaling pathway from prior knowledge, the whole network was then converted into a Boolean model. Next, they optimized the network against the experimental data using Integer Linear Programming [8] and identified the pathway activities that induced the diverse cellular responses. Their reconstructed model is able to predict the dynamics of signaling pathways and cellular responses; however, because the biological meaning of the "non-canonical" edges learnt from the data is difficult to interpret, their model can hardly reveal the molecular mechanisms of how signal transduction regulates gene expression. Here, we present Sig2GRN, a software tool which links the models of signaling pathway with gene regulatory networks (GRNs). A generalized logical model, which we proposed previously in [13] and is based on network topology, is employed here to capture the dynamical trends of transcription factors in cellular signaling pathways. Then two different models, i.e., a Boolean model and a thermodynamics-based model [21], are integrated to predict the downstream gene expression patterns based on the predicted transcription factor activities. As a Java plugin for Cytoscape [29] (version 2.8.3), Sig2GRN is able to simulate the dynamics of the signaling pathways and the subsequent time-series gene expression data. We first provide an overview of Sig2GRN's core functionalities, and then describe two case studies to illustrate the usage and performance of Sig2GRN. Methods and implementation Generalized logical modeling of signaling pathways for predicting transcription factor activities Sig2GRN takes a directed graph as the input where each vertex x in the network represents a molecular species (e.g., a signaling protein, a transcription factor or a gene) and each directed edge (u, v) denotes a biological interaction (e.g., protein phosphorylation or transcriptional regulation) from node u to node v. The input network is divided into two layers, i.e., the upstream signaling pathways and the downstream gene regulatory network, according to the type of biological interactions from prior knowledge (Fig. 1). The simulation starts from the user-defined perturbations and generates the dynamical trends of the signaling proteins using a generalized logical model in our previous work [13]. The goal of the upstream simulation is to generate the dynamical trends of the transcription factors under the perturbations. Then the simulated transcription factor activities are employed to predict the gene expression patterns over time, using either a Boolean model or a thermodynamic-based model [21], which can be selected by users. Therefore, the two layers of network are linked together by the transcription factors, and the cellular responses can be predicted given the extracellular perturbations. The illustration of the overall strategy. A generalized logical model is used in modeling the upstream signaling pathways to generate the activities of the transcription factors. 
Then the simulated transcription factor activities are used to predict the downstream gene expression according to either a Boolean model or a thermodynamic-based model In the upstream signaling pathways, the state S t (with value ∈[0,1], where 0 means fully inhibited and 1 means fully activated) of node s at the t-th simulation iteration is updated based on its previous state at the (t−1)-th iteration and the incoming signals from its parent nodes, according to Eq. (1) [13], $$\begin{array}{*{20}l} {} S_{t} = & \left(1 \!- \!d\right)S_{t-1}+\left[1-\prod(1-A_{i})\right]\prod(1\,-\,B_{j})(1-S_{t-1}) \\ & -\prod(1-A_{i})\left[1-\prod(1-B_{j})\right]S_{t-1} \end{array} $$ where A i (or B j ) represents the amount of signals transmitted from the i-th activating (or j-th blocking) parent node upstream of s, and d is the degradation rate (value ∈(0,1)) at each iteration. Using this model, we have successfully predicted the dynamics of a cancer signaling pathway under various perturbations [13]. In this work, we select the simulated transcription factor activities (e.g., the proportion of the concentration of transcription factor in the active form) as the output of the upstream generalized logical mode and use them as the input to the downstream models to further predict the gene expressions as shown in Fig. 1. Boolean modeling of transcriptional regulation Once the time-series data of the transcription factor activities (value ∈[0,1] at each simulation iteration) are generated, users can select either a Boolean model [30] or a thermodynamic model [21] to predict the subsequent gene expression patterns. Under the Boolean scenario, the AND logical relation is assigned to the transcription factors that have the same transcriptional regulation type (e.g., activation or inhibition) for a gene, so that the gene will be switched ON (or OFF) when the maximum activity level of activating (or inhibiting) transcription factors surpasses a user-defined threshold (value ∈(0,1)). When both activation and inhibition regulations are present on the same gene, the inhibition is assumed to precede the activation. The simulation result of the Boolean model is a list of 0s and 1s, over the course of time. Thermodynamic modeling of transcriptional regulation Since the Boolean simulation only refers to whether a gene up-regulated or down-regulated without revealing to what extent they will be expressed or not, we implement a thermodynamic (also termed fractional occupancy) model [21] to describe the gradual responses of gene expression to signal transduction. The thermodynamic model is derived under the assumption that the system is in the thermodynamic equilibrium. As such, the gene expression level is defined as a function of the activity levels of the bound transcription factors as shown in Eq. 
(2) [21], $$\begin{array}{*{20}l} [E]=\frac{\sum_{i \in G}\left(\prod_{j=1}^{n_{i}}K_{j}[TF_{j}]\right)Q_{i}}{\sum_{m=1}^{N}\left(\prod_{h=1}^{n_{m}}K_{h}[TF_{h}]\right)} \end{array} $$ where [ E] is the gene expression level, N is the number of all possible arrangements of transcription factors attaching to their corresponding binding sites, G is the set of transcription factor arrangements that turn the gene on, n i (n m ) is the number of transcription factor binding sites employed in the i-th (m-th) arrangement, K j and [ TF j ] represent the binding affinity of binding site j and the activity level of the transcription factor corresponding to binding site j, and Q i is the probability of the gene being expressed when the i-th arrangement comprises the binding of both activating and inhibiting transcription factors (Q i =1 when only activating transcription factors are included). Case Study 1: DNA damage induced cell apoptosis. DNA damage caused by ionizing radiation will activate ATM, while that by UV light will activate ATR and DNA-PK [31–33]. The stimulated kinases ATM, ATR and DNA-PK can phosphorylate p53 and E2F1 transcription factors directly or indirectly via Chk1 and Chk2. The activated p53 and E2F1 can regulate transcription of apoptosis regulator Bax, Bcl-2 and Apaf-1. Figure 2 shows the regulatory cascade of DNA damage induced apoptosis regulation. The network is constructed using GeneGo MetaCore database [34]. The network of cell apoptosis regulation induced by DNA damage [34]. The signals will be transmitted from the upstream signaling proteins to the transcription factors (e.g.,p53 and E2F1), then the transcription factors will regulate the transcription of apoptosis regulators (e.g., Bax, Bcl-2, Apaf-1 and Caspases). Rectangle, diamond and ellipse nodes represent signaling proteins, transcription factors and regulated genes, respectively. Each activation interaction is denoted as a green edge with an arrow head and each inhibition interaction is represented by a red edge with a flat-head. The solid and dash lines represent signal transduction and transcription regulation, respectively Given the input data (value ∈[0,1]) associated with the receptors of the network (i.e., ATM, ATR and DNA-PK), the user-specified edge weights (value ∈[0,1]) and the number of iterations, Sig2GRN will first generate the dynamics of all the nodes' activities in the network based on Eq. (1). By manually selecting the transcription factors that regulate the transcription of the genes of interest, we can run Sig2GRN to further predict this gene's expression status over time. Figure 3 shows the simulated expression of Bax and Bcl-2 as an example. Here ATR and DNA-PK are selected as input nodes to simulate the exposure of the cells to UV light. The input levels of the input nodes were both set to 1; the edge weights of activation and inhibition interactions were set to 0.7 and 0.8, respectively; and the number of iterations was set to 100. In the Boolean model, for Bax, the selected transcription factor was p53 and the interaction type from p53 to Bax was set to activation; for Bcl-2, the transcription factors were E2F1 and p53 and the interaction types were activation and inhibition, respectively. In the thermodynamic model, the binding affinities of E2F1 and p53 were both set to 2. The parameter settings used here are only for purpose of demonstration because the prior knowledge available for parameter settings is insufficient in this case study. 
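To make the update rules above concrete, the following is a minimal Python sketch of the three read-outs used by Sig2GRN: the generalized logical update of a node's state (Eq. 1), the Boolean switch in which inhibition precedes activation, and the fractional-occupancy read-out (Eq. 2). It is not the plugin's Java code; the function names, the choice to keep the previous Boolean state when no regulator crosses the threshold, the weight of 1 assigned to the unbound arrangement, and the single q_mixed value standing in for the Q_i of mixed arrangements are assumptions of this sketch.

```python
from itertools import combinations

def logical_update(s_prev, activating, inhibiting, d=0.05):
    """One iteration of the generalized logical rule (Eq. 1).
    activating / inhibiting: incoming signals A_i, B_j in [0, 1]; d: degradation rate."""
    act = 1.0
    for a in activating:
        act *= (1.0 - a)          # act = prod(1 - A_i)
    inh = 1.0
    for b in inhibiting:
        inh *= (1.0 - b)          # inh = prod(1 - B_j)
    return ((1.0 - d) * s_prev
            + (1.0 - act) * inh * (1.0 - s_prev)   # net activating signal raises the state
            - act * (1.0 - inh) * s_prev)          # net inhibiting signal lowers it

def boolean_gene(prev_state, activators, inhibitors, threshold=0.5):
    """Boolean read-out: inhibition precedes activation; keeping the previous state when
    neither regulator exceeds the threshold is an assumption of this sketch."""
    if inhibitors and max(inhibitors) > threshold:
        return 0
    if activators and max(activators) > threshold:
        return 1
    return prev_state

def thermodynamic_gene(sites, q_mixed=0.5):
    """Fractional-occupancy read-out (Eq. 2).
    sites: list of (K, tf_level, is_activator) tuples, one per binding site.
    q_mixed stands in for Q_i of arrangements binding both activators and inhibitors;
    the unbound arrangement gets weight 1 (both are assumptions of this sketch)."""
    num, den = 0.0, 0.0
    n = len(sites)
    for k in range(n + 1):
        for arrangement in combinations(range(n), k):
            w = 1.0
            for idx in arrangement:
                K, level, _ = sites[idx]
                w *= K * level
            den += w
            has_act = any(sites[idx][2] for idx in arrangement)
            has_inh = any(not sites[idx][2] for idx in arrangement)
            if has_act:
                num += w * (1.0 if not has_inh else q_mixed)
    return num / den

# Bcl-2 as in case study 1: activated by E2F1 and inhibited by p53, both affinities set to 2.
e2f1, p53 = 0.8, 0.6   # illustrative transcription factor activities
print(boolean_gene(0, activators=[e2f1], inhibitors=[p53]))        # 0: the p53 inhibition dominates
print(thermodynamic_gene([(2.0, e2f1, True), (2.0, p53, False)]))  # graded expression, about 0.45 here
```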
Moreover, the robustness of our model to the variations of the parameters (including the edge weights and the initial values of the nodes) have been empirically demonstrated in our previous work [13]. It can be seen from Fig. 3 a that Bax is expressed (in the Boolean model, 1 means the gene is expressed) after about 12 iterations when the DNA damage signals are transmitted from p53. Similar conclusion can be drawn from the thermodynamic model in Fig. 3 b that the expression of Bax increases rapidly to a plateau. In Fig. 3 d, Bcl-2 is also turned on after about 15 iterations. Compared with Bax, the simulated expression of Bcl-2 using the thermodynamic model (Fig. 3 e) increases more smoothly and the maximum expression is less than that of Bax because of incoming inhibiting signals from p53. Simulated gene expression of Bax in a and b and Bcl-2 in d and e under DNA damage stimuli. ATR and DNA-PK are selected as input nodes. The parameter settings are only for illustration purpose. Both the simulated gene expression patterns of the Boolean model (plots a and d) and the thermodynamic model (plots b and e) agree with the time-series wet-lab experimental data of c Bax and f Bcl-2 [35]. The experimental data consist of the ratios of gene expression levels between UV light treated group and control group To validate the simulation, we use a dataset in which human TK6 cells were treated with UV light and then the gene expression was measured at three time points, i.e., 4, 8 and 24 hrs [35]. Figures 3 c and 3 f give the experimental data (the ratio of the gene expression levels between UV light treated and control groups) of Bax and Bcl-2 expression over the three time points. These two genes are the overlap between the network (Fig. 2) and the dataset [35], the dataset has measurements of many other genes which cannot be included in the network and the gene Apaf-1 in the network has no measurements in the dataset. It can be seen from the real data that the expression levels of both Bax and Bcl-2 increase over time when the cells are exposed to UV light; the slope of Bcl-2 curve is smoother and the height of the Bcl-2 curve is lower than that of the Bax curve. This suggests that, to some extent, our simulation tool is able to link the signal transduction with the gene expression regulation through transcription factors. Case Study 2: apoptotic signaling network treated by different combinations of drugs. Predicting the efficacy of drugs and the design of combination therapy is a major endeavor for biomedical research and pharmaceutical industry. Lee et al. [36] studied the effects of different combinations of drugs in enhancing cell death in human breast cancer cells (cell line BT20). Here we construct a network based on their experiments and simulate the cell responses under different combinations of drug treatments to evaluate the performance of our simulator. The network (Fig. 4) comprises 35 nodes and 57 edges [13, 34, 36]. In the 35 nodes, 32 represent signaling proteins and 3 represent cell fates (e.g., apoptosis, proliferation and cell cycle). From the dataset in [36], we select four samples, i.e., treated with (1) EGFR inhibitor, (2) DNA damage activator, (3) both drugs and (4) the control group. The dataset has no measurement of gene expression, instead, the numbers of cells that fall into each cell fate were measured at 5 time points (i.e., 0, 6, 8, 12 and 24 hrs). Therefore, no interaction of transcriptional regulation is included in the network. 
We directly calculate the proportion of the dead cells at each time point as the cells response to the perturbations. Network constructed based on [36]. Rectangle and diamond nodes represent signaling proteins and cell fates, respectively. Each activation interaction is denoted as a green edge with an arrow head and each inhibition interaction is represented by a red edge with a flat-head As shown in Table 1, four different types of simulation input are defined to correspond to the experimental settings in [36]. For example, half activating (0.5) signals are assigned to both TNFR and EGFR to simulate the control group; full blocking (0), half activating (0.5) and full activating (1) signals are assigned to EGFR, TNFR and DNA damage, respectively, to simulate the addition of both EGFR inhibitor and DNA damage activator together. The edge weights of activation and inhibition interactions are 0.7 and 0.8, respectively; and the number of iterations is 100. Since the network in Fig. 4 does not involve transcriptional regulation, the predicted dynamics of Caspase 3 (the only upstream node of Apoptosis) is considered as the predicted cell responses to the perturbations. Table 1 Input to the simulation in case study 2 Figure 5 a shows the simulated proportions of cell death over time. Compared with the control group (the blue curve), the addition of drugs (the orange, yellow and purple curves) enhances cell death. While the EGFR inhibitor (the orange curve) increases cell death to a small extent, the effect of DNA damage activator (the yellow curve) is significant. Furthermore, the treatment with both drugs together (the purple curve) performs the best in enhancing the cell death. Compared with the real data in Fig. 5 b, the predictions are consistent with the experimental measurements of the drug effects in terms of trends and ranking of the curves. However, there is a synergistic effect of the co-treatment in the real data, e.g., the cell death proportion induced by the co-treatment exceeds the sum of the cell death proportions induced by the two treatments separately, which has not been captured by the simulation. Moreover, mapping the simulation iterations to the real time points remains a challenge for our simulator. a Simulation results using the network in Fig. 4 and the input in Table 1. The predicted dynamics of Caspase 3 (the only upstream node of Apoptosis) is used as a proxy for the programmed cell death. b Experimental measurements of the cell death proportions under different treatments in [36] In spite of the promising performance of our computational simulations, limitations have also been noticed. For example, in case study 2, the simulation did not reveal the synergistic effect of the co-treatment by two drugs. Possible reasons include the insufficient prior knowledge of the input networks and an oversimplification of the computational model of the nonlinear regulatory system. Moreover, since the simulation is iterated over discrete time points, it is hard to assign real time to simulation steps, which is a major obstacle for linking the two biological processes (e.g., signal transduction and transcriptional regulation) with different time scales. Techniques of multiscale modeling and simulation will be incorporated into the software in near future. Computational simulation is an important systems biology approach to the analysis of signaling pathways and gene regulatory networks. 
In this work, we present a software tool called Sig2GRN which is able to link the cellular signaling pathways with the downstream gene expression regulation. A generalized logical model is used in modeling the upstream signaling pathways, while a Boolean Network and a thermodynamic model are employed in modeling the downstream gene expression based on the simulated activities of transcription factors. We have shown two case studies on simulating the cell responses to the extracellular perturbations and validated the simulations with wet-lab experimental data. As a Cytoscape plugin, Sig2GRN is designed to be extensible so that more computational models of gene regulation (e.g., epigenetic modifications) can be integrated to facilitates studies in systems biology. Compared with existing methods to link signaling pathways with gene regulation, such as in [24], Sig2GRN is a parameter-free software which requires no kinetic parameters of the pathways, and thus it is still applicable when only insufficient prior knowledge of the underlying mechanisms is available. Moreover, Sig2GRN is able to predict the gene expression time-course data given the perturbations to the signaling pathways, whereas in [24] the gene expression data are required as the input of their model, which is therefore unable to predict new gene expression patterns. Janes KA, Kelly JR, Gaudet S, Albeck JG, Sorger PK, Lauffenburger DA. Cue-signal-response analysis of TNF-induced apoptosis by partial least squares regression of dynamic multivariate data. J Comput Biol. 2004; 11(4):544–61. Janes KA, Yaffe MB. Data-driven modelling of signal-transduction networks. Nat Rev Mol Cell Biol. 2006; 7(11):820–8. Jaqaman K, Danuser G. Linking data to models: data regression. Nat Rev Mol Cell Biol. 2006; 7(11):813–9. Duren Z, Wang Y. A systematic method to identify modulation of transcriptional regulation via chromatin activity reveals regulatory network during mESC differentiation. Scientific Reports 6. 2016; 22656. doi:10.1038/srep22656. Calzone L, Tournier L, Fourquet S, Thieffry D, Zhivotovsky B, Barillot E, Zinovyev A. Mathematical modelling of cell-fate decision in response to death receptor engagement. PLoS Comput Biol. 2010; 6(3):1000702. Kauffman SA. Homeostasis and differentiation in random genetic control networks. Nature. 1969; 224(5215):177–8. Kauffman SA. Metabolic stability and epigenesis in randomly constructed genetic nets. J Theor Biol. 1969; 22(3):437–67. Mitsos A, Melas IN, Siminelakis P, Chairakaki AD, Saez-Rodriguez J, Alexopoulos LG. Identifying drug effects via pathway alterations using an integer linear programming optimization formulation on phosphoproteomic data. PLoS Comput Biol. 2009; 5(12):1000591. Saez-Rodriguez J, Simeoni L, Lindquist JA, Hemenway R, Bommhardt U, Arndt B, Haus UU, Weismantel R, Gilles ED, Klamt S, Schraven B. A logical model provides insights into T cell receptor signaling. PLoS Comput Biol. 2007; 3(8):163. Sharan R, Karp RM. Reconstructing boolean models of signaling. J Comput Biol. 2013; 20(3):249–57. Thomas R. Boolean formalization of genetic control circuits. J Theor Biol. 1973; 42(3):563–85. Aldridge BB, Saez-Rodriguez J, Muhlich JL, Sorger PK, Lauffenburger DA. Fuzzy logic analysis of kinase pathway crosstalk in TNF/EGF/insulin-induced signaling. PLoS Comput Biol. 2009; 5(4):1000340. Zhang F, Chen H, Zhao LN, Liu H, Przytycka TM, Zheng J. Generalized logical model based on network topology to capture the dynamical trends of cellular signaling pathways. BMC Syst Biol. 
2016; 10(Suppl 1):7. Huang ZY, Hahn J. Fuzzy modeling of signal transduction networks. Chem Eng Sci. 2009; 64(9):2044–056. Morris MK, Saez-Rodriguez J, Clarke DC, Sorger PK, Lauffenburger DA. Training signaling pathway maps to biochemical data with constrained fuzzy logic: Quantitative analysis of liver cell responses to inflammatory stimuli. PLoS Comput Biol. 2011; 7(3):1001099. Zheng J, Zhang D, Przytycki PF, Zielinski R, Capala J, Przytycka TM. Simboolnet–a cytoscape plugin for dynamic simulation of signaling networks. Bioinformatics. 2010; 26(1):141–2. Albeck JG, Burke JM, Aldridge BB, Zhang M, Lauffenburger DA, Sorger PK. Quantitative analysis of pathways controlling extrinsic apoptosis in single cells. Mol Cell. 2008; 30(1):11–25. Michaelis L, Menten ML. Die kinetik der invertinwirkung. Biochem. 1913; Z(49):333–69. Neumann L, Pforr C, Beaudouin J, Pappa A, Fricker N, Krammer PH, Lavrik IN, Eils R. Dynamics within the CD95 death-inducing signaling complex decide life and death of cells. Mol Syst Biol. 2010; 6(352). doi:10.1038/msb.2010.6. Novák B, Tyson JJ. A model for restriction point control of the mammalian cell cycle. J Theor Biol. 2004; 230(4):567–79. Dresch JM, Thompson MA, Arnosti DN, Chiu C. Two-layer mathematical modeling of gene expression: Incorporating DNA-level information and system dynamics. SIAM J Appl Math. 2013; 73(2):804–26. Dresch JM, Liu X, Arnosti DN, Ay A. Thermodynamic modeling of transcription: sensitivity analysis differentiates biological mechanism from mathematical model-induced effects. BMC Syst Biol.2010;4(142). doi:10.1186/1752-0509-4-142. He X, Samee MAH, Blatti C, Sinha S. Thermodynamics-based models of transcriptional regulation by enhancers: The roles of synergistic activation, cooperative binding and short-range repression. PLoS Comput Biol. 2010; 6(9):e1000935. doi:10.1371/journal.pcbi.1000935. Peng SC, Wong DS, Tung KC, Chen YY, Chao CC, Peng CH, Chuang YJ, Tang CY. Computational modeling with forward and reverse engineering links signaling network and genomic regulatory responses: NF-kB signaling-induced gene expression responses in inflammation. BMC Bioinforma. 2010; 11:308. Chang C, Ding Z, Hung YS, Fung PC. Fast network component analysis (FastNCA) for gene regulatory network reconstruction from microarray data. Proc Natl Acad Sci USA. 2008; 24(11):1349–1358. Liao JC, Boscolo R, Yang YL, Tran LM, Sabatti C, Roychowdhury VP. Network component analysis: reconstruction of regulatory signals in biological systems. Proc Natl Acad Sci USA. 2003; 100(26):15522–15527. Noor A, Ahmad A, Serpedin E, Nounou M, Nounou H. ROBNCA: robust network component analysis for recovering transcription factor activities. Bioinformatics. 2013; 29(19):2410–418. Melas IN, Mitsos A, Messinis DE, Weiss TS, Alexopoulos LG. Combined logical and data-driven models for linking signalling pathways to cellular response. BMC Syst Biol. 2011; 5:107. Shannon P, Markiel A, Ozier O, Baliga NS, Wang JT, Ramage D, Amin N, Schwikowski B, Ideker T. Cytoscape: a software environment for integrated models of biomolecular interaction networks. Genome Res. 2003; 13(11):2498–504. Ay A, Arnosti DN. Mathematical modeling of gene expression: a guide for the perplexed biologist. Crit Rev Biochem Mol Biol. 2011; 46(2):137–51. Bakkenist CJ, Kastan MB. Dna damage activates ATM through intermolecular autophosphorylation and dimer dissociation. Nature. 2003; 421(6922):499–506. Norbury CJ, Zhivotovsky B. DNA damage-induced apoptosis. Oncogene. 2004; 23(16):2797–808. 
Dhanalakshmi S, Agarwal C, Singh RP, Agarwal R. Silibinin up-regulates DNA-protein kinase-dependent P53 activation to enhance UVB-induced apoptosis in mouse epithelial JB6 cells. J Biol Macromol. 2005; 280(21):20375–0383. Ekins S, Nikolsky Y, Bugrim A, Kirillov E, Nikolskaya T. Pathway mapping tools for analysis of high content data. Methods Mol Biol. 2007; 356(356):319–350. Glover KP, Chen Z, Markell LK, Han X. Synergistic gene expression signature observed in TK6 cells upon co-exposure to UVC-irradiation and protein kinase c-activating tumor promoters. PLoS ONE. 2015; 10(10):0139850. Lee MJ, Ye AS, Gardino AK, Heijink AM, Sorger PK, MacBeath G, Yaffe MB. Sequential application of anticancer drugs enhances cell death by rewiring apoptotic signaling networks. Cell. 2012; 149(4):780–94. We would like to thank Ms. Jing Guo, a Ph.D. student at the School of Computer Science and Engineering, Nanyang Technological University, for her help with testing the software. This project is supported by MOE AcRF Tier 2 Grant ARC39/13 (MOE2013-T2-1-079) and MOE AcRF Tier 1 seed grant on complexity (RGC2/13), Ministry of Education, Singapore. The publication cost is supported by MOE AcRF Tier 2 Grant ARC39/13 (MOE2013-T2-1-079), Ministry of Education, Singapore. Availability of supporting data Software availability: http://histone.scse.ntu.edu.sg/Sig2GRN/. FZ designed study; acquired, analysed and interpreted data; implemented main experiments; drafted manuscript. RSL implemented the software. JZ conceived the study, participated in conceptualization and discussion, critically reviewed and revised the manuscript and gave final approval for submission. All authors read and approved the final manuscript. School of Computer Science and Engineering, Nanyang Technological University, Singapore, 639798, Singapore Fan Zhang, Runsheng Liu & Jie Zheng Complexity Institute, Nanyang Technological University, Singapore, 637723, Singapore Jie Zheng Genome Institute of Singapore, Agency for Science, Technology and Research, Singapore, 138672, Singapore Fan Zhang Runsheng Liu Correspondence to Jie Zheng. Zhang, F., Liu, R. & Zheng, J. Sig2GRN: a software tool linking signaling pathway with gene regulatory network for dynamic simulation. BMC Syst Biol 10, 123 (2016). https://doi.org/10.1186/s12918-016-0365-1 Transcription Factor Activity Gene Regulatory Network Boolean Network Boolean Model Network Component Analysis
Modeling algorithmic bias: simplicial complexes and evolving network topologies Valentina Pansanella1, Giulio Rossetti2 & Letizia Milli2,3 Applied Network Science volume 7, Article number: 57 (2022) Cite this article Every day, people inform themselves and create their opinions on social networks. Although these platforms have promoted the access and dissemination of information, they may expose readers to manipulative, biased, and disinformative content—co-causes of polarization/radicalization. Moreover, recommendation algorithms, intended initially to enhance platform usage, are likely to augment such phenomena, generating the so-called Algorithmic Bias. In this work, we propose two extensions of the Algorithmic Bias model and analyze them on scale-free and Erdős–Rényi random network topologies. Our first extension introduces a mechanism of link rewiring so that the underlying structure co-evolves with the opinion dynamics, generating the Adaptive Algorithmic Bias model. The second one explicitly models a peer-pressure mechanism where a majority—if there is one—can attract a disagreeing individual, pushing them to conform. As a result, we observe that the co-evolution of opinions and network structure does not significantly impact the final state when the latter is much slower than the former. On the other hand, peer pressure enhances consensus mitigating the effects of both "close-mindedness" and algorithmic filtering. The tendency to observe political polarization (McCarty 2019; Fiorina and Abrams 2008) on online social networks (Conover et al. 2011; Vicario et al. 2016; Bessi et al. 2016)- e.g., online discourse's tendency to divide users into opposite political factions not aiming at reaching any form of synthesis—has attracted a great deal of attention from different fields in recent years. Polarization not only emerges in political debates but also characterizes various controversial topics (Drummond and Fischhoff 2017; https://thepolarizationindex.com/), and, in some cases, it may affect policymaking (McCarty 2019) and society. It was argued (Hague and Loader 1999) that the advent of the Internet and social media, guaranteeing free access to a massive amount of information, would be a boon for democracy. Nonetheless, recent studies (e.g., Hills 2019) underlined that such a vast and unfiltered access to information could be—at least to some extent—harmful. To enforce such interpretation, in 2020, WHO coined the term "infodemic", a neologism describing a situation where a considerable quantity of false and misleading information (e.g., during a disease outbreak) (Cinelli et al. 2020) may generate social issues (e.g., causing severe repercussions on public health systems). Besides infodemics, information proliferation (Hills 2019)—i.e., the capacity to access and contribute to a growing quantity of information—is reducing the quantity and quality of content many people engage with. Indeed, it is evident that individuals have a limited amount of time and attention (Hogan 2001) to dedicate to collecting, processing, and discussing information. Unfortunately, the choice of which users/content to engage with is rarely made to pursue a balanced information diet; instead, it is a process highly affected by preexisting cognitive biases (Benson 2016). People tend to concentrate on the pieces of information that confirm their beliefs and ignore details that may contradict these (Festinger 1957; Knobloch-Westerwick et al. 2020). 
Moreover, there is evidence that when people are presented with different news articles to read, they tend to make their selection based on anticipated agreement, i.e., focusing on news sources they know to be closer to their political/cultural leaning (Iyengar and Hahn 2009). These cognitive tendencies are exacerbated by some peculiar mechanisms of online platforms, such as recommender systems and algorithmic filtering: technological systems that implicitly create a polarizing feedback loop by further reducing the amount of diversity in the user experience (Anderson et al. 2020)—and possibly contributing to the creation and maintenance of echo chambers (Sunstein 2007) and filter bubbles (Pariser 2011). Although personalization is essential in information-rich environments to allow people to find what they are looking for and increase user engagement, there is great concern about the negative consequences of algorithmic filtering both from institutions and scientists (Sîrbu et al. 2019). Algorithms recommending new people on social networking sites may consider the similarity of user-created content (Chen et al. 2009) again possibly reinforcing the underlying system polarization. Despite belief-consistent selection and confirmation bias playing a critical role in opinion formation, the diversity of content/sources encountered during daily activities is not the only driver of such complex realities. Peer pressure-like phenomena play a role in shaping people's opinions (Asch 1956; Haun and Tomasello 2011) and therefore should be considered when addressing the study of how public opinion evolves in social networks. People are likely to experience social pressure in both face-to-face and digital interactions. For example, suppose three individuals are mutual friends, and there is a disagreement on a particular topic. In that case, the majority opinion within the group will likely prevail, and the minority will adopt the majority opinion. Within social networks like Twitter, users participate in binary opinion exchanges with other users, i.e., through direct messaging, which can be modeled as binary interactions. However, the possibility of sharing tweets and engaging in public discussions opens the question of how participants can be influenced by others' opinions expressed in the thread and, in turn, influence their peers' opinions. Understanding how different social mechanisms may influence the direction of public opinion and the levels of polarization in society has always been a crucial task, and a great challenge for computational social scientists (Conte et al. 2012). Unfortunately, empirical studies on how opinions form and evolve—influenced by environmental and sociological factors—are still lacking (Peralta et al. 2022): Indeed, if on one side moving toward data-driven analyses is necessary, on the other, models are essential to comprehend causes and consequences within controlled scenarios. Unfortunately, classic opinion dynamics models are often very simplistic and cannot capture the complexity of the observed phenomenon. In the last twenty years, there was a proliferation of opinion dynamics models that aimed at including more and more characteristics of real social systems and their properties, primarily using the classical models (Holley and Liggett 1975; Castellano et al. 2009; Sznajd-Weron and Sznajd 2000; Degroot 1974; Friedkin and Johnsen 1990, 1999; Deffuant et al. 2001), as baselines. 
Indeed, "digital era" specific characteristics are being included in recent opinion dynamics models, i.e., algorithmic personalization (Maes and Bischofberger 2015; Sîrbu et al. 2019; Peralta et al. 2021a, b; Perra and Rocha 2019) to understand what changes this new world brought into the way public opinion is shaped; however, several others are still missing making thus leaving room for more accurate modeling. Among such often neglected peculiarities, we can list the temporal dynamic of social interactions. Not only do network topologies evolve, but often this evolution is co-dependent on the dynamic process taking place over the networks, such as opinion exchange (McPherson et al. 2001). For this reason, recent efforts focus on studying opinion dynamics on dynamic/adaptive networks (Sasahara et al. 2019; Kan et al. 2021; Holme and Newman 2006; Iñiguez et al. 2009) also in the context of the Deffuant–Weisbuch and other bounded confidence models (Kozma and Barrat 2008; Kan et al. 2021; Sasahara et al. 2019), describing the effects of the evolution of the underlying network structure on the final state of the population and explaining the effects of the opinion exchange on the structure topology. Despite group phenomena being present in classical opinion dynamics models (see (Galam 2002)), it has been recently recognized the importance of using higher-order structures to explain and predict collective behaviors that could not be described otherwise (Battiston et al. 2020; Hickok et al. 2022)—e.g., peer-pressure. Moving from the results discussed in (Pansanella et al. 2022) where static network models and binary interactions were assumed, we aim to study the effects of adaptive networks—where the dynamic of the network depends on the opinion dynamics—and higher-order interactions have on opinion formation and evolution when in the presence of a filtering algorithm. To such an extent, we focus our analysis on the same network models employed in Pansanella et al. (2022), namely Erdős–Rényi (Erdás and Rényi 1959) and Barabási–Albert (Barabási and Albert 1999). Adopting such controlled environments, used to simulate the social structure among a population of interacting individuals, we analyze the behaviors of the two extensions of the Algorithmic Bias model (Sîrbu et al. 2019) and discuss the role of arc rewiring towards like-minded individuals and peer-pressure within 2-cliques. The paper is organized as follows. In "Methods" section we introduce the Algorithmic Bias model and the two extensions and describe our experimental workflow. "Results and discussion" section discusses the main finding of our simulations, finally "Conclusions" section concludes the paper while opening to future investigations. Algorithmic Bias: from Mean-field to Complex Topologies. Online social networks have become the primary source of information and an excellent platform for discussions and opinion exchanges. However, each user's flow of content is organized by algorithms built to maximize platform usage. From this comes the hypothesis that there is an algorithmic bias (also called algorithmic segregation) since these contents are selected based on users' precedent actions on the platform or the web, reinforcing the human tendency to interact with content confirming their beliefs (confirmation bias). 
To introduce into the study of opinion dynamics the idea of a recommender system generating an algorithmic bias, we started from a recent extension of the well-known Deffuant–Weisbuch model (DW-model henceforth), proposed in Sîrbu et al. (2019) (Algorithmic Bias model or AB model, henceforth). Definition 1 (Deffuant–Weisbuch model (DW-model)) Let us assume a population of N agents, where each agent i has a continuous opinion \(x_{i} \in [0,1]\). At every discrete time step, a pair (i, j) of agents is randomly selected, and, if their opinion distance is lower than a threshold \(\epsilon\), \(|x_{i} - x_{j}| \le \epsilon\), then both of them change their opinion according to the following rule: $$\begin{aligned} x_{i}(t+1)&= x_{i} + \mu (x_{j}-x_{i}) \\ x_{j}(t+1)&= x_{j} + \mu (x_{i}-x_{j}). \end{aligned}$$ In the DW-model, the parameter \(\epsilon \in [0,1]\) models the population's confidence bound, assumed constant among all the agents. A low \(\epsilon\) creates a close-minded population, where the individuals can only be influenced by those whose opinions are similar to theirs; a high \(\epsilon\), instead, creates an open-minded population, since two agents can influence each other even if their initial opinions are very distant. The parameter \(\mu \in (0, 0.5]\) is a convergence parameter, modeling the strength of the influence the two individuals have on each other or, in other words, how much they change their opinion after the interaction. Even if there is no reason to assume that \(\epsilon\) should be constant across the population, or at least symmetrical in the binary encounters, this parameter is often considered equal for every agent (apart from a few exceptions, as in Lorenz (2010)). The numerical simulations of this model show that the qualitative dynamic mainly depends on \(\epsilon\): as \(\epsilon\) grows, the number of final opinion clusters decreases. As for \(\mu\) and N, these parameters influence only the time to convergence and the width of the final opinion distribution. The AB model introduces another parameter to model the algorithmic bias: \(\gamma \ge 0\). This parameter represents the filtering power of a generic recommendation algorithm: if it is close to 0, the agent has the same probability of interacting with all its peers. As \(\gamma\) grows, the probability of interacting with agents holding similar opinions increases, while the probability of interacting with those who hold distant opinions decreases. Therefore, this extended model modifies the rule for choosing the interacting pair (i, j) to simulate the presence of a filtering algorithm. An agent i is randomly picked from the population, while j is chosen from i's peers according to the following rule: $$\begin{aligned} p_{i}(j)=\frac{d_{ij}^{-\gamma }}{\sum _{k \ne i}d_{ik}^{-\gamma }} \end{aligned}$$ where \(d_{ij} = |x_{i}-x_{j}|\) is the opinion distance between agents i and j, so that for \(\gamma = 0.0\) the model reduces to the DW-model, i.e., the interacting peer j is chosen at random from i's neighbors or, in other words, every neighbor is assigned the same probability of being chosen. When two agents interact, their opinions change if and only if the distance between their opinions is less than the parameter \(\epsilon\), i.e., \(|x_{i}-x_{j}| \le \epsilon\), according to Eq. 1. The AB model results discussed in Sîrbu et al. (2019) focus only on the mean-field scenario, i.e., the authors assume a complete graph as the underlying social structure.
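For illustration, a minimal Python sketch of a single AB-model interaction is shown below (this is not the NDlib implementation used later in the paper): an agent i is drawn at random, its partner j is sampled among its peers with probability proportional to \(d_{ij}^{-\gamma}\), and the pair moves toward each other's opinion (Eq. 1) only when within the confidence bound. The function and variable names are ours; the small lower bound on \(d_{ij}\) mirrors the \(10^{-4}\) value used in the experimental settings described below.

```python
import random
import networkx as nx
import numpy as np

def ab_step(graph, opinions, epsilon=0.3, gamma=1.0, mu=0.5, d_eps=1e-4):
    """One pairwise interaction of the Algorithmic Bias model.
    j is drawn among i's peers with probability proportional to d_ij^(-gamma) (Eq. 2);
    opinions are updated with the bounded-confidence rule (Eq. 1) only if |x_i - x_j| < epsilon."""
    i = random.choice(list(graph.nodes()))
    peers = list(graph.neighbors(i))
    if not peers:
        return
    # algorithmically biased peer selection
    dists = np.array([max(abs(opinions[i] - opinions[j]), d_eps) for j in peers])
    weights = dists ** (-gamma)
    j = np.random.choice(peers, p=weights / weights.sum())
    # bounded-confidence opinion update
    if abs(opinions[i] - opinions[j]) < epsilon:
        xi, xj = opinions[i], opinions[j]
        opinions[i] = xi + mu * (xj - xi)
        opinions[j] = xj + mu * (xi - xj)

# mean-field usage: a complete graph with uniformly distributed initial opinions
g = nx.complete_graph(250)
x = {v: random.random() for v in g.nodes()}
for _ in range(10 ** 4):
    ab_step(g, x, epsilon=0.3, gamma=1.2)
```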
While considering real-world social interactions, however, we can assume that individuals will likely interact only with whom they are acquainted. Therefore, building from the analysis proposed in Pansanella et al. (2022), we evaluate the effects of two different topological models on the unfolding of the identified opinion formation process: Erdős–Rényi (ER) (Erdás and Rényi 1959) and scale-free Barabási–Albert (Barabási and Albert 1999) networks. Algorithmic bias: from fixed topologies to adaptive networks In Sîrbu et al. (2019) and Pansanella et al. (2022), it emerged that the dynamics and final state are mainly determined by \(\epsilon\) and \(\gamma\), with the confidence threshold enhancing consensus and the bias enhancing fragmentation. Comparing simulations performed on complete, ER, and scale-free networks, it emerged that the role of the underlying topology is negligible concerning the effects of the model parameters (thus, confirming what was previously observed in Weisbuch (2004); Fortunato (2004); Stauffer and Meyer-Ortmanns (2004) for the scale-free scenario). However, a higher sparsity implies that fragmentation emerges for lower values of the algorithmic bias. Despite this being a crucial step towards reality, assuming that social networks are static during the whole period is unrealistic. Interactions and relationships evolve, and this evolution influences and is influenced by the dynamical process of opinion exchanges and the presence of recommender systems and filtering algorithms for the construction of the social network, reinforcing the tendency toward homophilic choices. In the present work, we extended the baseline model (Sîrbu et al. 2019) introducing the possibility of arc rewiring, creating the Adaptive Algorithmic Bias model (AABM henceforth), where peer-to-peer interactions are affected by algorithmic biases, and the networks evolve influenced by such interactions, bringing people to connect to peers with opinions within their confidence bound. To incorporate such behavior, we added a new parameter to the Algorithmic Bias model, namely \(p_{r} \in [0, 1]\), indicating the probability that the agent in a situation of cognitive dissonance decides to rewire their link instead of just ignoring their peer opinion. To maintain the model as simple as possible, this parameter is assumed to be constant across the population and does not depend on the opinion distance. Thus, in the model, every time an agent i interacts with a neighbor j and their opinion distance is beyond the confidence threshold, i.e., \(|x_i - x_j| \ge \epsilon\): with probability \(p_r\) the agent rewires the arc looking for a like-minded individual with probability \(1-p_r\) the DW-model is applied, i.e. both opinions and network structure remain unchanged. To rewire the arc, a node z is randomly selected from the set of non-neighboring nodes, and if \(|x_i-x_z| < \epsilon\), the agent z links to the agent i, otherwise the structure of the graph remains unchanged (see Fig. 1 for an example of this process and algorithm 1 for the process pseudocode). Without considering the algorithmic bias in the choice of the interacting peer, our work is similar to Kozma and Barrat (2008); Kan et al. (2021). 
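The rewiring step just described can be sketched as follows; this is only an illustration under our naming assumptions, not the released implementation. It is invoked for a discordant pair (i, j) and performs at most one blind attempt to reconnect i to a randomly chosen non-neighbour, keeping the new arc only if it falls within the confidence bound.

```python
import random

def rewiring_step(graph, opinions, i, j, epsilon=0.3, p_r=0.3):
    """Conflict handling of the Adaptive Algorithmic Bias model for a discordant pair (i, j),
    i.e. |x_i - x_j| >= epsilon. With probability p_r agent i breaks the tie and tries a single
    blind rewiring towards a randomly chosen non-neighbour z; the arc (i, j) is replaced by (i, z)
    only if |x_i - x_z| < epsilon, otherwise the topology is left unchanged."""
    if random.random() >= p_r:
        return False                      # with probability 1 - p_r nothing happens
    candidates = [z for z in graph.nodes() if z != i and not graph.has_edge(i, z)]
    if not candidates:
        return False
    z = random.choice(candidates)         # zero-knowledge choice: z's opinion is unknown beforehand
    if abs(opinions[i] - opinions[z]) < epsilon:
        graph.remove_edge(i, j)
        graph.add_edge(i, z)
        return True
    return False
```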
In Kozma and Barrat (2008) the process of rewiring works in the same fashion as in the present work, however, every time the rewiring option is chosen over the standard DW-model update rule, the old link (i, j) is broken and a new link (i, z) is formed towards a random non-neighboring agent, even if this agent's opinion is beyond \(i's\) confidence threshold. Even in Kan et al. (2021) the rewiring process has a different formulation diverging from the proposed one due to the following specificities: (a) at iteration, a set M of discordant edges is rewired and then a set K of edges undergoes the process of opinion update (i.e., if \(M < K\) opinion change faster than node rewire, like in the present work); (b) during the rewiring stage the node selection does not happen entirely at random, rather it is "biased" towards similar individuals (still allowing the connection with peers with opinions beyond the confidence threshold); finally (c) the confidence bound and the tolerance threshold for the rewiring are modeled as two independent parameters. Conversely, from such contributions, our implementation assumes a "zero-knowledge" scenario where agents are not aware of the statuses of their peers beforehand: once rejected the algorithmically biased interaction suggestion, if an agent decides to break the tie and search for a new peer to connect with it will not rely (for that task) on other algorithm suggestions. We adopt such modeling since in social contexts (e.g., in online social networks), the status of unknown peers is hardly known by a user—at least before a first attempt at interacting with them. Moreover, not delegating the identification of potential peers to connect with to the "algorithm," we allow users to react to a first non-successful system recommendation independently (i.e., during the same iteration, the user prefers not to trust the algorithm. Instead, he/she makes a blind connection choice). Therefore, rewiring a link toward a like-minded individual is not always feasible given the limits of users' local views. Beyond pairwise interactions: modeling peer pressure Classical networks, with vertex and arcs, only capture dyadic relationships, and every collective dynamic is analyzed as a decomposition of pairwise dynamics. However, there are many systems where it is crucial to capture group dynamics and where considering only binary interactions can limit the explanatory power of models. As introduced in "Introduction" section, different structures can be employed to model higher-order interactions. However, in the specific context of this paper, we chose to employ simplicial complexes since the idea is that a triangle of connected agents may experience peer pressure because it constitutes a group of friends, a strong friendship relationships, where in addition to the binary friendships there is a higher-order relationship among these agents. Simplexes are in fact the simplest mathematical object allowing to consider higher-order relationships. A k-simplex \(\sigma\) is the convex hull of a set of \(k+1\) nodes \(\sigma = [p_{0}, p_{1}, ..., p_{k}]\). The 0-simplex is a single node; the 1-simplex is an arc, the 2-simplex is a triangle, etc. A simplicial complex requires that each face of a simplex is again in the simplicial complex and that the nonempty intersection of two simplices is a face for each. 
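In practice, the 2-simplices used in this work correspond to the triangles of the friendship graph. A small sketch of how the triangles containing a given arc can be enumerated (the starting point of the group interaction described next) is given below, with illustrative node labels.

```python
import networkx as nx

def triangles_of(graph, i, j):
    """2-simplices (triangles) containing the arc (i, j): every common neighbour z
    of i and j closes a triangle (i, j, z) eligible for the peer-pressure interaction."""
    return [(i, j, z) for z in set(graph.neighbors(i)) & set(graph.neighbors(j))]

g = nx.Graph([(0, 1), (1, 2), (0, 2), (2, 3), (0, 3)])
print(triangles_of(g, 0, 2))   # the two triangles (0, 2, 1) and (0, 2, 3)
```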
Since a triadic friendship, denoted by a triangle on the social network, does include the binary friendships between each of the three individuals, we propose and analyze an extension of the Algorithmic Bias model to include second-order interactions: the Algorithmic Bias model on Simplicial Complexes (inspired by Horstmeyer and Kuehn (2020) and adapted to the context of bounded confidence models with continuous opinions). This allows us to incorporate peer pressure in an environment where confirmation and algorithmic biases are still present. In order to implement peer pressure, i.e., a mechanism for which the majority opinion pressures the individual "minority" one to conform, we first need to define what a majority is in the context of a continuous opinion dynamics model. We choose to consider two nodes "agreeing" if their opinion distance is below the confidence threshold, i.e. \(|x_i - x_j | < \epsilon\), similarly to Kozma and Barrat (2008). The model selects a pair (i, j) according to Eq. 2 at every discrete time step and computes the set of triangles T including (i, j). If the set is nonempty, the model selects a third node z from T according to Eq. 2. Otherwise, the baseline rule is applied, i.e., there is a pairwise interaction between i and j according to the AAB-model rules. Once the interacting triplet is chosen, if two agents form a majority, two scenarios may arise: the third agent already "agrees" with the majority, i.e., its opinion distance from the average opinion of the majority is below the confidence threshold the third agent is in a situation of cognitive dissonance with the majority, i.e., its opinion distance from the average opinion of the majority is beyond the confidence threshold In the former scenario, the attractive behavior of the pairwise model is adapted to the triadic case: the agents take the average opinion of the triplet; in the latter, we implemented peer pressure by making the three agents adopt the average opinion of the majority. These rules are detailed in the algorithm 2. Our goal here is to understand the effects of higher-order interactions in a biased environment on the degree of fragmentation reached by the population in the final state. To such an extent, we tested this extended model on the same two graph models: ER (Erdás and Rényi 1959) and a scale-free (Barabási and Albert 1999) network. We also added the possibility of arc rewiring in this model: to do so, a rewiring takes place with probability \(p_r\) between the disagreeing pair (i, j) with \(|x_i - x_j| \ge \epsilon\) when T is an empty set or a "majority" cannot be found in T. Experimental settings Like in Sîrbu et al. (2019), to avoid undefined operations in Eq. 2, when \(d_{ik} = 0\) we use a lower bound \(d_{\epsilon } = 10^{-4}\). The simulations are designed to stop when the population reaches an equilibrium, i.e., the cluster configuration will not change anymore, even if the agents keep exchanging opinions. We also set an overall maximum number of iterations at \(10^5\) as in Pansanella et al. (2022). We compute the average results over 30 independent executions for each configuration to account for the model's stochastic nature. The initial opinion distribution is always drawn from a random uniform probability distribution in [0;1]. To better understand the differences in the final state concerning the different topologies considered, we study the model on all networks for different combinations of the parameters. 
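Before moving to the results, the triadic interaction rule described above can be sketched as follows. How to break ties when more than one pair agrees (without the whole triangle agreeing) is not specified above, so taking the first agreeing pair found as the "majority" is an assumption of this sketch, as are the function and variable names.

```python
def triadic_update(opinions, i, j, z, epsilon=0.3):
    """Second-order interaction of the Algorithmic Bias model on simplicial complexes.
    Two agents whose opinion distance is below epsilon form a 'majority'; a dissenting third
    agent is pulled onto the majority's average opinion (peer pressure), while a fully agreeing
    triangle converges to the triplet's average. Returns True if any opinion changed."""
    trio = [i, j, z]
    for a in range(3):
        for b in range(a + 1, 3):
            u, v = trio[a], trio[b]
            if abs(opinions[u] - opinions[v]) < epsilon:      # (u, v) agree: a candidate majority
                w = trio[3 - a - b]                           # the remaining, possibly dissenting, agent
                maj = (opinions[u] + opinions[v]) / 2.0
                if abs(opinions[w] - maj) < epsilon:
                    # the third agent already agrees: the triplet averages its opinions
                    mean = (opinions[u] + opinions[v] + opinions[w]) / 3.0
                    opinions[u] = opinions[v] = opinions[w] = mean
                else:
                    # peer pressure: the whole triangle adopts the majority's average opinion
                    opinions[u] = opinions[v] = opinions[w] = maj
                return True
    return False   # no majority in the triangle: nothing happens (a rewiring attempt may follow)
```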
We are interested in understanding the effects of a co-evolving topology affected by homophily on the dynamics of public opinion in a population and the consequences of peer pressure when moving from pairwise to higher-order interactions. Moreover, we are also interested in whether, parameters being equal, different initial network topologies influence the final cluster configuration in such extended models. We tested our model, seeding the co-evolution with two different network topologies: an Erdős–Rényi (random) and a Barabási–Albert (scale-free). We set the number of nodes \(N=250\) in both networks. For the ER network, we fix the p parameter (probability to form a link) to 0.1 (thus imposing a supercritical regime, as expected from a real-world network); we obtain a network composed of a single giant component with an average degree of 24.94. In the BA network, we set the \(m=5\) (i.e., the parameter regulating the number of edges to attach from a new node to existing nodes), thus obtaining a network instance with an average degree equal to 9.8.Footnote 1 In our simulations, we evaluated the different models on the different possible combinations of the parameters over the following values: \(\epsilon\) takes a value from 0.2 to 0.4 with a step of 0.1. We chose these values because these are the values for which, in the AB model, we can observe a shift from polarization to fragmentation and from consensus to polarization. Higher values of \(\epsilon\) lead to consensus regardless of the strength of the algorithmic bias until the bias is high enough and fragmentation explodes. \(\gamma\) takes value from 0 to 1.6 with a step of 0.4; for \(\gamma = 0\) the model becomes the DW-model. We would see only fragmented final states for higher values of \(\gamma\). \(\mu = 0.5\), so whenever two agents interact, they update their opinions to the pair's average opinion if their opinions are close enough \(p_r\) (for the Adaptive version of the models) takes a value from 0.0 to 0.5 with a step of 0.1; for \(p_r = 0.0\) the model becomes the AB model in the case of the Adaptive Algorithmic Bias model. The models implementation used to carry out our experiments is the one provided by the NDlib (Rossetti et al. 2018) Python library.Footnote 2 To analyze the simulation results, we start by considering the number of final opinion clusters in the population to understand the degree of fragmentation produced by the different combinations of the parameters. This value indicates how many peaks there are in the final distribution of opinions and provides a first approximation of whether a consensus can be obtained or not. To compute the effective number of clusters, accounting for the presence of major and minor ones, we use the cluster participation ratio, as in Sîrbu et al. (2019): $$\begin{aligned} C = \frac{(\sum _{i}{c_{i}})^{2}}{\sum _{i}{c_{i}^{2}}} \end{aligned}$$ where \(c_{i}\) is the dimension of the ith cluster, i.e., the fraction of the population we can find in that cluster. In general, for m clusters, the maximum value of the participation ratio is m and is achieved when all clusters have the same size, while the minimum can be close to 1, if one cluster contains most of the population and a small fraction is distributed among the other \(m - 1\). To study the degree of polarization/fragmentation, we computed the average pairwise distance between the agents' opinions. 
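The two summary statistics can be computed, for instance, as in the sketch below; the rule used to group final opinions into clusters (a small gap on the sorted opinion values) is an assumption here, cluster sizes may equivalently be counts or population fractions since Eq. 3 is scale-invariant, and the pairwise distance \(d_{ij}\) averaged in the last function is the one defined in the next paragraph.

```python
import numpy as np

def cluster_sizes(values, gap=0.01):
    """Group final opinion values into clusters; a new cluster starts whenever the gap
    between consecutive sorted opinions exceeds `gap` (an assumption of this sketch)."""
    xs = np.sort(np.asarray(list(values)))
    sizes, current = [], 1
    for a, b in zip(xs[:-1], xs[1:]):
        if b - a > gap:
            sizes.append(current)
            current = 0
        current += 1
    sizes.append(current)
    return sizes

def participation_ratio(sizes):
    """Effective number of clusters C = (sum c_i)^2 / sum c_i^2 (Eq. 3)."""
    sizes = np.asarray(sizes, dtype=float)
    return sizes.sum() ** 2 / (sizes ** 2).sum()

def avg_pairwise_distance(values):
    """Average |x_i - x_j| over all ordered pairs, used as a polarization score."""
    xs = np.asarray(list(values))
    return np.abs(xs[:, None] - xs[None, :]).mean()

final = [0.1] * 120 + [0.9] * 100 + [0.5] * 30            # a toy polarized outcome
print(participation_ratio(cluster_sizes(final)))           # about 2.5 effective clusters (two major, one minor)
print(avg_pairwise_distance(final))
```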
Given an agent i with opinion \(x_{i}\) and an agent j with opinion \(x_{j}\) at the end of the diffusion process, the pairwise distance between the two agents is \(d_{ij} = | x_{i} - x_{j} |\). The average pairwise distance in the final state can be computed as \({\frac{\sum _{i=0}^{N}{\frac{(\sum _{j=0}^{N}{d_{ij}})}{N}}}{N}}\). While the asymptotic number of opinion clusters and the degree of polarization are essential metrics to describe the results of the dynamics qualitatively, the time to obtain such a final state is equally so. In a realistic setting, available time is finite, so if consensus forms only after a very long time, it may never actually emerge in the population. Thus, we measure the time needed for convergence (to either one or more opinion clusters) in our extended model, recalling that every iteration is made of N interactions, whether pairwise or higher-order (triadic) (Fig. 2). Adaptive algorithmic bias model: close-mindedness leads to segregation in co-evolving networks Our simulations suggest that allowing users to break friendships that cause disagreement in a biased online environment has little effect on the levels of polarization/fragmentation when the evolution of the network is remarkably slower than the process of convergence towards a steady-state into one or more opinion clusters. However, when two or more opinion clusters form, allowing the rewiring process to continue eventually breaks the network into multiple connected components. To understand the effects of the interplay of cognitive and algorithmic biases and the probability of link rewiring, we start by looking at the average number of final clusters. Figure 3 shows the average number of clusters as a function of \(p_r\) and \(\gamma\) for \(\epsilon \in \{0.2, 0.3\}\). Results for \(\epsilon =0.4\) and standard deviation analysis can be found in Additional file 1. We can observe from the first row of each heatmap that, without rewiring (\(p_r=0.0\)), the behavior of the Algorithmic Bias model discussed in Pansanella et al. (2022) is recovered: fragmentation is enhanced by the bias, while higher values of \(\epsilon\) counter the effects of the bias and drive the population towards a consensus around the mean opinion of the spectrum. We already saw in Pansanella et al. (2022) that concerning the mean-field case, when the topology is sparser even for \(\epsilon \ge 0.4\) for a sufficiently large bias, the final states result in a high number of clusters or even not to be clustered at all, i.e., in the final state opinions are still uniformly distributed across the population since the bias is so strong that even like-minded people (whose opinion distance is below the confidence bound) can never converge to each other because they will unlikely interact. Erdős–Rényi network. In Fig. 3a, b, we can see that in the DW model, i.e., \(\gamma =0.0\) adding the possibility to rewire arcs during conflicting interactions does not change the final number of clusters on average. For \(\epsilon =0.2\) we obtain a polarised population for every value of \(p_r\). A consensus is always reached for \(\epsilon \ge 0.3\), specifically for \(\epsilon =0.3\), the main cluster coexists with smaller clusters. In comparison, for \(\epsilon =0.4\), a perfect consensus around the mean opinion is always reached (see Additional file 1). A schematic illustration of the rewiring step under bounded confidence. In this example the confidence bound is \(\epsilon =0.2\). 
In a, we can see that the interacting pair (i, j) has an opinion distance larger than the confidence bound. For this reason (b), node i tries to break the arc (i, j) and form a new arc (i, z) (with probability \(p_r\); with probability \(1-p_r\) nothing happens). Node z is chosen randomly among the remaining nodes in the network. In the case that \(|x_i - x_z| < \epsilon\) (c), the arc (i, j) is broken and the arc (i, z) is formed. Otherwise, if \(|x_i - x_z| \ge \epsilon\) (d), the rewiring fails, and the network structure remains the same.

Example of the AABSC model. Examples of different cases in the Adaptive Algorithmic Bias Model on Simplicial Complexes. In a, a triangle (i, j, z) is chosen, and the minority node adopts the mean opinion of the majority. In b, there is no minority, so the three agents adopt their average opinion. In c, there is no majority: nothing happens. In d, there is no majority, and agent i rewires the discording arc with j towards a more like-minded agent. The process in d is the same as the one described in Fig. 1.

Average number of clusters in the steady-state of the Adaptive Algorithmic Bias model. The average number of clusters in the final state of the Adaptive Algorithmic Bias model as a function of \(\gamma\) and \(p_r\) for a \(\epsilon =0.2\) and b \(\epsilon =0.3\) starting from the Erdős–Rényi graph and c, d starting from the scale-free Barabási–Albert graph. These values are averaged over 30 runs.

Average number of iterations to convergence in the Adaptive Algorithmic Bias model. Average number of iterations to convergence in the Adaptive Algorithmic Bias model as a function of \(\gamma\) and \(p_r\) for a \(\epsilon =0.2\), b \(\epsilon =0.3\) and c, d in a scale-free Barabási–Albert graph. These values are averaged over 30 runs.

The co-evolving topology does not impact the dynamics of the Algorithmic Bias model either: the adaptive topology does not change the fact that the system reaches consensus or polarization. Plots in Fig. 3a, b show the population moving from two to three clusters for \(\epsilon =0.2\) and from one to two clusters for \(\epsilon =0.3\), and always reaching a consensus for \(\epsilon =0.4\) (see Additional file 1), even if we consider \(p_r = 0.5\). As for the time (i.e., the number of total interactions) the population needs to reach an equilibrium, we can see how the general behavior of the baseline model is kept, even when the network co-evolves with a biased opinion dynamic. Convergence is relatively fast when interactions are not biased, while it slows down as the bias grows until it reaches a peak, after which it speeds up again. Until the peak, a higher number of iterations positively correlates with a higher number of clusters; after the peak, in contrast, even if convergence is faster, the population is spread across the opinion space. Since the bias is strong, two agents cannot get closer in the opinion distribution after the first few interactions, and the condition to reach the steady state is met very quickly. Moreover, every node has a limited network of agents to interact with; with a strong bias, they always exchange opinions with the same agents. Not much can change once they adopt their average opinion, even long-term. We can see from Fig. 4a, b that this measure is less dependent on the population's open-mindedness, as it is mainly influenced by the bias level. However, we can see from the average values that an increase in \(\epsilon\) often means a small convergence speed-up, all other parameters being equal.
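As a concrete reference for the pairwise update and the rewiring rule of Fig. 1, the sketch below implements one interaction under stated assumptions: opinions are updated with \(\mu=0.5\) when the pair agrees within \(\epsilon\), and a discording arc is rewired with probability \(p_r\). The biased (algorithmic) selection of the interacting pair is assumed to happen before this call; the function name and the use of networkx are illustrative, not the NDlib implementation used for the experiments.

```python
import random
import networkx as nx

EPS, MU, P_R = 0.2, 0.5, 0.5  # confidence bound, convergence rate, rewiring probability

def pairwise_step(G, opinions, i, j):
    """One pairwise interaction of the Adaptive Algorithmic Bias model (Fig. 1)."""
    xi, xj = opinions[i], opinions[j]
    if abs(xi - xj) < EPS:
        # Bounded-confidence update; with MU = 0.5 both agents meet at the pair mean.
        opinions[i] = xi + MU * (xj - xi)
        opinions[j] = xj + MU * (xi - xj)
    elif random.random() < P_R:
        # Conflicting interaction: try to rewire (i, j) towards a random node z.
        z = random.choice([n for n in G.nodes if n not in (i, j)])
        if abs(opinions[i] - opinions[z]) < EPS:
            G.remove_edge(i, j)
            G.add_edge(i, z)

# Usage on the ER topology used in the experiments (N = 250, p = 0.1).
G = nx.erdos_renyi_graph(250, 0.1, seed=0)
opinions = {n: random.random() for n in G.nodes}
i = random.choice(list(G.nodes))
j = random.choice(list(G.neighbors(i)))
pairwise_step(G, opinions, i, j)
```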
Results for \(\epsilon =0.4\) and standard deviation analysis can be found in Additional file 1. When we introduce the possibility of rewiring, convergence is generally slower. Deleting edges beyond one's confidence bound denies agents the possibility of participating in a possible path towards convergence. This does not mean that the population cannot converge; instead, a higher number of total interactions is needed. It is worth noticing that, without bias, a small probability of arc rewiring (\(p_r=0.1\)) in a close-minded population (\(\epsilon =0.2\)) yields the slowest time to convergence. Arc rewiring towards like-minded individuals and selection bias combined slow down convergence, especially in close-minded populations (i.e., \(\epsilon =0.2\)): we can see that even for a relatively small bias and a relatively small probability of arc rewiring, the steady state needs tens of thousands of iterations, while without arc rewiring less than one hundred would be enough.

Barabási–Albert network. In Fig. 3c, d, we can see that the same results that were drawn for the ER network also hold for scale-free networks, though, on the latter, fragmentation is higher on average and the same level of fragmentation arises for lower values of the bias. Similar conclusions can be drawn for the time to convergence. However, in the scale-free network, the peak is always reached for \(\gamma =1.2\), regardless of the value of \(p_r\). As \(p_r\) grows, the convergence slows down so much that the system can no longer reach a steady state, and for higher values of the bias, convergence is much faster, while in the ER network it is overall slower.

To sum up, we can say that the process of co-evolution of the network along with the diffusion of opinions in the population does not affect the final opinion distribution in terms of the number of opinion clusters. This is because, when there is no bias—or the bias is low—although many conflicting interactions happen, the process of opinion convergence is too fast with respect to the process of link rewiring to separate the network into many different opinion clusters and enhance fragmentation; when the bias is high, instead, although it already has a fragmenting effect, it also reduces the number of conflicting interactions and therefore slows down the process of network evolution even more, leaving the network structure practically unchanged when the population reaches its steady state. Despite not changing the final number of opinion clusters, the co-evolution impacts the network's topology, as we can observe from the examples in Figs. 5 and 6. In this case, we performed experiments with the same initial configuration of the Erdős–Rényi network (i.e., 250 nodes, p = 0.1, and uniformly distributed initial opinions). We stopped the simulations when no opinion change (nor arc rewiring) happened for 1000 consecutive iterations. In this case, we set \(\epsilon =0.2\) and we compared results for \(\gamma \in \{0.0, 0.5\}\) and \(p_r \in \{0.0, 0.5\}\). As we can observe, polarization occurs when the network is static, meaning that there are two opinion clusters in the population while the network structure remains unchanged. As we can observe from Fig. 5e, f, when rewiring is allowed, the two opinion clusters tend to separate into two different components or—at least—into two different communities on the network, with fewer and fewer inter-community links. After 100 iterations (Fig. 5e) there is still one connected component, but two polarized communities have started to form.
It is also worth noticing that, in the steady-state (Fig. 5f), every node is connected only to agents holding identical opinions, since there are three separated components, each holding perfect consensus. The long left tail of the degree distribution in Fig. 5h is due to the two-nodes component. If we do not consider that component, the final degree distribution is substantially similar to the distribution in Fig. 5h with a slightly lower variance. Introducing an algorithmic bias in the process slows down the convergence, as we can see from the example in Fig. 6e where—starting from the same initial configuration—the network does not present node clusters holding similar opinions after 100 iterations (while this was the case in the absence of bias). This is because bias skews interactions towards more like-minded nodes, further slowing down the process of arc rewiring by reducing the amount of discording encounters. We can see from Fig. 6f that, in this case, equilibrium is reached before two components could form on the network, but there are two well-separated communities, each holding a separate opinion. While a steady state in terms of opinion clusters may be reached within a few iterations, which are not enough to separate the network into different components, if we allow the process to go on until there are no possible rewirings—since every node is connected to agreeing nodes - the network eventually splits into two or three components when opinions are clustered. Even when the maximum number of iterations set in such simulations is not enough, we can still see that links between polarized opinion clusters are fewer and fewer over time. Example of the effects of the adaptive topology on the Adaptive Algorithmic Bias Model on the Erdős–Rényi graph with \(\gamma =0.0\). An example of the effects of the co-evolution of network structure and opinions in the Adaptive Algorithmic Bias model on the Erdős–Rényi graph for \(\epsilon =0.2\), \(p_r \in \{0.0, 0.5\}\) and \(\gamma =0.0\) Example of the effects of the adaptive topology on the Adaptive Algorithmic Bias Model on the Erdős–Rényi graph with \(\gamma =0.5\). An example of the effects of the co-evolution of network structure and opinions in the Adaptive Algorithmic Bias model on the Erdős–Rényi graph for \(\epsilon = 0.2\), \(p_r \in \{0.0,0.5\}\) and \(\gamma = 0.5\) Adaptive algorithmic bias model on simplicial complexes: peer pressure enhances consensus In the Adaptive Algorithmic Bias model on Simplicial Complexes, we introduced a simple form of higher-order interaction where three agents can influence each other—as a group—if they form a complete subgraph. Introducing higher-order interactions lets us model the phenomenon of peer pressure, where the majority edge pushes the minority node to conform to their ideology. If there is no minority opinion, we assume there would be an attractive dynamic similar to the one present in the binary case, i.e., the three nodes attract each other and adopt the mean opinion of the group. The main result from our simulations is that peer pressure promotes consensus and reduces fragmentation with respect to the binary counterpart. Besides this general conclusion, we can observe in Fig. 7 the model's behavior is different in the two chosen networks and that \(\gamma\) and \(p_r\) still play a role in shaping the final state of the population. Adaptive Deffuant–Weisbuch model on Simplicial Complexes on complex topologies. 
Before analyzing the effects of peer pressure and algorithmic biases on the Algorithmic Bias model, we briefly analyze the results for the Deffuant–Weisbuch model, i.e., \(\gamma =0.0\). We can observe in Fig. 8a that in the ER network, a perfect consensus is always reached, regardless of the level of bounded confidence and rewiring probability (the number of clusters is 1 in every execution of the model and the standard deviation of the final distribution is always 0). Results for \(\epsilon =0.4\) and standard deviation analysis can be found in Additional file 1. Also in the scale-free network (Fig. 8b), consensus is always reached, but it is not always perfect and depends on both the confidence bound \(\epsilon\) and the probability of rewiring \(p_r\). In a static network (\(p_r=0.0\)), introducing peer pressure reduces fragmentation: in the case of \(\epsilon =0.2\), for example, the baseline model would lead to polarization, on average. Changing the update rule to account for group interactions and social pressure reduces the level of fragmentation in the final state and leads almost the whole population to converge on a common opinion. Not surprisingly, increasing the confidence bound enhances consensus in the same way as in the baseline model. However, in this case, increasing the probability of rewiring reduces fragmentation, leading the population to a perfect consensus. In particular, we can see from Fig. 8b that perfect consensus is reached for \(p_r > 0.2\) in the case of \(\epsilon =0.2\), for \(p_r > 0.0\) in the case of \(\epsilon =0.3\), and always in the case of \(\epsilon =0.4\).

Average number of clusters in the steady-state for the Adaptive Algorithmic Bias model on Simplicial Complexes. Average number of clusters in the final state for the Adaptive Algorithmic Bias model on Simplicial Complexes as a function of \(\gamma\) and \(p_r\) for a \(\epsilon =0.2\), b \(\epsilon =0.3\) and c, d in a scale-free Barabási–Albert graph. These values are averaged over 30 runs.

Average number of clusters in the steady-state for the Adaptive Deffuant–Weisbuch bounded confidence model on Simplicial Complexes. The average number of clusters in the final state for the Adaptive Deffuant–Weisbuch bounded confidence model on Simplicial Complexes fixing \(\gamma =0.0\), as a function of \(\epsilon\) and \(p_r\) for a, b an Erdős–Rényi graph and c, d a scale-free Barabási–Albert graph. These values are averaged over 30 runs.

Erdős–Rényi network. Simulating the Algorithmic Bias model on Simplicial Complexes on the chosen ER graph, for \(\epsilon =0.2\) we can see how the population always reaches a consensus for low values of the bias, while for \(\gamma \ge 1.2\), peer pressure is not enough to stop the population from polarizing into two opposing clusters. However, if compared with the same results with only pairwise interactions (Fig. 3), it is clear that fragmentation is strongly reduced. For \(\epsilon =0.3\), the qualitative dynamic remains the same. However, the average number of clusters is overall reduced due to a higher open-mindedness of the population, and the population splits into two clusters only in a few cases; in most simulations, a majority cluster forms along with a few smaller ones. When the population is open-minded, i.e., \(\epsilon \ge 0.4\), consensus is always reached around the mean opinion (i.e., 0.5). The only effect of a higher algorithmic bias is that a few agents cannot converge into the main cluster.
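As a reference for the group rule used by the AABSC model (cases a–c of Fig. 2), the sketch below implements one triadic interaction. Testing agreement with the confidence bound \(\epsilon\) is our interpretation of the description above, and the rewiring of a discording arc (case d of Fig. 2) is assumed to follow the pairwise rule of Fig. 1, so it is not repeated here.

```python
def triangle_step(opinions, i, j, z, eps=0.2):
    """One triadic interaction of the Adaptive Algorithmic Bias model on
    simplicial complexes: a minority node adopts the mean opinion of the
    agreeing majority; if all three agree, they adopt the group mean; if
    there is no clear majority, nothing happens here (see Fig. 2)."""
    trio = [i, j, z]
    agree = {frozenset((a, b)): abs(opinions[a] - opinions[b]) < eps
             for a in trio for b in trio if a != b}
    if all(agree.values()):                           # case (b): no minority
        mean = sum(opinions[n] for n in trio) / 3.0
        for n in trio:
            opinions[n] = mean
        return
    for n in trio:                                    # case (a): look for a minority
        others = [m for m in trio if m != n]
        if agree[frozenset(others)] and not any(
                agree[frozenset((n, m))] for m in others):
            opinions[n] = sum(opinions[m] for m in others) / 2.0
            return
    # cases (c)/(d): no clear majority -> no group update in this sketch

# Example: node 2 (0.75) disagrees with the agreeing pair 0, 1 (0.40, 0.45).
x = {0: 0.40, 1: 0.45, 2: 0.75}
triangle_step(x, 0, 1, 2)
print(x)  # {0: 0.40, 1: 0.45, 2: 0.425} -> the minority adopts the majority mean
```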
Introducing the possibility of rewiring towards a more like-minded individual after a conflicting interaction enhances polarization when combined with a mild or high selection bias (i.e., \(\gamma > 0.8\)). The population converges into two or three clusters when the confidence threshold is low (either two polarized clusters or two polarized clusters and a moderate one). When the population is mildly open-minded (\(\epsilon =0.3\)), the system converges into one or two clusters (either two polarized clusters or a moderate cluster). At the same time, it always reaches a consensus for higher values of the confidence bound (around the mean opinion). As we did for the previous model, we also analyzed the average time to convergence. From Fig. 9 we can see that a higher bias slows down convergence, as in the binary model; it is slowed down so much that the population cannot reach a steady state within the imposed time interval. While in the model by Sîrbu et al. (2019) the level of open-mindedness did not play a crucial role in the time to convergence, in this case we can see that increasing the open-mindedness of the population also means a faster convergence towards an equilibrium.

Average number of iterations at convergence for the Adaptive Algorithmic Bias model on Simplicial Complexes. The average number of iterations at convergence for the Adaptive Algorithmic Bias model on Simplicial Complexes as a function of \(\gamma\) and \(p_r\) for a \(\epsilon =0.2\) and b \(\epsilon =0.3\) in an Erdős–Rényi graph and c, d in a scale-free Barabási–Albert graph. These values are averaged over 30 runs.

Barabási–Albert network. In the scale-free network, the model's behavior is slightly different: a higher probability of arc rewiring seems to reinforce consensus. We can see that when \(p_r=0.0\), the number of clusters in the final opinion distribution is higher as \(\gamma\) grows, and this small fragmentation is reduced as \(p_r\) grows. For example, in the case of a close-minded population, i.e., \(\epsilon =0.2\), we can see that, without rewiring, a consensus is possible as long as the algorithmic bias is not very strong. However, it is not a perfect consensus (unlike in the ER network): there is a major cluster coexisting with many agents scattered across the opinion space. Moreover, such a cluster does not necessarily form around the mean opinion but can be pretty extreme (with the final consensus below 0.2 or above 0.7). For \(\gamma = 1.2\), the population becomes polarized: two homogeneous and opposed clusters form, and, in some cases, there are a few "outlier" agents around the mean opinion or further at the extremes. Finally, for \(\gamma = 1.6\), the population splits into multiple clusters: still, in most cases, two major polarized clusters form, alongside a variety of minor clusters below, between, and above the two. Two cohesive groups coexist with a population of individuals scattered across the opinion space, so that the final distribution is not so different from the initial one: multiple opinions are still present in the population and cover the whole range [0,1]. Raising \(p_r\) to 0.1 prevents fragmentation, but for strong biases the population polarizes. For \(p_r \ge 0.2\) consensus is always reached. However, as in the baseline case (without rewiring), consensus does not necessarily form around the center of the opinion space but can vary and form on strongly extreme opinions. An increase in open-mindedness also counters the fragmenting effects of the algorithmic bias.
The average number of clusters reduces as \(\epsilon\) grows, all other parameters being equal. In the case of a highly open-minded population, i.e., \(\epsilon =0.4\), consensus can be prevented only with an extreme algorithmic bias (\(\gamma =1.6\)) and without the possibility of arc rewiring. Moreover, in the scale-free topology, convergence is faster than in the ER network. As we can see from Figs. 10 and 11, since in this case consensus is enhanced by peer pressure and triadic interactions, which also speed up convergence, opinions reach a steady state before the topology of the network can impact the process. We can observe that neither opinions nor nodes segregate during the process. Figure 10 shows that in the absence of an algorithmic bias, consensus can be reached within a few iterations, even with a low confidence bound. Figure 11 shows how introducing an algorithmic bias does not prevent the population from reaching consensus but slows down the process, even with the help of peer pressure mechanisms. Comparing \(\gamma =0.0\) and \(\gamma =0.5\) after 10 iterations, we can see how in the first case the population has already reached a consensus, while in the second, two opinion clusters are still present in the network. Due to the fast convergence towards consensus, even if rewiring is allowed, it does not significantly impact the network structure, as shown in Figs. 10e–h and 11e–h.

Example of the effects of the adaptive topology on the Algorithmic Bias Model on Simplicial Complexes on the Erdős–Rényi graph. An example of the effects of the adaptive topology on the Algorithmic Bias Model on Simplicial Complexes on the Erdős–Rényi graph for \(\epsilon =0.2\), \(p_r \in \{0.0,0.5\}\) and \(\gamma = 0.0\). The convergence towards consensus is faster and is always reached before the network can cluster around different opinions.

Example of the effects of the adaptive topology on the Algorithmic Bias Model on Simplicial Complexes on the Erdős–Rényi graph. An example of the effects of the adaptive topology on the Algorithmic Bias Model on Simplicial Complexes on the Erdős–Rényi graph for \(\epsilon =0.2\), \(p_r \in \{0.0,0.5\}\) and \(\gamma = 0.5\). Bias slightly slows down the convergence process.

Algorithmic bias is an existing factor affecting several (online) social environments. Since interactions occurring among agents embedded in such realities are far from being easily approximated by a mean-field scenario, in our study we aimed to understand the role played by alternative network topologies on the outcome of biased opinion dynamics simulations. From our study in Pansanella et al. (2022), it emerged that the qualitative dynamics of opinions remain substantially in line with what was observed assuming a mean-field context: an increase in the confidence bound \(\epsilon\) favors consensus, while introducing the algorithmic bias \(\gamma\) hinders it and favors fragmentation. Conversely, both the simulations' time to convergence and opinion fragmentation appear to increase as the topology becomes sparser and hubs emerge. Therefore, our analysis underlines that, alongside the algorithmic bias, the network's density heavily affects the degree of consensus reachable, assuming a population of agents with the same initial opinion distribution. The present work extends the work in Pansanella et al. (2022), proposing two extensions of the model and analyzing such extensions on the same two complex networks as in Pansanella et al.
(2022), leaving out the complete network. The first extension considers a straightforward mechanism of arc rewiring, so that the underlying structure co-evolves with the opinion dynamics, generating the Adaptive Algorithmic Bias model. The second adds a peer-pressure mechanism, considering triangles as simplicial complexes, where a majority—if there is one—can attract a disagreeing node, pushing it to conform. We found that—in general—the role of bounded confidence and algorithmic bias remains the same as in the baseline models, with the former enhancing consensus and the latter enhancing fragmentation. Going from a static to an underlying adaptive topology does not strongly affect the dynamics, leading to the same number of opinion clusters in the steady state. However, if we allow the agents to continue interacting with each other, opinion clusters eventually lead to the formation of mesoscale network clusters, finally separating the network into different connected components. On the other hand, peer pressure enhances consensus, reducing the effects of low bounded confidence and high algorithmic biases. Such a model suggests how different sociological and topological factors interact with each other, thus leading populations towards polarization and echo chamber phenomena, contributing to the creation and maintenance of inequalities on social networks. These models can also be employed to study different phenomena besides opinion diffusion, such as the effects of peer pressure on the adoption of different behaviors where social network structure and psychological factors play a role. The present work presents some of the limitations already considered in Sîrbu et al. (2019) while overcoming others. The existence of bounded confidence, for example, and the fact that it is constant across the population are assumptions that should be empirically validated, along with the role of algorithmic filtering in influencing the path towards polarization/fragmentation. We went beyond the concept of static networks by considering an adaptive topology; however, to further investigate the role of arc rewiring, a more thorough analysis of the effects of the model's parameters on the network's topology should be made. The present implementation of a rewiring mechanism is just one way to incorporate the fact that users hardly know their neighbors' state before interacting with them; a mechanism considering only the set of agents with opinions within the confidence threshold would, however, be a useful comparison to the present model. Moreover, to better understand the role of homophily in friendship formation and its relation to the online social network environment—and, in particular, to the role of the recommender system and therefore of algorithmic bias—a biased mechanism simulating "link recommendations" could be implemented, as in Kan et al. (2021). Finally, the importance of social interactions in opinion formation is undeniable; however, external media can be essential in polarizing opinions or driving the population towards consensus. For this reason, we feel their role needs to be further investigated while embedded in an algorithmically biased environment.

Availability of data and materials

The datasets analyzed during the current study are synthetic networks. The implementation of the introduced models is available in the NDlib (Network Diffusion Library) Python library: https://ndlib.readthedocs.io/.
Note that the empirical average degree slightly deviates from the expected asymptotic value (\(\langle k \rangle =2m=10\)) due to statistical fluctuations introduced by the random seed used by the generative process. NDlib: http://ndlib.rtfd.io DW model: Deffuant–Weisbuch model AB model: Algorithmic bias model ER network: Erdős–Rényi network BA network: Barabási–Albert network AABM: Adaptive Algorithmic bias model AABSC model: Adaptive algorithmic bias model on simplicial complexes Anderson A, Maystre L, Anderson I, Mehrotra R, Lalmas M (2020) Algorithmic effects on the diversity of consumption on Spotify. In: Proceedings of the web conference 2020 Asch SE (1956) Studies of independence and conformity: I. A minority of one against a unanimous majority, vol 70. American Psychological Association, Washington, p 1 Barabási AL, Albert R (1999) Emergence of scaling in random networks. Science 286(5439):509–512 Battiston F, Cencetti G, Iacopini I, Latora V, Lucas M, Patania A et al (2020) Networks beyond pairwise interactions: structure and dynamics. Elsevier, Amsterdam Benson B (2016) Cognitive bias cheat sheet. https://betterhumans.pub/cognitive-bias-cheat-sheet-55a472476b18 Bessi A, Zollo F, Vicario MD, Puliga M, Scala A, Caldarelli G et al (2016) Users polarization on Facebook and YouTube. PLoS ONE 11:e0159641 Castellano C, Muñoz MA, Pastor-Satorras R (2009) Nonlinear q-voter model. Phys Rev E 80(4):041129 Chen J, Geyer W, Dugan C, Muller M, Guy I (2009) Make new friends, but keep the old: recommending people on social networking sites. In: Proceedings of the SIGCHI conference on human factors in computing systems, pp 201–210 Cinelli M, Quattrociocchi W, Galeazzi A, Valensise CM, Brugnoli E, Schmidt AL et al (2020) The COVID-19 social media infodemic. Sci Rep 10:1–10 Conover M, Ratkiewicz J, Francisco M, Gonçalves B, Menczer F, Flammini A (2011) Political polarization on twitter. In: Proceedings of the international AAAI conference on web and social media, vol 5, pp 89–96 Conte R, Gilbert N, Bonelli G, Cioffi-Revilla C, Deffuant G, Kertész J et al (2012) Manifesto of computational social science. Eur Phys J Spec Top 214:325–346 Deffuant G, Neau D, Amblard F, Weisbuch G (2001) Mixing beliefs among interacting agents. Adv Complex Syst 3:11 Degroot M (1974) Reaching a consensus. J Am Stat Assoc 69:118–121 Drummond C, Fischhoff B (2017) Individuals with greater science literacy and education have more polarized beliefs on controversial science topics. Proc Natl Acad Sci 114:9587–9592 Erdős P, Rényi A (1959) On random graphs. I. Publ Math 6:290–297 Festinger L (1957) A theory of cognitive dissonance, vol 2. Stanford University, Redwood City Fiorina MP, Abrams SJ (2008) Political polarization in the American public. Annu Rev Polit Sci 11:563–588 Fortunato S (2004) Universality of the threshold for complete consensus for the opinion dynamics of Deffuant et al. Int J Mod Phys C 15(09):1301–1307 Friedkin NE, Johnsen E (1990) Social influence and opinions. J Math Sociol 15:193–206 Friedkin N, Johnsen E (1999) Social influence networks and opinion change. Adv Group Process 01:16 Galam S (2002) Minority opinion spreading in random geometry. Eur Phys J B Condens Matter Complex Syst 25:403–406 Hague BN, Loader BD (1999) Digital democracy: an introduction. Discourse and decision making in the information age. Digital Democracy, Routledge, pp 3–22 Haun DBM, Tomasello M (2011) Conformity to peer pressure in preschool children.
Child Dev 82(6):1759–67 Hickok A, Kureh YH, Brooks HZ, Feng M, Porter MA (2022) A bounded-confidence model of opinion dynamics on hypergraphs. ArXiv. 2022;abs/2102.06825 Hills TT (2019) The dark side of information proliferation. Perspect Psychol Sci 14:323–330 Hogan EA (2001) The attention economy: understanding the new currency of business. Acad Manag Perspect 15:145–147 Holley RA, Liggett TM (1975) Ergodic theorems for weakly interacting infinite systems and the voter model. Ann Probab 3(4):643–663 Holme P, Newman MEJ (2006) Nonequilibrium phase transition in the coevolution of networks and opinions. Phys Rev E 74(5):056108 Horstmeyer L, Kuehn C (2020) Adaptive voter model on simplicial complexes. Phys Rev E 101(2):022305 Iñiguez G, Kertész J, Kaski KK, Barrio RA (2009) Opinion and community formation in coevolving networks. Phys Rev E 80(6):066119 Iyengar S, Hahn KS (2009) Red media, blue media: evidence of ideological selectivity in media use. J Commun 59:19–39 Kan U, Feng M, Porter MA (2021) An adaptive bounded-confidence model of opinion dynamics on networks. ArXiv. 2021;abs/2112.05856 Knobloch-Westerwick S, Mothes C, Polavin N (2020) Confirmation bias, ingroup bias, and negativity bias in selective exposure to political information. Commun Res 47:104–124 Kozma B, Barrat A (2008) Consensus formation on coevolving networks: groups' formation and structure. J Phys A 41:224020 Lorenz J (2010) Heterogeneous bounds of confidence: meet, discuss and find consensus! Complexity 15(4):43–52 Maes M, Bischofberger L (2015) Will the personalization of online social networks foster opinion polarization? Available at SSRN 2553436 McCarty N (2019) Polarization: what everyone needs to know®. Oxford University Press, Oxford McPherson M, Smith-Lovin L, Cook JM (2001) Birds of a feather: homophily in social networks. Ann Rev Sociol 27(1):415–444 Pansanella V, Rossetti G, Milli L (2022) From mean-field to complex topologies: network effects on the algorithmic bias model. In: Gaito S, Quattrociocchi W, Sala A (eds) Complex networks and their applications X. Springer, Berlin, pp 329–340 Pariser E (2011) The filter bubble: what the Internet is hiding from you. Penguin UK, London Peralta AF, Kertész J, Iñiguez G (2021b) Opinion formation on social networks with algorithmic bias: dynamics and bias imbalance. IOP Publishing, Bristol Peralta AF, Kertész J, Iñiguez G (2022) Opinion dynamics in social networks: from models to data. arXiv:2201.01322 Peralta AF, Neri M, Kertész J, Iñiguez G (2021a) Effect of algorithmic bias and network structure on coexistence, consensus, and polarization of opinions. APS, New York Perra N, Rocha LEC (2019) Modelling opinion dynamics in the age of algorithmic personalisation. Sci Rep 9:1–11 Rossetti G, Milli L, Rinzivillo S, Sîrbu A, Pedreschi D, Giannotti F (2018) NDlib: a python library to model and analyze diffusion processes over complex networks. Int J Data Sci Anal 5(1):61–79 Sasahara K, Chen W, Peng H, Ciampaglia GL, Flammini A, Menczer F (2019) Social influence and unfollowing accelerate the emergence of echo chambers. J Comput Soc Sci 4:381–402 Sîrbu A, Pedreschi D, Giannotti F, Kertész J (2019) Algorithmic bias amplifies opinion fragmentation and polarization: a bounded confidence model. PLoS ONE 14(3):e0213246 Stauffer D, Meyer-Ortmanns H (2004) Simulation of consensus model of Deffuant et al. on a Barabasi–Albert network. Int J Mod Phys C 15(02):241–246 Sunstein CR (2007) Republic.Com 2.0. 
Princeton University Press, Princeton Sznajd-Weron K, Sznajd J (2000) Opinion evolution in closed community. HSC Research Reports The Polarization Index. https://thepolarizationindex.com/ Vicario MD, Vivaldo G, Bessi A, Zollo F, Scala A, Caldarelli G et al (2016) Echo chambers: emotional contagion and group polarization on Facebook. Sci Rep 6:1–12 Weisbuch G (2004) Bounded confidence and social networks. Eur Phys J B 38:339–343 This work was supported by the scheme 'INFRAIA-01-2018-2019: Research and Innovation action', Grant Agreement n. 871042 'SoBigData++: European Integrated Infrastructure for Social Mining and Big Data Analytics, by the CHIST-ERA Grant CHIST-ERA-19-XAI-010, by MUR (Grant No. not yet available), FWF (Grant No. I 5205), EPSRC (Grant No. EP/V055712/1), NCN (Grant No. 2020/02/Y/ST6/00064), ETAg (Grant No. SLTAT21096), BNSF (Grant No. o КП-06-ДОО-2/5) and by the HumaneAI-Net project, Grant Agreement No. 952026. Faculty of Science, Scuola Normale Superiore, Pisa, Italy Valentina Pansanella Institute of Information Science and Technologies "Alessandro Faedo" (ISTI), National Research Council (CNR), Pisa, Italy Giulio Rossetti & Letizia Milli Department of Computer Science, University of Pisa, Pisa, Italy Letizia Milli Giulio Rossetti All the authors discussed and designed the proposed opinion dynamics models; VP wrote the code and executed the experiments. All authors read and approved the final manuscript. Correspondence to Valentina Pansanella. Additional figures for the average number of clusters and the average number of iterations at convergence with standard deviation values for the two models introduced in the present work. Pansanella, V., Rossetti, G. & Milli, L. Modeling algorithmic bias: simplicial complexes and evolving network topologies. Appl Netw Sci 7, 57 (2022). https://doi.org/10.1007/s41109-022-00495-7 Opinion dynamics Special Issue of the 10th International Conference on Complex Networks and their Applications
Single image super resolution based on multi-scale structure and non-local smoothing

Wenyi Wang (ORCID: orcid.org/0000-0003-1619-0294), Jun Hu, Xiaohong Liu, Jiying Zhao & Jianwen Chen

In this paper, we propose a hybrid super-resolution method by combining global and local dictionary training in the sparse domain. In order to present and differentiate the feature mapping at different scales, a global dictionary set is trained at multiple structure scales, and a non-linear function is used to choose the appropriate dictionary to initially reconstruct the HR image. In addition, we introduce Gaussian blur to the LR images to eliminate a widely used but inappropriate assumption that the low-resolution (LR) images are generated by bicubic interpolation from high-resolution (HR) images. In order to deal with Gaussian blur, a local dictionary is generated and iteratively updated by K-means principal component analysis (K-PCA) and gradient descent (GD) to model the blur effect during the down-sampling. Compared with the state-of-the-art SR algorithms, the experimental results reveal that the proposed method can produce sharper boundaries and suppress undesired artifacts in the presence of Gaussian blur. This implies that our method could be more effective in real applications and that the HR-LR mapping relation is more complicated than bicubic interpolation. The problem of enlarging images to a bigger spatial size is regarded as image super-resolution (SR), which builds the mathematical relation between the low-resolution (LR) image and the high-resolution (HR) image. Although image SR is a frequent manipulation in image processing, this problem still remains challenging because it is under-constrained and there is no closed form without extra constraints. The concept of image SR was first proposed and studied by Tsai and Huang in the 1980s [1]. Over the past three decades, a variety of SR algorithms and models have been developed to deal with this problem. Among them, single-image super-resolution (SISR) is an important branch, which enlarges an image based on the image itself as the observation. In general, the existing SISR algorithms can be classified into three categories: interpolation-based, reconstruction-based, and learning-based SR. The interpolation-based SR usually utilizes fixed functions [2] or adaptive structure kernels [3, 4] to predict the missing pixels in the HR grid. It assumes that the LR observations are degraded by down-sampling, and that the unknown HR pixels can be estimated from their observed neighbors. Nowadays, the interpolation-based methods are often used as the comparison baseline. However, considerable blurring and aliasing artifacts are often inevitable in the up-scaled images. Different from the interpolation-based SR, the reconstruction-based methods [5, 6] refine the observation model. In addition to sampling, reconstruction-based SR assumes that the LR image is obtained by a series of degradations: down-sampling, blurring, and additive noise. Using different prior assumptions, this family of approaches is capable of enhancing the features of low-resolution images through a regularized cost function. As a result, they are able to produce sharper edges and clearer textures while removing the undesired artifacts. However, these methods are usually inadequate in producing novel details and perform unsatisfactorily under high scaling factors.
Compared with the aforementioned methods, the learning-based SR is generally superior since it is capable of generating convincing novel details that are almost lost in the low-resolution image. Basically, these algorithms and models exploit the prior texture knowledge from extensive sample images to learn the underlying mapping relations between LR and HR images. During the past decades, numerous mapping formulations have been designed; the most representative methods include neighbor-based SR, regression-based SR, sparsity-based SR, and the ones using deep neural networks. Compared with neighbor-based [7] and regression-based SR [8], which heavily rely on the quality and the size of sample images, the sparsity-based SR is capable of learning more compact dictionaries based on signal sparse representation. In this case, it has been widely studied for its superior performance in producing clear images with low computational complexity. Based on this idea, Yang et al. proposed a classic sparse model by training a joint dictionary pair in the sparse domain for image reconstruction [9]. Subsequently, they introduced a coupled dictionary training approach with two acceleration schemes which overcame the sparse coding bottleneck [10]. With a similar framework to [8], Zeyde et al. [11] provided refinements in dictionary training and image optimization, improving both efficiency and performance. Timofte et al. proposed an anchored neighborhood regression (ANR) model to reconstruct the LR image via neighbor-based dictionaries in a fast way and also introduced the global regression (GR) model for some extreme cases [12]. They subsequently introduced the A+ [13] method, which improves the SR performance by combining ANR with SF. Shi et al. further used the anchored neighborhood as the image prior in a deep network [14]. However, [15] implies that a large dictionary may cause unstable HR restoration. Hence, the concept of sub-dictionary has been widely used. Dong et al. proposed an adaptive sparse domain selection (ASDS) model with PCA-based sub-dictionary training [16]. Afterwards, they further applied this method to their non-locally centralized sparse representation (NCSR) model [17], which has turned out to be one of the state-of-the-art SR algorithms that can deal with the blur effect during the down-sampling. In addition, based on Yang's framework in [9], Zhang et al. also applied PCA-based clustering to the sample image patches, so that multiple mapping relations can be trained [18]. In recent years, deep neural networks have shown powerful capabilities in various computer vision tasks and brought significant benefits to image SR. To the best of our knowledge, Dong et al. [19] first proposed a deep learning-based SR method based on the CNN architecture. Afterwards, Kim et al. proposed to use a much deeper residual network to achieve superior performance [20]. In addition, Kim's method was capable of dealing with different enlargement ratios by expanding the training dataset to include LR-HR patch pairs with different scaling factors. Although the aforementioned learning-based methods performed well on their test datasets, there was often an underlying assumption that the LR image was directly down-sampled from the HR image by bicubic interpolation, which might not be the actual case in real scenarios, where the blur effect can occur along with the down-sampling.
Moreover, there is no sufficient evidence proving that the learned model is still valid if it is trained under the bicubic interpolation assumption. In this paper, a novel learning-based SR method is proposed based on hybrid dictionary training in the sparse domain [21]. In our proposed algorithm, multiple global dictionary pairs are trained from a large natural image dataset. Each global dictionary pair is trained to reveal the general mapping relation between the LR and the HR images under different scaling factors. In addition to the global dictionaries trained from a general dataset, we also predict a local dictionary along with a self-similarity metric based on the input LR image. Since the local dictionary is more consistent with the input image, the reconstructed HR result can be more robust and the blur effect during the down-sampling can be suppressed. Given the dictionaries and an LR image, we iteratively enhance the details of the LR image. In each iteration, the appropriate global dictionary is chosen according to the current LR image quality. Afterwards, the undesired artifacts are suppressed by a self-similarity prior and non-locally centralized constraints based on the local dictionary. The main contributions of this paper are as follows: (1) we combine the global and local dictionaries to estimate the HR image, so that the result can benefit from the large training dataset and the blur effect during the down-sampling can also be suppressed; (2) we apply multiple global dictionaries from multi-scale structures to iteratively improve the HR image quality. The remainder of this paper is organized as follows. In Section 2, we will provide a brief introduction to the related works on sparsity-based SR and dictionary training methodologies. In Section 3, our proposed SR method will be described in detail, including the multi-scale global dictionary training, the K-PCA-based local dictionary training, and the single image super-resolution based on the hybrid dictionaries. In Section 4, the results and comparisons will be presented to show that our proposed method can generate state-of-the-art HR images in the presence of Gaussian blur. Finally, the conclusion will be drawn in Section 5. Based on the observed LR image, single image super-resolution can be modeled by Eq. (1): $$ \boldsymbol{y}=\boldsymbol{H}\boldsymbol{x}+\boldsymbol{v}, $$ where H is the degradation operator that combines down-sampling and blurring, x is the HR image, v is the additive noise, and y is the observed LR image.

Image SR based on sparse representation

Since the problem of solving x from Eq. (1) is ill-posed, the HR recovery from the LR image is uncertain and artifacts will be introduced. In this case, researchers proposed and studied different regularization terms such as the total variation (TV) [22, 23], the non-local similarity [24], and the sparsity representation [25, 26]. The total variation suppresses the artifacts at the expense of over-smoothing the texture. In order to maintain discontinuities or spatially inhomogeneous regions, the sparsity-based regularization is adopted in many recent single image super-resolution methods [9, 16, 17, 27]. According to the sparse representation, the observation model in Eq. (1) can be re-written as: $$ y_{i}=\boldsymbol{H}D\alpha_{i}+v_{i}, $$ where xi=Dαi is a patch of the high-resolution image, D is called the dictionary, αi is the sparse code of patch xi, yi is the observed low-resolution patch corresponding to xi, and vi represents the additive noise.
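To make the observation model concrete, the following sketch synthesizes an LR observation from an HR image as Gaussian blurring followed by decimation (and optional noise), i.e., one plausible instantiation of the operator H in Eqs. (1)–(2); the blur and noise values are illustrative, not the exact settings used in our experiments.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def degrade(x_hr, scale=3, blur_sigma=1.6, noise_std=0.0):
    """Generate an LR observation y = Hx + v from an HR image x_hr.

    H is modelled as Gaussian blurring followed by plain decimation by
    'scale'; v is optional zero-mean Gaussian noise.
    """
    blurred = gaussian_filter(x_hr.astype(float), sigma=blur_sigma)
    y = blurred[::scale, ::scale]  # down-sampling on the pixel grid
    if noise_std > 0:
        y = y + np.random.normal(0.0, noise_std, size=y.shape)
    return y

# Example: a 96x96 synthetic HR image reduced to 32x32.
x = np.random.rand(96, 96)
print(degrade(x, scale=3).shape)  # (32, 32)
```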
Based on the observation model, the sparsity-based image super-resolution can be formulated by Eq. (3): $$ {\boldsymbol{\alpha}_{\boldsymbol{y}}}=\underset{\alpha}{\mathrm{arg\ min}} \left\{\|\boldsymbol{y}-\boldsymbol{H}\boldsymbol{D}\boldsymbol{\alpha} \|^{2}_{2} +\lambda R(\boldsymbol{\alpha})\right\}, $$ where \(\|\boldsymbol {y}-\boldsymbol {H}\boldsymbol {D}\boldsymbol {\alpha } \|^{2}_{2}\) is called the fidelity term, R(α) is the sparsity based regularization term, and λ is the Lagrange multiplier which controls the balance between the fidelity and the regularization. In order to reliably represent the image by using sparse code α, the dictionary choice becomes a critical issue. Generally, the dictionaries can be classified into two categories: analytical dictionary and learning-based dictionary. Analytical dictionaries such as the ones from DCT or Haar wavelet are easily generated, but they are not adaptive to diverse images. Learning-based dictionary is trained according to information from real natural images. Therefore, it contains more comprehensive characteristics which make the reconstruction precise. The learning-based dictionaries can be further divided into global dictionary and local dictionary. The global dictionary extracts texture from a large set of natural images which have abundant details. Therefore, it may reconstruct the details, which are almost lost in LR image, based on the experience drawn from other images. However, there are still risks to completely rely on global dictionary. The LR-HR mapping relation in global dictionary is learned from large scale database, which has its own LR-HR scaling method. Given an image, the HR estimation may fail if its scale and blur model does not fit with the image pairs in the database. This problem could be a worthy consideration especially when most of the learning based SR methods built their training dataset with the assumption that LR images is generated from HR ones by bicubic interpolation. On the other hand, the local dictionary is trained based on the observed LR image itself by using self-similarity and the feature statistics. Therefore, it is possible to use the local dictionary as supplementary information to enhance the SR image quality by handling the problem that LR image suffers from different degradation to the ones in training dataset. Initial SR image for local dictionary training As aforementioned, the quality of SR image estimated by global dictionary could be improved by using local dictionary to suppress the blur effect not learned from the global training dataset, and vice versa—the SR image from the global learning, which can be regarded as an initial guess or the side information to a local dictionary based method, can affect the final estimation of the high resolution image. In order to verify this, a simple example is given as follows. First, an initial HR estimation is made for an LR image by using 5 different SR algorithms: bicubic, bilinear, nearest neighbor, global regression (GR) [12], and SC-SR [9]. Based on these initial guess, the HR estimation is refined by applying the same local procedure—K-PCA and non-locally centralized sparse representation (NSCR) [17]. Table 1 presents the final HR estimation by using the local dictionaries trained from different initial HRs. It can be observed from Table 1 that the final HR estimation would be better if the initial HR value was better generated. Similar phenomenon was also found and analyzed in another recent paper [28]. 
Based on this observation, [28] proposed to use a ridge regression based method to produce the initial HR image. In this paper, we propose to update the initial HR values in each iteration of the SR processing, so that the final result can be further improved. The details of our proposed SR method will be introduced in the following section.

Table 1 PSNR(dB)/SSIM values of the SR results by using K-PCA and non-locally centralized sparse representation [17] with different initial HR values

Methods: image super-resolution based on multi-scale structure and non-local mean (MSNM-SR)

According to recent research on single image super-resolution (SR), sparse representation has shown advantages in recovering discontinuous and inhomogeneous image regions [9, 16, 17, 27, 29]. Therefore, our proposed SR method still uses sparse coding to represent the image features. As we mentioned in Section 2, the global dictionary is beneficial in providing comprehensive image structure, while the local dictionary is more relevant to the image to be enhanced. In this case, we propose our SR method that utilizes the global and local dictionaries together to generate high-resolution images with clear texture and suppressed blurring artifacts.

Overview of our proposed SR method

The flowchart of our proposed single image super-resolution is presented in Fig. 1. First, a set of global dictionary pairs \(\left \{D^{H}_{i},D^{L}_{i}|i=1.1, 1.2, \dots, 4\right \}\) is trained from a large number of natural images. Here, i represents the upscaling ratio from LR to HR images, and \(D^{H}\) and \(D^{L}\) represent the HR and LR image dictionaries, respectively. These global dictionary pairs are generated based on the assumption that LR and HR images share the same sparse representation. By training multiple dictionary pairs, multi-scale mapping relations between HR and LR images can be established.

Flowchart of the proposed SR method. s is the size ratio between the output HR and the input LR; si is the image magnification factor in the ith iteration, and it determines the choice of the global dictionaries \(D^{H}_{{si}}\) and \(D^{L}_{{si}}\); \(D^{0}_{{si}}\) is the local dictionary trained from the image itself

Given a low resolution image \(I_{\text {LR}} \in \mathbb {R}^{m\times n}\) and the scale factor s, a high resolution image \(I_{\text {HR}} \in \mathbb {R}^{sm\times sn}\) will be gradually generated by our proposed SR method. First, the magnification factor si is initialized as s1=s. According to the value of si, the corresponding global dictionary pair \(\left \{D^{H}_{s_{i}},D^{L}_{s_{i}}\right \}\) is used to magnify the low resolution image. In order to suppress the artifacts and the noise introduced by sparse representation, a local dictionary \(\left \{D^{0}_{s_{i}}\right \}\) is generated. Since \(\left \{D^{0}_{s_{i}}\right \}\) is constructed based on the self-information of the image, this dictionary is more consistent with the image content. Based on \(\left \{D^{0}_{s_{i}}\right \}\), a sparse fidelity term and a non-local smoothing term are used as the constraints, so that the structure of the reconstructed HR image is similar to that of the original input image. Afterwards, the magnification factor si is updated according to a blind image quality estimation function f(HRc), where HRc is the current estimated HR image. The HR image is iteratively updated until the function f(HRc) converges.
The detailed descriptions of our proposed SR method will be introduced as follows. Global dictionary training based on multi-scale image structures According to Section 2.2, the initial value significantly affects the quality of the final HR image in local dictionary-based SR method such as NCSR. Compared with using the NCSR's default initial value, which is generated by bicubic interpolation, the quality of the estimated HR image can be significantly improved if the initial values are better generated by other SR methods. Therefore, we propose to train global dictionaries from large dataset within multiple scaling factors to better generate the initial guess of the HR image. In global dictionary-based sparse image representation, it is often assumed that the same image patch should have the same sparse code in different resolutions. Given an LR image Ilr and a dictionary trained Dlr from LR image dataset, the sparse codes of image patches in Ilr can be estimated. According to the assumption that the corresponding HR image Ihr shares the same sparse code with Ilr, we can reconstruct the high resolution image from the low-resolution one if an appropriate high resolution dictionary Dhr is also available. It is obvious that the most important step is to find out the dictionary pair Dhr and Dlr that can reliably represent the HR image and its LR version with the same sparse code. Given a large high-resolution training dataset \(S_{{hr}}=\left \{I_{hr1}, I_{hr2}, \dots, I_{{hrn}}\right \}\) with clear natural images Ihri, the low-resolution training dataset \(S_{{lr}}=\left \{I_{lr1}, I_{lr2}, \dots, I_{{lrn}}\right \}\) is generated by applying Gaussian blurring, down-sampling, and bicubic scaling to the same size as the images Ihri in Shr. Afterwards, the images in Slr and Shr are decomposed into patch sets \(P_{{lr}}=\{p_{lr1}, p_{lr2}, \dots, p_{{lrm}}\}\) and \(P_{{hr}}=\{p_{hr1}, p_{hr2}, \dots, p_{{hrm}}\}\), where m is the number of patches extracted from the dataset. In order to guarantee the dictionary a good representation of viewing-sensitive textures, we represent the image by its high-frequency component other than the original image. Similar with [11], the features in HR patch (phri) are extracted by subtracting the corresponding LR from the original HR, while the features in LR patch (plri) are extracted by using first- and second-order gradient filters. Afterwards, two training matrix can be generated: HR training matrix (\({X_{{hr}}}=[x_{hr1}, x_{hr2}, \dots, x_{{hrm}}]\)) and LR training matrix (\({X_{{lr}}}=[x_{lr1}, x_{lr2}, \dots, x_{{lrm}}]\)), where each x is a column vector reshaped from one training patch. Given Xhr and Xlr, the high- and low-resolution dictionaries can be estimated by Eqs. (4) and (5), respectively: $$\begin{array}{@{}rcl@{}} D_{h}&=&\underset{D_{h},\alpha}{\mathrm{arg\ min}} \left\{\|X_{{hr}}-D_{h}\alpha \|^{2}_{2}+\lambda\|\alpha\|_{1}\right\}, \end{array} $$ $$\begin{array}{@{}rcl@{}} D_{l}&=&\underset{D_{l},\alpha}{\mathrm{arg\ min}} \left\{\|X_{{lr}}-D_{l}\alpha \|^{2}_{2}+\lambda\|\alpha\|_{1}\right\}. \end{array} $$ Because the sparse code α is shared between LR and HR patches, Eqs. (4) and (5) can be combined in Eq. (6): $$ \left[D_{l}, D_{h}\right]=\underset{D_{l},D_{h},\alpha}{\mathrm{arg\ min}} \left\{ \|X_{{lr}}-D_{l}\alpha \|^{2}_{2}+ \|X_{{hr}}-D_{h}\alpha \|^{2}_{2}+\lambda\|\alpha\|_{1}\right\}. 
$$ The global dictionary pair is trained to reveal the underlying relation between the LR and HR images based on the knowledge from the training dataset. If the LR training images are generated by down-sampling the HR training images with a fixed scaling factor s, the LR images can only provide the structure knowledge at a fixed level. Therefore, one global dictionary pair trained from such a dataset may not be suitable in different situations. For example, we can generate the LR training images by down-sampling the HR training images with a scale factor 4 and then estimate the global dictionary pair to reveal the LR-HR mapping relation. Although this dictionary pair may be effective if we scale an LR image to 4 times its original size, it may perform unsatisfactorily if we scale the LR image to some other sizes. With this concern, we down-sample the HR training images by different scaling factors to generate multiple LR training sets \(\left \{S_{lr1}, S_{lr2}, \dots, S_{{lrm}}\right \}\). The LR images in different training sets Slri represent the low-resolution structures at multiple scales. As shown in Fig. 2, we generate the LR-HR dictionary pair for each LR-HR training set pair {Slri,Shr}. Given the global dictionary pairs under different structure levels, we can always choose the appropriate dictionary to enhance the LR image according to different situations.

Global dictionary training

In order to illustrate the necessity of multiple global dictionary pairs under different structure levels, we generate 30 LR training datasets \(\{S_{lr1}, S_{lr2}, \dots, S_{lr30}\}\) from one HR training dataset Shr. Every LR set is generated by down-sampling the HR images at the ratio of si. In this paper, \(s_{i} \in \{1.1, 1.2, 1.3, \dots, 4.0\}\). In Fig. 3, we reconstruct the HR image from the LR image by using different global dictionary pairs. The horizontal axis represents the down-sampling ratio si at which the LR training set is generated. The vertical axis represents the PSNR value of the reconstructed HR image. In Fig. 3a, the LR image is scaled by 2.4 to generate the HR image, and the best reconstruction result is obtained when the LR training set is down-sampled by 2.5. Similarly, the LR image is scaled by 3.3 to generate the HR image in Fig. 3b, and the best reconstruction result is obtained when the LR training set is down-sampled by 3.5. It is clear that the choice of global dictionary pair affects the HR reconstruction result. In Fig. 4, the visual quality of the reconstructed HR images is presented along with the ground truth HR. It is visually noticeable that the quality of the reconstructed HR would be better if an appropriate global dictionary pair is used.

Quality of HR image reconstructed by using different global dictionary pairs

Enhance LR image to HR with different global dictionaries. HR\(_{a}^{b}\) represents the estimated HR image with different scale factors and global dictionaries, where a is the scale factor of the test image and b is the scale factor used in the training phase
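The joint objective in Eq. (6) is commonly handled by stacking the LR and HR feature vectors of each patch so that a single sparse code explains both, and then splitting the learned atoms back into \(D_l\) and \(D_h\). The sketch below illustrates this idea for one scale; the use of sklearn's DictionaryLearning and the chosen atom count are our own assumptions for the illustration, not the exact training procedure or settings used in this paper.

```python
import numpy as np
from sklearn.decomposition import DictionaryLearning

def train_joint_pair(X_lr, X_hr, n_atoms=256, lam=0.1):
    """Joint LR/HR dictionary training in the spirit of Eq. (6).

    X_lr, X_hr: (n_patches, d_lr) and (n_patches, d_hr) feature matrices
    extracted from corresponding LR/HR patches.  One shared sparse code is
    learned on the stacked features; the atoms are then split into the LR
    and HR dictionaries (columns are atoms).
    """
    X = np.hstack([X_lr, X_hr])                      # one row per LR/HR patch pair
    learner = DictionaryLearning(n_components=n_atoms, alpha=lam, max_iter=20)
    learner.fit(X)
    atoms = learner.components_                      # (n_atoms, d_lr + d_hr)
    D_l = atoms[:, :X_lr.shape[1]].T
    D_h = atoms[:, X_lr.shape[1]:].T
    return D_l, D_h

# For the multi-scale set, one pair would be trained per down-sampling ratio,
# e.g. global_pairs = {s: train_joint_pair(X_lr_by_scale[s], X_hr) for s in scales}
# (X_lr_by_scale and scales are hypothetical names for the per-scale feature sets).
```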
Local dictionary training using K-PCA The global dictionary needs great diversity so that it can be used to recover general images. Despite the comprehensive information it provides, however, the global dictionary has proved to be unstable for sparse representation precisely because of this high diversity [15]. In order to represent the image with a robust and compact dictionary, we use K-PCA and non-locally centralized sparse representation (NCSR) [17] to generate a local dictionary that is consistent with the input image. The input LR image is scaled to a set of images \(S_{I}=\left\{I_{s_{k}}|k=1,2,3,\dots,N\right\}\) with different sizes by using bicubic interpolation. If the input is \(LR \in \mathbb{R}^{m\times n}\) and the desired output is \(HR \in \mathbb{R}^{sm\times sn}\), the height and width of the scaled image \(I_{s_{k}}\) are \(0.8^{s_{k}}sm\) and \(0.8^{s_{k}}sn\), respectively. By extracting 7×7 image patches from SI, we generate the patch set P, which is further clustered into K groups \(\boldsymbol{P}=\{{\boldsymbol{P}_{\boldsymbol{i}}}|i=1,2,\dots,K\}\) using K-means clustering. We assume that the patches within one group are similar, so these patches can be robustly represented by a compact dictionary Di. Principal component analysis (PCA) is applied to each group, and the PCA bases are regarded as Di for group Pi. After we combine all \(D_{i} (i=1,2,\dots,K)\) together, a complete local dictionary \(D^{0}=\left[D_{1}, D_{2}, \dots, D_{K}\right]\) can be generated from the input LR image itself.
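A minimal sketch of this K-PCA construction, using scikit-learn, is given below; the function name and the handling of small clusters are ours, and actual NCSR-style implementations may differ in detail. The cluster size of 70 and the 7×7 patches follow the hyperparameters reported later in the experiments.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.decomposition import PCA
from sklearn.feature_extraction.image import extract_patches_2d

def kpca_local_dictionary(scaled_images, patch_size=7, n_clusters=70):
    """Build the local dictionary D0 = [D_1, ..., D_K]: collect 7x7 patches
    from the scaled copies of the input LR image, cluster them with K-means,
    and take the PCA bases of each cluster as its sub-dictionary."""
    patches = np.vstack([
        extract_patches_2d(img, (patch_size, patch_size)).reshape(-1, patch_size ** 2)
        for img in scaled_images
    ])
    labels = KMeans(n_clusters=n_clusters, n_init=10).fit_predict(patches)
    sub_dicts = []
    for k in range(n_clusters):
        members = patches[labels == k]
        if len(members) < 2:              # skip degenerate clusters in this sketch
            continue
        pca = PCA().fit(members)
        sub_dicts.append(pca.components_.T)   # columns are the PCA bases of P_k
    return np.hstack(sub_dicts)               # D0
```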
Image super-resolution based on local and global training In this section, we introduce the high-resolution image reconstruction based on the global and local dictionaries. As shown in Eq. (3), a standard solution for sparse image representation can be formulated as the minimization of an energy function with a fidelity term and a regularization term. The fidelity term ensures that the observed low-resolution image is a blurred and down-sampled version of the high-resolution image that is reconstructed by sparse representation. In this case, a reliable sparse representation is critical for high-resolution image reconstruction. In this paper, we adopt the global and the local dictionaries at the same time to ensure that the sparse representation provides rich texture details and remains consistent with the observed low-resolution image. With these concerns, we reformulate Eq. (3) into Eqs. (7) and (8): $$ \boldsymbol{\alpha}_{\boldsymbol{y}}=\underset{\alpha_{l}}{\mathrm{arg\ min}} \left\{\|\boldsymbol{y}-\boldsymbol{H}\boldsymbol{D}^{\boldsymbol{0}}\boldsymbol{\alpha}_{\boldsymbol{l}} \|^{2}_{2} +\lambda\| \boldsymbol{\alpha}_{\boldsymbol{l}} - \boldsymbol{\beta}_{\boldsymbol{l}}\|_{1}\right\}, $$ $$ \| U_{{IP}}(\boldsymbol{y})-\boldsymbol{D}^{\boldsymbol{L}}\boldsymbol{\alpha}_{\boldsymbol{g}} \|_{2}^{2}<\epsilon, $$ where ε is a small constant, y is the observed low-resolution image, H is a matrix for blurring and down-sampling, D0 is the local dictionary, αl is the sparse code of the high-resolution image according to the local dictionary, DL is the global LR dictionary, αg is the sparse code of the image according to the global dictionary, U(·) is the upscaling operator, UIP(y) is the initial prediction of the upscaled y in the gradient descent-based optimization of Eq. (7), and βl is the non-local mean of αl, which is formulated as follows: $$ \boldsymbol{\beta}_{\boldsymbol{i}}=\sum\limits_{n \in N_{i}} \omega_{i,n}\boldsymbol{\alpha}_{\boldsymbol{n}}, $$ where αn denotes the sparse code of an image patch xn, Ni denotes the N most similar patches to patch xi, βi is the sparse code of patch xi after non-local smoothing, and ωi,n is the weight factor defined as follows: $$\begin{array}{*{20}l} \omega_{i,n}=\frac{\text{exp}\left(-\|x_{i}-x_{n}\|^{2}_{2}\right)}{\sum\limits_{n \in N_{i}}\text{exp}\left(-\|x_{i}-x_{n}\|^{2}_{2}\right) }. \end{array} $$ Given an LR image, its HR version is estimated by iteratively solving Eq. (7). In the ith iteration, the global sparse code of the current LR image \(\alpha _{g}^{i}\) is estimated by using Eq. (11): $$\begin{array}{*{20}l} \alpha_{{gi}}=\underset{\alpha}{\mathrm{arg\ min}} \left\{ \|X^{L}_{i}-D^{L}_{i}\alpha \|^{2}_{2}+\lambda\|\alpha\|_{1}\right\}, \end{array} $$ where \(X^{L}_{i}\) is the initial LR image in the ith iteration and also the final HR image estimate of the (i−1)th iteration (note that U(y) in Eq. (8) is the general representation of \(X^{L}_{i}\)); \(D^{L}_{i}\) is the LR global dictionary used in the ith iteration; α is the sparse code of \(X^{L}_{i}\); and αgi is the optimal α that minimizes Eq. (11). With the global sparse code αgi from Eq. (11), the HR estimate from the global dictionary in the ith iteration is given by Eq. (12): $$\begin{array}{*{20}l} X^{L}_{i+1/3}=D^{H}_{i}\alpha_{{gi}}, \end{array} $$ where \(D^{H}_{i}\) is the HR global dictionary used in the ith iteration, and \(X^{L}_{i+1/3}\) is an intermediate HR estimate, which also serves as the initial HR guess fed to the following local dictionary-based HR estimation. Given \(X^{L}_{i+1/3}\), the local dictionary D0 and the corresponding local sparse code αl in Eq. (7) can be generated by K-PCA as described in Section 3.3. According to Eqs. (9) and (10), the sparse code βi of each patch's non-local mean can be estimated. With βi, the regularization term \(X^{\beta_{i}}_{i}\) can be calculated as follows: $$\begin{array}{*{20}l} X^{\beta_{i}}_{i}=D^{0}_{i}\beta_{i}. \end{array} $$ After we have H, D0, the initial sparse code αl, and the sparse code βl of the non-local mean, the optimal αy in Eq. (7) can be approached iteratively. First, we fix the second (regularization) term, so the optimization problem becomes a least squares problem that can be solved efficiently; the gradient descent-based update is given in matrix form by Eq. (14): $$\begin{array}{*{20}l} X^{L}_{i+2/3}=\boldsymbol{D}^{\boldsymbol{0}}\boldsymbol{\alpha}_{\boldsymbol{l}}+\theta H^{T}\left(y-H\boldsymbol{D}^{\boldsymbol{0}}\boldsymbol{\alpha}_{\boldsymbol{l}}\right), \end{array} $$ where θ is the learning step size, which is set to 2.4 in this paper. Afterwards, we fix the first (fidelity) term in Eq. (7), and the resulting problem can be solved by an iterative shrinkage algorithm: $$\begin{array}{*{20}l} X^{L}_{i+1}=S_{\tau}\left(X^{L}_{i+2/3}-X^{\beta_{i}}_{i}\right)+X^{\beta_{i}}_{i} \end{array} $$ where Sτ is a soft-thresholding operator and \(X^{\beta _{i}}_{i}\) is calculated by Eq. (13). According to the work presented in [30], the aforementioned algorithm converges empirically.
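One inner update of this scheme, corresponding to Eqs. (14) and (15), might look as follows. This is a sketch only: H is written as an explicit matrix for clarity (in practice it would be applied as an operator), and the threshold τ, which the paper does not specify, is a placeholder; θ = 2.4 matches the stated learning step size.

```python
import numpy as np

def soft_threshold(x, tau):
    """S_tau(x): component-wise soft-thresholding."""
    return np.sign(x) * np.maximum(np.abs(x) - tau, 0.0)

def local_sr_update(y, H, D0, alpha_l, X_beta, theta=2.4, tau=0.05):
    """One update in the spirit of Eqs. (14)-(15).

    y       : observed LR image, flattened to a vector
    H       : blurring + down-sampling operator, as a matrix
    D0      : local dictionary
    alpha_l : current local sparse code
    X_beta  : regularization image D0 @ beta from Eq. (13)
    """
    x = D0 @ alpha_l                                       # current HR estimate
    x_mid = x + theta * (H.T @ (y - H @ x))                # gradient step, Eq. (14)
    return soft_threshold(x_mid - X_beta, tau) + X_beta    # shrinkage, Eq. (15)
```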
We also present the PSNR convergence of our proposed method in Fig. 5. Three cases are compared: (1) the global SR DHαg is not used, and the SR image is generated based on the local dictionary only; (2) the global SR is used only once, as the initial estimation for the local SR; and (3) the global SR is used twice to update the initial estimation during the gradient descent optimization of the local SR. It can be observed that the use of global SR significantly improves the SR quality. It is worth noting that the global SR applied at the 201st iteration initially reduces the PSNR, but the final PSNR converges to a higher value compared with the cases without global SR. It is very likely that the global SR updates the estimation during the gradient descent (GD) processing in the local SR and thereby prevents the GD processing from being trapped in local minima. According to our experiments, the proposed SR method converges within 300 iterations of Eq. (14). The convergence comparison of our proposed method The remaining problem is how to find the proper global dictionary for each iteration. Although we have already generated global dictionary pairs for multi-scale structures, it is still difficult to select the most appropriate one in each iteration. Based on our extensive experiments, we use the iteration number to stand for the degradation level and introduce the non-linear function in Eq. (16) to guide the selection of global dictionary pairs: $$\begin{array}{*{20}l} f(i)=\frac{a}{i}+b \end{array} $$ where a and b are numeric parameters, i is the iteration number, and f(i) is the index number for global dictionary selection. In this paper, we set a and b to s−1 and 1 for all test images, where s is the scaling factor of the HR image. We evaluate and compare our proposed method in the presence of Gaussian blurring during the down-sampling procedure. All the methods are compared on Set5 [31], Set9 [17], and Set14 [11]. The hyperparameters in our method are set as follows: the patch size is 6×6 and the dictionary size is 1024 in global training; the patch size is 7×7, the K-PCA cluster size is 70, the learning rate in gradient descent is 2.4, and the sparsity lambda is 0.35 in local training. Given the test HR image, it is first blurred by a Gaussian kernel (window size = 7, standard deviation = 1.6) and afterwards directly down-sampled to generate the test LR image. It is worth noting that the LR image generation in this paper is different from that of many existing SR methods, which apply bicubic interpolation. In practice, it is not convincing to assume that the relation between LR and HR images obeys a bicubic interpolation model. Therefore, introducing the blurring effect during the down-sampling makes the SR method more robust in real practice. In Table 2, the PSNR/SSIM comparison is made in the Y channel of the YCbCr color space. Specifically, we compare our method with five other SR methods: bicubic interpolation, the sparse representation based on a global dictionary (SC) [11], A+ [13], the anti-blur SR based on a local dictionary (NCSR) [17], and the deep learning-based SR (SRCNN) [19]. Since the performance of deep learning-based methods highly depends on the training dataset, we provide two deep learning models: SRCNNb and SRCNNg. SRCNNb is the original model in [19]; its training HR-LR pairs are generated by bicubic interpolation. SRCNNg is a model trained by ourselves; its training HR-LR pairs are generated by Gaussian blur (window size = 7, standard deviation = 1.6) and direct down-sampling.
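A possible reading of the dictionary-selection rule in Eq. (16), together with the test LR image generation described above, is sketched below. Interpreting f(i) as a target scale factor to be matched to the nearest trained ratio is our assumption, and the handling of non-integer test scales is simplified; the authors' exact indexing may differ.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def select_dictionary_scale(i, s, scales=np.round(np.arange(1.1, 4.01, 0.1), 1)):
    """Eq. (16): f(i) = a/i + b with a = s - 1 and b = 1; return the trained
    scale factor closest to f(i) for iteration i and target scale s."""
    f = (s - 1.0) / i + 1.0
    return scales[np.argmin(np.abs(scales - f))]

def make_test_lr(hr, scale, sigma=1.6):
    """Test LR generation: Gaussian blur (std 1.6, ~7x7 support) followed by
    direct down-sampling; integer scales only in this sketch."""
    blurred = gaussian_filter(hr, sigma=sigma, truncate=2.0)  # radius 3 -> 7x7 window
    step = int(round(scale))
    return blurred[::step, ::step]
```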
Table 2 The PSNR(dB)/SSIM values of reconstructed SR images in 3 datasets Because the LR-HR mapping relation in the test phase is not consistent with the assumption (i.e., bicubic interpolation) made in the training phase of SC, A+, and SRCNNb, it is not surprising that our proposed method outperforms SC, A+, and SRCNNb in Table 2. The anti-blur SR based on the local dictionary (NCSR) performs well in the presence of Gaussian blur. As we mentioned in Section 2.2, the initial SR estimation in NCSR strongly affects the final result. Our proposed method therefore outperforms NCSR by iteratively feeding the SR prediction from the global dictionary into the local estimation. Although the training dataset for SRCNNg includes the Gaussian blur effect, our proposed method can still generate better SR images with respect to PSNR/SSIM. In Fig. 6, we provide a visual comparison in the RGB color space. Owing to the combination of global and local dictionaries, our proposed method provides clearer details. Although using global dictionaries can potentially produce artifacts and even errors, the self-similarity constraints and the non-locally centralized sparsity efficiently suppress this problem. Compared with the other methods, our proposed SR method can produce convincing novel details, sharper boundaries, and clearer textures while also effectively suppressing undesired artifacts. Visual comparison between different SR methods (scale ratio=4). Each column presents the HR estimations from one SR method. From the left to the right, the compared methods are SC [11], A+ [13], SRCNNb [19], SRCNNg [19], NCSR [17], and our proposed method In this paper, we propose a novel and effective SR method that utilizes multi-scale image structures and non-local similarities. Specifically, a set of global dictionary pairs are trained under different image resolutions, so that the LR-HR mapping relations can be comprehensively established. When an LR image is enhanced to an HR image, the appropriate global dictionary pair is chosen from the dictionary set by a non-linear function. In addition, a K-PCA-based local dictionary is trained according to the input LR image content. This local dictionary is more consistent with the input, and it helps to reduce the artifacts introduced by the feature inconsistency between the test image and the global training dataset. Furthermore, the sparsity-based non-local mean, which has proved effective in many SR methods, is used to smooth the estimated HR image in every iteration, so that fewer artifacts are propagated to the next iteration. The experimental results show that our proposed SR model is capable of recovering HR images with clear textures, sharp edges, and convincing novel details. LR: Low-resolution HR: High-resolution SISR: Single image super-resolution K-PCA: K-means principal component analysis R. Y. Tsai, T. S. Huang, Advances in computer vision and image processing (JAI Press, Greenwich, CT, USA, 1984). T. Blu, P. Thevenaz, M. Unser, Linear interpolation revitalized. IEEE Trans. Image Process.13(6), 710–719 (2004). X. Li, M. Orchard, New edge-directed interpolation. IEEE Trans. Image Process.10(10), 1521–1527 (2001). L. Zhang, X. Wu, An edge-guided image interpolation algorithm via directional filtering and data fusion. IEEE Trans. Image Process.15(8), 2226–2238 (2006). J. Sun, Z. Xu, H. Shum, Gradient profile prior and its applications in image super-resolution and enhancement. IEEE Trans. Image Process.20(6), 1529–1542 (2011). K. Zhang, X. Gao, D.
Tao, X. Li, Single image super-resolution with nonlocal means and steering kernel regression. IEEE Trans. Image Process.21(11), 4544–4556 (2012). W. Freeman, T. Jones, E. Pasztor, Example-based super-resolution. IEEE Comput. Graph. Appl. Mag.22(2), 56–65 (2002). K. Ni, T. Nguyen, Image super-resolution using support vector regression. IEEE Trans. Image Process.16(6), 1596–1610 (2007). J. Yang, J. Wright, T. S. Huang, Y. Ma, Image super-resolution via sparse representation. IEEE Trans. Image Process.19(11), 2861–2873 (2010). J. Yang, Z. Wang, Z. Lin, S. Cohen, T. Huang, Coupled dictionary training for image super-resolution. IEEE Trans. Image Process.21(8), 3467–3478 (2012). R. Zeyde, M. Elad, M. Protter, in Proceedings of 8th International Conference: Curves and Surfaces. On single image scale-up using sparse-representations, pp. 711–730 (2010). R. Timofte, V. D. Smet, L. V. Gool, in Proceedings of IEEE International Conference on Computer Vision. Anchored neighborhood regression for fast example-based super-resolution, pp. 1920–1927 (2013). R. Timofte, V. D. Smet, L. V. Gool, in Proceedings of Asian Conference on Computer Vision. Adjusted anchored neighborhood regression for fast super-resolution, pp. 111–126 (2014). W. Shi, S. Liu, F. Jiang, D. Zhao, Z. Tian, Anchored neighborhood deep network for single-image super-resolution. EURASIP J. Image Video Process.2018(34), 1–12 (2018). M. Elad, I. Yavneh, A plurality of sparse representations is better than the sparsest one alone. IEEE Trans. Inf. Theory. 55(10), 4701–4714 (2009). W. Dong, L. Zhang, G. Shi, X. Wu, Image deblurring and super-resolution by adaptive sparse domain selection and adaptive regularization. IEEE Trans. Image Process.20(7), 1838–1857 (2011). W. Dong, L. Zhang, G. Shi, X. Li, Nonlocally centralized sparse representation for image restoration. IEEE Trans. Image Process.22(4), 1620–1630 (2013). K. Zhang, D. Tao, X. Gao, X. Li, Z. Xiong, Learning multiple linear mappings for efficient single image super-resolution. IEEE Trans. Image Process.24(3), 846–861 (2015). C. Dong, C. C. Loy, K. He, X. Tang, in Proceedings of European Conference on Computer Vision. Learning a deep convolutional network for image super-resolution, pp. 184–199 (2014). J. Kim, J. K. Lee, K. M. Lee, in IEEE Conference on Computer Vision and Pattern Recognition. Accurate image superresolution using very deep convolutional networks, (2016). J. Hu, J. Zhao, in Proceedings of IEEE International Conference on Instrumentation and Measurement Technology. A joint dictionary based method for single image super-resolution, pp. 1440–1444 (2016). A. Marquina, S. J. Osher, Image super-resolution by tv-regularization and bregman iteration. J. Sci. Comput.37(3), 367–382 (2008). H. A. Aly, E. Dubois, Image up-sampling using total-variation regularization with a new observation model. IEEE Trans. Image Process.14(10), 1647–1659 (2005). G. Peyre, S. Bougleux, L. Cohen, Non-local regularization of inverse problems. Inverse Probl. Imaging. 5(2), 511–530 (2011). I. Daubechies, M. Defrise, C. D. Mol, An iterative thresholding algorithm for linear inverse problems with a sparsity constraint. Commun. Pur. Appl. Math.57(11), 1413–1457 (2004). J. A. Troopp, S. J. Wright, Computational methods for sparse solution of linear inverse problems. Proc. IEEE. 98(6), 948–958 (2010). K. I. Kim, Y. Kwon, Single-image super-resolution using sparse regression and natural image prior. IEEE Trans. Pattern Anal. Mach. Intell.32(6), 1127–1133 (2010). Y. Zhang, W. Yang, Z. 
Guo, Image super-resolution based on structure-modulated sparse representation. IEEE Trans. Image Process.24(9), 2797–2810 (2015). Y. Zhang, J. Liu, W. Yang, Z. Guo, Image super-resolution based on structure-modulated sparse representation. IEEE Trans. Image Process.24(9), 2797–2810 (2015). E. Candés, Enhancing sparsity by reweighted l1 minimization. J. Fourier Anal. Appl.14(5-6), 877–905 (2008). M. Bevilacqua, A. Roumy, C. Guillemot, M. -L. Alberi-Morel, in Proceedings of BMVC. Low complexity single-image super-resolution based on nonnegative neighbor embedding, pp. 135–113510 (2012). Work presented in this paper has been partially supported by the Provincial Key Research and Development Program of Sichuan, China (2020YFG0149). Wenyi Wang and Jun Hu contributed equally to this work. School of Electronics Engineering, University of Electronic Science and Technology of China, No. 2006, Xiyuan Ave., West Hi-Tech Zone, Chengdu, 611731, People's Republic of China Wenyi Wang & Jianwen Chen School of Electrical Engineering and Computer Science, University of Ottawa, 800 King Edward Ave., Ottawa, K1N 6N5, Canada Jun Hu & Jiying Zhao Department of Electrical and Computer Engineering, McMaster University, 1280 Main St W, Hamilton, ON L8S 4L8, Canada Xiaohong Liu All authors are involved in deriving the algorithm and making the validation experiments. All authors read and approved the final manuscript. Correspondence to Xiaohong Liu. Wang, W., Hu, J., Liu, X. et al. Single image super resolution based on multi-scale structure and non-local smoothing. J Image Video Proc. 2021, 16 (2021). https://doi.org/10.1186/s13640-021-00552-8 Sparse representation Dictionary learning K-PCA Non-local mean
Determine whether the series $\sum_{n=1}^{\infty} \frac{\ln(n)}{n^2 +1}$ converges or not.

My trial: I tried dividing $\frac{\ln(n)}{n^2 +1}$ by $1/n^2$ and finding the limit, which was $\infty$, so I could not use the limit comparison test and this idea did not work. Could anyone give me a hint for studying the convergence of this series? (calculus, sequences-and-series; asked by hopefully)

Comments on the question:
do you mean $$\sum_{n=1}^{\infty}\frac{\ln n}{n^2+1}$$? – clathratus Jan 16 at 4:52
Hint: Use the inequality $\ln n < n$ in the following way: $\ln n = 2 \ln \sqrt{n} < 2 \sqrt{n}$. – RRL Jan 16 at 4:54
@clathratus yes, sorry, I corrected it. – hopefully Jan 16 at 4:56

Answer: Compare with $$\sum_{n=1}^{\infty}\frac{1}{n^{1.5}}$$ – answered Jan 16 at 4:54 by alex.jordan

Comments on the answer:
you mean comparison, not limit comparison? – hopefully Jan 16 at 4:56
Either way. This series converges. With direct comparison, the OP series has smaller terms (eventually). With a limit comparison, the ratio of the OP terms to the terms here converges to $0$. Either implies the OP series converges. – alex.jordan Jan 16 at 5:06
but if I use the direct comparison test how can I compare them? – hopefully Jan 16 at 14:19
You would need to know that (eventually) $\ln(n)<n^{0.5}$. There are proofs out there that (eventually) $\ln(n)<n^{\varepsilon}$ for any positive $\varepsilon$. – alex.jordan Jan 16 at 15:41
For example, $\ln(n)/n^{0.5}$. What is the limit of this as $n\to\infty$? By L'Hospital, it is the same as the limit of $1/(0.5n^{0.5})$, which is $0$. So for large enough $n$, $\ln(n)/n^{0.5}<1$. So for large enough $n$, $\ln(n)<n^{0.5}$. – alex.jordan Jan 16 at 16:39
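Putting RRL's hint together with the comparison series in the answer, one way to write out the argument is
$$\frac{\ln n}{n^2+1} < \frac{2\sqrt{n}}{n^2+1} < \frac{2\sqrt{n}}{n^2} = \frac{2}{n^{3/2}} \quad (n \ge 1),$$
and since $\sum_{n=1}^{\infty} 2/n^{3/2}$ is a convergent $p$-series ($p = 3/2 > 1$), the direct comparison test gives convergence of $\sum_{n=1}^{\infty} \frac{\ln(n)}{n^2+1}$.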
Full paper Upper mantle structure beneath the Society hotspot and surrounding region using broadband data from ocean floor and islands Takehi Isse1, Hiroko Sugioka2, Aki Ito3, Hajime Shiobara1, Dominique Reymond4 & Daisuke Suetsugu3 The Erratum to this article has been published in Earth, Planets and Space 2016 68:122 We determined the three-dimensional shear wave velocity structure beneath the South Pacific superswell down to a depth of 200 km by analyzing fundamental Rayleigh wave records from permanent and temporary land-based and seafloor seismometers in the Pacific Ocean. Data from the Tomographic Investigation by seafloor ARray Experiment for the Society hotspot (TIARES) project yield excellent spatial resolution of velocity anomalies in the central part of the superswell, near the Society hotspot. Localized slow anomalies are found near hotspots in the upper mantle, but the vertical profiles of the anomalies vary from location to location: Slow anomalies near the Samoa, Macdonald, Pitcairn, and Society hotspots extend to at least 200 km depth, while a slow anomaly near the Marquesas hotspot extends only to ~150 km depth. Owing to the recently deployed seafloor array, horizontal resolutions of slow anomalies near the Society hotspot are substantially improved: The slow anomalies are about 300 km in lateral extent and have velocity anomalies as low as −6 %. The lithosphere thickness is estimated to be ~70 km in the vicinity of all hotspots, which may indicate thermal erosion by mantle plumes. The French Polynesian region is characterized by large-scale positive topographic anomalies that reach 700 m (Adam and Bonneville 2005), the so-called South Pacific superswell (McNutt 1998), many hotspot chains (e.g., Society, Cook–Austral, Marquesas, and Pitcairn) and large-scale slow seismic velocity anomalies in the lower mantle. Locations of hotspots and the superswell are shown in Fig. 1. Although this area has an anomalous structure, high-resolution tomographic models have yet to be developed, due to the sparse coverage of seismic stations in the South Pacific. To improve the horizontal and vertical resolutions of velocity structure, temporary observations have been conducted. Map of seismic stations used in the present study. Hotspots and superswell are also shown. Open triangles show permanent land stations by IRIS, Geoscope, SPANET, CEA, and Geoscience Australia. Gray solid triangles show temporary land stations from the GEOFON and PLUME projects. Black solid triangles indicate temporary seafloor broadband stations. Open stars show hotspots. Broken line indicates the superswell region defined by anomalous seafloor uplift greater than ~300 m. Solid lines show plate boundaries. Solid rectangle outlines the studied region A French Polynesian Lithosphere and Upper Mantle Experiment (PLUME) project team conducted a temporary broadband seismic experiment on oceanic islands in the French Polynesian region from 2001 to 2005 (Barruol et al. 2002). Using a surface wave tomography method, Maggi et al. (2006) found slow anomalies to 400 km depths at the Macdonald and Society hotspots, with a lateral resolution of ~800 km. We conducted a temporary seafloor seismic experiment from 2003 to 2005 in the French Polynesian region (Suetsugu et al. 2005) by using broadband ocean-bottom seismometers (BBOBSs), with a flat velocity response in period from 0.02 to 360 s and a 24-bit data acquisition system. Isse et al. 
(2006) obtained a shear wave velocity model for the upper mantle (to a depth of 200 km) with a 500-km horizontal resolution, by analyzing Rayleigh wave data recorded on oceanic islands and by these BBOBS stations; they found varying slow velocity anomalies associated with each hotspot. Suetsugu et al. (2009) used BBOBS and PLUME data to obtain an improved seismic image beneath the superswell from the lower mantle to the upper mantle. Their work, which combined surface wave tomography, receiver function analysis, and body wave travel time tomography, suggests that large-scale slow velocity anomalies exist from the bottom of the mantle to 1000 km depth, while the depth range of the narrow slow anomalies associated with individual hotspots varies from plume to plume in the upper mantle. They further suggested that the Society and Macdonald hotspots are likely to be deep-rooted (i.e., extending down to the top of the large-scale slow anomalies in the lower mantle), while other hotspots may have shallower origins. The lateral resolution of upper mantle structure from the surface wave analysis was about 500 km in their model. Temporary land and seafloor array deployments have improved the lateral resolution of the superswell region, though these are not yet sufficient to reveal detailed mantle structure beneath individual hotspots, such as the relationship between the Society hotspot and slow velocity anomalies nearby. The Tomographic Investigation by seafloor ARray Experiment for the Society hotspot (TIARES) project, conducted from 2009 to 2010, focused on the Society hotspot to investigate the details of the narrow plumes beneath the hotspot from the lower mantle to the surface. The TIARES network consisted of nine BBOBSs paired with ocean-bottom electromagnetometers (OBEMs) (Suetsugu et al. 2012). In the present study, we resolve the three-dimensional shear wave velocity structure of the upper mantle beneath the Society hotspot region and its surroundings with a higher resolution than that of previous studies, using surface wave tomography that incorporates the TIARES data. We analyzed seismograms from temporary and permanent broadband stations installed on islands and the seafloor of the Central and South Pacific. The temporary stations include seven BBOBS stations deployed from 2003 to 2004, two BBOBS stations from 2004 to 2005 (Suetsugu et al. 2005), nine BBOBS stations from 2009 to 2010 by the TIARES project (Suetsugu et al. 2012), 11 island stations from 2001 to 2005 by the PLUME project (Barruol et al. 2002), four island stations from 2005 to 2006 by GEOFON, and five island stations by SPANET (Ishida et al. 1999). The permanent stations include two stations operated by Geoscope, three operated by Commissariat à l'Energie Atomique (CEA), ten by GSN, and two by Geoscience Australia. Station locations are shown in Fig. 1. We selected events with Mw or Mb ≥ 5.5 and epicenters located in and around the Pacific Ocean, occurring between January 1995 and June 2010. In BBOBS observations, an absolute time shift of a few seconds can accumulate during a 1-year seafloor experiment even though the recorder has a very precise clock (Isse et al. 2014); the BBOBS observations between 2003 and 2004 had time shifts of ~10–40 s because relatively old recorders were used. We calibrated the BBOBS raw data using a linear interpolation method based on the time differences between each recorder's clock and a GPS clock before and after the observations.
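The linear clock correction applied to the BBOBS records can be sketched as follows; the function and variable names are ours, and the actual processing chain (for example, how the GPS comparisons are logged) is not detailed in the paper.

```python
import numpy as np

def correct_clock_drift(t, t_deploy, t_recover, offset_deploy, offset_recover):
    """Linearly interpolated clock correction for a seafloor recorder.

    t                : recorder time stamps of the samples (s)
    t_deploy/recover : recorder times of the GPS comparisons before/after deployment
    offset_*         : (recorder clock - GPS clock) measured at those two times
    Returns time stamps referred to the GPS clock.
    """
    # Assume the internal clock drifts linearly between the two GPS comparisons.
    drift = offset_deploy + (offset_recover - offset_deploy) * \
            (t - t_deploy) / (t_recover - t_deploy)
    return t - drift
```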
The instrument responses were removed from all seismograms used in the present study before measuring phase velocities. We employed the two-station method of Isse et al. (2006) and Suetsugu et al. (2009) to measure dispersion curves of fundamental-mode Rayleigh waves. When two stations are located on approximately the same great circle from an earthquake, the phase velocity dispersion between the stations can be determined by computing the phase differences of the surface waves (Fig. 2). This method allows us to ignore the effects of phase shifts due to source excitation and lateral heterogeneities far outside the inter-station path. If the source is located far enough from the stations, the wave front of a surface wave can be treated as a plane wave. Under these conditions, the phase differences of surface waves between two stations are caused by the differential distance (AB′) between the farther station (A) and the point (B′) projected from the nearer station (B) onto the great circle path (AE) from the source to the far station (Fig. 2), so that the measured phase velocities are averages over the differential distance (AB′). We selected station pairs whose azimuthal differences from the source (α) were less than 5°; these pairs also met the condition that the difference between the great circle distance from the event to the far station and the distance to the far station via the near station was less than 25 km. To remove phase velocity measurements near the nodal directions of the surface wave radiation, we calculated radiation patterns of the fundamental-mode Rayleigh waves at the source locations using the Global CMT solutions. We used only data with a normalized radiated amplitude of >0.4. We then measured the phase velocity dispersion curves of fundamental Rayleigh waves for periods between 30 and 140 s, whose RMS errors (Aki and Richards 2002) were less than 0.02 km/s. Schematic diagram of the two-station method. A, B Seismic stations. E is the source location. B′ is the projected point of B on the AE great circle. Gray broken lines indicate the Rayleigh wave front. Using the two-station method, the phase velocity between A and B′ is measured under the condition that the wave front propagates as a plane wave A total of 1127–1934 surface wave paths were collected in this period range (Fig. 3a). The ray distributions of the obtained phase velocities are shown in Fig. 3b. We inverted the measured phase velocities between station pairs for two-dimensional phase velocity maps using a method developed by Montagner (1986), in which a smoothness constraint can be applied by introducing a covariance function. Number of measured phase velocities and ray distributions. The number of phase velocity measurements in the present study as a function of period, between 30 and 140 s (a). The ray distribution at 100 s used in the present study (b) In the present study, the covariance function (Cp) is defined as $$C_{p}(M_{1},M_{2}) = \sigma(M_{1})\,\sigma(M_{2})\exp\left[\frac{\cos \Delta - 1}{L_{M_{1}} L_{M_{2}}}\right]$$ where Δ is the distance between points M1 and M2 on the Earth's surface. The a priori parameter error σ gives a constraint on the strength of the velocity perturbation. In constructing phase velocity maps with varying σ values, we chose 0.10 km/s, which provides the best fit to the data. Because the patterns of the velocity maps obtained with varying σ are similar, the choice of σ has little effect on our results except for the strength of the anomalies.
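As an illustration, the a priori covariance above could be assembled into a matrix over the inversion grid nodes as follows. The unit handling (Δ and the correlation lengths expressed in consistent angular units) is our simplification of the formula as printed.

```python
import numpy as np

def a_priori_covariance(lat, lon, sigma, L):
    """C_p(M1, M2) = sigma(M1) sigma(M2) exp[(cos(Delta) - 1) / (L1 L2)]
    evaluated for every pair of grid nodes.

    lat, lon : per-node coordinates in degrees
    sigma, L : per-node arrays of a priori error and correlation length
    """
    phi, lam = np.radians(lat), np.radians(lon)
    # cos of the great-circle angular distance Delta between all node pairs
    cos_delta = (np.sin(phi)[:, None] * np.sin(phi)[None, :] +
                 np.cos(phi)[:, None] * np.cos(phi)[None, :] *
                 np.cos(lam[:, None] - lam[None, :]))
    cos_delta = np.clip(cos_delta, -1.0, 1.0)
    return (sigma[:, None] * sigma[None, :] *
            np.exp((cos_delta - 1.0) / (L[:, None] * L[None, :])))
```

With σ = 0.10 km/s at every node and L set to 100 or 200 km (converted to the same units as Δ), this corresponds to the smoothing constraint used in the phase-velocity inversion.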
The correlation lengths \(L_{M_{1}}\) and \(L_{M_{2}}\) control the smoothness of the model. Two correlation lengths (100 and 200 km) were examined with synthetic data to determine an optimal value of the correlation length. We then inverted the dispersion curves for the shear wave velocity model at each grid point, using the linearized relationship between the period dependence of the surface wave phase velocity and the depth variation of the shear wave velocity (e.g., Takeuchi and Saito 1972), as follows: $$\frac{\delta c(\omega)}{c} = \int_{0}^{R} \left\{ K_{\rho}(\omega,z)\frac{\delta \rho(z)}{\rho} + K_{\alpha}(\omega,z)\frac{\delta \alpha(z)}{\alpha} + K_{\beta}(\omega,z)\frac{\delta \beta(z)}{\beta} \right\}\mathrm{d}z,$$ where δc is the perturbation of the phase velocity; δρ, δα, and δβ are the perturbations of the density, P-wave velocity, and shear wave velocity, respectively; R is the radius of the Earth; and Kρ, Kα, and Kβ are sensitivity kernels, which represent the partial derivatives of the phase velocity with respect to each model parameter. We fixed the density and P-wave velocity structure at the reference model's values and solved only for the shear wave velocity, as the effects of density and P-wave velocity on Rayleigh wave phase velocity perturbations are not significant (Nataf et al. 1986). The iterative least squares inversion technique proposed by Tarantola and Valette (1982) was used for the inversion; this nonlinear inversion procedure has been used in many previous surface wave studies (e.g., Nishimura and Forsyth 1989). Our reference one-dimensional model was modified from PREM (Dziewonski and Anderson 1981) by smoothing the 220-km discontinuity. We adopted the CRUST2.0 model (Bassin et al. 2000) for the crust. We chose an a priori parameter error of 0.10 km/s and an a priori data error of 0.05 km/s. Changing the a priori data error does not influence the shear wave velocity models significantly. The vertical correlation length was 5 km at depths shallower than 30 km, and 20 km at greater depths. In these calculations, we corrected for the anelastic effect caused by the attenuation of seismic waves by using PREM, so that the reference frequency of the obtained model is 1 Hz. To assess the lateral resolution of the tomographic models and to select appropriate horizontal correlation lengths, we performed ray-theoretical checkerboard resolution tests. We calculated synthetic data from input checkerboard models with 8 % anomalies at a period of 80 s, with a cell size that varied from 3° to 8°. We added random errors with amplitudes of up to 0.02 km/s, a value comparable to the measured RMS errors, to the synthetic data. We then inverted the synthetic data for a two-dimensional phase velocity map using correlation lengths of 100 and 200 km. Figure 4 shows the recovery of the input checkerboard patterns of 3° and 5°. The 5° checkerboard pattern is well recovered in the whole studied region with the correlation length of 200 km (Fig. 4c), except for the southwest region, where ray paths are sparse. Using a correlation length of 100 km, the input pattern was well recovered in the vicinity of the Society hotspot (latitudes 12°–25°S and longitudes 141°–153°W; herein called the "Society region") and in the vicinity of the Samoa hotspot, where seismic stations were densely distributed (Fig. 4b). On the other hand, the retrieved patterns were distorted outside the Society region (herein called "the outer region").
This suggests that a correlation length of 200 km is appropriate in the outer region, and the best possible resolution in the outer region is ~5°. Results of a checkerboard resolution test for phase velocity distributions. We calculated the synthetic data from input checkerboard models with 8 % anomalies at a period of 80 s: a, d input models, b, c, e, f output models. Cell sizes are 5° and 3° in (a–c) and (d–f), respectively. Correlation length is 100 km in (b, e) and 200 km in (c, f). Broken lines show the region of the superswell defined by depth anomalies greater than ~300 m. Open triangles indicate the stations, and green diamonds indicate active hotspots. Green rectangle outlines the "Society region," where a correlation length of 100 km is applied in Fig. 5 Next, we investigated recovery of the checkerboard patterns with a cell size of 3° (Fig. 4e, f). Using a correlation length of 200 km, the recovered pattern was smeared throughout the whole studied region (Fig. 4f). A correlation length of 100 km yielded good recovery in the Society region, but the pattern was only poorly recovered in the outer region (Fig. 4e). Because our tests suggest that lateral resolution in the studied region is not uniform, a single choice of a correlation length may be inappropriate. Therefore, to optimize resolution in both the Society and outer regions, we use a correlation length of 100 km in the Society region and 200 km in the outer region. A third checkerboard test, using cell sizes of 3° in the Society region and 5° in the outer region (Fig. 5a), achieved satisfactory recovery in both regions (Fig. 5b). In the present study, we inverted for phase velocity maps using a correlation length of 100 km in the Society region and 200 km in the outer region. Results of a checkerboard resolution test with 3° and 5° cells. The region of 3° cells is indicated by the green rectangle (the Society region) in the central part of the figure. The other region (the outer region) has a cell size of 5°. a Input model, b output model with a correlation length of 100 km in the Society region and 200 km in the outer region. Symbols are as in Fig. 4 The results of these checkerboard tests suggest that lateral resolution in the Society region is about 300 km and that in the outer region is 500 km, which is a higher resolution than the previous studies achieved. The amplitude of the recovered patterns is also better in the present study. The dense coverage of the TIARES network in the Society region is likely to contribute to the improvement of the horizontal resolution. To assess the vertical resolution of the model, we performed spike tests (Fig. 6a–d). We created four synthetic models (dashed lines in Fig. 6) with 5 % fast anomalies in a narrow depth range of 60–180 km. We calculated synthetic phase velocities of Rayleigh waves from these models and inverted for shear wave profiles (red solid lines in Fig. 6). Although the shapes of the recovered spikes are vertically smeared, due to the long wavelength of the surface waves, the input anomaly is well recovered for the target depths of 60 and 100 km. At depths of 140 and 180 km, the shape of the recovered spike is largely smeared out, so the vertical resolution at these depths is worse than that at shallower depths. Results of vertical resolution test for shear wave velocity models. Dashed lines indicate the input model used to calculate synthetic phase velocities. Black lines indicate initial models in the inversion. 
Red lines indicate retrieved shear wave velocity models. Result of spike test at depths of a 60 km, b 100 km, c 140 km, and d 180 km is shown. e Vertical resolution test with an initial model uniformly 3 % slower than the input model. f as (e), for an initial model 3 % faster. Black arrows show the center of the input spike and red arrows show the depth where the difference between the initial and retrieved models is the maximum To assess the sensitivity to the initial model, we created a synthetic model using a modified PREM with a seafloor depth of 4.2 km, crustal thickness of 6.6 km, and ±3 % uniform velocity perturbations. The results suggest that shear wave velocities at depths shallower than 50 km are not well recovered if the initial model is substantially different from the synthetic model (Fig. 6e, f). Rayleigh waves in the range of periods analyzed are less sensitive to such depths, so final models resemble the initial model at shallow depths. Small misfits to the synthetic models, less than 0.03 km/s, are also observed at depths greater than 50 km, which compensate the misfits shallower than 50 km. In the present study, we focus on shear wave models at depths greater than 50 km. We obtained three-dimensional shear wave velocity models of the upper mantle at depths shallower than 200 km from two-dimensional phase velocity maps. Figure 7 shows the three-dimensional shear wave velocity structure in the upper mantle beneath the studied region (latitudes 5°–35°S, longitudes 125°–175°W). To investigate the average features of velocity anomalies in the superswell, we compared a one-dimensional average velocity profile beneath the superswell region in the studied region, where seafloor age is between 20 and 112 Ma after Muller et al. (2008) (region surrounded by broken lines in Fig. 7), with a profile from outside of the superswell region (Northern Hemisphere Pacific seafloor, with ages between 20 and 112 Ma), created from a global tomographic model (S362ANI; Kustowski et al. 2008). The two profiles are different by 0.05 km/s at most, suggesting that the superswell region has no uniformly slow anomaly. This is consistent with previous studies (Isse et al. 2006; Suetsugu et al. 2009). Shear wave model in the upper mantle beneath the studied region of the South Pacific. a Red line indicates the reference shear wave velocity in the present study, which is averaged shear wave velocity profile in the superswell region in the studied region. Green broken line indicates SV velocity profile of the superswell region created from S362ANI, and blue broken line indicates PREM which is the initial model in the mantle. The reference frequency of these profiles is 1 Hz. Map projections are shown of shear wave velocity structures at depths of b 60 km, c 100 km, d 140 km, and e 180 km. Symbols are as in Fig. 4 In the upper 100 km, the westward increase in shear wave velocity can be seen as a large-scale lateral variation, which is associated with the cooling of the Pacific plate with age (Fig. 7b, c). There are slow anomalies near the Society, Pitcairn, Macdonald, and Samoa hotspots at depths down to 180 km, while slow anomalies southwards of the Arago hotspots are confined to shallower depths, and slow anomalies beneath the Rarotonga hotspot are confined to greater depths (Fig. 7b–e). These features, except for the Rarotonga and Samoa hotspots, are consistent with previous models (Isse et al. 2006; Suetsugu et al. 2009). The surface wave tomography model of Maggi et al. 
(2006), created with land-based and the PLUME data, also showed deep-rooted slow anomalies beneath the Society and Macdonald hotspots, extending to the mantle transition zone. The slow anomalies of Marquesas were shallower than 150 km in their model. Vertical profiles of hotspots are shown in Fig. 8. The anomaly beneath the Marquesas hotspot appears shallow-rooted compared with the Society and Macdonald hotspots. The slow anomalies beneath the Society and Macdonald hotspots are found down to depths of 200 km or more, whereas those beneath Marquesas are confined to depths less than 160 km (Fig. 8b–d), consistent with previous studies. New results of the present study are that the slow anomalies beneath Rarotonga hotspot are at depths greater than 80 km, that the Arago hotspot has weak or no anomalies, and that slow anomalies near the Samoa hotspot extend to 200 km (Fig. 8b). The slow anomaly beneath the Society hotspot has a narrower and more vertically oriented profile compared with previous studies (Fig. 5c in Suetsugu et al. 2009, Fig. 4g in Isse et al. 2006). Some slow anomalies are apparently unassociated with any hotspot. The vertical profile along the Marquesas and Pitcairn hotspots, which does not lie along the hotspot trail, shows three slow anomalies, the middle of which is deep-rooted but not connected to any surface hotspot (Fig. 8d). Another deep-rooted slow velocity anomaly without a corresponding surface hotspot is located between the Society and Pitcairn hotspots (Fig. 8c), and a third can be seen in the south of the Cook–Austral chain between the Macdonald and Arago hotspots (Fig. 8b, e). The former is the site where Niu et al. (2002) and Suetsugu et al. (2009) found a substantially thin (presumably hot) mantle transition zone. These orphaned anomalies may represent mantle plumes that have not yet reached the surface, and they remain to be studied in future work. Vertical profiles of the shear wave velocity model. a Map projection at a depth of 100 km (same as Fig. 7c). Yellow broken lines indicate the locations of profiles. b Profile along the Cook–Austral to Samoa hotspot trail. c Profile along the Society to Pitcairn trail. d Profile along Marquesas and Pitcairn hotspots. e Profile between the Society and Arago hotspots. Circles indicate the depth of the lithosphere–asthenosphere boundary estimated from the maximum depth of the negative vertical gradient of shear wave velocity We focus on the seismic anomalies in the Society region, where ray coverage allows us to use the smaller correlation length of 100 km. Figure 9 shows the shear wave velocity models in the region. We can see a slow anomaly zone near the Society hotspot, whose center (~6 % slow) is located beneath Tahiti island, and no slow anomalies beneath the Arago hotspot, where fast anomalies exist. The Society hotspot is located at one edge of the slow anomalies. These anomalies are ~300 km in diameter, which is comparable to the lateral resolution of the checkerboard test. At depths of 140 and 180 km, we found a strong slow anomaly of about 5 %, 400 km to the south of the Society hotspot, with a lateral extent of 200–300 km; however, the size of this anomaly is comparable to, or slightly smaller than, the minimum lateral resolution of our checkerboard test. 
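The LAB-depth proxy referred to in the caption of Fig. 8 and discussed in more detail below, namely the depth of the most negative vertical gradient of shear wave velocity, could be computed from a 1-D profile roughly as follows. The function name and the 50-km cutoff (reflecting the limited resolution at shallow depths) are our choices.

```python
import numpy as np

def lab_depth_proxy(depth, vs, zmin=50.0):
    """Depth of the most negative vertical gradient of Vs, used here as a
    proxy for the lithosphere-asthenosphere boundary (Burgos et al. 2014).

    depth : depths of the model nodes (km), increasing downward
    vs    : shear wave velocities at those depths (km/s)
    zmin  : shallow cutoff (km) below which the proxy is evaluated
    """
    dvs_dz = np.gradient(vs, depth)        # vertical velocity gradient
    mask = depth >= zmin
    return depth[mask][np.argmin(dvs_dz[mask])]
```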
To examine whether the locations of the slow anomalies are well constrained, we performed spike tests on a two-dimensional phase velocity map at a period of 80 s by calculating the synthetic data from two input models: One has slow anomalies beneath the Society (300 km in diameter) and Arago (200 km in diameter) hotspots (Fig. 10a, b) and the other has a slow anomaly beneath Tahiti island (300 km in diameter) (Fig. 10c, d). All the anomalies are −7 % and random errors with amplitudes up to 0.02 km/s, a value comparable to measured RMS errors, were added to the synthetic data. In both cases, recovered anomalies were located on the same locations of the input models, suggesting that the lateral resolution is sufficient to resolve location of the two anomalies. Figure 8e shows a vertical profile across these two slow anomalies. The slow anomalies appear to merge with those immediately beneath the Society hotspot (and Tahiti island) at depths shallower than 100 km. Shear wave velocity model for the Society region. The area is indicated by a green rectangle in Fig. 7. Shear wave velocities at depths of a 60 km, b 100 km, c 140 km, and d 180 km are shown. Symbols are as in Fig. 4 Results of spike tests for phase velocity distributions. We performed spike tests on a two-dimensional phase velocity map at a period of 80 s. a Input model whose slow anomalies are located on the Society and Arago hotspots. b Output model from (a). c Same as (a) but a slow anomaly is located only beneath Tahiti island. d Output model from c Next, we examine effects of correlation lengths on the lateral heterogeneities imaged in the Society region. Figure 11a, b, d, and e shows that velocity anomalies imaged with a correlation length of 100 km are all presented in the images obtained with a correlation length 200 km. The only noticeable difference is that the slow anomalies are more localized when a correlation length of 100 km is used. The lateral velocity variation patterns in Fig. 9 are therefore robust with respect to the choice of correlation length. Effects of correlation lengths on the lateral heterogeneities imaged in the Society region. Shear wave velocity maps of obtained with correlation lengths (L) of a, d 200 km and b, e 100 km at depths of a, b 100 km and d, e 140 km are compared. Previous results at depths of c 100 km and f 140 km (from Suetsugu et al. 2009) are also shown. Symbols are as in Fig. 4 Figure 11b, c, e, f compares the velocity anomalies obtained in the present study with those obtained by Suetsugu et al. (2009). Although the Rayleigh wave tomography employed by Suetsugu et al. (2009) is not exactly identical to that of the present study, differences between the two studies are caused mainly by the better path coverage of the present study. Compared with Suetsugu et al. (2009), a substantial improvement in lateral resolution is achieved with data from the TIARES experiment. As the figures show, the slow velocity anomalies are more localized in the present study. The large-scale anomaly patterns found by Suetsugu et al. (2009) (e.g., slow anomalies beneath Tahiti island and the Society hotspot, and those east of the Society hotspot) are preserved in the present study. More localized slow anomalies, such as that located at 400 km south of the Society hotspot, were not detected by Suetsugu et al. (2009). The amplitudes of the velocity variations in the present study are 2–3 times as large as those in Suetsugu et al. 
(2009), which may be due to the new TIARES data, and/or the use of different inversion techniques. Finally, we address the issue of interactions between mantle plumes and the lithosphere by estimating the depth of the lithosphere–asthenosphere boundary (LAB). While the limited depth resolution of surface waves makes it difficult to determine LAB depth directly, recent surface wave tomography work suggests that the maximum of the negative vertical gradient of shear wave velocity is a good proxy for LAB depth (Burgos et al. 2014). Figure 12 shows the LAB depths thus determined. The average LAB depth in the entire studied region is ~90 km. Shallow LAB depths are found near all hotspots in this region and are well correlated with slow shear wave anomalies at depths of 60 km. LAB depths shallower than 50 km may be an artifact due to poor resolution of Rayleigh waves at the periods used in the present study for the depths shallower than 50 km, as indicated by Fig. 6e. Hereafter, we discuss areas whose LAB depths exceed 50 km. The LAB depths near the slow anomalies, irrespective of the existence of hotspots, are about 70 km, i.e., nearly 20 km shallower than those in the surrounding area (Figs. 8b–e, 12). The question may arise as to whether LAB depths in slow velocity regions may be necessarily estimated as shallow due to the artifact by the proxy of LAB depths. To address this possibility, we performed a series of synthetic tests, in which LAB depths varied from 50 to 80 km and synthetic models had uniformly slower velocities than the initial model (Fig. 13). We found that the LAB depths were correctly determined by the proxy and not estimated as shallow. This suggests that the shallower LAB depths in the slow anomaly regions are not artificial. The shallow LAB depths near the slow anomalies may be evidence of thermal erosion of the lithosphere by mantle plumes (e.g., Detrick and Crough 1978). It is desirable to analyze higher-mode Rayleigh waves to obtain a definitive conclusion on this issue. Estimated depths of the lithosphere–asthenosphere boundary (LAB). The depths are determined from the maximum of the negative shear velocity gradient for a the entire study region, including the South Pacific superswell, and b the Society region (rectangle area in a) Results of synthetic tests for LAB depths. Synthetic models have 4 % slow velocities than the initial model and LAB depth at a 80 km, b 70 km, c 60 km, and d 50 km. Lithosphere is 1 % slow velocities than the initial model. Lines are as in Fig. 6. Red arrows show the estimated LAB depths from the maximum of the negative shear wave velocity gradient We deployed temporary seafloor broadband seismic instruments around the Society hotspot in the South Pacific superswell as a part of the TIARES project. We obtained an unprecedentedly high-resolution shear wave velocity model for the upper mantle in the region, using TIARES data combined with data from permanent and other temporary (island and seafloor) networks. The use of TIARES data resulted in improved ray coverage, especially in the region around the Society hotspot, thereby enabling finer-scale mapping with surface wave tomography. The resolved structure reveals localized slow anomalies associated with nearby hotspots. The slow anomalies beneath the Samoa, Macdonald, Pitcairn, and Society hotspots extend at least down to a depth of 200 km and those of the Marquesas hotspot to ~150 km. 
The anomalies around the Society hotspot and 400 km south of the Society hotspot have a lateral extent of ~300 km. Several slow anomaly areas are apparently not associated with any hotspots; these may become the sites of future hotspots or represent failed hotspots. The LAB depths, estimated from the negative gradient of shear wave velocities, suggest that the lithosphere is thinned by ~20 km in the vicinity of all hotspots, which may represent thermal erosion due to mantle plumes.

BBOBS: broadband ocean-bottom seismometer; LAB: lithosphere–asthenosphere boundary; PLUME: Polynesian Lithosphere and Upper Mantle Experiment; TIARES: Tomographic Investigation by seafloor ARray Experiment for the Society hotspot

Adam C, Bonneville A (2005) Extent of the South Pacific super-swell. J Geophys Res 110:B09408. doi:10.1029/2004JB003465
Aki K, Richards PG (2002) Quantitative seismology, 2nd edn. University Science Books, Sausalito
Barruol G, Bosch D, Clouard V, Debayle E, Doin MP, Fontaine F, Godard M, Masson F, Reymond D, Tommasi A, Thoraval C (2002) PLUME investigates South Pacific superswell. Eos Trans AGU 83(45):511, 514
Bassin C, Laske G, Masters G (2000) The current limits of resolution for surface wave tomography in North America. Eos Trans AGU 81(48). Fall Meet. Suppl., Abstract S12A-03
Burgos G, Montagner JP, Beucler E, Capdeville Y, Mocquet A, Drilleau M (2014) Oceanic lithosphere–asthenosphere boundary from surface wave dispersion data. J Geophys Res 119(2):1079–1093. doi:10.1002/2013JB010528
Detrick RS, Crough ST (1978) Island subsidence, hot spots, and lithospheric thinning. J Geophys Res 83:1236–1244
Dziewonski AM, Anderson DL (1981) Preliminary reference Earth model. Phys Earth Planet Inter 25:297–356
Goldstein P, Minner L (1996) SAC2000: seismic signal processing and analysis tools for the 21st century. Seismol Res Lett 67:39
Ishida M, Maruyama S, Suetsugu D, Matsuzaka S, Eguchi T (1999) Superplume project: towards a new view of whole Earth dynamics. Earth Planets Space 51:1–5
Isse T, Suetsugu D, Shiobara H, Sugioka H, Yoshizawa K, Kanazawa T, Fukao Y (2006) Shear wave speed structure beneath the South Pacific superswell using broadband data from ocean floor and islands. Geophys Res Lett 33:L16303. doi:10.1029/2006GL026872
Isse T, Takeo A, Shiobara H (2014) Time correction and clock stability of ocean bottom seismometer using recorded seismograms. JAMSTEC Rep Res Dev 19:19–28 (in Japanese with English abstract)
Kustowski B, Ekström G, Dziewonski AM (2008) Anisotropic shear-wave velocity structure of the Earth's mantle: a global model. J Geophys Res 113:B06306. doi:10.1029/2007JB005169
Maggi A, Debayle E, Priestley K, Barruol G (2006) Multi-mode surface waveform tomography of the Pacific Ocean: a closer look at the lithospheric cooling signature. Geophys J Int 166:1384–1397. doi:10.1111/j.1365-246X.2006.03037.x
McNutt M (1998) Superswells. Rev Geophys 36:211–244
Montagner JP (1986) Regional three-dimensional structures using long-period surface waves. Ann Geophys 4:283–294
Muller RD, Sdrolias M, Gaina C, Roest WR (2008) Age, spreading rates, and spreading asymmetry of the world's ocean crust. Geochem Geophys Geosyst 9:Q04006. doi:10.1029/2007GC001743
Nataf HC, Nakanishi I, Anderson DL (1986) Measurements of mantle wave velocities and inversion for lateral heterogeneities and anisotropy: 3. Inversion. J Geophys Res 91(B7):7261–7307
Nishimura CE, Forsyth DW (1989) The anisotropic structure of the upper mantle in the Pacific. Geophys J 96:203–229
Niu F, Solomon SC, Silver PG, Suetsugu D, Inoue H (2002) Mantle transition-zone structure beneath the South Pacific superswell and evidence for a mantle plume underlying the Society hotspot. Earth Planet Sci Lett 198:371–380
Suetsugu D, Shiobara H, Sugioka H, Barruol G, Schindele F, Reymond D, Bonneville A, Debayle E, Isse T, Kanazawa T, Fukao Y (2005) Probing South Pacific mantle plumes with broadband OBS. Eos Trans AGU 86(44):429, 435
Suetsugu D, Isse T, Tanaka S, Obayashi M, Shiobara H, Sugioka H, Kanazawa T, Fukao Y, Barruol G, Reymond D (2009) South Pacific mantle plumes imaged by seismic observation on islands and seafloor. Geochem Geophys Geosyst 10:Q11014. doi:10.1029/2009GC002533
Suetsugu D, Shiobara H, Sugioka H, Ito A, Isse T, Kasaya T, Tada N, Baba K, Abe N, Hamano Y, Tarits P, Barriot JP, Reymond D (2012) TIARES project—tomographic investigation by seafloor array experiment for the Society hotspot. Earth Planets Space 64(4):i–iv
Takeuchi H, Saito M (1972) Seismic surface waves. In: Bolt BA (ed) Seismology: surface waves and free oscillations, methods in computational physics, vol 11. Academic Press, New York, pp 217–295
Tarantola A, Valette B (1982) Generalized nonlinear inverse problems solved using the least-squares criterion. Rev Geophys 20:219–232
Wessel P, Smith WHF (1991) Free software helps map and display data. EOS Trans AGU 72(41):441, 445–446

TI performed the observations by BBOBS, the data processing, and the analysis. H Sugioka, H Shiobara, and AI performed the observations by BBOBS. DR supported the observations by BBOBS and data collection by CEA stations. DS performed the observations by BBOBS, helped write the manuscript, and organized the TIARES project. All authors read and approved the final manuscript.

We thank the staff of IRIS, Geoscope, SPANET, CEA, Geoscience Australia, and the GEOFON data center for their efforts in maintaining and managing the seismic stations. We thank Noriko Tada, Kiyoshi Baba, and Takafumi Kasaya for their help during the installation and recovery cruises of the seafloor experiment and thank Natsue Abe for organizing the installation cruise. We are grateful to Pierre Mery and Jean-Pierre Barriot for their kind support in Tahiti. We also thank the editor and two anonymous reviewers for critical reviews of the manuscript. This work was supported by a Grant-in-Aid for Scientific Research (KAKENHI, 19253004) from the Japan Society for the Promotion of Science. The GMT software package (Wessel and Smith 1991) and SAC2000 (Goldstein and Minner 1996) were used in the present study.

Earthquake Research Institute, The University of Tokyo, 1-1-1 Yayoi, Bunkyo-ku, Tokyo, 113-0032, Japan: Takehi Isse & Hajime Shiobara
Department of Planetology, Graduate School of Science, Kobe University, 1-1 Rokkodai-cho, Nada-ku, Kobe, Hyogo, 657-8501, Japan: Hiroko Sugioka
Department of Deep Earth Structure and Dynamics Research, Japan Agency for Marine-Earth Science and Technology, 2-15 Natsushima-cho, Yokosuka, Kanagawa, 237-0061, Japan: Aki Ito & Daisuke Suetsugu
Laboratoire de Géophysique CEA/DASE/LDG, BP 640, 98713, Papeete, French Polynesia: Dominique Reymond

Correspondence to Takehi Isse.

Isse, T., Sugioka, H., Ito, A. et al. Upper mantle structure beneath the Society hotspot and surrounding region using broadband data from ocean floor and islands.
Earth Planet Sp 68, 33 (2016). doi:10.1186/s40623-016-0408-2
Keywords: Society hotspot; South Pacific superswell; Surface wave tomography; Ocean-bottom seismometer
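The gradient-based LAB proxy described in the discussion above (the LAB taken at the depth of the most negative vertical Vs gradient, after Burgos et al. 2014) can be illustrated with a minimal numerical sketch. The following Python snippet is not the authors' code; the synthetic profile, the 50-km resolution cutoff, and the function name lab_depth_from_proxy are illustrative assumptions.

```python
import numpy as np

def lab_depth_from_proxy(depth_km, vs_km_s, min_depth_km=50.0):
    """Depth of the most negative vertical Vs gradient, used here as an LAB proxy.

    Mimics the proxy described in the text: the LAB is placed at the maximum of
    the negative dVs/dz, restricted to depths below a resolution cutoff.
    """
    dvs_dz = np.gradient(vs_km_s, depth_km)                       # dVs/dz along the 1-D profile
    masked = np.where(depth_km >= min_depth_km, dvs_dz, np.inf)   # ignore the poorly resolved shallow part
    return depth_km[np.argmin(masked)]                            # depth of the most negative gradient

# Toy 1-D profile: a fast lid over a slower asthenosphere, with the velocity drop near 70 km.
depth = np.linspace(0.0, 200.0, 401)
vs = 4.6 - 0.25 / (1.0 + np.exp(-(depth - 70.0) / 5.0))  # smooth 0.25 km/s decrease centered at 70 km
print(lab_depth_from_proxy(depth, vs))                    # ~70 km for this synthetic profile
```

Applying the same operation to every node of a 3-D shear wave velocity model would produce LAB-depth maps of the kind shown in Fig. 12.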
Ergodicity of certain cocycles over certain interval exchanges Multiple solutions for nonlinear elliptic equations with an asymmetric reaction term June 2013, 33(6): 2495-2522. doi: 10.3934/dcds.2013.33.2495 Population dynamical behavior of Lotka-Volterra cooperative systems with random perturbations Meng Liu 1, and Ke Wang 2, School of Mathematical Science, Huaiyin Normal University, Huaian 223300, China Department of Mathematics, Harbin Institute of Technology, Weihai 264209, China Received March 2011 Revised October 2012 Published December 2012 This paper is concerned with two $n$-species stochastic cooperative systems. One is autonomous; the other is non-autonomous. For the first system, we prove that for each species there is a constant which can be represented by the coefficients of the system. If the constant is negative, then the corresponding species will go to extinction with probability 1; if the constant is positive, then the corresponding species will be persistent with probability 1. For the second system, sufficient conditions for stochastic permanence and global attractivity are established. In addition, the upper and lower growth rates of the positive solution are investigated. Our results reveal, firstly, that the stochastic noise of one population is unfavorable for the persistence of all species and, secondly, that a population can be persistent even when its growth rate is less than half of the intensity of the white noise. (A small simulation sketch of a system of this type is given at the end of this page.) Keywords: random perturbations, extinction, cooperative system, persistence, global attractivity. Mathematics Subject Classification: Primary: 34F05, 60H10; Secondary: 92B05, 60J2. Citation: Meng Liu, Ke Wang. Population dynamical behavior of Lotka-Volterra cooperative systems with random perturbations. Discrete & Continuous Dynamical Systems - A, 2013, 33 (6) : 2495-2522. doi: 10.3934/dcds.2013.33.2495 X. Abdurahman and Z. Teng, Persistence and extinction for general nonautonomous n-species Lotka-Volterra cooperative systems with delays,, Stud. Appl. Math., 118 (2007), 17. doi: 10.1111/j.1467-9590.2007.00362.x. Google Scholar S. Ahmad, Extinction of species in nonautonomous Lotka-Volterra system,, Proc. Am. Math. Soc., 127 (1999), 2905. doi: 10.1090/S0002-9939-99-05083-2. Google Scholar S. Ahmad and A. C. Lazer, Average conditions for global asymptotic stability in a nonautonomous Lotka-Volterra system,, Nonlinear Anal., 40 (2000), 37. doi: 10.1016/S0362-546X(00)85003-8. Google Scholar E. S. Allman and J. A. Rhodes, "Mathematical Models in Biology: An Introduction,", Cambridge University Press, (2004). Google Scholar A. Bahar and X. Mao, Stochastic delay population dynamics,, International J. Pure and Applied Math., 11 (2004), 377. Google Scholar I. Barbalat, Systèmes d'équations différentielles d'oscillations non linéaires,, Revue Roumaine de Mathématiques Pures et Appliquées, 4 (1959), 267. Google Scholar A. Berman and R. J. Plemmons, "Nonnegative Matrices in the Mathematical Sciences,", Academic Press, (1979). Google Scholar S. Cheng, Stochastic population systems,, Stoch. Anal. Appl., 27 (2009), 854. doi: 10.1080/07362990902844348. Google Scholar B. S. Goh, Stability in models of mutualism,, Amer. Natural, 113 (1979), 261. doi: 10.1086/283384. Google Scholar K. Gopalsamy, "Stability and Oscillations in Delay Differential Equations of Population Dynamics,", Kluwer Academic, (1992). Google Scholar T. G. Hallam and Z. Ma, Persistence in population models with demographic fluctuations,, J. Math. Biol., 24 (1986), 327.
doi: 10.1007/BF00275641. Google Scholar X. He and K. Gopalsamy, Persistence, attractivity, and delay in facultative mutualism,, J. Math. Anal. Appl., 215 (1997), 154. doi: 10.1006/jmaa.1997.5632. Google Scholar D. J. Higham, An algorithmic introduction to numerical simulation of stochastic diffrential equations,, SIAM Rev., 43 (2001), 525. doi: 10.1137/S0036144500378302. Google Scholar Y. Hu, F. Wu and C. Huang, Stochastic Lotka-Volterra models with multiple delays,, J. Math. Anal. Appl., 375 (2011), 42. doi: 10.1016/j.jmaa.2010.08.017. Google Scholar V. Hutson and K. Schmitt, Permanence and the dynamics of biological systems,, Math. Biosci., 111 (1992), 1. doi: 10.1016/0025-5564(92)90078-B. Google Scholar D. Jiang, N. Shi and X. Li, Global stability and stochastic permanence of a non-autonomous logistic equation with random perturbation,, J. Math. Anal. Appl., 340 (2006), 588. doi: 10.1016/j.jmaa.2007.08.014. Google Scholar J. Jiang, On the global stability of cooperative systems,, Bull. Lond. Math. Soc., 26 (1994), 455. doi: 10.1112/blms/26.5.455. Google Scholar I. Karatzas and S. E. Shreve, "Brownian Motion and Stochastic Calculus,", Springer-Verlag, (1991). doi: 10.1007/978-1-4612-0949-2. Google Scholar Y. Kuang, "Delay Differential Equations with Applications in Population Dynamics,", Academic Press, (1993). Google Scholar X. Li, A. Gray, D. Jiang and X. Mao, Sufficient and necessary conditions of stochastic permanence and extinction for stochastic logistic populations under regime switching,, J. Math. Anal. Appl., 376 (2011), 11. doi: 10.1016/j.jmaa.2010.10.053. Google Scholar X. Li and X. Mao, Population dynamical behavior of non-autonomous Lotka-Volterra competitive system with random perturbation,, Discrete Contin. Dyn. Syst., 24 (2009), 523. doi: 10.3934/dcds.2009.24.523. Google Scholar X. Li, D. Jiang and X. Mao, Population dynamical behavior of Lotka-Volterra system under regime switching,, J. Comput. Appl. Math., 232 (2009), 427. doi: 10.1016/j.cam.2009.06.021. Google Scholar M. Liu and K. Wang, Survival analysis of a stochastic cooperation system in a polluted environment,, J. Biol. Syst., 19 (2011), 183. doi: 10.1142/S0218339011003877. Google Scholar M. Liu and K. Wang, Persistence and extinction in stochastic non-autonomous logistic systems,, J. Math. Anal. Appl., 375 (2011), 443. doi: 10.1016/j.jmaa.2010.09.058. Google Scholar M. Liu, K. Wang and Q. Wu, Survival analysis of stochastic competitive models in a polluted environment and stochastic competitive exclusion principle,, Bull. Math. Biol., 73 (2011), 1969. doi: 10.1007/s11538-010-9569-5. Google Scholar G. Lu, Z. Lu and X. Lian, Delay effect on the permanence for Lotka-Volterra cooperative systems,, Nonlinear Anal. Real World Appl., 11 (2010), 2810. doi: 10.1016/j.nonrwa.2009.10.005. Google Scholar Z. Lu and Y. Takeuchi, Permanence and global stability for cooperative Lotka-Volterra diffusion systems,, Nonlinear Anal., 19 (1992), 963. doi: 10.1016/0362-546X(92)90107-P. Google Scholar Q. Luo and X. Mao, Stochastic population dynamics under regime switching II,, J. Math. Anal. Appl., 355 (2009), 577. Google Scholar X. Mao, S. Sabais and E. Renshaw, Asymptotic behavior of stochastic Lotka-Volterra model,, J. Math. Anal. Appl., 287 (2003), 141. doi: 10.1016/S0022-247X(03)00539-0. Google Scholar X. Mao and C. Yuan, "Stochastic Differential Equations with Markovian Switching,", Imperial College Press, (2006). Google Scholar J. Pan, Z. Jin and Z. 
Ma, Thresholds of survival for an n-dimensional Volterra mutualistic system in a polluted environment,, J. Math. Anal. Appl., 252 (2000), 519. doi: 10.1006/jmaa.2000.6853. Google Scholar S. Pang, F. Deng and X. Mao, Asymptotic properties of stochastic population dynamics,, Dyn. Contin. Discrete Impuls. Syst. Ser. A Math. Anal., 15 (2008), 603. Google Scholar H. L. Smith, On the asymptotic behavior of a class of deterministic models of cooperating species,, SIAM J. Math. Anal., 46 (1986), 368. doi: 10.1137/0146025. Google Scholar F. Wu and Y. Hu, Stochastic Lotka-Volterra system with unbounded distributed delay,, Discrete Contin. Dyn. Syst. Ser. B, 14 (2010), 275. doi: 10.3934/dcdsb.2010.14.275. Google Scholar J. Zhao and J. Jiang, Average conditions for permanence and extinction in nonautonomous Lotka-Volterra system,, J. Math. Anal. Appl., 229 (2004), 663. doi: 10.1016/j.jmaa.2004.06.019. Google Scholar J. Zhao, J. Jiang and A. Lazer, The permanence and global attractivity in a nonautonomous Lotka-Volterra system,, Nonlinear Anal. Real World Appl., 5 (2004), 265. doi: 10.1016/S1468-1218(03)00038-5. Google Scholar Sebastian J. Schreiber. On persistence and extinction for randomly perturbed dynamical systems. Discrete & Continuous Dynamical Systems - B, 2007, 7 (2) : 457-463. doi: 10.3934/dcdsb.2007.7.457 Ludwig Arnold, Igor Chueshov. Cooperative random and stochastic differential equations. Discrete & Continuous Dynamical Systems - A, 2001, 7 (1) : 1-33. doi: 10.3934/dcds.2001.7.1 M. R. S. Kulenović, Orlando Merino. A global attractivity result for maps with invariant boxes. Discrete & Continuous Dynamical Systems - B, 2006, 6 (1) : 97-110. doi: 10.3934/dcdsb.2006.6.97 Y. Chen, L. Wang. Global attractivity of a circadian pacemaker model in a periodic environment. Discrete & Continuous Dynamical Systems - B, 2005, 5 (2) : 277-288. doi: 10.3934/dcdsb.2005.5.277 Wen Jin, Horst R. Thieme. Persistence and extinction of diffusing populations with two sexes and short reproductive season. Discrete & Continuous Dynamical Systems - B, 2014, 19 (10) : 3209-3218. doi: 10.3934/dcdsb.2014.19.3209 Keng Deng, Yixiang Wu. Extinction and uniform strong persistence of a size-structured population model. Discrete & Continuous Dynamical Systems - B, 2017, 22 (3) : 831-840. doi: 10.3934/dcdsb.2017041 Wen Jin, Horst R. Thieme. An extinction/persistence threshold for sexually reproducing populations: The cone spectral radius. Discrete & Continuous Dynamical Systems - B, 2016, 21 (2) : 447-470. doi: 10.3934/dcdsb.2016.21.447 Pascal Noble, Sebastien Travadel. Non-persistence of roll-waves under viscous perturbations. Discrete & Continuous Dynamical Systems - B, 2001, 1 (1) : 61-70. doi: 10.3934/dcdsb.2001.1.61 Chunyan Ji, Daqing Jiang. Persistence and non-persistence of a mutualism system with stochastic perturbation. Discrete & Continuous Dynamical Systems - A, 2012, 32 (3) : 867-889. doi: 10.3934/dcds.2012.32.867 Weigu Li, Kening Lu. A Siegel theorem for dynamical systems under random perturbations. Discrete & Continuous Dynamical Systems - B, 2008, 9 (3&4, May) : 635-642. doi: 10.3934/dcdsb.2008.9.635 Giuseppe Buttazzo, Faustino Maestre. Optimal shape for elliptic problems with random perturbations. Discrete & Continuous Dynamical Systems - A, 2011, 31 (4) : 1115-1128. doi: 10.3934/dcds.2011.31.1115 Yuri Kifer. Computations in dynamical systems via random perturbations. Discrete & Continuous Dynamical Systems - A, 1997, 3 (4) : 457-476. doi: 10.3934/dcds.1997.3.457 G. A. Enciso, E. D. Sontag. 
Global attractivity, I/O monotone small-gain theorems, and biological delay systems. Discrete & Continuous Dynamical Systems - A, 2006, 14 (3) : 549-578. doi: 10.3934/dcds.2006.14.549 A. Berger, R.S. MacKay, Vassilis Rothos. A criterion for non-persistence of travelling breathers for perturbations of the Ablowitz--Ladik lattice. Discrete & Continuous Dynamical Systems - B, 2004, 4 (4) : 911-920. doi: 10.3934/dcdsb.2004.4.911 Ayadi Lazrag, Ludovic Rifford, Rafael O. Ruggiero. Franks' lemma for $\mathbf{C}^2$-Mañé perturbations of Riemannian metrics and applications to persistence. Journal of Modern Dynamics, 2016, 10: 379-411. doi: 10.3934/jmd.2016.10.379 Cristina Anton, Alan Yong. Stochastic dynamics and survival analysis of a cell population model with random perturbations. Mathematical Biosciences & Engineering, 2018, 15 (5) : 1077-1098. doi: 10.3934/mbe.2018048 Henri Schurz. Dissipation of mean energy of discretized linear oscillators under random perturbations. Conference Publications, 2005, 2005 (Special) : 778-783. doi: 10.3934/proc.2005.2005.778 Yujun Zhu. Topological quasi-stability of partially hyperbolic diffeomorphisms under random perturbations. Discrete & Continuous Dynamical Systems - A, 2014, 34 (2) : 869-882. doi: 10.3934/dcds.2014.34.869 Qiuxia Liu, Peidong Liu. Topological stability of hyperbolic sets of flows under random perturbations. Discrete & Continuous Dynamical Systems - B, 2010, 13 (1) : 117-127. doi: 10.3934/dcdsb.2010.13.117 Hal L. Smith, Horst R. Thieme. Persistence and global stability for a class of discrete time structured population models. Discrete & Continuous Dynamical Systems - A, 2013, 33 (10) : 4627-4646. doi: 10.3934/dcds.2013.33.4627
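For readers who want to experiment with stochastic cooperative systems of the kind studied in the Liu-Wang paper above, the sketch below integrates a two-species version with the Euler-Maruyama scheme (the method treated in Higham's SIAM Review article cited in the reference list). It is only an illustration, not the paper's model: the coefficient values, the crude positivity guard, and the comparison against b_i - sigma_i^2/2 (the constant that typically appears in results of this kind; the paper's exact constant may differ) are all assumptions of this sketch.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative two-species stochastic cooperative Lotka-Volterra model:
#   dx_i = x_i * (b_i - a_self_i * x_i + a_coop_i * x_j) dt + sigma_i * x_i dB_i
b = np.array([1.0, 0.8])         # intrinsic growth rates (assumed values)
a_self = np.array([1.0, 1.2])    # intraspecific self-limitation
a_coop = np.array([0.3, 0.2])    # cooperative coupling to the other species
sigma = np.array([0.6, 1.5])     # white-noise intensities

dt, steps = 1e-3, 200_000
x = np.array([0.5, 0.5])
path = np.empty((steps, 2))

for k in range(steps):
    drift = x * (b - a_self * x + a_coop * x[::-1])
    noise = sigma * x * np.sqrt(dt) * rng.standard_normal(2)
    x = np.maximum(x + drift * dt + noise, 1e-12)   # crude positivity guard for the sketch
    path[k] = x

print("time-averaged populations :", path[steps // 2:].mean(axis=0))
print("b_i - sigma_i^2 / 2       :", b - sigma ** 2 / 2)   # heuristic persistence indicator
```

With these illustrative values the indicator is positive for species 1 and negative for species 2, and typical runs show species 1 fluctuating around a positive level while species 2 decays toward zero. The abstract's second point is that sufficiently strong cooperation can rescue a species whose own indicator is negative, which is not the case for the weak coupling chosen here.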
Mixing of 0+ and 0− observed in the hyperfine and Zeeman structure of ultracold Rb2 molecules njp_17_8_083032.pdf (1.436Mb) Deiß, Markus Drews, Björn Hecker Denschlag, Johannes Tiemann, Eberhard New Journal of Physics ; 17 (2015), 8. - eISSN 1367-2630 Link to original publication https://dx.doi.org/10.1088/1367-2630/17/8/083032 Fakultät für Naturwissenschaften Institut für Quantenmaterie Center for Integrated Quantum Science and Technology (IQST) published version (publisher's PDF) We study the combination of the hyperfine and Zeeman structure in the spin–orbit coupled ${A}^{1}{\Sigma }_{u}^{+}-{b}^{3}{\Pi }_{u}$ complex of ${}^{87}{\mathrm{Rb}}_{2}$. For this purpose, absorption spectroscopy at a magnetic field around $B=1000$ G is carried out. We drive optical dipole transitions from the lowest rotational state of an ultracold Feshbach molecule to various vibrational levels with ${0}^{+}$ symmetry of the $A-b$ complex. In contrast to previous measurements with rotationally excited alkali-dimers, we do not observe equal spacings of the hyperfine levels. In addition, the spectra vary substantially for different vibrational quantum numbers, and exhibit large splittings of up to $160$ MHz, unexpected for ${0}^{+}$ states. The level structure is explained to be a result of the repulsion between the states ${0}^{+}$ and ${0}^{-}$ of ${b}^{3}{\Pi }_{u}$, coupled via hyperfine and Zeeman interactions. In general, ${0}^{-}$ and ${0}^{+}$ have a spin–orbit induced energy spacing Δ, that is different for the individual vibrational states. From each measured spectrum we are able to extract Δ, which otherwise is not easily accessible in conventional spectroscopy schemes. We obtain values of Δ in the range of $\pm 100$ GHz which can be described by coupled channel calculations if a spin–orbit coupling is introduced that is different for ${0}^{-}$ and ${0}^{+}$ of ${b}^{3}{\Pi }_{u}$. [GND]: Spektroskopie | Ultrakaltes Molekül | Hyperfeinstruktur | Zeeman-Effekt [LCSH]: Spectroscopy | Hyperfine structure | Zeeman effect | Atoms; Effect of low temperatures on | Molecules; Effect of low temperatures on | Low temperatures [Free subject headings]: ultracold molecules | dipole transitions | spin–orbit coupling | coupled channel calculations [DDC subject group]: DDC 530 / Physics DOI & citation Please use this identifier to cite or link to this item: http://dx.doi.org/10.18725/OPARU-42830 Deiß, Markus et al. (2022): Mixing of 0+ and 0− observed in the hyperfine and Zeeman structure of ultracold Rb2 molecules. Open Access Repositorium der Universität Ulm und Technischen Hochschule Ulm. http://dx.doi.org/10.18725/OPARU-42830 Citation formatter >
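A minimal way to see how the observed hyperfine and Zeeman structure encodes the spacing $\Delta$ is a schematic two-state picture: one nominally ${0}^{+}$ level and one nearby ${0}^{-}$ level separated by $\Delta$ and coupled by an effective hyperfine/Zeeman matrix element $V$. This is only a caricature of the coupled channel calculations used in the paper, and both $\Delta$ and $V$ are treated as free parameters here:

$$ H = \begin{pmatrix} 0 & V \\ V & \Delta \end{pmatrix}, \qquad E_{\pm} = \frac{\Delta}{2} \pm \sqrt{\frac{\Delta^{2}}{4} + V^{2}} . $$

For $|V| \ll |\Delta|$ the nominally ${0}^{+}$ level is shifted by roughly $-V^{2}/\Delta$ and acquires a ${0}^{-}$ admixture of order $V/\Delta$, so the measured shifts and splittings, and their variation from one vibrational level to the next, constrain both the sign and the magnitude of $\Delta$; this is the sense in which a quantity that is not easily accessible in conventional spectroscopy can be read off the spectra.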
Navier-Stokes equations: Some questions related to the direction of the vorticity DCDS-S Home Singular solutions of a nonlinear equation in a punctured domain of $\mathbb{R}^{2}$ April 2019, 12(2): 189-202. doi: 10.3934/dcdss.2019013 On solutions of semilinear upper diagonal infinite systems of differential equations Józef Banaś 1,, and Monika Krajewska 2, Department of Nonlinear Analysis, Rzeszów University of Technology, Al. Powstańców Warszawy 8, 35-959 Rzeszów, Poland Institute of Economic and Management, State Higher School of Technology and Economics in Jarosław, ul. Czarnieckiego 16, 37-500 Jarosław, Poland * Corresponding author: Józef Banaś Dedicated to Professor Vicentiu Radulescu on the occasion of his 60th anniversary Received August 2017 Revised December 2017 Published August 2018 The goal of the paper is to investigate the existence of solutions for semilinear upper diagonal infinite systems of differential equations. We look for solutions of such infinite systems in a Banach tempered sequence space. In our considerations we utilize the technique associated with the Hausdorff measure of noncompactness and some existence results from the theory of ordinary differential equations in abstract Banach spaces. (The standard definitions of the tempered sequence space and of the Hausdorff measure of noncompactness are recalled at the end of this page.) Keywords: Infinite system of differential equations, Banach space, tempered sequence, semilinear upper diagonal infinite system of differential equations, measure of noncompactness. Mathematics Subject Classification: Primary: 34G20; Secondary: 47H08. Citation: Józef Banaś, Monika Krajewska. On solutions of semilinear upper diagonal infinite systems of differential equations. Discrete & Continuous Dynamical Systems - S, 2019, 12 (2) : 189-202. doi: 10.3934/dcdss.2019013 R. R. Akhmerov, M. I. Kamenskii, A. S. Potapov, A. E. Rodkina and B. N. Sadovskii, Measures of Noncompactness and Condensing Operators, Birkhäuser, Basel, 1992. doi: 10.1007/978-3-0348-5727-7. Google Scholar J. M. Ayerbe Toledano, T. Dominguez Benavides and G. Lopez Acedo, Measures of Noncompactness in Metric Fixed Point Theory, Birkhäuser, Basel, 1997. doi: 10.1007/978-3-0348-8920-9. Google Scholar J. Banaś and K. Goebel, Measures of Noncompactness in Banach Spaces, Marcel Dekker, New York, 1980. Google Scholar J. Banaś and M. Krajewska, Existence of solutions for infinite systems of differential equations in spaces of tempered sequences, Electronic J. Differential Equations, 2017 (2017), 1-28. Google Scholar J. Banaś and M. Mursaleen, Sequence Spaces and Measures of Noncompactness with Applications to Differential and Integral Equations, Springer, New Delhi, 2014. doi: 10.1007/978-81-322-1886-9. Google Scholar L. Cheng, Q. Cheng, Q. Shen, K. Tu and W. Zhang, A new approach to measures of noncompactness of Banach spaces, Studia Math., 240 (2018), 21-45. doi: 10.4064/sm8448-2-2017. Google Scholar K. Deimling, Ordinary Differential Equations in Banach Spaces, Springer, Berlin, 1977. Google Scholar K. Deimling, Nonlinear Functional Analysis, Springer, Berlin, 1985. doi: 10.1007/978-3-662-00547-7. Google Scholar J. Mallet-Paret and R. D. Nussbaum, Inequivalent measures of noncompactness, Ann. Mat. Pura Appl., 190 (2011), 453-488. doi: 10.1007/s10231-010-0158-x. Google Scholar H. Mönch and G. H. von Harten, On the Cauchy problem for ordinary differential equations in Banach spaces, Arch. Math., 39 (1982), 153-160. doi: 10.1007/BF01899196. Google Scholar Gafurjan Ibragimov, Askar Rakhmanov, Idham Arif Alias, Mai Zurwatul Ahlam Mohd Jaffar.
The soft landing problem for an infinite system of second order differential equations. Numerical Algebra, Control & Optimization, 2017, 7 (1) : 89-94. doi: 10.3934/naco.2017005 Shiping Cao, Anthony Coniglio, Xueyan Niu, Richard H. Rand, Robert S. Strichartz. The mathieu differential equation and generalizations to infinite fractafolds. Communications on Pure & Applied Analysis, 2020, 19 (3) : 1795-1845. doi: 10.3934/cpaa.2020073 Zhihua Liu, Pierre Magal. Functional differential equation with infinite delay in a space of exponentially bounded and uniformly continuous functions. Discrete & Continuous Dynamical Systems - B, 2017, 22 (11) : 0-0. doi: 10.3934/dcdsb.2019227 Jan A. Van Casteren. On backward stochastic differential equations in infinite dimensions. Discrete & Continuous Dynamical Systems - S, 2013, 6 (3) : 803-824. doi: 10.3934/dcdss.2013.6.803 Valery Y. Glizer, Oleg Kelis. Singular infinite horizon zero-sum linear-quadratic differential game: Saddle-point equilibrium sequence. Numerical Algebra, Control & Optimization, 2017, 7 (1) : 1-20. doi: 10.3934/naco.2017001 Can Li, Weihua Deng, Lijing Zhao. Well-posedness and numerical algorithm for the tempered fractional differential equations. Discrete & Continuous Dynamical Systems - B, 2019, 24 (4) : 1989-2015. doi: 10.3934/dcdsb.2019026 Giselle A. Monteiro, Milan Tvrdý. Generalized linear differential equations in a Banach space: Continuous dependence on a parameter. Discrete & Continuous Dynamical Systems - A, 2013, 33 (1) : 283-303. doi: 10.3934/dcds.2013.33.283 Samir K. Bhowmik, Dugald B. Duncan, Michael Grinfeld, Gabriel J. Lord. Finite to infinite steady state solutions, bifurcations of an integro-differential equation. Discrete & Continuous Dynamical Systems - B, 2011, 16 (1) : 57-71. doi: 10.3934/dcdsb.2011.16.57 Hernán R. Henríquez, Claudio Cuevas, Alejandro Caicedo. Asymptotically periodic solutions of neutral partial differential equations with infinite delay. Communications on Pure & Applied Analysis, 2013, 12 (5) : 2031-2068. doi: 10.3934/cpaa.2013.12.2031 Jiaohui Xu, Tomás Caraballo. Long time behavior of fractional impulsive stochastic differential equations with infinite delay. Discrete & Continuous Dynamical Systems - B, 2019, 24 (6) : 2719-2743. doi: 10.3934/dcdsb.2018272 Sergio Albeverio, Sonia Mazzucchi. Infinite dimensional integrals and partial differential equations for stochastic and quantum phenomena. Journal of Geometric Mechanics, 2019, 11 (2) : 123-137. doi: 10.3934/jgm.2019006 Teresa Faria. Normal forms for semilinear functional differential equations in Banach spaces and applications. Part II. Discrete & Continuous Dynamical Systems - A, 2001, 7 (1) : 155-176. doi: 10.3934/dcds.2001.7.155 Masakatsu Suzuki, Hideaki Matsunaga. Stability criteria for a class of linear differential equations with off-diagonal delays. Discrete & Continuous Dynamical Systems - A, 2009, 24 (4) : 1381-1391. doi: 10.3934/dcds.2009.24.1381 Yuri Bakhtin. Lyapunov exponents for stochastic differential equations with infinite memory and application to stochastic Navier-Stokes equations. Discrete & Continuous Dynamical Systems - B, 2006, 6 (4) : 697-709. doi: 10.3934/dcdsb.2006.6.697 Irene Benedetti, Nguyen Van Loi, Valentina Taddei. An approximation solvability method for nonlocal semilinear differential problems in Banach spaces. Discrete & Continuous Dynamical Systems - A, 2017, 37 (6) : 2977-2998. doi: 10.3934/dcds.2017128 Tomás Caraballo, P.E. Kloeden, Pedro Marín-Rubio. 
Numerical and finite delay approximations of attractors for logistic differential-integral equations with infinite delay. Discrete & Continuous Dynamical Systems - A, 2007, 19 (1) : 177-196. doi: 10.3934/dcds.2007.19.177 Jean-François Couchouron, Mikhail Kamenskii, Paolo Nistri. An infinite dimensional bifurcation problem with application to a class of functional differential equations of neutral type. Communications on Pure & Applied Analysis, 2013, 12 (5) : 1845-1859. doi: 10.3934/cpaa.2013.12.1845 Jianhui Huang, Xun Li, Jiongmin Yong. A linear-quadratic optimal control problem for mean-field stochastic differential equations in infinite horizon. Mathematical Control & Related Fields, 2015, 5 (1) : 97-139. doi: 10.3934/mcrf.2015.5.97 Abdelhai Elazzouzi, Aziz Ouhinou. Optimal regularity and stability analysis in the $\alpha-$Norm for a class of partial functional differential equations with infinite delay. Discrete & Continuous Dynamical Systems - A, 2011, 30 (1) : 115-135. doi: 10.3934/dcds.2011.30.115 Fuke Wu, Shigeng Hu. The LaSalle-type theorem for neutral stochastic functional differential equations with infinite delay. Discrete & Continuous Dynamical Systems - A, 2012, 32 (3) : 1065-1094. doi: 10.3934/dcds.2012.32.1065
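As noted above, here are the standard definitions behind the two tools named in the abstract; the weight sequence and the notation are generic, and the precise weighted space used in the paper may differ in details. For a fixed positive tempering sequence $(\beta_n)$, one common choice of Banach tempered sequence space is

$$ \ell_{\infty}^{\beta} = \Big\{ x = (x_n) : \ \|x\|_{\beta} = \sup_{n \ge 1} |x_n| \, \beta_n < \infty \Big\}, $$

and for a bounded subset $X$ of a Banach space $E$ the Hausdorff measure of noncompactness is

$$ \chi(X) = \inf \big\{ \varepsilon > 0 : \ X \ \text{has a finite} \ \varepsilon\text{-net in} \ E \big\}, $$

so that $\chi(X) = 0$ exactly when $X$ is relatively compact. Conditions formulated in terms of $\chi$ (for instance, Darbo- or Mönch-type conditions) are what make compactness-free existence arguments possible in this setting.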
Complete dynamical analysis for a nonlinear HTLV-I infection model with distributed delay, CTL response and immune impairment Emma D'Aniello 1, and Saber Elaydi 2,, Dipartimento di Matematica e Fisica, Università degli Studi della Campania "Luigi Vanvitelli", Viale Lincoln n.5, 81100 Caserta, Italia Department of Mathematics, Trinity University, San Antonio, TX 78212-7200, USA * Corresponding author: Saber Elaydi Saber Elaydi acknowledges the hospitality of the Department of Mathematics and Physics of the Università degli Studi della Campania "Luigi Vanvitelli" Revised December 2018 Published September 2019 We consider a discrete non-autonomous semi-dynamical system generated by a family of continuous maps defined on a locally compact metric space. It is assumed that this family of maps uniformly converges to a continuous map. Such a non-autonomous system is called an asymptotically autonomous system. We extend the dynamical system to the metric one-point compactification of the phase space. This is done via the construction of an associated skew-product dynamical system. We prove, among other things, that the omega limit sets are invariant and invariantly connected. We apply our results to two population models, the Ricker model with no Allee effect and the Elaydi-Sacker model with the Allee effect, where it is assumed that the reproduction rate changes with time due to habitat fluctuation. (A short simulation sketch of the non-autonomous Ricker model is given at the end of this page.) Keywords: Dynamical system, skew product, attractor. Mathematics Subject Classification: Primary: 54H20, 37C70; Secondary: 39A05. Citation: Emma D'Aniello, Saber Elaydi. The structure of $ \omega $-limit sets of asymptotically non-autonomous discrete dynamical systems. Discrete & Continuous Dynamical Systems - B, 2020, 25 (3) : 903-915. doi: 10.3934/dcdsb.2019195 W. C. Allee, The Social Life of Animals, 3rd Edition, William Heineman Ltd, London and Toronto, 1941. Google Scholar L. Assas, S. Elaydi, E. Kwessi, G. Livadiotis and D. Ribble, Hierarchical competition models with the Allee effect, J. Biological Dynamics, 9 (2015), 32-44. doi: 10.1080/17513758.2014.923118. Google Scholar L. Assas, B. Dennis, S. Elaydi, E. Kwessi and G. Livadiotis, Hierarchical competition models with the Allee effect II: The case of immigration, J. Biological Dynamics, 9 (2015), 288-316. doi: 10.1080/17513758.2015.1077999. Google Scholar B. Aulbach and S. Siegmund, The dichotomy spectrum for noninvertible systems of linear difference equations, J. Diff. Equ. Appl., 7 (2001), 895-913. doi: 10.1080/10236190108808310. Google Scholar B. Aulbach and T. Wanner, Invariant foliations and decoupling of non-autonomous difference equations, J. Diff. Equ. Appl., 9 (2003), 459-472. doi: 10.1080/1023619031000076524. Google Scholar B. Aulbach and T. Wanner, Topological simplification of non-autonomous difference equations, J. Diff. Equ. Appl., 12 (2006), 283-296. doi: 10.1080/10236190500489384. Google Scholar E. Cabral Balreira, S. Elaydi and R. Luís, Global dynamics of triangular maps, Nonlinear Analysis, Theory, Methods and Appl., Ser. A, 104 (2014), 75-83. doi: 10.1016/j.na.2014.03.019. Google Scholar L. S. Block and W. A. Coppel, Dynamics in One Dimension, Lecture Notes in Mathematics, 1513, Springer-Verlag, Berlin, 1992. doi: 10.1007/BFb0084762. Google Scholar M. Brin and G. Stuck, Introduction to Dynamical Systems, Cambridge University Press, Cambridge, 2002. doi: 10.1017/CBO9780511755316. Google Scholar J. S. Cánovas, On $\omega$-limit sets of non-autonomous discrete systems, J. Diff. Equ. Appl., 12 (2006), 95-100.
doi: 10.1080/10236190500424274. Google Scholar E. D'Aniello and H. Oliveira, Pitchfork bifurcation for non-autonomous interval maps, J. Diff. Equ. Appl., 15 (2009), 291-302. doi: 10.1080/10236190802258669. Google Scholar E. D'Aniello and T. H. Steele, The $\omega$-limit sets of alternating systems, J. Diff. Equ. Appl., 17 (2011), 1793-1799. doi: 10.1080/10236198.2010.488227. Google Scholar E. D'Aniello and T. H. Steele, Stability in the family of $\omega$-limit sets of alternating systems, J. Math. Anal. Appl., 389 (2012), 1191-1203. doi: 10.1016/j.jmaa.2011.12.056. Google Scholar Y. N. Dowker and F. G. Friedlander, On limit sets in dynamical systems, Proc. London Math. Soc., 4 (1954), 168-176. doi: 10.1112/plms/s3-4.1.168. Google Scholar J. Dugundji, Topology, Allyn and Bacon, Inc., Boston, Mass, 1966. Google Scholar J. Dvořáková, N. Neumärker and M. Štefánková, On $\omega$-limit sets of non-autonomous dynamical systems with a uniform limit of type $2^{\infty}$, J. Diff. Equ. Appl., 22 (2016), 636-644. doi: 10.1080/10236198.2015.1123706. Google Scholar S. Elaydi, E. Kwessi and G. Livadiotis, Hierarchical competition models with the Allee effect III: Multispecies, J. Biological Dynamics, 12 (2018), 271-287. doi: 10.1080/17513758.2018.1439537. Google Scholar S. Elaydi and R. J. Sacker, Global stability of periodic orbits of non-autonomous difference equations and population biology, J. Diff. Equ. Appl., 208 (2005), 258-273. doi: 10.1016/j.jde.2003.10.024. Google Scholar S. Elaydi and R. Sacker, Skew-Product Dynamical Systems: Applications to Difference Equations, 2004. Available from: http://www-bcf.usc.edu/ rsacker/pubs/UAE.pdf. Google Scholar S. N. Elaydi and R. J. Sacker, Population models with Allee effect: A new model, J. Biological Dynamics, 4 (2010), 397-408. doi: 10.1080/17513750903377434. Google Scholar N. Franco, L. Silva and P. Simões, Symbolic dynamics and renormalization of non-autonomous $k$-periodic dynamical systems, J. Diff. Equ. Appl., 19 (2013), 27-38. doi: 10.1080/10236198.2011.611804. Google Scholar J. E. Franke and A.-A. Yakubu, Population models with periodic recruitment functions and survival rates, J. Diff. Equ. Appl., 11 (2005), 1169-1184. doi: 10.1080/10236190500386275. Google Scholar R. Kempf, On $\Omega$-limit sets of discrete-time dynamical systems, J. Diff. Equ. Appl., 8 (2002), 1121-1131. doi: 10.1080/10236190290029024. Google Scholar P. E. Kloeden and M. Rasmussen, Nonautonomous Dynamical Systems, Mathematical Surveys and Monographs, 176, AMS, Providence, RI, 2011. doi: 10.1090/surv/176. Google Scholar S. F. Kolyada, On dynamics of triangular maps of the square, Ergodic Theory Dynam. Systems, 12 (1992), 749-768. doi: 10.1017/S0143385700007082. Google Scholar J. P. La Salle, The stability of dynamical systems, in CBMS-NSf Regional Conference Series in Applied Mathematics, Siam, (1976). doi: 10.1137/1.9781611970432. Google Scholar V. Jiménez López and J. Smítal, $\omega$-limit sets for triangular mappings, Fundamenta Math., 167 (2001), 1-15. doi: 10.4064/fm167-1-1. Google Scholar M. Mandelkern, Metrization of the one-point compactification, Proc. Amer. Math. Soc., 107 (1989), 1111-1115. doi: 10.1090/S0002-9939-1989-0991703-4. Google Scholar C. Pötzsche, Geometric Theory of Discrete Nonautonomous Dynamical Systems, Lecture Note in Mathematics, 2002. Springer-Verlag, Berlin 2010. doi: 10.1007/978-3-642-14258-1. Google Scholar G. 
Rangel, Global attractors in partial differential equations, in Handbook of Dynamical Systems, North-Holland, Amsterdam, 2 (2002), 885–982. doi: 10.1016/S1874-575X(02)80038-8. Google Scholar L. Silva, Periodic attractors of nonautonomous flat-topped tent systems, Discrete Contin. Dyn. Syst. B, 24 (2019), 1867-1874. Google Scholar Figure 1. The space $ \widehat {\cal F} = \left\{ {{f_n}:n = 0,1,2, \ldots } \right\} \cup \left\{ f \right\} $, where $ f_{n} \rightarrow f $, uniformly, as $ n \rightarrow \infty $. If $ x_{0} $ is on the fiber $ {\mathcal X}_{0} $, then $ f_{0}(x_{0}) = x_{1} $ is in the fiber $ {\mathcal X}_{1} $, and $ f_{1}(x_{1}) = x_{2} $ is on the fiber $ {\mathcal X}_{2} $, etc Figure 2. This commuting diagram illustrates the notion of a skew product discrete semidynamical system. Here $ p $ is the projection map, $ p: X \times {\mathcal F} \rightarrow {\mathcal F} $, that is $ p(x,g) = g $, for each $ (x, g) \in X \times {\mathcal F} $, $ i_{d} $ is the identity map and $ \sigma $ is the shift map, where $ \sigma(f_{i}, n) = f_{i+n} $ Figure 3. Beverton-Holt maps with $ {B}_{0} $, $ {B}_{1} $, $ {B}_{2} $, $ {B}_{3} $, $ {B}_{4} $, $ \dots $, with time-dependent $ {r}_{n} $ converging to the map $ B $ Figure 4. The phase space diagram of the 2-species hierarchical model with four interior fixed points and five fixed points on the axes Ahmed Y. Abdallah. Upper semicontinuity of the attractor for a second order lattice dynamical system. Discrete & Continuous Dynamical Systems - B, 2005, 5 (4) : 899-916. doi: 10.3934/dcdsb.2005.5.899 Patrick Bonckaert, Timoteo Carletti, Ernest Fontich. On dynamical systems close to a product of $m$ rotations. Discrete & Continuous Dynamical Systems - A, 2009, 24 (2) : 349-366. doi: 10.3934/dcds.2009.24.349 Sebastián Donoso, Wenbo Sun. Dynamical cubes and a criteria for systems having product extensions. Journal of Modern Dynamics, 2015, 9: 365-405. doi: 10.3934/jmd.2015.9.365 Juan A. Calzada, Rafael Obaya, Ana M. Sanz. Continuous separation for monotone skew-product semiflows: From theoretical to numerical results. Discrete & Continuous Dynamical Systems - B, 2015, 20 (3) : 915-944. doi: 10.3934/dcdsb.2015.20.915 Sylvia Novo, Carmen Núñez, Rafael Obaya, Ana M. Sanz. Skew-product semiflows for non-autonomous partial functional differential equations with delay. Discrete & Continuous Dynamical Systems - A, 2014, 34 (10) : 4291-4321. doi: 10.3934/dcds.2014.34.4291 Bogdan Sasu, A. L. Sasu. Input-output conditions for the asymptotic behavior of linear skew-product flows and applications. Communications on Pure & Applied Analysis, 2006, 5 (3) : 551-569. doi: 10.3934/cpaa.2006.5.551 H. M. Hastings, S. Silberger, M. T. Weiss, Y. Wu. A twisted tensor product on symbolic dynamical systems and the Ashley's problem. Discrete & Continuous Dynamical Systems - A, 2003, 9 (3) : 549-558. doi: 10.3934/dcds.2003.9.549 Dongfeng Zhang, Junxiang Xu, Xindong Xu. Reducibility of three dimensional skew symmetric system with Liouvillean basic frequencies. Discrete & Continuous Dynamical Systems - A, 2018, 38 (6) : 2851-2877. doi: 10.3934/dcds.2018123 P.K. Newton. The dipole dynamical system. Conference Publications, 2005, 2005 (Special) : 692-699. doi: 10.3934/proc.2005.2005.692 Junyi Tu, Yuncheng You. Random attractor of stochastic Brusselator system with multiplicative noise. Discrete & Continuous Dynamical Systems - A, 2016, 36 (5) : 2757-2779. doi: 10.3934/dcds.2016.36.2757 Tomás Caraballo, David Cheban. 
On the structure of the global attractor for non-autonomous dynamical systems with weak convergence. Communications on Pure & Applied Analysis, 2012, 11 (2) : 809-828. doi: 10.3934/cpaa.2012.11.809 Dorota Bors, Robert Stańczy. Dynamical system modeling fermionic limit. Discrete & Continuous Dynamical Systems - B, 2018, 23 (1) : 45-55. doi: 10.3934/dcdsb.2018004 Xiangnan He, Wenlian Lu, Tianping Chen. On transverse stability of random dynamical system. Discrete & Continuous Dynamical Systems - A, 2013, 33 (2) : 701-721. doi: 10.3934/dcds.2013.33.701 Jianfeng Feng, Mariya Shcherbina, Brunello Tirozzi. Dynamical behaviour of a large complex system. Communications on Pure & Applied Analysis, 2008, 7 (2) : 249-265. doi: 10.3934/cpaa.2008.7.249
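To make the paper's first application concrete, the sketch below iterates a non-autonomous Ricker map whose reproduction rate $r_n$ fluctuates but converges to a limit value $r$, which is the asymptotically autonomous situation studied above. The parameterization $x_{n+1} = x_n e^{r_n(1-x_n)}$, the particular sequence $r_n$, and the initial condition are illustrative assumptions and are not taken from the paper.

```python
import numpy as np

def ricker_step(x, r):
    """One step of the Ricker map x -> x * exp(r * (1 - x)) (one common parameterization)."""
    return x * np.exp(r * (1.0 - x))

# Time-dependent reproduction rate r_n -> r_limit (habitat fluctuation dying out).
r_limit = 1.5
n_steps = 2000
n = np.arange(n_steps)
r_seq = r_limit + 0.8 * np.cos(n) / (1.0 + n)   # uniformly convergent perturbation of r_limit

x = 0.3
orbit = []
for r_n in r_seq:
    x = ricker_step(x, r_n)
    orbit.append(x)

# For 0 < r <= 2 the limiting autonomous Ricker map has a globally attracting
# positive fixed point at x* = 1, so the tail of this orbit settles near 1.
print("last iterates:", np.round(orbit[-5:], 4))
```

Here the $\omega$-limit set of the non-autonomous orbit coincides with the attractor of the limiting map, which is the kind of behaviour the skew-product construction above is designed to capture; in the Elaydi-Sacker model with an Allee effect the outcome would typically also depend on whether the orbit stays above the Allee threshold.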
Regulation of the Immune System by NF-κB and IκB Liou, Hsiou-Chi 537 NF-${\kappa}B$/Rel transcription factor family participates in diverse biological processes including embryo development, hematopoiesis, immune regulation, as well as neuronal functions. In this review, the NF-${\kappa}B$/Rel signal transduction pathways and their important roles in the regulation of immune system will be discussed. NF-${\kappa}B$/Rel members execute distinct functions in multiple immune cell types via the regulation of target genes essential for cell proliferation, survival, effector functions, cell trafficking and communication, as well as the formation of lymphoid architecture. Consequently, proper activation of NF-${\kappa}B$/Rel during immune responses to allergens, auto-antigens, allo-antigens, and pathogenic infection is crucial for the integrity of host innate and adaptive immunity. Effects of Intraperitoneally Administered Lipoic Acid, Vitamin E, and Linalool on the Level of Total Lipid and Fatty Acids in Guinea Pig Brain with Oxidative Stress Induced by H2O2 Celik, Sait;Ozkaya, Ahmet 547 The aim of our study was to investigate the protective effects of intraperitoneally-administrated vitamin E, dl-alpha lipoic acid, and linalool on the level of total lipid and fatty acid in guinea pig brains with oxidative stress that was induced by $H_2O_2$. The total brain lipid content in the $H_2O_2$ group decreased when compared to the $H_2O_2$ + vitamin E (p<0.05), $H_2O_2$ + linalool (p<0.05), ALA (p<0.05), control (p<0.01), linalool (p<0.01), and vitamin E (p<0.01) groups. While the proportion of total saturated fatty acid (${\Sigma}SFA$) in the $H_2O_2$ group significantly increased (p<0.005) when compared to the vitamin E group, it only slightly increased (p<0.01) when compared to the control and $H_2O_2$ + vitamin E groups. The ratio of the total unsaturated fatty acid (${\Sigma}USFA$) in the $H_2O_2$ groups was lower (p<0.05) than the control, vitamin E, and $H_2O_2$ + vitamin E groups. The level of the total polyunsaturated fatty acid (${\Sigma}PUEA$) in the $H_2O_2$ group decreased in when compared to the control, vitamin E, and $H_2O_2$ + vitamin E groups. While the proportion of the total w3 (omega 3), w6 (omega 6), and PUFA were found to be lowest in the $H_2O_2$ group, they were slightly increased (p<0.05) in the lipoic acid group when compared to the control and $H_2O_2$ + lipoic acid groups. However, the level of ${\Sigma}SFA$ in the $H_2O_2$ group was highest; the level of ${\Sigma}USFA$ in same group was lowest. As the proportion of ${\Sigma}USFA$ and ${\Sigma}PUFA$ were found to be highest in the linalool group, they were decreased in the $H_2O_2$ group when compared to the control group. Our results show that linalool has antioxidant properties, much the same as vitamin E and lipoic acid, to prevent lipid peroxidation. Additionally, vitamin E, lipoic acid, and linalool could lead to therapeutic approaches for limiting damage from oxidation reaction in unsaturated fatty acids, as well as for complementing existing therapy for the treatment of complications of oxidative damage. Identification and Characterization of a Putative Baculoviral Transcriptional Factor IE-1 from Choristoneura fumiferana Granulovirus Rashidan, Kianoush Khajeh;Nassoury, Nasha;Merzouki, Abderrazzak;Guertin, Claude 553 A gene that encodes a protein homologue to baculoviral IE-1 was identified and sequenced in the genome of the Choristoneura fumiferana granulovirus (ChfuGV). 
The gene has an 1278 nucleotide (nt) open-reading frame (ORF) that encodes 426 amino acids with an estimated molecular weight of 50.33 kDa. At the nucleotide level, several cis-acting regulatory elements were detected within the promoter region of the ie-1 gene of ChfuGV along with other studied granuloviruses (GVs). Two putative CCAAT elements were detected within the noncoding leader region of this gene; one was located on the opposite strand at -92 and the other at -420 nt from the putative start triplet. Two baculoviral late promoter motifs (TAAG) were also detected within the promoter region of the ie-1 gene of ChfuGV. A single polyadenylation signal, AATAAA, was located 18nt downstream of the putative translational stop codon of ie-1 from ChfuGV. At the protein level, the amino acid sequence data that was derived from the nucleotide sequence in ChfuGV IE-1 was compared to those of the Cydia pomonella granulovirus (CpGV), Xestia c-nigrum granulovirus (XcGV) and Plutella xylostella granulovirus (PxGV). The C-terminal regions of the granuloviral IE-1 sequences appeared to be more conserved when compared to the N-terminal regions. A domain, similar to the basic helix-loop-helix like (bHLH-like) domain in NPVs, was detected at the C-terminal region of IE-1 from ChfuGV (residues 387 to 414). A phylogenetic tree for baculoviral IE-1 was constructed using a maximum parsimony analysis. A phylogenetic estimation demonstrates that ChfuGV IE-1 is most closely related to that of CpGV. Interaction of Heliothis armigera Nuclear Polyhedrosis Viral Capsid Protein with its Host Actin Lu, Song-Ya;Qi, Yi-Peng;Ge, Guo-Qiong 562 In order to find the cellular interaction factors of the Heliothis armigera nuclear polyhedrosis virus capsid protein VP39, a Heliothis armigera cell cDNA library was constructed. Then VP39 was used as bait. The host actin gene was isolated from the cDNA library with the yeast two-hybrid system. This demonstrated that VP39 could interact with its host actin in yeast. In order to corroborate this interaction in vivo, the vp39 gene was fused with the green fluorescent protein gene in plasmid pEGFP39. The fusion protein was expressed in the Hz-AM1 cells under the control of the Autographa californica multiple nucleopolyhedrovirus immediate early gene promoter. The host actin was labeled specifically by the red fluorescence substance, tetramethy rhodamine isothicyanete-phalloidin. Observation under a fluorescence microscopy showed that VP39, which was indicated by green fluorescence, began to appear in the cells 6 h after being transfected with pEGFP39. Red actin cables were also formed in the cytoplasm at the same time. Actin was aggregated in the nucleus 9 h after the transfection. The green and red fluorescence always appeared in the same location of the cells, which demonstrated that VP39 could combine with the host actin. Such a combination would result in the actin skeleton rearrangement. New Action Pattern of a Maltose-forming α-Amylase from Streptomyces sp. and its Possible Application in Bakery Ammar, Youssef Ben;Matsubara, Takayoshi;Ito, Kazuo;Iizuka, Masaru;Limpaseni, Tipaporn;Pongsawasdi, Piamsook;Minamiura, Noshi 568 An $\alpha$-amylase (EC 3.2.1.1) was purified that catalyses the production of a high level of maltose from starch without the attendant production of glucose. The enzyme was produced extracellularly by thermophilic Streptomyces sp. that was isolated from Thailand's soil. 
Purification was achieved by alcohol precipiation, DEAE-Cellulose, and Gel filtration chromatographies. The purified enzyme exhibited maximum activity at pH 6-7 and $60^{\circ}C$. It had a relative molecular mass of 45 kDa, as determined by SDS-PAGE. The hydrolysis products from starch had $\alpha$-anomeric forms, as determined by $^1H$-NMR. This maltose-forming $\alpha$-amylase completely hydrolyzed the soluble starch to produce a high level of maltose, representing up to 90%. It hydrolyzed maltotetrose and maltotriose to primarily produce maltose (82% and 62%, repectively) without the attendant production of glucose. The high maltose level as a final end-product from starch and maltooligosaccharides, and the unique action pattern of this enzyme, indicate an unusual maltose-forming system. After the addition of the enzyme in the bread-baking process, the bread's volume increased and kept its softness longer than when the bread had no enzyme. Purification and Characterization of a Collagenase from the Mackerel, Scomber japonicus Park, Pyo-Jam;Lee, Sang-Hoon;Byun, Hee-Guk;Kim, Soo-Hyun;Kim, Se-Kwon 576 Collagenase from the internal organs of a mackerel was purified using acetone precipitation, ion-exchange chromatography on a DEAE-Sephadex A-50, gel filtration chromatography on a Sephadex G-100, ion-exchange chromatography on DEAE-Sephacel, and gel filtration chromatography on a Sephadex G-75 column. The molecular mass of the purified enzyme was estimated to be 14.8 kDa by gel filtration and SDS-PAGE. The purification and yield were 39.5-fold and 0.1% when compared to those in the starting-crude extract. The optimum pH and temperature for the enzyme activity were around pH 7.5 and $55^{\circ}C$, respectively. The $K_m$ and $V_{max}$ of the enzyme for collagen Type I were approximately 1.1 mM and 2,343 U, respectively. The purified enzyme was strongly inhibited by $Hg^{2+}$, $Zn^{2+}$, PMSF, TLCK, and the soybean-trypsin inhibitor. Gamma Irradiation-reduced IFN-γ Expression, STAT1 Signals, and Cell-mediated Immunity Han, Seon-Kyu;Song, Jie-Young;Yun, Yeon-Sook;Yi, Seh-Yoon 583 The signal transducer and activator of transcription (STAT)1 is a cytoplasmic-transcription factor that is phosphorylated by Janus kinases (Jak) in response to interferon $\gamma$ (IFN-$\gamma$). The phosphorylated STAT1 translocates to the nucleus, where it turns on specific sets of IFN-$\gamma$-inducible genes, such as the interferon regulatory factor (IRF)-1. We show here that gamma irradiation reduces the IFN-$\gamma$ mRNA expression. The inhibition of the STAT1 phosphorylation and the IRF-1 expression by gamma irradiation was also observed. In contrast, the mRNA levels of IL-5 and transcription factor GATA-3 were slightly induced by gamma irradiation when compared to the non-irradiated sample. Furthermore, we detected the inhibition of cell-mediated immunity by gamma irradiation in the allogenic-mixed lymphocytes' reaction (MLR). These results postulate that gamma irradiation induces the polarized-Th2 response and interferes with STAT1 signals, thereby causing the immunosuppression of the Th1 response. Effect of γ-Irradiation on the Molecular Properties of Myoglobin Lee, Yong-Woo;Song, Kyung-Bin 590 To elucidate the effect of gamma-irradiation on the molecular properties of myoglobin, the secondary and tertiary structures, as well as the molecular weight size of the protein, were examined after irradiation at various irradiation doses. 
Gamma-irradiation of myoglobin solutions caused the disruption of the ordered structure of the protein molecules, as well as degradation, cross-linking, and aggregation of the polypeptide chains. A SDS-PAGE study indicated that irradiation caused initial fragmentation of the proteins and subsequent aggregation, due to cross-linking of the protein molecules. The effect of irradiation on the protein was more significant at lower protein concentrations. Ascorbic acid protected against the degradation and aggregation of proteins by scavenging oxygen radicals that are produced by irradiation. A circular dichroism study showed that an increase of the irradiation decreased the a-helical content of myoglobin with a concurrent increase of the aperiodic structure content. Fluorescence spectroscopy indicated that irradiation increased the emission intensity that was excited at 280 nm. Identification and Characterization of a Conserved Baculoviral Structural Protein ODVP-6E/ODV-E56 from Choristoneura fumiferana Granulovirus Rashidan, Kianoush Khajeh;Nassoury, Nasha;Giannopoulos, Paresa N.;Guertin, Claude 595 A gene that encodes a homologue to baculoviral ODVP-6E/ODV-E56, a baculoviral envelope-associated viral structural protein, has been identified and sequenced on the genome of Choristoneura fumiferana granulovirus (ChfuGV). The ChfuGV odvp-6e/odv-e56 gene was located on an 11-kb BamHI subgenomic fragment using different sets of degenerated primers, which were designed using the results of the protein sequencing of a major 39 kDa structural protein that is associated with the occlusion-derived virus (ODV). The gene has a 1062 nucleotide (nt) open-reading frame (ORF) that encodes a protein with 353 amino acids with a predicated molecular mass of 38.5 kDa. The amino acid sequence data that was derived from the nucleotide sequence in ChfuGV was compared to those of other baculoviruses. ChfuGV ODVP-6E/ODV-E56, along with othe baculoviral ODVP-6E/ODV-E56 proteins, all contained two putative transmembrane domains at their C-terminus. Several putative N-and O-glycosylation, N-myristoylation, and phosphorylation sites were detected in the ChfuGV ODVP-6E/ODV-E56 protein. A similar pattern was detected when a hydrophobicity-plots comparison was performed on ChfuGV ODVP-6E/ODV-E56 with other baculoviral homologue proteins. At the nucleotide level, a late promoter motif (GTAAG) was located at -14 nt upstream to the start codon of the GhfuGV odvp-6e/odv-e56 gene. a slight variant of the polyadenylation signal, AATAAT, was detected at the position +10 nt that is downstream from the termination signal. A phylogenetic tree for baculoviral ODVP-6E/ODV-E56 was constructed using a maximum parsimony analysis. The phylogenetic estimation demonstrated that ChfuGV ODVP-6E/ODV-E56 is most closely related to those of Cydia pomonella granulovirus (CpGV) and Plutella xylostella granulovirus (PxGV). RU486 Suppresses Progesterone-induced Acrosome Reaction in Boar Spermatozoa Jang, Sun-Phil;YiLee, S.H. 604 The effects of progesterone on the acrosome reaction, as well as the effects of RU486 on the progesterone-induced acrosome reaction in capacitated boar spermatozoa, were investigated. Progesterone, a major steroid that is secreted by the cumulus cells of oocyte, clearly induced the acrosome reaction in a dose-dependent manner in capacitated boar spermatozoa, even though it failed to show similar effects in non-capacitated spermatozoa. 
RU486, a potent antiprogestin, significantly reduced the effects of progesterone on the progesterone-induced acrosome reaction; however, when treated alone, it showed no inhibitory effects on the acrosome reaction. The inhibitory effects of RU486 were also shown to be dose-dependent. These results imply that in addition to the well-known inducer of the acrosome reaction, zona pellucida, progesterone can also induce the acrosome reaction through its specific receptors on spermatozoa after the spermatozoa undergo capacitation. The Anti-proliferative Gene TIS21 Is Involved in Osteoclast Differentiation Lee, Soo-Woong;Kwak, Han-Bok;Lee, Hong-Chan;Lee, Seung-Ku;Kim, Hong-Hee;Lee, Zang-Hee 609 The remodeling process of bone is accompanied by complex changes in the expression levels of various genes. Several approaches have been employed to detect differentially-expressed genes in regard to osteoclast differentiation. In order to identify the genes that are involved in osteoclast differentiation, we used a cDNA-array-nylon membrane. Among 1,200 genes that showed ameasurable signal, 19 genes were chosen for further study. Eleven genes were up-regulated; eight genes were down-regulated. TIS21 was one of the up-regulated genes which were highly expressed in mature osteoclasts. To verify the cDNA microarray results, we carried out RT-PCR and real-time RT-PCR for the TIS21 gene. The TIS21 mRNA level was higher in differentiated-osteoclasts when compared to undifferentiated bone-marrow macrophages. Furthermore, the treatment with $1\;{\mu}M$ of a TIS21 antisense oligonucleotide reduced the formation of osteoclasts from the bone-marrow-precursor cells by ~30%. These results provide evidence for the potential role of TIS21 in the differentiation of osteoclasts. Anticarcinogenic Effect and Modification of Cytochrome P450 2E1 by Dietary Garlic Powder in Diethylnitrosamine-Initiated Rat Hepatocarcinogenesis Park, Kyung-Ae;Kweon, Sang-Hui;Choi, Hay-Mie 615 The purpose of this study was to determine the effects of dietary garlic powder on diethylnitrosamine (DEN)-induced hepatocarcinogenesis and cytochrome P450 (CYP) enzymes in weaning male Sprague-Dawley rats by using the medium-term bioassay system of Ito et al. The rats were fed diets that contained 0, 0.5, 2.0 or 5.0% garlic powder for 8 weeks, beginning the diets with the intraperitoneal (i.p.) injection of DEN. The areas of placental glutathione S-transferase (GST-P) positive foci, an effective marker for DEN-initiated lesions, were significantly decreased in the rats that were fed garlic-powder diets; the numbers were significantly decreased only in the 2.0 and 5.0% garlic-powder diets. The p-nitrophenol hydroxylase (PNPH) activities and protein levels of CYP 2E1 in the hepatic microsomes of the rats that were fed the 2.0 and 5.0% garlic powder diet were much lower than those of the basal-diet groups. Pentoxyresorufin O-dealkylase (PROD) activity and CYP 2B1 protein level were not influenced by the garlic-powder diets and carcinogen treatment. Therefore, the suppression of CYP 2E1 by garlic in the diet might influence the formation of preneoplastic foci during hepatocarcinogenesis in rats that are initiated with DEN. Identification of DC21 as a Novel Target Gene Counter-regulated by IL-12 and IL-4 Kong, Kyoung-Ah;Jang, Ji-Young;Lee, Choong-Eun 623 The Th1 vs. Th2 balance is critical for the maintenance of immune homeostasis. 
Therefore, the genes that are selectively-regulated by the Th1 and Th2 cytokines are likely to play an important role in the Th1 and Th2 immune responses. In order to search for and identify the novel target genes that are differentially regulated by the Th1/Th2 cytokines, the human PBMC mRNAs differentially expressed upon the stimulation with IL-4 or IL-12, were screened by employing the differential display-polymerase chain reaction. Among a number of clones selected, DC21 was identified as a novel target gene that is regulated by IL-4 and IL-12. The DC21 gene expression was up-regulated either by IL-4 or IL-12, yet counter-regulated by co-treatment with IL-4 and IL-12. DC21 is a dendritic cell protein with an unknown function. The sequence analysis and conserved-domain search revealed that it has two AU-rich motifs in the 3'UTR, which is a target site for the regulation of mRNA stability by cytokines, and that it belongs to the N-acetyltransferase family. The induction of DC21 by IL-12 peaked around 8-12 h, and lasted until 24 h. LY294002 and SB203580 significantly suppressed the IL-12-induced DC21 gene expression, which implies that PI3K and p38/JNK are involved in the IL-12 signal transduction pathway that leads to the DC21 expression. Furthermore, tissue blot data indicated that DC21 is highly expressed in tissues with specialized-resident macrophages, such as the lung, liver, kidney, and placenta. Together, these data suggest a possible role for DC21 in the differentiation and maturation of dendritic cells regulated by IL-4 and IL-12. The Ring-H2 Finger Motif of CKBBP1/SAG Is Necessary for Interaction with Protein Kinase CKII and Optimal Cell Proliferation Kim, Yun-Sook;Ha, Kwon-Soo;Kim, Young-Ho;Bae, Young-Seuk 629 Protein kinase CKII (CKII) is required for progression through the cell division cycle. We recently reported that the $\beta$ subunit of protein kinase CKII ($CKII{\beta}$) associates with CKBBP1 that contains the Ring-H2 finger motif in the yeast two-hybrid system. We demonstrate here that the Ring-H2 finger-disrupted mutant of CKBBP1 does not interact with purified $CKII{\beta}$ in vitro, which shows that the Ring-H2 finger motif is critical for direct interaction with $CKII{\beta}$. The CKII holoenzyme is efficiently co-precipitated with the wild-type CKBBP1, but not with the Ring-H2 finger-disrupted CKBBP1, from whole cell extracts when epitope-tagged CKBBP1 is transiently expressed in HeLa cells. Disruption of the Ring-H2 finger motif does not affect the cellular localization of CKBBP1 in HeLa cells. The increased expression of either the wild-type CKBBP1 or Ring-H2 finger-disrupted CKBBP1 does not modulate the protein or the activity levels of CKII in HeLa cells. However, the stable expression of Ring-H2 finger-disrupted CKBBP1 in HeLa cells suppresses cell proliferation and causes the accumulation of the G1/G0 peak of the cell cycle. The Ring-H2 finger motif is required for maximal CKBBP1 phosphorylation by CKII, suggesting that the stable binding of CKBBP1 to CKII is necessary for its efficient phosphorylation. Taken together, these results suggest that the complex formation of $CKII{\beta}$ with CKBBP1 and/or CKII-mediated CKBBP1 phosphorylation is important for the G1/S phase transition of the cell cycle. 
Identification of Bacteriophage K11 Genomic Promoters for K11 RNA Polymerase Han, Kyung-Goo;Kim, Dong-Hee;Junn, Eun-Sung;Lee, Sang-Soo;Kang, Chang-Won 637 Only one natural promoter that interacts with bacteriophage K11 RNA polymerase has so far been identified. To identify more, in the present study restriction fragments of the phage genome were individually assayed for transcription activity in vitro. The K11 genome was digested with two 4-bp-recognizing restriction enzymes, and the fragments cloned in pUC119 were assayed with purified K11 RNA polymerase. Eight K11 promoter-bearing fragments were isolated and sequenced. We report that the nine K11 promoter sequences (including the one previously identified) were highly homologous from -17 to +4, relative to the initiation site at +1. Interestingly, five had -10G and -8A, while the other four had -10A and -8C. The consensus sequences with the natural -10G/-8A and -10A/-8C, and their variants with -10G/-8C and -10A/-8A, showed nearly equal transcription activity, suggesting residues at -10 and -8 do not regulate promoter activity. Using hybridization methods, physical positions of the cloned promoter-bearing sequences were mapped on SalI-and KpnI-restriction maps of the K11 genome. The flanking sequences of six cloned K11 promoters were found to be orthologous with T7 or T3 genomic sequences.
Search found 15965 matches

The table above shows the cancellation fee schedule that a travel agency uses to determine the fee charged to a tourist
Missing the table
Forum: Data Sufficiency

A club sold an average (arithmetic mean) of 92 raffle tickets per member. Among the female members, the average number sold was 84, and among the male members, the average number sold was 96. What was the ratio of the number of male members to the number of female members in the club? (A) 1 : 1 (B)...
Forum: Problem Solving

What is the value of the positive integer m ?
(1) When m is divided by 6, the remainder is 3. (2) When 15 is divided by m, the remainder is 6. OA B Source: GMAT Prep
When it comes to remainders, we have a nice rule that says: If N divided by D, leaves remainder R, then the possible values of N are ...

Greg assembles units of a certain product at a factory. Each day he is paid $2.00 per unit for the first 40 units that he assembles and $2.50 for each additional unit that he assembles that day. If Greg assembled at least 30 units on each of two days and was paid a total of $180.00 for assembling u...

A sequence of numbers \(a_1, a_2, a_3,\ldots\) is defined as follows: \(a_1=3, a_2=5,\) and every term in the sequence after \(a_2\) is the product of all terms in the sequence preceding it, e.g., \(a_3=(a_1)(a_2)\) and \(a_4=(a_1)(a_2)(a_3).\) If \(a_n=t\) and \(n>2,\) what is the value of \(a_{n+2}\)...

In a 200 member association consisting of men and women, exactly 20% of men and exactly 25% of women are homeowners. What is the least number of members who are homeowners? A. 49 B. 47 C. 45 D. 43 E. 41 Answer: E Source: Official Guide
In order to minimize the number of homeowners, we must MAXIMIZ...

A pharmaceutical company received \(\$3\) million in royalties on the first \(\$20\) million in sales of the generic equivalent of one of its products and then \(\$9\) million in royalties on the next \(\$108\) million in sales. By approximately what percent did the ratio of royalties to sales decr...

Source: Official Guide
The average salary of 15 people in the shipping department at a certain firm is $20,000. The salary of 5 of the employees is $25,000 each and the salary of 4 of the employees is $16,000 each. What is the average salary of the remaining employees? A. $19,250 B. $18,500 C. $18,...

If z is an integer, is z even?
(1) z/2 is even. (2) 3z is even. OA D Source: Manhattan Prep
Target question: Is z an even integer? Aside: Integer n is even if we can express n as n = 2k for some integer k. Statement 1: z/2 is an even integer. This means z/2 = 2k for some integer k. Multiply both sid...

If d is the standard deviation of x, y, and z, what is the standard deviation of x + 5, y + 5, z + 5 ? A. d B. 3d C. 15d D. d + 5 E. d + 15 OA A Source: GMAT Prep
Adding the same value to each value in a set does not change the standard deviation of the set. So, the standard deviation of {x + 5, y + 5...

In planning for a trip, Joan estimated both the distance of the trip, in miles, and her average speed, in miles per hour. She accurately divided her estimated distance by her estimated average speed to obtain an estimate for the time, in hours, that the trip would take. Was her estimate within 0.5 ...

Source: Official Guide
Al and Ben are drivers for SD Trucking Company. One snowy day, Ben left SD at 8:00 a.m. heading east and Al left SD at 11:00 a.m. heading west. At a particular time later that day, the dispatcher retrieved data from SD's vehicle tracking system. The data showed that, up to that time,...

What is the probability of rolling three six-sided dice, and getting a different number on each die? A. 1/12 B. 1/3 C. 4/9 D. 5/9 E. 7/18 OA D Source: Magoosh
P(getting a different number on each die) = P(1st roll is ANY number AND 2nd roll is different from 1st roll AND 3rd roll is different from ...

How many men are in a certain company's vanpool program?
(1) The ratio of men to women in the program is 3 to 2. (2) The men and women in the program fill 6 vans. Answer: E Source: Official Guide
Target question: How many men are in a certain company's vanpool program? Statement 1: The ratio of m...

What is the value of the positive integer \(m?\)
(1) When \(m\) is divided by \(6,\) the remainder is \(3.\) (2) When \(15\) is divided by \(m,\) the remainder is \(6.\) Answer: B Source: GMAT Prep
When it comes to remainders, we have a nice rule that says: If N divided by D, leaves remainder R, th...
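The excerpt above breaks off mid-rule; for completeness, here is one standard way to finish the reasoning for this question, consistent with the stated answer of B and using only the numbers given in the question:

Statement (1): \(m = 6q + 3\) for some non-negative integer \(q\), so \(m \in \{3, 9, 15, 21, \ldots\}\) — more than one possible value, not sufficient.
Statement (2): \(15 = qm + 6\) with \(0 \le 6 < m\), so \(m > 6\) and \(m\) divides \(15 - 6 = 9\); the only divisor of 9 greater than 6 is \(m = 9\) — sufficient.
Hence the answer is B.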
Can information and communication technology and institutional quality help mitigate climate change in E7 economies? An environmental Kuznets curve extension Bright Akwasi Gyamfi ORCID: orcid.org/0000-0002-7567-98856, Asiedu B. Ampomah1, Festus V. Bekun ORCID: orcid.org/0000-0003-4948-69052,3,4 & Simplice A. Asongu5 Understanding the role of information communication and technology (ICT) in environmental issues stemming from extensive energy consumption and carbon dioxide emission in the process of economic development is worthwhile both from policy and scholarly fronts. Motivated on this premise, the study contributes to the rising studies associated with the roles of economic growth, institutional quality and information and communication technology (ICT) have on CO2 emission in the framework of the 21st Conference of the Parties (COP21) on climate convention in Paris. Obtaining data from the emerging industrialized seven (E7) economies (China, India, Indonesia, Russia, Mexico, Brazil and Turkey) covering annual frequency from 1995 to 2016 for our analysis achieved significant outcome. From the empirical analysis, economic globalization and renewable energy consumption both reduce CO2 emissions while ICT, institutional quality and fossil fuel contribute to the degradation of the environment. This study affirms the presence of an environmental Kuznets curve (EKC) phenomenon which shows an invented U-shaped curve within the E7 economies. On the causality front, both income and its square have a feedback causal relationship with carbon emissions while economic globalization, institutional quality, ICT and clean energy all have a one-way directional causal relationship with CO2 emissions. Conclusively, the need to reduce environmental degradation activities should be pursued by the blocs such as tree planting activities to mitigate the effect of deforestation. Furthermore, the bloc should shift from the use of fossil-fuel and leverage on ICT to enhance the use of clean energy which is environmentally friendly. Environmental degradation is an urgent threat that transcends the state boundaries (Benzie and Persson 2019). A number of countries have encountered problems with regard to natural disasters which over recent years have affected their economic well-being and quality of lives. One of the arguments for environmental degradation is that it is caused by CO2 emission anthropogenic practices (Rogelj and Schleussner 2019). In particular, CO2 emissions from energy utilized in the developing world surpassed that of developed nations after the 1990s (Kasman and Duman 2015). Environmental challenges require nations all over the globe to be more open to the risks of environmental warming because of the Paris Agreement of 2015 (COP21). About half a decade ago, the United Nations (UN) launched the Sustainable Development Goals (SDGs) to deal with essential challenges such as to achieve decreasing degradation (SDG 12 and 13) and growing economic and social prosperity, making harmony among countries, and coping with global ecological decline. Ecological pollution has hit a significant stage and hence, it has become essential to learn how ecological factors can influence economic development (Bakhsh et al. 2017). Even when trade and industrial development are significant targets for all countries, advanced states are particularly anxious about the damage to the ecology (Raworth 2017). 
From a different point of view, emerging economies often disregarded these crucial problems in coping with economic development. As a consequence, the individuals who are the weakest and have no desire to get involved will struggle much. As a result, foreign collaboration on many of these problems will best be fulfilled as a gateway to industrialized economies and developing sustainably (Benzie and Persson 2019). In the current period, the effect of economic development on ecological deterioration has been investigated in various analyses and frequently linked to the theory of the EKC (Ozokcu et al. 2017; Adebayo and Kirikkaleli 2021; Adebayo et al. 2022). Essentially, the EKC claims that economic development steadily leads to a certain ecological deterioration until a turning point threshold where the attendant economic development engenders an opposite tendency on ecological deterioration, also known as an inverted U-shaped curve (Grossman et al. 1991). Carbon emission in a country during its transition years is directly tied to economic development. In the long term, economic development of the economy would have a beneficial effect on climate until the nation hits a reasonable amount of real Gross Domestic Product (GDP) (Grossman et al. 1991; Shafik 1994; Bekun 2022). For example, a recent study found that CO2 emission, energy intensity, and income are non-linearly connected among countries constituting the Association of Southeast Asian Nations (ASEAN) (Heidari et al. 2015). But these studies show that places with large income such as Singapore will reduce CO2 emission. Different nations which have marginally higher income including Thailand, Indonesia as well as Philippines, usually experience slower economic development due to their higher CO2 emission levels. Global trade is a channel that plays a crucial role in the development of nations' ecosystems. According to Shahbaz et al. (2018a, b, c), the advancement of the finance industry would improve economic development by growing aggregate demand to enterprises and companies and redistributing monetary capacity to more productive organizations, and thus raise energy use. Investment from overseas and economic globalization are largely based on modern source of internet usage. The internet has become a central catalyst in the innovation turn for environmental sustainability and there is an evolving stream of literature maintaining that CO2 emission can be minimized via the ICT sector (Elumban et al. 2016; Asongu et al. 2018). CO2 emission deteriorates the available environment due to the emissions created during the manufacturing process. Moreover, advancement in ICT entails substantial energy usage. As ICT is associated with incremental innovative prospects, it is making use of great energy which contributes a lot in terms of CO2 emission. ICT is ruling in our modern world today and it plays an essential role in the quality of institutions. Information technology has been brought into a new paradigm. It is being used for social interaction and it is creating change in the quality of institutions. ICT facilitates trade openness especially when such ICT is very apparent in economic activities (Cortes et al. 2011). Activities like trade openness or liberalization rely mostly on ICT which is modern in nature. Since the advent of the internet, internet use has played a crucial part in "international cooperation" finance, socio-economic growth, foreign exchange, organizational infrastructure and productivity. 
It has been imperative for, inter alia, employment development, poverty reduction and energy consumption. Building on the evidence from the existing literature, it is reasonable to posit that with ICT, CO2 emissions in other sectors of the economy would decrease. These include, inter alia, the energy, agriculture, power, transportation, and financial sectors. Rendering the internet more efficient means reducing the CO2 emissions caused by its associated energy consumption. Efficient measures therefore need to be taken to control CO2 emissions, which is consistent with engaging a labor force that uses modern technology whose emissions would otherwise weigh on the economy. Institutional quality also has a significant impact on CO2 emissions. Inefficient financing from weak institutions in the private sector leads to, inter alia, corruption, weak protocols and a poor bureaucratic process. Institutional quality has recently drawn the attention of researchers and scientists with regard to the environment (Godil et al. 2020). Effective and impartial government institutions may play major roles in ensuring good cooperation among market participants. The rule of law, therefore, is currently a critical factor in addressing environmental problems. Consequently, a strong rule of law is vital to enforce CO2 emission control procedures as well as to oversee how companies comply with them. Conversely, where institutional quality is weak, companies are likely to ignore environmental externalities and disregard the steps essential for the control of CO2 emissions. Two schools of thought have emerged from the literature on the linkage between CO2 emissions and ICT. While one strand of the literature argues that ICT increases CO2 emissions, another group of researchers argues that ICT helps to reduce CO2 emissions (Belkhir and Elmeligi 2018). The internet has become the dominant technology platform for many different purposes. The way most people use and rely on the internet has become a sizable energy and global warming threat. At the same time, the internet has also become the fundamental medium by which people maintain their livelihoods every day (Islam et al. 2020). Financial industries should incorporate environmentally friendly technologies in order to minimize the environmental degradation that occurs during production (Amri et al. 2018). The closest study in the literature to the present exposition is Asongu et al. (2018), which assessed how enhancing ICT affects CO2 emissions in sub-Saharan Africa (SSA) and concluded that CO2 emissions can be reduced by ICT once certain critical thresholds of ICT penetration have been reached. Departing from that study, this analysis explores the effect of ICT, institutional quality and income level on CO2 emissions. To make this assessment, we use the Augmented Mean Group (AMG), Common Correlated Effect Mean Group (CCEMG) and Driscoll–Kraay (DK) OLS estimators over the period 1995 to 2016, within the remit of the E7.Footnote 1 In 2017, all of the countries highlighted in the E7 bloc were within the top 25 nations accounting for global CO2 emissions. China was reported to have produced 9898.3 million metric tons, making it the first on the list. In comparison to the other countries, India manufactured 2466.8 million metric tons of pollutants, which placed it third, whereas Russia produced 1692.8 million metric tons, placing it in fourth position.
Mexico was 11th, having produced 490.3 million metric tons of pollutants, and Indonesia was 12th, having realized 486.8 million metric tons of pollutants. Additionally, Brazil was ranked 13th in terms of 476.1 million metric tons of pollutants. Meanwhile, in relation to greenhouse gas emissions, the Turkish Republic, which was the last of the E7 nations to be added to the list, was ranked at number 15th, with CO2 emissions amounting to 447.9 million metric tons (see www.usato.com). With these E7 CO2 emissions figures, we may infer that the E7 economies are responsible for generating a significant amount of CO2 because of their commercial development. Beyond the extension of Asongu et al. (2018) as clarified above, our analysis adds to the existing studies discussed, on the following fronts: To our best of knowledge, this study examines the connection among ICT, economic globalization, institutional quality and CO2 emissions for the emerging seven (E7) economies: Additionally, most previous studies on the association between internet usage and carbon dioxide emissions have been based on panel data models that have failed to consider the issue of cross-sectional dependence and heterogeneity which are relevant in providing robust findings [see Lu (2018) for 12 Asian countries, Amri (2018) for Tunisia and Nguyen et al. (2020) for G-20 economies]. The present analysis accounts for accurate and unbiased effects of cross-sectional dependence and heterogeneity which require the employment of a more advanced group of econometric tests. We extend our analysis to examine if an EKC is apparent among E7 economies within the framework of an income-carbon emission relationship while accounting for combined effect of institutional quality and economic globalization in mitigating CO2 emission that have received little or no documentation in the extant literature in the context of E7. The authors employed the Augmented Mean Group (AMG) and Common Correlated Effect Mean Group (CCEMG) which provide more robust outcomes for policy recommendation. The remainder of this study is organized as: this introduction is followed by a literature review in Sect.2. The analysis of the econometric methods and description are outlined in Sect. 3. Section 4 focuses on the empirical result and discussion. The final section presents the conclusion and recommendations. Economic influence of ICTs has been argued about throughout the 1960s. ICT has had a huge positive impact on economic and social issues around the globe. The relationship regarding ICT and economic development is considered optimistic (Engelbrecht and Xayavong 2007; Kretschmer 2012; Asongu et al. 2016, 2017; Sinha 2018; Sinha et al. 2020; Chien et al. 2021). In recent years, the controversy on climate conservation has escalated significantly. Several studies suggest that substantial reliance on ICT can affect the environment in the long term. Emeri (2019) conducted a study in Tunisia using ICT, total factor efficiency and CO2 variables. The study found that ICT had great impact on CO2 emission used to proxy for pollution. Their study rejected the EKC assumption. Also, Nguyen (2020) conducted a study on the role of ICT and invention in deriving CO2 emission and commercial progress in designated G-20 countries. In the first place, there were only five obstacles that hinder CO2 emission including increasing oil costs, FDI, improved infrastructure, and spending on innovation. 
Secondly, ICT, financial sector progress, and economic development were all positive contributing forces for the economy. It appears that their observations show the invalidity of the omission of the EKC in their study. However, their research empirically shows that work to regulate use of oil and ecologically responsible refining, distribution, and steps like processing and manufacturing can mitigate emissions in these markets. Another study by Amri (2018) examined the linkage between CO2 emission, income, ICT, and trade in Tunisia. The findings show that the EKC assumption was dismissed as higher long-term total factor productivity (TFP) coefficients were found opposed to the short TFP coefficients. Additionally, the analysis revealed that ICT has little or no effect on carbon emission as an indicator of pollutants. Tunisian decision-makers must ensure both a continuous improvement in their total factor efficiency as well as an expansion in ICT. A study by Godil et al. (2020) confirmed that CO2 emission in the long run affects institutional quality and GDP positively impacted CO2 emission. Relatively, FDI and ICT showed a negative impact on CO2 emissions. Furthermore, the same study depicted that financial development and CO2 negatively impacted CO2 emission. Nonetheless, Magazzino (2021) tackled the connection among ICT, power ingestion air pollution and economic growth in EU countries. The study found that there was a unidirectional association between ICT and energy consumption. In addition, Park and Baloch (2018) found that electricity consumption had substantial effect on CO2 emission and internet usage in EU countries. The study suggested that internet used is increasing the risk to ecological devolvement. Seemingly, Raheem et al. (2020) examined ICT and CO2 emission. The result depict that ICT significantly impacts CO2 emission in the long run and short run. Additionally, Zafar et al. (2022) investigated the potential long-term effects of information and communications technology (ICT) and education on environmental quality from 1990 to 2018. The results obtained by the continuously updated and fully modified (Cup-FM) test indicate that economic growth, education, and energy consumption stimulate carbon emission intensity in Asian countries. In the wake of the above-mentioned studies, several empirical examinations have been done utilizing both total energy and energy consumption. Amidst the above assertion, two concepts have emerged, notably: green ICT and electricity for ICT. Indeed, as indicated in Salahuddin et al. (2016), green ICT is defined as a state where production can be achieved via energy and environmental efficiency. This suggests that the introduction of new technologies into the energy system itself affects the sector and thus helps towards reaching alternative goals. In order to improve the environment, renewable energy could save cost. Despite the importance of ICT in environmental outcomes, empirical studies have shown that some ICTs engender positive effects than others. The theoretical linkage between ICT, energy and the environment appeared of high interest as early as the 1990s. To see what is little known in the subject of EKC studies, it is significant that most investigations from extant studies have focused on country-specific or industrial sectors (Sadorsky 2012). Salahuddin et al. (2016) have conducted a very extensive study on this subject. Asongu (2018) assesses the effects of internet in 44 countries in Africa. 
The outcome indicated ICT use reduces CO2 emissions. Majeed (2018) experientially tested the consequence of ICT know-how on emissions reduction in developing countries. The research analyzed 232 countries over the years 1980 to 2016. ICT favorably impacted developed countries; however, the impact did not favor developing countries. Moreover, Cho et al. (2007) investigated the straight and knock-on influence of ICT investment on fuel consumption utilizing oil price and electricity as additional variables in South Korea. The study indicates that investment in ICT could decrease energy used under certain circumstances. Further, Salauddin et al. (2015) reiterated that internet usage ensures institutional quality. Similarly, a study by Abid (2017) on institutional quality in 41 European countries asserted that institutional quality reduces CO2 emissions. Lau et al. (2014) stressed that institutional quality is crucial in minimizing CO2 emissions. Lv et al. (2018) stressed that, better institutional quality strategies ensure environmental quality. Amri (2018) found causality amid CO2 emissions, ICT, trade, wages, FDI and energy consumption in Tunisia. ARDL result gave no evidence in advocate of either enriching or controlling the effect of technology on environmental pollution. Lu (2018) also compared the effect of energy utilization, ICT and FDI on CO2 emissions for distinct twelve Asian countries. Shabani et al. (2019) found unidirectional causation from energy intake, ICT to CO2 emissions using a horizontal approach. Their finding was closely aligned with Arshad et al. (2020) who has analyzed the effect of trade, ICT, monetary progress, and energy consumption on greenhouse gases in fourteen Asia countries. The research concluded that ICT may have an adverse and significant influence on pollution. Khan et al. (2018) asked if the rate of ICT investment matters when controlling for CO2 emissions. Therefore, the authors selected the E7 countries they thought were important in the use of ICT in enhancing productivity of their economies as they constitute comparatively faster growing emerging countries. To authors' best of knowledge, this is the first study to examine the connection among ICT, monetary globalization, institutional quality and CO2 emissions for the emerging seven (E7) economies. Subsequently, our analysis to examine if an EKC is apparent among E7 economies within the framework of an income–carbon emission relationship. Lastly, we employed the AMG and CCEMG which provide more robust outcomes for policy recommendation. The study indicated that ICT is linked with formation of waste and CO2 emission. Data and methodology This study adopts a battery of econometrics techniques to empirically analyze data on a group of seven emerging economies (E7) that comprises China, India, Brazil, Mexico, Russia, Indonesia, and Turkey. The data are sourced from the World Bank's development indicators from 1995 to 2016. The choice of these coefficients is in accordance with the 2030 Sustainable Development Goals (SDGs). These countries share some common economic traits with their fast-growing emerging status which has translated to substantial implications on energy-related developments alongside economic expansion in recent times. To assess the impacts of ICT, energy consumption and the current level of institutional quality in line with the level of economic globalization on their environmental quality, we provide a model specification for the empirical study in logarithm form in Eq. 
1: $$\mathrm{Ln}{{\mathrm{CO}}_{2}}_{\mathrm{it}} = {\alpha }_{0}+{\alpha }_{1}{\mathrm{LnY}}_{\mathrm{it}}+{\alpha }_{2}\mathrm{Ln}{{Y}^{2}}_{\mathrm{it}}+{\alpha }_{3}{\mathrm{LnICT}}_{\mathrm{it}}+{\alpha }_{4}{\mathrm{LnINSQ}}_{\mathrm{it}}+{\alpha }_{5}{\mathrm{LnEG}}_{\mathrm{it}}+{\alpha }_{6}{\mathrm{LnR}}_{\mathrm{it}}+{\alpha }_{7}{\mathrm{LnFF}}_{\mathrm{it}}+{\varepsilon }_{\mathrm{it}}.$$ Data spanning from 1995 through 2016 were gathered from the World Bank (WDI 2020) and the KOF globalization index of Gygli et al. (2019). This study adopted the KOF globalization index of Gygli et al. (2019), as obtained from the KOF Swiss Economic Institute, to capture economic globalization. The KOF globalization index is gaining popularity in the empirical literature due to its broad scope in capturing globalization compared to narrower well-known approaches like the trade openness proxy, which mainly capitalizes on trade dynamics in contextualizing the globalization measurement (Shahbaz et al. 2018a; Wang et al. 2018; Le and Ozturk 2020). The full description of variables in Eq. 1 is presented with measurement scales and symbols in Table 1. Table 1 Description of variables There are other internet indicators, such as the mobile phone penetration rate and computer investment, inter alia. However, the present study utilized internet users (per 100 people) because a study conducted at Carnegie Mellon University found that the primary reasons people use the Internet are to have fun and to learn more about the things that they are interested in. Internet marketing is another function that both public and private entities put the Internet to use for. After the introduction of the personal computer, the Internet rapidly evolved into a tool for widespread communication, despite the fact that its initial purpose was to improve communication among the armed forces. Since its inception, the Internet has developed from its esoteric beginnings into a mainstream mode of communication. According to Miniwatts Marketing Group, there are over two billion people connected to the Internet at any given moment, which accounts for 34.3% of the total population of the world.Footnote 2 Varying degrees of relationship are expected among the variables in Eq. 1. Thus, we provide simple descriptive statistics and a correlation matrix to have a glimpse of the relationships that could exist between the variables under review, as reported in Tables 2 and 3. Table 2 Descriptive statistics of the variables under review Table 3 Correlation matrix As can be seen in Table 2, ICT has the highest mean of 17.58 persons per year, a minimum of 11.25 and a maximum of 21.03 persons per year. Next is CO2 emission, which has a mean of 13.58 metric tons per year, a minimum of 12.1 and a maximum of 16.15 metric tons per year. Income, on the other hand, has a mean of 8.5 million USD per year, a minimum of 6.514 million USD per year and a maximum of 9.55 million USD per year, while its square has a mean of 5.71 million USD per year, a minimum of 1.98 million USD per year and a maximum of 5.97 million USD per year. Economic globalization has a mean of 3.8% per year, a minimum of 2.75% per year and a maximum of 4.57% per year. Clean energy has a mean of 2.94 metric tons per year, a minimum of 1.2 metric tons per year and a maximum of 3.99 metric tons per year.
Fossil fuel, in turn, has a mean of 4.33 metric tons per year, a minimum of 3.94 metric tons per year and a maximum of 4.52 metric tons per year. Table 3 provides details of the correlation matrix, which shows that income and economic globalization have positive and significant correlations with CO2 emission, as do ICT, institutional quality, clean energy and fossil fuel. However, income has a positive correlation with ICT, clean energy and fossil fuel, but is negatively correlated with institutional quality and economic globalization. ICT, on the other hand, has a positive correlation with institutional quality, economic globalization and fossil fuel, but a negative correlation with clean energy. Institutional quality has a positive correlation with economic globalization but a negative correlation with both clean energy and fossil fuel. Economic globalization has a negative correlation with both clean energy and fossil fuel, while clean energy has a positive correlation with fossil fuel. However, concerns may arise about possible cross-sectional dependency (CD) across the individual units of the panel model, and it is highly imperative to carry out a test in this direction (De Hoyos and Sarafidis 2006; Dogan and Aslan 2017; Ozcan and Ozturk 2019). Hence, for confirmation purposes, we report cross-sectional dependency test results in Table 4 following the application of the Breusch and Pagan (1980) LM test and the Pesaran (2015) LM test. Table 4 Cross-sectional dependency (CD) test results From Table 4, all the reported tests affirm the presence of cross-sectional dependence following the statistical significance of the test statistics owing to the rejection of the null hypothesis of no cross-sectional dependence, thus indicating the need for caution in selecting appropriate methodologies for both the intended unit root tests and the cointegration techniques (Bilgili et al. 2017; Shahbaz et al. 2018b). Following these results, conventional panel unit root tests, as seen in some extant studies, could pave the way for misleading conclusions on the unit root status of the variables and the true nature of cointegrating relationships for the panel study (Onifade et al. 2021b). Hence, to circumvent the methodological flaws associated with using conventional panel unit root tests in the presence of cross-sectional dependence, we applied the panel IPS and CIPS tests of Pesaran (2007) for the unit root analysis. The results of the unit root tests reported in Table 5 show that the understudied variables are integrated of first order I(1). Table 5 Panel IPS and CIPS unit root test Having established the order of integration, we applied the Westerlund (2007) cointegration technique, which is founded on an error correction mechanism (ECM) under the assumption that the variables are integrated of first order, to establish a cointegration relationship for the panel study. The error correction specification of the estimation follows the expression in Eq. 2: $${\Delta Y}_{it}={\pi }_{i}^{\prime}{d}_{t}+{\theta }_{i}\left({Y}_{it-1}-{\gamma }_{i}^{*}{X}_{it-1}\right)+{\sum }_{j=1}^{m}{\theta }_{ij}{\Delta Y}_{it-j}+{\sum }_{j=0}^{m}{\delta }_{ij}{\Delta X}_{it-j}+{\varepsilon }_{it}.$$ From Eq. 2, \({\pi }_{i}^{\prime}=({\pi }_{1i},{\pi }_{2i})\) is the vector of parameters of the deterministic components \({d}_{t}=(1,t)^{\prime}\), while \({\theta }_{i}\) is the error correction (adjustment) parameter.
To identify cointegration existence, Westerlund (2007) approach produces four major statistics based on the least square estimation and corresponding significance of the adjustment term \({\theta }_{i}\) of the ECM model in Eq. 2 and these statistics can be categorized under two major subdivisions namely the group statistics and the panel statistics. The following are the test statistics for the Westerlund cointegration: $${G}_{t}= \frac{1}{N}\sum_{i-1}^{N}\frac{{\acute{\alpha}}_{i}}{\mathrm{SE}({\acute{\alpha}}_{i})},$$ $${G}_{\alpha }= \frac{1}{N}\sum_{i-1}^{N}\frac{T{\acute{\alpha}}_{i}}{{\acute{\alpha}}_{i}(1)},$$ $${P}_{\mathrm{T}}= \frac{{\acute{\alpha}} }{\mathrm{SE}({\acute{\alpha}})},$$ $${P}_{\alpha }= T{\acute{\alpha}}_{i}.$$ The group means statistics, comprising \({G}_{\mathrm{a}}\) and \({G}_{\mathrm{t}}\), are shown in Eqs. 3 and 4. Panel statistics, comprising \({P}_{a}\) and \({P}_{t}\), are represented by Eqs. 5 and 6, where variables remained as earlier defined. The application of this test has been substantially reported in the literature as it is designed to accommodate cross-sectional dependency in a panel study (Shahbaz et al. 2018b; Le and Ozturk 2020; Alola et al. 2019; Nathaniel et al. 2020). The Westerlund (2007) cointegration test outputs in Table 6 provide enough evidence of cointegration among the variables while taking into cognizance the concerns about cross-sectional dependence as the probability values for the rejection of a null of an absence of a cointegration relationship is significant at the 1% level for the group statistics and the 5% significance level for the panel statistics, respectively. Table 6 Westerlund cointegration test Panel estimations Following the circumstances surrounding the results in Sect. 4.2, the panel estimators for the study should consequently take into cognizance the concerns on cross-sectional dependence. Hence, we applied three robust techniques that are designed to accommodate the latter concern for the study. The Augmented Mean Group (AMG) heterogeneous panel estimator of Eberhardt and Bond (2009) and Eberhardt and Teal (2010), and the advanced Common Correlated Effect Mean Group (CCEMG) panel estimator of Kapetanios et al. (2011) as initially developed by Pesaran (2006) were utilized in the study following Eqs. 7 and 8, respectively: $${\Delta Y}_{it}={{\alpha }}_{i}{+{\upbeta }_{i}\Delta X}_{it}+{\sum }_{t=1}^{T}{\pi }_{t}{D}_{t}+{{\varphi }}_{i}{\mathrm{UCF}}_{t}+{\mu }_{it},$$ $${Y}_{it}={{\alpha }}_{i}{+{\upbeta }_{i}X}_{it}+{\gamma }_{i}{{Y}^{*}}_{it}+{\delta }_{i}{{X}^{*}}_{it}+{\uptheta }_{i}{\mathrm{UCF}}_{t}+{\mu }_{it}.$$ From the CCEMG expression in Eq. 8, Y* and X* represent the mean values of the variables Yit and Xit alongside the unobserved common effects while D is a time variant dummy variable in Eq. 7. The OLS estimation of the differenced Eq. 7 is utilized to generate the AMG estimator as given in Eq. 9, where \({\varphi }_{i}\) denotes the estimated slope parameters of the Xit variable in Eq. 7: $$\mathrm{AMG}=\frac{1}{N}{\sum }_{i=1}^{N}{\varphi }_{i}.$$ We also reported the linear regression estimates with Driscoll–Kraay (DK) standard errors while conducting a robustness check for multicollinearity through the variance inflation factor (VIF) as reported in Table 9 in Appendix. 
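Before discussing the Table 7 estimates, the averaging logic of Eq. (9) can be made concrete with a short sketch: estimate the slope coefficients country by country and then average them across the N panel units. This is only an illustration of the averaging step, not the authors' exact AMG or CCEMG routine (those additionally augment each unit regression with a common dynamic process or cross-sectional averages); the file and column names below are hypothetical.

```python
# Minimal mean-group sketch of Eq. (9): unit-by-unit OLS, then average the slopes.
import numpy as np
import pandas as pd

def mean_group_slopes(df: pd.DataFrame, y: str, xs: list, unit: str = "country") -> pd.Series:
    """OLS per panel unit, then a simple average of the slope vectors across units."""
    slopes = []
    for _, g in df.groupby(unit):
        # Design matrix: unit-specific intercept plus the regressors in xs
        X = np.column_stack([np.ones(len(g))] + [g[x].to_numpy(dtype=float) for x in xs])
        beta, *_ = np.linalg.lstsq(X, g[y].to_numpy(dtype=float), rcond=None)
        slopes.append(beta[1:])                      # drop the unit-specific intercept
    return pd.Series(np.mean(slopes, axis=0), index=xs)  # (1/N) * sum of unit slopes

# Hypothetical usage with the logged variables of Eq. (1):
# panel = pd.read_csv("e7_panel.csv")  # columns: country, year, lnCO2, lnY, lnY2, lnICT, lnINSQ, lnEG, lnR, lnFF
# print(mean_group_slopes(panel, "lnCO2", ["lnY", "lnY2", "lnICT", "lnINSQ", "lnEG", "lnR", "lnFF"]))
```

In practice the full AMG adds the common dynamic process recovered from year dummies in a pooled first-difference regression, and the CCEMG adds cross-sectional averages of the variables, before the unit slopes are averaged as above.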
A combination of these approaches has been noted to be very efficient in producing robust estimates especially when cross-sectional dependence issues have to be accommodated in a panel analysis (Hoechle 2007; Zhang and Lin 2012; Le and Ozturk 2020; Adedoyin et al. 2021; Appiah et al. 2022; Agboola et al. 2022; Bamidele et al. 2022). Table 7 presents the coefficients from the estimators. Table 7 AMG, CCEMG and Driscoll–Kraay result From Table 7, the adopted estimators namely the AMG, the CCEMG, and the Driscoll–Kraay approach produced relatively close results on the average, with little difference that are only observed in terms of the magnitudes of estimated coefficients and their corresponding levels of statistical significance. Both economic globalization and renewable energy consumption (Table 7) were significant for achieving positive result on the quest for cleaner environment among the E7 economies as these two variables have a significant negative impact on the level of carbon emission in these economies. The current findings from this study complement the results from contemporary studies on the possible ameliorating impact of globalization on carbon emission among countries (Zaidi et al. 2019; Saud et al. 2020; Bekun et al. 2021a; Gyamfi et al. 2021a; Onifade et al. 2021a, b; Ohajionu et al. 2021; Steve et al. 2022). Increasing renewable energy consumption is a well-known propelling force for a quality environment and it is worthy to note that economic globalization is expected to be an influential driver of this force among the understudied E7 economies. In addition, in line with a priori expectation, the empirical results also provide evidence that the level of non-renewable energy consumption on the other hand has a positive and significant impact on CO2 emission for the panel of the E7 economies. However, the results in Table 7 again show that the level of institutional quality plays a significant role in exacerbating carbon emission among the E7 economies given that the institutional quality proxy (I) emerged with a positive significant coefficient. In a nutshell, this result calls for more attention on the crucial roles of transparency, accountability, and the fight against corruption in the public sector in attaining a desirable sustainable environment. It would require not just economic globalization alone, but also a better institutional quality level to push for an environmentally friendly agenda while enhancing sustainable income growth that can foster renewable energy consumption among the E7 economies. Furthermore, on the income aspects of the study presented in Table 7 above, the results reflect a cushioning role of income growth on carbon emission among the E7 economies. As the impacts of income level (Y), and growth in income level (Y2) are positive and negative, respectively, the empirical findings support the inverted U-shape assumption that substantiates the validity of the environmental Kuznets curve (EKC) for the E7 economies. Economic expansion that translates to higher income levels among these nations is expected to assist in pushing these economies towards environmental sustainability. This affirms the findings of Gyamfi et al. 2021b, c and Bekun et al. 2021b, c. Internet use is ideal for ecological analysis since it is a reliable predictor in environmental studies. According to this analysis, the utilization of internet might emit carbon dioxide in the environment as a rise in internet consumption increases CO2 emission within the E7. 
This result is justifiable considering that there is a wide usage of web technology in E7 economies. This web of technological equipment used within the E7 nations consumes heavy energy which is largely not environmentally friendly. Another possible explanation is that E7 includes countries that primarily have most use of e-service and innovations. Owing to excessive usage of the internet, the resources used would be inappropriate. This result agrees with other research undertaken to shape policy guidelines of different studies related to economic and financial policies. Results are not in line with research conducted by Zhang and Liu (2015) for China, Salahuddin and Alam (2015) for Australia, Sarpong et al. (2020) for Southern Africa region and Gyamfi et al. (2021d) for E7 countries. Granger causality The estimates from the combined panel estimators that is applied in the study may not necessarily reflect the direction of causality among the variables, thus, we provide a causality test report for the variables in the present study following the importance of this test in various empirical studies (Saint Akadiri et al. 2019; Onifade et al. 2020; Alola and Kirikkaleli 2019; Çoban et al. 2020). We report the Dumitrescu and Hurlin (2012) Granger causality test for the study: $${Y}_{it}={\delta }_{i}+{\sum }_{k=1}^{p}{\beta }_{1ik}{Y}_{i,t-k}+{\sum }_{k=1}^{p}{\beta }_{2ik}{X}_{i,t-k}+{\varepsilon }_{it}.$$ From Eq. 10, \({\beta }_{2ik}\) and \({\beta }_{1ik}\) denote the regression coefficients and the autoregressive parameters for individual panel variable i at time t, respectively. Following the assumption of a balance panel of observation for the variable \({Y}_{it}\) and \({X}_{it}\) in the study, the null hypothesis of absence of causality among variables was tested against the alternative hypothesis of heterogeneous causality in the panel observation. The Granger causality results is provided in Table 8 while an annotated diagrammatical representation of the overall empirical scheme, based on the adopted econometric outcomes is detailed out in Fig. 1 in Appendix. Table 8 Dumitrescu and Hurlin causality analysis Outcome from Table 8 shows that both income and its square have a feedback causal relationship with carbon emissions. Also, economic globalization, institutional quality, ICT and clean energy all have one-way directional causal relationship with carbon emission while there is no causal relationship between fossil fuel and carbon emission. Following the UN-SDG-13 crusade to reduce climate change impact, this study explores this topical issue by investigating the effect of ICT, institutional quality, economic globalization and renewable energy consumption in the conventional EKC setting for E7 economies from 1995 to 2016. This study leverages on second-generational modeling methodology that corrects for cross-sectional dependence and heterogeneity to achieve the soundness of empirical findings. To this end, we used Augmented Mean Group, Common Correlated Effects Mean Group estimator; Driscoll–Kraay and Dumitrescu and Hurlin causality tests. The Westerlund cointegration analysis affirms the existence of a long-run bond between the studied highlighted variables. That is, jointly, income level and its quadratic form, economic globalization, and institutional quality explain the extent of environmental degradation in E7 economies. This study result affirms the EKC phenomenon in E7. 
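As a complementary note to the EKC finding, the implied income turning point can be recovered directly from the quadratic specification in Eq. (1). This is a standard derivation stated with the coefficient names of Eq. (1), under the usual reading of the quadratic term as the square of log income; the numerical threshold itself is not reported here:

$$\frac{\partial \mathrm{Ln}{\mathrm{CO}_{2}}_{\mathrm{it}}}{\partial {\mathrm{LnY}}_{\mathrm{it}}}={\alpha }_{1}+2{\alpha }_{2}{\mathrm{LnY}}_{\mathrm{it}}=0\;\Rightarrow\;{\mathrm{LnY}}^{*}=-\frac{{\alpha }_{1}}{2{\alpha }_{2}},\qquad {Y}^{*}=\exp\left(-\frac{{\alpha }_{1}}{2{\alpha }_{2}}\right),$$

where the inverted U-shape requires \({\alpha }_{1}>0\) and \({\alpha }_{2}<0\), so that emissions rise with income below \(Y^{*}\) and fall beyond it.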
The plausible explanation for this finding resonates with the bloc as the emerging and industrialized economies where economic activities are operated without environmental sustainability in view. This suggests that emphasis is placed on economic expansion relative to the bloc quality of the environment. We also observed from the empirical results that fossil fuel-based energy also contributes to dampen the environment. Furthermore, the bloc shows that the institutional level is still not sufficient to spur a clean environment. The quality of intuitional and commitment in E7 economies are weak relative to her counterpart G7 economies where rule of law and other institutional apparatus are reinforced to maintain environmental sustainability. Interestingly, our study shows that economic globalization and renewable energy consumption improve the quality of the environment. This connotes that environmental consciousness is creeping into the blocs amidst a wave of global and economic interconnectedness. The need for a transition to renewables such as hydro energy, photovoltaic, biomass among others, which are known to be cleaner and ecosystem friendly, should be pursued in earnest. It was also found from the outcome that ICT usage contributes more to environmental degradation within the bloc. Policy direction This study further highlighted policy prescriptions in the light of the study's outcomes. The policy suggestion includes: The E7 economies are encouraged to pursue more commitment to build ICT infrastructure and technologies that engender clean production processes in the bloc. In order to accomplish clean-ICT development, E7 economies should concentrate on the most polluting industries such as travel, industry and buildings. In the field of manufacturing, ICT can be used to maximize capital usage in industrial development operations, conserve electricity, and boost efficiency. There should be a comprehensive plan for E7 to leverage on emerging technologies to improve quality of transportation. ICT could perhaps be utilized for economic and environmental purposes. The implication of the EKC in E7 means that the bloc needs to minimize environmental degradation on its trajectory for an enhanced average income level. Given that this bloc is still very much emerging on its growth path, there is need to fortify the institutional apparatus needed to enact effective environmental strategies and regulations to achieve environmental sustainability without compromise for economic development. The need for a transition to renewables is pertinent given the advantages of a cleaner environment. As such, there should be concerted efforts on the part of all stakeholders, government officials for a paradigm shift to clean energy technologies by substituting the bloc's share of her energy mix from conventional energy of fossil-fuel to clean energy sources. Conclusively, the need to reduce environmental degradation activities should be pursued by the bloc. Measures such as tree planting in mitigating the effect of deforestation can be considered, inter alia. Limitation and future recommendations This study investigated the applicability of the EKC phenomenon and ICT for the E7. However, due to the lack of data available, it is not possible to incorporate governance actions and traditional indicators into the CO2 emission equation for E7 currently. These variables may have varying degrees of influence on the environment and the economy. 
Cultural events and governance factors (as measured by political, socioeconomic, and economic data) have a great deal of potential to play a significant role in a country's total reserves, economic expansion, natural resources, financial deepening, technical development, and the effective operation of its human capital. As a suggestion for further studies, other scholars can extend the EKC argument by accounting for covariates such as population, urbanization in an asymmetric framework for other blocs like the Middle East and North Africa (MENA), sub-Saharan Africa, inter alia, using disaggregated data. The data for this present study are sourced from WDI (World Bank Open Data|Data) as outlined in the data section. E7 Countries: Group of seven industrialized economies that comprise Brazil, Russia, India, China, Indonesia, Mexico and Turkey, which are all mostly emerging and newly industrialized nations. https://www.bing.com/ck/a?!&&p=b6b733b87fe5594bec376ff3d27906e19332c9ef44855ee59a6bf9b3bc87de7bJmltdHM9MTY1NTM3MzAxOSZpZ3VpZD01MDQ1YzI4Yy1jNDlhLTQ5YzctYTJlMS01NTlkYzJkOWIzMTkmaW5zaWQ9NTE1NA&ptn=3&fclid=bb071c97-ed59-11ec-8d25-20ce53457b3d&u=a1aHR0cHM6Ly9ibG9ncy53b3JsZGJhbmsub3JnL2VuZHBvdmVydHlpbnNvdXRoYXNpYS93aHktaWN0LWluZm9ybWF0aW9uLWFuZC1jb21tdW5pY2F0aW9uLXRlY2hub2xvZ2llcy1hbmQtd2h5LW5vdw&ntb=1. Abid M (2017) Does economic, financial and institutional developments matter for environmental quality? A comparative analysis of EU and MEA countries. J Environ Manag 188:183–194. https://doi.org/10.1016/j.jenvman.2016.12.007 Adedoyin FF, Bein MA, Gyamfi BA, Bekun FV (2021) Does agricultural development induce environmental pollution in E7? A myth or reality. Environ Sci Pollut Res 28(31):41869–41880 Adebayo TS, Agyekum EB, Altuntaş M, Khudoyqulov S, Zawbaa HM, Kamel S (2022) Does information and communication technology impede environmental degradation? Fresh insights from non-parametric approaches. Heliyon 8(3):e09108 Adebayo TS, Kirikkaleli D (2021) Impact of renewable energy consumption, globalization, and technological innovation on environmental degradation in Japan: application of wavelet tools. Environ Dev Sustain 23(11):16057–16082 Agboola PO, Hossain M, Gyamfi BA, Bekun FV (2022) Environmental consequences of foreign direct investment influx and conventional energy consumption: evidence from dynamic ARDL simulation for Turkey. Environ Sci Pollut Res 23:1–14 Alola AA, Kirikkaleli D (2019) The nexus of environmental quality with renewable consumption, immigration, and healthcare in the US: wavelet and gradual-shift causality approaches. Environ Sci Pollut Res 26(34):35208–35217 Alola AA, Eluwole KK, Alola UV, Lasisi TT, Avci T (2019) Environmental quality and energy import dynamics. Manag Environ Qual 23:78 Amri et al (2019) ICT, total factor productivity, and carbon dioxide emissions in Tunisia. Technol Forecast Soc Change 146:212–217 Amri F (2018) Carbon dioxide emissions, total factor productivity, ICT, trade, financial development, and energy consumption: testing environmental Kuznets curve hypothesis for Tunisia. Environ Sci Pollut Res 25(33):33691–33701 Appiah M, Gyamfi BA, Adebayo TS, Bekun FV (2022) Do financial development, foreign direct investment, and economic growth enhance industrial development? Fresh evidence from Sub-Sahara African countries. Portuguese Econ J 23:1–25 Arshad Z, Robaina M, Botelho A (2020) The role of ICT in energy consumption and environment: an empirical investigation of Asian economies with cluster analysis. 
Environ Sci Pollut Resh 27(26):32913–32932 Asongu SA (2018) ICT, openness and CO2 emissions in Africa. Environ Sci Pollut Res 25(10):9351–9359 Asongu SA, Le Roux S, Biekpe N (2017) Environmental degradation, ICT and inclusive development in Sub-Saharan Africa. Energy Policy 111:353–361 Asongu SA, Le Roux S, Biekpe N (2018) Enhancing ICT for environmental sustainability in sub-Saharan Africa. Technol Forecast Soc Chang 127:209–216 Asongu S, El Montasser G, Toumi H (2016) Testing the relationships between energy consumption, CO2 emissions, and economic growth in 24 African countries: a panel ARDL approach. Environ Sci Pollut Res 23(7):6563–6573 Bakhsh K, Rose S, Ali MF, Ahmad N, Shahbaz M (2017) Economic growth, CO2 emissions, renewable waste and FDI relation in Pakistan: New evidences from 3SLS. J Environ Manage 196:627–632 Bamidele R, Ozturk I, Gyamfi BA, Bekun FV (2022) Tourism-induced pollution emission amidst energy mix: evidence from Nigeria. Environ Sci Pollut Res 29(13):19752–19761 Belkhir L, Elmeligi A (2018) Assessing ICT global emissions footprint: Trends to 2040 & recommendations. Journal of cleaner production, 177, 448-463. Bekun FV, Alola AA, Gyamfi BA, Ampomah AB (2021a) The environmental aspects of conventional and clean energy policy in sub-Saharan Africa: is N-shaped hypothesis valid? Environ Sci Pollut Res 28(47):66695–66708 Bekun FV, Alola AA, Gyamfi BA, Yaw SS (2021b) The relevance of EKC hypothesis in energy intensity real-output trade-off for sustainable environment in EU-27. Environ Sci Pollut Res 28(37):51137–51148 Bekun FV, Gyamfi BA, Onifade ST, Agboola MO (2021c) Beyond the environmental Kuznets Curve in E7 economies: accounting for the combined impacts of institutional quality and renewables. J Clean Prod 314:127924 Bekun FV (2022) Mitigating emissions in India: accounting for the role of real income, renewable energy consumption and investment in energy. Seat 12(1):188–192 Benzie M, Persson Å (2019) Governing borderless climate risks: moving beyond the territorial framing of adaptation. Int Environ Agree Politics Law Econ 19(4):369–393 Bilgili F, Koçak E, Bulut Ü, Kuloğlu A (2017) The impact of urbanization on energy intensity: panel data evidence considering cross-sectional dependence and heterogeneity. Energy 133:242–256 Breusch T, Pagan A (1980) The LM test and its application to model specification in econometrics. Rev Econ Stud 47:239–254 Chien F, Anwar A, Hsu CC, Sharif A, Razzaq A, Sinha A (2021) The role of information and communication technology in encountering environmental degradation: proposing an SDG framework for the BRICS countries. Technol Soc 65:101587 Cho Y, Lee J, Kim TY (2007) The impact of ICT investment and energy price on industrial electricity demand: dynamic growth model approach. Energy Policy 35(9):4730–4738. https://doi.org/10.1016/j.enpol.2007.03.030 Çoban O, Onifade ST, Yussif AB (2020) Reconsidering trade and investment-led growth hypothesis: new evidence from Nigerian economy. J Int Stud 13(3):98–110. https://doi.org/10.14254/2071-8330.2020/13-3/7 Cortés EA, Navarro J-L (2011) Do ICT influence economic growth and human developmentin European Union countries? Int Adv Econ Res 17(1):28–44. https://doi.org/10.1007/s11294-010-9289-5 De Hoyos RE, Sarafidis V (2006) Testing for cross-sectional dependence in panel-data models. 
Stata J 6(4):482–496 Dogan E, Aslan A (2017) Exploring the relationship among CO2 emissions, real GDP, energy consumption and tourism in the EU and candidate countries: Evidence from panel models robust to heterogeneity and cross-sectional dependence. Renew Sustain Energy Rev 77:239–245 Dumitrescu EI, Hurlin C (2012) Testing for Granger non-causality in heterogeneous panels. Econ Model 29:1450–1460 Eberhardt M, Bond S (2009) Cross-section dependence in nonstationary panel models: a novel estimator. Munich Personal RePEc Archive. http://mpra.ub.uni-muenchen.de/17692/ Eberhardt M, Teal F (2010) Productivity analysis in global manufacturing production. Discussion Paper 515, Department of Economics, University of Oxford. http://www.economics.ox.ac.uk/research/WP/pdf/paper515.pdf Emeri PN (2019) Influence of social media on students' academic performance in Lagos Metropolis. Int J Edu Res 6(1):160–168 Engelbrecht HJ, Xayavong V (2007) The elusive contribution of ICT to productivity growth in New Zealand: evidence from an extended industry-level growth accounting model. Econ Innov New Technol 16(4):255–275 Erumban AA, Das DK (2016) Information and communication technology and economic growth in India. Telecommun Policy 40(5):412–431 Godil (2020) The dynamic nonlinear influence of ICT, financial development, and institutional quality on CO2 emission in Pakistan: new insights from QARDL approach. Environ Sci Pollut Res 2020(27):24190–24200 Grossman GM, Krueger AB (1991) Environmental impacts of a North American free trade agreement. NBER Working Paper No. w3914, National Bureau of Economic Research Gyamfi BA, Adebayo TS, Bekun FV, Agyekum EB, Kumar NM, Alhelou HH, Al-Hinai A (2021a) Beyond environmental Kuznets curve and policy implications to promote sustainable development in Mediterranean. Energy Rep 7:6119–6129 Gyamfi BA, Adedoyin FF, Bein MA, Bekun FV (2021b) Environmental implications of N-shaped environmental Kuznets curve for E7 countries. Environ Sci Pollut Res 28(25):33072–33082 Gyamfi BA, Adedoyin FF, Bein MA, Bekun FV, Agozie DQ (2021c) The anthropogenic consequences of energy consumption in E7 economies: juxtaposing roles of renewable, coal, nuclear, oil and gas energy: evidence from panel quantile method. J Clean Prod 295:126373 Gyamfi BA, Ozturk I, Bein MA, Bekun FV (2021d) An investigation into the anthropogenic effect of biomass energy utilization and economic sustainability on environmental degradation in E7 economies. Biofuels, Bioprod Biorefin 15(3):840–851 Gygli S, Hälg F, Potrafke N, Sturm J-E (2019) The KOF Globalisation Index—revisited. Rev Int Org 14(3):543–574 Heidari H, Katircioğlu ST, Saeidpour L (2015) Economic growth, CO2 emissions, and energy consumption in the five ASEAN countries. Int J Electr Power Energy Syst 64:785–791 Hoechle D (2007) Robust standard errors for panel regressions with cross-sectional dependence. Stata J 7(3):281–312 Islam MS, Sujan MSH, Tasnim R, Ferdous MZ, Masud JHB, Kundu S, et al (2020) Problematic internet use among young and adult population in Bangladesh: correlates with lifestyle and online activities during the COVID-19 pandemic. Addict Behav Rep 12:100311 Kapetanios G, Pesaran MH, Yamagata T (2011) Panels with nonstationary multifactor error structures. J Econ 160:326–348 Kasman A, Duman YS (2015) CO2 emissions, economic growth, energy consumption, trade and urbanization in new EU member and candidate countries: a panel data analysis. 
Econ Model 44:97–103 Khan D, Khan N, Baloch MA, Saud S, Fatima T (2018) The effect of ICT on CO2 emissions in emerging economies: does the level of income matters? Environ Sci Pollut Res 25(23):22850–22860. https://doi.org/10.1007/s11356-018-2379-2 Kretschmer T (2012) Information and communication technologies and productivity growth: a survey of the literature. Res J 67:89 Lau LS, Choong CK, Eng YK (2014) Carbon dioxide emission, institutional quality, and economic growth: empirical evidence in Malaysia. Renew Energy 68:276–281 Le HP, Ozturk I (2020) The impacts of globalization, financial development, government expenditures, and institutional quality on CO2 emissions in the presence of environmental Kuznets curve. Environ Sci Pollut Res 27:22680–22697. https://doi.org/10.1007/s11356-020-08812-2 Lu WC (2018) The impacts of information and communication technology, energy consumption, financial development, and economic growth on carbon dioxide emissions in 12 Asian countries. Mitig Adapt Strat Glob Change 23(8):1351–1365 Lv Z, Xu T (2018) Is economic globalization good or bad for the environmental quality? New evidence from dynamic heterogeneous panel models. Technol Forecast Soc Chang 137:340–343 Magazzino C, Porrini D, Fusco G, Schneider N (2021) Investigating the link between ICT, electricity consumption, air pollution, and economic growth in EU countries. Energy Sources Part B: Econ Plan Policy. https://doi.org/10.1080/15567249.2020.1868622 Majeed MT (2018) Information and communication technology (ICT) and environmental sustainability: a comparative empirical analysis. Pakistan Journal of Commerce and Social Sciences (PJCSS) 12(3):758–783 Nathaniel SP, Nwulu N, Bekun F (2020) Natural resource, globalization, urbanization, human capital, and environmental degradation in Latin American and Caribbean countries. Environ Sci Pollut Res 89:1–15 Nguyen TT, Pham TAT, Tram HTX (2020) Role of information and communication technologies and innovation in driving carbon emissions and economic growth in selected G-20 countries. J Environ Manage 261:110162 Ohajionu UC, Gyamfi BA, Haseki MI, Bekun FV (2022) Assessing the linkage between energy consumption, financial development, tourism and environment: evidence from method of moments quantile regression. Environ Sci Pollut Res 29(20):30004–30018 Onifade ST, Alola AA, Erdoğan S, Acet H (2021a) Environmental aspect of energy transition and urbanization in the OPEC member states. Environ Sci Pollut Res. https://doi.org/10.1007/s11356-020-12181-1 Onifade ST, Çevik S, Erdoğan S, Asongu S, Bekun FV (2020) An empirical retrospect of the impacts of government expenditures on economic growth: new evidence from the Nigerian economy. J Econ Struct 9(1):6 Onifade ST, Gyamfi BA, Haouas I, Bekun FV (2021b) Re-examining the roles of economic globalization and natural resources consequences on environmental degradation in E7 economies: are human capital and urbanization essential components? Resour Policy 74:102435 Ozcan B, Ozturk I (2019) Renewable energy consumption-economic growth nexus in emerging countries: a bootstrap panel causality test. Renew Sustain Energy Rev 104:30–37 Özokcu S, Özdemir Ö (2017) Economic growth, energy, and environmental Kuznets curve. Renew Sustain Energy Rev 72:639–647 Park Y, Baloch M, et al (2018) The effect of ICT, financial development, growth and trade openness on CO2 emissions: an empirical study. Environ Sci Pollut Res 25:30708–30719. 
https://doi.org/10.1007/s11356-018-3108-6 Pesaran MH (2006) Estimation and inference in large heterogeneous panels with a multifactor error structure. Econometrica 74:967–1012 Pesaran MH (2007) A simple panel unit root test in the presence of cross section dependence. J Appl Economet 22(2):265–312 Pesaran MH (2015) Testing weak cross-sectional dependence in large panels. Econom Rev 34(6–10):1089–1117 Raheem ID, Tiwari AK, Balsalobre-Lorente D (2020) The role of ICT and financial development in CO2 and economic growth. Environ Sci Pollut Res 2020(27):1912–1922. https://doi.org/10.1007/s11356-019-06590-0 Raworth K (2017) Doughnut economics: seven ways to think like a 21st-century economist. Chelsea Green Publishing, New York Rogelj J, Schleussner CF (2019) Unintentional unfairness when applying new greenhouse gas emissions metrics at country level. Environ Res Lett 14(11):114039 Sadorsky P (2012) Information communication technology and electricity consumption in emerging economies. Energy Policy 48:130–136. https://doi.org/10.1016/j.enpol.2012.04.064 Saint Akadiri S, Bekun FV, Sarkodie SA (2019) Contemporaneous interaction between energy consumption, economic growth and environmental sustainability in South Africa: what drives what? Sci Total Environ 686:468–475 Salahuddin M, Alam K (2015) Internet usage, electricity consumption and economic growth in Australia: a time series evidence. Telematics Inform 32(4):862–878. https://doi.org/10.1016/j.tele.2015.04.011 Salahuddin M, Alam K (2016) Information and Communication Technology, electricity consumption and economic growth in OECD countries: a panel data analysis. Int J Electr Power Energy Syst 76:185–193. https://doi.org/10.1016/j.ijepes.2015.11.005 Sarpong SY, Bein MA, Gyamfi BA, Sarkodie SA (2020) The impact of tourism arrivals, tourism receipts and renewable energy consumption on quality of life: a panel study of Southern African region. Heliyon 6(11):e05351 Saud S, Chen S, Haseeb A (2020) The role of financial development and globalization in the environment: Accounting ecological footprint indicators for selected one-belt-one-road initiative countries. J Clean Prod 250:119518 Shabani ZD, Shahnazi R (2019) Energy consumption, carbon dioxide emissions, information and communications technology, and gross domestic product in Iranian economic sectors: a panel causality analysis. Energy 169:1064–1078. https://doi.org/10.1016/j.energy.2018.11.062 Shafik N (1994) Economic development and environmental quality: an econometric analysis. Oxford, Oxford Economic Papers, pp 757–773 Shahbaz M, Nasir MA, Roubaud D (2018a) Environmental degradation in France: the effects of FDI, financial development, and energy innovations. Energy Economics 74:843–857 Shahbaz M, Shahzad SJH, Mahalik MK, Hammoudeh S (2018b) Does globalisation worsen environmental quality in developed economies? Environ Model Assess 23(2):141–156 Shahbaz M, Shahzad SJH, Mahalik MK, Sadorsky P (2018c) How strong is the causal relationship between globalization and energy consumption in developed economies? A country-specific time-series and panel analysis. Appl Econ 50(13):1479–1494 Sinha A (2018) Impact of ICT exports and internet usage on carbon emissions: a case of OECD countries, pp 228–257 Sinha A, Sengupta T, Alvarado R (2020) Interplay between technological innovation and environmental quality: formulating the SDG policies for next 11 economies. 
J Clean Prod 242:118549 Steve YS, Murad AB, Gyamfi BA, Bekun FV, Uzuner G (2022) Renewable energy consumption a panacea for sustainable economic growth: panel causality analysis for African blocs. Int J Green Energy 19(8):847–856 Wang N, Zhu H, Guo Y, Peng C (2018) The heterogeneous effect of democracy, political globalization, and urbanization on PM2.5 concentrations in G20 countries: evidence from panel quantile regression. J Clean Prod 194:54–68 Westerlund J (2007) Testing for error correction in panel data. Oxford Bull Econ Stat 69:709–748 World Development Indicators, WDI (2020) https://data.worldbank.org/ Accessed Jan 2021 Zafar MW, Zaidi SAH, Mansoor S, Sinha A, Qin Q (2022) ICT and education as determinants of environmental quality: the role of financial development in selected Asian countries. Technol Forecast Soc Chang 177:121547 Zaidi SAH, Zafar MW, Shahbaz M, Hou F (2019) Dynamic linkages between globalization, financial development and carbon emissions: evidence from Asia Pacific Economic Cooperation countries. J Clean Prod 228:533–543 Zhang C, Lin Y (2012) Panel estimation for urbanization, energy consumption and CO2 emissions: A regional analysis in China. Energy Policy 49:488–498 Zhang C, Liu C (2015) The impact of ICT industry on CO2 emissions: a regional analysis in China. Renew Sustain Energy Rev 44:12–19 Author gratitude is extended to the editor(s) and reviewers who have spared time to guide this work toward a successful publication. The authors also assure that they have followed the Springer publishing procedures and agree to publish the article under the applicable access standards and licensing. We hereby declare that no funding was received for this study. Faculty of Economics and Administrative Sciences, Cyprus International University, Via Mersin 10, Nicosia, North Cyprus, Turkey Asiedu B. Ampomah Faculty of Economics, Administrative and Social Sciences, Istanbul Gelisim University, Istanbul, Turkey Festus V. Bekun Adnan Kassar School of Business, Department of Economics, Lebanese American University, Beirut, Lebanon Faculty of Economics and Commerce, The Superior University, Lahore, Pakistan School of Economics, University of Johannesburg, Johannesburg, South Africa Simplice A. Asongu Economic and Finance Application and Research Center, İstanbul Ticaret University, Istanbul, Turkey Bright Akwasi Gyamfi The first author, BAG, was responsible for the conceptual construction of the study's idea. The second author, SAA, handled the literature section, while the third author, FVB, managed the data gathering. ABA managed the analysis and was responsible for proofreading and manuscript editing. All authors read and approved the final manuscript. Correspondence to Festus V. Bekun. We wish to disclose here that there are no potential conflicts of interest at any level of this study. See Table 9. Table 9 VIF estimations table Gyamfi, B.A., Ampomah, A.B., Bekun, F.V. et al. Can information and communication technology and institutional quality help mitigate climate change in E7 economies? An environmental Kuznets curve extension. Journal of Economic Structures 11, 14 (2022). https://doi.org/10.1186/s40008-022-00273-9 Revised: 25 August 2022 Renewable energy transition Carbon reduction Economic globalization Panel econometrics E7 economies
Tag Archives: axiom of determinacy Determinacy for open class games is preserved by forcing, CUNY Set Theory Seminar, April 2018 Posted on March 6, 2018 by Joel David Hamkins This will be a talk for the CUNY Set Theory Seminar, April 27, 2018, GC Room 6417, 10-11:45am (please note corrected date). Abstract. Open class determinacy is the principle of second order set theory asserting of every two-player game of perfect information, with plays coming from a (possibly proper) class $X$ and the winning condition determined by an open subclass of $X^\omega$, that one of the players has a winning strategy. This principle finds itself about midway up the hierarchy of second-order set theories between Gödel-Bernays set theory and Kelley-Morse, a bit stronger than the principle of elementary transfinite recursion ETR, which is equivalent to clopen determinacy, but weaker than GBC+$\Pi^1_1$-comprehension. In this talk, I'll give an account of my recent joint work with W. Hugh Woodin, proving that open class determinacy is preserved by forcing. A central part of the proof is to show that in any forcing extension of a model of open class determinacy, every well-founded class relation in the extension is ranked by a ground model well-order relation. This work therefore fits into the emerging focus in set theory on the interaction of fundamental principles of second-order set theory with fundamental set theoretic tools, such as forcing. It remains open whether clopen determinacy or equivalently ETR is preserved by set forcing, even in the case of the forcing merely to add a Cohen real. Open and clopen determinacy for proper class games, VCU MAMLS April 2017 On the strengths of the class forcing theorem and clopen class game determinacy, Prague set theory seminar, January 2018 Open determinacy for games on the ordinals, Torino, March 2016 Open determinacy for games on the ordinals is stronger than ZFC, CUNY Logic Workshop, October 2015 Open determinacy for class games Determinacy for proper-class clopen games is equivalent to transfinite recursion along proper-class well-founded relations Posted in Talks | Tagged axiom of determinacy, determinacy, ETR, infinite games, open games | 5 Replies Posted on December 27, 2017 by Joel David Hamkins This will be a talk for the Prague set theory seminar, January 24, 11:00 am to about 2pm (!). Abstract. The class forcing theorem is the assertion that every class forcing notion admits corresponding forcing relations. This assertion is not provable in Zermelo-Fraenkel ZFC set theory or Gödel-Bernays GBC set theory, if these theories are consistent, but it is provable in stronger second-order set theories, such as Kelley-Morse KM set theory. In this talk, I shall discuss the exact strength of this theorem, which turns out to be equivalent to the principle of elementary transfinite recursion $\text{ETR}_{\text{Ord}}$ for class recursions on the ordinals. The principle of clopen determinacy for class games, in contrast, is strictly stronger, equivalent over GBC to the full principle of ETR for class recursions over arbitrary class well-founded relations. These results and others mark the beginnings of the emerging subject I call the reverse mathematics of second-order set theory. 
The exact strength of the class forcing theorem | Open determinacy for class games Posted in Talks | Tagged axiom of determinacy, determinacy, ETR, games, GBC, infinite games, KM, open games, Prague, reverse mathematics for second-order set theory | 1 Reply Games with the computable-play paradox Let me tell you about a fascinating paradox arising in certain infinitary two-player games of perfect information. The paradox, namely, is that there are games for which our judgement of who has a winning strategy or not depends on whether we insist that the players play according to a deterministic computable procedure. In the space of computable play for these games, one player has a winning strategy, but in the full space of all legal play, the other player can ensure a win. The fundamental theorem of finite games, proved in 1913 by Zermelo, is the assertion that in every finite two-player game of perfect information — finite in the sense that every play of the game ends in finitely many moves — one of the players has a winning strategy. This is generalized to the case of open games, games where every win for one of the players occurs at a finite stage, by the Gale-Stewart theorem of 1953, which asserts that in every open game, one of the players has a winning strategy. Both of these theorems are easily adapted to the case of games with draws, where the conclusion is that one of the players has a winning strategy or both players have draw-or-better strategies. Let us consider games with a computable game tree, so that we can compute whether or not a move is legal. Let us say that such a game is computably paradoxical, if our judgement of who has a winning strategy depends on whether we restrict to computable play or not. So for example, perhaps one player has a winning strategy in the space of all legal play, but the other player has a computable strategy defeating all computable strategies of the opponent. Or perhaps one player has a draw-or-better strategy in the space of all play, but the other player has a computable strategy defeating computable play. Examples of paradoxical games occur in infinite chess. We described such a paradoxical position in my paper Transfinite game values in infinite chess by giving a computable infinite chess position with the property that both players had drawing strategies in the space of all possible legal play, but in the space of computable play, white had a computable strategy defeating any particular computable strategy for black. For a related non-chess example, let $T$ be an infinite computable subtree of $2^{<\omega}$ having no computable infinite branch, and consider the game in which black simply climbs in this tree as white watches, with black losing whenever he is trapped in a terminal node, but winning if he should climb infinitely. This game is open for white, since if white wins, this is known at a finite stage of play. In the space of all possible play, Black has a winning strategy, which is simply to climb the tree along an infinite branch, which exists by König's lemma. But there is no computable strategy to find such a branch, by the assumption on the tree, and so when black plays computably, white will inevitably win. For another example, suppose that we have a computable linear order $\lhd$ on the natural numbers $\newcommand\N{\mathbb{N}}\N$, which is not a well order, but which has no computable infinite descending sequence. It is a nice exercise in computable model theory to show that such an order exists. 
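Before putting such an order to use in a game, it may help to see one concrete comparison rule: the Kleene-Brouwer order on finite sequences, which is also the order used in the proof of the theorem further below. The following sketch is my own illustration, not code from the post; it only implements the comparison rule itself.

```python
# A minimal sketch (illustration only) of the Kleene-Brouwer order on finite
# sequences of natural numbers: s precedes t when s properly extends t, or when
# s lies to the left of t at the first coordinate where they disagree.

def kb_precedes(s, t):
    """Return True when s strictly precedes t in the Kleene-Brouwer order."""
    for a, b in zip(s, t):
        if a != b:
            return a < b          # the first disagreement decides, lexically
    return len(s) > len(t)        # proper extensions come earlier

# A descending run in this order either moves deeper into the tree or jumps left.
print(kb_precedes((0, 1, 1), (0, 1)))   # True: (0,1,1) properly extends (0,1)
print(kb_precedes((0, 5), (1,)))        # True: first entries disagree and 0 < 1
print(kb_precedes((2,), (1, 7)))        # False: 2 > 1 at the first entry
```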
Now consider the count-down game played in such an order, with white trying to build a descending sequence and black watching. In the space of all play, white can succeed and therefore has a winning strategy, but since there is no computable descending sequence, white can have no computable winning strategy, and so black will win every computable play. There are several proofs of open determinacy (and see my MathOverflow post outlining four different proofs of the fundamental theorem of finite games), but one of my favorite proofs of open determinacy uses the concept of transfinite game values, assigning an ordinal to some of the positions in the game tree. Suppose we have an open game between Alice and Bob, where the game is open for Alice. The ordinal values we define for positions in the game tree will measure in a sense the distance Alice is away from winning. Namely, her already-won positions have value $0$, and if it is Alice's turn to play from a position $p$, then the value of $p$ is $\alpha+1$, if $\alpha$ is minimal such that she can play to a position of value $\alpha$; if it is Bob's turn to play from $p$, and all the positions to which he can play have value, then the value of $p$ is the supremum of these values. Some positions may be left without value, and we can think of those positions as having value $\infty$, larger than any ordinal. The thing to notice is that if a position has a value, then Alice can always make it go down, and Bob cannot make it go up. So the value-reducing strategy is a winning strategy for Alice, from any position with value, while the value-maintaining strategy is winning for Bob, from any position without a value (maintaining value $\infty$). So the game is determined, depending on whether the initial position has value or not. What is the computable analogue of the ordinal-game-value analysis in the computably paradoxical games? If a game is open for Alice and she has a computable strategy defeating all computable opposing strategies for Bob, but Bob has a non-computable winning strategy, then it cannot be that we can somehow assign computable ordinals to the positions for Alice and have her play the value-reducing strategy, since if those values were actual ordinals, then this would be a full honest winning strategy, even against non-computable play. Nevertheless, I claim that the ordinal-game-value analysis does admit a computable analogue, in the following theorem. This came out of a discussion I had with Noah Schweber and Russell Miller during Noah's recent visit to the CUNY Graduate Center. Let us define that a computable open game is an open game whose game tree is computable, so that we can tell whether a given move is legal from a given position (this is a bit weaker than being able to compute the entire set of possible moves from a position, even when this is finite). And let us define that an effective ordinal is a computable relation $\lhd$ on $\N$, for which there is no computable infinite descending sequence. Every computable ordinal is also an effective ordinal, but as we mentioned earlier, there are non-well-ordered effective ordinals. Let us call them computable pseudo-ordinals. Theorem. The following are equivalent for any computable game, open for White. (1) White has a computable strategy defeating any computable play by Black. (2) There is an effective game-value assignment for white into an effective ordinal $\lhd$, giving the initial position a value. 
That is, there is a computable assignment of some positions of the game, including the first position, to values in the field of $\lhd$, such that from any valued position with White to play, she can play so as to reduce value, and with Black to play, he cannot increase the value. Proof. ($2\to 1$) Given the computable values into an effective ordinal, then the value-reducing strategy for White is a computable strategy. If Black plays computably, then together they compute a descending sequence in the $\lhd$ order. Since there is no computable infinite descending sequence, it must be that the values hit zero and the game ends with a win for White. So White has a computable strategy defeating any computable play by Black. ($1\to 2$) Conversely, suppose that White has a computable strategy $\sigma$ defeating any computable play by Black. Let $\tau$ be the subtree of the game tree arising by insisting that White follow the strategy $\sigma$, and view this as a tree on $\N$, a subtree of $\N^{<\omega}$. Imagine the tree growing downwards, and let $\lhd$ be the Kleene-Brouwer order on this tree, which is the lexical order on incompatible positions, and otherwise longer positions are lower. This is a computable linear order on the tree. Since $\sigma$ is computably winning for White, the open player, it follows that every computable descending sequence in $\tau$ eventually reaches a terminal node. From this, it follows that there is no computable infinite descending sequence with respect to $\lhd$, and so this is an effective ordinal. We may now map every node in $\tau$, which includes the initial node, to itself in the $\lhd$ order. This is a game-value assignment, since on White's turn, the value goes down, and it doesn't go up on Black's turn. QED Corollary. A computable open game is computably paradoxical if and only if it admits an effective game value assignment for the open player, but only with computable pseudo-ordinals and not with computable ordinals. Proof. If there is an effective game value assignment for the open player, then the value-reducing strategy arising from that assignment is a computable strategy defeating any computable strategy for the opponent. Conversely, if the game is paradoxical, there can be no such ordinal-assignment where the values are actually well-ordered, or else that strategy would work against all play by the opponent. QED Let me make a few additional observations about these paradoxical games. Theorem. In any open game, if the closed player has a strategy defeating all computable opposing strategies, then in fact this is a winning strategy also against non-computable play. Proof. If the closed player has a strategy $\sigma$ defeating all computable strategies of the opponent, then in fact it defeats all strategies of the opponent, since any winning play by the open player against $\sigma$ wins in finitely many moves, and therefore there is a computable strategy giving rise to the same play. QED Corollary. If an open game is computably paradoxical, it must be the open player who wins in the space of computable play and the closed player who wins in the space of all play. Proof. The theorem shows that if the closed player wins in the space of computable play, then that player in fact wins in the space of all play. QED Corollary. There are no computably paradoxical clopen games. Proof. 
If the game is clopen, then both players are closed, but we just argued that any computable strategy for a closed player winning against all computable play is also winning against all play. QED Posted in Exposition | Tagged axiom of determinacy, computability, game values, infinite games, Noah Schweber, open games | 8 Replies This will be a talk for the CUNY Logic Workshop on October 2, 2015. Abstract. The principle of open determinacy for class games — two-player games of perfect information with plays of length $\omega$, where the moves are chosen from a possibly proper class, such as games on the ordinals — is not provable in Zermelo-Fraenkel set theory ZFC or Gödel-Bernays set theory GBC, if these theories are consistent, because provably in ZFC there is a definable open proper class game with no definable winning strategy. In fact, the principle of open determinacy and even merely clopen determinacy for class games implies Con(ZFC) and iterated instances Con(Con(ZFC)) and more, because it implies that there is a satisfaction class for first-order truth, and indeed a transfinite tower of truth predicates $\text{Tr}_\alpha$ for iterated truth-about-truth, relative to any class parameter. This is perhaps explained, in light of the Tarskian recursive definition of truth, by the more general fact that the principle of clopen determinacy is exactly equivalent over GBC to the principle of elementary transfinite recursion ETR over well-founded class relations. Meanwhile, the principle of open determinacy for class games is provable in the stronger theory GBC+$\Pi^1_1$-comprehension, a proper fragment of Kelley-Morse set theory KM. This is joint work with Victoria Gitman, with the helpful participation of Thomas Johnstone. Related article and posts: Logic Workshop announcement: Open determinacy for games on the ordinals is stronger than ZFC Article: V. Gitman, J.D. Hamkins, Open determinacy for class games, submitted. Blog post: Open determinacy for proper class games implies Con(ZFC) and much more Blog post: Determinacy for proper class games is equivalent to transfinite recursion Posted in Talks | Tagged axiom of determinacy, determinacy, games, GBC, infinite games, KM, open games | 1 Reply Posted on September 4, 2015 by Joel David Hamkins V. Gitman and J. D. Hamkins, "Open determinacy for class games," in Foundations of Mathematics, Logic at Harvard, Essays in Honor of Hugh Woodin's 60th Birthday, A. E. Caicedo, J. Cummings, P. Koellner, and P. Larson, Eds., 2016, Newton Institute preprint ni15064.
@INCOLLECTION{GitmanHamkins2016:OpenDeterminacyForClassGames,
  author = {Victoria Gitman and Joel David Hamkins},
  title = {Open determinacy for class games},
  booktitle = {{Foundations of Mathematics, Logic at Harvard, Essays in Honor of Hugh Woodin's 60th Birthday}},
  editor = {Andr\'es E. Caicedo and James Cummings and Peter Koellner and Paul Larson},
  volume = {},
  number = {},
  series = {AMS Contemporary Mathematics},
  type = {},
  chapter = {},
  pages = {},
  address = {},
  edition = {},
  month = {},
  note = {Newton Institute preprint ni15064},
  url = {http://wp.me/p5M0LV-1af},
  eprint = {1509.01099},
}
Abstract. 
The principle of open determinacy for class games — two-player games of perfect information with plays of length $\omega$, where the moves are chosen from a possibly proper class, such as games on the ordinals — is not provable in Zermelo-Fraenkel set theory ZFC or Godel-Bernays set theory GBC, if these theories are consistent, because provably in ZFC there is a definable open proper class game with no definable winning strategy. In fact, the principle of open determinacy and even merely clopen determinacy for class games implies Con(ZFC) and iterated instances Con(Con(ZFC)) and more, because it implies that there is a satisfaction class for first-order truth, and indeed a transfinite tower of truth predicates $\text{Tr}_\alpha$ for iterated truth-about-truth, relative to any class parameter. This is perhaps explained, in light of the Tarskian recursive definition of truth, by the more general fact that the principle of clopen determinacy is exactly equivalent over GBC to the principle of transfinite recursion over well-founded class relations. Meanwhile, the principle of open determinacy for class games is provable in the stronger theory GBC$+\Pi^1_1$-comprehension, a proper fragment of Kelley-Morse set theory KM. See my earlier posts on part of this material: Open determinacy for proper class games implies Con(ZFC) and much more Determinacy for proper class games is equivalent to transfinite recursion Posted in Publications | Tagged axiom of determinacy, determinacy, GBC, KM, open games, Victoria Gitman | 7 Replies The axiom of determinacy for small sets Posted on July 3, 2015 by Joel David Hamkins I should like to argue that the axiom of determinacy is true for all games having a small payoff set. In particular, the size of the smallest non-determined set, in the sense of the axiom of determinacy, is the continuum; every set of size less than the continuum is determined, even when the continuum is enormous. We consider two-player games of perfect information. Two players, taking turns, play moves from a fixed space $X$ of possible moves, and thereby together build a particular play or instance of the game $\vec a=\langle a_0,a_1,\ldots\rangle\in X^\omega$. The winner of this instance of the game is determined according to whether the play $\vec a$ is a member of some fixed payoff set $U\subset X^\omega$ specifying the winning condition for this game. Namely, the first player wins in the case $\vec a\in U$. A strategy in such a game is a function $\sigma:X^{<\omega}\to X$ that instructs a particular player how to move next, given the sequence of partial play, and such a strategy is a winning strategy for that player, if all plays made against it are winning for that player. (The first player applies the strategy $\sigma$ only on even-length input, and the second player only to the odd-length inputs.) The game is determined, if one of the players has a winning strategy. It is not difficult to see that if $U$ is countable, then the game is determined. To see this, note first that if the space of moves $X$ has at most one element, then the game is trivial and hence determined; and so we may assume that $X$ has at least two elements. If the payoff set $U$ is countable, then we may enumerate it as $U=\{s_0,s_1,\ldots\}$. Let the opposing player now adopt the strategy of ensuring on the $n^{th}$ move that the resulting play is different from $s_n$. In this way, the opposing player will ensure that the play is not in $U$, and therefore win. 
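To make the diagonal strategy concrete, here is a small sketch of the countable-payoff argument just given. It is my own illustration rather than code from the post: the move space is fixed as $X=\{0,1\}$, the second player moves at the odd stages and differs from the $k$-th enumerated play at coordinate $2k+1$, and the particular enumeration of the payoff set is only a toy example.

```python
# Sketch of the diagonal strategy against a countable payoff set U = {s_0, s_1, ...}.
# The second player moves at the odd stages and, at stage 2k+1, plays the bit that
# differs from s_k at that coordinate, so the final play differs from every s_k.

def second_player_move(n, enumerate_play):
    """Move at odd stage n: differ from s_k at coordinate n, where k = (n - 1) // 2."""
    k = (n - 1) // 2
    s_k = enumerate_play(k)
    return 1 - s_k(n)

# Toy enumeration of a countable payoff set: the k-th play is the binary expansion
# of k padded with zeros, so U here is a set of eventually-zero plays.
def enumerate_play(k):
    bits = [int(b) for b in bin(k)[2:]]
    return lambda n: bits[n] if n < len(bits) else 0

# Build a finite initial segment of a play: the first player does anything
# (here always 0), and the second player follows the diagonal strategy.
play = [0 if n % 2 == 0 else second_player_move(n, enumerate_play) for n in range(12)]
print(play)   # differs from s_k at coordinate 2k+1 for every k, hence lies outside U
```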
So every game with a countable payoff set is determined. Meanwhile, using the axiom of choice, we may construct a non-determined set even for the case $X=\{0,1\}$, as follows. Since a strategy is function from finite binary sequences to $\{0,1\}$, there are only continuum many strategies. By the axiom of choice, we may well-order the strategies in order type continuum. Let us define a payoff set $U$ by a transfinite recursive procedure: at each stage, we will have made fewer than continuum many promises about membership and non-membership in $U$; we consider the next strategy on the list; since there are continuum many plays that accord with that strategy for each particular player, we may make two additional promises about $U$ by placing one of these plays into $U$ and one out of $U$ in such a way that this strategy is defeated as a winning strategy for either player. The result of the recursion is a non-determined set of size continuum. So what is the size of the smallest non-determined set? For a lower bound, we argued above that every countable payoff set is determined, and so the smallest non-determined set must be uncountable, of size at least $\aleph_1$. For an upper bound, we constructed a non-determined set of size continuum. Thus, if the continuum hypothesis holds, then the smallest non-determined set has size exactly continuum, which is $\aleph_1$ in this case. But what if the continuum hypothesis fails? I claim, nevertheless, that the smallest non-determined set still has size continuum. Theorem. Every game whose winning condition is a set of size less than the continuum is determined. Proof. Suppose that $U\subset X^\omega$ is the payoff set of the game under consideration, so that $U$ has size less than continuum. If $X$ has at most one element, then the game is trivial and hence determined. So we may assume that $X$ has at least two elements. Let us partition the elements of $X^\omega$ according to whether they have exactly the same plays for the second player. So there are at least continuum many classes in this partition. If $U$ has size less than continuum, therefore, it must be disjoint from at least one (and in fact from most) of the classes of this partition (since otherwise we would have an injection from the continuum into $U$). So there is a fixed sequence of moves for the second player, such that any instance of the game in which the second player makes those moves, the result is not in $U$ and hence is a win for the second player. This is a winning strategy for the second player, and so the game is determined. QED This proof generalizes the conclusion of the diagonalization argument against a countable payoff set, by showing that for any winning condition set of size less than continuum, there is a fixed play for the opponent (not depending on the play of the first player) that defeats it. The proof of the theorem uses the axiom of choice in the step where we deduce that $U$ must be disjoint from a piece of the partition, since there are continuum many such pieces and $U$ had size less than the continuum. Without the axiom of choice, this conclusion does not follow. Nevertheless, what the proof does show without AC is that every set that does not surject onto $\mathbb{R}$ is determined, since if $U$ contained an element from every piece of the partition it would surject onto $\mathbb{R}$. 
Without AC, the assumption that $U$ does not surject onto $\mathbb{R}$ is stronger than the assumption merely that it has size less the continuum, although these properties are equivalent in ZFC. Meanwhile, these issues are relevant in light of the model suggested by Asaf Karagila in the comments below, which shows that it is consistent with ZF without the axiom of choice that there are small non-determined sets. Namely, the result of Monro shows that it is consistent with ZF that $\mathbb{R}=A\sqcup B$, where both $A$ and $B$ have cardinality less than the continuum. In particular, in this model the continuum injects into neither $A$ nor $B$, and consequently neither player can have a strategy to force the play into their side of this partition. Thus, both $A$ and $B$ are non-determined, even though they have size less than the continuum. Posted in Exposition | Tagged axiom of determinacy, continuum hypothesis, infinite games, uncountable | 4 Replies The continuum hypothesis and other set-theoretic ideas for non-set-theorists, CUNY Einstein Chair Seminar, April, 2015 At Dennis Sullivan's request, I shall speak on set-theoretic topics, particularly the continuum hypothesis, for the Einstein Chair Mathematics Seminar at the CUNY Graduate Center, April 27, 2015, in two parts: An introductory background talk at 11 am, Room GC 6417 The main talk at 2 – 4 pm, Room GC 6417 I look forward to what I hope will be an interesting and fruitful interaction. There will be coffee/tea and lunch between the two parts. Abstract. I shall present several set-theoretic ideas for a non-set-theoretic mathematical audience, focusing particularly on the continuum hypothesis and related issues. At the introductory background talk, in the morning (11 am), I shall discuss and prove the Cantor-Bendixson theorem, which asserts that every closed set of reals is the union of a countable set and a perfect set (a closed set with no isolated points), and explain how it led to Cantor's development of the ordinal numbers and how it establishes that the continuum hypothesis holds for closed sets of reals. We'll see that there are closed sets of arbitrarily large countable Cantor-Bendixson rank. We'll talk about the ordinals, about $\omega_1$, the long line, and, time permitting, we'll discuss Suslin's hypothesis. At the main talk, in the afternoon (2 pm), I'll begin with a discussion of the continuum hypothesis, including an explanation of the history and logical status of this axiom with respect to the other axioms of set theory, and establish the connection between the continuum hypothesis and Freiling's axiom of symmetry. I'll explain the axiom of determinacy and some of its applications and its rich logical situation, connected with large cardinals. I'll briefly mention the themes and goals of the subjects of cardinal characteristics of the continuum and of Borel equivalence relation theory. If time permits, I'd like to explain some fun geometric decompositions of space that proceed in a transfinite recursion using the axiom of choice, mentioning the open questions concerning whether there can be such decompositions that are Borel. Dennis has requested that at some point the discussion turn to the role of set theory in the foundation for mathematics, compared for example to that of category theory, and I would look forward to that. I would be prepared also to discuss the Feferman theory in comparison to Grothendieck's axiom of universes, and other issues relating set theory to category theory. 
Posted in Talks, Videos | Tagged AD, axiom of choice, axiom of determinacy, axiom of symmetry, cardinal characteristics, CH, continuum hypothesis | 9 Replies
Characterization of superhydrophobic surfaces for drag reduction in turbulent flow James W. Gose, Kevin Golovin, Mathew Boban, Joseph M. Mabry, Anish Tuteja, Marc Perlin, Steven L. Ceccio Journal: Journal of Fluid Mechanics / Volume 845 / 25 June 2018 Print publication: 25 June 2018 A significant amount of the fuel consumed by marine vehicles is expended to overcome skin-friction drag resulting from turbulent boundary layer flows. Hence, a substantial reduction in this frictional drag would notably reduce cost and environmental impact. Superhydrophobic surfaces (SHSs), which entrap a layer of air underwater, have shown promise in reducing drag in small-scale applications and/or in laminar flow conditions. Recently, the efficacy of these surfaces in reducing drag resulting from turbulent flows has been shown. In this work we examine four different, mechanically durable, large-scale SHSs. When evaluated in fully developed turbulent flow, in the height-based Reynolds number range of 10 000 to 30 000, significant drag reduction was observed on some of the surfaces, dependent on their exact morphology. We then discuss how neither the roughness of the SHSs, nor the conventional contact angle goniometry method of evaluating the non-wettability of SHSs at ambient pressure, can predict their drag reduction under turbulent flow conditions. Instead, we propose a new characterization parameter, based on the contact angle hysteresis at higher pressure, which aids in the rational design of randomly rough, friction-reducing SHSs. Overall, we find that both the contact angle hysteresis at higher pressure, and the non-dimensionalized surface roughness, must be minimized to achieve meaningful turbulent drag reduction. Further, we show that even SHSs that are considered hydrodynamically smooth can cause significant drag increase if these two parameters are not sufficiently minimized. On the scaling of air entrainment from a ventilated partial cavity Simo A. Mäkiharju, Brian R. Elbing, Andrew Wiggins, Sarah Schinasi, Jean-Marc Vanden-Broeck, Marc Perlin, David R. Dowling, Steven L. Ceccio Journal: Journal of Fluid Mechanics / Volume 732 / 10 October 2013 Published online by Cambridge University Press: 30 August 2013, pp. 47-76 The behaviour of a nominally two-dimensional ventilated partial cavity was examined over a wide range of size scales and flow speeds to determine the influence of Froude, Reynolds, and Weber number on the cavity shape, dynamics, and gas entrainment rate. Two geometrically similar experiments were conducted with a 14:1 length scale ratio. The results were compared to a two-dimensional semi-analytical model of the cavity flow, and Froude scaling was found to be sufficient to match basic cavity shapes. However, the air flux required to maintain a stable cavity did not scale with Froude number alone, as the dynamics of the cavity closure changed with increasing Reynolds number. The required air flux differed over one order of magnitude between the lowest and highest Reynolds number flows. But, for sufficiently high Reynolds numbers, the rate of scaled entrainment appeared to approach Reynolds number independence. Modest changes in surface tension of the small-scale experiment suggested that the Weber number was important only at the lowest speeds and smaller length scale. 
Otherwise, the Weber numbers of the flows were sufficiently high to make the effects of interfacial tension negligible. We also observed that modest unsteadiness in the inflow to the large-scale cavity led to a significant increase in the required air flux needed to maintain a stable cavity, with the required excess gas flux nominally proportional to the flow's perturbation amplitude. Finally, discussion is provided on how these results relate to model testing of partial cavity drag reduction (PCDR) systems for surface ships. On the scaling of air layer drag reduction Brian R. Elbing, Simo Mäkiharju, Andrew Wiggins, Marc Perlin, David R. Dowling, Steven L. Ceccio Journal: Journal of Fluid Mechanics / Volume 717 / 25 February 2013 Print publication: 25 February 2013 Air-induced drag reduction was investigated on a 12.9 m long flat plate test model at a free stream speed of $6. 3~\mathrm{m} ~{\mathrm{s} }^{- 1} $ . Measurements of the local skin friction, phase velocity profiles (liquid and gas) and void fraction profiles were acquired at downstream distances to 11.5 m, which yielded downstream-distance-based Reynolds numbers above 80 million. Air was injected within the boundary layer behind a 13 mm backward facing step (BFS) while the incoming boundary layer was perturbed with vortex generators in various configurations immediately upstream of the BFS. Measurements confirmed that air layer drag reduction (ALDR) is sensitive to upstream disturbances, but a clean boundary layer separation line (i.e. the BFS) reduces such sensitivity. Empirical scaling of the experimental data was investigated for: (a) the critical air flux required to establish ALDR; (b) void fraction profiles; and (c) the interfacial velocity profiles. A scaling of the critical air flux for ALDR was developed from balancing shear-induced lift forces and buoyancy forces on a single bubble within a shear flow. The resulting scaling successfully collapses ALDR results from the current and past studies over a range of flow conditions and test model configurations. The interfacial velocity and void fraction profiles were acquired and scaled within the bubble drag reduction (BDR), ALDR and transitional ALDR regimes. The BDR interfacial velocity profile revealed that there was slip between phases. The ALDR results showed that the air layer thickness was nominally three-quarters of the total volumetric flux (per unit span) of air injected divided by the free stream speed. Furthermore, the air layer had an average void fraction of 0.75 and a velocity of approximately 0.2 times the free stream speed. Beyond the air layer was a bubbly mixture that scaled in a similar fashion to the BDR results. Transitional ALDR results indicate that this regime was comprised of intermittent generation and subsequent fragmentation of an air layer, with the resulting drag reduction determined by the fraction of time that an air layer was present. Frequency spectra evolution of two-dimensional focusing wave groups in finite depth water Zhigang Tian, Marc Perlin, Wooyoung Choi Journal: Journal of Fluid Mechanics / Volume 688 / 10 December 2011 Published online by Cambridge University Press: 24 October 2011, pp. 169-194 An experimental and numerical study of the evolution of frequency spectra of dispersive focusing wave groups in a two-dimensional wave tank is presented. Investigations of both non-breaking and breaking wave groups are performed. 
It is found that dispersive focusing is far more than linear superposition, and that it undergoes strongly nonlinear processes. For non-breaking wave groups, as the wave groups propagate, the spatial evolution of the wave frequency spectra, spectral bandwidth, surface elevation skewness, and kurtosis is examined. Nonlinear energy transfer between the above-peak and the higher-frequency regions (defined relative to the spectral peak frequency) is demonstrated by tracking the energy level of the components in the focusing and defocusing process. Also shown is the nonlinear energy transfer to the lower-frequency components that cannot be detected easily by direct comparisons of the far upstream and downstream measurements. Energy dissipation in the spectral peak region and the energy gain in the higher-frequency region are quantified, and exhibit a dependence on the Benjamin–Feir Index (BFI). In the presence of wave breaking, the spectral bandwidth reduces by as much as 40 % immediately following breaking and eventually becomes much smaller than its initial level. Energy levels in different frequency regions are examined. It is found that, before wave breaking onset, a large amount of energy is transferred from the above-peak region to the higher frequencies, where energy is dissipated during the breaking events. It is demonstrated that the energy gain in the lower-frequency region is at least partially due to nonlinear energy transfer prior to wave breaking and that wave breaking may not necessarily increase the energy in this region. Complementary numerical studies for breaking waves are conducted using an eddy viscosity model previously developed by the current authors. It is demonstrated that the predicted spectral change after breaking agrees well with the experimental measurements. Flow-induced degradation of drag-reducing polymer solutions within a high-Reynolds-number turbulent boundary layer BRIAN R. ELBING, MICHAEL J. SOLOMON, MARC PERLIN, DAVID R. DOWLING, STEVEN L. CECCIO Journal: Journal of Fluid Mechanics / Volume 670 / 10 March 2011 Print publication: 10 March 2011 Polymer drag reduction, diffusion and degradation in a high-Reynolds-number turbulent boundary layer (TBL) flow were investigated. The TBL developed on a flat plate at free-stream speeds up to 20 m s$^{-1}$. Measurements were acquired up to 10.7 m downstream of the leading edge, yielding downstream-distance-based Reynolds numbers up to 220 million. The test model surface was hydraulically smooth or fully rough. Flow diagnostics included local skin friction, near-wall polymer concentration, boundary layer sampling and rheological analysis of polymer solution samples. Skin-friction data revealed that the presence of surface roughness can produce a local increase in drag reduction near the injection location (compared with the flow over a smooth surface) because of enhanced mixing. However, the roughness ultimately led to a significant decrease in drag reduction with increasing speed and downstream distance. At the highest speed tested (20 m s$^{-1}$) no drag reduction was discernible at the first measurement location (0.56 m downstream of injection), even at the highest polymer injection flux (10 times the flux of fluid in the near-wall region). Increased polymer degradation rates and polymer mixing were shown to be the contributing factors to the loss of drag reduction. Rheological analysis of liquid drawn from the TBL revealed that flow-induced polymer degradation by chain scission was often substantial. 
The inferred polymer molecular weight was successfully scaled with the local wall shear rate and residence time in the TBL. This scaling revealed an exponential decay that asymptotes to a finite (steady-state) molecular weight. The importance of the residence time to the scaling indicates that while individual polymer chains are stretched and ruptured on a relatively short time scale ($\sim 10^{-3}$ s), because of the low percentage of individual chains stretched at any instant in time, a relatively long time period ($\sim 0.1$ s) is required to observe changes in the mean molecular weight. This scaling also indicates that most previous TBL studies would have observed minimal influence from degradation due to insufficient residence times. The mean velocity profile of a smooth-flat-plate turbulent boundary layer at high Reynolds number GHANEM F. OWEIS, ERIC S. WINKEL, JAMES M. CUTBRITH, STEVEN L. CECCIO, MARC PERLIN, DAVID R. DOWLING Published online by Cambridge University Press: 06 December 2010, pp. 357-381 Smooth flat-plate turbulent boundary layers (TBLs) have been studied for nearly a century. However, there is a relative dearth of measurements at Reynolds numbers typical of full-scale marine and aerospace transportation systems ($Re_\theta = U_e\theta/\nu > 10^5$, where $U_e$ = free-stream speed, $\theta$ = TBL momentum thickness and $\nu$ = kinematic viscosity). This paper presents new experimental results for the TBL that forms on a smooth flat plate at nominal $Re_\theta$ values of $0.5\times 10^5$, $1.0\times 10^5$ and $1.5\times 10^5$. Nominal boundary layer thicknesses ($\delta$) were 80–90 mm, and Kármán numbers ($\delta^+$) were 17000, 32000 and 47000, respectively. The experiments were conducted in the William B. Morgan Large Cavitation Channel (LCC) on a polished ($k^+ < 0.2$) flat-plate test model 12.9 m long and 3.05 m wide at water flow speeds up to 20 m s$^{-1}$. Direct measurements of static pressure and mean wall shear stress were obtained with pressure taps and floating-plate skin friction force balances. The TBL developed a mild favourable pressure gradient that led to a streamwise flow speed increase of ~2.5% over the 11 m long test surface, and was consistent with test section sidewall and model surface boundary-layer growth. At each $Re_\theta$, mean streamwise velocity profile pairs, separated by 24 cm, were measured more than 10 m from the model's leading edge using conventional laser Doppler velocimetry. Between these profile pairs, a unique near-wall implementation of particle tracking velocimetry was used to measure the near-wall velocity profile. The composite profile measurements span the wall-normal coordinate range from $y^+ < 1$ to $y > 2\delta$. To within experimental uncertainty, the measured mean velocity profiles can be fit using traditional zero-pressure-gradient (ZPG) TBL asymptotics with some modifications for the mild favourable pressure gradient. The fitted profile pairs satisfy the von Kármán momentum integral equation to within 1%. However, the profiles reported here show distinct differences from equivalent ZPG profiles. The near-wall indicator function has more prominent extrema, the log-law constants differ slightly, and the profiles' wake component is less pronounced. Energy dissipation in two-dimensional unsteady plunging breakers and an eddy viscosity model Journal: Journal of Fluid Mechanics / Volume 655 / 25 July 2010 Print publication: 25 July 2010 An experimental study of energy dissipation in two-dimensional unsteady plunging breakers and an eddy viscosity model to simulate the dissipation due to wave breaking are reported in this paper. 
Measured wave surface elevations are used to examine the characteristic time and length scales associated with wave groups and local breaking waves, and to estimate and parameterize the energy dissipation and dissipation rate due to wave breaking. Numerical tests using the eddy viscosity model are performed and we find that the numerical results capture the measured energy loss well. In our experiments, three sets of characteristic time and length scales are defined and obtained: global scales associated with the wave groups, local scales immediately prior to breaking onset and post-breaking scales. Correlations among these time and length scales are demonstrated. In addition, for our wave groups, wave breaking onset predictions using the global and local wave steepnesses are found based on experimental results. Breaking time and breaking horizontal length scales are determined with high-speed imaging, and are found to depend approximately linearly on the local wave steepness. The two scales are then used to determine the energy dissipation rate, which is the ratio of the energy loss to the breaking time scale. Our experimental results show that the local wave steepness is highly correlated with the measured dissipation rate, indicating that the local wave steepness may serve as a good wave-breaking-strength indicator. To simulate the energy dissipation due to wave breaking, a simple eddy viscosity model is proposed and validated with our experimental measurements. Under the small viscosity assumption, the leading-order viscous effect is incorporated into the free-surface boundary conditions. Then, the kinematic viscosity is replaced with an eddy viscosity to account for energy loss. The breaking time and length scales, which depend weakly on wave breaking strength, are applied to evaluate the magnitude of the eddy viscosity using dimensional analysis. The estimated eddy viscosity is of the order of 10⁻³ m² s⁻¹ and demonstrates a strong dependence on wave breaking strength. Numerical simulations with the eddy viscosity estimation are performed to compare to the experimental results. Good agreement as regards energy dissipation due to wave breaking and surface profiles after wave breaking is achieved, which illustrates that the simple eddy viscosity model functions effectively. Bubble-induced skin-friction drag reduction and the abrupt transition to air-layer drag reduction BRIAN R. ELBING, ERIC S. WINKEL, KEARY A. LAY, STEVEN L. CECCIO, DAVID R. DOWLING, MARC PERLIN To investigate the phenomena of skin-friction drag reduction in a turbulent boundary layer (TBL) at large scales and high Reynolds numbers, a set of experiments has been conducted at the US Navy's William B. Morgan Large Cavitation Channel (LCC). Drag reduction was achieved by injecting gas (air) from a line source through the wall of a nearly zero-pressure-gradient TBL that formed on a flat-plate test model that was either hydraulically smooth or fully rough. Two distinct drag-reduction phenomena were investigated: bubble drag reduction (BDR) and air-layer drag reduction (ALDR). The streamwise distribution of skin-friction drag reduction was monitored with six skin-friction balances at downstream-distance-based Reynolds numbers to 220 million and at test speeds to 20.0ms−1. Near-wall bulk void fraction was measured at twelve streamwise locations with impedance probes, and near-wall (0 < Y < 5mm) bubble populations were estimated with a bubble imaging system. 
The instrument suite was used to investigate the scaling of BDR and the requirements necessary to achieve ALDR. Results from the BDR experiments indicate that: significant drag reduction (>25%) is limited to the first few metres downstream of injection; marginal improvement was possible with a porous-plate versus an open-slot injector design; BDR has negligible sensitivity to surface tension; bubble size is independent of surface tension downstream of injection; BDR is insensitive to boundary-layer thickness at the injection location; and no synergetic effect is observed with compound injection. Using these data, previous BDR scaling methods are investigated, but data collapse is observed only with the 'initial zone' scaling, which provides little information on downstream persistence of BDR. ALDR was investigated with a series of experiments that included a slow increase in the volumetric flux of air injected at free-stream speeds to 15.3ms−1. These results indicated that there are three distinct regions associated with drag reduction with air injection: Region I, BDR; Region II, transition between BDR and ALDR; and Region III, ALDR. In addition, once ALDR was established: friction drag reduction in excess of 80% was observed over the entire smooth model for speeds to 15.3ms−1; the critical volumetric flux of air required to achieve ALDR was observed to be approximately proportional to the square of the free-stream speed; slightly higher injection rates were required for ALDR if the surface tension was decreased; stable air layers were formed at free-stream speeds to 12.5ms−1 with the surface fully roughened (though approximately 50% greater volumetric air flux was required); and ALDR was sensitive to the inflow conditions. The sensitivity to the inflow conditions can be mitigated by employing a small faired step (10mm height in the experiment) that helps to create a fixed separation line. Bubble friction drag reduction in a high-Reynolds-number flat-plate turbulent boundary layer WENDY C. SANDERS, ERIC S. WINKEL, DAVID R. DOWLING, MARC PERLIN, STEVEN L. CECCIO Journal: Journal of Fluid Mechanics / Volume 552 / 10 April 2006 Published online by Cambridge University Press: 29 March 2006, pp. 353-380 Print publication: 10 April 2006 Turbulent boundary layer skin friction in liquid flows may be reduced when bubbles are present near the surface on which the boundary layer forms. Prior experimental studies of this phenomenon reached downstream-distance-based Reynolds numbers ($Re_{x}$) of several million, but potential applications may occur at $Re_{x}$ orders of magnitude higher. This paper presents results for $Re_{x}$ as high as 210 million from skin-friction drag-reduction experiments conducted in the USA Navy's William B. Morgan Large Cavitation Channel (LCC). Here, a near-zero-pressure-gradient flat-plate turbulent boundary layer was generated on a 12.9 m long hydraulically smooth flat plate that spanned the 3 m wide test section. The test surface faced downward and air was injected at volumetric rates as high as 0.38 m$^{3}$ s$^{-1}$ through one of two flush-mounted 40 $\mu$m sintered-metal strips that nearly spanned the test model at upstream and downstream locations. Spatially and temporally averaged shear stress and bubble-image-based measurements are reported here for nominal test speeds of 6, 12 and 18 m s$^{-1}$. The mean bubble diameter was $\sim$300 $\mu$m. 
At the lowest test speed and highest air injection rate, buoyancy pushed the air bubbles to the plate surface where they coalesced to form a nearly continuous gas film that persisted to the end of the plate with near-100% skin-friction drag reduction. At the higher two flow speeds, the bubbles generally remained distinct and skin-friction drag reduction was observed when the bubbly mixture was closer to the plate surface than 300 wall units of the boundary-layer flow without air injection, even when the bubble diameter was more than 100 of these wall units. Skin-friction drag reduction was lost when the near-wall shear induced the bubbles to migrate from the plate surface. This bubble-migration phenomenon limited the persistence of bubble-induced skin-friction drag reduction to the first few metres downstream of the air injector in the current experiments. Unsteady ripple generation on steep gravity–capillary waves LEI JIANG, HUAN-JAY LIN, WILLIAM W. SCHULTZ, MARC PERLIN Journal: Journal of Fluid Mechanics / Volume 386 / 10 May 1999 Print publication: 10 May 1999 Parasitic ripple generation on short gravity waves (4 cm to 10 cm wavelengths) is examined using fully nonlinear computations and laboratory experiments. Time-marching simulations show sensitivity of the ripple steepness to initial conditions, in particular to the crest asymmetry. Significant crest fore–aft asymmetry and its unsteadiness enhance ripple generation at moderate wave steepness, e.g. ka between 0.15 and 0.20, a mechanism not discussed in previous studies. The maximum ripple steepness (in time) is found to increase monotonically with the underlying (low-frequency bandpass) wave steepness in our simulations. This is different from the sub- or super-critical ripple generation predicted by Longuet-Higgins (1995). Unsteadiness in the underlying gravity–capillary waves is shown to cause ripple modulation and an interesting 'crest-shifting' phenomenon – the gravity–capillary wave crest and the first ripple on the forward slope merge to form a new crest. Including boundary layer effects in the free-surface conditions extends some of the simulations at large wave amplitudes. However, the essential process of parasitic ripple generation is nonlinear interaction in an inviscid flow. Mechanically generated gravity–capillary waves demonstrate similar characteristic features of ripple generation and a strong correlation between ripple steepness and crest asymmetry. Period tripling and energy dissipation of breaking standing waves LEI JIANG, MARC PERLIN, WILLIAM W. SCHULTZ Journal: Journal of Fluid Mechanics / Volume 369 / 25 August 1998 Print publication: 25 August 1998 We examine the dynamics of two-dimensional steep and breaking standing waves generated by Faraday-wave resonance. Jiang et al. (1996) found a steep wave with a double-peaked crest in experiments and a sharp-crested steep wave in computations. Both waveforms are strongly asymmetric in time and feature large superharmonics. We show experimentally that increasing the forcing amplitude further leads to breaking waves in three recurrent modes (period tripling): sharp crest with breaking, dimpled or flat crest with breaking, and round crest without breaking. Interesting steep waveforms and period-tripled breaking are related directly to the nonlinear interaction between the fundamental mode and the second temporal harmonic. Unfortunately, these higher-amplitude phenomena cannot be numerically modelled since the computations fail for breaking or nearly breaking waves. 
Based on the periodicity of Faraday waves, we directly estimate the dissipation due to wave breaking by integrating the support force as a function of the container displacement. We find that the breaking events (spray, air entrainment, and plunging) approximately double the wave dissipation. Highly nonlinear standing water waves with small capillary effect WILLIAM W. SCHULTZ, JEAN-MARC VANDEN-BROECK, LEI JIANG, MARC PERLIN We calculate spatially and temporally periodic standing waves using a spectral boundary integral method combined with Newton iteration. When surface tension is neglected, the non-monotonic behaviour of global wave properties agrees with previous computations by Mercer & Roberts (1992). New accurate results near the limiting form of gravity waves are obtained by using a non-uniform node distribution. It is shown that the crest angle is smaller than 90° at the largest calculated crest curvature. When a small amount of surface tension is included, the crest form is changed significantly. It is necessary to include surface tension to numerically reproduce the steep standing waves in Taylor's (1953) experiments. Faraday-wave experiments in a large-aspect-ratio rectangular container agree with our computations. This is the first time such high-amplitude, periodic waves appear to have been observed in laboratory conditions. Ripple formation and temporal symmetry breaking in the experiments are discussed. Moderate and steep Faraday waves: instabilities, modulation and temporal asymmetries Lei Jiang, Chao-Lung Ting, Marc Perlin, William W. Schultz Mild to steep standing waves of the fundamental mode are generated in a narrow rectangular cylinder undergoing vertical oscillation with forcing frequencies of 3.15 Hz to 3.34 Hz. A precise, non-intrusive optical wave profile measurement system is used along with a wave probe to accurately quantify the spatial and temporal surface elevations. These standing waves are also simulated by a two-dimensional spectral Cauchy integral code. Experiments show that contact-line effects increase the viscous natural frequency and alter the neutral stability curves. Hence, as expected, the addition of the wetting agent Photo Flo significantly changes the stability curve and the hysteresis in the response diagram. Experimentally, we find strong modulations in the wave amplitude for some forcing frequencies higher than 3.30 Hz. Reducing contact-line effects by Photo-Flo addition suppresses these modulations. Perturbation analysis predicts that some of this modulation is caused by noise in the forcing signal through 'sideband resonance', i.e. the introduction of small sideband forcing can generate large modulations of the Faraday waves. The analysis is verified by our numerical simulations and physical experiments. Finally, we observe experimentally a new form of steep standing wave with a large symmetric double-peaked crest, while simulation of the same forcing condition results in a sharper crest than seen previously. Both standing wave forms appear at a finite wave steepness far smaller than the maximum steepness for the classical standing wave and a surface tension far smaller than that for a Wilton ripple. In both physical and numerical experiments, a stronger second harmonic (in time) and temporal asymmetry in the wave forms suggest a 1:2 resonance due to a non-conventional quartet interaction. Increasing wave steepness leads to a new form of breaking standing waves in physical experiments. 
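The period-tripling study summarized above estimates the dissipation due to breaking by integrating the support force over the container displacement. The following is a minimal sketch of that bookkeeping only; the function name and the synthetic force/displacement records are illustrative assumptions, not the authors' data or code.

```python
# Illustrative sketch (not the authors' code): energy input per forcing period
# obtained from support-force and container-displacement records. For periodic
# motion the mechanical energy returns to its initial value each cycle, so the
# net work done by the support over one period balances the energy dissipated.
import numpy as np

def dissipation_per_period(force, displacement):
    """Net work done by the support over one sampled period, W = ∮ F dz."""
    return np.trapz(force, displacement)

# Synthetic, hypothetical records sampled over one full cycle.
t = np.linspace(0.0, 1.0, 2001)
z = 1e-3 * np.sin(2 * np.pi * t)                     # container displacement (m)
F = 50.0 * np.sin(2 * np.pi * t + 0.1) + 9.81 * 5.0  # support force (N), with a small phase lag

print(f"estimated dissipation per period: {dissipation_per_period(F, z):.4f} J")
```

The constant (weight-like) part of the force integrates to zero over a closed cycle; only the out-of-phase component contributes, which is why the quantity doubles, as reported, when breaking events add extra dissipation.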
Boundary conditions in the vicinity of the contact line at a vertically oscillating upright plate: an experimental investigation Chao-Lung Ting, Marc Perlin To determine a suitable boundary-condition model for the contact line in oscillatory flow, an upright plate, oscillated vertically with sinusoidal motion in dye-laden water with an air interface, is considered experimentally. Constrained by the desirability of a two-dimensional flow field, eight frequencies in the 1–20 Hz range, each with seven different stroke amplitudes (0.5–6 mm) are chosen. The Reynolds number varies from 1.6 to 1878.3 in the experiments, large relative to the Reynolds number in the conventional uni-directional contact-line experiments (e.g. Dussan V.'s 1974 experiments). To facilitate prediction, a high-speed video system is used to record the plate displacement, the contact-line displacement, and the dynamic behaviour of the contact angle. Several interesting contact-line phenomena are shown in the present results. An expression for λ, the dimensionless capillary coefficient, is formulated such that the dynamic behaviour at the contact line is predicted reasonably well. A particle-tracking-velocimetry (PTV) technique is used to detect particle trajectories near the plate such that the boundary condition along the entire plate can be modelled. Two sets of PTV experiments are conducted. One set is for stick contact-line motion, the other set is for stick–slip contact-line motion. The results from the PTV experiments show that a vortex is formed near the meniscus in the stick-slip contact-line experiments; however, in the stick contact-line experiments, no such vortex is present. Using the present experimental results, a model is developed for the boundary condition along the vertically oscillating vertical plate. In this model, slip occurs within a specific distance from the contact line while the flow obeys the no-slip condition outside this slip region. Also, the mean slip length is determined for each experimental stroke amplitude. On parasitic capillary waves generated by steep gravity waves: an experimental investigation with spatial and temporal measurements Marc Perlin, Huanjay Lin, Chao-Lung Ting Published online by Cambridge University Press: 26 April 2006, p. 376 Journal: Journal of Fluid Mechanics / Volume 255 / October 1993 An experimental investigation of steep, high-frequency gravity waves (∼ 4 to 5 Hz) and the parasitic capillary waves they generate is reported. Spatial, as well as temporal, non-intrusive surface measurements are made using a new technique. This technique employs cylindrical lenses to magnify the vertical dimension in conjunction with an intensified, high-speed imaging system, facilitating the measurement of the disparate scales with a vertical surface-elevation resolution on the order of 10 μm. Thus, high-frequency parasitic capillary waves and the underlying gravity wave are measured simultaneously and accurately in space and time. Time series of spatial surface-elevation measurements are presented. It is shown that the location of the capillary waves is quasi-stationary in a coordinate system moving with the phase speed of the underlying gravity wave. Amplitudes and wavenumbers of the capillaries are modulated in space; however, they do not propagate with respect to the gravity wave. As capillary amplitudes are seen to decrease significantly and then increase again in a recurrence-like phenomenon, it is conjectured that resonance mechanisms are present. 
Measured surface profiles are compared to the theories of Longuet-Higgins (1963) and Crapper (1970) and the exact, two-dimensional numerical formulation of Schwartz & Vanden-Broeck (1979). Significant discrepancies are found between experimental and theoretical wavetrains in both amplitude and wavenumber. The theoretical predictions of the capillary wave amplitudes are much smaller than the measured amplitudes when the measured phase speed, amplitude, and wavelength of the gravity wave are used in the Longuet-Higgins model. In addition, this theory predicts larger wavenumbers of the capillaries as compared to experiments. The Crapper model predicts the correct order-of-magnitude capillary wave amplitude on the forward face of the gravity wave, but predicts larger amplitudes on the leeward face in comparison to the experiments. Also, it predicts larger capillary wavenumbers than are experimentally determined. Comparison of the measured profiles to multiple solutions of the stationary, symmetric, periodic solutions determined using the Schwartz & Vanden-Broeck numerical formulation shows similar discrepancies. In particular, the assumed symmetry of the waveform about crest and trough in the numerical model precludes a positive comparison with the experiments, whose underlying waves exhibit significantly larger capillaries on their forward face than on their leeward face. Also, the a priori unknown multiplicity of numerical solutions for the same dimensionless surface tension and steepness parameters complicates comparison. Finally, using the temporal periodicity of the wave field, composite images of several successive wavelengths are constructed from which potential energy and surface energy are calculated as a function of distance downstream. Experiments on ripple instabilities. Part 3. Resonant quartets of the Benjamin–Feir type Marc Perlin, Joe Hammack Journal: Journal of Fluid Mechanics / Volume 229 / August 1991 Instabilities and long-time evolution of gravity-capillary wavetrains (ripples) with moderate steepnesses (ε < 0.3) are studied experimentally and analytically. Wave-trains with frequencies of 8 ≤ f ≤ 25 Hz are generated mechanically in a channel containing clean, deep water; no artificial perturbations are introduced. Frequency spectra are obtained from in situ measurements; two-dimensional wavenumber spectra are obtained from remote sensing of the water surface using a high-speed imaging system. The analytical models are inviscid, uncoupled NLS (nonlinear Schrödinger) equations: one that describes the temporal evolution of longitudinal modulations and one that describes the spatial evolution of transverse modulations. The experiments show that the evolution of wavetrains with sensible amplitudes and frequencies exceeding 9.8 Hz is dominated by modulational instabilities, i.e. resonant quartet interactions of the Benjamin–Feir type. These quartet interactions remain dominant even for wavetrains that are unstable to resonant triad interactions (f > 19.6 Hz) – if selective amplification does not occur (see Parts 1 and 2). The experiments further show that oblique perturbations with the same frequency as the underlying wavetrain, i.e. rhombus-quartet instabilities, amplify more rapidly and dominate all other modulational instabilities. 
The inviscid, uncoupled NLS equations predict the existence of modulational instabilities for wavetrains with frequencies exceeding 9.8 Hz, typically underpredict the bandwidth of unstable transverse modulations, typically overpredict the bandwidth of unstable longitudinal modulations, and do not predict the dominance of the rhombus-quartet instability. When the effects of weak viscosity are incorporated into the NLS models, the predicted bandwidths of unstable modulations are reduced, which is consistent with our measurements for longitudinal modulations, but not with our measurements for transverse modulations. Both the experiments and NLS equations indicate that wavetrains in the frequency range 6.4–9.8 Hz are stable to modulational instabilities. However, in these experiments, wavetrains with sensible amplitudes excite one of the members of the Wilton ripples family. When second-harmonic resonance occurs, both the first- and second-harmonic wavetrains undergo rhombus-quartet instabilities. When third-harmonic resonance occurs, only the third-harmonic wavetrain undergoes rhombus-quartet instabilities. Experiments on ripple instabilities. Part 2. Selective amplification of resonant triads Marc Perlin, Diane Henderson, Joe Hammack Published online by Cambridge University Press: 26 April 2006, pp. 51-80 Resonant three-wave interactions among capillary–gravity water waves are studied experimentally using a test wavetrain and smaller background waves (noise) generated mechanically in a channel. The spectrum of the background waves is varied from broad-banded to one with discrete components. When the noise spectrum is broad-banded, the test wavetrain amplifies all waves in its low-frequency band of allowable triads Bℓ, as anticipated from RIT (resonant interaction theory). When the noise spectrum has a discrete component in the high-frequency band of allowable triads Bh, the test wavetrain selectively amplifies a triad with two waves from Bℓ, contrary to expectations based on RIT. (Although, in accordance with RIT, no waves in Bh are amplified.) We conjecture that the mechanism for selective amplification comprises a sequence of exceedingly weak, higher-order interactions, normally neglected in RIT. This sequence allows the small amount of energy in a discrete spectral component to cascade to two waves in Bℓ, which then amplify, as anticipated from RIT, and dominate all other waves in Bℓ. The conjectured sequence of nonlinear interactions is tested using both frequency and wave-vector data, which are obtained by in situ probes and by remote sensing of the water surface with a high-speed imaging system. Our predictions of selective amplification, as well as its absence, are consistent with all of the experiments presented herein and in Part 1. Selective amplification occurs for signal-to-noise (amplitude) ratios as large as 200, and its effects are measurable within ten wavelengths of the wavemaker. When selective amplification occurs, it has a profound impact on the long-time evolution of a ripple wavetrain.
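For reference, the modulational (Benjamin–Feir-type) instability invoked in the two ripple-instability abstracts above is the standard plane-wave instability of a cubic NLS equation. A textbook statement, in a generic normalization that need not match the papers' gravity–capillary coefficients or sign conventions, is

$$ i\!\left(\frac{\partial A}{\partial t} + c_g \frac{\partial A}{\partial x}\right) + \lambda\,\frac{\partial^2 A}{\partial x^2} + \mu\,|A|^2 A = 0 , $$

whose uniform wavetrain $A = a_0 \exp(i\mu a_0^2 t)$ is unstable to long sideband perturbations precisely when $\lambda\mu > 0$, with the unstable modulation-wavenumber band $0 < K < a_0\sqrt{2\mu/\lambda}$. For gravity–capillary carriers the sign of $\lambda\mu$ changes with carrier frequency, which is consistent with the stable 6.4–9.8 Hz window quoted above.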
CommonCrawl
Publisher's Erratum Erratum to: Statistical methodology for age-adjustment of the GH-2000 score detecting growth hormone misuse Dankmar Böhning1, Walailuck Böhning2, Nishan Guha2,3, David A. Cowan4, Peter H. Sönksen2 & Richard I. G. Holt2 The original article was published in BMC Medical Research Methodology 2016 16:147 After publication of the original article [1], it came to the authors' attention that there was an error affecting equation (7). A correction to this equation was submitted by the authors during proofing, but was not implemented correctly by the Production team. The correct version of equation (7) is as follows: $$ \begin{aligned} \widehat{\beta^{*}} &= \frac{\sum_{i=1}^{n}\left(Y_i^{*}-\overline{Y^{*}}\right)\left(x_i-\overline{x}\right)}{\sum_{i=1}^{n}\left(x_i-\overline{x}\right)^{2}} = \frac{\sum_{i=1}^{n}\left(Y_i-\widehat{\beta}x_i-\left(\overline{Y}-\widehat{\beta}\overline{x}\right)\right)\left(x_i-\overline{x}\right)}{\sum_{i=1}^{n}\left(x_i-\overline{x}\right)^{2}} \\ &= \frac{\sum_{i=1}^{n}\left(Y_i-\overline{Y}\right)\left(x_i-\overline{x}\right)-\widehat{\beta}\sum_{i=1}^{n}\left(x_i-\overline{x}\right)^{2}}{\sum_{i=1}^{n}\left(x_i-\overline{x}\right)^{2}} \\ &= \frac{\sum_{i=1}^{n}\left(Y_i-\overline{Y}\right)\left(x_i-\overline{x}\right)}{\sum_{i=1}^{n}\left(x_i-\overline{x}\right)^{2}}-\widehat{\beta}\,\frac{\sum_{i=1}^{n}\left(x_i-\overline{x}\right)^{2}}{\sum_{i=1}^{n}\left(x_i-\overline{x}\right)^{2}} = 0. \end{aligned} $$ Equation (7) has also been updated in the original article in order to rectify this publisher's error. Böhning D, Böhning W, Guha N, Cowan DA, Sönksen PH, Holt RI. Statistical methodology for age-adjustment of the GH-2000 score detecting growth hormone misuse. BMC Med Res Methodol. 2016;16:147. doi:10.1186/s12874-016-0246-8. Southampton Statistical Sciences Research Institute, University of Southampton, Southampton, SO17 1BJ, UK Dankmar Böhning Human Development and Health Academic Unit, University of Southampton Faculty of Medicine, IDS Building (MP887), Southampton General Hospital, Tremona Road, Southampton, SO16 6YD, UK Walailuck Böhning, Nishan Guha, Peter H. Sönksen & Richard I. G. Holt Nuffield Division of Clinical Laboratory Sciences, UK Department of Clinical Biochemistry Level 4, University of Oxford, John Radcliffe Hospital Headley Way, Headington, Oxford, OX3 9DU, UK Nishan Guha Department of Pharmacy and Forensic Science, Drug Control Centre, King's College London, 150 Stamford Street, London, SE1 9NH, UK David A. Cowan Walailuck Böhning Peter H. Sönksen Richard I. G. Holt Correspondence to Dankmar Böhning. Böhning, D., Böhning, W., Guha, N. et al. Erratum to: Statistical methodology for age-adjustment of the GH-2000 score detecting growth hormone misuse. BMC Med Res Methodol 16, 164 (2016). https://doi.org/10.1186/s12874-016-0262-8
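A quick numerical sanity check of the corrected equation (7) — a minimal sketch, not part of the original erratum: fit a linear age trend, subtract it to form the age-adjusted scores, and confirm that re-regressing the adjusted scores on age gives a slope of zero up to floating-point error. The variable names and the simulated data are illustrative assumptions.

```python
# Minimal check of equation (7): if Y* = Y - beta_hat * x, then the least-squares
# slope of Y* on x is identically zero.
import numpy as np

rng = np.random.default_rng(0)
x = rng.uniform(20, 40, size=200)             # ages (synthetic)
y = 1.5 + 0.8 * x + rng.normal(0, 1, 200)     # GH-2000-like scores (synthetic)

beta_hat = np.sum((y - y.mean()) * (x - x.mean())) / np.sum((x - x.mean()) ** 2)
y_star = y - beta_hat * x                     # age-adjusted scores Y*

beta_star = np.sum((y_star - y_star.mean()) * (x - x.mean())) / np.sum((x - x.mean()) ** 2)
print(beta_star)                              # ~0 (e.g. 1e-16), as equation (7) states
```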
CommonCrawl
Threshold dynamics of a delayed nonlocal reaction-diffusion cholera model Novel entire solutions in a nonlocal 2-D discrete periodic media for bistable dynamics On a matrix-valued PDE characterizing a contraction metric for a periodic orbit Peter Giesl Department of Mathematics, University of Sussex, Falmer, Brighton BN1 9QH, United Kingdom Received June 2020 Revised September 2020 Published September 2021 Early access October 2020 The stability and the basin of attraction of a periodic orbit can be determined using a contraction metric, i.e., a Riemannian metric with respect to which adjacent solutions contract. A contraction metric does not require knowledge of the position of the periodic orbit and is robust to perturbations. In this paper we characterize such a Riemannian contraction metric as matrix-valued solution of a linear first-order Partial Differential Equation. This enables the explicit construction of a contraction metric by numerically solving this equation in [7]. In this paper we prove existence and uniqueness of the solution of the PDE and show that it defines a contraction metric. Keywords: Periodic orbit, stability, contraction metric, converse theorem, matrix-valued partial differential equation, existence, uniqueness. Mathematics Subject Classification: Primary: 34C25, 34D20; Secondary: 37C27. Citation: Peter Giesl. On a matrix-valued PDE characterizing a contraction metric for a periodic orbit. Discrete & Continuous Dynamical Systems - B, 2021, 26 (9) : 4839-4865. doi: 10.3934/dcdsb.2020315 V. A. Boĭchenko and G. A. Leonov, Lyapunov orbital exponents of autonomous systems, Vestnik Leningrad. Univ. Mat. Mekh. Astronom., 3 (1988), 7–10. Google Scholar G. Borg, A condition for the existence of orbitally stable solutions of dynamical systems, Kungl. Tekn. Högsk. Handl. Stockholm, 153 (1960), 12 pp. Google Scholar C. Chicone, Ordinary Differential Equations with Applications, Texts in Applied Mathematics, 34. Springer, New York, 2006. Google Scholar F. Forni and R. Sepulchre, A differential Lyapunov framework for contraction analysis, IEEE Trans. Automat. Control, 59 (2014), 614-628. doi: 10.1109/TAC.2013.2285771. Google Scholar P. Giesl, Necessary conditions for a limit cycle and its basin of attraction, Nonlinear Anal., 56 (2004), 643-677. doi: 10.1016/j.na.2003.07.020. Google Scholar P. Giesl, Converse theorems on contraction metrics for an equilibrium, J. Math. Anal. Appl., 424 (2015), 1380-1403. doi: 10.1016/j.jmaa.2014.12.010. Google Scholar P. Giesl, Computation of a contraction metric for a periodic orbit using meshfree collocation, SIAM J. Appl. Dyn. Syst., 18 (2019), 1536-1564. doi: 10.1137/18M1220182. Google Scholar P. Giesl, Converse theorem on a global contraction metric for a periodic orbit, Discrete Cont. Dyn. Syst., 39 (2019), 5339-5363. doi: 10.3934/dcds.2019218. Google Scholar P. Giesl and H. Wendland, Kernel-based discretisation for solving matrix-valued PDEs, SIAM J. Numer. Anal., 56 (2018), 3386-3406. doi: 10.1137/16M1092842. Google Scholar P. Hartman, Ordinary Differential Equations, John Wiley & Sons, Inc., New York-London-Sydney, 1964. Google Scholar P. Hartman and C. Olech, On global asymptotic stability of solutions of differential equations, Trans. Amer. Math. Soc., 104 (1962), 154-178. doi: 10.2307/1993939. Google Scholar A. Yu. Kravchuk, G. A. Leonov and D. V. Ponomarenko, Criteria for strong orbital stability of trajectories of dynamical systems. I, Differentsial'nye Uravneniya, 28 (1992), 1507-1520. Google Scholar G. A. 
Leonov, On stability with respect to the first approximation, Prikl. Mat. Mekh., 62 (1998), 548-555. doi: 10.1016/S0021-8928(98)00067-7. Google Scholar G. A. Leonov, I. M. Burkin and A. I. Shepelyavyi, Frequency Methods in Oscillation Theory, Mathematics and its Applications, 357. Kluwer Academic Publishers Group, Dordrecht, 1996. doi: 10.1007/978-94-009-0193-3. Google Scholar W. Lohmiller and J.-J. E. Slotine, On contraction analysis for non-linear systems, Automatica J. IFAC, 34 (1998), 683-696. doi: 10.1016/S0005-1098(98)00019-3. Google Scholar Ian R. Manchester and J.-J. E. Slotine, Transverse contraction criteria for existence, stability, and robustness of a limit cycle, Systems Control Lett., 63 (2014), 32-38. doi: 10.1016/j.sysconle.2013.10.005. Google Scholar G. R. Sell and Y. You, Dynamics of Evolutionary Equations, Applied Mathematical Sciences, 143. Springer-Verlag, New York, 2002. doi: 10.1007/978-1-4757-5037-9. Google Scholar B. T. Stenström, Dynamical systems with a certain local contraction property, Math. Scand., 11 (1962), 151-155. doi: 10.7146/math.scand.a-10661. Google Scholar Peter Giesl. Converse theorem on a global contraction metric for a periodic orbit. Discrete & Continuous Dynamical Systems, 2019, 39 (9) : 5339-5363. doi: 10.3934/dcds.2019218 Qi Yao, Linshan Wang, Yangfan Wang. Existence-uniqueness and stability of the mild periodic solutions to a class of delayed stochastic partial differential equations and its applications. Discrete & Continuous Dynamical Systems - B, 2021, 26 (9) : 4727-4743. doi: 10.3934/dcdsb.2020310 Demetris Hadjiloucas. Stochastic matrix-valued cocycles and non-homogeneous Markov chains. Discrete & Continuous Dynamical Systems, 2007, 17 (4) : 731-738. doi: 10.3934/dcds.2007.17.731 Yongge Tian. A survey on rank and inertia optimization problems of the matrix-valued function $A + BXB^{*}$. Numerical Algebra, Control & Optimization, 2015, 5 (3) : 289-326. doi: 10.3934/naco.2015.5.289 Daniel Alpay, Eduard Tsekanovskiĭ. Subclasses of Herglotz-Nevanlinna matrix-valued functtons and linear systems. Conference Publications, 2001, 2001 (Special) : 1-13. doi: 10.3934/proc.2001.2001.1 Hongbin Chen, Yi Li. Existence, uniqueness, and stability of periodic solutions of an equation of duffing type. Discrete & Continuous Dynamical Systems, 2007, 18 (4) : 793-807. doi: 10.3934/dcds.2007.18.793 Sigurdur Freyr Hafstein. A constructive converse Lyapunov theorem on exponential stability. Discrete & Continuous Dynamical Systems, 2004, 10 (3) : 657-678. doi: 10.3934/dcds.2004.10.657 Antonio Siconolfi, Gabriele Terrone. A metric proof of the converse Lyapunov theorem for semicontinuous multivalued dynamics. Discrete & Continuous Dynamical Systems, 2012, 32 (12) : 4409-4427. doi: 10.3934/dcds.2012.32.4409 Meina Gao, Jianjun Liu. A degenerate KAM theorem for partial differential equations with periodic boundary conditions. Discrete & Continuous Dynamical Systems, 2020, 40 (10) : 5911-5928. doi: 10.3934/dcds.2020252 Giovanni Russo, Fabian Wirth. Matrix measures, stability and contraction theory for dynamical systems on time scales. Discrete & Continuous Dynamical Systems - B, 2021 doi: 10.3934/dcdsb.2021188 Helge Dietert, Josephine Evans, Thomas Holding. Contraction in the Wasserstein metric for the kinetic Fokker-Planck equation on the torus. Kinetic & Related Models, 2018, 11 (6) : 1427-1441. doi: 10.3934/krm.2018056 Anatoli F. Ivanov, Sergei Trofimchuk. Periodic solutions and their stability of a differential-difference equation. 
Conference Publications, 2009, 2009 (Special) : 385-393. doi: 10.3934/proc.2009.2009.385 Peter Giesl, Holger Wendland. Construction of a contraction metric by meshless collocation. Discrete & Continuous Dynamical Systems - B, 2019, 24 (8) : 3843-3863. doi: 10.3934/dcdsb.2018333 Shigui Ruan, Junjie Wei, Jianhong Wu. Bifurcation from a homoclinic orbit in partial functional differential equations. Discrete & Continuous Dynamical Systems, 2003, 9 (5) : 1293-1322. doi: 10.3934/dcds.2003.9.1293 Roberto Triggiani. A matrix-valued generator $\mathcal{A}$ with strong boundary coupling: A critical subspace of $D((-\mathcal{A})^{\frac{1}{2}})$ and $D((-\mathcal{A}^*)^{\frac{1}{2}})$ and implications. Evolution Equations & Control Theory, 2016, 5 (1) : 185-199. doi: 10.3934/eect.2016.5.185 Jaime Angulo Pava, Borys Alvarez Samaniego. Existence and stability of periodic travelling-wavesolutions of the Benjamin equation. Communications on Pure & Applied Analysis, 2005, 4 (2) : 367-388. doi: 10.3934/cpaa.2005.4.367 Tuoc Phan, Grozdena Todorova, Borislav Yordanov. Existence uniqueness and regularity theory for elliptic equations with complex-valued potentials. Discrete & Continuous Dynamical Systems, 2021, 41 (3) : 1071-1099. doi: 10.3934/dcds.2020310 Anete S. Cavalcanti. An existence proof of a symmetric periodic orbit in the octahedral six-body problem. Discrete & Continuous Dynamical Systems, 2017, 37 (4) : 1903-1922. doi: 10.3934/dcds.2017080 Nguyen Thieu Huy, Ngo Quy Dang. Dichotomy and periodic solutions to partial functional differential equations. Discrete & Continuous Dynamical Systems - B, 2017, 22 (8) : 3127-3144. doi: 10.3934/dcdsb.2017167 Ammari Zied, Liard Quentin. On uniqueness of measure-valued solutions to Liouville's equation of Hamiltonian PDEs. Discrete & Continuous Dynamical Systems, 2018, 38 (2) : 723-748. doi: 10.3934/dcds.2018032
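The contraction-metric abstract above (before the reference list) characterizes a Riemannian contraction metric for a periodic orbit as the matrix-valued solution of a linear first-order PDE. For orientation, a standard textbook form of the underlying transverse-contraction condition is given below; the paper's exact PDE operator, right-hand side, and normalization may differ.

$$ v^{\top}\!\left( M'(x) + Df(x)^{\top} M(x) + M(x)\,Df(x) \right) v \;\le\; -2\gamma\, v^{\top} M(x)\, v \qquad \text{for all } v \text{ with } v \perp f(x), $$

where $\dot{x} = f(x)$, $M(x)$ is the symmetric positive-definite metric, $M'(x) = \sum_i \frac{\partial M}{\partial x_i}(x)\, f_i(x)$ is its orbital derivative along the flow, and $\gamma > 0$ is an exponential rate at which adjacent solutions contract transversally to the periodic orbit.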
CommonCrawl
What is the current school of thought concerning accuracy of numeric conversions of measurements? I posted this question earlier today on the Mathematics site (https://math.stackexchange.com/q/3988907/96384), but was advised it would be better here. I had a heated argument with someone online who claimed to be a school mathematics teacher of many years standing. The question which spurred this discussion was something along the lines of: "A horseman was travelling from (location A) along a path through a forest to (location B) during the American War of Independence. The journey was of 22 miles. How far was it in kilometres?" To my mind, the answer is trivially obtained by multiplying 22 by 1.6 to get 35.2 km, which can be rounded appropriately to 35 km. I was roundly scolded by this ancient mathematics teacher for a) not using the official conversion factor of 1.60934 km per mile and b) not reporting the correct value as 35.405598 km. Now I have serious difficulties with this analysis. My argument is: this is a man riding on horseback through a forest in a pre-industrial age. It would be impractical and impossible to measure such a distance to any greater precision than (at best) to the nearest 20 metres or so, even in this day and age. Yet the answer demanded was accurate to the nearest millimetre. But when I argued this, I was told that it was not my business to round the numbers. I was to perform the conversion task given the numbers I was quoted, and report the result for the person asking the question to decide how accurately the numbers are to be interpreted. Is that the way of things in school? As a trained engineer, my attitude is that it is part of the purview of anybody studying mathematics to be able to estimate and report appropriate limits of accuracy, otherwise you get laughably ridiculous results like this one. I confess I have never had a good relationship with teachers, apart from my A-level physics teacher whom I adored, so I expect I will be given a hard time over my inability to understand the basics of what I have failed to learn during the course of the above. secondary-education real-numbers Prime Mover $\begingroup$ This isn't substantial enough to leave as an answer, but basically you're correct for the exact reasons you stated. And although nobody likes to be scolded by a teacher, especially when the teacher is wrong, that rarely (never?) happens here as long as questions are in good faith like yours is. I'm sorry about your previous encounters with teachers; maybe we can make up for them a bit here. $\endgroup$ – Thierry $\begingroup$ I've seen numerous instances of this teacher's error, converting to a new unit of measure and producing an absurd level of accuracy. I was taught at a very young age that normal human body temperature is 98.6 degrees Fahrenheit. More recently, 100.4 degrees is the official border for suspicion of covid19. These temperatures happen to result from converting Celsius temperatures of 37 and 38 degrees, respectively. (See also my answer at matheducators.stackexchange.com/questions/1572 for a similar example.) $\endgroup$ – Andreas Blass $\begingroup$ ancient mathematics teacher --- Apparently not too ancient, as this would have been ridiculously silly until calculators arrived (in my school this was 1975, when two or three students had one). 
I looked at several books I have from within 20 years of this, and most didn't even have English-metric conversions. Two that did, Dolciani's Modern Algebra. Structure and Method. Book 1 (1973 edition) and Lankford/Clark's Basic Ideas of Mathematics (1953), only gave the approximations 1 km = 0.6 mile (Dolciani, p. 577) and 1 km = 5/8 mile (L/K, p. 497) $\endgroup$ – Dave L Renfro $\begingroup$ Related: Brian Kernighan (of K&R C book) gave a guest lecture to Harvard's CS 50 course a decade back that was almost entirely making fun of innumeracy errors, including this over-precision class of misunderstanding: youtu.be/kw9KwjJCJH8?t=2170 $\endgroup$ $\begingroup$ Because of the coastline paradox, it's not really clear that that level of precision is even meaningful. en.wikipedia.org/wiki/Coastline_paradox $\endgroup$ – Ryan_L You're right. The random, anonymous person you met online is not competent. This is basic mathematical literacy, as taught in every freshman chemistry and physics class. $\begingroup$ Your answer may have shown the source of the problem. I was taught the principle "don't produce significant digits out of thin air" in a physics class, but never in a mathematics class. If that experience is widespread, people may have picked up the idea that this principle is limited to applications classes, whereas in mathematics classes we should "be precise" even if the precision is garbage. $\endgroup$ $\begingroup$ +1 for the abruptness of the answer $\endgroup$ $\begingroup$ @AndreasBlass's point could extend to thinking about spurious precision in the input $\endgroup$ – Chris H $\begingroup$ Outside of SE and some other isolated pockets, I'd suggest the first sentence of this answer is near universal. $\endgroup$ – Lamar Latrell $\begingroup$ @AndreasBlass: Yes, I'm sure you're right. I think the underlying problem is that to get this kind of thing right, you need strong number sense, which by far the majority of K-6 teachers lack. The following is a question that the majority even of my college-level science majors can't do without extensive help: If a real number is rounded to the nearest integer, what is the maximum rounding error? You'll also see this lack of number sense in the desire to teach and learn heuristics as if they were absolute rules, or the belief that such things are class rules set by each teacher. $\endgroup$ The product of two numbers should be given with as many significant digits as the least precise of the numbers multiplied (see https://www.nku.edu/~intsci/sci110/worksheets/rules_for_significant_figures.html). 1.60934 km/mile has six significant digits (or, if a mile is defined to be an exact number of km, then the conversion factor has an infinite number of significant digits). 22 miles has two significant figures. We take the smaller of these two, which is two significant figures from the 22 miles. This means that rounding to 35 km is correct. It is a good idea to use, during one's work, at least one significant digit more than the final quantity needed, so it would have been good practice to use the conversion factor of 1.61 if this were a test, but for a casual online conversation, 1.6 is fine. The importance of getting significant figures correctly pales in comparison of basic decency. Even if this person had been correct, scolding you would not be. If you believe that someone is in error, you should express that view politely. 
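A minimal sketch of the rounding rule just described, before the closing remarks; the helper function and its name are mine, not anything prescribed in the thread, and Python's default float printing and round-half-even behaviour are incidental.

```python
# Round the product to the significant figures of the least precise factor.
from math import floor, log10

def round_sig(value, sig_figs):
    """Round a nonzero value to the given number of significant figures."""
    exponent = floor(log10(abs(value)))
    return round(value, sig_figs - 1 - exponent)

miles = 22              # two significant figures
km_per_mile = 1.60934   # six significant figures

exact = miles * km_per_mile   # keep full precision while working (~35.40548)
print(round_sig(exact, 2))    # -> 35.0, i.e. report 35 km, as in the answer above
```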
It appears that this person's civility may have atrophied from having a captive audience with such a power differential that they have been able to dispense with basic politeness. AcccumulationAcccumulation $\begingroup$ What dismays me most is the thought of those generations of students whose experience of mathematics will have been irretrievably compromised. "Follow these rules! Don't argue! You are not allowed the privilege of even expecting it to make any sense!" $\endgroup$ – Prime Mover $\begingroup$ The one struggle I still have with significant digits - convert 20 miles. Use 1.6, and get 32. Will you choose to keep significant digits to 1, or treat the zero of 20 as significant? (I realize, scientific notation takes care of this, indicating whether the zero was significant or not.) $\endgroup$ – JTP - Apologise to Monica $\begingroup$ @JTP-ApologisetoMonica This is a problem of all numbers whose SF are fewer than the number of figures on the left of the DP. Are the zeroes actually significant figures or are they merely placeholders? In such a situation it is up to the person communicating the number to specify to how many SF "20" is reported to -- either 1 or 2. This issue crops up over and over again. $\endgroup$ $\begingroup$ @JTP-ApologisetoMonica You would have to take the context into consideration, e.g. if there are other, similar, measurements to 2 S.F. in the same document, use that. Or you could write "about 32 km" to indicate that the amount is not necessarily as accurate as written. $\endgroup$ – Andrew Morton $\begingroup$ If they wanted an answer of 35.405598 km they would have had to specify the rider rode 22.000000 miles. $\endgroup$ – Schwern Here's a joke I like to tell when people could use a reminder about precision vs accuracy: A tour guide at Giza was explaining how the Pyramids were 4507 years old. Someone in the crowd asked: "That's oddly specific. How do we know this?" "Well. I was told they were 4500 years old when I started working here 7 years ago." I'm not sure the grumpy teacher you mentioned would be amused, though. Eric DuminilEric Duminil $\begingroup$ Heh! I had that in mind when I was pondering on this while driving around earlier: "This hominid skull is one hundred thousand and seventeen years old. And five months and two days." $\endgroup$ $\begingroup$ What a great joke! Good one! $\endgroup$ – Fattie Just to play the devil's teacher's advocate here: one can make a point that rounding should be generally avoided but measurement uncertainty instead be expressed explicitly. Specifically, rounding errors should always be much smaller than measurement errors. Now, if you have a figure of 22 miles, I'd interpret this as $(22\pm0.5)\mathrm{mi} = (35.4\pm0.8)\mathrm{km}$. I specified one more digit, but not only did I represent the center value better (which in your rounding adds a whopping 50% error), I also captured that the inaccuracy of that result is even bigger than simply $35\:\mathrm{km}$ would suggest. In particular, $36\:\mathrm{km}$ is also within the range! How many digits to write out is then uncritical; in physics convention is to write two non-significant digits in both the value and uncertainty figure. One is usually enough, but when completely omitting non-significant digits you do introduce excessive extra error. If the numbers are just stored in a computer, you should typically keep all the digits of the binary number representation – with double precision that means you keep a rather absurd 16 decimals! 
It doesn't really increase the precision, but it also doesn't really cost anything or suggest too high precision (because uncertainty is stored separately), and it makes sure that rounding really will have no contribution to the error of the final result. leftaroundaboutleftaroundabout $\begingroup$ Where does this convention of two non-significant digits come from? I'm not sure I've ever heard of it, so I would certainly disagree with saying that it's the convention in physics. $\endgroup$ $\begingroup$ @DavidZ it's not a universal convention, but it does seem to be used by most big-scale experimental physics projects nowadays. It is also the form in which NIST lists physical constants, e.g. the vacuum impedance is $376.730 313 668(57)\: \Omega$. $\endgroup$ – leftaroundabout $\begingroup$ Comments are not for extended discussion; I've moved the conversation that was attached to this answer to chat. $\endgroup$ – Chris Cunningham ♦ $\begingroup$ Or nitpick if "International Mile" or "US Statue Miles" was used? those have different conversion factors and for civil war era the quoted 1.60934 would be wrong if survey miles are used, those would be rounded to 1.60935. So the whole thing is kind of pointless if you have no agreement on measurment precision and time. $\endgroup$ – schlenk $\begingroup$ @EricDuminil Actually there is somewhat infamous contract during colonization of Africa, where german negotiators wrote miles in a contract and the local chieftain only knew british miles, but the negotiators insisted they meant german miles (7532,5 meters). Quite a significant difference and led to the Hereo and Nama genocide later. So mind your miles or people might die. $\endgroup$ When a tutoring student asks me about rounding, I tell them that absent specific instructions from a teacher, common sense should apply. For a conversion, 22 miles isn't 22.0000 miles, there's the assumption it's been rounded. You can't convert and find yourself with 6 digits of accuracy beyond the decimal. As you note, there's a number of digits that result to be the nearest meter, millimeter, etc. which is absurd. Before GPS, I'd give directions accurate to 1/10 mile, as that's what a car odometer reflects. Even that was often called a bit obsessive. My home scale gives me my weight to .1 lbs. Would it really be of value to have an extra digit of accuracy? A person's height? The nearest inch will do. The one thing I warn about - don't round while doing interim steps. This is a sure way to find that the final result may be off by enough to be graded as wrong. This issue commonly presents itself with trig functions which ask for a triangle side to the nearest 1/100. Rounding should be done as the final step. JTP - Apologise to MonicaJTP - Apologise to Monica $\begingroup$ There is more than an assumption that the figure has been rounded. There is a context into which the question has been placed. I think I understand what the main problem here is now: people seem to assume that the context is there merely to provide a pretty little story to keep the unmotivated students on board. No, the context is there to provide a scenario to be analysed. $\endgroup$ $\begingroup$ Didn't my examples offer that context? We can go off on multiple tangents here, from the observation that not all math problems offer the context required, to the fact that ultimately, perhaps unfortunately, I often ask a (tutoring) student "Do you want to be right, or do you want the credit for your answer?" You are not going to change that person. 
But you do have 10 answers here that are in your favor. I don't see one that sides with that teacher. $\endgroup$ $\begingroup$ I was not arguing against you, but I was specifically elaborating your statement "there's the assumption it's been rounded." Oh, and I'm not sure where I saw it now, but there are some answers here which appear to suggest that what you do in a maths class (i.e. calculate the numbers exactly) is different from what you do in a physics etc. class, because "maths is pure" or some such. And someone did say that the context in a word problem is just there to make it interesting and engage the students. Utter piffle, of course. $\endgroup$ $\begingroup$ I agree, 100%. Good talk. $\endgroup$ When I was in school, I once got an answer marked as error for having too many digits. IIRC it was in trigonometry and I had just written down as many digits as the calculator displayed. (I was able to discuss it away, but was told to avoid unreasonable amounts of digits in the future) That was in the 1990s in continental Europe, but I think it is still good enough for s counterexample: Not all teachers are like that. IF the horse ride were 22.00000000 miles then the other person would be right. Else if it were 22 miles then you should round the answer to zero decimal places. Some people are illogically pedantic without any rational reason for what they promulgate. practical manpractical man $\begingroup$ It's not decimal places you should have in mind, but significant figures, surely? $\endgroup$ $\begingroup$ I presumed that the numbers given were to sig figs limit $\endgroup$ – practical man $\begingroup$ @practicalman You would still round to sig digs though, not decimal places. Your number has 10 sig digs total, two left and 8 right of the radix. Your wording implies that your result should always have 8 digits right of the radix. For example: 22.00000000 miles = 35200.00000000 meters, when it should be 35200.00000 meters (assuming 1 mile = 1600 meters with infinite precision for simplicity). $\endgroup$ You're both right, depending on the domain of discourse and the rules of engagement. In pure math, the traditional expectation is that the numbers given are exact unless stated otherwise, and answers are also to be exact unless stated otherwise. So when the mathematician read "22 miles," he's using a tradition that means "exactly 22 miles." But in the physical sciences, all measurements are understood to be inexact and approximations and rounding are either "allowed" or "expected" (depending on the logical rigor applied). In this case, the correct answer boils down to a question of semantics and assumptions. What about this question: If a man traveled 22 miles, how far did he travel in kilometers? How would you answer that? The "If" complicates things. Some would say that it turns the question into a hypothetical that ignores the physical difficulties in measuring exactly 22 miles and turns it into a "given." It's not a stretch to read the original question as a hypothetical, even without an explicit "If" at the beginning. Some traditions say that integers are always expected to be exact and that the question should have used "approximately 22 miles," "22.0," or a bar on the last significant digit to show it's a real number instead of an integer. Even in the physical sciences, scenarios used for pedagogical purposes are sometimes idealized in order to remove confounding factors that might distract from the main point being taught. 
I don't think we know enough about the source of this question to know about what assumptions or simplifications are being made. You may argue that the use of "a man riding on horseback through a forest in a pre-industrial age" implies a real situation and an actual, inexact measurement. A counterargument is that the use of abstract identifiers "A" and "B" to to designate the starting and stopping point suggest an idealized situation. I would agree this is a good question for Mathematics Educators Stack Exchange. It emphasizes that in the classroom (as well as life) it's important to be explicit about assumptions and expectations and to lay out the ground rules. Adding a summary, based on comments, that tries to be more direct: Use of significant digits only applies to inexact numbers such as measurements. In a problem like "Convert 22 miles to kilometers," there is no reason to think 22 miles is a measurement. Rather, it is a "given": Something that is to be assumed or taken for granted for the sake of the problem. I think this question boils down to this: In the original question, is "22 miles" to be taken as a given or a measurement? I don't think we can tell. (At least not without more context about where the question came from and why it was asked.) The original question could merely be "Convert 22 miles to kilometers," dressed up in a story to make it engaging or interesting. My reading of some of the comments suggests a point of view of "If the problem resembles a real-world situation, then it must be interpreted as a real-world situation." Or more succinctly: If 22 miles could be interpreted as a measurement, then it must be interpreted as a measurement. Or that by phrasing the question in a historical, real-world context, that somehow forces the measurement interpretation. I don't follow that. It ignores the way real-world people write, talk, and teach. Syntax JunkieSyntax Junkie $\begingroup$ The subtext here, if mathematics and the sciences are taught this way is: "This is mathematics, here within this walled garden. The other side of the wall is physics and chemistry and messy stuff like that, and we don't have anything in common with them. We use numbers precisely, and those smelly hairy apes over there use (shudder) approximations." Oh, and the reason for using "A" and "B" is because I could not remember the actual place names. It is implicit that this is a real-world scenario being modelled mathematically ... $\endgroup$ $\begingroup$ ... and so the expectation is that the student reads all of the question and puts the entire situation into context. You cannot honestly say: "We're mathematicians and so we scoff at the real world because we work with ideals." You are given the real world situation and it is an important part of mathematics to be able to translate accurately and appropriate the full context of a "word problem" into the correct mathematical model. TL;DR: This is not a pure mathematics problem. It is at base an a exercise in mathematical modelling. $\endgroup$ $\begingroup$ It seems like a very odd choice to treat real-world, continuous measurements as integers, since it effectively implies infinite precision of the measurement. Integers seem appropriate for counting problems (i.e. there are exactly 22 apples in the basket), but not at all for this type of measurement problem (it's not even possible to measure something as exactly 22 miles away with infinite precision). 
$\endgroup$ – Nuclear Hoagie $\begingroup$ "It ignores the way real-world people write, talk, and teach." Does it? This lack of explanation and succinctness of this statement implies that you think it is an given. It is very unclear to me that this is actually the case. You have some explaining to do because if anything, the way people write, talk, and teach goes completely in the opposite direction in that the assumption is lack of precision in the absence of explicit information, rather than infinite precision. $\endgroup$ $\begingroup$ @DKNguyen. I regret that I gave that impression. I don't think we have enough information to know what interpretation works best. If my post seems to advocate for the "given" interpretation, I think it's because most of the posts that I read seemed to imply felt it was "obviously" a measurement (my phrasing, not theirs.) So I did want to make a case that were was an alternate interpretation. I appreciate your feedback. $\endgroup$ – Syntax Junkie I agree with just about everyone that the answer is 35, or perhaps 35.4 (a number I like better, see below). An answer of 35.405598 km is precise to the millimeter. I've ridden horses; they don't work in millimeters. Update: For what it's worth, after all this discussion, I think that the right number is "about 35 and a half" (not 35.5) kilometers. Thirty five and half has about the same uncertainty as "22 miles" (maybe even more), and is within "horseshoes and hand-grenades" of the exact answer of "just about 35.4 exactly". As you acknowledge, the intermediate answer you came up with (using an approximate conversion factor) of 35.2 km is wrong; 35 km is a correct answer, but 35.2 km is just plain wrong. It makes sense to consider that a distance of "22 miles" is likely more precise than "something between 21.5 and 22.5" which is what considering 22 as having only two significant figures means. It's more like 22.0 miles (i.e., between 21.95 and 22.05 (which gives you an uncertainty of about 500 feet (about 160 m)). But, when you multiply 22.0 by 1.6, then your answer should definitely only have 2 significant figures (not because of the 22, but because of the 1.6). You can tell that your 3 significant figure result is off, the "completely precise" number is off by 0.2 km (200 m) from your figure. Horses are more accurate than hundreds of meters. What you want to do working with numbers is to get an understanding of both the precision and the accuracy of the measurement. Saying something is about 22 miles, give or take 500 feet makes 22.0 about the right number to use. When doing a conversion, it's always best to use the most precise number you have for all intermediate work, and only round back to the correct number of significant figures at the end of the calculation. When doing distance calculations, I always use the fact that one inch is exactly 2.54 centimeters (i.e. 2.54000000000, as many zeros as you want). If I've got a calculator (or a slide-rule) handy, I'd do this: 22 miles * 5280 ft/mile * 12 in/ft * 2.54 cm/in / 100 cm/m / 1000 m/km = 35.405568 km Note that that number is off by 30 millimeters from what you quote. My number is correct. Also note that I carried the units through the calculation. That way, I can do some dimensional analysis and see that I get an answer in km, and that it's what I expect: (miles * (ft/mile) * (in/ft) * (cm/in) / (cm/m) / (m/km) works out to km). I'd look at that number and say "yes, it's 35.4 km." 
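If you'd rather let a computer carry the unit chain, here is a minimal Python sketch of that same exact-factor conversion; the constant names are mine and purely illustrative, and every factor is exact by definition, so the only rounding happens at the very end.

```python
# Minimal sketch of the exact-factor conversion above; constant names are
# illustrative only.
FT_PER_MILE = 5280
IN_PER_FT = 12
CM_PER_IN = 2.54          # exact: international inch
CM_PER_M = 100
M_PER_KM = 1000

def miles_to_km(miles: float) -> float:
    """Convert miles to kilometres, carrying the unit chain explicitly."""
    cm = miles * FT_PER_MILE * IN_PER_FT * CM_PER_IN
    return cm / CM_PER_M / M_PER_KM

exact = miles_to_km(22)        # 35.405568 km
print(exact, round(exact, 1))  # 35.405568 35.4
```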
Also note that all those intermediate conversion constants are exact (the number of inches in a foot is exactly 12 - so you can treat 12 like 2.54, it has as many zeros as you want). But then again: way back when I was a student, I had a math prof who'd get upset at us engineers for saying the answer is about 35.4 km. He'd say that two numbers can be equal, but "about equal" or "approximately equal" have no mathematical meaning. Then he'd point out that it would be pretty easy to figure out that one was about equal to zero - and at that point, everything breaks. So, if you are in a math class and the teacher says "The relationship between miles and kilometers is 1.609344 km/mile, how many kilometers are there in 22 miles?", then the answer is 35.405568 km, not 35.4 km. Note the absence of the horse in this phrasing of the question. Flydog57 $\begingroup$ I never gave $35.2$ as an answer. It was an interim calculation based on the necessities of the question. $22$ miles is only ever going to be a guessification. You don't need a conversion factor of anything better than $1.6$ km per mile in such circumstances. It's the same when I drive across Europe. I like to know how many miles and how many km are left. I only need an approximate number. So I multiply by 8 and divide by 5 (or vice versa) and I can do it in my head. $\endgroup$ $\begingroup$ @DKNguyen Applied mathematics uses units. And unit conversion was part and parcel of the mathematics curriculum throughout the whole length of my high school, and even earlier, although early education did not put "names" on the classes so much, it was all just "lessons". $\endgroup$ $\begingroup$ @DKNguyen. Q: "If you're truly a math class, why is the professor using units at all?" A: (1) To demonstrate that units need to be treated as algebraic quantities that can be manipulated like any other variable; (2) To demonstrate that having an answer with sensible units can help a student check their work. Q: "Or real world examples of things?" A: To test the student's ability to extract the relevant details from a body of text and convert that into mathematical symbolism. $\endgroup$ $\begingroup$ There are contexts (such as land measurements in the United States) where 1 inch is not exactly 2.54 cm. civilgeo.com/blog/when-a-foot-isnt-really-a-foot $\endgroup$ – Jasper $\begingroup$ ... It's very common, often a requirement, that math courses include applications with real-world units. See any math textbook in a course that engineers might take. Pedagogically, most people find that making work concrete helps students get traction. Moreover, how else can students practice manipulating units before entering their applied/engineering courses? My dept.'s liberal-arts math course is mostly about teaching precisely that topic. $\endgroup$ It depends on the level of the class. I would expect someone who has a recent undergraduate degree in mathematics to have encountered significant figures at some point in their life, either in high school or in college. I would also expect common sense to kick in and say that the level of accuracy proposed is unreasonable. But it is reasonable to dodge the topic when teaching arithmetic, algebra, etc., because the students usually have a hard enough time as it is. Sometimes you can arrange for numbers that come out evenly anyway, but if you're stuck trying to teach an awkward conversion (miles to km) or if the task is to teach something about decimals or fractions then you may be unable to avoid it. 
For example, "Alex had five cookies and split them evenly with Blake. How many did each of them get?" Two and a half, and we aren't going to quibble over how precisely half of a cookie was achieved. If your students are advanced enough to be working with more precise numbers (and, presumably, starting to question what level of accuracy is acceptable) then the best way to dodge it is to simply specify what rounding you want in the question: "Answer to x decimal places." That way you can specify the correct precision without the students having to understand how to calculate what the correct precision should be. That's much simpler for the student to understand than the official way, which is according to NIST: The precision of your conversion should be based on relative error. If error isn't specified, then you can infer it from the number of digits in the values given. Use a conversion factor with equal or more precision to that to preform the calculation. Then you round the result to produce a relative error that is of the order of the original. $22$ miles implies an error range of plus or minus $\frac{1}{2}$ mile which is $2.\overline{27}\%$. Using a conversion factor of $1.61$ kilometers per mile (which has better than $2.\overline{27}\%$ accuracy, note that $1.6$ is not accurate enough)... $22*1.61=35.42$ km. We could also use $1.609$ or any more precise conversion, it does not matter because we will be rounding. (For example, in this case, $22*1.609 = 35.398$ km.) Now we round... $40$ km would have a relative error $5\div40\approx12.5\%$ which is too much, $35.4$ would have a relative error of $0.05\div35.4\approx0.141%$ which is too little. $35$ km has a relative error of $0.5\div35\approx 1.43\%$ which is just right. Note that we get the same (rounded) answer regardless of how much precision we used in the conversion factor, as long as the conversion factor met a minimum level precision. Question: Why do we assume plus or minus one-half mile? Wouldn't a distance of 22 miles be measured more accurately than that? Answer: No. If anything it is likely to be much worse. (Disclaimer, I'm not doing sigfigs in this section, I just can't be bothered.) In American revolutionary war era from the New York Public Library, Thomas Jefferson measuring exactly 22 miles would have actually gone over 22.3 miles (and he was a bit obsessive about measurements): Before he left on the trip, Jefferson bought from a Philadelphia watchmaker an odometer that counted the revolution's of his carriage's wheel. He had measured distance based "on the belief that the wheel of the Phaeton [his carriage] made exactly 360. revoln's in a mile." After the trip, though, he re-measured circumference of the wheel and found that it made only 354.95 revolutions in a mile. So for every seventy-one miles Jefferson thought he traveled, he had actually traveled seventy-two. But I use my car odometer, not a carriage! It's much more reliable! ...nope. From motus.com, if your car odometer says 22 miles then it could be anywhere between 21.12 to 22.88 miles: Surprisingly, there is no federal law that regulates odometer accuracy. The Society of Automotive Engineers set guidelines that allow for a margin of error of plus or minus four percent. Actually I use GPS, that's very accurate! ....nope again. GPS has a margin of error on every position measurement made, plus error from the distance between measurements. Essentially your path is like a coastline and the GPS can suffer from the coastline paradox. 
From singletracks.com (with pictures and a good explanation: In a very, very bad case (steep trail, lots of switchbacks) your GPS may report 22 miles when you've actually gone 44 miles! Holy guacamole. [...]GPS reports the full loop is right at 1 mile long. In fact, everyone else who rides this trail gets roughly the same measurement. But the trail always "feels" much longer than that. Recently I started riding with a wheel-based cyclocomputer, which I calibrated and verified during one of our track tests. Measuring this particular trail with the cyclocomputer reveals the trail is actually closers to 2 miles long, meaning our GPS units are underestimating distance by half! I'm not going to find sources for inaccuracy of distance calculated by counting steps, the time it took to travel, etc. It's pretty obvious that no one (and no horse) actually moves at that even of a pace for 22 miles. So accepting 22 miles on a trail as being between 21.5 and 22.5 is actually pretty generous. Better to just call it a day and say it was "some distance". $\begingroup$ Exactly. However, when working with familiar integers in conversation, it's not the same as working from a specification. If someone says "22 miles", I'm assuming that the uncertainty is less than +/- 0.5 mile. That would be "about 22 miles". Perhaps my estimate of +/- 500 feet is too narrow, but I suspect that it's not that far off. The real number may be "35 and a half" (not 35.5), there's a lot of slop in that wording. Question, if someone says "10 miles", how much uncertainty is there in that number - is it 1 sig fig, or 2, or, 1.5? I believe that "22 miles" has less slop than "10 miles" $\endgroup$ – Flydog57 $\begingroup$ @Flydog57 No. I added into my answer at the bottom, because it was much too long for a comment, but the short answer is that everyday tools for measuring distances traveled are really inaccurate. $\endgroup$ I would say that based on the words of the question, the answer is 35. It is not a distance or measurement of 22 miles between points A & B. It is a journey between locations A & B. Next time you see the guy who scolded you, ask him how one should answer if asked, "What is the numerical value of pi minus e?" AlanAlan Thanks for contributing an answer to Mathematics Educators Stack Exchange! Not the answer you're looking for? Browse other questions tagged secondary-education real-numbers or ask your own question. Examples of Innumeracy When did the American school system's progression of math classes take its current form? What is the best way to teach compound interest in high school? What other skill sets for middle school students can be introduced during the Stock Market Game? SMSG: Did any school districts actual teach the curriculum as planned and what were the results for the teachers and students? How early to start "abstract" math education, or, How to prevent smart kids from never getting exposed to math? Planning high school workshop on Goldbach Conjecture What are the resources to learn prerequisite knowledge to latter High school and undergrad prep textbooks?
CommonCrawl
Weak time discretization for slow-fast stochastic reaction-diffusion equations Asymptotic stability of traveling fronts to a chemotaxis model with nonlinear diffusion Uniform attractors and their continuity for the non-autonomous Kirchhoff wave models Yanan Li 1, , Zhijian Yang 2,, and Na Feng 3, College of Mathematical Sciences, Harbin Engineering University, Harbin 150001, China School of Mathematics and Statistics, Zhengzhou University, Zhengzhou 450001, China College of Science, Zhongyuan University of Technology, Zhengzhou 450007, China * Corresponding author: Zhijian Yang Received September 2020 Revised November 2020 Published December 2021 Early access January 2021 Fund Project: The authors are supported by NSFC (Grant No. 11671367) The paper investigates the existence and the continuity of uniform attractors for the non-autonomous Kirchhoff wave equations with strong damping: $ u_{tt}-(1+\epsilon\|\nabla u\|^{2})\Delta u-\Delta u_{t}+f(u) = g(x,t) $, where $ \epsilon\in [0,1] $ is an extensibility parameter. It shows that when the nonlinearity $ f(u) $ is of optimal supercritical growth $ p: \frac{N+2}{N-2} = p^* Keywords: Non-autonomous Kirchhoff wave models, extensibility parameter, supercritical nonlinearity, uniform attractor, residual continuity, upper semicontinuity. Mathematics Subject Classification: Primary: 37L15, 37L30; Secondary: 35B65, 35B33, 35B41. Citation: Yanan Li, Zhijian Yang, Na Feng. Uniform attractors and their continuity for the non-autonomous Kirchhoff wave models. Discrete & Continuous Dynamical Systems - B, 2021, 26 (12) : 6267-6284. doi: 10.3934/dcdsb.2021018 A. V. Babin and S. Yu. Pilyugin, Continuous dependence of attractors on the shape of domain, J. Math. Sci., 87 (1997), 3304-3310. doi: 10.1007/BF02355582. Google Scholar J. M. Ball, Global attractors for damped semilinear wave equations, Discrete Contin. Dyn. Syst., 10 (2004), 31-52. doi: 10.3934/dcds.2004.10.31. Google Scholar V. V. Chepyzhov and M. I. Vishik, Attractors for Equations of Mathematical Physics, American Mathematical Society Colloquium Publications, American Mathematical Society, Providence, RI, 2002. Google Scholar I. Chueshov, Long-time dynamics of Kirchhoff wave models with strong nonlinear damping, J. Differential Equations, 252 (2012), 1229-1262. doi: 10.1016/j.jde.2011.08.022. Google Scholar P. Y. Ding, Z. J. Yang and Y. N. Li, Global attractor of the Kirchhoff wave models with strong nonlinear damping, Appl. Math. Lett., 76 (2018), 40-45. doi: 10.1016/j.aml.2017.07.008. Google Scholar X. Fan and S. Zhou, Kernel sections for non-autonomous strongly damped wave equations of non-degenerate Kirchhoff-type, Appl. Math. Comput., 158 (2004), 253-266. doi: 10.1016/j.amc.2003.08.147. Google Scholar J. K. Hale and G. Raugel, Lower semicontinuity of attractors of gradient systems and applications, Ann. Mat. Pura Appl., 154 (1989), 281-326. doi: 10.1007/BF01790353. Google Scholar L. T. Hoang, E. J. Olason and J. C. Robinson, On the continuity of global attractors, Proc. Amer. Math. Sc., 143 (2015), 4389-4395. doi: 10.1090/proc/12598. Google Scholar L. T. Hoang, E. J. Olason and J. C. Robinson, Continuity of pullback and uniform attractors, J. Differential Equations, 264 (2018), 4067-4093. doi: 10.1016/j.jde.2017.12.002. Google Scholar G. Kirchhoff, Vorlesungen über Mechanik, Lectures on Mechanics, Teubner, Stuttgart, 1883. Google Scholar Y. N. Li and Z. J. Yang, Robustness of attractors for non-autonomous Kirchhoff wave models with strong nonlinear damping, Appl. Math. Optim., (2019). 
doi: 10.1007/s00245-019-09644-4. Google Scholar S. S. Lu, H. Q. Wu and C. K. Zhong, Attractors for non-autonomous $2D$ Navier-Stokes equations with normal external forces, Discrete Contin. Dyn. Syst., 13 (2005), 701-719. doi: 10.3934/dcds.2005.13.701. Google Scholar H. L. Ma and C. K. Zhong, Attractors for the Kirchhoff equations with strong nonlinear damping, Appl. Math. Lett., 74 (2017), 127-133. doi: 10.1016/j.aml.2017.06.002. Google Scholar H. L. Ma, J. Zhang and C. K. Zhong, Global existence and asymptotic behavior of global smooth solutions to the Kirchhoff equations with strong nonlinear damping, Discrete Contin. Dyn. Syst.-B, (2019). doi: 10.3934/dcdsb.2019027. Google Scholar H. L. Ma, J. Zhang and C. K. Zhong, Attractors for the degenerate Kirchhoff wave model with strong damping: Existence and the fractal dimension, J. Math. Anal. Appl., 484 (2020), 123670, 15 pp. doi: 10.1016/j.jmaa.2019.123670. Google Scholar T. Matsuyama and R. lkehata, On global solution and energy decay for the wave equation of Kirchhoff-type with nonlinear damping term, J. Math. Anal. Appl., 204 (1996), 729-753. doi: 10.1006/jmaa.1996.0464. Google Scholar I. Moise, R. Rosa and X. Wang, Attractors for noncompact non-autonomous systems via energy equations, Discrete Contin. Dyn. Syst., 10 (2004), 473-496. doi: 10.3934/dcds.2004.10.473. Google Scholar M. Nakao, An attractor for a nonlinear dissipative wave equation of Kirchhoff type, J. Math. Anal. Appl., 353 (2009), 652-659. doi: 10.1016/j.jmaa.2008.09.010. Google Scholar K. Ono, Global existence, decay, and blowup of solutions for some mildly degenerate nonlinear Kirchhoff strings, J. Differential Equations, 137 (1997), 273-301. doi: 10.1006/jdeq.1997.3263. Google Scholar J. Simon, Compact sets in the space $L^p(0, T;B)$, Ann. Mat. Pura Appl., 146 (1986), 65-96. doi: 10.1007/BF01762360. Google Scholar [21] A. M. Stuart and A. R. Humphries, Dynamical Systems and Numerical Analysis, Cambridge University Press, Cambridge, 1996. Google Scholar C. Y. Sun, D. M. Cao and J. Q. Duan, Uniform attractors for non-autonomous wave equations with nonlinear damping, SIAM J. Appl. Dyn. Syst., 6 (2007), 293-318. doi: 10.1137/060663805. Google Scholar B. X. Wang, Uniform attractors of non-autonomous discrete reaction-diffusion systems in weighted spaces, Int. J. Bifurcation Chaos, 18 (2008), 659-716. doi: 10.1142/S0218127408020598. Google Scholar Y. H. Wang and C. K. Zhong, Upper semicontinuity of pullback attractors for non-autonomous Kirchhoff wave models, Discrete Contin. Dyn. Syst., 33 (2013), 3189-3209. doi: 10.3934/dcds.2013.33.3189. Google Scholar Z. J. Yang and Y. Q. Wang, Global attractor for the Kirchhoff type equation with a strong dissipation, J. Differential Equations, 249 (2010), 3258-3278. Google Scholar Z. J. Yang and P. Y. Ding, Longtime dynamics of the Kirchhoff equation with strong damping and critical nonlinearity on $\mathbb{R}^N$, J. Math. Anal. Appl., 434 (2016), 1826-1851. doi: 10.1016/j.jmaa.2015.10.013. Google Scholar X.-G. Yang, Marcelo J. D. Nascimento and L. Pelicer Maurício, Uniform attractors for non-autonomous plate equations with $p$-Laplacian perturbation and critical nonlinearities, Discrete Contin. Dyn. Syst., 40 (2020), 1937-1961. doi: 10.3934/dcds.2020100. Google Scholar S. Zelik, Strong uniform attractors for non-autonomous dissipative PDEs with non translation-compact external forces, Discrete Contin. Dyn. Syst.-B, 20 (2015), 781-810. doi: 10.3934/dcdsb.2015.20.781. Google Scholar Zhijian Yang, Yanan Li. 
Upper semicontinuity of pullback attractors for non-autonomous Kirchhoff wave equations. Discrete & Continuous Dynamical Systems - B, 2019, 24 (9) : 4899-4912. doi: 10.3934/dcdsb.2019036 Chunyou Sun, Daomin Cao, Jinqiao Duan. Non-autonomous wave dynamics with memory --- asymptotic regularity and uniform attractor. Discrete & Continuous Dynamical Systems - B, 2008, 9 (3&4, May) : 743-761. doi: 10.3934/dcdsb.2008.9.743 Zhaojuan Wang, Shengfan Zhou. Existence and upper semicontinuity of random attractors for non-autonomous stochastic strongly damped wave equation with multiplicative noise. Discrete & Continuous Dynamical Systems, 2017, 37 (5) : 2787-2812. doi: 10.3934/dcds.2017120 Yonghai Wang, Chengkui Zhong. Upper semicontinuity of pullback attractors for nonautonomous Kirchhoff wave models. Discrete & Continuous Dynamical Systems, 2013, 33 (7) : 3189-3209. doi: 10.3934/dcds.2013.33.3189 Zhijian Yang, Yanan Li. Criteria on the existence and stability of pullback exponential attractors and their application to non-autonomous kirchhoff wave models. Discrete & Continuous Dynamical Systems, 2018, 38 (5) : 2629-2653. doi: 10.3934/dcds.2018111 Xiaolin Jia, Caidi Zhao, Juan Cao. Uniform attractor of the non-autonomous discrete Selkov model. Discrete & Continuous Dynamical Systems, 2014, 34 (1) : 229-248. doi: 10.3934/dcds.2014.34.229 Olivier Goubet, Wided Kechiche. Uniform attractor for non-autonomous nonlinear Schrödinger equation. Communications on Pure & Applied Analysis, 2011, 10 (2) : 639-651. doi: 10.3934/cpaa.2011.10.639 Shengfan Zhou, Caidi Zhao, Yejuan Wang. Finite dimensionality and upper semicontinuity of compact kernel sections of non-autonomous lattice systems. Discrete & Continuous Dynamical Systems, 2008, 21 (4) : 1259-1277. doi: 10.3934/dcds.2008.21.1259 Ling Xu, Jianhua Huang, Qiaozhen Ma. Upper semicontinuity of random attractors for the stochastic non-autonomous suspension bridge equation with memory. Discrete & Continuous Dynamical Systems - B, 2019, 24 (11) : 5959-5979. doi: 10.3934/dcdsb.2019115 Zhaojuan Wang, Shengfan Zhou. Existence and upper semicontinuity of attractors for non-autonomous stochastic lattice systems with random coupled coefficients. Communications on Pure & Applied Analysis, 2016, 15 (6) : 2221-2245. doi: 10.3934/cpaa.2016035 Na Lei, Shengfan Zhou. Upper semicontinuity of pullback attractors for non-autonomous lattice systems under singular perturbations. Discrete & Continuous Dynamical Systems, 2022, 42 (1) : 73-108. doi: 10.3934/dcds.2021108 Pengyu Chen, Xuping Zhang. Upper semi-continuity of attractors for non-autonomous fractional stochastic parabolic equations with delay. Discrete & Continuous Dynamical Systems - B, 2021, 26 (8) : 4325-4357. doi: 10.3934/dcdsb.2020290 Xueli Song, Jianhua Wu. Non-autonomous 2D Newton-Boussinesq equation with oscillating external forces and its uniform attractor. Evolution Equations & Control Theory, 2022, 11 (1) : 41-65. doi: 10.3934/eect.2020102 Zhaojuan Wang, Shengfan Zhou. Random attractor for stochastic non-autonomous damped wave equation with critical exponent. Discrete & Continuous Dynamical Systems, 2017, 37 (1) : 545-573. doi: 10.3934/dcds.2017022 Shengfan Zhou, Min Zhao. Fractal dimension of random attractor for stochastic non-autonomous damped wave equation with linear multiplicative white noise. Discrete & Continuous Dynamical Systems, 2016, 36 (5) : 2887-2914. doi: 10.3934/dcds.2016.36.2887 Zhaojuan Wang, Shengfan Zhou. 
Random attractor and random exponential attractor for stochastic non-autonomous damped cubic wave equation with linear multiplicative white noise. Discrete & Continuous Dynamical Systems, 2018, 38 (9) : 4767-4817. doi: 10.3934/dcds.2018210 Pablo G. Barrientos, Abbas Fakhari. Ergodicity of non-autonomous discrete systems with non-uniform expansion. Discrete & Continuous Dynamical Systems - B, 2020, 25 (4) : 1361-1382. doi: 10.3934/dcdsb.2019231 Rodrigo Samprogna, Tomás Caraballo. Pullback attractor for a dynamic boundary non-autonomous problem with Infinite Delay. Discrete & Continuous Dynamical Systems - B, 2018, 23 (2) : 509-523. doi: 10.3934/dcdsb.2017195 T. Caraballo, J. A. Langa, J. Valero. Structure of the pullback attractor for a non-autonomous scalar differential inclusion. Discrete & Continuous Dynamical Systems - S, 2016, 9 (4) : 979-994. doi: 10.3934/dcdss.2016037 Tomás Caraballo, David Cheban. On the structure of the global attractor for non-autonomous dynamical systems with weak convergence. Communications on Pure & Applied Analysis, 2012, 11 (2) : 809-828. doi: 10.3934/cpaa.2012.11.809 Yanan Li Zhijian Yang Na Feng
CommonCrawl
Artificial Intelligence Meta Artificial Intelligence Stack Exchange is a question and answer site for people interested in conceptual questions about life and challenges in a world where "cognitive" functions can be mimicked in purely digital environment. It only takes a minute to sign up. How can I use a 2-dimensional feature matrix as the input to a neural network? How can I use a 2-dimensional feature matrix, rather than a feature vector, as the input to a neural network? For a WWII naval wargame, I have sorted out the features of interest to approximate the game state $S$ at a given time $t$. they are a maximum of 50 naval task forces and airbases on the map each one has a lot of features (hex, weather, number of ships, type, distance to other task forces, distance to objective, cargo, intelligence, combat value, speed, damage, etc.) The output would be the probability of winning and the level of victory. neural-networks game-ai nbro♦ Carrier BattlesCarrier Battles $\begingroup$ I have seen that AlphaGo and AlphaGo Zero have been processing data as an image 19 x 19 x (17 or 48) features and then a CNN $\endgroup$ – Carrier Battles The most obvious way to do this would be to simply "unroll" your matrix into a vector. Your example input matrix would get turned into the following input vector: $$\left( \begin{array}{} a_1 & a_2 & \dots & a_t & b_1 & b_2 & \dots & b_t & c_1 & c_2 & \dots & c_t \end{array} \right)$$ I don't think there are any other clear ways to use an "input matrix" really. The only benefit I could see in using an input matrix rather than an unrolled vector (if it were possible to do so in whatever way) would be if doing so would somehow enable the learning algorithm to exploit the "domain knowledge" that certain input features are related to each other in special ways (i.e. features in the same row belong to the same unit, and features in the same column are the same "type" of feature, or other way around). Intuitively, I suspect something like this could be accomplished by restricting the number of connections you make to the next layer. For example, you could make a part of the next layer only be connected to all the $a_i$ features, a different part connected only to all the $b_i$ features, etc. Similarly, you could have a part that is connected only to the $a_1, b_1, c_1, \dots$ features, a different part only connected to the $a_2, b_2, c_2, \dots$ features, etc. I don't know for sure how well this would work though... just think that it could. Carrier Battles Dennis Soemers♦Dennis Soemers $\begingroup$ Then it would make sense to remove the dependency of the units: for instance instead of keeping all distance between them, then add a feature representing the density of friendly units for the given unit. It would still mean 2000 features. Is it too much for feed forward neural network and single computer ? $\endgroup$ $\begingroup$ @CarrierBattles Hmmm... difficult for me to judge if it would be "too much". 2000 features is a lot though. I think it may be too much, definitely wouldn't be surprised if it were... but not 100% sure. Best way is to try and see. Your problem does sound like a large and complex problem in and of itself though (controlling many different units at the same time, like in Real-Time Strategy games, simply is a complex problem). When talking about such complex problems, it often becomes difficult to avoid solutions that are "too much" for a small amount of hardware. 
$\endgroup$ – Dennis Soemers ♦ (Split the answer from Dennis in two) A very different approach would be to use a network architecture with recurrence, for example an LSTM. You could treat your input as a "sequence" rather than a matrix (just like a sentence in language processing would be a sequence of inputs), providing your feature vectors for different units as inputs one at a time. This would remove the need of having a giant input layer (with support for 50 units) in cases where only a small portion of them would be used (e.g., if you only have 5 units). There is not really a concept of "time" in your inputs though... ideally the output of your network would be invariant to the order in which you provide it with the different inputs for the different units, but that would not typically be the case with these kinds of architectures in practice. $\begingroup$ This seems a tweak of the usual way to use Recurrent Neural Networks but why not. I see the advantage of having only the right numbers of unit vectors instead a big block filled with zero for missing units. For the relationships between the units, I may strive to simplify the features using some friendly/enemy units density for each unit for instance. For each S(t), shall I always keep logical order (always the same unit first) or a Radom order ? $\endgroup$ $\begingroup$ By using this method and removing nice to have features, I may end up with less than 700 features in average What are the performance of such network compared to the classical perceptron ? $\endgroup$ Thanks for contributing an answer to Artificial Intelligence Stack Exchange! Not the answer you're looking for? Browse other questions tagged neural-networks game-ai or ask your own question. How can object types be differentiated in the input of a neural network? Can you analyse a neural network to determine good states? How can I design a reinforcement learning model for a game with multiple complex actions taken at a time? How to encode card game state into neural network input How can I process neural network with 25000 input nodes? Combine two feature vectors for a correct input of a neural network Can you use a graph as input for a neural network? How to make an output independent of input feature in neural networks?
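To make the "unrolling" idea from the first answer concrete, here is a minimal NumPy sketch; the unit count and feature count are hypothetical (chosen only for illustration), and absent units are zero-padded so the network always sees a fixed-length input, as suggested in the comments.

```python
import numpy as np

# Minimal sketch of flattening a (units x features) matrix into one vector.
# MAX_UNITS and N_FEATURES are hypothetical values, not from the question.
MAX_UNITS = 50
N_FEATURES = 14   # hex, weather, number of ships, speed, damage, ...

def state_to_vector(unit_features: np.ndarray) -> np.ndarray:
    """unit_features: shape (n_units, N_FEATURES), with n_units <= MAX_UNITS."""
    padded = np.zeros((MAX_UNITS, N_FEATURES), dtype=np.float32)
    padded[: unit_features.shape[0]] = unit_features
    return padded.reshape(-1)            # shape (MAX_UNITS * N_FEATURES,)

# A game state with only 5 active units still yields the same input length.
x = state_to_vector(np.random.rand(5, N_FEATURES))
print(x.shape)                           # (700,)
```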
CommonCrawl
Energy, Sustainability and Society Managing the energy trilemma in the Philippines Josef T. Yap ORCID: orcid.org/0000-0003-0218-08611, Aaron Joseph P. Gabriola2 & Chrysogonus F. Herrera2 Energy, Sustainability and Society volume 11, Article number: 34 (2021) Cite this article The transition to an energy mix with lower carbon emissions is hampered by the existence of the so-called Energy Trilemma. The primary consequence is a trade-off between various objectives of energy policy, e.g., equity and sustainability. This conflict can lead to policy gridlock if policymakers are unable to prioritize the goals. This paper proposes a framework and methodology to manage the trilemma by applying methods related to multi-criteria decision-making in order to assign weights to the various components of the trilemma. Following the International Energy Agency (IEA), an expanded concept of energy security is adopted and translates to a version of the trilemma different from that of the World Energy Council. This study takes into account autarky, price, supply, and carbon emissions. The values of these variables are generated by a software called PLEXOS and are incorporated in a welfare function. Trade-offs and complementarities among the four variables are taken into account by the equations in the PLEXOS model. Meanwhile, weights for each of the components of the trilemma are obtained using the Analytical Hierarchy Process. The experts interviewed for this exercise are considered hypothetical heads of the Philippine Department of Energy (DOE). Two scenarios were compared: a market-based simulation and one where a carbon-tax was imposed. As expected, the carbon-tax leads to a fall in the level of carbon emissions but a rise in the cost of electricity. Because the demand for electricity has a higher price elasticity among lower income classes, the carbon-tax will worsen equity. Attempting to resolve the conflict among the goals of energy policy is difficult leading to a possible gridlock. Policy options can, however, be ranked using the values generated by the welfare function. The ranking clearly depends on the preference or priorities of the hypothetical head of the DOE but at least a decision could be reached. In this manner, trade-offs are measured and the trilemma can be managed even if it is not resolved. Energy poverty continues to be a major concern in the Philippines, especially when compared with its neighbors in Asia. One aspect of energy poverty is household access to electricity. Table 1 shows that as of 2018, the Philippines has the lowest electrification rate among Asian countries with a similar level of development. Meanwhile, Table 2 shows that in 2020 the Philippines had the lowest per capita consumption of electricity in the same set of countries. It is not a coincidence that the Philippines also has one of the lowest levels of development as measured by per capita gross domestic product (GDP). Table 1 Electrification rate (% of population) for selected Asian countries Table 2 Per capita electricity consumption and per capita GDP in selected Asian countries To address the problem of energy poverty, the Philippine Department of Energy targeted 100 percent electrification of households with access to the grid by 2022. For off-grid areas, the 100 percent electrification rate is expected by 2040. The objective dovetails with one of the major components of Sustainable Development Goal (SDG) 7 which is to ensure universal access to affordable, reliable, sustainable, and modern energy by 2030. 
However, SDG 7 also targets a substantial increase in the share of renewable energy in the global energy mix. Hence, the increase in access must be accompanied by a transition from fossil fuels to renewable energy. Achieving increased access and a higher share of renewable energy requires managing the so-called Energy Trilemma. This refers to "the conflicting goals that governments face in securing energy supplies, providing universal energy access, and promoting environmental protection" (World Energy Council [4]). The Energy Trilemma is defined across three dimensions (Fig. 1). "Energy Security reflects a nation's capacity to meet current and future energy demand reliably and withstand and bounce back swiftly from system shocks with minimal disruption to supplies. Energy Equity assesses a country's ability to provide universal access to affordable, fairly priced, and abundant energy for domestic and commercial use. Environmental Sustainability of Energy Systems represents the transition of a country's energy system toward mitigating and avoiding potential environmental harm and climate change impacts."Footnote 1 Source: World Energy Council [5] The energy trilemma. Typically, the topic of sustainability should cover the concept of the environomy, which is the union of the environment and the economy (e.g., Ravago and Roumasset [6]). This would then involve a broader trilemma involving prosperity, equity and environmental sustainability. The Energy Trilemma has a narrower focus. Resolving the Energy Trilemma entails designing policies wherein trade-offs among goals can be avoided. This is a highly unlikely scenario and in the event of relatively large trade-offs, a policy gridlock may ensue. The key research question addressed in this paper is how to move past this possible gridlock. Instead of attempting to resolve the Energy Trilemma, a framework is developed to manage it by quantifying the trade-offs among the conflicting goals. Weights reflecting the preferences of policymakers are assigned to these goals thereby prioritizing them. Policies can then be ranked through a welfare function that combines quantitative measures of the different goals. Even if conflicts among the goals cannot be resolved, progress can be made by adopting policies that have a higher welfare rank. Situating the research Trade-offs and synergies The term "trilemma" implies that trade-offs are involved when energy policies are designed and implemented. For example, ten years ago, significantly increasing the share of variable renewable energy (VRE) like solar would have been infeasible because of the prohibitive costs involved (Table 3). The trade-off between equity, particularly affordability, and sustainability was quite clear-cut. Nowadays, because of the sharp decline in the cost of solar power generation, the trade-off emanates from the feasibility of integrating VRE in the grid system. In this context, the high cost of battery storage is the major factor that prevents the full utilization of wind and solar power in the grid system. Table 3 Summary of mean levelized cost of energy (LCOE) for different energy sources Thus, despite the sharp decline in generation costs involving VRE, the energy trilemma remains a problem that has to be managed. This paper proposes a methodology to achieve this objective. The approach is inspired by Barbier and Burgess [8] who evaluate trade-offs and complementarities—or synergies—among the SDGs. 
They adopt accepted methods to calculate changes in welfare under specified constraints. This allows measuring welfare effects of an increase in the level of one SDG while taking into account trade-offs or complementarities with other SDGs. In their study, a quantitative evaluation of progress over 2000–2016 for each of the 17 SDGs is carried out using a representative indicator for each goal. Their results have important implications for policies designed to achieve the SDGs. In particular, because synergies are taken into account, policies can be calibrated to be consistent with the priorities of policymakers. The essence of the framework in this study is specifying a welfare function W that is dependent on the components of the trilemma. One such specification is as follows: $$W= {\mathrm{Security}}^{\alpha }{\mathrm{Equity}}^{\beta }{\mathrm{Sustainability}}^{\gamma }$$ Different policies will yield different values for the three components of the trilemma, i.e., security, equity, and sustainability, thereby generating a set of values for W. This will enable policymakers to rank the policies. A conventional simulation package can generate the values of the three components, taking into account the trade-offs and complementarities among them. The obvious challenge is to arrive at reasonable values for the parameters α, β, and ϒ. They represent the preferences of the policymakers, which in turn, should ideally reflect the aspirations of society. Methods under multi-criteria decision-making (MCDM) can be applied for this purpose. Being able to rank policies will facilitate decision-making. Progress can therefore be achieved even if the conflicts or trade-offs cannot be resolved. This is the essence of managing the trilemma. The choice of the term "manage" is deliberate as "resolving" the trilemma is a difficult task. Energy trilemma is recognized as a global challenge. To track progress in coping with this challenge, the World Energy Trilemma Index has been prepared annually since 2010 by the World Energy Council. In its latest publication, WEC [9] presents a comparative ranking of the energy systems of 108 countries. An assessment of a country's energy system performance is also provided, based on the balance and progress in the three components of the Trilemma. The performance of the Philippines is shown in Fig. 2. The country is ranked 76th in terms of balance and progress in the different components of the trilemma. Evolution of the energy trilemma in the Philippines 2000–2020. The literature identifies strong and weaker versions of the trilemma. The former calls for policymakers to choose two of the three policy goals. This implies that the trilemma cannot be resolved but only managed. On the other hand, the weaker version recognizes that political, economic, and institutional reforms can lead to progress in all three components. Hence, from this perspective, the trilemma can be resolved by overcoming structural barriers through appropriate policy measures. Examples of studies that adopt the weaker version of the trilemma are country cases for the Philippines (La Viña et al. [10]) and Indonesia (Gunningham [11]). The discussion largely revolves around policies that govern the transition into a greater share of low-carbon sources in the energy mix. In the case of the Philippines, the authors argue that policymakers can and should work at two categories of reform: rationalization and diversification. 
At the core of rationalization efforts is a long-term energy plan that is impervious to shifts in government administrations. If this plan is perceived as robust, it will reduce political and regulatory risk, and at the same time encourage investments in the energy sector that will promote the goals of energy security, equity, and sustainability. Such a plan should also be cognizant of global technological developments which will discourage unnecessary subsidies for specific energy sources. Government–private sector coordination and public–private partnerships can be supported by a program such as the Competitive Renewable Energy Zones or CREZ (Lee et al. [12]). This is an example of an energy mapping system that identifies optimal areas for development vis-à-vis available energy sources and transmission lines. Overall, rationalization entails less emphasis on liberalization—or a market-led approach—and a greater role for government regulation. Meanwhile, the thrust of diversification is reducing the country's relatively heavy dependence on fossil fuel, particularly imported coal. The main obstacle to attaining this objective is the limited ability of renewable energy to perform the role of coal power plants as a source of baseload capacity. At present, the Philippines has an excess supply of coal plants that exceeds baseload needs, making it necessary for these coal plants to provide the mid-merit requirement. Policies have to be enacted to allow sources that can support the mid-merit requirement more efficiently than coal. "To address this, a cap on approved coal endorsements using a portfolio-based regional energy plan detailing the baseload, mid-merit, and peaking requirements in each of Luzon, Visayas, and Mindanao is necessary. This prevents an oversupply of coal plants beyond baseload needs, and, for the long-term, contractual lock-in of coal supply beyond what is economically, socially, and environmentally acceptable."Footnote 2 Indonesia is a resource-rich country that plays a significant role in the global energy market. However, its per capita consumption of electricity is relatively low (Table 2). One reason for this is a strategy that encourages exports of energy resources and heavy dependence on coal. Gunningham [11] recommends effective energy governance to increase access, reduce fuel subsidies, and at the same time, facilitate the transition of the energy sector to one with lower carbon emissions. Four important elements of the governance structure have to be analyzed. First, there is a need to instill norms—or standards of appropriate behavior—related to the importance of climate change. International organizations like the International Energy Agency (IEA) have an important role to play in convincing Indonesian policymakers of the importance of measures related to climate change adaptation and mitigation. Second, many stakeholders including international and local NGOs have argued against the implementation of fuel subsidies.Footnote 3 Third, global energy governance can also help address the biggest challenge to Indonesia's transition to a low-carbon scenario: the lack of financial resources that can underwrite a revolution in the energy sector. The more prominent financing tools include the Global Environment Fund (GEF) and the climate change funds of the World Bank, most notably the Clean Technology Fund. Neither of these initiatives has offered the financial resources needed to overcome Indonesia's climate change challenges. 
"If such carrots do not achieve the necessary changes (and they are small compared to the current cost of energy subsidies to the Indonesian budget of some $20 billion per annum), there remains the possibility of the use of sticks. Of the latter, the most plausible are carbon border taxes: taxing goods from countries that do not commit to climate change mitigation in order to ensure that those who do are not disadvantaged."Footnote 4 The preceding discussion highlights the difficulty of designing policy to resolve the energy trilemma. Moreover, the policies will still likely involve trade-offs. Managing the trilemma can be facilitated if the trade-offs can be quantified. A straightforward approach is the adoption of portfolio-based techniques widely used in financial markets. The general objective is to balance short-term costs with medium- to long-term price stability. The standard methodology is Markowitz's mean–variance analysis to determine the optimal energy mix for electricity generation. A recent application is the case of the Philippines (Balanquit and Daway-Ducanes [13]). In their study, they consider eight generating technologies, each associated with two important parameters: the expected rate of return ri and the risk measured by the variance in the return. These parameters are both derived from the technology's daily power price (PP) ratio, defined as the amount of energy sold or discharged over its average price. $${r}_{i}=E\left[\frac{{\mathrm{PP}}_{it}-{\mathrm{PP}}_{i(t-1)}}{{\mathrm{PP}}_{i(t-1)}}\right],$$ $${\sigma }_{i}^{2}=E\left[{\left(\frac{{\mathrm{PP}}_{it}-{\mathrm{PP}}_{i(t-1)}}{{\mathrm{PP}}_{i(t-1)}}\right)}^{2}\right]-{r}_{i}^{2},$$ $$E\left(r\right)={\sum }_{i=1}^{8}{\alpha }_{i}{r}_{i},$$ where \({\alpha }_{i}\in (\mathrm{0,1})\) is the share of technology \(i\) and that \({\sum }_{i}{\alpha }_{i}=1\). On the other hand, the expected portfolio risk is given by $$\mathrm{Var}\left(r\right)=\sum_{i=1}^{8}{\alpha }_{i}{\sigma }_{i}^{2}+2\sum_{1\le i\le j\le 8}{\alpha }_{i}{\alpha }_{j}{\sigma }_{ij},$$ where \({\sigma }_{ij}\) is the covariance of two distinct technologies \(i\) and \(j\). The methodology then adopts the approach of Markowitz [14] by minimizing a given portfolio's risk for every targeted rate of return r. The problem can be depicted as: $$\mathop {\min }\limits_{{\alpha_{i} \in \left[ {0,1} \right]}} {\text{Var}}\left( r \right) = \mathop \sum \limits_{i = 1}^{8} \alpha_{i}^{2} \sigma_{i}^{2} + 2\mathop \sum \limits_{1 \le i \le j \le 8}^{{}} \alpha_{i} \alpha_{j} \sigma_{ij}$$ $$s.t.\sum {_{i = 1}^{8} \alpha_{i} r_{i} = \overline{r}} ,$$ $${\sum }_{i=1}^{8}{\alpha }_{i}=1.$$ The procedure will yield optimal shares of each type of technology. A set of optimal portfolios can be depicted on the return-risk plane (Fig. 3). The curve is the optimal portfolio frontier. Any point to the left is infeasible while any point to the right is considered sub-optimal. Source: Fig. 6.1 of [13] An example of an optimal portfolio frontier. The energy trilemma is partially addressed in the portfolio model because energy security is associated with "risk" and equity is associated with "return". The authors claim that in their framework, consumer welfare is maximized in terms of price stability, energy security, and clean-energy investment, implying that the third horn of the trilemma, sustainability, is also incorporated. However, clean energy only figures in the discussion because VRE sources are among the eight technologies considered. 
There is no explicit procedure by which lower carbon emissions can be targeted. Unlike the application using Philippine data, the study of Stempien and Chan [15] makes categorical reference to the trilemma. Targeting "sustainability" is operationalized by adding another variable in the model: the expected return on emissions in terms of energy per unit of CO2, i.e., kWh per ton of CO2. Instead of having a two-dimensional optimal portfolio frontier, the efficient plane is as depicted in Fig. 4. The three dimensions represent the constraints imposed by the trilemma under which the portfolio is optimized. Source: Fig. 2 of [15] Modified Markowitz theory of energy portfolio optimization. Neither the studies of Balanquit and Daway-Ducanes [13] and Stempien and Chan [15] provide a mechanism to choose among the options along the optimal portfolio frontier. This can be done by specifying a set of indifference curves—or planes in the multi-dimensional case. These are analogous to the aforementioned welfare function. The indifference curves (planes) are specified by determining the risk–return profile of the policymakers involved, which can also be accomplished through methods associated with MCDM (see Box 1). The indifference curves should slope upward (Fig. 5). This indicates that in order for the investor to achieve the same level of utility, he must be compensated for accepting a greater level of risk with a higher expected rate of return. A higher indifference curve implies a higher level of utility. The choice of generation mix is where the indifference curve is tangent to the optimal portfolio frontier (point A in Fig. 5). In this framework, different policies will lead to various points in the risk–return plane. Policymakers should adopt the policy that generates the highest indifference curve or welfare. Equilibrium (point A) between optimal portfolio frontier and the indifference curves of the hypothetical DOE secretary Box 1 Multi-criteria decision-making Multiple-criteria decision-making (MCDM) or multiple-criteria decision analysis (MCDA) falls under the discipline of operations research. MCDM is a set of methodologies that deal with multiple criteria in decision-making. The methodologies that are identified in the literature mostly differ in terms of assigning weights to the criteria involved. Among the methods are the aggregated indices randomization method (AIRM), analytic hierarchy process (AHP), analytic network process (ANP), balance beam process, base-criterion method (BCM), best–worst method (BWM), Brown–Gibson model, etc. The AHP is applied in this study, the basic reference being Saaty [16]. By allowing the decision-maker to reveal his priorities, AHP streamlines a complex decision-making process. In a nutshell, a multifaceted process is reduced to a series of pairwise comparisons with the results being synthesized. AHP allows both subjective and objective aspects of a decision to be combined. The AHP generates a weight for each evaluation criterion according to the decision-maker's pairwise comparisons of the criteria. The higher the weight, the more important is the corresponding criterion. To make pairwise comparisons, a scale of numbers is established in order to indicate how many times more important or dominant one criterion is over another. The table below presents the scale. 
The fundamental scale of absolute numbers for AHP Preference scale Equally preferred 1 Two criteria contribute equally to the objective Equally to moderately preferred 2 Moderately preferred 3 Experience and judgment slightly favor one criterion over another Moderately to strongly preferred 4 Strongly preferred 5 Experience and judgment strongly favor one criterion over another Strongly to very strongly preferred 6 Very strongly preferred 7 A criterion is favored very strongly over another; its dominance demonstrated in practice Very strongly to extremely preferred 8 Extremely preferred 9 The evidence favoring one activity over another is of the highest possible order of affirmation Source: Saaty [17], page 86 A more complicated process is the Stochastic Multi-criteria Acceptability Analysis or SMAA (Lahdelma and Salminen [18]). This is a family of methods for aiding multi-criteria group decision-making in problems with uncertain, imprecise, or partially missing information. These methods are based on exploring the weight space in order to describe the preferences that make each alternative the most preferred one, or that would give a certain rank for a specific alternative. The main results of the analysis are rank acceptability indices, central weight vectors, and confidence factors for different alternatives. The rank acceptability indices describe the variety of different preferences resulting in a certain rank for an alternative, the central weight vectors represent the typical preferences favoring each alternative, and the confidence factors measure whether the criteria measurements are sufficiently accurate for making an informed decision.* SMAA was applied to the energy trilemma by Song et al. [19]. The different alternatives were evaluated based on three criteria which are the components of the trilemma. As an exercise, the authors used as alternatives the top ten countries based on the 2015 Energy Trilemma Index. Exact weights of the three criteria were not derived but these can be inferred from the reported rank acceptability indices. *Lahdelma and Salminen [18], page 285. Methods and framework Expanding the concept of energy security The IEA's website defines energy security as "the uninterrupted availability of energy sources at an affordable price. Energy security has many aspects: long-term energy security mainly deals with timely investments to supply energy in line with economic developments and environmental needs. On the other hand, short-term energy security focuses on the ability of the energy system to react promptly to sudden changes in the supply–demand balance."Footnote 5 Based on this rather broad definition, the concept of the trilemma is modified in this study. Energy governance seeks to promote energy security and one of the primary tasks is to manage the trade-off among its various components. Following the IEA's definition, these would be the major components to be considered: (1) adequate supply, (2) price, (3) environmental impact, and (4) ability to react promptly to sudden changes in the supply–demand balance. Hence, there is a "quadrilemma" among these components. Heretofore, however, the term "trilemma" is retained. A simulation package is applied to generate values of these four variables over a selected time period under reasonable assumptions. Some of these assumptions reflect policy choices. The trade-offs and synergies among the components of the trilemma are embedded in the equations of the simulation model. 
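As an illustration of how the AHP pairwise comparisons described in Box 1 can be turned into weights for the four components just listed (adequate supply, price, environmental impact, and the autarky-related ability to respond to shocks), consider the following minimal sketch. The comparison matrix is hypothetical and does not reproduce the judgments elicited from the experts in this study.

```python
import numpy as np

# Hypothetical pairwise comparison matrix on Saaty's 1-9 scale; reciprocal
# by construction (A[j,i] = 1/A[i,j]).  Illustrative values only.
criteria = ["supply", "price", "emissions", "autarky"]
A = np.array([[1.0, 3.0, 2.0, 5.0],
              [1/3, 1.0, 1.0, 3.0],
              [1/2, 1.0, 1.0, 3.0],
              [1/5, 1/3, 1/3, 1.0]])

# The normalized principal eigenvector of A gives the criterion weights.
eigvals, eigvecs = np.linalg.eig(A)
principal = np.real(eigvecs[:, np.argmax(np.real(eigvals))])
weights = principal / principal.sum()

# Consistency check: a consistency ratio below 0.10 is conventionally acceptable.
lam_max = float(np.max(np.real(eigvals)))
ci = (lam_max - len(A)) / (len(A) - 1)
cr = ci / 0.90                    # 0.90 = Saaty's random index for n = 4
print(dict(zip(criteria, weights.round(3))), round(cr, 3))
```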
The authors have access to PLEXOS and therefore the study is limited to power generation. The main advantages of PLEXOS are the transparency of its methodology, flexibility in its application, and robustness of the results. Electricity demand can be scaled down to zonal and nodal levels, enabling the model to generate locational marginal prices. This is important given the archipelagic topography of the Philippines, which necessitates constructing an electric grid wherein marginal prices vary significantly. Meanwhile, PLEXOS can model physical elements of the system in a more detailed resolution. This implies that the bidding behavior of various plants can be modeled, allowing the idiosyncratic features of different energy sources to be incorporated. For example, the temporal nature of solar and wind power is readily defined, and specific features can vary on a regional and plant basis. Finally, unlike other commercial software, PLEXOS does not resort to heuristics. Instead, it takes advantage of the computational power of commercial LP Solvers to handle the problem of modeling and simulating the full Philippine power system even in the long term. The robustness of the results derives from the ability of PLEXOS to carry out sensitivity analysis, allowing users to simulate various scenarios. A consistency check of the results leads to confidence that algorithms are performed correctly. What should also be emphasized is that the framework and methodology presented and applied in this study are invariant to the specific software and assumptions. Basic PLEXOS framework The long-term (LT) phase plan of PLEXOS is discussed in this section to highlight the trade-offs and synergies among the components of the expanded trilemma: autarky (AT), affordability (P), Supply (S), and Sustainability as measured by carbon emissions (C). The other components of PLEXOS are presented in the appendix. The LT phase seeks to solve the long-term generation capacity expansion problem by finding an optimal set of builds and simultaneously solving for the dispatch optimization problem from a central planner's perspective. In particular, the LT plan looks to identify what type of generator units to put in, where to put them in the system, and when to build it. This is further subjected to reliability constraints such as respecting capacity reserve requirements. The general objective is to minimize net present value of capital and production costs of future generator build decisions and retirements (Fig. 6). Costs can be classified into two categories: Capital costs C(x), consisting of costs attributed to building new generator capacity and generator retirements. Generator build costs include the fixed amounts required to pay for capital and service debts. Production costs P(x), which include costs of operating the system using the existing plant line-up plus a basket of candidate builds. Also included in the formulation of production cost is the notional penalty of unserved energy. Source: Energy Exemplar [20] Illustration of the objective of the LT Plan: minimize net present value of capital and production costs. Expansion candidates like variable renewable sources such as solar and wind are examples requiring relatively high capital costs and virtually minimal production costs. Liquid fuel resources such as oil-based generating units are expected to have high production costs. 
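Before turning to the formal statement of the LT Plan, the capital-versus-production-cost trade-off just described can be sketched as a stylized one-year, two-technology linear program; all figures below are illustrative and are not PLEXOS inputs.

```python
from scipy.optimize import linprog

# Stylised build-and-dispatch LP: a high-capex, zero-fuel-cost resource
# ("solar") versus a low-capex, high-fuel-cost resource ("oil").
# Decision variables x = [MW_solar, MW_oil, MWh_solar, MWh_oil].
capex = [60_000.0, 30_000.0]        # annualised build cost, $/MW-year (illustrative)
srmc = [0.0, 150.0]                 # short-run marginal cost, $/MWh (illustrative)
cap_factor = [0.25, 0.85]
energy_demand = 1_000_000.0         # MWh/year
firm_requirement = 230.0            # MW: peak load plus reserve margin
cap_credit = [0.3, 1.0]             # contribution to firm capacity

c = capex + srmc                                   # objective: total cost
A_eq = [[0, 0, 1, 1]]                              # energy balance
b_eq = [energy_demand]
A_ub = [[-8760 * cap_factor[0], 0, 1, 0],          # solar dispatch <= capacity
        [0, -8760 * cap_factor[1], 0, 1],          # oil dispatch <= capacity
        [-cap_credit[0], -cap_credit[1], 0, 0]]    # capacity adequacy
b_ub = [0.0, 0.0, -firm_requirement]

res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq,
              bounds=[(0, None)] * 4, method="highs")
mw_solar, mw_oil, mwh_solar, mwh_oil = res.x
# Raising srmc[1] (e.g. through a carbon tax) pushes the optimum further
# toward the capital-intensive, low-emission resource.
print(round(res.fun), round(mw_solar), round(mw_oil))
```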
Adding a carbon tax augments the production costs of carbon-intensive generating resources and hence prompts the simulator to look for a solution that moves away from these fossil fuel-based options, favoring renewable sources more. The minimal formulation of the LT Plan is as follows.

Minimize:
$$\begin{aligned} & \sum_{y}\sum_{g} \mathrm{DF}_{y}\,\mathrm{BuildCost}_{g}\,\mathrm{GenBuild}_{g,y} \\ & + \sum_{y}\sum_{g} \mathrm{DF}_{y}\,\mathrm{FOMCharge}_{g} \times 1000 \times \mathrm{PMAX}_{g}\left(\mathrm{Units}_{g} + \sum_{i \le y} \mathrm{GenBuild}_{g,i}\right) \\ & + \sum_{t} \mathrm{DF}_{t \in y}\, L_{t}\left[\mathrm{VoLL} \times \mathrm{USE}_{t} + \sum_{g}\mathrm{SRMC}_{g}\,\mathrm{GenLoad}_{g,t}\right] \end{aligned}$$

Energy balance constraint:
$$\sum_{g} \mathrm{GenLoad}_{g,t} + \mathrm{USE}_{t} = \mathrm{Demand}_{t} \quad \forall t$$

Feasible energy dispatch:
$$\mathrm{GenLoad}_{g,t} \le \mathrm{PMAX}_{g}\left(\mathrm{Units}_{g} + \sum_{i \le y} \mathrm{GenBuild}_{g,i}\right)$$

Feasible builds:
$$\sum_{i \le y} \mathrm{GenBuild}_{g,i} \le \mathrm{MaxUnitsBuilt}_{g,y}$$

Integrality:
$$\mathrm{GenBuild}_{g,y} \ \text{integer}$$

Capacity adequacy:
$$\sum_{g} \mathrm{PMAX}_{g}\left(\mathrm{Units}_{g} + \sum_{i \le y} \mathrm{GenBuild}_{g,i}\right) + \mathrm{CapShort}_{y} \ge \mathrm{PeakLoad}_{y} + \mathrm{ReserveMargin}_{y} \quad \forall y$$

Variables and parameters:
GenBuild(g,y): number of generating units built in year y for generator g (integer)
GenLoad(g,t): dispatch level of generating unit g in period t (continuous)
USE_t: unserved energy in dispatch period t (continuous)
CapShort_y: capacity shortage in year y (continuous)
D: discount rate, from which DF_y = 1/(1 + D)^y is the discount factor applied to year y, and DF_t the discount factor applied to dispatch period t
L_t: duration of dispatch period t (hours)
BuildCost_g: overnight build cost of generator g ($)
MaxUnitsBuilt(g,y): maximum number of units of generator g allowed to be built by the end of year y
PMAX_g: maximum generating capacity of each unit of generator g (MW)
Units_g: number of installed generating units of generator g
VoLL: value of lost load (energy shortage price) ($/MWh)
SRMC_g: short-run marginal cost of generator g, composed of heat rate × fuel price + VO&M charge ($/MWh)
FOMCharge_g: fixed operations and maintenance charge of generator g ($)
Load_t: average power demand in dispatch period t (MW)
PeakLoad_y: system peak power demand in year y (MW); see the demand-forecasting note (Footnote 6)

Footnote 6 (demand forecasting): To determine energy demand and its peak, GDP/economic growth across the forecast horizon is obtained along with the growth of energy demand. The historical relationship between these variables is then used to project energy demand quantities (GWh) and the peak load (MW). Implied growth rates of peak and energy demand are similar and are assumed in this exercise not to diverge across the horizon. To preserve the temporal patterns of electricity consumption (whose seasonality is affected by variables like temperature), an hourly profile of a base year (the most recent year) is used as the basis for the period-by-period load consumption of the forecasted years.
ReserveMargin_y: margin required over maximum power demand in year y (MW)
CapShortPrice: capacity shortage price ($/MW)

The formulation is illustrative only and is usually extended to include terms to handle candidate generators subject to inter-temporal constraints such as hydro energy limits, ramp-rate limitations, storage units like batteries, or contracts with minimum and maximum off-take requirements.

The following components of energy security are generated from PLEXOS: autarky (AT), affordability (P), supply (S), and sustainability (C). Autarky is defined as the share of energy from indigenous sources and is related to the ability to react promptly to sudden changes in the supply–demand balance. Affordability is equated to the price or cost of electricity. Meanwhile, the supply variable is proxied by the capacity reserve margin = (total generation capacity − peak load)/peak load. Sustainability is a broad concept. As explained earlier, sustainable development requires that the principles of public policy be extended to the environomy, the union of the environment and the economy. This requires the inclusion of natural resource depletion and pollution in production and consumer-preference structures.Footnote 7 This study simplifies the framework by using carbon emissions (C) as an indicator of sustainability. A more comprehensive set of indicators can be incorporated by expanding the welfare function.

Applying the model

Autarky (AT) is the annualized percentage of all indigenous generation (GenLoad(g,t)) against the total generation of all sources. Indigenous sources include renewable generation such as wind, biomass, solar, and geothermal, as well as resources fueled by domestic coal and gas. Recall that the model follows an economic dispatch algorithm. In order to satisfy the load at minimum total cost, the set of generators with the lowest marginal costs is used first, with the marginal cost of the final generator needed to satisfy the load requirement setting the system marginal price. System marginal prices are adjusted per location, with consideration of the cost of congestion and the cost of losses, to arrive at the locational marginal price. The affordability variable (P) is the annual load-weighted marginal price. Meanwhile, the supply variable (S) refers to the total built capacity of the existing fleet plus the additional generation fleet (GenBuild(g,y) × PMAXg) to meet the peak demand and reserve margin of each year. The carbon emission variable (C) refers to the carbon intensity. It is calculated as the sum of all emissions of carbon-emitting resources divided by the GWh generation in a year, or simply the sum of GenLoad(g,t) × CO2 emission factors divided by the sum of GenLoad(g,t) of all resources in the system. All variables AT, P, S, and C are further normalized to take a value of 0–1. In order to manage the trilemma, the variables are combined in a welfare function, thus:

$$W = \mathrm{AT}^{\alpha}\,\mathrm{P}^{\beta}\,\mathrm{S}^{\gamma}\,\mathrm{C}^{\delta}.$$

The parameters α, β, γ, δ are the weights of each factor in the welfare function, and the objective is to maximize welfare, W. Let W* be the maximum welfare; by definition

$$W^{*} = \mathrm{AT}^{\alpha^*}\,\mathrm{P}^{\beta^*}\,\mathrm{S}^{\gamma^*}\,\mathrm{C}^{\delta^*}.$$

Weights can be obtained through simulation-based optimization. However, a more practical application is to obtain the weights of a hypothetical Secretary of the Department of Energy (DOE).
His welfare function is \(W^{H} = \mathrm{AT}^{\alpha^H}\,\mathrm{P}^{\beta^H}\,\mathrm{S}^{\gamma^H}\,\mathrm{C}^{\delta^H}\), where the weights \(\alpha^{H}, \beta^{H}, \gamma^{H}, \delta^{H}\) can be obtained from the Analytical Hierarchy Process (or a similar procedure, as described in Box 1). \(W^{H}\) can then be used to evaluate policy options. As stated in the introduction, different policies will yield different values for the components of the trilemma, in this case AT, P, S, and C, thereby generating a set of values for W. The policy associated with the highest W can then be selected and implemented. Similar to the argument made earlier, the framework is invariant to the specific methodology used to obtain the weights. It should be noted that in the actual simulation, the welfare function is defined as

$$W = \mathrm{AT}^{\alpha} \left(\frac{1}{\mathrm{P}}\right)^{\beta} \mathrm{S}^{\gamma} \left(\frac{1}{\mathrm{C}}\right)^{\delta}.$$

A decline in the price level and in the amount of carbon emissions thus increases welfare. Moreover, the four variables are normalized to a [0,1] interval before W is calculated.

For the portfolio model, instead of a welfare function, a utility function U that depends on r and σ² is defined, i.e., U(r, σ²). The appropriate weights for risk and return can also be determined through one of the MCDM procedures. If patterned after the welfare function, the utility function can be specified as \(U = r^{\alpha}(\sigma^{2})^{\beta}\). Such an application is left for future study.

Using PLEXOS, the power sector was forecast for the period 2020–2040 under a market-based scenario (Fig. 7). In this approach, the electricity market is assumed to unfold along a path where growing demand is automatically satisfied in the least-cost manner. There is no mandated generation mix across the study period and no carbon tax is applied. Variable renewable energy costs are anticipated to continue along a significant downward trajectory. Meanwhile, domestic natural gas, as it depletes, is replaced by imported liquefied natural gas (LNG).

Fig. 7 Market-based simulation results using PLEXOS

Under the market-based scenario, coal remains a significant part of the mix as it is a cheap option for baseload duty. The share of coal in the mix is anticipated to reach a peak of more than 70 percent in the first half of the study horizon. Renewable energy generation, on the other hand, is seen to rise to unprecedented levels starting in the second half of the period. In 2040, the share of solar generation is estimated to increase to more than 10 times its share in 2020. Following this market-based scenario, autarky is expected to fall from a high of 54 percent in 2020 to 30 percent in 2030. The drop is driven by the increased dependence on imported fuel sources, namely coal, and the switch to imported LNG as local natural gas is depleted. Annual market price averages are projected to experience a slight increase from the initial price level, by approximately 0.7 P/kWh (real 2018 terms), towards the period 2031–2040. The uplift is presumed to provide signals to encourage additional investment to support growing demand and reserve requirements. Capacity reserve margins remain stable at 25 percent throughout the horizon. Carbon intensity is anticipated to climb in the near term, starting from 854 tCO2/GWh in 2020 and reaching a peak of 1048 tCO2/GWh in 2030, before slowly pulling back to 990 tCO2/GWh in 2040.
The rise of carbon intensity in the medium term is attributed to the increase in the share of thermal coal in the generation mix. On the other hand, the slow decline of carbon intensity in the second half is a result of the proliferation of variable renewable resources.

Meanwhile, two energy experts were interviewed in order to obtain values for the parameters α, β, γ, and δ. They are identified as (hypothetical) Secretary 1 and Secretary 2. The Analytical Hierarchy Process was applied by presenting the four goals on a pairwise basis to each expert. There are six pairwise comparisons to be made. The basic process of AHP is described in Box 1 and the results are shown in Table 4.

Table 4 Preferences of two hypothetical DOE secretaries

Secretary 3 represents the optimal weights obtained from a simulation-based optimization procedure. These are the values \(\alpha^{*}, \beta^{*}, \gamma^{*}, \delta^{*}\) described earlier. A corner solution is obtained, meaning that all parameters are zero except for β, which is unity. This is not surprising, since a policymaker who favors a market-based solution will definitely emphasize the least-cost alternative. Under the market-based scenario, the value of W is calculated as follows (Table 5):

Table 5 Value of W under market-based scenario

These are obtained by substituting the annual values of autarky (AT), affordability (P), supply (S), and sustainability (C) into Eq. (1) and taking the average of W over the period 2020–2040. To demonstrate the application of the framework in dealing with the trilemma, the policy of imposing a carbon tax is simulated. In this exercise, a carbon tax is imposed equivalent to the social cost of carbon (SCC), which is estimated to be USD 47.2 (real 2018) per metric ton of CO2. The estimate is from the United States Environmental Protection Agency (US EPA).Footnote 8 With an average discount rate of 3 percent, the social cost of carbon is USD 40.00 per metric ton of CO2 in 2018 using 2007 as a base year. This is converted to USD 47.2 to reflect current prices in 2018. Skeptics of climate change effects use a higher discount rate. At an average discount rate of 5 percent, the social cost of carbon falls to USD 12.00 per metric ton of CO2 in 2018. The debate on the appropriate level of carbon emissions and carbon tax is eschewed in this paper.Footnote 9

The carbon tax was incorporated in PLEXOS (see the section on the Basic PLEXOS Framework and the Appendix) by adding it to the short-run marginal cost (SRMC) of plants using coal, gas, and oil technologies. The appropriate emission factors were used. The simulation results after imposing a carbon tax are shown in Fig. 8. The major trade-off involved in this policy exercise is a rise in the price of electricity accompanied by a decline in carbon emissions. In other words, enhanced sustainability is achieved at the cost of a decline in affordability. This will worsen equity if the electricity demand of lower income classes has a higher price elasticity, which is the case in the Philippines (Dumagan and Abrigo [22]). The policy options can be evaluated by comparing the values of W (Table 6).

Fig. 8 Comparing the market-based scenario with a carbon tax scenario equal to 100% of the social cost of carbon

Table 6 Comparing welfare before and after imposition of a carbon tax

Welfare improves under a government headed by Secretary 1 or Secretary 2. Welfare declines under an administration led by Secretary 3. It should be noted that the value of W is higher under Secretary 3 for both policy regimes.
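The ranking mechanics behind Tables 5 and 6 can be sketched in a few lines: for a given set of weights, compute W for each policy scenario and pick the scenario with the higher W. The component values and the weights for Secretary 1 below are invented placeholders (only Secretary 3's corner solution, β = 1, is taken from the text), so the printed numbers are not the study's results.

```python
# Sketch of policy ranking with the welfare function W = AT^a * (1/P)^b * S^g * (1/C)^d.
# All component values and the Secretary 1 weights are illustrative placeholders.

def welfare(at, p, s, c, a, b, g, d):
    """All components are assumed to be normalized to (0, 1]."""
    return (at ** a) * ((1.0 / p) ** b) * (s ** g) * ((1.0 / c) ** d)

scenarios = {
    # scenario: (autarky AT, price P, supply S, carbon C), all normalized
    "market-based": (0.40, 0.60, 0.70, 0.90),
    "carbon tax":   (0.45, 0.70, 0.70, 0.70),
}
secretaries = {
    # weights (alpha, beta, gamma, delta) summing to 1
    "Secretary 1": (0.30, 0.30, 0.20, 0.20),   # placeholder weights
    "Secretary 3": (0.00, 1.00, 0.00, 0.00),   # corner solution: price only
}

for name, w in secretaries.items():
    ranked = sorted(scenarios, key=lambda sc: welfare(*scenarios[sc], *w), reverse=True)
    values = {sc: round(welfare(*scenarios[sc], *w), 3) for sc in scenarios}
    print(name, values, "-> prefers", ranked[0])
```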
Does this imply that Secretary 3 would be a more suitable head of the Department of Energy? Not at all. One can readily find a combination of values of the parameters and the variables that will generate a higher W. The parameters simply reflect the preferences of society. The welfare function is a mechanism to rank different policies given these parameters. What the results show is that both Secretary 1 and Secretary 2 will favor a carbon tax over a market-based scenario. Secretary 3 will not.

The current application of the framework demonstrates its usefulness in avoiding a policy gridlock. Without the welfare function, policymakers would grapple with the impact of the carbon tax. Would lower carbon emissions be an acceptable trade-off for a higher cost of electricity and the accompanying rise in inequity? Comparing the value of the welfare function under the two scenarios provides an objective basis for arriving at a decision. Progress can therefore be achieved even if the conflicts or trade-offs are not resolved. This is the essence of managing the trilemma.

Meanwhile, the reverse question can be investigated: given the parameters α, β, γ, δ, what would be the values of the components that maximize welfare? These can be designated as \(\mathrm{AT}^{*}, \mathrm{P}^{*}, \mathrm{S}^{*}, \mathrm{C}^{*}\). A time series for each variable can be generated. Policies can then be designed to target these values, with the full model taking into account the trade-offs and synergies. Another logical extension of the model is to include economic variables such as per capita GDP and poverty incidence in the analysis. This can be readily accomplished by linking PLEXOS to a full-fledged macroeconomic model. The welfare function can then include relevant economic variables.

Policies that improve all components of the welfare function, while rare, can be designed. The Philippines should take advantage of the passage of Republic Act No. 11285 (An Act Institutionalizing Energy Efficiency and Conservation, Enhancing the Efficient Use of Energy, and Granting Incentives to Energy Efficiency and Conservation Projects) in 2019. Measures to improve energy efficiency will yield higher outputs or services from the same amount of resources. These measures include green building codes, minimum energy performance standards for equipment, minimum standards for fuel efficiency, electric vehicles, and energy management systems in industries. Improving energy efficiency can positively affect all components of the trilemma at the same time. This hypothesis can be verified by simulating the impact of measures to enhance energy efficiency. La Viña et al. [10] point out that energy efficiency is part of the general strategy of demand-side management. This, in turn, is an element of an overall energy transition strategy called 'change of individual energy consumption behavior' (CIECB). Resolving the trilemma can be achieved by altering individual energy consumption behavior, which is characterized mainly by the use and purchase of energy services and devices. By understanding the factors that influence consumption behavior, such as income, education, age, geography, and mindset, a CIECB governance approach could help in designing policies that generate energy efficiency through effective demand-side management.

Data used in this study are available upon request, but can only be used to check the empirical results.
Permission to use the software has to be obtained through Energy Exemplar, the custodian of the PLEXOS Market Simulation Software.

Notes
World Energy Council [5], page 13.
[10], page 43. The paper of Gunningham was published in 2013.
The Indonesian government eliminated gasoline subsidies in 2015 and set fixed subsidies for diesel. For more details, please refer to https://www.oecd.org/fossil-fuels/publication/Indonesia%20G20%20Self-Report%20IFFS.pdf.
[11], pages 190–191.
https://www.iea.org/topics/energysecurity/ (accessed 26 November 2019).
The discussion on "sustainability" is based on Ravago and Roumasset [6], page 43.
https://19january2017snapshot.epa.gov/sites/production/files/2016-12/documents/sc_co2_tsd_august_2016.pdf (accessed 15 February 2020).
See for example Dietz and Stern [21].

References
1. World Bank Open Data (2021) https://data.worldbank.org/indicator/EG.ELC.ACCS.ZS. Accessed 21 January 2021
2. Index Mundi (2021) Electricity Consumption Per Capita. https://www.indexmundi.com/map/?v=81000. Accessed 23 January 2021
3. World Bank Open Data (2021) https://data.worldbank.org/indicator/NY.GDP.PCAP.KD. Accessed 31 July 2021
4. World Energy Council (2011) Policies for the Future: 2011 Assessment of Country Energy and Climate Policies. https://www.worldenergy.org/assets/downloads/PUB_wec_2011_assessment_of_energy_and_climate_policies_2011_WEC.pdf. Accessed 27 November 2019
5. World Energy Council (2019) World Energy Trilemma Index 2019. https://www.worldenergy.org/assets/downloads/WETrilemma_2019_Full_Report_v4_pages.pdf. Accessed 27 November 2019
6. Ravago MV, Roumasset JA (2018) The public economics of electricity policy with Philippine applications. In: Ravago MV, Roumasset JA, Danao RA (eds) Powering the Philippine economy: electricity economics and policy. University of the Philippines Press, Quezon City
7. Lazard (2019) Lazard's Levelized Cost of Energy Analysis, Version 13.0. https://www.lazard.com/media/451086/lazards-levelized-cost-of-energy-version-130-vf.pdf. Accessed 11 August 2020
8. Barbier EB, Burgess JC (2019) Sustainable development goal indicators: analyzing trade-offs and complementarities. World Dev 122:295–305
9. World Energy Council (2020) World Energy Trilemma Index 2020. https://www.worldenergy.org/assets/downloads/World_Energy_Trilemma_Index_2020_-_REPORT.pdf. Accessed 21 January 2021
10. La Viña AGM, Tan JM, Guanzon TIM, Caleda MJ, Ang L (2018) Navigating a trilemma: energy security, equity, and sustainability in the Philippines' low-carbon transition. Energy Res Soc Sci 35:37–47
11. Gunningham N (2013) Managing the energy trilemma: the case of Indonesia. Energy Policy 54:184–193
12. Lee ND, Hurlbut A, Dyreson MI, McCan I, Neri EV, Reyes NCR, Bagsik J (2020) Ready for renewables: grid planning and competitive renewable energy zones (CREZ) in the Philippines. National Renewable Energy Laboratory and United States Agency for International Development, Golden
13. Balanquit R, Daway-Ducanes SL (2018) Optimal energy mix using portfolio theory. In: Ravago MV, Roumasset JA, Danao RA (eds) Powering the Philippine economy: electricity economics and policy. University of the Philippines Press, Quezon City
14. Markowitz H (1952) Portfolio selection. J Finance 7:77–91
15. Stempien JP, Chan SH (2017) Addressing energy trilemma via the modified Markowitz mean-variance portfolio optimization theory. Appl Energy 202:228–237
16. Saaty TL (1980) The analytic hierarchy process. McGraw Hill, New York
17. Saaty TL (2008) Decision making with the analytic hierarchy process. Int J Services Sci 1(1):83–98
18. Lahdelma R, Salminen P (2010) Stochastic multicriteria acceptability analysis. In: Ehrgott M, Figueira JP, Greco S (eds) Trends in multiple criteria decision analysis. Springer Science+Business Media, New York
19. Song L, Fu Y, Zhou P, Lai K (2017) Measuring national energy performance via energy trilemma index: a stochastic multicriteria acceptability analysis. Energy Econ 66:313–319
20. Energy Exemplar (2015) PLEXOS 7.3 documentation. https://wiki.energyexemplar.com/. Accessed 30 November 2019
21. Dietz S, Stern N (2015) Endogenous growth, convexity of damages and climate risk: how Nordhaus' framework supports deep cuts in carbon emissions. Econ J 125:574–620
22. Dumagan JC, Abrigo MRM (2021) Rational choices and welfare changes in Philippine family energy demand: evidence from family income and expenditure surveys. Ateneo School of Government Working Paper 21-016. https://papers.ssrn.com/sol3/papers.cfm?abstract_id=3837066. Accessed 11 July 2021

Acknowledgements
The authors would like to acknowledge the financial support provided by the European Union and the Ateneo de Manila University School of Government through the Access to Sustainable Energy Program-Clean Energy Living Laboratories (ASEP-CELLs) project, and the excellent research assistance of Joyce Marie P. Lagac. They would also like to express their appreciation to MERALCO PowerGen Corporation for allowing the use of the PLEXOS software. The usual disclaimer applies. The study was conducted under the research component of the Access to Sustainable Energy Program-Clean Energy Living Laboratories (ASEP-CELLs) project, which is funded by the European Union and managed by the Ateneo de Manila University School of Government. Neither the EU nor ASoG participated in the study.

Author information
Josef T. Yap (Ateneo School of Government, Quezon City, Philippines); Aaron Joseph P. Gabriola and Chrysogonus F. Herrera (MERALCO PowerGen Corporation, Pasig City, Philippines). JTY is the main author and responsible for the framework and the bulk of the text in the paper. AJPG ran the simulations and prepared the write-up on the results and the description of the PLEXOS software. CFH provided guidance on the structure of the simulations. All authors read and approved the final manuscript. Correspondence to Josef T. Yap.

Appendix: PLEXOS Platform

PLEXOS is a commercial-grade optimization-based software used to model electricity markets. The forecasting approach using PLEXOS is largely simulation-based, in contrast to other known practices where forecasts are done by regression. Its core simulation engine is centered on mixed-integer programming, and the structure of the platform comprises interleaved simulation phases, namely:

Long-term phase (LT Plan)
Projected assessment of system adequacy (PASA)
Medium-term schedule (MT Schedule)
Short-term schedule (ST Schedule)

The phases are solved in sequence and the output of one becomes the input to the succeeding simulation steps. The LT Plan was presented in the main text. The PASA step looks to find the optimal timing of annual maintenance events of generating units. Outputs of the LT and PASA steps are passed on to the MT and ST Schedules to further solve the more detailed dispatch optimization problem, the final solution of which contains parameters of interest such as the projected hourly dispatch schedule of individual generating units and hourly system market prices.
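The sequencing just described can be pictured as a small data pipeline. The sketch below is purely illustrative of the flow of outputs into inputs; the functions and dictionary fields are invented stand-ins, not PLEXOS data structures or its API.

```python
# Purely illustrative sketch of the phase sequencing: each phase is a function
# whose output feeds the next. All names and values are invented placeholders.

def lt_plan(demand_forecast):
    # decide builds per year (placeholder logic)
    return {"builds": {2030: {"solar": 2, "gas": 1}}, "demand": demand_forecast}

def pasa(lt_out):
    # schedule maintenance so capacity reserves are equalized across peak periods
    return {**lt_out, "maintenance": {"coal_1": "May", "gas_1": "September"}}

def mt_schedule(pasa_out):
    # decompose medium-term energy limits into short-term targets (reduced chronology)
    return {**pasa_out, "daily_hydro_targets_gwh": {"weekday": 3.2, "weekend": 2.0}}

def st_schedule(mt_out):
    # full-chronology dispatch and pricing using all upstream decisions
    return {"hourly_dispatch": "...", "hourly_prices": "...", "inputs": mt_out}

result = st_schedule(mt_schedule(pasa(lt_plan(demand_forecast="hourly profile"))))
print(sorted(result["inputs"].keys()))
```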
PASA phase

The PASA simulation phase automatically schedules distributed maintenance events to equalize capacity reserves across peak periods (e.g., daily, weekly, monthly peak periods). Capacity reserve is the spare capacity over peak load in a region. Distributed maintenance events refer to outage periods typically required annually by generating plants to allow maintenance activities such as periodic maintenance, inspection of facilities, etc. Maintenance events are considered to occur in discrete periods and are explicitly expressed to cover an expected number of hours and to be performed at a defined frequency in a year. This is in contrast to forced outage events, where unplanned outages are drawn randomly. The PASA phase is run after the LT phase, when the annual future plant line-up is finalized. The distributed maintenance events are outputs of PASA and are passed down as input to the subsequent MT and ST simulation steps as optimal maintenance schedules. The optimal schedule of the PASA step is based mainly on capacity reserves and not on production costs. This means that maintenance timings handed down by PASA do not necessarily minimize the opportunity loss of an individual generator (due to lost revenue from the market).

MT schedule

The MT schedule deals with the key problem in power system modeling, which is to handle medium- and long-term decisions in a computationally efficient way. In particular, this includes effectively addressing inter-temporal constraints present in energy-constrained generating units such as hydropower, storage units like batteries, and contracts requiring minimum/maximum fuel off-takes, by solving the economic dispatch optimization problem under a reduced chronology. To illustrate, take for example a forecast horizon spanning 20 years: the simulator is expected to simultaneously optimize decisions at the higher resolution level (in this case, hourly) while respecting medium-term constraints that span weeks for an energy-constrained hydro generator or up to a year for a gas contract with a minimum gas off-take. A simple approach would be to formulate 20 × 8760 h = 175,200 dispatch intervals and solve them mathematically in one giant step. This simple approach, however, is in reality computationally expensive and impossible to solve even with modern-day computers. To work around this, the MT Schedule finds an alternative solution over a reduced number of simulated periods by grouping together "similar" dispatch intervals and assigning them to blocks. The MT schedule then optimizes decisions over this reduced chronology. The original medium-term constraints are then reduced to a set of equivalent short-term constraint targets and objectives that can be seamlessly integrated into the more detailed ST schedule that runs on the full chronology. For example, given an energy-constrained hydropower plant with monthly limits, the MT schedule, because of its reduced number of chronological steps, will solve for an approximate hydro dispatch schedule based on the medium-term constraint. According to these approximate medium-term decisions, there is a set of shorter-period target equivalents of the medium-term constraint that can be seamlessly passed on and enforced in the ST schedule, for instance by converting monthly into daily energy targets; a small sketch of this decomposition follows below. The ST schedule takes these daily targets as constraints added directly to the short-term formulation for its short-term dispatch policy.
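As promised above, here is a minimal sketch of how a monthly hydro energy limit can be split into daily targets in proportion to the approximate MT dispatch of each group of "similar" days. The numbers and the simple proportional rule are illustrative assumptions; PLEXOS' internal decomposition is more elaborate.

```python
# Sketch of decomposing a medium-term energy constraint into short-term targets.
monthly_limit_gwh = 90.0

# Approximate MT solution: energy assigned to three "similar-day" blocks
# (e.g. weekday / Saturday / Sunday), with the number of days in each block.
mt_blocks = [
    {"name": "weekday",  "days": 22, "mt_energy_gwh": 70.0},
    {"name": "saturday", "days": 4,  "mt_energy_gwh": 12.0},
    {"name": "sunday",   "days": 4,  "mt_energy_gwh": 8.0},
]

total_mt = sum(b["mt_energy_gwh"] for b in mt_blocks)
scale = monthly_limit_gwh / total_mt      # enforce the monthly limit exactly

daily_targets = {
    b["name"]: scale * b["mt_energy_gwh"] / b["days"] for b in mt_blocks
}
for name, target in daily_targets.items():
    print(f"{name}: {target:.2f} GWh/day")
print("implied monthly total:",
      round(sum(daily_targets[b["name"]] * b["days"] for b in mt_blocks), 1), "GWh")
```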
Because the MT schedule runs on a reduced chronology, it can deal with constraints that span longer periods such as weeks, months, or even several years.

Strategic bidding models

Included in the MT schedule step are methods for strategic bidding such as Long Run Marginal Cost (LRMC) recovery and the Residual Supply Index methodology. SRMC, or short-run marginal cost, refers to the variable costs of a generating unit's operation. LRMC refers to variable costs combined with the fixed costs covering fixed operation and maintenance and capital recovery fees to cover debt servicing and return to shareholders. The PLEXOS LRMC cost recovery method is an automated price modification heuristic in which the price of generation from each generator that belongs to a company is modified to reflect the fixed cost burden of the company as a whole. This price modification is dynamic, done iteratively, and designed to be consistent with the goal of recovering fixed costs across an annual time period. The Residual Supply Index (RSI) method is an empirical approach to modeling strategic bidding. It adopts a historical relationship (regression) between price-cost mark-up and certain system conditions, uses it to predict bid-cost mark-ups under future system conditions, applies the bid-cost mark-ups to the supply bids, and runs the model to determine dispatch and market-clearing prices.

ST schedule

The ST schedule is a full chronological production cost simulation model used to emulate the dispatch and pricing of the real-time market clearing engine of the Wholesale Electricity Spot Market (WESM). The ST schedule solves both the economic dispatch and unit commitment problems simultaneously. At its core is the following economic dispatch and unit commitment formulation.

Minimize
$$F = \sum_{t=1}^{T}\sum_{i=1}^{N}\left[C_{i}\left(P_{\mathrm{G}i}(t)\right) + S_{i}\left(u_{i}(t)\right)\right]$$

subject to:
$$\sum_{i=1}^{N} P_{\mathrm{G}i}(t) = P_{D}^{\mathrm{total}} + P_{\mathrm{loss}}$$ (power balance)
$$P_{\mathrm{G}i}^{\mathrm{min}} \le P_{\mathrm{G}i} \le P_{\mathrm{G}i}^{\mathrm{max}}$$ (generating unit operating limits)
$$u_{i} \in \{0,1\}$$ (on or off)
plus other unit constraints (minimum up/downtime, ramp rate, etc.), where:

C_i is the fuel cost of generating unit i
S_i is the start-up or shut-down cost of generating unit i
u_i is the start-up/shutdown decision variable of generating unit i
P_Gi is the generation output of generating unit i
P_d is the total demand plus losses at time t
P_D^total is the total demand
P_loss is the total transmission losses

Marginal prices and nodal prices: The linear programming formulation described above refers to the primal problem, which deals with physical quantities such as generation and demand. The formulation can be converted to a dual problem that primarily deals with economic values. The solution to the dual problem gives the marginal price for energy, which is the optimal value of the dual variable associated with the power balance constraint (∑ P_Gi(t) = P_D^total + P_loss). The marginal price represents the change in total system cost (in $) for every one-unit change in load (in MW). The formula is as follows:

$$\lambda = \delta C/\delta D$$

where λ is the system lambda, δC is the change in total system cost ($), and δD is the change in load (MW).
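A toy dispatch makes this interpretation of λ concrete: dispatch a small fleet in merit order, increase demand by 1 MW, and observe the change in total cost. The generator data are invented for illustration, and the merit-order shortcut stands in for the full LP dual; it is not the PLEXOS solver.

```python
# Sketch of lambda = dC/dD: the system marginal price equals the change in
# total dispatch cost for a 1 MW change in load.

def dispatch_cost(load_mw, fleet):
    """Merit-order economic dispatch; returns total cost per hour in $."""
    cost, remaining = 0.0, load_mw
    for srmc, pmax in sorted(fleet):          # cheapest units first
        gen = min(remaining, pmax)
        cost += gen * srmc
        remaining -= gen
        if remaining <= 0:
            break
    return cost

# (SRMC $/MWh, PMAX MW) for each unit -- illustrative values
fleet = [(25.0, 500.0), (45.0, 400.0), (120.0, 300.0)]
load = 800.0

c0 = dispatch_cost(load, fleet)
c1 = dispatch_cost(load + 1.0, fleet)
print("system lambda ~", c1 - c0, "$/MWh")    # equals the SRMC of the marginal unit
```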
In a lossless transmission network with no transmission constraints, the marginal prices at each electrical bus (represented by a trading node) are the same. Considering electrical network losses in the formulation results in a separation of nodal prices. The same is true as network constraints causing congestion are introduced. The nodal price can be described as the system marginal price plus the cost of losses and the cost of congestion:

$$\lambda_{i} = \lambda + \alpha_{i} + \beta_{i}$$

where λ_i is the nodal price, α_i is the node's cost of congestion, and β_i is the node's marginal loss charge.

Yap, J.T., Gabriola, A.J.P. & Herrera, C.F. Managing the energy trilemma in the Philippines. Energ Sustain Soc 11, 34 (2021). https://doi.org/10.1186/s13705-021-00309-1

Keywords: Energy trilemma; Energy security, equity and sustainability; Policy gridlock; Multi-criteria decision-making; Welfare function
Sex differences in thermal detection and thermal pain threshold and the thermal grill illusion: a psychophysical study in young volunteers Beate Averbeck ORCID: orcid.org/0000-0002-1194-95321,3, Lena Seitz1, Florian P. Kolb1 & Dieter F. Kutz2 Sex-related differences in human thermal and pain sensitivity are the subject of controversial discussion. The goal of this study in a large number of subjects was to investigate sex differences in thermal and thermal pain perception and the thermal grill illusion (TGI) as a phenomenon reflecting crosstalk between the thermoreceptive and nociceptive systems. The thermal grill illusion is a sensation of strong, but not necessarily painful, heat often preceded by transient cold upon skin contact with spatially interlaced innocuous warm and cool stimuli. The TGI was studied in a group of 78 female and 58 male undergraduate students and was evoked by placing the palm of the right hand on the thermal grill (20/40 °C interleaved stimulus). Sex-related thermal perception was investigated by a retrospective analysis of thermal detection and thermal pain threshold data that had been measured in student laboratory courses over 5 years (776 female and 476 male undergraduate students) using the method of quantitative sensory testing (QST). To analyse correlations between thermal pain sensitivity and the TGI, thermal pain threshold and the TGI were determined in a group of 20 female and 20 male undergraduate students. The TGI was more pronounced in females than males. Females were more sensitive with respect to thermal detection and thermal pain thresholds. Independent of sex, thermal detection thresholds were dependent on the baseline temperature with a specific progression of an optimum curve for cold detection threshold versus baseline temperature. The distribution of cold pain thresholds was multi-modal and sex-dependent. The more pronounced TGI in females correlated with higher cold sensitivity and cold pain sensitivity in females than in males. Our finding that thermal detection threshold not only differs between the sexes but is also dependent on the baseline temperature reveals a complex processing of "cold" and "warm" inputs in thermal perception. The results of the TGI experiment support the assumption that sex differences in cold-related thermoreception are responsible for sex differences in the TGI. Evidence suggests women and men experience and report pain differently. The most pronounced sex differences have been found for heat pain, with females showing lower heat pain threshold, tolerating less thermal heat and perceiving hot temperatures as more painful and more unpleasant than males [1,2,3,4,5,6]. Sex differences in cold pain as well as in thermal non-painful sensation have been described more rarely in the literature and the existing studies report higher sensitivity of females compared with males [2, 4, 7, 8]. Sex differences in the thermal grill illusion (TGI), a phenomenon reflecting crosstalk between the thermoreceptive and nociceptive systems, have not been investigated so far. The TGI, first reported by Thunberg in 1886, is generated by pairing innocuous warm and cold temperatures. This leads to a sensation of strong, but not necessarily painful, heat often preceded by transient cold [9]. Several studies indicate that the TGI is a very complex phenomenon that is generated by central higher order processing and reveals a relationship between the thermoreceptive and nociceptive systems [10,11,12,13,14,15,16,17,18,19,20,21,22,23]. 
Thus, sex differences in thermoreception and thermal pain perception may be related to sex differences in the TGI. The aim of the present study was to determine sex differences in the TGI by testing 136 undergraduate medical students. We recorded qualities and intensities of sensation evoked by a 20/40 °C thermal grill stimulus in comparison to a uniform cold (20 °C) or warm (40 °C) stimulus in order to analyse sex differences in the changes of sensation evoked by thermal grill stimulation. In addition, we determined sex differences in thermal detection and thermal pain threshold in 1252 undergraduate students of medicine and dentistry, by retrospectively analysing quantitative sensory testing (QST) data that had been collected in student laboratory courses. QST data included cold and warm detection thresholds at different baseline (adaptation) temperatures between 20 and 40 °C. This allows further investigation of thermal sensation circuitries as earlier studies have analysed thermal detection thresholds only at baseline temperatures around the neutral/comfort zone of 32 °C [24, 25]. After finding sex differences in TGI and QST data, the objective was to test the hypothesis of a sex-dependent correlation of the TGI with the subject's thermal sensitivity and/or thermal pain sensitivity. Therefore, we correlated the TGI with cold or warm sensation and, in addition, with thermal pain sensitivity. The thermal grill experiments were performed in a group of 136 medical undergraduate students of the Ludwig-Maximilians University Munich (58 males and 78 females, aged 20–30 years). Volunteers were recruited by signing a list available in the student laboratories. This list included all information about the purpose of the study including the aim, i.e. the investigation of sex differences in thermoreception. On the day of the experiment, the subjects gave consent to participate in the experiments. The retrospective analysis of thermal sensation data included 12,874 records of QST measurements from 1252 students (776 females and 476 males, mean age 22 ± 3 years (median interquartile range)). The data were collected during neurophysiological laboratory courses for medical and dental undergraduate students at the Ludwig-Maximilians University Munich (Germany) in the years 2007–2011. Students were informed that data of thermal sensitivity were to be gathered from healthy subjects and that the data were later to be analysed anonymously to generate comprehensive results for instruction and publication purposes. Six or seven students of each class of around 20 students volunteered to undergo the non-invasive tests; the others performed the acquisition of data or other tasks. Volunteers gave consent to participate in the experiments before the start of the tests. The analysis was performed with the permission of the local ethics committee of the Ludwig-Maximilians University of Munich. For analysing correlations of thermal pain threshold with the TGI, all values were determined in one experimental session for each of 40 (20 female and 20 male) medical students. The recruitment of these subjects (aged 20–30 years) who were not part of the 136 cohort was the same as described above for the thermal grill experiments. On the day of the experiment, the subjects gave consent to participate in the experiments. 
The whole study (all three experiments) complied with the guidelines established by the Declaration of Helsinki and was approved by the local ethics committee of the Ludwig-Maximilians University Munich for experiments involving human subjects. Equipment and experimental protocol The thermal grill experiments were performed by an investigator. The subjects were naïve with respect to the "illusion phenomenon". They were informed about the rating procedure and assured of the harmlessness of all stimulation parameters. For stimulation, subjects placed the palmar surface of the right hand on the thermal grill that was fixed to a table. The setup of the thermal grill device has been described elsewhere [19]. Briefly, the thermal grill consists of 15 bars (tubes) that are perfused with warm or cold water. The temperatures tested were 20, 40, and 20 °C alternating with 40 °C (thermal grill stimulus). A second thermal grill with all bars (tubes) held at 32 °C was used as a control to establish a baseline temperature of the skin immediately prior to each thermal stimulus trial. To examine sensations associated with grill stimulation, the hand was first placed on the 32 °C reference grill surface for 20 s. The hand was then exposed to either a uniform 20° or uniform 40 °C stimulus or the interleaved 20/40 °C (grill) stimulus for 20 s. Each stimulus was presented three times with a minimum inter-stimulus interval of 5 min. After each stimulus presentation, subjects were asked to specify their evoked perception by using the descriptors "warm/heat, cold, unpleasantness, pain, burning, stinging and prickling". Then, the subjects rated the intensities of their sensations using numeric rating scales (NRS) from 0 to 100 to rate the "thermal intensity" of (a) their cold sensation and (b) their warm sensation. The scale anchors were "0 = neutral" and "100 = worst cold or worst warm/hot", respectively, and along one side of the scale there were three additional descriptors indicating that the subject should rate intensities of perceived coldness or warmth/heat. The scale was numbered from 0 to 100 in increments of 10. In addition, subjects rated perceived pain and unpleasantness on numeric scales from 0 to 100, also numbered from 0 to 100 in increments of 10 and with the anchors "0 = no pain or no unpleasantness" and "100 = worst pain or as unpleasant as can be imagined". QST measurements were self-performed by the students in groups of two or three. The students were informed about the procedure of the measurements but received no prior training. In each group, one student served as subject and the other(s) operated the computer and recorded the data. For stimulation, a computerized thermotest device TSA 2001-II Neurosensitive Analyser (MEDOC, Ramat Yishai, Israel) was used with a standard 30 × 30 mm thermode. The method of limits was employed [26] and the rate of temperature change was 1.5 °C/s. The cut-off temperatures were 0 and 50 °C, respectively. During the experiment, the subject was not able to see the computer screen. Cold and warm detection thresholds (CDT and WDT, respectively) were measured at different baseline temperatures of the thermal ramp stimulus with the thermode affixed with a Velcro strip to the ventral surface of the forearm near the wrist. Table 1 shows the sequence of measurements at different baseline temperatures in the range of 20–40 °C resulting in different values of CDTX°C and WDTX°C. 
The subject signalled that a threshold had been reached by pressing a button, at which point the temperature change of the thermode was halted, the direction reversed and the temperature returned to the respective baseline temperature. Subjects were instructed to press the button as soon as they detected a change of the temperature ("Press the response button immediately when you perceive a change of the thermode temperature, i.e. warmer during WDT tests or colder during CDT tests"). Thresholds of five consecutive runs were averaged to determine CDTX°C and WDTX°C.

Table 1 Sequence of measurements at different baseline temperatures in the range of 20–40 °C

For measuring cold and heat pain thresholds (CPT and HPT), the thermode was placed on the subject's skin so as to stimulate the thenar eminence. The baseline temperature was 32 °C. The instruction was as follows: "Indicate by pressing a button the occurrence of a painful or unpleasant sensation of heat or cold, respectively." Thresholds of five consecutive runs were averaged to determine CPT and HPT.

Data used for correlation analyses of the TGI with thermal pain threshold were obtained in an additional experiment that was carried out by an investigator. The TGI, CPT and HPT were measured in 40 subjects (20 female and 20 male medical students) who had been trained in QST measurements. The subjects were naïve with respect to the "illusion phenomenon", as in the thermal grill experiment with 136 subjects. The thermal grill and QST parameters were the same as described above.

Statistical analysis was performed using the program SPSS Statistics Version 22, IBM, Chicago, IL, USA. With respect to the thermal grill experiments, the Kolmogorov–Smirnov test revealed that for NRS ratings of thermal sensation and for thermal threshold, data values were not normally distributed. Therefore, the median as well as the first and third quartiles (boxes) and range (error bars) were used for data description, and Friedman's ANOVA with post hoc testing (Wilcoxon's signed-rank test) was employed for statistical analysis of the thermal grill-evoked sensations. To assess sex differences in the qualities of the thermal grill-evoked sensations, the Mann-Whitney U test was performed. Differences in sensations evoked by the uniform cold (20 °C) or warm (40 °C) stimulus and the thermal grill (20°/40 °C interleaved) stimulus are presented as means ± standard error of the mean (SEM; Δ values) for male and female subjects separately. For analysing sex differences in the intensities of grill-evoked sensations, Student's t test for unpaired samples was performed. Linear regression analyses were carried out to evaluate the effects of thermal sensation and thermal pain sensation on the thermal grill-evoked warmth/heat and coldness. To assess dependency between two variables of non-normally distributed data, Spearman's correlation coefficient (ρ) was calculated. A Bonferroni-type adjustment was made for multiple correlation analyses. A P value < 0.05 was considered to indicate a statistically significant difference and is indicated by an asterisk (*) in the tables and figures.

To assure the quality of the retrospectively analysed QST data, the variances of all 12,874 records were analysed. The variances were distributed in the range 0.00–260.26. The distribution of variances is characterized as follows: median 0.280, interquartile range (IQR) 1.430, mean 2.577, and standard deviation (SD) 9.565. Skewness was reduced by calculating an upper criterion (Eq. 1),
following McGill and colleagues [27]:

$$\mathrm{crit} = \mathrm{median} + 1.5 \times \mathrm{IQR} \quad (1)$$

Any record with a variance above a crit value of 2.425 was excluded from further analysis. In addition, records with incomplete or implausible data were excluded, i.e. data lacking information about sex or age or data obtained using incorrect stimulation sites. Some 9940 records remained for further analysis. Following the suggestions by Rolke and colleagues [2], all records were logarithmized (base 10) before being subjected to any statistical test. Sex differences were analysed post hoc using Tukey's honest significant difference test [28]. Analyses were performed using the language for statistical computing R [29]. For the analysis of the multimodal distribution of CPTs, the CPT data were fitted by a Gaussian mixture model using the R-program AdaptGauss [30] (Additional file 1). For calculating the probability density function (PDF), the median of five repetitions of the CPT test of each subject was estimated and rescaled for stimulus intensity, and a log transformation was subsequently carried out. The PDF was calculated for all subjects participating in the CPT measurements (N = 296).

Thermal grill illusion (TGI)

Thermal grill-evoked sensations were analysed in 136 subjects (78 females and 58 males). Sensations reported after contact between the palmar hand surface and the thermal grill in grill mode, i.e. with alternating 20 and 40 °C tempered bars, exhibited a characteristic thermal profile. All 136 subjects reported sensations of "warm" or "hot" in the centre of the area in contact with the skin, and 41 subjects (30%) described this sensation as painful (Table 2). All subjects (except three) reported "cold", mostly at the periphery of the contact surface (e.g. at the finger tips). The intensity of the cold sensation changed during grill stimulation in some subjects, and 10 subjects (7%) reported no "cold" at the end of the grill stimulation. Some 84 subjects (65%) described the entire sensation as "unpleasant" (Table 2). When asked to report on additional qualities of the evoked perception, the descriptors burning, stinging and prickling were chosen by 57, 20 and 21 subjects (42, 15 and 15%), respectively (Table 2). Similar thermal grill data were obtained in the additional group of 40 subjects; the data are summarized in a table (see Additional file 1: Table S1).

Table 2 Sex differences in the sensations evoked by thermal grill stimulation (20/40 °C)
Similarly, regarding the sensation of unpleasantness, the three stimulation conditions cold, warm and grill differed significantly (χ 2 = 85.5, p < 0.001, Friedman's ANOVA, post hoc tests using Wilcoxon's signed-rank test). Pain was felt by only one subject on exposure to the uniform warm stimulus (35 on the NRS). Under grill stimulation, pain was rated as less intense than unpleasantness (median value of 0 versus 20, p < 0.001, Wilcoxon's signed-rank test). Numeric scale ratings (NRS) of thermal sensations (cold, warm/ heat), unpleasantness and pain evoked by placement of the right hand on the thermal grill for 20 s. For insets "cold" and "warm/ heat" hold "0 = neutral" and "100 = worst cold or worst warm/hot", respectively. For insets "pain" and "unpleasantness" hold "0 = no pain or no unpleasantness" and "100 = worst pain or as unpleasant as can be imagined", respectively. Three different thermal stimuli were tested: uniform 20 °C, uniform 40 °C and grill (bars tempered alternately at 20 and 40 °C) and each stimulus induced up to four different sensations. Ratings of all subjects (N = 136) are shown as medians with first and third quartiles (box) and range (whiskers, i.e. capped bars). Friedman's ANOVA with post hoc testing using Wilcoxon's signed-rank test was performed and significant differences are marked by asterisks Figure 1 shows numeric scale ratings of thermal sensations evoked by stimulation with the thermal grill. Sex differences in the TGI Subjects were asked to report on the quality of the perception evoked by the thermal grill stimulation (20/40 °C interleaved). Table 2 lists the descriptors that the subjects could choose combined with the selection frequency for female and male subjects separately. Significant sex differences were observed regarding the sensations of "unpleasantness", "pain" and "burning". These sensations were described more often by females than males (p < 0.05, Mann-Whitney U test). Females and males did not differ with respect to the rated intensities of warm or cold sensation evoked by the uniform warm or cold stimuli (see Additional file 2: Figure S1). Additionally, both sexes rated the grill stimulus as warmer and more unpleasant than the uniform warm stimulus (p < 0.001, Wilcoxon's matched pairs test, Bonferroni's correction, Fig. 2). In order to analyse sex differences in thermal grill-evoked sensations, the changes of sensation intensities on moving from the uniform 20 or 40 °C to the grill mode (20/40 °C interleaved) were calculated. With respect to the change in warm intensity on moving from the uniform 40 °C to the grill mode, females showed greater changes than males. The difference in warm intensity between the two stimuli 40 °C and grill (∆ values) was 14.0 ± 1.2 versus 9.7 ± 1.3 for female and male subjects, respectively, p < 0.05, Student's t test, Fig. 2b). The difference in cold intensity was −4.5 ± 1.5 versus −2.5 ± 1.6 for female and male subjects (n.s., Student's t test, Fig. 2a). The difference in the sensation of unpleasantness was 24.9 ± 3.1 (females) versus 16.3 ± 3.0 (males) for the two stimulation conditions 20 °C and grill (p < 0.05, Student's t test, Fig. 2c) and 22.9 ± 2.9 (females) versus 13.0 ± 3.1 (males) for the two stimuli 40 °C and grill (p = 0.05, Student's t test, Fig 2d). Sex differences in the sensation of unpleasantness, with females rating higher intensities than males, were also significant when including only the subjects who felt unpleasantness by thermal grill stimulation (data not shown). 
The difference in pain intensity between the two stimuli "uniform 20 or 40 °C" and grill was 7.9 ± 1.6 and 3.8 ± 1.2 for female and male subjects, respectively (p < 0.05, Student's t test, Fig. 2e). However, when including the responders of pain only, the rated intensities did not differ between the two sexes (22.9 ± 2.6, n = 30 and 20.0 ± 2.8, n = 11 for females and males respectively, data not shown). The change in intensity of different sensations (∆ values, means ± SEM, panels a−e) occurring by moving from the uniform temperature condition (20 or 40 °C) to the thermal grill condition (20 and 40 °C interleaved). To assess sex differences, Student's t test for unpaired samples was performed and significant differences are marked by asterisks. In panel e, sensations evoked by the 20 or 40 °C stimulus were pooled because stimulation with uniform temperatures did not induce any pain sensation In summary, female subjects more often felt a burning sensation, unpleasantness and pain with a grill stimulus set at a 20/40 °C pattern than did males. In addition, females felt the grill stimulus, in comparison to the uniform cold or warm stimulus, as significantly warmer, less cold and more unpleasant than males. Figure 2 shows sex differences in thermal grill-induced sensations. Sex differences in thermal thresholds To investigate sex differences in thermal detection and thermal pain threshold, a total of 9940 records from 1252 students (776 females and 476 males) were analysed. Thermal detection thresholds (CDT, WDT) were measured on the ventral surface of the forearm. Mean threshold values (°C from baseline) are shown in Fig. 3 and means ± SEM are summarized in Table 3. Both CDT and WDT were dependent on the baseline (adaptation) temperature, i.e. the starting temperature of the thermal ramp stimulus. The curve of WDT plotted against the baseline temperature (20–40 °C) showed a systematic decrease of WDT, with females showing significantly lower WDT values (smaller ∆ warm values in Fig. 3b) than males over the whole range of baseline temperatures tested (Tukey's HSD, p < 0.05, Fig. 3b, Table 3). Within each sex group, WDT was significantly different in pairwise comparisons of neighbouring baseline temperatures, except for the pair 35 vs. 40 °C (Tukey's HSD, p < 0.001). In contrast, the function of CDT vs. baseline temperature showed a completely different progression, namely that of an optimum curve with the highest CDT (smallest ∆ cold value in Fig. 3a) at 30 °C baseline temperature. CDT values (∆ values) were significantly higher at all other baseline temperatures tested (Tukey's HSD, p < 0.001, Fig. 3a, Table 3). This effect was independent of sex. Females had significantly lower CDT values (smaller ∆ cold values) than males at all baseline temperatures tested (Tukey's HSD, p < 0.05, Fig. 3a). Cold (a) and warm (b) detection thresholds at different baseline (adaptation) temperatures for female (♀) and male (♂) participants. Means (log10(°C from baseline)) ± 95% confidence band of the means of the detection thresholds are shown. Mean threshold values of females are presented as open circles (confidence band: white area) and means of males are presented as filled triangles (confidence band: grey area) Table 3 Thermal detection thresholds at different baseline temperatures Figure 3 shows sex differences in cold and warm detection thresholds at different baseline temperatures. CPT and HPT were measured over the thenar eminence starting at a baseline temperature of 32 °C. 
CPT for females was significantly higher (less cold) and HPT lower than for males (Tukey's HSD, p < 0.01). The median CPT was 17.7 °C (N = 208) for females and 14.0 °C (N = 88) for males. The median HPT was 45.9 °C (N = 333) for females and 48.1 °C (N = 247) for males. In summary, females were more sensitive with respect to thermal detection and thermal pain thresholds than males. Analysis showed that the distribution of CPTs was clearly multi-modal. The distribution of the N = 296 log-transformed threshold data could be described with a Gaussian mixture model composed of M = 6 Gaussians (Fig. 4 and Additional file 3: Table S2). The first two Gaussians at 31.3 °C (grey) and 30.4 °C (yellow), after data was retransformed from the log domain to the threshold temperatures, represent the false responses of the subjects indicating CDT instead of CPT. The modes of the Gaussians #3, #4, #5 and #6 (Fig. 4: green, cyan, orange and blue curves) were obtained at 27.0, 23.0, 16.5 and 2.1 °C. The Gaussians #3, #4 and #5 showed significantly different modes between male and female subjects (Welch modified two-sample t test, p < 0.01; Additional file 3: Table S2). Gaussian mixture model of cold pain thresholds. The graphs display the data after rescaling for stimulus intensity and subsequent log transformation. The density distributions are presented as probability density function (PDF), estimated using a Gaussian mixture model (R-program AdaptGauss [30]). The optimum number of mixes was found to be M = 6. Black curve: PDF; red curve: Gaussian mixed model, Gaussians: grey = 1, yellow = 2, green = 3, cyan = 4, orange = 5, blue = 6; purple: Bayes boundaries of the Gaussians. ALL: N = 296 subjects, FEMALE: N = 208, MALE: N = 88 Figure 4 shows a Gaussian mixture model of cold pain thresholds. Correlation of the TGI with thermal sensitivity The sensation of warm evoked by the uniform warm stimulus correlated positively with the sensation of cold evoked by the uniform cold stimulus in both sexes (ρ = 0.26, p = 0.009 after Bonferroni's correction, data not shown) demonstrating that the thermal sensitivities to warmth and coldness are associated. Figure 5 illustrates simple linear regression results using either warm stimulus-evoked warm sensation or cold stimulus-evoked cold sensation as the only independent variable: The increase in warm sensation evoked by grill stimulation (∆ grill heat, i.e. the difference in warm intensity between the two stimuli 40 °C and grill) correlated negatively with the warm stimulus-evoked warm sensation in males (ρ = −0.38, p = 0.006 after Bonferroni's correction, Fig. 5b) whereas there was no significant correlation in females (ρ = 0.03, n.s.). Thus, in male subjects ∆ grill heat decreased with increasing sensitivity to warmth. ∆ grill cold, i.e. the difference in cold intensity between the two stimuli 20 °C and grill, decreased with an increasing cold stimulus-evoked sensation of cold (negative correlation with ρ = − 0.30, p = 0.001 after Bonferroni's correction). This means that the higher the cold sensitivity of a subject the less cold was the sensation evoked by the grill stimulus. This effect was stronger in females than in males (ρ = − 0.36, p = 0.001 after Bonferroni's correction) for females versus ρ = − 0.19, n.s. for males, Fig 5a). The correlation between ∆ grill cold, i.e. the change in cold intensity from the 20 °C to the thermal grill condition (20/40 °C) and the intensity of cold during the 20 °C condition (a). The correlation between the ∆ grill warm, i.e. 
The correlation between ∆ grill cold, i.e. the change in cold intensity from the 20 °C to the thermal grill condition (20/40 °C), and the intensity of cold during the 20 °C condition (a). The correlation between ∆ grill warm, i.e. the change in warm intensity from the 40 °C to the thermal grill condition (20/40 °C), and the intensity of warmth during the 40 °C condition (b). A positive number on the y-axis indicates an increase in cold or warm sensation during thermal grill stimulation compared to the uniform cold (20 °C) or warm (40 °C) stimulus, respectively. A negative number indicates the opposite, i.e. a decrease. Data of male (N = 58) and female (N = 78) subjects are presented separately using different symbols and colours. As values of thermal intensity ratings were not normally distributed (Kolmogorov-Smirnov test), Spearman's correlation coefficient ρ was calculated. P < 0.05 after Bonferroni's correction was considered significant and is indicated by asterisks.
Figure 5 shows correlations of the thermal grill-induced changes in thermal sensations with thermal sensitivity.

Correlation of the TGI with thermal pain threshold
Consistent with the retrospective data analysis of QST measurements in 1252 subjects, females in the group of 40 subjects showed significantly higher (less cold) CPT values than males (median CPT at the thenar eminence 10.4 for females versus 7.4 for males, p < 0.03, Mann Whitney U test, Additional file 1: Table S1). Regarding ∆ thermal pain threshold (HPT versus CPT), females showed significantly lower values than males, revealing a higher thermal pain sensitivity of females compared with males (p < 0.02, Student's t test, Additional file 1: Table S1). The TGI obtained in the group of 40 subjects was similar to the TGI found in the group of 136 subjects, although sex differences were not significant in the small group (e.g. for ∆ grill cold p = 0.09, Student's t test, Additional file 1: Table S1).
Multiple regression analyses of ∆ grill heat were carried out using CPT, HPT and ∆ thermal pain threshold as the independent variables. Including all 40 subjects of both sexes showed that ∆ grill heat correlated significantly with CPT (ρ = 0.44, p = 0.03 after Bonferroni's correction). Simply stated, within individual subjects, the higher, i.e. the less cold, the CPT, the more intense the grill-induced increase of warm/heat sensation. Thus, females, showing higher CPT than males, showed a stronger grill heat sensation (Fig. 6b).
Multiple regression analysis of ∆ grill cold with CPT, HPT and ∆ thermal pain threshold as the independent variables showed no significant dependency of ∆ grill cold on any of these variables. Sex differences in ∆ grill cold were found with respect to the slopes of the linear regressions. With both increasing CPT and decreasing ∆ thermal pain threshold, ∆ grill cold decreased in females while it increased in males (Fig. 6a, e). This means that, with higher cold pain sensitivity or higher thermal pain sensitivity, respectively, females described the grill stimulus as increasingly less cold, whereas in males the grill-evoked reduction of cold sensation became smaller.
Linear regressions of ∆ grill heat (grill-induced increase in warm/heat sensation, panels b, d and f) and ∆ grill cold (grill-induced increase in cold sensation, panels a, c and e) with either CPT, HPT or ∆ thermal pain threshold (HPT versus CPT) as the only independent variable. Data from male (N = 20) and female (N = 20) subjects are presented separately using different symbols and colours. As the thermal threshold values were not normally distributed (Kolmogorov-Smirnov test), Spearman's correlation coefficient (ρ) was calculated to assess the dependency between two variables. P < 0.05 after Bonferroni's correction was considered significant and is indicated by asterisks.
Figure 6 shows correlations of the thermal grill-induced changes in thermal sensations with thermal pain thresholds.

The present study demonstrates that females show a stronger TGI than males, since females more often feel a burning sensation, unpleasantness and pain and describe the grill stimulus as significantly warmer and less cold than males. In addition, the study demonstrates that females show a higher thermal sensitivity and thermal pain sensitivity than males. The stronger TGI in females correlates with the higher sensitivity to cold and cold pain in females compared with males.

Sex differences in the TGI and thermal sensitivity
In the present study, thermal grill stimulation (20/40 °C interleaved) of the hand induced a unique perception including sensations of warmth, coldness, unpleasantness, burning, stinging, prickling and pain. Consistent with the literature, the present study shows the TGI to be very complex [10,11,12,13,14,15,16,17,18,19,20,21,22,23]. A novel finding in the present study was that the TGI is also sex-dependent.
Regarding thermal sensitivity and thermal pain sensitivity, we also found sex differences in the present study. With respect to detection thresholds, our data are consistent with earlier studies [4, 7]. We found that the mean detection thresholds of females and males differed by 0.2–0.4 °C, with females reporting higher (less cold) CDT values and lower (less warm) WDT values than males, indicating that thermal sensitivity is higher in females than in males (see Table 3 and Fig. 3). The sex difference in WDT was more pronounced at low skin temperatures, e.g. 1.5 °C for WDT at 20 °C and 25 °C baseline temperatures (see Fig. 3b), a new finding that reveals clear sex differences in warm detection threshold at slightly cool (25 °C) or cold (20 °C) skin temperatures, thus implying sex differences in the complex processing of "cold" and "warm" inputs in thermal perception.
For thresholds of cold pain and heat pain, females showed lower HPT and higher (less cold) CPT values in the present study, which is consistent with the literature, albeit for CPT data published findings are somewhat contradictory [2,3,4, 6, 8]. With respect to the distribution of CPT data, we found a sex-dependent multimodal distribution in the present study that is similar to the CPT data distribution published recently by Lötsch and colleagues [31]. Fitting a Gaussian mixture model to the log-transformed CPT data revealed three Gaussian distributions with modes at 23, 17 and 2 °C (see Fig. 4). According to Lötsch and colleagues [31], the localization of the first and second Gaussians may be interpreted as reflecting the contribution of the TRPM8 receptor, which starts to respond at 24 °C [32], and the TRPA1 receptor, which starts to sense cold at 17 °C [33]. Sex differences were found for these Gaussians in the present study, indicating sex-dependent receptor characteristics at the skin area where the cold stimuli had been applied. For the Gaussian with mode at 2 °C, a sex-dependent difference in response probability was found (female 15%, male 32%, respectively; see Additional file 3: Table S2), indicating that other temperature-sensing receptors, e.g. TRPC5, KCNK2, ASIC2 or ASIC3, might show sex-dependent spatial densities in the skin as well.
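For readers who want to reproduce this type of analysis, a minimal sketch is given below. It assumes a hypothetical one-dimensional array of log-transformed cold pain thresholds; the original analysis used the R toolbox AdaptGauss [30], whereas this illustration uses scikit-learn's GaussianMixture as a rough analogue, so the fitted parameters will not match the published values.

```python
# Illustrative sketch only: fit a 6-component Gaussian mixture to hypothetical
# log-transformed cold pain thresholds (the paper used the R toolbox AdaptGauss).
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
# Hypothetical data standing in for N = 296 log10-transformed CPT values.
log_cpt = rng.normal(loc=np.log10(20.0), scale=0.3, size=(296, 1))

gmm = GaussianMixture(n_components=6, random_state=0).fit(log_cpt)

# Back-transform the component means from the log10 domain to °C, mirroring
# how modes such as 27.0, 23.0, 16.5 and 2.1 °C were reported above.
order = np.argsort(gmm.means_.ravel())
modes_in_degC = 10 ** gmm.means_.ravel()[order]
weights = gmm.weights_[order]
print(modes_in_degC.round(1), weights.round(2))
```

On real data one would normally also compare models with different numbers of components (e.g. by an information criterion) rather than fixing M = 6 in advance.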
The mode at 27 °C may be seen as the earliest response due to an uncomfortable cold stimulus, and the modes at 31.3 and 30.4 °C as false responses of untrained students who indicated the CDT instead of the CPT.

Thermal sensitivity as a function of the baseline temperature
Our retrospective analysis of sex-dependent thermal threshold data provides the novel finding that warm and cold detection thresholds are affected by the baseline (adaptation) temperature. Studies to date have addressed thermal detection thresholds at baseline temperatures around the neutral/comfort zone, usually 32 °C, which is approximately the mean skin temperature at standard ambient temperature [24, 25]. Our data demonstrate that, independent of sex, the CDT as a function of the baseline temperature has the form of an optimum curve with the optimum in the range 25–30 °C.
In humans, innocuous cold skin temperatures are signalled by cold-sensitive Aδ fibres [13, 34]. Recently, a microneurography study in humans has shown that the response rate of an Aδ fibre to a staircase cold stimulation has the form of an optimum curve, with the maximum response rate at 26 °C baseline temperature and lower response rates at lower or higher baseline temperatures (see Fig. 7 in [34]). Hence, our CDT data (see Fig. 3a) might be explained by Aδ fibre activation. In addition, the activity of C2 fibres, a population of C fibres responding to warming and innocuous cooling [34], is likely to play a role. C2 fibre activity is inhibited by Aδ fibre input; when the Aδ fibre input is blocked experimentally, innocuous cooling becomes painful [35, 36], probably by disinhibition of C2 fibre activity. Thus, the interplay between Aδ and C2 fibre activity during skin cooling is likely to depend on the baseline (adaptation) temperature of the skin.
In contrast, the curves of WDT vs. baseline temperature show a hyperbolic form with the lowest WDT at 40 °C and the highest at 20 °C (see Fig. 3b), confirming data from early studies [37, 38]. According to the literature, warm fibres in primates respond to warm stimuli above 30 °C only [38,39,40,41,42]; conversely, there is no evidence for warm receptor activation by warm stimuli at baseline temperatures below 30 °C. Thus, the mechanism of warm detection at cool skin temperatures remains unclear. At cool skin temperatures, Aδ and C2 fibres are activated. During subsequent warming of the skin, activation decreases for Aδ fibres and increases for C2 fibres. This may lead to a sensation of declining cold and thus prompt subjects to indicate their WDT.

Correlation of the TGI with cold sensitivity and cold pain sensitivity
A model proposed to explain the TGI, albeit excluding the non-painful sensations, indicates that thermal grill stimulation with interleaved warm and cold stimuli reduces the cold signal that is responsible for inhibiting the activity of central multi-modal neurons responsive to heating, pinch and cold (HPC), hence leading to disinhibition and consequently, by increasing the magnitude of the HPC signal, to pain [10]. The assumption that a strong cold signal is jointly responsible for the TGI is supported by the recent finding that topical application of menthol, an activator of the cold signal, leads to an enhanced grill-evoked heat sensation [19], probably by enhancing the disinhibition of the HPC signal. Consistent with this is the finding of the present study that both cold and cold pain sensitivity are related to the intensity of the TGI (see Figs. 5 and 6).
For example, the stronger the individual sensation of cold under the uniform cold stimulus, the larger was the decrease of cold sensation under grill conditions in the present study (see Fig. 6a). In contrast, warm and heat pain sensitivity were not related to the thermal grill-evoked sensations in the present study, showing that the "warm" input is less important for the TGI (see Fig. 6b).
Recent publications have reported that higher sensitivity to cold and heat pain is associated with a stronger TGI, e.g. stronger sensations of unpleasantness and pain [18, 43]. The reverse effect has been found in studies with patients with psychiatric disorders, showing that a lower sensitivity to cold and heat pain is accompanied by a less intense TGI compared with controls [16, 44]. Thus, sex differences in the thermal grill sensations are assumed to be related to sex differences in heat and cold pain. Our correlation analysis (see Figs. 5 and 6) shows that the sex-dependent sensitivity to cold and cold pain is responsible for the sex-dependent TGI. It is primarily the decrease of cold sensation under grill conditions that differs between the two sexes. In the present study, the reduction of cold sensation under grill stimulation increased with higher CPT in females but decreased in males (see Fig. 6a). Thus, the females' higher sensitivity to cold pain is related to the grill-evoked sensation of less cold in females compared with males. In addition, the present study shows that the higher the non-painful cold sensitivity of a subject, the less cold was the grill-evoked sensation, and this effect was stronger in females than in males (see Fig. 5a and Discussion above). The females' grill-evoked sensation, which was warmer and more unpleasant (and, when present, burning and painful) than the males', is supported by the females' sensation of less cold, which depends on the sensitivity to cold and cold pain.
It has to be taken into account that the retrospective data analysis was performed on QST data collected through self-administered measurements by students who were informed about, but not trained in, the measurement procedure. This may reduce the validity of the test. When comparing the thermal pain thresholds from the retrospectively analysed data with those obtained in additional QST measurements carried out by an investigator in a group of students trained in QST, we found similar values for HPT and colder values for CPT in trained versus untrained subjects (see "Sex differences in thermal thresholds" and Additional file 1: Table S1). The less cold CPT in untrained than in trained subjects may be due to the untrained subjects indicating the CDT instead of the CPT in the experiment (see Fig. 4 and Additional file 3: Table S2). Another possibility is that trained subjects may wait a little longer before "confirming" their sensation of cold pain than untrained subjects. Regarding sex differences in CPT, females and males differed independently of training.
A limitation of the present study is the fact that a female investigator tested the TGI and the thermal pain threshold in the small cohort of 40 subjects. This may bias psychophysical outcomes in relation to sex. For painful stimuli, male subjects reportedly show weaker responses when the investigator is female rather than male [45, 46].
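As a rough illustration of the correlation approach used in this study (Spearman's ρ with Bonferroni correction, chosen because ratings and thresholds were not normally distributed), a minimal sketch follows. The arrays and the number of comparisons used for the correction are assumptions; the published coefficients cannot be reproduced from this sketch.

```python
# Minimal sketch, assuming hypothetical data: Spearman correlation between
# cold pain threshold (CPT) and the grill-induced increase in heat sensation,
# with a simple Bonferroni adjustment of the p-value.
import numpy as np
from scipy.stats import spearmanr

rng = np.random.default_rng(1)
cpt = rng.normal(loc=15.0, scale=5.0, size=40)                   # hypothetical CPT values (°C)
delta_grill_heat = 0.3 * cpt + rng.normal(scale=4.0, size=40)    # hypothetical ∆ grill heat ratings

rho, p_raw = spearmanr(cpt, delta_grill_heat)

n_comparisons = 3            # assumption: e.g. CPT, HPT and ∆ thermal pain threshold
p_corrected = min(1.0, p_raw * n_comparisons)
print(f"rho = {rho:.2f}, Bonferroni-corrected p = {p_corrected:.3f}")
```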
Another limitation of the present study is the fact that different stimulation sites were used to determine thermal detection and thermal pain thresholds in the retrospectively analysed QST measurements (ventral surface of the forearm near the wrist and thenar eminence, see 2.2). However, the stimulation sites, including the hand area stimulated in the thermal grill experiments, are unlikely to differ markedly; in an earlier study, thermal detection and pain thresholds and the TGI were found to be similar for the volar forearm and the palm of the hand [19].
Our study provides further evidence for a strong interaction between the thermoreceptive and nociceptive systems. The new aspect of the retrospective data analysis of our study is the finding that thermal detection thresholds not only differ between the two sexes but are also dependent on the baseline temperature. A specific progression of an optimum curve was found for the function relating CDT to baseline temperature, with the highest sensitivity of cold detection around the basal skin temperature, implying a complex processing of "cold" and "warm" inputs in thermal perception. The multimodal distribution of the cold pain thresholds indicates sex-dependent differences in response characteristics and spatial densities of the cold receptors TRPA1 and TRPM8 in the skin. Our findings regarding the TGI, namely that the TGI was more pronounced in females and that the intensity of the illusion correlated with the subjects' sensitivity to cold and cold pain, lead to the assumption that sex differences in cold-related thermoreception are responsible for sex differences in the TGI. For further investigation, it would be of interest to test the sex dependency of the TGI under a strong activation of the cold signal, e.g. by applying agonists of the cold receptors TRPA1 and TRPM8.

Abbreviations
CDT: Cold detection threshold
CPT: Cold pain threshold
HPC: Heating, pinch, cold
HPT: Heat pain threshold
IQR: Interquartile range
NRS: Numeric rating scale
QST: Quantitative sensory testing
SEM: Standard error of the mean
TGI: Thermal grill illusion
WDT: Warm detection threshold

References
1. Sarlani E, Farooq N, Greenspan JD. Gender and laterality differences in thermosensation throughout the perceptible range. Pain. 2003;106:9–18.
2. Rolke R, Magerl W, Campbell KA, Schalber C, Caspari S, Birklein F, Treede RD. Quantitative sensory testing: a comprehensive protocol for clinical trials. Eur J Pain. 2006;10:77–88.
3. Hashmi JA, Davis KD. Noxious heat evokes stronger sharp and annoying sensations in women than men in hairy skin but not in glabrous skin. Pain. 2010;151:323–9. doi:10.1016/j.pain.2010.06.026.
4. Kuhtz-Buschbeck JP, Andresen W, Göbel S, Gilster R, Stick C. Thermal perception and nociception of the skin: a classic paper of Bessou and Perl and analyses of thermal sensitivity during a student laboratory exercise. Adv Physiol Educ. 2010;34:25–34. doi:10.1152/advan.00002.2010.
5. Racine M, Tousignant-Laflamme Y, Kloda LA, Dion D, Dupuis G, Choinière M. A systematic literature review of 10 years of research on sex/gender and pain perception - part 1: are there really differences between women and men? Pain. 2012;153:619–35. doi:10.1016/j.pain.2011.11.025.
6. Horn ME, Alappattu MJ, Gay CW, Bishop M. Fear of severe pain mediates sex differences in pain sensitivity responses to thermal stimuli. Pain Res Treat. 2014;897953. doi:10.1155/2014/897953.
7. Golja P, Tipton MJ, Mekjavica IB. Cutaneous thermal thresholds: the reproducibility of their measurements and the effect of gender. J Thermal Biol. 2003;28:341–6. doi:10.1016/S0306-4565(03)00010-X.
8. Waller R, Smith AJ, O'Sullivan PB, Slater H, Sterling M, McVeigh JA, Straker LM. Pressure and cold pain threshold reference values in a large, young adult, pain-free population. Scand J Pain. 2016;13:114–22. doi:10.1016/j.sjpain.2016.08.003.
9. Alrutz S. On the temperature senses: II. The sensation 'hot'. Mind. 1898;7:141–4.
10. Craig AD, Bushnell MC. The thermal grill illusion: unmasking the burn of cold pain. Science. 1994;265:252–5.
11. Bouhassira D, Kern D, Rouaud J, Pelle-Lancien E, Morain F. Investigation of the paradoxical painful sensation ('illusion of pain') produced by a thermal grill. Pain. 2005;114:160–7.
12. Leung AY, Wallace MS, Schulteis G, Yaksh T. Qualitative and quantitative characterization of the thermal grill. Pain. 2005;116:26–32.
13. Defrin R, Benstein-Sheraizin A, Bezalel A, Mantzur O, Arendt-Nielsen L. The spatial characteristics of the painful thermal grill illusion. Pain. 2008;138:577–86. doi:10.1016/j.pain.2008.02.012.
14. Bach P, Becker S, Kleinböhl D, Hölzl R. The thermal grill illusion and what is painful about it. Neurosci Lett. 2011;505:31–5. doi:10.1016/j.neulet.2011.09.061.
15. Boettger MK, Schwier C, Bär KJ. Sad mood increases pain sensitivity upon thermal grill illusion stimulation: implications for central pain processing. Pain. 2011;152:123–30. doi:10.1016/j.pain.2010.10.003.
16. Boettger MK, Grossmann D, Bär KJ. Thresholds and perception of cold pain, heat pain, and the thermal grill illusion in patients with major depressive disorder. Psychosom Med. 2013;75:281–7. doi:10.1097/PSY.0b013e3182881a9c.
17. Lindstedt F, Johansson B, Martinsen S, Kosek E, Fransson P, Ingvar M. Evidence for thalamic involvement in the thermal grill illusion: an FMRI study. PLoS One. 2011. doi:10.1371/journal.pone.0027075.
18. Lindstedt F, Lonsdorf TB, Schalling M, Kosek E, Ingvar M. Perception of thermal pain and the thermal grill illusion is associated with polymorphisms in the serotonin transporter gene. PLoS One. 2011. doi:10.1371/journal.pone.0017752.
19. Averbeck B, Rucker F, Laubender RP, Carr RW. Thermal grill-evoked sensations of heat correlate with cold pain threshold and are enhanced by menthol and cinnamaldehyde. Eur J Pain. 2013;17:724–34. doi:10.1002/j.1532-2149.2012.00239.
20. Adam F, Alfonsi P, Kern D, Bouhassira D. Relationships between the paradoxical painful and nonpainful sensations induced by a thermal grill. Pain. 2014;155:2612–7. doi:10.1016/j.pain.2014.09.026.
21. Harper DE, Hollins M. Coolness both underlies and protects against the painfulness of the thermal grill illusion. Pain. 2014;155:801–7. doi:10.1016/j.pain.2014.01.017.
22. Hunter J, Dranga R, van Wyk M, Dostrovsky JO. Unique influence of stimulus duration and stimulation site (glabrous vs. hairy skin) on the thermal grill-induced percept. Eur J Pain. 2015;19:202–15. doi:10.1002/ejp.538.
23. Alfonsi P, Adam F, Bouhassira D. Thermoregulation and pain perception: evidence for a homoeostatic (interoceptive) dimension of pain. Eur J Pain. 2016;20:138–48. doi:10.1002/ejp.717.
24. Jakovljević M, Mekjavić IB. Reliability of the method of levels for determining cutaneous temperature sensitivity. Int J Biometeorol. 2012;56:811–21. doi:10.1007/s00484-011-0483-9.
25. Selim MM, Wendelschafer-Crabb G, Hodges JS, Simone DA, Foster SX, Vanhove GF, Kennedy WR. Variation in quantitative sensory testing and epidermal nerve fibre density in repeated measurements. Pain. 2010;151:575–81. doi:10.1016/j.pain.2010.06.034.
26. Yarnitsky D, Sprecher E, Zaslansky R, Hemli JA. Heat pain thresholds: normative data and repeatability. Pain. 1995;60:329–32.
27. McGill R, Tukey JW, Larsen WA. Variations of box plots. Am Stat. 1978;32:12–6.
28. Tukey JW. Comparing individual means in the analysis of variance. Biometrics. 1949;5:99–114.
29. R Core Team. R: A language and environment for statistical computing. Vienna: R Foundation for Statistical Computing; 2015.
30. Ultsch A, Thrun MC, Hansen-Goos O, Lötsch J. Identification of molecular fingerprints in human heat pain thresholds by use of an interactive mixture model R toolbox (AdaptGauss). Int J Mol Sci. 2015;16:25897–911. doi:10.3390/ijms161025897.
31. Lötsch J, Dimova V, Lieb I, Zimmermann M, Oertel BG, Ultsch A. Multimodal distribution of human cold pain thresholds. PLoS One. 2015;10:e0125822. doi:10.1371/journal.pone.0125822.
32. McKemy DD, Neuhausser WM, Julius D. Identification of a cold receptor reveals a general role for TRP channels in thermosensation. Nature. 2002;416:52–8.
33. Story GM, Peier AM, Reeve AJ, Eid SR, Mosbacher J, Hricik TR, Earley TJ, Hergarden AC, Andersson DA, Hwang SW, McIntyre P, Jegla T, Bevan S, Patapoutian A. ANKTM1, a TRP-like channel expressed in nociceptive neurons, is activated by cold temperatures. Cell. 2003;112:819–29.
34. Campero M, Baumann TK, Bostock H, Ochoa JL. Human cutaneous C-fibres activated by cooling, heating and menthol. J Physiol. 2009;587:5633–52. doi:10.1113/jphysiol.2009.176040.
35. Fruhstorfer H. Thermal sensibility changes during ischemic nerve block. Pain. 1984;20:355–61.
36. Yarnitsky D, Ochoa JL. Release of cold-induced burning pain by block of cold-specific afferent input. Brain. 1990;113:893–902.
37. Kenshalo DR, Nafe JP, Brooks B. Variations in thermal sensitivity. Science. 1961;134:104–5.
38. Kenshalo DR. Correlations of temperature sensitivity in man and monkey, a first approximation. In: Zotterman Y, editor. Sensory functions of the skin in primates with special reference to man, Wenner-Gren center international symposium series. Oxford: Pergamon Press; 1976. p. 305–30.
39. Konietzny F, Hensel H. Letters and notes: warm fibre activity in human skin nerves. Eur J Physiol/Pflügers Arch. 1975;359:265–7.
40. Konietzny F, Hensel H. The dynamic response of warm units in human skin nerves. Eur J Physiol/Pflügers Arch. 1977;370:111–4.
41. Darian-Smith I, Johnson KO, LaMotte C, Shigenaga Y, Kenins P, Champness P. Warm fibres innervating palmar and digital skin of the monkey: responses to thermal stimuli. J Neurophysiol. 1979;42:1297–315.
42. Johnson KO, Darian-Smith I, LaMotte C, Johnson B, Oldfield S. Coding of incremental changes in skin temperature by a population of warm fibres in the monkey: correlation with intensity discrimination in man. J Neurophysiol. 1979;42:1332–53.
43. Schaldemose EL, Horjales-Araujo E, Svensson P, Finnerup NB. Altered thermal grill response and paradoxical heat sensations after topical capsaicin application. Pain. 2015;156:1101–11. doi:10.1097/j.pain.0000000000000155.
44. Bekrater-Bodmann R, Chung BY, Richter I, Wicking M, Foell J, Mancke F, Schmahl C, Flor H. Deficits in pain perception in borderline personality disorder: results from the thermal grill illusion. Pain. 2015;156:2084–92. doi:10.1097/j.pain.0000000000000275.
45. Levine FM, De Simone LL. The effects of experimenter gender on pain report in male and female subjects. Pain. 1991;44:69–72.
46. Gijsbers K, Nicholson F. Experimental pain thresholds influenced by sex of experimenter. Percept Mot Skills. 2005;101:803–7. doi:10.2466/pms.101.3.803-807.

We would like to thank Franz Rucker for his efforts in constructing the thermal grill device (Averbeck et al. [19]).
We are very grateful to John Davis and Peter Grafe for their support throughout the project and would like to thank them for comments on the manuscript and for improving the English.
The datasets generated and/or analysed during the current study are available from the corresponding author on reasonable request.
Department of Physiology, University of Munich, Munich, Germany: Beate Averbeck, Lena Seitz & Florian P. Kolb
Institute of Human Movement Science and Health, Faculty of Behavioral and Social Science, Chemnitz University of Technology, Chemnitz, Germany: Dieter F. Kutz
Department of Physiology, Biomedical Center Munich (BMC), University of Munich, Planegg-Martinsried, D-82152, Germany: Beate Averbeck, Lena Seitz, Florian P. Kolb
BA, DK and FK conceived and designed the experimental setup and the experiments. BA and DK wrote the manuscript. Quantitative sensory testing of the subjects was carried out in the students' laboratory courses and data were analysed by LS, DK and FK. Testing of subjects with the thermal grill device as well as testing of the 40 subjects applying QST and thermal grill stimulation was done by BA and data were analysed by BA. Statistical analyses were carried out by BA and DK. All authors read and approved the final manuscript.
Correspondence to Beate Averbeck.
The analysis was performed with the permission of the local ethics committee of the Ludwig-Maximilians University of Munich (reference number 150–14).
Additional file 1: Thermal pain threshold and thermal grill data obtained in 40 subjects. These data were used for correlation analysis of thermal pain threshold and the TGI. (DOCX 38 kb)
Additional file 2: Numeric rating scale (NRS) ratings of sensations (cold, warm/heat, unpleasantness, pain) evoked by stimulation with three different thermal stimuli: uniform 20 °C or 40 °C or grill mode (bars tempered alternately at 20 °C and 40 °C). Ratings (medians with first and third quartiles (box) and range (whiskers)) are presented sex-dependently, in red for female (N = 78) and in blue for male (N = 58) subjects. Significant differences between sexes are marked by asterisks (Mann Whitney U test). (PDF 100 kb)
Additional file 3: Characteristic values of the Gaussian mixture model for the cold pain thresholds. After rescaling the data for stimulus intensity and subsequent log transformation, the probability density function (PDF) was estimated using a Gaussian mixture model with six modes (R program AdaptGauss [30]). (DOCX 13 kb)
Averbeck, B., Seitz, L., Kolb, F.P. et al. Sex differences in thermal detection and thermal pain threshold and the thermal grill illusion: a psychophysical study in young volunteers. Biol Sex Differ 8, 29 (2017). https://doi.org/10.1186/s13293-017-0147-5
Keywords: Thermal thresholds; Thermal pain; Cold sensitivity; Cold receptors
Global issues test 3
How many countries voted against the UDHR in 1948?
How many countries abstained from voting on the UDHR in 1948?
Prior to the creation of the UDHR, what document first mentions human rights?
- The Preamble to the UN Charter
When was the Universal Declaration of Human Rights adopted?
What is the difference between a treaty and a declaration?
- A treaty is a legally binding instrument whereas a declaration is a statement of principles
What is the name of the key human rights document of the post-WWII era?
What was the immediate context for why the UDHR was drafted (select all that apply; there may be one to four possible correct answers)?
- The terrible atrocities committed during WWII, such as the Holocaust
- The horror of over 50 million dead in WWII
Which UN organ can authorize military action to enforce human rights?
- The Security Council
The concept of Responsibility to Protect (R2P), developed by the UN, shows the growing importance of human rights by (select all that apply, there are one to four possible correct answers):
- Stressing that states have the responsibility to protect their populations from e.g. genocide, crimes against humanity, or war crimes
- Putting forward the idea that states have the responsibility to intervene in situations where a state fails to protect its population from e.g. genocide, crimes against humanity or war crimes
The International Criminal Court (select all that apply, there are one to four possible correct answers):
- Can try cases involving crimes against humanity, war crimes, and genocide
- Is currently investigating cases involving defendants who come mostly from African states
The International Criminal Court (ICC) is located:
- In The Hague
Which region has the most cases currently before the ICC:
What is the name of the idea that argues the international community has a responsibility to intervene in situations where a state fails to protect its population from genocide, crimes against humanity, or war crimes?
- Responsibility to Protect
Which are the ways in which the UN supports human rights (select all that apply, there are one to four possible correct answers)?
- The UN helps implement human rights through enforcement activities such as peacekeeping
- The UN helps formulate international human rights standards through treaties and declarations
- The UN helps implement human rights through economic development
- The UN promotes knowledge and public support through educating publics about human rights
In what conflict in 2011 did the UN invoke the principle of R2P (Responsibility to Protect) as justification for military intervention?
- Libya
After the end of the Cold War, the UN created international criminal tribunals to prosecute individuals responsible for ethnic cleansing, genocidal violence and crimes against humanity in which of the following countries (select all that apply, there are one to four possible correct answers)?
- Rwanda
- Yugoslavia
Which of the following is a first-generation right?
- Everyone has the right to freedom of thought, conscience and religion
First-generation rights are also known as negative rights because:
- They prohibit certain government action and are therefore based on the absence of government interference
The Universal Declaration of Human Rights is typically divided into ___________ generations, or categories:
- Three
As discussed in the context of Uruguay, what is the role of a human rights rapporteur (select all that apply, there may be one to four possible correct answers)?
- To ensure justice and compensation for survivors of human rights abuses
- To assess progress in discovering the truth about past atrocities committed in a country
Which generation of human rights includes freedom of speech and assembly?
- First
What are reasons for why the U.S. didn't ratify the International Covenant on Civil and Political Rights (ICCPR) until long after the treaty had gone into effect (select all that apply, there may be one to four possible correct answers)?
- The U.S. had concerns that ratifying the treaty could erode its national sovereignty
- The U.S. was reluctant to expose itself to international criticism
The U.S. President Jimmy Carter signed the International Covenant on Civil and Political Rights (ICCPR) in 1977. When did the US ratify the treaty?
What are some of the ways in which MNCs can promote human rights (select all that apply, there are one to four possible correct answers)?
- They can use their access to markets around the world as a way of building networks based on real-time data exchanges, allowing the UN faster response time during emergencies
- Through public private partnerships they can help develop the technology needed for the UN or NGOs to get more data on human rights issues
Which of the following are human rights NGOs (select all that apply, there are one to four possible correct answers)?
- Amnesty International
- Human Rights Watch
Which of the following is NOT true about the role of technology in human rights (select all that apply, there are one to four possible correct answers)?
- Whether people live in developed or underdeveloped countries, technology has benefitted everyone equally
While NGOs do not have official membership status at the UN, they do have important influence through their close cooperation with _______________:
- The Economic and Social Council (ECOSOC)
Which of the following are examples through which technological innovations have been successfully used to promote human rights (select all that apply, there are one to four possible correct answers)?
- When a solar-powered internet network allowed villages in Uganda to get access to crop information and thus to improve their yields and incomes
Which of the following are some of the ways through which celebrities can affect human rights (select all that apply, there are one to four possible correct answers)?
- Using their celebrity status to raise awareness
- Engaging in philanthropic behavior by giving money to human rights causes
- Founding Human Rights Organizations
Where second-generation rights require a proactive government to take action on behalf of its citizens, third-generation rights require ________________ (select all that apply, there are one to four possible correct answers):
- Solidarity
- International cooperation
First-generation rights are influenced by the philosophical tradition of ________________, whereas second-generation rights are influenced by the philosophical tradition of ________________:
- Capitalism / socialism
Which of the following is an example of second-generation rights?
- Everyone, without any discrimination, has the right to equal pay for equal work
The difference between a negative and a positive obligation imposed on states by human rights rests on ______________________:
- Whether, and how much, action a government needs to take to ensure that rights are protected
Where first-generation rights are often associated with the rights of ________________ people, second and third-generation rights reflect the rights of ________________ people.
- Individual / groups of
Cultural relativism is based on the notion that (select all that apply, there are one to four possible correct answers):
- Local traditions trump the rights in the UDHR
- The final authority in determining the rights of a citizenry lies with the people and their government
What are some of the examples through which globalization challenges state sovereignty (select all that apply, there are one to four possible correct answers)?
- International financial institutions can grant or withhold resources and overrule the decisions of states
- International organized crime has become so powerful in some places that it controls territory and large segments of a state's
- MNCs can more easily move across borders and escape regulation by governments
- Technology has made it possible to put pressure on a state by facilitating communication that goes around government restrictions
Universalism is represented by the idea that:
- Human rights are accorded to everyone regardless of citizenship or status
Sovereignty assumes:
- That countries are self-governing, have territorial integrity, and self-determination
The following factors show how globalization has helped in promoting human rights (select all that apply, there are one to four possible correct answers):
- Information technology has made it easier for human rights NGOs to communicate with their stakeholders and put greater pressure on governments to comply with human rights standards
- Due to advances in technology and gains in literacy and education, more people know about human rights
- Social media platforms have allowed people to organize even when governments try to limit free speech and assembly
When looking at Niger's population pyramid, which of the following are correct statements (select all that apply, there are one to four possible correct answers)?
- More than 35% of Niger's population are under 10 years of age
- Niger has a youthful population
What do we call the graphical representation of the proportions of persons in different age groups in a population?
- A population pyramid
What is replacement fertility?
- Approximately 2.1 births per woman
Which of the following are correct statements about fertility rates (select all that apply, there are one to four possible correct answers)?
- The total fertility rate measures the average number of births per woman of childbearing age
- The fertility rate in more developed countries is below replacement and stands at about 1.67 per woman
- The fertility rate in poorer (less or least developed) countries is higher, often significantly higher, than replacement level
The fertility rate in more developed regions of the world is higher than the fertility rate in less developed regions.
- The fertility rate for the world as a whole is about 2.51 births per woman
- The fertility rate is a strong indicator of overall population growth
What are some of the reasons for why the world's population drastically increased over the past 70 years (select all that apply, there are one to four possible correct answers)?
- Major public health advances
- Improved food production and distribution
Which of the following is true about population and population growth (select all that apply, there are one to four possible correct answers)?
- Nearly 60% of the world's population lives in Asian countries
- More than 80% of the world's population is located in the less-developed world
- Patterns of population growth differ significantly between more- and less-developed regions of the world
- European populations make up about 10% of the world's population
- Fertility rates in the least developed countries are declining but still very high
- Nearly one-fifth of the world's population lives in China
- The world's population continues to grow but the rate of population growth is decreasing
How is replacement fertility quantified?
What do we mean by the term "replacement fertility" (select all that apply, there are one to four possible correct answers)?
- It is the rate at which one generation of parents is replacing itself in the next generation
- It is a term that describes the interaction between constant patterns of childbearing with constant mortality and migration to yield zero population change
How is a country's population size (P) determined?
- P = births − deaths + in-migration − out-migration
The world's population is currently at about:
- 7.6 billion people
When did the world's population reach one billion people?
- At the beginning of the 19th century
What do we call the "study of population change and characteristics"?
- Demography
According to the UN's 2015 projections, the two countries with the largest populations will continue to see population growth, but where ___________ is projected to peak in size at 1.42 billion in 2030, _____________ is projected to soon overtake it, reaching an estimated 1.71 billion by the middle of the 21st century.
- China / India
What regions hold what percentage of the world's population? The more developed countries (MDCs) hold _________________ of the world's population, whereas the less developed countries (LDCs) hold ______________ of the world's population.
- Less than 20% / more than 80%
Which of the following are correct statements about the world's population (select all that apply, there are one to four possible correct answers)?
- The number of persons aged 80 or over is projected to triple by 2050, and by 2100 to increase to nearly seven times the number in 2017
- Fertility has declined in nearly all regions of the world
- The world has added one billion people since 2005
- International migration at or around current levels will be unable to compensate fully for the expected loss of population in areas with low levels of fertility, especially in the European region
- The world has added two billion people since 1993
- In 2017, about a quarter of the world's population was under the age of 15
What has been the demographic impact of HIV/AIDS on those countries most highly affected by the disease?
- A decline in life expectancy at birth from 62 to 52 between 1995 and 2005
Where do you find the majority of countries identified as the most severely impacted by HIV/AIDS?
According to the UN Population Division's 2015 analyses, what is the medium variant projection for the world's population by 2050:
- 9.7 billion people by 2050
Which countries will continue to hold the largest share of the world's population?
- Countries in Asia
In his 1798 publication, Thomas Malthus argued that (select all that apply, there are one to four possible correct answers):
- The "positive checks" of famine, war, and epidemics would bring populations back under control
- The human desire to reproduce would lead to starvation, poverty and human misery
- Technological innovation can overcome problems of population growth
Which writer first proposed a negative relationship between human population growth and the supply of food and other resources?
- Thomas Malthus
The Cornucopian perspective rests on the following ideas (select all that apply, there are one to four possible correct answers):
- Population growth is positive as it increases economic productivity and capacity for economic progress
- Use of natural resources is best managed through market forces
Which of the following are common themes connecting all models of thought on the question of how to address population growth (select all that apply, there are one to four possible correct answers)?
- Improving the status of women
- Poverty reduction
- Increasing sustainability of food production
- Improvement in water quality
The demographic transition model suggests that as societies industrialize and urbanize:
- Death rates will fall, and when values shift from large to small families, fertility rates will decline
Those pointing out that we need to focus on the structural dimensions of social change argue that:
- Population growth, in particular high fertility, is a consequence rather than a cause of slow economic development
Critics of the demographic transition model argue that:
- The model applies the record of population change and development in Europe and North America as the universal standard
In 2014, the number of people living as refugees or internally displaced people worldwide was estimated by the United Nations High Commissioner for Refugees (UNHCR) at:
- 55 million
Key "general tendencies" of international population movements in an age of globalization include (select all that apply, there are one to four possible correct answers):
- The involvement of an increasing number of countries, both as sending and receiving states
- Ongoing changes in where people go and where they come from
- The increasing politicization of migration
- The "feminization" of migration, as women are an increasing part of workforces
The majority of the world's refugees and internally displaced people are:
- Women and children
Which of the following statements is true concerning urbanization?
- More Developed Countries (MDCs) are more urbanized than Less Developed Countries (LDCs)
Across the globe, urbanization has been caused by (select all that apply, there are one to four possible correct answers)?
- Technological developments, "pushing" people off the farm
- Environmental decline, such as the overgrazing of land
- Industrialization, "pulling" people into cities
"Brain Drain" is (select all that apply, there are one to four possible correct answers):
- Sometimes not entirely negative, as those who emigrate can sometimes return home with new skills or capital
- The loss of highly skilled persons, mostly from developing countries, as they migrate to, most often, developed countries
Match the term to its definition.
- Internally displaced persons: People displaced within their own countries
- Emigration: International migration out of a country
- Urbanization: Growth in the size of cities, with an increasing percentage of the overall population
- Immigration: International migration into a country
- Refugee: A person who is outside their country of origin because of well-founded fears of being persecuted
U.S. support for international family planning programs:
- Has shifted with changing administrations, with Republicans barring fertility control programs that include legal access to abortion, and Democrats supporting such programs
- Many LDCs have enacted policies to reduce population growth, while many MDCs have enacted policies to increase population growth
Which country accepts the largest total number of immigrants worldwide?
- The United States
Which country has been known for its "one child policy"?
- China
The role of women in population control:
- Has been seen as central because women's empowerment tends to lead to smaller but healthier families
In the map above, countries in green have the highest "Population Growth," generally showing that they are in the earlier stages of the demographic transition, whereas the countries in red have the lowest population growth or declining populations, generally showing they are in the latter stages. Using the RCII, match each country below to its population growth rate.
- India: 1.20; Germany: −0.20; Nigeria: 2.40; United States: 0.80; Russia: −0.10; China: 0.40; Ethiopia: 2.90
While having high fertility rates and surging populations can be a challenge, especially for poorer developing countries, having large numbers of elderly citizens can also pose problems, such as not having enough working age people to support social security systems. Six countries have more than 20% of their populations over 65 years of age (five in Europe). Match the letter in the histogram to the country to find the world's four oldest populations.
- A: Portugal; B: Germany; C: Italy; D: Japan
As can be seen in the bar chart above, Syria leads the world with almost 5 million refugees. Five other countries after Syria have at least 500,000 refugees. Place these five in order from most refugees to least refugees. Do you know why these countries have so many refugees?
- Most to least: Afghanistan, Somalia, South Sudan, Sudan, Democratic Republic of Congo
Which region of the world has 33 of the top 35 countries with the highest birth rates?
"Urbanization," a key dynamic in the movement of people worldwide, has been sparked by such factors as technological change, industrialization, and rural environmental decline. It is seen as an important factor in the demographic transition. Order the countries in the chart above from least urbanized to the most urbanized.
- Least to most urbanized: Afghanistan, India, China, Ireland, United States, Japan
The role of women has been seen as central to population control because women's empowerment tends to lead to smaller but healthier families. Clearly, even with a number of countries having missing data, the map above shows that women in Africa, the Middle East, and Asia face the most disparities and/or persecution. Match the country below to the main subdimension under "Gender Inequality and Integrity" where that country ranks as the worst in the world for that subdimension.
- Pakistan: Female/Male Economic Participation and Opportunity
- Chad: Female/Male Educational Attainment
- Yemen: Female/Male Political Empowerment
- Mali: Female Physical Integrity
- China: Female/Male Health and Survival
Health factors, such as the quality of health care systems and the prevalence of disease, can have a significant impact on demographics. Clearly, from the map above, Africa as well as South and Southeast Asia face major health challenges. Match the country below to the subdimension under "Health" where that country ranks as the worst in the world for that subdimension. Does anything surprise you? Do you know the location of each of these countries?
- Non-infectious diseases: Nauru
- Health Systems: Central African Republic
- Substance Abuse: United States
- Infectious diseases: Mozambique
- Health Outcomes: Chad
The moderator interviews an economic immigrant from Mexico who builds homes. How much more can he make in the U.S. than in Mexico?
- He can make in a day what he would in a week in Mexico
How many unaccompanied minors entered the U.S. during the 2014 crisis?
- 60,000-80,000
As discussed in the video, many of the immigrants coming into the United States are from:
- The Northern Triangle of Central America (Guatemala, Honduras, El Salvador), the world's deadliest countries outside war zones
The video points to which country as having the highest homicide rates in the world?
- El Salvador
The analyst points to a 15-year, $10 billion U.S. effort in which country as a model for what's needed in Central America?
- Colombia
As discussed in the video, why is there so much violence in the Central American states? (select all that apply, there are one to four possible correct answers)
- Legacies of the 1980s civil wars and the lack of full reconciliation
- The prevalence of powerful drug gangs, with many members trained in the U.S.
- The prevalence of significant amounts of guns
The moderator interviews an 11-year old boy who fled to the U.S. by himself because of the extreme drug-gang violence in his country. What country was this boy from?
- Honduras
Which of the following provides evidence that a warming planet is related to human behavior?
- Scientists have shown a close relationship between global carbon dioxide levels and average global temperatures, while demonstrating that there has been an increase in atmospheric carbon due to the burning of fossil fuels
For how long has scientific research on carbon dioxide been conducted?
- For about 150 years
Which of the following are empirically observable changes showing that our biosphere is changing (select all that apply, there are one to four possible correct answers)?
- The extent and thickness of the Arctic sea ice has declined rapidly over the past decades
- The number of record-high temperature events in the U.S. has been increasing since the 1950s
Which is correct about global warming and climate change (select all that apply; there may be one to four possible correct answers)?
- Climate change refers to the consequences of global warming on weather patterns over time
- Global warming refers to Earth's rising temperatures due to increased greenhouse gases
The hypothetical example used to explain why groups of individuals might overexploit shared environmental resources even when they know that it could be against their long-term interests is known as:
- The Tragedy of the Commons
By how much has the Earth's average temperature risen since 1900?
- By 1.6 degrees Fahrenheit
According to scientists, what is the maximum concentration of carbon dioxide in the atmosphere that the Earth could sustain?
- About 390 parts per million
Which of the following are greenhouse gases (select all that apply, there are one to four possible correct answers)?
- Methane
- Chlorofluorocarbons (CFCs)
- Ozone
- Carbon dioxide
- Water vapor
While _____________ is a term that refers to the concern that increased greenhouse gases increase the Earth's temperature, ______________ is the term that includes reference to changes in extremes of temperatures and precipitation.
- Global warming / climate change
Why are some gases considered greenhouse gases (select all that apply; there are one to four possible correct answers)?
- Because they trap solar radiation in the atmosphere
- Because of their effect on the warming of the atmosphere
Carbon dioxide represents the highest percentage of total greenhouse gases, accounting for ______________ of the total.
Which of the following were contributing factors to the outbreak of the civil war in Syria in 2011 (select all that apply, there are one to four possible correct answers)?
- Permanent changes to rainfall and wind patterns brought on by global warming
- Increasing scarcity of water and food sources
- The 2006 drought that had forced large numbers of rural Syrians into already crowded cities
As noted in your reading, which of the following had the highest 2014 per capita carbon dioxide emissions from consumption of energy?
- Saudi Arabia
Certain gases in the atmosphere - water vapor, carbon dioxide, methane and nitrous oxide - can trap solar radiation and cause the atmosphere to warm. These are called:
- Greenhouse gases
As noted in your reading, which of the following had the highest 2014 total carbon dioxide emissions from consumption of energy?
Which of the following are likely consequences of climate change (select all that apply, there are one to four possible correct answers)?
- Increased number of droughts
- Increased flooding
- Rapidly changing habitats and species extinction
- Growing number of climate refugees
What was the biggest limitation of the Copenhagen Accord (2009), which included provisions for limiting the average rise in global temperatures to 2 degrees Celsius and a commitment by the developed nations to generate $100 billion in additional resources to aid developing countries?
- It was not legally binding
The practice by countries that reduce their emissions below their allotted level to sell their "unused emissions" is called:
- Emissions trading
The 2016 Paris Agreement, which focused on renewed efforts to set emissions reduction targets to be met by both developing and developed countries, was signed by __________ states.
- 195
What were the concerns raised with the Kyoto Protocol (select all that apply; there are one to four possible correct answers)?
- The fact that key developing countries (i.e., China and India) did not participate meaningfully (i.e., didn't face binding emissions targets)
- The vagueness on how emissions trading could achieve reductions
- The degree to which carbon sinks should count toward a country's effort to reduce global warming
What is a key difference in climate change laws between more developed (MDCs) and less developed countries (LDCs)?
- MDCs focus on cutting emissions, while LDCs focus on adaptation
When was the Kyoto Protocol set to expire?
What is a carbon sink?
- An area that absorbs carbon, such as forests, oceans, or croplands
What are the reasons for why Small Island States are particularly concerned with reducing carbon dioxide emissions (select all that apply, there are one to four possible correct answers)?
- A rise of one meter in sea level could flood the Small Island States
- Rising sea levels could destroy their economies
The 1997 agreement that contains legally binding emission targets for key greenhouse gases, especially carbon dioxide, methane, and nitrous oxide, is:
- The Kyoto Protocol
When did the first World Climate Conference in Geneva take place?
The Paris Agreement was successful in as much as it built on commitments to reduce greenhouse gas emissions by, amongst others, both the United States and China. However, it has been criticized for (select all that apply, there are one to four possible correct answers):
- Lacking mandatory cuts in emissions
- Having cuts that don't go deep enough
Rank the following alternative energy sources according to how much they contribute to all alternative energy production (1 = the highest to 4 = the lowest)
1. Hydroelectricity
2. Wind power
3. Biomass
4. Solar energy
5. Geothermal
6. Marine energies
Which of the following countries is the leader in renewable energy investment?
How much of the GLOBAL electricity supply comes from alternative energy sources?
Which of the following are alternative energy sources (select all that apply, there are one to four possible correct answers)?
- Hydroelectricity
- Biomass
- Solar
Which of the following are examples of geoengineering (select all that apply, there are one to four possible correct answers)?
- Manipulation of Earth's cloud cover
- Finding ways to capture and safely store carbon dioxide
- Creating a planetary filter to reflect sunlight
- Changing ocean chemistry to increase their carbon absorption
How much of the UNITED STATES electricity supply is generated by alternative energy sources?
- About 15%
Geoengineering is the planned manipulation of Earth's climate to counteract the effects of global warming. Geoengineering can be divided into two main categories:
- Carbon dioxide removal and solar radiation management
Sustainable development involves (select all that apply, there are one to four possible correct answers):
- Increasing efficiency
- Reducing consumption
- Using renewable energies
What action did the UN take in 2015 to push for greater environmental stewardship in its development agenda?
- It adopted the Sustainable Development Goals
Sustainable development has been defined as (select all that apply, there are one to four possible correct answers):
- Development that improves the quality of human life while living within the carrying capacity of supporting ecosystems
- Development that meets the needs of the present without compromising the ability of future generations to meet their own needs
- Sound environmental planning without sacrificing economic and social improvements
The tension over sustainable development can be stated as:
- Whether the focus should be on environmental conservation of living and nonliving resources or on enhancing economic growth and development
What distinguishes the Brundtland Commission's definition of sustainable development from the one proposed by the International Union for the Conservation of Nature, the United Nations Environment Programme, and the World Wide Fund for Nature?
-The former focuses on the well-being of humans while the latter extends the definition to the biosphere What is the main critique that developing states (LDCs) levy against industrial states (MDCs) over global environmental issues (select all that apply, there are one to four possible correct answers)? -That the long history of MDC industrialization processes has been the greatest contributor to global environmental problems -That the average person in an industrialized country accounts for significantly more resource consumption than someone in a poorer country At which of the following conferences was the UN's Environment Program (UNEP) created? -1972 Stockholm Conference While negotiated independently of the 1992 Earth Summit, what legally-binding treaties are also associated with the conference in Rio de Janeiro (and sometimes referred to as the Rio Conventions)? -The United Nations Framework Convention on Climate Change and the Convention on Biological Diversity The Brundtland Commission's 1987 report "Our Common Future" identified: (select all that apply, there are one to four possible correct answers): -A need for world-wide development strategies that recognized the limits of our ecosystem -Poverty eradication as a necessary requirement for environmentally sustainable development What change in thinking is linked to Agenda 21? (select all that apply, there are one to four possible correct answers) -A growing recognition that economic development strategies need to be in harmony with nature -A growing recognition that developed countries should cut down on their wasteful consumption patterns -A growing recognition that developing countries should integrate the environment into their development strategies Which of the following environmental conferences first combined scientific issues with broader political, social, and economic issues? One of the key outcomes of Rio+20 was a focus on building a green economy. A green economy is an economy that (select all that apply, there are one to four possible correct answers): -Improves human well-being and social equity while reducing environmental risks -Is low-carbon, resource-efficient, and socially inclusive Which of the following were the key outcomes of the Johannesburg Summit in 2002 (select all that apply, there are one to four possible correct answers)? -The forming of nearly 300 voluntary partnerships between private sector and civil society organizations to support sustainable development -An implementation plan detailing a comprehensive program of action with quantifiable goals and targets -A pledge by the world's leaders to commit fully to the goal of sustainable development Which of the following are listed as part of the MDGs (select all that apply, there are one to four possible correct answers)? -Achieve universal primary education -Reduce child mortality -Combat HIV/AIDS and other diseases -Eradicate extreme poverty and hunger -Develop a global partnership for development -Improve maternal health -Ensure environmental sustainability -Promote gender equality and women's empowerment What are reasons for why in 2000 the UN General Assembly adopted the MDGs (select all that apply, there are one to four possible correct answers)? 
-Official development assistance from industrialized countries had declined since the 1992 Rio Summit -A growing recognition that the world was failing to reach most of the goals set out in Agenda 21 -Increasing concerns over globalization and emerging issues such as the HIV/AIDS pandemic What does "UN MDGs" stand for? -United Nations Millennium Development Goals Which of the following are points of criticism made against the SDGs (select all that apply, there are one to four possible correct answers)? -The SDGs are internally inconsistent because they don't acknowledge that global poverty and ecological decline are tied to extreme wealth, inequality, and overconsumption -The goals are presented as if in isolation and this could contribute to missing important connections that undermine the goal of sustainability -There are too many goals and targets for them to be effective While the MDGs included 8 goals and 21 targets, the SDGs include: -17 goals and 169 targets Which of the following are actions individuals can take to help achieve the SDGs (select all that apply, there are one to four possible correct answers)? -Reduce, reuse, recycle -Ask your local and national authorities to engage in activities that don't harm people or the planet -Save water: avoid baths, take shorter showers -Don't waste food (buy only what you need, freeze left-overs) -Unplug appliances -Compost any food remains to reduce landfills -Buy products from companies that have sustainable practices -Turn off the lights -Whenever possible, bike, walk or take public transportation Who was responsible for crafting the Sustainable Development Goals (SDGs)? -Governments working together in the Open Working Group on Sustainable Development in 2015, the UN's ___________________ expired and were replaced by the ____________________. -Millennium Development Goals / Sustainable Development Goals What apparent tension is part of the concept of sustainable development? -The tension between economic health, social health, and ecological health Which of the following are questions that remain unanswered even after the SDGs were adopted (select all that apply, there are one to four possible correct answers)? -How should the roles and responsibilities of implementation be shared between developing and developed states? -Who will decide on the how much of the financial costs will be taken on by public and how much by private sources? -Will MDCs carry a greater share of the financial obligations than LDCs? Who was responsible for crafting the Millennium Development Goals (MDGs)? -A Group of UN experts under the guidance of the UN Secretary-General As can be seen in the table above of "Greenhouse Gas Emissions" (one of the subdimensions under "Air Quality"), China and the United States rank #142 and #141 (i.e., the worst). Which three countries rank as next lowest on "Greenhouse Gas Emissions"? -Qatar, United Arab Emirates, Russia In the radar chart above, three of the most pivotal countries to global environmental concerns are shown across "Environmental Sustainability" and its four main subdimensions. Match the country to the color. -green= united state blue= china orange= india Africa has many of the countries where significant numbers of people lack access to drinking water and improved sanitation. In the table above, which countries stand out as clearly having the worst "Water Access"? 
-Chad, Burundi, Kenya As can be seen in the Country Profile drill down above, the United States ranks 127th in "Air Quality," 24th in "Water Quality," and 79th in "Land Quality." Look inside these three subdimensions and select all of the below that are true. -81.84% of Americans have access to sanitation -The U.S. ranks last in "Carbon Emissions" China ranks as the worst in the world on five of the 15 "Air Quality" variables in the RCII (both aggregate variables and raw data variables). Select all the variables below where the country ranks last (score of 1). -Other Greenhouse Gas Emissions -Nitrous Oxide (KT of CO2 Equivalent) -CO2 Emissions (KT) -Greenhouse Gas Emissions -Methane Emissions (KT of CO2 Equivalent) "Water Quality" issues are particularly acute in Africa. Which country ranks as the worst in the world on "Water Quality" in 2017? Outside its focus on air, land, and water quality, the RCII has one stand-alone variable in its "Environmental Sustainability" subdimension, "Environmental Risk Exposure." From the map above you can clearly see that the African and Asian states generally do not do well on this variable. What is its definition (i.e., what is it trying to measure)? -The Environmental Risk Exposure (ERE) indicator assesses hazards to human health posed by five environmental risk factors: unsafe water, unsafe sanitation, ambient particulate matter pollution, household air pollution from solid fuels, and ambient ozone pollution.
CommonCrawl
Molecular architecture of black widow spider neurotoxins
Minghao Chen1,2, Daniel Blum3, Lena Engelhard2, Stefan Raunser (ORCID: orcid.org/0000-0001-9373-3016)2, Richard Wagner3 & Christos Gatsogiannis (ORCID: orcid.org/0000-0002-4922-4545)1,2
Latrotoxins (LaTXs) are presynaptic pore-forming neurotoxins found in the venom of Latrodectus spiders. The venom contains a toxic cocktail of seven LaTXs, with one of them targeting vertebrates (α-latrotoxin (α-LTX)), five specialized on insects (α-, β-, γ-, δ-, and ε-latroinsectotoxins (LITs)), and one on crustaceans (α-latrocrustatoxin (α-LCT)). LaTXs bind to specific receptors on the surface of neuronal cells, inducing the release of neurotransmitters either by directly stimulating exocytosis or by forming Ca2+-conductive tetrameric pores in the membrane. Despite extensive studies in the past decades, a high-resolution structure of a LaTX is not yet available and the precise mechanism of LaTX action remains unclear. Here, we report cryoEM structures of the α-LCT monomer and the δ-LIT dimer. The structures reveal that LaTXs are organized in four domains. A C-terminal domain of ankyrin-like repeats shields a central membrane insertion domain of six parallel α-helices. Both domains are flexibly linked via an N-terminal α-helical domain and a small β-sheet domain. A comparison between the structures suggests that oligomerization involves major conformational changes in LaTXs with longer C-terminal domains. Based on our data we propose a cyclic mechanism of oligomerization that takes place prior to membrane insertion. Both recombinant α-LCT and δ-LIT form channels in artificial membrane bilayers that are stabilized by Ca2+ ions and allow calcium flux at negative membrane potentials. Our comparative analysis between α-LCT and δ-LIT provides crucial first insights towards understanding the molecular mechanism of the LaTX family.
Latrotoxins (LaTXs) are potent high molecular weight neurotoxins from the venom of black widow spiders. The venom contains an arsenal of phylum-specific toxins, including one vertebrate-specific toxin, α-latrotoxin (α-LTX)1, five highly specific insecticidal toxins (α-, β-, γ-, δ-, and ε-latroinsectotoxin (LITs))2, 3, and one crustacean-specific toxin, α-latrocrustatoxin (α-LCT)3, 4. The vertebrate-specific α-LTX causes a clinical syndrome named latrodectism upon a venomous bite to humans, which is fortunately rarely life-threatening but often characterized by severe muscle cramps and numerous other side effects such as hypertension, sweating, and vomiting5, 6. LaTXs are produced as ~160 kDa inactive precursor polypeptides in venom glands and secreted into the gland lumen. There, the final mature 130 kDa toxin is produced by proteolytic processing at two furin sites and cleavage of an N-terminal signal peptide and a C-terminal inhibitory domain7, 8. Most of the physiological and molecular biological research to date has been carried out using the vertebrate-specific toxin α-LTX. α-LTXs have been shown to form cation-selective pores upon binding to specific receptors on the presynaptic membrane and induce Ca2+ influx, thereby mimicking physiological voltage-dependent calcium channels9, 10. Ca2+ influx activates the exocytosis machinery11 and triggers a massive release of neurotransmitters.
α-LTX was also shown to form pores on artificial lipid bilayers, which have high conductance for monovalent and divalent cations such as K+, Na+, Ca2+, and Mg2+, but are blocked by transition metals and trivalent ions such as Cd2+ and La3+12,13,14,15,16. Efficient incorporation into biological membranes, however, strictly relies on the presence of specific receptors17,18,19. To date, three receptors for α-LTX have been isolated, i.e., the cell adhesion protein neurexin (NRX), which binds to α-LTX in a Ca2+-dependent manner20,21,22, the G protein-coupled receptor latrophilin (LPHN or CIRL, for Calcium-Independent Receptor of Latrotoxin)23, 24 and the receptor-like protein tyrosine phosphatase σ (PTPσ)25. With regard to α-LTX, NRX and PTPσ are suggested to provide only a platform for binding and subsequent pore formation events25,26,27,28,29. In contrast, Ca2+-independent binding to LPHN does not involve oligomerization and channel formation, but direct downstream stimulation of the synaptic fusion machinery27, 30,31,32. The channel-dependent and -independent functions of α-LTX have attracted the attention of neurobiologists for several decades, who have studied the effects of α-LTX on neurotransmitter release and the mechanisms underlying synaptic plasticity. The α-LTX variant LTXN4C33, which lacks the ability to form pores but retains the full binding affinity to receptors, played a key role in the investigations of α-LTX action. Today, α-LTX is an indispensable tool for stimulating exocytosis of nerve and endocrine cells29, 34,35,36. α-LTXs are furthermore considered to antagonize botulinum poisoning and attenuate neuromuscular paralysis via synapse remodeling37. The surprising structural homology of α-LTX to the glucagon-like peptide-1 (GLP1)-like family of secretagogic hormones might also open opportunities for pharmacological applications in blood glucose normalization and reversal of neuropathies38. Invertebrate LaTXs are less well understood, but are considered promising candidates for the development of novel bio-pesticides. Orthologues of the three receptor classes shown to bind α-LTX are also present in insects39. To date, four LaTXs have been cloned, including α-LTX7, α-LIT40, δ-LIT41, and α-LCT42. Despite their high specificity, the different LaTXs display a 30–60% sequence identity and are expected to share an overall similar domain organization and membrane insertion mechanism. Low-resolution 3D maps (14–18 Å) of the α-LTX dimer, α-LTX tetramer, and δ-LIT monomer were previously determined using single-particle negative stain and cryoEM39, 43, suggesting indeed an overall similar architecture of the different members of the LaTX family. A structural and mechanistic understanding of LaTX function is a significant priority for the development of novel anti-toxin therapeutics and/or insecticides. However, a high-resolution structure of a LaTX, which is a prerequisite for understanding the LaTX mechanism of action in molecular detail, has been missing. Here we present a 4.0 Å cryoEM structure of the α-LCT monomer and a 4.6 Å cryoEM structure of the δ-LIT dimer, revealing the molecular architecture of LaTX neurotoxins as well as the molecular details of their oligomerization mechanism prior to membrane insertion. In addition, we characterized the principal pore properties of α-LCT, precursor δ-LIT and mature δ-LIT channels after reconstitution into planar lipid bilayers.
CryoEM structure determination of α-LCT We recombinantly expressed the mature α-LCT (amino acids 16–1240) from Latrodectus tredecimguttatus (Mediterranean black widow) (UniprotKB ID: Q9XZC0 (LCTA_LATTR)) (Fig. 1a and Supplementary Fig. 1) in insect cells using the MultiBac system and purified it using a combination of affinity and size exclusion chromatography to obtain a monodisperse sample for cryoEM single particle analysis (Supplementary Fig. 2a, b). The cryoEM sample showed a homogeneous set of characteristic G-shaped flat particles, corresponding to soluble monomers of the 130 kDa mature α-LCT complex (prepore state; before membrane insertion). The G-shaped particle is composed of a C-like curved region, corresponding to the long C-terminal domain of ankyrin-like repeats (ARs), that is engulfing a central compact head region (Fig. 1b, Supplementary Fig. 2c). Fig. 1: Structure of α-LCT monomer. a Domain organization of mature α-LCT. Gray diagonal lines indicate regions not resolved in the cryoEM density. b Representative reference-free two-dimensional class averages. Scale bar: 10 nm. c Side views of the α-LCT monomer superposed with the EM map (transparent) contoured at 10σ. Domains are depicted in the same colors as in (a). d Close-up view of the helical bundle domain. The front helix (H7) is not shown in the left image for clarity. e Close-up view of the interface between CD (H1-3) and ARD (AR5-10). f Close-up view of the ARD C-terminal tail. The gray ellipse indicates the last five ARs (AR14-18). Note the change in orientation g Electrostatic potential calculated in APBS. Red: −10 kT/e; Blue: +10 kT/e. PD:plug domain; HBD:helical bundle domain; CD: connector domain; ARD: ankyrin-like repeat domain. Subsequent image processing and 3D classifications revealed an inherent flexibility between both regions. The N-terminal head is orientated perpendicular and in close vicinity to the tail of the AR region in the best-resolved class (compact conformation), but shows a continuous movement and is tilted away the tail of the AR-domain in less well-resolved classes, resulting in less compact conformations (Supplementary Figs. 2d, e, 3a). The α-LCT monomer in the compact conformation adopts a flat architecture and is 130 Å long and 30 Å wide. The path of the polypeptide is clear in the density map, allowing us to build an atomic model covering 81% of the sequence of the molecule (residues 48–1066, except two disordered loop regions 226–232, 349–360) (Supplementary Fig. 4a). We deleted side-chain atoms beyond Cβ in regions where side-chain density was only rarely evident. The C-terminal end of the ARs domain including the last four ankyrin repeats is not well resolved. The local resolution is highest in the N-terminal head region (Supplementary Fig. 2h). Architecture of the soluble α-LCT monomer The resulting atomic model reveals that the N-terminal head region is composed of three domains: a four-helix domain at the N-terminal end, termed here as connector domain (residues 48–115); a central helical bundle domain (residues 116–352), and a short β-sheet domain, linking the helical bundle domain with the AR domain, termed here as plug domain (residues 353–452) (Fig. 1a, c, Supplementary Fig. 5, Supplementary Movie 1). The helical bundle domain shows a novel fold of a six-helix bundle, with five parallel aligned helices assembling into a cylindrical structure (H5-7, H9-10), encircling a central α-helix (H8) (Fig. 1d). 
The conserved helix H8 contains many hydrophobic residues and is predicted to act as transmembrane region of the tetrameric LaTX pore (Supplementary Fig. 6d). The surrounding helices shield the hydrophobic surface of H8 within the cylindrical bundle and protect it from the aqueous environment (Supplementary Fig. 7a, b). The helical bundle domain is expected to undergo severe conformational changes during pore formation to allow exposure of the transmembrane helix H8 and transition of the toxin from a soluble monomer to a transmembrane tetramer. Interestingly, helices H7 and H9 are kinked and interrupted by two (residues 191–196, 203–211) and one (residues 296–304) short loops, respectively. Such short breaks interrupting long α-helices in close vicinity to the putative transmembrane regions, were shown in other α-helical pore forming toxins to provide the necessary flexibility for major conformational changes towards membrane insertion44. The long and curved C-shaped C-terminal AR domain consists of 22 ankyrin-like repeats, accounting for two thirds of the sequence of α-LCT. In total, the first 18 out of 22 ankyrin-like repeats (ARs) were resolved in the present map. Interestingly, there is a redirection of orientation of the ARs at the loop (residues 890–902) connecting AR13 and AR14. ARs14-18 are rotated almost 90 degrees compared to ARs1-13 along the long axis of the domain (Fig. 1f). This arrangement goes along with a characteristic bipolar charge distribution, with ARs1-13 dominantly positively charged and the tail of ARs14-18 displaying a prominent negatively charged patch (Fig. 1g, Supplementary Fig. 6a). The C-terminal tail of the AR-domain is considerably close to the helical bundle domain, but the I226-A232 loop of the helical bundle that connects H7 and H8 and might cross-bridge the 5 Å distance between both domains, is not resolved in the cryoEM density and we were also not able to find strong candidates for electrostatic and hydrophobic interactions. Furthermore, 3D classification (Supplementary Fig. 2d) of the dataset further suggests high flexibility in this region and variable distances between both domains, rendering the possibility of stable large scale interactions between the helical bundle domain and AR-domain rather unlikely (Supplementary Fig. 7c). The connector domain at the N-terminal end forms the lower interface connecting the central helical bundle with the AR-domain (Fig. 1c, e). Helices H1, H2, H4 and the short helix H3 of the connector domain, assemble into a flat triangle structure, which is attached to the inner curved surface of the AR-domain and interacts with ARs 6-10. Most interaction surface to AR-domain is provided by H2, whereas the short helix H3 is directly positioned in close proximity to the loop connecting AR9 and AR10 and parallel aligned to the helices of AR9 and AR10 (Fig. 1e). The residues involved in this interface are only conserved in the AR-, not in the connector domain (Supplementary Fig. 7d, e). They are mainly hydrophobic, suggesting a major role of hydrophobic interactions. Helix H2 of the connector domain is hydrophobic in our structure and interestingly, this helix has been also predicted as the second transmembrane region of the insecticidal δ-LIT (Supplementary Fig. 6h), but not predicted as such from the sequences of other latrotoxin family members, such as α-LCT (Supplementary Fig. 6d). 
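The transmembrane predictions mentioned above (H8 in both toxins, and additionally H2 in δ-LIT) are typically based on hydropathy analysis of the primary sequence. The following minimal sketch illustrates the underlying idea with a sliding-window Kyte–Doolittle scan; the example sequence is a placeholder and not the actual toxin sequence, and the window size and cutoff are common defaults rather than the settings used for Supplementary Fig. 6.

```python
# Minimal Kyte-Doolittle hydropathy scan, as commonly used to flag candidate
# transmembrane helices. The example sequence is a placeholder, NOT the
# alpha-LCT or delta-LIT sequence; window = 19 is a typical choice for
# transmembrane segments.
KD = {'A': 1.8, 'R': -4.5, 'N': -3.5, 'D': -3.5, 'C': 2.5, 'Q': -3.5,
      'E': -3.5, 'G': -0.4, 'H': -3.2, 'I': 4.5, 'L': 3.8, 'K': -3.9,
      'M': 1.9, 'F': 2.8, 'P': -1.6, 'S': -0.8, 'T': -0.7, 'W': -0.9,
      'Y': -1.3, 'V': 4.2}

def hydropathy_profile(seq: str, window: int = 19):
    """Return (window center position, mean KD score) for each sliding window."""
    half = window // 2
    return [(i + half, sum(KD[aa] for aa in seq[i:i + window]) / window)
            for i in range(len(seq) - window + 1)]

def candidate_tm_centers(profile, cutoff: float = 1.6):
    """Window centers exceeding the empirical ~1.6 cutoff for TM helices."""
    return [pos for pos, score in profile if score >= cutoff]

if __name__ == "__main__":
    toy_seq = "MKTLLVLALLAVVAGILLLIVGSKDDEEKRNPQ"   # placeholder sequence
    print(candidate_tm_centers(hydropathy_profile(toy_seq)))
```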
The plug-domain covalently links the primary sequence of the helical bundle with the AR-domain, positioning the helical bundle directly below AR1 (upper interface) (Fig. 1c, Supplementary Fig. 7f–h). The plug-domain is organized in two layers: a region of several flexible loops and a core region of four β-strands that is attached to H5 and a short loop between H8 and H9 of the helical bundle (Supplementary Fig. 7i). The plug-domain plays an important role in the oligomerization of the complex prior complex formation, which will be discussed in detail in the next section. CryoEM structure of inactive precursor soluble δ-LIT dimer In subsequent experiments, we were not able to induce oligomerization of α-LCT and trigger insertion into liposomes for further visualization of pore formation events as previously described for α-LTX43, 45. To provide further insights into the LaTX family, we then focused on the insecticidal δ-LIT (amino acids 1-1214 from Latrodectus tredecimguttatus, UniprotKB ID: Q25338 (LITD_LATTR)). As expected, mature δ-LIT was toxic for our insect cell cultures and therefore, we expressed, purified, and subjected to cryoEM analysis the precursor uncleaved inactive toxin (Supplementary Fig. 8a–c). The precursor toxin contains an additional N-terminal signal peptide (residues 1–28) and a C-terminal inhibitory domain (α-LCT residues 1037–1214), compared to the matured form (Fig. 2a). Albeit we performed the cryoEM analysis with the same procedure used for mature α-LCT, subsequent processing did not only reveal G-shaped monomers, as was the case for mature α-LCT, but also higher order oligomers. In particular, reference-free 2D classifications revealed approximately 45% monomers, 50% dimers, 2.5% trimers and 2.5% tetramers (Fig. 2b, Supplementary Fig. 8e–h). Interestingly, such particle populations of oligomers were not observed in negative-stain EM (Supplementary Fig. 8c), indicating that the interactions are rather weak and dilution of the sample which is necessary for negative-stain EM, as well as the low pH of the stain might induce dissociation of the oligomers. We were finally able to obtain a map of the δ-LIT dimer at 4.6 Å average resolution from 81,192 particles, with no symmetry imposed (Supplementary Fig. 9). Fig. 2: Structure of δ-LIT dimer. a Domain organization of full-length precursor δ-LIT. Disordered regions in the cryoEM map are indicated in gray diagonal lines and boxes. b Representative two-dimensional class averages of each oligomeric state. Scale bar: 100 Å. c Side and top view of the δ-LIT dimer superposed with the EM map (transparent) contoured at 10σ. Domains of protomer A are depicted in the same colors as in (a); protomer B is colored in gray. d Close-up view of the PD-ARD dimerization interface. The position of the four amino acid insertion variant (VPRG) is indicated and colored in yellow. e Side view of the dimerization interface. Protomers A and B are rotated 90° to left and right, for better clarity. Polar and charged candidate residues (<5 Å to the opposite protomer) are shown as spheres and colored in cyan and pink. f Electrostatic potential calculated in APBS. Red: −10 kT/e; Blue: +10 kT/e. PD:plug doman; HBD:helical bundle domain; CD: connector domain; ARD: ankyrin-like repeat domain. Based on the structure of α-LCT monomer, we were able to build a molecular model of the δ-LIT dimer, including residues 50–928 for both protomers (Fig. 2c, Supplementary Fig. 4b, Supplementary Movie 2). 
As expected, the structure of the protomer of δ-LIT is very similar to that of α-LCT (sequence identity 39%), showing the characteristic G-shaped architecture and domain organization. δ-LIT displays however a shorter repetitive C-terminal AR-domain, containing only 15 ARs instead of 22 in α-LCT. The path of the polypeptide is clear for all four domains, i.e., connector domain, helical bundle domain, plug domain, and AR-domain, except for a few loop regions (residues 91–92, 99–105, 237–245, 355–365) in protomer A and (237–244, 356–361) in protomer B, the last (15th) AR and the complete signal peptide and inhibitory domain in both protomers (Fig. 2c). These structural regions were not resolved in the cryoEM density map, probably because they are disordered or highly flexible. Due to limited resolution, we deleted most of the side-chain atoms beyond Cβ in the molecular model unless the electron density was sufficiently clear with regard to bulky side chains. The δ-LIT dimer is formed with the two protomers rotated 90 degrees relative to each other and the plug domain of protomer A plugging from the side into a cleft formed by ARs 1-6 of protomer B (Fig. 2c). The protomers A and B are in basically the same conformation, but the small differences, mainly due to the flexibility of the helical bundle domain, precluded successful C2 symmetrization of the particle (Supplementary Fig. 3e). The plug-domain has a hemispherical architecture matching the cleft formed by the 1st–6th ARs of the AR-domain (Fig. 2d), suggesting an induced fit and shape complementarity as the basis for the interaction. We found clusters of 17 polar and charged residues on the plug-domain of protomer A and 18 polar and charged residues on the cleft of protomer B, that may be involved in this interface (Fig. 2e, Supplementary Table 1). In previous studies, the α-LTX variant LTXN4C with an insertion of a thrombin site (VPRG) at the linker peptide connecting the plug domain with the AR-domain was first introduced to functionally characterize the N- and C-terminals individually33. Although cleavage was not successful, this LaTX variant was shown later to retain its binding affinity to receptors but lose its ability to oligomerize into tetramers and form pores27. This variant played a key role in understanding the dual mode of action of LaTXs. The corresponding position of this insertion is highlighted on the molecular model of δ-LIT (Fig. 2d). This insertion is not positioned directly at the dimerization interface, but in close proximity to the loops of the plug-domain involved in the interaction and might thus disturb the overall shape complementarity and/or induce a shift of positions and a mismatch between the residues involved in this interface, thereby blocking dimerization. Besides the main interaction between the plug-domain of protomer A and ARs 1–6 of protomer B, there is a less pronounced interaction between the helical bundle domains of both protomers, involving H9 of protomer A and H6/H7a of protomer B (Fig. 2e, Supplementary Table 1). Whereas H9 is interrupted in protomer A and in the structure of α-LCT by a short loop in its middle, in protomer B (thus most probably upon dimer formation), this loop folds helically to straighten and complete H9 (Supplementary Fig. 3f, Supplementary Fig. 4b). Interestingly, because δ-LIT is lacking seven terminal ARs, the surface of the AR-domain does not display a clear bipolar charge distribution as in α-LCT (Fig. 2f, Supplementary Fig. 6e). 
However, in both LaTXs, the terminal tail of the AR-domain is clearly negatively charged (Supplementary Fig. 6a, e). We further processed the subset including δ-LIT monomers, and upon 3D classification, we finally obtained two 3D reconstructions at a nominal resolution of 8.8 Å and 12 Å, respectively (Supplementary Fig. 10a). The molecular model of δ-LIT obtained from the dimer was then rigid body fitted in the 3D volumes (Supplementary Fig. 10b, c). This comparison revealed an additional globular density at the C-terminal tail of both reconstructions that is not occupied by the model, which can be interpreted as the C-terminal 22 kDa inhibitory domain. The two δ-LIT monomer reconstructions show very similar conformations with the protomers of the dimer, with small differences in the orientation of the helical bundle- and AR-domain, suggesting inherent flexibility of the particle (Supplementary Fig. 10d). We were not however able to obtain 3D reconstructions of the δ-LIT trimers and tetramers due to the preferred orientation of the top views towards the air–water interface (Supplementary Fig. 8g, h). Nevertheless, according to our knowledge, this is the first observation of a LaTX trimer as an intermediate state, towards the formation of the tetramer. Structural differences between δ-LIT and α-LCT In comparison to the compact conformation of the truncated α-LCT monomer, full-length precursor δ-LIT (both in the monomer and dimer state) shows a different, rather extended conformation and a larger distance between the helical bundle- and the AR-domain. Here, we used the model obtained by the best resolved δ-LIT density (protomer A in the δ-LIT dimer, (4.6 Å)), for direct comparison with the α-LCT. In particular, the outermost helix H9 and the overall helical bundle domain are straightened, the helical bundle of each protomer is further tilted 15 degrees away from the long axis of the AR-domain and the distance between the helical bundle domain and the shorter AR-domain is substantially longer. Notably, the AR-domain of δ-LIT is significantly less curved (Fig. 3a) and its C-terminal tail is further twisted and positioned outside the helical bundle domain and ankyrin-like repeat domain (HBD-ARD) plane (Fig. 3b). As a consequence, the bottom part of the helical bundle domain becomes exposed in this case (Fig. 3c). Fig. 3: Conformational changes during dimerization. a Side-by-side comparison of α-LCT (compact state, sea green) and δ-LIT (extended state, extracted from the dimer, orange) superposed with α-LCT (transparent). b Front view of the ARD. Arrows indicate domain motion during dimerization. c Bottom views of the compact and extended states. The bottom part of the cylindrical (HBD) is exposed in the extended state. d Magnified view of the HBDs. The front helix (H7) is not shown for clarity. The H9a-H9b loop folds helically to complete H9 in the extended state. e Magnified view of the interface between the connector- and the AR-domain. f The schematic diagram illustrates the conformational change. PD:plug domain; HBD:helical bundle domain; CD: connector domain; ARD: ankyrin-like repeat domain. The enlarged distance between AR-domain and helical bundle domain, also requires a more extended conformation of the connector domain, which is further stretched in δ-LIT, but still bridges both domains. This results into repositioning of H1 and unfolding of the lower half of H4 of the connector domain (Fig. 3e). 
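Inter-domain rotations such as the ~15° tilt of the helical bundle described above are commonly quantified by superposing the two models on one domain (here the AR-domain) and extracting the residual rotation of the other domain. The sketch below illustrates the calculation with the Kabsch algorithm; the coordinates are synthetic stand-ins for matched Cα atoms, with a 15° rotation built in purely for demonstration.

```python
import numpy as np

def kabsch(P, Q):
    """Optimal rotation R such that P @ R.T best matches Q (N x 3 arrays, same atom order)."""
    P0 = P - P.mean(axis=0)
    Q0 = Q - Q.mean(axis=0)
    U, S, Vt = np.linalg.svd(P0.T @ Q0)
    d = np.sign(np.linalg.det(Vt.T @ U.T))        # guard against reflections
    return Vt.T @ np.diag([1.0, 1.0, d]) @ U.T

def rotation_angle_deg(R):
    """Rotation angle (degrees) encoded in a 3x3 rotation matrix."""
    cos_t = np.clip((np.trace(R) - 1.0) / 2.0, -1.0, 1.0)
    return np.degrees(np.arccos(cos_t))

# Hypothetical matched C-alpha coordinates of the helical bundle AFTER both
# models have been superposed on their AR-domains (that first superposition is
# omitted here). A synthetic 15-degree rotation stands in for the compact ->
# extended tilt described in the text.
rng = np.random.default_rng(1)
hbd_compact = rng.normal(scale=10.0, size=(80, 3))
t = np.radians(15.0)
Rz = np.array([[np.cos(t), -np.sin(t), 0.0],
               [np.sin(t),  np.cos(t), 0.0],
               [0.0,        0.0,       1.0]])
hbd_extended = hbd_compact @ Rz.T

R = kabsch(hbd_compact, hbd_extended)
print(f"inter-domain rotation: {rotation_angle_deg(R):.1f} deg")   # ~15.0
```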
These differences between the compact α-LCT and the extended δ-LIT are summarized in Supplementary Movie 3 and a schematic diagram in Fig. 3f. Interestingly, the less well-resolved 3D class of α-LCT can be considered a flexible intermediate between both structures (Supplementary Fig. 3g, h, Supplementary Movie 3), and the two low-resolution 3D reconstructions of the δ-LIT monomer further suggest inherent flexibility between the helical bundle domain and the AR-domain. In general, α-LCT appears more compact, due to the seven additional terminal ARs, possibly allowing additional interactions between the AR-domain and the flexible central helical bundle. This might explain the different oligomerization properties observed for both molecules under identical cryoEM conditions. The directional change observed in the AR-domain of the extended δ-LIT protomer would also result in the exposure of the helical bundle domains, even for LaTXs with significantly longer AR-domains such as α-LCT or α-LTX (Fig. 3c, right panel). It should be noted that the previous low-resolution 3D reconstructions of α-LTX and δ-LIT show an overall different LaTX architecture. The previous 2D cryoEM class averages of α-LTX43 and the present 2D classes of δ-LIT dimers and tetramers (Fig. 2b) nevertheless display clear similarities, indicating structure conservation. Taking into account in addition the high sequence similarity within the LaTX family, we rather conclude that the earlier LaTX reconstructions, determined more than two decades ago, are apparently to some extent affected by the previous bottlenecks of the technique and the significantly lower signal-to-noise ratio in the cryoEM micrographs. The members of the LaTX family share an overall common domain organization and architecture.
Electrophysiological characteristics of the pore-forming precursor δ-LIT in comparison with the mature δ-LIT and α-LCT
We further performed electrophysiology studies to demonstrate the pore-formation activity of the recombinant proteins and provide a detailed characterization of the less well-studied invertebrate LaTX channels. One important aspect herein is the role of the C-terminal domain in channel formation, since its cleavage is required for activation of the toxins. Therefore, we additionally prepared mature truncated δ-LIT for further functional comparisons with precursor full-length δ-LIT and mature truncated α-LCT. The inherent cleavage sites were, however, not recognized by furin protease, and we therefore inserted two additional cleavage sites into the sequence of precursor δ-LIT, one at the N terminus of the mature toxin (residue 29) and one after the AR-domain (residue 1019), followed by protease treatment after expression (Supplementary Fig. 11). The two mature toxins and precursor δ-LIT were then reconstituted into planar lipid bilayers and voltage-dependent (\(V_{\mathrm{mem}}\), membrane potential) membrane currents were recorded at single-channel resolution as previously described46. Precursor δ-LIT spontaneously inserted into the lipid bilayer and formed open channels, as is obvious from the observed large voltage-induced currents (Supplementary Fig. 12a, c). With asymmetric (cis/trans) 150/25 mM KCl buffer conditions, the reversal potential was \(V_{\mathrm{rev}} = 4 \pm 1\) mV (SD of the linear i/v-curve fit), yielding \(P_{\mathrm{K}^{+}}/P_{\mathrm{Cl}^{-}} = 1.25\)47, 48 and demonstrating only marginal selectivity for \(\mathrm{K}^{+}\) over \(\mathrm{Cl}^{-}\) ions (Fig. 4a).
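The permeability ratio quoted above follows from the measured reversal potential via the Goldman–Hodgkin–Katz (GHK) voltage equation. The short sketch below (not the authors' code) solves that equation for \(P_{\mathrm{K}^{+}}/P_{\mathrm{Cl}^{-}}\); the sign convention (a positive \(V_{\mathrm{rev}}\) for cation selectivity with the higher KCl concentration on the cis side), the temperature of 295 K and the use of concentrations in place of activities are assumptions made here for illustration.

```python
import math

R, T, F = 8.314, 295.0, 96485.0          # gas constant, ~22 degC, Faraday constant
RT_F = R * T / F                          # thermal voltage, ~25.4 mV (in volts)

def pk_over_pcl(v_rev_mV, kcl_cis_mM, kcl_trans_mM):
    """
    Permeability ratio P_K/P_Cl from the reversal potential of a KCl gradient,
    using the GHK voltage equation for a K+/Cl--conducting channel.
    Assumed sign convention: cation selectivity with the higher KCl
    concentration on the cis side gives a positive V_rev.
    """
    e = math.exp((v_rev_mV / 1000.0) / RT_F)
    return (e * kcl_cis_mM - kcl_trans_mM) / (kcl_cis_mM - e * kcl_trans_mM)

# Precursor delta-LIT, 150/25 mM KCl (cis/trans), V_rev = +4 mV:
print(round(pk_over_pcl(4.0, 150.0, 25.0), 2))   # ~1.25
```

With the reported +4 mV and the 150/25 mM gradient this returns approximately 1.25, i.e., the marginal K+ preference stated above.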
However, upon addition of a 10/1 mM CaCl2 (cis/trans) gradient in symmetrical 150/150 mM KCl buffer, we observed an approximately five-fold increase in the reversal potential (\(V_{\mathrm{rev}} = 20 \pm 2.5\) mV, SD of the linear i/v-curve fit) (Fig. 4b), revealing that the precursor δ-LIT voltage-activated channel preferentially conducts \(\mathrm{Ca}^{2+}\) ions (\(P_{\mathrm{Ca}^{2+}}/P_{\mathrm{Cl}^{-}} = 18\)) and that the current–voltage relation became rectifying. Moreover, calcium drastically reduced the root-mean-square (rms) values of the current noise. The high rms current noise values in the recordings with the precursor δ-LIT channel are due to seemingly additional current noise, presumably arising from a less stable channel conformation with very fast (<10 ns) and probably incomplete brief open/closing transitions of the channel. This could not be resolved in time and amplitude by our technique and thus appeared as a broad current band with significantly larger rms values than those arising from the instrumental set-up alone. The mean main conductance, as determined from all-point current histograms at different voltages, was \(\bar{G}_{\mathrm{main}} = 330 \pm 36\) pS (SD, N = 12).
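Single-channel conductances such as the 330 pS main state reported here (and the smaller open states of the mature toxins described below) are converted in the text to approximate pore restriction diameters following refs. 48, 50. The sketch below is a rough re-derivation using the common cylindrical-pore-plus-access-resistance approximation; the assumed pore length and bulk KCl conductivities are illustrative values chosen here, not values from the paper, so the output only approximately brackets the reported ~1.5 nm and ~0.8 nm diameters.

```python
import math

def pore_diameter_nm(g_pS, sigma_S_per_m, length_nm):
    """
    Invert the cylindrical-pore + access-resistance approximation
        1/g = 4*L/(pi*sigma*d^2) + 1/(sigma*d)
    for the pore diameter d. g in pS, sigma in S/m, L in nm; returns d in nm.
    """
    g = g_pS * 1e-12            # conductance in S
    L = length_nm * 1e-9        # pore length in m
    # quadratic in d:  (sigma/g)*d^2 - d - 4*L/pi = 0
    d = (1.0 + math.sqrt(1.0 + 16.0 * L * sigma_S_per_m / (math.pi * g))) / (2.0 * sigma_S_per_m / g)
    return d * 1e9

# Assumed inputs (not taken from the paper): a 5 nm membrane-spanning pore and
# bulk KCl conductivities of roughly 1.7 S/m (150 mM) and 2.8 S/m (250 mM).
print(round(pore_diameter_nm(330.0, 1.7, 5.0), 2))   # ~1.2 nm (reported: ~1.5 nm)
print(round(pore_diameter_nm(180.0, 2.8, 5.0), 2))   # ~0.7 nm (reported: ~0.8 nm)
```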
Fig. 4: Calcium-induced ion-channel properties of reconstituted δ-LIT variants within lipid bilayers. Current–voltage ramp recordings at different cis/trans buffer conditions. a Precursor δ-LIT shows in asymmetric KCl buffer a noisy linear current–voltage relation with a reversal potential of \(V_{\mathrm{rev}} = +4\) mV, indicating a slightly cation-selective channel (\(P_{\mathrm{K}^{+}}/P_{\mathrm{Cl}^{-}} = 1.25\)). A total of n = 6 recordings similar to (a) from two different precursor δ-LIT preparations have been conducted. b With symmetric KCl and added asymmetric calcium the precursor δ-LIT channel displays a \(V_{\mathrm{rev}} = +20\) mV, now characteristic of a highly Ca2+-selective channel (\(P_{\mathrm{Ca}^{2+}}/P_{\mathrm{Cl}^{-}} = 18\)), demonstrating a dramatic property change induced by the presence of Ca2+ ions. A total of n = 6 recordings similar to (b) from two different precursor δ-LIT preparations have been conducted. c The current–voltage relation of the mature δ-LIT is again linear but with low current noise and frequent, defined gating from the open towards the closed state (displayed in the extension box). d Subsequent trans chamber perfusion within the same experiment. The \(V_{\mathrm{rev}} = 7\) mV value discloses cation-selective properties (\(P_{\mathrm{K}^{+}}/P_{\mathrm{Cl}^{-}} = 1.47\)) preserved within the mature variant. A total of n = 6 recordings similar to (c, d) from two different δ-LIT preparations have been conducted. With asymmetric KCl and cis-added calcium (lower panel, no calcium addition trans) the mature δ-LIT channel displays an asymptotic sine current–voltage relation with a zero-current crossing at \(V_{\mathrm{rev}} = 0\) mV. Extrapolation of the tangent to zero net current yields \(V_{\mathrm{rev}} \approx 40\) mV and a very high calcium selectivity of the mature δ-LIT channel (\(P_{\mathrm{Ca}^{2+}}/P_{\mathrm{K}^{+}}/P_{\mathrm{Cl}^{-}} \cong 600/1.47/1\)). A total of n ≥ 6 recordings similar to (d; lower panel) from two different δ-LIT preparations have been conducted. e Calculated GHK current–voltage relation using the experimental relative permeabilities from (d; lower panel) and the experimental concentrations of \(\mathrm{Ca}^{2+}\), \(\mathrm{K}^{+}\) and \(\mathrm{Cl}^{-}\) ions in the cis and trans compartments (see Supplemental Fig. 12 for details).
The current–voltage relations of membrane-inserted mature δ-LIT in symmetrical 250/250 mM KCl buffer showed a linear course with high gating activity (Fig. 4c). In contrast to the precursor variant, we observed low-noise currents with structured gating, which could be resolved in amplitude and time. Similar to the precursor, mature δ-LIT channels also display only a slight cation selectivity (\(P_{\mathrm{K}^{+}}/P_{\mathrm{Cl}^{-}} = 1.47\), Fig. 4d) after establishing the 250/25 mM KCl gradient. Addition of 5 mM Ca2+ ions to the cis side of the bilayer, however, changed the properties of the mature δ-LIT channel completely. The asymptotic-sine course of the current–voltage relation increased at both negative and positive command voltages (Vcmd) while crossing the zero voltage with zero net current (Fig. 4d, upper panel). To further analyze this, we applied the widely used Goldman–Hodgkin–Katz (GHK) approach47,48,49 (see Supplemental for details). It turned out that the course of the current–voltage relation in the lower panel of Fig. 4d can be explained if the currents at negative Vcmd are carried from cis to trans mainly by Ca2+ ions and at positive Vcmd predominantly by Cl− ions (Fig. 4e; see Supplemental for details). The analysis of current traces revealed two open channel states with mean amplitudes of \(\bar{G}_{1} = 25 \pm 2.8\) pS and \(\bar{G}_{2} = 180 \pm 24.8\) pS (SD of the histogram fit; N = 15), corresponding to a pore restriction diameter of about 0.8 nm48, 50 (Supplementary Fig. 12e, f), significantly smaller than the one calculated for the precursor δ-LIT channel (~1.5 nm; see Supplementary Fig. 12b). Thus the mature δ-LIT channel appears to form a denser, stabilized conformation compared to its precursor variant. Additionally, the mature δ-LIT seems to act like a complete rectifier, which, in the presence of Ca2+ and depending on membrane polarization, allows mainly either calcium flux (−Vmem) or chloride currents (+Vmem) from cis to trans. In this context, it seems important that both the precursor δ-LIT and the mature δ-LIT are incorporated into the bilayer predominantly in a unidirectional manner, with a regulative Ca2+-binding site on the cis side. Strong rectifying current–voltage relations were observed in all bilayer experiments in the presence of Ca2+ ions added on the cis side (Fig. 4b, d lower panel). Surprisingly, mature α-LCT incorporates into the bilayer membrane and forms channels only in the presence of Ca2+ ions (cis) (Supplementary Fig. 12g). The analysis of the single-channel traces (Supplementary Fig. 12h, i) revealed, similar to mature δ-LIT, two distinct open channel states, \(\bar{G}_{1} = 78 \pm 5.6\) pS and \(\bar{G}_{2} = 180 \pm 17.3\) pS (N = 7; SD from the fit of the amplitude histogram, Supplementary Fig. 12i). Despite the 10/1 mM CaCl2 gradient, the α-LCT channel does not show, in contrast to δ-LIT, any preference for the involved anions and cations (\(V_{\mathrm{rev}} = 0 \pm 2.1\) mV; Supplementary Fig. 12g).
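The GHK current–voltage relation referred to above (Fig. 4e) can be reproduced qualitatively by summing GHK current contributions of Ca2+, K+ and Cl− with the relative permeabilities given in the figure legend. The sketch below is not the authors' implementation; the orientation convention (trans treated as the "inner" compartment, so that negative potentials drive Ca2+ from cis to trans) and the exact ion concentrations are assumptions, and only the shape of the summed curve is meaningful.

```python
import numpy as np

R, T, F = 8.314, 295.0, 96485.0

def ghk_current(v, z, p_rel, c_trans_mM, c_cis_mM):
    """
    GHK current (arbitrary units) for one ion species at membrane potential v
    (volts). Assumed convention: trans is the 'inner' side, v = V_trans - V_cis,
    and positive current corresponds to cation flux from trans to cis.
    p_rel is a relative permeability, so absolute current values are not meaningful.
    """
    u = z * F * v / (R * T)
    u = np.where(np.abs(u) < 1e-9, 1e-9, u)              # avoid 0/0 at v = 0
    return p_rel * z * F * u * (c_trans_mM - c_cis_mM * np.exp(-u)) / (1.0 - np.exp(-u))

# Mature delta-LIT example from the text: 250/25 mM KCl (cis/trans) plus 5 mM
# CaCl2 on the cis side only; relative permeabilities Ca2+/K+/Cl- ~ 600/1.47/1
# as given in the Fig. 4 legend.
v = np.linspace(-0.08, 0.08, 161)                        # -80 ... +80 mV
i_total = (ghk_current(v, +2, 600.0,   0.0,   5.0) +     # Ca2+ (cis only)
           ghk_current(v, +1, 1.47,   25.0, 250.0) +     # K+
           ghk_current(v, -1, 1.00,   25.0, 260.0))      # Cl- (250 mM KCl + 2 x 5 mM CaCl2 on cis)
for vm, i in zip(v[::40], i_total[::40]):                # coarse I-V table
    print(f"{vm*1e3:+6.1f} mV   {i:+.3e}")
```

With these inputs the summed current is dominated by Ca2+ flux from cis to trans at negative potentials and by Cl− at positive potentials, with an extrapolated reversal of several tens of millivolts, qualitatively matching the behaviour described above.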
In comparison to mature δ-LIT (Fig. 4f), the α-LCT channel displays a higher current noise level in the recordings and, surprisingly, considering the slightly asymptotic sine course of the current–voltage relation (Supplementary Fig. 12g), the α-LCT apparently also harbors rectifying properties, making it likely that rectified Ca2+ and Cl− currents are conveyed similarly to mature δ-LIT. To sum up, Ca2+ ions appear to further stabilize precursor and mature LaTX oligomers after incorporation into the membrane, resulting in a rectifying, calcium-selective channel that allows calcium flux at negative membrane potentials. High Ca2+ permeability was previously also shown for native δ-LIT channels in locust muscle membranes and artificial bilayers, but not for truncated recombinant variants41. Provided that, as a result of the predominantly unidirectional insertion, the cis side corresponds to the extracellular side of the respective channel, δ-LIT and α-LCT display ion-channel characteristics similar to Ca2+-release channels51.
Formation of an insertion-competent prepore complex
The reference-free 2D class averages of monomers, dimers, trimers and tetramers identified in the δ-LIT cryoEM dataset (Fig. 2b) suggest a general sequential oligomerization mechanism of LaTXs towards the formation of a soluble tetrameric complex prior to membrane insertion. The bent cleft at the upper part of the AR-domain of a protomer is employed as a binding site for the hemispherical plug domain of an adjacent protomer, with both molecules rotated 90 degrees relative to each other. Four LaTX molecules dock into each other in a sequential, circular ring fashion via 1/2/3-mers to eventually form tetramers (Supplementary Fig. 13a), suggesting that the tetramer is not exclusively formed upon assembly of two dimers, as previously proposed45. Based on our data, we generated a 3D model of the tetramer by arranging the δ-LIT dimer ("extended state") according to the 2D class averages of the top view of the tetramer. The resulting 3D model of the tetramer is approximately 140 × 140 Å wide and has a height of approximately 120 Å, with an overall striking configuration resembling a four-finger crane claw, with each curved AR-domain resembling a finger of the claw (Fig. 5a).
Fig. 5: Latrotoxin tetramer assembly prior to membrane insertion. a Simulated volume of a soluble δ-LIT insertion-competent tetramer. b Proposed model of latrotoxin action at the presynaptic membrane prior to membrane insertion. The G-shaped toxin is activated upon cleavage of the 200 aa C-terminal inhibitory domain. The present study reveals a characteristic bipolar charge distribution, with the tail of the complex displaying a prominent negatively charged patch. We therefore propose that the toxin is orientated with its tail facing the positively charged outer surface of the presynaptic membrane (alternative scenarios cannot be ruled out; see Supplementary Fig. 16). It is likely that toxin-receptor interactions trigger the oligomerization of the toxin and assembly of a competent prepore tetramer. The dimer is thereby first formed with the two protomers rotated 90 degrees relative to each other. The receptor-binding site is probably positioned at the C-terminal tail of the toxin (see "Discussion"). Structures of receptor-toxin complexes are however still lacking and further studies are now required to shed light on these interactions. Latrotoxins with longer C-terminal tails are expected to undergo large conformational changes during assembly of the prepore tetramer (see "Discussion"). Subsequently, a calcium-selective pore is inserted into the presynaptic membrane; the underlying mechanism is, however, still unclear.
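The prepore tetramer model described above was generated by arranging four copies of the extended protomer related by successive 90° rotations about a central axis. The minimal sketch below generates such C4-related copies of a coordinate set; the coordinates and the choice of the z-axis as the symmetry axis are placeholders for illustration, whereas in practice the atomic coordinates of the extended δ-LIT protomer would be used.

```python
import numpy as np

def c4_copies(coords, axis_point=np.zeros(3)):
    """
    Generate four copies of a coordinate set (N x 3) related by successive
    90-degree rotations about the z-axis through axis_point, as in the
    crane-claw-like prepore tetramer model described above.
    """
    copies = []
    for k in range(4):
        t = np.radians(90.0 * k)
        Rz = np.array([[np.cos(t), -np.sin(t), 0.0],
                       [np.sin(t),  np.cos(t), 0.0],
                       [0.0,        0.0,       1.0]])
        copies.append((coords - axis_point) @ Rz.T + axis_point)
    return copies

# Placeholder protomer coordinates (in practice: atoms of the extended
# delta-LIT protomer, positioned so that the plug domain of copy k meets the
# AR-domain cleft of copy k+1).
protomer = np.array([[40.0, 0.0, 0.0], [45.0, 5.0, 60.0], [50.0, -5.0, 120.0]])
tetramer = np.vstack(c4_copies(protomer))
print(tetramer.round(1))
```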
In the resulting model, the four cylindrical helical bundle domains surround a central 10 Å diameter channel, which agrees well with the 2D class averages of the soluble δ-LIT tetramer (Fig. 2b) and the previous 2D class averages of the soluble α-LTX tetramer43. The putative transmembrane helices are, however, still completely shielded from the aqueous environment within the respective helical bundle domains. Nevertheless, the bottom part of the helical bundle domains is exposed in this arrangement. We therefore propose that the tetramer shown here, composed of extended protomers, resembles an insertion-competent prepore state of the toxin. Assembling the tetramer in a similar manner from compact α-LCT protomers with 22 ARs results instead in a completely closed crane claw (Supplementary Fig. 13b). In this scenario, the AR-domains shield the central helical bundle domains from both sides and the central channel is closed. The resulting compact structure does not, however, match the 2D class averages of LaTX tetramers (Fig. 2b and Orlova et al.43), and there are severe clashes between the AR-domains of adjacent subunits. Note that in previous 2D class averages of α-LTX tetramers with 22 ARs43, the central helical bundle domains are also exposed, suggesting that the single protomers are in the extended conformation. This suggests that α-LCT has to undergo a conformational switch from the compact to the extended conformation during the oligomerization process and not after formation of the tetramer. Low-resolution 3D classes of the α-LCT monomer resembling intermediate states between the compact and the extended conformation further support this scenario (Supplementary Fig. 3g, h, Supplementary Movie 3). Interestingly, in contrast to α-LCT, the δ-LIT monomer, with its shorter C-terminal tail of 15 instead of 22 ARs, is already in the extended state. A conformational change (compact to extended) is apparently only required for LaTXs with longer C-terminal domains. α-LCT shows an intriguing bipolar charge distribution, which is also present in α-LTX, which likewise has 22 ARs, but is less pronounced in δ-LIT with 15 ARs (Supplementary Fig. 14, Supplementary Fig. 6a, e). The claw tips (lower ends of the AR-domains) are, however, clearly negatively charged in all LaTXs.
Oligomerization characteristics and function of the inhibitory domain
Despite extensive efforts, also in the presence of artificial membranes, we were not able to determine in vitro, in the absence of receptors, factors that efficiently control the oligomerization and subsequent pore formation process for further visualization of the pore formation events. Interestingly, we observed the various populations of prepore oligomers of precursor δ-LIT (2.5% of the particles) only in cryoEM samples (Supplementary Fig. 8h), but not during negative-stain EM analysis (Supplementary Fig. 8c). The oligomerization occurred in buffer containing EDTA, and the substitution of EDTA with Mg2+ or Ca2+ did not affect the sample (Supplementary Fig. 15). This observation corresponds to a previously reported study45, suggesting that oligomerization of δ-LIT is a process independent of divalent cations.
Besides cations, we additionally tried to induce oligomerization/pore-formation of both α-LCT and δ-LIT in the presence of detergents or various lipid compositions52, and even simulated interactions with the air–water interface by bubbling or pouring over a glass rod53. These factors did not have a clear effect on toxin oligomerization, i.e., α-LCT and δ-LIT formed exclusively stable monomers during our screenings, whereas δ-LIT and α-LCT oligomers were intriguingly observed only on cryoEM grids. Considering in addition that previous studies suggest that α-LTX exists mainly in its dimerized or tetramerized form, we conclude that different LaTXs display different oligomerization characteristics. Indeed, although the dimerization interface suggests a general induced fit mechanism in LaTXs, the surface of the involved AR-domain cleft is not highly conserved in the LaTX family (Supplementary Fig. 1 and Supplementary Fig. 6c, g). The electrophysiological analysis at single-channel resolution allowed us to confirm the tetramerization process for both recombinantly expressed LaTX samples (precursor δ-LIT and mature α-LCT) in an indirect way, since a complete LaTX tetramer is the prerequisite for functional pore formation and thus for measurements of single-channel currents. Indeed, our samples showed full activity, and ion-channel gating events of single pore units were detected under the conditions of the high-resolution electrophysiological experiment. This is in complete agreement with previous experiments on α-LTX purified from the venom, which was also shown to form pores on artificial lipid bilayers, but efficient incorporation into biological membranes was only achieved in the presence of specific receptors17,18,19. With regard to our experiments on δ-LIT, however, we assume that rapid concentration of the sample during cryoEM grid preparation might have been an important factor for successful assembly of the soluble tetramer, even for a small subpopulation of particles. The charge on the artificial bilayers might be another important factor for efficient LaTX recruitment and subsequent pore formation in the electrophysiology experiments, due to the bipolar charge distribution of the AR-domains. Under physiological conditions, the individual LaTX receptors are, however, apparently the critical factors for efficient toxin recruitment, assembly of the tetramer and subsequent pore formation17,18,19. Similar dependencies are well known for prepore oligomers of other toxins assembling at the membrane prior to pore formation54. Oligomerization might also be reinforced by additional factors in the venom of the spider or by receptor-mediated interactions at the outer cell surface. Latrodectins, low molecular weight proteins characterized from the black widow venom, are, for example, known to associate with LaTXs and are suspected to enhance their potency by altering the local ion balance55.
Implications for latrotoxin receptor recognition, pore-formation, and Ca2+ sensing
Neurexin and latrophilin are two well-studied receptors of latrotoxin36, 56, but the receptor-binding site (also for their respective invertebrate homologues) on LaTXs is still unknown. On the one hand, the helical bundle, connector and plug domains are buried deeply in the crane claw-like tetramer, and the empty space below the helical bundles is most probably required for subsequent pore formation events (Fig. 5a). The four outer claw fingers formed by the AR-domains, on the other hand, contribute the largest exposed surface for receptor interaction.
The flexibility of the AR-domain in LaTX might provide a versatile and adaptive platform necessary for sensing and binding three structurally different receptors57, 58. Furthermore, the AR-domains of different LaTXs display a low level of conservation, which might be a result of evolution for the purpose of toxin specialization to specific preys. In addition, our data suggest, that the inhibitory domain located at the tail of the AR-domain, probably interrupts the toxin-receptor interface. Therefore, LaTX uses most likely the lower half ARs for receptor recognition. Different LaTXs vary indeed in the number of their ARs, but high-resolution structures of LaTX-receptor complexes are now necessary, towards understanding their specificity in detail. Interestingly, the helical bundle domain of LaTX is reminiscent of domain I of the pore forming Cry toxin, in which six (but not five) amphipathic helices surround a hydrophobic central helix. Moreover, an α-helix in Cry toxin domain I is also interrupted by a short loop59, 60 as we observed in the H7 and H9 helices in our structure. Although the exact pore formation mechanism of Cry toxins is yet unknown, an umbrella model of toxin-insertion, derived from structural studies on colicin, is widely accepted61. Recently, the RhopH complex, a pore forming protein of malaria parasites was in addition shown to possess an intriguing similar helical bundle domain to the helical bundle domain of LaTX62. This suggests a common strategy in until recently unrelated pore forming proteins, to protect central hydrophobic surface helices prior membrane insertion, but further studies are now required to unravel possible similarities in the respective pore formation mechanisms. It is known that LaTX pores are permeable for cations and small molecules such as ATP or acetylcholine. At the given ionic conditions the presented electrophysiological measurement further reveal conductance values that are in the order of magnitude as typically observed for K+ or Ca2+ specific channels with comparable ion concentrations63, 64. The LaTX pore is stabilized by Ca2+ and has a preferred permeability for this ion, suggesting binding sites in the LaTX pore specialized for Ca2+ sensing. A flexible loop in the pore can match both requirements: four loops might form an ion filter-like structure through coordinate bonds with a Ca2+, which can further facilitate the intake of the following Ca2+. In the absence of Ca2+, the loops become disordered, as observed for KcsA in the absence of K+ ions in the filter region65, but the pore might be then large enough to pass through the other substrates. Aspartic acid (Asp) and glutamic acid (Glu) residues have been known to play a critical role in Ca2+ filtering64, 66. There are three Asp residues and five Glu residues strictly conserved at the helical bundle domain: Asp232, Asp289, Asp427, Glu121, Glu132, Glu146, Glu185, and Glu203 (residue numbers according to δ-LIT). LaTX mechanism of oligomerization prior membrane insertion Taking all observations together, we propose a four-step mechanism of oligomerization and membrane binding of LaTXs (Fig. 5b). Firstly, the inhibitory domain of LaTX is removed by proteolytic cleavage. This enables the toxins to be recognized by receptors at the extracellular side of the cell membrane. The negatively charged C-terminal tails of AR-domains are further attracted by the cations (e.g. 
Na+ or Ca2+) at the extracellular side of the cell membrane, which might be crucial for orienting the molecules properly, with the claw tips directly facing the membrane. LaTXs oligomerize in a cyclic sequential manner, with four protomers rotated 90 degrees relative to each other. In particular, the individual protomers dock into each other via insertion of the plug domain of one protomer into the cleft formed by the AR-domain of the adjacent protomer, leading to formation of the tetramer. For α-LCT (and possibly also for other LaTXs with longer C-terminal tails of 22 ARs), whose protomers adopt the more compact state due to additional interactions between the helical bundle domain and the longer AR-tail, this interaction is in addition expected to trigger a conformational change in each protomer and stabilize the extended conformation. The resulting tetramer resembles an open crane claw in shape (prepore; membrane-insertion-competent state). Notably, in this orientation, the bottom parts of the cylindrical helical bundle domains, each composed of five parallel helices protecting the central putative transmembrane helix, are exposed, aligned perpendicularly and directly facing the membrane. Even though the alternative possibility (i.e., approaching the membrane from the plug domain side) cannot be completely excluded, this scenario is more reasonable considering the bipolar charge of the AR-tail and the overall arrangement of the putative transmembrane domains (Supplementary Fig. 16). This orientation of the soluble prepore towards the membrane suggests that subsequent pore formation events might involve massive rearrangements within the four helical bundle domains, resulting, among other events, in downward injection of the putative transmembrane helices for membrane penetration and finally formation of a transmembrane channel. Our cryoEM results reveal the general architecture of LaTXs and, in combination with first functional studies, allow us to understand key steps of LaTX action at the molecular level. Future studies of receptor-bound LaTXs in the membrane-inserted state, together with mutational analysis of candidate Ca2+-sensing loops and subsequent electrophysiological studies, will be necessary to shed light on the intriguing structure and physiological function of the LaTX pore. Protein expression and purification Our expression protocol was modified from Volynski et al.8 and is based on secretion in the baculovirus expression system. The cDNAs of mature α-LCT (residues 16–1211; UniProtKB Q9XZC0) and full-length δ-LIT (residues 1–1214; UniProtKB Q25338) were optimized for recombinant protein expression in insect cells and fused with an N-terminal honeybee melittin signal peptide (MKFLVNVALVFMVVYISYIY)67 to enhance the efficiency of secretion. A Strep-tag II followed by an HRV-3C protease cleavage site was added after the melittin peptide, and a C-terminal His8-tag together with a thrombin cleavage site was added. All DNA fragments were synthesized (GenScript Biotech) and cloned into a pACEBac1 plasmid68. For producing the mature δ-LIT, Domain I (residues 1–28) was deleted from the construct, and a TEV protease site was inserted between Domain III (CTD) and Domain IV (after residue 1019). The sequences of the primers used for deleting Domain I and introducing the TEV protease site are shown in Supplementary Table 2. All the final plasmids used in the present study are shown in Source Data. The bacmids were generated by transforming 200 ng of each plasmid into DH10EMBacY E. coli cells.
Positive baculovirus genomes were selected using blue/white screening on LB-agar plates containing 100 μg/ml ampicillin, 10 μg/ml gentamicin, 10 μg/ml tetracycline, 500 μg/ml 5-Bromo-4-chloro-3-indolyl-β-D-galactopyranoside (X-Gal), and 0.5 mM Isopropyl β-D-1-thiogalactopyranoside (IPTG). Single white colonies were inoculated into 5 ml LB containing 10 μg/ml gentamicin and grown at 37 °C for 24 h. The E. coli cells were lysed with the P1, P2, and N3 buffers of the Plasmid Miniprep Kit (QIAGEN), followed by isopropanol precipitation; 700 μl isopropanol was added to 800 μl cell lysate. The pellets were collected by centrifugation at 16,000 × g for 10 min, washed with 200 μl 70% (v/v) ethanol, and solubilized in 50 μl sterilized water. The final concentration of bacmid was approximately 3 μg/μl. For generating the virus stocks, 10 μg of each bacmid were mixed with 250 μl Sf-900 II serum-free medium (Thermo Fisher Scientific) and 4 μl of FuGENE HD Transfection Reagent (Promega). The mixtures were transfected into 3 ml of 0.5 × 106 cells/ml Sf9 cells, and the supernatants, namely the V0 virus stocks, were collected after incubation at 27 °C for 72 h. In order to obtain virus stocks of higher titer and in larger amounts, V1 and V2 virus stocks were generated by suspension culture of 1.0 × 106 cells/ml Sf9 cells infected with 0.2% (v/v) virus from the previous step at 27 °C and 100 rpm for 72 h, and were stored at 4 °C with 10% fetal bovine serum (FBS) (Thermo Fisher Scientific). The titer of the final V2 virus stocks was approximately 6.0 × 108 PFU/ml, as measured by plaque assay. For large-scale expression, 2.0 × 106 cells/ml Hi5 cells were infected with the V2 virus at a multiplicity of infection (MOI) of 1–2. After suspension culture at 27 °C and 100 rpm for 72 h, the Hi5 cells were removed by centrifugation. The supernatants containing the secreted proteins were chilled to 4 °C and adjusted to pH 8.0 by adding Tris. All the following procedures were conducted at 4 °C unless otherwise noted. A white precipitate formed during pH adjustment and was removed by passing the supernatant through a 0.45 μm filter (Millipore). Cleared supernatants were applied to gravity-flow columns filled with 10 ml of Strep-Tactin Sepharose resin (IBA) and equilibrated in Wash Buffer (100 mM Tris-HCl pH 8.0, 150 mM NaCl, 1 mM ethylenediaminetetraacetic acid (EDTA)). Subsequently, the columns were washed with 50 ml (5 CV) of Wash Buffer and eluted with Elution Buffer (100 mM Tris-HCl pH 8.0, 150 mM NaCl, 1 mM EDTA, 2.5 mM desthiobiotin). The eluates were fractionated and analyzed by SDS-PAGE. The fractions containing the target proteins were pooled and concentrated to 0.5–5 ml. The protein samples were further purified by size exclusion chromatography (SEC) on a Superdex 200 Increase 10/300 or Superdex 200 26/60 column (GE Healthcare) equilibrated in Wash Buffer. The final concentration was measured by the Bradford method (Bradford protein assay kit, Bio-Rad) and the purity was confirmed by SDS-PAGE and negative-stain electron microscopy. The mature δ-LIT was generated by incubating the eluate of the Strep-tag affinity chromatography with 10% (w/w) His-PreScission 3C protease and TEV protease (provided by the Protein Chemistry Facility, Max Planck Institute of Molecular Physiology) at 4 °C overnight. The product was confirmed by SDS-PAGE, followed by SEC as described above. Negative-stain-EM analysis Negative-stain EM was applied to all three toxin samples to assess sample quality prior to cryoEM analysis.
4 μl of each toxin sample with a concentration of 0.01–0.02 mg/ml were applied to a copper grid covered by a carbon layer and incubated for 2 min at room temperature. The excess protein solution was blotted with filter paper. The grids were washed with 10 μl double-distilled water and 0.75% uranyl formate once each, followed by staining with 10 μl 0.75% uranyl formate for 45 sec. Image acquisition was performed using a JEOL JEM-1400 transmission electron microscope operating at an acceleration voltage of 120 kV. Datasets were acquired with a 4k × 4k CMOS camera F416 (TVIPS) at a magnification of × 80,000. The defocus range was approximately −0.8 μm to −1.8 μm at a pixel size of 1.3 Å/px. Sample vitrification and cryoEM data acquisition For cryoEM sample preparation, 4 μl of protein solution were applied onto a freshly glow-discharged holey carbon grid (QUANTIFOIL R 1.2/1.3 mesh 300) and vitrified using a Vitrobot cryo-plunger (Thermo Fisher Scientific). The plunging condition was optimized by several rounds of screening sessions. The final concentrations of mature α-LCT and full-length δ-LIT solution were 0.6 mg/ml and 0.4 mg/ml, respectively. Two datasets of the α-LCT were collected with a 300 kV Titan Krios microscope (Thermo Fisher Scientific) equipped with an X-FEG and operated by the software EPU (Thermo Fisher Scientific). One image per hole with defocus of −1.3 to −2.5 μm was collected with the K3 Summit (Gatan) direct electron detector in super-resolution mode at a magnification of 105,000 x and a corresponding pixel size of 0.45 Å/px with a GIF quantum-energy filter set to a filter width of 20 eV. Image stacks with 60 frames were collected with a total exposure time of 3 s and a total dose of 69.1 e/Å2. These two datasets were combined and used for the following processing. Datasets of the δ-LIT were collected with a 300 kV Titan Krios microscope (Thermo Fisher Scientific) equipped with an X-FEG and a Cs corrector and operated by the software EPU (Thermo Fisher Scientific). One image per hole with defocus of −1.3 to −2.4 μm was collected with the K3 Summit (Gatan) direct electron detector in super-resolution mode at a magnification of 81,000 x and a corresponding pixel size of 0.45 Å/px with a GIF quantum-energy filter set to a filter width of 20 eV. Image stacks with 60 frames were collected with a total exposure time of 4 s and a total dose of 78.7 e/Å2. The details of the dataset collection are summarized in Supplementary Table 3. Image processing and 3D reconstruction Image transfer, motion correction, and CTF estimation were performed using Motioncor2 and CTFFIND4 implemented with TranSPHIRE69, a recently released software package that allows automated on-the-fly processing for cryoEM. In particular, the motion correction was performed by MotionCor270 to create aligned dose-weighted averages. The super-resolution images were binned twice after motion correction to speed up subsequent processing steps. The non-dose-weighted average micrographs were used for CTF estimation performed by CTFFIND 4.1.1071. Outliers were removed based on the estimated defocus values and resolution limits. Further image processing was performed using the software package SPHIRE72 unless otherwise noted. Single particles were automatically picked by crYOLO73 based on a manually trained model and extracted with a final window size of 224 × 224 pixels. The 2D classification was performed by ISAC74 at a pixel size of 3.1 Å/pixel. 
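As a small bookkeeping aid (not part of the original processing pipeline), the following sketch recomputes quantities implied by the acquisition parameters above: the per-frame dose, the mean dose rate and the physical pixel size after binning of the super-resolution movies. All input numbers are taken from the text; it is assumed that "binned twice" means a single 2× binning back to the physical pixel size, and the 3.1 Å/pixel used for 2D classification reflects an additional, explicit downsampling step rather than a derived value.

```python
# Bookkeeping sketch for the quoted acquisition parameters (values from the text;
# not part of the authors' processing scripts).
datasets = {
    "alpha-LCT": {"total_dose_e_per_A2": 69.1, "frames": 60, "exposure_s": 3.0,
                  "super_res_pixel_A": 0.45},
    "delta-LIT": {"total_dose_e_per_A2": 78.7, "frames": 60, "exposure_s": 4.0,
                  "super_res_pixel_A": 0.45},
}

for name, p in datasets.items():
    dose_per_frame = p["total_dose_e_per_A2"] / p["frames"]   # e-/A^2 per frame
    dose_rate = p["total_dose_e_per_A2"] / p["exposure_s"]    # e-/A^2 per second
    binned_pixel = p["super_res_pixel_A"] * 2                 # assumed 2x binning back to physical pixel
    print(f"{name}: {dose_per_frame:.2f} e-/A^2/frame, "
          f"{dose_rate:.1f} e-/A^2/s, binned pixel {binned_pixel:.2f} A/px")
```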
The Beautifier tool implemented in SPHIRE was utilized for obtaining refined 2D classes. Classes displaying high-resolution features were selected and combined into a subset. For the α-LCT monomer, an initial 3D model was generated by RVIPER75 from a previously collected test dataset and subsequently used for 3D refinement in MERIDIEN72 with C1 symmetry. Since the resulting 3D reconstruction showed anisotropic resolution and several low-resolution regions, several rounds of 3D classification were performed in SPHIRE and RELION 3.176, 77 until the compact and flexible intermediate classes were separated from the other conformations. The compact subset containing 376,474 particles was further improved by CTF refinement in RELION 3.1, followed by a final round of 3D refinement in MERIDIEN. The flexible intermediate class only contained 61,526 particles; therefore, only standard 3D refinement was performed to generate a map for secondary structure fitting. The δ-LIT dimer dataset was processed with the same procedure as the α-LCT monomer until the 2D classification step. The window size was enlarged to 360 × 360 pixels due to the larger particle size. After sorting the monomer, dimer, trimer, and tetramer classes into subsets, a multi-reference classification of the dimer subset was performed in SPHIRE, followed by an additional round of 3D classification in RELION 3.1. Eventually, a subset containing 81,192 particles was selected. The initial model for 3D refinement was generated by manually combining two α-LCT monomers, as indicated by the 2D class averages, and applying a 15 Å low-pass filter. Subsequent polishing and CTF refinement were performed as described for the α-LCT dataset. The monomer subset was further divided by 3D classification into two classes containing 70,924 and 41,599 particles. The initial model for 3D refinement was generated from the δ-LIT dimer map. The final half-maps were combined after masking with a 3D mask generated by the PostRefiner tool implemented in SPHIRE, which in addition automatically determines the B-factor and filters the resulting volume to the estimated resolution. The global resolutions and the 3DFSC were calculated at the gold-standard FSC = 0.143 criterion using the 3DFSC server78. The angular distribution and the local resolution distribution of the final map were analyzed using the angular_distribution and sp_locres tools in SPHIRE. Model building, validation, and visualization The de novo model building of the compact form of the α-LCT monomer started from a 3D structure prediction using trRosetta (Yang et al., 2020). The resulting model did not agree well with the density maps; therefore, it was fragmented into several structural domains according to the secondary structure prediction from RaptorX-Property79. Rigid-body fitting of the individual domains was then performed using PyMOL (The PyMOL Molecular Graphics System, Version 2.0, Schrödinger, LLC). Further refinement was conducted using the real-time molecular-dynamics simulation-based program ISOLDE80 implemented in the visualization software packages UCSF Chimera and UCSF ChimeraX81, 82, and the resulting model was further refined using a combination of the model-editing software COOT83 and real-space refinement in Phenix84. The above adjustments were performed for several rounds until the model sufficiently fit into the map. The model of the flexible intermediate state was generated by truncating side chains with the Chainsaw tool in CCP485 and fitted into the corresponding low-resolution map.
The starting model of the δ-LIT dimer was generated by the homology-modeling service SWISS-MODEL86 from the α-LCT model and subsequently fitted into the map using ISOLDE, COOT, and Phenix. The disordered regions, and side-chain atoms beyond Cβ in regions where side-chain density was only rarely evident, were deleted during the model fitting. Rigid-body fitting into the low-resolution maps of this study (e.g., the flexible intermediate state of the α-LCT monomer and the two δ-LIT monomers) was performed using the molecular model of the compact state of the α-LCT monomer (4 Å) for the α-LCT maps and protomer A of the δ-LIT dimer (4.6 Å) for the δ-LIT maps. Fitting was performed with Chimera and further refined using ISOLDE. Further model refinement was not applied due to the limited quality of the maps. To calculate protein properties, i.e., surface electrostatics, hydrophobicity, residue conservation, and transmembrane region predictions, the side chains were put back with ideal rotamers (Supplementary Fig. 6). The figures were prepared using UCSF Chimera and ChimeraX81, 82. The threshold values (sigma) were calculated by dividing the corresponding contour level by the root-mean-square (RMS) value. Multiple sequence alignment and the prediction of transmembrane helices were done using Clustal Omega87 and the TMHMM server v2.088, respectively. The surface electrostatic potential was calculated in APBS89 and visualized in UCSF ChimeraX. These sequence features were illustrated with ESPript390 and Adobe Illustrator. Single channel recordings from planar lipid bilayers and data analysis Planar lipid bilayer measurements using the Compact bilayer platform (Ionovation GmbH) were performed as described in detail previously91. In brief: if not explicitly stated otherwise, symmetric conditions (250 mM KCl, 10 mM HEPES, pH 7.0) were used in the cis and trans compartments. The denominations cis and trans correspond to the two half-chambers of the bilayer unit. Reported membrane potentials are always referenced to the trans compartment. Bilayer fabrication was performed on a PTFE film at a 100 µm aperture, prepainted with 1% hexadecane in n-hexane, using a phosphatidylcholine (18:1) (PC)/phosphatidylethanolamine (18:1) (PE) (7:3 ratio) lipid mixture in n-pentane (both lipids were purchased from Avanti Polar Lipids) and the thinning method91. Stock solutions of purified LaTXs (typically 0.5–2.5 mg/ml in a buffer of 100 mM NaCl, 20 mM HEPES, pH 7.4) were added to the cis compartment under slight stirring to final protein concentrations of ~10 ng/ml to 1 μg/ml. Ion-channel currents were recorded using an EPC 10 USB amplifier (HEKA Elektronik GmbH) in combination with the Patchmaster data acquisition software (HEKA Elektronik GmbH). For data acquisition, sampling rates of 5 kHz (voltage ramps) and 10 kHz (continuous recording) were used, and the data were further analyzed using the Origin package (OriginLab) and the MATLAB-based (MathWorks) Ion-channel-Master software developed in our laboratory91. The GHK approach The Goldman-Hodgkin-Katz approach is by far the most commonly used framework to describe ion permeability and selectivity of membranes47, 92. Despite the principal difficulties underlying the macroscopic GHK constant-field theory, which assumes independent movement of the ions through membrane pores (see references93,94,95 for a detailed discussion), it has been demonstrated that the methodology can be used to obtain reliable semi-quantitative measures for the permeation of charged drugs through membranes48.
We were interested in obtaining information on the selectivity of the membrane-reconstituted δ-LIT. To characterize the ion fluxes mediated by δ-LIT, we employed the following experimental conditions for a bilayer containing an unknown number of open δ-LIT channels. Permeability δ-LIT:
$$P_{\mathrm{K}^{+}}=1.47$$ (1)
$$P_{\mathrm{Cl}^{-}}=\mathrm{variable}\;(+V_{\mathrm{cmd}})$$ (2)
$$P_{\mathrm{Ca}^{2+}}=\mathrm{variable}\;(-V_{\mathrm{cmd}})$$ (3)
Cation:
$$z_{\mathrm{K}^{+}}=1;\; c_{\mathrm{K}^{+},cis}=250\ \mathrm{mM};\; c_{\mathrm{K}^{+},trans}=25\ \mathrm{mM}$$ (4)
Anion:
$$z_{\mathrm{Cl}^{-}}=-1;\; c_{\mathrm{Cl}^{-},cis}=250\ \mathrm{mM};\; c_{\mathrm{Cl}^{-},trans}=25\ \mathrm{mM}$$ (5)
$$z_{\mathrm{Ca}^{2+}}=2;\; c_{\mathrm{Ca}^{2+},cis}=5\ \mathrm{mM};\; c_{\mathrm{Ca}^{2+},trans}<10\ \mu\mathrm{M}$$ (6)
Zero-current potential:
Ca2+: $$V_{\mathrm{rev}}\approx 47\pm 2.3\ \mathrm{mV}\ \text{(experimental value}\pm\mathrm{SD)}$$ (7)
Cl−: $$V_{\mathrm{rev}}\approx -57\pm 3.4\ \mathrm{mV}\ \text{(experimental value}\pm\mathrm{SD)}$$ (8)
Using these values (1–6), the above concentrations in the cis and trans compartments, and considering that the assumptions of the GHK theory are valid under the applied conditions, we can use Eqs. (9–13) to calculate the expected current–voltage relation for the above bilayer membrane containing an unknown number of active δ-LIT channels. Extrapolating the current slope at high positive and negative voltages linearly to zero net current (Fig. 4b) yields reversal potentials of ≈47 mV (7) and ≈−57 mV (8); these values would be compatible with a very high calcium or chloride selectivity of the mature δ-LIT channel, depending on the current direction: a flux of calcium at negative Vmem (\(P_{\mathrm{Ca}^{2+}}/P_{\mathrm{K}^{+}}/P_{\mathrm{Cl}^{-}}\cong 400/1.47/1\)) or chloride currents at positive Vmem (\(P_{\mathrm{Ca}^{2+}}/P_{\mathrm{K}^{+}}/P_{\mathrm{Cl}^{-}}\cong 1/1.47/150\))93,94,95. In summary, the course of the current–voltage relation in the lower panel of Fig. 4d can be explained from our calculations if the currents at negative Vcmd are carried from cis to trans mainly by Ca2+ ions and at positive Vcmd predominantly by Cl− ions.
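To make the use of these conditions concrete, the following minimal Python sketch (not the authors' Mathcad routine referenced below) evaluates the single-ion GHK currents of the GHK-current equations given next (Eqs. 9–13), sums them, and locates the zero-current (reversal) potential numerically. The temperature (295 K) and the trans-side Ca2+ concentration (taken as 10 μM, the quoted upper bound) are assumptions; because the permeabilities are the relative values quoted above, the currents are in arbitrary units, and the sign of the computed potential depends on how the command voltage is referenced to the cis/trans compartments, so only its magnitude should be compared with the reported values.

```python
# Minimal sketch: evaluate the GHK current equations (Eqs. 9-13) for the
# bilayer conditions listed above and find the zero-current potential of the
# summed I-V relation. Relative permeabilities -> currents in arbitrary units.
import numpy as np

F = 96485.33   # Faraday constant, C/mol
R = 8.314      # gas constant, J/(mol K)
T = 295.0      # assumed temperature, K (not stated in the text)

def ghk_current(V, P, z, c_cis, c_trans):
    """GHK current (Eq. 9) of one ion species, arbitrary units.
    V in volts, concentrations in mM, P a relative permeability."""
    V = np.asarray(V, dtype=float)
    V = np.where(np.abs(V) < 1e-9, 1e-9, V)   # step over the removable singularity at V = 0
    u = z * F * V / (R * T)
    return (P * z**2 * (F**2 * V / (R * T))
            * (c_cis - c_trans * np.exp(-u)) / (1.0 - np.exp(-u)))

V = np.linspace(-0.1, 0.1, 2001)                       # -100 ... +100 mV
I_total = (ghk_current(V, 1.47,  +1, 250.0, 25.0)      # K+,   Eq. (10)
           + ghk_current(V, 1.0,  -1, 250.0, 25.0)     # Cl-,  Eq. (11)
           + ghk_current(V, 400.0, +2,  5.0,  0.01))   # Ca2+, Eq. (12); trans side assumed 10 uM

# Zero-current potential: first sign change of the summed current, Eq. (13)
i = int(np.flatnonzero(np.diff(np.sign(I_total)))[0])
V_rev = V[i] - I_total[i] * (V[i + 1] - V[i]) / (I_total[i + 1] - I_total[i])
print(f"Zero-current potential: {1e3 * V_rev:+.1f} mV "
      "(sign depends on the cis/trans voltage reference convention)")
```

With these assumed inputs the sketch returns a zero-current potential of roughly 43 mV in magnitude for the Ca2+-dominated permeability set (400/1.47/1); swapping in the chloride-dominated set (1/1.47/150) gives roughly 56 mV, both reasonably close in magnitude to the reported ≈47 mV and ≈−57 mV.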
GHK-current equations:
$$I_{x}\left(V,P_{x},z,c_{cis},c_{trans}\right)=P_{x}z^{2}\frac{VF^{2}}{RT}\cdot\frac{c_{x,cis}-c_{x,trans}\exp\left(\frac{-zFV}{RT}\right)}{1-\exp\left(\frac{-zFV}{RT}\right)}$$ (9)
$$I_{\mathrm{K}^{+}}\left(V\right)=I\left(V,P_{\mathrm{K}^{+}},z_{\mathrm{K}^{+}},c_{\mathrm{K}^{+},cis},c_{\mathrm{K}^{+},trans}\right)$$ (10)
$$I_{\mathrm{Cl}^{-}}\left(V\right)=I\left(V,P_{\mathrm{Cl}^{-}},z_{\mathrm{Cl}^{-}},c_{\mathrm{Cl}^{-},cis},c_{\mathrm{Cl}^{-},trans}\right)$$ (11)
$$I_{\mathrm{Ca}^{2+}}\left(V\right)=I\left(V,P_{\mathrm{Ca}^{2+}},z_{\mathrm{Ca}^{2+}},c_{\mathrm{Ca}^{2+},cis},c_{\mathrm{Ca}^{2+},trans}\right)$$ (12)
$$\sum I\left(V\right)=I_{\mathrm{K}^{+}}\left(V\right)+I_{\mathrm{Cl}^{-}}\left(V\right)+I_{\mathrm{Ca}^{2+}}\left(V\right)$$ (13)
Using Eqs. (9–13) we calculated the corresponding current–voltage relations (Fig. 4e) for the ionic conditions given above using a Mathcad (PTC-Software) based routine46. The cryoEM datasets of α-LCT and δ-LIT generated in this study have been deposited in EMPIAR under accession codes EMPIAR-10827 and EMPIAR-10828. The cryoEM maps of α-LCT monomer and δ-LIT dimer have been deposited in the Electron Microscopy Data Bank (EMDB) under accession codes EMD-13642 and EMD-13643. The coordinates of the corresponding models have been deposited in the Protein Data Bank (PDB) under accession codes 7PTX and 7PTY. The full sequences of the plasmids used in this study and the raw electrophysiological data, as tab-separated ASCII files for the respective individual figures, are provided in the Source data. Other data are available from the corresponding author upon reasonable request. Source data are provided with this paper. Tzeng, M. C. & Siekevitz, P. The effect of the purified major protein factor (alpha-latrotoxin) of black widow spider venom on the release of acetylcholine and norepinephrine from mouse cerebral cortex slices. Brain Res 139, 190–196 (1978). Krasnoperov, V. G., Shamotienko, O. G. & Grishin, E. V. [Isolation and properties of insect-specific neurotoxins from venoms of the spider Latrodectus mactans tredecimguttatus]. Bioorg Khim 16, 1138–1140 (1990). Grishin, E. V. Black widow spider toxins: the present and the future. Toxicon 36, 1693–1701 (1998). Krasnoperov, V. G., Shamotienko, O. G. & Grishin, E. V. [A crustacean-specific neurotoxin from the venom of the black widow spider Latrodectus mactans tredecimguttatus]. Bioorg. Khim 16, 1567–1569 (1990). Muller, G. J. Black and brown widow spider bites in South Africa. A series of 45 cases. S Afr. Med. J. 83, 399–405 (1993). Zukowski, C. W. Black widow spider bite. J. Am. Board Fam. Pr. 6, 279–281 (1993). Kiyatkin, N. I., Dulubova, I. E., Chekhovskaya, I. A. & Grishin, E. V. Cloning and structure of cDNA encoding alpha-latrotoxin from black widow spider venom. FEBS Lett. 270, 127–131 (1990). Volynski, K. E., Nosyreva, E. D., Ushkaryov, Y. A. & Grishin, E. V. Functional expression of alpha-latrotoxin in baculovirus system. FEBS Lett. 442, 25–28 (1999). Grasso, A., Alema, S., Rufini, S. & Senni, M. I. Black widow spider toxin-induced calcium fluxes and transmitter release in a neurosecretory cell line. Nature 283, 774–776 (1980). Hurlbut, W. P., Chieregatti, E., Valtorta, F. & Haimann, C.
Alpha-latrotoxin channels in neuroblastoma cells. J. Membr. Biol. 138, 91–102 (1994). Rizo, J. Mechanism of neurotransmitter release coming into focus. Protein Sci. 27, 1364–1391 (2018). Finkelstein, A., Rubin, L. L. & Tzeng, M. C. Black widow spider venom: effect of purified toxin on lipid bilayer membranes. Science 193, 1009–1011 (1976). Mironov, S. L., Sokolov Yu, V., Chanturiya, A. N. & Lishko, V. K. Channels produced by spider venoms in bilayer lipid membrane: mechanisms of ion transport and toxic action. Biochim Biophys. Acta 862, 185–198 (1986). Robello, M., Fresia, M., Maga, L., Grasso, A. & Ciani, S. Permeation of divalent cations through alpha-latrotoxin channels in lipid bilayers: steady-state current-voltage relationships. J. Membr. Biol. 95, 55–62 (1987). Scheer, H. W. Interactions between alpha-latrotoxin and trivalent cations in rat striatal synaptosomal preparations. J. Neurochem 52, 1590–1597 (1989). Rosenthal, L., Zacchetti, D., Madeddu, L. & Meldolesi, J. Mode of action of alpha-latrotoxin: role of divalent cations in Ca2(+)-dependent and Ca2(+)-independent effects mediated by the toxin. Mol. Pharm. 38, 917–923 (1990). Hlubek, M. D., Stuenkel, E. L., Krasnoperov, V. G., Petrenko, A. G. & Holz, R. W. Calcium-independent receptor for alpha-latrotoxin and neurexin 1alpha [corrected] facilitate toxin-induced channel formation: evidence that channel formation results from tethering of toxin to membrane. Mol. Pharm. 57, 519–528 (2000). Van Renterghem, C. et al. alpha-latrotoxin forms calcium-permeable membrane pores via interactions with latrophilin or neurexin. Eur. J. Neurosci. 12, 3953–3962 (2000). Volynski, K. E. et al. Latrophilin, neurexin, and their signaling-deficient mutants facilitate alpha-latrotoxin insertion into membranes but are not involved in pore formation. J. Biol. Chem. 275, 41175–41183 (2000). Petrenko, A. G. et al. Isolation and properties of the alpha-latrotoxin receptor. EMBO J. 9, 2023–2027 (1990). Ushkaryov, Y. A., Petrenko, A. G., Geppert, M. & Sudhof, T. C. Neurexins: synaptic cell surface proteins related to the alpha-latrotoxin receptor and laminin. Science 257, 50–56 (1992). Davletov, B. A., Krasnoperov, V., Hata, Y., Petrenko, A. G. & Sudhof, T. C. High affinity binding of alpha-latrotoxin to recombinant neurexin I alpha. J. Biol. Chem. 270, 23903–23905 (1995). Davletov, B. A., Shamotienko, O. G., Lelianova, V. G., Grishin, E. V. & Ushkaryov, Y. A. Isolation and biochemical characterization of a Ca2+-independent alpha-latrotoxin-binding protein. J. Biol. Chem. 271, 23239–23245 (1996). Krasnoperov, V. G. et al. The calcium-independent receptor of alpha-latrotoxin is not a neurexin. Biochem Biophys. Res Commun. 227, 868–875 (1996). Krasnoperov, V. et al. Protein-tyrosine phosphatase-sigma is a novel member of the functional family of alpha-latrotoxin receptors. J. Biol. Chem. 277, 35887–35895 (2002). Sugita, S., Khvochtev, M. & Sudhof, T. C. Neurexins are functional alpha-latrotoxin receptors. Neuron 22, 489–496 (1999). Volynski, K. E. et al. Mutant alpha-latrotoxin (LTXN4C) does not form pores and causes secretion by receptor stimulation: this action does not require neurexins. J. Biol. Chem. 278, 31058–31066 (2003). Lajus, S. & Lang, J. Splice variant 3, but not 2 of receptor protein-tyrosine phosphatase sigma can mediate stimulation of insulin-secretion by alpha-latrotoxin. J. Cell Biochem 98, 1552–1559 (2006). Silva, J. P., Suckling, J. & Ushkaryov, Y.
Penelope's web: using alpha-latrotoxin to untangle the mysteries of exocytosis. J. Neurochem 111, 275–290 (2009). Davletov, B. A. et al. Vesicle exocytosis stimulated by alpha-latrotoxin is mediated by latrophilin and requires both external and stored Ca2+. EMBO J. 17, 3909–3920 (1998). Ashton, A. C. et al. alpha-Latrotoxin, acting via two Ca2+-dependent pathways, triggers exocytosis of two pools of synaptic vesicles. J. Biol. Chem. 276, 44695–44703 (2001). Capogna, M., Volynski, K. E., Emptage, N. J. & Ushkaryov, Y. A. The alpha-latrotoxin mutant LTXN4C enhances spontaneous and evoked transmitter release in CA3 pyramidal neurons. J. Neurosci. 23, 4044–4053 (2003). Ichtchenko, K. et al. alpha-latrotoxin action probed with recombinant toxin: receptors recruit alpha-latrotoxin but do not transduce an exocytotic signal. EMBO J. 17, 6188–6199 (1998). Sudhof, T. C. alpha-Latrotoxin and its receptors: neurexins and CIRL/latrophilins. Annu Rev. Neurosci. 24, 933–962 (2001). Ushkaryov, Y. Alpha-latrotoxin: from structure to some functions. Toxicon 40, 1–5 (2002). Ushkaryov, Y. A., Rohou, A. & Sugita, S. alpha-Latrotoxin and its receptors. Handb. Exp. Pharmacol. 18, 171-206, (2008). Mesngon, M. & McNutt, P. Alpha-latrotoxin rescues SNAP-25 from BoNT/A-mediated proteolysis in embryonic stem cell-derived neurons. Toxins (Basel) 3, 489–503 (2011). Holz, G. G. & Habener, J. F. Black widow spider alpha-latrotoxin: a presynaptic neurotoxin that shares structural homology with the glucagon-like peptide-1 family of insulin secretagogic hormones. Comp. Biochem Physiol. B: Biochem Mol. Biol. 121, 177–184 (1998). Rohou, A., Nield, J. & Ushkaryov, Y. A. Insecticidal toxins from black widow spider venom. Toxicon 49, 531–549 (2007). Kiyatkin, N., Dulubova, I. & Grishin, E. Cloning and structural analysis of alpha-latroinsectotoxin cDNA. Abundance of ankyrin-like repeats. Eur. J. Biochem. 213, 121–127 (1993). Dulubova, I. E. et al. Cloning and structure of delta-latroinsectotoxin, a novel insect-specific member of the latrotoxin family: functional expression requires C-terminal truncation. J. Biol. Chem. 271, 7535–7543 (1996). Volynski, K. E. et al. [Molecular cloning and primary structure of cDNA fragment for alpha-latrocrustatoxin from black widow spider venom]. Bioorg. Khim 25, 25–30 (1999). Orlova, E. V. et al. Structure of alpha-latrotoxin oligomers reveals that divalent cation-dependent tetramers form membrane pores. Nat. Struct. Biol. 7, 48–53 (2000). Gatsogiannis, C. et al. Membrane insertion of a Tc toxin in near-atomic detail. Nat. Struct. Mol. Biol. 23, 884–890 (2016). Ashton, A. C. et al. Tetramerisation of alpha-latrotoxin by divalent cations is responsible for toxin-induced non-vesicular release and contributes to the Ca(2+)-dependent vesicular exocytosis from synaptosomes. Biochimie 82, 453–468 (2000). Bartsch, P., Harsman, A. & Wagner, R. Single channel analysis of membrane proteins in artificial bilayer membranes. Methods Mol. Biol. 1033, 345–361 (2013). Goldman, E. Potentials impedance, and rectification in membranes. J. Gen. Physiol. 27, 37–60 (1943). Hille, B. Ionic Channels of Excitable Membranes. Vol. 3 (Sinauer Ass. Inc., 2001). Ghai, I. et al. General method to determine the flux of charged molecules through nanopores applied to β-lactamase inhibitors and OmpF. J. Phys. Chem. Lett. 8, 1295–1301 (2017). Smart, O. S., Breed, J., Smith, G. R. & Sansom, M. S. P. A novel method for structure-based prediction of ion channel conductance properties. Biophys. J. 72, 1109–1126 (1997). 
Santulli, G., Nakashima, R., Yuan, Q. & Marks, A. R. Intracellular calcium release channels: an update. J. Physiol. 595, 3041–3051 (2017). Schubert, E., Vetter, I. R., Prumbaum, D., Penczek, P. A. & Raunser, S. Membrane insertion of alpha-xenorhabdolysin in near-atomic detail. Elife 7, https://doi.org/10.7554/eLife.38017 (2018). D'Imprima, E. et al. Protein denaturation at the air-water interface and how to prevent it. Elife 8, https://doi.org/10.7554/eLife.42747 (2019). Kintzer, A. F., Sterling, H. J., Tang, I. I., Williams, E. R. & Krantz, B. A. Anthrax toxin receptor drives protective antigen oligomerization and stabilizes the heptameric and octameric oligomer by a similar mechanism. PLoS ONE 5, e13888 (2010). McCowan, C. & Garb, J. E. Recruitment and diversification of an ecdysozoan family of neuropeptide hormones for black widow spider venom expression. Gene 536, 366–375 (2014). Krasnoperov, V., Bittner, M. A., Holz, R. W., Chepurny, O. & Petrenko, A. G. Structural requirements for alpha-latrotoxin binding and alpha-latrotoxin-stimulated secretion. A study with calcium-independent receptor of alpha-latrotoxin (CIRL) deletion mutants. J. Biol. Chem. 274, 3590–3596 (1999). Mosavi, L. K., Cammett, T. J., Desrosiers, D. C. & Peng, Z. Y. The ankyrin repeat as molecular architecture for protein recognition. Protein Sci. 13, 1435–1448 (2004). Li, J., Mahajan, A. & Tsai, M. D. Ankyrin repeat: a unique motif mediating protein-protein interactions. Biochemistry 45, 15168–15178 (2006). Li, J. D., Carroll, J. & Ellar, D. J. Crystal structure of insecticidal delta-endotoxin from Bacillus thuringiensis at 2.5 A resolution. Nature 353, 815–821 (1991). Xu, C., Wang, B. C., Yu, Z. & Sun, M. Structural insights into Bacillus thuringiensis Cry, Cyt and parasporin toxins. Toxins (Basel) 6, 2732–2770 (2014). Gazit, E., La Rocca, P., Sansom, M. S. & Shai, Y. The structure and organization within the membrane of the helices composing the pore-forming domain of Bacillus thuringiensis delta-endotoxin are consistent with an "umbrella-like" structure of the pore. Proc. Natl Acad. Sci. USA 95, 12289–12294 (1998). Ho, C. M. et al. Native structure of the RhopH complex, a key determinant of malaria parasite nutrient acquisition. bioRxiv https://doi.org/10.1101/2021.01.10.425752 (2021). Moldenhauer, H., Diaz-Franulic, I., Gonzalez-Nilo, F. & Naranjo, D. Effective pore size and radius of capture for K(+) ions in K-channels. Sci. Rep. 6, 19893 (2016). Wu, J. et al. Structure of the voltage-gated calcium channel Ca(v)1.1 at 3.6 A resolution. Nature 537, 191–196 (2016). Renart, M. L. et al. Effects of conducting and blocking ions on the structure and stability of the potassium channel KcsA. J. Biol. Chem. 281, 29905–29915 (2006). Tang, L. et al. Structural basis for Ca2+ selectivity of a voltage-gated calcium channel. Nature 505, 56–61 (2014). Tessier, D. C., Thomas, D. Y., Khouri, H. E., Laliberte, F. & Vernet, T. Enhanced secretion from insect cells of a foreign protein fused to the honeybee melittin signal peptide. Gene 98, 177–183 (1991). Trowitzsch, S., Bieniossek, C., Nie, Y., Garzoni, F. & Berger, I. New baculovirus expression tools for recombinant protein complex production. J. Struct. Biol. 172, 45–54 (2010). Stabrin, M. et al. TranSPHIRE: automated and feedback-optimized on-the-fly processing for cryo-EM. Nat. Commun. 11, 5716 (2020). Zheng, S. Q. et al. MotionCor2: anisotropic correction of beam-induced motion for improved cryo-electron microscopy. Nat.
Methods 14, 331–332 (2017). Rohou, A. & Grigorieff, N. CTFFIND4: Fast and accurate defocus estimation from electron micrographs. J. Struct. Biol. 192, 216–221 (2015). Moriya, T. et al. High-resolution single particle analysis from electron cryo-microscopy images using SPHIRE. J. Vis. Exp. https://doi.org/10.3791/55448 (2017). Wagner, T. et al. SPHIRE-crYOLO is a fast and accurate fully automated particle picker for cryo-EM. Commun. Biol. 2, 218 (2019). Yang, Z., Fang, J., Chittuluru, J., Asturias, F. J. & Penczek, P. A. Iterative stable alignment and clustering of 2D transmission electron microscope images. Structure 20, 237–247 (2012). Penczek, P. A. & Asturias, F. J. Ab initio cryo-EM structure determination as a validation problem. In 2014 IEEE International Conference on Image Processing (ICIP) 2090–2094 https://doi.org/10.1109/ICIP.2014.7025419 (2014). Zivanov, J. et al. New tools for automated high-resolution cryo-EM structure determination in RELION-3. Elife 7, https://doi.org/10.7554/eLife.42166 (2018). Zivanov, J., Nakane, T. & Scheres, S. H. W. Estimation of high-order aberrations and anisotropic magnification from cryo-EM data sets in RELION-3.1. IUCrJ 7, 253–267 (2020). Tan, Y. Z. et al. Addressing preferred specimen orientation in single-particle cryo-EM through tilting. Nat. Methods 14, 793–796 (2017). Wang, S., Li, W., Liu, S. & Xu, J. RaptorX-Property: a web server for protein structure property prediction. Nucleic Acids Res. 44, W430–W435 (2016). Croll, T. I. ISOLDE: a physically realistic environment for model building into low-resolution electron-density maps. Acta Crystallogr D 74, 519–530 (2018). Pettersen, E. F. et al. UCSF Chimera—a visualization system for exploratory research and analysis. J. Comput Chem. 25, 1605–1612 (2004). Pettersen, E. F. et al. UCSF ChimeraX: Structure visualization for researchers, educators, and developers. Protein Sci. 30, 70–82 (2021). Emsley, P. & Cowtan, K. Coot: model-building tools for molecular graphics. Acta Crystallogr D: Biol. Crystallogr. 60, 2126–2132 (2004). Liebschner, D. et al. Macromolecular structure determination using X-rays, neutrons and electrons: recent developments in Phenix. Acta Crystallogr. D: Struct. Biol. 75, 861–877 (2019). Winn, M. D. et al. Overview of the CCP4 suite and current developments. Acta Crystallogr. D: Biol. Crystallogr. 67, 235–242 (2011). Sievers, F. et al. Fast, scalable generation of high-quality protein multiple sequence alignments using Clustal Omega. Mol. Syst. Biol. 7, 539 (2011). Krogh, A., Larsson, B., von Heijne, G. & Sonnhammer, E. L. L. Predicting transmembrane protein topology with a hidden Markov model: Application to complete genomes. J. Mol. Biol. 305, 567–580 (2001). Jurrus, E. et al. Improvements to the APBS biomolecular solvation software suite. Protein Sci. 27, 112–128 (2018). Robert, X. & Gouet, P. Deciphering key features in protein structures with the new ENDscript server. Nucleic Acids Res. 42, W320–W324 (2014). Hodgkin, A. L. & Katz, B. The effect of sodium ions on the electrical activity of giant axon of the squid. J. Physiol. 108, 37–77 (1949). Corry, B., Kuyucak, S. & Chung, S.-H. Tests of continuum theories as models of ion channels. II. Poisson–Nernst–Planck theory versus Brownian dynamics. Biophys. J. 78, 2364–2381 (2000). Moy, G., Corry, B., Kuyucak, S. & Chung, S.-H. Tests of continuum theories as models of ion channels. I. Poisson−Boltzmann theory versus Brownian dynamics. Biophys. J. 78, 2349–2363 (2000). Syganow, A. & von Kitzing, E. 
(In)validity of the constant field and constant currents assumptions in theories of ion transport. Biophys. J. 76, 768–781 (1999). The authors thank Dr. O. Hofnagel and Dr. D. Prumbaum for assistance with dataset acquisition and the development team of the SPHIRE software suite for assistance with image processing. This work was supported by funds from Uehara Memorial Foundation Overseas Postdoctoral Fellowships, Japan Society for the Promotion of Science (JSPS) Overseas Research Fellowships and the Humboldt Research Fellowship for Postdoctoral Researchers (to M.C.), the Max Planck Society (to S.R.) and the Medical Faculty of the University of Münster (to C.G.). We thank Prof. Dr. M. Winterhalter (JUB) for lab-support. Institute for Medical Physics and Biophysics and Center for Soft Nanoscience, Westfälische Wilhelms Universität Münster, 48149, Münster, Germany Minghao Chen & Christos Gatsogiannis Department of Structural Biochemistry, Max Planck Institute of Molecular Physiology, 44227, Dortmund, Germany Minghao Chen, Lena Engelhard, Stefan Raunser & Christos Gatsogiannis MOLIFE Research Center, Jacobs University Bremen, 28759, Bremen, Germany Daniel Blum & Richard Wagner C.G. conceived and supervised the study, M.C. designed latrotoxin expression and purification experiments, M.C., L.E. purified samples, performed cryoEM experiments and processed cryoEM data; M.C. analyzed cryoEM data, performed model building and interpreted data with contributions from S.R. and C.G.; D.B. performed electrophysiological analysis. R.W., D.B. analyzed electrophysiological data, M.C. drafted the manuscript, C.G., R.W. wrote the manuscript, S.R., R.W., C.G. revised the manuscript. All authors read and approved the final manuscript. Correspondence to Christos Gatsogiannis. Peer review information Nature Communications thanks the anonymous reviewer(s) for their contribution to the peer review of this work. Chen, M., Blum, D., Engelhard, L. et al. Molecular architecture of black widow spider neurotoxins. Nat Commun 12, 6956 (2021). https://doi.org/10.1038/s41467-021-26562-8
CommonCrawl
Simulation of the Arctic – North Atlantic Ocean Circulation with a Two-Equation k-omega Turbulence Parameterization Sergey Moshonkin, Vladimir Zalesny, Anatoly Gusev Subject: Earth Sciences, Oceanography Keywords: ocean circulation; numerical modelling; turbulence parameterization A method for solving the turbulence equations embedded in a sigma ocean general circulation model is proposed. Like the general circulation model, the turbulence equations are solved using the splitting method with respect to physical processes. The turbulence equations are split into two main stages describing transport-diffusion and generation-dissipation processes. Parameterizing turbulence within this framework allows the use of both numerical and analytical solutions at the generation-dissipation stage and ensures high efficiency of the algorithm. The results of large-scale ocean dynamics simulations taking into account the parameterization of vertical turbulent exchange are considered. Numerical experiments were carried out using the k-omega turbulence model embedded in the Institute of Numerical Mathematics Ocean general circulation Model (INMOM). Both the circulation and turbulence models are solved using the splitting method with respect to physical processes. The coupled model is used to simulate the hydrophysical fields of the North Atlantic and Arctic Oceans for 1948–2009. The model has a horizontal resolution of 0.25 degrees and 40 sigma-levels along the vertical. The sensitivity of the solution to changes in the mixing parameterization is studied. Experiments demonstrate that taking into account the climatic annual mean buoyancy frequency improves the reproduction of large-scale ocean characteristics. There is a positive effect of Prandtl number variations on the reproduction of the upper mixed layer depth. The experiments also demonstrate the computational effectiveness of the proposed approach in solving the turbulence equations. Upper-Bound General Circulation of the Ocean: a Theoretical Exposition Hsien-Wang Ou Subject: Physical Sciences, Fluids & Plasmas Keywords: general ocean circulation; Sverdrup dynamics, potential vorticity homogenization; thermal/dynamical coupling; upper-bound circulation This paper considers the general ocean circulation within the thermodynamical closure of our climate theory, which aims to deduce the generic climate state from first principles. The preceding papers of the theory have reduced planetary fluids to warm/cold masses and determined their bulk thermal properties, which provide prior constraints for the derivation of the upper-bound circulation when the potential vorticity is homogenized in moving masses. In a companion paper on the atmosphere, this upper bound is seen to reproduce the prevailing wind, thereby forsaking previous discordant explanations of the easterly trade and the polar jet stream. In this paper on the ocean, we again show that this upper bound may replicate broad features of the observed circulation, including a western-intensified subtropical gyre and a counter-rotating tropical gyre feeding the equatorial undercurrent. Together, we posit that PV homogenization may provide a unifying dynamical principle of the large-scale planetary circulation, which may be interpreted as the maximum macroscopic motion extractable by microscopic stirring --- within the confines of the thermal differentiation.
Assessing Diurnal Variability of Biogeochemical Processes Using the Geostationary Ocean Color Imager (GOCI) Javier Concha, Antonio Mannino, Bryan Franz, Wonkook Kim Subject: Earth Sciences, Oceanography Keywords: geostationary ocean color imager (GOCI); ocean color; diurnal dynamics; diurnal variability; uncertainties Short-term (hours) biological and biogeochemical processes cannot be fully captured by the current suite of polar-orbiting satellite ocean color sensors, as their temporal resolution is limited to potentially one clear image per day. Geostationary sensors, such as the Geostationary Ocean Color Imager (GOCI) from the Republic of Korea, allow the study of these short-term processes because their geostationary orbits permit the collection of multiple images throughout each day. Assessing the capability to detect changes in water properties caused by these processes, however, requires an understanding of the uncertainties introduced by the instrument and/or geophysical retrieval algorithms. This work presents a study of the variability during the day over a low-productivity water region, with the assumption that only small changes in the water properties occur during the day over the area of study. The complete GOCI mission data were processed using the SeaDAS/l2gen package. Filtering criteria were applied to assure the quality of the data. Relative differences with respect to the midday value were calculated for each hourly observation of the day. Also, the influence of the solar zenith angle on the retrieval of remote sensing reflectances and derived products was analyzed. We determined that the uncertainties in water-leaving "remote-sensing" reflectance ($R_\text{rs}$) for the 412, 443, 490, 555, 660 and 680 nm bands on GOCI are 8.05$\times10^{-4}$, 5.49$\times10^{-4}$, 4.48$\times10^{-4}$, 2.51$\times10^{-4}$, 8.83$\times10^{-5}$, and 1.36$\times10^{-4}$ sr$^{-1}$, respectively, and 1.09$\times10^{-2}$ mg m$^{-3}$ for the chlorophyll-a concentration (Chl-{\it a}), 2.09$\times10^{-3}$ m$^{-1}$ for the absorption coefficient of chromophoric dissolved organic matter at 412 nm ($a_{\text{g}}(412)$), and 3.7 mg m$^{-3}$ for particulate organic carbon (POC). We consider these to be the floor values for detectable changes in the water properties due to biological, physical or chemical processes. Ocean Dynamics Observed by the VIIRS Day-Night Band Measurements Wei Shi, Menghua Wang Subject: Earth Sciences, Oceanography Keywords: VIIRS; DNB observation; ocean dynamics; satellite remote sensing; ocean color; nocturnal study Three cases of Day-Night Band (DNB) observations of the Visible Infrared Imaging Radiometer Suite (VIIRS) onboard the Suomi National Polar-orbiting Partnership (SNPP) are explored for applications to assess the ocean environment and monitor ocean dynamics. An approach using the DNB radiance ratio was developed in order to better assess ocean diurnal and short-term environmental changes continuously with VIIRS DNB observations. In the La Plata River Estuary, the sediment fronts showed 20–25 km diurnal inshore-offshore movements on March 13, 2017. In the waters off the Argentina coast in the South Atlantic, VIIRS DNB measurements provided continuous observations and monitoring of the algae bloom development and migration between 24–26 March 2016. This algae bloom generally kept the same spatial patterns, but moved nearly 20 km eastward over the three-day period.
In the Yangtze River Estuary and Hangzhou Bay region on China's east coast, VIIRS DNB observations also revealed complicated coastal dynamic changes between 12 and 14 April 2017. Even though there are still some challenges and limitations for monitoring the ocean environment with VIIRS DNB observations, this study shows that satellite DNB observations can provide additional data sources for ocean observations, especially during the nighttime. SailBuoy Ocean Currents: Low-Cost Upper Layer Ocean Current Measurements Nellie Wullenweber, Lars Robert Hole, Peygham Ghaffari, Inger Graves, Harald Tholo, Lionel Camus Subject: Earth Sciences, Oceanography Keywords: SailBuoy; ADCP; Ocean current; Observation This study introduces an alternative to the existing methods for measuring ocean currents based on a recently developed technology. The SailBuoy is an unmanned surface vehicle powered by wind and solar panels that can navigate autonomously to predefined way-points and record velocity profiles using an integrated downward-looking Acoustic Doppler Profiler (ADCP). Data collected on two validation campaigns show a satisfactory correlation between the SailBuoy current records and traditional observation techniques such as bottom-mounted and moored current profilers and a moored single-point current meter. While the highest correlations were found in tidal signals, strong currents, and calm weather conditions, low current speeds and varying high wave and wind conditions reduced correlation considerably. Filtering out some events with high sea surface roughness associated with high wind and wave conditions may increase the SailBuoy ADCP listening quality and lead to better correlations. Not yet resolved is a systematic offset of ± 0.03 m/s between the measurements obtained by the SailBuoy and the reference instruments. Possible reasons discussed include differences between the instruments (various products) as well as changes in background noise levels due to environmental conditions. The Current Status of Halophila beccarii: An Ecologically Significant, Yet Vulnerable Seagrass of India Amrit Kumar Mishra, Deepak Apte Subject: Biology, Ecology Keywords: Seagrass; Ocean Turf grass; climax communities; Indian Ocean region; IUCN; Population traits; conservation; management We reviewed the current status of a Vulnerable seagrass, Halophila beccarii, from the coast of India using published data from 1977-2020. We found that H. beccarii has a pan-India distribution on both the east and west coasts. It is abundant in the intertidal silty-muddy region on the west coast, while on the east coast it is found on sandy habitats, with a few exceptions of muddy habitat. H. beccarii was found to be associated with mangroves or smaller seagrass species within a depth limit of 1.7 m. Low salinity and high nitrate levels were observed for the H. beccarii meadows of the west coast due to their association with mangroves. The nutrient levels in H. beccarii meadows of India were comparatively lower than in other seagrass meadows. Most of the research on H. beccarii has focused on its morphometrics (41%), reproduction (33%) and distribution (29%) along the coast of India. Reproductive traits such as flowering and fruiting vary according to the seasons of each coast due to the influence of the monsoon and its associated temperature, salinity and nutrient influx. H. beccarii has great potential as a source of various bioactive compounds, which needs further investigation.
Habitat disturbance, anthropogenic pollution and coastal development are the major causes of declining H. beccarii ecosystems in India. Significant loss of the seagrass was observed on the west coast of India due to increased coastal development activities. There is a significant need to quantify H. beccarii population trends, the impact of climate and anthropogenic stressors, the economic value of ecosystem services and the role of ecological connectivity for better conservation and management of H. beccarii seascapes across India. There is also a need for integration of research outcomes into policy framing to prevent the decline and further loss of this vulnerable seagrass ecosystem. Gas Seeps at the Edge of the Gas Hydrate Stability Zone on Brazil's Continental Margin Marcelo Ketzer, Daniel Praeg, Maria A.G. Pivel, Adolpho H. Augustin, Luiz F. Rodrigues, Adriano R. Viana, Jose A. Cupertino Subject: Earth Sciences, Geology Keywords: Gas hydrates, gas seeps, ocean acidification Gas hydrate provinces are present in at least two areas along Brazil's continental margin: (1) the Rio Grande Cone in the southeast, and (2) the Amazon deep-sea fan in the equatorial region. The occurrence of gas hydrates in these depocentres was first detected geophysically and has recently been proven by seafloor sampling of gas vents, detected as water column acoustic anomalies rising from seafloor depressions (pockmarks) and/or mounds, many associated with seafloor faults. The gas vents exhibit typical features of cold seep systems, including shallow sulphate reduction depths (<4 m), authigenic carbonate pavements and chemosynthetic ecosystems. In both areas, gas sampled in hydrate and in sediments is dominated by biogenic methane. Calculation of the methane hydrate stability zone for water temperatures in the two areas shows that gas vents occur along its feather edge (water depths between 510-760 m in the Rio Grande Cone and 500-670 m in the Amazon deep-sea fan) but also in deeper waters within the stability zone. Gas venting along the feather edge of the stability zone could reflect gas hydrate dissociation and release to the oceans, as inferred on other continental margins, or upward fluid flow through the stability zone facilitated by tectonic structures recording the gravitational collapse of both depocentres. The potential quantity of venting gas on the Brazilian margin under different scenarios of natural or anthropogenic change requires further investigation. The studied areas provide a natural laboratory where these critical processes can be analysed and quantified. Polyethylene Identification in Ocean Water Samples by Means of 50 keV Energy Electron Beam John I. Adlish, Davide Costa, Enrico Mainardi, Piero Neuhold, Riccardo Surrente, Luca J. Tagliapietra Subject: Biology, Ecology Keywords: Microplastics; Polyethylene Ocean Water; Microplastics identification; Microorganisms identification; Ocean Water quality; Drinking water; Food quality; Cancer and microplastics; plastic and ocean; particle physics; particle accelerators in environmental studies. The study presented hereafter shows a new methodology to reveal traces of polyethylene (the most common microplastic particle, with repeating C2H4 units) in a sample of ocean water by irradiation with a 50 keV, 1 µA electron beam. This is performed by analyzing the photon (produced by the electrons in the water) fluxes and spectra (i.e.
fluxes as a function of photon energy) for different types of contaminated water with an adequate device, in particular looking at the peculiar interactions of electrons/photons with the potentially abnormal atomic hydrogen (H), oxygen (O), carbon (C) and phosphorus (P) compositions present in the water, as a function of living and non-living organic organisms with PO4-group RNA/DNA strands in a cluster configuration across a volumetric cell grid. Impact of Ocean Currents on Wind Stress in the Tropical Indian Ocean Neethu Chacko, M M Ali Subject: Earth Sciences, Oceanography Keywords: Scatterometer; wind stress; surface currents; Indian Ocean This study examines the effect of surface currents on the bulk-algorithm calculation of wind stress estimated using scatterometer data during 2007-2020 in the Indian Ocean. In the study region as a whole, the wind stress decreased by 5.4% when currents were included in the wind stress equation. The most significant reduction in the wind stress is found in the most energetic regions with strong currents, such as the Somali Current, the Equatorial Jets and the Agulhas retroflection. The highest reduction, of 11.5%, is observed along the equator where the Equatorial Jets prevail. A sensitivity analysis has been carried out for the study region and for different seasons to assess the relative impact of winds and currents on the estimation of wind stress by changing the winds while keeping the currents constant, and vice versa. The inclusion of currents decreased the wind stress, and this decrease is more prominent where the currents are stronger. This study showed that the equatorial Indian Ocean is the most sensitive region, where currents can impact wind stress estimation. The results showed that uncertainties in the wind stress estimations are quite large at regional levels, and hence a better representation of wind stress incorporating ocean currents should be considered in ocean/climate models for accurate air-sea interaction studies. Preprint TECHNICAL NOTE | doi:10.20944/preprints202106.0226.v1 Global Ocean Studies from CALIOP/CALIPSO by Removing Polarization Crosstalk Effects Xiaomei Lu, Yongxiang Hu, Ali Omar, Rosemary Baize, Mark Vaughan, Sharon Rodier, Jayanta Kar, Brian Getzewich, Patricia Lucker, Charles Trepte, Chris Hostetler, David Winker Subject: Earth Sciences, Atmospheric Science Keywords: CALIPSO; space lidar; ocean; depolarization ratio; crosstalk Recent studies indicate that the Cloud-Aerosol Lidar with Orthogonal Polarization (CALIOP) aboard the Cloud-Aerosol Lidar and Infrared Pathfinder Satellite Observations (CALIPSO) satellite provides valuable information about ocean phytoplankton distributions. CALIOP's attenuated backscatter coefficients, measured at 532 nm in receiver channels oriented parallel and perpendicular to the laser's linear polarization plane, are significantly improved in the Version 4 data product. However, due to non-ideal instrument effects, a small fraction of the backscattered optical power polarized parallel to the receiver polarization reference plane is misdirected into the perpendicular channel, and vice versa.
This paper presents methods that can be used to estimate the CALIOP crosstalk effects from on-orbit measurements. The global ocean depolarization ratios calculated both before and after removing the crosstalk effects are compared. Using CALIOP crosstalk-corrected signals is highly recommended for all ocean subsurface studies.

The Seasonal Variability of the Ocean Energy Cycle From a Quasi-Geostrophic Double Gyre Ensemble
Takaya Uchida, Bruno Deremble, Thierry Penduff
Subject: Earth Sciences, Atmospheric Science Keywords: Ocean circulation; Geostrophic turbulence; Quasi-geostrophic flows
With the advent of submesoscale O(1 km)-permitting basin-scale ocean simulations, the seasonality of mesoscale O(50 km) eddies, with kinetic energies peaking in summer, has commonly been attributed to submesoscale eddies feeding back onto the mesoscale via an inverse energy cascade under the constraint of stratification and Earth's rotation. In contrast, by running a 101-member, seasonally forced, three-layer quasi-geostrophic (QG) ensemble configured to represent an idealized double-gyre system of the subtropical and subpolar basins, we find that the mesoscale kinetic energy shows a seasonality consistent with the summer peak without resolving the submesoscales; by definition, a QG model only resolves small-Rossby-number dynamics (O(Ro)≪1), whereas submesoscale dynamics are associated with O(Ro)∼1. Here, by quantifying the Lorenz cycle of the mean and eddy energy, defined as the ensemble mean and the fluctuations about the mean respectively, we propose a mechanism different from the inverse energy cascade, by which the stabilization and strengthening of the western-boundary current during summer, due to increased stratification, lead to the shedding of stronger mesoscale eddies from the separated jet. Conversely, the opposite occurs during winter; the separated jet destabilizes and results in overall lower mean and eddy kinetic energies, despite the domain being more susceptible to baroclinic instability from weaker stratification.

On the Optimal Design of Doppler Scatterometers
Ernesto Rodriguez
Subject: Earth Sciences, Oceanography Keywords: surface currents; ocean vector winds; scatterometry; Doppler
Pencil-beam Doppler scatterometers are a promising remote sensing tool for measuring ocean vector winds and currents from space. While several point designs exist in the literature, these designs have been constrained by the hardware they inherited, and the resulting designs are sub-optimal. Here, I present guidelines to optimize the design of these instruments starting from the basic sensitivity equations. Unlike conventional scatterometers or pencil-beam imagers, appropriate sampling of the Doppler spectrum and optimization of the radial velocity error lead naturally to a design that incorporates a pulse-to-pulse separation and pulse length that vary with scan angle. Including this variation can improve radial velocity performance significantly, and the optimal selection of system timing and bandwidth is derived. Following this, optimization of the performance based on frequency, incidence angle, antenna length, and spatial sampling strategy is considered. It is shown that antenna length influences the performance most strongly, while the errors depend only on the square root of the transmit power. Finally, a set of example designs and associated performance are presented.
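For orientation, the basic relation underlying all such pencil-beam Doppler designs is that a monostatic radar measures a Doppler shift f_D = 2 v_r / lambda, so the radial (line-of-sight) surface velocity follows directly from the shift. A minimal sketch, where the Ka-band carrier frequency is an assumed example value rather than a quoted instrument parameter:

    C_LIGHT = 299_792_458.0      # speed of light, m/s
    F_KA = 35.0e9                # assumed Ka-band carrier frequency, Hz (example value)

    def radial_velocity(doppler_shift_hz, carrier_hz=F_KA):
        # v_r = lambda * f_D / 2 for a monostatic (two-way) radar
        wavelength = C_LIGHT / carrier_hz
        return 0.5 * wavelength * doppler_shift_hz

    # Example: a 1 kHz Doppler shift at 35 GHz corresponds to roughly 4.3 m/s line of sight.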
Mathematical Modeling of Rogue Waves: A Survey of Recent and Emerging Mathematical Methods and Solutions
Sergio Manzetti
Subject: Physical Sciences, Fluids & Plasmas Keywords: rogue; wave; models; KdV; NLSE; non-local; ocean; optics
Anomalous waves and rogue events are closely associated with irregularities and unexpected events occurring at various levels of physics, such as in optics, in oceans and in the atmosphere. Mathematical modeling of rogue waves is a highly active field of research, which has evolved over the last decades into a specialized part of mathematical physics. The applications of the mathematical models for rogue events are directly relevant to technology development for the prediction of rogue ocean waves and for signal processing in quantum units. In this survey, a comprehensive perspective of the most recent developments in methods for representing rogue waves is given, along with a discussion of the devised forms and solutions. The standard nonlinear Schrödinger equation, the Hirota equation, the MMT equation and further models are discussed, and their properties highlighted. This survey shows that the most recent advancements in modeling rogue waves give models which can be used to establish methods for the prediction of rogue waves in open seas, which is important for the safety and activity of marine vessels and installations. The study further puts emphasis on the differences between the methods and how the resulting models form a basis for representing rogue waves in various forms, solitary or with a wave background. This review also has a pedagogic component directed towards students and interested non-experts, and forms a complete survey of the most conventional and emerging methods published until recently.

Estimating Ocean Vector Winds and Currents Using a Ka-Band Pencil-Beam Doppler Scatterometer
Ernesto Rodriguez, Alexander Wineteer, Dragana Perkovic-Martin, Tamás Gál, Bryan Stiles, Noppasin Niamsuwan, Raquel Rodriguez Monje
Ocean surface currents and winds are tightly coupled essential climate variables, and, given their short time scales, observing them at the same time and resolution is of great interest. DopplerScatt is an airborne Ka-band scatterometer that has been developed under NASA's Instrument Incubator Program (IIP) to provide a proof of concept of the feasibility of measuring these variables using pencil-beam scanning Doppler scatterometry. In the first half of this paper, we present the Doppler scatterometer measurement and processing principles, paying particular attention to deriving a complete measurement error budget. Although Doppler radars have been used for the estimation of surface currents, pencil-beam Doppler scatterometry offers challenges and opportunities that require separate treatment. The calibration of the Doppler measurement to remove platform and instrument biases has been a traditional challenge for Doppler systems, and we introduce several new techniques to mitigate these errors when conical scanning is used. The use of Ka-band for airborne Doppler scatterometry measurements is also new, and, in the second half of the paper, we examine the phenomenology of the mapping from radar cross section and radial velocity measurements to winds and surface currents. To this end, we present new Ka-band Geophysical Model Functions (GMFs) for winds and surface currents obtained from multiple airborne campaigns.
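For orientation, scatterometer wind GMFs are commonly written as a truncated cosine series in the wind direction relative to the radar look, with the second harmonic carrying the upwind-crosswind modulation discussed next. The coefficients below are placeholders, not the Ka-band GMF derived in this work:

    import numpy as np

    def gmf_sigma0(a0, a1, a2, chi_deg):
        # Generic harmonic GMF form: sigma0 = A0 * (1 + a1*cos(chi) + a2*cos(2*chi)),
        # where chi is the wind direction relative to the radar azimuth and
        # A0, a1, a2 depend on wind speed and incidence angle (placeholders here).
        chi = np.radians(chi_deg)
        return a0 * (1.0 + a1 * np.cos(chi) + a2 * np.cos(2.0 * chi))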
We find that the wind Ka-band GMF exhibits a dependence on wind speed similar to that of Ku-band scatterometers, such as QuikSCAT, albeit with much greater upwind-crosswind modulation. The surface current GMF at Ka-band is significantly different from that at C-band and, above 4.5 m/s, has a weak dependence on wind speed, although it still depends on wind direction. We examine the effects of Bragg-wave modulation by long waves through a Modulation Transfer Function (MTF), and show that the observed surface current dependence on winds is consistent with past Ka-band MTF observations. Finally, we provide a preliminary validation of our geophysical retrievals, which will be expanded in subsequent publications. Our results indicate that Ka-band Doppler scatterometry could be a feasible method for wide-swath simultaneous measurements of winds and currents from space.

A Contrast Minimization Approach to Quantify and Remove Sun Glint in Landsat 8 / OLI Imagery
Frank Fell
Subject: Earth Sciences, Oceanography Keywords: ocean color; sun glint; atmospheric correction; Landsat 8
Sun glint, i.e., direct solar radiation reflected from a water surface, negatively affects the accuracy of ocean color retrieval schemes if it enters the field of view of the observing instrument. Herein, a simple and robust method to quantify the sun glint contribution to top-of-atmosphere (TOA) reflectances in the visible (VIS) and near-infrared (NIR) is proposed, exploiting concomitant observations of the sun glint's morphology in the shortwave infrared (SWIR), characterized by reflectance contrasts typically higher than those resulting from other in-water or atmospheric processes. The proposed method, termed Glint Removal through Contrast Minimization (GRCM), requires high-spatial-resolution (ca. 10-50 m) imagery to resolve the sun glint's characteristic morphology, meeting additional criteria on radiometric resolution and on the temporal delay between the individual bands' acquisitions. It has been applied with good success to a selection of Landsat 8 (L8) Operational Land Imager (OLI) scenes encompassing a wide range of environmental conditions in terms of observation geometry and glint intensity, as well as aerosol and Rayleigh optical depth. The method proposed herein is entirely image based and requires neither ancillary information on the sea surface roughness or related parameters (e.g., surface wind), nor the presence of clear water areas in the image under consideration. Limitations of the proposed method are discussed, and its potential for sensors other than OLI and for applications beyond glint removal is sketched.

Diversity and Life-Cycle Analysis of Pacific Ocean Zooplankton by Videomicroscopy and DNA Barcoding: Crustacea
Peter J Bryant, Timothy Arehart
Subject: Biology, Anatomy & Morphology Keywords: Crustacea; Zooplankton; Plankton; Pacific Ocean; Larvae; DNA barcoding
Determining the DNA sequence of a small element of the mitochondrial DNA (DNA barcoding) makes it possible to easily identify individuals at different larval stages of marine crustaceans without the need for laboratory rearing. It can also be used to construct taxonomic trees, although it is not yet clear to what extent this barcode-based taxonomy reflects more traditional morphological or molecular taxonomy. Collections of zooplankton were made using conventional plankton nets in Newport Bay and the Pacific Ocean near Newport Beach, California (Lat. 33.628342, Long.
-117.927933) between May 2013 and January 2020, and individual crustacean specimens were documented by videomicroscopy. Adult crustaceans were collected from solid substrates in the same areas. Specimens were preserved in ethanol and sent to the Canadian Centre for DNA Barcoding at the University of Guelph, Ontario, Canada for sequencing of the COI DNA barcode. From 1042 specimens, 544 COI sequences were obtained, falling into 199 Barcode Identification Numbers (BINs), of which 76 correspond to recognized species. The results show the utility of DNA barcoding for matching life-cycle stages as well as for documenting the diversity of this group of organisms.

Abundance Indices of Skipjack Tuna (Katsuwonus pelamis) from the Indonesian Drifting Gillnet Fishery in the Indian Ocean
Dian Novianto, Ilham Ilham, Bram Setyadji, Chandara Nainggolan, Djodjo Suwardjo, Suciadi Catur Nugroho, Syarif Syamsuddin, Arief Efendi, Yaser Krisnafi, Muhamad Handri, Abdul Basith, Yusrizal Yusrizal, Erick Nugraha
Subject: Biology, Other Keywords: abundance indices; skipjack tuna; drifting gillnet; Indian Ocean
Skipjack tuna supports a valuable commercial fishery in Indonesia. Skipjack tuna are exploited in the Indian and Pacific Oceans with a variety of gear, but drifting gillnets are a common method used by Indonesian fishers. However, despite its importance, little information on the drifting gillnet fishery is available. This study describes a preliminary examination of the catch and effort data from the Indonesian skipjack drifting gillnet fishery. Utilizing daily landing reports from 2010-2015, nominal catch per unit of effort (CPUE) data were calculated as kg/day at sea. Generalized Linear Models (GLM) were used to standardize the CPUE, using year, quarter, days at sea, and area as fixed variables. Model goodness-of-fit and model comparison were carried out with the Akaike Information Criterion (AIC) and the pseudo coefficient of determination (R2), and model validation with a residual analysis. The final estimation of abundance indices was calculated by least-squares means (LSMeans), or marginal means. The results showed that days at sea accounted for most of the variation in CPUE, followed by year, quarter, and area. In general, there were no noticeable trends indicative of overexploitation or population depletion, suggesting a sustainable fishery for skipjack tuna in Indonesian waters.

Diversity and Life-cycle Analysis of Pacific Ocean Zooplankton by Videomicroscopy and DNA Barcoding: Gastropods
Peter Bryant, Timothy Arehart
Subject: Life Sciences, Molecular Biology Keywords: Mollusks; gastropods; Zooplankton; Plankton; Pacific Ocean; larvae; DNA barcoding
The life cycles and biodiversity of Pacific coast gastropods were analyzed by videomicroscopy and DNA barcoding of individuals collected from tide pools and in plankton nets from a variety of shore stations. In many species (Families Calyptraeidae, Cerithiopsidae, Strombidae, Vermetidae, Columbellidae, Nassariidae, Olivellidae, Hermaeidae, Onchidorididae, Gastropteridae, Haminoeidae), free-swimming veligers were recovered from plankton collections; in Roperia poulsoni (family Muricidae), veligers were usually recovered from egg sacs where they had been retained, although some escapees were found in plankton collections; in Pteropurpura festiva (family Muricidae), free-living veligers were also found; and in Atlanta californiensis (family Atlantidae), both veligers and adults were obtained from plankton collections, making this a holoplanktonic species.
The results confirm that DNA barcoding using the COI barcode is a useful strategy to match life-cycle stages within species as well as to identify species and to document the level of biodiversity within the gastropods.

Analysis of Ocean Bottom Pressure Anomalies and Seismic Activities in MedRidge Zone
Senol Hakan Kutoglu, Kazimierz Becek
Subject: Earth Sciences, Geology Keywords: GRACE; Ocean Bottom Pressure; Earthquakes; Mediterranean Ridge accretionary complex.
The Mediterranean Ridge accretionary complex (MAC) is one of the most critical subduction zones in the world. It is known that the region exhibits continuous mass change (horizontal/vertical movements). This process is associated with the devastating and tragic earthquakes that have shaken the MAC for centuries. Here, we investigate the ocean bottom pressure (OBP) anomalies in the MAC derived from the Gravity Recovery and Climate Experiment (GRACE) and GRACE Follow-On (GRACE-FO) satellite missions. The OBP time series for the MAC comprises a decreasing trend in addition to 1-, 1.53-, 2.36-, 3.67-, and 9.17-year periodic components, partially explained by atmosphere, ocean, and hydrosphere (AOH) processes and the Earth's pole movement. We noticed that the OBP anomalies appear to be linked to a rising trend and periodicities in the earthquakes' power time series. This finding sheds new light on the mechanisms controlling this most destructive natural hazard.

Fast Segmentation Method for Sonar Images Blurred by Noise Reduction
Mao Hande, Zhipeng Xu, Sheng Chang, Yuliang Liu, Xiangcheng Zhu
Subject: Engineering, Automotive Engineering Keywords: Image segmentation; sonar image; ocean engineering; morphological image processing
Segmenting sonar images has remained a hard problem for years; most are noisy images with inevitable blur after noise reduction. To address this problem, a fast segmentation algorithm is proposed on the basis of the gray-value characteristics of sonar images. This algorithm has the advantage that no segmentation thresholds need to be calculated. It follows these steps: first, calculate the gray matrix of the fuzzy image background; after adjusting the gray values, segment the image into background, buffer and target regions; after filtering, reset the pixels with gray values lower than 255 to binarize the images and eliminate most artifacts; finally, remove the remaining noise by means of morphological image processing. Simulation results for several sonar images show that the algorithm can segment fuzzy sonar images quickly and effectively, without incomplete target shapes, demonstrating that the method is stable and feasible.

Identifying the Frequency Dependent Interactions Between Ocean Waves and the Continental Margin on Seismic Noise Recordings
Zhen Guo, Yu Huang, Adnan Aydin, Mei Xue
Subject: Earth Sciences, Geophysics Keywords: ocean waves; double-frequency microseisms; continental margin; continental slope
This study presents an exploration into identifying the interactions between ocean waves and the continental margin in the origination of double-frequency (DF, 0.1-0.5 Hz) microseisms recorded at 33 stations across the East Coast of the USA (ECUSA) during a ten-day period of ordinary ocean wave climate. Daily primary vibration directions are calculated in three frequency bands and projected as great circles passing through each station.
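A primary horizontal vibration direction of this kind can be estimated, for example, as the leading eigenvector of the covariance matrix of the band-passed horizontal components; the sketch below is a generic polarization-analysis approach, not necessarily the authors' exact processing:

    import numpy as np

    def primary_vibration_azimuth(north, east):
        # Dominant horizontal polarization azimuth (degrees clockwise from north,
        # with an inherent 180-degree ambiguity) from band-passed N/E components.
        n = np.asarray(north) - np.mean(north)
        e = np.asarray(east) - np.mean(east)
        cov = np.cov(np.vstack([n, e]))        # 2x2 covariance of N and E motion
        _, vecs = np.linalg.eigh(cov)          # eigenvalues/eigenvectors, ascending order
        v_n, v_e = vecs[:, -1]                 # leading eigenvector (largest variance)
        return np.degrees(np.arctan2(v_e, v_n)) % 180.0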
In each band, the great circles from all stations exhibit the largest spatial density primarily near the continental slope in the western North Atlantic Ocean. Generation mechanisms of three DF microseism events are explored by comparing temporal and spatial variations of the DF microseisms with the migration patterns of ocean wave fronts in Wavewatch III hindcasts. Correlation analyses are conducted by comparing the frequency compositions of, and calculating the correlation coefficients between, the DF microseisms and the ocean waves recorded at selected buoys. The observations and analyses lead to the hypothesis that the continental slope causes wave reflection, generating low-frequency DF energy, and that the continental shelf is where high-frequency DF energy is mainly generated in ECUSA. The hypothesis is supported by the primary vibration directions being mainly perpendicular to the strike of the continental slope.

Gastropod Shell Dissolution as a Tool for Biomonitoring Marine Acidification, with Reference to Coastal Geochemical Discharge
David Marshall, Azmi Aminuddin, Nurshahida Mustapha, Dennis Ting Teck Wah, Liyanage De Silva
Subject: Earth Sciences, Environmental Sciences Keywords: ocean acidification; acid sulphate soils; calcification; molluscs; snails; tropical
Marine water pH is becoming progressively reduced in response to atmospheric CO2 elevation. Considering that marine environments support a vast global biodiversity and provide a variety of ecosystem functions and services, monitoring of coastal and intertidal water pH assumes obvious significance. Because current monitoring approaches using meters and loggers are typically limited in application in heterogeneous environments and are financially prohibitive, we sought to evaluate an approach to acidification biomonitoring using living gastropod shells. We investigated snail populations exposed naturally to corrosive water in Brunei (Borneo, South East Asia). In a study of rocky-shore snail populations (Nerita chamaeleon) exposed to greater or lesser coastal geochemical acidification (acid sulphate soil seepage, ASS) by virtue of their spatial separation, we show that surface erosion features of shells are generally more sensitive to acidic water exposure than other attributes (shell mass). We develop a novel digital approach to measuring the surface area of shell erosion. Surficial shell erosion of a muddy-sediment estuarine snail, Umbonium vestiarium, is shown to capture variation in acidic water exposure over the timeframe of a decade. Shell dissolution in Neripteron violaceum from an extremely acidic estuarine habitat, directly influenced by ASS inflows, was highly variable among individuals. In conclusion, gastropod shell dissolution potentially provides a powerful and cost-effective tool for rapidly assessing marine pH change across a range of spatial and temporal frameworks and coastal intertidal environments. We discuss caveats when interpreting gastropod shell dissolution patterns.

Correction of Sea Brightness Coefficient Satellite Data in the Presence of Dust
Anna Stanislavovna Papkova, Evgeny Borisovich Shybanov
Subject: Earth Sciences, Atmospheric Science Keywords: sea brightness coefficient; optical characteristics; ocean color; dust; atmospheric correction
Satellite measurements are one of the main sources of data on the state of the marine environment. To obtain information about the sea brightness coefficient, atmospheric correction must be carried out correctly.
In the presence of dust aerosol over the Black Sea, physically incorrect values of the spectral brightness coefficient often occur, specifically negative values in the IR region of the spectrum. The main objective of the study is to evaluate the influence of dust aerosol on the spectral dependence of sea brightness, based on analytical calculations from transfer theory using the principle of plane-parallel layers and on the validation of AERONET-OC field and remote sensing data. The work analyzes the spectral dependence of the first error eigenvector of the standard atmospheric correction in the presence of dust aerosol. As a result, it is shown that with an absorbing aerosol the atmospheric correction error is described by the spectral course of molecular scattering, i.e. close to

Energy and Information Fluxes at Upper Ocean Density Fronts
Pablo Cornejo, Adolfo Bahamondes
Subject: Physical Sciences, Fluids & Plasmas Keywords: Submesoscale; Turbulence; Frontogenesis; Energy cascade; Mutual communication; Ocean density fronts
We present large eddy simulations of a mid-latitude open ocean front using a modified state-of-the-art computational fluid dynamics code. We investigate the energy and information fluxes in the submesoscale/small-scale range in the absence of any atmospheric forcing. We find submesoscale conditions (Ro∼1, Ri∼1) near the surface, within baroclinic structures related to partially imbalanced frontogenetic activity. Near the surface, the simulations show a significant scale coupling at scales larger than ∼10³ m. This is manifested as a strong direct energy cascade and intense mutual communication between scales, where the latter was evaluated using an estimator based on Mutual Information Theory. At scales smaller than ∼10³ m the results show near-zero energy flux; however, in this scale range the estimator of mutual communication still shows values corresponding to a significant level of communication between scales. This fact motivates investigating the nature of the self-organized turbulent motion in this scale range, with weak energetic coupling but still significant communication between scales, and inquiring into the existence of synchronization or functional relationships between scales, with emphasis on possible underlying non-local processes.

Checklist of Helminth Parasites and Epizoites in Common Dolphins from Coastal Peru and Ecuador
Joanna Alfaro-Shigueto, Koen Van Waerebeek, Julio C. Reyes, Marie-Francoise Van Bressem, Jorge Samaniego, Fernando Félix, Natalia Fraija-Fernández
Subject: Biology, Animal Sciences & Zoology Keywords: helminths; parasitology; Delphinus; common dolphins; Southeast Pacific Ocean; Peru; Ecuador
Data on helminth parasites and epizoites are presented for the long-beaked and short-beaked common dolphins from the Southeast Pacific. Sampling in 1985-2000 was conducted mainly at six fishing ports in Peru and Ecuador where cetaceans were landed. From a total of 473 common dolphins sampled, we identified helminths including three species of Trematoda: Nasitrema globicephalae, Pholeter gastrophilus and Braunina cordiformis; three species of Nematoda, including Anisakis spp., Crassicauda sp. and Halocercus sp.; and two cestodes, Tetrabothrius forsteri and Clistobothrium delphini (formerly Phyllobothrium delphini). No acanthocephalans were observed. No statistically significant sexual or ontogenetic variation in helminth prevalence was detected, after which samples were pooled.
The highest prevalence in the long-beaked common dolphin (n=440) was observed for N. globicephalae (96.3%) in the cranial sinuses, Crassicauda sp. (83.3%) in the mammary glands and Crassicauda sp. (78.8%) infesting the cranial sinuses, followed by Cl. delphini (28.6%) in the blubber and P. gastrophilus and B. cordiformis (20.4%) in the digestive tract. Although comparative testing was unfeasible due to the small sample of short-beaked common dolphins (n=33), several of the same helminth species were found, but not N. globicephalae nor B. cordiformis. No cyamids were encountered, while pseudo-stalked barnacles Xenobalanus globicipitis were common. No new (global) helminth host records are revealed for common dolphins, but this study presents a first checklist of parasites separately for the Southeast Pacific long-beaked and short-beaked common dolphins. Future work should include exhaustive laboratory-based necropsies, enhanced sampling of the short-beaked form, and a focus on intermediate hosts and parasitic pathology, including the potential human health impact from consumption of small cetaceans.

Hydrometeorological Variability and Its Nonstationarity According to the Evolution Pattern of Indian Ocean Dipole over the East Asia Region
Jong-Suk Kim, Sun-Kwon Yoon, Sang-Myeong Oh
Subject: Earth Sciences, Atmospheric Science Keywords: hydrometeorological variability; Indian Ocean Dipole; principal component analysis; mutual information
In this study, we used statistical models to analyze nonlinear behavior and links, through atmospheric teleconnections, between hydrometeorological variables and the Indian Ocean Dipole (IOD) mode over the East Asia (EA) region. The analysis of atmospheric teleconnections was conducted using principal component analysis and singular spectrum analysis techniques. Moreover, the nonlinear lag-time correlations between climate indices and hydrological variables were calculated using mutual information (MI) techniques. The teleconnection-based nonlinear correlation coefficients (CCs) were higher than the linear CCs at each lag time. Additionally, we documented that the IOD has a direct influence on hydro-meteorological variables, such as precipitation within the Korean Peninsula (KP). Moreover, during the warm season (June to September), the variation of hydro-meteorological variables in the KP demonstrated significantly decreasing patterns during positive IOD years and neutral conditions during negative IOD years, in comparison with long-term normal conditions. Finally, the revealed relationship between climate indices and hydro-meteorological variables, and its possible changes, will allow a better understanding for stakeholder decision-making regarding freshwater management over the EA region. It can also provide useful data for long-range water resources prediction, to minimize hydrological uncertainties in a changing climate.

Analysis of Operability Envelopes for Subsea Production Tree Installation
Fei Wang, Neng Chen
Subject: Engineering, Marine Engineering Keywords: operability envelope; SPT installation; drill pipe; ocean conditions; mechanical behavior
The article presents a mathematical model to investigate the operability envelopes for subsea production tree (SPT) installation using drill pipe. The finite difference method was used to solve the established governing equations, in which the ocean conditions were considered.
Based on evaluations of the ocean wave, ocean current, water depth, drill pipe specification and SPT weight that might dominate the mechanical behavior of the pipe, the operability envelopes with permissible ocean conditions for SPT installation were obtained. The results indicate that changes of depth in deep water and of SPT weight have little effect on the operating conditions, and it would be better to choose a smaller pipe to obtain larger permissible ocean conditions during SPT installation.

Exposing the Fundamental Role of Spectral Scattering in the PFT Signal
Lisl Robertson Lain, Stewart Bernard
Subject: Earth Sciences, Oceanography Keywords: phytoplankton; PFT; ocean colour; satellite radiometry; radiative transfer; optical modelling
There is increasing interdisciplinary interest in phytoplankton community dynamics as the growing environmental problems of water quality (particularly eutrophication) and climate change demand attention. This has led to a pressing need for improved biophysical and causal understanding of Phytoplankton Functional Type (PFT) optical signals, in order that satellite radiometry may be used to detect ecologically relevant phytoplankton assemblage changes. This understanding can best be achieved with biophysically and biogeochemically consistent phytoplankton Inherent Optical Property (IOP) models, as it is only via modelling that phytoplankton assemblage characteristics can be examined systematically in relation to the bulk optical water-leaving signal. The Equivalent Algal Populations (EAP) model is used here to investigate the source and magnitude of size- and pigment-driven PFT signals in the water-leaving reflectance, as well as the potential to detect these using satellite radiometry. This model places emphasis on explicit biophysical modelling of the phytoplankton population as a holistic determinant of IOPs, and a distinctive attribute is its comprehensive handling of the spectral and angular character of phytoplankton scattering. Selected case studies and sensitivity analyses reveal that phytoplankton spectral scattering is the primary driver of the PFT-related signal. Key findings are that the backscattering-driven signal in the 520 to 600 nm region is the critical PFT identifier at marginal biomass, and that while PFT information does appear at blue and red wavelengths, it is compromised by biomass/gelbstoff ambiguity in the blue and low signal in the red, due primarily to absorption by water. It is hoped that these findings will provide considerable insight for the next generation of PFT algorithms.

Low-Frequency Sea Surface Radar Doppler Echo
Yury Yu. Yurovsky, Vladimir N. Kudryavtsev, Semyon A. Grodsky, Bertrand Chapron
Subject: Earth Sciences, Oceanography Keywords: Radar; ocean; backscatter; Doppler shift; wave groups; non-linearity; modulation
Observed sea surface Ka-band normalized radar backscatter cross section (NRCS) and Doppler velocity (DV) exhibit energy at low frequencies (LF) below the surface wave range. It is shown that non-linearity in the NRCS-wave slope Modulation Transfer Function (MTF) and inherent NRCS averaging within the footprint account for the NRCS and DV LF variance, with the exception of VV NRCS, for which almost half of the LF variance is attributable to wind fluctuations. Although the distribution of radar DV is quasi-Gaussian, suggesting little impact of non-linearity, the LF DV variations arise due to footprint averaging of correlated local DV and non-linear NRCS.
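The linear MTF referred to in the next sentence is, in essence, the standard cross-spectral transfer-function estimate between the wave signal and the relative backscatter modulation. A minimal sketch under that assumption (the variable names and the use of surface elevation as the reference signal are illustrative, not the authors' exact processing):

    import numpy as np
    from scipy.signal import csd, welch

    def linear_mtf(nrcs, elevation, fs, nperseg=1024):
        # M(f) = P_xy(f) / P_xx(f), with x the surface elevation and
        # y the relative NRCS modulation; returns frequencies and a complex MTF.
        y = nrcs / np.mean(nrcs) - 1.0
        f, p_xy = csd(elevation, y, fs=fs, nperseg=nperseg)
        _, p_xx = welch(elevation, fs=fs, nperseg=nperseg)
        return f, p_xy / p_xx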
Numerical simulations demonstrate that MTF non-linearity weakly affects the traditional linear MTF estimate (less than 10% for |MTF| < 20). Thus the linear MTF is a good approximation for evaluating the DV averaged over the large footprints typical of satellite observations.

Quality Assessment of Sea Surface Salinity from Multiple Ocean Reanalysis Products
Haodi Wang, You Ziqi, Hailong Guo, Wen Zhang, Peng Xu, Kaijun Ren
Subject: Earth Sciences, Oceanography Keywords: sea surface salinity; ocean reanalysis; moored buoy; in situ measurements; validation
Sea surface salinity (SSS) is one of the Essential Climate Variables (ECVs) as defined by the Global Climate Observing System (GCOS). Acquiring high-quality SSS datasets with high spatial-temporal resolution is crucial for research on the hydrological cycle and Earth's climate. This study assessed the quality of SSS data provided by high-resolution ocean reanalysis products, including the Hybrid Coordinate Ocean Model (HYCOM) 1/12° global reanalysis, the Copernicus Global 1/12° Oceanic and Sea Ice GLORYS12 Reanalysis, the Simple Ocean Data Assimilation (SODA) reanalysis, the ECMWF Oceanic Reanalysis System 5 (ORAS5) product and the Estimating the Circulation and Climate of the Ocean Phase II (ECCO2) reanalysis. Regional comparison in the Mediterranean Sea shows that the reanalyses largely depict the accurate spatial SSS structure away from river mouths and coastal areas but slightly underestimate the mean SSS values. Better SSS reanalysis performance is found in the Levantine Sea, while larger SSS uncertainties are found in the Adriatic Sea and the Aegean Sea. The global comparison with CMEMS level-4 (L4) SSS shows generally consistent large-scale structures. The mean ΔSSS between monthly gridded reanalysis data and in situ analyzed data is -0.1 PSU in the open seas between 40°S and 40°N, with the mean Root Mean Square Deviation (RMSD) generally smaller than 0.3 PSU and the majority of correlation coefficients higher than 0.5. Comparison with collocated buoy salinity shows that the reanalysis products capture the SSS variations well at the locations of the tropical moored buoy arrays at the weekly scale. Among the products, the data quality of the HYCOM reanalysis SSS is highest in marginal seas, while GLORYS12 has the best performance in the global ocean, especially in tropical regions. Comparatively, ECCO2 has the overall worst performance in reproducing SSS states and variations, showing the largest discrepancies with CMEMS L4 SSS.

Spatial Variability and Detection Levels for Chlorophyll-a Estimates in High Latitude Lakes Using Landsat Imagery
Filipe Lisboa, Vanda Brotas, Filipe Duarte Santos, Sakari Kuikka, Laura Kaikkonen, Eduardo Maeda
Subject: Earth Sciences, Environmental Sciences Keywords: Ocean colour; phytoplankton ecology; Earth Observation; Inland Waters; Lakes; phenology change
Monitoring lakes in high-latitude areas can provide a better understanding of the sensitivity of freshwater systems and add to knowledge of climate change impacts. Phytoplankton are sensitive to various conditions: warmer temperatures, earlier ice-melt and changing nutrient sources. Satellite imagery can monitor algal biomass over large areas. The detection of chlorophyll a (chl-a) concentrations in small lakes is hindered by the low spatial resolution of conventional ocean colour satellites. The short time series of the newest generation of space-borne sensors (e.g. Sentinel-2) is a bottleneck for assessing long-term trends.
Although previous studies have evaluated the use of high-resolution sensors for assessing lakes' chl-a, it is still unclear how the spatial and temporal variability of chl-a concentration affects the performance of satellite estimates. We discuss the suitability of Landsat (LT) 30-m resolution imagery to assess lakes' chl-a concentrations under varying trophic conditions, across extensive high-latitude areas in Finland. We use in situ data obtained from field campaigns in 19 lakes and generate remote sensing estimates of chl-a, taking advantage of the long time span of the LT 5 and 7 archives, from 1984 to 2017. Our results show that linear models based on LT data can explain approximately 50% of the chl-a interannual variability. However, we demonstrate that the accuracy of the estimates is dependent on the lake's trophic state, with models performing on average twice as well in lakes with higher chl-a concentration (> 20 µg/l) in comparison with less eutrophic lakes. Finally, we demonstrate that linear models based on LT data can achieve high accuracy (R2 = 0.9; p-value < 0.05) in determining lakes' annual mean chl-a concentration, allowing the mapping of the trophic state of lakes across large regions. Given the long time series and high spatial resolution, LT-based estimates of chl-a provide a tool for assessing the impacts of environmental change.

Removing Striping Noise from Cloudy Level 2 Sea Surface Temperature and Ocean Color Datasets
Brahim Boussidi, Ronan Fablet, Bertrand Chapron
Subject: Earth Sciences, Oceanography Keywords: Destriping; Undecimated wavelet transform; Fourier filtering; Sea Surface Temperature; Ocean color
This paper introduces a new destriping algorithm for remote sensing data. The method is based on a combined Haar stationary wavelet transform and Fourier filtering. State-of-the-art methods based on the discrete wavelet transform (DWT) may not always be effective and may cause various artifacts. Our contribution is three-fold: i) we propose to use the undecimated wavelet transform (UWT) to avoid, as much as possible, the shortcomings of the classical DWT; ii) we combine spectral filtering and the UWT using the simplest possible wavelet, the Haar basis, for computational efficiency; iii) we handle 2D fields with missing data, as commonly observed in ocean remote sensing due to atmospheric conditions (e.g., cloud contamination). The performance of the proposed filter is tested and validated on the suppression of horizontal stripe artifacts in cloudy L2 Sea Surface Temperature (SST) and ocean color snapshots.

Parameterization of Eddy Mass Transport in the Arctic Seas based on the Sensitivity Analysis of Large-scale Flows
Gennady Platov, Dina Iakshina, Elena Golubeva
Subject: Earth Sciences, Oceanography Keywords: eddy mass transport; subgrid-scale processes; parametrization; Arctic ocean; sensitivity study; clustering
The characteristics of the eddy mass transport are estimated as functions of the parameters of the large-scale flow that forms under the conditions of the Arctic shelf seas. For this, the results of a numerical simulation of the Kara Sea with a horizontal resolution permitting the development of mesoscale eddies are used. The parameters resulting from the numerical experiment are treated as a statistical sample and are analyzed using sensitivity-study methods and clustering of the sample elements. Functional dependencies are obtained that are closest to the simulated distributions of the quantities.
These expressions make it possible, within the framework of large-scale models, to evaluate the characteristics of the cross-isobatic eddy mass transport in the diffusion approximation with a counter-gradient flux. Numerical experiments using the SibCIOM model showed that the areas along the Fram branch of the Atlantic water trajectory in the Arctic, as well as the shelf of the East Siberian and Laptev seas with adjacent deep-water areas, are most sensitive to the proposed parameterization of eddy exchanges. Accounting for counter-gradient eddy fluxes turned out to be less important.

A New Look at Physico-Chemical Causes of Changing Climate: Is the Seasonal Variation in Seawater Temperature a Significant Factor in Establishing the Partial Pressure of Carbon Dioxide in the Earth's Atmosphere?
Ivan R. Kennedy, John W. Runcie, Shuo Zhang, Raymond J. Ritchie
Subject: Earth Sciences, Atmospheric Science Keywords: CO2; Keeling curve; Mauna Loa; carbonates; ocean pH; chemical potential; acidification
Seasonal oscillations in the partial pressure of carbon dioxide (pCO2) in the Earth's atmosphere, stronger in northern latitudes, are assumed to show that terrestrial photosynthesis exceeds respiration in summer, reducing the pCO2, which then increases in winter when respiration exceeds photosynthesis. We disagree, proposing that variations in the temperature of the surface mixing zone of seawater also regulate the atmospheric pCO2 thermodynamically. We show that carbonate (CO3²⁻) concentrations will therefore increase in summer, with CaCO3 (calcite or aragonite) becoming less soluble, so calcium and carbonate ions are predicted to aggregate more while the CO2 concentration falls in warmer seawater, thermodynamically favoring lower atmospheric pCO2. In winter, these physical processes are reversed, redissolving suspended calcite and thus increasing carbonate alkalinity; the carbonate concentration lessens as bicarbonate and soluble CO2 increase, raising the pCO2 in air. Our numerical computation shows that thermal fluctuations in equilibria favor absorption from air of more than one mole of CO2 per square meter in summer, coinciding with calcite formation maximizing in warmer water, potentially augmenting limestone reefs if there is a trend of increasing temperature. Another assumption we challenge is that upwelling from deeper water is the sole cause of increases in dissolved inorganic carbon (DIC) and alkalinity in surface waters, particularly in the southern hemisphere. Instead, calcite dissolution is favored as water temperature falls near the surface. It is well established that the seasonal summer decline in atmospheric CO2 coincides in fertile seawater with higher rates of biotic calcification and acidity, allowing increased CO2 capture by photosynthesis. However, its reversal in winter is proposed to be also a result of the cyclic dissolution of calcite as temperature falls, facilitated by biogenic respiration now exceeding photosynthesis; this can mutually provide the CO2 needed to convert the carbonate-ion alkalinity from calcite dissolution, with bicarbonate increasing. Physical reasons why this oscillation is more obvious in the northern hemisphere include the greater seasonal variation in water temperature (ca. 7.1 °C), almost twice that in the cooler southern hemisphere (ca. 4.7 °C), and the greater depth of the surface mixing zone of seawater in the southern oceans.
Evidence from ¹³CO2 fluxes between surface seawater and air is also assessed to test this hypothesis, but questions remain regarding the regional rates of inorganic precipitation and dissolution of CaCO3 in the mixing zone. In summary, rapid biogenic calcification is favored by summer photosynthesis, but slower abiogenic calcification is more likely in warmer water.

Trial Assay for Safe First-Aid Protocol for the Stinging Sea Anemone Anemonia viridis (Cnidaria: Anthozoa) and a Severe Toxic Reaction
Ainara Ballesteros, Janire Salazar, Macarena Marambio, Jose Tena, Jose Rafael Garcia-March, Diana López, Clara Tellez, Carles Trullàs, Eric Jourdan, Corinne Granger, Josep-Maria Gili
Subject: Biology, Animal Sciences & Zoology Keywords: cnidarian venom; cnidocyst discharge; cnidocyte; ocean literacy; risk prevention; seawater; sting; vinegar
Anemonia viridis is an abundant and widely distributed temperate sea anemone that can form dense congregations of individuals. Despite the potential severity of its sting, few detailed cases have been reported. We report a case of a severe toxic reaction following an A. viridis sting in a 35-year-old oceanographer. She developed severe pain, itching, redness and a burning sensation, which worsened one week after treatment with anti-inflammatories, antihistamines and corticosteroids. Prompted by this event, and given the insufficient risk prevention, the lack of training for marine-environment users and the lack of research into sting-specific first-aid protocols, we evaluated the cnidocyst response to five different compounds commonly recommended as rinse solutions in first-aid protocols (seawater, vinegar, ammonia, baking soda and freshwater) by means of the Tentacle Solution Assay. Vinegar and ammonia triggered an immediate and massive cnidocyst discharge after their application and were classified as activator solutions. Baking soda and freshwater were also classified as activator solutions, although with a lower intensity of discharge. Only seawater was classified as a neutral solution and is therefore recommended as a rinse solution after an A. viridis sting, at least until an inhibitory solution is discovered.

Remote Sensing of Ocean Fronts in Marine Ecology and Fisheries
Igor Belkin
Subject: Biology, Anatomy & Morphology Keywords: Ocean front; Marine ecology; Fisheries; Front detection; Satellite imagery; Feature-based approach
This paper provides a concise review of remote sensing of ocean fronts and its applications in marine ecology and fisheries, with a particular focus on the most popular front detection algorithms/techniques: Canny (1986), Cayula and Cornillon (1990, 1992, 1995), Miller (2004, 2009), Shimada et al. (2005), Belkin and O'Reilly (2009), and Nieto et al. (2012). A case is made for a feature-based approach in marine ecology and fisheries that emphasizes fronts as major structural and circulation features of the ocean realm that play key roles in various aspects of marine ecology.
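As a toy illustration of the gradient-based family of detectors cited above (far simpler than the Canny or Cayula-Cornillon algorithms themselves), a candidate front mask can be obtained by thresholding the magnitude of the horizontal SST gradient:

    import numpy as np

    def sst_front_mask(sst, dy=1.0, dx=1.0, percentile=95):
        # Boolean mask of candidate fronts: pixels whose SST gradient magnitude
        # exceeds the chosen percentile of the field (NaNs ignored by the percentile).
        dsdy, dsdx = np.gradient(sst, dy, dx)
        grad = np.hypot(dsdx, dsdy)
        return grad >= np.nanpercentile(grad, percentile)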
Long-Term Effects of Elevated CO2 on the Population Dynamics of the Seagrass Cymodocea nodosa: Evidence from Volcanic Seeps
Amrit Kumar Mishra, Susana Cabaco, Carmen de los Santos, Eugenia Apostolaki, Salvatrice Vizzini, Rui Santos
Subject: Biology, Ecology Keywords: Reconstruction techniques; population dynamics; seagrass; ocean acidification; volcanic CO2 seeps; Mediterranean Sea
We used population reconstruction techniques to assess, for the first time, the population dynamics of a seagrass, Cymodocea nodosa, exposed to long-term elevated CO2 near three volcanic seeps, and to compare them with reference sites away from the seeps. Under high CO2, the density of shoots and of individuals (apical shoots), and the vertical and horizontal elongation and production rates, were higher. Nitrogen effects on rhizome elongation and production rates and on biomass were stronger than those of CO2, as these were highest at the location where the availability of nitrogen was highest. At the seep where the availability of CO2 was highest and nitrogen lowest, the densities of shoots and individuals were highest, probably due to CO2 effects on shoot differentiation and induced reproductive output, respectively. In all three seeps there were higher short- and long-term recruitment rates and growth rates around zero, indicating that elevated CO2 increases the turnover of C. nodosa shoots.

Human Discovery and Settlement of the Remote Easter Island (SE Pacific)
Valenti Rull
Subject: Earth Sciences, Other Keywords: islands, discovery, settlement, colonization, Easter Island, Rapa Nui, Pacific Ocean, Polynesians, Amerindians
The discovery and settlement of the tiny and remote Easter Island (Rapa Nui) has been a classical controversy for decades. Present-day aboriginal people and their culture are undoubtedly of Polynesian origin, but it has been debated whether Native Americans discovered the island before the Polynesian settlement. Until recently, the paradigm was that Easter Island was discovered and settled just once, by Polynesians in their millennial-scale eastward migration across the Pacific. However, the evidence for cultivation and consumption of an American plant, the sweet potato (Ipomoea batatas), on the island before the European contact (1722 CE), and even prior to the Europe-America contact (1492 CE), revived the controversy. This paper reviews the classical archaeological, ethnological and paleoecological literature on the subject and summarizes the information into four main hypotheses to explain the sweet potato enigma: the long-distance-dispersal hypothesis, the back-and-forth hypothesis, the Heyerdahl hypothesis and the newcomers' hypothesis. These hypotheses are evaluated in light of the more recent evidence (last decade), including molecular DNA phylogeny and phylogeography of humans and associated plants and animals, physical anthropology (craniometry, dietary analysis) and new paleoecological findings. It is concluded that, with the available evidence, none of the former hypotheses may be rejected and, therefore, all possibilities remain open. For future work, it is recommended to use the multiple-working-hypotheses framework and the strong-inference method of hypothesis testing, rather than the ruling-theory approach that is very common in Easter Island research.
Estimating Chlorophyll-a Absorption with the Total Algae Peak Integration Retrieval TAPIR Considering Chlorophyll-a Fluorescence from Hyperspectral Top of the Atmosphere Signals in Optically Complex Waters
Therese Keck, René Preusker, Jürgen Fischer
Subject: Earth Sciences, Oceanography Keywords: fluorescence; absorption; chlorophyll-a; remote sensing; hyperspectral; ocean color; IOP; TAPIR; EnMAP
The Total Algae Peak Integration Retrieval (TAPIR) relates the chlorophyll-a absorption coefficient at 440 nm (a440) to the reflectance peak near 683 nm induced by chlorophyll-a properties. The two-step retrieval provides both the hyperspectral quantification of the phytoplankton fluorescence and scattering and the estimation of a440 from reflectance signals. By integrating the peak, the Total Algae Peak (TAP) accounts for the variance in the peak's magnitude, shape, and central wavelength. TAPIR is a solely optical approach for estimating a440 and supports the subsequent application of retrieval-independent, regional bio-optical models to retrieve the chlorophyll-a concentration. Simulations reveal a major sensitivity to the assumed model chlorophyll-a absorption spectrum and its single scattering albedo. Additional water and atmosphere constituents have a low impact. An uncertainty assessment reveals uncertainties of less than 30% for TAPIR a440 greater than 0.8 m⁻¹ and less than 38% for lower a440. In optically complex waters, first validation efforts indicate the applicability of TAPIR for high chlorophyll-a concentration estimation in the presence of additional water constituents. The technique is generic and accounts for external conditions (sun zenith angle, number of measurement bands, surface or satellite measurements, and radiometric quantity). TAPIR applies to all kinds of waters, including optically complex waters, arctic to tropical regions, and inland, coastal, and open ocean waters. Among other hyperspectral satellite sensors, the Environmental Mapping and Analysis Program (EnMAP) provides sufficient sampling bands for the application of TAPIR.

A Unifying Perspective on Transfer Function Solutions to the Unsteady Ekman Problem
Jonathan M. Lilly, Shane Elipot
Subject: Earth Sciences, Oceanography Keywords: Ekman currents; ocean surface currents; wind stress forcing; transfer function; wind-driven response
The unsteady Ekman problem involves finding the response of the near-surface currents to wind stress forcing under linearized dynamics. Its solution can be conveniently framed in the frequency domain in terms of a quantity known as the transfer function, the Fourier transform of the impulse response function. In this paper, a theoretical investigation of a fairly general transfer function form is undertaken with the goal of paving the way for future observational studies. Building on earlier work, we consider in detail the transfer function arising from a linearly varying profile of the vertical eddy viscosity, subject to a no-slip lower boundary condition at a finite depth. The linearized horizontal momentum equations are shown to transform to a modified Bessel's equation for the transfer function. Two self-similarities, or rescalings that each effectively eliminate one independent variable, are identified, enabling the dependence of the transfer function on its parameters to be more readily assessed.
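For context, in the simplest textbook limit of this problem (constant eddy viscosity K and infinite depth) the wind-to-current transfer function has a closed form. The sketch below shows that classical case under one common convention; it is not the linearly varying, finite-depth solution analyzed in the paper:

    import numpy as np

    def ekman_transfer_constant_K(omega, z, K=1e-2, f=1e-4, rho=1025.0):
        # Classical constant-viscosity, infinite-depth transfer function H(omega, z)
        # such that u + i*v = H * (tau_x + i*tau_y), with z <= 0 (m), omega the
        # forcing frequency and f the Coriolis frequency (both in rad/s).
        s = np.sqrt(1j * (omega + f) / K)      # complex inverse decay scale
        return np.exp(z * s) / (rho * K * s)

    # At omega = 0 and z = 0 this recovers the steady Ekman result: a surface
    # current of magnitude tau / (rho * sqrt(f * K)), rotated 45 degrees from the stress.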
A systematic investigation of asymptotic behaviors of the transfer function is then undertaken, yielding expressions appropriate for eighteen different regimes and unifying the results from numerous earlier studies. A solution to a numerical overflow problem that arises in the computation of the transfer function is also found. All numerical code associated with this paper is distributed freely for use by the community.

Simulating the Impact of the Internal Structure of Phytoplankton on the Bulk Particulate Backscattering Ratio using a Two-Layered Spherical Geometry for Cells
Lucile Duforêt-Gaurier, David Dessailly, William Moutier, Hubert Loisel
Subject: Physical Sciences, Optics Keywords: Ocean optics; backscattering ratio; phytoplankton; coated-sphere model; bulk refractive index; seawater component
The bulk backscattering ratio ($\tilde{b_{bp}}$) is commonly used as a descriptor of the bulk real refractive index of the particulate assemblage in natural waters. Based on numerical simulations, we analyze the impact of the heterogeneity of phytoplankton cells on $\tilde{b_{bp}}$. $\tilde{b_{bp}}$ is modeled considering viruses, heterotrophic bacteria, phytoplankton, detritus, and minerals. Three study cases are defined according to the relative abundance of these different components. Two study cases represent typical situations in the open ocean, outside a bloom (No-B/No-M) and inside a bloom (B/No-M). The third study case is typical of coastal waters with the presence of minerals. Phytoplankton cells are modeled by a two-layered spherical geometry representing a chloroplast surrounding the cytoplasm. The $\tilde{b_{bp}}$ values are higher when heterogeneity is considered because the contribution of coated spheres to backscattering is higher than that of homogeneous spheres. The impact of heterogeneity is, however, strongly conditioned by the hyperbolic slope $\xi$ of the particle size distribution. Even if the relative concentration of phytoplankton is small (<1%), $\tilde{b_{bp}}$ increases by about 60% (for $\xi=4.3$ and for the No-B/No-M water body) when heterogeneity is taken into account, in comparison with a particulate population composed only of homogeneous spheres. As expected, heterogeneity has a much smaller impact (about 5$\%$ for $\xi=4.3$) on $\tilde{b_{bp}}$ when minerals are added.

Modeling Link for Ocean Wave Energy of Point Absorbers: A Mathematical Framework
Lorenzo Baños Hernandez
Subject: Engineering, Marine Engineering Keywords: ocean wave energy; fluid-structure interaction; BEM; diffraction/radiation; floating cylinder; heave; array
This compendium presents new mathematical techniques for modeling Point Absorbers. A combined frequency-time domain framework is developed and used to simulate the energy generated by wave farms. Built in Matlab with a Fortran base, it yields physical variables of primary importance, namely position, velocity and the power-to-energy net balance relationships of absorption. Integration of different degrees of freedom, with heave as the main mode, leads in turn to a focus on single-buoy motion. The needed hydrodynamic coefficients are acquired through the application of potential-field solvers with a Boundary Element Methodology background. Initially, this wave-to-motion model is validated by comparison with previous experimental results for a floating cone-cylinder shape (Buldra-FO3). A single, generic, vertical floating cylinder is then considered, which responds to the excitation of passing regular waves.
Later, two equally sized vertical floating cylinders aligned with the incident wave direction are modeled for a variable distance between the bodies. For both unidirectional regular and irregular waves as input in deep water, we approximate the convolutive radiation force term through the Prony method. By changing the spatial disposition of the axisymmetric buoys, using for instance triangular or rectangular arrays of three and four bodies respectively, the study delves into the motion characteristics for regular waves. The results highlight efficient layouts for maximizing the energy production whilst providing important insights into their performance, revealing displacement amplification and capture width ratios, and deriving possible interpretations of scenarios related to the known park effect. These terms are encompassed by a novel conceptual post-processing methodology in the field, which leads to an optimal distance between the separated bodies with effective energy absorption in a regular wave regime. The main objective is to establish a tendency within this hydrodynamic field of study, namely the wave-to-motion perspective. More generally, this computational excursion envisions and depicts potential fields of study, which will surely foster new connections and link this renewable energy form. Therefore, this research first delves into the historical and technical background of ocean wave energy. Next, the Materials and Methods section introduces the boundaries and related equations step by step, together with the later-mentioned case scenarios and their corresponding configuration parameters. A separate section then frames the scope of the results, followed by an ensuing discussion and conclusions for evaluation and assessment.

Comparative Framework Linking Some Capabilities of Small Array Dispositions of Generic Points Absorbers
This thesis explores diverse modeling techniques for Point Absorbers. A combined frequency-time domain model is conceived, designed and developed in Matlab with a Fortran base, yielding physical variables of primary importance, namely position, velocity and the power-to-energy net balance relationships of absorption. Integration of different degrees of freedom, with heave as the main mode, leads in turn to a focus on single-buoy motion. The needed hydrodynamic coefficients are acquired through the application of the NEMOH & BEMIO solvers based on the Boundary Element Methodology. Initially, this wave-to-motion model is validated by comparison with previous experimental results for a floating cone-cylinder shape (Buldra-FO3). A single, generic, vertical floating cylinder is then considered, which responds to the excitation of passing regular waves. Later, two equally sized vertical floating cylinders aligned with the incident wave direction are modeled for a variable distance between the bodies. For both unidirectional regular and irregular waves as input in deep water, we approximate the convolutive radiation force term through the Prony method. By changing the spatial disposition of the axisymmetric buoys, using for instance triangular or rectangular arrays of three and four bodies respectively, the study delves into the motion characteristics for regular waves.
The results highlight efficient layouts for maximizing energy production whilst providing important insights into their performance, revealing displacement-amplification and capture-width ratios and deriving possible interpretations of scenarios related to the known park effect. These terms are encompassed by a new conceptual post-processing methodology in the field, which yields an optimal separation distance between the bodies for effective energy absorption in a regular wave regime. In conclusion, this computational excursion envisions and proposes potential fields of study, which should foster new connections for this renewable energy form. Computational Investigation on Various Small Array Dispositions of Generic Point Absorbers Subject: Engineering, Energy & Fuel Technology Keywords: ocean wave energy; fluid-structure interaction; BEM; diffraction/radiation; floating cylinder; heave; array This thesis deepens into diverse modeling techniques for Point Absorbers. A combined frequency-time domain model is conceived, designed and developed in Matlab with Fortran as a base, yielding physical variables of primary importance, namely position, velocity, and power-to-energy net balance relationships of absorption. Integration of the different degrees of freedom, with heave as the main mode, leads in turn to a focus on single-buoy motion. The required hydrodynamic coefficients are obtained through application of the NEMOH & BEMIO solvers, based on the Boundary Element Method. Initially, this wave-to-motion model is validated through comparison with previous experimental results for a floating cone-cylinder shape (Buldra-FO3). A single, generic, vertical floating cylinder is then considered, responding to the excitation of passing regular waves. Later, two equally sized vertical floating cylinders aligned with the incident wave direction are modeled for a variable distance between the bodies. For both unidirectional regular and irregular waves as input in deep water, we approximate the convolutive radiation force term through the Prony method. By changing the spatial disposition of the axisymmetric buoys, using for instance triangular or rectangular arrays of three and four bodies respectively, the study delves into motion characteristics for regular waves. The results highlight efficient layouts for maximizing energy production whilst providing important insights into their performance, revealing displacement-amplification and capture-width ratios and deriving possible interpretations of scenarios related to the known park effect. These terms are encompassed by a new conceptual post-processing methodology in the field, which yields an optimal separation distance between the bodies for effective energy absorption in a regular wave regime. In conclusion, this computational excursion envisions and proposes potential fields of study, which should foster new connections for this renewable energy form. Simulation and Modelling for a Three-Dimensional Ocean Surface Wave using an Inverse Fourier Transform Young Jun Yang Subject: Engineering, Marine Engineering Keywords: ocean surface wave simulation; inverse Fourier transform; directional wave spectrum; linear superposition; Nyquist frequency Ocean surface waves have been utilized as fundamental information in various fields of oceanic research. 
In this paper, we suggest a simulation and modelling technique for generating an ocean surface wave using an inverse Fast Fourier Transform (iFFT), and we subsequently verify its accuracy. The conventional method, linear superposition, requires recursive calculation because of the double summation and the time variable; to circumvent this issue, the new algorithm is presented. The Joint North Sea Wave Project (JONSWAP) spectrum is utilized for the ocean surface wave simulation example, and the parameters are the significant wave height (HS) and the zero-crossing wave period (TZ). A coordinate transform for the wavenumber domain was used to apply the inverse FFT algorithm. To verify the accuracy of the simulation result, the relative error between the input condition and the analysis result was calculated. The result for TZ is below 4% relative error, and the maximum relative error for HS is 7%. To avoid aliasing at the Nyquist frequency in wave-field analysis and simulation, the minimum grid size was calculated by applying twice the maximum wavenumber. Automated Identification of Discrepancies between Nautical Charts and Survey Soundings Giuseppe Masetti, Tyanne Faulkes, Christos Kastrisios Subject: Engineering, Marine Engineering Keywords: nautical cartography; chart adequacy; change detection; triangulated irregular network; safety of navigation; ocean mapping Timely and accurate identification of changes in areas depicted on nautical charts constitutes a key task for marine cartographic agencies in supporting maritime safety. Such a task is usually achieved through manual or semi-automated processes, based on best practices developed over the years and requiring a substantial level of human commitment (i.e., visually comparing the chart with the newly collected data or analyzing the results of intermediate products). This work describes an algorithm that aims to largely automate the change identification process as well as to reduce its subjective component. Through the selective derivation of a set of depth points from a nautical chart, a triangulated irregular network is created to apply a preliminary tilted-triangle test to all the input survey soundings. Given the complexity of a modern nautical chart, a set of feature-specific, point-in-polygon tests are then performed. As output, the algorithm provides danger-to-navigation candidates, chart discrepancies, and a subset of features that requires human evaluation. The algorithm has been successfully tested with real-world electronic navigational charts and survey datasets. In parallel to the research development, a prototype application implementing the algorithm was created and made publicly available. Effects of El-Niño, Indian Ocean Dipole and Madden-Julian Oscillation on Sea Surface Temperature and Rainfall Anomalies in Southeast Asia. Case Study: Biomass Burning Episode of 2015 Amirul Islam, Andy Chan, Matthew Ashfold, Chel Gee Ooi, Majid Azari Subject: Earth Sciences, Atmospheric Science Keywords: monsoon, maritime continent, ocean-atmospheric phenomena, Southeast Asia, biomass burning, sea surface temperature, rainfall. The Maritime Continent (MC) is positioned between the Asian and Australian summer monsoon zones. Its complex topography and the shallow seas around it pose a major challenge for climate researchers to model and understand. The monsoon in this area is affected by inter-scale ocean-atmospheric interactions such as the El-Niño Southern Oscillation (ENSO), the Indian Ocean Dipole (IOD) and the Madden-Julian Oscillation. 
Monsoon rainfall in the MC (especially in Indonesia and Malaysia) exhibits a strong dependence on the ocean-atmospheric phenomena in this region. This monsoon shift often leads to dreadful events such as biomass burning (BB) in Southeast Asia (SEA), which sometimes causes severe trans-boundary haze pollution. In this study, the 2015 BB episode in SEA is highlighted and discussed. Observational satellite datasets are tested by performing simulations with a numerical weather prediction (NWP) model, WRF-ARW (Advanced Research WRF). Observed and model datasets are compared to study the sea surface temperature (SST) and precipitation (rainfall) anomalies influenced by ENSO, IOD and MJO. Correlations have been recognised which explain the delayed rainfall of the regular monsoon in the MC due to the influence of ENSO, IOD and MJO during the 2015 BB episode, eventually leading to intensification of fires and severe haze. Risk Factors of Extended-Spectrum β-Lactamase Producing Enterobacteriaceae Occurrence in Farms in Reunion, Madagascar and Mayotte Islands, 2016–2017 Noellie Gay, Alexandre Leclaire, Morgane Laval, Guillaume Miltgen, Mael Jego, Stéphane Ramin, Julien Jaubert, Olivier Belmonte, Eric Cardinale Subject: Biology, Animal Sciences & Zoology Keywords: Indian ocean; livestock; Extended-Spectrum β-Lactamase producing Enterobacteriaceae; risk factors; CTX-M; enzymes In the South Western Indian Ocean (IO), Extended-Spectrum β-Lactamase producing Enterobacteriaceae (ESBL) are a major public health issue. In livestock, the ESBL burden was unknown. The aim of this study was to estimate the prevalence of ESBL on commercial farms in Reunion, Mayotte and Madagascar and to characterize the genes involved. Secondly, risk factors of ESBL occurrence in broiler, beef cattle and pig farms were explored. In 2016-2017, commercial farms were sampled using boot swabs, and samples were stored at 4°C before microbiological analysis for phenotypical ESBL and gene characterization. A semi-directive questionnaire was administered. The prevalences observed in all production types and territories were elevated, except for beef cattle in Reunion, which differed significantly. The most common ESBL gene was the CTX-M-1 subtype. Generalized linear models explaining ESBL occurrence varied between livestock production sectors and allowed identification of the main protective factors (e.g., water quality control and detergent use for cleaning) and risk factors (e.g., recent antibiotic use, other farmers visiting the farm, pet presence). This study is the first to explore tools for antibiotic resistance management in IO farms. It provides interesting hypotheses to explore regarding antibiotic use in the IO and ESBL transmission between pigs, beef cattle and humans in Madagascar. Similarities and Contrasts in Stationary Striations of Surface Tracers in Pacific Eastern Boundary Upwelling Systems Ali Belmadani, Pierre-Amaël Auger, Katherine Gomez, Nikolai Maximenko, Sophie Cravatte Subject: Earth Sciences, Oceanography Keywords: striations; satellite data; sea surface temperature; sea surface salinity; chlorophyll-a; eastern boundaries; Pacific Ocean Eastern boundary upwelling systems feature strong zonal gradients of physical and biological ocean properties between cool, productive coastal oceans and warm, oligotrophic subtropical gyres. Zonal currents and jets (striations) are therefore likely to contribute to the transport of water properties between coastal and open oceanic regions. 
Multi-sensor satellite data are used to characterize the signatures of striations in sea surface temperature (SST), salinity (SSS), and chlorophyll-a (Chl-a) in the subtropical eastern North/South Pacific (ENP/ESP) upwelling systems. In the ENP, tracers exhibit striated patterns extending up to ~2500 km offshore. Striations in SST and SSS are highly correlated with quasi-zonal jets, suggesting that these jets contribute to SST/SSS mesoscale patterns via zonal advection. Chl-a striations are collocated with sea surface height (SSH) bands, a possible result of mesoscale eddy trains trapping nutrients and forming striated signals. In the ESP, striations are only found in SST and coincide with SSH bands, consistent with quasi-zonal jets located outside major zonal tracer gradients. An interplay between large-scale SST/SSS advection by the quasi-zonal jets, mesoscale SST/SSS advection by the large-scale meridional flow, and eddy advection may explain the persistent ENP hydrographic striations. These results underline the importance of quasi-zonal jets for surface tracer structuring at the mesoscale. Estimating Sea Surface pCO2 in the North Atlantic based on CatBoost Hongwei Sun, Yihui Chen, Lin Li, Yihui Chen Subject: Earth Sciences, Atmospheric Science Keywords: sea surface pCO2; ocean color remote sensing; CatBoost algorithm; temporal and spatial distribution; influencing factors Sea surface partial pressure of CO2 (pCO2) is a critical parameter in the quantification of air-sea CO2 flux, which plays an important role in calculating the global carbon budget and ocean acidification. In this study, we use chlorophyll-a concentration (Chla), sea surface temperature (SST), absorption due to dissolved and particulate detrital matter (Adg), the diffuse attenuation coefficient of downwelling irradiance at 490 nm (Kd) and mixed layer depth (MLD) as input data for retrieving the sea surface pCO2 in the North Atlantic, based on a remote sensing empirical approach with the Categorical Boosting (CatBoost) algorithm. The results show that the root mean square error (RMSE) is 8.25 μatm, the mean bias error (MAE) is 4.92 μatm and the coefficient of determination (R2) reaches 0.946 in the validation set, which means that the CatBoost model improves on other models in the published studies. Further analysis of the spatial and temporal distribution of the sea surface pCO2 in the North Atlantic shows that it has a clear trend with latitude and strong seasonal changes. Furthermore, the sea surface pCO2 in this area is mainly affected by sea temperature and salinity, and is influenced by biological activities in some sub-regions. Subtropical Frontal Zone of the Southern Ocean Igor M Belkin Subject: Earth Sciences, Oceanography Keywords: Front; Subtropical Front; Southern Ocean; Subtropical Frontal Zone; Subtropical Mode Water; Chilean jack mackerel; Trachurus murphyi This paper combines a literature survey and data analysis. The literature on the Subtropical Front (STF) in the Southern Ocean is reviewed with a two-pronged emphasis on the double-front structure of the STF, hence the existence of a subtropical frontal zone (STFZ), and the circumpolar continuity of the STFZ. The data analysis is based on the World Ocean Circulation Experiment (WOCE) sections. The STFZ is detected along each section independently of the other sections, moving circumpolarly downstream (eastward). 
The literature survey and data analysis confirm the circumpolar continuity of the STFZ extending from the Brazil Current across the South Atlantic, South Indian, and South Pacific up to Chile, bounded by the North and South STF. The circumpolar continuity of the STFZ is partly interrupted by South Africa and Tasmania, where the North STF ceases, while the South STF continues eastward. The South Atlantic STFZ is the southern boundary of the well-defined Subtropical Mode Water (STMW) thermostad, which cools eastward from 15°C to 11°C between the Brazil Current and the Greenwich Meridian. In the southeast Pacific, the STFZ is the southern boundary of the 17-to-19°C thermostad (South Pacific Eastern STMW). The STFZ's vertical extent is at its maximum in the South Atlantic (>1000 m), decreasing eastward to 300 m in the southeast Pacific off Chile. Special attention is given to the South Pacific and the STFZ's role in the ecology of the Chilean jack mackerel Trachurus murphyi, which spawns at the STFZ and migrates along it from Chile to New Zealand. The Early Silurian Gabbro in the Eastern Kunlun Orogenic Belt, Northeast Tibet: Constraints on the Proto-Tethyan Ocean Closure Wenxiao Zhou, Haiquan Li, Feng Chang, Xinbiao Lv Subject: Earth Sciences, Geochemistry & Petrology Keywords: northeast Tibet; Proto-Tethyan Ocean; early Silurian; eastern Kunlun Orogenic Belt; gabbro; zircon U–Pb dating The early Paleozoic is a crucial period in the formation and evolution of the Eastern Kunlun Orogenic Belt (EKOB), and is of great significance for understanding the evolutionary history of the Proto-Tethyan Ocean. This paper presents new petrography, geochemistry, zircon U–Pb dating, and Lu–Hf isotopic research on the Yuejingshan gabbro from the eastern segment of the EKOB. Zircon U–Pb data suggest that the gabbro formed in the Early Silurian (435 ± 2 Ma). All samples have relatively low TiO2 contents (0.45–2.97%), widely varying MgO (6.58–8.41%) and Mg# (58–65) contents, and are rich in large ion lithophile elements (LILE, such as Rb, Ba, Th, and U) and light rare earth elements (LREE). This indicates a geochemical composition similar to that of island arc basalt. The major element features indicate that the gabbro underwent fractional crystallization of clinopyroxene, olivine, and plagioclase during its formation. The depletion of high field strength elements (HFSE, such as Nb, Ta, and Ti) and slightly positive Hf isotope values (εHf(t) ranging from 1.13 to 2.45) may be related to partial melting of spinel-bearing peridotite driven by slab-fluid metasomatism. The gabbro likely represents a magmatic record of the latest stage of early Paleozoic oceanic crust subduction in the Eastern Kunlun. Therefore, the final closure of the Proto-Tethyan Ocean and the beginning of collisional orogeny occurred before the Early Silurian. Research on the Visualization of Ocean Big Data Based on the Cite-Space Software Jiajing Wu, Dongning Jia, Zhiqiang Wei, Xin Dou Subject: Mathematics & Computer Science, Information Technology & Data Management Keywords: ocean; big-data; cite-space; co-authorship analysis; co-citation analysis; keywords co-occurrence analysis; visualization Ocean big data is the scientific practice of using big data technology in the marine field. 
Data from satellites, manned spacecraft, space stations, airships, unmanned aerial vehicles, shore-based radar and observation stations, exploration platforms, buoys, underwater gliders, submersibles, and submarine observation networks are seamlessly combined into the ocean's big data. Increasing numbers of scholars have tried to fully analyze the ocean's big data. To explore the key research technology knowledge graphs related to ocean big data, articles published between 1990 and 2020 were collected from the "Web of Science". By comparing bibliometric software and using the visualization software Cite-Space, the pivotal literature related to ocean big data, as well as countries, institutions, categories, and keywords, were visualized and recognized. Journal co-citation analysis networks can help determine the national distribution of core journals. Co-citation analysis networks for documents show authors who are influential at key technical levels. Keyword co-occurrence analysis networks can identify research hot spots and research frontiers. The three supporting elements of marine big data research shown in the co-citation network are author, institution, and country. By examining the co-occurrence of keywords, the key technology research directions for future marine big data were determined. Green Rust: The Simple Organizing 'Seed' of All Life? Michael J Russell Keywords: Hadean; carbonic ocean; mantle plumes; Banded Iron Formation; green rust; submarine alkaline vents; emergence of life Korenaga and coworkers present evidence to suggest that 4.3 billion years ago the Earth's mantle was dry and water filled the ocean to twice its present volume [2]. CO2 was constantly exhaled during the mafic to ultramafic volcanic activity associated with magmatic plumes that produced the thick, dense and relatively stable oceanic crust. In that setting, two distinct major types of sub-marine hydrothermal vents were active: ~400 °C acidic springs whose effluents bore vast quantities of iron into the ocean, and ~120 °C, highly alkaline and reduced vents exhaling from the cooler, serpentinizing crust at some distance from the heads of the plumes. When encountering the alkaline effluents, the iron from the plume head vents precipitated out, forming mounds likely surrounded by voluminous exhalative deposits similar to the banded iron formations known from the Archean. These mounds and the surrounding sediments likely comprised nanocrysts of the variable-valence FeII/FeIII oxyhydroxide green rust. The precipitation of green rust, along with subsidiary iron sulfides and minor concentrations of Ni, Co and Mo in the environment at the alkaline springs, may have established both the key bio-syntonic disequilibria and the means to properly make use of them – those needed to drive the essential inanimate-to-animate transitions that launched life. In the submarine alkaline vent model for the emergence of life specifically, it is first suggested that the redox-flexible green rust microcrysts spontaneously formed precipitated barriers to the complete mixing of carbonic ocean and alkaline hydrothermal fluids, barriers that created and maintained steep ionic disequilibria; and second, that the hydrous interlayers of green rust acted as 'engines' that were powered by those ionic disequilibria and drove essential endergonic reactions. 
There, aided by sulfides and trace elements acting as catalytic promoters and electron transfer agents, nitrate could be reduced to ammonia and carbon dioxide to formate, while methane may have been oxidized to methyl and formyl groups. Acetate and higher carboxylic acids could then have been produced from these C1 molecules and aminated to amino acids, and thence oligomerized to offer peptide nests to phosphate and iron sulfides and secreted to form primitive amyloid-bounded structures, leading conceivably to protocells. Chikungunya Manifestations and Viremia in Patients Who Presented to the Fever Clinic at Bangkok Hospital for Tropical Diseases During the 2019 Outbreak in Thailand Hisham A Imad, Juthamas Phadungsombat, Emi E Nakayama, Sajikapon Kludkleeb, Wasin Matsee, Thitiya Ponam, Keita Suzuki, Pornsawan Leaungwutiwong, Watcharapong Piyaphanee, Weerapong Phumratanaprapin, Tatsuo Shioda Subject: Life Sciences, Biochemistry Keywords: Alphavirus; chikungunya virus; East Central South African lineage; Indian Ocean sub-lineage; acute febrile illness; viremia; arthritides Chikungunya virus is an Alphavirus belonging to the family Togaviridae that is transmitted to humans by an infected Aedes mosquito. Patients develop fever, inflammatory arthritis, and rash during the acute stage of infection. Although the illness is self-limiting, atypical and severe cases are not uncommon, and 60% may develop chronic symptoms that persist for months or even longer. Having a distinct periodic epidemiologic outbreak pattern, chikungunya virus reappeared in Thailand in December 2018. Here, we describe a cohort of acute chikungunya patients who presented to the Bangkok Hospital for Tropical Diseases during October 2019. Infection was confirmed by real-time RT-PCR using serum collected at presentation to the Fever Clinic. Other possible acute febrile illnesses such as influenza, dengue, and malaria were excluded. We explored the sequence of clinical manifestations at presentation during the acute phase and associated the viral load with the clinical findings. Most of the patients were healthy individuals in their forties. Fever and arthralgia were the predominant clinical manifestations found in this patient cohort, with a small proportion of patients showing systemic symptoms. Higher viral loads were associated with arthralgia, and arthralgia with involvement of the large joints was more common in female patients. A Simple Procedure to Preprocess and Ingest High-Resolution Ocean Color Data into Google Earth Engine Elígio de Raús Maúre, Simon Ilyushchenko, Genki Terauchi Subject: Earth Sciences, Oceanography Keywords: remote sensing; ocean color; Google Earth Engine; MODIS/Aqua; SGLI/GCOM-C; swath reprojection; Earth Engine data ingestion Data from ocean color (OC) remote sensing are considered a cost-effective tool for the study of biogeochemical processes globally. Satellite-derived chlorophyll, for instance, is considered an Essential Climate Variable since it is helpful in detecting climate change impacts. Google Earth Engine (GEE) is a planetary-scale tool for remote sensing data analysis. Along with OC data, such tools allow water quality monitoring at an unprecedented spatial and temporal scale. Although OC data have been routinely collected at medium (~1 km) and, more recently, at high (~250 m) spatial resolution, only coarse resolution (≥4 km) data are available in GEE, making them unattractive for applications in coastal regions. 
Data reprojection is needed prior to making OC data readily available in GEE. In this paper, we introduce a simple but practical procedure to reproject and ingest OC data into GEE. The procedure is applicable to OC swath (Level-2) data and is easily adaptable to higher-level products. The results showed consistent distributions between swath and reprojected data, building confidence in the introduced framework. The study aims to start a discussion on making high-resolution OC data readily available in GEE. A New Retrieval of Sun-Induced Chlorophyll Fluorescence in Water from Ocean Colour Measurements Applied on OLCI L-1b and L-2 Lena Kritten, Rene Preusker, Jürgen Fischer Subject: Earth Sciences, Environmental Sciences Keywords: Remote Sensing; Ocean Colour; Retrievals; Fluorescence; Optical Properties; Satellite; Spectral; Radiative Transfer; optically complex waters; chlorophyll; absorption; scattering The retrieval of sun-induced chlorophyll fluorescence is greatly beneficial to studies of marine phytoplankton biomass, physiology, and composition, and is required for user applications and services. Customarily, phytoplankton chlorophyll fluorescence is determined from satellite measurements through a fluorescence line-height algorithm using three bands around 680 nm. We propose here a modified retrieval, making use of all available bands in the relevant wavelength range, with the goal of improving the effectiveness of the algorithm in optically complex waters. For the Ocean and Land Colour Instrument (OLCI), we quantify what we call a Fluorescence Peak Height by fitting a Gaussian function, together with related terms, to the top-of-atmosphere reflectance bands between 650 and 750 nm. This approach is applicable to Level-1 and Level-2 data. We find a good correlation of the retrieved fluorescence product to global in-situ chlorophyll measurements, as well as a consistent relation between chlorophyll concentration and fluorescence from radiative transfer modelling and OLCI/in-situ comparison. The algorithm is applicable to complex waters without needing atmospheric correction or vicarious calibration, and features an inherent correction of small spectral shifts, as required for OLCI measurements. Cell-Based Fish: A Novel Approach to Seafood Production and an Opportunity for Cellular Agriculture Natalie Rubio, Isha Datar, David Stachura, Kate Krueger Subject: Life Sciences, Biochemistry Keywords: cellular agriculture; cell-based seafood; fish tissue culture; bioreactor; serum-free media; ocean conservation; marine cell culture; aquaculture Cellular agriculture is defined as the production of agricultural products from cell cultures rather than from whole plants or animals. With growing interest in cellular agriculture as a means to address the public health, environmental, and animal welfare challenges of animal agriculture, the concept of producing seafood from fish cell- and tissue-cultures is emerging as a means to address similar challenges with industrial aquaculture systems and marine capture. Cell-based seafood - as opposed to animal-based seafood - can combine developments in biomedical engineering with modern aquaculture techniques. Biomedical engineering developments such as closed-system bioreactor production of land animal cells create a basis for large-scale production of marine animal cells. 
Aquaculture techniques such as genetic modification and closed-system aquaculture have achieved marked gains in production that can pave the way for innovations in cell-based seafood production. Here, we present the current state of innovation relevant to the development of cell-based seafood across multiple species as well as specific opportunities and challenges that exist for advancing this science. The authors find that the physiological properties of fish cell- and tissue-culture may be uniquely suited to cultivation in vitro. These physiological properties, including hypoxia tolerance, high buffering capacity, and low-temperature growth conditions, make marine cell culture an attractive opportunity for scaled-up production of cell-based seafood, perhaps even more so than mammalian and avian cell cultures for cell-based meats. This, coupled with the unique capabilities of crustacean tissue-friendly scaffolding such as chitosan, a common seafood waste product and mushroom derivative, presents great promise for cell-based seafood production via bioreactor cultivation. To become fully realized, cell-based seafood research will require more understanding of fish muscle culture and cultivation; more investigation into serum-free media formulations optimized for fish cell culture; and bioreactor designs tuned to the needs of fish cells for large-scale production. Adaptive Wavelet Methods for Earth Systems Modelling Nicholas Kevlahan Subject: Mathematics & Computer Science, Numerical Analysis & Optimization Keywords: adaptive mesh refinement; adaptive numerical methods; atmosphere modelling; climate modelling; Earth systems models; large-eddy simulation; ocean modelling; wavelets This paper reviews how dynamically adaptive wavelet methods can be designed to simulate atmosphere and ocean dynamics in both flat and spherical geometries. We highlight the special features that these models must have in order to be valid for climate modelling applications. These include exact mass conservation and various mimetic properties that ensure the solutions remain physically realistic, even in the under-resolved conditions typical of climate models. Particular attention is paid to the implementation of complex topography in adaptive models. Using \textsc{wavetrisk} as an example, we explain in detail how to build a semi-realistic global atmosphere or ocean model of interest to the geophysical community. We end with a discussion of the challenges that remain in developing realistic dynamically adaptive atmosphere or ocean climate models. These include scale-aware subgrid-scale parameterizations of physical processes, such as clouds. Although we focus on adaptive wavelet methods, many of the topics we discuss are relevant for adaptive mesh refinement (AMR). Spatial Environmental Assessment Tool (SEAT): A Modeling Tool to Evaluate Potential Environmental Risks Associated with Wave Energy Converter Deployments Craig Jones, Grace Chang, Kaustubha Raghukumar, Samuel McWilliams, Ann Dallman, Jesse Roberts Subject: Earth Sciences, Oceanography Keywords: marine renewable energy; ocean energy; wave energy; environmental effects; wave modeling; wave propagation; numerical modeling; sediment dynamics; risk assessment Wave energy converter (WEC) arrays deployed in coastal regions may create physical disturbances potentially resulting in environmental stresses. Presently, limited information is available on the nature of these physical disturbances or the resultant effects. 
A quantitative Spatial Environmental Assessment Tool (SEAT) for evaluating potential effects of wave energy converter (WEC) arrays on nearshore hydrodynamics and sediment transport is presented for the central Oregon coast (USA) through coupled numerical model simulations of an array of WECs. Derived climatological wave conditions were used as inputs to the model to allow for the calculation of risk metrics associated with various hydrodynamic and sediment transport variables such as maximum shear stress, bottom velocity, and change in bed elevation. The risk maps provided simple, quantitative, and spatially resolved means of evaluating physical changes in the vicinity of a hypothetical WEC array in response to varying wave conditions. Near-field risk of sediment mobility was determined to be moderate in the lee of the densely spaced array, where the potential for increased sediment deposition could result in benthic habitat alteration. Modifications to the nearshore sediment deposition and erosion patterns were observed near headlands and topographic features, which could have implications for littoral sediment transport. The results illustrate the benefits of a risk evaluation tool for facilitating coastal resource management at early-market marine renewable energy sites. Validating Salinity From SMAP With Saildrones and Research Vessel Data During EUREC4A-OA/ATOMIC Kashawn Hall, Alton Daley, Shanice Whitehall, Sanola Sandiford, Chelle Leigh Gentemann Subject: Earth Sciences, Oceanography Keywords: Saildrone; Soil Moisture Active Passive (SMAP); Hybrid Coordinate Ocean Model (HYCOM); EUREC4A; ATOMIC; physical oceanography; remote sensing; air-sea interactions The 2020 Elucidating the role of clouds-circulation coupling in climate - Ocean-Atmosphere (EUREC4A-OA) and Atlantic Tradewind Ocean-Atmosphere Mesoscale Interaction Campaign (ATOMIC) campaigns sought to improve the knowledge of the interaction between clouds, convection and circulation and their function in our changing climate. The campaigns involved numerous research technologies, some of which are relatively novel to the scientific community. In this study we used a saildrone uncrewed surface vehicle to validate satellite and modelled sea surface salinity (SSS) products in the Western Tropical Atlantic. These products include the Soil Moisture Active Passive (SMAP) Jet Propulsion Laboratory (JPL), SMAP Remote Sensing Systems (RSS), and Hybrid Coordinate Ocean Model (HYCOM) products. In addition to the validation, we investigated a fresh tongue southeast of Barbados. The saildrones accurately depicted the salinity conditions, and all satellite and modelled products performed well in areas that lacked small-scale salinity variability. However, SMAP RSS 70 km outperformed its counterparts in areas with small submesoscale irregularities, while RSS 40 km was better at identifying small irregularities in salinity such as a fresh tongue. These results will allow researchers to make informed decisions regarding the most suitable product for their application and aid in the improvement of mesoscale and submesoscale SSS products, which can lead to the refinement of numerical weather prediction (NWP) and climate models. 
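The validation step described in the preceding entry amounts to computing simple statistics (bias, RMSE, correlation) over collocated in-situ and satellite salinity samples. A minimal sketch of that step is given below, assuming the collocation has already been done; the array names and sample values are hypothetical and are not taken from the study.

    import numpy as np

    def validation_stats(sss_insitu, sss_satellite):
        """Bias, RMSE and Pearson correlation between collocated salinity series."""
        d = sss_satellite - sss_insitu                      # satellite minus in-situ
        bias = float(np.mean(d))                            # mean bias
        rmse = float(np.sqrt(np.mean(d ** 2)))              # root mean square error
        r = float(np.corrcoef(sss_insitu, sss_satellite)[0, 1])
        return bias, rmse, r

    # Hypothetical collocated samples (e.g., saildrone vs. SMAP RSS 40 km):
    saildrone = np.array([35.1, 35.3, 34.8, 33.9, 34.2])
    smap_rss40 = np.array([35.0, 35.4, 34.9, 34.3, 34.4])
    print(validation_stats(saildrone, smap_rss40))

The same three numbers, computed per product and per sub-region (for example inside versus outside the fresh tongue), are what allow the kind of product ranking reported in the abstract.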
Impacts of the Tropical Pacific–Indian Ocean Associated Mode on Madden–Julian Oscillation over the Maritime Continent in Winter Xin Li, Ming Yin, Xiong Chen, Minghao Yang, Fei Xia, Lifeng Li, Guangchao Chen, Peilong Yu, Chao Zhang Subject: Earth Sciences, Atmospheric Science Keywords: the tropical Pacific-Indian Ocean associated mode (PIOAM); Madden Julian Oscillations (MJO); Maritime Continent (MC); MJO kinetic energy; MJO convection Based on observation and reanalysis data, the relationship between the Madden-Julian Oscillation (MJO) over the Maritime Continent (MC) and the tropical Pacific-Indian Ocean temperature anomaly mode is analyzed. The results show that the MJO over the MC region (100°-140°E, 10°S-5°N) (referred to as MC-MJO) possesses prominent interannual and interdecadal variations and seasonally "phase-locked" features. MC-MJO is strongest in the boreal winter and weakest in the boreal summer. Winter MC-MJO kinetic energy variation has significant relationships with the El Niño-Southern Oscillation (ENSO) in winter and the Indian Ocean Dipole (IOD) in autumn, but it correlates better with the tropical Pacific-Indian Ocean associated mode (PIOAM). The correlation coefficient between the winter MC-MJO kinetic energy index and the autumn PIOAM index is as high as -0.43. This means that when the positive (negative) autumn PIOAM anomaly strengthens, the MJO kinetic energy over the winter MC region weakens (strengthens). However, the correlation between the MC-MJO convection and the PIOAM in winter is significantly weaker. The propagation of the MJO over the Maritime Continent differs significantly between the contrasting phases of the PIOAM. During the positive phase of the PIOAM, the eastward propagation of the winter MJO kinetic energy always fails to move across the MC region and cannot enter the western Pacific. During the negative phase of the PIOAM, however, the anomalies of MJO kinetic energy over the MC are not significant, and the MJO can propagate farther eastward and enter the western Pacific. It must be pointed out that there is a significant difference between the propagation of MJO convection over the MC region in winter and that of the MJO kinetic energy: the MJO convection is more likely to extend to the western Pacific in the positive phases of the PIOAM than in the negative phases. Ultimate Compressive Strength of Stiffened Panel: An Empirical Formulation for Flat-bar Type Do Kyun Kim, Su Young Yu, Hui Ling Lim, Nak-Kyun Cho Subject: Engineering, Marine Engineering Keywords: ocean and shore technology (OST); empirical formula; ultimate limit state; longitudinal compression; stiffened plate; ships and offshore structures; structural design This research aims to study the ultimate limit state (ULS) behaviour of stiffened panels under longitudinal compression by the non-linear finite element method (NLFEM). Different types of stiffeners are used in shipbuilding, i.e. T-bar, flat-bar and angle-bar; this research focuses on the ultimate compressive strength behaviour of flat-bar stiffened panels. A total of 420 reliable scenarios of flat-bar stiffened panels are selected for numerical simulation by ANSYS NLFEM. The ultimate strength behaviours obtained were used as data for the development of a closed-form empirical formulation. Recently, Kim et al. [1] proposed an advanced empirical formulation for T-bar stiffened panels, and the applicability of the proposed formulation to flat-bar stiffened panels is confirmed by this study. 
The accuracy of the empirical formulation obtained for flat-bar stiffened panels has been validated against the FE simulation results by statistical analysis (R2 = 0.9435). The outcome will be useful for ship structural designers in predicting the ultimate strength performance of flat-bar type stiffened panels under longitudinal compression. Ocean Thermal Energy Conversion – Flexible Enabling Technology for Variable Renewable Energy Integration in the Caribbean R. J. Brecha, Katherine Schoenenberger, Masaō Ashtine, Randy Koon Koon Subject: Engineering, Automotive Engineering Keywords: Ocean thermal energy conversion; OTEC; seawater air conditioning; SWAC; desalination; variable renewable energy; wind power; solar PV; 100% renewable energy; Caribbean Many Caribbean island nations have historically been heavily dependent on imported fossil fuels for both power and transportation, while at the same time being at an enhanced risk from the impacts of climate change, although their emissions represent a very tiny fraction of the global total responsible for climate change. Small island developing states (SIDS) are among the leaders in advocating for the ambitious 1.5°C Paris Agreement target and the transition to 100% sustainable, renewable energy systems. In this work we present three central results. First, we show through GIS mapping of all Caribbean islands the potential for near-coastal deep water as a resource for Ocean Thermal Energy Conversion (OTEC) and couple these results with an estimate of the countries for which OTEC would be most advantageous due to a lack of other dispatchable renewable power options. Second, hourly data have been utilized to explicitly show the trade-offs between battery storage needs and dispatchable renewable sources such as OTEC in 100% renewable electricity systems, in both technological and economic terms. Finally, the utility of near-shore, open-cycle OTEC with accompanying desalination is shown to enable a higher penetration of renewable energy and lead to lower system levelized costs than those of a conventional fossil fuel system. Nearshore Wave Energy Resource Assessment for Off-grid Islands: A Case Study in Cuyo Island, Palawan, Philippines Jonathan Cabiguen Pacaldo, Michael Lochinvar Sim Abundo Subject: Engineering, Marine Engineering Keywords: SWAN wave model; Nearshore wave energy resource assessment; Ocean renewable energy; Wave energy model simulation; Off-grid island electrification; Cuyo Island; Palawan Electrifying off-grid and isolated islands in the Philippines remains one of the challenges that hinder community development. One of the solutions seen to ensure energy security, expand energy access and promote a low-carbon future in these isolated islands is the use of renewable energy sources. This study aims to determine the nearshore wave energy resource during the monsoon seasons in Cuyo Island, using a 40-year wave hindcast and 9-year on-site wind speed data to develop a high-resolution wave energy model with the SWAN wave model, and to assess the annual energy production through matching with wave energy devices. Results show that the average significant wave height (Hs), peak period (Tp) and wave power density (Pd) during the northeast monsoon are Hs = 1.35 m, Tp = 4.79 s and Pd = 4.05 kW/m, respectively, while the southwest monsoon, which is sheltered by the mainland, yields a lower resource: Hs = 0.52 m, Tp = 3.37 s and Pd = 0.34 kW/m. 
While the simulated model was observed to overestimate the wave energy resource (Bias = 0.398, RMSE = 0.54 and SI = 1.34), it has a strong relationship with the observed values (average r = 0.9). Its annual energy production is highest at Station 5, with AEP(WaveBouy) = 43.761 MWh, AEP(Pelamis) = 216.786 MWh and AEP(Wave Dragon) = 2462.66 MWh. At present, the minimum requirement for a wave energy development to be feasible is 5 kW/m, which Cuyo Island falls short of; however, with the continuous evolution of wave energy converters, applications on milder resources may soon materialize. Remote Sensing of the Subtropical Front in the Southeast Pacific and the Ecology of Chilean Jack Mackerel Trachurus murphyi Igor M Belkin, Xin-Tang Shen Subject: Earth Sciences, Oceanography Keywords: Front; Southern Ocean; Subtropical Front; Subtropical Convergence; Subtropical Frontal Zone; Remote sensing; Satellite oceanography; SMOS; Marine ecology; Fisheries; Chilean jack mackerel; Trachurus murphyi The Subtropical Front (STF) plays a key role in the ecology of Chilean jack mackerel Trachurus murphyi. Nonetheless, there are few remote sensing studies of the STF in the open Southeast Pacific Ocean, and almost all of them have been conducted by satellite oceanographers in Russia and Ukraine to support the respective large-scale fisheries of jack mackerel in this region. We reviewed these studies, which documented long-term seasonal and interannual variability of the STF from sea surface temperature (SST) and sea surface height (SSH) data. We also mapped the STF from satellite sea surface salinity (SSS) data of the SMOS mission (2012-2019). The Subtropical Front consists of two fronts -- North and South STF about 500 km apart -- that border the Subtropical Frontal Zone (STFZ) in-between. The STF is density-compensated, with spatially divergent manifestations in temperature and salinity. In the temperature field, the STF extends in the WNW to ESE direction in the Southeast Pacific. In the salinity field, the STFZ appears as a broad frontal zone, extending zonally between 30-35°S across the entire South Pacific. Three major types of satellite data – SST, SSH, and SSS – can be used to locate the STF. The SSH data are most advantageous with regard to the jack mackerel fisheries owing to the all-weather capability of satellite altimetry and the expected radical improvement of the spatial resolution of SSH data in the near future. Despite the dearth of dedicated in situ studies of the South Pacific STFZ, there is a broad consensus regarding the STFZ being the principal spawning and nursing ground of T. murphyi as well as a major migration corridor between Chile and New Zealand. A New Perspective on Four Decades of Changes in Arctic Sea Ice from Satellite Observations Xuanji Wang, Yinghui Liu, Jeffrey R. Key, Richard Dworak Subject: Earth Sciences, Oceanography Keywords: sea ice; Cryosphere; Arctic Ocean; Arctic sea ice change; Arctic climate change; remote sensing retrieval; satellite remote sensing; APP; APP-x; trend study Arctic sea ice characteristics have been changing rapidly and significantly in the last few decades. Using a long-term time series of sea ice products from satellite observations - the extended AVHRR Polar Pathfinder (APP-x) - trends in sea ice concentration, ice extent, ice thickness, and ice volume in the Arctic from 1982 to 2020 are investigated. Results show that the Arctic has become less ice-covered in all seasons, especially in summer and autumn. 
Arctic sea ice thickness has been changing at a rate of -3.24 cm per year, resulting in about a 52% reduction in thickness, from 2.35 m in 1982 to 1.13 m in 2020. Arctic sea ice volume has been changing at a rate of -467.7 km3 per year, resulting in about a 63% reduction in volume, from 27590.4 km3 in 1982 to 10305.5 km3 in 2020. These trends are further examined from a new perspective, where the Arctic Ocean is classified into open water, perennial, and seasonal sea ice-covered areas based on sea ice persistence. The loss of the perennial sea ice-covered area is the major factor in the total sea ice loss in all seasons. If the current rates of sea ice change in extent, concentration, and thickness continue, the Arctic is expected to have an ice-free summer by the early 2060s. Divergent Proteomic Responses Offer Insights Into Resistant Physiological Responses of a Reef-Foraminifera to Climate Change Scenarios Marleen Stuhr, Louise P. Cameron, Bernhard Blank-Landeshammer, Steve S. Doo, Claire E. Reymond, Hildegard Westphal, Albert Sickmann, Justin B. Ries Subject: Biology, Anatomy & Morphology Keywords: Amphistegina lobifera; Red Sea; pH microsensor; global warming; thermal stress; ocean acidification; large benthic foraminifera; coral reef; LC-MS/MS proteomics; photosymbiotic calcifier Reef-dwelling calcifiers face numerous environmental stresses associated with anthropogenic carbon dioxide emissions, including ocean acidification and warming. Photosymbiont-bearing calcifiers, such as large benthic foraminifera, are particularly sensitive. To gain insight into their resistance and adaptive mechanisms to climate change, Amphistegina lobifera from the Gulf of Aqaba were cultured under elevated pCO2 (492, 963, and 3182 ppm) fully crossed with elevated temperature (28°C and 31°C) for two months. Differential protein abundances in host and photosymbionts amongst treatments were investigated alongside physiological responses and microenvironmental pH variations. Over 1000 proteins were identified, of which one-third varied significantly between treatments. Thermal stress induced protein depletions, along with reduced holobiont growth. Elevated pCO2 caused only minor proteomic alterations and color changes. However, combined stressors reduced pore sizes and increased microenvironmental pH, indicating adaptive modifications to gas exchange. Notably, substantial proteomic variations at moderate pCO2 and 31°C indicate cellular stress, while stable physiological performance at high pCO2 and 31°C is called into question by putative decreases in test stability. Our experiment shows that the effects of climate change can be missed when stressors are assessed in isolation, and that physiological responses should be assessed across organismal levels to make more realistic predictions for the fate of reef calcifiers. Local-Basis-Function Equation of State for Ice VII-X to 450 GPa at 300 K J. Michael Brown, Baptiste Journaux Subject: Physical Sciences, Condensed Matter Physics Keywords: equation of state; Helmholtz energy; phase transition; Ice VII; Ice X; NaCl; exoplanets; icy/ocean worlds; local basis function; b spline; Tikhonov Inverse The Helmholtz energy of ice VII-X is determined in a pressure regime extending to 450 GPa at 300 K using local basis functions in the form of b splines. The new representation for the equation of state is embedded in a physics-based inverse theory framework of parameter estimation. 
Selected pressures as a function of volume from 14 prior experimental studies and two theoretical studies constrain the behavior of the Helmholtz energy. Separately measured bulk moduli, not used to construct the representation, are accurately replicated below about 20 GPa and above 60 GPa. In the intermediate range of pressure, the experimentally determined moduli are larger and have greater scatter than values predicted using the Helmholtz representation. Although systematic experimental error in the experimental elastic moduli is possible and likely, the alternative hypothesis is a slow relaxation time associated with changes in proton mobility or the ice VII to X transition. A correlation is observed between anomalies in the pressure derivative of the predicted bulk modulus and previously suggested higher-order phase transitions. Improved determinations of elastic properties at high pressure would allow refinement of the current equation of state. More generally, the current method of data assimilation is broadly applicable to other materials in high-pressure studies. (A schematic code sketch of this spline-based inversion idea is given at the end of this entry.) Self- and Inter-Crossover Points of Jasons' Missions as New Essential Add-on of Satellite Altimetry in the Sub-Arctic Seas and the Southern Ocean Sergei Badulin, Andrey Kostianoy, Pavel Shabanov, Vitali Sharmar, Vika Grigorieva, Sergey Lebedev Subject: Earth Sciences, Oceanography Keywords: Satellite altimetry, Topex/Poseidon, Jasons missions, self-crossover points, inter-crossover points, Sub-Arctic Seas, Southern Ocean, sea level, wind speed, wave height, virtual buoy Satellite altimetry has been developing successfully over the past three decades for sea level, ocean dynamics, coastal oceanography, planetary waves, ocean tides, wind and wave, ice cover, Earth's gravity field, and climatology research. We propose a new essential add-on of satellite altimetry related to peculiarities of the orbits of the Topex/Poseidon and Jasons' satellite missions which have not been noted before in the scientific literature. Derived subsets of "self-crossover" and "inter-crossover" points in sub-polar latitudes are discussed in detail in the context of water exchange, wind-wave dynamics, and potential challenges to be solved. The relatively short time lags between measurements at these crossovers provide additional information on anomalies of the magnitudes and directions of ocean currents, and on characteristics of wind-driven waves. The resulting data snapshots with constant space and time intervals can be regarded as time series of virtual buoys, an analog of continuous buoy measurements of sea level, wind speed, and wave height. Areas of the World Ocean where these specific crossovers occur are described in the context of water exchange, wind wave studies, and potential challenges to be solved. The value of these special crossovers for studies and monitoring of the sub-polar seas is illustrated by a case study.
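As flagged in the ice VII-X entry above, the core of that approach is to parameterize the Helmholtz energy F(V) with b-spline (local basis function) coefficients, predict pressure as P = -dF/dV, and fit the coefficients to compiled pressure-volume data. The following is a minimal sketch of that idea only; the knot placement, units, and synthetic data are illustrative assumptions, not the authors' actual setup or code.

    import numpy as np
    from scipy.interpolate import BSpline
    from scipy.optimize import least_squares

    # Synthetic (V, P) data standing in for the compiled experimental pressures.
    V_data = np.linspace(6.0, 12.0, 40)      # volume (illustrative units)
    P_data = 900.0 / V_data**3               # pressure (illustrative units)

    k = 3                                    # cubic b-splines as local basis functions
    interior = np.linspace(6.0, 12.0, 8)
    t = np.concatenate(([6.0] * k, interior, [12.0] * k))   # clamped knot vector
    n_coef = len(t) - k - 1

    def pressure_model(coefs, V):
        F = BSpline(t, coefs, k)             # Helmholtz energy F(V) as a spline
        return -F.derivative()(V)            # thermodynamic identity P = -dF/dV

    def residuals(coefs):
        return pressure_model(coefs, V_data) - P_data

    fit = least_squares(residuals, x0=np.zeros(n_coef))
    print("fitted pressure at V = 8:", pressure_model(fit.x, 8.0))

Once the coefficients are fit, derived quantities such as the bulk modulus K = -V dP/dV follow from further spline derivatives, which is how a comparison against separately measured moduli can be made; in a full implementation a regularization term (e.g., the Tikhonov-style smoothing suggested by the keywords) would be added to the least-squares objective.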
CommonCrawl
Moduli space of semistable bundles It is well known that the space of $S$-equivalence classes of rank 2 semistable holomorphic vector bundles with trivial determinant on a genus 2 Riemann surface $M$ is $CP^3$ (more concretely $PH^0(Jac(M),L(2\theta))$). In particular, the points corresponding to strictly semistable (not stable) bundles are smooth points. On the open dense subspace consisting of points corresponding to stable bundles, there is a natural symplectic structure, compatible with the natural Riemannian metric; together they define a Kähler structure. It should be true that this Kähler structure extends to the semistable points (and therefore $CP^3$ is equipped with its natural (only) Kähler structure). Does anyone know a reference, or a proof of this? Thank you. ag.algebraic-geometry reference-request dg.differential-geometry riemann-surfaces character-varieties Sean Lawton Sebastian The word "Bundles" in the title was changed to "vundles". This seems awfully strange to me, but I'll wait for some corroboration that "vundle" is not the hip new term for "vector bundle" before suggesting a rollback. – Pete L. Clark Jul 23 '10 at 11:43 What natural Riemannian metric are you referring to? I would say that what this moduli space possesses, very naturally, is a complex structure. The questions are then whether the Goldman form v(x,y) -- the natural symplectic form -- extends over the semistable locus where it is not a priori defined, and whether g(x,y) := v(ix,y) is positive definite. If so, since we know v is closed, we would conclude that g is automatically Kähler (see p. 107 of Griffiths & Harris). However, I don't know offhand whether the answers are positive or negative. – Michael Thaddeus Jul 23 '10 at 18:21 It depends on your point of view. In the differential-geometric setup, the metric is quite natural. The moduli space can be identified with the moduli space of flat SU(2) connections, where the stable bundles correspond to the irreducible connections. The tangent space at a connection A can be identified with the space of harmonic $\phi\in\Omega^1(su(V)).$ This space has a natural metric $\int trace(\phi\wedge*\tilde\phi).$ – Sebastian Jul 26 '10 at 6:20 Disclaimer: This answer is rewritten in response to Chris Woodward's insightful comments. The moduli space of rank 2 semistable holomorphic vector bundles with trivial determinant on a genus 2 surface $\Sigma$ is homeomorphic to the character variety $$\mathfrak{X}_\Sigma(SU(2)):=\mathrm{Hom}(\pi_1(\Sigma),SU(2))/SU(2).$$ This homeomorphism restricts to a diffeomorphism between stable bundles and irreducible representations. In the case of fixed-determinant Higgs bundles on $\Sigma$ there is a similar homeomorphic correspondence with $$\mathfrak{X}_\Sigma(SL(2,\mathbb{C})):=\mathrm{Hom}(\pi_1(\Sigma),SL(2,\mathbb{C}))/\!/SL(2, \mathbb{C}).$$ Carlos Simpson has shown that under this latter correspondence all points, even singular points, have isomorphic étale neighborhoods (the Isosingularity Theorem). In short, they are locally isomorphic. However, it is known that these moduli spaces are not even biholomorphic, let alone biregular, since the complex structure of the Higgs moduli space depends on the complex structure of $\Sigma$ as a Riemann surface whereas the complex structure on the character variety only depends on the group $SL(2,\mathbb{C})$. 
The moduli space of holomorphic vector bundles naturally embeds into the moduli space of Higgs bundles as those with trivial Higgs field. Likewise, $\mathfrak{X}_\Sigma(SU(2))$ embeds into $\mathfrak{X}_\Sigma(SL(2,\mathbb{C}))$, and the homeomorphisms above respect these embeddings. So it is natural to think that the stratified analytic smooth structures on the moduli space of vector bundles and on the semi-algebraic space $\mathfrak{X}_\Sigma(SU(2))$ also correspond, as they do generically and on their "complexifications". However, as pointed out by Chris Woodward, Johannes Huebschmann proves in Smooth Structures on Certain Moduli Spaces for Bundles on a Surface that this is not the case. In particular, in Section 8 he explicitly addresses the case of a genus 2 surface, where the moduli spaces in question are homeomorphic to $\mathbb{C}P^3$, showing explicitly that the natural smooth structure on $\mathfrak{X}_\Sigma(SU(2))$ differs from that of $\mathbb{C}P^3$. That is sufficient to answer the original question as: NO, if one interprets the question as "Is $\mathfrak{X}_\Sigma(SU(2))$ Kähler isomorphic to $\mathbb{C}P^3$?" Remark: With regard to the (singular) symplectic structure on $\mathfrak{X}_\Sigma(SU(2))$: the Goldman Poisson structure, a Lie algebra structure and derivation, is defined globally on the coordinate ring of the character variety and makes sense even in a singular setting. In general, such a Poisson structure gives a foliation of the smooth locus by symplectic sub-manifolds. In this case there is one leaf (since the surface in question is closed). Remark: For a direct proof, not using the theory of holomorphic bundles, that the character variety is $\mathbb{C}P^3$ see here. We can see the singularities directly in Choi's construction. Sean Lawton My impression was that the answer was no, because the smooth structures are "different" in the sense of Huebschmann, last section of arxiv.org/pdf/dg-ga/9411008.pdf. – Chris Woodward Jun 3 '18 at 12:08 p.s. The Kähler structure on the moduli space depends on the choice of complex structure on the underlying curve; having this structure be a toric structure would mean that the complex structure would be invariant under the Goldman flow, which is not true for a general curve; see the paper www2.math.umd.edu/~raw/papers/local.pdf by Daskalopolous-Wentworth. – Chris Woodward Jun 3 '18 at 12:36 Thanks Chris! These are great links. I will edit my post. I see where I got confused, and will try to note it. – Sean Lawton Jun 3 '18 at 15:08
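For reference, the structures debated in the question and comments above can be written down explicitly on the stable (irreducible) locus; the following is a standard formulation of the Atiyah-Bott/Goldman picture, with the caveat that normalization and sign conventions vary between references. Identifying the tangent space at a flat connection $A$ with the harmonic representatives of $H^1_A(\Sigma,\mathfrak{su}(2))$, one sets
$$\omega_A(\phi,\psi)=\int_\Sigma \operatorname{tr}(\phi\wedge\psi),\qquad g_A(\phi,\psi)=-\int_\Sigma \operatorname{tr}(\phi\wedge *\psi),\qquad J\phi=*\phi,$$
so that $\omega_A(\phi,\psi)=g_A(\phi,J\psi)$, using $*^2=-1$ on 1-forms and the negative-definiteness of the trace form on $\mathfrak{su}(2)$. The complex structure $J$ is the Hodge star determined by the conformal structure on $\Sigma$, which is why the resulting Kähler structure on the stable locus depends on the choice of Riemann surface structure, as the comments point out.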
CommonCrawl